Welcome to the HACC (ml)Cluster!
Getting started:
Before you continue, please make sure you’ve completed the following:
- Complete the HACC user survey form.
- Sign the external user agreement forms.
- Familiarize yourself with the Linux command line.
To request access to HACC, please contact the xilinx-center PI, Prof. Deming Chen. Once you have completed the checklist above, the system admin will provide you with an HACC account.
NOTE: HACC cluster accounts are separate from UIUC's Active Directory accounts. Your account may use the same login ID, but HACC passwords are managed within the cluster only.
Your first login:
When you log into the HACC cluster for the first time, make sure to do the following:
- Update your password from the provided default.
- Ensure that a home directory has been created for you. If you do not have a home directory, contact the system admin.
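As a sketch, a first-login session might look like the following. The login hostname here is a placeholder, not the cluster's real address; use the one the system admin gives you.

```shell
# Log in with the default password provided by the admin
# (hostname below is a placeholder -- use the address the admin gives you)
ssh your_login_id@hacc.example.edu

# Change the provided default password to one of your own
passwd

# Confirm your home directory exists and is writable;
# if either check fails, contact the system admin
ls -ld "$HOME"
touch "$HOME/.login_check" && rm "$HOME/.login_check"
```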
Be nice:
Please respect the time and constraints of other users. This is a shared research cluster with limited resources. Do not monopolize resources by submitting many jobs at once, and do not run multiple large compilations or builds at the same time on the shared development node.
What you need to know:
This cluster supports GPUs and several different FPGAs (Alveo and VCK cards). The cluster is intended for GPU, FPGA, and heterogeneous device/system research.
What you can’t do:
- We only support Vitis flows for FPGAs.
- You may develop your hardware in C/C++, OpenCL, or RTL.
- You must work with the installed FPGA shells.
- We do not support custom FPGA images or custom shells.
- We do not support custom OSes/kernels.
- You will not have root access.
- No JTAG debugging is available.
How to use the cluster:
HACC has been designed with HPC principles in mind. As such, we use an HPC job scheduler to help manage resources. You will need to submit “jobs” to the scheduler in order to run your tasks; you cannot execute jobs directly on a node. You can perform two types of operations on the cluster:
- Development – Compiling, synthesizing, and generating bitstreams for FPGA/GPU accelerators.
We have one shared development node for these activities. This node is NOT governed by the job scheduler at this time. Users are free to SSH directly into the node and compile their projects.
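As a rough sketch, a build session on the development node might look like this. The node hostname, platform name, and kernel/file names are illustrative assumptions, not the cluster's actual configuration; substitute the installed shell's platform and your own project files.

```shell
# SSH into the shared development node (hostname is a placeholder)
ssh your_login_id@hacc-dev.example.edu

# Compile a C/C++ kernel into a Xilinx object file with the Vitis
# compiler, then link it into an XCLBIN for the installed shell.
# <installed_platform>, the kernel name, and file names are examples only.
v++ -c -t hw --platform <installed_platform> -k vadd -o vadd.xo vadd.cpp
v++ -l -t hw --platform <installed_platform> -o vadd.xclbin vadd.xo

# Build the host executable against the Xilinx Runtime (XRT)
g++ -o host host.cpp -I"$XILINX_XRT/include" -L"$XILINX_XRT/lib" -lxrt_coreutil
```

Remember that these builds run directly on the shared node, so keep them to a reasonable size and avoid running several at once.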
Note that we do not support the GUI-based flow. The recommended workflow is to perform your initial design and project setup on your local desktop machine; once the project is ready to be built, upload it to the cluster and perform only compilation, synthesis, place-and-route, etc. on the cluster.
- Compute – Running accelerated kernels on FPGAs/GPUs.
Once the project is built, you should have a host executable and a Xilinx XCLBIN file (a partial bitstream). You may then submit a job to the scheduler and request time on an accelerated compute node.
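How you submit depends on the scheduler the cluster runs, which this document does not name. As a sketch, assuming a SLURM-style scheduler, a submission might look like the following; the partition name, resource flags, and file names are assumptions, so check with the system admins for the real submission commands.

```shell
#!/bin/bash
# run_fpga_job.sh -- example batch script; every directive and name
# below is an assumption, not the cluster's actual configuration.
#SBATCH --job-name=fpga-test
#SBATCH --nodes=1
#SBATCH --time=00:30:00
#SBATCH --partition=fpga        # hypothetical partition name

# Run the host executable, passing the XCLBIN built on the dev node
./host vadd.xclbin
```

You would then submit the script with something like `sbatch run_fpga_job.sh` and check its status with `squeue`, again assuming a SLURM-style scheduler.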
Note: we migrated our storage from an old system to a new one. If you have old files on the cluster that you still need access to, ask one of our system administrators!
