Clusters
All clusters use SLURM (Simple Linux Utility for Resource Management) for job scheduling.
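As a minimal sketch of submitting a job with SLURM (the partition name, resource requests, and script name below are placeholders, not CRC-specific defaults):

```bash
#!/bin/bash
#SBATCH --job-name=example        # name shown in the queue
#SBATCH --partition=mypartition   # placeholder; use your owner group's partition
#SBATCH --nodes=1                 # number of nodes
#SBATCH --ntasks=1                # number of tasks (processes)
#SBATCH --cpus-per-task=4         # CPU cores per task
#SBATCH --mem=8G                  # memory per node
#SBATCH --time=01:00:00           # wall-time limit (hh:mm:ss)

srun hostname                     # replace with your actual workload
```

Save this as, for example, job.sh, submit it with `sbatch job.sh`, and check its status with `squeue -u $USER`.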
KU Community Cluster
The KU Community Cluster uses the condo model to bring together hardware purchased by different researchers into a single heterogeneous cluster. Participating researchers have dedicated use of their purchased nodes, and they may also run larger computing jobs on idle nodes owned by other researchers. The main benefit is access to a much larger cluster than would typically be available to a single research group.
The cluster hardware and software are administered by CRC staff, which removes the need for postdocs or graduate students to maintain your computing systems. CRC staff also assist users with any problems they encounter on the cluster and help new users through training sessions and one-on-one sessions.
You must be an owner of hardware in the KU Community Cluster, or be sponsored by an owner group within the cluster, to gain access. There is no free partition of the cluster, and we currently do not charge a rate for core hours.
Bigjay
Bigjay is an NSF-funded cluster providing traditional high-performance and high-throughput computing. It is currently restricted to the PIs, Co-PIs, and their collaborators. It is, however, part of the sixhour partition within the KU Community Cluster.
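As a hedged illustration, an existing batch script can be pointed at the shared sixhour partition from the command line (any partition limits, such as the wall-time cap implied by the name, are assumptions here):

```bash
# Submit job.sh to the shared sixhour partition instead of an owner partition
sbatch --partition=sixhour --time=06:00:00 job.sh
```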
Hawk
Hawk is a small condo model high performance cluster for use with Research Health Information (RHI), or Research Identifiable Data.
Individually identifiable health information that is used in research but is not associated with or derived from a healthcare service event is considered research health information (RHI), or research identifiable data. Additionally, data that were previously considered protected health information (PHI) and are obtained pursuant to a HIPAA authorization or an IRB waiver of HIPAA authorization are also considered RHI. Even though it is not PHI, you should still separate individually identifiable data elements from non-identifiable data elements whenever feasible. Furthermore, user access control should be implemented to provide the minimum necessary access to RHI, and RHI must only be processed, stored, and transferred via approved methods.
Purchase
CRC offers a Standard Compute Unit (SCU). An SCU is defined as:
- Dual Intel Xeon 6542Y CPUs (48 cores total @ 2.9GHz)
- 256GB or more of 5600MT/s DDR5 memory
- With or without HDR100 (100Gb/s) InfiniBand
- Gigabit (1Gb/s) Ethernet
- 480GB NVMe
- 5 Year Hardware Warranty
In addition to the base SCU, we have in the past also offered:
- SCU w/ 512GB
- SCU w/ 1TB
- SCU w/ extra SSDs
- SCU w/ NVIDIA or AMD GPUs
All owner groups receive up to 15TB of $WORK storage for free. Additional $WORK space can be purchased for $50 per TB per year. Invoices for storage will be sent one year after the storage's start date.
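As a quick illustration of this pricing (the 20TB usage figure below is hypothetical):

```bash
# Hypothetical example: a group using 20TB of $WORK
total_tb=20   # total $WORK space used (hypothetical)
free_tb=15    # included at no cost for owner groups
rate=50       # dollars per TB per year beyond the free allocation

extra_tb=$(( total_tb > free_tb ? total_tb - free_tb : 0 ))
echo "Annual \$WORK cost: \$$(( extra_tb * rate ))"   # prints: Annual $WORK cost: $250
```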
CRC will pay for all backend costs, including network and InfiniBand cables and switches. There is no annual fee on top of the purchase price of the node.
Email crchelp@ku.edu for pricing on the SCU options you are interested in.
Ownership
When the nodes arrive, nodes of the same kind will be assigned at random to the buyers of those nodes.
Duration of Ownership
All owner-purchased systems come with a five-year hardware warranty, included with each SCU. This guarantees that owners will have access to the number of compute cores under this agreement for at least the five-year duration of the warranty.
If a compute node breaks during the five-year warranty period, it will be replaced according to the terms of the hardware warranty (next business day for compute nodes). At the end of the five-year warranty period, you will be responsible for any hardware failures the node may incur. If you do not wish to pay for the repair of the node, it will be taken offline and removed from the cluster.
After the five-year warranty has expired, the node will be kept in the cluster as long as space, power, and cooling are available. If space, power, or cooling is needed, the oldest nodes in the cluster will be removed, with notification sent to the owner.
You may remove your node from the cluster at any time; it will need to be removed from the Advanced Computing Facility. The nodes being purchased are part of a chassis that houses four (4) nodes, so if you do wish to take possession of your node, you will receive only the node itself and will need to purchase additional equipment to use it.