CCAST currently operates two public clusters and one test/collaboration cluster out of its Research 1 & 2 datacenters.

 

Thunder Cluster

The Thunder cluster, acquired in 2012, was supported by the NSF Major Research Instrumentation (MRI) program to provide state-of-the-art computing resources for the entire NDSU research community. The cluster is designed to scale both horizontally and vertically to meet the rapid growth of computational needs. The original cluster consisted of sixty-three Intel Ivy Bridge compute nodes, along with two large-memory nodes (Intel Sandy Bridge) and fourteen MIC nodes (each with an Intel Xeon Phi 5110P accelerator).

In 2017, forty Intel Broadwell nodes (each with 44 cores and 128 GB of RAM) and two GPU nodes were added to the original Thunder cluster, increasing its theoretical peak performance from ~40 TFLOPS to ~150 TFLOPS. Specific details of the current cluster nodes can be found in the <Thunder Hardware Listing> document.
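For context, theoretical peak figures like these are typically estimated as cores × clock frequency × floating-point operations per cycle, summed over all nodes. The short Python sketch below illustrates the arithmetic for one of the 2017 Broadwell nodes; the 44-core count comes from the text above, while the clock frequency and FLOPs-per-cycle values are assumptions chosen only for illustration and are not given in this document.

# Rough estimate of a node's theoretical peak (double precision):
#   peak = cores * clock_GHz * flops_per_cycle
# The clock frequency and FLOPs/cycle below are illustrative assumptions;
# they are not specified in this document.

def node_peak_tflops(cores, clock_ghz, flops_per_cycle):
    """Theoretical peak of one node, in TFLOPS."""
    return cores * clock_ghz * flops_per_cycle / 1000.0

# One 2017 Broadwell node: 44 cores (from the text), assumed 2.2 GHz and
# 16 double-precision FLOPs per cycle (AVX2 with FMA).
per_node = node_peak_tflops(cores=44, clock_ghz=2.2, flops_per_cycle=16)
print(f"One Broadwell node: ~{per_node:.1f} TFLOPS")      # ~1.5 TFLOPS
print(f"Forty such nodes:  ~{40 * per_node:.0f} TFLOPS")  # ~62 TFLOPS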

The cluster is housed in a 1,100-square-foot high-performance computing (HPC) server room equipped with ten rear-door heat exchangers (RDHXs) that remove up to 40 kW per rack. The server room can handle up to 500 kW of IT load and is expandable to 1,000 kW. All HPC equipment is fed by power conditioned by an uninterruptible power supply (UPS) system and backed up by a 2,000 kW diesel generator with onsite fuel for 36 hours of runtime at full capacity. The generator is rated for continuous operation and has advanced emission controls that allow unlimited hours of operation per year.
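The quoted figures can be combined into simple capacity checks, such as how many fully loaded racks the current room could cool or how much energy the onsite fuel represents. The sketch below is only back-of-the-envelope arithmetic using the numbers above; the derived values are illustrative and not official specifications.

# Back-of-the-envelope checks using the facility figures quoted above.
# Derived values are illustrative only, not official specifications.

room_it_load_kw = 500     # current server-room IT capacity
rdhx_per_rack_kw = 40     # heat removal per rear-door heat exchanger
generator_kw = 2000       # diesel generator rating
fuel_runtime_h = 36       # runtime on onsite fuel at full capacity

# Fully loaded racks the current room could cool at 40 kW per rack
print("Racks at full RDHX load:", room_it_load_kw // rdhx_per_rack_kw)    # 12

# Energy available from onsite fuel with the generator at full capacity
print("Onsite-fuel energy budget:", generator_kw * fuel_runtime_h, "kWh") # 72000 kWh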

Virtualization Cluster

Clusters:
Nodes:
Cores:
Memory: