COMPUTING

From within NUS or via VPN, see Research Computing @ CA2DM

High Performance Computing Resources in Singapore

NUS is equipped with a generous, ever-expanding, and flexible pool of HPC resources available to the entire NUS community. CA2DM’s researchers therefore have access to general-purpose High Performance Computing infrastructure through the NUS Computer Centre HPC Services.

CA2DM also has access to the Computer Centre’s pay-per-use system, so as long as sufficient funds are secured, nodes on their systems can be reserved on demand for dedicated use on a per-problem basis.

Researchers collaborating with A*STAR may use its Computational Resource Centre, and those collaborating with NTU its High Performance Computing Centre.

In 2016, a collaboration between A*STAR, NUS, NTU and SUTD created Singapore’s National Supercomputing Centre, whose 1-petaflop cluster is also available for use by our researchers.

CA2DM Resources

Besides access to the shared infrastructure above, the Centre for Advanced 2D Materials and Graphene Research Centre has its own dedicated High Performance Computing cluster, sized and configured around the specific needs of our researchers and research targets. From within NUS or via VPN, you can get more information here.

HPC Cluster Overview

The total hardware specifications for our HPC cluster are:

  • ~2000 Intel Xeon cores
  • ~3000 AMD Epyc cores
  • ~50 TB of RAM
  • ~100 TB parallel file system storage space
  • ~500 TB scratch storage space
  • ~600 TB long term storage space

Intel cluster

Sandybridge

The cluster’s first 32 “thin” nodes are based on IBM’s System x iDataPlex dx360 M4. Each node has two Xeon E5-2670 processors (16 cores/node) and 4 GB of RAM per core.

The cluster’s 6 “fat” nodes are based on IBM’s System x 3750 M4. Each node has four Xeon E5-4640 processors (32 cores/node) and 24 GB of RAM per core.

Ivybridge

In September 2014, 32 new nodes were added, each with two Xeon E5-2680 v2 processors (20 cores/node) and 12 GB of RAM per core.

Haswell

In May 2016, 24 new nodes were added, each with two Xeon E5-2680 v3 processors (24 cores/node) and 10 GB of RAM per core.

The Intel cluster is being phased out and will soon reach its end of life.
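As a sanity check, the Intel core total can be recovered from the per-generation node counts listed above. A minimal sketch (node counts and cores-per-node taken directly from this section):

```python
# Intel nodes: (generation, node count, cores per node), as listed above
intel_nodes = [
    ("Sandy Bridge thin", 32, 16),
    ("Sandy Bridge fat",   6, 32),
    ("Ivy Bridge",        32, 20),
    ("Haswell",           24, 24),
]

total_cores = sum(count * cores for _, count, cores in intel_nodes)
print(total_cores)  # → 1920, consistent with the "~2000 Intel Xeon cores" figure
```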

AMD Cluster

Naples

In 2019, 20 new nodes were added, each with two AMD Epyc 7601 processors (64 cores/node) and 8 GB of RAM per core.

Rome

In 2020, 32 new nodes were added, each with two AMD Epyc 7452 processors (64 cores/node) and 8 GB of RAM per core.

The total peak performance of the AMD cluster currently exceeds 100 double-precision TFLOPS.
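That peak figure can be roughly cross-checked from the node specifications above. The sketch below assumes base clocks of 2.2 GHz (Epyc 7601) and 2.35 GHz (Epyc 7452), and 8 double-precision FLOPs per cycle per core on Zen 1 versus 16 on Zen 2; these clock and throughput figures are typical published values, not taken from this page:

```python
# (cores, base clock in GHz, DP FLOPs per cycle per core)
# Clocks and FLOPs/cycle are assumed typical values, not from this page.
naples = (20 * 64, 2.20, 8)   # Zen 1: 128-bit FMA datapaths -> 8 DP FLOPs/cycle
rome   = (32 * 64, 2.35, 16)  # Zen 2: two 256-bit FMA units -> 16 DP FLOPs/cycle

tflops = sum(cores * ghz * flops / 1000 for cores, ghz, flops in (naples, rome))
print(f"{tflops:.1f} TFLOPS")  # → 99.5 TFLOPS, roughly the figure quoted above
```

Boost clocks, which run higher than the base clocks assumed here, would push the estimate past 100 TFLOPS.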

NOTE:
External users collaborating on CA2DM’s research projects can apply for access for periods of up to one year. Please contact c2dhpc@nus.edu.sg.