High Performance Computing Resources in Singapore
NUS is equipped with a generous, ever-expanding, and flexible pool of HPC resources available to the entire NUS community. CA2DM’s researchers therefore have access to general-purpose High Performance Computing infrastructure at the NUS Computer Centre HPC Services.
CA2DM also has access to the Computer Centre’s pay-per-use system: as long as sufficient funds are secured, nodes can be reserved for dedicated personal use on a per-problem, on-demand basis.
In 2016, a collaboration between A*STAR, NUS, NTU and SUTD created Singapore’s National Supercomputing Centre, whose 1-petaflop cluster is also available for use by our researchers.
Besides access to the shared infrastructures above, the Centre for Advanced 2D Materials and Graphene Research Centre has its own dedicated High Performance Computing cluster, sized and configured for the specific needs of our researchers and research targets. From within NUS or via VPN, you can get more information here.
Usage accounting is performed in core-hours per user; users are grouped according to their Principal Investigator (PI).
Charges depend first on whether the PI is industrial or academic, and, for academic PIs, on whether they are external or internal to CA2DM-GRC.
Internal PIs may qualify for subsidized rates and/or core-hour packages depending on the research topic and contributions to the cluster.
Please refer to the charging scheme for more information.
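To illustrate how core-hour accounting works, here is a minimal sketch. The rate table and figures below are entirely hypothetical; the actual rates and categories are defined by the charging scheme, not here.

```python
# Hypothetical illustration of core-hour accounting.
# The rates below are made up for the example; real rates are
# defined in the CA2DM charging scheme.

# Assumed (illustrative) rates in SGD per core-hour by PI category.
RATES = {
    "industrial": 0.10,
    "academic_external": 0.05,
    "academic_internal": 0.02,  # internal PIs may qualify for subsidised rates
}

def core_hours(cores: int, wallclock_hours: float) -> float:
    """Core-hours consumed by a single job: cores x wall-clock time."""
    return cores * wallclock_hours

def job_charge(cores: int, wallclock_hours: float, category: str) -> float:
    """Charge for one job under the assumed rate table."""
    return core_hours(cores, wallclock_hours) * RATES[category]

# A 64-core job running for 12 hours consumes 768 core-hours.
print(core_hours(64, 12))                        # 768
print(job_charge(64, 12, "academic_internal"))   # 15.36
```

Usage is simply accumulated per user over a billing period and rolled up to the PI, so the same core-hour total yields different charges depending on the PI’s category.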
HPC Cluster Overview
The total hardware specifications for our HPC cluster are:
- ~2000 Intel Xeon cores
- ~3000 AMD Epyc cores
- ~50 TB of RAM
- ~100 TB parallel file system storage space
- ~500 TB scratch storage space
- ~600 TB long term storage space
The cluster’s first 32 “thin” nodes are based on IBM’s System x iDataPlex dx360 M4. Each node has two Xeon E5-2670 processors (16 cores/node) and 4 GB of RAM per core.
The cluster’s 6 “fat” nodes are based on IBM’s System x 3750 M4. Each node has four Xeon E5-4640 processors (32 cores/node) and 24 GB of RAM per core.
In September 2014, 32 new nodes were added, each with two Xeon E5-2680 v2 processors (20 cores/node) and 12 GB of RAM per core.
In May 2016, 24 new nodes were added, each with two Xeon E5-2680 v3 processors (24 cores/node) and 10 GB of RAM per core.
The Intel cluster is being phased out and will soon reach its end of life.
In 2019, 20 new nodes were added, each with two AMD Epyc 7601 processors (64 cores/node) and 8 GB of RAM per core.
In 2020, 32 new nodes were added, each with two AMD Epyc 7452 processors (64 cores/node) and 8 GB of RAM per core.
The total peak performance of the AMD cluster is currently over 100 double precision TFLOPS.
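The peak figure can be sanity-checked with a back-of-the-envelope estimate of nodes × cores/node × clock × double-precision FLOPs per cycle. The base clocks and FLOPs-per-cycle values below are assumptions taken from AMD’s published specifications for these parts, not from the cluster documentation; boost clocks would push the result somewhat higher.

```python
# Back-of-the-envelope peak-performance estimate for the AMD nodes.
# Assumed from AMD's public specs (not from the cluster docs):
#   Epyc 7601 (Zen 1): 2.2 GHz base,  8 DP FLOPs/cycle per core
#   Epyc 7452 (Zen 2): 2.35 GHz base, 16 DP FLOPs/cycle per core

nodes = [
    # (node count, cores per node, base clock in Hz, DP FLOPs per cycle)
    (20, 64, 2.2e9, 8),    # 2019 addition: dual Epyc 7601
    (32, 64, 2.35e9, 16),  # 2020 addition: dual Epyc 7452
]

peak_flops = sum(n * c * hz * fpc for n, c, hz, fpc in nodes)
print(f"{peak_flops / 1e12:.1f} DP TFLOPS")  # ≈100 DP TFLOPS at base clocks
```

At base clocks this lands just under 100 DP TFLOPS, consistent with the “over 100” figure once boost clocks are taken into account.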
External users collaborating on CA2DM’s research projects can apply for access for periods of up to one year. Please contact email@example.com