COMPUTING

High Performance Computing Resources in Singapore

NUS is equipped with a generous, ever-expanding, and flexible pool of HPC resources available to the entire NUS community. CA2DM’s researchers therefore have access to the general-purpose High Performance Computing infrastructure at the NUS Computer Centre HPC Services.

CA2DM also has access to the Computer Centre’s pay-per-use system, so as long as sufficient funds are secured, nodes can be reserved on demand for dedicated personal use on a “per-problem” basis on their systems.

Collaborations with A*Star may use its Computational Resource Centre, and collaborations with NTU may use its High Performance Computing Centre.

In 2016, a collaboration between A*Star, NUS, NTU and SUTD created Singapore’s National Supercomputing Centre, whose 1-petaflop cluster is also available for use by our researchers.

CA2DM Resources

Besides access to the shared infrastructures above, the Centre for Advanced 2D Materials and Graphene Research Centre has its own dedicated High Performance Computing cluster, sized and configured to meet the specific needs of our researchers and research targets.

Charging Scheme

Usage accounting is performed in terms of core-hours per user, and users are grouped according to their Principal Investigator (PI).

The charge per unit of usage depends firstly on whether the PI is industrial or academic, and, if academic, whether the PI is external to CA2DM-GRC or internal.

Internal PIs may qualify for subsidized rates and/or core-hour packages depending on the research topic and contributions to the cluster.
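To make the accounting concrete, the sketch below shows how a job’s core-hour usage and charge could be computed. The rates and category names are illustrative placeholders only, not CA2DM’s actual figures; please consult the charging scheme for the real values.

    # Minimal sketch of core-hour accounting. The rates and category names
    # below are illustrative placeholders, not CA2DM's actual charges.

    def core_hours(cores: int, wall_clock_hours: float) -> float:
        """Usage of a job: cores reserved multiplied by wall-clock hours."""
        return cores * wall_clock_hours

    # Hypothetical rates (currency units per core-hour) per PI category.
    RATES = {"internal": 0.01, "external_academic": 0.02, "industrial": 0.05}

    def charge(cores: int, wall_clock_hours: float, pi_category: str) -> float:
        """Charge for a single job under a given PI category."""
        return core_hours(cores, wall_clock_hours) * RATES[pi_category]

    # Example: a 16-core job running for 24 hours under an internal PI
    # uses 16 * 24 = 384 core-hours.
    print(charge(16, 24, "internal"))   # 3.84 at the placeholder rate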

Please refer to the charging scheme for more information.

Cluster Overview

Figure: cluster deployment

The total hardware specifications for our HPC cluster are:

  • 2000 Xeon E5 cores
  • 20 TB of RAM
  • 8 TB of global scratch space
  • 20 TB of distributed local scratch space
  • 24 TB of parallel file system storage
  • 100 TB of long-term storage
  • 4 × 48-port 10G Ethernet switches
  • 5 × 36-port FDR InfiniBand switches

“Thin” nodes

The cluster’s first 32 “thin” nodes are based on IBM’s System x iDataPlex dx360 M4. Each node has two Xeon E5-2670 processors (16 cores per node), 4 GB of RAM per core, two SAS drives in a RAID 0 configuration, and InfiniBand connectivity between the nodes and to the shared storage.

In September 2014, 32 new nodes were added, each with two Xeon E5-2680 v2 processors (20 cores per node), 12 GB of RAM per core, two SAS drives in a RAID 0 configuration, and InfiniBand connectivity between the nodes and to the shared storage.

In May 2016, 24 new nodes were added, each with two Xeon E5-2680 v3 processors (24 cores per node), 10 GB of RAM per core, two SAS drives in a RAID 0 configuration, and InfiniBand connectivity between the nodes and to the shared storage.

“Fat” nodes

The cluster’s 6 “fat” nodes are based on IBM’s System x3750 M4. Each node has four Xeon E5-4640 processors (32 cores per node), 24 GB of RAM per core, two SAS drives in a RAID 0 configuration, and InfiniBand connectivity between the nodes and to the shared storage.
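As a rough cross-check, the aggregate core and memory totals in the cluster overview can be approximately reproduced from the node counts and per-node figures listed above. The back-of-the-envelope sketch below uses only those figures; the small gap to the quoted totals is expected from rounding and any service nodes not itemised here.

    # Back-of-the-envelope tally of the compute node generations described above.
    # Each tuple is (node count, cores per node, GB of RAM per core).
    node_generations = [
        (32, 16, 4),    # first "thin" nodes: dx360 M4, 2x Xeon E5-2670
        (32, 20, 12),   # Sep 2014 "thin" nodes: 2x Xeon E5-2680 v2
        (24, 24, 10),   # May 2016 "thin" nodes: 2x Xeon E5-2680 v3
        (6,  32, 24),   # "fat" nodes: x3750 M4, 4x Xeon E5-4640
    ]

    total_cores = sum(count * cores for count, cores, _ in node_generations)
    total_ram_tb = sum(count * cores * gb for count, cores, gb in node_generations) / 1024

    # Prints roughly 1920 cores and about 19.6 TB of RAM, close to the
    # ~2000-core and 20 TB figures quoted in the cluster overview.
    print(f"{total_cores} cores, {total_ram_tb:.1f} TB RAM")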

NOTE:
External users collaborating on CA2DM’s research projects can apply for access for periods of up to one year. Please contact c2dmdc@nus.edu.sg.