Biomedical Computing Centre

The Biomedical Computing Centre (BCC) is one of the key core facilities at the Li Ka Shing Institute of Health Sciences (LiHS). The BCC provides technology integration and deployment, scientific consultation, collaborative training, and high-performance computing support for genomics and bioinformatics research, including data analysis following massively parallel sequencing, to LiHS faculty and staff.

Located on the 7th floor of the Li Ka Shing Medical Sciences Building, the BCC commenced operation in the summer of 2010 and was expanded in 2014. The BCC is currently provisioned with 287kW of electrical power for a wide array of IT facilities, and follows the Tier 3 Data Centre Standard and the TIA-942 Data Center Standard.

The existing network environment of the BCC consists of a pair of 100GE-ready and a number of 40GE-enabled high-speed backbone switches, connected to eight high-speed 10GE optical edge switches to form a loop-free Spine-and-Leaf Ethernet network. Another pair of InfiniBand Gateway (FDR) switches supports low-latency, high-performance computing services. In addition, four 10GE optical Ethernet switches provide a 10Gbps Science Network to the CUHK campus and Internet2 for research collaboration around the world.

The core network will be upgraded in the coming months with a pair of 100Gbps Ethernet core backbone switches under a Spine-and-Leaf architecture. The InfiniBand network (IB network) will also be upgraded to a two-layer, fully non-blocking fabric with HDR 200Gbps core switches and EDR 100Gbps leaf switches for low-latency, high-bandwidth applications and analysis operations.

The storage environment of the BCC comprises a 3-petabyte Intel Enterprise Edition for Lustre (IEEL) high-throughput storage file system as the high-performance tier (Tier 1), a 3-petabyte scale-out Network-Attached Storage system as the near-online tier (Tier 2), and around 1 petabyte of peripheral storage. All the storage systems will be upgraded to a new set of Tier 1, Tier 2 and Tier 3 systems with a total capacity of 30PB. Tier 1 will adopt a high-performance parallel file system (BeeGFS), Tier 2 will be upgraded to a 32-node Dell EMC PowerScale A2000 system, and Tier 3 will be replaced by an Oracle ZS7-2 high-end storage system with an SL150 LTO tape library, together providing 11PB of storage capacity.

The existing computing environment of the BCC comprises 10 high-performance computing servers (64 threads and 256GB RAM each), two full 16-node high-performance computing clusters on blade systems, and one computing server with 1 terabyte of RAM for in-memory computing. These will gradually be replaced by a set of 44 computing servers (totaling 3,376 CPU cores and 22,687.2 GFLOPS of processing power), one set of four General Purpose Graphical Processing Unit servers with 81,920 CUDA/GPU cores (2,000 TFLOPS of processing power), and two sets of Large-Scale GPU (AMD- and RISC-based) servers with a total of 82,944 CUDA/GPU cores and 5.6 PetaFLOPS of computational power. The current large-memory computing server (384GB RAM) and the GPU server with five Nvidia K40M GPU cards will be integrated into the new GPU server cluster for deep learning and intensive AI tasks.
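As a quick sanity check, the aggregate figures quoted above can be related to per-unit numbers. The per-core and per-GPU derivations below are back-of-envelope assumptions based only on the totals listed here, not vendor specifications:

```python
# Back-of-envelope breakdown of the aggregate figures quoted above
# (assumed derivations from the listed totals, not official specs).

cpu_cores = 3376                  # total CPU cores across 44 servers
cpu_gflops = 22687.2              # total CPU processing power (GFLOPS)
gflops_per_core = cpu_gflops / cpu_cores   # average GFLOPS per core

dgx_cuda_cores = 55296            # CUDA cores listed for the DGX A100 (8 GPUs)
cuda_cores_per_gpu = dgx_cuda_cores // 8   # implied CUDA cores per GPU

total_cuda = 55296 + 27648        # AMD-based + RISC-based GPU systems

print(round(gflops_per_core, 2), cuda_cores_per_gpu, total_cuda)
# → 6.72 6912 82944
```

The derived total of 82,944 CUDA cores matches the figure quoted for the two Large-Scale GPU systems combined.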

Major IT Facilities List:

Computing Systems:

  • Computing nodes:
    • 44 units of Dell PowerEdge 740 and 870 Intel Computing Servers with 1TB, 2TB and 3TB System Memory
    • 3376 Total CPU Cores;  22687.2 GFLOPS Processing Power
  • General Purpose GPU Servers:
    • Dell PowerEdge C4140 GPU Server
    • 520 CPU Cores, 81920 GPU Cores; 2000 TFLOPS Computational Performance
  • Large-Scale GPU Systems:
    • AMD-based Systems
      • 1 x Nvidia DGX A100
      • AMD EPYC 7742 64-core CPUs; 8 Nvidia A100 GPUs; 1TB RAM
      • 55296 CUDA Cores; 5 PetaFLOPS AI; 10 PetaOPS INT8
    • RISC-based Systems
      • 2 x IBM Power AC922; 4 Nvidia A100 GPUs
      • Dual POWER9 32-core CPUs; 1TB RAM
      • 27648 CUDA Cores

Storage Systems:

  • High Performance Tier Storage Systems:
    • Tier 1: 2.764 PB Dell PowerEdge 840 Server Cluster for BeeGFS Parallel and High Throughput File Systems
    • Tier 2: 10PB Dell EMC PowerScale A2000, 32 nodes
  • Near-Online Storage Tier:
    • Tier 3: Oracle ZS7-2, 6.384PB capacity
  • Backup and Archive:
    • Oracle SL150 Tape Library Systems, 5PB capacity

Network Systems:
  • Dell Force10 4810 x 4
  • HUAWEI CE7850 x 2, CE6810 x 8 and S5720 x 4
  • Mellanox 6036G x 2
  • Two-layer fully non-blocking InfiniBand network: HDR 200Gbps core and EDR 100Gbps host network
  • Spine-and-Leaf dual-path 100Gbps Ethernet core network

For enquiries, please contact the Biomedical Computing Centre:

Mr. Michael Lau
Tel: (852) 3763 6016
Email: michael.cflau@cuhk.edu.hk