About

Curie has been running since summer 2015.

It was upgraded to CentOS 7 in September 2018.

It replaces sv3/6/7/9.

Cluster

Master

  • Dell R520
  • 2 Xeon E5-2430v2 (6C/12T)
  • 64GB of RAM
  • 8 x 4TB RAID6 storage (14TB available to the system)


Storage

Nodes

  • Each node contains one or more disks used as a local $TMPDIR or for BeeOND (see the usage sketch in the TMPDIR / BeeGFS section below).
  • They also have access to the BeeGFS shared filesystem.

NAS server

  • There is a storage solution, reachable only from the master, for storing data that does not need to stay on BeeGFS or in $HOME.
  • We advise archiving your data with tar before it is moved to the NAS (see the example below).
  • 10TB are available to users.
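
For example, to bundle a finished run into a single compressed archive before it is moved to the NAS (a minimal sketch; the directory and archive names are placeholders):

tar czf myrun_2019.tar.gz myrun_2019/      # create the compressed archive
tar tzf myrun_2019.tar.gz | head           # quick sanity check of its contents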

TMPDIR / BeeGFS

  • BeeGFS (formerly FhGFS) is the shared filesystem that replaces the Lustre filesystem used on sv6.
  • 33TB available.
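
A typical job stages its input onto the fast node-local $TMPDIR, runs there, and copies the results back to BeeGFS or $HOME before the job ends. A minimal sketch (directory and program names are placeholders; uncomment --gres=ioperso when the node's local SSDs are needed):

#!/bin/bash
#SBATCH -p zen2                 # any partition
#SBATCH -c 4                    # aka --cpus-per-task
#SBATCH --hint=nomultithread    # use physical core, no HT/SMT
##SBATCH --gres=ioperso         # fast local storage as TMPDIR (where available)

cp -r $HOME/myrun $TMPDIR/      # stage input onto the node-local disk
cd $TMPDIR/myrun
./my_program > output.log       # run from the fast local storage
cp -r $TMPDIR/myrun $HOME/myrun_done   # copy results back before the job ends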

Partitions / Families
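
The partition corresponding to each family, and its current limits, can be checked directly with standard SLURM commands, for example (output depends on the current configuration):

sinfo -s                        # summary of all partitions and node counts
sinfo -p zen2 -N -l             # node-level view of one partition
scontrol show partition zen2    # defaults and limits (DefMemPerCPU, time limit, ...)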

epycv1 / compute-0-0

  • 1 Dell R7425
  • 2 AMD Epyc 7351 16C/32T, 2.4GHz.
  • 256GB RAM.
  • Architecture : Zen - Naples.
  • 10Gigabit ethernet network.

xeonv1 / compute-1-x

  • 14 Dell R620.
  • 2 Xeon E5-2670 8C/16T, 2.6GHz.
  • 64GB RAM.
  • Architecture : Sandy Bridge.
  • 10Gigabit ethernet network.

xeonv2 / compute-2-x

  • 7 Dell R620.
  • 2 Xeon E5-2680v2 10C/20T, 2.8GHz.
  • 64GB RAM.
  • Architecture : Ivy Bridge.
  • 10Gigabit ethernet network.

xeonv3 / compute-3-x

  • 9 Dell R630.
  • 2 Xeon E5-2680v3 12C/24T, 2.5GHz.
  • 128GB RAM.
  • Architecture : Haswell.
  • 10Gigabit ethernet network.

xeonv4 / compute-7-x

  • 4 Dell R630.
  • 2 Xeon E5-2680v4 14C/28T, 2.4GHz.
  • 128GB RAM.
  • Architecture : Broadwell.
  • 10Gigabit ethernet network.

xeonv5 / compute-9-x

  • 5 Dell R640.
  • 2 Xeon Gold 6130 16C/32T, 2.1GHz.
  • 192GB RAM, DDR4-2666.
  • Architecture : Skylake.
  • 10Gigabit ethernet network.

xeonv6 / compute-11-0

  • 1 Dell R640.
  • 2 Xeon Gold 5218 16C/32T, 2.3GHz.
  • 192GB RAM, DDR4-2666.
  • Architecture : Cascade Lake.
  • 10Gigabit ethernet network.

xeonv3_mono / compute-4-x & compute-5-0

  • 3 Dell R630

    compute-4-x : 1 Xeon E5-2637v3 4C/8T, 3.5GHz, 192GB RAM, 4 x 800GB SSD available through --gres=ioperso.

    compute-5-0 : 2 Xeon E5-2637v3 4C/8T, 3.5GHz, 512GB RAM, 8 x 800GB SSD available through --gres=ioperso (see the sample header below).

  • Architecture : Haswell.

  • 10Gigabit ethernet network.
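
SLURM header (a sketch only, assuming the partition carries the family name xeonv3_mono; check with sinfo, and see the partition defaults for memory):

#!/bin/bash

#SBATCH -p xeonv3_mono          # partition name assumed to match the family
##SBATCH -c 4                   # aka --cpus-per-task, 4 cpus here
#SBATCH --hint=nomultithread    # use physical core, no HT/SMT
##SBATCH --mem=YYYYY            # real memory required per node
#SBATCH --gres=ioperso          # local SSDs of these nodes as fast TMPDIR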

napab / compute-5-1 & compute-5-2 & compute-8-1

  • 3 Dell R730
  • 2 Xeon E5-2637v3 4C/8T @ 3.5GHz, 512GB RAM (compute-5-x).
  • 2 Xeon E5-2637v4 4C/8T @ 3.5GHz, 512GB RAM (compute-8-1).
  • 2 SSD 800GB available through --gres=ioperso (compute-5-x).
  • 8 SSD 800GB available through --gres=ioperso (compute-8-1).
  • Architecture : Haswell (compute-5-x).
  • Architecture : Broadwell (compute-8-1).
  • 10Gigabit ethernet network.

Ask Nadia / Sophie / Fabienne B. before using it.


xeonv4_mono / compute-8-0

  • 1 Dell R630
  • 2 Xeon E5-2667v4 8C/16T, 3.2GHz
  • 512GB RAM, DDR4-2400
  • 8 x 380GB SSD available through --gres=ioperso
  • Architecture : Broadwell.
  • 10Gigabit ethernet network.

xeonv5_mono / compute-10-0

  • 1 Dell R740
  • 2 Xeon Gold 5122 4C/8T, 3.6GHz
  • 384 GB RAM, DDR4-2666
  • 8 x 380GB SSD available through --gres=ioperso
  • Architecture : Skylake.
  • 10Gigabit ethernet network.

actipnmr / compute-12-x

  • 4 Dell R7525
  • 2 AMD EPYC 7402, 24C/48T, 2.8GHz
  • 512GB RAM, DDR4-3200
  • 4 SSDs per node, available through --gres=ioperso
  • Architecture : Zen 2 / Rome
  • Network : Ethernet 10Gb

Ask Hélène or Nicolas before using it.

SLURM header :

#!/bin/bash

#SBATCH -p actipnmr             # actipnmr partition
##SBATCH -c 4                   # aka --cpus-per-task, 4 cpus here
#SBATCH --hint=nomultithread    # use physical core, no HT/SMT
#SBATCH --mem-per-cpu=5200      # (MB) DefMemPerCPU on this partition : 5200
##SBATCH --mem=YYYYY            # real memory required per node
##SBATCH --gres=ioperso         # if you need fast local storage as TMPDIR


molqed / compute-13-x

  • 2 Dell R7525
  • 2 AMD EPYC 7402, 24C/48T, 2.8GHz
  • 512GB RAM, DDR4-3200
  • 8 SSDs per node, available through --gres=ioperso
  • Architecture : Zen 2 / Rome
  • Network : Ethernet 10Gb

Ask Trond before using it.

SLURM header :

#!/bin/bash

#SBATCH -p molqed               # molqed partition
##SBATCH -c 4                   # aka --cpus-per-task, 4 cpus here
#SBATCH --hint=nomultithread    # use physical core, no HT/SMT
#SBATCH --mem-per-cpu=5200      # (MB) DefMemPerCPU on this partition : 5200
##SBATCH --mem=YYYYY            # real memory required per node
##SBATCH --gres=ioperso         # if you need fast local storage as TMPDIR


zen2 / compute-12-x, compute-13-1, compute-14-x

  • 6 Dell R7525, plus the molqed and actipnmr nodes
  • 2 AMD EPYC 7402, 24C/48T, 2.8GHz
  • 512GB RAM, DDR4-3200
  • 4 SSDs per node, available through --gres=ioperso
  • Architecture : Zen 2 / Rome
  • Network : Ethernet 10Gb

SLURM header :

#!/bin/bash

#SBATCH -p zen2			# zen2 partition
##SBATCH -c 4			# aka --cpus-per-task, 4 cpus here
#SBATCH --hint=nomultithread	# use physical core, no HT/SMT
#SBATCH --mem-per-cpu=5000	# (MB) DefMemPerCPU on this partition : 5000
##SBATCH --mem=YYYYY		# real memory required per node
##SBATCH --gres=ioperso		# if you need fast local storage as TMPDIR


nanox-zen3 / compute-15-x

  • 3 Dell R7525
  • 2 AMD EPYC 7513, 32C/64T, 2.6GHz
  • 512GB RAM, DDR4-3200
  • 4 SSDs per node, available through --gres=ioperso
  • Architecture : Zen 3 / Milan
  • Network : Ethernet 10Gb

SLURM header :

#!/bin/bash

#SBATCH -p nanox-zen3           # nanox-zen3 partition
##SBATCH -c 4                   # aka --cpus-per-task, 4 cpus here
#SBATCH --hint=nomultithread    # use physical core, no HT/SMT
#SBATCH --mem-per-cpu=5000      # (MB) DefMemPerCPU on this partition : 5000
##SBATCH --mem=YYYYY            # real memory required per node
##SBATCH --gres=ioperso         # if you need fast local storage as TMPDIR
