In addition to the resources available from the national supercomputing center (GENCI) and from the regional computing mesocenter CALMIP, the LCPQ operates a local cluster. The cluster is made up of several groups of nodes with different configurations, matched to the needs of the different teams. Here is a brief description of the cluster, totaling 64 nodes, i.e. 1750 cores / 3500 threads and more than 14 TB of RAM:
For parallel computing:
Intel Sandy Bridge nodes: 16 cores and 64 GB of RAM per node.
Intel Ivy Bridge nodes: 20 cores and 64 GB of RAM per node.
Intel Haswell nodes: 24 cores and 128 GB of RAM per node.
Intel Broadwell nodes: 28 cores and 128 GB of RAM per node.
Intel Skylake nodes: 32 cores and 192 GB of RAM per node.
AMD Zen 1 nodes: 32 cores and 256 GB of RAM per node.
AMD Zen 2 nodes: 48 cores and 512 GB of RAM per node.
AMD Zen 3 nodes: 64 cores and 512 GB of RAM per node.
For monolithic computing:
Intel Haswell nodes: 4 cores, 192 GB of RAM, and dedicated SSDs per node.
Intel Haswell nodes: 8 cores, 512 GB of RAM, and dedicated SSDs per node.
Intel Broadwell node: 8 cores, 512 GB of RAM, and dedicated SSDs.
Intel Broadwell node: 16 cores, 512 GB of RAM, and dedicated SSDs.
Intel Skylake node: 8 cores, 384 GB of RAM, and dedicated SSDs.
For storage:
A 33 TB BeeGFS filesystem available on all nodes.
A 15 TB archive space available to users.
Jobs are managed by the SLURM workload manager. The cluster is hosted in the Université Paul Sabatier datacenter.
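As an illustration only, submitting a job to SLURM typically involves a small batch script such as the minimal sketch below. The job name, resource requests, and program name are placeholders chosen for this example; partition names are deliberately omitted because they are specific to this cluster and are described in the documentation linked below.

    #!/bin/bash
    #SBATCH --job-name=example      # name shown in the queue (placeholder)
    #SBATCH --nodes=1               # request a single node
    #SBATCH --ntasks=16             # number of tasks, e.g. one per core of a Sandy Bridge node
    #SBATCH --mem=60G               # memory request, kept below the 64 GB of that node
    #SBATCH --time=24:00:00         # wall-clock time limit
    # A --partition directive would normally select one of the node groups listed above;
    # the actual partition names are given in the cluster documentation.

    srun ./my_program               # launch the (placeholder) program under SLURM

The script would then be submitted with "sbatch job.sh" and the queue inspected with "squeue".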
Cluster documentation is available at: https://www.lcpq.ups-tlse.fr/cluster/