| Cluster | Access Condition | Date of arrival | Manufacturing date | Nodes | CPUs per node | CPU model | Cores per node | Architecture | Memory | Storage | Network | Accelerators |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| uvb | | 2011-01-04 | 2011-01-04 | 4 | 2 | Intel Xeon X5670 | 12 | x86_64 | 96 GiB RAM | 250.0 GB | 1 Gbps (SR-IOV) + 40 Gbps InfiniBand | |
| Cluster | Date of arrival | Manufacturing date | Nodes | CPUs per node | CPU model | Cores per node | Architecture | Memory | Storage | Network | Accelerators |
|---|---|---|---|---|---|---|---|---|---|---|---|
| esterel5 | 2025-02-25 | 2016-06-08 | 2 | 2 | Intel Xeon E5-2630 v4 | 20 | x86_64 | 128 GiB RAM | 2.0 TB + 1.6 TB | 1 Gbps + 40 Gbps InfiniBand | 3 x Nvidia GeForce GTX 1080 (8 GiB) |
| esterel7 | 2025-03-06 | 2017-05-23 | 2 | 2 | Intel Xeon E5-2620 v4 | 16 | x86_64 | 128 GiB RAM | 999.7 GB + 399.4 GB | 1 Gbps + 40 Gbps InfiniBand | 4 x Nvidia GeForce GTX 1080 Ti (11 GiB) |
| esterel10 | 2024-12-19 | 2017-11-15 | 3 | 2 | Intel Xeon E5-2630 v4 | 20 | x86_64 | 128 GiB RAM | 1.6 TB + 2 x 600.1 GB | 1 Gbps + 56 Gbps InfiniBand | esterel10-[1-2]: 4 x Nvidia GeForce GTX 1080 Ti (11 GiB); esterel10-3: 3 x Nvidia GeForce GTX 1080 Ti (11 GiB) |
| esterel41 | 2025-01-25 | 2024-03-01 | 1 | 2 | Intel Xeon Gold 6426Y | 32 | x86_64 | 512 GiB RAM | 479.6 GB + 2.9 TB | 1 Gbps + 56 Gbps InfiniBand | 2 x Nvidia L40 (45 GiB) |
| mercantour2 | 2025-01-16 | 2015-09-01 | 8 | 2 | Intel Xeon E5-2650 v2 | 16 | x86_64 | 256 GiB RAM | 1.0 TB | 1 Gbps (SR-IOV) + 40 Gbps InfiniBand | |
| mercantour5 | 2025-02-24 | 2019-07-30 | 4 | 2 | Intel Xeon Gold 6240 | 36 | x86_64 | 384 GiB RAM | 599.6 GB + 959.1 GB | 1 Gbps + 40 Gbps InfiniBand | |
| mercantour6 | 2025-02-27 | 2020-10-05 | 1 | 2 | AMD EPYC 7542 | 64 | x86_64 | 1 TiB RAM | 239.4 GB + 1.9 TB | 1 Gbps + 40 Gbps InfiniBand | |
| musa | 2025-01-16 | 2024-12-09 | 6 | 2 | AMD EPYC 9254 | 48 | x86_64 | 512 GiB RAM | 6.4 TB | 25 Gbps | 2 x Nvidia H100 NVL (94 GiB) |
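Each per-cluster section below quotes totals (nodes, CPUs, cores). As a quick cross-check, here is a minimal sketch that derives those totals from the per-node figures in the tables above (the cluster data is transcribed from the tables; the helper name is ours):

```python
# Cross-check of the per-cluster totals quoted in the sections below.
# Each entry: (nodes, CPUs per node, cores per node), transcribed from
# the hardware tables above.
CLUSTERS = {
    "uvb": (4, 2, 12),
    "esterel5": (2, 2, 20),
    "esterel7": (2, 2, 16),
    "esterel10": (3, 2, 20),
    "esterel41": (1, 2, 32),
    "mercantour2": (8, 2, 16),
    "mercantour5": (4, 2, 36),
    "mercantour6": (1, 2, 64),
    "musa": (6, 2, 48),
}

def totals(nodes: int, cpus_per_node: int, cores_per_node: int) -> tuple:
    """Return (total nodes, total CPUs, total cores) for a cluster."""
    return nodes, nodes * cpus_per_node, nodes * cores_per_node

for name, spec in CLUSTERS.items():
    print(name, totals(*spec))
```

For example, `totals(4, 2, 12)` gives `(4, 8, 48)`, matching the uvb summary below.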
uvb: 4 nodes, 8 CPUs, 48 cores (json, drawgantt).

Reservation example:

fsophia:~$ oarsub -p uvb -I

Storage:
- /dev/disk0 (primary disk)

Network:
- eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Intel 82576 Gigabit Network Connection, driver: igb
- eth1/eno2, Ethernet, model: Intel 82576 Gigabit Network Connection, driver: igb - unavailable for experiment
- ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
- ib1, InfiniBand, configured rate: 10 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core - unavailable for experiment
esterel5: 2 nodes, 4 CPUs, 40 cores (json, drawgantt).

Reservation example:

fsophia:~$ oarsub -p esterel5 -q production -I

Storage:
- /dev/disk0 (primary disk)
- /dev/disk1

Network:
- eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
- eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
esterel7: 2 nodes, 4 CPUs, 32 cores (json, drawgantt).

Reservation example:

fsophia:~$ oarsub -p esterel7 -q production -I

Storage:
- /dev/disk0 (primary disk)
- /dev/disk1

Network:
- eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
- eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
esterel10: 3 nodes, 6 CPUs, 60 cores, split as follows due to differences between nodes (json, drawgantt).

Reservation example:

fsophia:~$ oarsub -p esterel10 -q production -I

esterel10-[1-2]:

Storage:
- /dev/disk0 (primary disk)
- /dev/disk1
- /dev/disk2

Network:
- eth0/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- eth1/enp1s0f1, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
- ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core

esterel10-3:

Storage:
- /dev/disk0 (primary disk)
- /dev/disk1
- /dev/disk2

Network:
- eth0/eno1, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- eth1/enp1s0f1, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
- ib0, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
esterel41: 1 node, 2 CPUs, 32 cores (json, drawgantt).

Reservation example:

fsophia:~$ oarsub -p esterel41 -q production -I

Storage:
- /dev/disk0 (primary disk)
- /dev/disk1

Network:
- eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - no KaVLAN
- eth1/ens15f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
- eth2/ens15f2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
- eth3/ens15f3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
- ibs3, InfiniBand, configured rate: 56 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core
mercantour2: 8 nodes, 16 CPUs, 128 cores, split as follows due to differences between nodes (json, drawgantt).

Reservation example:

fsophia:~$ oarsub -p mercantour2 -q production -I

First node group:

Storage:
- /dev/disk0 (primary disk)

Network:
- eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb
- eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
- ib1, InfiniBand, configured rate: 10 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core - unavailable for experiment

Second node group:

Storage:
- /dev/disk0 (primary disk)

Network:
- eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb
- eth1/eno2, Ethernet, model: Intel I350 Gigabit Network Connection, driver: igb - unavailable for experiment
- ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core
- ib1, InfiniBand, configured rate: 10 Gbps, model: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s Interface, driver: mlx4_core - unavailable for experiment
mercantour5: 4 nodes, 8 CPUs, 144 cores (json, drawgantt).

Reservation example:

fsophia:~$ oarsub -p mercantour5 -q production -I

Storage:
- /dev/disk0 (primary disk)
- /dev/disk1

Network:
- eth0/enp1s0f0np0, Ethernet, configured rate: 1 Gbps, model: Intel I350 Gigabit Network Connection, driver: igb - no KaVLAN
- ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27500 Family [ConnectX-3], driver: mlx4_core
mercantour6: 1 node, 2 CPUs, 64 cores (json, drawgantt).

Reservation example:

fsophia:~$ oarsub -p mercantour6 -q production -I

Storage:
- /dev/disk0 (primary disk)
- /dev/disk1

Network:
- eth0/eno1, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - no KaVLAN
- eth1/eno2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment
- ib0, InfiniBand, configured rate: 40 Gbps, model: Mellanox Technologies MT27520 Family [ConnectX-3 Pro], driver: mlx4_core
musa: 6 nodes, 12 CPUs, 288 cores (json, drawgantt).

Reservation example:

fsophia:~$ oarsub -p musa -q production -I

Storage:
- /dev/disk0 (primary disk)

Network:
- eth0/enp1s0f0np0, Ethernet, configured rate: 25 Gbps, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en
- eth1/ens22f1np1, Ethernet, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment
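The reservation examples above all share one shape: `oarsub -p <cluster> -I`, with `-q production` added for every cluster except uvb. A small sketch that reproduces those command lines (the helper name and the set constant are ours, not part of OAR):

```python
# Build the oarsub command lines shown in the reservation examples above.
# In this listing, only uvb is reserved without "-q production".
DEFAULT_QUEUE_CLUSTERS = {"uvb"}

def reservation_command(cluster: str) -> str:
    """Return the interactive oarsub command line for a cluster."""
    parts = ["oarsub", "-p", cluster]
    if cluster not in DEFAULT_QUEUE_CLUSTERS:
        parts += ["-q", "production"]
    parts.append("-I")
    return " ".join(parts)

print(reservation_command("uvb"))       # oarsub -p uvb -I
print(reservation_command("esterel5"))  # oarsub -p esterel5 -q production -I
```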