
Summary


Hardware summary for grenoble
  • 8 clusters
  • 65 nodes
  • 2176 CPU cores
  • 60 GPUs
  • 390144 GPU cores
  • 14.00 TiB RAM
  • 6.00 TiB PMEM
  • 96 SSDs and 72 HDDs on nodes (total: 247.25 TB)
  • 148.8 TFLOPS (excluding GPUs)

Clusters summary


default queue resources

  • dahu: arrived 2018-03-22, manufactured 2017-12-12; 32 nodes; 2 x Intel Xeon Gold 6130 (32 cores/node, x86_64); 192 GiB RAM; 240.1 GB + 480.1 GB + 4.0 TB storage; 10 Gbps + 100 Gbps Omni-Path
  • drac (exotic access): arrived 2020-10-05, manufactured 2016-10-17; 12 nodes; 2 x POWER8NVL (20 cores/node, ppc64le); 128 GiB RAM; 1.0 TB + 1.0 TB storage; 10 Gbps + 2 x 100 Gbps InfiniBand; 4 x Nvidia Tesla P100-SXM2-16GB (16 GiB)
  • servan (exotic access): arrived 2021-12-15, manufactured 2021-12-10; 2 nodes; 2 x AMD EPYC 7352 (48 cores/node, x86_64); 128 GiB RAM; 1.6 TB + 1.6 TB storage; 25 Gbps + 2 x 100 Gbps FPGA/Ethernet; Xilinx Alveo U200
  • troll (exotic access): arrived 2019-12-23, manufactured 2019-11-21; 4 nodes; 2 x Intel Xeon Gold 5218 (32 cores/node, x86_64); 384 GiB RAM + 2 TiB PMEM; 480.1 GB + 1.6 TB storage; 25 Gbps (SR-IOV) + 100 Gbps Omni-Path
  • yeti (exotic access): arrived 2018-01-16, manufactured 2017-12-26; 4 nodes; 4 x Intel Xeon Gold 6130 (64 cores/node, x86_64); 768 GiB RAM; 480.1 GB + 3 x 2.0 TB* + 2 x 1.6 TB storage; 10 Gbps + 100 Gbps Omni-Path
*: disk is reservable
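Clusters flagged exotic in the list above are only allocated when the job explicitly requests the exotic type, and OAR jobs can also be scheduled for a later date with the -r flag. A minimal sketch, with cluster, date, and walltime as illustrative values:

```shell
# Advance reservation of one yeti node for 8 hours
# (date and walltime below are illustrative, not prescribed values)
oarsub -t exotic -p yeti -l host=1,walltime=8 -r "2026-01-15 09:00:00"
```

The reservation starts at the requested date rather than interactively; the reservable disks marked with * require a separate disk resource request, described on the disk reservation page.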
testing queue resources

  • chartreuse2: arrived 2025-01-13, manufactured 2016-11-14; 4 nodes; 2 x Intel Xeon E5-2640 v4 (20 cores/node, x86_64); 32 GiB RAM; 1.2 TB storage; 10 Gbps (SR-IOV)
  • kinovis: arrived 2025-02-10, manufactured 2024-06-26; 6 nodes; 2 x Intel Xeon Gold 6442Y (48 cores/node, x86_64); 256 GiB RAM; 1.9 TB storage; 2 x 25 Gbps + 100 Gbps; 2 x Nvidia L40S (45 GiB)
  • nessie: arrived 2024-11-08, manufactured 2024-08-19; 1 node; 2 x Intel Xeon Gold 6430 (64 cores/node, x86_64); 128 GiB RAM; 1.9 TB + 480.1 GB storage; 25 Gbps

Clusters in the default queue


dahu


32 nodes, 64 CPUs, 1024 cores, split as follows due to differences between nodes (json, drawgantt).

Reservation example:

fgrenoble:~$ oarsub -p dahu -I
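The interactive job above uses the default resource request; the node count and walltime can also be made explicit with -l. A sketch with illustrative values:

```shell
# Two dahu nodes for two hours, interactive
# (host count and walltime are illustrative, not prescribed values)
oarsub -p dahu -l host=2,walltime=2:00:00 -I
```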
dahu-[1,4-32] (30 nodes, 60 CPUs, 960 cores)
Model:
Dell PowerEdge C6420
Manufacturing date:
2017-12-12
Date of arrival:
2018-03-22
CPU:
Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz, 2 CPUs/node, 16 cores/CPU
Memory:
192 GiB RAM
Storage:
  • disk0, 240 GB SSD SATA Samsung MZ7KM240HMHQ0D3 (/dev/disk0) (primary disk)
  • disk1, 480 GB SSD SATA Samsung MZ7KM480HMHQ0D3 (/dev/disk1)
  • disk2, 4 TB HDD SATA Seagate ST4000NM0265-2DC (/dev/disk2)
Networks:
  • eth0/enp24s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e

  • eth1/enp24s0f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment

  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

dahu-2 (1 node, 2 CPUs, 32 cores)
Model:
Dell PowerEdge C6420
Manufacturing date:
2017-12-12
Date of arrival:
2018-03-22
CPU:
Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz, 2 CPUs/node, 16 cores/CPU
Memory:
192 GiB RAM
Storage:
  • disk0, 240 GB SSD SATA Samsung MZ7KM240HMHQ0D3 (/dev/disk0) (primary disk)
  • disk1, 480 GB SSD SATA Samsung MZ7KM480HMHQ0D3 (/dev/disk1)
  • disk2, 4 TB HDD SATA Toshiba TOSHIBA MG08ADA4 (/dev/disk2)
Networks:
  • eth0/enp24s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e

  • eth1/enp24s0f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment

  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

dahu-3 (1 node, 2 CPUs, 32 cores)
Model:
Dell PowerEdge C6420
Manufacturing date:
2017-12-12
Date of arrival:
2018-03-22
CPU:
Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz, 2 CPUs/node, 16 cores/CPU
Memory:
192 GiB RAM
Storage:
  • disk0, 240 GB SSD SATA Samsung MZ7KM240HMHQ0D3 (/dev/disk0) (primary disk)
  • disk1, 480 GB SSD SATA Samsung MZ7KM480HMHQ0D3 (/dev/disk1)
  • disk2, 4 TB HDD SATA Seagate ST4000NM018B-2TF (/dev/disk2)
Networks:
  • eth0/enp24s0f0, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e

  • eth1/enp24s0f1, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment

  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

drac


12 nodes, 24 CPUs, 240 cores (json, drawgantt).

Reservation example:

fgrenoble:~$ oarsub -p drac -t exotic -I
Access condition:
exotic job type
Model:
IBM PowerNV S822LC (8335-GTB)
Manufacturing date:
2016-10-17
Date of arrival:
2020-10-05
CPU:
POWER8NVL, altivec supported, 2 CPUs/node, 10 cores/CPU
Memory:
128 GiB RAM
Storage:
  • disk0, 1 TB HDD SATA Seagate ST1000NX0313 (/dev/disk0) (primary disk)
  • disk1, 1 TB HDD SATA Seagate ST1000NX0313 (/dev/disk1)
Networks:
  • eth0/enP1p1s0f0, Ethernet, configured rate: 10 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x

  • eth1/enP1p1s0f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment

  • eth2/enP1p1s0f2, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment

  • eth3/enP1p1s0f3, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme II BCM57800 1/10 Gigabit Ethernet, driver: bnx2x - unavailable for experiment

  • eth4/enP9p7s0f0, Ethernet, configured rate: 1 Gbps, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment

  • eth5/enP9p7s0f1, Ethernet, model: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe, driver: tg3 - unavailable for experiment

  • ib0, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core

  • ib1, InfiniBand, configured rate: 100 Gbps, model: Mellanox Technologies MT27700 Family [ConnectX-4], driver: mlx5_core

GPU:
4 x Nvidia Tesla P100-SXM2-16GB (16 GiB)
Compute capability: 6.0
Note:
This cluster is defined as exotic. Please read the exotic page for more information.
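Since each drac node carries four GPUs, a job that needs only part of a node can request individual GPU resources instead of whole hosts. A sketch, with the GPU count as an illustrative value:

```shell
# One of drac's four P100 GPUs, interactive (the exotic type is still required)
oarsub -p drac -t exotic -l gpu=1 -I
```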

servan


2 nodes, 4 CPUs, 96 cores (json, drawgantt).

Reservation example:

fgrenoble:~$ oarsub -p servan -t exotic -I
Access condition:
exotic job type
Model:
Dell PowerEdge R7525
Manufacturing date:
2021-12-10
Date of arrival:
2021-12-15
CPU:
AMD EPYC 7352 24-Core Processor, 2 CPUs/node, 24 cores/CPU
Memory:
128 GiB RAM
Storage:
  • disk0, 2 TB SSD NVME Dell Dell Ent NVMe v2 AGN MU U.2 1.6TB (/dev/disk0) (primary disk)
  • disk1, 2 TB SSD NVME Dell Dell Ent NVMe v2 AGN MU U.2 1.6TB (/dev/disk1)
Networks:
  • eth0/eno33, Ethernet, configured rate: 25 Gbps, model: Intel Ethernet Controller E810-C for SFP, driver: ice

  • eth1/eno34, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment

  • eth2/eno35, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment

  • eth3/eno36, Ethernet, model: Intel Ethernet Controller E810-C for SFP, driver: ice - unavailable for experiment

  • fpga0, FPGA/Ethernet, configured rate: 100 Gbps, model: Xilinx Alveo U200 QSFP28 (100GbE), driver: n/a

  • fpga1, FPGA/Ethernet, configured rate: 100 Gbps, model: Xilinx Alveo U200 QSFP28 (100GbE), driver: n/a

FPGA:
Xilinx Alveo U200
Note:
This cluster is defined as exotic. Please read the exotic page for more information.

troll


4 nodes, 8 CPUs, 128 cores (json, drawgantt).

Reservation example:

fgrenoble:~$ oarsub -p troll -t exotic -I
Access condition:
exotic job type
Model:
Dell PowerEdge R640
Manufacturing date:
2019-11-21
Date of arrival:
2019-12-23
CPU:
Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz, 2 CPUs/node, 16 cores/CPU
Memory:
384 GiB RAM + 2 TiB PMEM
Storage:
  • disk0, 480 GB SSD SATA Micron MTFDDAK480TDN (/dev/disk0) (primary disk)
  • disk1, 2 TB SSD NVME Dell Dell Ent NVMe AGN MU AIC 1.6TB (/dev/disk1)
Networks:
  • eth0/eno1, Ethernet, configured rate: 25 Gbps, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core

  • eth1/eno2, Ethernet, model: Mellanox Technologies MT27710 Family [ConnectX-4 Lx], driver: mlx5_core - unavailable for experiment

  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

Note:
This cluster is defined as exotic. Please read the exotic page for more information.

yeti


4 nodes, 16 CPUs, 256 cores, split as follows due to differences between nodes (json, drawgantt).

Reservation example:

fgrenoble:~$ oarsub -p yeti -t exotic -I
yeti-1 (1 node, 4 CPUs, 64 cores)
Access condition:
exotic job type
Model:
Dell PowerEdge R940
Manufacturing date:
2017-12-26
Date of arrival:
2018-01-16
CPU:
Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz, 4 CPUs/node, 16 cores/CPU
Memory:
768 GiB RAM
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (/dev/disk0) (primary disk)
  • disk1, 2 TB HDD SAS Seagate ST2000NX0463 (/dev/disk1) (reservable)
  • disk2, 2 TB HDD SAS Seagate ST2000NX0463 (/dev/disk2) (reservable)
  • disk3, 2 TB HDD SAS Seagate ST2000NX0463 (/dev/disk3) (reservable)
  • disk4, 2 TB SSD NVME Dell Dell Express Flash PM1725b 1.6TB AIC (/dev/disk4)
  • disk5, 2 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (/dev/disk5)
Networks:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e

  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment

  • eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment

  • eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment

  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

Note:
This cluster is defined as exotic. Please read the exotic page for more information.
yeti-2,4 (2 nodes, 8 CPUs, 128 cores)
Access condition:
exotic job type
Model:
Dell PowerEdge R940
Manufacturing date:
2017-12-26
Date of arrival:
2018-01-16
CPU:
Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz, 4 CPUs/node, 16 cores/CPU
Memory:
768 GiB RAM
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G7R (/dev/disk0) (primary disk)
  • disk1, 2 TB HDD SAS Seagate ST2000NX0463 (/dev/disk1) (reservable)
  • disk2, 2 TB HDD SAS Seagate ST2000NX0463 (/dev/disk2) (reservable)
  • disk3, 2 TB HDD SAS Seagate ST2000NX0463 (/dev/disk3) (reservable)
  • disk4, 2 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (/dev/disk4)
  • disk5, 2 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (/dev/disk5)
Networks:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e

  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment

  • eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment

  • eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment

  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

Note:
This cluster is defined as exotic. Please read the exotic page for more information.
yeti-3 (1 node, 4 CPUs, 64 cores)
Access condition:
exotic job type
Model:
Dell PowerEdge R940
Manufacturing date:
2017-12-26
Date of arrival:
2018-01-16
CPU:
Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz, 4 CPUs/node, 16 cores/CPU
Memory:
768 GiB RAM
Storage:
  • disk0, 480 GB SSD SATA Intel SSDSC2KG480G8R (/dev/disk0) (primary disk)
  • disk1, 2 TB HDD SAS Seagate ST2000NX0463 (/dev/disk1) (reservable)
  • disk2, 2 TB HDD SAS Seagate ST2000NX0463 (/dev/disk2) (reservable)
  • disk3, 2 TB HDD SAS Seagate ST2000NX0463 (/dev/disk3) (reservable)
  • disk4, 2 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (/dev/disk4)
  • disk5, 2 TB SSD NVME Dell Dell Express Flash NVMe PM1725 1.6TB AIC (/dev/disk5)
Networks:
  • eth0/eno1, Ethernet, configured rate: 10 Gbps, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e

  • eth1/eno2, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment

  • eth2/eno3, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment

  • eth3/eno4, Ethernet, model: Intel Ethernet Controller X710 for 10GbE SFP+, driver: i40e - unavailable for experiment

  • ib0, Omni-Path, configured rate: 100 Gbps, model: Intel Omni-Path HFI Silicon 100 Series [discrete], driver: hfi1

Note:
This cluster is defined as exotic. Please read the exotic page for more information.

Clusters in the testing queue


chartreuse2


4 nodes, 8 CPUs, 80 cores (json, drawgantt).

Reservation example:

fgrenoble:~$ oarsub -p chartreuse2 -q testing -I
Access condition:
testing queue
Model:
Dell PowerEdge C6320
Manufacturing date:
2016-11-14
Date of arrival:
2025-01-13
CPU:
Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz, 2 CPUs/node, 10 cores/CPU
Memory:
32 GiB RAM
Storage:
  • disk0, 1 TB HDD SAS Toshiba AL14SEB120N (/dev/disk0) (primary disk)
Networks:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 10 Gbps, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe

  • eth1/eno2, Ethernet, model: Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection, driver: ixgbe - unavailable for experiment

kinovis


6 nodes, 12 CPUs, 288 cores (json, drawgantt).

Reservation example:

fgrenoble:~$ oarsub -p kinovis -q testing -I
Access condition:
testing queue
Model:
HPE Proliant DL380 Gen11
Manufacturing date:
2024-06-26
Date of arrival:
2025-02-10
CPU:
Intel(R) Xeon(R) Gold 6442Y, 2 CPUs/node, 24 cores/CPU
Memory:
256 GiB RAM
Storage:
  • disk0, 2 TB SSD SATA HPE MR416i-o Gen11 (/dev/disk0) (primary disk)
Networks:
  • eth0/enp1s0f0np0, Ethernet, configured rate: 25 Gbps, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en

  • eth1/ens15f1np1, Ethernet, configured rate: 25 Gbps, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en (multi NICs example)

  • eth2/ens4np0, Ethernet, configured rate: 100 Gbps, model: Mellanox Technologies MT28908 Family [ConnectX-6], driver: mlx5_core (multi NICs example)

GPU:
2 x Nvidia L40S (45 GiB)
Compute capability: 8.9

nessie


1 node, 2 CPUs, 64 cores (json, drawgantt).

Reservation example:

fgrenoble:~$ oarsub -p nessie -q testing -I
Access condition:
testing queue
Model:
HPE ProLiant DL385 Gen10+ v2
Manufacturing date:
2024-08-19
Date of arrival:
2024-11-08
CPU:
Intel(R) Xeon(R) Gold 6430, 2 CPUs/node, 32 cores/CPU
Memory:
128 GiB RAM
Storage:
  • disk0, 2 TB SSD NVME HP VO001920KYDMT (/dev/disk0) (primary disk)
  • disk1, 480 GB SSD SATA HP VK000480GZCNE (/dev/disk1)
Networks:
  • eth0, Ethernet, configured rate: 25 Gbps, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en - no KaVLAN

  • eth1, Ethernet, model: Broadcom Inc. and subsidiaries BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller, driver: bnxt_en - unavailable for experiment