Hardware

MOX researchers and collaborators have access to different classes of hardware for HPC, which can be grouped into three main categories:

  • MOX resources range from workstations to small clusters (up to 160 computing cores) and entry-level GPU facilities, in a highly configurable and personalized environment.
  • POLIMI HPC resources are built upon the collaboration between the Department of Mathematics and other Polimi departments, and are available for specific projects, presently confined to applications in statistical big data analysis.
  • CINECA supercomputing resources form a world-class HPC environment, available for large-scale computing on CPU, GPU, and MIC architectures.

HPC resources are available upon authorization; please contact the MOX Helpdesk to check your eligibility.


Resources @MOX

gigat:
5 nodes, 20 Intel Xeon E5-4610v2 @2.30GHz, 160 cores, 1.2TB RAM, O.S. RHES 6.5
6 nodes, 12 Intel Xeon E5-2640v4 @2.40GHz, 120 cores, 384GB RAM, O.S. CentOS 6.7
Cluster for parallel applications, both MPI and OpenMP (the latter up to 32 cores), with a high resident memory per node (up to 256GB). Nodes are interconnected by a dedicated Gigabit Ethernet. Single-core performance is similar to Cerbero (see further on this page), so this resource is not made available for sequential runs. A minimal hybrid MPI/OpenMP sketch is given after the queue list below.
Queues available: gigat, gigatlong
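
As an illustration, a minimal hybrid MPI/OpenMP program of the kind these queues are meant for might look as follows; this is only a sketch, and the compiler wrappers, MPI modules, and job-submission options available on gigat should be checked on the cluster itself (or asked to the MOX Helpdesk).

    /* hello_hybrid.c - minimal MPI + OpenMP check (illustrative sketch only) */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* each MPI rank opens an OpenMP parallel region on its node */
        #pragma omp parallel
        {
            printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
                   rank, size, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

A program of this kind would typically be compiled with the MPI compiler wrapper (e.g. mpicc -fopenmp hello_hybrid.c) and submitted to the gigat or gigatlong queue through the PBS queueing system; the exact queue limits and submission options depend on the local configuration.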

cerbero: 6 cores Intel Core i7-3930K @3.20GHz + 20 cores Intel Xeon E5-2640v4 @2.40GHz, NVIDIA GT 520 GPU
Resource for sequential applications and parallel CPU-GPU test runs.
Queues available: cerbero

idra: 16 nodes, 32 Intel Xeon X5560 @2.80GHz, 128 cores, 432GB RAM, O.S. RHES 6.2
Cluster for parallel applications, both MPI and OpenMP (the latter up to 8 cores); one node has its RAM increased to 64GB for memory-demanding tasks. Single-core performance is about half that of gigat, so this resource is not the best choice for long production runs.
Queues available: idra, idralong

 


Resources @POLIMI

biginsights: 3 nodes, 12 IBM POWER7 @4.2GHz, 48 cores, 384GB RAM, 15TB HDD, O.S. RHES 6.5
db2: 1 node, 2 IBM POWER7 @4.30GHz, 512GB RAM, 7TB HDD
Heterogeneous cluster for the analysis of Big Data through the BigInsights software suite, an IBM-optimized implementation of the standard Hadoop suite. The IBM GPFS parallel filesystem and IBM DB2blu (an in-memory columnar database for top-performance applications) are also available.
The cluster is part of the CIC-BigData Initiative by DMAT, DEIB, DIG @Polimi and IBM.

 


Resources @CINECA

Thanks to an agreement between Politecnico di Milano and Cineca, researchers and students from our University are granted access to top-quality HPC resources from the largest Italian provider. The working environment is similar to that at MOX: Linux RHES operating system and PBS queueing system. The software environment is also similar, and some of the main software packages licensed at Politecnico are installed, under the same licensing scheme, on the Cineca supercomputers as well.
MOX researchers are invited to use these facilities for demanding production runs.

marconi: 1512 nodes, 2 x Intel Xeon E5-2697 v4 (Broadwell) @2.3GHz, 18 cores each, 128GB RAM each

galileo: 524 nodes, 2 x Intel Xeon E5-2630 v3 (Haswell) @2.4GHz, 8 cores each, 128GB RAM each

pico: 74 nodes, 2 x Intel Xeon E5-2670 v2 @2.5GHz, 10 cores each, 128GB RAM each