Infrastructures
Types of scientific computing resources available at the Ci:
Argos
Argos is an 8-core Intel Xeon based interactive compute server offering GPGPU capabilities:
Job category --> Any suitable interactive job
Access --> via ssh to argos1.unil.ch
OS --> Redhat ELS 6.1
CPUs --> 1 x dual Xeon 5570 2.97 GHz node (8 cores), 24 GB RAM
GPGPU --> 1 x Nvidia Tesla C1060 (240 CUDA cores)
1 x Nvidia Tesla C2070 (448 CUDA cores)
Network --> 10 Gb/s Ethernet
Hippocrate
Hippocrate is a small 96-core Intel Xeon based HPC cluster:
Job category --> - Any suitable parallel job, or any long-running (< 96 h) sequential job
- A maximum of 1000 jobs per user in the queuing system
Access --> via ssh to argos1.unil.ch
OS --> RedHat ELS 6.1
Nodes --> 8 x dual Xeon 5670 2.97 GHz (12 cores), 48 GB RAM for batch processing
1 x dual Xeon 5670 2.97 GHz (12 cores), 96 GB RAM for interactive jobs
3 x Nvidia Tesla M2090 (512 CUDA cores) on 2 batch and 1 interactive nodes
Interconnect --> Infiniband 40 Gb/s
Network --> 10 Gb/s Ethernet
Parallel env. --> - SMP (on 12 cores)
- PVM
- MPI : OpenMPI, MVAPICH
Batch scheduler --> Grid Engine v6.2u5
Submission queues --> - all.q (the default queue)
* max 72 single-threaded compute slots
* max 3.9 GB of memory per slot (RAM + VRAM)
* max 96 hours of Wall Clock Time per slot
- short.q (add "#$ -q short.q" to your submission script)
* max 96 single-threaded compute slots
* max 3.9 GB of memory per slot (RAM + VRAM)
* max 24 hours of Wall Clock Time per slot
- int.q is an interactive submission queue
* max 12 interactive compute slots (one per thread)
* max 7.9 GB of RAM + VRAM per slot
* max 24 hours of Wall Clock Time per slot
* type "qrsh -q int.q -pe smp 1" on argos1 to open a
remote interactive shell with 1 slot
* in the "-pe smp n" argument, n MUST match the number
of threads your application will use!
* type Ctrl-D to release the slots
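The queue and slot rules above can be put together in a minimal Grid Engine submission script; this is only a sketch, and the job name, output file, and thread-count handling are illustrative assumptions, not site policy:

```shell
#!/bin/bash
# Minimal sketch of a Grid Engine submission script targeting short.q.
# Job name and output file are illustrative assumptions.
#$ -N demo_job             # job name (assumption)
#$ -q short.q              # target the short queue (max 24 h wall clock)
#$ -pe smp 4               # request 4 slots; MUST match your thread count
#$ -cwd                    # run in the submission directory
#$ -o demo_job.out         # write stdout to this file
#$ -j y                    # join stderr into stdout

# Keep the application's thread count in sync with the requested slots;
# Grid Engine exports NSLOTS inside the job (falls back to 4 outside it).
export OMP_NUM_THREADS=${NSLOTS:-4}
echo "Job running with ${OMP_NUM_THREADS} threads"
```

Submit the script with "qsub script.sh" on argos1 and monitor it with "qstat".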
Grid computing
UNIL users also have access to two grid-based infrastructures:
GridUNIL --> a Condor based campus grid
--> ~ 300 distributed cores, mainly on OS X with some on Linux or Windows platforms
--> a grid-distributed version of the R statistical environment
Job category --> any suitable short (< 12h) embarrassingly parallel job
SMSCG --> the Swiss Multi-Science Computing Grid
--> ~ 4000 cores, mainly on Intel/Linux platforms
Job category --> depends on the type of the targeted grid-enabled resources
Grid submission clients:
Access --> via argos1.unil.ch
Condor-G --> GridUNIL, the SGE HPC cluster, SMSCG and many other grid infrastructures
ARC client --> SMSCG and many other ARC-based grid infrastructures
Globus client --> GridUNIL and many other Globus-based grid infrastructures
Ganga client (GUI) --> GridUNIL, the SGE HPC cluster, SMSCG and many other grid infrastructures
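For an embarrassingly parallel GridUNIL job, a Condor submit description file can be prepared and handed to condor_submit on argos1. This is a sketch assuming the standard Condor vanilla universe; file names and the executable are illustrative, and Condor-G grid-universe jobs would additionally need grid_resource settings:

```shell
# Sketch: prepare a Condor submit file for 10 parameter-sweep tasks.
# File names and the executable are illustrative assumptions.
cat > sweep.submit <<'EOF'
universe   = vanilla
executable = my_analysis.sh
arguments  = $(Process)
output     = out.$(Process)
error      = err.$(Process)
log        = sweep.log
queue 10
EOF
echo "Wrote $(wc -l < sweep.submit) submit-file lines"
# Then, on a submit host: condor_submit sweep.submit ; condor_q to monitor
```

Each of the 10 queued tasks receives its index in $(Process), which is the usual Condor pattern for splitting an embarrassingly parallel workload.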
CADMOS
CADMOS is the Center for Advanced Modeling Science, a joint initiative between UNIL, EPFL and UNIGE. It offers access to massively parallel computing resources, currently based on an IBM BlueGene/P.


