[Halld-offline] List of GlueX-related computer farms
Mark Ito
marki at jlab.org
Thu Jun 8 13:22:41 EDT 2017
We have a wiki page
<https://halldweb.jlab.org/wiki/index.php/Computer_Farms>, reproduced
below, that dates from 2009 and lists computing resources at
GlueX-collaborating institutions. I would like to get this updated with
current information. Please take a look and make updates as you see fit.
At this point partial information is better than eight-year-old information.
_______________________________________
Computer Farms
From GlueXWiki
Several computer farms exist that can potentially be used by GlueX
collaborators. This page attempts to list the farms and some rough
parameters for gauging their capabilities. For each farm there is a contact
person with whom you will need to coordinate in order to get access; the
exception is the JLab farm, which can be accessed through the CUE system.
Each entry below gives the institution, contact, nodes, cores, CPU, memory,
OS, and notes.
JLab
  Contact: Sandy Philpott <Sandy.Philpott at jlab.org>
  Nodes: 240; Cores: 400; CPU: 2.66-3.2 GHz; Memory: 500 MB-2 GB/node; OS: Fedora 8
  Notes: This is the "Scientific Computing" farm only (there is a separate
  HPC farm for lattice calculations). It is available to anyone with a JLab
  CUE computer account, but it is often busy with processing experimental data.
Indiana Univ.
  Contact: Matt Shepherd <mashephe at indiana.edu>
  Nodes: 55; Cores: 110; CPU: 1.6 GHz
Indiana Univ. (Big Red)
  Contact: Matt Shepherd <mashephe at indiana.edu>
  Nodes: 768; Cores: 1536; CPU: 2.5 GHz
  Notes: This is a university-level farm that we can get access to if really
  needed. From Matt's e-mail: "If we need mass simulation work, we can also
  try to tap into the university research computing machines (Big Red has 768
  dual 2.5 GHz nodes), but these might be best reserved for very large
  simulation jobs like Pythia background for high-stats analyses."
Univ. of Edinburgh
  Contact: Dan Watts <dwatts1 at ph.ed.ac.uk>
  Cores: 1456; OS: Linux
  Notes: This is a large, high-performance farm from which you can buy time.
  From Dan's e-mail: "We have access to a very large farm here at Edinburgh.
  We can apply to purchase priority time on the farm or have a current base
  subscription which schedules the jobs with lower priority (but seems to run
  fine for the current usage). It is a high-performance cluster of servers
  (1456 processors) and storage (over 275 TB of disk)."
Glasgow Univ.
  Contact: Ken Livingston <k.livingston at physics.gla.ac.uk>
  OS: Fedora 8; three sets of nodes:
    32 nodes, 64 cores, 2 GHz Opteron, 1 GB memory
    9 nodes, 72 cores, 1.8 GHz Opteron, 16 GB memory
    26 nodes, 52 cores, 1 GHz PIII, 0.5 GB memory
Carnegie Mellon Univ.
  Contact: Curtis Meyer <cmeyer at ernest.phys.cmu.edu>
  Nodes: 47; Cores: 32x8 + 15x2 = 286; CPU: 32 AMD Barcelona nodes plus 15
  older Xeon nodes; Memory: 1 GB/core; OS: RHEL 5
Univ. of Connecticut
  Contact: Richard Jones <richard.t.jones at uconn.edu>
  Cores: 360 (240 AMD 2 GHz, 60 AMD 2.4 GHz, 60 Xeon 1 GHz); Memory: 1 GB/core;
  OS: CentOS 5
  Notes: From Richard's e-mail: "This cluster has local users who have top
  priority, but right now it is really under-utilized. Scheduling is by
  condor, login on head node only."
Florida State Univ. (FSU Nuclear Physics Group cluster)
  Contact: Paul Eugenio <eugenio at fsu.edu>
  Nodes: 60; Cores: 118 (88 Intel Core 2 Quad Q6600 2.4 GHz, 30 AMD MP 2600);
  Memory: 1-2 GB/core; OS: currently CentOS 4.5 (Rocks 4.3), upgrading to
  Rocks 5.1 (CentOS 5 based)
Florida State Univ. (FSU HPC university cluster)
  Contact: Paul Eugenio <eugenio at fsu.edu>
  Nodes: 400; Cores: 2788; CPU: dual-core Opteron 2220 2.8 GHz, quad-core
  Opteron 2356 2.3 GHz, and quad-core AMD Opteron 2382 "Shanghai" 2.6 GHz;
  Memory: 2 GB/core; OS: x86_64 CentOS 5 based Rocks 5.0
Univ. of Regina
  Contact: Zisis Papandreou <zisis at uregina.ca>
  Nodes: 10; Cores: 20; CPU: Intel Xeon 2.80 GHz; Memory: 1 GB/node; OS: Red Hat 9
  Notes: Batch system: condor queue, access through the head node (a minimal
  submit-file sketch appears at the end of this message). NFS disk handling;
  close to 0.75 TB.
Retrieved from
"https://halldweb.jlab.org/wiki/index.php?title=Computer_Farms&oldid=14634"
* This page was last modified on 30 July 2009, at 08:10.
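Since a couple of the farms above (UConn, Regina) schedule jobs through
condor, here is a minimal, generic HTCondor submit-file sketch for reference.
The script name, memory request, and job count are placeholders rather than
site-specific values; the appropriate universe, file-transfer, and queue
settings should be confirmed with the listed contacts.

    # sim.sub -- minimal HTCondor submit description file (placeholder names)
    # run_sim.sh is a hypothetical wrapper script that runs one simulation job
    universe       = vanilla
    executable     = run_sim.sh
    arguments      = $(Process)
    output         = sim.$(Process).out
    error          = sim.$(Process).err
    log            = sim.log
    # request roughly 1 GB per job, in line with the 1 GB/core figures above
    request_memory = 1024
    # submit 10 independent jobs
    queue 10

Jobs would then be submitted from the cluster head node with
"condor_submit sim.sub" and monitored with "condor_q".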
--
Mark Ito, marki at jlab.org, (757)269-5295