<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>We have a <a moz-do-not-send="true"
href="https://halldweb.jlab.org/wiki/index.php/Computer_Farms">wiki
page</a>, reproduced below, that dates from 2009 and lists
computing resources at GlueX-collaborating institutions. I would
like to get this updated with current information. Please take a
look and make updates as you see fit. At this point partial
information is better than eight-year-old information.</p>
<p>_______________________________________</p>
<p>
</p>
<div id="globalWrapper">
<div id="column-content">
<div id="content" class="mw-body" role="main">
<h1 id="firstHeading" class="firstHeading" lang="en"><span
dir="auto">Computer Farms</span></h1>
<div id="bodyContent" class="mw-body-content">
<div id="siteSub">From GlueXWiki</div>
<div id="mw-content-text" dir="ltr" class="mw-content-ltr"
lang="en">
<p>Several computer farms exist that can potentially be
used by GlueX collaborators. This page attempts to list
the farms and some rough parameters that can be used to
gauge their ability. For each farm there is a contact
person with whom you will need to coordinate in order
to get access; the exception is the JLab farm, which
can be accessed through the CUE system.
</p>
<p><br>
</p>
<table border="1">
<tbody>
<tr>
<th> Institution
</th>
<th> Contact
</th>
<th> Nodes
</th>
<th> Cores
</th>
<th> CPU
</th>
<th> Memory
</th>
<th> OS
</th>
<th> Notes
</th>
</tr>
<tr>
<td> JLab
</td>
<td> <a rel="nofollow" class="external text"
href="mailto:Sandy.Philpott@jlab.org">Sandy
Philpott</a>
</td>
<td> 240
</td>
<td> 400
</td>
<td> 2.66GHz-3.2GHz
</td>
<td> 500MB-2GB/node
</td>
<td> Fedora 8
</td>
<td> This is the "Scientific Computing" farm only
(there is a separate HPC farm for lattice
calculations). It is available to anyone with a
JLab CUE computer account. However, it is often
busy with processing experimental data.
</td>
</tr>
<tr>
<td> Indiana Univ.
</td>
<td> <a rel="nofollow" class="external text"
href="mailto:mashephe@indiana.edu">Matt Shepherd</a>
</td>
<td> 55
</td>
<td> 110
</td>
<td> 1.6GHz
</td>
<td>
<br>
</td>
<td>
<br>
</td>
<td style="width:300px">
<br>
</td>
</tr>
<tr>
<td> Indiana Univ.
</td>
<td> <a rel="nofollow" class="external text"
href="mailto:mashephe@indiana.edu">Matt Shepherd</a>
</td>
<td> 768
</td>
<td> 1536
</td>
<td> 2.5GHz
</td>
<td>
<br>
</td>
<td>
<br>
</td>
<td> This is a University-level farm that we can get
access to if really needed. From Matt's e-mail: <br>
<i>"If we need mass simulation work, we can also
try to tap into the university research
computing machines (Big Red has 768 dual 2.5 GHz
nodes), but these might be best reserved for
very large simulation jobs like Pythia
background for high-stats analyses."</i>
</td>
</tr>
<tr>
<td> Univ. of Edinburgh
</td>
<td> <a rel="nofollow" class="external text"
href="mailto:dwatts1@ph.ed.ac.uk">Dan Watts</a>
</td>
<td> 1456
</td>
<td> 1456
</td>
<td>
<br>
</td>
<td>
<br>
</td>
<td> Linux
</td>
<td> This is a large, high performance farm from
which you can buy time. From Dan's e-mail: <br>
<i>"We have access to a very large farm here at
Edinburgh. We can apply to purchase priority
time on the farm or have a current base
subscription which schedules the jobs with lower
priority (but seems to run fine for the current
usage). It is a high-performance cluster of
servers (1456 processors) and storage (over
275Tb of disk)."</i>
</td>
</tr>
<tr>
<td> Glasgow Univ.
</td>
<td> <a rel="nofollow" class="external text"
href="mailto:k.livingston@physics.gla.ac.uk">Ken
Livingston</a>
</td>
<td> 32<br>
9<br>
26
</td>
<td> 64<br>
72<br>
52
</td>
<td> 2GHz Opteron<br>
1.8GHz Opteron<br>
1GHz PIII
</td>
<td> 1GB<br>
16GB<br>
0.5GB
</td>
<td> Fedora 8
</td>
<td>
<br>
</td>
</tr>
<tr>
<td> Carnegie Mellon Univ.
</td>
<td> <a rel="nofollow" class="external text"
href="mailto:cmeyer@ernest.phys.cmu.edu">Curtis
Meyer</a>
</td>
<td> 47
</td>
<td> 32&times;8 + 15&times;2 = 286
</td>
<td> 32 AMD Barcelona, 15 older Xeon
</td>
<td> 1GB/core
</td>
<td> RHEL5
</td>
<td>
<br>
</td>
</tr>
<tr>
<td> Univ. of Connecticut
</td>
<td> <a rel="nofollow" class="external text"
href="mailto:richard.t.jones@uconn.edu">Richard
Jones</a>
</td>
<td>
<br>
</td>
<td> 360
</td>
<td> 240 AMD 2GHz<br>
60 AMD 2.4GHz<br>
60 1GHz Xeon
</td>
<td> 1GB/core
</td>
<td> CentOS 5
</td>
<td> From Richard's e-mail: <br>
<i>"This cluster has local users who have top
priority, but right now it is really
under-utilized. Scheduling is by condor, login
on head node only."</i>
</td>
</tr>
<tr>
<td> Florida State Univ.
</td>
<td> <a rel="nofollow" class="external text"
href="mailto:eugenio@fsu.edu">Paul Eugenio</a>
</td>
<td> 60
</td>
<td> 118
</td>
<td> 88 cores: Intel Core 2 Quad Q6600 2.4GHz<br>
30 cores: AMD MP 2600
</td>
<td> 1-2GB/core
</td>
<td> Upgrading to Rocks 5.1 (CentOS 5 based);
currently Rocks 4.3 (CentOS 4.5)
</td>
<td> FSU Nuclear Physics Group cluster
</td>
</tr>
<tr>
<td> Florida State Univ.
</td>
<td> <a rel="nofollow" class="external text"
href="mailto:eugenio@fsu.edu">Paul Eugenio</a>
</td>
<td> 400
</td>
<td> 2788
</td>
<td> Dual-core Opteron 2220 (2.8 GHz), quad-core
Opteron 2356 (2.3 GHz), quad-core Opteron 2382
"Shanghai" (2.6 GHz)
</td>
<td> 2GB/core
</td>
<td> x86_64 Rocks 5.0 cluster (CentOS 5 based)
</td>
<td> FSU HPC university cluster
</td>
</tr>
<tr>
<td> Univ. of Regina
</td>
<td> <a rel="nofollow" class="external text"
href="mailto:zisis@uregina.ca">Zisis Papandreou</a>
</td>
<td> 10
</td>
<td> 20
</td>
<td> Intel Xeon 2.80GHz
</td>
<td> 1GB/node
</td>
<td> Red Hat 9
</td>
<td> Batch system: Condor queue, access through the
head node. Storage via NFS; close to 0.75 TB of disk.
</td>
</tr>
</tbody>
</table>
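<p>Two of the farms above (Connecticut and Regina) schedule jobs through
Condor on a head node. As a rough sketch only &mdash; the executable and
file names here are hypothetical, and each site's queue policies will
differ &mdash; a job can be described in a minimal Condor submit file and
handed to the scheduler:</p>

```shell
# Sketch: write a minimal Condor submit description (hypothetical
# executable "mysim" and file names; adjust to the local farm).
cat > mysim.sub <<'EOF'
executable = mysim
arguments  = $(Process)
output     = mysim.$(Process).out
error      = mysim.$(Process).err
log        = mysim.log
queue 10
EOF

# On the farm's head node one would then submit with:
#   condor_submit mysim.sub
# and monitor with condor_q.
cat mysim.sub
```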
</div>
<div class="printfooter">
Retrieved from "<a dir="ltr"
href="https://halldweb.jlab.org/wiki/index.php?title=Computer_Farms&oldid=14634">https://halldweb.jlab.org/wiki/index.php?title=Computer_Farms&oldid=14634</a>"</div>
</div>
</div>
</div>
<div id="footer" role="contentinfo">
<ul id="f-list">
<li id="lastmod"> This page was last modified on 30 July 2009,
at 08:10.</li>
</ul>
</div>
</div>
<pre class="moz-signature" cols="72">--
Mark Ito, <a class="moz-txt-link-abbreviated" href="mailto:marki@jlab.org">marki@jlab.org</a>, (757)269-5295
</pre>
</body>
</html>