<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
From the last offline meeting minutes:<br>
<br>
<ol>
<li value="5">
<ul>
<li> Mark reported that thousands of Data Challenge 3 jobs
have been submitted.</li>
<li> We will likely need to do a lot of simulation of the 5.5
GeV beam.</li>
<li> We had expected to be launching reconstruction jobs by
now, back when 10 GeV was the plan.</li>
<li> Justin pointed out that simulation will be memory-limited
since it must be run single-threaded.</li>
</ul>
</li>
</ol>
<p><br>
</p>
<div class="moz-cite-prefix">On 04/03/2015 12:30 PM, Heyes Graham
wrote:<br>
</div>
<blockquote cite="mid:4A9FA7D1-AF72-48B1-A918-65EBF3432654@jlab.org"
type="cite">
<pre wrap="">The farm14 nodes were added to meet the estimated requirements of the GLUEX commissioning runs. With various issues culminating in a power outage that led to the CHL failure, GLUEX has taken no data at all this spring, and none is expected for at least another week. Even then, with only one CHL, we will have 6 GeV running instead of 12, and who knows how that affects things. Perhaps someone from the GLUEX side can comment on that.
The same is unfortunately also true of HPS, who were supposed to be using a fair chunk of the farm but are also unable to take data.
Hopefully someone can come up with a use for the idle cycles, but the fact that they exist isn’t a huge surprise.
Regards to all,
        Graham
</pre>
<blockquote type="cite">
<pre wrap="">On Apr 3, 2015, at 12:06, Sandy Philpott <a class="moz-txt-link-rfc2396E" href="mailto:philpott@jlab.org">&lt;philpott@jlab.org&gt;</a> wrote:
All,
Since a picture is worth a thousand words, here is the FY15 farm usage graph at the halfway point, to visualize my original idle cores message. For reference, when the farm14 nodes were added in the fall the total farm core count reached ~4000.
Regards,
Sandy
</pre>
<blockquote type="cite">
<blockquote type="cite">
<pre wrap="">-------- Forwarded Message --------
Subject:         can GlueX use idle cores at JLab?
Date:         Tue, 31 Mar 2015 11:52:19 -0400 (EDT)
From:         Sandy Philpott <a class="moz-txt-link-rfc2396E" href="mailto:philpott@jlab.org">&lt;philpott@jlab.org&gt;</a>
To:         <a class="moz-txt-link-abbreviated" href="mailto:halld-offline@jlab.org">halld-offline@jlab.org</a>
CC:         Heyes Graham <a class="moz-txt-link-rfc2396E" href="mailto:heyes@jlab.org">&lt;heyes@jlab.org&gt;</a> , Chip Watson <a class="moz-txt-link-rfc2396E" href="mailto:watson@jlab.org">&lt;watson@jlab.org&gt;</a> , Mark Ito <a class="moz-txt-link-rfc2396E" href="mailto:marki@jlab.org">&lt;marki@jlab.org&gt;</a> , David Lawrence <a class="moz-txt-link-rfc2396E" href="mailto:davidl@jlab.org">&lt;davidl@jlab.org&gt;</a>
Hello GlueX,
The newest farm14 nodes at JLab, 2400 Haswell cores, have been mostly idle since their installation last fall. That's much of the ~1.7 M Haswell core-hours available each month going largely unused, or almost 5 M core-hours of idle time so far.
Could Hall D keep simulation jobs in the queue and running indefinitely, rather than only during the data challenges? Are there other jobs to run? Otherwise, many of the available computing cycles in the farm for Experimental Physics are falling on the floor rather than being used.
Feedback and perspective welcome,
Sandy
</pre>
</blockquote>
</blockquote>
<pre wrap="">&lt;farm_FY15_usage.png&gt;
</pre>
</blockquote>
<pre wrap="">
_______________________________________________
Halld-offline mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Halld-offline@jlab.org">Halld-offline@jlab.org</a>
<a class="moz-txt-link-freetext" href="https://mailman.jlab.org/mailman/listinfo/halld-offline">https://mailman.jlab.org/mailman/listinfo/halld-offline</a></pre>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
Mark M. Ito, Jefferson Lab, <a class="moz-txt-link-abbreviated" href="mailto:marki@jlab.org">marki@jlab.org</a>, (757)269-5295
</pre>
</body>
</html>