<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>Folks,</p>
<p>Please find the minutes below and at</p>
<p>
<a class="moz-txt-link-freetext" href="https://halldweb.jlab.org/wiki/index.php/GlueX_Offline_Meeting,_November_1,_2017#Minutes">https://halldweb.jlab.org/wiki/index.php/GlueX_Offline_Meeting,_November_1,_2017#Minutes</a></p>
<p> - Mark</p>
<p>_________________________</p>
<h2><span class="mw-headline" id="Minutes">Minutes, </span><span
dir="auto">GlueX Offline Meeting, November 1, 2017</span></h2>
<h2><span class="mw-headline" id="Minutes"></span></h2>
<p>Present:
</p>
<ul>
<li> <b> CMU </b>: Curtis Meyer</li>
<li> <b> FSU </b>: Sean Dobbs</li>
<li> <b> JLab </b>: Alex Austregesilo, Thomas Britton, Brad
Cannon, Eugene Chudakov, Sebastian Cole, Mark Ito (chair), Simon
Taylor, Beni Zihlmann</li>
</ul>
<p>There is a <a rel="nofollow" class="external text"
href="https://bluejeans.com/s/L5K9O/">recording of this meeting</a>
on the BlueJeans site. Use your JLab credentials to access it.
</p>
<h3><span class="mw-headline" id="Announcements">Announcements</span></h3>
<ol>
<li> <a rel="nofollow" class="external text"
href="https://mailman.jlab.org/pipermail/halld-offline/2017-October/002970.html">Change
to cache deletion policy</a>. Files that were requested from
tape by farm jobs are now first in line for deletion once the
requesting job has finished. This extends the life of files that
users create or request directly on the cache disk.</li>
<li> <a rel="nofollow" class="external text"
href="https://github.com/JeffersonLab/sim-recon/releases/tag/2.18.0">sim-recon
2.18.0</a>. This release went out on October 10. It includes Paul
Mattione's extensive changes to the Analysis Library.</li>
<li> <a rel="nofollow" class="external text"
href="https://mailman.jlab.org/pipermail/halld-offline/2017-October/002984.html">MCwrapper
v1.9</a>. Thomas has added a facility to automatically generate
an event sample, in a user-specified run range, with runs
populated in proportion to the number of events in the real
data; see the sketch after this list.</li>
<li> <b>Launches</b>. Alex reported that the most recent analysis
launch had been running, inadvertently, with a job-count
restriction left over from an incident a few weeks back, when
SciComp had to throttle the number of analysis-launch jobs.
The restriction was lifted this morning.</li>
</ol>
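<p>For illustration only: MCwrapper's actual interface and
internals were not shown at the meeting, so the following is a
minimal sketch, in Python, of this kind of proportional allocation.
All names, run numbers, and event counts in it are hypothetical.
</p>
<pre>def allocate_events(events_per_run, total_mc_events):
    """Split total_mc_events across runs in proportion to the
    number of real-data events in each run. Rounding can make
    the sum differ slightly from total_mc_events."""
    total_real = sum(events_per_run.values())
    return {run: round(total_mc_events * n / total_real)
            for run, n in events_per_run.items()}

# Example with made-up run numbers and event counts:
real_data = {30274: 2500000, 30275: 1000000, 30276: 1500000}
print(allocate_events(real_data, total_mc_events=100000))
# -> {30274: 50000, 30275: 20000, 30276: 30000}
</pre>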
<h3><span class="mw-headline"
id="Review_of_minutes_from_the_last_meeting">Review of minutes
from the last meeting</span></h3>
<p>We went over the <a
href="https://halldweb.jlab.org/wiki/index.php/GlueX_Offline_Meeting,_October_4,_2017#Minutes"
title="GlueX Offline Meeting, October 4, 2017">minutes from
October 4</a>.
</p>
<h4><span class="mw-headline" id="OASIS_file_system">OASIS file
system</span></h4>
<p>Mark discussed the cron-job update problem with Richard Jones.
Another approach has been identified; the problem is resolved.
</p>
<h4><span class="mw-headline" id="HDvis_Update">HDvis Update</span></h4>
<p>Thomas reported a bug in the JANA server (which sends events up
to the web browser for display). It crashes on RHEL7 and CentOS7
because of a language feature not supported by the gcc 4.8.5
compiler found on those platforms. It turns out to work on RHEL6
and CentOS6 (gcc 4.9.2). Several avenues for a fix are being pursued, in
consultation with Dmitry Romanov.
</p>
<h4><span class="mw-headline" id="Work_Disk_Clean-Up">Work Disk
Clean-Up</span></h4>
<p>Mark reported that, in the latest twist, we will not need to
reduce our usage on /work to a level below 45 TB before moving to
the new, non-Lustre file server. Our current 55 TB will fit, since
not all of the Halls will be moving at once.
</p>
<p>For the final cut-over from Lustre to non-Lustre, we will have to
stop writing to /work for a short period of time. This may (or may
not) present a problem if we have a large launch underway. This
issue needs further discussion, but should not be a big problem.
</p>
<h3><span class="mw-headline"
id="Report_from_the_SciComp-Physics_meeting">Report from the
SciComp-Physics meeting</span></h3>
<ul>
<li> Chip Watson suggested that we stress-test the data-stream
bandwidth from the Counting House to the Computer Center before
the December run, ideally in concert with a similar test by Hall
B.</li>
<li> ifarm1101 and the CentOS6 farm nodes will not be upgraded
from CentOS 6.5 to 6.9. It is not worth the effort for a handful
of nodes.</li>
<li> Physics will initiate another procurement of Lustre-based
disk, 200 TB worth, to augment our volatile and work space.
There is the possibility of more if we can justify it.</li>
</ul>
<h3><span class="mw-headline" id="Computing_Milestones">Computing
Milestones</span></h3>
<p>Mark showed us an <a rel="nofollow" class="external text"
href="https://halldweb.jlab.org/talks/2017/milestone_planning.pdf">email
from Amber Boehnlein</a> from back in August, proposing an
effort to develop "milestones" for Scientific Computing in the 12
GeV era, as an aid to Lab management in gauging progress. Work
on this has languished and needs to be picked up again.
</p>
<h3><span class="mw-headline" id="Review_of_recent_pull_requests">Review
of recent pull requests</span></h3>
<p>We went over the <a
rel="nofollow" class="external text"
href="https://github.com/JeffersonLab/sim-recon/pulls?q=is%3Aopen+is%3Apr">list
of open and closed pull requests</a>.
</p>
<p><a rel="nofollow" class="external text"
href="https://github.com/JeffersonLab/sim-recon/pull/947">Pull
request #947</a>, "Updated FCAL geometry to new block size" has
implications for detector calibrations. It comes in concert with
updates to the FCAL geometry from Richard (HDDS pull requests <a
rel="nofollow" class="external text"
href="https://github.com/JeffersonLab/hdds/pull/38">#38</a> and
[<a rel="nofollow" class="external free"
href="https://github.com/JeffersonLab/hdds/pull/42">https://github.com/JeffersonLab/hdds/pull/42</a>
#42). So far, the observed effects on the calibrations have been
slight. Sean is monitoring the situation to see where
recalibrations may be necessary.
</p>
<h3><span class="mw-headline"
id="Review_of_recent_discussion_on_the_GlueX_Software_Help_List">Review
of recent discussion on the GlueX Software Help List</span></h3>
<p>We went over <a rel="nofollow" class="external text"
href="https://groups.google.com/forum/#%21forum/gluex-software">recent
items</a> with no significant discussion.
</p>
<h3><span class="mw-headline" id="CPU_Usage_Projections">CPU Usage
Projections</span></h3>
<p>Beni asked about projections for our demands on the farm during
the upcoming run.
</p>
<ul>
<li> Sean thought that now that firmware problems have been
understood, calibrations should go faster.</li>
<li> Alex expressed doubt about whether we can support a full
reconstruction launch during the run with its attendant
monitoring jobs. Mark pointed out that priorities can be
adjusted between accounts to give time-critical jobs priority.</li>
<li> Mark pointed out that we need to review our CPU needs in light
of our newly acquired experience, much <a rel="nofollow"
class="external text"
href="https://docs.google.com/spreadsheets/d/1QQQ2R3QrJkJgN37lt9yRlhVylk4KDJQKAn8m6Is8paM/edit?usp=sharing">like
we are doing for disk usage</a>.</li>
</ul>
<pre class="moz-signature" cols="72">--
Mark Ito, <a class="moz-txt-link-abbreviated" href="mailto:marki@jlab.org">marki@jlab.org</a>, (757)269-5295
</pre>
</body>
</html>