<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>Addendum:</p>
<h3><span class="mw-headline" id="Lustre_Issues">Lustre Issues</span></h3>
<p>Sean noted a couple of recent items with the Lustre disk system
and GlueX work.
</p>
<ol>
<li> He reminded us that the SQLite versions of the CCDB and RCDB
do not work if the database files are stored on a Lustre file
system. They must be copied to a more traditional file system
before running GlueX software against them (see the sketch after
this list).</li>
<li> Recently, calibration jobs run under the gxproj3 account were
flagged by SciComp as causing excessive I/O operations on Lustre.
Sean believes that this was due to many jobs starting at the same
time, each one copying its input data file to the local farm-node
disk before processing, a practice that SciComp itself recommended
some months ago. After further consultation with SciComp, he has
switched to analyzing the data directly from the Lustre-based
cache disk (no local copy). SciComp has not weighed in on whether
this helps the situation or not. No news is good news?</li>
</ol>
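<p>For reference, here is a minimal sketch (in Python) of the
copy-before-running pattern from the first item. The paths are
hypothetical, and it assumes the usual CCDB_CONNECTION and
RCDB_CONNECTION environment variables with sqlite:/// connection
strings; adjust for your own setup.</p>
<pre>
# Minimal sketch: copy the SQLite databases off Lustre to a local disk,
# then point the GlueX software at the local copies. Paths are hypothetical.
import os
import shutil

LUSTRE_CCDB = "/cache/halld/home/gxproj3/ccdb.sqlite"  # hypothetical Lustre location
LUSTRE_RCDB = "/cache/halld/home/gxproj3/rcdb.sqlite"  # hypothetical Lustre location
LOCAL_DIR = "/scratch/gluex_db"                        # local (non-Lustre) farm-node disk

os.makedirs(LOCAL_DIR, exist_ok=True)
local_ccdb = shutil.copy(LUSTRE_CCDB, LOCAL_DIR)
local_rcdb = shutil.copy(LUSTRE_RCDB, LOCAL_DIR)

# Point CCDB/RCDB at the local copies via their connection-string variables.
os.environ["CCDB_CONNECTION"] = "sqlite:///" + local_ccdb
os.environ["RCDB_CONNECTION"] = "sqlite:///" + local_rcdb

# ... launch hd_root (or other GlueX software) from this environment ...
</pre>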
<br>
<div class="moz-cite-prefix">On 06/16/2017 01:23 PM, Mark Ito wrote:<br>
</div>
<blockquote type="cite"
cite="mid:33b362a0-f46d-899d-7b1b-34e91971cfc0@jlab.org">
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<p>Please find the minutes below and in <a moz-do-not-send="true"
href="https://halldweb.jlab.org/wiki/index.php/GlueX_Offline_Meeting,_June_14,_2017#Minutes">the
standard location</a>.</p>
<p>_______________</p>
<h2><span class="mw-headline" id="Minutes">Minutes</span></h2>
<p>Present: </p>
<ul>
<li> <b>CMU</b>: Naomi Jarvis</li>
<li> <b>JLab</b>: Thomas Britton, Brad Cannon, Eugene Chudakov,
Hovanes Egiyan, Mark Ito (chair), Dmitry Romanov, Beni
Zihlmann</li>
<li> <b>NU</b>: Sean Dobbs</li>
<li> <b>UConn</b>: Richard Jones</li>
<li> <b>Yerevan</b>: Hrach Marukyan</li>
</ul>
<p>There is a <a rel="nofollow" class="external text"
href="https://bluejeans.com/s/uEQso/" moz-do-not-send="true">recording
of this meeting</a> on the BlueJeans site. Use your JLab
credential to access it. </p>
<h3><span class="mw-headline" id="Announcements">Announcements</span></h3>
<ol>
<li> <a rel="nofollow" class="external text"
href="https://mailman.jlab.org/pipermail/halld-offline/2017-June/002796.html"
moz-do-not-send="true">New release of HDDS: version 3.11</a>.
Mark noted that this release contains recent changes to target
and start counter geometry from Simon Taylor.</li>
<li> <a rel="nofollow" class="external text"
href="https://mailman.jlab.org/pipermail/halld-offline/2017-June/002810.html"
moz-do-not-send="true">hdpm 0.7.0</a>. Nathan went over his
announcement. New features include
<ul>
<li> AmpTools' new location at GitHub is handled.</li>
<li> New package: PyPWA</li>
<li> Revised actions for hdpm sub-commands.</li>
</ul>
</li>
</ol>
<h3><span class="mw-headline"
id="Review_of_minutes_from_the_last_meeting">Review of minutes
from the last meeting</span></h3>
<p>We went over the <a
href="https://halldweb.jlab.org/wiki/index.php/GlueX_Offline_Meeting,_May_31,_2017#Minutes"
title="GlueX Offline Meeting, May 31, 2017"
moz-do-not-send="true">minutes from May 31</a>. </p>
<h4><span class="mw-headline" id="Progress_on_the_OSG">Progress on
the OSG</span></h4>
<p>Richard gave an update on progress with the OSG. For all the
details, please see the <a rel="nofollow" class="external text"
href="https://bluejeans.com/s/uEQso/" moz-do-not-send="true">recording</a>.
Some notes: </p>
<ul>
<li> scosg16.jlab.org is fully functional as an OSG submit host
now.</li>
<li> Jobs similar to Data Challenge 2 are going out.</li>
<li> <b>Using containers</b> to deliver our software to remote
nodes and run it there:
<ul>
<li> <a rel="nofollow" class="external free"
href="https://en.wikipedia.org/wiki/Docker_%28software%29"
moz-do-not-send="true">https://en.wikipedia.org/wiki/Docker_(software)</a>:
Docker was the subject of initial focus, but it turns out to
have been designed to solve isolation problems for the network
(and other system resources), e.g., for deployment of web
services on a foreign OS.</li>
<li> <a rel="nofollow" class="external free"
href="http://singularity.lbl.gov/"
moz-do-not-send="true">http://singularity.lbl.gov/</a>:
Singularity is aimed at mobility of compute, which is the
problem we are trying to solve. The OSG has embraced it as the
on-the-grid-node-at-run-time solution.</li>
<li> Richard's original solution was to make a
straightforward Singularity container with everything we
need to run. That came to 7 GB, too large to use under
OASIS (the OSG's file distribution system).</li>
<li> With guidance from OSG folks, he has implemented a
solution that allows us to run. [The details are many and
various and will not be recorded here. Again, see the
recording.] The broad features are:
<ul>
<li> Singularity on the grid node runs using system
files (glibc, ld, system-provided shared libraries,
etc.) stored outside the container on OASIS.</li>
<li> Software is distributed in two parts: the system
files mentioned in the previous item, and our standard,
built-by-us GlueX software stack, distributed via
OASIS without any need for containerization.</li>
</ul>
</li>
</ul>
</li>
<li> <b>Scripts for running the system</b>
<ul>
<li> osg_container.sh: script that runs on the grid node</li>
<li> my_grid_job.py
<ul>
<li> runs generator, simulation, smearing,
reconstruction, and analysis</li>
<li> hooks for submitting, no knowledge of Condor
required</li>
<li> will report on job status</li>
</ul>
</li>
<li> Richard will send out an email with instructions.</li>
</ul>
</li>
<li> <b>Problem with CentOS 6 nodes</b>
<ul>
<li> Some grid nodes are hanging on the hd_root step.</li>
<li> CentOS 7 nodes seem OK. CentOS 6 nodes have the
problem. Unfortunately, the majority of nodes out there
are CentOS-6-like, including all of the nodes deployed at
GlueX collaborating university sites.</li>
<li> The issue seems to be related to access to the SQLite
form of the CCDB. The OSG team is working on a solution.
David Lawrence has been consulted. Dmitry thinks he has a
way forward that involves deploying an in-memory realization
of the database (a sketch of the general idea follows this
list).</li>
</ul>
</li>
</ul>
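<p>As an illustration of the in-memory idea mentioned above (not
Dmitry's actual implementation), here is a minimal Python sketch
using the standard sqlite3 backup API (Python 3.7+). The database
path is hypothetical.</p>
<pre>
# Load the SQLite CCDB into an in-memory database once, so the job never
# does repeated random reads against the file on a shared/grid filesystem.
import sqlite3

CCDB_FILE = "/cvmfs/oasis.opensciencegrid.org/gluex/ccdb.sqlite"  # hypothetical location

disk_db = sqlite3.connect(CCDB_FILE)
mem_db = sqlite3.connect(":memory:")
disk_db.backup(mem_db)  # one sequential pass over the file; later queries hit RAM
disk_db.close()

# Subsequent calibration lookups would query mem_db instead of the file, e.g.:
cursor = mem_db.execute("SELECT name FROM sqlite_master WHERE type='table' LIMIT 5")
print(cursor.fetchall())
</pre>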
<h4><span class="mw-headline" id="Event_Display">Event Display</span></h4>
<p>Dmitry and Thomas will give an update at the next meeting. </p>
<h3><span class="mw-headline" id="Other_Items">Other Items</span></h3>
<ul>
<li> Brad mentioned that our Doxygen documentation pages are
down. Mark will take a look.</li>
<li> Eugene asked about the manner in which we document details
of simulation runs and whether enough information is retained
to reproduce the results. Mark showed him <a rel="nofollow"
class="external text"
href="https://halldweb.jlab.org/gluex_simulations/sim1.2.1/"
moz-do-not-send="true">the site for sim 1.2.1</a> as an
example of what we do for the "public" simulation runs.</li>
</ul>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
Halld-offline mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Halld-offline@jlab.org">Halld-offline@jlab.org</a>
<a class="moz-txt-link-freetext" href="https://mailman.jlab.org/mailman/listinfo/halld-offline">https://mailman.jlab.org/mailman/listinfo/halld-offline</a></pre>
</blockquote>
<br>
</body>
</html>