<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
People,<br>
<br>
Please find the minutes below and at <a class="moz-txt-link-freetext" href="https://goo.gl/5Q5b1S">https://goo.gl/5Q5b1S</a> .<br>
<br>
-- Mark<br>
________________________________________________________<br>
<div id="globalWrapper">
<div id="column-content">
<div id="content" class="mw-body" role="main">
<h2 id="firstHeading" class="firstHeading" lang="en"><span
dir="auto">GlueX Offline Meeting, July 6, 2016, </span><span
class="mw-headline" id="Minutes">Minutes</span></h2>
<div id="bodyContent" class="mw-body-content">
<div id="mw-content-text" dir="ltr" class="mw-content-ltr"
lang="en">
<p>You can <a rel="nofollow" class="external text"
href="https://bluejeans.com/s/9Xee/">view a recording
of this meeting</a> on the BlueJeans site.
</p>
<p>Present:
</p>
<ul>
<li> <b>CMU</b>: Naomi Jarvis, Curtis Meyer, Mike Staib</li>
<li> <b>FSU</b>: Brad Cannon</li>
<li> <b>GSI</b>: Nacer Hamdi</li>
<li> <b>JLab</b>: Alexander Austregesilo, Alex Barnes,
Mark Ito (chair), David Lawrence, Paul Mattione,
Justin Stevens, Simon Taylor</li>
<li> <b>NU</b>: Sean Dobbs</li>
<li> <b>Regina</b>: Tegan Beattie</li>
<li> <b>UConn</b>: Richard Jones</li>
</ul>
<h3><span class="mw-headline" id="Announcements">Announcements</span></h3>
<ol>
<li> <b>Intel Lustre upgrade</b>. Mark reminded us
about the <a rel="nofollow" class="external text"
href="https://mailman.jlab.org/pipermail/jlab-scicomp-briefs/2016q2/000126.html">upgrade</a>
done a few weeks ago. Mark spoke with Dave Rackley
earlier today, July 6, 2016.
<ul>
<li> An Intel version of Lustre was installed on the
servers. Call-in support is available; we pay for it.</li>
<li> There were hangs after the upgrade. After that the
Intel Lustre client was installed on the ifarm nodes.
There have been no incidents since. Installs are
still rolling out to the farm and HPC nodes.</li>
<li> Please report issues if they are encountered.</li>
</ul>
</li>
<li> <b>New release: sim-recon 2.1.0</b>. <a
rel="nofollow" class="external text"
href="https://mailman.jlab.org/pipermail/halld-offline/2016-June/002387.html">This
release</a> came out about a month ago. A new release
should arrive this week.</li>
<li> <b>REST backwards compatibility now broken</b>. <a
rel="nofollow" class="external text"
href="https://mailman.jlab.org/pipermail/halld-physics/2016-May/000675.html">Paul's
email</a> describes the situation. You cannot read
old REST files with new sim-recon code.</li>
<li> <b>Raw data copy to cache</b>. After <a
rel="nofollow" class="external text"
href="https://halldweb.jlab.org/talks/2016/raw_data_to_cache.pdf">some
discussion with the Computer Center</a>, we will now
have the first files from each run appear on the cache
disk without having to fetch them from the Tape
Library.</li>
<li> <b>New HDPM "install" command</b>. Nathan Sparks
explains it in <a rel="nofollow" class="external
text"
href="https://mailman.jlab.org/pipermail/halld-offline/2016-May/002369.html">his
email</a>. It replaces the "fetch-dist" command.</li>
</ol>
<h3><span class="mw-headline"
id="New_wiki_documentation_for_HDDM">New wiki
documentation for HDDM</span></h3>
<p>Richard led us through <a rel="nofollow"
class="external text"
href="https://mailman.jlab.org/pipermail/halld-offline/2016-July/002408.html">his
new wiki page</a> which consolidates and updates
documentation for the HDDM package. A new feature is a
Python API for HDDM. Here is the table of contents:
</p>
<pre> 1 Introduction
2 Templates and schemas
3 How to get started
4 HDDM in python
4.1 writing hddm files in python
4.2 reading hddm files in python
4.3 advanced features of the python API
5 HDDM in C++
5.1 writing hddm files in C++
5.2 reading hddm files in C++
5.3 advanced features of the C++ API
6 HDDM in c
6.1 writing hddm files in c
6.2 reading hddm files in c
6.3 advanced features of the c API
7 Advanced features
7.1 on-the-fly compression/decompression
7.2 on-the-fly data integrity checks
7.3 random access to hddm records
8 References
</pre>
<p>Some notes from the discussion:
</p>
<ul>
<li> If a lot of sparse single-event access is
anticipated, the zip format may be better because of
its smaller buffer size. Bzip2 is the default now.</li>
<li> The random-access feature allows access to
"bookmarks" for individual events that can be saved
and used for quick access later, even for compressed
files.</li>
<li> The Python API can be used in conjunction with
PyROOT to write ROOT tree generators using any HDDM
file as input quickly and economically.</li>
</ul>
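<p>The "bookmark" idea can be illustrated generically. The sketch
below uses a plain newline-delimited file and Python file offsets; it
is not the actual HDDM API, only an illustration of saving a position
while streaming records and seeking back to it later:
</p>
```python
# Illustration of the "bookmark" concept: save a file offset while
# streaming records, then seek back to it for random access later.
# This uses a plain text file, NOT the real HDDM API.
import os
import tempfile

# Write a few "records" (one per line) to a scratch file.
path = os.path.join(tempfile.mkdtemp(), "records.txt")
with open(path, "w") as f:
    for i in range(5):
        f.write(f"event {i}\n")

bookmarks = {}
with open(path) as f:
    while True:
        pos = f.tell()          # save the offset BEFORE reading the record
        line = f.readline()
        if not line:
            break
        event_id = int(line.split()[1])
        bookmarks[event_id] = pos

# Later: jump straight to event 3 without rescanning the file.
with open(path) as f:
    f.seek(bookmarks[3])
    print(f.readline().strip())   # -> event 3
```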
<h4><span class="mw-headline" id="REST_file_I.2FO">REST
file I/O</span></h4>
<p>Mike described a throughput limit he has seen for
compressed REST data vs. non-compressed. See <a
rel="nofollow" class="external text"
href="https://halldweb.jlab.org/wiki/images/c/c0/RestRates6Jul2016.pdf">his
slides</a> for plots and details. The single-threaded
HDDM reader limits scaling with the number of event
analysis threads if it is reading compressed data. The
curve turns over at about 6 or 7 threads. On the other
hand, compressed data presents less load on disk-read
bandwidth, and so multiple jobs contending for that
bandwidth might do better with compressed data.
</p>
<p>Richard agreed to buffer input and launch a
user-defined number of threads to do HDDM input. That
should prevent starvation of the event analysis threads.
</p>
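<p>The proposed fix follows a standard producer/consumer pattern:
reader threads fill a bounded buffer that the analysis threads drain,
so slow decompression never starves the analyzers. A minimal Python
sketch of the pattern (not the actual HDDM implementation, which is
C++):
</p>
```python
# Producer/consumer sketch of buffered, multi-threaded input feeding
# analysis threads -- the pattern proposed for HDDM, not its real code.
import queue
import threading

EVENTS = [{"id": i} for i in range(100)]   # stand-in for decompressed records
SENTINEL = None

def reader(q):
    """Reader thread: decompress/parse events and buffer them."""
    for ev in EVENTS:
        q.put(ev)                 # blocks if the buffer is full
    q.put(SENTINEL)

def analyzer(q, results, lock):
    """Analysis thread: drain the buffer and process events."""
    while True:
        ev = q.get()
        if ev is SENTINEL:
            q.put(SENTINEL)       # let sibling analyzers terminate too
            break
        with lock:
            results.append(ev["id"])

buf = queue.Queue(maxsize=16)     # bounded buffer caps memory use
results, lock = [], threading.Lock()
threads = [threading.Thread(target=reader, args=(buf,))]
threads += [threading.Thread(target=analyzer, args=(buf, results, lock))
            for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))               # all 100 events processed
```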
<h3><span class="mw-headline"
id="Review_of_minutes_from_June_8">Review of minutes
from June 8</span></h3>
<p>We went over <a
href="https://halldweb.jlab.org/wiki/index.php/GlueX_Offline_Meeting,_June_8,_2016#Minutes"
title="GlueX Offline Meeting, June 8, 2016">the
minutes</a>.
</p>
<ul>
<li> <b>Small files are still being retained on the
cache disk</b>, without automatic archiving to tape.
Mark will repeat his plea for small file deletion
soon.
<ul>
<li> Alex A. pointed out that it is now possible to
pin small files and to force a write to tape. That
was not the case a couple of weeks ago.</li>
<li> Sean reminded us that we had put in a request
for a get-and-pin command from jcache. Mark will
check on status.</li>
</ul>
</li>
<li> <b>RCDB is now fully integrated into the
build_scripts system</b>. It is now built on the
JLab CUE on nodes where C++11 features are supported.
You can now incorporate your RCDB C++ API calls in
sim-recon plugins and SCons will do the right thing
build-wise as long as you have the RCDB_HOME
environment variable defined properly.</li>
</ul>
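<p>The required environment setup looks something like the following;
the installation prefix and library subdirectory are placeholders, not
official paths, so substitute your site's actual RCDB location:
</p>
```shell
# Hypothetical paths -- substitute your actual RCDB installation prefix.
export RCDB_HOME=/path/to/rcdb
# The SCons build uses RCDB_HOME to locate RCDB headers and libraries
# when compiling sim-recon plugins that call the RCDB C++ API.
# (The lib subdirectory below is an assumption about the layout.)
export LD_LIBRARY_PATH="$RCDB_HOME/cpp/lib:$LD_LIBRARY_PATH"
echo "RCDB_HOME=$RCDB_HOME"
```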
<h3><span class="mw-headline"
id="Spring_2016_Run_Processing_Status">Spring 2016 Run
Processing Status</span></h3>
<h4><span class="mw-headline"
id="Distributing_REST_files_from_initial_launch">Distributing
REST files from initial launch</span></h4>
<p>Richard, Curtis, and Sean commented on the REST file
distribution process. Matt Shepherd copied "all" of the
files from JLab to IU and has pushed them to UConn, CMU,
and Northwestern, as per <a rel="nofollow"
class="external text"
href="https://mailman.jlab.org/pipermail/halld-offline/2016-June/002390.html">his
proposal</a>, using Globus Online. He was able to get
about 10 MB/s from JLab to IU. Similar speeds, within
factors of a few, were obtained in the
university-to-university transfers. All cautioned that
one needs to think a bit carefully about network and
networking hardware configurations to get acceptable
bandwidth.
</p>
<p>Alex A. cautioned us that there were some small files
in Batch 1, and to a lesser extent in Batch 2, that
either were lost before getting archived to tape, or
that are in the Tape Library, but were not pinned and
disappeared from the cache disk.
</p>
<h4><span class="mw-headline" id="Launch_Stats">Launch
Stats</span></h4>
<p>Alex pointed us to the <a rel="nofollow" class="external
text"
href="https://halldweb.jlab.org/data_monitoring/launch_analysis/index.html">Launch
Stats webpage</a>, which now contains links to the full
reconstruction launch statistics pages. We looked at <a
rel="nofollow" class="external text"
href="https://halldweb.jlab.org/data_monitoring/recon/summary_swif_output_recon_2016-02_ver01_batch01.html">the
page for Batch 01</a>.
</p>
<ul>
<li> The page shows statistics on jobs run.</li>
<li> We discussed the plot of the number of jobs at each
state of farm processing as a function of time. For
the most part we were limited by the number of farm
nodes, but there were times when we were waiting for
raw data files from tape.</li>
<li> We never had more than about 500 jobs running at a
time.</li>
<li> Memory usage was about 7 GB for Batch 1, a bit more
for Batch 2.</li>
<li> The jobs ran with 14 threads.</li>
<li> One limit on farm performance was CLAS jobs that
required large amounts of memory, such that farm nodes
were running with large fractions of idle cores.</li>
</ul>
<h3><span class="mw-headline"
id="mcsmear_and_CCDB_variation_setting">mcsmear and
CCDB variation setting</span></h3>
<p>David noticed last week that he had to choose the
mc variation of CCDB to get sensible results from
the FCAL when running mcsmear. This was because of a
change in the way the non-linear energy correction was
being applied. He asked whether we want to make this the
default for mcsmear, since it is only run on simulated
data.
</p>
<p>The situation is complicated by the fact that not all
simulated data should use the mc variation. That is only
appropriate for getting the "official" constants
intended for simulating data already in the can. Note
that if no variation is specified at all, then the
default variation is used; that was the problem that
David discovered.
</p>
<p>After some discussion, we decided to ask Sean to add
a warning to mcsmear if no variation is named at all.
</p>
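<p>The proposed check can be sketched as follows. This is a
hypothetical illustration: mcsmear itself is C++, and while
JANA_CALIB_CONTEXT is the mechanism GlueX uses to select a CCDB
variation, the exact logic mcsmear would use is an assumption here:
</p>
```python
# Hypothetical sketch of the proposed mcsmear check: warn when the CCDB
# variation is left unspecified, since the default variation then applies.
import os
import warnings

def check_variation(environ=os.environ):
    """Warn if no CCDB variation is named in the calibration context."""
    context = environ.get("JANA_CALIB_CONTEXT", "")
    if "variation=" not in context:
        warnings.warn(
            "No CCDB variation specified; the default variation will be "
            "used. For simulated data you probably want variation=mc.")
        return None
    # Extract the named variation, e.g. "variation=mc" -> "mc".
    return context.split("variation=")[1].split()[0]

print(check_variation({"JANA_CALIB_CONTEXT": "variation=mc"}))  # -> mc
```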
<h3><span class="mw-headline" id="ROOT_6_upgrade.3F">ROOT
6 upgrade?</span></h3>
<p>Mark has done a test build of a recent version of our
software with ROOT 6. We had mentioned that we should
transition from 5 to 6 once we have established use of a
C++11-compliant compiler. That has been done now.
</p>
<p>Paul pointed out that the change may break some ROOT
macros used by individuals, including some used for
calibration. On the other hand, the change has to happen
at some point.
</p>
<p>Mark told us he will not make the change for the
upcoming release, but will consider it for the one after
that. In any case, we will discuss it further.
</p>
</div>
<div class="printfooter">
Retrieved from "<a dir="ltr"
href="https://halldweb.jlab.org/wiki/index.php?title=GlueX_Offline_Meeting,_July_6,_2016&oldid=76137">https://halldweb.jlab.org/wiki/index.php?title=GlueX_Offline_Meeting,_July_6,_2016&oldid=76137</a>"</div>
</div>
</div>
</div>
<div id="footer" role="contentinfo">
<ul id="f-list">
<li id="lastmod"> This page was last modified on 7 July 2016,
at 12:31.</li>
</ul>
</div>
</div>
<br>
<pre class="moz-signature" cols="72">--
<a class="moz-txt-link-abbreviated" href="mailto:marki@jlab.org">marki@jlab.org</a>, (757)269-5295
</pre>
</body>
</html>