<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=utf-8">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <p>Folks,</p>
    <p>Find the minutes below and <a moz-do-not-send="true"
href="https://halldweb.jlab.org/wiki/index.php/GlueX_Offline_Meeting,_August_7,_2018">here</a>.</p>
    <p>  -- Mark</p>
    <p>___________________</p>
    <p>
    </p>
    <div id="globalWrapper">
      <div id="column-content">
        <div id="content" class="mw-body" role="main">
          <h1 id="firstHeading" class="firstHeading" lang="en"><span
              dir="auto">Minutes, GlueX Offline Meeting, August 7, 2018</span></h1>
          <div id="bodyContent" class="mw-body-content">Present:
            <div id="mw-content-text" dir="ltr" class="mw-content-ltr"
              lang="en">
              <ul>
                <li> <b> CMU: </b> Curtis Meyer</li>
                <li> <b> FIU: </b> Mahmoud Kamel</li>
                <li> <b> FSU: </b> Sean Dobbs</li>
                <li> <b> JLab: </b> Alex Austregesilo, Thomas Britton,
                  Mark Dalton, Stuart Fegan, Mark Ito (chair), David
                  Lawrence, Justin Stevens, Beni Zihlmann</li>
              </ul>
              <p>The chairman neglected to hit the record button on
                BlueJeans.<br>
              </p>
              <h3><span class="mw-headline" id="Announcements">Announcements</span></h3>
              <ol>
                <li> <b><a rel="nofollow" class="external text"
href="https://mailman.jlab.org/pipermail/halld-offline/2018-August/003306.html">reconstruction
                      launch version set:
                      version_recon-2017_01-ver03_jlab.xml</a></b>. The
                  tag of sim-recon used in the reconstruction has been
                  built on five platforms.</li>
                <li> Status of Recon Launch: Alex A.
                  <ul>
                    <li> We are using the QCD12 boxes; the farm18 nodes
                      shown on the SciComp webpage are not in active use
                      yet.</li>
                    <li> There are 700-800 jobs running simultaneously.</li>
                    <li> There will be 300 to 400 more when the farm18
                      nodes are activated.</li>
                    <li> We are 85% done with 2016 data; it will be done
                      in 2 or 3 days.</li>
                    <li> Spring 2017 reconstruction should take 15 to 20
                      days.</li>
                    <li> There is a possible problem with cache disk
                      space: if we run more jobs simultaneously, our pin
                      quota will be used up.</li>
                    <li> David remarked that since we are copying the
                      raw data to the local disk first, they could be
                      unpinned as soon as they are copied.</li>
                  </ul>
                </li>
              </ol>
              <h3><span class="mw-headline"
                  id="Review_of_minutes_from_the_July_24_meeting">Review
                  of minutes from the July 24 meeting</span></h3>
              <p>We went over <a
href="https://halldweb.jlab.org/wiki/index.php/GlueX_Offline_Meeting,_July_24,_2018#Minutes"
                  title="GlueX Offline Meeting, July 24, 2018">the
                  minutes</a>.
              </p>
              <h4><span class="mw-headline" id="NERSC_Update">NERSC
                  Update</span></h4>
              <p>David gave us an update.
              </p>
              <ul>
                <li> Chris Larrieu is back from vacation and has
                  addressed some swif2 issues.</li>
                <li> Test of reconstruction of one run with 220 files
                  ran into a 20-job-at-a-time limit imposed by swif2.
                  The limit is motivated by having only 1 TB of disk
                  space at NERSC. More space than that is needed to keep
                  the pipe full.</li>
                <li> David has consulted with a Brookhaven physicist who
                  has been working with more space.</li>
                <li> Reserving an entire node is possible, but you have
                  to "pay" in advance for the time and it may be hard to
                  get credit back for failed jobs.</li>
                <li> David plans to move to a 20 TB "cache" disk (with a
                  file lifetime limit).</li>
                <li> The plan is to try a monitoring launch over Spring
                  2018 data first.</li>
                <li> Sean asked which software tag was going to be
                  used. He cautioned that there is CDC reconstruction
                  code that should be added to augment the code being
                  used for the current reconstruction launch.</li>
                <li> Alex cautioned that the monitoring launch uses many
                  more plugins than are used in reconstruction launches.
                  More memory may be required.</li>
              </ul>
              <h3><span class="mw-headline"
                  id="Splitting_up_Sim-Recon:_Aftermath">Splitting up
                  Sim-Recon: Aftermath</span></h3>
              <p>Mark led us through <a rel="nofollow" class="external
                  text"
href="https://mailman.jlab.org/pipermail/halld-offline/2018-July/003292.html">the
                  announcement of the split</a>, performed on Monday, July
                30, and <a
href="https://halldweb.jlab.org/wiki/index.php/Converting_sim-recon_tags_and_branches_to_the_split_repositories"
                  title="Converting sim-recon tags and branches to the
                  split repositories">a wiki page he wrote</a>
                describing how to recover branches and tags from the
                sim-recon repository when using the new halld_recon and
                halld_sim repositories.
              </p>
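              <p>As a rough illustration of the kind of recovery the wiki
                page covers, the sketch below (in Python, driving git via
                subprocess) fetches an old sim-recon tag into a clone of
                one of the split repositories. The tag name and the
                assumption that the split preserved the shared commit
                history are illustrative only; the authoritative recipe
                is the wiki page linked above.
              </p>
              <pre>
# Hypothetical sketch, not the procedure from the wiki page: bring an old
# sim-recon tag into a clone of halld_recon (or halld_sim) for inspection.
# Assumes this runs inside such a clone, that the split kept the shared
# history, and that "some-old-tag" stands in for a real sim-recon tag.
import subprocess

def git(*args):
    """Run a git command in the current clone, echoing it first."""
    print("+ git " + " ".join(args))
    subprocess.check_call(["git"] + list(args))

git("remote", "add", "simrecon", "https://github.com/JeffersonLab/sim-recon")
git("fetch", "simrecon", "--tags")
# Create a local branch at the old tag so its files can be inspected.
git("checkout", "-b", "recover-some-old-tag", "some-old-tag")
              </pre>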
              <p>Items that still need to be addressed:
              </p>
              <ol>
                <li> The use of the HALLD_MY directory needs to be
                  revisited with the split repositories.</li>
                <li> A procedure for recovering tagged versions of
                  sim-recon and deploying them in the split repositories
                  needs to be developed.</li>
                <li> The automatic builds triggered by pull requests
                  need to be implemented on the new repositories.</li>
              </ol>
              <h3><span class="mw-headline" id="HDGeant4_issues">HDGeant4
                  issues</span></h3>
              <p>We reviewed the recent pull requests from Richard Jones
                fixing separate issues in the FDC simulation, one in
                HDGeant (GEANT 3) and the other in HDGeant4. See his
                comment, submitted today, on <a rel="nofollow"
                  class="external text"
                  href="https://github.com/JeffersonLab/HDGeant4/issues/54">HDGeant4
                  Issue #54</a>. Corresponding pull requests to the
                halld_sim and hdgeant4 repositories have been merged to
                their respective master branches.
              </p>
              <h3><span class="mw-headline"
                  id="Review_of_recent_pull_requests">Review of recent
                  pull requests</span></h3>
              <p>The title of <a rel="nofollow" class="external text"
                  href="https://github.com/JeffersonLab/sim-recon/pull/1180">Pull
                  request #1180</a> from David served as a reminder to
                upbraid us for adding frustration to his workflow. The
                issue is respect (or rather disrespect) for a mechanism
                for building sim-recon (at the time of the request)
                 without all of the packages we build, i.e., a mechanism
                for having optional packages. Whether a package is
                optional or not is signaled by the absence or presence
                of the home environment variable for the package. When
                collaborators do not respect this convention, David is
                stuck either building the suddenly non-optional package
                or putting the optional-package mechanism back in himself.
              </p>
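              <p>For concreteness, a minimal sketch of the convention
                described above (not the actual SBMS code) might look
                like the following in an SCons build script: a package is
                linked in only when its home environment variable is set.
                The variable and library names are placeholders.
              </p>
              <pre>
# Minimal sketch of the optional-package convention, not actual SBMS code.
# A package participates in the build only if its home variable is set.
import os

def add_optional_package(env, home_var, libname):
    """Add include/lib settings for a package only if home_var is defined."""
    home = os.environ.get(home_var)
    if home is None:
        # Optional package absent: skip it instead of failing the build.
        print("%s not set; building without %s" % (home_var, libname))
        return False
    env.AppendUnique(CPPPATH=[os.path.join(home, "include")],
                     LIBPATH=[os.path.join(home, "lib")],
                     LIBS=[libname])
    return True

# Example use inside an SConscript, with placeholder names:
#   add_optional_package(env, "MYPACKAGE_HOME", "mypackage")
              </pre>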
              <p>David has looked into the idea of having build
                "flavors;" configurations of the build with optional
                packages explicitly identified. That takes the
                configuration out of the shell environment. In general,
                he thinks that we may be due for re-factoring the SCons
                build system (SBMS) in any case.
              </p>
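              <p>One way such "flavors" might be expressed, purely as a
                hypothetical sketch and not anything David has committed
                to, is a checked-in table mapping each flavor to its
                optional packages, so the selection no longer depends on
                which environment variables happen to be set:
              </p>
              <pre>
# Hypothetical "flavors" table; flavor and package names are invented here.
FLAVORS = {
    "minimal": [],
    "reconstruction": ["ccdb", "rcdb", "evio"],
    "full": ["ccdb", "rcdb", "evio", "et", "xerces-c"],
}

def optional_packages(flavor):
    """Return the optional packages to build for the requested flavor."""
    if flavor not in FLAVORS:
        raise ValueError("unknown build flavor: %r" % flavor)
    return FLAVORS[flavor]

print(optional_packages("reconstruction"))
              </pre>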
              <h3><span class="mw-headline"
                  id="Review_of_recent_discussion_on_the_GlueX_Software_Help_List">Review
                  of recent discussion on the GlueX Software Help List</span></h3>
              <p>We looked at <a rel="nofollow" class="external text"
                  href="https://groups.google.com/forum/#%21forum/gluex-software">recent
                  posts</a>.
              </p>
              <ul>
                <li> None of those present, other than Mark, has
                  experienced the halldweb authentication error (401).</li>
                <li> The cause of the g++ internal compiler error, which
                  appears on random source files during single-threaded
                  builds of hdgeant4 on the ifarm machines and nowhere
                  else, is still a mystery.</li>
              </ul>
            </div>
            <div class="printfooter">
              Retrieved from "<a dir="ltr"
href="https://halldweb.jlab.org/wiki/index.php?title=GlueX_Offline_Meeting,_August_7,_2018&oldid=88570">https://halldweb.jlab.org/wiki/index.php?title=GlueX_Offline_Meeting,_August_7,_2018&oldid=88570</a>"</div>
          </div>
        </div>
      </div>
      <div id="footer" role="contentinfo">
        <ul id="f-list">
          <li id="lastmod"> This page was last modified on 7 August
            2018, at 20:16.</li>
        </ul>
      </div>
    </div>
  </body>
</html>