<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p>Folks,</p>
    <p>Please find the minutes <a
href="https://halldweb.jlab.org/wiki/index.php/GlueX_Software_Meeting,_January_7,_2020#Minutes">here</a>
      and below.</p>
    <p>  -- Mark</p>
    <div id="globalWrapper">
      <div id="column-content">
        <div id="content" class="mw-body" role="main">
          <h2 id="firstHeading" class="firstHeading" lang="en"><span
              dir="auto">GlueX Software Meeting, January 7, 2020, </span><span
              class="mw-headline" id="Minutes">Minutes</span></h2>
          <div id="bodyContent" class="mw-body-content">
            <div id="mw-content-text" dir="ltr" class="mw-content-ltr"
              lang="en">
              <p>Present:
              </p>
              <ul>
                <li> <b> CMU: </b> Naomi Jarvis</li>
                <li> <b> JLab: </b> Alex Austregesilo, Mark Dalton,
                  Mark Ito (chair), Igal Jaegle, David Lawrence, Keigo
                  Mizutani, Justin Stevens, Simon Taylor, Beni Zihlmann</li>
              </ul>
              <p><br>
                There is a <a rel="nofollow" class="external text"
                  href="https://bluejeans.com/s/L7pCN/">recording of his
                  meeting</a> on the BlueJeans site. Use your JLab
                credentials to access it.
              </p>
              <h3><span class="mw-headline" id="Announcements">Announcements</span></h3>
              <ol>
                <li> <a rel="nofollow" class="external text"
href="https://mailman.jlab.org/pipermail/halld-offline/2019-December/003841.html">New
                    version set: version_4.12.0.xml</a>. This version
                  set is suitable for analyzing data from the Fall 2019
                  run.</li>
                <li> <a rel="nofollow" class="external text"
href="https://mailman.jlab.org/pipermail/halld-offline/2019-December/003860.html">new
                    version sets, new simulation (halld_sim-4.11.0), old
                    reconstruction</a>. These version sets use the same
                  version of halld_sim as in version_4.12.0.xml, but
                  with old versions of halld_recon, the versions used in
                  previous analysis launches. New branches of those old
                  versions were needed to accommodate the new
                  particle_type.h used in the latest halld_sim version.
                  (A sketch of the version-set file format appears after
                  this list.)</li>
              </ol>
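              <p>For reference, a version set is an XML file that pins
                the versions of the GlueX software packages to be used
                together. Below is a minimal sketch of the general form
                only; the exact tag names are an assumption, and apart
                from halld_sim-4.11.0 (named above) the version numbers
                are placeholders, not the contents of any actual
                version set.
              </p>
              <pre>&lt;gversions&gt;
  &lt;!-- illustrative entries; real version sets list many more packages --&gt;
  &lt;package name="halld_recon" version="X.Y.Z"/&gt;
  &lt;package name="halld_sim" version="4.11.0"/&gt;
&lt;/gversions&gt;</pre>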
              <h3><span class="mw-headline"
                  id="Review_of_Minutes_from_the_Last_Software_Meeting">Review
                  of Minutes from the Last Software Meeting</span></h3>
              <p>We went over <a
href="https://halldweb.jlab.org/wiki/index.php/GlueX_Software_Meeting,_December_10,_2019#Minutes"
                  title="GlueX Software Meeting, December 10, 2019">the
                  minutes from December 10</a>.
              </p>
              <ul>
                <li> Mark has made some progress on a path forward for
                  conversion to Python 3. More precisely, he found there
                  is an easy way to use Python-2-compatible SCons on
                  systems with Python 3 as the default, and vice versa.
                  It turns out that, at least for RedHat-like systems,
                  modern distributions ship SCons in both flavors. (See
                  the sketch after this list.)</li>
                <li> There has been no news on the release of CCDB 2.0.</li>
                <li> The RCDB errors reported at the meeting were
                  solved.</li>
                <li> The problem reported with using CCDB from
                  hallddb-farm still appears to be with us. Jobs hang
                  forever at start-up when accessing calibration
                  constants.</li>
                <li> The problem that Mark Dalton reported with genBH
                  was due to a bug in HDDM that <a rel="nofollow"
                    class="external text"
                    href="https://github.com/JeffersonLab/halld_recon/pull/248">Richard
                    has since fixed</a>.</li>
              </ul>
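              <p>A minimal sketch of one way to check which flavor a
                given SCons installation is (the path and the
                shebang-based packaging convention are assumptions, not
                a documented interface):
              </p>
              <pre># Read the shebang line of an installed scons script to see which
# Python interpreter it will run under. Illustration only; the path
# is an example.
path = "/usr/bin/scons"
with open(path) as f:
    shebang = f.readline().strip()
print(shebang)  # e.g. "#!/usr/bin/python2" or "#!/usr/bin/python3"</pre>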
              <h3><span class="mw-headline"
                  id="Review_of_Minutes_from_the_Last_HDGeant4_Meeting">Review
                  of Minutes from the Last HDGeant4 Meeting</span></h3>
              <p>We went over the minutes from <a
href="https://halldweb.jlab.org/wiki/index.php/HDGeant4_Meeting,_December_17,_2019#Minutes"
                  title="HDGeant4 Meeting, December 17, 2019">the
                  meeting on December 17</a>. Alex has done work
                exploring the effect of widening the timing cuts on
                charged hadrons which hit the BCAL. See the discussion
                below in the section on halld_recon pull requests.
              </p>
              <h3><span class="mw-headline"
                  id="Rebooting_the_Work_Disk_Server">Rebooting the Work
                  Disk Server</span></h3>
              <p>The reboot of the work disk server, done on December
                17, seems to have fixed the file locking problems we
                have been experiencing for months now. Brad Sawatzky
                from Hall C also reports success with his builds, which
                had been failing reliably in the recent past.
              </p>
              <p>Getting all of the Halls to agree to bring the disk
                server down took some convincing, given that any benefit was
                speculative. Kurt Strosahl of SciComp did see some
                anomalies while doing some pre-reboot diagnostics. There
                is no proof that the fix is permanent; we will have to
                continue to monitor the server.
              </p>
              <h3><span class="mw-headline"
                  id="Report_from_the_Last_SciComp_Meeting">Report from
                  the Last SciComp Meeting</span></h3>
              <p>Mark led us through <a rel="nofollow" class="external
                  text"
href="https://markito3.wordpress.com/2019/12/19/scicomp-meeting-december-19-2019/">his
                  notes from the meeting on December 19</a>.
              </p>
              <ul>
                <li> We are on track for a 4 PB expansion of our Lustre
                  capacity for Experimental Nuclear Physics (i.e., all
                  Halls).</li>
                <li> There will be work this summer to transition to a
                  single tape library. We have two at present. With two,
                  tape bandwidth can get <a rel="nofollow"
                    class="external text"
                    href="https://www.merriam-webster.com/dictionary/silo">siloed
                    (see sense 3)</a>.</li>
              </ul>
              <h3><span class="mw-headline"
                  id="Fix_for_Recent_Raw_Data_Runs">Fix for Recent Raw
                  Data Runs</span></h3>
              <p>Mark I. highlighted <a rel="nofollow" class="external
                  text"
href="https://mailman.jlab.org/pipermail/halld-offline/2019-December/003858.html">Sean
                  Dobbs's recent fix</a> for anomalies found in the raw
                data from last Fall. Naomi pointed out that there are
                actually three such anomalies related to the new
                firmware used in the Flash-250s. David addressed one of
                them in <a rel="nofollow" class="external text"
                  href="https://github.com/JeffersonLab/halld_recon/pull/247">halld_recon
                  Pull Request #247</a>.
              </p>
              <p>Mark D. has compiled <a
href="https://halldweb.jlab.org/wiki/index.php/FADC250_Firmware_Versions"
                  title="FADC250 Firmware Versions">a collection of
                  information on this and related issues</a>.
              </p>
              <p>[Added in press: Mark D. reports that currently all
                FADCs are running with the latest version of the
                firmware (Version C13).]
              </p>
              <h3><span class="mw-headline"
                  id="New_Farm_Priority_Scheme">New Farm Priority Scheme</span></h3>
              <p>In December, several folks reported anomalies in how
                gxproj accounts were being scheduled on the JLab farm.
                Starting yesterday, a new Slurm configuration for farm
                job priorities <a rel="nofollow" class="external text"
href="https://mailman.jlab.org/pipermail/halld-offline/2020-January/003862.html">was
                  rolled out</a>. Priorities are assigned in a hierarchy
                such that resources at each level are apportioned
                according to "shares," and groups that are inactive at
                any given time have their shares apportioned among the
                active groups. The following figure shows the hierarchy
                and the current share assignments.
              </p>
              <p><a
href="https://halldweb.jlab.org/wiki/index.php/File:Fairshare_2020-01-06.png"
                  class="image"><img alt="Fairshare 2020-01-06.png"
src="https://halldweb.jlab.org/wiki/images/thumb/f/f2/Fairshare_2020-01-06.png/700px-Fairshare_2020-01-06.png"
                    width="700" height="385"></a>.
              </p>
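              <p>As a worked illustration of the apportionment rule (a
                minimal sketch, not the actual Slurm fair-share
                algorithm; apart from gluex-pro, the account names and
                all share numbers below are hypothetical):
              </p>
              <pre># Sketch of one level of the share hierarchy: inactive groups'
# shares are redistributed among the active groups.

def apportion(shares, active):
    """Fraction of the level's resources each active group receives."""
    total = sum(s for g, s in shares.items() if g in active)
    return {g: s / total for g, s in shares.items() if g in active}

shares = {"gluex-pro": 60, "gluex-user": 30, "gluex-test": 10}

# All three groups active: gluex-pro gets 60/100 = 0.60.
print(apportion(shares, {"gluex-pro", "gluex-user", "gluex-test"}))

# gluex-test idle: its shares are reapportioned among the active
# groups, so gluex-pro's fraction rises to 60/90, about 0.67.
print(apportion(shares, {"gluex-pro", "gluex-user"}))</pre>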
              <p>N.B.: Production accounts must use the "gluex-pro"
                project to get the benefit of increased priority.
              </p>
              <p>Beni reported seeing extremely long retrieval times for
                tape files in December.
              </p>
              <h3><span class="mw-headline"
                  id="Review_of_recent_issues_and_pull_requests">Review
                  of recent issues and pull requests</span></h3>
              <ul>
                <li> <a rel="nofollow" class="external text"
                    href="https://github.com/JeffersonLab/halld_recon/issues/257">halld_recon
                    Issue #257</a>: "Broken beam bunch selection with
                  wide timing cuts." Alex reported on recent work in
                  studying the effect of the width of timing cuts in the
                  BCAL on the efficiency calculated in Monte Carlo for ρ
                  events. There has been a long-standing <a
                    rel="nofollow" class="external text"
                    href="https://github.com/JeffersonLab/HDGeant4/issues/93">problem
                    in the efficiency comparison</a> for this topology
                  between Geant3 and Geant4 when plotted as a function
                  of beam photon energy. Using a very wide cut
                  (±5.0 ns versus the standard ±1.0 ns) he
                  saw the two efficiencies come together, but the
                  overall level drop. He traced this to a bug in the
                  analysis library where the RF bunch assignment, in the
                  case of multiple acceptable RF bunches, was based on
                  the assignment with the highest χ<sup>2</sup> rather
                  than the lowest. After <a rel="nofollow"
                    class="external text"
                    href="https://github.com/JeffersonLab/halld_recon/pull/258">correcting
                    the code</a>, the overall efficiency level rose to a
                  level higher than that with the narrow timing cut, as
                  expected. (A sketch of the corrected selection logic
                  appears after this list.)</li>
                <li> <a rel="nofollow" class="external text"
                    href="https://github.com/JeffersonLab/halld_recon/issues/256">halld_recon
                    Issue #256</a>: "ReactionFilter plugin is not
                  working with an (atomic) electron target." Igal
                  reported a problem where no events were selected by
                  the ReactionFilter when run on Compton events from his
                  recently installed generator. The simulation was done
                  with no magnetic field and the resulting straight
                  tracks from electrons were all being assigned a
                  positive charge (the default when the charge cannot be
                  determined) and so events appeared to be
                  electron-free. There was an extended discussion of how
                  to resolve this problem; the discussion will
                  continue offline.</li>
                <li> <a rel="nofollow" class="external text"
                    href="https://github.com/JeffersonLab/halld_sim/pull/102">halld_sim
                    Pull Request #102</a> "Ijaegle primex evt" and <a
                    rel="nofollow" class="external text"
                    href="https://github.com/JeffersonLab/halld_sim/pull/104">halld_sim
                    Pull Request #104</a>: "Increase gen. config. length
                  name." These pull requests from Igal add an η
                  generator and a double Compton generator respectively
                  to the repository.</li>
                <li> <a rel="nofollow" class="external text"
                    href="https://github.com/JeffersonLab/halld_recon/pull/253">halld_recon
                    Pull Request #253</a>: "Add trd to fit." Simon has
                  been working to add TRD data from last year's DIRC
                  commissioning run to the track fit. This is work in
                  progress.</li>
              </ul>
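              <p>Below is a minimal sketch of the corrected RF-bunch
                selection logic described in the first item above
                (illustration only, not the actual analysis-library
                code; the candidate values are made up):
              </p>
              <pre># Among the beam bunches that pass the timing cut, keep the
# assignment with the LOWEST chi-squared. The bug was equivalent to
# using max() here instead of min().

candidates = [
    # (bunch offset, chi2 of the assignment) -- hypothetical values
    (-1, 4.2),
    (0, 1.3),
    (1, 2.9),
]

best_bunch, best_chi2 = min(candidates, key=lambda c: c[1])
print(best_bunch, best_chi2)  # -> 0 1.3</pre>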
              <h3><span class="mw-headline"
                  id="Recent_Discussion_on_the_GlueX_Software_Help_List">Recent
                  Discussion on the GlueX Software Help List</span></h3>
              <p>We looked over <a rel="nofollow" class="external text"
href="https://groups.google.com/forum/#!forum/gluex-software">the list</a>
                without significant discussion.
              </p>
              <h3><span class="mw-headline" id="New_Meeting_Time.3F">New
                  Meeting Time?</span></h3>
              <p>Teaching schedules have changed. We will move this
                meeting and the HDGeant4 meeting to 3:30 pm. They
                will remain on Tuesdays.
              </p>
              <h3><span class="mw-headline" id="Action_Item_Review">Action
                  Item Review</span></h3>
              <ol>
                <li> Figure out how to build on a system with Python
                  3 as the default.</li>
                <li> Look at getting CCDB constants from hallddb-farm.</li>
                <li> Develop a plan for calculating efficiencies when
                  timing cuts on charged tracks hitting the BCAL depend
                  on distributions with long, not-well-understood tails.</li>
              </ol>
            </div>
            <div class="printfooter">
              Retrieved from "<a dir="ltr"
href="https://halldweb.jlab.org/wiki/index.php?title=GlueX_Software_Meeting,_January_7,_2020&oldid=95667">https://halldweb.jlab.org/wiki/index.php?title=GlueX_Software_Meeting,_January_7,_2020&oldid=95667</a>"</div>
          </div>
        </div>
      </div>
      <div id="footer" role="contentinfo">
        <ul id="f-list">
          <li id="lastmod"> This page was last modified on 7 January
            2020, at 18:32.</li>
        </ul>
      </div>
    </div>
  </body>
</html>