[Halld-offline] Software Meeting Minutes, January 7, 2020

Mark Ito marki at jlab.org
Tue Jan 7 18:36:00 EST 2020


Folks,

Please find the minutes here 
<https://halldweb.jlab.org/wiki/index.php/GlueX_Software_Meeting,_January_7,_2020#Minutes> 
and below.

   -- Mark


    GlueX Software Meeting, January 7, 2020, Minutes

Present:

  * *CMU:* Naomi Jarvis
  * *JLab:* Alex Austregesilo, Mark Dalton, Mark Ito (chair), Igal
    Jaegle, David Lawrence, Keigo Mizutani, Justin Stevens, Simon
    Taylor, Beni Zihlmann


There is a recording of this meeting <https://bluejeans.com/s/L7pCN/> on 
the BlueJeans site. Use your JLab credentials to access it.


      Announcements

 1. New version set: version_4.12.0.xml
    <https://mailman.jlab.org/pipermail/halld-offline/2019-December/003841.html>.
    This version set is suitable for analyzing data from the Fall 2019 run.
 2. New version sets, new simulation (halld_sim-4.11.0), old
    reconstruction
    <https://mailman.jlab.org/pipermail/halld-offline/2019-December/003860.html>.
    These version sets use the same version of halld_sim as
    version_4.12.0.xml, but with old versions of halld_recon, the
    versions used in previous analysis launches. New branches of those
    old versions were needed to accommodate the new particle_type.h used
    in the latest halld_sim version.


      Review of Minutes from the Last Software Meeting

We went over the minutes from December 10 
<https://halldweb.jlab.org/wiki/index.php/GlueX_Software_Meeting,_December_10,_2019#Minutes>. 


  * Mark has made some progress on a path forward for the conversion to
    Python 3. More precisely, he found that there is an easy way to use
    Python-2-compatible SCons on systems with Python 3 as the default,
    and vice versa. It turns out that, at least for RedHat-like systems,
    modern distributions ship SCons in both flavors.
  * There has been no news on the release of CCDB 2.0.
  * The RCDB errors reported at the meeting were solved.
  * The problem reported with using CCDB from hallddb-farm still appears
    to be with us. Jobs hang indefinitely at start-up when accessing
    calibration constants.
  * The problem that Mark Dalton reported with genBH was due to a bug in
    HDDM that Richard has since fixed
    <https://github.com/JeffersonLab/halld_recon/pull/248>.


      Review of Minutes from the Last HDGeant4 Meeting

We went over the minutes from the meeting on December 17 
<https://halldweb.jlab.org/wiki/index.php/HDGeant4_Meeting,_December_17,_2019#Minutes>. 
Alex has done work exploring the effect of widening the timing cuts on 
charged hadrons that hit the BCAL. See the discussion below in the 
section on halld_recon pull requests.


      Rebooting the Work Disk Server

The reboot of the work disk server, done on December 17, seems to have 
fixed the file-locking problems we had been experiencing for months. 
Brad Sawatzky from Hall C also reports success with his builds, builds 
that would reliably fail in the recent past.

Getting all of the Halls to agree to bring the disk down took some 
convincing given that any benefit was speculative. Kurt Strosahl of 
SciComp did see some anomalies while doing some pre-reboot diagnostics. 
There is no proof that the fix is permanent; we will have to continue to 
monitor the server.


      Report from the Last SciComp Meeting

Mark led us through his notes from the meeting on December 19 
<https://markito3.wordpress.com/2019/12/19/scicomp-meeting-december-19-2019/>. 


  * We are on track for a 4 PB expansion of our Lustre capacity for
    Experimental Nuclear Physics (i.e., all Halls).
  * There will be work this summer to transition to a single tape
    library. We have two at present. With two, tape bandwidth can get
    siloed (see sense 3 <https://www.merriam-webster.com/dictionary/silo>).


      Fix for Recent Raw Data Runs

Mark I. highlighted Sean Dobbs's recent fix 
<https://mailman.jlab.org/pipermail/halld-offline/2019-December/003858.html> 
for anomalies found in the raw data from last Fall. Naomi pointed out 
that there are actually three such anomalies related to the new 
firmware used in the Flash-250s. David addressed one of them in 
halld_recon Pull Request #247 
<https://github.com/JeffersonLab/halld_recon/pull/247>.

Mark D. has compiled a collection of information on this and related 
issues 
<https://halldweb.jlab.org/wiki/index.php/FADC250_Firmware_Versions>.

[Added in press: Mark D. reports that currently all FADCs are running 
with the latest version of the firmware (Version C13).]


      New Farm Priority Scheme

In December, several folks reported anomalies in how gxproj accounts 
were being scheduled on the JLab farm. Starting yesterday, a new Slurm 
configuration for farm job priorities was rolled out 
<https://mailman.jlab.org/pipermail/halld-offline/2020-January/003862.html>. 
Priorities are assigned in a hierarchy such that resources at each level 
are apportioned according to "shares," and groups that are inactive at 
any given time have their shares apportioned among the active groups (a 
toy illustration of this redistribution appears after the figure below). 
The following figure shows the hierarchy and the current share assignments.

Fairshare 2020-01-06.png 
<https://halldweb.jlab.org/wiki/index.php/File:Fairshare_2020-01-06.png>.

N.B.: Production accounts must use the "gluex-pro" project to get the 
benefit of increased priority.
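
To make the share arithmetic concrete, here is a toy sketch of how an 
inactive group's shares get redistributed among the active groups. The 
group names and share values below are made up for illustration; the 
real assignments are in the figure above, and Slurm's actual fair-share 
algorithm is more involved than this.

    # Toy sketch of fairshare redistribution among active groups.
    # Group names and share values are hypothetical, not the real
    # assignments; see the figure above for those.
    shares = {"gluex-pro": 60, "gluex": 30, "other": 10}

    def effective_fractions(shares, active):
        """Apportion the total shares among only the active groups."""
        total = sum(shares[g] for g in active)
        return {g: shares[g] / total for g in active}

    # All groups active: fractions track the raw shares.
    print(effective_fractions(shares, ["gluex-pro", "gluex", "other"]))
    # -> {'gluex-pro': 0.6, 'gluex': 0.3, 'other': 0.1}

    # "other" is idle: its share is split among the remaining groups.
    print(effective_fractions(shares, ["gluex-pro", "gluex"]))
    # -> {'gluex-pro': 0.667, 'gluex': 0.333}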

Beni reported seeing extremely long retrieval times for tape files in 
December.


      Review of Recent Issues and Pull Requests

  * halld_recon Issue #257
    <https://github.com/JeffersonLab/halld_recon/issues/257>: "Broken
    beam bunch selection with wide timing cuts." Alex reported on recent
    work in studying the effect of the width of timing cuts in the BCAL
    on the efficiency calculated in Monte Carlo for ρ events. There has
    been a long-standing problem in the efficiency comparison
    <https://github.com/JeffersonLab/HDGeant4/issues/93> for this
    topology between Geant3 and Geant4 when plotted as a function of
    beam photon energy. Using a very wide cut (±5.0 ns versus the
    standard ±1.0 ns) he saw the two efficiencies come together, but
    the overall level drop. He traced this to a bug in the analysis
    library where the RF bunch assignment, in the case of multiple
    acceptable RF bunches, was based on the assignment with the highest
    χ² rather than the lowest (see the sketch after this list). After
    correcting the code
    <https://github.com/JeffersonLab/halld_recon/pull/258>, the overall
    efficiency rose to a level higher than that with the narrow timing
    cut, as expected.
  * halld_recon Issue #256
    <https://github.com/JeffersonLab/halld_recon/issues/256>:
    "ReactionFilter plugin is not working with an (atomic) electron
    target." Igal reported a problem where no events were selected by
    the ReactionFilter when run on Compton events from his recently
    installed generator. The simulation was done with no magnetic field
    and the resulting straight tracks from electrons were all being
    assigned a positive charge (the default when the charge cannot be
    determined), so the events appeared to be electron-free. There was an
    extended discussion of how to resolve this problem; the discussion
    will continue offline.
  * halld_sim Pull Request #102
    <https://github.com/JeffersonLab/halld_sim/pull/102> "Ijaegle primex
    evt" and halld_sim Pull Request #104
    <https://github.com/JeffersonLab/halld_sim/pull/104>: "Increase gen.
    config. length name." These pull requests from Igal add an η
    generator and a double Compton generator, respectively, to the
    repository.
  * halld_recon Pull Request #253
    <https://github.com/JeffersonLab/halld_recon/pull/253>: "Add trd to
    fit." Simon has been working to add TRD data from last year's DIRC
    commissioning run to the track fit. This is work in progress.
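
As mentioned under Issue #257 above, the bug was that the RF bunch 
assignment kept the candidate with the highest χ² instead of the 
lowest. Here is a minimal sketch of the corrected selection logic; the 
names are hypothetical, and the actual fix is in the C++ analysis 
library (see halld_recon Pull Request #258).

    # Minimal sketch of the corrected RF-bunch selection. Names are
    # hypothetical; the real fix is in the halld_recon analysis library.
    from collections import namedtuple

    BunchCandidate = namedtuple("BunchCandidate", ["bunch_id", "chi2"])

    def select_rf_bunch(candidates):
        """Of the acceptable RF-bunch hypotheses, keep the lowest-chi2 one."""
        if not candidates:
            return None
        # The bug amounted to using max(...) here, picking the worst
        # hypothesis instead of the best one.
        return min(candidates, key=lambda c: c.chi2)

    hypotheses = [BunchCandidate(0, 3.2), BunchCandidate(1, 0.8),
                  BunchCandidate(-1, 5.1)]
    print(select_rf_bunch(hypotheses))  # BunchCandidate(bunch_id=1, chi2=0.8)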


      Recent Discussion on the GlueX Software Help List

We looked over the list 
<https://groups.google.com/forum/#!forum/gluex-software> without 
significant discussion.


      New Meeting Time?

Teaching schedules have changed. We will move this meeting and the 
HDGeant4 meeting to 3:30 pm; they will remain on Tuesdays.


      Action Item Review

 1. Figure out how to build on a system with Python 3 as the default.
 2. Look at getting CCDB constants from hallddb-farm.
 3. Develop a plan for calculating efficiencies when timing cuts on
    charged tracks hitting the BCAL depend on distributions with long,
    not-well-understood tails.

