[Halld-offline] Software Meeting Minutes, September 17, 2019
Mark Ito
marki at jlab.org
Wed Sep 18 17:15:45 EDT 2019
Please find the minutes here
<https://halldweb.jlab.org/wiki/index.php/GlueX_Software_Meeting,_September_17,_2019#Minutes>
and below.
_________________________________________
GlueX Software Meeting, September 17, 2019, Minutes
Present:
* CMU: Naomi Jarvis
* FSU: Sean Dobbs
* JLab: Alexander Austregesilo, Mark Ito (chair), David Lawrence,
  Simon Taylor, Beni Zihlmann
There is a recording of this meeting <https://bluejeans.com/s/97cGP/> on
the BlueJeans site. Use your JLab credentials to access it.
Announcements
1. Collaboration Meeting
<https://halldweb.jlab.org/wiki/index.php/GlueX-Collaboration-Oct-2019>:
Sean has proposed a list of speakers for the Offline Session on
Thursday. Alex will substitute for David and give a status of data
processing.
2. New DB Servers -- HALLDDB-A and HALLDDB-B Online
<https://mailman.jlab.org/pipermail/halld-offline/2019-September/003758.html>:
the new servers were stood up to relieve stress on halldb.jlab.org
(our main database server) from farm jobs. Testing is still in
progress, but users are welcome to try them out.
3. *No online compression this Fall*. David has discussed the issue
with Graham and they agree that compression of raw data is not ready
for the November run. In addition, use of a ramdisk on the front end,
improvements in the Data Transfer Node (for off-site transfers), and
expansion of disk space at JLab all reduce the need for immediate
relief on data volume.
Review of minutes from the last Software Meeting
We went over the minutes from September 3
<https://halldweb.jlab.org/wiki/index.php/GlueX_Software_Meeting,_September_3,_2019#Minutes>.
David gave us an update on NERSC and PSC.
* At NERSC, batch 3 of the Fall 2018 data reconstruction is finished.
80% of the output has been brought back to the Lab.
* At the Pittsburgh Supercomputing Center (PSC) there is a steady rate
of about 300 jobs a day, slower than NERSC, but with fewer job
failures. It is not clear why the pace is so slow.
* At NERSC, Perlmutter will be coming on line next year with an
attendant large increase in computing capacity.
* The XSEDE proposal at PSC has been approved with 5.9 million units.
October 1 is the nominal start date. Note that our advance award was
850 thousand units.
Report from the last HDGeant4 Meeting
We forgot to go over the minutes from the September 10 meeting
<https://halldweb.jlab.org/wiki/index.php/HDGeant4_Meeting,_September_10,_2019#Minutes>.
Maybe next time.
Reconstruction Software for the upgraded Time-of-Flight
Sean went through the reconstruction code and made the needed changes.
The DGeometry class was modified to load in the geometry information.
The new DTOFGeometry class was changed to present that information in a
more reasonable way. There were places where geometry parameters were
hard-coded. These were changed to
use the information from the CCDB-resident HDDS files. The process
benefited from the structure where the DGeometry class parses the HDDS
XML and the individual detector geometry classes turn that information
into useful parametrizations.
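As an illustration of that layering, here is a minimal sketch with
invented class and key names (the real DGeometry and DTOFGeometry
interfaces differ): one piece of code extracts raw numbers from the
parsed HDDS XML, and a detector-specific geometry class turns them into
parameters that the rest of the code uses instead of hard-coded values.

    // Illustrative sketch only; names do not match the halld_recon classes.
    #include <map>
    #include <stdexcept>
    #include <string>

    // Stand-in for values extracted from the CCDB-resident HDDS XML.
    using HDDSValues = std::map<std::string, double>;

    class TOFGeometrySketch {
      public:
        explicit TOFGeometrySketch(const HDDSValues& hdds) {
            // Formerly hard-coded numbers now come from the geometry source.
            m_barLength = Require(hdds, "tof.bar_length_cm");
            m_barWidth  = Require(hdds, "tof.bar_width_cm");
        }

        double BarLength() const { return m_barLength; }
        double BarWidth()  const { return m_barWidth; }

      private:
        static double Require(const HDDSValues& hdds, const std::string& key) {
            auto it = hdds.find(key);
            if (it == hdds.end())
                throw std::runtime_error("missing HDDS value: " + key);
            return it->second;
        }

        double m_barLength = 0.0;  // cm
        double m_barWidth  = 0.0;  // cm
    };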
Right now hits are not showing up in the simulation (HDGeant4). Fixing
this is the next task.
Fixing Crashes When Running over Data with Multiple Runs
Sean described his fix for a long-standing problem, first reported by
Elton Smith, where the ReactionFilter crashes when run over data that
contains multiple runs. This closes halld_recon issue #111
<https://github.com/JeffersonLab/halld_recon/issues/111>. In particular,
the DParticleID class assumed that the run number never changes, so the
necessary refresh of constants from the CCDB at run-number boundaries
was never done.
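A minimal sketch of that fix pattern, with hypothetical names (the real
DParticleID and CCDB calls differ): cache the run number next to the
constants and reload them whenever an event from a different run
arrives.

    // Illustrative sketch only; not the actual halld_recon code.
    #include <vector>

    class PerRunConstantsSketch {
      public:
        void ProcessEvent(int runNumber) {
            if (runNumber != m_cachedRun) {       // run boundary crossed
                m_constants = LoadConstantsForRun(runNumber);
                m_cachedRun = runNumber;
            }
            // ... use m_constants for this event ...
        }

      private:
        std::vector<double> LoadConstantsForRun(int /*run*/) {
            // Placeholder: the real code fetches run-dependent tables from CCDB.
            return std::vector<double>(10, 1.0);
        }

        int m_cachedRun = -1;                     // -1 means nothing loaded yet
        std::vector<double> m_constants;
    };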
Tagger Counter Energy Assignment Bug
Beni brought to our attention an issue that was discussed at the last
Beamline Meeting. Currently, tagger energies are set as a fraction of
the endpoint energy. But since the electron beam energy can change from
run to run, albeit by a small amount, the reported energy of a
particular tagger counter also changes from run to run, even though the
energy bin that the counter tags is really determined by the strength
of the tagger magnet's field. Richard Jones is working on a proposal
for how this should be fixed.
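A schematic numerical illustration of the effect, assuming the
simplified relation E_gamma = E_beam - E_e with the tagged electron
energy E_e fixed by the magnet field; all numbers here are invented.

    // Illustrative arithmetic only; not actual GlueX calibration values.
    #include <cstdio>

    int main() {
        const double nominalEndpoint = 11.60;  // GeV, used to define fractions
        const double runEndpoint     = 11.58;  // GeV, beam energy for this run
        const double fraction        = 0.50;   // this counter's energy fraction

        // Electron energy selected by the field, fixed across runs.
        const double electronE = (1.0 - fraction) * nominalEndpoint;  // 5.80 GeV

        const double reported        = fraction * runEndpoint;    // 5.790 GeV
        const double fieldDetermined = runEndpoint - electronE;   // 5.780 GeV
        std::printf("reported %.3f GeV vs field-determined %.3f GeV\n",
                    reported, fieldDetermined);
        return 0;
    }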
Software Versions and Calibration Constant Compatibility
Sean led us through an issue he described in an earlier email
<https://mailman.jlab.org/pipermail/halld-offline/2019-September/003761.html>
to the Offline List. The basic issue is that older versions of mcsmear
are not compatible with recent constants used in smearing the FCAL. We
discussed the issue and concluded that the root of the problem was
changing the meaning of columns in an existing table, rather than
creating a new calibration type with the new interpretation. Because of
this situation, the software has to know which interpretation is correct
for a given set of constants. Old software versions are not instrumented
to do so, of course. If the constants had instead been stored under a
different type, then the software would know which type it is using and
do the right thing. And old software, knowing only about the old type,
would do the right thing as well.
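As a sketch of why the new-type route is cleaner (with invented type
and column names, not the actual CCDB tables used for FCAL smearing):
each interpretation is bound to its own calibration type, so no version
of the software ever has to guess which meaning a given row carries.

    // Illustrative sketch only; structs and row layouts are hypothetical.
    #include <stdexcept>
    #include <vector>

    struct SmearOldStyle { double resolution; };           // original column meaning
    struct SmearNewStyle { double floorTerm, stochTerm; };  // new, incompatible meaning

    // Old software only parses rows from the old type...
    SmearOldStyle ParseOld(const std::vector<double>& row) {
        if (row.size() != 1) throw std::runtime_error("unexpected old-style row");
        return {row[0]};
    }
    // ...and new software only parses rows from the new type.
    SmearNewStyle ParseNew(const std::vector<double>& row) {
        if (row.size() != 2) throw std::runtime_error("unexpected new-style row");
        return {row[0], row[1]};
    }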
Sean is thinking about how we will address this going forward.
CCDB Ancestry Control
Mark presented a set of issues that arise with CCDB 2.0 (coming soon).
See his slides
<https://docs.google.com/presentation/d/1P5mE3SApCmeWv4oNrW58tzeD6zj9neQlr5XORigYCXM/edit?usp=sharing>
for all of the dirty details.
In CCDB 1.x we can "freeze" calibration constants in time by setting a
"calib-time" for the system to use. All calibration changes made after
that time will be ignored. Because of the hierarchical structure of
calibration "variations" there is a valid use case where the user may
want constants at the level of the named variation to float, but freeze
the constants coming from variations higher in the hierarchy. This use
case is not supported under CCDB 1.x, but is provided for in CCDB 2.0.
The implementation provides a rich set of choices for freezing (or not
freezing) variations in the hierarchy. Too rich, in fact. The discussion
was about how to limit the scope of what can be done so users are
presented with an understandable, tractable set of options. There was a
lot of discussion. See the recording if interested.
No final decision was made, but at least by the end of the meeting
everyone was aware of the nature of the problem.
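For concreteness, here is a conceptual model of the per-variation
freezing under discussion, with invented types that are not the CCDB
API: each variation in the hierarchy may carry its own optional freeze
time, and a lookup walks from the named variation up to the root,
ignoring any assignment committed after that level's freeze time. CCDB
1.x, by contrast, allows only one global calib-time.

    // Conceptual model only; not the CCDB implementation or API.
    #include <ctime>
    #include <optional>
    #include <string>
    #include <vector>

    struct Assignment {
        std::time_t madeAt;            // when the constants were committed
        std::vector<double> values;
    };

    struct Variation {
        std::string name;                       // e.g. "my_test" -> "default"
        std::optional<std::time_t> freezeTime;  // unset: float with new commits
        std::vector<Assignment> assignments;
    };

    // Return the newest assignment visible in the hierarchy, honoring each
    // level's own freeze time.
    const Assignment* Resolve(const std::vector<Variation>& leafToRoot,
                              std::time_t now) {
        for (const auto& var : leafToRoot) {
            const std::time_t limit = var.freezeTime.value_or(now);
            const Assignment* best = nullptr;
            for (const auto& a : var.assignments)
                if (a.madeAt <= limit && (!best || a.madeAt > best->madeAt))
                    best = &a;
            if (best) return best;   // found at this level; stop searching
        }
        return nullptr;              // nothing found anywhere in the hierarchy
    }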