[Halld-offline] Hall D Software Meeting Minutes, October 16, 2018
marki at jlab.org
Wed Oct 17 19:19:56 EDT 2018
Find the minutes here
Minutes, GlueX Offline Software Meeting, October 16, 2018
* *CMU: * Naomi Jarvis
* *FIU: * Mahmoud Kamel
* *FSU: * Sean Dobbs
* *IU: * Ahmed Foda
* *JLab: * Alex Austregesilo, Thomas Britton, Mark Ito (chair), David
Lawrence, Simon Taylor, Beni Zihlmann
* *W&M: * Justin Stevens
There is a recording of this meeting <https://bluejeans.com/s/KkfFM/> on
the BlueJeans site. Use your JLab credentials to access it.
1. New version of MCwrapper: version 2.0.2
The bot is now in beta!
2. New version of build_scripts: version 1.4.2
A two-stage build process is supported.
3. New version of halld_recon: recon-ver03.2
This version is being used in the monitoring launch at NERSC.
4. More new versions: version_3.7_jlab.xml feat. halld-recon 3.2.0,
halld-sim 3.5.0, gluex_root_analysis 0.5
A periodic-code-update version set.
Review of minutes from the October 2 meeting
We went over the minutes.
We spent some time discussing the issue with the kinematic fitter reported
by Hao Li and Mike McCracken; see the thread on the software help list.
Beni pointed out that this is an issue that this working group should address.
[Added in press: Sean created a GitHub issue to track progress on this
problem <https://github.com/JeffersonLab/halld_recon/issues/39>. He
assigned the issue to himself and Thomas.]
David and Mark reported that they met with Curtis two weeks ago to start
planning for the review. Curtis has put together a wiki page to collect material.
They started a list of topics to address in the short time allowed, most
importantly an updated estimate of future computing resource needs. They
also want to highlight recent use of off-site computing resources.
Getting to the ROOT of things...
Mark reviewed the recent email from Graham Heyes
on opening a communication channel between local ROOT users and the ROOT
development team. Bob Michaels is organizing a meeting with Alex Naumann
of CERN. The meeting will be announced when plans are set. Interested
parties are welcome.
Hardware needs, feedback to Chip
We reviewed Chip's presentation (contact me for access) on options we
have for spending on computing resources. Several of us
(Mark, David, Alex, Sean, Thomas, Curtis, and Richard) met with him
about two weeks ago, where he solicited input on what our needs are
vis-a-vis FY19 equipment purchases. Discussion points:
* Alex told us that the 2017-01 data produced 110 TB of REST files.
The associated ROOT trees are about the same size.
* The 2018-01 data should be about three times the size of 2017-01.
* We have been running into problems with limited Lustre-based disk
space for the past year or so.
* David told us that the launches at NERSC would benefit from dedicated
space to stage the raw data files so that that effort does not
compete with others.
In the end we settled on rough proportions of where we would like our
share of resources to go (on a dollar basis):
* 50% Lustre-based disk space (about a petabyte of space)
* 40% Computing nodes (16 40-core nodes (80 hyper-threads) or 5.6
million core hours per year)
* 10% SSD disk space (about 25 TB on top of existing 25 TB dedicated
to raw data input staging; could be much cheaper (i.e., more space)
if other Halls want a like amount)
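As a quick sanity check on the figures above, the quoted capacities can be reproduced with a few lines of arithmetic. The 110 TB REST figure, same-size ROOT trees, and the three-fold 2018-01 scaling are taken from the discussion points; round-the-clock node availability is an assumption:

```python
# Core-hours from the proposed compute purchase:
nodes = 16
cores_per_node = 40                   # 80 hyper-threads per node
core_hours = nodes * cores_per_node * 24 * 365
print(core_hours)                     # 5,606,400: the quoted ~5.6 million core hours/year

# Rough disk total behind the ~1 PB Lustre request
# (assumes 110 TB of 2017-01 REST files, ROOT trees of the
# same size, and 2018-01 at three times the 2017-01 volume):
rest_2017 = 110                       # TB
total_2017 = 2 * rest_2017            # REST + ROOT trees
total_2018 = 3 * total_2017
print(total_2017 + total_2018)        # 880 TB, i.e. on the order of a petabyte
```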
We would want to defer the purchase of the computing nodes under the
assumption that a later purchase might get us more computing per dollar.
The need for SSD disk space is not certain, but several applications
might potentially benefit. This small fraction would serve to gain
experience to see if more high-speed disk would help us.
Mark will talk to Hall B about what they are planning.
Review of Offline Work Packages
We went over the list of Analysis Software Work Packages
Mark did a pass at marking up the list and Sean commented on the
mark-ups on the corresponding "discussion" page.
If people have ideas about other work packages, please add them to the list.
Mark agreed to break the list into two categories (Analysis and General)
and fill in names of those he knows will volunteer to supervise packages.
At the last meeting several topics were raised.
Encouraging wider participation in these meetings
* The work packages may be a way to get new collaboration members to
attend once they get volunteered to do the work.
* Some of the expert-level discussion might not be of general interest.
* Many of the topics discussed at other working group meetings,
especially the Analysis Working Group, might be more appropriately
hosted at this meeting. Those topics should be identified by people
who attend these other meetings.
Rename this meeting
One of us (a.k.a. Naomi) argued that "Offline" in the title did not have
an auspicious connotation, e.g., "off the main line" or "off topic",
i.e., "irrelevant". We formed a consensus around "Hall D Software" rather
than "Offline Software". We are reprinting the business cards now.
Tutorials and Workfests
Past gatherings have proved useful. There was a lot of discussion, but
we arrived at
1. having occasional workfests focused on specific work packages
limited to those interested in the specific topic.
2. once a year, on the day before the Spring Collaboration Meeting,
having a half-day software tutorial like we did last Spring.
Review of recent pull requests
David explained his recent pull request (#38)
<https://github.com/JeffersonLab/halld_recon/pull/38>. There is now an
option whereby DANA applications can create a local copy of the CCDB
SQLite file indicated in JANA_CALIB_URL. This will help with slow
start-up, due to multiple processes hitting a single file, on the
monitoring farm and other similar applications.
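The idea behind the pull request can be sketched in a few lines. This is an illustrative Python sketch, not the actual halld_recon code; the function name `stage_local_sqlite` and the `sqlite:///` URL handling are assumptions for the example:

```python
import os
import shutil
import tempfile

def stage_local_sqlite(calib_url):
    """If the calibration URL points at a shared SQLite file, copy it
    to local scratch space and return a URL for the local copy;
    otherwise return the URL unchanged. (Illustrative only.)"""
    prefix = "sqlite:///"
    if not calib_url.startswith(prefix):
        return calib_url
    shared_path = calib_url[len(prefix):]
    local_dir = tempfile.mkdtemp(prefix="ccdb_")
    local_path = os.path.join(local_dir, os.path.basename(shared_path))
    # One copy per node; all local processes then read the local file
    # instead of hammering the single shared copy.
    shutil.copy(shared_path, local_path)
    return prefix + local_path
```

A process would call this once on the value of JANA_CALIB_URL before opening the calibration database, so that many workers on one node share a local copy rather than all reading the same networked file.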
* This page was last modified on 17 October 2018, at 19:15.