[Halld-offline] Offline Software Meeting Minutes, January 10, 2018

Mark Ito marki at jlab.org
Wed Jan 10 17:35:47 EST 2018


Please find the minutes below and at


   - Mark


    GlueX Offline Meeting, January 10, 2018, Minutes


  * *CMU*: Curtis Meyer
  * *FIU*: Mahmoud Kamel, Joerg Reinhold
  * *Glasgow*: Peter Pauli
  * *JLab*: Alex Austregesilo, Amber Boehnlein, Thomas Britton, Mark
    Ito (chair), David Lawrence, Simon Taylor
  * *Yerevan*: Hrach Marukyan

There is a recording of this meeting <https://bluejeans.com/s/q6j0w/> on 
the BlueJeans site. Use your JLab credentials to access it.


 1. New simulation branch
    Sean's email identifies the branch we should be using for simulation
    with the latest reconstruction launch.
 2. MCwrapper 1.12
    Thomas has released a new version. It supports submission to the Open
    Science Grid. Changes coming in the next release:
      * Fix to a problem identified by Jon Zarling having to do with
        RCDB on RHEL6/CentOS6.
      * Fix to a problem pointed out by Nacer Hamdi having to do with
        amorphous radiator runs.

      Review of minutes from the last meeting

We looked at the minutes of the meeting on December 13.

We noted that we still need a tagged version of CCDB.

      Refresh ROOT version?

We looked at the list of releases of ROOT 
<https://root.cern.ch/releases> and noted that the version that we are 
using at present, 6.08.06, is already marked as "Old" on the site. 
Normally we would consider upgrading.

  * David reported that the latest version changes the interface into
    the TMVA routines.
  * Alex pointed out that we are right at the beginning of a run, not a
    great time to change the software.
  * Others pointed out that the next period when we will not be running
    is several months from now (hopefully).
  * No one present had an example of a new feature that we would benefit
    from.

We decided for now to do nothing. If collaborators have opinions about 
an upgrade, particularly if there are new features they want to take 
advantage of, please write to the offline list or contact Mark.

      New track matching on the master branch

We went through Mark's email 
from before the holidays describing reduced efficiency for FCAL photons 
associated with a large change in the track matching code from Simon.

  * Simon reported that he thinks the anomalies are due to a lack of
    tuning of matching parameters with the new algorithm. He will look
    into this.
  * Alex noted that a significant change like this makes comparison with
    previous reconstruction results difficult when trying to monitor
    incoming data.
  * Alex also pointed out some strangeness in the TOF occupancy for
    track-matched hits.
  * We discussed options for maintaining availability of the old algorithm:
     1. The latest tag was applied before the change, so that can be
        used. Alex remarked that there are changes made after that tag
        that one might want to have.
     2. We discussed how hard it would be to reverse the changes on the
        master branch. There is some fear that it might not be easy.
     3. Another option is to create a parallel branch that has all
        changes except those brought in for the new algorithm. This
        option faces difficulties similar to the previous one.

We decided to keep the changes on the master branch for now while Simon 
pursues his parameter-tuning studies. In the meantime Mark will look at 
the feasibility of implementing option (3).
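Option (3) could in principle be carried out with git revert. A minimal sketch in a scratch repository (the file names, commit messages, and branch names here are hypothetical illustrations, not the actual sim-recon history):

```shell
# Scratch repository illustrating option (3): a parallel branch carrying
# every change on master except one large commit. All names are hypothetical.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.org
git config user.name demo
echo "old matching" > tracking.txt
git add tracking.txt
git commit -qm "baseline"
echo "new matching" > tracking.txt
git commit -qam "large track-matching change"
big=$(git rev-parse HEAD)          # the commit we want to exclude
echo "unrelated fix" > other.txt
git add other.txt
git commit -qm "later unrelated fix"
# Parallel branch: everything on master minus the large change.
git checkout -qb no-new-matching
git revert --no-edit "$big" > /dev/null
```

After the revert, the parallel branch still carries the later unrelated fix but the file touched by the large change is back to its baseline content. In the real repository the revert could of course conflict with later commits that touch the same code, which is the difficulty noted above.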

We discussed how to merge in changes to the master when the number of 
changed lines of code is large and the effects potentially significant. 
The majority of pull requests are clearly not of this nature. We 
coalesced on a policy where, if there is concern about a large change, 
the proposed branch should be tested by someone other than the author, 
beyond the light testing we get with the pull-request auto-build. For 
example, the offline monitoring suite can be run against the branch. 
Collaborators should not blithely merge in a pull request that is large 
without some discussion in the pull-request conversation on GitHub. Here 
"large" is somewhat vague; we are hoping we will collectively recognize 
a large change when we see one.
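One way for a reviewer to test a proposed branch locally, beyond the auto-build, is to fetch the pull request's head ref, which GitHub exposes at refs/pull/&lt;N&gt;/head. A sketch, with a scratch bare repository standing in for GitHub and a hypothetical PR number 7:

```shell
# Sketch of checking out a pull request locally for testing. GitHub
# exposes each PR at refs/pull/<N>/head; here a scratch bare repository
# stands in for GitHub, and PR number 7 is hypothetical.
set -e
base=$(mktemp -d)
git init -q --bare "$base/remote.git"

# Author side: push a proposed change to the PR-style ref.
src=$(mktemp -d)
cd "$src"
git init -q
git config user.email demo@example.org
git config user.name demo
echo "proposed change" > fix.txt
git add fix.txt
git commit -qm "proposed fix"
git push -q "$base/remote.git" HEAD:refs/pull/7/head

# Reviewer side: fetch the PR head into a local branch and check it out,
# ready for running the offline monitoring suite or other tests.
dst=$(mktemp -d)
cd "$dst"
git init -q
git fetch -q "$base/remote.git" pull/7/head:pr-7
git checkout -q pr-7
```

Against the real repository the reviewer-side commands would use the GitHub remote and the actual PR number, e.g. `git fetch origin pull/<N>/head:pr-<N>`.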

      Docker Containers + GlueX

David has been working on getting reconstruction jobs running at NERSC 
in the context of his LDRD grant for JANA2. Doing that involves using 
containers, a technology that has gained widespread use in recent years. He 
described his recent experience and plans for further work. Please see 
his slides 
for all of the details.

      Hall D Disk Usage

Alex brought our attention to the level of use of work, cache, and 
volatile to support recent reconstruction and analysis launches. We are 
near the upper limits on all of them. See the SciComp webpages 
<https://scicomp.jlab.org/scicomp/#/?username=> for the status. Work 
especially has been a problem. With the move to the new fileserver, we 
run into hard limits when we exceed our allotment and that allotment is 
much smaller than we were using before the move. The following table was 
shown, reflecting work disk use as of December 31.

Sum of all files owned by user:

  Rank   Total Size (GB)   User
  ----   ---------------   --------
     1           5305.20   acernst
     2           4032.21   gxproj5
     3           3716.53   jrsteven
     4           3380.35   somov
     5           3038.87   gluex
     6           2715.98   staylor
     7           2678.60   stepi
     8           2117.81   aaustreg
     9           1815.50   ilarin
    10           1703.46   gxproj1

David suggested sending out this table to the offline email list on a 
regular basis. In any case, collaborators are encouraged to evaluate the 
amount of data that they need spinning. The rest should be archived to tape.
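As a starting point for that evaluation, a minimal sketch using standard tools to list the largest subdirectories of a work area, largest first (the default path is an assumption; point WORKDIR at your own directory):

```shell
# Summarize per-subdirectory usage under a work area, largest first.
# The default path below is a hypothetical example; override WORKDIR
# with your own area on the work disk.
WORKDIR="${WORKDIR:-/work/halld/home/$USER}"
du --block-size=1M --max-depth=1 "$WORKDIR" 2>/dev/null | sort -rn | head -20
```

The output is one line per subdirectory with its size in megabytes, which makes it easy to spot candidates for archiving.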

Mark Ito, marki at jlab.org, (757)269-5295
