[Halld-offline] Offline Software Meeting Minutes, January 7, 2015

Mark M. Ito marki at jlab.org
Thu Jan 8 08:31:00 EST 2015


Please find the minutes below.

   -- Mark

GlueX Offline Meeting, January 7, 2015


  * *FIU*: Mahmoud Kamel
  * *FSU*: Aristeidis Tsaris
  * *JLab*: Mark Ito (chair), David Lawrence, Paul Mattione, Kei Moriya,
    Nathan Sparks, Simon Taylor
  * *NU*: Sean Dobbs

Review of Minutes from December 10

We looked over the minutes 
of the last meeting.

  * Kei is using the gxproj1 account for offline monitoring jobs and
    Mark is using gxproj2 for Data Challenge 3 (DC3).
  * Mark was able to get the stand-alone version of Dmitry's Run
    Conditions Database (RCDB) running. He will circulate instructions
    on how to do that. He will also approach the Computer Center on
    installing it on halldweb1.
  * Paul mentioned that he could not find documentation on the version
    management system that Mark presented last time. Turns out it does
    not exist yet. Paul suggested that the documentation be featured in
    the "getting started" section of the wiki.
  * Mark presented a new version
    of the main "Offline Software" wiki page. It is still under
    development.
DAQ and TTab Plugins Converted into Libraries

David reviewed for us the email 
he sent today. Now it is no longer necessary to specify these plugins as 
JANA command options.
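As an illustration of what this change means in practice, here is a hypothetical before/after invocation. The exact executable name and plugin list are assumptions for the sketch (hd_root is the usual GlueX reconstruction executable, and JANA takes plugins via the -PPLUGINS= configuration parameter); the specific monitoring plugin name is made up for the example.

```shell
# Before: the DAQ and TTab plugins had to be requested explicitly
# on the JANA command line (illustrative invocation):
hd_root -PPLUGINS=DAQ,TTab,monitoring_hists run001234.evio

# After: DAQ and TTab are linked in as libraries, so only the
# analysis plugin(s) need to be listed:
hd_root -PPLUGINS=monitoring_hists run001234.evio
```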

HDDM Versions and Backward Compatibility

We reviewed principles 
that Richard proposed for future changes to REST format. The issue is 
preservation of backward compatibility, being able to analyze old REST 
data with new code. The scheme does the preservation at the cost of 
complication in the element names for those that are changed from the 
elements they replace. We did not decide if this approach should be 
enshrined in policy, but will discuss it further in the future.

Commissioning Run Review

We reviewed tasks and progress from the recent run.

Offline Monitoring Report

Kei presented issues 
from the offline monitoring reconstruction jobs during the run. The slides:

  * Offline Monitoring Summary
  * Disk Usage For Each Week
  * 2-track Skim Output
  * Number of Files Processed
  * EVIO Statistics
  * Errors found:
      o DAQ plugin
      o Too Many FDC Hits
      o Insufficient Buffer Space
      o Bad Alloc
      o Mismatch in Trigger Bank
      o Crash
      o Unknown Module Type
      o F1TDC Block Header
      o JEventSource_EVIO::MergeObjLists
  * Looking Ahead

Commissioning Branch-to-Trunk Migration

Most of the development that went on during the run was checked into the 
commissioning branch. A lot (but not all) of this code now has to be 
moved to the trunk, in particular those changes that improve the 
reconstruction in general. Simon has been managing the commissioning 
branch and will look into doing this transfer.


Production of REST Data

There was interest during the run for REST-formatted data for high-level 
analysis. Kei has already produced these files (see his talk above). 
They can be found in 
A lot of recent analysis has been based on raw data with corrections 
done on raw or reconstructed quantities to get better results. At 
present these corrections are not reflected in the REST data. In order 
to make future production of REST data more useful we need to move 
corrections and calibrations into the standard reconstruction.
Sean will coordinate production/update of calibration constants and 
capture of correction algorithms through the Calibration Working Group. 
The goal is to make a push for some substantial progress and then to 
re-make the REST data set.


Data Reconstruction Trains

Paul brought up a discussion we had on the email list on 
"data reconstruction trains". The idea is to have a single set of jobs 
do reconstruction on raw data and have several "cars" attached to the 
jobs for specialized purposes. Each individual project would then avoid 
having to fetch the data from tape and pay the CPU price of 
reconstruction. However such an effort requires significant coordination 
and management. We did not move to start designing a system right away 
since (a) the need is only prospective at this point and (b) a lot of 
the savings for high-level analysis can be achieved with useful 
REST-formatted data.

Storage of Data Taken between Runs

David pointed out that there will be cosmic and test running in the 
coming months before the spring run and we need to decide where in the 
tape directory hierarchy they should be stored. The consensus was to 
simply create a new run period directory in parallel to 
"RunPeriod-2014-10" (used to store data from the commissioning run just 
ended), something like "RunPeriod-2015-01".

Mark M. Ito, Jefferson Lab, marki at jlab.org, (757)269-5295
