[Halld-offline] Offline Software Meeting Minutes, June 14, 2017
Mark Ito
marki at jlab.org
Fri Jun 16 18:16:59 EDT 2017
Addendum:
Lustre Issues
Sean noted a couple of recent issues involving the Lustre disk system
and GlueX work.
1. He reminded us that the SQLite versions of the CCDB and RCDB do not
work if the database files are stored on a Lustre file system. They
must be copied to a more traditional file system before running
GlueX software against them (see the sketch after this list).
2. Recently, calibration jobs run under the gxproj3 account were flagged
by SciComp as causing excessive I/O operations on Lustre. Sean believes
that this was due to many jobs starting at the same time, each one
copying its input data file to the local farm-node disk before
processing, a practice recommended by SciComp some months ago. After
further consultation with SciComp, he has switched to analyzing the
data directly from the Lustre-based cache disk (no local copy). SciComp
has not weighed in on whether this helps the situation or not. No news
is good news?
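As a concrete illustration of item 1, here is a minimal Python sketch of
the copy-then-run workflow. The file paths, environment-variable names,
and the hd_root invocation are assumptions for illustration, not verified
settings; check them against your own setup.

  import os
  import shutil
  import subprocess
  import tempfile

  LUSTRE_CCDB = "/work/halld/ccdb.sqlite"   # assumed Lustre location of the CCDB file
  LUSTRE_RCDB = "/work/halld/rcdb.sqlite"   # assumed Lustre location of the RCDB file

  # Copy the SQLite files to a local (non-Lustre) scratch directory first.
  scratch = tempfile.mkdtemp(prefix="gluex_db_")
  local_ccdb = shutil.copy(LUSTRE_CCDB, scratch)
  local_rcdb = shutil.copy(LUSTRE_RCDB, scratch)

  # Point the GlueX software at the local copies via environment variables.
  # (The variable names follow the usual CCDB/RCDB connection conventions,
  # but treat them as assumptions.)
  env = dict(os.environ)
  env["JANA_CALIB_URL"] = "sqlite:///" + local_ccdb
  env["RCDB_CONNECTION"] = "sqlite:///" + local_rcdb

  # Run the reconstruction against the local database copies.
  subprocess.check_call(["hd_root", "input.evio"], env=env)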
On 06/16/2017 01:23 PM, Mark Ito wrote:
>
> Please find the minutes below and in the standard location
> <https://halldweb.jlab.org/wiki/index.php/GlueX_Offline_Meeting,_June_14,_2017#Minutes>.
>
> _______________
>
>
> Minutes
>
> Present:
>
> * *CMU*: Naomi Jarvis
> * *JLab*: Thomas Britton, Brad Cannon, Eugene Chudakov, Hovanes
> Egiyan, Mark Ito (chair), Dmitry Romanov, Beni Zihlmann
> * *NU*: Sean Dobbs
> * *UConn*: Richard Jones
> * *Yerevan*: Hrach Marukyan
>
> There is a recording of this meeting <https://bluejeans.com/s/uEQso/>
> on the BlueJeans site. Use your JLab credentials to access it.
>
>
> Announcements
>
> 1. New release of HDDS: version 3.11
> <https://mailman.jlab.org/pipermail/halld-offline/2017-June/002796.html>.
> Mark noted that this release contains recent changes to the target
> and start counter geometry from Simon Taylor.
> 2. hdpm 0.7.0
> <https://mailman.jlab.org/pipermail/halld-offline/2017-June/002810.html>.
> Nathan went over his announcement. New features include:
> * AmpTools' new location at GitHub is handled.
> * New package: PyPWA
> * Revised actions for hdpm sub-commands.
>
>
> Review of minutes from the last meeting
>
> We went over the minutes from May 31
> <https://halldweb.jlab.org/wiki/index.php/GlueX_Offline_Meeting,_May_31,_2017#Minutes>.
>
>
>
> Progress on the OSG
>
> Richard gave an update on progress with the OSG. For all the details,
> please see the recording <https://bluejeans.com/s/uEQso/>.
> Some notes:
>
> * scosg16.jlab.org is fully functional as an OSG submit host now.
> * Jobs similar to Data Challenge 2 are going out.
> * *Using containers* to deliver our software to remote nodes and
> run it there:
> o Docker <https://en.wikipedia.org/wiki/Docker_(software)> was
> the subject of initial focus. It turns out Docker was designed
> to solve network (and other system-resource) isolation
> problems, e.g., for deployment of web services on a foreign OS.
> o Singularity <http://singularity.lbl.gov/> is aimed at mobility
> of compute, which is the problem we are trying to solve. OSG
> has embraced it as the on-the-grid-node-at-run-time solution.
> o Richard's original solution was to make a straightforward
> Singularity container with everything we need to run. That
> came to 7 GB, too large to use under OASIS (OSG's file
> distribution system).
> o With guidance from OSG folks, he has implemented a solution
> that allows us to run. [The details are many and various and
> will not be recorded here. Again, see the recording.] The
> broad features are:
> + Singularity on the grid node runs using system files
> (glibc, ld, system-provided shared libraries, etc.) stored
> outside the container on OASIS.
> + Software is distributed in two parts: the system files
> mentioned in the previous item, and our standard
> built-by-us GlueX software stack, distributed via OASIS
> without any need for containerization.
> * *Scripts for running the system*
> o osg_container.sh: script that runs on the grid node
> o my_grid_job.py
> + runs the generator, simulation, smearing, reconstruction,
> and analysis steps (a hypothetical sketch of such a chain
> appears after this list)
> + provides hooks for submitting; no knowledge of Condor is
> required
> + will report on job status
> o Richard will send out an email with instructions.
> * *Problem with CentOS 6 nodes*
> o Some grid nodes are hanging on the hd_root step.
> o CentOS 7 nodes seem OK. CentOS 6 nodes have the problem.
> Unfortunately, the majority of nodes out there are
> CentOS-6-like, including all of the nodes deployed at GlueX
> collaborating university sites.
> o The issue seems to be related to access to the SQLite form of
> the CCDB. The OSG folks are working on a solution, and David
> Lawrence has been consulted. Dmitry thinks he has a way
> forward that involves deploying an in-memory realization of
> the database (see the second sketch after this list).
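>
> To illustrate the kind of chain my_grid_job.py drives (see the
> script list above), here is a hypothetical Python sketch. The
> executable names, options, and file names are assumptions for
> illustration only and are not taken from Richard's actual script.
>
>   import subprocess
>   import sys
>
>   def run(step, cmd):
>       """Run one stage of the chain and report its status to the job log."""
>       print("[job] starting %s: %s" % (step, " ".join(cmd)))
>       rc = subprocess.call(cmd)
>       print("[job] finished %s (exit code %d)" % (step, rc))
>       if rc != 0:
>           sys.exit(rc)          # stop the chain at the first failure
>
>   # Generator -> simulation -> smearing -> reconstruction/analysis.
>   # Executables, arguments, and file names below are assumed, not verified.
>   run("generation",     ["genr8", "input.gen"])
>   run("simulation",     ["hdgeant"])
>   run("smearing",       ["mcsmear", "hdgeant.hddm"])
>   run("reconstruction", ["hd_root", "hdgeant_smeared.hddm"])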
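>
> The minutes do not spell out Dmitry's in-memory approach, but as a
> minimal sketch of what realizing an SQLite database in memory can
> look like, here is an example using the sqlite3 module's backup API
> (requires a reasonably recent Python 3); the database file name is
> an assumption.
>
>   import sqlite3
>
>   # Load an on-disk SQLite snapshot (e.g. a CCDB file; path is assumed)
>   # into an in-memory database, so later queries do no file-system I/O.
>   disk_db = sqlite3.connect("ccdb.sqlite")
>   mem_db = sqlite3.connect(":memory:")
>   disk_db.backup(mem_db)        # copy every page into memory
>   disk_db.close()
>
>   # All subsequent access goes to RAM only.
>   tables = mem_db.execute(
>       "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
>   print([name for (name,) in tables])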
>
>
> Event Display
>
> Dmitry and Thomas will give an update at the next meeting.
>
>
> Other Items
>
> * Brad mentioned that our Doxygen documentation pages are down. Mark
> will take a look.
> * Eugene asked about the manner in which we document details of
> simulation runs and whether enough information is retained to
> reproduce the results. Mark showed him the site for sim 1.2.1
> <https://halldweb.jlab.org/gluex_simulations/sim1.2.1/> as an
> example of what we do for the "public" simulation runs.
>
>
>
> _______________________________________________
> Halld-offline mailing list
> Halld-offline at jlab.org
> https://mailman.jlab.org/mailman/listinfo/halld-offline