[Halld-offline] Offline Software Meeting Minutes, October 4, 2017
Mark Ito
marki at jlab.org
Thu Oct 5 13:25:32 EDT 2017
Folks,
Please find the minutes below and at
https://halldweb.jlab.org/wiki/index.php/GlueX_Offline_Meeting,_October_4,_2017#Minutes
.
- Mark
__________________________
GlueX Offline Meeting Minutes, October 4, 2017
Present:
* CMU: Naomi Jarvis
* FIU: Joerg Reinhold
* FSU: Sean Dobbs
* JLab: Alex Austregesilo, Amber Boehnlein, Thomas Britton, Eugene
  Chudakov, Sebastian Cole, Mark Ito (chair), David Lawrence, Sascha
  Somov, Simon Taylor, Beni Zihlmann
There is a recording of this meeting <https://bluejeans.com/s/1KDeL/> on
the BlueJeans site. Use your JLab credentials to access it.
Announcements
1. new tag of mcsmear branch of sim-recon, version 3
<https://mailman.jlab.org/pipermail/halld-offline/2017-September/002948.html>.
Sean released a new tag of the mcsmear development branch of
sim-recon. Mark has built it at JLab.
2. old, empty volatile directories now being deleted
<https://mailman.jlab.org/pipermail/halld-offline/2017-September/002955.html>
Directories on the volatile disk that are (a) empty and (b) remain
so for three months are now being deleted.
3. Farm jobs should use SQLite form of the CCDB
<https://mailman.jlab.org/pipermail/halld-offline/2017-September/002954.html>
If hundreds of farm jobs access the MySQL database server for
calibration constants, the server gets overwhelmed. Use the
disk-resident SQLite version of the database when running on the
farm; see the sketch below.
* Thomas has added an option to MCwrapper for selecting SQLite
files for CCDB.
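As an illustration, here is a minimal sketch of how a job script
might select the SQLite file, assuming the standard CCDB_CONNECTION
and JANA_CALIB_URL environment variables; the file path and the
hd_root invocation are placeholders, and the actual MCwrapper option
may look different:

    import os
    import subprocess

    # Placeholder path to a local SQLite snapshot of the CCDB.
    sqlite_file = "/path/to/ccdb.sqlite"

    # Point CCDB (and JANA calibration lookups) at the disk-resident
    # SQLite file instead of the MySQL server, so that hundreds of
    # simultaneous farm jobs do not overwhelm the server. Note the
    # four slashes: "sqlite:///" plus an absolute path.
    os.environ["CCDB_CONNECTION"] = "sqlite:///" + sqlite_file
    os.environ["JANA_CALIB_URL"] = os.environ["CCDB_CONNECTION"]

    # Run the reconstruction program with the modified environment
    # (hypothetical invocation).
    subprocess.run(["hd_root", "input.evio"], check=True)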
Review of minutes from the last meeting
We went over the minutes from September 20
<https://halldweb.jlab.org/wiki/index.php/GlueX_Offline_Meeting,_September_20,_2017#Minutes>.
* Automatic updates of the Oasis file system are still not working
  without manual intervention. Mark has a ticket in with the Grid
  folks <https://ticket.opensciencegrid.org/34965> asking for guidance
  on how to proceed.
* The first monthly Analysis Launch is now finished
<https://mailman.jlab.org/pipermail/halld-physics/2017-October/001144.html>.
HDvis update
Thomas guided us through the latest version of the new event display
<https://halldweb.jlab.org/talks/2017/HDvis2/js/event.html>. See the
demo starting at 10:00 in the recording.
* The orthographic view is now in.
* Thomas has been talking to groups about what they want to see in the
display of their detector system.
* He will give a demo at the collaboration meeting.
* David requested that an axis indicator appear.
* Naomi suggested fixed, standard views for re-orienting oneself.
Thomas said that is in the plan.
* Added in press: there was a glitch with mouse-over in the demo.
Subsequent to the meeting, Thomas found and fixed it. It appeared
only with the Firefox browser.
Reducing work disk usage
Mark went over his recent email
<https://mailman.jlab.org/pipermail/halld-offline/2017-September/002952.html>.
Some points of information:
* We are getting a new work disk. It will be ZFS, not Lustre.
* The new disk is significantly smaller than the work space we are
using now. We will have to reduce our usage from 77 TB down to 45 TB.
* Some reduction (about 10 TB) has already occurred in response to
Mark's email.
* We looked at the Work Disk Usage Leader-board
<https://halldweb.jlab.org/disk_management/work_report.html>.
* Mark reported that SciComp seems to have been slowly reducing
our cache and volatile reservations in response to ever-increasing
usage on work.
o Work is not formally managed. The work allocation for a UNIX
group is really the group's total Lustre usage minus its volatile
usage minus its cache usage. Volatile and cache, however, are
managed by directory location, so there are corner cases. As
unmanaged work usage grows, cache and volatile feel the squeeze.
See the sketch after this list for a toy version of the arithmetic.
* Mark will be contacting individuals and consulting on reducing their
work disk footprint.
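To make the accounting concrete, here is a toy calculation of the
subtraction described above; all three input numbers are invented
for the example:

    # Toy illustration of the work-disk accounting (invented numbers).
    # "Work" usage for a group is whatever is left of its total Lustre
    # usage after the managed volatile and cache usage is subtracted.
    total_lustre_tb = 150.0  # hypothetical total group usage on Lustre
    volatile_tb = 40.0       # hypothetical volatile usage
    cache_tb = 33.0          # hypothetical cache usage

    work_tb = total_lustre_tb - volatile_tb - cache_tb
    print(f"effective work usage: {work_tb:.1f} TB")  # prints 77.0 TB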
If you have old, unused files under /work/halld, now would be a great
time to delete them or archive them to tape.
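One way to find candidates is to total up usage by top-level
directory. A minimal sketch, assuming Python is available on the
machine and using an illustrative 1 TB reporting threshold:

    import os

    ROOT = "/work/halld"
    THRESHOLD = 1 * 1024**4  # report directories above 1 TB

    # Sum file sizes under each top-level directory of the work disk.
    usage = {}
    for entry in os.scandir(ROOT):
        if not entry.is_dir(follow_symlinks=False):
            continue
        total = 0
        for dirpath, dirnames, filenames in os.walk(entry.path):
            for name in filenames:
                try:
                    total += os.lstat(os.path.join(dirpath, name)).st_size
                except OSError:
                    pass  # file vanished or is unreadable; skip it
        usage[entry.path] = total

    # Print the largest directories first so owners know where to trim.
    for path, size in sorted(usage.items(), key=lambda kv: -kv[1]):
        if size >= THRESHOLD:
            print(f"{size / 1024**4:6.2f} TB  {path}")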
Preliminary Tape and Disk Usage Projections
Mark went over a spreadsheet
<https://docs.google.com/spreadsheets/d/1QQQ2R3QrJkJgN37lt9yRlhVylk4KDJQKAn8m6Is8paM/edit?usp=sharing>
summarizing the files produced by the various launches for the Spring 16
and Spring 17 runs. The input was an interview with Paul Mattione and
Alex on the previous day. He outlined two strategies for what we might
want to keep on disk: high-usage and low-usage. The high-usage strategy
requires 260 TB of spinning disk, covering both Spring 16 and Spring 17
running; the low-usage strategy requires only 86 TB. He noted that our
current usage is close to the middle of this range.
This is a work in progress. Beni pointed out that Monte Carlo data is
not counted. Calibration skims were left out as well; Mark has not yet
done the research on those. At this point, these numbers are lower
limits.
Review of recent pull requests
Paul checked in a major change to the analysis library. He will describe
the new stuff at the collaboration meeting next week.
A couple of the recent requests
<https://github.com/JeffersonLab/sim-recon/pulls?q=is%3Apr+is%3Aclosed>
(one from David, one from Will McGinley) showed nearly 100 files
differing between the proposed branch and the master branch, even
though only a few changes were actually being proposed. This was
because Paul's analysis library changes got merged before the requests
in question did. Since there were no conflicts between the two
branches, these merges can go ahead without causing any problems.
Review of recent discussion on the GlueX Software Help List
We went over the recent threads
<https://groups.google.com/forum/#%21forum/gluex-software>.
As a result of one discussion
<https://groups.google.com/forum/#%21topic/gluex-software/2FqnXJepUY8>,
Mark built the version of the code used in the last reconstruction
launch on Spring 16 data. In the future, we may want to do builds of
the code for certain launches on the group disk as a matter of course.
--
Mark Ito, marki at jlab.org, (757)269-5295