<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=utf-8">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    Folks,<br>
    <br>
    Find the minutes below and at
    <a class="moz-txt-link-freetext" href="https://halldweb1.jlab.org/wiki/index.php/GlueX_Offline_Meeting,_January_21,_2015">https://halldweb1.jlab.org/wiki/index.php/GlueX_Offline_Meeting,_January_21,_2015</a>
    .<br>
    <br>
      -- Mark<br>
    _______________________________<br>
    <br>
    <div id="globalWrapper">
      <div id="column-content">
        <div id="content"> <br>
          GlueX Offline Meeting Minutes, January 21, 2015<br>
          <div id="bodyContent"><br>
            Present:
            <br>
            <ul>
              <li> <b>CMU</b>: Curtis Meyer
              </li>
              <li> <b>FIU</b>: Mahmoud Kamel
              </li>
              <li> <b>FSU</b>: Aristeidis Tsaris
              </li>
              <li> <b>JLab</b>: Alex Barnes, Mark Ito (chair), David
                Lawrence, Paul Mattione, Kei Moriya, Eric Pooser, Simon
                Taylor, Beni Zihlmann
              </li>
              <li> <b>NU</b>: Sean Dobbs
              </li>
            </ul>
            <br>
            <span class="mw-headline" id="Announcements">Announcements</span><br>
            <ol>
              <li> Our volatile disk was expanded recently. The
                reservation increased from 10 to 20 TB, and the quota
                from 30 to 50 TB. We are using just over 20 TB
                presently.
              </li>
              <li> Marty Wise of Computing and Network Infrastructure
                (CNI) is working on installing the Run Conditions
                Database (RCDB) on an Apache server.
              </li>
              <li> CNI now has a desktop version of Red Hat Enterprise
                Linux 7 available for beta testers. See Kelvin Edwards
                for an install image.
              </li>
              <li> Our work disk filled up this morning. We have 14 TB
                at present. Volunteers deleting their files have brought
                usage down to 75%.
              </li>
              <li> Mark remarked that we should review our long-term
                requests for disk space and see if we can start to
                expand our disk portfolio in a significant way.
              </li>
            </ol>
            <br>
            <span class="mw-headline"
              id="Review_of_Minutes_from_January_7">Review of Minutes
              from January 7</span><br>
            <br>
            We went over the <a
href="https://halldweb1.jlab.org/wiki/index.php/GlueX_Offline_Meeting,_January_7,_2015#Minutes"
              title="GlueX Offline Meeting, January 7, 2015">minutes</a>.
            Items were either resolved or appear on the agenda for this
            meeting.
            <br>
            <br>
            <span class="mw-headline" id="Data_Challenge_3">Data
              Challenge 3</span><br>
            <br>
            Mark has successfully run test jobs going all the way from
            event generation to REST file production. Along the way
            EVIO-formatted data is produced and read. He showed <a
href="https://halldweb1.jlab.org/wiki/index.php/Software_Review_January_16,_2015#Agenda"
              title="Software Review January 16, 2015">some statistics</a>
            about the test jobs presented at the last Software Review
            Preparation Meeting. The next step is to scale the jobs to
            real-challenge size. We hope to be in production by the time
            of the Software Review.
            <br>
            <br>
            <span class="mw-headline" id="Software_Review_Preparations">Software
              Review Preparations</span><br>
            <br>
            Curtis reviewed the discussion we had at last Friday's <a
href="https://halldweb1.jlab.org/wiki/index.php/Software_Review_January_16,_2015"
              title="Software Review January 16, 2015">Software Review
              Preparations Meeting</a>. We spent some time answering
            questions from Graham about our needs vis-à-vis the schedule
            for computer procurements. We also ran down a <a
href="https://halldweb1.jlab.org/wiki/index.php/Topics_for_the_2015_Software_Review"
              title="Topics for the 2015 Software Review">list of
              talking points and topics</a> that we plan to present.
            <br>
            <br>
            <span class="mw-headline" id="Commissioning_Run_Review">Commissioning
              Run Review</span><br>
            <br>
            <span class="mw-headline" id="Offline_Monitoring_Report">Offline
              Monitoring Report</span><br>
            <br>
            Kei gave the report.
            <br>
            <ul>
              <li> He ran over all files (online plugins, 2-track EVIO
                skim, REST) two weeks ago.
              </li>
              <li> Next launch of the entire process is this Friday.
              </li>
              <li> The group will be testing EventStore to mark events.
                This will take some dedicated disk space.
              </li>
              <li> Kei showed <a
href="https://halldweb1.jlab.org/wiki/images/b/b5/2015-01-21-multithread.pdf"
                  class="external text" rel="nofollow">slides</a>,
                giving an update on CentOS65 use and multi-thread
                processing.
              </li>
            </ul>
            <br>
            <span class="mw-headline"
              id="Commissioning-Branch-to-Trunk_Migration">Commissioning-Branch-to-Trunk
              Migration</span><br>
            <br>
            Simon reported that he and Mark have started working on
            migration of code developed on the commissioning branch
            during the run to the trunk in the source code repository.
            An initial attempt produced a version that compiled and ran,
            when the b1pi test was run with the code, no successful
            kinematic fits were produced.
            <br>
            Paul asked if the Monte Carlo variation was being used; it
            was not. This will be tried next.
            <br>
            <br>
            <span class="mw-headline" id="Analysis_of_REST_File_Data">Analysis
              of REST File Data</span><br>
            <br>
            Justin reported that he has had success reproducing his
            recent bump-hunting plots starting from REST-formatted data.
            This mode would allow users to pursue similar studies
            without having to fetch the data from tape and perform
            reconstruction, a big time savings. He did this with a
            private version of the code. There is currently an issue
            with unpacking tagger hits from the REST file. Hopefully
            this can be fixed before the next generation of REST files
            is produced.
            <br>
            <br>
            <span class="mw-headline"
              id="Handling_Changing_Magnetic_Field_Setting">Handling
              Changing Magnetic Field Setting</span><br>
            <br>
            Quoting from a recent email from Sean:
            <br>
            <br>
            One of the bigger headaches of running over the fall data
            was keeping
            track of all of the different magnetic field conditions, as
            the field
            went up and down. It would be user-friendly if we could keep
            track
            of this information as well, instead of forcing the user to
            specify
            the correct magnetic field map on the command line every
            time.
            Naively, I'd think that we could add a CCDB table that
            stored the
            name of the magnetic field map to use, i.e., the same
            information
            that would be passed in on the command line. Maybe this
            information
            is better stored as geometry or something else, though?
            <br>
            <br>
            Mark remarked that we did have a plan for handling this
            problem using the CCDB and JANA Resources. David, Sean, and
            Mark will get together offline to revisit the plan.
            <br>
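            <br>
            A minimal sketch of what this might look like from inside a
            JANA plugin, assuming a hypothetical CCDB table
            <tt>/Magnets/Solenoid/field_map</tt> holding a single
            string; the table path, column name, and fallback behavior
            are illustrative only, not a decided design:
            <br>
            <pre>
// Sketch only: take the field map name from a hypothetical CCDB table
// instead of from the command line.
#include &lt;JANA/JEventLoop.h&gt;
#include &lt;map&gt;
#include &lt;string&gt;

std::string GetFieldMapName(jana::JEventLoop *loop)
{
    std::map&lt;std::string, std::string&gt; row;
    // JANA's GetCalib() returns true on failure
    if (loop-&gt;GetCalib("/Magnets/Solenoid/field_map", row))
        return "";  // no entry: fall back to the command-line choice
    return row["map_name"];  // hypothetical column holding the map name
}
            </pre>
            Since CCDB constants are keyed by run number (and
            variation), the map choice would then follow the run
            automatically.
            <br>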
            <br>
            <span class="mw-headline" id="Data_Management">Data
              Management</span><br>
            <br>
            Quoting from the same email from Sean, three items:
            <br>
            <br>
            <span class="mw-headline"
              id="Storing_software_information_in_REST_files">Storing
              software information in REST files</span><br>
            <br>
            Since we're storing information on the software conditions
            used for
            reconstruction, it might be nice to store some of this
            information in
            the "officially" created REST files themselves, for a
            certain amount
            of self-documentation.
            <br>
            <br>
            Mark thought that it should be possible to add a "software
            version" element to the rest format, independent of the
            physics events, at the beginning of the file. Paul will ask
            Richard Jones about how this might be done.
            <br>
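            <br>
            As a rough illustration: HDDM generates C++ accessors from
            the XML record definition, so writing such a header record
            might look like the following. The element and accessor
            names here are hypothetical, pending the actual format
            change:
            <br>
            <pre>
// Hypothetical sketch: if a softwareVersion element were added to the
// REST record definition, hddm would generate accessors along these
// lines.  None of the element or method names below exist yet.
#include "hddm_r.hpp"
#include &lt;fstream&gt;

void writeVersionRecord(std::ofstream &amp;ofs)
{
    hddm_r::HDDM record;
    hddm_r::SoftwareVersionList ver = record.addSoftwareVersions();
    ver(0).setSvnRevision("sim-recon r12345");  // illustrative value
    hddm_r::ostream ostr(ofs);
    ostr &lt;&lt; record;  // one non-event record at the start of the file
}
            </pre>
            <br>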
            <br>
            <span class="mw-headline"
              id="EVIO_format_definition_for_Level_3_trigger_farm">EVIO
              format definition for Level 3 trigger farm</span><br>
            <br>
            Is running the L3 trigger farm a goal of the spring running?
            If so,
            it would be useful to define the EVIO output format that
            would be
            used. I seem to remember that even if we run in pass-through
            mode,
            the L3 farm could be used to disentangle multi-block EVIO
            events, and
            output them in single-block format.
            <br>
            <br>
            David remarked that disentangling was fundamental in the L3
            design and that any output format from L3 would be in
            single-block form.
            <br>
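            <br>
            As a file-level illustration of single-block output (this
            leaves out the harder part, splitting the entangled event
            banks inside each built event), the EVIO C API can copy
            events into a file blocked one event per block. The "N"
            ioctl request for events per block is an assumption about
            the library here:
            <br>
            <pre>
/* Sketch: copy an EVIO file, writing one event per output block.
 * Full disentangling would also split each built event's entangled
 * event banks; that step is omitted. */
#include &lt;cstdint&gt;
#include "evio.h"

int main(int argc, char **argv)
{
    int in, out;
    static uint32_t buf[2500000];          /* 10 MB event buffer */
    uint32_t evPerBlock = 1;
    char rmode[] = "r", wmode[] = "w", req[] = "N";

    if (argc != 3) return 1;
    evOpen(argv[1], rmode, &amp;in);
    evOpen(argv[2], wmode, &amp;out);
    evIoctl(out, req, &amp;evPerBlock);    /* assumed: 1 event per block */
    while (evRead(in, buf, sizeof(buf)/4) == S_SUCCESS)
        evWrite(out, buf);
    evClose(in);
    evClose(out);
    return 0;
}
            </pre>
            <br>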
            <br>
            <span class="mw-headline"
              id="EventStore:_implementation_plan">EventStore:
              implementation plan</span><br>
            <br>
            One thing that could save the amount of disk space needed
            for
            handling skims would be the EventStore DB, the development
            of which
            I've taken back up. However, the user would still need
            access to
            these files, so it would only help for people running over
            the data
            at JLab. So in the end, there might still be a desire for
            us to
            make these files, for those who want to analyze the files at
            their
            home institutions.
            <br>
            <br>
            The exact model we will use has not been decided. Mark
            thought that to first order we would try to distribute at
            least the REST-formatted data to each institution, so that
            each site could have a functional EventStore-based system.
            This should be doable for early running at least.
            <br>
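            <br>
            The disk saving comes from a skim being a list of pointers
            into the shared REST files rather than a separate copy of
            the events. A minimal sketch of that idea, with an invented
            table layout (the real EventStore schema is more elaborate
            than this):
            <br>
            <pre>
// Sketch of the EventStore idea: look up where a skim's events live
// instead of reading a dedicated skim file.  The skim_index table and
// its columns are invented for illustration.
#include &lt;sqlite3.h&gt;
#include &lt;cstdio&gt;

void listSkimEvents(sqlite3 *db, const char *skim, int run)
{
    sqlite3_stmt *stmt;
    const char *sql =
        "SELECT file, offset FROM skim_index "
        "WHERE skim = ?1 AND run = ?2 ORDER BY file, offset";
    if (sqlite3_prepare_v2(db, sql, -1, &amp;stmt, nullptr) != SQLITE_OK)
        return;
    sqlite3_bind_text(stmt, 1, skim, -1, SQLITE_STATIC);
    sqlite3_bind_int(stmt, 2, run);
    while (sqlite3_step(stmt) == SQLITE_ROW)       // one row per event
        std::printf("%s @ %lld\n", sqlite3_column_text(stmt, 0),
                    (long long)sqlite3_column_int64(stmt, 1));
    sqlite3_finalize(stmt);
}
            </pre>
            Distributing the REST files to each institution, as Mark
            suggested, would let the same index work at every site.
            <br>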
            <br>
            <span class="mw-headline"
              id="Requests_to_SciComp_on_Farm_Features">Requests to
              SciComp on Farm Features</span><br>
            <br>
            Kei led us through a set of questions and feature requests
            he sent to SciComp. These were collected from the group
            working on offline monitoring.
            <br>
            <ol>
              <li> Tools to track jobs:
                <ol>
                  <li> tools to track what percentage of nodes were
                    being used by whom at a given time, preferably in
                    both # of jobs and threads. We can see the pie charts
                    for example in <a
                      href="http://scicomp.jlab.org/scicomp/#/auger/usage"
                      class="external free" rel="nofollow">http://scicomp.jlab.org/scicomp/#/auger/usage</a>
                    but would like the information in a form that we can
                    easily access and analyze.
                  </li>
                  <li> what % of nodes are currently available for each
                    OS at a given time
                  </li>
                  <li> tools to track the lifetime of each stage of the
                    job, such as sitting in the queue, waiting for files
                    from tape, running, etc.
                  </li>
                  <li> Would it be possible to make the stdout and
                    stderr web-viewable?
                  </li>
                  <li> If possible, can you add the ability to search by
                    “job name” (every job that includes the search term)
                    on the Auger custom job query website?
                  </li>
                </ol>
              </li>
              <li> For more general requests:
                <ol>
                  <li> better transparency for whether there are
                    problems in the system, such as heavy traffic due to
                    users, broken disks, etc. Could there be an email
                    list/webpage for that information?
                  </li>
                  <li> clarification of how 'priority' of jobs works
                    between different halls and users.
                  </li>
                  <li> would it be possible for the system to
                    auto-resubmit failed jobs if the failure is on the
                    side of the system (e.g., bad farm nodes, temporary
                    loss of connection)?
                  </li>
                </ol>
              </li>
              <li> Additionally, should we ask for more space on the
                cache disk?
              </li>
            </ol>
            <br>
            There is a meeting tomorrow with SciComp personnel to go
            over the list. Interested parties should attend.
            <br>
            <br>
            <span class="mw-headline" id="Action_Items">Action Items</span><br>
            <ol>
              <li> Ask Richard about a new software information element
                in the REST format. (Paul)
              </li>
              <li> Meet to figure out magnetic field map handling using
                CCDB and Resources. (David, Sean, Mark)
              </li>
            </ol>
            <div class="printfooter">
              Retrieved from "<a
href="https://halldweb1.jlab.org/wiki/index.php/GlueX_Offline_Meeting,_January_21,_2015">https://halldweb1.jlab.org/wiki/index.php/GlueX_Offline_Meeting,_January_21,_2015</a>"<br>
              <br>
            </div>
          </div>
        </div>
      </div>
    </div>
  </body>
</html>