[Hps-analysis] high statistics tritrig-wab-beam

Nathan Baltzell baltzell at jlab.org
Thu Jan 18 18:49:28 EST 2018


Hi Norman,

Great, and writing to /cache is better anyway, since anything written there ends up on tape automatically.

If you can take care of this, that sounds good to me.  The biggest part of the work is getting both accounts (SLAC and JLab) active, remembering how to use globus, and knowing the paths on both ends; after that it’s just a few clicks.
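
In case it helps, the transfer itself can also be scripted with the Globus Python SDK (globus_sdk). This is a rough, untested sketch; the client ID, endpoint UUIDs, and paths below are all placeholders that would need to be filled in for the real SLAC and JLab endpoints:

  # Rough sketch of scripting a SLAC -> JLab /cache transfer with globus_sdk.
  # CLIENT_ID, endpoint UUIDs, and paths are placeholders, not the real values.
  import globus_sdk

  CLIENT_ID = "your-native-app-client-id"                 # placeholder
  SLAC_ENDPOINT = "aaaaaaaa-0000-0000-0000-000000000000"  # placeholder UUID
  JLAB_ENDPOINT = "bbbbbbbb-0000-0000-0000-000000000000"  # placeholder UUID

  # Standard native-app login flow: visit the URL, paste the code back in.
  auth = globus_sdk.NativeAppAuthClient(CLIENT_ID)
  auth.oauth2_start_flow()
  print("Log in at:", auth.oauth2_get_authorize_url())
  tokens = auth.oauth2_exchange_code_for_tokens(input("Auth code: ").strip())
  token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]
  tc = globus_sdk.TransferClient(
      authorizer=globus_sdk.AccessTokenAuthorizer(token))

  tdata = globus_sdk.TransferData(
      tc, SLAC_ENDPOINT, JLAB_ENDPOINT,
      label="tritrig-wab-beam recon to /cache",
      sync_level="checksum")  # skip files already transferred intact
  # Queue the whole recon directory recursively (paths are placeholders).
  tdata.add_item("/mc/tritrig-wab-beam/recon/",
                 "/cache/hallb/hps/mc/tritrig-wab-beam/recon/",
                 recursive=True)

  task = tc.submit_transfer(tdata)
  print("Submitted transfer task:", task["task_id"])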

-Nathan


> On Jan 18, 2018, at 6:39 PM, Graf, Norman A. <ngraf at slac.stanford.edu> wrote:
> 
> Hello Nathan,
> I use globus all the time and it works very well. I just tested that I can write to /cache at JLab, so I'm more than happy to take on this responsibility from this end.
> Norman
> 
> 
> From: Hps-analysis <hps-analysis-bounces at jlab.org> on behalf of Nathan Baltzell <baltzell at jlab.org>
> Sent: Thursday, January 18, 2018 3:30 PM
> To: Maruyama, Takashi
> Cc: hps-analysis at jlab.org
> Subject: Re: [Hps-analysis] high statistics tritrig-wab-beam
>  
> Ok, let’s say 5 TB. That is probably too much to transfer comfortably to /work or /volatile without some significant cleanup there first, so the first choice would be to write straight to /cache.
> 
> Does SLAC still have a globus endpoint?  Has anyone used globus to write to /cache at JLab?
> 
> -Nathan
> 
> 
> 
> > On Jan 18, 2018, at 12:22 PM, Maruyama, Takashi <tvm at slac.stanford.edu> wrote:
> > 
> > Hi Nathan,
> > 
> > Since we are in the middle of testing the tuple/DST makers, I don't know the tuple/DST file sizes yet; Bradley might know.  The recon files are 400 MB each, and we need 10,000 recon files to be equivalent to the 2015 data statistics, so the total disk space requirement is 4 TB.
> > 
> > The tuple maker can generate three tuples: FEE, Moller, and trident.  Since tritrig-wab-beam is processed with the pair1 trigger, I would not expect many FEE or Moller events in the tritrig-wab-beam recon files.  If we want FEE and Moller samples, a separate readout/recon step must be run on wab-beam.slcio. Do we need FEE and Moller? If not, I will delete the wab-beam.slcio files, since each one is 450 MB and we need 100,000 files, i.e. 45 TB.
> > 
> > Takashi
> > 
> > -----Original Message-----
> > From: Nathan Baltzell [mailto:baltzell at jlab.org] 
> > Sent: Tuesday, January 16, 2018 4:08 PM
> > To: Maruyama, Takashi
> > Cc: hps-analysis at jlab.org
> > Subject: Re: [Hps-analysis] high statistics tritrig-wab-beam
> > 
> > What’s the estimate on disk space requirements for the different file types in #1?  
> > 
> > -nathan
> > 
> > 
> > 
> >> On Jan 16, 2018, at 7:03 PM, Maruyama, Takashi <tvm at slac.stanford.edu> wrote:
> >> 
> >> High statistics tritrig-wab-beam production is in progress. About 15% of the 2015-data-equivalent statistics has been generated, but the production is paused due to lack of disk space.  As soon as disk space becomes available, the production will resume. There are a few issues:
> >> 
> >> 1) What files do we need to transfer to JLab? Only recon files, only tuple files, or DSTs as well?
> >> 2) We need a contact person at JLab who finds the disk space and does the transfer.
> >> 3) The intermediate wab-beam.slcio files are large, requiring 5 TB per 10,000 files (~100,000 files for 2015-data-equivalent statistics), so we need to delete them once the recon files are made. But wab-beam could be useful to generate, for example, wab-beam-tri. How long do we need to keep wab-beam? I would delete the files once the recons, tuples, and DSTs are made.
> >> 
> >> Takashi 
> >> 
> > 
> 
> 
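
For reference, a quick back-of-the-envelope tally of the disk numbers quoted above (per-file sizes and counts are Takashi's estimates from the thread):

  # Back-of-envelope disk totals using the per-file sizes and counts
  # quoted in the thread (Takashi's estimates, decimal units).
  TB = 1e12

  recon_total = 10_000 * 400e6 / TB    # 10k recon files at 400 MB each
  slcio_total = 100_000 * 450e6 / TB   # 100k wab-beam.slcio files at 450 MB each

  print(f"recon:          {recon_total:.0f} TB")  # 4 TB
  print(f"wab-beam.slcio: {slcio_total:.0f} TB")  # 45 TB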



