<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=Windows-1252">
</head>
<body>
<div>Hello,</div>
<div><br>
</div>
<div>I never saw a run list from Norman. Can you forward that info so the collaboration has access to it?</div>
<div><br>
</div>
<div>Thanks,</div>
<div>Cameron</div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> hps-software@SLAC.STANFORD.EDU <hps-software@SLAC.STANFORD.EDU> on behalf of Nathan Baltzell <baltzell@jlab.org><br>
<b>Sent:</b> Tuesday, December 7, 2021 7:03 PM<br>
<b>To:</b> hps-analysis@jlab.org &lt;hps-analysis@jlab.org&gt;; hps-software &lt;hps-software@slac.stanford.edu&gt;<br>
<b>Subject:</b> Re: [Hps-analysis] 2021 trigger skims</font>
<div> </div>
</div>
<div class="BodyFragment"><font size="2"><span style="font-size:11pt;">
<div class="PlainText">Hello All,<br>
<br>
After some further preparations, the 2021 trigger skims are launched.<br>
<br>
Outputs will be going to /cache/hallb/hps/physrun2021/production/evio-skims.<br>
<br>
I broke the run list from Norman into 5 lists and started with the first 20% as one batch, all of which is now submitted. I'll proceed to the other 4 batches over the holidays, assessing tape usage as we go.<br>
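<br>
(For reference, a minimal sketch of the kind of batching described above, assuming a plain-text run list; the file names and the 5-way split below are illustrative placeholders, not the actual scripts or lists used.)<br>
<pre>
# Illustrative sketch: split a run list into 5 roughly equal, contiguous
# batches so jobs can be submitted one batch (~20%) at a time and tape
# usage assessed in between.  "runlist.txt" is a placeholder file name.
with open("runlist.txt") as f:
    runs = [line.strip() for line in f if line.strip()]

n_batches = 5
chunk = (len(runs) + n_batches - 1) // n_batches  # ceiling division

for i in range(n_batches):
    batch = runs[i * chunk : (i + 1) * chunk]
    with open(f"runlist_batch{i + 1}.txt", "w") as out:
        out.write("\n".join(batch) + "\n")
</pre>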
<br>
-Nathan<br>
<br>
> On Nov 29, 2021, at 3:39 PM, Nathan Baltzell &lt;baltzell@jlab.org&gt; wrote:<br>
> <br>
> The 10x larger test is done at /volatile/hallb/hps/baltzell/trigtest3<br>
> <br>
> -Nathan<br>
> <br>
> <br>
>> On Nov 29, 2021, at 2:52 PM, Nathan Baltzell &lt;baltzell@jlab.org&gt; wrote:<br>
>> <br>
>> Hello All,<br>
>> <br>
>> Before running over the entire 2021 data set, I ran some test jobs using Maurik’s EVIO trigger bit skimmer. Here’s the fraction of events kept in run 14750 for each skim:<br>
>> <br>
>> fee 2.0%<br>
>> moll 3.3%<br>
>> muon 1.9%<br>
>> rndm 2.9%<br>
>> <br>
>> In each case, it’s inclusive of all such types, e.g., moll=moll+moll_pde+moll_pair, rndm=fcup+pulser.<br>
>> <br>
>> Are those numbers in line with expectations? The total is 10%, which is not a problem if these skims are expected to be useful. The outputs are at /volatile/hallb/hps/baltzell/trigtest2 if people are interested in checking things.<br>
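<br>
(As a quick illustrative check of the total quoted above, assuming only the four per-skim fractions for run 14750 listed earlier; this is just arithmetic on those numbers, not output from the skimmer itself.)<br>
<pre>
# Fractions of events kept per skim in run 14750, as quoted above.
# Each skim is already inclusive of its sub-types, e.g. "moll" counts
# moll + moll_pde + moll_pair, and "rndm" counts fcup + pulser.
fractions = {"fee": 0.020, "moll": 0.033, "muon": 0.019, "rndm": 0.029}

total = sum(fractions.values())
print(f"total fraction kept: {total:.1%}")  # roughly 10%
</pre>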
>> <br>
>> A 10x larger test is running now and going to /volatile/hallb/hps/baltzell/trigtest3 and should be done in the next couple hours.<br>
>> <br>
>> ************<br>
>> <br>
>> Note, it would be prudent to do this *only* for production runs, i.e., those that would be used in physics analysis, to avoid unnecessary tape access. By that I mean removing junk runs, keeping only those with a significant number of events, and only keeping those with physics trigger settings (not special runs). For that we need a run list. I think we have close to a PB, but I remember hearing at the collaboration meeting that at least 20% is not useful for the purposes of trigger bit skimming.<br>
>> <br>
>> -Nathan<br>
> <br>
<br>
<br>
</div>
</span></font></div>
</body>
</html>