[Hps-analysis] WAB Normalization

Sho Uemura meeg at slac.stanford.edu
Thu Feb 9 15:01:29 EST 2017


BTW, what I'm describing under "importance of filtering" is filtered WABs 
mixed with background. I realized that's probably not something you're 
actually doing, since you have wab-beam-tri and pure WAB, and only the pure 
WAB is filtered. But the point still stands, and everything else I said 
should still apply.

On Thu, 9 Feb 2017, Sho Uemura wrote:

> It's not clear to me what you mean by #events/250, but I can try to explain 
> how things are supposed to work:
>
> The point of filtering is to discard generated events that will not make 
> triggers. Most WAB events do not make triggers. The 250-event spacing is 
> necessary whether or not you filter. FilterMCBunches has options that can 
> independently control/disable filtering and spacing.
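
[Editor's note: a minimal sketch of the spacing step, in Python. This is 
hypothetical illustration code, not the actual FilterMCBunches implementation; 
it only shows why spacing multiplies the event count by 250.]

    SPACING = 250  # bunches per kept MC event, as described above

    def space_events(mc_events, spacing=SPACING):
        """Place each MC event in its own bunch, padded with empty bunches."""
        bunches = []
        for event in mc_events:
            bunches.append(event)                   # bunch holding the generated event
            bunches.extend([None] * (spacing - 1))  # empty bunches, later overlaid with beam
        return bunches

    print(len(space_events(range(2000))))  # 2000 kept events -> 500000 bunches
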
>
> The importance of filtering is that it reduces the amount of other MC you 
> need to mix in. If I have 2000 generated WABs and I don't filter, I will get 
> 250*2000=500k spaced events, and need to run 500k beam events through SLIC. 
> If I filter and can reject 95% of the generated WABs as unlikely to trigger 
> (missing the ECal, for example), I only need to run 250*100=25k beam events 
> through SLIC. That's a big deal. There are similar reductions in the time it 
> takes to run the readout sim, and the amount of background "tri" that needs 
> to be generated in MadGraph.
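
[Editor's note: a back-of-the-envelope check of the numbers in the paragraph 
above, using a hypothetical helper that is not part of the HPS tools.]

    SPACING = 250  # event spacing, as above

    def beam_events_needed(n_generated, pass_fraction=1.0, spacing=SPACING):
        """SLIC beam events needed to fill the bunches around the kept MC events."""
        n_kept = round(n_generated * pass_fraction)
        return n_kept * spacing

    print(beam_events_needed(2000))        # no filter:            500000
    print(beam_events_needed(2000, 0.05))  # reject 95%, keep 5%:   25000
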
>
> If you turn off filtering, then the number of events coming out of the filter 
> will be exactly 250 times the number of generated events. But after the 
> readout simulation, the number of triggered events should be roughly the same 
> whether or not you filtered. This is a good check that the filter is doing 
> its job - if it is not, you need to change the filter settings or turn it 
> off.
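
[Editor's note: a sketch of that cross-check. The helper is hypothetical; the 
two counts would come from a filtered and an unfiltered readout run, and the 
10% tolerance is an arbitrary choice.]

    def filter_check(n_trig_filtered, n_trig_unfiltered, tolerance=0.10):
        """True if filtered and unfiltered runs trigger a similar number of events."""
        return abs(n_trig_filtered - n_trig_unfiltered) <= tolerance * n_trig_unfiltered
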
>
> On Thu, 9 Feb 2017, Bradley T Yale wrote:
>
>> 
>> I think I know what this could be, or at least what it's related to.
>> 
>> 
>> Pure MC events (with a 50 MeV minimum threshold) are spaced out by 250 
>> events to avoid pileup effects and to ensure that each triggered event 
>> corresponds to a single generated event. This is not done for 
>> wab-beam-tri, since the mixing with beam background effectively takes 
>> care of that.
>> 
>> 
>> Checking a 100to1 WAB readout file, (43609/1152944) = 3.8% of the generated 
>> events fed into readout were written out after filtering, which is still a 
>> factor of 10 away from what you see.
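
[Editor's note: the quoted pass fraction follows directly from the two counts 
above.]

    n_read    = 1152944  # generated events read in
    n_written = 43609    # events written out after filtering
    print(f"{n_written / n_read:.1%}")  # -> 3.8%
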
>> 
>> 
>> I don't know if this was how Sho's filtering procedure was designed to 
>> work, because it seems incredibly inefficient to write so few events after 
>> reading in so many. It seems like it should be #events*250 instead of 
>> #events/250. I need to ask him.
>> 
>> 
>> It looks like this could be related to the 30-40% you see though.
>> 
>> As John said, applying background to these events instead of filtering them 
>> would likely fix the inconsistency.
>> 
>> 
>> 
>> ________________________________
>> From: Rafayel Paremuzyan <rafopar at jlab.org>
>> Sent: Wednesday, February 8, 2017 3:59:55 PM
>> To: Bradley T Yale; hps-analysis at jlab.org
>> Subject: WAB Normalization
>> 
>> Hi Brad,
>> 
>> Using the normalization factors that I have, I am getting about a 30-40%
>> larger rate for pure WABs with respect to wab-beam-tri.
>> 
>> I would like to double check normalizations:
>> 
>> I am using the following files:
>> /mss/hallb/hps/production/postTriSummitFixes/recon/wab/1pt05/wabv3_spinFix_100to1_HPS-EngRun2015-Nominal-v5-0-fieldmap_3.11-20170118_pairs1_*
>> 
>> The cross section is 0.57 barn;
>> the number of generated events per recon file is 1,000,000 = 100 (gen files)
>> * 10,000 (events per gen file).
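
[Editor's note: a quick way to see what these numbers imply. This is a sketch; 
the equivalent-luminosity relation N_gen / sigma is standard MC normalization 
practice, not something stated in this thread.]

    sigma_wab_barn = 0.57         # WAB cross section quoted above, in barns
    n_gen          = 100 * 10000  # 100 gen files x 10,000 events = 1,000,000 per recon file

    equiv_lumi = n_gen / sigma_wab_barn  # equivalent luminosity in 1/barn per recon file
    print(f"{equiv_lumi:.3g} / barn per recon file")  # ~1.75e+06 / barn
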
>> 
>> Could you please confirm these numbers?
>> 
>> Rafo
>> 
>

