[Hps-analysis] WAB Normalization
Bradley T Yale
btu29 at wildcats.unh.edu
Thu Feb 9 14:35:34 EST 2017
I think I know what this could be, or at least what it's related to.
Pure MC events (with a 50 MeV minimum threshold) are spaced out by 250 events to avoid pileup effects and to ensure that each triggered event corresponds to a single generated event. This is not done for wab-beam-tri, since the beam-background mixing effectively takes care of that.
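(For illustration, here is a minimal sketch of what that spacing amounts to. The 250-event gap and 50 MeV threshold are the numbers above; the function name and event representation are hypothetical:)

# Minimal sketch of the pure-MC event spacing described above.
# The 250-event gap and 50 MeV threshold come from this thread;
# the names and the event representation are hypothetical.
SPACING = 250           # empty events between consecutive generated events
MIN_ENERGY_GEV = 0.050  # 50 MeV generator-level threshold

def spaced_event_stream(generated_events):
    """Yield each accepted event followed by SPACING empty slots,
    so readout windows from neighboring events cannot overlap."""
    for event in generated_events:
        if event["total_energy_gev"] < MIN_ENERGY_GEV:
            continue  # below the generator threshold
        yield event
        for _ in range(SPACING):
            yield None  # empty slot: no pileup from a neighboring event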
Checking a 100to1 WAB readout file, (43609/1152944) = 3.8% of the generated events survived the readout filtering, but that is still a factor of ~10 below the 30-40% you see.
I don't know whether this is how Sho's filtering procedure was designed to work, because it seems incredibly inefficient to write so few events after reading in so many. It seems like it should be #events*250 instead of #events/250. I need to ask him.
It does look like this could be related to the 30-40% you see, though.
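(As a quick sanity check of that arithmetic, using only the numbers quoted above:)

# Quick check of the counts above (both numbers are from this thread).
written = 43609       # events written out after readout filtering
generated = 1152944   # generated events read in
frac = written / generated
print(f"{frac:.2%}")  # -> 3.78%; note 10 * 3.78% ~ 38%, i.e. the 30-40% range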
As John said, applying background to these events instead of filtering them would likely restore the consistency.
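(For reference, this is the kind of per-event normalization weight I assume is being applied. Only the 0.57 barn cross section and the 1,000,000 generated events per recon file come from Rafo's mail below; the luminosity argument is a placeholder, not a number from this thread:)

# Standard MC normalization sketch, not the exact analysis code.
CROSS_SECTION_B = 0.57     # WAB cross section in barns (from the mail below)
N_GENERATED = 100 * 10000  # 100 gen files * 10,000 events each
BARN_TO_CM2 = 1.0e-24

def event_weight(luminosity_cm2):
    """Per-event weight so that the sum of weights over all generated
    events equals the expected yield for the given luminosity."""
    return CROSS_SECTION_B * BARN_TO_CM2 * luminosity_cm2 / N_GENERATED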
________________________________
From: Rafayel Paremuzyan <rafopar at jlab.org>
Sent: Wednesday, February 8, 2017 3:59:55 PM
To: Bradley T Yale; hps-analysis at jlab.org
Subject: WAB Normalization
Hi Brad,
Using the normalization factors that I have, I am getting about a 30-40% larger rate for pure WABs with respect to wab-beam-tri.
I would like to double-check the normalizations. I am using the following files:
/mss/hallb/hps/production/postTriSummitFixes/recon/wab/1pt05/wabv3_spinFix_100to1_HPS-EngRun2015-Nominal-v5-0-fieldmap_3.11-20170118_pairs1_*
The cross section is 0.57 barn;
the number of generated events per recon file is 1,000,000 = 100 (gen files) * 10,000 (events per gen file).
Could you please confirm these numbers?
Rafo