[d2n-analysis-talk] Pion Rejection Factor Normalization, Statistical Error

David Flay flay at jlab.org
Tue Mar 16 09:24:56 EDT 2010


Hi all,

I'm wrapping up my calculations for the Gas Cerenkov, and after background
subtraction the numbers are looking very nice.

Last night I finished running my calculations for the pion rejection
factors (and efficiencies).  When I plot the efficiencies, I get huge
statistical error bars as the gas Cerenkov cut increases.  This is because
the statistical error takes the form:

error = prf*sqrt((1/sum_cer) + (1/sum_sh))

where prf = pion rejection factor = sum_sh/sum_cer.

sum_sh = the number of entries selected in the 2D energy plot (PRL1 vs. PRL2).
sum_cer = the number of those that pass the GC cut GC > X, where X is my cut
position.
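
For concreteness, here is a small Python sketch of the calculation as I am
doing it now (the counts are invented, just to illustrate how the relative
error grows as the cut tightens):

    from math import sqrt

    sum_sh = 50000.0  # pions selected in the 2D energy plot (invented number)

    # GC cut positions and the pion counts surviving each cut (also invented)
    cuts     = [100, 300, 500, 700]
    sum_cers = [5000.0, 500.0, 50.0, 10.0]

    for X, sum_cer in zip(cuts, sum_cers):
        prf = sum_sh / sum_cer
        err = prf * sqrt(1.0 / sum_cer + 1.0 / sum_sh)
        print("GC > %4d: prf = %7.1f +/- %6.1f (%4.1f%%)"
              % (X, prf, err, 100.0 * err / prf))

Once sum_cer drops to a few tens of counts, the 1/sqrt(sum_cer) term
dominates, which is exactly where my error bars blow up.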

It's clear that as I increase my cut position, sum_cer falls quite rapidly,
and I believe that is why my error skyrockets -- it gets up to ~6% for some
kinematics!

Is there something wrong with my calculation of the statistical error here?

Second, I am also trying to figure out how to 'normalize' my pion rejection
factors.  Since we have many more pions at p = 0.6 than at p = 1.70, a given
cut in the 2D energy plot in the PR will not yield the same initial pion
sample size at each momentum.  Hence, I get an apparent momentum dependence
in my pion rejection factor.

Therefore, I was trying to figure out how to 'scale up' or 'normalize' my
results for each momentum greater than p = 0.6, since that is our worst
case.

I tried a simple calculation of the form:

prf = sum_sh/sum_cer.

scaled:

prf_scaled = (sum_sh/sum_cer)*(sum_sh_0.6/sum_sh)

While this does scale up my result, I do not think it is correct: the
sum_sh factors cancel, so it simply replaces my original pion sample with
the one I obtained at p = 0.6, leaving sum_sh_0.6/sum_cer.  I also need a
scale factor for the number that survive my Cerenkov cut -- but if I follow
the same form to scale that number, I would get:

prf_scaled = (sum_sh/sum_cer)*(sum_sh_0.6/sum_sh)*(sum_cer/sum_cer_0.6)
           = (sum_sh_0.6/sum_cer_0.6) = prf_0.6.

so that's not right either.

I was also considering using E/p instead of the 2D energy plot: normalize
my E/p distribution and make a tight cut to select the pions, so that I
start from a normalized distribution particular to each momentum.  However,
this hasn't gotten me very far -- or maybe I'm not making the right cuts.

In short, I'd like to select, say, 10000 pions at each momentum, and see
how many fire my GC for a given cut.
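
As a rough Python sketch of what I mean (the array names, the E/p pion cut
value, and the random selection are just placeholders for however I end up
picking the sample):

    import numpy as np

    N_SAMPLE = 10000  # fixed pion sample size for every momentum setting

    def prf_fixed_sample(e_over_p, gc_adc, gc_cut, ep_pion_cut=0.5, seed=0):
        """Pick N_SAMPLE pions via a tight E/p cut and count GC firings."""
        # assumes at least N_SAMPLE pions pass the E/p cut and that at
        # least one of the chosen pions fires the GC above the cut
        rng = np.random.default_rng(seed)
        pion_idx = np.where(e_over_p < ep_pion_cut)[0]  # tight pion selection
        chosen = rng.choice(pion_idx, size=N_SAMPLE, replace=False)
        n_fire = np.count_nonzero(gc_adc[chosen] > gc_cut)
        prf = float(N_SAMPLE) / n_fire
        err = prf * np.sqrt(1.0 / n_fire + 1.0 / N_SAMPLE)
        return prf, err

    # e.g. prf, err = prf_fixed_sample(e_over_p, gc_adc, gc_cut=300)

Since the starting sample is the same 10000 pions at every momentum, the
difference in raw pion yield between p = 0.6 and p = 1.70 should drop out,
and what remains is just the fraction that fire the GC at each cut.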

Does this seem like a reasonable method?

Thanks,

Dave
-------------------------------------------------
David Flay
Physics Department
Temple University
Philadelphia, PA 19122

office: Barton Hall, BA319
phone: (215) 204-1331

e-mail: flay at jlab.org
            flay at temple.edu

website: http://www.jlab.org/~flay
              http://quarks.temple.edu
-------------------------------------------------


