[d2n-analysis-talk] A_1^n Projected Statistical Error
David Flay
flay at jlab.org
Tue Sep 7 15:23:56 EDT 2010
On Tue, Sep 7, 2010 at 12:35 PM, Brad Sawatzky <brads at jlab.org> wrote:
> On Fri, 03 Sep 2010, David Flay wrote:
>
> > I have uploaded to the Wiki my calculation for the projected
> > statistical error on A_1^n, along with a plot containing the current
> > world data on A_1^n, with the statistical errors calculated at 5-pass
> > superimposed for comparison. I have also included Matt's calculations
> > of the A_{\perp} statistical error for the same 5-pass data.
>
> A few questions/comments:
>
> - In the caption for Table 1, say something like:
> ... using the BigBite data from runs 2022--2054 (E_beam = 4.73 GeV).
> Make similar additions in the other Table captions.
>
> - (Minor typesetting quibble) There is no space between the number and
> the dash when you list a range. The LaTeX code for a range should be:
> 2022--2054
> not
> 2022 - 2054, nor 2022 -- 2054, nor 2022-2054.
> The double dashes generate an en-dash.
>
> Just FWIW, here is a good page of general typesetting and LaTeX tips:
>
> http://web.science.mq.edu.au/~rdale/resources/writingnotes/latexstyle.html
> Good things to keep in mind as you draft your thesis.
>
Thanks!
> - Have the 3 days of unpolarized-beam running been excluded from this
> analysis?
>
Yes -- I took the appropriate runs from Matt's table:
https://hallaweb.jlab.org/wiki/index.php/Big_Bite_Kinematics_Run_Break_Down
(P.S. -- is there a direct link to this on the front page of the Wiki? I
couldn't find it... so I put a link here:
https://hallaweb.jlab.org/wiki/index.php/Analysis_resources_for_d2n#Preliminary_Production_Run_List )
> - Why do we have so much parallel data at p=0.80 vs. the other points
> (Table 1)?
>
> - I don't understand the "N_eff" quantity. For the 4-pass data, N_eff
> is less than N_p, but for the 5-pass data N_eff > N_p. I've read the
> description on p.2 a couple of times and still don't get it (maybe I
> just need more coffee or something...)
>
Because:
A) I made a mistake in my code that calculates the values for the 4-pass
data. I have corrected this, and the numbers now make sense -- see the
corrected note linked at the bottom of this message.
B) The idea behind "N_eff" is to "scale up" the fraction of events
remaining (after cuts) in the sample run used in the analysis to the total
data taken at each beam pass.
> Could you pick one momentum bin (say 1.20 GeV/c) and be a little more
> explicit on how you arrive at N_p, N_cut, and N_eff?
>
For p = 1.20 GeV/c at E_beam = 5.89 GeV, we use the momentum cut:
1.165 < BB.tr.p[0] < 1.235 (in accordance with the formula in the note).
Using this cut only, we see how many events remain for the (sample) run
2060. So, we have:
N_raw = 4681052 (total number of events recorded for run 2060).
N_p = 30386 (the number of events that survive the momentum cut specified
above).
We then apply <all> good electron cuts. These include:
1. GC mirror, ADC, and TDC cuts
2. Various tracking cuts
3. Preshower and shower cuts
4. The momentum cut (from above)
We then see how many events survive these cuts, and call this N_cut:
N_cut = 4372
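For reference, here is a minimal ROOT macro sketch of how these three
counts can be obtained for a single run. The file name and the
good-electron cut string are illustrative placeholders (only the momentum
window is taken verbatim from above), not the actual cuts from the note:

// count_events.C --- minimal sketch; run with: root -l count_events.C
#include "TFile.h"
#include "TTree.h"
#include "TCut.h"
#include <iostream>

void count_events()
{
   TFile *f = TFile::Open("run_2060.root");   // hypothetical file name
   TTree *T = (TTree*) f->Get("T");           // analyzer output tree

   // N_raw: total number of recorded events
   Long64_t N_raw = T->GetEntries();

   // N_p: events surviving the momentum cut for the 1.20 GeV/c bin
   TCut pCut("BB.tr.p[0] > 1.165 && BB.tr.p[0] < 1.235");
   Long64_t N_p = T->GetEntries(pCut);

   // N_cut: momentum cut AND all good electron cuts (GC, tracking,
   // preshower/shower); this cut string is a placeholder, not the real set
   TCut eCut("BB.ps.e > 200");                // illustrative only
   Long64_t N_cut = T->GetEntries(pCut && eCut);

   std::cout << "N_raw = " << N_raw << "  N_p = " << N_p
             << "  N_cut = " << N_cut << std::endl;
}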
Now, the number of events recorded for parallel running (runs 1530--1553 and
1702--1719) is:
N_T = 214350416
We then determine N_eff --- the fraction of events that survive all cuts,
relative to the original number of events, scaled up to N_T:
N_eff = (N_cut/N_p)*(N_p/N_raw)*N_T = (N_cut/N_raw)*N_T
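To make this concrete with the run 2060 numbers above:
N_eff = (4372/4681052)*214350416 ~ (9.34e-4)*214350416 ~ 2.0 x 10^5 events.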
Here's the tricky part --- I considered two different ideas.
The first was:
N_eff = (N_cut/N_p)*N_T. [option 1]
The second was:
N_eff = (N_cut/N_raw)*N_T, [option 2]
which is the one I used in the calculations in the note.
My reasoning behind the first was: shouldn't the percentage of events that
survive the good electron cuts be determined from sample events that
<satisfy> the momentum cut --- since I consider that sample of events to
be from the momentum bin of interest?
But then I thought about how we should properly 'scale' the number of
events that survive the cuts to the full statistics we took. It's clear
from the first form that N_eff does not depend on N_raw (the number of
events we started out with in the first place) --- or only indirectly, at
best. I therefore thought that N_eff should account for N_raw: it should
be a scale factor multiplied by the total statistics taken during the
parallel running. This results in:
N_eff = (N_cut/N_raw)*N_T [option 2]
The ratio N_cut/N_raw is an estimate of the fraction of (total) events
that would survive all the cuts --- that is, momentum plus good electron
cuts --- because the only difference between doing this calculation for
one run (2060) and for all the runs listed (runs 1530--1553 and
1702--1719) is that there are more events. On average, I would expect the
behavior of the cuts to be the same if we just added more and more events
to the study.
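For a concrete comparison using the run 2060 numbers: [option 1] would
instead give (4372/30386)*214350416 ~ 3.1 x 10^7 events --- larger than
[option 2]'s ~2.0 x 10^5 by exactly the factor N_raw/N_p ~ 154, since
[option 1] effectively treats all of N_T as if it were already inside the
momentum bin.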
Further reasoning behind using [option 2]:
N_T in [option 1] is <not> the total number of events for a given momentum
bin --- N_T is just the total number of events for the appropriate runs.
I think [option 1] would only be correct if, for N_T, I determined the
total number of events (summed over all valid runs) in each momentum bin.
[option 2] does not have this issue --- it is (number of events that pass
all cuts / total number of events) * (total number of events for parallel
running).
Please see the corrected note (corrected errors at 4-pass):
http://www.jlab.org/~flay/thesis/A1_error/A1_stat_error_note_v2.pdf
--
-----------------------------------------------------------
David Flay
Physics Department
Temple University
Philadelphia, PA 19122
office: Barton Hall, BA319
phone: (215) 204-1331
e-mail: flay at jlab.org
flay at temple.edu
website: http://www.jlab.org/~flay
http://quarks.temple.edu
-----------------------------------------------------------