[d2n-analysis-talk] target, beam pol. corrections
Brad Sawatzky
brads at jlab.org
Thu Mar 15 17:03:05 EDT 2012
Just a follow-up to our discussion during the meeting.
When you're deciding how best to treat a set of measurements to extract
an observable (be it a cross section or a polarization), you should
think about what is driving any run-by-run differences you are
observing in your observables.
If nothing is changing in your experiment at all (ideal world), then any
run-by-run division is artificial. (The physics don't know or care about
when CODA was running.) Any run-to-run variation in your results is
purely driven by statistics. In that case it is simplest to just bin
all the runs together as one long run and extract your observable from
that.
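As a quick numerical sketch of that point (all numbers hypothetical, not from our data): for a simple counting asymmetry with nothing changing, pooling all the runs into one long run and doing an error-weighted average of run-by-run extractions give the same answer to well within statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: 20 identical runs, each measuring a counting asymmetry
# A = (N+ - N-)/(N+ + N-) with a true asymmetry of 0.02.
true_A = 0.02
n_per_run = 100_000
n_plus = rng.binomial(n_per_run, 0.5 * (1 + true_A), size=20)
n_minus = n_per_run - n_plus

# Method 1: bin all the runs together as one long run.
A_pooled = (n_plus.sum() - n_minus.sum()) / (n_plus.sum() + n_minus.sum())

# Method 2: extract run by run, then take the error-weighted mean.
A_run = (n_plus - n_minus) / (n_plus + n_minus)
sigma_run = np.sqrt((1 - A_run**2) / (n_plus + n_minus))
w = 1.0 / sigma_run**2
A_weighted = np.sum(w * A_run) / np.sum(w)

print(A_pooled, A_weighted)
```

The run-to-run scatter in A_run here is purely statistical, so the two methods agree; it's only when conditions actually drift that the choice starts to matter.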
In real life, the experimental conditions change over any set of runs.
If you know and can measure how your apparatus is changing from run to
run with perfect precision, then you want to make those corrections run
by run too (easy choice). Often, though, the uncertainties associated
with how your apparatus is changing are significant too. Now the choice
isn't so clear.
In the case of the beam polarization, you have good reason to believe
that it should be stable on the order of many hours (at least) -- it
really should be stable on the order of days to week+. In a simplified
model, unless there are machine problems, the beam should only see the
natural depolarization of the photo-cathode in the injector as we pull
charge off it. Every 1--2 weeks the injector guys move the laser spot
to a region on the cathode that hasn't degraded and the beam
polarization snaps back up to full and starts a slow slide again.
If you look at the beam polarization from run to run (i.e. the Compton
measurements) you see a lot of scatter -- that is largely driven by
statistics and the systematics of the Compton apparatus. The beam
polarization isn't really changing run-by-run (we think). In this case you want
to bin enough Compton data/runs together to average out the run-to-run
measurement uncertainties, but still see any slow depolarization effects
or slow-scale variations in the injector setup. You'd fit a line (in
the simple case) to a set of Compton data (which may be bracketed by
Moller measurements) and then use the fit to compute the beam
polarization at any given time. That extracted beam polarization is
only sensitive to the "physical" long term variations in the real beam
polarization, but not the "noise" associated with individual Compton
runs.
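A minimal sketch of that fit (times, polarizations, and uncertainties all made up for illustration): a weighted straight-line fit to noisy Compton points tracks the slow "physical" slide, and you evaluate the fit, not the individual runs, at whatever time you need.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Compton measurements: hours since the last laser-spot
# move, a slow linear depolarization, plus per-measurement scatter.
t = np.linspace(0, 200, 40)            # measurement times [h]
true_P = 0.86 - 2e-4 * t               # slow "physical" slide
sigma = np.full_like(t, 0.01)          # per-run Compton uncertainty
P_meas = true_P + rng.normal(0, sigma)

# Weighted straight-line fit: P(t) = p1*t + p0.
coeffs = np.polyfit(t, P_meas, deg=1, w=1.0 / sigma)
P_fit = np.poly1d(coeffs)

# Beam polarization at any given time comes from the fit line,
# which smooths out the run-to-run "noise".
print(P_fit(100.0))
```

In practice you'd anchor or cross-check the fit against the bracketing Moller points, but the idea is the same: the fit is sensitive to the slow drift, not the per-run scatter.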
You can apply the same type of logic when choosing how best to bin your
LHRS and BigBite runs. In the end you will probably try binning both
ways; if all your associated statistical and systematic errors are
reasonable and propagated correctly, you should get results that are
close to each other. If one method has much smaller uncertainties than
the other, or a very different central value, then it implies you're
not handling a systematic correctly in at least one of the two cases.
There is a nice discussion on these types of issues in Bevington:
http://www.amazon.com/Reduction-Error-Analysis-Physical-Sciences/dp/0072472278/
I'd give you a page number, but my copy seems to be missing... argh.
See if it is in Temple's library and check it out.
-- Brad
--
Brad Sawatzky, PhD <brads at jlab.org> -<>- Jefferson Lab / Hall C / C111
Ph: 757-269-5947 -<>- Fax: 757-269-5235 -<>- Pager: brads-page at jlab.org
The most exciting phrase to hear in science, the one that heralds new
discoveries, is not "Eureka!" but "That's funny..." -- Isaac Asimov