[Halld-physics] draft text of the eta-Primakoff proposal update
Gan, Liping
ganl at uncw.edu
Wed Dec 1 09:14:10 EST 2010
Dear Matt,
The photon flux control itself is done entirely by an independent procedure. As we did in Hall B, we will use the tagger to tag the photons upstream in the beam line, measure the absolute tagging efficiency with a total absorption counter at low beam intensity, and measure the relative tagging efficiency with the pair spectrometer at the beam intensity of the production runs.
The Compton cross section measurement will provide a comprehensive cross check on the overall systematic error by comparing the measured cross section with the theoretical calculation. That error receives contributions from several sources, including the photon flux, target thickness and density, detection efficiency, and trigger efficiency. Therefore, a 3.2% measurement error on the Compton cross section does not translate into a 3.2% error on the photon flux.
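To illustrate the quadrature arithmetic, here is a minimal sketch in Python (the sub-error values are purely hypothetical, chosen only to add up to roughly 3.2%, and are not our actual error budget):

import math

# Hypothetical breakdown of a ~3.2% Compton cross-section error (illustrative values only)
sources = {"photon flux": 1.0, "target thickness/density": 1.5,
           "detection efficiency": 2.2, "trigger efficiency": 1.5}  # percent
total = math.sqrt(sum(err**2 for err in sources.values()))
print(f"total Compton error: {total:.1f}%")  # ~3.2%, yet the flux term alone is only 1.0%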
Liping
________________________________________
From: halld-physics-bounces at jlab.org [halld-physics-bounces at jlab.org] On Behalf Of Matthew Shepherd [mashephe at indiana.edu]
Sent: Wednesday, December 01, 2010 6:28 AM
To: Ashot Gasparian
Cc: GlueX Physics
Subject: Re: [Halld-physics] draft text of the eta-Primakoff proposal update
Hi Ashot,
This is a good discussion -- it is nice to try to get to the bottom of these things.
Presumably the cross check of the photon flux is the Compton cross section -- you must measure this to within 1% in order to know that your photon flux systematic error is 1%. The two other dominant systematic errors, background subtraction and event selection, probably have nothing to do with the Compton cross section or its error, since backgrounds and event selection are different for the Compton analysis. This is also probably why you use the Compton as a cross check and not as a normalization: the systematic errors of the two measurements are not 100% correlated -- and they will be even less correlated in GlueX, since the Compton gets measured partially with a different detector. Therefore, if you only make a 3.2% measurement of Compton, you can only claim you know the photon flux to 3.2%. Correct me if I'm wrong, but you still also must budget for background and event selection errors that have nothing to do with Compton.
Forget about the photon flux for now... this table (attached below) also assumes a detection efficiency error of 0.5%. For the eta -> gamma gamma mode, where both photons are detected in the FCAL, this means that you must have a systematic error on photon detection efficiency of 0.25% per photon. (The systematic errors are 100% correlated, so they add linearly rather than in quadrature.) How do we plan to get our photon detection efficiency in the FCAL understood at the precision of 0.25%? This is a huge problem -- I don't see how we can state we are going to deliver this!
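The arithmetic, as a quick sketch -- fully correlated errors add linearly:

per_photon = 0.25   # percent: required FCAL photon-efficiency systematic
n_photons = 2       # both eta -> gamma gamma photons land in the FCAL
# 100% correlated errors add linearly, not in quadrature:
print(f"eta efficiency error: {n_photons * per_photon:.2f}%")  # 0.50%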
(You can mitigate the problem a little by normalizing to Compton, assuming it is known at the sub-percent level. There will be correlation between the Primakoff efficiency and the Compton efficiency, since one of the photons from each is detected in the FCAL. However, this doesn't completely remove the issue, since the Compton measurement also requires the comp-cal.)
Why does it matter?
First, it affects the advertised precision of the experiment. It isn't fun to claim precision that ultimately can't be delivered -- and it is arguably disingenuous to claim precision that we know can't be delivered. You state an overall 3.0% systematic error. If I put in what I think is a more realistic flux error, say 3%, and a more realistic, but still optimistic, photon detection efficiency error of 1.5% per photon (3% for an eta), then the total systematic error goes to 5.1%. (This assumes the background and event selection errors remain as they are.) The total error on the cross section then goes to 5.2%.
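Here is the arithmetic behind those numbers, as a sketch; the ~2.8% "other" term (background plus event selection) is my inference from the quoted 3.0% total with a 1% flux error and 0.5% efficiency error:

import math

flux, eff = 3.0, 3.0                 # percent: my more realistic guesses from above
other = 2.8                          # percent: background + event selection, inferred from the table
syst = math.sqrt(flux**2 + eff**2 + other**2)
total = math.sqrt(syst**2 + 1.0**2)  # add the 1% statistical error in quadrature
print(f"systematic: {syst:.1f}%, total: {total:.1f}%")  # ~5.1%, ~5.2%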
This also affects the justification for beam time. Your beam time allotment is based on needing a 1% statistical error. If we take my systematic errors above and reduce the allocated beam time by a factor of three, then the total experiment error goes from 5.2% to only 5.4%, an almost insignificant increase. I know some of your systematic errors, like the photon flux one above, probably scale with statistics too, but maybe a shorter beam allotment would allow the experiment to be worked into the schedule much sooner. I may not be speaking for all of my GlueX colleagues, but I think a short run of solenoid-off data at the start of the experiment could be pretty interesting for trying to shake down various aspects of the detector. This is obviously a place where more discussion is needed -- I don't want to distract us from addressing the specifics of the proposal.
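The beam-time trade-off in the same terms (a sketch; one-third the beam time scales the statistical error by sqrt(3)):

import math

syst = 5.1                  # percent, from the sketch above
stat = 1.0 * math.sqrt(3)   # 1% statistical error grows by sqrt(3) with 1/3 the beam time
print(f"total with 1/3 beam time: {math.sqrt(syst**2 + stat**2):.1f}%")  # ~5.4%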
The bottom line is that we need to be realistic about what we are asking of the detector. If you plan to make 0.25%-level measurements, we must have some inkling of how to do this. PrimEx was clever and designed this into their detector from the start -- that is not possible in GlueX, and you can't assume that because PrimEx was able to achieve this precision, GlueX will also be able to do so.
-Matt