[Halld-physics] draft text of the eta-Primakoff proposal update
Matthew Shepherd
mashephe at indiana.edu
Wed Dec 1 06:28:49 EST 2010
Hi Ashot,
This is a good discussion -- it is nice to try to get to the bottom of these things.
Presumably the cross check of the photon flux is the Compton cross section -- you must measure this to within 1% in order to know that your photon flux systematic error is 1%. The two other dominant systematic errors, background subtraction and event selection, probably have nothing to do with the Compton cross section or its error, since the backgrounds and event selection are different for the Compton analysis. This is also probably why you use Compton as a cross check and not as a normalization: the systematic errors of the two measurements are not 100% correlated -- and they will be even less correlated in GlueX, since the Compton reaction gets measured partly with a different detector. Therefore, if you only make a 3.2% measurement of Compton, you can only claim to know the photon flux to 3.2%. Correct me if I'm wrong, but you still also have to budget for background and event-selection errors that have nothing to do with Compton.
Forget about the photon flux for now... this table (attached below) also assumes a detection efficiency error of 0.5%. For the eta -> gamma gamma mode, where both photons are detected in the FCAL, this means that you must have a systematic error on the photon detection efficiency of 0.25% per photon. (The systematic errors are 100% correlated.) How do we plan to get our photon detection efficiency in the FCAL understood at the 0.25% level? This is a huge problem -- I don't see how we can state we are going to deliver this!
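To make the correlation arithmetic explicit, here is a quick sketch in Python; the 0.25% per-photon figure is the one implied by the table above, and the uncorrelated case is shown only for comparison:

  import math

  per_photon = 0.25   # % relative error on single-photon detection efficiency in the FCAL

  # Fully correlated (same calorimeter, same calibration): the errors add linearly.
  correlated = 2 * per_photon               # 0.50 % on the eta -> gamma gamma efficiency

  # If they were uncorrelated, they would add in quadrature instead.
  uncorrelated = math.sqrt(2) * per_photon  # ~0.35 %

  print(f"correlated: {correlated:.2f}%, uncorrelated: {uncorrelated:.2f}%")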
(You can mitigate the problem a little by normalizing to Compton, assuming it is known at the sub-percent level. There will be some correlation between the Primakoff efficiency and the Compton efficiency, since one photon from each reaction is detected in the FCAL. However, this doesn't completely remove the issue, since the Compton measurement also relies on the comp-cal.)
Why does it matter?
First, it affects the advertised precision of the experiment. It isn't fun to claim a precision that ultimately can't be delivered -- and it is arguably disingenuous to claim a precision that we know can't be delivered. You state an overall 3.0% systematic error. If I put in what I think is a more realistic flux error, say 3%, and a more realistic, but still optimistic, photon detection efficiency error of 1.5% per photon (3% for an eta), then the total systematic error goes to 5.1%. (This assumes the background and event selection errors remain as they are.) The total error on the cross section then goes to 5.2%.
This also affects the justification for beam time. Your beam time allotment is based on needing a 1% statistical error. If we take my systematic errors above and reduce the allocated beam time by a factor of three, then the total experiment error only goes from 5.2% to 5.4% -- an almost insignificant change. I know some of your systematic errors, like the photon flux one above, probably scale with statistics too, but a shorter beam allotment might allow the experiment to be worked into the schedule much sooner. I may not be speaking for all of my GlueX colleagues, but I think a short run of solenoid-off data at the start of the experiment could be pretty interesting for shaking down various aspects of the detector. This is obviously a place where more discussion is needed -- I don't want to distract us from addressing the specifics of the proposal.
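Just to make the arithmetic of the last two paragraphs explicit, here is a rough back-of-the-envelope sketch in Python. The ~2.8% combined background-subtraction and event-selection term is not stated above; I have simply back-solved it from the 3.0% total assuming 1% flux and 0.5% efficiency, so treat it as illustrative rather than as the proposal's own number:

  import math

  def quad(*errs):
      # combine independent errors in quadrature (all in percent)
      return math.sqrt(sum(e * e for e in errs))

  # "other" = background subtraction + event selection, back-solved from the
  # proposal's 3.0% total systematic error with 1% flux and 0.5% efficiency.
  other = math.sqrt(3.0**2 - 1.0**2 - 0.5**2)   # ~2.8 %

  syst       = quad(3.0, 3.0, other)            # 3% flux, 3% eta efficiency, other unchanged -> ~5.1 %
  total      = quad(syst, 1.0)                  # combined with the 1% statistical error      -> ~5.2 %
  total_1of3 = quad(syst, 1.0 * math.sqrt(3))   # statistical error with 1/3 the beam time    -> ~5.4 %

  print(f"syst = {syst:.1f}%, total = {total:.1f}%, total with 1/3 beam time = {total_1of3:.1f}%")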
The bottom line is that we need to be realistic about what we are asking of the detector. If you plan to make 0.25%-level measurements, we must have some idea of how to do this. PrimEx was clever and designed this capability into its detector from the start -- that is not possible in GlueX, and you can't assume that because PrimEx was able to achieve this precision, GlueX will also be able to do so.
-Matt
-------------- next part --------------
A non-text attachment was scrubbed...
Name: syst_errors.pdf
Type: application/pdf
Size: 45328 bytes
Desc: not available
Url : https://mailman.jlab.org/pipermail/halld-physics/attachments/20101201/17cc6e72/attachment-0001.pdf
-------------- next part --------------
On Dec 1, 2010, at 12:34 AM, Ashot Gasparian wrote:
>
> Hi Matt,
>
> In my email I did not argue against the 1% error on the photon flux;
> that is certainly still in our proposal. The total
> error on the Compton cross section has to be on the same level
> as the eta cross section, which is 3.2%. Once more, we need to
> provide the error on the Compton cross section measurement at the
> level of 3.2% (total).
>
> Regards,
> Ashot
>
>
> .............................................................
> Ashot Gasparian Phone:(336)285-2112 (NC A&T)
> Professor of Physics
> Physics Department (757)-269-7914 JLab
> NC A&T State University Fax:(757)-269-6273 JLab
> Greensboro, NC 27411 email: gasparan at jlab.org
> .............................................................
>
>
> On Tue, 30 Nov 2010, Matthew Shepherd wrote:
>
>>
>> For completeness, I'm referring to slide 21 here:
>>
>> http://www.jlab.org/~gasparan/PAC35/PAC35_Gasparian.pdf
>>
>> It lists the same 3.2% you note as "relaxed" below, and assumes a 1% flux error.
>>
>> -Matt
>>
>> On Nov 30, 2010, at 7:24 PM, Matthew Shepherd wrote:
>>
>>>
>>> Hi Ashot,
>>>
>>> Sorry, I don't understand. There are other systematic errors besides photon flux. Slide 21 of your proposal says that to achieve a 3% systematic error (which, when combined with 1% stat error, gives 3.2% total error) you need a 1% error on the photon flux. Other errors like background subtraction and event selection are also very significant.
>>>
>>> If you plan a 1% error on the photon flux, presumably you need to know the efficiency for detecting a Compton electron and photon together to 1%. These systematic errors, in the best case, are uncorrelated. This means you need 1% / sqrt( 2 ) or 0.7% uncertainty on the detection efficiency in both the FCAL and comp-cal. Did I miss something?
>>>
>>> -Matt
>>>
>>> On Nov 30, 2010, at 6:52 PM, Ashot Gasparian wrote:
>>>
>>>>
>>>> Hi Matt,
>>>>
>>>> I completely agree with everything you say below in your email.
>>>> These are all difficult issues that need to be worked out, BUT there is
>>>> one thing which may make today's discussion much more relaxed:
>>>> the 1% uncertainty in the Compton cross section is an overkill
>>>> statement and we need to change it for this proposal. It is left over
>>>> from PrimEx and our original proposal, where we were looking
>>>> for a 2%-level measurement of the eta decay rate. SINCE we have a new,
>>>> relaxed error bar in this proposal, which is 3.2% in total, the
>>>> requirement for the Compton also SHOULD be at a similar
>>>> 3.2% level.
>>>>
>>>> If the 1% number is left in the proposal, then we need to change it.
>>>>
>>>> Hope this new corrected number makes things much easier, though, I agree,
>>>> we need to look at ways to measure the detection efficiencies in
>>>> the experiment.
>>>>
>>>> Thanks,
>>>> Ashot
>>>
>>>