[Halld-tagger] [EXTERNAL] Fwd: Accidental subtraction

Shepherd, Matthew mashephe at indiana.edu
Thu Feb 23 06:46:43 EST 2023


I was thinking more about this last night...  

For the class of events where one uses the best chi^2 and selects the wrong photon, the beam energy (and hence the flux) is determined not by the tagger alone but by the spectrometer, which we know has inferior resolution.  Therefore one loses sensitivity to sharp variations in flux with energy, and one loses the benefit of an accurate z-momentum constraint in the analysis.

It seems the question is:  what is the rate at which this happens for some practical analysis?  If one picks the wrong photon at the sub-1% level, then I'd say that for most analyses we just don't care.

It is clear that the rate at which one picks the wrong beam photon goes to 100% at infinite tagger rate.  At that point, there will always be a photon in the tagger that, within tagger resolution, matches the spectrometer energy.  And since the spectrometer energy resolution is relatively poor, it will often be the wrong photon.  (I think this is exactly what Richard said in his initial email.... at high rate the procedure results in an untagged analysis.)
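
A back-of-the-envelope Poisson estimate makes the trend explicit.  (Everything in this sketch -- the uniform accidental spectrum, the 0.3 GeV effective matching window, the 3 GeV tagged range -- is a placeholder assumption, not a GlueX number.)

import math

# Toy estimate: probability that at least one accidental tagger hit falls
# within the spectrometer-resolution window around the reconstructed energy.
# All numbers are placeholders, not measured values.
def p_false_match(hits_per_bucket, window_gev=0.3, tagged_range_gev=3.0):
    # accidental hits assumed spread uniformly over the tagged energy range;
    # expected number landing inside the matching window:
    mu = hits_per_bucket * window_gev / tagged_range_gev
    return 1.0 - math.exp(-mu)   # Poisson probability of >= 1 such hit

for n in (0.5, 1, 2, 5, 10, 20):
    print(f"{n:5.1f} in-time hits/bucket -> P(false match) ~ {p_false_match(n):.2f}")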

I don't see any reason why we can't get a relatively accurate picture of the fraction of events where we choose the wrong beam photon using signal MC.  I believe it should correctly predict this under the standard assumptions about the accuracy of the simulated spectrometer resolution, etc.  (There is a technical question I raised earlier about when the random hits are injected and when events are discarded because they have no photon candidate within the tagger acceptance -- the ordering of these steps seems very important.)
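
A sketch of what such a check might look like (the array names and the 20 MeV truth-matching tolerance are hypothetical illustrations, not the actual analysis output):

import numpy as np

# Sketch: estimate from signal MC how often the best-chi^2 choice selects a
# beam photon other than the thrown (true) one.  e_chosen / e_thrown are
# hypothetical per-event arrays taken from the analysis tree plus MC truth.
def wrong_photon_fraction(e_chosen, e_thrown, tol_gev=0.02):
    wrong = np.abs(e_chosen - e_thrown) > tol_gev   # disagrees beyond tagger resolution
    return wrong.mean()

def wrong_fraction_vs_energy(e_chosen, e_thrown, bins, tol_gev=0.02):
    # binned in thrown energy, to expose localized failure modes such as the
    # bin just above the coherent peak discussed below
    wrong = np.abs(e_chosen - e_thrown) > tol_gev
    total, edges = np.histogram(e_thrown, bins=bins)
    bad, _ = np.histogram(e_thrown[wrong], bins=edges)
    frac = np.divide(bad, total, out=np.zeros(len(total)), where=total > 0)
    return frac, edges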

So, it seems like a viable solution would be... if an analyst chooses to use the best chi^2 method, the analyst should demonstrate that the fraction of events in which the wrong choice is made is small enough to be below the scale of the systematic uncertainties in the analysis.  This may not be quantitatively precise, but it should define a regime where we start to care.  And this would be more efficient than demonstrating that the entire analysis can be done both ways to obtain the same result.

One must be mindful of the potential for spectacular failure modes... for example, if one is measuring the cross section in the beam-energy bin just above the coherent peak, where the flux is small, then a large number of events with true beam photons in the coherent peak will have spectrometer energies in the region above the peak (due to spectrometer resolution).  If the tagger rate is high enough, then all of these events can be matched to a tagger hit above the peak, yet none of them actually came from that region because the flux there is so small.  This would dramatically distort the cross section in that energy bin.  (I think I'm restating part of what Richard said initially.)  Such a case would be easily detectable by the MC test above, since the MC would show that the wrong beam photon is picked very frequently in that energy bin.
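
A ten-line toy illustrates the migration.  (The peak position, flux shape, and 3% spectrometer resolution are all invented for the illustration, not GlueX values.)

import numpy as np
rng = np.random.default_rng(1)

# Toy flux: a large coherent peak near 8.8 GeV and very little flux above it.
e_true = np.concatenate([rng.normal(8.8, 0.15, 100_000),   # coherent peak (placeholder shape)
                         rng.uniform(9.1, 9.6, 500)])      # sparse flux above the peak

# Spectrometer-only energy estimate, smeared by a placeholder ~3% resolution.
e_spec = e_true * rng.normal(1.0, 0.03, e_true.size)

above = (e_spec > 9.1) & (e_spec < 9.6)
migrated = np.mean(e_true[above] < 9.0)
print(f"fraction of 'above-peak' events that really came from the peak: {migrated:.0%}")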

I think this has been a useful discussion.  It would be good to come to some consensus and be sure that these alternate approaches that dramatically simplify the analysis process are also valid at the level we need them to be.

Matt


> On Feb 22, 2023, at 6:43 PM, Shepherd, Matthew via Halld-tagger <halld-tagger at jlab.org> wrote:
> 
> 
> Hi all,
> 
> This is a very difficult forum for this discussion.... 
> 
> I understand what Richard and Jon are saying but I still don't see any technical problem -- I think the solutions are built on different strategies.
> 
> We should be cautious about blanket statements about what we can't do... we technically can't do a lot of things we are doing.  Part of getting things done efficiently is trying to figure out when it matters.  
> 
> Let's focus on the strategy of picking the best kinematic-fit chi^2 in the case that there are multiple beam photons in the signal RF bucket.  For most of the analyses I've seen, this happens at the percent level (after all selection criteria are applied).
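> 
> (As a sketch, with the function, the tuple layout, and the 2.004 ns half-bucket cut all hypothetical rather than taken from the actual analysis code, the selection amounts to the following.)
> 
> def select_beam_photon(combos):
>     """Pick the beam-photon hypothesis with the smallest kinematic-fit chi^2
>     among candidates in the signal RF bucket.  'combos' is a hypothetical list
>     of (beam_energy, kinfit_chi2, delta_t_rf_ns) tuples for one event."""
>     in_time = [c for c in combos if abs(c[2]) < 2.004]   # placeholder half-bucket cut in ns
>     if not in_time:
>         return None   # no in-time candidate: the event is dropped
>     return min(in_time, key=lambda c: c[1])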
> 
> I think the point that Richard is trying to make is that as the rate goes up one gets (in an extreme case) to a point where there is always a beam photon with an energy sufficient to pass the analysis cuts, even if the true beam photon was actually undetected by the tagger.  The tagging rate then becomes a function of energy and also, at some level, of the final state, since the constraints on energy conservation depend on the final-state particles.
> 
> When doing an accidental subtraction one effectively measures this false-positive tagging rate by using out-of-time photons, which cannot be the beam photon, and determining what fraction of them pass the analysis criteria.  One then subtracts that contribution.
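> 
> (Schematically, the subtraction is just an event weight; the 4.008 ns bucket spacing and the choice of 8 out-of-time buckets below are placeholders, not the values used in any particular analysis.)
> 
> def accidental_weight(delta_t_rf_ns, bucket_ns=4.008, n_out_of_time=8):
>     """Weight +1 for candidates in the signal RF bucket and -1/N for candidates
>     taken from N out-of-time buckets, so that the accidental (false-positive)
>     contribution cancels on average in any histogram filled with these weights."""
>     if round(delta_t_rf_ns / bucket_ns) == 0:
>         return 1.0
>     return -1.0 / n_out_of_time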
> 
> In the best chi^2 method this increase in efficiency is simulated in MC by providing extra beam photons (presumably from real data) at a rate and energy distribution consistent with the run conditions.  These then "inflate" the efficiency.  (I suppose there is a technical implementation issue here -- one has to inject these extra beam photons into events where the true beam photon was outside the tagger acceptance as well.  If that is not done, then the MC won't have this uptick in efficiency.)
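> 
> (A sketch of that injection step; the Poisson model and the flux_sampler callable are assumptions for illustration.  Per the ordering issue mentioned above, the injection has to happen before any requirement that an in-time tagger candidate exist.)
> 
> import numpy as np
> rng = np.random.default_rng()
> 
> def inject_accidentals(event_photons, mean_extra_per_bucket, flux_sampler):
>     """Add randomly sampled extra beam photons to a simulated event.
>     flux_sampler(n) is a hypothetical callable returning n energies drawn from
>     the measured beam-energy spectrum for the run conditions."""
>     n_extra = rng.poisson(mean_extra_per_bucket)
>     return list(event_photons) + list(flux_sampler(n_extra))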
> 
> It seems that the key thing is that the tagger acceptance used in signal MC needs to match that used in the calibrated PS flux.  Any aspect of matching to the tagger in the analysis, or of correcting for false matches, should be handled correctly provided the same procedure is applied to the MC.  Maybe there are technical problems that I don't understand.  But I don't think this relies on a sophisticated simulation of tagger behavior... one needs the standard detector simulation (which incorporates rate-dependent issues) and just the distribution of extra in-time photons.  After all, that is all they contribute to the analysis process.
> 
>> This feels hacky to me. Could we just go whole-hog and skip using beam photons altogether?
> 
> 
> No, I think that would be a disaster -- we need the firm constraint on z-momentum conservation.  This is unambiguous for something like > 90% of all events in most analyses.  It would really be throwing the baby out with the bathwater.  The final-state particles in the detector get us really close to the right tag, but then the tagger nails it.  It is kind of like bunch finding in the timing:  you need to get close enough to select the right accelerator bunch, and then you know the timing with razor-sharp precision.
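> 
> (The bunch-finding analogy in code form -- a sketch; the ~4.008 ns bucket spacing is quoted from memory and should be checked.)
> 
> def snap_to_rf_bucket(t_measured_ns, t_rf_ns, bucket_ns=4.008):
>     """Coarse detector timing only needs to identify the correct accelerator
>     bucket; once the bucket is known, the RF time fixes the event time with far
>     better precision than the detector measurement itself."""
>     n = round((t_measured_ns - t_rf_ns) / bucket_ns)
>     return t_rf_ns + n * bucket_ns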
> 
> OK, time's up... gotta move on to family errands...
> 
> Matt
> 
> 
>> On Feb 22, 2023, at 5:19 PM, Richard Jones <richard.t.jones at uconn.edu> wrote:
>> 
>> Jon,
>> 	• Is the tagged flux universal if you adopt the typical accidental subtraction scheme?
>> Yes, that is my understanding.
>> 	• Do we gain meaningful statistical precision by doing something that sidesteps accidental subtraction?
>> Yes, the statistical precision will be better using a hybrid scheme. A given data set will yield smaller statistical error bars; splitting up a run into sub-bits and comparing the results from each bit will show smaller scatter between the results from different bits, confirming the statistical errors; and so on. But one is trading off statistical precision here against systematics on the flux.
>> 
>> Here I add another question I thought of that you might be wondering about:
>> 	• Doesn't the simulation take everything into account in the hybrid scheme anyway? So the acceptance becomes rate dependent, which it is already anyway due to pile-up in the detector, so what's the problem with letting the simulation deal with pile-up effects in the tagger as well?
>> One could have designed the simulation that way, but we (I) didn't. My experience told me that systematics from rate-dependent effects start accumulating in the tagger at MUCH lower rates than they show up in the detector subsystems, i.e. extra tracks in the FDC, accidental associations in the start counter, TOF, FCAL, etc. The job of trying to describe the rate-dependent behavior of the individual tagging counters reliably in the simulation is much more difficult, and with limited time and manpower to devote to this task, it could be nonconvergent. That is why there is no tagger microscope or hodoscope in the hdgeant simulation. The downside of that is that accidentals subtraction is the only model-independent method we have with the current toolset to produce a differential cross section.
>> 
>> -Richard Jones
>> 
>> On Wed, Feb 22, 2023 at 4:30 PM Jonathan Zarling <jzarling at jlab.org> wrote:
>> 
>> 
>> 
>> Hi all,
>> 
>> I appreciate the deep thinking on this; I too had been wondering about the implications of the various strategies. I guess I'd like to ask/clarify two things:
>> 	• Is the tagged flux universal if you adopt the typical accidental subtraction scheme?
>> I.e., if properly calibrated, it should incorporate any rate dependence, tagger inefficiency, dependence on final state, etc., assuming this strategy fits well with the downstream analysis. On the other hand, if you do something else, say our "pick the best in-time chi^2 photon" cut, the tagged flux doesn't have any easy correspondence.
>> 	• Do we gain meaningful statistical precision by doing something that sidesteps accidental subtraction?
>> If the tagger efficiency becomes large enough in a stats-limited analysis, this "pick the best in-time chi^2 photon" approach would also pick up events where the true photon is lost but some accidental comes along with a similar-ish energy. This feels hacky to me. Could we just go whole-hog and skip using beam photons altogether? At least in principle. I'm wondering about the b1 pi cross sections and charmonia measurements. Just curious if the charge to the beamline group becomes the same here.
>> I hope I'm not retreading anything above; there was a lot to go through here. I guess I'm particularly interested in making sure the eta p cross section results (which DON'T do any of this best-combo picking, only the typical accidental subtraction) shouldn't be affected by the discussion in this chain.
>> 
>> Cheers,
>> Jon
>> From: Halld-tagger <halld-tagger-bounces at jlab.org> on behalf of Richard Jones via Halld-tagger <halld-tagger at jlab.org>
>> Sent: Tuesday, February 21, 2023 9:49 AM
>> To: Shepherd, Matthew <mashephe at indiana.edu>
>> Cc: Hall D beam working group <halld-tagger at jlab.org>
>> Subject: Re: [Halld-tagger] [EXTERNAL] Fwd: Accidental subtraction
>> 
>> Matt,
>> 
>> You cannot use the tagged flux unless you use an accidentals subtraction algorithm. Here are the rules I am claiming.
>> 	• if you do accidentals subtraction then you have complicated PWA fits, but at least you know what your flux should be.
>> 	• if you do hybrid tagging without full accidentals subtraction then you have simple PWA fits, but then you have problems knowing what your flux should be.
>> Something of a no-free-lunch theorem applies here. See my first response to Peter for more details on how the flux is problematic in a hybrid tagging scheme.
>> -Richard
>> 
>> On Tue, Feb 21, 2023 at 9:02 AM Shepherd, Matthew <mashephe at indiana.edu> wrote:
>> 
>> 
>> 
>> 
>> ---------- Forwarded message ----------
>> From: "Shepherd, Matthew" <mashephe at indiana.edu>
>> To: Richard Jones <richard.t.jones at uconn.edu>
>> Cc: Hall D beam working group <halld-tagger at jlab.org>
>> Bcc: 
>> Date: Tue, 21 Feb 2023 14:01:31 +0000
>> Subject: Re: [Halld-tagger] [EXTERNAL] Fwd: Accidental subtraction
>> 
>> Hi Richard,
>> 
>>> On Feb 20, 2023, at 11:08 AM, Richard Jones <richard.t.jones at uconn.edu> wrote:
>> 
>>> Likewise with trying to measure an absolute differential cross section for the a2 over a continuum of rho,pi using amplitude analysis to extract the a2 part. The problem I am pointing to here is this: what to use for the flux is no longer model-independent if you are not doing proper accidentals subtraction.
>> 
>> Not sure I understand the details here... "model-independent"?
>> 
>> When doing amplitude analysis, the output of the analysis is a tagged, acceptance-corrected yield over a range of beam energy.  We then use the tagged flux to turn this number into a cross section.  When obtaining the tagged, acceptance-corrected yield, we can use two methods of handling pileup of beam photons in the signal RF bin, and they produce the same result.
>> 
>> Matt
>> 
> 
