<div dir="ltr"><div class="gmail_quote"><div dir="ltr">Matt,<div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">It seems you subject the true beam photon to your "pseudo-tagging simulation" which is probably just a check on the energy to be sure it will hit a tagger element, but the extra beam photons come from data and incorporate all real detector effects.<br></blockquote><div><br></div><div>Yes, but the efficiency that a given beam photon in the tagged energy window is actually psuedo-tagged is around 97%. whereas in the actual detector the tagging probability varies considerably from counter to counter, and averages around 70% at the beam intensities we are running at right now.</div><div><br></div><div>Here is an interesting study. Repeat the signal MC simulation but randomly delete some fraction of the true psueo-tags before mcsmear. Then repeat the analysis as before, and rescale the acceptance by the fraction of preserved true tags. For example, if I drop 50% of the true tags from the simulation before mcsmear, and my acceptance comes out just rescaled by this factor of 50% relative to what it was before then my analysis is completely insensitive to the tagging scheme I am using, at least with regard to the differential cross section. For a polarization-sensitive analysis,a similar comparison of angular distributions also probes my sensitivity to systematic variation in the polarization spectrum coming from accidentals remaining in my final sample. That would be a good place to start, and help us to bracket the size of this systematics issue in any analysis.</div><div><br></div><div>-Richard Jones</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Feb 23, 2023 at 9:20 AM Shepherd, Matthew <<a href="mailto:mashephe@indiana.edu" target="_blank">mashephe@indiana.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">*Message sent from a system outside of UConn.*<br>
<br>
<br>
> On Feb 23, 2023, at 7:43 AM, Richard Jones <<a href="mailto:richard.t.jones@uconn.edu" target="_blank">richard.t.jones@uconn.edu</a>> wrote:<br>
> <br>
> Yes, but at nominal GlueX intensity, this cannot be as low as 1%, can it? Can you look into that and see whether that number might be based on hdgeant4 Monte Carlo, where there is no tagger simulation present?<br>
<br>
Your calculation seems logical, and yes, I think we can look at this and provide some firm numbers.<br>
<br>
> All of this can be modeled with a simulation, but the present hdgeant[4] simulation is not set up for this. It was designed under the assumption that accidentals subtraction would be used.<br>
> <br>
> To get quantitative estimates of the systematic errors on our physics results from these effects, we need a simulation that includes the real tagger and not what is in there now, which I call a "pseudo-tagging simulation". This sounds like a lot of work to do correctly, with the efficiency of each individual fiber properly accounted for, column overlaps which vary at the 10% level along the microscope, etc. We might start off by doing bits of it and seeing how dependent the results are on the detailed inputs, or whether they are governed largely by just one or a few effective parameters.<br>
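As a rough illustration of what "doing bits of it" could look like, here is a minimal sketch (hypothetical efficiency values and function names, not anything present in hdgeant4 today) of replacing a flat energy-window check with a per-counter tagging probability:<br><br>
<pre>
# Minimal sketch only -- not existing hdgeant4/mcsmear code.
# Replace the flat "is the energy in the tagged window?" check with a
# per-counter tagging probability (values below are invented placeholders).
import random

counter_efficiency = {           # hypothetical per-counter probabilities
    ("TAGM", 27): 0.72,
    ("TAGM", 28): 0.65,
    ("TAGH", 101): 0.80,
}

def pseudo_tag(counter):
    """Return True if the true beam photon assigned to this counter gets a tag."""
    return random.random() < counter_efficiency.get(counter, 0.0)
</pre>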
<br>
Can you elaborate on why a more detailed simulation is needed? It is clear that the true efficiency of the tagger must have complex effects and variation between elements, etc...<br>
<br>
This is where my knowledge of the exact details of how the simulation is implemented gets foggy... and I may completely misunderstand what is being done.<br>
<br>
My understanding is that we inject extra beam photons into the analysis, and these photons come from out-of-time data. Therefore, these extra photons have all the rate dependence, true tagger inefficiencies, etc. already built in. So the probability and energy spectrum of extra beam photon hits in the tagger match reality by construction (also in a rate-dependent way, because of how we manage random noise injection).<br>
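For concreteness, a conceptual sketch of that overlay step, with hypothetical data structures rather than the actual mcsmear machinery:<br><br>
<pre>
# Conceptual sketch only, with made-up data structures -- not the actual
# mcsmear implementation.  Out-of-time tagger hits are taken from real
# (random-trigger) data and overlaid on the simulated event, so the extra
# beam photons inherit rate dependence and true counter inefficiencies
# from data by construction.
import random

def overlay_out_of_time_hits(sim_event, random_trigger_events):
    donor = random.choice(random_trigger_events)      # one real-data event
    sim_event.tagger_hits.extend(donor.tagger_hits)   # keep the true tag, add real hits
    return sim_event
</pre>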
<br>
It seems you subject the true beam photon to your "pseudo-tagging simulation" which is probably just a check on the energy to be sure it will hit a tagger element, but the extra beam photons come from data and incorporate all real detector effects.<br>
<br>
At the end of the analysis (after one chooses an algorithm to manage accidentals) it seems like what you have is an efficiency-corrected event yield times the average probability that the true beam photon lands in the tagger acceptance -- call this the pseudo-tagging efficiency. I think this outcome is generally independent of the subtraction or best-chi^2 algorithm, but there may be differing levels of statistical or systematic error in each case (and the previously mentioned complications with smearing the flux spectrum at high rates).<br>
<br>
The key step in going to the cross section is then to use the tagged flux, but in measuring this flux one also uses the same pseudo-tagging algorithm, which has the same pseudo-tagging efficiency that is applied to the beam photon in the simulation. This results in a cancellation that yields the true cross section.<br>
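Written schematically (symbols introduced here only to restate the argument above: epsilon_pt is the average pseudo-tagging efficiency, A the detector acceptance apart from the tagging step, Phi_true the true photon flux in the tagged range, Y the uncorrected signal yield), the claimed cancellation is:<br><br>
<pre>
\frac{Y}{A} \approx \epsilon_{pt}\,\sigma\,\Phi_{\mathrm{true}},
\qquad
\Phi_{\mathrm{tagged}} \approx \epsilon_{pt}\,\Phi_{\mathrm{true}}
\quad\Longrightarrow\quad
\sigma \approx \frac{Y}{A\,\Phi_{\mathrm{tagged}}}
</pre>
so epsilon_pt is supposed to cancel between the acceptance correction and the tagged flux, if the same pseudo-tagging efficiency really enters both places.<br>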
<br>
Do I have an accurate picture of how the simulation is implemented?<br>
<br>
Thanks,<br>
<br>
Matt<br>
<br>
</blockquote></div>
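<div><br></div><div>Appended sketch of the tag-dropout test described above: a self-contained toy pseudo-experiment (not GlueX analysis code; the efficiency numbers are illustrative only), showing the ratio one would compare against the preserved-tag fraction.</div>
<pre>
# Toy pseudo-experiment for the tag-dropout study described above.
# Illustrative only -- not GlueX analysis code.
import random

def toy_acceptance(n_events, drop_fraction, detector_eff=0.35):
    """Fraction of generated events surviving a stand-in detector selection
    after randomly deleting the given fraction of true tags."""
    kept = 0
    for _ in range(n_events):
        keep_tag = random.random() >= drop_fraction   # randomly delete true tags
        detected = random.random() < detector_eff     # stand-in for reconstruction/cuts
        if keep_tag and detected:
            kept += 1
    return kept / n_events

a_nominal = toy_acceptance(200_000, drop_fraction=0.0)
a_dropped = toy_acceptance(200_000, drop_fraction=0.5)

# In this toy the ratio comes out at 0.5 by construction; in the real
# analysis the interesting question is whether the ratio deviates from the
# preserved-tag fraction, which would signal sensitivity to the tagging scheme.
print(a_dropped / a_nominal)
</pre>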
</div></div>