[Halld-cal] FCAL calibration follow-up

Igal Jaegle ijaegle at jlab.org
Thu Apr 30 14:05:38 EDT 2020


Thank you, Matt, for your comments, remarks, and corrections; I really appreciate them. Before we discuss the FCAL energy calibration again, let me first try to address your concerns. Might I then suggest we meet a week from now to discuss the FCAL again, instead of tomorrow?

tks ig.
________________________________
From: Shepherd, Matthew <mashephe at indiana.edu>
Sent: Thursday, April 30, 2020 1:17 PM
To: Igal Jaegle <ijaegle at jlab.org>; Alexander Somov <somov at jlab.org>
Cc: halld-cal at jlab.org <halld-cal at jlab.org>
Subject: [EXTERNAL] FCAL calibration follow-up


Hi Igal and Sasha,

I'll start the discussion we weren't able to have today....

I have two concerns with Igal's proposal:

1) The outer ring gain calibration is dubious.  There is no reason that the blocks around (row,col) = (27,0) should have dramatically higher gains on average than those at (20,20).  The detector is cylindrically symmetric as far as I know.

I don't see how anything meaningful can be extracted from the plots on slide #5.  When the detector was commissioned, the HV on the outer blocks was set in the very same way as on the rest of the detector, using bench measurements of PMT gains.  We found that gain variations in the middle of the detector were minor once we started studying real pi0s.  We therefore expect the outer rings, where we have maintained constant gains and set the HV in the same manner, to behave similarly.  It is highly unlikely that they were all systematically set in high or low groups in the strange pattern that your analysis suggests.

2)  I'm not convinced there is any evidence of a ring dependence in the detector response (aside from perhaps some small leakage into the beam hole).  Compare Igal's ring 2 plots to ring 15: the shape of the background is dramatically different.  It is not evident that the 1-2% variation between these rings observed on page 9 is a real effect.  I think some of the trends you see on slide 9 are related to systematic effects in parameterizing the background in the fits.  For the plots on slides 11, 13, and 15, try varying the fit range substantially or increasing the order of the background polynomial, and see how much the pi0 mean varies.  You need to convince us that the *systematic* uncertainties on the mean are less than 1-2% across a dramatic variation of background shapes.
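To be concrete about the kind of scan I have in mind, here is a rough Python sketch.  The spectrum, binning, and fit ranges below are all invented toy numbers, not your data; in practice you would loop over the real ring-by-ring histograms.

    import numpy as np
    from scipy.optimize import curve_fit

    # Toy gamma-gamma invariant-mass spectrum: a pi0 peak on a smooth,
    # falling background.  Every number here is invented for illustration.
    rng = np.random.default_rng(1)
    signal = rng.normal(0.135, 0.008, 20000)                # GeV
    background = rng.triangular(0.05, 0.05, 0.30, 80000)    # combinatorial shape
    counts, edges = np.histogram(np.concatenate([signal, background]),
                                 bins=125, range=(0.05, 0.30))
    centers = 0.5 * (edges[:-1] + edges[1:])

    def model(x, amp, mean, sigma, *poly):
        """Gaussian peak plus a polynomial background of arbitrary order."""
        return amp * np.exp(-0.5 * ((x - mean) / sigma) ** 2) + np.polyval(poly, x)

    # Scan fit ranges and background polynomial orders and watch how much the
    # fitted pi0 mean moves; the spread of these means is the systematic in question.
    for lo, hi in [(0.08, 0.20), (0.06, 0.25), (0.10, 0.18)]:
        for order in (1, 2, 3):
            sel = (centers > lo) & (centers < hi)
            p0 = ([counts.max(), 0.135, 0.01]
                  + [0.0] * order + [float(np.median(counts[sel]))])
            popt, _ = curve_fit(model, centers[sel], counts[sel], p0=p0, maxfev=10000)
            print(f"range [{lo:.2f}, {hi:.2f}] GeV, pol{order}: "
                  f"mean = {popt[1] * 1e3:.2f} MeV")

The point is simply to tabulate the fitted mean across these variations and quote the spread, ring by ring.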

If there were any ring dependence in the response, then the gain-balancing procedure, if it used fixed-energy photons, would remove it, because it fixes the pi0 mass on a block-by-block basis.  If the gain balancing is done with photons of all energies that are not corrected for non-linearities, then the block-by-block gains are susceptible to being biased by energy non-linearities, because the average energy in each block probably depends on the distance from the beamline.  Indeed, the two are intertwined: in the absence of a nice mono-energetic sample, one may determine a non-linear correction, then go back and redetermine the gains, and then revise the non-linear correction.  This type of iteration has effectively happened over time in the standard procedure.
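Schematically the coupled iteration looks something like the crude toy below.  This is only an illustration: single-photon energy ratios stand in for the real block-by-block pi0-mass fits, and every number (gains, non-linearity, energy spectra) is invented.

    import numpy as np

    rng = np.random.default_rng(2)
    n_blocks = 10
    true_gain = rng.normal(1.0, 0.05, n_blocks)    # unknown per-block gains
    alpha = 0.02                                   # common non-linearity per GeV

    # Inner blocks (low index) see higher average photon energies; this is
    # what couples the gain balancing to the non-linear correction.
    mean_E = np.linspace(4.0, 1.0, n_blocks)
    block = np.repeat(np.arange(n_blocks), 5000)
    E_true = np.clip(rng.exponential(mean_E[block]), 0.2, 8.0)
    E_raw = true_gain[block] * E_true * (1.0 + alpha * E_true)  # detector response

    # Calibrated energy: E_cal = c_b * E_raw * f(E_raw), with per-block gains
    # c_b and a shared polynomial correction f.
    c = np.ones(n_blocks)
    f_coeff = np.array([1.0])                      # f = 1 to start

    for it in range(5):
        f_raw = np.polyval(f_coeff, E_raw)
        # (1) gain balancing: force <E_cal / E_true> = 1 in every block,
        #     the stand-in for fixing the pi0 mass block by block
        for b in range(n_blocks):
            sel = block == b
            c[b] = 1.0 / np.mean(E_raw[sel] * f_raw[sel] / E_true[sel])
        # (2) non-linear correction: with the new gains, refit the residual
        #     energy dependence with a low-order polynomial
        f_coeff = np.polyfit(E_raw, E_true / (c[block] * E_raw), 2)
        # if the procedure converges, c_b * g_b becomes the same for every
        # block, i.e. the block-to-block spread printed here should shrink
        k = c * true_gain
        print(f"iteration {it}: gain spread = {np.std(k) / np.mean(k):.4f}")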

The existing calibration function addresses concerns about extrapolation to high energy, as it is constructed both to match the observed performance and to have stable asymptotic behavior.  Igal's function on slide 9 uses a 5th-order polynomial in energy, and such polynomials tend to be incredibly unstable in any extrapolation.
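A quick illustration of that last point, again with toy numbers; the bounded form below is just a generic stand-in with a finite high-energy limit, not the actual production function.

    import numpy as np
    from scipy.optimize import curve_fit

    # Toy measured/true energy-ratio points up to 6 GeV, fit once with a
    # 5th-order polynomial and once with a form that saturates at high energy.
    rng = np.random.default_rng(3)
    E = np.linspace(0.5, 6.0, 12)
    ratio = 0.97 + 0.02 * np.exp(-0.5 * E) + rng.normal(0.0, 0.002, E.size)

    def bounded(E, p0, p1, p2):
        # stand-in for a calibration function with stable asymptotic behavior
        return p0 + p1 * np.exp(-p2 * E)

    poly5 = np.polyfit(E, ratio, 5)
    popt, _ = curve_fit(bounded, E, ratio, p0=[1.0, 0.05, 0.5])

    # Both describe the points inside [0.5, 6] GeV; compare how they behave
    # once you leave the fitted range.
    for Ex in (6.0, 8.0, 10.0, 12.0):
        print(f"E = {Ex:4.1f} GeV: pol5 -> {np.polyval(poly5, Ex):.3f}, "
              f"bounded -> {bounded(Ex, *popt):.3f}")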

I'm certainly not claiming that the existing approach is complete or, most importantly, that it meets the precision needs of PrimEx.  However, it would be really nice if you would first demonstrate that the existing approach does not work, or breaks down at some desired level of precision.  It is trivial to take the framework and data sample you have and just "turn the handle" with the existing function we have used for production up to now, iterate, and study the ring dependence of the response.  Why not do that first?  Once you do this, we can see where it is deficient and address those deficiencies directly.  It is much easier to iteratively improve a strategy than to evaluate something entirely new from scratch.  Also, this new proposal of yours increases the number of calibration constants from 6 to 138.  It would be nice to be sure that such added complexity actually results in real improvements.

I'm happy to talk about this more, and maybe it is easier to do in a call than over email.  If you want to have a dedicated informal meeting to discuss just this, we can, and we can invite anyone else who has input to this conversation.

We already have a JEF meeting tomorrow morning, and I have two other meetings tomorrow.  Your work has now come up at two consecutive working group meetings spanning almost 3 hours, and we have yet to find the time to really dive into the details.  I'm not sure the PrimEx meeting tomorrow is going to be an opportunity to do that.

Cheers,

Matt
