[Halld-cal] source of FCAL inefficiency in high intensity running

Elton Smith elton at jlab.org
Fri Sep 15 08:04:29 EDT 2017


Hi Matt, Sean, et al.,

Thanks for tracking this down. It seems the source of the problem has 
been identified and we will have to work out how to move forward most 
efficiently.

As far as the DNP and presenting new results are concerned, I assume the 
2016 and 2017 low-intensity data sets are OK and that only the 2017 
high-intensity period is affected. Can you confirm this?

Thanks, Elton.

Elton Smith
Jefferson Lab MS 12H3
12000 Jefferson Ave STE 4
Newport News, VA 23606
(757)269-7625
(757)269-6331 fax

On 9/14/17 6:30 PM, Shepherd, Matthew wrote:
> Hi all,
>
> Justin reported yesterday in the production meeting that there is an apparent reduction in FCAL efficiency in the spring high-intensity running.  We've been digging into this, and thanks to Sean's detective work, we are now pretty sure we understand what happened.
>
> Summary:
>
> A bogus set of channel-by-channel timing offsets was used in the high-intensity reconstruction, which ultimately resulted in an efficiency loss.
>
> Good news:
>
> * It has nothing to do with the HV settings or the raw data, i.e., it is recoverable.  The problem is only correlated with HV changes because we intentionally redetermined timing constants for the different HV set points.
>
> Bad news:
>
> * Recovery requires reprocessing the data.
> * FCAL efficiency and resolution will be degraded in a nontrivial way (with polar-angle dependence) in the existing REST data for the high-intensity run.
> * The current REST version of the high-intensity run is not usable for DNP for any analysis that depends on getting the efficiency correct, as implementing this effect in the MC is slightly nontrivial.
>
> Details:
>
> Attached is a plot generated by Sean of the timing offsets that get applied to each channel in the high-intensity run.  Units are ns.  The variations across channels in the high-intensity running are simply unphysical: there is no way to get differences on the scale of 20-30 ns between neighboring channels as observed.  (It is not yet known whether this is a mistake or a breakdown of the existing calibration procedure due to some other "feature" of the high-intensity run, e.g., a crate-level timing shift that the algorithm didn't respond well to.)
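>
> For orientation, per-channel offsets of this kind enter the reconstruction roughly as follows (an illustrative sketch only, not the actual halld_recon code; the names here are assumptions):
>
>     #include <vector>
>
>     // Illustrative: the calibrated offset is subtracted from each raw hit
>     // time, so a 20-30 ns error in offset[channel] shifts all of that
>     // channel's corrected times by the same unphysical amount.
>     double correctedTime(double tRaw, int channel,
>                          const std::vector<double>& offset)
>     {
>         return tRaw - offset[channel];
>     }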
>
> The efficiency loss happens because the clustering routine makes a loose timing cut: hits must be within 15 ns of the maximum-energy hit in order to be added to the cluster.  The key problem with these buggy timing offsets is therefore not the absolute scale, but the fact that the offsets are much more *inhomogeneous* in the outer regions than in the inner regions.  In the outer regions, depending on which block is the "seed" block of the cluster, a neighboring block may not be added to the cluster because it fails the timing cut.
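>
> In code terms, the cut is essentially the following (a minimal sketch with illustrative names; the actual clustering code differs in detail):
>
>     #include <cmath>
>
>     // Sketch of the clustering timing cut.  A hit joins a cluster only if
>     // its corrected time is within the cut (nominally 15 ns) of the
>     // maximum-energy "seed" hit.  Offsets that are wrong by 20-30 ns
>     // between neighboring blocks make this cut fail spuriously.
>     bool passesTimingCut(double tHit, double tSeed, double timingCut = 15.0)
>     {
>         return std::fabs(tHit - tSeed) < timingCut;
>     }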
>
> To demonstrate that this is the correct explanation, we widened the timing cut:
>
> -PFCAL:TIMING_CUT=200
>
> This resulted in a significant increase in the pi0 yield in our pi0 skims.
>
> In addition, if FCAL timing is used elsewhere in any capacity, e.g., neutral hypothesis evaluation in the analysis library, then additional efficiency losses are possible.
>
> It turns out the timing-offset problem has a side effect on the gain constants, which Justin noticed last night.  Because hits in the outer regions of the FCAL are randomly thrown away after being incorrectly deemed out of time, the gain calibration tends to compensate for the lost energy by boosting the gains of those blocks.  The result is a suspicious-looking 2D distribution of gain constants (also attached) for the high-intensity running.
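>
> As a toy illustration of that feedback (not the actual calibration code), consider a multiplicative gain update:
>
>     // Toy gain iteration: scale the gain by the ratio of expected to
>     // reconstructed energy.  If the timing cut silently drops a fraction
>     // f of a block's energy, eObserved is biased low and the updated
>     // gain is inflated by roughly 1/(1-f) to compensate.
>     double updatedGain(double gainOld, double eExpected, double eObserved)
>     {
>         return gainOld * (eExpected / eObserved);
>     }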
>
> It goes without saying that making either the plot Sean made or the plot Justin made with the actual constants used for reconstruction would have raised alarm bells.  While we have monitoring plots for data, we might consider having standardized visual depictions of constants that we can check prior to launch.
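>
> As an example of what such a pre-launch check could look like, here is a sketch that scans a grid of timing offsets and flags unphysical jumps between horizontal neighbors (the grid dimensions and the 10 ns threshold are placeholders, not the real FCAL values):
>
>     #include <cmath>
>     #include <cstdio>
>
>     const int NROWS = 59, NCOLS = 59;  // placeholder grid size
>
>     // Report neighboring channel pairs whose offsets differ by more
>     // than a physically plausible limit.
>     void flagSuspiciousOffsets(const double offset[NROWS][NCOLS])
>     {
>         const double maxDiff = 10.0;  // ns, assumed sanity limit
>         for (int r = 0; r < NROWS; ++r)
>             for (int c = 0; c + 1 < NCOLS; ++c)
>                 if (std::fabs(offset[r][c] - offset[r][c+1]) > maxDiff)
>                     std::printf("check row %d, cols %d-%d\n", r, c, c + 1);
>     }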
>
> Our immediate plan is to use the low-intensity timing offsets for the high-intensity running and then redetermine the gain constants using the "histogram fit adjust" method developed by Mike and Will at CMU, which seems to handle edge effects better than the previous method.  This is the minimum that needs to be done before reprocessing of the high-intensity run can start.
>
> This effect most certainly led to resolution degradation in the high-intensity run.  It remains to be seen whether any of this is linked to the apparent large floor term we see in the energy dependence of the resolution.
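>
> (For context, "floor term" here means the constant term b in the standard calorimeter resolution parameterization sigma_E/E = a/sqrt(E) (+) b, with the terms added in quadrature; b dominates at high energy.)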
>
> Matt
>
>
> [Attachment: timing offsets per channel, plotted by Sean]


