[Halld-cal] fcal in the trigger

Shepherd, Matthew mashephe at indiana.edu
Fri Jan 19 14:12:52 EST 2018


Right, we understand such an approach neglects correlations.  If two neighboring blocks are simultaneously dead for part of an interval that we are averaging over, then our procedure will be incorrect.

However, I think this probability is small.  Nevertheless, if we can do better with zero additional effort, we should do it.  We're happy to have you update this constant in the DB if you want to use your technique.
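The correlation effect under discussion can be illustrated with a minimal toy Monte Carlo (a sketch only, not GlueX code; the event count and 50% dead fractions follow Alex's pi0 example quoted below). When two channels are dead during the same half of the run, half the events survive; applying the two 50% efficiencies as if they were independent predicts only a quarter.

```python
import random

random.seed(0)
n_events = 1000

# Correlated case: both detectors are dead during the same half of the
# run, so every event in the live half is reconstructed.
correlated = sum(1 for i in range(n_events) if i < n_events // 2)

# Averaged case: each detector gets an independent 50% efficiency,
# i.e. the two inefficiencies are treated as uncorrelated.
independent = sum(
    1 for _ in range(n_events)
    if random.random() < 0.5 and random.random() < 0.5
)

print(correlated)   # 500
print(independent)  # ~250 on average (N_gen * Eff * Eff)
```

The gap between the two numbers is exactly the bias incurred by averaging efficiencies over an interval in which the dead periods are correlated.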

(This discussion is a different thread from knowing what the mask is and discussing why you should mask.)

Matt

> On Jan 19, 2018, at 12:11 PM, Alexander Somov <somov at jlab.org> wrote:
> 
> 
> Sean,
> 
> I am sorry for the late response; I was too busy yesterday.
> 
> In general, the statement:
> 
> "... you should get an accurate sample in the limit of large data sets".
> 
> is wrong.
> 
> 
> An example from MC simulation: you reconstruct a pi0 by detecting its
> photons in two detectors. Each detector is alive for the first 50% of
> the run and then goes dead.
> 
> Assume you generate 1000 events. Since both detectors are alive (and
> dead) at the same time, the number of reconstructed pi0s is 500.
> 
> Now instead apply independent reconstruction efficiencies of 50% for each
> detector. The number of reconstructed pi0s becomes N_gen*Eff*Eff = 250.
> 
> ---
> 
> In general, sub-samples have to be generated for each run period during
> which the detector performance conditions are the same.
> 
> ---
> 
> I can help you with FCAL mask (it's trivial) if Matt is busy and there is no manpower for that.
> 
> 
> Alex
> 
> On Thu, 18 Jan 2018, Sean Dobbs wrote:
> 
>> Alex,
>> 
>> Yes, we generally try to generate a representative set of simulations for a
>> run period, so whether the constants are set on a run-by-run basis or
>> averaged over several runs, you should get an accurate sample in the limit
>> of large data sets.
>> 
>> Cheers,
>> Sean
> 
