[Halld-cal] BCAL segmentation

semenov at jlab.org
Mon Apr 4 18:05:55 EDT 2011


Elton:

I do not agree with you and Eugene that, for photons emitted at 90
degrees, the summed readout timing should be as accurate as (if not
superior to) that of the fine segmentation.

When we talk about timing, it is important to keep in mind that (in both
the leading-edge and constant-fraction methods) the timing instant is
determined by the collection of a small fraction of the total charge, not
the whole charge deposited in the scintillator. In an electromagnetic
calorimeter, the charge deposition is not confined to a relatively small
region along the z-direction but is spread quite widely. As a result, the
"left" and "right" photodetectors are triggered by different parts of the
shower that are separated in space. This separation is quite big (a few
centimeters, I think), depends on how deep inside the calorimeter the
readout cell sits (viz., which layer or combination of layers is used to
produce the timing measurement), and fluctuates significantly
event-by-event.

I am not claiming here that the present timing algorithm in HDGEANT is
correct; surely we should inspect it critically and fix it if needed. But
I am fairly sure that summing the readout cells makes the "total" timing
resolution different from that of the fine segmentation even for
90-degree photons, and (most probably) this resolution will be worse in
the summed case (one big, widely spread shower instead of 3 showers)...

As for your "toy" simulation, it took into account neither the transverse
distribution of the shower nor the fact that the timing is derived from
only a small part of the total charge deposited in the calorimeter, so it
is not really sensitive to the calorimeter's features. If we really want
to settle these issues, a full-scale simulation must be done correctly.

Thank you,
Andrei






> Hi David and Andrei,
>
> While I agree that a discussion is warranted on the best representation
> of the width of non-Gaussian distributions, we may have lost focus on
> the primary issue, which is that we do not understand the timing
> distributions.
>
> The distributions that Andrei showed last Thu indicate that coarse
> segmentation degrades the timing resolution. As Eugene pointed out, one
> should expect little difference at 90 deg, which is at variance with the
> computed widths. Also, there are reasons to believe that the timing
> should in fact improve with coarser segmentation. A couple of years ago,
> I played with a toy MC to understand the effect of segmentation on
> timing (see GlueX-doc-1083
> http://argus.phys.uregina.ca/cgi-bin/private/DocDB/ShowDocument?docid=1083).
> The conclusion from that study was that increasing segmentation always
> degrades the resolution when finite thresholds are applied, again
> contrary to Andrei's results. [The resolution only improves when there
> are no thresholds applied to extract the timing.]
>
> At present, I suspect that we have timing algorithms that are either
> not optimized for coarse segmentation or perhaps have not been
> completely debugged. Therefore, we need to understand why the
> distributions seem to worsen with coarser segmentation, and their
> dependence on z, before spending too much effort on characterizing
> their shape.
>
> Cheers, Elton.
>
> Elton Smith
> Jefferson Lab MS 12H5
> 12000 Jefferson Ave
> Suite #16
> Newport News, VA 23606
> (757) 269-7625
> (757) 269-6331 fax
>
>
> On 4/3/11 9:51 PM, semenov at jlab.org wrote:
>> David:
>>
>> 2 comments:
>>
>> 1. Irina's name is "Irina" and not some other ancient-Greek versions
>> (with
>> all my respect to the Greek culture :)
>>
>> 2. About using a fitting function instead of calculating the RMS: I do
>> agree that a fitting procedure (with a Breit-Wigner convoluted with a
>> Gaussian, or just a sum of 2 Gaussians, or a Gaussian core with some
>> tails) will provide a more stable result; but we need to agree on what
>> value to use as the numerical estimator of the resolution. If we use a
>> "1-sigma" analog (viz., a sixty-something percent confidence interval),
>> we will catch the "core" structure only and will not be very sensitive
>> to the tails, and we definitely don't want that. I believe a
>> 95%-confidence interval would be much more appropriate, but we should
>> all be aware that it corresponds to a "2-sigma" resolution whenever we
>> compare against "1-sigma" resolutions from Gaussian-shaped spectra.
>>
>> So, do we all agree to use from now the 95%-confidence interval as the
>> resolution?
>>
>> Thank you,
>> Andrei
>>
>>
>>
>>
>>
>>
>>
>>> Hi Irinia,
>>>
>>>       Please find my responses below.
>>>
>>> On 3/31/11 9:20 PM, stepi at jlab.org wrote:
>>>> Hi David,
>>>>
>>>> plots on the 2nd slide were calculated for the range of z from -34 cm
>>>> to 30 cm (near part of the BCal).
>>> OK. This would make things seem more consistent. Your plots looked
>>> similar to my ~90 degree plot which I believe corresponds to z=0 in
>>> your
>>> coordinate system.
>>>
>>>> For the RMS calculation I used the same method : TH1D :: GetRMS().
>>>> For the histogram presented on this slide limits for RMS are:
>>>> (-15 to 15)  for the histogram "Fine-Segmented" and
>>>> (-30 to 30)  for the histogram "Summed-in-Towers".
>>>>
>>> Looking at the documentation for TH1, it looks like we should be
>>> calling
>>> TH1::StatOverflows() prior to filling the histogram if we want the RMS
>>> to include the tails. I did not do this myself, but will correct that.
>>> If that is not done, then the RMS will be a (possibly strong) function
>>> of the histogram limits.
>>>
>>>> For the comparison consistency :
>>>> All these results ( energy resolution, polar angle and azimuthal
>>>> angle)
>>>> were obtained under the following conditions:
>>>> ( "nTrue == 1 && prim == 1 && nShow == 1" )
>>>> that were created as:
>>>> ......
>>>>      m_rootTree->Branch( "nTrue",&m_nTrue, "nTrue/I" );
>>>>      m_rootTree->Branch( "prim", m_primary, "prim[nTrue]/F" );
>>>>      m_rootTree->Branch( "nShow",&m_numShowers, "nShow/I" );
>>>>
>>>> I have a few questions for you:
>>>>
>>> OK, that may explain part of the discrepancy. In my RMS plot I'm not
>>> cutting on having only 1 reconstructed shower, and I have no cut to
>>> ensure the primary particle makes it into the BCAL. My events will
>>> therefore not be as clean, so the tails will be a little larger, which
>>> would inflate my RMS.
>>>
>>>> 1. You didn't put the numbers for the RMS on your plots. It's good to
>>>> know
>>>> how big the difference really is.
>>>>
>>> I can do this, but I don't trust the RMS as a good indicator since it
>>> is so sensitive to the tails. I think we need to find an appropriate
>>> fitting function.
>>>
>>>> 2. What version of the reconstruction code did you use? As I
>>>> mentioned before, I am using the tagged version sim-recon-2011-02-02.
>>>> Matt recently found a bug in the smear.cc included in this release.
>>>> To fix this bug, the opening lines of SmearBCal should be changed to:
>>>> ......
>>>>     for( int m = 1; m<= 48; ++m ){
>>>>       for( int l = 1; l<= 10; ++l ){
>>>>         for( int s = 1; s<= 4; ++s ){
>>>> ............
>>>> (the l and s loop limits were swapped).
>>>> For the plots presented at the meeting, the bugged version was used.
>>>> After fixing the bug, the change in the angle and energy resolutions
>>>> is not big.
>>>>
>>> I used revision 7602 from the repository, which appears to still have
>>> the bug. I'll fix that as well.
>>>
>>>
>>> Regards,
>>> -Dave
>>>
>>>
>>>> Thank you,
>>>> Irina
>>>>
>>>>> Hi Andrei and Irina,
>>>>>
>>>>>        I've started trying to look at the splitoffs issue for the
>>>>> coarse vs. finely segmented readout options in the BCAL. One of my
>>>>> first steps was to try to compare resolutions to some plots you
>>>>> showed at the meeting this week, as a check that I've got the coarse
>>>>> segmentation set up correctly. I've not gotten results that seem
>>>>> completely consistent with yours. I'd like to find the source of the
>>>>> discrepancy. I've put a couple of the plots into a PDF that I
>>>>> uploaded to the wiki here:
>>>>>
>>>>> http://www.jlab.org/Hall-D/software/wiki/images/7/7f/20110405_bcal_segmentation.pdf
>>>>>
>>>>> My first question is whether the resolution plots shown on your 2nd
>>>>> slide were for the full range of z, or just for some range around 90
>>>>> degrees.
>>>>>
>>>>> Also, how did you calculate the RMS? In my plots, I just used ROOT's
>>>>> TH1D::GetRMS() method, which I believe is limited by the defined
>>>>> histogram range (-20 to +20 in my case). I would think that by
>>>>> cutting off the tails I would get a *smaller* RMS than the actual
>>>>> one, but mine seems bigger than yours. Any clues as to why?
>>>>>
>>>>> Regards,
>>>>> -David
>>>>>
>>
>> _______________________________________________
>> Halld-cal mailing list
>> Halld-cal at jlab.org
>> https://mailman.jlab.org/mailman/listinfo/halld-cal



