[Hallc_running] Suggestion to shorten NPS readout interval from 400 to 200 nsec
Peter Bosted
bosted at jlab.org
Sat Mar 9 19:32:59 EST 2024
Maybe a study of the existing data, with the first X samples and the
last Y samples removed, could show what can safely be dropped.
As far as the accidental window goes, I'm finding that +/-20 nsec is
sufficient to reduce the error on accidentals to a negligible level.
Wassim, could you do such a study in the next few days, possibly?
I also think that a 30% increase in rates is definitely worthwhile,
especially for the 3-pass running, where we are statistics limited due
to the much smaller cross sections than at 5-pass.
So raising our limit from 130 to 200 MB/sec should definitely be
very safe in any case.
Pushing the anode current a bit is probably worthwhile too, in those
cases where we actually get close to 30 muA.
Running at a bit bigger NPS angles (as we are doing right now) is also
a very good way to get more events, even if it means a less uniform
phi* coverage.
Yours, Peter
Prof. Peter Bosted
email: bosted at jlab.org
phone: (808) 315-1297 (cell)
P.O. Box 6254, Ocean View, HI 96737
On Sat, 9 Mar 2024, Carlos Munoz Camacho wrote:
> Dear Peter, all,
> Thank you for looking into ways to decrease the data rate for the
> upcoming kinematic settings. Malek and I have discussed this proposal of
> reducing the FADC readout window, but we don't think we should do it.
>
> As it has been shown, waveform analysis is crucial to improve the energy
> resolution of the NPS calorimeter, which is in turn the key to the good
> missing mass resolution needed for exclusive channels such as DVCS. It is
> incorrect to say that we only analyze events in a 100 ns time window; we are
> currently fitting all 110 samples (i.e. 440 ns). Attached is a sample
> (normalized) waveform.
>
> The pulse itself extends over ~100 ns (25 samples). In order to remove
> pile-up before the coincidence pulse, we need enough samples to fit a pulse
> arriving before it.
>
> There are also channel-to-channel variations in the coincidence time of +/-
> 20 ns (5 samples) and variations due to the calorimeter distance as a
> function of the kinematic setting.
>
> We also need enough time window to compute accidentals.
>
> While we could scrap *a few* samples at the end (and possibly at the
> beginning) of the current readout window, this is probably not worth the
> trouble as it won't significantly change things (20-30% at most).
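
[A rough sketch of that 20-30% estimate, with my own illustrative
assumptions rather than numbers from the thread: the event size is
taken as proportional to the number of 4 ns FADC samples read out,
so trimming samples reduces the event size by the same fraction.]

```python
SAMPLE_NS = 4          # FADC sampling period: 4 ns per sample
TOTAL_SAMPLES = 110    # current readout, i.e. 440 ns (per the thread)

def savings(trimmed_samples):
    """Fractional event-size reduction from dropping samples,
    assuming event size scales with the number of samples read out."""
    return trimmed_samples / TOTAL_SAMPLES

# Trimming "a few" samples at each end, e.g. 22 or 33 in total:
for n in (22, 33):
    print(f"trim {n} samples ({n * SAMPLE_NS} ns): "
          f"{savings(n):.0%} smaller events")
```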
>
> We are open to discussing other ways of reducing the data rate without
> compromising the experiment's feasibility.
>
> Best,
> Carlos
>
>
> On Fri, Mar 8, 2024 at 11:45 PM Peter Bosted via Hallc_running
> <hallc_running at jlab.org> wrote:
> Background: we have been generally running considerably less
> beam current
> than in the NPS and SIDIS proposals (mostly planned for 30 muA).
> As a
> result, we are getting anywhere from 2 to 10 times less events
> than
> we hoped for.
>
> There are several factors that limit which current we use:
>      a) keep the anode current, averaged over NPS columns 0 and 1, below 30
>      muA
> b) keep data rate low enough that no crate exceeds 80 MB/sec.
> Since the
> crates all have roughly the same rates, we need to be well below
> 400 MB/sec to avoid this happening to any of the 4 and a half
> VME crates
> (one of the 5 is only half-populated). Last night we did a 30
> minute
> run at 300 MB/sec with no trips. I think there is general
> agreement
> that keeping the rate below 200 MB/sec is acceptable.
>      c) keeping the trigger rate low enough to avoid significant computer dead
>      time corrections, and to avoid exceeding the maximum transfer rate of data
>      from the hall to the mass storage system.
>
> For most of the settings for the rest of the experiment, factor
> c) will
> be the limit that we reach before factors a) and b). Since the
> event size
>      (and hence computer live time and transfer rates) is largely
> determined
> by writing out the NPS FADC data, we can gain up to a factor of
> two
> in what current we run by reducing the readout time for the
> FADC.
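
[The scaling behind that "factor of two" can be sketched as follows;
the cap, trigger rate per muA, and event sizes below are illustrative
values I've made up, not figures from the thread.]

```python
def max_current_ua(cap_mbps, trig_hz_per_ua, event_bytes):
    """Beam current (muA) at which the data rate hits the cap,
    assuming the trigger rate scales linearly with current and the
    event size is dominated by the NPS FADC readout window."""
    return cap_mbps * 1e6 / (trig_hz_per_ua * event_bytes)

# Hypothetical numbers: 200 MB/s cap, 300 Hz of triggers per muA.
full = max_current_ua(200, 300, 40_000)  # event size, 400 ns window
half = max_current_ua(200, 300, 20_000)  # event size, 200 ns window
print(f"usable current ratio: {half / full:.1f}")
```

At a fixed bandwidth cap, halving the event size doubles the usable
beam current, whatever the absolute numbers are.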
>
> At present, we have a readout window of 400 nsec.
>
> We only analyze events in a 100 nsec time window.
>
>       But we need to read out over an interval longer than 100 nsec in order
>       to catch the long tails of some pulses.
>
>       A reasonable compromise as far as I can tell would be to shorten
> the
> readout interval from 400 to 200 nsec.
>
> To keep the window centered on the coincidence time peak, we
> would
> reduce the time by 75 nsec on the back end, and 125 on the front
> end,
> as far as I understand.
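
[A trivial sanity check of the proposed asymmetric trim; whether these
cuts actually keep the peak centered depends on where it sits in the
current window, which the thread doesn't specify.]

```python
OLD_WINDOW_NS = 400   # current FADC readout window
FRONT_CUT_NS = 125    # removed ahead of the coincidence peak
BACK_CUT_NS = 75      # removed behind it
NEW_WINDOW_NS = OLD_WINDOW_NS - FRONT_CUT_NS - BACK_CUT_NS
print(NEW_WINDOW_NS)  # 200
```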
>
> I propose that the experts get together on Tuesday after the
> Moller run
> and implement this change, after taking one run with the 400
> nsec
>       window, so we can make sure that no good data is being lost. As
> far as I understand, the experts include Alex, Wassim, Ben, and
> Sanguang.
>
> As I understand, only the config file(s) need to be changed: no
> knobs
> on the crates need to be turned.
>
>
> Prof. Peter Bosted
> email: bosted at jlab.org
> phone: (808) 315-1297 (cell)
> P.O. Box 6254, Ocean View, HI 96737
>
> _______________________________________________
> Hallc_running mailing list
> Hallc_running at jlab.org
> https://mailman.jlab.org/mailman/listinfo/hallc_running
>
>
>