<div dir="ltr"><div dir="ltr"><div dir="ltr"><div>I think David was called or messaged about it at some point during the following shift (owl). He made this note <a href="https://logbooks.jlab.org/entry/3608646">https://logbooks.jlab.org/entry/3608646</a></div><div><br></div><div>The problem was fixed when Sasha suggested rebooting the ROC at the start of the following day shift. <a href="https://logbooks.jlab.org/entry/3608681">https://logbooks.jlab.org/entry/3608681</a></div><div><br></div><div>I don't know for sure that all the data are garbage, at present they are certainly unreadable as hd_root crashes very near the start of the 000 file. In some of the other files it keeps running for a bit longer. I tried rigging my own evio-reader to skip the error events (I was only interested in CDC gain calibrations so FCAL data weren't too important to me, I just need the triggers and tracking) but was not entirely successful :-( and instead I found another run to use for calibrations instead. Not every event contains the bad data, there were some fairly long gaps between them.</div><div><br></div><div>Naomi.</div><div><br></div></div></div></div><br><div class="gmail_quote"><div dir="ltr">On Tue, Nov 13, 2018 at 10:15 AM Shepherd, Matthew <<a href="mailto:mashephe@indiana.edu">mashephe@indiana.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
On Tue, Nov 13, 2018 at 10:15 AM Shepherd, Matthew <mashephe@indiana.edu> wrote:

I was on shift when runs 50920 and 50921 were taken. I am certain we were looking at RootSpy plots to verify data quality. However, I do remember the RootSpy processes dying frequently and having to restart them from the GUI. It is a little unfortunate that the data are unrecoverable. :(

There were also some DAQ issues: we noticed erratic live time with no explanation. The relevant shift summary entry is here:

https://logbooks.jlab.org/entry/3608369

Matt

> On Nov 13, 2018, at 9:48 AM, Naomi Jarvis <nsj@cmu.edu> wrote:
> 
> Just FYI, this affects runs 50920 to 50926. The runs on either side of those look OK.
> 
> On Fri, Nov 9, 2018 at 2:27 PM Naomi Jarvis <nsj@cmu.edu> wrote:
> 
> PlotBrowser shows completely blank incoming data (not even an empty histogram) for 50924 and a few other runs. That could be a clue.
> 
> On Fri, Nov 9, 2018 at 2:10 PM Sean Dobbs <sdobbs@fsu.edu> wrote:
> Hi Naomi,
> 
> It does indeed look like there's possibly some corrupted data in there.
> We don't have any automatic monitoring that checks for something like this at the moment.
> 
> Usually this sort of thing is noticed as we go through the runs and look at the monitoring and other results.
> 
> Cheers,
> Sean
> 
> On Fri, Nov 9, 2018 at 1:53 PM Mark Ito <marki@jlab.org> wrote:
> Naomi,
> 
> I opened an issue on this in the halld_recon repository on GitHub, so it does not get lost.
> 
> -- Mark
> 
> On 11/09/2018 01:45 PM, Naomi Jarvis wrote:
>> Hi,
>> 
>> I have run into a problem with the evio files in run 50924: hd_root finds a bad f250 pulse, throws a JException, and crashes. Files 000 and 001 crash instantly; file 002 proceeds for 2.1k events and then crashes. I'm using halld_recon version 3.2.0.
>> 
>> Does anyone know which runs do and don't have these errors in them?
>> 
>> Was this already found by the automatic monitoring software? (If not, why not?)
>> 
>> Naomi.
> 
> -- 
> Mark Ito, marki@jlab.org, (757) 269-5295