[Sbs_daq] Big endian raw data?

Ole Hansen ole at jlab.org
Sun Oct 3 13:06:32 EDT 2021


Maybe our various front-ends differ in endianness, so we write 
mixed-endian data?! That would be disastrous, since EVIO doesn't 
support it: a file can only be one endianness or the other, a very 
binary view. (I guess EVIO was written before we became 
diversity-aware ;) ).
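
For what it's worth, EVIO does record the byte order of a file in its 
block headers, so a reader can at least detect a swapped file up 
front. A minimal sketch, assuming the EVIO v4 block-header layout 
(8 words, the 8th of which is the magic number 0xc0da0100); I believe 
the library does the equivalent internally when you evOpen() a file:

    #include <stdint.h>
    #include <stdio.h>

    #define EVIO_MAGIC 0xc0da0100u

    /* Returns 0 if the file matches the host byte order, 1 if it is
     * opposite-endian (every word needs swapping), -1 on error. */
    int evio_needs_swap(FILE *f)
    {
        uint32_t hdr[8];
        if (fread(hdr, sizeof hdr[0], 8, f) != 8)
            return -1;                /* short read: not an EVIO block */
        if (hdr[7] == EVIO_MAGIC)
            return 0;                 /* same byte order as host */
        if (hdr[7] == 0x0001dac0u)    /* EVIO_MAGIC byte-swapped */
            return 1;
        return -1;                    /* unrecognized header */
    }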

Ole

On 3.10.21 at 13:03, Andrew Puckett wrote:
>
> Hi Ole,
>
> This is interesting. The GRINCH data are being read out by the new 
> VETROC modules; I don't know whether they differ from the other 
> modules in terms of endianness. Maybe a DAQ expert can weigh in here?
>
> Andrew
>
> From: Sbs_daq <sbs_daq-bounces at jlab.org> on behalf of Ole Hansen 
> <ole at jlab.org>
> Date: Sunday, October 3, 2021 at 1:00 PM
> To: sbs_daq at jlab.org
> Subject: [Sbs_daq] Big endian raw data?
>
> Hi guys,
>
> Bradley reported a crash of the replay (actually in EVIO) with 
> /adaq1/data1/sbs/grinch_72.evio.0 (see 
> https://logbooks.jlab.org/entry/3916105).
>
> When digging into the cause of this crash, I discovered that these raw 
> data are written in big-endian format. How can this be? I thought the 
> front-ends were Intel processors. Are we taking data with ARM chips 
> configured for big-endian mode? Is this a mistake, or is there a plan 
> behind it?
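>
> A quick sanity check one can run on a front-end CPU to see its actual 
> byte order (just a throwaway sketch, not DAQ code):
>
>     #include <stdio.h>
>
>     int main(void)
>     {
>         unsigned int one = 1;
>         /* a little-endian CPU stores the low-order byte first */
>         puts(*(unsigned char *)&one ? "little-endian" : "big-endian");
>         return 0;
>     }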
>
> These big-endian data have to be byte-swapped when processed on x86, 
> which is what all our compute nodes run. That's a LOT of work: it adds 
> significant, and seemingly completely unnecessary, overhead to the 
> replay. We're burning CPU cycles for nothing, it seems.
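>
> To make that concrete: a replay reading opposite-endian data has to 
> run every 32-bit word through something like the swap below (my 
> sketch; the EVIO library ships its own structure-aware swap routines):
>
>     #include <stdint.h>
>
>     /* Swap the byte order of one 32-bit word. Compilers typically
>      * reduce this to a single bswap instruction, but it is still an
>      * extra pass over the entire data stream. */
>     static inline uint32_t swap32(uint32_t w)
>     {
>         return  (w >> 24)
>              | ((w >>  8) & 0x0000ff00u)
>              | ((w <<  8) & 0x00ff0000u)
>              |  (w << 24);
>     }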
>
> Please explain.
>
> Ole
>
