[Sbs_daq] [EXTERNAL] Re: Big endian raw data?
Benjamin Raydo
braydo at jlab.org
Sun Oct 3 14:59:04 EDT 2021
yeah, it's a bit more complicated:
VME CPU is Intel/little endian, but the VME modules are all big endian... so the data read out remains big endian unless the Intel CPU byte-swaps it (an easy task for it) - there should also be a bit set in the EVIO bank header indicating big- or little-endian front-end data.
VTP is a bit different... Sure, it has an ARM CPU (which is little endian - ARM, btw, isn't committed to big or little; it depends on the CPU model), but we're using the FPGA to read out the data, which doesn't care about endianness, and we picked a big-endian format rather than the Intel CPU's little-endian default. We can easily change the VTP to have a config setting to output it whichever way you want... so if little endian is really desired then we can add support for this soon, I think (something me/Dave/Bryan will need to do for the various readout lists). The main thing is that we're consistent, though, and that we also indicate the correct endianness flag in the EVIO header that matches the data (I still don't see this being a huge CPU-cycle cost as long as it's done remotely efficiently, but just giving it in the right endianness doesn't seem like a big deal, so let's shoot for that if folks are onboard/okay with that).
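To make the endianness-flag point concrete: EVIO block headers carry a magic word (0xc0da0100), so a reader that sees it byte-reversed knows the file was written in the opposite byte order and every 32-bit word needs swapping. A minimal sketch of that check (illustrative only, not the actual EVIO library API):

```c
#include <stdint.h>
#include <stddef.h>

/* EVIO block headers contain the magic word 0xc0da0100. A reader that
   finds it byte-reversed knows the data is opposite-endian. */
#define EVIO_MAGIC 0xc0da0100u

static uint32_t swap32(uint32_t w)
{
    return (w >> 24) | ((w >> 8) & 0x0000ff00u) |
           ((w << 8) & 0x00ff0000u) | (w << 24);
}

/* Returns 1 if the buffer needs swapping, 0 if not, -1 if no magic word. */
static int needs_swap(uint32_t magic_word)
{
    if (magic_word == EVIO_MAGIC) return 0;
    if (swap32(magic_word) == EVIO_MAGIC) return 1;
    return -1;
}

/* Swap an entire buffer of 32-bit words in place. */
static void swap_buffer(uint32_t *buf, size_t nwords)
{
    for (size_t i = 0; i < nwords; i++)
        buf[i] = swap32(buf[i]);
}
```

On x86 a compiler typically reduces swap32 to a single bswap instruction, which is part of why the CPU-cycle cost stays small.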
________________________________
From: Sbs_daq <sbs_daq-bounces at jlab.org> on behalf of Alexandre Camsonne <camsonne at jlab.org>
Sent: Sunday, October 3, 2021 2:21:57 PM
To: Paul King <pking at jlab.org>
Cc: sbs_daq at jlab.org <sbs_daq at jlab.org>
Subject: [Sbs_daq] [EXTERNAL] Re: Big endian raw data?
Everything is Intel besides the VTP, though Dave mentioned VME was big endian.
Alexandre
On Sun, Oct 3, 2021, 13:56 Paul King <pking at jlab.org<mailto:pking at jlab.org>> wrote:
I can comment that when I wrote the helicity scaler library, I found that I needed to byte-swap the data words (module data and diagnostic counters) from the crate in order for them to be decoded correctly.
I'm not sure if halladaq8 is an Intel or ARM CPU.
Does Podd or evio2xml do a dynamic check of endianness and then byteswap, or is that explicitly enabled?
Sent from my mobile device.
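On the dynamic-check question: I don't know offhand whether Podd or evio2xml does exactly this, but the standard way a decoder detects host endianness at run time, and swaps only when the data's flagged byte order differs from the host's, looks like this sketch:

```c
#include <stdint.h>
#include <string.h>

/* Detect host endianness at run time by inspecting the lowest-addressed
   byte of a known 32-bit value. */
static int host_is_little_endian(void)
{
    const uint32_t probe = 1u;
    uint8_t first;
    memcpy(&first, &probe, 1);
    return first == 1;           /* 1 -> little endian, 0 -> big endian */
}

/* Swap only when the data's endianness (from the EVIO header flag)
   differs from the host's. */
static int must_swap(int data_is_big_endian)
{
    return data_is_big_endian == host_is_little_endian();
}
```

With this, the same decoder binary does the right thing on x86 and on a big-endian front end without a compile-time switch.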
________________________________
From: Sbs_daq <sbs_daq-bounces at jlab.org<mailto:sbs_daq-bounces at jlab.org>> on behalf of Andrew Puckett <puckett at jlab.org<mailto:puckett at jlab.org>>
Sent: Sunday, October 3, 2021 1:31:05 PM
To: Robert Michaels <rom at jlab.org<mailto:rom at jlab.org>>; Ole Hansen <ole at jlab.org<mailto:ole at jlab.org>>; sbs_daq at jlab.org<mailto:sbs_daq at jlab.org> <sbs_daq at jlab.org<mailto:sbs_daq at jlab.org>>
Subject: Re: [Sbs_daq] Big endian raw data?
Interesting. So perhaps I’m being naïve here, but other than the byte-swapping inefficiency Ole pointed out in processing the raw data on the compute farm nodes, is there an actual problem here? Do we need to check/care about this in the software in writing our raw data decoders?
The cause of Bradley’s crash while processing GRINCH data doesn’t necessarily seem related to this…
Andrew
From: Robert Michaels <rom at jlab.org<mailto:rom at jlab.org>>
Date: Sunday, October 3, 2021 at 1:21 PM
To: Ole Hansen <ole at jlab.org<mailto:ole at jlab.org>>, Andrew Puckett <puckett at jlab.org<mailto:puckett at jlab.org>>, sbs_daq at jlab.org<mailto:sbs_daq at jlab.org> <sbs_daq at jlab.org<mailto:sbs_daq at jlab.org>>
Subject: Re: [Sbs_daq] Big endian raw data?
I believe there are byte-swapping routines available in the DAQ libraries which make it possible to put the bytes in the right order and be consistent, but the DAQ expert needs to make this happen. Below is a snippet of an email from Dave Abbott from about a year ago, when I was having some trouble, which I think is relevant. Dave is a good person to ask; you can ask Bryan Moffit or Alexandre, too.
---------------------- snippet of email from Dave Abbott ------------------------
The CODA data files are written from a Java Event Builder. Java is inherently big endian, so the EVIO files will be big endian by default.
However, ALL banks of user data - created in your readout list - will NOT be swapped. They will stay in whatever endianness they were written.
Typically the ROC will run Linux on Intel, which is little endian, so the data banks you create will stay little endian. However, the bank headers will be swapped to be compatible with the rest of the CODA file.
An even more confusing possibility is that you might do a DMA from the VME bus into a CODA data Bank.
The VME bus is Big endian. Therefore the data from the VME bus will stay Big endian in this bank.
Our general rule for CODA 3 is that, for purposes of DAQ, we will not touch (or modify) the user's data in any way. We will only modify the EVIO headers to match the endianness of whatever system writes the file.
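Dave's rule (headers match the file, user payload stays as the front end wrote it) means a decoder can face a host-endian header wrapping a big-endian payload, e.g. after a VME DMA. A schematic sketch of payload-only swapping on a little-endian x86 host - the field layout is illustrative, not the exact EVIO format, and the payload_big_endian flag is assumed to come from the frontend/endianness bit rather than guessed:

```c
#include <stdint.h>
#include <stddef.h>

static uint32_t swap32(uint32_t w)
{
    return (w >> 24) | ((w >> 8) & 0x0000ff00u) |
           ((w << 8) & 0x00ff0000u) | (w << 24);
}

/* The bank header has already been put in file/host order by the Event
   Builder; only the user payload may still carry front-end byte order.
   Swap the payload words if and only if the endianness flag says so. */
static void decode_bank(const uint32_t *payload, size_t nwords,
                        int payload_big_endian, uint32_t *out)
{
    for (size_t i = 0; i < nwords; i++)
        out[i] = payload_big_endian ? swap32(payload[i]) : payload[i];
}
```

The key point is that the decision is per-bank, driven by the flag, not per-file: a single event can legitimately mix little-endian Intel banks with big-endian VME-DMA banks under this rule.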
________________________________
From: Sbs_daq <sbs_daq-bounces at jlab.org<mailto:sbs_daq-bounces at jlab.org>> on behalf of Ole Hansen <ole at jlab.org<mailto:ole at jlab.org>>
Sent: Sunday, October 3, 2021 1:06 PM
To: Andrew Puckett <puckett at jlab.org<mailto:puckett at jlab.org>>; sbs_daq at jlab.org<mailto:sbs_daq at jlab.org> <sbs_daq at jlab.org<mailto:sbs_daq at jlab.org>>
Subject: Re: [Sbs_daq] Big endian raw data?
Maybe our various front-ends differ in endianness, so we write mixed-endian data?!? That would be disastrous since it is not supported by EVIO. A file can only be one or the other—a very binary view. (I guess EVIO was written before we became diversity-aware ;) ).
Ole
On 3.10.21 at 13:03, Andrew Puckett wrote:
Hi Ole,
This is interesting. The GRINCH data are being read out by the new VETROC modules, I don’t know if they differ from the other modules in terms of “endian-ness”. Maybe a DAQ expert can weigh in here?
Andrew
From: Sbs_daq <sbs_daq-bounces at jlab.org><mailto:sbs_daq-bounces at jlab.org> on behalf of Ole Hansen <ole at jlab.org><mailto:ole at jlab.org>
Date: Sunday, October 3, 2021 at 1:00 PM
To: sbs_daq at jlab.org<mailto:sbs_daq at jlab.org> <sbs_daq at jlab.org><mailto:sbs_daq at jlab.org>
Subject: [Sbs_daq] Big endian raw data?
Hi guys,
Bradley reported a crash of the replay (actually in EVIO) with /adaq1/data1/sbs/grinch_72.evio.0 (see https://logbooks.jlab.org/entry/3916105).
When digging into the cause of this crash, I discovered that these raw data are written in big-endian format. How can this be? I thought the front-ends are Intel processors. Are we taking data with ARM chips that are configured for big-endian mode? Is this a mistake, or is there some plan to it?
These big-endian data have to be byte-swapped when processing them on x86, which is what all our compute nodes run. That's a LOT of work. It leads to significant and seemingly completely unnecessary overhead. I.e. we're burning CPU cycles for nothing good, it seems.
Please explain.
Ole
_______________________________________________
Sbs_daq mailing list
Sbs_daq at jlab.org<mailto:Sbs_daq at jlab.org>
https://mailman.jlab.org/mailman/listinfo/sbs_daq