[dsg-halld_magnets] PXI

Beni Zihlmann zihlmann at jlab.org
Tue Dec 5 12:44:17 EST 2023


As you can see in the attached screenshot, almost 2/3 of the data is missing;
it looks like only every third packet arrives.

cheers,
Beni

On 12/4/23 17:35, Brian Eng wrote:
> The controller running now was swapped out in May of this year.
>
> We've never used Windows as an OS, only Phar Lap or NI Linux.
> ------------------------------------------------------------------------
> *From:* Hovanes Egiyan <hovanes at jlab.org>
> *Sent:* Monday, December 4, 2023 4:49 PM
> *To:* Brian Eng <beng at jlab.org>; Benedikt Zihlmann <zihlmann at jlab.org>
> *Cc:* dsg-halld_magnets at jlab.org <dsg-halld_magnets at jlab.org>
> *Subject:* Re: PXI
> Thanks! I think we should try 2500-element arrays and use a 4 Hz update
> rate. The ROOT writer part is easy to change. I am not sure what is
> hard-coded in the analyzer.
> I forget where the bottleneck was when we were tuning this in 2013
> with the old controller.
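> The sizing works out as follows (a minimal sketch; the 10 kS/s sample
> rate is an assumption inferred from the 10000-element arrays, not a
> confirmed setting):
>
>     # elements per update = assumed sample rate / update rate
>     sample_rate_hz = 10_000          # assumption: 10 kS/s per channel
>     for update_hz in (1, 2, 4):
>         print(f"{update_hz} Hz -> {sample_rate_hz // update_hz} elements per array")
>     # 1 Hz -> 10000, 2 Hz -> 5000, 4 Hz -> 2500 elements per array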
>
> When was this controller first used in the chassis with the Linux
> operating system? Did we use Windows OS in the spring and summer?
>
> Hovanes.
>
>
> ------------------------------------------------------------------------
> *From:* Brian Eng <beng at jlab.org>
> *Sent:* Monday, December 4, 2023 4:38 PM
> *To:* Hovanes Egiyan <hovanes at jlab.org>; Benedikt Zihlmann 
> <zihlmann at jlab.org>; Brian Eng <beng at jlab.org>
> *Cc:* dsg-halld_magnets at jlab.org <dsg-halld_magnets at jlab.org>
> *Subject:* Re: PXI
> Looks like with enough stop/start cycles the VI is running again.
>
> I did edit the /etc/hosts file to include halld-pxi, but the VI is
> still showing it as ni.var.psp://localhost/..., so I don't think that
> made any difference.
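> The entry added was along these lines (a sketch only; the address shown
> is a placeholder, not the controller's real static IP):
>
>     # /etc/hosts on the PXI controller
>     127.0.0.1     localhost
>     192.0.2.10    halld-pxi     # placeholder address; use the actual static IP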
>
> It does seem like the variables aren't using their full capacity;
> they're only being reported as 4064 elements instead of 10000.
>
> Maybe we could just increase the reporting rate to 4 Hz so the array 
> size is only 2500?
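> One way to confirm what a client actually sees (a sketch, assuming
> pyepics is installed on an EPICS client machine; the record name below
> is illustrative, not the real PV name):
>
>     # print the element count a Channel Access client is offered for a waveform PV
>     import epics                                  # pyepics
>     pv = epics.PV("HALLD:PXI:EXAMPLE_WAVEFORM")   # hypothetical PV name
>     pv.wait_for_connection(timeout=5.0)
>     print(pv.count, "elements reported by the server")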
> ------------------------------------------------------------------------
> *From:* dsg-halld_magnets <dsg-halld_magnets-bounces at jlab.org> on 
> behalf of Brian Eng via dsg-halld_magnets <dsg-halld_magnets at jlab.org>
> *Sent:* Monday, December 4, 2023 4:11 PM
> *To:* Hovanes Egiyan <hovanes at jlab.org>; Benedikt Zihlmann 
> <zihlmann at jlab.org>
> *Cc:* dsg-halld_magnets at jlab.org <dsg-halld_magnets at jlab.org>
> *Subject:* Re: [dsg-halld_magnets] PXI
> I tried stopping/starting the program via the debug application, but
> it seems to be getting stuck when it goes to deploy the PVs. I've
> attached a screenshot of the front panel of the main VI.
>
> I'm fairly certain the variables (all the ni.var.psp://localhost/... 
> array entries) weren't aliased to localhost in previous versions, but 
> am not 100% sure. At least in the code they're all listed as starting 
> with ni.var.psp://halld-pxi/...
>
> I'm hoping it's just some setting that needs to be updated (assuming 
> that is the problem and not something else), but both NI MAX and the 
> Linux command line list the hostname as halld-pxi.
>
> I'm going to CC the DSG mailing list so others are aware of the issue 
> and might be able to think of other possible things to try.
>
> ------------------------------------------------------------------------
> *From:* Hovanes Egiyan <hovanes at jlab.org>
> *Sent:* Monday, December 4, 2023 3:56 PM
> *To:* Brian Eng <beng at jlab.org>; Benedikt Zihlmann <zihlmann at jlab.org>
> *Subject:* Re: PXI
> The MAC address is correct on the network; that is the device that
> responds to arping. But it is not correct in JNET/DNS or in the DHCP
> server; they both have 00:80:2f:17:c8:65. I can change that, although
> it may not change much if the controller is configured with a static
> address.
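> For cross-checking what clients resolve for the controller (a small
> sketch in Python; this is just a plain DNS/hosts lookup, nothing
> JNET-specific):
>
>     import socket
>     # what DNS (or /etc/hosts) currently returns for the controller name
>     name, aliases, addresses = socket.gethostbyname_ex("halld-pxi")
>     print(name, aliases, addresses)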
>
> It seems to have the EPICS variables on the server, but somehow they 
> cannot be read.
>
> Hovanes.
>
> ------------------------------------------------------------------------
> *From:* Brian Eng <beng at jlab.org>
> *Sent:* Monday, December 4, 2023 3:53 PM
> *To:* Hovanes Egiyan <hovanes at jlab.org>; Benedikt Zihlmann 
> <zihlmann at jlab.org>
> *Subject:* Re: PXI
> The PXI is online, but doesn't seem to be running.
>
> I'm going to connect to the debug application and see if 
> stopping/starting it from there makes any difference.
> ------------------------------------------------------------------------
> *From:* Brian Eng <beng at jlab.org>
> *Sent:* Monday, December 4, 2023 3:41 PM
> *To:* Hovanes Egiyan <hovanes at jlab.org>; Benedikt Zihlmann 
> <zihlmann at jlab.org>
> *Subject:* Re: PXI
> I'm not planning on doing anything with it yet. I thought Beni was 
> doing stuff with it earlier?
>
> If it's up and online we should just leave it for now.
>
> The MAC should be 00:80:2F:17:9D:C7.
>
> I just verified that it is set up with a static IP.
>
> ------------------------------------------------------------------------
> *From:* Hovanes Egiyan <hovanes at jlab.org>
> *Sent:* Monday, December 4, 2023 3:34 PM
> *To:* Brian Eng <beng at jlab.org>; Benedikt Zihlmann <zihlmann at jlab.org>
> *Subject:* Re: PXI
> Please do not reboot the PXI chassis if you get it working (right now
> it looks like it is working). Let's first see what is wrong with it
> while it is working. I would like to check whether the MAC address on
> it matches what is in DNS. I also want to see the lengths of the EPICS
> variables coming from the PXI.
>
> Hovanes.
>
> ------------------------------------------------------------------------
> *From:* Brian Eng <beng at jlab.org>
> *Sent:* Monday, December 4, 2023 3:28 PM
> *To:* Benedikt Zihlmann <zihlmann at jlab.org>
> *Cc:* Hovanes Egiyan <hovanes at jlab.org>
> *Subject:* Re: PXI
> If you only saw part of the data, then that is probably the issue
> mentioned in that knowledge base article, where NI Linux isn't
> respecting the EPICS array size. Which... would be kind of bad. The
> newer controllers can only run Windows or NI Linux. The current
> controller is in the overlap region where it can run Windows, NI
> Linux, or Phar Lap (which is EOL).
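> One other thing worth ruling out on the client side (my speculation,
> not something the KB article necessarily says): Channel Access
> truncates large arrays to EPICS_CA_MAX_ARRAY_BYTES, which defaults to
> 16384 bytes, and 4064 single-precision elements is right around that
> limit. If that were the cause, the setting would need raising on both
> the server and the clients, something like:
>
>     # environment setting; the value is illustrative
>     # (10000 doubles need at least 80000 bytes plus protocol overhead)
>     EPICS_CA_MAX_ARRAY_BYTES=200000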
>
> If it does come back online and we still can't see all the data, I'll
> have to try Windows on the controller to see if that even works.
> Otherwise I'm not sure what else we can do; smaller, more frequent
> array updates, maybe?
>
>
>
> As for restarting or not, I saw last week that it basically took a few
> tries to work.
>
> Remotely doing an off then on didn't work. The local chassis power
> button worked, as did doing a remote cycle (which in theory should be
> the same as an off/on).
>
> It should be set to a static IP, so there shouldn't be any IP to get.
>
> ------------------------------------------------------------------------
> *From:* Benedikt Zihlmann <zihlmann at jlab.org>
> *Sent:* Monday, December 4, 2023 3:23 PM
> *To:* Brian Eng <beng at jlab.org>; Benedikt Zihlmann <zihlmann at jlab.org>
> *Subject:* PXI
> Hi Brian,
>
> I guess I screwed up the PXI again. I saw on Friday that it was running
> and producing data; however, the data was "corrupted" and only part of
> it was in the ROOT file. So I restarted the PXI IOC, but that did not
> help, so I rebooted the PXI, and of course that did not work either;
> when I rebooted it a second time, it did not come back.
>
> I think this may be similar to a problem we had in the past where the
> PXI does not get its IP address?
>
> I am going over to the hall now to see what happens when I reboot it
> right there.
>
> cheers,
> Beni
>
Attachment: Screenshot_2023-12-05_12-42-30.png (image/png, 643014 bytes)
URL: <https://mailman.jlab.org/pipermail/dsg-halld_magnets/attachments/20231205/717e6e8c/attachment-0001.png>

