[dsg-halld_magnets] PXI

Brian Eng beng at jlab.org
Mon Dec 4 16:38:11 EST 2023


Looks like with enough stop/start cycles the VI is running again.

I did edit the /etc/hosts file to include halld-pxi, but the VI is still showing it as ni.var.psp://localhost/..., so I don't think that made any difference.
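In case anyone wants to reproduce the check, here is a minimal sketch, assuming it is run on the machine whose /etc/hosts was edited, that just confirms whether halld-pxi resolves there. It only rules out name resolution; the ni.var.psp://localhost/... aliasing itself is decided on the LabVIEW/PSP side.

# Minimal sketch: does "halld-pxi" resolve on the machine whose /etc/hosts was edited?
import socket

name = "halld-pxi"
try:
    print(f"{name} resolves to {socket.gethostbyname(name)}")
except socket.gaierror as err:
    print(f"{name} does not resolve: {err}")

# What this machine reports as its own hostname, for comparison.
print("local hostname:", socket.gethostname())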

It does seem like the variables aren't using their full capacity; they're only being reported as 4064 elements instead of 10000.

Maybe we could just increase the reporting rate to 4 Hz so the array size is only 2500?
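For reference, here is a rough way to check the reported element counts from a client, sketched with pyepics; the PV name is a placeholder, not the actual Hall D record name, and the arithmetic just restates the 10000-samples-per-second assumption behind the 4 Hz idea.

# Hedged sketch using pyepics; "HALLD:PXI:WAVEFORM" is a placeholder waveform PV name.
from epics import caget

pvname = "HALLD:PXI:WAVEFORM"
data = caget(pvname, timeout=5.0)
if data is None:
    print(f"could not read {pvname}")
else:
    print(f"{pvname} returned {len(data)} elements (expected 10000, seeing 4064)")

# The 4 Hz idea, assuming the producer generates 10000 samples per second:
sample_rate = 10000
for report_hz in (1, 2, 4):
    print(f"{report_hz} Hz reporting -> {sample_rate // report_hz} elements per array")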
________________________________
From: dsg-halld_magnets <dsg-halld_magnets-bounces at jlab.org> on behalf of Brian Eng via dsg-halld_magnets <dsg-halld_magnets at jlab.org>
Sent: Monday, December 4, 2023 4:11 PM
To: Hovanes Egiyan <hovanes at jlab.org>; Benedikt Zihlmann <zihlmann at jlab.org>
Cc: dsg-halld_magnets at jlab.org <dsg-halld_magnets at jlab.org>
Subject: Re: [dsg-halld_magnets] PXI

I tried stopping/starting the program via the debug application, but it seems to get stuck when it goes to deploy the PVs. I've attached a screenshot of the front panel of the main VI.

I'm fairly certain the variables (all the ni.var.psp://localhost/... array entries) weren't aliased to localhost in previous versions, but I'm not 100% sure. At least in the code they're all listed as starting with ni.var.psp://halld-pxi/...

I'm hoping it's just some setting that needs to be updated (assuming that is the problem and not something else), but both NI MAX and the Linux command line list the hostname as halld-pxi.

I'm going to CC the DSG mailing list so others are aware of the issue and might be able to think of other possible things to try.

________________________________
From: Hovanes Egiyan <hovanes at jlab.org>
Sent: Monday, December 4, 2023 3:56 PM
To: Brian Eng <beng at jlab.org>; Benedikt Zihlmann <zihlmann at jlab.org>
Subject: Re: PXI

The MAC address is correct on the network; that is the device that responds to arping. But it is not correct in JNET/DNS or in the DHCP server; they have 00:80:2f:17:c8:65 in them. I can change that, although it may not change much if the controller is configured as static.
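A quick way to script that comparison from a Linux host on the same subnet is sketched below; the IP address is a placeholder, and the expected MAC is the one quoted for the controller elsewhere in this thread, so treat both as assumptions.

# Sketch: compare the MAC seen on the wire (ARP table) with the expected one.
# PXI_IP is a placeholder; ping the PXI first so the ARP entry is populated.
import subprocess

PXI_IP = "192.0.2.10"                 # placeholder, not the real halld-pxi address
EXPECTED_MAC = "00:80:2f:17:9d:c7"    # MAC quoted for the controller in this thread

subprocess.run(["ping", "-c", "1", PXI_IP], capture_output=True)
out = subprocess.run(["ip", "neigh", "show", PXI_IP],
                     capture_output=True, text=True).stdout
# typical line: "<ip> dev <iface> lladdr <mac> REACHABLE"
fields = out.split()
mac = fields[fields.index("lladdr") + 1] if "lladdr" in fields else None
print("observed:", mac, "| expected:", EXPECTED_MAC, "| match:", mac == EXPECTED_MAC)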

It seems to have the EPICS variables on the server, but somehow they cannot be read.
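For what it's worth, a minimal read test with pyepics, assuming a placeholder PV name, separates "channel never connects" from "connects but the read times out":

# Hedged sketch: is the channel reachable at all, and does a read come back?
from epics import PV

pv = PV("HALLD:PXI:STATUS")            # hypothetical PV name
if pv.wait_for_connection(timeout=5.0):
    value = pv.get(timeout=5.0)
    print("read returned:", "nothing (timed out)" if value is None else value)
else:
    print("channel never connected")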

Hovanes.

________________________________
From: Brian Eng <beng at jlab.org>
Sent: Monday, December 4, 2023 3:53 PM
To: Hovanes Egiyan <hovanes at jlab.org>; Benedikt Zihlmann <zihlmann at jlab.org>
Subject: Re: PXI

The PXI is online, but doesn't seem to be running.

I'm going to connect to the debug application and see if stopping/starting it from there makes any difference.
________________________________
From: Brian Eng <beng at jlab.org>
Sent: Monday, December 4, 2023 3:41 PM
To: Hovanes Egiyan <hovanes at jlab.org>; Benedikt Zihlmann <zihlmann at jlab.org>
Subject: Re: PXI

I'm not planning on doing anything with it yet. I thought Beni was doing stuff with it earlier?

If it's up and online we should just leave it for now.

The MAC should be 00:80:2F:17:9D:C7.

I just verified that it is set up with a static IP.
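If it helps, here is a tiny sketch for double-checking the MAC from the controller's own shell, assuming NI Linux RT and that the primary interface is eth0 (both assumptions):

# Sketch: read the MAC straight from sysfs on the controller. The interface
# name "eth0" is an assumption; adjust to whatever `ip link` lists.
IFACE = "eth0"
with open(f"/sys/class/net/{IFACE}/address") as f:
    mac = f.read().strip()
print(f"{IFACE} MAC: {mac}")
print("matches 00:80:2f:17:9d:c7:", mac.lower() == "00:80:2f:17:9d:c7")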

________________________________
From: Hovanes Egiyan <hovanes at jlab.org>
Sent: Monday, December 4, 2023 3:34 PM
To: Brian Eng <beng at jlab.org>; Benedikt Zihlmann <zihlmann at jlab.org>
Subject: Re: PXI

Please do not reboot the PXI chassis if you get it working (right now it looks like it is working). Let's first see what is wrong with it while it is working. I would like to check whether the MAC address on it matches what is in DNS. I also want to see the lengths of the EPICS variables coming from the PXI.

Hovanes.

________________________________
From: Brian Eng <beng at jlab.org>
Sent: Monday, December 4, 2023 3:28 PM
To: Benedikt Zihlmann <zihlmann at jlab.org>
Cc: Hovanes Egiyan <hovanes at jlab.org>
Subject: Re: PXI

If you only saw part of the data, then that is probably the issue mentioned in that knowledge base article, where NI Linux doesn't respect the EPICS array size. Which ... would be kind of bad. The newer controllers can only run Windows or NI Linux; the current controller is in the overlap region where it can run Windows, NI Linux, or Pharlap (which is EOL).

If it does come back online and we still can't see all the data, I'll have to try Windows on the controller to see if that even works. Otherwise I'm not sure what else we can do; maybe smaller, more frequent array updates?
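As a sketch of what "smaller, more frequent array updates" could look like on the consumer side (the PV name and the 4 x 2500 chunking are assumptions, not the current configuration):

# Hedged sketch: the PXI would publish 2500-element arrays at 4 Hz and a client
# reassembles one second of data (10000 samples). Ignores dropped or misaligned
# updates for brevity; placeholder PV name.
import time
import numpy as np
from epics import PV

CHUNK_COUNT = 4        # 4 updates per second
chunks = []

def on_update(pvname=None, value=None, **kw):
    # Collect consecutive chunks, then hand off one full second of samples.
    chunks.append(np.asarray(value))
    if len(chunks) == CHUNK_COUNT:
        full_second = np.concatenate(chunks)   # 4 x 2500 = 10000 samples
        chunks.clear()
        print(f"assembled {full_second.size} samples from {pvname}")

pv = PV("HALLD:PXI:WAVEFORM", callback=on_update)  # hypothetical PV name

while True:
    time.sleep(0.1)  # keep the process alive so monitor callbacks can fire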



As for restarting or not, I saw last week that it basically took a few tries to work.

Remotely doing an off then on didn't work. The local chassis power button worked, as did doing a remote cycle (which in theory should be the same as an off/on).

It should be set to a static IP, so there shouldn't be any IP to get.

________________________________
From: Benedikt Zihlmann <zihlmann at jlab.org>
Sent: Monday, December 4, 2023 3:23 PM
To: Brian Eng <beng at jlab.org>; Benedikt Zihlmann <zihlmann at jlab.org>
Subject: PXI

Hi Brian,

I guess I screwed up the PXI again. I saw on Friday that it was running and producing data; however, the data was "corrupted" and only part of it was in the root file. So I restarted the PXI IOC, but that did not help, so I rebooted the PXI, and of course that did not work either; when I rebooted it the second time it did not come back.

I think this may be a problem similar to one we had in the past, where the PXI does not get its IP address?

I am going over to the hall now to see what happens when I reboot it right there.

cheers,
Beni

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <https://mailman.jlab.org/pipermail/dsg-halld_magnets/attachments/20231204/78548c6b/attachment-0001.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pxi_running.PNG
Type: image/png
Size: 79714 bytes
Desc: pxi_running.PNG
URL: <https://mailman.jlab.org/pipermail/dsg-halld_magnets/attachments/20231204/78548c6b/attachment-0002.png>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pxi_array.PNG
Type: image/png
Size: 7690 bytes
Desc: pxi_array.PNG
URL: <https://mailman.jlab.org/pipermail/dsg-halld_magnets/attachments/20231204/78548c6b/attachment-0003.png>

