[Halld-offline] Interest in a dedicated ENP/Hall VM cluster?
Alexander Austregesilo
aaustreg at jlab.org
Tue May 24 13:51:18 EDT 2022
Dear Colleagues,
There is an effort in the other halls to move monitoring and controls
processes from dedicated hardware to centrally administered virtual
machines (see the forwarded message below). Please let me know if you
see any use cases in Hall D.
Best regards,
Alex
-------- Forwarded Message --------
Subject: [EXTERNAL] Interest in a dedicated ENP/Hall VM cluster?
Date: Tue, 24 May 2022 12:19:10 -0400
From: Brad Sawatzky <brads at jlab.org>
To: Ole Hansen <ole at jlab.org>, Nathan Baltzell <baltzell at jlab.org>,
David Lawrence <davidl at jlab.org>, Thomas Britton <tbritton at jlab.org>,
Alexander Austregesilo <aaustreg at jlab.org>
CC: Bryan Hess <bhess at jlab.org>, Graham Heyes <heyes at jlab.org>
Hi all,
I touched on this at the last SciComp meeting and am working out
the (initial) scope for a dedicated 'Hall' VM cluster.
It would help if each Compute Coordinator could respond with a
rough list of hosts/use-cases that might be better served by running
on a VM rather than on dedicated hardware.
As an example, here is what I would like Hall C to migrate to VMs.
The '*'s indicate my initial priorities. The [] lines are estimates of
VM resource allocations that will help spec the needed VM cluster HW;
a couple of concrete sketches follow the list.
(Note that none of these are intended to be "high performance" systems,
but they are all to some degree critical path for Hall operations.)
-- Hall C --
*'skylla10': Win10-based system running Hall C HMI/PLC spectrometer
controls (main backend server).
[16GB RAM, CPUS:2, 1TB disk; user interactive]
*'cmagnets': Win10 system providing Shift Worker 'User' spectrometer
controls.
[8GB RAM, CPUS:2, 500GB disk; user interactive]
*EPICS SoftIOCs : JLab/RHEL system providing monitoring, alarms, data
logging, etc. for many Hall C systems (cryo, magnets, target,
high voltage, etc.); a minimal softIOC sketch follows this list.
[2GB RAM, CPUS:2, 500GB disk]
CryoTarget User Controls : Linux host providing user controls
to Target Operators. Non-standard Linux install, presently
on dedicated hardware. A VM that can be brought up on
standard Hall Cluster hosts would have advantages.
[8GB RAM, CPUS:2, 500GB disk; user interactive]
etc...
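For concreteness, here is a rough sketch of how the first bracketed
estimate above could translate into an actual VM definition, assuming
a KVM/libvirt-based cluster (that choice, the VM name, and the ISO
path are placeholders; nothing is decided yet):

    # Hypothetical: define a Win10 guest matching the 'skylla10'
    # estimate [16GB RAM, CPUS:2, 1TB disk]. --memory is in MiB,
    # --disk size= is in GB, and --graphics vnc covers the
    # "user interactive" requirement.
    virt-install \
        --name skylla10-vm \
        --memory 16384 \
        --vcpus 2 \
        --disk size=1000 \
        --os-variant win10 \
        --cdrom /path/to/Win10.iso \
        --graphics vnc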
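The softIOC case is even simpler: EPICS Base already ships a generic
softIoc binary, so each of these could be a small record database
served from a lightweight Linux VM. A minimal sketch, with made-up
PV names:

    # hallc_monitor.db -- hypothetical record database; the PV names
    # below are illustrative only, not real Hall C channels.
    record(ai, "HALLC:MAG:CURRENT") {
        field(DESC, "Magnet current readback")
        field(EGU,  "A")
    }
    record(ai, "HALLC:TGT:TEMP") {
        field(DESC, "Cryotarget temperature")
        field(EGU,  "K")
    }

    # Start the IOC; the PVs are then served over Channel Access on
    # whatever subnet the VM is attached to (see the VLAN note below).
    softIoc -d hallc_monitor.db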
Just to remind people, the primary function of this cluster would be to
move systems providing slow controls, monitoring, softIOC support, etc.
off dedicated hardware and onto VMs. That would allow us to take
advantage of VM features like snapshotting, improved/simplified
fail-over, etc.
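To illustrate the snapshot point, again assuming a libvirt-based
cluster (my assumption, with a placeholder domain name):

    # Snapshot a guest before a risky update ...
    virsh snapshot-create-as skylla10-vm pre-update \
        --description "state before Windows patch"

    # ... list what exists, and roll back if the update breaks things
    virsh snapshot-list   skylla10-vm
    virsh snapshot-revert skylla10-vm pre-update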
Roughly speaking this VM cluster would need to follow these guidelines:
- Physically located inside the accelerator fence
- existing compute racks on 2nd floor CH?
- generator and battery backed power
- It would follow the "Experimental Hall Computing" directive and be
as independent of the central resources as possible.
- For example, the VM cluster hosts should stay up and remain
functional if a network link to the main campus goes down.
- It will be maintained independently of the central systems
and existing ESX cluster.
- Updates/changes would need to be scheduled with direct input from
the Hall Compute Coordinators.
- etc..
- VMs would be mapped directly to the relevant Hall VLANs/subnets
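For that last point, the mapping could be as simple as bridging each
VM NIC onto the right Hall subnet. A fragment of a libvirt domain
definition, with a placeholder bridge name, to show the idea:

    <!-- Attach the guest NIC to a host bridge sitting on the Hall C
         controls subnet; 'br-hallc' is a hypothetical bridge name. -->
    <interface type='bridge'>
      <source bridge='br-hallc'/>
      <model type='virtio'/>
    </interface>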
-- Brad
--
Brad Sawatzky (he/him), PhD <brads at jlab.org> -<>- Jefferson Lab / Hall C / C111
Ph: 757-269-5947 -<>- Fax: 757-269-5235 -<>- Pager: brads-page at jlab.org
The most exciting phrase to hear in science, the one that heralds new
discoveries, is not "Eureka!" but "That's funny..." -- Isaac Asimov