[Hallc_running] [New Logentry] RC Update for Wed March 4
brads at jlab.org
Wed Mar 4 17:15:01 EST 2020
Logentry Text:
--
RC Meeting and Run Plan, Wednesday, March 4 2020
https://hallcweb.jlab.org/wiki/index.php?title=RC_Meeting_and_Run_Plan,_Wednesday,_March_4_2020

RC Update

RC Daily Meetings are M-F at 4:15 PM, 2nd floor Counting House Meeting Room.
Bluejeans connection: https://bluejeans.com/861439920

Wednesday 4 Mar 2020

General updates

- The target has been running smoothly at around 53%+, but the Accelerator has been having problems
  - Took ~13 hours of longitudinal data from Tuesday Swing through Wednesday Owl
- Accelerator down Wednesday from 08:30 into Swing
  - 'Macro Pulse Generator' replacement in the injector was attempted (details below)
  - MCC deferred beam restoration to the Halls on Wednesday Swing in order to investigate continuing (significant) beam interception and Hall C bunch-length issues in the injector
    [1] https://logbooks.jlab.org/entry/3797388
    [2] https://cebaf.jlab.org/dtm/reports/activity-audit?eventId=6239
- The SHMS dipole started having issues during Wednesday Day shift (see below for details)
  - SHMS is still offline; as of 5pm experts are in the Hall investigating

- Beam studies took all of Tuesday Day shift and bled into Swing by a couple of hours
  - Goal was to understand what is going on with the machine (beam loss, activation, etc.)
    [3] https://opsweb.acc.jlab.org/CSUEApps/atlis/task/20368
  - Identified some significant bunch-length issues on the C beam in particular
    - Likely the source of the significant beam loss / activation in the machine, and of the bleedthrough/scraping problems seen in Hall C (and A) during Mollers
    - There may be an issue with the Hall C laser (unclear)
    - Compressing our bunch with the pre-buncher seems to anti-correlate with getting production-quality beam to Hall A
  - No easy solution at the moment
    - Many thanks to RadCon and OPs for sticking around a little longer while the Target folks worked to wrap up in the Hall

- Target group + RadCon were in the Hall Tuesday Day, 10am--6pm, with work focused on the EPR problems
  - They had a few issues, but managed to get to a working state by the time they had to get out
  - See "Hall Access for EPR Debugging and EPR NMR Calibration" for more details: https://logbooks.jlab.org/entry/3796852
  - More work during Wednesday Day shift (during the Accelerator downtime)
    [4] https://logbooks.jlab.org/entry/3797346

- 'Macro Pulse Generator' replacement in the injector was attempted Wednesday Day
  - Started at 8:30 with a plan to recover by 1:30, but it is running long (still ongoing as of 4pm)
  - Installation was attempted, but it ran into problems and had to be backed out late in the morning

- The SHMS dipole has tripped 5 times during Wednesday Day shift
  [5] https://logbooks.jlab.org/entry/3797173
  [6] https://logbooks.jlab.org/entry/3797183
  [7] https://logbooks.jlab.org/entry/3797273
  [8] https://logbooks.jlab.org/entry/3797304
  [9] https://logbooks.jlab.org/entry/3797363
  - Steve Lassiter went into the Hall after the 3rd trip (there were difficulties resetting it remotely) and took a look at the power supply
    - Found that the 9V power supply that had been replaced earlier had been displaced (mechanical vibration due to the dump switches closing?) and corrected that; he felt that was unlikely to be the root cause of the trips, however
    - He suspects the V-loop board may have an issue
    - Jack Segal is investigating whether we have spares and what our near-term options are
    - Jack's group is going to take a controlled access and see what they can find
  - Mike Fowler rebooted the magnet controls IOC mid-Day shift on Tuesday to correct some issues
    [10] https://logbooks.jlab.org/entry/3796697
    - The GUI wasn't working quite right; not sure what else may have been affected at that time

- Joe did a quick walkthrough training for 2 more people Wednesday Day

Notes for the Future

- Moller measurement scheduled for the Thursday DAY shift
- 'Filler' runs for the next spin-up / target-down period (don't want to forget about this!)
  - 1-hour run with the target out to measure the background from the beryllium windows alone

Pending Issues

- pNMR issues
  - Software/driver issue needs to be resolved
  - Checkout during the next non-production downtime (Moller or SHMS fix)
- EPR problems [11] https://logbooks.jlab.org/entry/3793821 saw some progress on Tuesday [12] https://logbooks.jlab.org/entry/3796852, but issues remain
  - Tentative plan is to continue the d2n (and A1n) program by replacing the photodiode that sits directly on the pumping chamber (we have many spares)
  - It would be good to write up a procedure to streamline this replacement for us and RadCon
- Kepco power supply commissioning/installation (Todd, Bill)
  - Bill will update status on Thursday
- Verify that the calibration constants and input values associated with NMR and other polarimetry are consistent and will be correct when the software is restarted (Bill, Junhao)
  [13] https://logbooks.jlab.org/entry/3796178
  [14] https://logbooks.jlab.org/entry/3795421
  [15] https://logbooks.jlab.org/entry/3795982
  [16] https://logbooks.jlab.org/entry/3796117
  [17] https://logbooks.jlab.org/entry/3796111
- Persistent BLM & ion chamber trips
  - General activation and odd BLM issues (https://logbooks.jlab.org/entry/3792326); ad-hoc ion chamber threshold adjustments (https://logbooks.jlab.org/entry/3792095)
  - NOTE: The shift crew should NOT give MCC authorization to change ion chamber thresholds. If they ask, tell the Operator to contact an appropriate expert. Make a log entry if trips persist, or call the RC if it becomes severe (for example, >20 trips/hour is getting intolerable)
  - The Beam Studies program will investigate some of these issues (most pressing is the significant activation + beam loss in the arcs)
- Contact with the Target readbacks through the AnywhereUSB USB bridge still drops out on occasion
  - Initial replacement of the USB bridge failed (it did not seem to help the underlying issue, and then the replacement bridge stopped working); the original device was put back in
- OPs wished to reboot the hung iochc10 [18] https://logbooks.jlab.org/entry/3795636; this was noted in December as well [19] https://logbooks.jlab.org/entry/3745818
  - This IOC runs the Moller cryo controls and should not be rebooted without expert oversight (it does not impact operations); D. Gaskell is looking into getting this addressed properly with OPs

Opportunistic Access Jobs

Target coil rotation planned for March 13--20

- Nominal 1-week downtime in Hall C to rotate the target coils, install a new cell, and transition to the d2n program
- More details are here: PolHe3 Target Coil Rotation Plan, Mar 2020
  https://hallcweb.jlab.org/wiki/index.php/PolHe3_Target_Coil_Rotation_Plan,_Mar_2020

Current Run Plan
---
This is a plain text email for clients that cannot display HTML. The full logentry can be found online at https://logbooks.jlab.org/entry/3797402