<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<style type="text/css">
<!--
html{color:#555555;}body{line-height:1.5;font-family:'Trebuchet MS','Helvetica Neue',Arial,Helvetica,sans-serif;font-size:87.5%;}h1{font-size:1.6em;}h2.field-label{display:inline-block;font-size:1em;padding-right:5px;min-width:10em;margin:0.3em;}.problem_report{line-height:1.5;max-width:60em;}fieldset.problem_report.resolved
legend{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAACXBIWXMAAA7EAAAOxAGVKw4bAAAAy0lEQVQ4jWP8//8/AyWAiZACd3f3/xYWFrht+f//P1a84t3e/0obff4rbfT5D1GGXR0LuoEr3+/7X3W4n2gvwA0gVSOKAcqbfPGGpImJCU45JgYGBoa7fpsZ22wLSbadgYGBgRE9GrF55Vf2BYbHjx8zYjWB0ljAcAGGExkZ/0MtwuoCggmJEBh4AzBS4pMnT/7fuXOH4dKlSwwnT56EiwcGBv43MDBgMDExYdDX12eQkZGBhAlyiC5YsOA/AwMDUXjLli3/iYoFQgAA+pSxZrXofD0AAAAASUVORK5CYII=);background-repeat:no-repeat;padding-left:18px;}fieldset.problem_report.needs_attention
legend{background-image:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAABmJLR0QA/wD/AP+gvaeTAAAA9ElEQVR42sWTvUoDQRSFv9wMKWxSBVmzdhZJIwTWv9pyLWxTpbE1kBeJPoLxBazzBgGFKNqlHXAhsITUw1y7sMpmjER0YJrDPWcO556BPzhjYPJjlhfT82LUpxcK6Lo5U0JMgcu56tXy7BQajeBDpkAcAEdz1W6uyrLdYieK2DMmKCArcqczpH/ddc0msy+OkyQJC4h3N0ynx+Q5ALtUNs5q5U+8e+R+VPFi4kjk3NZqd++qUK+TZVnYwSfAOyvejeLXt/2qVG/dYoG1dqseBNco27bs/wXKWhIDB8AhcFLAH4Bn4Al4AUqT7RVC++6mv/JVPwDi3VGzomYvyAAAAABJRU5ErkJggg==);background-repeat:no-repeat;padding-left:18px;}.problem_report div.field-items{display:inline-block;}div.date-vitals p{font-size:87.5%;}a{text-decoration:none;}.Readme a:link,.Readme a:visited,.Readme
a:active{color:red;}
-->
</style>
</head>
<body id="mimemail-body" class="elog-logentry-notify">
<div id="center">
<div id="main">
<style>
<!--/*--><![CDATA[/* ><!--*/
div.field-vitals{
margin: 0.5em 0;
}
div.field-vitals .field-type-taxonomy-term-reference {
margin: 0.1em 0;
}
article.comment {
padding-left: 10px;
}
article.comment.odd {
background-color: #EEEEEE;
}
article.comment.even {
background-color: #DDDDDD;
}
div.node-content.logentry table{
width: auto;
border-collapse: collapse;
border-spacing: 0;
border-width: 1px;
}
div.node-content.logentry th{
border: inherit;
}
div.node-content.logentry blockquote{
background-color: #FFFFFF;
}
div.node-content.logentry caption{
font-size: 1em;
font-weight: normal;
}
table.field-vitals{
margin-top: 1em;
margin-bottom: 1em;
font-size: 87.5%;
}
table.field-vitals th{
vertical-align: middle;
text-align: left;
width: 15%;
padding: 0.1em;
}
table.field-vitals td{
vertical-align: middle;
text-align: left;
width: auto;
padding: 0.1em;
}
table.field-vitals td li {
margin-left: 0;
list-style-type: none;
list-style-image: none;
}
table.downtime {
width: 30em;
margin-bottom: 1em;
border: 1px black dotted;
}
table.downtime th {
text-align: center;
}
table.downtime td {
text-align: center;
}
tr.caption th {
border-bottom: none;
}
table.downtime tfoot{
background-color:#EEEEEE;
}
div.field-name-body{
margin: 1em 0;
font-size: 110%;
}
div.date-vitals p{
margin: .1em 0;
}
article div.ctools-collapsible-container{
margin-left: -5px;
clear: both;
}
#comment-form{
margin-left: 5px;
border: graytext outset medium;
-moz-border-radius: 15px;
border-radius: 15px;
padding: 1em;
}
div.comments-form-box {
margin-top: 2em;
margin-bottom: 5em;
}
h3.comment-title {
/* display: none; */
}
p.author-datetime{
font-weight: bold;
}
/*--><!]]>*/
</style><article id="node-731133" class="node node-logentry contextual-links-region article ia-n clearfix" role="article"><header class="node-header"><h1 class="node-title">
<a href="https://logbooks.jlab.org/entry/3658217" rel="bookmark">RC Daily Update - 2/23/2019 - (no Daily Meeting)</a>
</h1>
</header>
<div class="date-vitals">
<p class="author-datetime">
Lognumber <a href="https://logbooks.jlab.org/entry/3658217" class="lognumber" data-lognumber="3658217">3658217</a>. Submitted by <a href="https://logbooks.jlab.org/user/cameronc">cameronc</a> on <time datetime="2019-02-23T10:41:09-0500" pubdate="pubdate"><a href="https://logbooks.jlab.org/entries?start_date=1550932869&end_date=1550940069&book=HALOG">Sat, 02/23/2019 - 10:41</a></time>. </p>
<table class="field-vitals"><tr><th>Logbooks: </th><td><a href="https://logbooks.jlab.org/book/halog">HALOG</a></td></tr><tr><th>Entry Makers: </th><td>cameronc</td></tr></table></div>
<div class="logentry node-content">
<p>Hall A RC Daily Update</p>
<p>Experiment Status:<br />
From yesterday's day shift until now:<br />
* Good production data all day, starting at 30 uA<br />
* Got up to 40 uA and have stayed there: <a href="https://logbooks.jlab.org/entry/3657590">https://logbooks.jlab.org/entry/3657590</a><br />
* We can add an EPICS variable call to our end-of-run scripts to log the configuration; an expert should please do this at a convenient time (see the end-of-run logging sketch after this list): <a href="https://logbooks.jlab.org/entry/3657778">https://logbooks.jlab.org/entry/3657778</a><br />
* There was an incident where the target temperature readback variables disconnected from EPICS because the Raspberry Pi that sets them had a problem and needed a reboot: <a href="https://logbooks.jlab.org/entry/3657856">https://logbooks.jlab.org/entry/3657856</a><br />
** We initially misidentified the problem: we thought it was an IOCHA12 issue, but that IOC does not control the temperatures (miscommunication with the target on-call): <a href="https://logbooks.jlab.org/entry/3657821">https://logbooks.jlab.org/entry/3657821</a><br />
** A remote reboot of IOCHA12 was unsuccessful; an access was needed to reboot it manually: <a href="https://logbooks.jlab.org/entry/3657824">https://logbooks.jlab.org/entry/3657824</a><br />
** After rebooting the Raspberry Pi and restarting the background scripts, everything came back: <a href="https://logbooks.jlab.org/entry/3657857">https://logbooks.jlab.org/entry/3657857</a><br />
** The septum corrector magnets were reset along with IOCHA12 and needed 5 minutes to retune their settings (30 A US, 10 A DS; previously 38 A US and 0 A DS for the 2/22 OWL shift): <a href="https://logbooks.jlab.org/entry/3657869">https://logbooks.jlab.org/entry/3657869</a><br />
** During the access, tests of the clock trigger and the new "APEX" CODA configuration were successful: <a href="https://logbooks.jlab.org/entry/3657844">https://logbooks.jlab.org/entry/3657844</a><br />
* Took the opportunity of the access (to fix the temperature Raspberry Pi) to test the APEX config:<br />
** Alex updated the APEX config: <a href="https://logbooks.jlab.org/entry/3657612">https://logbooks.jlab.org/entry/3657612</a><br />
* Too many events in a run cause the scalers to run out of room to keep counting; keep runs below 3M events: <a href="https://logbooks.jlab.org/entry/3657614">https://logbooks.jlab.org/entry/3657614</a><br />
* A small incident with CODA crashing prevented the end-of-run autolog entry from being made, but the data are fine (we should not tie the end-of-run scripts directly to the "End" button press; perhaps work around this with background scripts, see the run-watcher sketch after this list): <a href="https://logbooks.jlab.org/entry/3658184#comment-20336">https://logbooks.jlab.org/entry/3658184#comment-20336</a><br />
* We turned off the LHRS S0 HV to spare the PMTs; it is not used in our triggers anyway except at a very high prescale value (ps2 = 30,000): <a href="https://logbooks.jlab.org/entry/3657889">https://logbooks.jlab.org/entry/3657889</a><br />
* Plotting/analysis updates<br />
** Alex fixed up the "good event" cuts so they are no longer based on the tritium setup: <a href="https://logbooks.jlab.org/entry/3657641">https://logbooks.jlab.org/entry/3657641</a><br />
** Cameron and Ranit started working on an update to the "good coincidences" fitting and counting macro (see the fit-and-count sketch after this list)<br />
** The raster plot was incorrectly based on the US raster, which is off, and the DS raster appears to be affected by last night's FADC timing update: <a href="https://logbooks.jlab.org/entry/3657695">https://logbooks.jlab.org/entry/3657695</a> and others<br />
** Ranit noticed that the VDC efficiencies went up suddenly: <a href="https://logbooks.jlab.org/entry/3657803">https://logbooks.jlab.org/entry/3657803</a></p>
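<p>As a minimal end-of-run logging sketch of the configuration snapshot requested above, assuming pyepics is available on the DAQ machine; the PV names, the log path, and the run-number handling here are placeholders, not the actual Hall A variables:</p>
<pre>
# Hypothetical end-of-run hook: snapshot a few configuration PVs to a text log.
# The PV names and the log path are placeholders; substitute the real Hall A PVs.
import sys
import time
from epics import caget

CONFIG_PVS = [
    "HALLA:BEAM:CURRENT",    # placeholder PV name
    "HALLA:TARGET:POSITION", # placeholder PV name
    "HALLA:SEPTUM:US:SET",   # placeholder PV name
    "HALLA:SEPTUM:DS:SET",   # placeholder PV name
]

def log_run_configuration(run_number, logfile="end_of_run_config.log"):
    """Append a timestamped snapshot of the configuration PVs for this run."""
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    lines = ["run {} at {}".format(run_number, stamp)]
    for pv in CONFIG_PVS:
        value = caget(pv, timeout=2.0)  # returns None if the PV is disconnected
        lines.append("  {} = {}".format(pv, value))
    with open(logfile, "a") as f:
        f.write("\n".join(lines) + "\n")

if __name__ == "__main__":
    run = sys.argv[1] if len(sys.argv) == 2 else "unknown"
    log_run_configuration(run)
</pre>
<p>Something like this could be appended as one extra call at the end of the existing end-of-run script so the configuration lands in the run record automatically.</p>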
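<p>A run-watcher sketch of the background-script workaround mentioned above for the CODA crash case, assuming a run-state EPICS PV can be polled; the PV name and the autolog command are hypothetical placeholders:</p>
<pre>
# Hypothetical watcher: decouple the end-of-run logbook entry from the CODA
# "End" button by polling a run-state PV and reacting to the active-to-idle
# transition, so the entry still gets made even if CODA crashes.
import subprocess
import time
from epics import caget

RUN_STATE_PV = "HALLA:CODA:RUN_ACTIVE"            # placeholder: 1 while running, 0 otherwise
AUTOLOG_CMD = ["/path/to/end_of_run_autolog.sh"]  # placeholder command

def watch(poll_seconds=5.0):
    previous = caget(RUN_STATE_PV, timeout=2.0)
    while True:
        time.sleep(poll_seconds)
        current = caget(RUN_STATE_PV, timeout=2.0)
        if previous == 1 and current == 0:
            # The run just ended (cleanly or not); make the logbook entry anyway.
            subprocess.call(AUTOLOG_CMD)
        if current is not None:
            previous = current

if __name__ == "__main__":
    watch()
</pre>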
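<p>For the "good coincidences" work, a generic fit-and-count sketch of the technique: fit a Gaussian coincidence-time peak on a flat accidental background, then count events in a window with the fitted background subtracted. The histogram here is synthetic; the real input would come from the replayed spectra.</p>
<pre>
# Generic fit-and-count sketch for a coincidence-time peak: Gaussian signal on a
# flat accidental background. The toy data stand in for a replayed spectrum.
import numpy as np
from scipy.optimize import curve_fit

def peak_plus_flat(t, amplitude, mean, sigma, background):
    return amplitude * np.exp(-0.5 * ((t - mean) / sigma) ** 2) + background

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 2.0, size=5000)           # ns, true coincidences
accidentals = rng.uniform(-50.0, 50.0, size=2000)  # ns, flat accidentals
counts, edges = np.histogram(np.concatenate([signal, accidentals]),
                             bins=200, range=(-50.0, 50.0))
centers = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(peak_plus_flat, centers, counts, p0=[100.0, 0.0, 2.0, 10.0])
amplitude, mean, sigma, background = popt

# Count events within 3 sigma of the peak and subtract the fitted flat background.
window = np.less(np.abs(centers - mean), 3.0 * sigma)
raw = counts[window].sum()
good = raw - background * window.sum()
print("good coincidences:", good)
</pre>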
<p>Hall Status:<br />
* The hall is fine: we have taken 40 uA beam on target for 16 ABUs, and the dump diffuser does not need to be investigated.<br />
* To avoid further accesses to reboot IOCs, we would like the Raspberry Pi that serves the target temperatures put on a remote power switch.<br />
* CORRECTION: I was mistaken and misspoke about the nature of the RHRS dipole ramping and power supply faulting. It was not a problem with the computer that controls the current ramping; it was an unexplained problem (which should be investigated later, and could be anything, including a single-event upset) with the fault-management computer. The power supply's ramping capabilities are fine, and the way it ramped up (and tripped) was expected behavior. The real problem was that the magnet tripped and the power supply fault could not be cleared without ramping all the way down to zero and resetting the system itself.</p>
<p>Plan:<br />
Continue production data on the 2.8% tungsten target at 40 uA.</p>
<p>RC:<br />
Cameron Clarke is the RC (email <a href="mailto:cameronc@jlab.org">cameronc@jlab.org</a>, RC phone: 757-876-1787)<br />
All shifts are on call and should expect to receive full beam for production on the tungsten foils.</p>
</div>
<div class="attachment-box">
</div>
</article> </div>
</div>
</body>
</html>