I deleted/moved 351 GB in mc_production.

The following are owned by other users, though, and ~6 GB more can be freed up if no longer needed:

Luca:
762M: /work/hallb/hps/mc_production/Luca/lhe/Vegas_10_10_2016/

Holly:
3.1G: /work/hallb/hps/mc_production/MG5/dst/
2G:   /work/hallb/hps/mc_production/postTriSummitFixes/tritrig/1pt05/NOSUMCUT/
25M:  /work/hallb/hps/mc_production/logs/slic/ap/
16M:  /work/hallb/hps/mc_production/logs/readout/beam-tri/1pt92/

Matt G:
35M:  /work/hallb/hps/mc_production/dst/
11M:  /work/hallb/hps/mc_production/logs/dqm/

There is also 2 GB of old 2.2 GeV A' MC, which should no longer be relevant, but I didn't want to do anything with it since it had mock data stuff in there:
/work/hallb/hps/mc_production/lhe/ap/2pt2/
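
(For reference, the per-directory sizes above can be reproduced with something like the following. This is only a sketch, assuming GNU du and find are available on the interactive nodes:)

  # Per-directory totals, largest first, in human-readable units
  du -sh /work/hallb/hps/mc_production/*/ | sort -rh

  # Attribute usage to file owners, to see what belongs to whom (GNU find)
  find /work/hallb/hps/mc_production -type f -printf '%u %s\n' |
      awk '{ sum[$1] += $2 } END { for (u in sum) printf "%s\t%.1fG\n", u, sum[u]/1e9 }'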

________________________________
From: hps-software@SLAC.STANFORD.EDU <hps-software@SLAC.STANFORD.EDU> on behalf of Nathan Baltzell <baltzell@jlab.org>
Sent: Thursday, June 1, 2017 1:23:19 PM
To: HPS-SOFTWARE
Cc: hps-analysis@jlab.org
Subject: Re: [Hps-analysis] Fwd: [Clas_offline] Fwd: ENP consumption of disk space under /work

<div class="PlainText">Here’s the most relevant usage<br>
<br>
649G mrsolt/<br>
570G sebouh/<br>
459G mc_production/<br>
228G holly<br>
159G mccaky/<br>
78G rafopar/<br>
45G omoreno/<br>
44G spaul<br>
39G fxgirod<br>
34G jeremym<br>

data/engrun2015:
3.2T tweakpass6
50G tweakpass6fail
64G tpass7
2.4G tpass7b
39G tpass7c
6.5G t_tweakpass_a
373G pass6/skim
201G pass6/dst

data/physrun2016:
3.5T pass0
690G feeiter4
94M feeiter0
327M feeiter1
339M feeiter2
338M feeiter3
15G noPass
24G pass0_allign
52G pass0fail
4.5G tmp_test
281G tpass1
11G upass0

On Jun 1, 2017, at 11:05, Stepan Stepanyan <stepanya@jlab.org> wrote:

> FYI, we need to move files.
>
> Stepan
>
>> Begin forwarded message:
>>
>> From: Harut Avakian <avakian@jlab.org>
>> Subject: [Clas_offline] Fwd: ENP consumption of disk space under /work
>> Date: June 1, 2017 at 5:01:24 PM GMT+2
>> To: "clas_offline@jlab.org" <clas_offline@jlab.org>
>>
>> Dear All,
>>
>> As you can see from the e-mail below, keeping all of our /work disk space requires some additional funding.
>> Option 3 will inevitably impact farm operations by removing ~20% of the space from Lustre.
>>
>> We can also choose something in between options (1) and (3).
>> Please review the contents and move at least 75% of what is in /work/clas to either /cache or /volatile.
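>>
>> (For example, one possible way to drain a directory from /work to /volatile. This is a sketch with a hypothetical dataset path; each file is removed only after it has been copied:)
>>
>>   SRC=/work/clas/clase1/example_dataset        # hypothetical source directory
>>   DST=/volatile/clas/clase1/example_dataset
>>   mkdir -p "$DST"
>>   rsync -a --remove-source-files "$SRC"/ "$DST"/   # copy, then remove source files
>>   find "$SRC" -type d -empty -delete               # delete the emptied directory tree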
>> The current Hall-B usage includes:
>> 550G hallb/bonus
>> 1.5T hallb/clase1
>> 3.6T hallb/clase1-6
>> 3.3T hallb/clase1dvcs
>> 2.8T hallb/clase1dvcs2
>> 987G hallb/clase1f
>> 1.8T hallb/clase2
>> 1.6G hallb/clase5
>> 413G hallb/clase6
>> 2.2T hallb/claseg1
>> 3.9T hallb/claseg1dvcs
>> 1.2T hallb/claseg3
>> 4.1T hallb/claseg4
>> 2.7T hallb/claseg5
>> 1.7T hallb/claseg6
>> 367G hallb/clas-farm-output
>> 734G hallb/clasg10
>> 601G hallb/clasg11
>> 8.1T hallb/clasg12
>> 2.4T hallb/clasg13
>> 2.4T hallb/clasg14
>> 28G hallb/clasg3
>> 5.8G hallb/clasg7
>> 269G hallb/clasg8
>> 1.2T hallb/clasg9
>> 1.3T hallb/clashps
>> 1.8T hallb/clas-production
>> 5.6T hallb/clas-production2
>> 1.4T hallb/clas-production3
>> 12T hallb/hps
>> 13T hallb/prad
>>
>> Regards,
>>
>> Harut
>>
>> P.S. We have had crashes a few times, and they may happen again, so keeping important files in /work is not recommended.
>> You can see the list of lost files in /site/scicomp/lostfiles.txt and /site/scicomp/lostfiles-jan-2017.txt.
>>
>> -------- Forwarded Message --------
>> Subject: ENP consumption of disk space under /work
>> Date: Wed, 31 May 2017 10:35:51 -0400
>> From: Chip Watson <watson@jlab.org>
>> To: Sandy Philpott <philpott@jlab.org>, Graham Heyes <heyes@jlab.org>, Ole Hansen <ole@jlab.org>, Harut Avakian <avakian@jlab.org>, Brad Sawatzky <brads@jlab.org>, Mark M. Ito <marki@jlab.org>
>>
>> All,
>>
>> As I started the procurement of the new /work file server, I discovered that Physics' use of /work has grown unrestrained over the last year or two.
>>
>> "Unrestrained" because there is no way under Lustre to restrain it <br>
>> except via a very unfriendly Lustre quota system. As we leave some <br>
>> quota headroom to accommodate large swings in usage for each hall for <br>
>> cache and volatile, then /work continues to grow.<br>
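>>
>> (For reference, the Lustre quota interface in question. This is a sketch; the /lustre mount point and username are illustrative:)
>>
>>   # Show a user's current usage and limits on a Lustre file system
>>   lfs quota -u someuser /lustre
>>
>>   # Set soft/hard block limits (admin only); values default to kilobytes
>>   lfs setquota -u someuser -b 10000000 -B 12000000 /lustre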
>>
>> Total /work has now reached 260 TB, several times larger than I was anticipating. This constitutes more than 25% of Physics' share of Lustre, compared to LQCD, which uses less than 5% of its disk space on the un-managed /work.
>>
>> It would cost Physics an extra $25K (total $35K to $40K) to treat the 260 TB as a requirement.
>>
>> There are 3 paths forward:
>>
>> (1) Physics cuts its use of /work by a factor of 4-5.
>> (2) Physics increases funding to $40K.
>> (3) We pull a server out of Lustre, decreasing Physics' share of the system, and use it as half of the new active-active pair, beefing it up with SSDs and perhaps additional memory. This would actually shrink Physics' near-term costs, but it puts higher pressure on the file system for the farm.
>>
>> The decision is clearly Physics', but I do need a VERY FAST response to this question, as I need to move quickly now for LQCD's needs.
>>
>> Hall D + GlueX, 96 TB
>> CLAS + CLAS12, 98 TB
>> Hall C, 35 TB
>> Hall A <unknown, still scanning>
>>
>> Email, call (x7101), or drop by today 1:30-3:00 p.m. for discussion.
>>
>> thanks,
>> Chip