<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p><br>
</p>
<div class="moz-forward-container"><br>
<br>
<p>Dear All,</p>
<p>As you can see from the e-mail below, keeping all of our /work
disk space requires additional funding.<br>
</p>
<p>Option 3 will inevitably impact farm operations, removing
~20% of the space from Lustre. </p>
<p>We can also choose something in between options 1 and 3.<br>
</p>
<p>Please review your content and move at least 75% of what is in
/work/clas to either /cache or /volatile (see the sketch below). <br>
</p>
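<p>For orientation, here is a minimal sketch of how one could tally the
top-level directories under /work/clas and see how much data a 75%
reduction corresponds to. It is only an illustration, not a SciComp
tool: the /work/clas path and the one-level directory layout are
assumptions, and the actual moves to /cache or /volatile should still
be done with the usual site tools.<br>
</p>
<pre>#!/usr/bin/env python3
# Sketch: tally the top-level directories under /work/clas and report how
# much data a 75% reduction would correspond to.  The mount point and the
# one-level layout are assumptions; adjust them to the real area.
import os

ROOT = "/work/clas"        # assumed location of the CLAS work area
TARGET_FRACTION = 0.75     # fraction of current usage that should move

def tree_size(path):
    """Sum of file sizes (bytes) below path, skipping unreadable entries."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(path, onerror=lambda e: None):
        for name in filenames:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass
    return total

def main():
    sizes = {d: tree_size(os.path.join(ROOT, d))
             for d in os.listdir(ROOT)
             if os.path.isdir(os.path.join(ROOT, d))}
    grand_total = sum(sizes.values())
    for name, size in sorted(sizes.items(), key=lambda kv: -kv[1]):
        print("%8.2f TB  %s" % (size / 1e12, name))
    print("%8.2f TB  total" % (grand_total / 1e12))
    print("%8.2f TB  to move for a %.0f%% reduction"
          % (grand_total * TARGET_FRACTION / 1e12, TARGET_FRACTION * 100))

if __name__ == "__main__":
    main()
</pre>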
<p>The current Hall-B usage includes:<br>
</p>
<pre>
 550G  hallb/bonus
 1.5T  hallb/clase1
 3.6T  hallb/clase1-6
 3.3T  hallb/clase1dvcs
 2.8T  hallb/clase1dvcs2
 987G  hallb/clase1f
 1.8T  hallb/clase2
 1.6G  hallb/clase5
 413G  hallb/clase6
 2.2T  hallb/claseg1
 3.9T  hallb/claseg1dvcs
 1.2T  hallb/claseg3
 4.1T  hallb/claseg4
 2.7T  hallb/claseg5
 1.7T  hallb/claseg6
 367G  hallb/clas-farm-output
 734G  hallb/clasg10
 601G  hallb/clasg11
 8.1T  hallb/clasg12
 2.4T  hallb/clasg13
 2.4T  hallb/clasg14
  28G  hallb/clasg3
5.8G  hallb/clasg7
 269G  hallb/clasg8
 1.2T  hallb/clasg9
 1.3T  hallb/clashps
 1.8T  hallb/clas-production
 5.6T  hallb/clas-production2
 1.4T  hallb/clas-production3
  12T  hallb/hps
  13T  hallb/prad
</pre>
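<p>Within each of these areas, one rough way to pick what can leave
/work is to look for large files that have not been read in months;
old output can usually go to /volatile (scratch) or /cache
(tape-backed). The sketch below uses a hypothetical example path, and
since access times on Lustre are often updated lazily, the reported
ages are only approximate.<br>
</p>
<pre>#!/usr/bin/env python3
# Sketch: list the largest files in one work area that have not been read
# for ~6 months -- likely candidates for /cache or /volatile.  AREA is a
# hypothetical example; atime on Lustre may be imprecise.
import os
import time

AREA = "/work/clas/hallb/clas-production"   # hypothetical example area
STALE_DAYS = 180                            # "untouched for ~6 months"
MIN_BYTES = 1024**3                         # ignore files below 1 GiB

now = time.time()
candidates = []
for dirpath, _dirnames, filenames in os.walk(AREA, onerror=lambda e: None):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            st = os.lstat(path)
        except OSError:
            continue                        # vanished or unreadable; skip
        age_days = (now - st.st_atime) / 86400.0
        if st.st_size >= MIN_BYTES and age_days >= STALE_DAYS:
            candidates.append((st.st_size, age_days, path))

# Largest first, top 50 only, so the output stays readable.
for size, age, path in sorted(candidates, reverse=True)[:50]:
    print("%7.1f GiB  %5.0f days  %s" % (size / (1024**3), age, path))
</pre>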
<p><br>
</p>
<p>Regards,</p>
Harut<br>
<div class="moz-forward-container"><br>
P.S. We have had a few crashes in the past and they may happen
again, so keeping important files in /work is not recommended.<br>
You can see the lists of lost files in
/site/scicomp/lostfiles.txt and
/site/scicomp/lostfiles-jan-2017.txt<br>
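<br>
A quick, unofficial way to check whether any of your directories appear
in those lists (the directory prefix below is only an example):<br>
<pre>#!/usr/bin/env python3
# Sketch: scan the lost-file lists mentioned above for a given directory
# prefix.  NEEDLE is a hypothetical example; use your own group or user
# directory instead.
LOST_LISTS = ["/site/scicomp/lostfiles.txt",
              "/site/scicomp/lostfiles-jan-2017.txt"]
NEEDLE = "/work/clas/hallb/"

for list_file in LOST_LISTS:
    try:
        with open(list_file) as fh:
            hits = [line.rstrip() for line in fh if NEEDLE in line]
    except OSError:
        print("cannot read %s" % list_file)
        continue
    print("%s: %d matching entries" % (list_file, len(hits)))
    for line in hits[:20]:                  # show at most 20 per list
        print("  " + line)
</pre>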
<br>
<br>
<br>
-------- Forwarded Message --------
<table class="moz-email-headers-table" border="0"
cellpadding="0" cellspacing="0">
<tbody>
<tr>
<th align="RIGHT" nowrap="nowrap" valign="BASELINE">Subject:
</th>
<td>ENP consumption of disk space under /work</td>
</tr>
<tr>
<th align="RIGHT" nowrap="nowrap" valign="BASELINE">Date:
</th>
<td>Wed, 31 May 2017 10:35:51 -0400</td>
</tr>
<tr>
<th align="RIGHT" nowrap="nowrap" valign="BASELINE">From:
</th>
<td>Chip Watson <a moz-do-not-send="true"
class="moz-txt-link-rfc2396E"
href="mailto:watson@jlab.org"><watson@jlab.org></a></td>
</tr>
<tr>
<th align="RIGHT" nowrap="nowrap" valign="BASELINE">To: </th>
<td>Sandy Philpott <a moz-do-not-send="true"
class="moz-txt-link-rfc2396E"
href="mailto:philpott@jlab.org"><philpott@jlab.org></a>,
Graham Heyes <a moz-do-not-send="true"
class="moz-txt-link-rfc2396E"
href="mailto:heyes@jlab.org"><heyes@jlab.org></a>,
Ole Hansen <a moz-do-not-send="true"
class="moz-txt-link-rfc2396E"
href="mailto:ole@jlab.org"><ole@jlab.org></a>,
Harut Avakian <a moz-do-not-send="true"
class="moz-txt-link-rfc2396E"
href="mailto:avakian@jlab.org"><avakian@jlab.org></a>,
Brad Sawatzky <a moz-do-not-send="true"
class="moz-txt-link-rfc2396E"
href="mailto:brads@jlab.org"><brads@jlab.org></a>,
Mark M. Ito <a moz-do-not-send="true"
class="moz-txt-link-rfc2396E"
href="mailto:marki@jlab.org"><marki@jlab.org></a></td>
</tr>
</tbody>
</table>
<br>
<br>
<pre>All,

As I have started on the procurement of the new /work file server, I
have discovered that Physics' use of /work has grown unrestrained over
the last year or two.

"Unrestrained" because there is no way under Lustre to restrain it
except via a very unfriendly Lustre quota system. Because we leave some
quota headroom to accommodate large swings in each hall's usage of
cache and volatile, /work continues to grow.

Total /work has now reached 260 TB, several times larger than I was
anticipating. This constitutes more than 25% of Physics' share of
Lustre, whereas LQCD uses less than 5% of its disk space on the
un-managed /work.

It would cost Physics an extra $25K (total $35K - $40K) to treat the 260
TB as a requirement.

There are 3 paths forward:

(1) Physics cuts its use of /work by a factor of 4-5.

(2) Physics increases funding to $40K.

(3) We pull a server out of Lustre, decreasing Physics' share of the
system, and use that as half of the new active-active pair, beefing it
up with SSDs and perhaps additional memory; this would actually shrink
Physics' near-term costs, but puts higher pressure on the file system
for the farm.

The decision is clearly Physics', but I do need a VERY FAST response to
this question, as I need to move quickly now for LQCD's needs.

Hall D + GlueX, 96 TB
CLAS + CLAS12, 98 TB
Hall C, 35 TB
Hall A <unknown, still scanning>

Email, call (x7101), or drop by today 1:30-3:00 p.m. for discussion.

thanks,
Chip
</pre>
</div>
</div>
</body>
</html>