<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>I need to gather some numbers on our usage, but while I am doing
that ... FYI:<br>
</p>
<div class="moz-forward-container"><br>
<br>
-------- Forwarded Message --------
<table class="moz-email-headers-table" cellspacing="0"
cellpadding="0" border="0">
<tbody>
<tr>
<th valign="BASELINE" align="RIGHT" nowrap="nowrap">Subject:
</th>
<td>ENP consumption of disk space under /work</td>
</tr>
<tr>
<th valign="BASELINE" align="RIGHT" nowrap="nowrap">Date: </th>
<td>Wed, 31 May 2017 10:35:51 -0400</td>
</tr>
<tr>
<th valign="BASELINE" align="RIGHT" nowrap="nowrap">From: </th>
<td>Chip Watson <a class="moz-txt-link-rfc2396E" href="mailto:watson@jlab.org">&lt;watson@jlab.org&gt;</a></td>
</tr>
<tr>
<th valign="BASELINE" align="RIGHT" nowrap="nowrap">To: </th>
<td>Sandy Philpott <a class="moz-txt-link-rfc2396E" href="mailto:philpott@jlab.org">&lt;philpott@jlab.org&gt;</a>, Graham Heyes
<a class="moz-txt-link-rfc2396E" href="mailto:heyes@jlab.org">&lt;heyes@jlab.org&gt;</a>, Ole Hansen <a class="moz-txt-link-rfc2396E" href="mailto:ole@jlab.org">&lt;ole@jlab.org&gt;</a>,
Harut Avakian <a class="moz-txt-link-rfc2396E" href="mailto:avakian@jlab.org">&lt;avakian@jlab.org&gt;</a>, Brad Sawatzky
<a class="moz-txt-link-rfc2396E" href="mailto:brads@jlab.org">&lt;brads@jlab.org&gt;</a>, Mark M. Ito <a class="moz-txt-link-rfc2396E" href="mailto:marki@jlab.org">&lt;marki@jlab.org&gt;</a></td>
</tr>
</tbody>
</table>
<br>
<br>
<pre>All,

As I have started on the procurement of the new /work file server, I
have discovered that Physics' use of /work has grown unrestrained over
the last year or two.

"Unrestrained" because there is no way under Lustre to restrain it
except via a very unfriendly Lustre quota system. Because we leave some
quota headroom for each hall to accommodate large swings in cache and
volatile usage, /work continues to grow.

Total /work has now reached 260 TB, several times larger than I was
anticipating. This constitutes more than 25% of Physics' share of
Lustre; by comparison, LQCD uses less than 5% of its disk space on the
unmanaged /work.

It would cost Physics an extra $25K (total $35K - $40K) to treat the
260 TB as a requirement.

There are 3 paths forward:

(1) Physics cuts its use of /work by a factor of 4-5.

(2) Physics increases funding to $40K.

(3) We pull a server out of Lustre, decreasing Physics' share of the
system, and use that as half of the new active-active pair, beefing it
up with SSDs and perhaps additional memory. This would actually shrink
Physics' near-term costs, but it would put higher pressure on the file
system for the farm.

The decision is clearly Physics', but I do need a VERY FAST response to
this question, as I need to move quickly now for LQCD's needs.

Hall D + GlueX, 96 TB
CLAS + CLAS12, 98 TB
Hall C, 35 TB
Hall A &lt;unknown, still scanning&gt;

Email, call (x7101), or drop by today, 1:30-3:00 p.m., for discussion.

thanks,
Chip
</pre>
</div>
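<p>A minimal sketch of one way such per-area numbers might be gathered
is below. The directory paths and the walk-and-stat approach are
assumptions about the /work layout, not details from the message above;
on a Lustre mount, per-group accounting via the quota tools (e.g.,
"lfs quota -g") can give comparable totals without a full scan.</p>
<pre>#!/usr/bin/env python
# Rough sketch: tally the apparent size of each hall's /work area.
# The paths below are assumptions -- adjust to the real directory layout.

import os

WORK_AREAS = {
    "Hall D + GlueX": "/work/halld",
    "CLAS + CLAS12": "/work/clas12",
    "Hall C": "/work/hallc",
    "Hall A": "/work/halla",
}

def tree_bytes(top):
    """Total size, in bytes, of all regular files under top."""
    total = 0
    for dirpath, dirnames, filenames in os.walk(top, onerror=lambda err: None):
        for name in filenames:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass  # file disappeared or is unreadable; skip it
    return total

if __name__ == "__main__":
    for label, path in sorted(WORK_AREAS.items()):
        print("%-15s %7.1f TB" % (label, tree_bytes(path) / 1.0e12))
</pre>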
</body>
</html>