[Halld-offline] Fwd: Re: ENP consumption of disk space under /work
Mark Ito
marki at jlab.org
Mon Oct 2 20:58:22 EDT 2017
From Sandy Philpott, FYI.
-------- Forwarded Message --------
Subject: Re: ENP consumption of disk space under /work
Date: Mon, 2 Oct 2017 13:00:29 -0400 (EDT)
From: Sandy Philpott <philpott at jlab.org>
To: Ole Hansen <ole at jlab.org>, Harut Avakian <avakian at jlab.org>, Brad
Sawatzky <brads at jlab.org>, Mark Ito <marki at jlab.org>
CC: Graham Heyes <heyes at jlab.org>, Chip Watson <watson at jlab.org>
All,
The new /work fileserver is installed, and testing is almost complete. We are resolving one known issue with NFS over TCP for CentOS 6 systems; once that is resolved, we'll be ready to go live.
We're creating the /work directories at the top level, and we will work with each of you to move the data you want to reside there, while shrinking the /work usage on the current ZFS appliance and Lustre areas.
Please send me any schedule you want to follow, so that groups with a tight timeline can be up and running at the start.
Meanwhile, please do continue to delete or move current work data to fit within the quotas; as a reminder, they are 30 TB for Halls A and C and 45 TB for Halls B and D. (A sketch for checking usage against quota follows below.)
Regards,
Sandy
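To make that quota check concrete, here is a minimal sketch that totals usage under a hall's work area and compares it to its quota. The path and quota below are illustrative assumptions (a hypothetical Hall D area and the 45 TB figure from the reminder above), not an official tool:

    import os

    WORK_AREA = "/work/halld"  # hypothetical path, for illustration only
    QUOTA_TB = 45.0            # Hall B/D quota from the reminder above

    # Walk the work area and sum file sizes; lstat avoids following
    # symlinks, and files that vanish mid-walk are simply skipped.
    total_bytes = 0
    for dirpath, dirnames, filenames in os.walk(WORK_AREA):
        for name in filenames:
            try:
                total_bytes += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass

    used_tb = total_bytes / 1e12
    print("%s: %.1f of %.0f TB quota (%.0f%% used)"
          % (WORK_AREA, used_tb, QUOTA_TB, 100.0 * used_tb / QUOTA_TB))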
On 07/17/2017 02:23 PM, Chip Watson wrote:
>> All,
>>
>> The purchase order for the new /work file server (with A+C
>> enhancements) will be done today or tomorrow. The bottom line is
>> that the old 10:40:10:40 distribution for compute resources will be
>> 20:30:20:30 for /work resources due to the supplement from halls A &
>> C ($3K each).
>>
>> B & D will each get 45 TB of quota.
>>
>> A & C will each get 30 TB.
>>
>> These quotas total 150 TB, which would put the new Physics server at
>> about 84% full. Note that current usage is 253 TB against that future
>> bound of 150 TB, so about 40% still needs to be deleted or moved to
>> tape, /cache, or /volatile [arithmetic sketch below].
>> Further expansion is possible in units of around $3K, which will add
>> 18 TB for the person with the money, and 18 TB for the division as a
>> whole -- this keeps the cost per TB the same for late buy-in as it
>> was for early buy-in.
>>
>> FYI, cost per TB is about $185 at 80% full, vs. $100 for a new Lustre
>> server, so there is an 85% premium for optimizing for zillions of
>> small files. THEREFORE don't waste this space on large files. Ever.
>> [See the large-file sketch below.]
>>
>> If anyone has any questions about the distribution, please reply-all.
>>
>> We should have the machine in a month, and have it ready for
>> production by the beginning of September if all goes well. Sandy will
>> be coming up with a migration order (we'll move project by project),
>> so if you have any suggested ordering for your hall, please send that
>> to Sandy. I would suggest that we migrate projects taking data in
>> the fall first, as that will most rapidly lower the stress on Lustre.
>>
>> Chip
>>
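The figures in Chip's message above check out with a few lines of arithmetic. A quick sketch, with every input taken from the message itself (the server capacity is derived from the 84% figure rather than quoted):

    quotas_tb = {"A": 30, "B": 45, "C": 30, "D": 45}  # the 20:30:20:30 split
    total_tb = sum(quotas_tb.values())                # 150 TB of quota

    capacity_tb = total_tb / 0.84    # "about 84% full" implies ~179 TB raw
    usage_tb = 253.0                 # current usage per the message
    excess_tb = usage_tb - total_tb  # 103 TB over the future bound
    print("to delete or move: %.0f TB (%.0f%% of current usage)"
          % (excess_tb, 100 * excess_tb / usage_tb))  # ~41%, i.e. "40%"

    premium = 185.0 / 100.0 - 1.0    # $185/TB here vs $100/TB on Lustre
    print("small-file premium: %.0f%%" % (100 * premium))  # 85%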
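And since the new server carries that 85% premium for small-file optimization, large files are the natural first candidates to move to /cache, /volatile, or tape. A minimal sketch for flagging them, again with a hypothetical path and an illustrative 1 GB threshold:

    import os

    WORK_AREA = "/work/halld"  # hypothetical path, for illustration only
    THRESHOLD = 1 << 30        # flag files larger than 1 GiB

    # Collect (size, path) pairs for every large file, then print them
    # biggest first so the worst offenders are easy to spot.
    large = []
    for dirpath, dirnames, filenames in os.walk(WORK_AREA):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.lstat(path).st_size
            except OSError:
                continue
            if size >= THRESHOLD:
                large.append((size, path))

    for size, path in sorted(large, reverse=True):
        print("%8.1f GB  %s" % (size / 1e9, path))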