[Halld-offline] Fwd: Re: ENP consumption of disk space under /work
Mark Ito
marki at jlab.org
Fri Feb 2 18:56:53 EST 2018
FYI, more work space.
-------- Forwarded Message --------
Subject: Re: ENP consumption of disk space under /work
Date: Fri, 2 Feb 2018 14:51:14 -0500 (EST)
From: Sandy Philpott <philpott at jlab.org>
To: Ole Hansen <ole at jlab.org>, Harut Avakian <avakian at jlab.org>, Brad
Sawatzky <brads at jlab.org>, Mark Ito <marki at jlab.org>
CC: Graham Heyes <heyes at jlab.org>, Chip Watson <watson at jlab.org>
Hi All,
To follow on from yesterday's SciComp/Physics meeting, the new FY18 disks have been attached to scifs17 and the system is ready for quota increases. Since this gives Physics a 150% increase over the initial FY17 fileserver purchase, we'll naively increase the quotas to 150% higher than they were originally set.
This means setting the quotas now at 75 TB each for Halls A and C, and 110 TB each for Halls B and D.
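For reference, here is a minimal sketch of the quota arithmetic (Python; the original 30 TB and 45 TB quotas are quoted further down in this thread, and treating 110 TB as a round-down of 112.5 TB is my reading, not something stated explicitly):

    # A 150% increase means the new quota is 2.5x the original FY17 setting
    # (30 TB for Halls A and C, 45 TB for Halls B and D).
    original_quotas_tb = {"A": 30, "B": 45, "C": 30, "D": 45}
    increase = 1.50

    for hall, old in sorted(original_quotas_tb.items()):
        new = old * (1 + increase)
        print(f"Hall {hall}: {old} TB -> {new:.1f} TB")
    # Prints 75.0 TB for A and C and 112.5 TB for B and D -- the latter
    # apparently rounded to 110 TB in the message above.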
As for the current status of the data migration: Halls C and D are using the new filer exclusively. There is about 10 TB of Hall A data left to move. The Hall B copies (~80 TB) have started, and we'll allocate the freed Lustre disk to /cache as the Lustre data is relocated.
Let me know of any changes needed...
Regards,
Sandy
----- Original Message -----
From: "Sandy Philpott" <philpott at jlab.org>
To: "Ole Hansen" <ole at jlab.org>, "Harut Avakian" <avakian at jlab.org>, "Brad Sawatzky" <brads at jlab.org>, "Mark Ito" <marki at jlab.org>
Cc: "Graham Heyes" <heyes at jlab.org>, "Chip Watson" <watson at jlab.org>
Sent: Friday, October 27, 2017 3:42:56 PM
Subject: Re: ENP consumption of disk space under /work
Hello,
We've got a Physics/Scicomp meeting scheduled this Wednesday morning where we'll go over details of the /work move, and of the additional space under procurement. In the meantime, Hall D is ready to move now, if they can get an additional 15 TB of quota to start. This would then free 55 TB from Lustre for other use.
Also, the Hall C data has been copied and is ready to cut over.
I wanted to put this out ahead of time, along with any other factors to consider,
Sandy
----- Original Message -----
From: "Sandy Philpott" <philpott at jlab.org>
To: "Ole Hansen" <ole at jlab.org>, "Harut Avakian" <avakian at jlab.org>, "Brad Sawatzky" <brads at jlab.org>, "Mark Ito" <marki at jlab.org>
Cc: "Graham Heyes" <heyes at jlab.org>, "Chip Watson" <watson at jlab.org>
Sent: Monday, October 2, 2017 1:00:29 PM
Subject: Re: ENP consumption of disk space under /work
All,
The new /work fileserver is installed, and testing is almost complete. We are resolving one known issue with NFS over TCP on CentOS 6 systems; once that is resolved we'll be ready to go live.
We're creating the /work directories at the top level, and will work with each of you to move the data you want to reside there, while shrinking the work usage on the current ZFS appliance and Lustre areas.
Please send me any scheduling order you'd like to follow, so we can get groups with a tight timeline up and running at the start.
Meanwhile, please do continue to delete or move current work data to fit within the quotas... a reminder that they're 30 TB for A and C, and 45 TB for B & D.
Regards,
Sandy
On 07/17/2017 02:23 PM, Chip Watson wrote:
>> All,
>>
>> The purchase order for the new /work file server (with A+C
>> enhancements) will be done today or tomorrow. The bottom line is
>> that the old 10:40:10:40 distribution for compute resources will be
>> 20:30:20:30 for /work resources due to the supplement from halls A &
>> C ($3K each).
>>
>> B & D will each get 45 TB of quota.
>>
>> A & C will each get 30 TB.
>>
>> These quotas together come to about 84% of the new Physics server's
>> capacity. Note that current usage is 253 TB against a future limit of
>> 150 TB, so about 40% still needs to be deleted or moved to tape, /cache
>> or /volatile (a short arithmetic check follows this message).
>> Further expansion is possible in units of around $3K, which will add
>> 18 TB for the person with the money, and 18 TB for the division as a
>> whole -- this keeps the cost per TB the same for late buy-in as it
>> was for early buy-in.
>>
>> FYI, cost per TB is about $185 at 80% full, vs $100 for a new Lustre
>> server, thus there is an 85% premium for optimizing to zillions of
>> small files. THEREFORE don't waste this on large files. Ever.
>>
>> If anyone has any questions about the distribution, please reply/all.
>>
>> We should have the machine in a month, and have it ready for
>> production by the beginning of September if all goes well. Sandy will
>> be coming up with a migration order (we'll move project by project),
>> so if you have any suggested ordering for your hall, please send that
>> to Sandy. I could suggest that we migrate projects taking data in
>> the fall first, as that will most rapidly lower the stress on Lustre.
>>
>> Chip
>>
>
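For reference, here is a minimal sketch of the arithmetic in Chip's message above (Python; the ~180 TB server capacity is inferred from the 150 TB of quota being about 84% full, and is my assumption, not a figure quoted in the original):

    # Quota totals from the message: 45 + 45 TB for B and D, 30 + 30 TB for A and C.
    total_quota_tb = 45 + 45 + 30 + 30            # 150 TB
    inferred_capacity_tb = total_quota_tb / 0.84  # ~179 TB (inferred, not quoted)

    # Fraction of current /work usage that must be deleted or moved off /work.
    current_usage_tb = 253
    to_remove = (current_usage_tb - total_quota_tb) / current_usage_tb
    print(f"fraction to delete or move: {to_remove:.0%}")  # ~41%, the "40%" above

    # Cost premium of the small-file-optimized /work server over Lustre.
    work_cost_per_tb = 185    # USD per TB at 80% full
    lustre_cost_per_tb = 100  # USD per TB for a new Lustre server
    print(f"premium over Lustre: {work_cost_per_tb / lustre_cost_per_tb - 1:.0%}")  # 85%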