[Hps-analysis] Fwd: [Clas_offline] Fwd: ENP consumption of disk space under /work

Luca Colaneri luca.colaneri at roma2.infn.it
Tue Jun 6 05:32:25 EDT 2017


I'm having trouble accessing the farm; I don't know why. I'll remove my 
files as soon as I can.

Cheers

L.


On 01/06/2017 23:14, Solt, Matthew Reagan wrote:
>
> I reduced my work directory from 649G to 14G. I think I win ;)
>
>
> Matt Solt
>
> ------------------------------------------------------------------------
> *From:* hps-software at slac.stanford.edu on behalf of Kyle McCarty 
> <mccaky at gmail.com>
> *Sent:* Thursday, June 1, 2017 1:04 PM
> *To:* Bradley T Yale
> *Cc:* Nathan Baltzell; hps-software; hps-analysis at jlab.org
> *Subject:* Re: [Hps-analysis] Fwd: [Clas_offline] Fwd: ENP consumption 
> of disk space under /work
> I have reduced my /work/ usage to 16 MB.
>
> On Thu, Jun 1, 2017 at 3:05 PM, Bradley T Yale <btu29 at wildcats.unh.edu> 
> wrote:
>
>     I deleted/moved 351 GB in mc_production.
>
>
>     The following things are owned by other users though, and ~6 GB
>     more can be freed up if no longer needed:
>
>
>     Luca:
>
>     762M: /work/hallb/hps/mc_production/Luca/lhe/Vegas_10_10_2016/
>
>
>     Holly:
>
>     3.1G: /work/hallb/hps/mc_production/MG5/dst/
>
>     2G:
>     /work/hallb/hps/mc_production/postTriSummitFixes/tritrig/1pt05/NOSUMCUT/
>
>     25M:  /work/hallb/hps/mc_production/logs/slic/ap/
>
>     16M: /work/hallb/hps/mc_production/logs/readout/beam-tri/1pt92/
>
>
>     Matt G:
>
>     35M: /work/hallb/hps/mc_production/dst/
>
>     11M: /work/hallb/hps/mc_production/logs/dqm/
>
>
>     There is also 2 GB of old 2.2 GeV A' MC, which should no longer be
>     relevant, but I didn't want to do anything with it since it had
>     mock data stuff in there:
>
>     /work/hallb/hps/mc_production/lhe/ap/2pt2/
>
>
>     ------------------------------------------------------------------------
>     *From:* hps-software at SLAC.STANFORD.EDU on behalf of Nathan
>     Baltzell <baltzell at jlab.org>
>     *Sent:* Thursday, June 1, 2017 1:23:19 PM
>     *To:* HPS-SOFTWARE
>     *Cc:* hps-analysis at jlab.org
>     *Subject:* Re: [Hps-analysis] Fwd: [Clas_offline] Fwd: ENP
>     consumption of disk space under /work
>     Here’s the most relevant usage:
>
>     649G    mrsolt/
>     570G    sebouh/
>     459G    mc_production/
>     228G    holly
>     159G    mccaky/
>     78G     rafopar/
>     45G     omoreno/
>     44G     spaul
>     39G     fxgirod
>     34G     jeremym
>
>     data/engrun2015:
>     3.2T    tweakpass6
>     50G     tweakpass6fail
>     64G     tpass7
>     2.4G    tpass7b
>     39G     tpass7c
>     6.5G    t_tweakpass_a
>     373G    pass6/skim
>     201G    pass6/dst
>
>     data/physrun2016:
>     3.5T    pass0
>     690G    feeiter4
>     94M     feeiter0
>     327M    feeiter1
>     339M    feeiter2
>     338M    feeiter3
>     15G     noPass
>     24G     pass0_allign
>     52G     pass0fail
>     4.5G    tmp_test
>     281G    tpass1
>     11G     upass0
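>
>     (A summary like the one above is essentially du output; a minimal Python
>     sketch along the following lines, assuming an illustrative top-level
>     path, would reproduce it.)
>
>     #!/usr/bin/env python
>     # Minimal sketch: per-subdirectory usage under one /work area,
>     # roughly what `du -sh <area>/*` reports. The path is illustrative.
>     import os
>
>     TOP = "/work/hallb/hps"  # assumed area; adjust to your own
>
>     def tree_size(path):
>         # Sum sizes of regular files below `path`; os.path.isfile() is
>         # False for broken symlinks, so those are skipped.
>         total = 0
>         for root, _, files in os.walk(path):
>             for name in files:
>                 fp = os.path.join(root, name)
>                 if os.path.isfile(fp):
>                     total += os.path.getsize(fp)
>         return total
>
>     def human(nbytes):
>         # Render a byte count with a single-letter unit, e.g. 159G.
>         for unit in ("B", "K", "M", "G", "T"):
>             if nbytes < 1024 or unit == "T":
>                 return "%.0f%s" % (nbytes, unit)
>             nbytes /= 1024.0
>
>     if __name__ == "__main__":
>         subdirs = [d for d in os.listdir(TOP)
>                    if os.path.isdir(os.path.join(TOP, d))]
>         usage = sorted(((tree_size(os.path.join(TOP, d)), d) for d in subdirs),
>                        reverse=True)
>         for size, name in usage:
>             print("%-8s%s/" % (human(size), name))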
>
>
>
>
>     On Jun 1, 2017, at 11:05, Stepan Stepanyan <stepanya at jlab.org> wrote:
>
>     > FYI, we need to move files.
>     >
>     > Stepan
>     >
>     >> Begin forwarded message:
>     >>
>     >> From: Harut Avakian <avakian at jlab.org>
>     >> Subject: [Clas_offline] Fwd: ENP consumption of disk space under /work
>     >> Date: June 1, 2017 at 5:01:24 PM GMT+2
>     >> To: clas_offline at jlab.org
>     >>
>     >>
>     >>
>     >>
>     >> Dear All,
>     >>
>     >> As you can see from the e-mail below, keeping all of our work disk
>     >> space requires some additional funding. Option 3 would inevitably
>     >> impact farm operations, removing ~20% of the space from Lustre.
>     >>
>     >> We can also choose something between options 1) and 3).
>     >> Please review the content and move at least 75% of what is in
>     >> /work/clas to either /cache or /volatile.
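>     >>
>     >> (As a concrete example of such a move, a minimal sketch for the
>     >> /volatile case, assuming illustrative source and destination paths:
>     >>
>     >>     import shutil
>     >>
>     >>     SRC = "/work/clas/myanalysis"      # hypothetical /work directory
>     >>     DST = "/volatile/clas/myanalysis"  # hypothetical destination
>     >>
>     >>     # Copy first; copytree raises on failure, so the original is
>     >>     # removed only after the copy has completed successfully.
>     >>     shutil.copytree(SRC, DST)
>     >>     shutil.rmtree(SRC)
>     >>
>     >> The same pattern applies when relocating files elsewhere.)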
>     >> The current Hall-B usage includes:
>     >> 550G    hallb/bonus
>     >> 1.5T    hallb/clase1
>     >> 3.6T    hallb/clase1-6
>     >> 3.3T    hallb/clase1dvcs
>     >> 2.8T    hallb/clase1dvcs2
>     >> 987G    hallb/clase1f
>     >> 1.8T    hallb/clase2
>     >> 1.6G    hallb/clase5
>     >> 413G    hallb/clase6
>     >> 2.2T    hallb/claseg1
>     >> 3.9T    hallb/claseg1dvcs
>     >> 1.2T    hallb/claseg3
>     >> 4.1T    hallb/claseg4
>     >> 2.7T    hallb/claseg5
>     >> 1.7T    hallb/claseg6
>     >> 367G    hallb/clas-farm-output
>     >> 734G    hallb/clasg10
>     >> 601G    hallb/clasg11
>     >> 8.1T    hallb/clasg12
>     >> 2.4T    hallb/clasg13
>     >> 2.4T    hallb/clasg14
>     >> 28G    hallb/clasg3
>     >> 5.8G    hallb/clasg7
>     >> 269G    hallb/clasg8
>     >> 1.2T    hallb/clasg9
>     >> 1.3T    hallb/clashps
>     >> 1.8T    hallb/clas-production
>     >> 5.6T    hallb/clas-production2
>     >> 1.4T    hallb/clas-production3
>     >> 12T    hallb/hps
>     >> 13T    hallb/prad
>     >>
>     >> Regards,
>     >>
>     >> Harut
>     >>
>     >> P.S. We have had crashes a few times and they may happen again in the
>     >> future, so keeping important files in /work is not recommended.
>     >> You can see the list of lost files in /site/scicomp/lostfiles.txt and
>     >> /site/scicomp/lostfiles-jan-2017.txt
>     >>
>     >>
>     >>
>     >> -------- Forwarded Message --------
>     >> Subject:     ENP consumption of disk space under /work
>     >> Date:        Wed, 31 May 2017 10:35:51 -0400
>     >> From:        Chip Watson <watson at jlab.org>
>     >> To:          Sandy Philpott <philpott at jlab.org>, Graham Heyes
>     >> <heyes at jlab.org>, Ole Hansen <ole at jlab.org>, Harut Avakian
>     >> <avakian at jlab.org>, Brad Sawatzky <brads at jlab.org>,
>     >> Mark M. Ito <marki at jlab.org>
>     >>
>     >> All,
>     >>
>     >> As I have started on the procurement of the new /work file server, I
>     >> have discovered that Physics' use of /work has grown unrestrained over
>     >> the last year or two.
>     >>
>     >> "Unrestrained" because there is no way under Lustre to restrain it
>     >> except via a very unfriendly Lustre quota system. Because we leave
>     >> some quota headroom to accommodate large swings in each hall's usage
>     >> of cache and volatile, /work continues to grow.
>     >>
>     >> Total /work has now reached 260 TB, several times larger than I was
>     >> anticipating. This constitutes more than 25% of Physics' share of
>     >> Lustre, compared to LQCD, which uses less than 5% of its disk space on
>     >> the un-managed /work.
>     >>
>     >> It would cost Physics an extra $25K (total $35K-$40K) to treat the
>     >> 260 TB as a requirement.
>     >>
>     >> There are 3 paths forward:
>     >>
>     >> (1) Physics cuts its use of /work by a factor of 4-5.
>     >> (2) Physics increases funding to $40K.
>     >> (3) We pull a server out of Lustre, decreasing Physics' share of the
>     >> system, and use that as half of the new active-active pair, beefing it
>     >> up with SSDs and perhaps additional memory; this would actually shrink
>     >> Physics' near-term costs, but puts higher pressure on the file system
>     >> for the farm.
>     >>
>     >> The decision is clearly Physics', but I do need a VERY FAST response
>     >> to this question, as I need to move quickly now for LQCD's needs.
>     >>
>     >> Hall D + GlueX:  96 TB
>     >> CLAS + CLAS12:   98 TB
>     >> Hall C:          35 TB
>     >> Hall A:          <unknown, still scanning>
>     >>
>     >> Email, call (x7101), or drop by today 1:30-3:00 p.m. for discussion.
>     >>
>     >> thanks,
>     >> Chip
>     >>
>     >>