<html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>
<p>Richard,</p>
<p>Is this new?<br>
</p>
<p>My immediate thought is the possibility of having our REST files
available via this mechanism. For each run period, we could keep
the latest generation of REST files out there. The disk space at
JLab has not turned out to be sufficient to support such a scheme.
If we want to analyze REST files at JLab, the savings from
avoiding tape latency might outweigh the overhead of fetching our
REST files over the network.</p>
<p>Even if the scheme described above has some fatal flaw, it sounds
to me as though this resource is available now, at least at some
level. If that is the case, it would be a shame not to come up with
a use for it. Random trigger files, maybe?</p>
<p> -- Mark<br>
</p>
<div class="moz-cite-prefix">On 11/16/21 10:13 AM, Richard Jones
wrote:<br>
</div>
<blockquote type="cite" cite="mid:CABfxa3RLZaV82u5YMxiL7UH79zmzC=pHNxU8YGfUb8VGETMAQw@mail.gmail.com">
<div dir="ltr">Hello all,
<div><br>
</div>
<div>A few weeks ago, the possibility was raised of a shared
global filesystem to provide easy access to shared Gluex data
(eg. REST, analysis data sets, skims of various kinds) from
anywhere on or off site without having to wait to stage data
from tape. As a first step, I have created a namespace for
these files under the osgstorage file catalog, managed by the
osg ops.</div>
<div>
<ul>
<li>/cvmfs/<a href="http://gluex.osgstorage.org/gluex" moz-do-not-send="true">gluex.osgstorage.org/gluex</a></li>
</ul>
<div>The purpose of the second /gluex is to allow the various
physics working groups (e.g., cpp, primex) to each have their
own separate branch under /cvmfs/<a href="http://gluex.osgstorage.org" moz-do-not-send="true">gluex.osgstorage.org</a>. The <a href="http://osgstorage.org" moz-do-not-send="true">osgstorage.org</a> federation is built
around a network of shared caches across North America that
automatically finds and serves you the nearest copy of any
file registered in the catalog. The data are also cached on
your local machine through the cvmfs caching mechanism, so
repeated access to the same files is fast.</div>
</div>
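<div><br>
</div>
<div>As an illustration, once /cvmfs is mounted the namespace
behaves like any POSIX filesystem: the first read of a file pulls
it from the nearest cache, and repeat reads come from the local
cvmfs cache. A minimal sketch (the file name below is hypothetical):</div>
<pre># list what is published under the gluex namespace
ls /cvmfs/gluex.osgstorage.org/gluex

# the first read fetches from the nearest cache and fills the local
# cvmfs cache; running the same command again reads from local disk
md5sum /cvmfs/gluex.osgstorage.org/gluex/rest/example.rest.hddm</pre>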
<div><br>
</div>
<div>Right now UConn is contributing the "origin" service for
the gluex namespace, but hopefully JLab will also contribute
in the near future. To provide an osgstorage origin service,
all you need to do is export your files using a standard
xrootd server. Just email <a href="http://support.opensciencegrid.org" moz-do-not-send="true">support.opensciencegrid.org</a> and
tell them what portion of the /gluex namespace you want to
occupy, and they will start automatically indexing your files
and adding them to the catalog.</div>
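<div><br>
</div>
<div>For a sense of how little is involved, a bare-bones standalone
xrootd export can be just a few configuration lines. The sketch
below is only an assumption of what a minimal setup might look
like; the exported path is illustrative, and a real OSG origin
also needs the registration step described above:</div>
<pre># /etc/xrootd/xrootd-standalone.cfg -- minimal sketch, not a vetted
# OSG origin configuration
all.role server     # run as a simple standalone server
all.export /gluex   # directory tree to publish
xrd.port 1094       # standard xrootd port</pre>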
<div><br>
</div>
<div>If you don't have any storage to contribute but would
like to take advantage of this shared GlueX storage, write to
Mark Ito or to me and tell us what datasets you would like to
see published through the system. If you don't have /cvmfs
installed, you can still access any file in the namespace
using the stashcp command. If you log onto an ifarm machine at
the lab, you can poke around in /cvmfs/<a href="http://gluex.osgstorage.org/gluex" moz-do-not-send="true">gluex.osgstorage.org/gluex</a> and
see what is currently stored there, maybe 120 TB of various
bits and pieces. There is about 600 TB of additional space
available at present, so there is plenty of room for anything
you would like to add.</div>
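<div><br>
</div>
<div>For example, on a machine without /cvmfs mounted, a single file
can be pulled with stashcp roughly as follows. The namespace prefix
and file name here are assumptions for illustration; check the
catalog for real paths:</div>
<pre># copy one file out of the gluex namespace into the current directory
# (hypothetical path under the /gluex federation namespace)
stashcp /gluex/rest/example.rest.hddm .</pre>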
<div><br>
</div>
<div>-Richard Jones</div>
</div>
<br>
</blockquote>
</body>
</html>