<div dir="ltr"><div class="gmail_quote"><div dir="ltr">Sean,<br><div><br></div><div>Yes, Globus Online is very much like dropbox. So for individual users and a desktop compute platform, pushing a few large files around, it is a handy solution. I recently attended a Globus Online workshop, where I asked if it would be suitable for a file access and storage for grid jobs, the person replied that this might be a future area where effort might be directed, but at present it was not recommended, because</div>
<div><ol><li>Globus Online was designed for the interactive environment, not batch</li><li>The authentication is completely different and orthogonal to the grid authentication infrastructure, so none of the privs that are shipped around with grid jobs can be used to authenticate GO transactions.</li>
</ol><div> I would never consider using dropbox for grid jobs, even though with the right amount of effort and determination it might be made to work. I think it is not a good fit. </div></div><div><br></div><div>I think xrootd is an obvious choice for root files. We run a xrootd service here at UConn, in parallel with the dcache SRM. By the time they are reduced to root files, the volume of the data is very compact, like DST's, and a distributed xrootd service makes a lot of sense in that context, especially when combined with PROOF. We use PROOF fairly heavily in our group, because of how it allows hundreds of root processing threads to run simultaneously within a single application operating on a single tree.</div>
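To make that concrete, here is a minimal sketch of the access pattern I have in mind: a ROOT macro that chains DST files served over xrootd and processes them in parallel with PROOF. The server name, file paths, tree name, and selector below are hypothetical placeholders, not our actual UConn or JLab configuration; "lite://" starts a local PROOF-Lite session, and pointing TProof::Open at a PROOF master instead would distribute the same job across a cluster.

    // read_dst.C -- sketch only: process DST files served over xrootd with PROOF
    #include "TProof.h"
    #include "TChain.h"

    void read_dst()
    {
       // Start a local PROOF-Lite session (one worker per core on this machine);
       // replace "lite://" with a PROOF master URL to use a full cluster.
       TProof::Open("lite://");

       // Chain DST files read directly over xrootd -- no local copies needed.
       TChain chain("dst_tree");                                        // hypothetical tree name
       chain.Add("root://xrootd.example.edu//gluex/dst/run_001.root");  // hypothetical paths
       chain.Add("root://xrootd.example.edu//gluex/dst/run_002.root");

       // Hand the chain to PROOF so the selector runs in parallel on all workers.
       chain.SetProof();
       chain.Process("MySelector.C+");   // MySelector: a user-written TSelector
    }

The same chain also works without PROOF for debugging: skip SetProof() and call Process() directly to run the selector serially on a single file first.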
-Richard Jones

On Tue, Feb 4, 2014 at 12:03 PM, Sean Dobbs <s-dobbs@northwestern.edu> wrote:
Dear all,

Has there ever been any discussion of running an XrootD service at JLab? It is an increasingly popular service for accessing data over the grid these days.

BTW, I've used Globus Online on the desktop at a workshop before, and it was really easy, about as hard as setting up Dropbox or a similar service.

Cheers,
Sean
<br><div class="gmail_quote">
On Tue, Feb 4, 2014 at 8:23 AM, Mark Ito <<a href="mailto:marki@jlab.org">marki@jlab.org</a>> wrote:<br><blockquote class="gmail_quote">
<div bgcolor="#FFFFFF" text="#000000">
>From Sandy Philpott:<br>
<div><br>
<br>
-------- Original Message --------
Subject: Re: [Halld-offline] Data Challenge Meeting Minutes, January 31, 2014
Date: Mon, 3 Feb 2014 16:58:51 -0500 (EST)
From: Sandy Philpott <philpott@jlab.org>
To: Mark Ito <marki@jlab.org>

Hi Mark,
As discussed...
IT/SciComp is ready for the Hall D data challenge. When we get your test
description details -- rates to/from tape, disk requirements, timeframe, etc. --
we'll flip the farm nodes in HPC back to the farm, along with the 12s nodes
needed to get 2250 cores total in the cluster, so that Hall D will be able to
run 1250 cores with other jobs running. The nodes to be moved from HPC will have
to drain jobs, which can take up to 48 hours. We want to make sure you are
ready to run when we do it so that we don't waste CPUs sitting idle.
There's no SRM capability at JLab, and recall from the review that the reviewers
suggested GlueX consider something besides SRM. At one point in 2010, Curtis
had firewall openings via our Cyber group for client tools installed
in user space on the ifarm nodes; check with Greg and company on that status
if still needed, as the network configs for the ifarms have changed since then.
IT supports Globus Online for offsite data transfers using the site's 10 gigabit
pipe; info available at https://scicomp.jlab.org/docs/?q=node/11
Let us know when you're ready to run and what you need, then let the
node flipping and data challenge begin. I understand from today's discussion
you expect the timeframe to be sometime near the end of February. Note that
the farm is almost idle at this point, so you shouldn't have any
trouble getting test jobs and prototypes started in the meantime.
Regards,
Sandy
----- Original Message -----
From: "Mark Ito" <a href="mailto:marki@jlab.org"><marki@jlab.org></a>
To: "GlueX Offline Software Email List" <a href="mailto:halld-offline@jlab.org"><halld-offline@jlab.org></a>
Sent: Monday, February 3, 2014 1:05:43 PM
Subject: [Halld-offline] Data Challenge Meeting Minutes, January 31, 2014
...
9. Is the JLab CC ready for us?
+ Mark will talk to Sandy about our plans.
10. What ability will we have for SRM at Jefferson Lab?
+ Mark will talk to Sandy about the status of the system.
...
--
Dr. Sean Dobbs
Department of Physics & Astronomy
Northwestern University
phone: 847-467-2826
<br></div></div><div class="im">_______________________________________________<br>
Halld-offline mailing list<br>
<a href="mailto:Halld-offline@jlab.org">Halld-offline@jlab.org</a><br>
<a href="https://mailman.jlab.org/mailman/listinfo/halld-offline">https://mailman.jlab.org/mailman/listinfo/halld-offline</a><br></div></blockquote></div><br></div>
</div><br></div>