<div dir="ltr">Hi Michael and others,<div><br></div><div>We should strive to take advantage of every free allocation of computing resources. I have no experience running Geant4 on Azure (I have run them on Google Cloud and Amazon). I know Andrea Dotti, formerly one of the lead developers of Geant4 spent a summer on getting this to work in a joint program with Azure, and was not recommending us to use Azure afterwards (I forget whether due to technical reasons or data egress policy reasons)...</div><div><br></div><div>Our simulations are usually considered high throughput computation (HTC), not just (or not at all) high performance computation (HPC), since we have pretty large egress requirements (and for analysis also large ingress requirements). I have been targeting the scalable HTC resources that the lab provides through Open Science Grid (OSG), currently mainly for Hall D experiments. That allows a similar level of cores*, but indefinitely, with high bandwidth in/egress, and without in/egress costs. We are already working with Hall D to adapt their tools to remoll (in particular a web-based job submission system, MC Wrapper). For Azure we'd need to develop tools from scratch, though it may be possible to build those into MC Wrapper (which already supports multiple batch systems).</div><div><br></div><div>In any case, the remoll docker containers is what allows this to happen, both on Azure and on OSG (through singularity). Starting from those containers it should be possible to spin up Azure instances to run simulations; the challenge will be to make this scalable beyond a single node.</div><div><br></div><div>Cheers,<br></div><div>Wouter</div><div><br></div><div>* Back in 2012 GlueX did a data challenge to test the limits of OSG and got 2M core-hours in a 14 day period, equivalent to continuous use of 6k cores. That was a blip on the OSG worldwide capacity and is representative to their current baseline use, I think.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jun 10, 2019 at 12:22 AM Michael Gericke <<a href="mailto:Michael.Gericke@umanitoba.ca">Michael.Gericke@umanitoba.ca</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hello,<br>
Cheers,
Wouter

* Back in 2012 GlueX did a data challenge to test the limits of OSG and got 2M core-hours in a 14-day period, equivalent to continuous use of about 6,000 cores (2,000,000 core-hours / 336 hours). That was a blip on the worldwide OSG capacity, and I think it is representative of their current baseline use.

On Mon, Jun 10, 2019 at 12:22 AM Michael Gericke <Michael.Gericke@umanitoba.ca> wrote:

> Hello,
>
> The UofM has been given at least $20k worth of free Microsoft Azure
> high-performance computing resources to test their services, and it
> seems that only a small number of groups, including ours, are really
> interested. That means that we could have access to a large number of
> cores over a significant period of time (months). This would be useful
> if one is planning to run a very large number of simulation
> configurations, or a very high-statistics simulation. For example, we
> could have something like 72 cores over a period of 3 months or more,
> plus associated storage, without a queue or any form of scheduling.
>
> I am currently trying to figure out what we might run to take
> advantage of this, so if anyone in this group thinks this could be
> useful for a large-scale MOLLER simulation job, please let me know as
> soon as possible.
>
> If you would like more information about it, here is a link:
>
> https://azure.microsoft.com/en-ca/pricing/calculator/
>
> Cheers,
>
> Michael
>
> --
> Michael Gericke (Ph.D., Professor)
>
> Physics and Astronomy
> University of Manitoba
> 30A Sifton Road, 213 Allen Bldg.
> Winnipeg, MB R3T 2N2, Canada
>
> Tel.: 204 474 6203
> Fax: 204 474 7622

--
Wouter Deconinck (pronouns: he, him, his)
Assistant Professor of Physics, William & Mary
Office: Small Hall 343D, Phone: (757) 221-3539

Emails sent to this address are subject to requests for public review under the Virginia Freedom of Information Act.