<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
The 2.3 GeV data recon speed is x2 slower than the 1.05 GeV data recon
speed.<br>
This is concerning, but maybe there is a good reason for it, e.g.
hit occupancies might be higher, etc.<br>
<br>
But the other thing that seems puzzling is that the recon speed for 2.3 GeV
data is now more than x2 slower than it was for pass0,<br>
using the same hps-java version, same detector, and same Java
version.<br>
<br>
Rafo<br>
<br>
<br>
<div class="moz-cite-prefix">On 05/22/2017 03:10 PM, Graham, Mathew
Thomas wrote:<br>
</div>
<blockquote type="cite"
cite="mid:4AB85148-FF7F-436B-9F1D-ABC163ACD234@slac.stanford.edu">
<div class=""><br class="">
</div>
<div class="">Sorry to ask a dumb question…but I will anyway. </div>
<div class=""><br class="">
</div>
<div class="">This is a “2.3 GeV vs 1.05 GeV” issue? Or an
hps-java version issue? </div>
<br class="">
<div>
<blockquote type="cite" class="">
<div class="">On May 22, 2017, at 12:05 PM, Omar Moreno <<a
href="mailto:email@omarmoreno.net" class=""
moz-do-not-send="true">email@omarmoreno.net</a>> wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<div dir="auto" class="">Jeremy has also profiled the recon
before so if Maurik can't do it, I'm sure he can. </div>
<div class="gmail_extra"><br class="">
<div class="gmail_quote">On May 22, 2017 11:46 AM,
"Rafayel Paremuzyan" <<a
href="mailto:rafopar@jlab.org" class=""
moz-do-not-send="true">rafopar@jlab.org</a>> wrote:<br
type="attribution" class="">
<blockquote class="quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi Alessandra and all,<br class="">
<br class="">
Yes, clearly I see that pattern as well:<br class="">
going to the faster machine (pumpkin1), as Maurik
suggested,<br class="">
the recon speed increased, but the 2016 recon was still about x2
slower than the 2015 recon speed.<br class="">
I also tried running with the -Xms4000m -Xmx4000m options
and without them,<br class="">
but I didn't notice any appreciable speed improvement.<br class="">
<br class="">
Another thing that is *concerning*: I ran the<br class="">
same jar on the batch farms (CentOS 6 and CentOS 7), and<br class="">
can't get the same speed as we got during pass0.<br class="">
Note: the jar is the same, the detector is the same, and the run
number and file number are the same.<br class="">
The difference is when the job was run (Oct 19
2016 vs May 21 2017).<br class="">
<br class="">
If someone is interested in looking at the job log files:<br class="">
The log file from the job I ran yesterday:
/lustre/expphy/work/hallb/hps/data/physrun2016/tpass1/logs/hps_008054.27_pass0XML.err<br class="">
The log file from the Oct 19 2016 pass0 job:
/lustre/expphy/work/hallb/hps/data/physrun2016/pass0/logs/hps_008054.27_R3.9.err<br class="">
<br class="">
I have never done profiling with Java.<br class="">
I know Maurik is at a workshop and might not have time
to do this;<br class="">
if someone else is set up to do it, it would probably
be useful.<br class="">
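For a first look without dedicated tooling, a poor-man's sampling profiler is often enough: take periodic jstack thread dumps of the running recon process and count which stack frames show up most often. This is only a sketch, assuming a JDK with jstack on the PATH; the PID lookup and sample counts are placeholders.

```python
import collections
import re
import subprocess
import time

def top_frames(dumps, n=10):
    """Count how often each stack frame appears across a set of thread dumps."""
    counts = collections.Counter()
    for dump in dumps:
        # jstack frames look like: "    at org.hps.Foo.bar(Foo.java:42)"
        counts.update(re.findall(r"^\s+at\s+(\S+)", dump, re.MULTILINE))
    return counts.most_common(n)

def sample_jvm(pid, n_samples=50, interval=1.0):
    """Take n_samples jstack dumps of the JVM with the given pid."""
    dumps = []
    for _ in range(n_samples):
        dumps.append(subprocess.check_output(["jstack", str(pid)]).decode())
        time.sleep(interval)
    return dumps

# Usage (pid of the EvioToLcio job, e.g. found with `jps`):
#   for frame, hits in top_frames(sample_jvm(12345)):
#       print(hits, frame)
```

Frames that dominate the samples are where the time goes; comparing the top frames between a 2015 job and a 2016 job should quickly show whether tracking, clustering, or something else is eating the extra time.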
<br class="">
Also, I noticed /group/hps is full.<br class="">
<br class="">
Has anyone recently put some data there?<br class="">
<br class="">
Rafo
<div class="elided-text"><br class="">
<br class="">
<br class="">
On 05/22/2017 04:04 AM, Alessandra Filippi wrote:<br
class="">
</div>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="elided-text">Hi Maurik, Rafo and all,<br class="">
I think that a different Java VM could affect the absolute
speed when comparing different machines and jars compiled
at different times... but I am running the same jar
(hps-java 3.10, compiled afresh at SLAC on rhel6-64, JVM
1.7.0) on 2015 and 2016 data, and the factor ~2 speed
decrease for the newer data is striking (whichever
geometry version).<br class="">
As for garbage collection, I don't use any flags, so it runs
in its default mode in both cases.<br class="">
<br class="">
To your knowledge, are the 2016 data affected by more noisy
hits that could stretch the reconstruction time as all the
different strategies are tested?<br class="">
cheers<br class="">
Alessandra<br class="">
<br class="">
<br class="">
<br class="">
<br class="">
<br class="">
<br class="">
On Sun, 21 May 2017, Maurik Holtrop wrote:<br
class="">
<br class="">
</div>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="elided-text">Hello Rafo,<br class="">
One thing that probably is different between the
last time we ran with the 3.8 jar and now<br
class="">
is a different version of the Java VM. It could
well be that the newer version of Java is<br
class="">
not faster. Also, it is tricky to compare
Endeavour with the Jlab farm computers. They are<br
class="">
probably not equivalent in speed. At UNH,
Pumpkin has the more modern processors, whereas<br
class="">
Endeavour is now ~5 years old.<br class="">
<br class="">
Best,<br class="">
Maurik<br class="">
<br class="">
On May 21, 2017, at 6:54 AM, Rafayel
Paremuzyan <<a href="mailto:rafopar@jlab.org"
target="_blank" class=""
moz-do-not-send="true">rafopar@jlab.org</a>>
wrote:<br class="">
<br class="">
Hi Alessandra, Norman, all<br class="">
<br class="">
thank you for the reply and your tests.<br class="">
<br class="">
I tested both 2015 and 2016 data using the v4-4
detector on UNH computers.<br class="">
I used the 3.8 JAR (the jar for the 2015 pass6), the 3.9
JAR (the jar for the 2016 pass0 recon),<br class="">
and the new jar v051717 (the newest jar tag is
v051717).<br class="">
<br class="">
OK, I also noticed that the recon of 2015 data is
faster than that of 2016 data.<br class="">
It also seems the new jar is 20% slower than the
3.9 jar for 2016 data, and about 60%<br class="">
slower for 2015 data.<br class="">
The recon speed is now about 2.55 ev/s for 2015
data. This is too slow:<br class="">
it takes more than 40 h to process a single file.<br class="">
<br class="">
This is a summary of the code speed with different jar
files.<br class="">
V4-4 detector, UNH (endeavour), 5K events reconstructed;
all numbers are events per second:<br class="">
<table border="1" cellpadding="4">
<tr><th></th><th>3.8 JAR (2015 recon jar)</th><th>3.9 JAR (2016 pass0 recon jar)</th><th>v051717 JAR (jar for tpass1)</th></tr>
<tr><td>2015 data, run 5772, file 20</td><td>5.07</td><td>5.19</td><td>3.157</td></tr>
<tr><td>2016 data, file 25</td><td></td><td>3.11</td><td>2.53</td></tr>
</table>
<br class="">
*However*, I looked into the job wall times for the pass0
recon.<br class="">
The recon speed there is more than 7.4 events/sec,
which is about x3 faster than with the new JAR.<br class="">
<br class="">
I again checked the *same 3.9 jar*, and it is slower
again.<br class="">
I don't know why the code speed is so low now!<br class="">
<br class="">
<br class="">
Norman, I tried the
"-DdisableSvtAlignmentConstants" option,
but it didn't work.<br class="">
<br class="">
=================The command===============<br class="">
java -XX:+UseSerialGC -cp hps-distribution-3.11-v051717-bin.jar
org.hps.evio.EvioToLcio -x /org/hps/steering/recon/PhysicsRun2016FullRecon.lcsim
-r -d HPS-PhysicsRun2016-v5-3-fieldmap_globalAlign -R 7796
-DoutputFile=out_7796_0 -DdisableSvtAlignmentConstants
hps_007796.evio.25 -n 10000<br class="">
<br class="">
============The error backtrace============<br class="">
2017-05-21 00:45:39 [CONFIG] org.hps.evio.EvioToLcio parse :: using steering
resource /org/hps/steering/recon/PhysicsRun2016FullRecon.lcsim<br class="">
2017-05-21 00:45:39 [CONFIG] org.hps.evio.EvioToLcio parse :: set max events to
10000<br class="">
2017-05-21 00:45:48 [INFO] org.hps.rundb.RunManager <init> ::
ConnectionParameters { database: hps_run_db_v2, hostname: <a
href="http://hpsdb.jlab.org/" rel="noreferrer" target="_blank" class=""
moz-do-not-send="true">hpsdb.jlab.org</a>, password: darkphoton, port: 3306,
user: hpsuser }<br class="">
2017-05-21 00:45:48 [CONFIG] org.lcsim.job.JobControlManager
addVariableDefinition :: outputFile = out_7796_0<br class="">
2017-05-21 00:45:48 [CONFIG] org.hps.evio.EvioToLcio parse :: set steering
variable: outputFile=out_7796_0<br class="">
2017-05-21 00:45:48 [SEVERE] org.hps.evio.EvioToLcio parse :: bad variable
format: disableSvtAlignmentConstants<br class="">
java.lang.IllegalArgumentException: Bad variable format:
disableSvtAlignmentConstants<br class="">
    at org.hps.evio.EvioToLcio.parse(EvioToLcio.java:393)<br class="">
    at org.hps.evio.EvioToLcio.main(EvioToLcio.java:97)<br class="">
<br class="">
Exception in thread "main" java.lang.IllegalArgumentException: Bad variable
format: disableSvtAlignmentConstants<br class="">
    at org.hps.evio.EvioToLcio.parse(EvioToLcio.java:393)<br class="">
    at org.hps.evio.EvioToLcio.main(EvioToLcio.java:97)<br class="">
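The "Bad variable format" error suggests that EvioToLcio.parse requires every -D definition to be in name=value form, so a bare -DdisableSvtAlignmentConstants has no '=' to split on. Below is a minimal sketch of that style of parsing (hypothetical, not the actual hps-java code); if this reading is right, -DdisableSvtAlignmentConstants=true may be the expected spelling, which is worth confirming against the EvioToLcio source.

```python
def parse_define(arg):
    """Parse a -D steering definition of the form name=value."""
    name, sep, value = arg.partition("=")
    if not sep or not name:
        raise ValueError("Bad variable format: " + arg)
    return name, value

# Accepted, as in the working part of the command line:
print(parse_define("outputFile=out_7796_0"))  # ('outputFile', 'out_7796_0')

# Rejected, matching the SEVERE line in the log:
try:
    parse_define("disableSvtAlignmentConstants")
except ValueError as err:
    print(err)  # Bad variable format: disableSvtAlignmentConstants
```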
<br class="">
Rafo<br class="">
<br class="">
<br class="">
<br class="">
<br class="">
<br class="">
<br class="">
On 05/20/2017 06:17 AM, Alessandra Filippi
wrote:<br class="">
Hi Rafo, all,<br class="">
I also noticed that the reconstruction of 2016 data is about twice as
slow as that of 2015 (whichever geometry and reconstruction
version).<br class="">
This happens when I run the aligned geometry as well as the "current"
one (v5.0), and also the geometry taken from the db (the result is
the same as v5.0). I did not run any test with v4.4, though; actually,
as far as SVT alignment is concerned it should be the same as v5.0.
Can you please try the same short test with the newest jar and
v4.4?<br class="">
This happens to me with both hps-java 3.10 and 3.11 (not the most
updated one).<br class="">
<br class="">
I would be surprised if it were something connected to the
alignment, unless for some reason the new positions and harder tracks
trigger some long loops in the reconstruction. But this happens (to me)
also with the standard geometry, so a check with the geometry
used for pass0 (which should, however, be equivalent to v5.0) could at
least help to rule out, or pin the blame on, the alignment step.<br class="">
Thanks, cheers<br class="">
Alessandra<br class="">
<br class="">
<br class="">
PS: also make sure that the correct fieldmap is referenced in all the
compact files - you never know!<br class="">
<br class="">
<br class="">
<br class="">
On Fri, 19 May 2017, Rafayel Paremuzyan
wrote:<br class="">
<br class="">
Hi All,<br class="">
<br class="">
While testing the recon for test pass1,<br class="">
I noticed the recon time is more than x2 longer than the
pass0 recon time.<br class="">
<br class="">
To demonstrate it,<br class="">
I submitted 3 simple jobs with 10K events to reconstruct,
with the new pass1 xml file (this has the new jar v051717
and the new detector
HPS-PhysicsRun2016-v5-3-fieldmap_globalAlign),<br class="">
and the old pass0 xml file (pass0 jar release 3.9 and the
detector HPS-PhysicsRun2016-Nominal-v4-4-fieldmap).<br class="">
<br class="">
Below is a printout from the job with the new JAR, v051717;
the average time per 1000 events is more than 7 minutes.<br class="">
===================== LOG from the v051717 JAR ==============================<br class="">
2017-05-19 09:36:51 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10614074 with sequence 0<br class="">
2017-05-19 09:43:13 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10615074 with sequence 1000<br class="">
2017-05-19 09:49:18 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10616074 with sequence 2000<br class="">
2017-05-19 09:55:54 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10617074 with sequence 3000<br class="">
2017-05-19 10:02:55 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10618074 with sequence 4000<br class="">
2017-05-19 10:09:57 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10619074 with sequence 5000<br class="">
2017-05-19 10:16:13 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10620074 with sequence 6000<br class="">
2017-05-19 10:25:20 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10621074 with sequence 7000<br class="">
2017-05-19 10:32:56 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10622074 with sequence 8000<br class="">
2017-05-19 10:36:19 [WARNING] org.hps.recon.tracking.TrackerReconDriver process :: Discarding track with bad HelicalTrackHit (correction distance 0.000000, chisq penalty 0.000000)<br class="">
2017-05-19 10:42:03 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10623074 with sequence 9000<br class="">
2017-05-19 10:47:44 [INFO] org.hps.evio.EvioToLcio run :: maxEvents 10000 was reached<br class="">
2017-05-19 10:47:44 [INFO] org.lcsim.job.EventMarkerDriver endOfData :: 10000 events processed in job.<br class="">
2017-05-19 10:47:44 [INFO] org.hps.evio.EvioToLcio run :: Job finished successfully!<br class="">
<br class="">
<br class="">
And below is the job log from the pass0 jar; the average time
per 1000 events is less than 3 minutes.<br class="">
===================== LOG from the 3.9 release JAR ==============================<br class="">
2017-05-19 13:19:46 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10614074 with sequence 0<br class="">
2017-05-19 13:23:36 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10615074 with sequence 1000<br class="">
2017-05-19 13:27:03 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10616074 with sequence 2000<br class="">
2017-05-19 13:30:40 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10617074 with sequence 3000<br class="">
2017-05-19 13:34:20 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10618074 with sequence 4000<br class="">
2017-05-19 13:38:11 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10619074 with sequence 5000<br class="">
2017-05-19 13:41:43 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10620074 with sequence 6000<br class="">
2017-05-19 13:45:54 [WARNING] org.hps.recon.tracking.TrackerReconDriver process :: Discarding track with bad HelicalTrackHit (correction distance 0.000000, chisq penalty 0.000000)<br class="">
2017-05-19 13:46:05 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10621074 with sequence 7000<br class="">
2017-05-19 13:50:08 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10622074 with sequence 8000<br class="">
2017-05-19 13:55:03 [INFO] org.lcsim.job.EventMarkerDriver process :: Event 10623074 with sequence 9000<br class="">
2017-05-19 13:58:27 [INFO] org.hps.evio.EvioToLcio run :: maxEvents 10000 was reached<br class="">
2017-05-19 13:58:27 [INFO] org.lcsim.job.EventMarkerDriver endOfData :: 10000 events processed in job.<br class="">
2017-05-19 13:58:27 [INFO] org.hps.evio.EvioToLcio run :: Job finished successfully!<br class="">
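The per-1000-event times quoted for the two jobs can be cross-checked straight from the EventMarkerDriver timestamps. A small sketch, assuming each marker is on a single line of the form "2017-05-19 09:36:51 [INFO] ... Event 10614074 with sequence 0" (the file name in the usage comment is an example):

```python
import re
from datetime import datetime

def events_per_second(log_text):
    """Average event rate between the first and last EventMarkerDriver markers."""
    markers = re.findall(
        r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}).*sequence (\d+)",
        log_text, re.MULTILINE)
    (t0, s0), (t1, s1) = markers[0], markers[-1]
    fmt = "%Y-%m-%d %H:%M:%S"
    elapsed = (datetime.strptime(t1, fmt) - datetime.strptime(t0, fmt)).total_seconds()
    return (int(s1) - int(s0)) / elapsed

# Usage, e.g.:
#   with open("hps_008054.27_pass0XML.err") as f:
#       print(events_per_second(f.read()))
```

Running this over the old and new job logs gives the events/sec numbers directly, without reading the wall times off by hand.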
<br class="">
I also tried to run the reconstruction myself
interactively, but I get the error below.<br class="">
<br class="">
The command:<br class="">
/apps/scicomp/java/jdk1.7/bin/java -XX:+UseSerialGC -cp
hps-distribution-3.9-bin.jar org.hps.evio.EvioToLcio -x
/org/hps/steering/recon/PhysicsRun2016FullRecon.lcsim -r -d
HPS-PhysicsRun2016-v5-3-fieldmap_globalAlign -R 7796
-DoutputFile=out_7796_0 hps_007796.evio.0 -n 10000<br class="">
<br class="">
The error traceback:<br class="">
2017-05-19 14:58:44 [CONFIG] org.hps.evio.EvioToLcio parse :: using steering
resource /org/hps/steering/recon/PhysicsRun2016FullRecon.lcsim<br class="">
2017-05-19 14:58:44 [CONFIG] org.hps.evio.EvioToLcio parse :: set max events
to 10000<br class="">
2017-05-19 14:58:45 [CONFIG] org.lcsim.job.JobControlManager
addVariableDefinition :: outputFile = out_7796_0<br class="">
2017-05-19 14:58:45 [CONFIG] org.hps.evio.EvioToLcio parse :: set steering
variable: outputFile=out_7796_0<br class="">
2017-05-19 14:58:45 [CONFIG] org.lcsim.job.JobControlManager initializeLoop
:: initializing LCSim loop<br class="">
2017-05-19 14:58:45 [CONFIG] org.lcsim.job.JobControlManager initializeLoop
:: Event marker printing disabled.<br class="">
2017-05-19 14:58:45 [INFO] org.hps.conditions.database.DatabaseConditionsManager
resetInstance :: DatabaseConditionsManager instance is reset<br class="">
Exception in thread "main" java.lang.UnsatisfiedLinkError:
/u/apps/scicomp/java/jdk1.7.0_75/jre/lib/i386/xawt/libmawt.so: libXext.so.6:
cannot open shared object file: No such file or directory<br class="">
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)<br class="">
    at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1965)<br class="">
    at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1890)<br class="">
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1851)<br class="">
    at java.lang.Runtime.load0(Runtime.java:795)<br class="">
    at java.lang.System.load(System.java:1062)<br class="">
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)<br class="">
    at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1965)<br class="">
    at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1890)<br class="">
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1872)<br class="">
    at java.lang.Runtime.loadLibrary0(Runtime.java:849)<br class="">
    at java.lang.System.loadLibrary(System.java:1088)<br class="">
    at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:67)<br class="">
    at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:47)<br class="">
    at java.security.AccessController.doPrivileged(Native Method)<br class="">
    at java.awt.Toolkit.loadLibraries(Toolkit.java:1653)<br class="">
    at java.awt.Toolkit.<clinit>(Toolkit.java:1682)<br class="">
    at java.awt.Component.<clinit>(Component.java:595)<br class="">
    at org.lcsim.util.aida.AIDA.<init>(AIDA.java:68)<br class="">
    at org.lcsim.util.aida.AIDA.defaultInstance(AIDA.java:53)<br class="">
    at org.hps.evio.RfFitterDriver.<init>(RfFitterDriver.java:31)<br class="">
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)<br class="">
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)<br class="">
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)<br class="">
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)<br class="">
    at java.lang.Class.newInstance(Class.java:379)<br class="">
    at org.lcsim.job.JobControlManager.setupDrivers(JobControlManager.java:1199)<br class="">
    at org.hps.job.JobManager.setupDrivers(JobManager.java:82)<br class="">
    at org.lcsim.job.JobControlManager.setup(JobControlManager.java:1052)<br class="">
    at org.lcsim.job.JobControlManager.setup(JobControlManager.java:1110)<br class="">
    at org.hps.evio.EvioToLcio.parse(EvioToLcio.java:407)<br class="">
    at org.hps.evio.EvioToLcio.main(EvioToLcio.java:97)<br class="">
<br class="">
<br class="">
<br class="">
I see the library libXext.so.6 in /usr/lib64, but not in
/usr/lib;<br class="">
when I put /usr/lib64 in my LD_LIBRARY_PATH, it
complains again (see below):<br class="">
<br class="">
Exception in thread "main" java.lang.UnsatisfiedLinkError:
/u/apps/scicomp/java/jdk1.7.0_75/jre/lib/i386/xawt/libmawt.so:
libXext.so.6: wrong ELF class: ELFCLASS64<br class="">
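Both UnsatisfiedLinkErrors point the same way: the JDK path contains i386, i.e. a 32-bit JVM, and a 32-bit process cannot load the 64-bit libXext.so.6 from /usr/lib64 (hence "wrong ELF class: ELFCLASS64"); a 64-bit JDK, or 32-bit X libraries, would likely be needed. A library's ELF class can be checked from the fifth byte of its header; a sketch (the path in the comment is an example):

```python
def elf_class(path):
    """Return 32 or 64 according to the ELF class byte (offset 4) of a binary."""
    with open(path, "rb") as f:
        header = f.read(5)
    if header[:4] != b"\x7fELF":
        raise ValueError(path + " is not an ELF file")
    return {1: 32, 2: 64}[header[4]]

# e.g. elf_class("/usr/lib64/libXext.so.6") on a 64-bit system reports 64
```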
<br class="">
I would appreciate some help with running the
reconstruction interactively;<br class="">
then I could look more closely into the logs
of the old and new JAR files.<br class="">
<br class="">
Rafo<br class="">
<br class="">
<br class="">
______________________________________________________________________________<br class="">
<br class="">
Use REPLY-ALL to reply to list<br class="">
<br class="">
To unsubscribe from the HPS-SOFTWARE list, click the
following link:<br class="">
<a href="https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=HPS-SOFTWARE&A=1"
rel="noreferrer" target="_blank" class=""
moz-do-not-send="true">https://listserv.slac.stanford.edu/cgi-bin/wa?SUBED1=HPS-SOFTWARE&A=1</a><br class="">
<br class="">
<br class="">
<br class="">
</div>
</blockquote>
<div class="quoted-text"><br class="">
</div>
<br class="">
</blockquote>
<br class="">
<br class="">
_______________________________________________<br class="">
Hps-analysis mailing list<br class="">
<a href="mailto:Hps-analysis@jlab.org" target="_blank"
class="" moz-do-not-send="true">Hps-analysis@jlab.org</a><br class="">
<a href="https://mailman.jlab.org/mailman/listinfo/hps-analysis"
rel="noreferrer" target="_blank" class=""
moz-do-not-send="true">https://mailman.jlab.org/mailman/listinfo/hps-analysis</a><br class="">
</blockquote>
</div>
<br class="">
</div>
</div>
</blockquote>
</div>
<br class="">
<br>
</blockquote>
<br>
</body>
</html>