[Halld-offline] Recreating tracks

Matthew Shepherd mashephe at indiana.edu
Thu Aug 7 14:51:09 EDT 2014


Hi Paul,

I'm cc'ing this one to the Hall D offline list since
I suspect it has implications for other software
systems like tracking.

As you know, Ryan and I have been working both to
learn your analysis framework and to cross-check it
against some other high-level analysis code of Ryan's
that we used at BES and CLEO to reconstruct many
reactions.

We are still at the stage of trying to get consistent
results for what we think are the same cuts running
over the same events.

So far the biggest issue we see (I think) is related to
the recreation of track hypotheses for tracks that did
not have a given hypothesis when the initial tracking
was done.  For example, a track candidate has a proton
fit but not a pion fit, so you have a special factory
that reconstructs the pion fit from the proton fit (not
from the hits) so it can be used in analysis.
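
To make sure we're describing the same thing, here is a
minimal, self-contained sketch of the pattern as I
understand it -- the struct and function names below are
mine, not the actual factory's:

  #include <cmath>

  struct TrackHypothesis {
    double px, py, pz;   // fitted momentum at the vertex (GeV/c)
    double mass;         // assumed mass for this hypothesis (GeV/c^2)
    double energy;       // sqrt(p^2 + m^2)
  };

  // Build a new hypothesis by reusing the fitted kinematics of an
  // existing one and swapping the mass -- the hit-level fit is not
  // redone.
  TrackHypothesis cloneWithMass(const TrackHypothesis& src, double newMass) {
    TrackHypothesis out = src;
    out.mass = newMass;
    double p2 = src.px*src.px + src.py*src.py + src.pz*src.pz;
    out.energy = std::sqrt(p2 + newMass*newMass);
    return out;
  }

  int main() {
    // e.g. a ~1.3 GeV/c track that only received a proton fit
    TrackHypothesis proton{0.4, 0.1, 1.2, 0.938272, 0.0};
    proton.energy = cloneWithMass(proton, proton.mass).energy;
    TrackHypothesis piPlus = cloneWithMass(proton, 0.139570);
    (void)piPlus;
    return 0;
  }

(I realize your factory also re-swims the trajectory with
the new mass; I come back to the cost of that below.)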

It seems this makes a notable difference for some
topologies.  For example, for gamma p -> 3(pi+ pi-) p,
my signal efficiency using your code is roughly 10x
higher than what is obtained with the "stock" tracks
provided by the reconstruction framework.

There are several problems/issues with this:

* Tracks are something that should be provided to
all users.  As is, it is very hard for a user to get these
recreated tracks, which, based on the efficiency gain,
seem to be "real" tracks.  We've tried to cook up a
factory that creates 5 DReactions, each of which has a
single track, to trigger your factory to create the
track hypotheses so that Ryan can then extract them,
but this is non-trivial and we still don't understand
the results (see question below).  A rough sketch of
what we tried is below, after this list.

* This is slooow to do at analysis time.  Your
algorithm involves reswimming, which is really slow.
We discussed some speed issues last week.
I thought a lot of the slowness was overhead in the
analysis classes, but I'm not sure.  When Ryan
started using the kludge to get the recreated tracks,
his code slowed down by 5-10x.  We shouldn't
have to redo tracking at analysis time... and any
redoing we do there isn't nearly as good as what we
could have done with the original hits.
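
To be concrete about the first point, here is roughly
what the kludge factory does -- written from memory, so
please treat the DReaction method names and header paths
below as approximate rather than a verbatim copy of
Ryan's code:

  #include <vector>
  #include "ANALYSIS/DReaction.h"
  #include "ANALYSIS/DReactionStep.h"

  // One single-track DReaction per PID, so that requesting the
  // analysis results forces the framework to build the missing
  // track hypotheses, which Ryan's code can then pick up.
  void CreateSingleTrackReactions(std::vector<DReaction*>& locReactions)
  {
    const Particle_t locPIDs[5] = {PiPlus, PiMinus, KPlus, KMinus, Proton};
    for(size_t loc_i = 0; loc_i < 5; ++loc_i)
    {
      DReaction* locReaction = new DReaction("SingleTrack");  // unique name per PID in practice
      DReactionStep* locStep = new DReactionStep();
      locStep->Set_InitialParticleID(Gamma);         // beam photon
      locStep->Set_TargetParticleID(Proton);         // target
      locStep->Add_FinalParticleID(locPIDs[loc_i]);  // the one track we care about
      locReaction->Add_ReactionStep(locStep);
      locReactions.push_back(locReaction);
    }
  }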

A question:  how is the FOM for newly created
track hypotheses determined?  We see a pattern
where some recreated tracks have an FOM of zero.
These tracks are getting cut by Ryan's tracking
FOM cut.  However, they don't seem to be cut
by my specified FOM cut.  Does the analysis system
ignore the FOM cut for tracks that it recreates
from another hypothesis?

It seems to me that we're trying to solve a tracking
issue at the analysis stage, and it is consuming a
lot of analysis-time CPU resources and creating
confusion for users.

It seems we have a solution -- can we move it from
the analysis libraries into the core tracking software?

Matt

---------------------------------------------------------------------
Matthew Shepherd, Associate Professor
Department of Physics, Indiana University, Swain West 265
727 East Third Street, Bloomington, IN 47405

Office Phone:  +1 812 856 5808


