[Halld-physics] new Bethe Heitler event generator

Richard Jones richard.t.jones at uconn.edu
Mon Mar 19 14:00:44 EDT 2018


Marie, thank you for your message. My responses are below.

On Mon, Mar 19, 2018 at 10:44 AM, Marie Boer <mboer at jlab.org> wrote:

> - Generating over a phase space weighted by the amplitude will have the
> consequence that >99% of generated events fall outside the acceptance.
> Indeed, the BH angular distributions have sharp peaks (terms ∝ ~1/m_e^2).
> Near these limits, either the electron or the positron is emitted along the
> beam direction, and thus falls outside the acceptance, while the other
> lepton does not pass the momentum threshold of the calorimeter. The
> difficulty here is that the cross sections are over two orders of magnitude
> larger, if not more, than in other regions. It is therefore important to
> maximize the number of generated events in the regions where we can make
> the measurement and have a physics interpretation. Some solutions are to
> generate flat in theta and phi (my solution, adapted for BH or BH+TCS
> studies) or flat in cos(theta) and phi (Rafo's solution, ideal for TCS
> studies). A completely flat phase space has advantages for the
> normalization and for studies involving polarization. For unpolarized cross
> sections, log(Q'2) and log(t) can alternatively be generated to account for
> the steep decrease in these variables. But after trying that, I decided not
> to do so, and my public version is completely flat in all 5 or 6 variables.
> On his side, Rafo reweights the phase space after calculating t_min, which
> makes the generation of the t distribution a bit faster.
>
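
To make the difference between the two flat-sampling schemes concrete, here
is a minimal sketch; the toy angular distribution and the weight bookkeeping
are illustrative only, not taken from either generator:

import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

def f(theta):
    # toy angular distribution standing in for dsigma/dcos(theta)
    return 1.0 + np.cos(theta) ** 2

# Scheme A: flat in cos(theta) -- flat phase space, uniform weights.
cos_theta = rng.uniform(-1.0, 1.0, N)
w_a = 2.0 / N                                  # d(cos theta) volume per event
est_a = np.sum(f(np.arccos(cos_theta)) * w_a)

# Scheme B: flat in theta -- each event needs a sin(theta) Jacobian weight
# so that the weighted sample still integrates over d(cos theta).
theta = rng.uniform(0.0, np.pi, N)
w_b = np.sin(theta) * np.pi / N                # |dcos(theta)/dtheta| * dtheta volume per event
est_b = np.sum(f(theta) * w_b)

print(est_a, est_b)    # both approach the same integral, here 8/3
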
There is no problem imposing a selection on which events you want to
simulate; you should not feel you have to simulate them all. But the
generator is very fast compared to the simulation. There is no advantage to
throwing away part of the phase space at the generator stage, and keeping
all of it has the great advantage of providing a numerical cross-check by
reporting the total cross section. A first rule for generators is: generate
the full phase space, and then make selections at the simulation step. A
second rule is: measure the generator MC efficiency and make sure it is
close to 1 on the full phase space, not just on some selected sub-region.
Otherwise users of the generator will see problems when they try to reduce
their dependence on the built-in cuts, and find that their numerical errors
blow up.
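
As a sketch of what that looks like in practice, here is a toy accept-reject
loop over the full phase space that reports both the MC efficiency and the
total cross section; the differential cross section and envelope below are
placeholders, not the actual BH kernel:

import numpy as np

rng = np.random.default_rng(1)

def dsigma(costh, phi):
    # toy differential cross section (nb per unit cos(theta) per radian)
    return 1.0 + costh ** 2

ENVELOPE = 2.0                   # must bound dsigma over the FULL phase space
VOLUME = 2.0 * 2.0 * np.pi       # range of cos(theta) times range of phi

n_trials = 0
accepted = []
while len(accepted) < 100_000:
    costh = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    n_trials += 1
    if rng.uniform(0.0, ENVELOPE) < dsigma(costh, phi):
        accepted.append((costh, phi))          # keep the unweighted event

efficiency = len(accepted) / n_trials          # monitor this on the full phase space
sigma_total = ENVELOPE * VOLUME * efficiency   # the built-in numerical cross check
print(f"MC efficiency = {efficiency:.3f}, total cross section ~ {sigma_total:.2f} nb")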


> m_e is very small compared to the scale of the problem, which is actually
> a minor issue after cuts (and can be neglected). But even after applying
> angular cuts, the numerical precision matters in the region next to t_min
> (the majority of events). Double vs. float precision can already completely
> mess up the results next to t_min. A good solution to this problem is to
> use interpolations at the generator level.
>

I disagree with this. Using interpolations introduces another level of
approximation into the process. Why do that when you can generate everything
directly, without approximation? There is no motivation for resorting to it.
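
The double-versus-float point itself is easy to demonstrate; here is a small
illustration with arbitrary numbers, not taken from either code. Near t_min
the difference t - t_min falls close to the single-precision resolution
(about 1e-9 GeV^2 at this scale) and can even collapse to zero:

import numpy as np

t_min = -0.0123456789                     # GeV^2, stand-in kinematic limit
for delta in (1.0e-7, 1.0e-8, 3.0e-10):   # distance of the event from t_min
    t = t_min - delta
    dt64 = np.float64(t) - np.float64(t_min)
    dt32 = np.float32(t) - np.float32(t_min)
    print(f"true {-delta:.2e}   double {dt64:.2e}   float {dt32:.2e}")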

> - I am not sure I fully understand your notation, but for leading-twist
> dilepton photoproduction, the M-- amplitude is the dominant one.
>
There are 4 diagrams. Two of them look like the incident photon decaying
into two leptons, followed by rescattering of one of the two leptons; those
I call the "gamma-decay" diagrams. They are dominant, so I presume these are
what you are calling M--. The other two diagrams look like Compton
scattering with the scattered photon off-shell and converting into a lepton
pair; these I call "Compton-Dalitz". Whatever you call them, those four are
the complete set of tree-level diagrams for this process.

> - TCS diagrams should not be included when fitting BH cross sections. This
> process is in its early stages of study on the experimental side and has
> never been measured (the two diagrams we call BH have not been measured
> either, which also makes me worry about the relevance of using them for
> normalization). Including these diagrams, even suppressed, raises too many
> questions. Furthermore, GPDs have to be included in the proton vertex
> parametrization. Not including GPDs will overestimate the interference
> term and will miss part of the kinematic dependence of the cross section.
> However, including GPDs is also not a solution, due to the huge
> uncertainties and discrepancies between models, in particular for the part
> the TCS unpolarized cross section is sensitive to.
> I think that treating TCS as a systematic error in the BH estimate is the
> only option, even though I think using BH for normalization is not ideal
> either. To do that, the only way is to perform a mapping of the phase space
> accounting for the detector acceptance and calculate a kinematics-dependent
> value of BH/(BH+TCS) using a TCS+GPD model. I proposed to use 2x the
> expected TCS rate as a systematic to account for our lack of knowledge of
> GPDs and TCS (NLO, higher-twist contributions...).
>
Yes, that's right. The way I treat these two diagrams in my generator is
only half-hearted at best. When you use my generator, I suggest that you
set F1_timelike = F2_timelike = 0. That way these diagrams are suppressed,
and only the leading two gamma-decay diagrams are included. The
interference between the gamma-decay and TCS diagrams is the leading
correction. My generator, as presently written, does not accurately account
for the way proton structure affects the TCS amplitude.
Until it does, leaving the inputs F1_timelike and F2_timelike at zero is
the best way to use it. For computing BH background in GlueX, these
interference terms are not relevant.
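
Schematically, the reason zeroing those two inputs works is that the
Compton-Dalitz amplitude, and its interference with the gamma-decay
amplitude, both scale with the timelike form factors. A toy sketch, with
invented amplitude values standing in for the real ones:

def toy_cross_section(F1_timelike, F2_timelike):
    m_gamma_decay = 1.0 + 0.2j          # placeholder for the dominant BH piece
    m_compton_dalitz = (F1_timelike + F2_timelike) * (0.05 - 0.01j)   # toy scaling
    # |M_gd + M_cd|^2 = |M_gd|^2 + 2 Re(M_gd* M_cd) + |M_cd|^2 : the last two
    # terms vanish identically when both timelike form factors are zero.
    return abs(m_gamma_decay + m_compton_dalitz) ** 2

print(toy_cross_section(1.0, 1.8))    # nonzero placeholders: interference included
print(toy_cross_section(0.0, 0.0))    # gamma-decay diagrams only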

-Richard Jones