[Frost] Response to comments on eta analysis note

Michael Dugger dugger at jlab.org
Wed May 9 20:23:43 EDT 2012



Hi,

I thank Volker, Patrick, and Franz for their thoughtful comments and 
suggestions regarding the eta analysis note.

There was at least one comment from Franz that I did not fully understand 
and will need clarification on.

I have attached my responses to the comments to this email:

Response to Volker -> volker.txt
Response to Patrick -> patrick.txt
Response to Franz -> franz.txt

Please let me know if I did not adequately address any comment or if I 
wrote anything that was flat-out stupid.

I have a revised version of the analysis note at
http://wwwold.jlab.org/Hall-B/secure/g9/ASU/etaEobsAnaFROSTv2.pdf

Thanks for your time.

Sincerely,
Michael
-------------- next part --------------
-----------------------
Comment->

1) When I read over section III, I was somewhat puzzled why you used 
different yield-extraction techniques for the numerator and the 
denominator terms. Is it that the more complicated technique is
not worth the effort for the numerator since the contribution from 
carbon cancels out anyway? Perhaps a sentence on explaining this 
briefly as an introduction is worth the effort.

Reply:
Added "For the numerator, the bound nucleon contribution canceled 
out and the resulting background was small and constant."

-----------------------
Comment->
2) The section "Overview of Technique" appears twice.

Reply:
Fixed.

-----------------------
Comment->
3) M???lar  -->  Moeller, I guess

Reply:
Fixed.

-----------------------
Comment->
4) The note says that a momentum correction based on kinematic 
fitting was applied. Is this what Sungkyun did or did you apply your 
own kinematic fitter? What I am asking is: Are we using the same
correction at ASU and FSU? If this is indeed Sungkyun's analysis, 
perhaps you could briefly mention it.

Reply:
We were not careful enough about the kinematic fitting discussion.
We appreciate Sungkyun's hard work and apologize for not giving him 
appropriate credit. The document has been modified and now includes
"Additional momentum corrections were supplied by Sungkyun Park of
Florida State University using the FSU kinematic fitting routine."


-----------------------
Comment->
5) For the scaling factor, could you show some examples of how the 
mass spectra look like (for butanol and carbon)? In particular, the 
unphysical (negative) mass regions that you used in the determination
of the scaling factors. Does the shape look similar for the two 
targets? I am sure that reviewers will ask for it.

Reply:
The scale factors were designed so that the shapes of the mass spectra
in the unphysical region could be different. In fact, the whole 
reason we went with phase-space dependent scale factors was that the 
shapes of the butanol and carbon mass distributions (in the 
unphysical region) were not identical for the single proton 
reactions we study.
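Schematically, a phase-space-dependent scale factor just rescales the carbon yield to match the butanol bound-nucleon yield in the unphysical region, then subtracts in the physical region. A minimal sketch with invented yields (not g9a numbers):

```python
# Schematic bound-background subtraction with an unphysical-region scale
# factor; all yields below are invented for illustration.
but_unphys, car_unphys = 5200.0, 2600.0   # yields with m_X^2 <= -0.4 GeV^2
but_phys, car_phys = 30000.0, 9000.0      # yields in the physical region

s = but_unphys / car_unphys               # scale factor for this phase-space bin
free_yield = but_phys - s * car_phys      # estimated free-proton yield
print(s, free_yield)
```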

-----------------------
Comment->
6) In section H, I did not really understand how you determine the 
"leakage within MVRT". What exactly are you fitting and how do you 
determine L_(2,MVRT)? 

Reply:
Changed to read:
"The leakage within \textsf{MVRT} ($L_{2,MVRT}}$) is the estimated 
number of events that have a MVRT vertex within the carbon target 
region that belong to the butanol events, and is is then brought 
into the original leakage equation as..."

Figure 8 shows sample fits to the MVRT vertex within the carbon 
target region. The fit is a Gaussian plus an exponential. We assume
the Gaussian represents the carbon and the exponential represents
the butanol events that have a vertex within the carbon target region.

-----------------------
Comment->
7) I suggest to briefly introduce and summarize the two methods when 
it comes to "constructing E" in section I and J. Perhaps a few 
sentences on how these methods are different and what goes into them
helps the reader who is not familiar with this. I have to admit 
(perhaps I missed it in one of our FROST meetings), I have never 
heard about the "super-ratio method" before.

Reply:

Each person of the ASU group came up with a different way to calculate
the E observable. We then checked that each method was equivalent.
Brian's method is the one we used in the analysis. My method was
very similar to Brian's (simple differences in how the scale factors
were defined and how the target regions were dealt with). Barry came
up with the super ratio method. Since the super ratio method was the
least similar to Brian's method, we decided that it was good to show
that the two methods were mathematically consistent. 

I have now made a subsection titled "Constructing E" and have placed
the individual methods as subsubsections.

Within the "Construction E" subsection, I have
"
In the following subsections we construct the $E$ observable using 
two different methods and show that the two methods are mathematically
equivalent. We do this to check the consistency of our calculations. 
We start by constructing $E$ using a method we call the 
``scale factor method'' and then show a construction of $E$ using 
another method we have named the ``super ratio method''. After the 
two methods have been introduced we show that they are mathematically 
consistent.

In the final analysis we limit ourselves to the scale factor method.
"

-----------------------
Comment->
8) On page 17, I would not use the word "photon flux". This refers 
usually to the real photon flux measured with the tagging system. 
Here it is more like a consistency check that the two helicity
states have equal sampling sizes.

Reply:
Changed "photon flux" to "relative photon flux"

-----------------------
-------------- next part --------------
-----------------------
Comment->
In general:
1) please add a date and/or version to the note (I refer to the note 
on Hall-B/secure/g9/ASU from 4/6 18:21).

Reply:
We have now included the date.

-----------------------
Comment->
2) several paragraphs are doubled: (section III = section IV), 2 
paragraphs on p.16=same on p.17

Reply:
The redundant "Overview of technique" section has been removed.

-----------------------
Comment->
3) numbering of section,subsection is confusing (if you use Roman 
numbers for sections, then don't use capital letter for subsections 
(like V I) but standard numbers)

Reply:
The number/letter scheme is the standard one given by revtex4 when
using the \section and \subsection commands.

-----------------------
Comment->
4) there are several references to chapters in the PhD thesis without
stating that this is a different text than this note (e.g. p.3: "As 
discussed in section 3.8")

Reply:
We hope that all references to the missing chapters have been removed.

-----------------------
Comment->
5) use a consistent set of terms: e.g. scale factor (not sometimes 
"scale factor", sometimes "scaling factor")

Reply:
We have replaced "scaling factor" with "scale factor" throughout the 
document.

-----------------------
Comment->
6) in many cases the provided information is insufficient and too 
confusing to evaluate the analysis (see below)

II. Data analysis:
the section is more like a 2nd introduction, information is spurious 
except the least important aspect of g9a: how to calculate the 
helicity transfer (which is NOT the "polarization value of the 
photon"!): Olsen's & Maximon's  equation shows up again on p.18 
where it belongs.

Crucial information missing in this 2nd introduction: the butanol 
target was longitudinally polarized, the CLAS detector was used to 
detect final-state particles.

Reply:
We have removed the section.

-----------------------
Comment->
III. Overview of technique
In the equation for E: H_1/2 and H_3/2 are not defined here.
The MC background simulation is mentioned but nowhere described in 
more detail. A few questions: how is the vertex distributed? Is gpp 
used? If so, which scale factors for DC sigma and SC? The GPP map has
only actual g9a values for DC wire status (for g8b it was necessary 
to have DC "wire efficiencies" in addition, I don't know about g9a).

Reply:
Added "...where $H_{1/2}$ ($H_{3/2}$) is the cross section for a 
state of initial total helicity 1/2 (3/2) in the direction of the 
incident photon."

Franz, we set up the MC as you suggested on the "Running GSIM for g9"
FROST wiki page at
http://clasweb.jlab.org/rungroups/g9/wiki/index.php/Running_GSIM_for_g9

The gpp flags given on the wiki and used in our MC: 
-a1.0 -b1.0 -c1.0 -f1.0 
We did not perform a study for how well these smearing parameters
mimic actual data. For g1c data we did a detailed study to 
optimize the smearing parameters, but feel that such a study for
g9a would be difficult to do. 

The vertex was distributed uniformly along the beam direction within
the butanol target region and centered in the x,y plane.

-----------------------
Comment->
V. Details of the technique
item 2: identify the protons ....: aren't you looking for events with 
single protons? and what is meant by "determine the momentum and 
angle"? ... the is already done before you start your analysis.

Reply:
I do not understand what the problem is.

When we talk about a single event, we say proton. If we are talking
about more than one event, we say protons. It is currently written:
"Identify the protons within the data...". I think it is clear
that we are talking about more than one event.

We determine the momentum and angle of a proton similar to how we
identify a proton: We ask GPID. 

-----------------------
Comment->
A. The running period
Please add run ranges and number of triggers to table 1.
Last sentence: describe the trigger conditions! (don't 
refer to non-existing sections)

Reply:
Added run ranges and number of triggers to table 1.

Removed the mention of "two event triggers".

Removed mention of non-existing sections.


-----------------------
Comment->
B. Valid runs
were "special" and "junk" runs (shift runlist) the only reasons to 
remove runs from the list? what about skipping all runs with only 1 
BOS file? Besides, amorphous runs were not "special" runs but are 
used in coh.brems. data analyses. The guy from the elastic (e- + e-) 
scattering was M{\o}ller (Moeller)

Reply:
We did not make a restriction as to the number of BOS files within
a run.

Removed mention of amorphous runs since those runs were not part
of the circular set and need not be discussed.

Fixed the spelling of M{\o}ller.

-----------------------
Comment->
C. Particle and event identification
There is basic misunderstanding of the reconstruction (or poor 
formulation?!):
GPID is called at the end of event reconstruction, time-based 
tracking occurs earlier (look into $CLAS_PACK/bankdefs/tbtr.ddl: 
TBTR=Time-Based Tracking Result bank ... actually only part of the
result).
GPID requires vertex information (...) from the start counter 
(how that?) along with momentum, scattering angle, charge (from the 
drift chambers!), and timing information from the time-of-flight
subsystems (what are these subsystems? ST+SC (+TAG). "vertex 
information" in this context is NOT "where the particle originated 
following the reaction" but 2 track fit parameters = intersection of 
the extrapolated track with the beamline plane (=y-z plane in sector 
coordinates: plane along nominal beamline perpendicular to the sector)
. Therefore TBTR "vertex" is an artifact if the beam is not exactly 
at (x=0,y=0): it could easily be changed in the tracking geometry 
definition IF we knew at which (x,y) coordinate the photon really 
gets into the target (the photon beam has a width of ~1cm diameter 
for 2.6mm collimator).

Reply:
Yes: this is a bit of a mess. GPID does not call TBTR or use the 
TBTR bank directly, except to make sure that it exists. GPID does
explicitly use the TBID bank. Also the vertex is not used in 
determining beta. Beta is found using the timing difference between 
the start counter and TOF, along with the track length (created with
information from the TBID and STR banks). The vertex position 
(explicitly taken from the PART bank which is copied directly from
TBTR) is used in helping determine the best timed photon. 

To fix this I have replaced the PID discussion with a slightly 
modified version taken from the g8b pion analysis note.

Remark: the MVRT bank contains the result from a vertex fit of all 
tracks in the event, if that doesn't work it throws out the track 
with largest effect on the chi^2 (could be an uncorrelated background
track). Anyway, if fitting the combined 'vertex' of more than 2 
tracks, the result is quite close to the position "where the 
particles originated". If there is only 1 track, the closest distance
to the "beam" is computed ("beam" position is given by offsets put 
into caldb, what we typically do after we got the centroid from the 
multi-track vertex fit (MVRT)). The difference between TBTR "vertex"
and MVRT "vertex" for single track events comes from the different 
beam position (except if the beam is centered at the nominal 
position) ... see your discussion in section H.

-----------------------
Comment->
Fig.1 needs units and explanation for \rho (which you define as 
momentum, but usually used for length, e.g. radius in cylindrical 
coord.; the figure should be limited to momentum<2GeV. Your cut 
|beta-beta_m| < 0.08 makes the pion and proton range small at low 
momentum where energy loss contributes a lot (and you may lose 
tracks) ... the cut should be particle&momentum dependent (see fig.2:
the entries on the neg. side are often due to energy loss of slow 
particles).

Reply:
I agree that \rho is not the most standard way to denote momentum
but I do not want to change every instance of \rho used in the 
document. 

Changed plots so that momentum < 2 GeV.

We are not concerned with losing a few events by neglecting the
momentum dependence. Also, please note that the analysis only uses
pions in determining the scale and leak factors. It is not critical
for this analysis to recover as many pions as possible.

-----------------------
Comment->
I am not sure whether fig.1 "clearly shows that GPID is capable of 
correctly determining charged particles": it shows that GPID is a 
good starting point but you have to do additional cuts. I would say 
that GPID is doing a quite good job (not optimal) since it only 
compares some given timing and momentum information and cuts 
everything out, which does not fit, without correcting or adjusting 
any information.

Reply:
Removed the statement. The reader can look at the plot and form 
their own opinion. There is no need to oversell GPID. 

-----------------------
Comment->
On p.4 the timing cut is +/- 1nsec between track and photon ... did 
you study whether this is too tight? i.e. whether you loose good 
candidates?
Another question in this context: if there are 2 tagged photons within
the time window, you could check both and keep the one for which the 
missing mass falls closer to the eta range.

Reply:
There is no absolute timing window for a single photon. After the
best timed photon is found, a timing window of +/- 1 ns is formed
around the best timed photon. If there is a second photon within
+/- 1 ns of the best timed photon, then we say that the photon that
originated the event is ambiguous and throw out the event. When we
do cross section measurements we correct the incident flux to take
into account the events that have been lost due to ambiguous photon
determination. In my opinion, it is better to throw out a small 
number of multi-photon events than to increase the number of 
accidentals (others feel differently about this). 

-----------------------
Comment->
D. Energy and momentum correction
2 remarks to eloss:
eloss does not change the "three-momentum" (p.4) but only the 
magnitude, it assumes a straight line from DC region 1 to the target;
eloss propagates back to the "start position" of the track, which is 
poorly known for single-track events in photon beam experiments, 
therefore the eloss correction is not really precise, in particular 
for slow tracks and heavy material (like butanol, carbon).

Reply:
Replaced "three-momentum" with "momentum" (two instances).

-----------------------
Comment->
Formulation p4.bottom: After processing the energy loss (skip due to 
traversing logical volumes ... no particle looses energy in "logical"
volumes!)

Reply:
Replaced all instances of "logical volume(s)" with "material volume(s)"

-----------------------
Comment->
For the kinematic fit show pull distributions (are they normal 
distributions?) and probability distributions (and it would be good 
to state the momentum correction functions in the analysis note).

Reply:
We did not perform the kinematic fitting. The wording on this was 
misleading. The document has been modified and now includes
"Additional momentum corrections were supplied by Sungkyun Park of 
Florida State University using the FSU kinematic fitting routine."

-----------------------
Comment->
E. Missing mass reconstruction
end of 2nd sentence: ... and $\gamma$ is the incident photon. 
(skip: four-momentum)
last sentence: Here $M_p$ is the mass of the proton, $E_{pf}$ is the
... (add subscript 'f')

Reply:
Fixed.

-----------------------
Comment->
F. Binning ...
1st sentence: skip "and helicity states" because that is the topic 
of the 2nd sentence. The determination of target and beam 
polarization directions has been somewhat confusing (e.g. period 5 
had actually opposite directions than Steffen's initial assignment), 
the overall (i.e. beam*target) polarization was correct.

Reply:
Removed "and helicity states" from 1st sentence.

Only the overall sign is attributed to Steffen's study:
"...overall sign applied to that run determined empirically from 
initial $\pi^{+}$ photoproduction analysis by Dr. Steffen Strauch"

-----------------------
Comment->
What do you mean with "By convention, the helicity 3/2 state was 
assigned a negative sign"?

Reply: 
The theorists sometimes use a convention where the helicity 1/2 state
is assigned the negative sign.

Changed:
"By convention, the..." to "The convention used in this document is 
that the...".

-----------------------
Comment->
p.6 1st sentence: last words "for the data set" (not "the the")

Reply:
Fixed.

-----------------------
Comment->
G. The scale factor
1st sentence: better "cleaner spectrum for the plots of the total 
yield" and last words: "before the spectrum is fitted".
What is "m^2"? What defines m^2<=-0.4 GeV^2 to be unphysical region? 
Isn't this far from m^2=-0.01 (m^2=0 means Compton scattering gp->gp 
assuming m^2=miss.mass squared for single proton event).

Reply:
I do not understand how m^2<=-0.4 GeV^2 could be anything other than
unphysical.

Changed 
"In order to obtain a cleaner spectrum for combined helicity 
plots (the denominator portion of the asymmetry equation)"
to 
"In order to obtain a cleaner spectrum for the plots of the total
yield"

Changed
"An unphysical mass region ($m_X^{2}\leq -0.4$ GeV$^{2}$)"
to
"An unphysical mass region ($m_X^{2}\leq -0.4$ GeV$^{2}$, from the 
assumed reaction $\gamma p \rightarrow p X$)"


-----------------------
Comment->
The target ranges are quite wide (-5.0 to 4.5cm - which is butanol + 
target windows + He4 bath) and +4.5 to 10.5cm. What about poorly 
identified tracks (e.g. tracks in nonfiducial regions)? eloss does 
not assume all tracks between 4.5 and 10.5cm being from carbon, so 
the corrections are not uniform over the range ...

Reply:
There is such a large amount of vertex leakage between targets that
it hardly matters exactly where the target ranges are set. In fact,
having a gap between the butanol and carbon targets makes the 
accounting much more difficult. No matter where we put any reasonable
target cut, we can have a large fraction of butanol events with a 
vertex located in the carbon region and a large fraction of carbon
events with a vertex located in the butanol region.

Eloss will only be as good as the vertex reconstruction and, 
unfortunately, the TBTR vertex reconstruction is not very good. 

-----------------------
Comment->
Fig.4 is the proton angle (\theta_0) in lab or cm frame?

Reply:
Changed "angle and momentum" to "lab angle and lab momentum"

-----------------------
Comment->
H. Leakage factor
About the differences between TBTR and MVRT see above.
Shouldn't be the cuts in fig.5 be on parallel lines to the main 
diagonal? (besides, what I find strange: MVRT seems to have more 
'wrong' entries (=outside the main diagonal range) than TBTR?!)

Reply:
Making cuts along parallel lines to the main diagonal would
only give us events that have a range of self consistency
between TBTR and MVRT. For single track events using the TBTR
vertex, we do not know what the multi-track MVRT vertex would give.
For the single track proton events we have to make cuts on the TBTR 
vertex. We then ask: "had there been multiple tracks, what fraction
of the events selected using TBTR would be seen to be in the wrong
target?"

As far as MVRT having more wrong entries:
I see what you are saying, but the log scale might be making
the vertex distribution look worse than it actually is. I
have a lego plot of TBTR-z versus MVRT-z from a single run that 
is in linear scale at
http://www.jlab.org/Hall-B/secure/g9/ASU/tbtrVsmvrt.gif

It is at low momentum and low angles where all the vertex leakage 
problems occur, and there it is clear that MVRT does a much better 
job than TBTR.

-----------------------
Comment->
Are the 'vertex' position plotted before or after eloss and 
mom.corr.?

Reply:
The vertex position is plotted after eloss and momentum corrections.

-----------------------
Comment->
p.9 "... the bound content will cancel due to equal sampling sizes 
and assuming that the bound nucleons are unpolarized".
You should plot Y_2 to show that the helicity subtracted yield for 
carbon is practically zero.

Reply:
First let me define 
yFrac = [Y(+) - Y(-)]/[Y(+) + Y(-)] 

A plot of yFrac for gamma p -> p X can be found at
http://www.jlab.org/Hall-B/secure/g9/ASU/obsEpx.gif

In the unphysical region the value from a zero order fit is
-0.00762828+/-0.00194197
The estimate we have for the equal sample size between + and -
photon polarizations over the entire run period is 2.1%.

We could look more closely at this, but I don't see anything to be
alarmed about.
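The zero-order fit quoted above is just an error-weighted mean of the per-bin asymmetries. A small sketch of that calculation with made-up yields (not the plotted data):

```python
import math

# Hypothetical helicity-separated yields per bin in the unphysical mass
# region; Y(+) and Y(-) stand in for the real yields.
y_plus = [1040, 980, 1011, 995, 1023]
y_minus = [1000, 1005, 990, 1012, 998]

vals, errs = [], []
for p, m in zip(y_plus, y_minus):
    s = p + m
    vals.append((p - m) / s)                     # yFrac = [Y(+)-Y(-)]/[Y(+)+Y(-)]
    errs.append(2.0 * math.sqrt(p * m / s) / s)  # Poisson error propagation

# zero-order (constant) fit = inverse-variance weighted mean
w = [1.0 / e ** 2 for e in errs]
c = sum(wi * v for wi, v in zip(w, vals)) / sum(w)
c_err = 1.0 / math.sqrt(sum(w))
print(f"constant fit: {c:.4f} +/- {c_err:.4f}")
```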

-----------------------
Comment->
p.9 last sentence: "This assumption ... " this sentence is only valid
if the acceptance for butanol and carbon is equal.

Reply:
Changed
"This assumption will hold true since the ratio of the bound content
between the unphysical region (as defined in the scale factor 
section) and the physical region should remain constant."

to

"This assumption will hold true if, the detection efficiency between 
events coming from the butanol and the carbon targets are the same, 
and the ratio of the bound content between the unphysical region (as 
defined in the scale factor section) and the physical region remain
constant."


-----------------------
Comment->
J. Constructing ...
I haven't figured out what we can learn from the super ratio when 
making lots of assumptions (in particular yields in physical to 
unphysical ranges). In general, the argument that you get expected 
values for no target leakage doesn't say anything about the 
correctness of the manipulations.

Reply:
Each person of the ASU group came up with a different way to calculate
the E observable. We then checked that each method was equivalent. 
Brian's method is the one we used in the analysis. My method was 
very similar to Brian's (simple differences in how the scale factors 
were defined and how the target regions were dealt with). Barry came
up with the super ratio method. Since the super ratio method was the 
least similar to Brian's method, we decided that it was good to show
that the two methods were mathematically consistent. We are simply
showing that we can derive the observable E in two very dissimilar
manners, yet obtain mathematically consistent expressions. Usually, 
such algebraic contortions are not worth the effort, however, the
E observable (including scale factors and leak factors) gets a bit
ugly and we felt it was worth the effort for us to check our results 
against one another and report on that.

-----------------------
Comment->
L. Fitting routines
Great! The description of the procedure is very detailed.
One question though: 2nd last paragraph: you assume the eta peak 
(mean+width) to be the same in numerator and denominator ... but 
plots in appendix A show that the denominator peak is much wider.

Reply:
It just looks that way because the range of data shown can be
different between the numerator and denominator fits. If you 
count the number of bins within the shaded regions, you should
find that the numerator and denominator peaks have the same number
of bins (same width).

-----------------------
Comment->
M. Uncertainties and systematics
Parts of this section are repeated!

Reply:
Fixed.

-----------------------
Comment->
I am not used to taking Difference and Sum as (quasi) independent 
samples, what happens if the subtracted yield is zero (N~0)? I 
wanted to compare this with the usual binomial statistics ...
later!!

Reply:
I do not understand. I need clarification.

-----------------------
Comment->
Systematic uncertainties for the target are not the small variations 
within runs (or run groups) but uncertainties inherent in the 
measurements themselves: Q-meter NMR ~3%, Moeller polarimeter ~3%.

Reply:
The uncertainties have been revised to include the Q-meter
and Moeller polarimeter systematic uncertainties. Also the 
systematic uncertainty due to unequal sample sizes of helicity +,-
events (2%) has been added in. We now report an overall systematic
uncertainty of 5.4%.

-----------------------
Comment->
For the beam energy: relative precision is high but the energy has 
often been slightly different from the recorded value (I don't know 
whether we have energy measurements from Hall A or C for that time).

Reply:
How can I find out what the systematic uncertainty of the electron
beam energy is?

-----------------------
Comment->
Photon polarization equation (Olsen-Maximon) should be: p_\gamma = 
P_e \frac{....} (skip "as derived...").
How did you account for P_\gamma: did you use weighted polarization 
values per E-bin? Then you have also a variance of that mean.

Reply:
Fixed the equation.

We did use the weighted polarization values per E-bin. 

All of the uncertainty in the polarization will be due to
systematics of the polarization. 

The statistical variance of the mean for photons incident on the 
target is very very small (assuming I did the calculations correctly):
If <pol> = sum(pol_i)/sum(I_i),
where the subscript i denotes event i, sum(x_i) means to sum
x over all i events, and I_i is the indicator for a Poisson 
distributed random variable (sum(I_i) = number of events (N)),
then
var(<pol>)/[<pol>^2] = sum([pol_i]^2)/[sum(pol_i)^2] + 1/N -
2*cov(sum(pol_i),sum(I_j))/[sum(pol_i)*sum(I_i)]

Even if we neglect the covariance and use a single run, the
fractional uncertainty in <pol> is about 0.4%. See plot at
http://www.jlab.org/Hall-B/secure/g9/ASU/polSigFrac.gif

Note: The y-axis is the standard deviation of <pol> divided by <pol>.
The x-axis is the center of mass energy (W) in GeV.


If we include the covariance the fractional uncertainty gets super 
small. The covariance term becomes
2*cov(sum(pol_i),sum(I_j))/[sum(pol_i)*sum(I_i)] =
2*cov(sum(pol_i),sum(I_i))/[sum(pol_i)*sum(I_i)] =
2*sum(cov(pol_i,I_i))/[sum(pol_i)*sum(I_i)] =
2*sum(pol_i)/[sum(pol_i)*sum(I_i)] =
2/N

This then gives us
var(<pol>)/[<pol>^2] = sum([pol_i]^2)/[sum(pol_i)^2] - 1/N

Now the fractional uncertainty in the mean polarization of photons
incident on the target for a single run is about 0.006%. See plot at
http://www.jlab.org/Hall-B/secure/g9/ASU/polSigFrac2.gif

It gets even better if we think about what we really want. We do not
care about the mean value of the polarization for photons incident on
the target. We only care about the mean value of the photon 
polarization for events that are seen. This means that the 
uncertainty in the number of events of interest (those events actually
seen) is zero. If we now calculate the fractional uncertainty in the 
mean polarization for the events of interest we get exactly
a value of zero.

The only uncertainty that will matter for the polarization is
systematic, and even if we did care about the statistical uncertainty
in the polarization for the photons incident on the target, it is too
small to bother with.

(Sorry about being so long winded, but I felt I had to fully address 
this concern.)
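A quick toy simulation (hypothetical event counts and polarization values, nothing from g9a) illustrates how small the statistical spread of the per-run mean polarization is:

```python
import math
import random

random.seed(7)

# Toy check that the statistical spread of the per-run mean photon
# polarization is negligible; all numbers are hypothetical.
mu_events, trials = 5000, 400
means = []
for _ in range(trials):
    # Poisson-distributed event count, normal approximation for large mu
    n = max(1, round(random.gauss(mu_events, math.sqrt(mu_events))))
    pols = [random.uniform(0.3, 0.8) for _ in range(n)]  # per-event polarization
    means.append(sum(pols) / n)

mean = sum(means) / trials
spread = math.sqrt(sum((m - mean) ** 2 for m in means) / (trials - 1))
frac = spread / mean
print(f"fractional spread of <pol>: {frac:.5f}")
```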

-----------------------
Comment->
Scale factors: a systematic uncertainty of 2% is very ambitious (as 
far as I remember, Mike only claims variations within 2%).

Reply:
The claim of 2% was determined by looking at the consistency of 
the scale factors using different subsets of bound events. This
is the best estimate I could obtain. You can find documentation as to
how I came up with the 2% on pages 6-11 of
http://wwwold.jlab.org/Hall-B/secure/g9/ASU/sfDraft/sfNote.pdf

-----------------------
Comment->
Using eta->3-pion yield over all kinematic bins can give a good 
estimate for most bins but maybe not the ones where you hardly get 
any 3-pi yield.

Reply:
The eta->3-pion branch gives us the best chance at making the
approximation but does not have enough events for us to go bin-by-bin.

-----------------------
Comment->
N. Finalizing
I don't understand why a Gaussian makes sense to plot the ratios. 
Shouldn't it be a Poisson type?

Reply:
You are correct that a Gaussian may not have been a great choice; 
however, we are using the Gaussian distribution as a rough visual
guide. In the document it is stated that:
"Upon examining the plot, a reasonable cut can be imposed at the 
three standard deviation value of 0.2. Making this restriction 
ensures that the regions analyzed will have relatively small and 
smooth variations in scale factors, and the values used should be 
reliable."

-----------------------
Comment->
I think many bins in appendix A that were not used in the final plots
could be recovered, in particular if SC paddles were poorly working 
or calibrated in one but not all sectors.

Reply:
We don't want to analyze the data again to pick up a few bins. Once 
the CB-ELSA data is out, the impact for the CLAS data will be greatly
diminished.

-----------------------
Comment->
VI. Results
p.20 last sentence: The very first ... are excluded ...: these are 
NOT extreme angles: arccos(0.8)=37deg (in CM, which can vary between 
15-30deg in lab), so there must be a different reason, e.g. too low 
proton momentum (or you might have taken cos\theta between -0.9 to 
0.9).

Reply:
We are just saying that the first and last bins contain the most
extreme angles.

-----------------------
Comment->
p.21 3rd paragraph: ... a fit of a constant to the combined data 
gives a value of 0.98 \pm 0.03. It seems to me that the data show 
much larger variations than 0.03.

Reply:
It is difficult to judge these things by eye. I just ran the fit 
again and got: 0.989577 +/- 0.0307537

Changed value to 0.99 \pm 0.03

-----------------------
Comment->
p.23 paragraph to fig 19(C): I would skip the remark that "SAID has 
a small dip ...", the SAID curve is not flat but very different from 
the data at backward angles.

Reply:
I disagree. SAID does not look very different from the data in the
backward angles (the curve is close to hitting all of the error bars 
in the backward direction). 

Removed all qualitative discussion regarding the comparison of data
to theoretical curves for this plot.

-----------------------
Comment->
VII. Conclusions
2nd sentence: There are currently no published ... observable E for 
\eta production.
3rd sentence: In every kinematic bin studied in this analysis (skip:
"note") ...

Reply:
Added "for \eta production" to end of 2nd sentence.

Removed "note" in 3rd sentence.

-----------------------
Comment->
It is nice to have strong statements ... but this data (unfortunately)
will not provide a benchmark for theoretical models.
The statement "Without a full PWA ..." is rather confusing ...

Reply:
Changed "benchmark" to "data needed"

Removed comment regarding full PWA.

-----------------------

-------------- next part --------------

Comment->
I just started looking at it and I noticed that you left out the 
factor of electron polarization in the formula for degree of circular
polarization.

Section III and IV are the same thing.

Reply:
Fixed



