[Clas_offline] [Fwd: Re: b-field parameterization/calculation]
Sebastian Kuhn
kuhn at jlab.org
Thu Nov 12 12:54:03 EST 2009
Hi Dave,
we don't really disagree all that much. Obviously N parameters are equivalent to N grid points. Using different parametrizations for different regions corresponds to using an "adapted" grid that is finer where the field fluctuates more strongly (close to the coils) and coarser elsewhere, where a 1st or 2nd order polynomial is a good interpolation. That's why a grid in r*B(r,theta,phi) was suggested - one could choose a small grid size when phi is close to a coil.
We only differ on whether it is better to have a few regions with lots of parameters or many "regions" (grid voxels) with fewer parameters. In the end, it all boils down to computing efficiency - is it faster to look a value up in an indexed table (and maybe evaluate a 2nd-order interpolation) or to evaluate a more complicated parameterized expression? I'm no expert on that question...
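
Just to make the comparison concrete, here is a minimal sketch of the two
access patterns in Java (made-up grid and names, not actual CLAS/SOCRAT code):

  public class AccessPatterns {

      // Option 1: indexed table lookup (nearest grid point shown; a real
      // version would also interpolate between neighbouring points).
      static double lookupBphi(double[] table, double r, double z,
                               double rMin, double zMin,
                               double dr, double dz, int nz) {
          int ir = (int) Math.round((r - rMin) / dr);  // O(1) index computation
          int iz = (int) Math.round((z - zMin) / dz);
          return table[ir * nz + iz];                  // one array access
      }

      // Option 2: evaluate a parameterized expression (2nd order shown).
      static double paramBphi(double[] c, double r, double z) {
          return c[0] + c[1]*r + c[2]*z + c[3]*r*r + c[4]*r*z + c[5]*z*z;
      }
  }

Either way the cost per call is small; the question is how the constant
factors compare on our machines, which only a measurement can settle.
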
- Sebastian
David Lawrence wrote:
>
> Hi Sebastian,
>
> I kind of hesitate to keep commenting since I'm sort of an outsider
> here, but I trust you will discount my opinions accordingly.
>
> In the worst case one can use N parameters to perfectly reproduce N
> grid points. However, one assumes that the grid spacing is chosen so
> that there are only small changes from point to point. Thus, fewer
> parameters are required to describe the field (again, since it varies
> smoothly over several points). If this is not the case, then your grid
> spacing is too large. Whatever the grid spacing is, the higher order
> terms are automatically truncated by the grid spacing itself.
>
> What drives the number of terms needed for a proper parameterization
> is the error on the momentum resolution due to the uncertainty in the
> field itself (from imperfect knowledge of the current and coil positioning)
> and other contributors such as multiple scattering. (M.S. is the dominant
> contributor for Hall-D, but will be smaller for Hall-B and so may not
> dominate there.)
>
> Note also that the parameterization can (and should) be broken up
> into sections of the detector. In the preliminary work I referenced
> earlier, I broke the detector up into 10 sections and parameterized each
> individually. The (far from final) result was a parameterization that
> achieved better than 100 Gauss (~0.5%) agreement everywhere in the
> active area with 2000 values, as opposed to the ~360,000 values (for
> 180,000 grid points) of the full map.
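>
> For illustration only, a piecewise parameterization along those lines
> could be organized roughly like the sketch below (the section boundaries,
> polynomial order and coefficients are placeholders, not my actual fit):
>
>   public class PiecewiseField {
>       private final double[] zEdges;   // section boundaries in z
>       private final double[][] coeff;  // coeff[section][term], ~6 numbers each
>
>       public PiecewiseField(double[] zEdges, double[][] coeff) {
>           this.zEdges = zEdges;
>           this.coeff = coeff;
>       }
>
>       // Evaluate one field component at (r, z): pick the section,
>       // then evaluate a low-order polynomial for that section.
>       public double bPhi(double r, double z) {
>           int s = 0;
>           while (s < zEdges.length - 2 && z > zEdges[s + 1]) s++;
>           double[] c = coeff[s];
>           return c[0] + c[1]*r + c[2]*z + c[3]*r*r + c[4]*r*z + c[5]*z*z;
>       }
>   }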
>
> So I guess I would have to respectfully disagree with your statement
> that a parameterization will not work. If you've set your map's grid
> spacing properly, and you break up the problem so you're not fitting
> 20th order polynomials, then it pretty much has to give you a smaller
> disk/RAM footprint than the original map.
>
> Just my $0.02.
>
> Regards,
> -Dave
>
> Sebastian Kuhn wrote:
>> Just my 5 cents: making a grid of r*B(r,theta,phi) is probably the most efficient method. However, while this will give a "smoother" function in most places, close to the coils we will have a very complicated field so the full resolution and all higher-order terms will be needed there (a parametrization will not work).
>>
>> - Sebastian
>>
>> Alexander Vlassov wrote:
>>
>>> ------------------------------------------------------------------------
>>>
>>> Subject:
>>> Re: [Clas_offline] b-field parameterization/calculation
>>> From:
>>> Alexander Vlassov <vlassov at jlab.org>
>>> Date:
>>> Wed, 11 Nov 2009 15:50:06 -0500
>>> To:
>>> David Lawrence <davidl at jlab.org>
>>>
>>>
>>> Hi,
>>>
>>> I do not think that the magnetic field can be parametrized at all (if we
>>> want reasonable accuracy). I cannot imagine how many parameters it would need.
>>>
>>> One of the ideas (Kossov's, actually) was to use B(r, theta, phi) instead of
>>> B(x,y,z). The argument was that the field variation dB/dphi is about the
>>> same at small and large r, so a single phi step works at all radii, whereas
>>> a Cartesian (x,y,z) grid would need different cell sizes at small and large
>>> distances.
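>>>
>>> As a rough sketch (illustrative step sizes and names only, not actual
>>> code), indexing such an (r, theta, phi) table could look like:
>>>
>>>   public class SphericalGrid {
>>>       static final double DR = 2.0, DTHETA = 0.01, DPHI = 0.005; // grid steps
>>>       static final int NTHETA = 300, NPHI = 1260;                // points per dim
>>>
>>>       // One flat index per grid cell; the same DPHI serves small and large r.
>>>       static int index(double x, double y, double z) {
>>>           double r     = Math.sqrt(x*x + y*y + z*z);
>>>           double theta = Math.acos(z / r);
>>>           double phi   = Math.atan2(y, x) + Math.PI;
>>>           int ir = (int) (r / DR);
>>>           int it = (int) (theta / DTHETA);
>>>           int ip = (int) (phi / DPHI);
>>>           return (ir * NTHETA + it) * NPHI + ip;
>>>       }
>>>   }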
>>>
>>> - Alex.
>>>
>>>
>>> David Lawrence wrote:
>>>
>>>> Hi All,
>>>>
>>>> If the field were parameterized instead of tabulated, one could
>>>> see performance enhancements in 2 areas:
>>>>
>>>> 1.) Startup time. A 3-D field map can easily be >200MB, which takes
>>>> some noticeable time to read in at startup.
>>>>
>>>> 2.) Gradient calculation. Assuming the basis set for the
>>>> parameterization is chosen so that one can easily take its derivative.
>>>>
>>>> One could of course use a pre-calculated gradient table as well, but
>>>> that would have the same size as the field map itself, simply moving
>>>> the overhead incurred at every startup to a different place, but not
>>>> eliminating it.
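>>>>
>>>> As a sketch of point 2 (placeholder basis and names, not a proposal for
>>>> the actual functional form), a low-order polynomial parameterization
>>>> gives the gradient analytically, with no extra table:
>>>>
>>>>   public class ParamGradient {
>>>>       // B = c0 + c1*r + c2*z + c3*r^2 + c4*r*z + c5*z^2
>>>>       static double b(double[] c, double r, double z) {
>>>>           return c[0] + c[1]*r + c[2]*z + c[3]*r*r + c[4]*r*z + c[5]*z*z;
>>>>       }
>>>>       static double dBdr(double[] c, double r, double z) {
>>>>           return c[1] + 2*c[3]*r + c[4]*z;   // analytic derivative
>>>>       }
>>>>       static double dBdz(double[] c, double r, double z) {
>>>>           return c[2] + c[4]*r + 2*c[5]*z;
>>>>       }
>>>>   }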
>>>>
>>>> In principle, parameterizing r*B may result in fewer terms being
>>>> needed for the parameterization, but that would have to be looked at.
>>>>
>>>> -David
>>>>
>>>> Mac Mestayer wrote:
>>>>
>>>>> Hello Alex;
>>>>>
>>>>> You might be right. It may not improve time. I was thinking
>>>>> that we could use a coarser grid, with fewer grid points spaced
>>>>> further apart, and still achieve the same accuracy. I was thinking
>>>>> that a smaller table would be faster to look up, but I had in mind
>>>>> the case of a non-indexed table, like the link table, where you have
>>>>> to search through the table. A b-field table would probably be
>>>>> indexed by position and have every table element filled, so it would
>>>>> take the same time to find a value in a large indexed table as in a
>>>>> small one.
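>>>>>
>>>>> To illustrate the point (hypothetical code, not from SOCRAT): in a
>>>>> non-indexed table you have to search, while in an indexed one you
>>>>> compute the slot directly, so the lookup cost does not grow with the
>>>>> table size:
>>>>>
>>>>>   public class TableLookup {
>>>>>       // non-indexed: scan until the matching entry is found
>>>>>       static double searchTable(double[][] entries, double x) {
>>>>>           for (double[] e : entries)
>>>>>               if (Math.abs(e[0] - x) < 1e-6) return e[1];
>>>>>           return Double.NaN;
>>>>>       }
>>>>>       // indexed: compute the slot from the coordinate (constant time)
>>>>>       static double indexTable(double[] values, double x,
>>>>>                                double x0, double dx) {
>>>>>           return values[(int) ((x - x0) / dx)];
>>>>>       }
>>>>>   }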
>>>>>
>>>>> There may still be a savings in computer time if we can use
>>>>> a linear interpolation method rather than a 2nd-order one.
>>>>>
>>>>> - Mac
>>>>>
>>>>> "mestayer at jlab.org", (757)-269-7252
>>>>>
>>>>> On Wed, 11 Nov 2009, Alexander Vlassov wrote:
>>>>>
>>>>>
>>>>>
>>>>>> Mac Mestayer wrote:
>>>>>>
>>>>>>
>>>>>>> Hello folks;
>>>>>>>
>>>>>>> Here is my suggestion for a useful, self-contained software
>>>>>>> project for Sebouh in the context of his Java-ization of SOCRAT.
>>>>>>>
>>>>>>> I understand that a fair fraction of tracking computing time is
>>>>>>> devoted to estimating the magnetic field along a trajectory.
>>>>>>> One way to estimate the field is to fill a table with pre-calculated
>>>>>>> values of the B field components on a grid of space points, and
>>>>>>> then to interpolate between grid points to estimate the field value
>>>>>>> at any arbitrary space point. The speed of such a process varies
>>>>>>> with the grid size and with the order of interpolation (linear, 2nd
>>>>>>> order, etc.).
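>>>>>>>
>>>>>>> For example, linear interpolation in one dimension looks roughly like
>>>>>>> the sketch below (illustrative names; the 3-D case repeats this along
>>>>>>> each axis):
>>>>>>>
>>>>>>>   public class Interp {
>>>>>>>       static double linear(double[] b, double x, double x0, double dx) {
>>>>>>>           int i = (int) ((x - x0) / dx);   // lower grid point
>>>>>>>           double t = (x - x0) / dx - i;    // fractional distance to next point
>>>>>>>           return (1 - t) * b[i] + t * b[i + 1];
>>>>>>>       }
>>>>>>>   }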
>>>>>>>
>>>>>>> In the summer, Peter Bosted made a good suggestion. Since the
>>>>>>> dominant spatial trend of the B-field for a toroidal magnet is
>>>>>>> a 1/r dependence, he suggested that we tabulate r*B instead of
>>>>>>> B itself.
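>>>>>>>
>>>>>>> A minimal sketch of what using such a table might look like (names and
>>>>>>> the 1-D form are illustrative only): the table stores r*B, which varies
>>>>>>> more slowly between grid points, and B is recovered by dividing by r.
>>>>>>>
>>>>>>>   public class RBTable {
>>>>>>>       static double[] rTimesB;   // table filled with r*B values
>>>>>>>       static double r0, dr;      // grid origin and spacing in r
>>>>>>>
>>>>>>>       static double bPhi(double r) {
>>>>>>>           int i = (int) ((r - r0) / dr);
>>>>>>>           double t = (r - r0) / dr - i;
>>>>>>>           double rb = (1 - t) * rTimesB[i] + t * rTimesB[i + 1];
>>>>>>>           return rb / r;         // recover B from the interpolated r*B
>>>>>>>       }
>>>>>>>   }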
>>>>>>>
>>>>>>> If folks agree, I'd like to see Sebouh investigate this.
>>>>>>> The project would have several stages. First, simply isolate
>>>>>>> and modularize the existing B-field estimation part of SOCRAT into
>>>>>>> an independent module. Secondly, measure how much time the
>>>>>>> B-field estimation is taking for a wide variety of typical tracks.
>>>>>>> Thirdly, create a new table with r*B as the tabulated values instead
>>>>>>> of B, and time this (it shouldn't be any faster if we don't change
>>>>>>> anything else). Now we can play around with reducing the number of
>>>>>>> grid points by making a coarser binning and see how this affects
>>>>>>> the computing time. Likewise, we can investigate using a lower-order
>>>>>>> interpolation scheme. As well as measuring computing time, we need
>>>>>>> a measure of the loss of resolution due to coarsening the grid, so
>>>>>>> we'll also have to keep track of the momentum and angular resolution
>>>>>>> of the reconstructed tracks.
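>>>>>>>
>>>>>>> The timing part of the study could be as simple as something like this
>>>>>>> sketch (hypothetical interface and names):
>>>>>>>
>>>>>>>   public class TimingTest {
>>>>>>>       interface FieldLookup { double b(double x, double y, double z); }
>>>>>>>       static volatile double sink;   // keeps the JIT from dropping the loop
>>>>>>>
>>>>>>>       // Time one lookup scheme over many sample points along typical tracks.
>>>>>>>       static long timeNanos(FieldLookup f, double[][] points) {
>>>>>>>           long t0 = System.nanoTime();
>>>>>>>           double sum = 0;
>>>>>>>           for (double[] p : points) sum += f.b(p[0], p[1], p[2]);
>>>>>>>           sink = sum;
>>>>>>>           return System.nanoTime() - t0;
>>>>>>>       }
>>>>>>>   }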
>>>>>>>
>>>>>>> Any comments on the importance of such a project or on its
>>>>>>> implementation?
>>>>>>>
>>>>>>> - Mac
>>>>>>>
>>>>>>> "mestayer at jlab.org", (757)-269-7252
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>> Hi,
>>>>>> I can understand that making a table of B*r instead of B may
>>>>>> improve accuracy,
>>>>>> but how can it affect the calculation time?
>>>>>> - Alex.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>> --
>>>>
>>>> ------------------------------------------------------------------------
>>>> David Lawrence, Ph.D.
>>>> Staff Scientist, Jefferson Lab
>>>> Office: (757) 269-5567   Pager: (757) 584-5567
>>>> http://www.jlab.org/~davidl   davidl at jlab.org
>>>> ------------------------------------------------------------------------
>>>>
>>>>
>>>>
>>>
>>> ------------------------------------------------------------------------
>>>
>>>
>>
>
> --
>
> ------------------------------------------------------------------------
> David Lawrence, Ph.D.
> Staff Scientist, Jefferson Lab
> Office: (757) 269-5567   Pager: (757) 584-5567
> http://www.jlab.org/~davidl   davidl at jlab.org
> ------------------------------------------------------------------------
>
>