Hi Cody, Gerard,

Yes, I think it does need a head start of 5 samples. When upsampling a smaller chunk of the data, 20 samples total was enough, starting 10 before the initial hit threshold. There is a little more info here: https://halldweb1.jlab.org/wiki/index.php/LE_algo, and Gerard's original code is still here: http://npvm.ceem.indiana.edu/~gvisser/GlueX/rdat.c. I'll send you (Cody & Beni) my C++ code when I've cleaned it up a bit.
Naomi.
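
[Editor's note: for reference, a minimal C++ sketch of the windowed upsampling Naomi describes above (and Gerard explains below): take a short region starting a few samples before the hit threshold, replicate each raw sample, and run a short FIR filter over the result so the head-start samples prime the filter pipeline. The function name, the x5 factor, and the 5-tap kernel are illustrative placeholders, not the coefficient vector Gerard calculated.]

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Upsample a small window of fADC samples around a hit.
    // preSamples/totalSamples follow the numbers Naomi quotes (10 before the
    // threshold crossing, 20 raw samples total); factor and taps are assumed.
    std::vector<double> upsampleWindow(const std::vector<double>& raw,
                                       std::size_t hitIndex,
                                       std::size_t preSamples = 10,
                                       std::size_t totalSamples = 20,
                                       std::size_t factor = 5)
    {
        // 1. Select the small region: start a few samples before the hit.
        std::size_t start = (hitIndex > preSamples) ? hitIndex - preSamples : 0;
        if (start >= raw.size()) return {};
        std::size_t stop = std::min(start + totalSamples, raw.size());

        // 2. Replicate each raw sample 'factor' times (sample-and-hold expansion).
        std::vector<double> expanded;
        expanded.reserve((stop - start) * factor);
        for (std::size_t i = start; i < stop; ++i)
            for (std::size_t k = 0; k < factor; ++k)
                expanded.push_back(raw[i]);

        // 3. Pass through a short FIR interpolation filter. A short kernel is
        //    why only ~5 head-start samples are needed to prime the pipeline.
        const std::vector<double> fir = {0.1, 0.2, 0.4, 0.2, 0.1}; // placeholder taps
        std::vector<double> out(expanded.size(), 0.0);
        for (std::size_t n = 0; n < expanded.size(); ++n)
            for (std::size_t t = 0; t < fir.size() && t <= n; ++t)
                out[n] += fir[t] * expanded[n - t];
        return out;
    }
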
On Aug 6, 2013, at 12:50 PM, Gerard Visser wrote:

> Hi Cody,
> 	If the upsampling is done continuously, then the processing has to be done in the frontend FPGAs; obviously you don't want to send 4-5x as much data over the daisy-chain to the "processor" FPGA.
> 	Perhaps you will find that you can fit all the required processing in the frontend FPGAs. This would be nice, and would remove any concerns about the readout daisy-chain bandwidth. In this case it is simplest to upsample continuously, and probably to process on the fly (although you'll have to think about how you will handle overlapping windows from closely spaced triggers; e.g. you will need multiple copies of the integrator/accumulator).
> 	If that is not the plan, i.e. if you would put the processing in the "processor" FPGA and send raw samples from the frontend to the processor FPGA as was the original intention, then the upsampling needs to be done on a small region. I think this will be perfectly feasible. Upsampling just consists of replicating samples and then passing them through an FIR filter, so if the filter is short enough, not many original samples are needed to fully prime the pipeline. Perhaps as few as 5 or so? Clearly the FIR coefficient vector needs to be chosen carefully. Naomi is probably (?) still using the original one that I calculated, but a shorter one may give good enough performance. Of course this can and should be investigated offline, with a given raw data set analyzed with varying algorithms.
>
> 	- Gerard
>
> p.s. Details on the frontend-to-processor daisy-chain: I expect this to be able to run at 80 MHz, possibly 100 MHz or more. There are 18 bits, but let's say 2 are just used as flags to manage the transfer, so this is >160 MB/s. There is one of these per main/mezzanine set of 36 channels, i.e. the total data rate into the "processor" FPGA is limited to 320 MB/s (or 400 MB/s if you can get 100 MHz, etc.).
>
> p.s.2. For filter design I have used "meteor", which is available at http://www.cs.princeton.edu/~ken/meteor.html.
>
> On 8/6/2013 11:25 AM, Cody Dickover wrote:
>> Hi Gerard,
>>
>> ...
>>
>> 1. There was some question at the tracking meeting about up-sampling on a constant basis vs. some small region and the headroom involved for that. I assume we are in the green on that, but am not familiar with the code or the implementation. Do you have some detail on that?
>
> _______________________________________________
> Halld-tracking-hw mailing list
> Halld-tracking-hw@jlab.org
> https://mailman.jlab.org/mailman/listinfo/halld-tracking-hw
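
[Editor's note: a quick back-of-envelope check of the daisy-chain numbers in Gerard's p.s. above. It assumes 16 of the 18 bits carry payload (2 used as transfer flags) and that two 36-channel main/mezzanine links feed each "processor" FPGA, which is what the 320 MB/s total implies; those assumptions are spelled out in the comments.]

    #include <cstdio>

    int main() {
        const double clock_hz  = 80e6;    // link clock: 80 MHz (possibly 100 MHz or more)
        const int    data_bits = 18 - 2;  // 18 bits minus 2 flag bits
        const int    links     = 2;       // main/mezzanine sets per processor FPGA (assumed)

        double per_link_MBps = clock_hz * data_bits / 8.0 / 1e6; // 160 MB/s per daisy-chain
        double total_MBps    = per_link_MBps * links;            // 320 MB/s into processor FPGA

        std::printf("per link: %.0f MB/s, into processor FPGA: %.0f MB/s\n",
                    per_link_MBps, total_MBps);
        return 0;
    }

At 100 MHz the same arithmetic gives 200 MB/s per link and 400 MB/s total, matching the figures in the p.s.
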