Comparison of BrainDX and ANI surface maps

To confirm the expected match between surface maps made using the Applied Neuroscience software (NeuroGuide or the ANI Z DLL) and BrainDX, we produced the following test maps.  They show the essentially identical results of ANI and BrainDX in three illustrative samples.  These findings are relevant to live z-score training: they show that the central target values are the same whether an FFT or a JTFA method is used, once one accounts for the effect of the tapering window used in the FFT.  Quite simply, if a 10 microvolt alpha wave appears in a record, either an FFT or a JTFA analysis should produce a result of "10 microvolts" once the proper corrections are applied.

Recent attempts to disprove this have confounded the issue by using the FFT result expressed as power, which is first of all in "microvolts squared," and which also includes the uncompensated effect of the tapering window.  In such a comparison, the JTFA would show 10 microvolts while the FFT would show a result near 70 "microvolts squared."  This is simply a case of deliberately comparing different units of expression, and it obscures the essential equivalence of the methods.
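A minimal numerical sketch of this point (the sampling rate, epoch length, and Hann taper below are illustrative assumptions; the actual window and corrections in any given package may differ): a 10 microvolt sine recovered from an FFT reads 10 microvolts once the window's coherent gain is compensated, but about half that if it is not, and squaring into "microvolts squared" changes the number yet again.

```python
import numpy as np

fs = 256                       # sampling rate in Hz (assumed)
n = 2 * fs                     # one 2-second epoch
t = np.arange(n) / fs
amp_uv = 10.0                  # a 10 microvolt alpha wave at 10 Hz
x = amp_uv * np.sin(2 * np.pi * 10.0 * t)

w = np.hanning(n)              # tapering window applied before the FFT
spec = np.fft.rfft(x * w)

# Single-sided amplitude spectrum, compensated for the window's coherent gain
amp_spectrum = 2.0 * np.abs(spec) / w.sum()
k = int(np.argmax(amp_spectrum))

amp_corrected = amp_spectrum[k]
amp_uncorrected = 2.0 * np.abs(spec[k]) / n   # window gain left uncompensated

print(round(amp_corrected, 2))    # ~10.0 microvolts, as the JTFA would report
print(round(amp_uncorrected, 2))  # ~5.0: the Hann window halves the reading
```

The corrected amplitude agrees with the JTFA figure; an uncompensated reading, or one quoted in squared units, does not, which is the apples-to-oranges comparison described above.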

In our years of work using both live and static z-scores and maps, one finding is consistent: the maps look generally the same, in that positive deviations from normal appear in both, and negative deviations from normal appear in both.  The mean targets are also the same, as evidenced by the fact that a normal EEG that produces "green maps" in one method invariably produces "green maps" in the other.  In addition, particular focal abnormalities always appear in the same location and direction with both methods, except that the dynamic maps typically show a smaller deviation in z-scores.  This led to the historic need to adjust by "adding 1.0 to 1.5 standard deviations" to the dynamic numbers.

When dynamic z-scores were introduced years ago, this was a source of confusion, and clients asked why the adjustment had to be there.  The answer is that it does not "have" to be there; it is introduced by using the dynamic norms for comparison.  If static norms are used for comparison instead, the z-scores become larger, in concert with the static maps, which is what users expected all along but were unable to achieve.  Now, by using the static norms in the real-time implementation, it is possible to do live z-score training while seeing z-scores that agree with the static maps, and to see the live maps converge to the same maps one would see in a static analysis of the session data.
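The shrinkage, and why comparing against static norms removes it, can be sketched with illustrative numbers (the means and standard deviations below are hypothetical, not actual database values): static and dynamic norms share the same mean, but the dynamic normative standard deviation is wider, because instantaneous EEG varies more than epoch-averaged EEG, so the same observation yields a smaller z-score.

```python
# One observed log10 amplitude at a single site/band (assumed value)
log_amp = 1.30

# Hypothetical norms: identical means, but the dynamic (instantaneous)
# standard deviation is wider than the static (epoch-averaged) one.
static_mean, static_sd = 1.00, 0.15
dynamic_mean, dynamic_sd = 1.00, 0.30

z_static = (log_amp - static_mean) / static_sd
z_dynamic = (log_amp - dynamic_mean) / dynamic_sd

print(round(z_static, 2))   # 2.0 -- deviation against static norms
print(round(z_dynamic, 2))  # 1.0 -- same data, smaller z against dynamic norms
```

With a deviation in the same direction but a smaller magnitude, the dynamic map is "less colored," which is exactly the historic 1.0-to-1.5 standard deviation gap; scoring the live data against the static norms eliminates it.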

A publication is in preparation that shows that (1) the mean (target) values for EEG parameters are necessarily identical for either static or dynamic analysis, as long as the analyses are set up to produce equivalent output (e.g. "microvolts"), and that (2) the logarithmic normalization used for both static and dynamic z-scores produces Gaussian distributions in either case.  In other words, the correction toward a Gaussian distribution is simply a logarithmic transform, and the choice of transform is the same whether dynamic or static data are used.
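Point (2) can be illustrated with synthetic data (a sketch: the log-normal parameters below are arbitrary, not taken from any normative database).  Band power drawn from a log-normal distribution is strongly right-skewed; taking the logarithm brings its skewness near zero, and the same transform applies whether the samples are instantaneous or epoch-averaged.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic band-power samples; EEG power is roughly log-normal
power = rng.lognormal(mean=1.0, sigma=0.5, size=10_000)

def skewness(x):
    """Sample skewness: third standardized moment."""
    x = np.asarray(x)
    return float(np.mean((x - x.mean()) ** 3) / x.std() ** 3)

skew_raw = skewness(power)            # strongly right-skewed
skew_log = skewness(np.log10(power))  # near 0 after the log transform

print(round(skew_raw, 2))
print(round(skew_log, 2))
```

The raw samples fail a normality check by inspection alone, while the log-transformed samples are symmetric, which is what makes z-scores computed on the log scale meaningful in both the static and dynamic cases.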

To further elaborate on this, attached is a Comparison of NeuroGuide Static vs. BrainDX DLL vs. ANI DLL.  How this was done:

- 3 separate ages
- Eyes closed
- NG static maps were made with 10 seconds of EEG
- Each DLL's maps were made with that exact same 10 seconds of EEG
- The 10 seconds were played back in BrainAvatar
- The results are included
The BrainDX values ran larger and the ANI values smaller, but both matched the NeuroGuide static maps in the location and direction of deviations.
Note the following expected findings:
BrainAvatar / BrainDX live maps look like NeuroGuide Deluxe static maps.  As they should.
This shows that using JTFA calculations referenced to an FFT-based database works, and is statistically valid.
BrainAvatar / ANI live maps show lower z-scores, by about 1 S.D. in general.  As they should.
The following comparison shows the typical agreement with coherence maps:
Conclusion: The surface maps display the expected properties:
Maps created using NeuroGuide (FFT processing and FFT-based norms) produce results equivalent to maps created using BrainAvatar (JTFA processing and FFT-based norms), when the same data are used.  Furthermore, the maps created using ANI LZT (JTFA processing and JTFA-based norms) produce the expected result: maps that look similar but exhibit lower deviations (less coloration), in the same direction as either of the maps using FFT-based norms.  This demonstrates the expected agreement between FFT->FFT z-scores and JTFA->FFT z-scores, and it justifies using JTFA-based calculations with FFT-based norms to produce z-scores useful for instantaneous training or for assessment.
