
A chapter from the Textbook of Neurofeedback, EEG Biofeedback and Brain Self Regulation
edited by Rob Kall, Joe Kamiya and Gary Schwartz
The E-book is Available on CD Rom
Brain Mapping and Neurofeedback.

Author: Jaime Romano MD

Clinical Neurophysiologist
Visiting Researcher, UCLA
Fellow member of the American Society of Clinical Neurophysiology.
Former President of the Mexican Society of Clinical Neurophysiology.
Liaison of the Mexican-American Society of Clinical Neurophysiology.
Professor: Universidad Anahuac, Mexico City.
Director: Centro Neuro Psico Pedagógico S.C.
Address: Gral. León # 38
Col. San Miguel Chapultepec
Mexico City, Mexico 11850
Phone: (525) 515-2070
E-mail: samiari@servidor.unam.mx

Acknowledgements:
The author wishes to thank his wife Dafna, his colleagues Miriam Soffer and Guillermo Mercado and Rob Kall for their valuable comments, Zerene Klip and Sven Doehner for their support in the translation of this chapter, and Beni Gojman for assistance in the elaboration of the figures.

Historical perspective.

Genesis of the EEG.

Instrument and Recording Techniques:

Electrode placement

Amplifiers:

Montages:

Artifacts:

External

Instrumental

Physiological

Eye movements.

Myogenic.

Artifacts related to cardiac activity

EKG artifacts

Pulse artifacts.

Artifacts arising from the scalp and skin

Quantitative EEG and Brain Mapping.

Digitizing EEG.

Spectral Analysis.

Brain Mapping.

Basis of EEG Analysis.

Normative Databases.

Brain Mapping and Neurofeedback.

Historical perspective.

Neurofeedback and Brain Mapping represent two new fields in Neuroscience. Their evolution over the last two decades has been made possible by technological development, and more specifically by the introduction of computers into the fields of Neurophysiology and Psychophysiology.

As in any new field of knowledge, Neurofeedback and Brain Mapping are going through different stages of development. At present, Neurofeedback is in a stage of growth and maturation; with time, cumulative experience and evidence will reveal its strengths and weaknesses until it assumes its proper place and its usefulness in different applications becomes clear.

The study of mental functions has gone through a similar process: a remarkable journey that started with the Phrenologists, whose systematic but naive approach attempted to localize mental functions by examining the bumps and shape of the skull.

Through the observation of natural and fortuitous events, it became possible to obtain more information by examining the changes in behavior provoked by specific lesions in the brain. Cases like that of Phineas Gage, who was struck by a massive rod that destroyed his frontal lobe, and those other patients reported by Broca and Wernicke with cerebral strokes that produced lesions in specific areas of the cerebral cortex, contributed to understanding the specific role of certain brain areas in behavior.

These and other events supported the localizationist theory of mental functions, until it was modified by Wilder Penfield's findings (Penfield, W. and Rasmussen, T., 1957).

By stimulating the surface of the brain with a weak electric current, Penfield was able to explore the cerebral cortex of awake individuals directly. Through this stimulation, Penfield observed that, as the Phrenologists had proposed, he could delineate specific brain areas related to specific brain functions. Unexpectedly, however, he also found extensive brain areas that were in an apparently "silent" state.

These "silent" areas are considered now to be the most important and interesting in relation to the most complex and elaborated mental functions, which are referred today as the association cortex.

We now conceive of the brain as a highly complex and organized system. It is composed of millions of elements that communicate through an intricate set of protocols, forming microsystems that are in turn organized into macrosystems. Together they integrate a harmonious whole with progressive levels of analysis and processing of information, from which progressively complex behaviors emerge.

On the other hand, Psychology, as another discipline interested in understanding mental functions, had its own evolution. It started with John Locke's philosophy, and then with Wilhelm Wundt, who founded the first laboratory of experimental psychology in 1879. Later, partly because of disputes about methods of doing research on mental or "inner" life, American psychologists from about 1920 to 1960 showed little concern with mental processes, focusing their attention instead on behavior itself. Watson led this movement, which was known as behaviorism. It was not until then that Psychophysiologists began to study the interrelationships of physiological and cognitive or mental processes.

Within the field of Applied Psychophysiology, Clinical Biofeedback was developed to help people alter their behaviors using information about their own physiological processes, including muscle activity, peripheral blood flow, cardiac activity, sweat gland activity, and blood pressure. As computer technology evolved, it became possible to record and analyze electrical brain activity, and the field of Neurofeedback began.

It is obvious that the neurological and psychological sciences are narrowing their differences. The neurological sciences started from a description of the "machinery of the mind" and their interest now shifts toward function. The behavioral sciences started from the opposite side and currently focus their interest on finding the structural correlates of mentality.

Neurofeedback provides the opportunity for that encounter, because it is based on controlling the function of the cerebral cortex ("the machinery"), which is the site of mental functions ("mentality").

Neurofeedback began in the late 1960s, when Kamiya (1969) reported that it was possible to voluntarily control Alpha waves. Other investigators (Beatty, Greenberg, Deibler, & O'Hanlon, 1974; Fehmi & Selzer, 1980; Fox & Rudell, 1968) carried out further experiments on Theta waves, evoked cortical responses, and EEG phase synchrony in specialized learning processes.

More experiments followed with specific rhythms, such as the sensorimotor rhythm, which emerged as having therapeutic effects in epilepsy (Sterman, 1982) and in patients with Attention Deficit Disorders (Lubar, 1991).

Since then, an increasing number of training protocols targeting specific frequencies and ratios have reported success in treating a wide variety of disorders, such as addictive behaviors (Ochs, 1992; Rosenfeld, 1992; Peniston & Kulkosky, 1989), posttraumatic disorders (Montgomery, D.D., 1994) and stroke rehabilitation (Rozelle, G.R., 1995), among others.

Electroencephalography has its own history, starting in the early 1920s.

Electroencephalography has evolved tremendously since 1924, when Hans Berger was able to record an electric signal from his son's brain for the first time. Berger was obsessed with finding material events (electric brain potentials) related to mental phenomena, among which he included telepathy. Although he did not accomplish his final goal, he established the fundamentals necessary for the development of a powerful instrument for the analysis of cerebral cortical function (Berger, H., 1969).

At present, no one questions the fact that the cerebral cortex is the site of mental functions. The EEG is the method that records the function of this "enchanted loom". It represents a window through which we can examine the functioning of this "machinery of the mind".

With the passage of time, electroencephalography has shown its utility in the diagnosis and characterization of different pathologies that affect brain functioning with a well-defined application in neurology and an increasingly important one in neuropsychiatry.

Clinical Neurophysiology evolved as a branch of medicine and has become a specialty in itself. At present, clinical neurophysiologists are grouped in local societies, which are part of the International Federation of Clinical Neurophysiology. There are also local councils that certify practitioners and promote a rational and responsible use of this technique.

Genesis of the EEG

The diagnostic and investigative use of EEG is a little over 70 years old. Despite Berger's (Berger, 1969) careful and systematic descriptions of the human EEG, beginning in 1929, scientists at first had difficulty accepting that Berger's recordings were generated by, or even related to, actual brain events. It was not until Adrian and Matthews (Adrian and Matthews, 1934) validated his findings that the scientific community began to seriously investigate, with enormous productivity, the neural origin of brain wave activity.

Adrian (Adrian, E.D., 1936) and Bishop (Bishop, G.H., 1936) thought that EEG waves represented summations of individual action potentials occurring in many neurons. Jasper (Jasper, H.H., 1936), though initially of the same opinion, proposed that cortical potentials reflect slower processes, more variable in time, than those characteristic of peripheral nerve fibers. Dempsey and Morrison (Dempsey, E.W. and Morrison, R.S., 1941, 1942) further contributed to the understanding of the influential role of thalamocortical projections in EEG rhythmicity.

In 1951, Eccles (Eccles, 1951) proposed that synaptic electrogenesis was the essential mechanism underlying most EEG activity. There is now ample evidence that the generator sources for EEG waves reside within the cerebral cortex. Although individual action potentials do not appear to contribute directly to EEG activity, mainly because of their short duration, the cumulative evidence supports the concept that surface- and scalp-recorded electrical activity results from extracellular current flow associated with summated excitatory and inhibitory postsynaptic potentials (Creutzfeldt, O., Watanabe, S., and Lux, H.D., 1966a,b; Goldensohn, E.S., and Purpura, D.P., 1983; Goldensohn, E.S., 1979; Humphrey, D.R., 1968a,b).

Mostly because of the morphology and orientation of cortical pyramidal cells, whose bodies lie deep within the fifth cortical layer and whose long apical dendrites extend to the first layer, a dipole is created. Its electromagnetic fields oscillate under the influence of subcortical structures (mainly the thalamus and the reticular formation) that work in a synergic fashion. The current sources of brain electrical activity also generate magnetic fields, which can now be recorded by magnetoencephalography (MEG).

For a more detailed explanation of the physiological basis of the EEG, the reader may consult Eli S. Goldensohn, chapter 15, in Current Practice of Clinical Electroencephalography, edited by Klass and Daly, Raven Press, 1979.

Instrument and Recording Techniques:

Electrode placement

With the evolution of technology since Berger's time, more sophisticated multichannel equipment has become available, and it has become possible to record from different parts of the brain simultaneously.

In order to standardize the methodology for applying electrodes to the skull, in 1949 the International Federation of Societies for Electroencephalography and Clinical Neurophysiology (IFSECN) adopted a system proposed by Jasper, which has since been adopted worldwide. (See Fig. 1)

For a detailed explanation of the method of electrode application according to the 10-20 International System, please refer to: Harner, P.F. and Sannit, T., A Review of the International Ten-Twenty System of Electrode Placement, Grass Instrument Co., 1974.

Amplifiers:

Differential amplifiers are used in clinical Neurophysiology. Because of the way they are designed, these amplifiers are capable of eliminating a wide range of artifacts. Artifacts are probably the main obstacle to a correct analysis of brain activity; they interfere with traditional as well as computerized analysis of EEG. In computerized analysis, artifacts are even more critical than in visual analysis of the EEG, so this point will be discussed further in the section on artifacts.

Differential amplifiers record potential differences between a pair of electrodes, performing a simple algebraic subtraction between them. Differential amplifiers have two inputs, called G1 and G2, or input terminal 1 and input terminal 2. They are designed so that when the electrical activity entering G1 is identical to the activity entering G2, the two cancel each other out, because an algebraic subtraction is performed. This ability of differential amplifiers to reject identical (common mode) signals is quantified by the "common mode rejection ratio" (CMRR).

The environment is an important source of electric and electromagnetic artifacts, and the human body, being an excellent conductor of electricity, acts like an antenna capturing them. When electrodes are applied to the head, these artifacts are recorded equally by G1 and G2 and are therefore canceled out by the differential amplifier's common mode rejection.

It is also important to note that, in order to take full advantage of CMRR, one must not only obtain a good contact between electrode and scalp, with an impedance lower than 10 kOhms, but also keep the impedances at G1 and G2 equal.

Impedance is a measure of the resistance to current flow. To minimize impedance at the electrode/scalp interface, careful preparation of the skin is mandatory: alcohol is used to dissolve the sebum secreted by the sebaceous glands and, if required, an abrasive substance may be used to remove the horny layer of dead epidermal cells.

Unequal impedances at G1 and G2 will attenuate the incoming signal differently in the two leads, resulting in inadequate rejection of common mode signals.
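
As a minimal numerical sketch of this principle (the sampling rate, amplitudes and frequencies below are illustrative assumptions, not values from this chapter), the following Python fragment shows how the algebraic subtraction performed by a differential amplifier cancels a 60 Hz artifact common to both inputs while preserving the difference in brain activity:

    import numpy as np

    fs = 256.0                                           # sampling rate in Hz (assumed)
    t = np.arange(0, 2, 1 / fs)                          # two seconds of samples

    brain_g1 = 20e-6 * np.sin(2 * np.pi * 10 * t)        # alpha-like activity under G1 (volts)
    brain_g2 = 5e-6 * np.sin(2 * np.pi * 10 * t + 1.0)   # different activity under G2
    mains = 500e-6 * np.sin(2 * np.pi * 60 * t)          # 60 Hz interference common to both inputs

    g1 = brain_g1 + mains        # each input carries its own brain signal
    g2 = brain_g2 + mains        # plus the same environmental artifact
    output = g1 - g2             # the algebraic subtraction performed by the amplifier

    # The shared 60 Hz term cancels; only the difference in brain activity remains.
    print(np.allclose(output, brain_g1 - brain_g2))      # True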

For a further discussion of this topic, the author recommends the excellent chapter by Charles J. Peek, "A Primer of Biofeedback Instrumentation" (Peek, C.J., 1995).

Montages:

Mainly because there is some confusion regarding the montages used in Electroencephalography, we will provide further detail.

Strictly speaking, all montages used in EEG are bipolar, in the sense that differential amplifiers require one input through G1 and another through G2.

The traditional montages used in EEG are of three types: referential (inadequately referred to as "monopolar"), bipolar and Laplacian.

In referential montages, input terminal 2 (G2) of every channel is connected to a common electrode, most commonly the ipsilateral auricle or linked ears, or to an active electrode such as Cz.

In bipolar montages, both G1 and G2 are connected to "active" recording electrodes.

In Laplacian montages, the "active" electrode is connected to G1, and the reference at G2 is derived from the average voltage of its nearest neighbors. For example, for F4, the simplest value of the "local average" would be (Fp2+Fz+C4+F8)/4.
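
A minimal sketch of this arithmetic, assuming hypothetical instantaneous voltages at each electrode, might look as follows in Python:

    import numpy as np

    # Referential voltages (microvolts) at one instant, all recorded against the same reference.
    referential = {"F4": 12.0, "Fp2": 8.0, "Fz": 10.0, "C4": 9.0, "F8": 7.0}

    # The nearest neighbors of F4 in the 10-20 system form the local average used at G2.
    neighbors = ["Fp2", "Fz", "C4", "F8"]
    local_average = np.mean([referential[ch] for ch in neighbors])

    # Laplacian derivation for F4: the active electrode minus its local average reference.
    laplacian_f4 = referential["F4"] - local_average
    print(laplacian_f4)          # 3.5 with these hypothetical values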

It has been proposed that bipolar montages are better than referential or Laplacian montages for localizing phasic events such as spikes or sharp waves. This is only true for visual or conventional EEG analysis, when bipolar montages are designed so that they link serial pairs of electrodes in straight longitudinal or coronal lines. In linked chains, a single electrode becomes common to two adjacent channels, but it is connected to G2 in the first channel and to G1 in the second. The site of maximal voltage within a field then shows as a phase reversal: simultaneous pen deflections in opposite directions in the two channels sharing the common electrode. The only advantage of this arises in visual analysis of the EEG, because phase reversals readily capture the attention of the human eye.

Bipolar montages are of less significance when one is doing computerized EEG.

It has been said that bipolar montages have a greater common mode rejection ratio than referential ones, which is inaccurate because, as mentioned previously, CMRR is a characteristic of the design of the differential amplifier and does not depend on the type of montage. CMRR does not depend on whether G2 is connected to a reference or to an "active" electrode.

Bipolar montages also have the disadvantage that one can only infer (and not compare directly) the activity at individual electrodes, because voltage and polarity are always relative; information about voltage and phase is lost. A referential recording permits clear characterization of widespread or complex potential fields, unambiguous determination of polarity and voltage, and less distortion of EEG activity.

Artifacts:

Any activity not arising from the brain is considered an artifact.

The equipment used in EEG is very sophisticated and very sensitive. It has to greatly amplify the signals coming from the cortical cells, which are on the order of 30 to 200 microvolts. This minuscule activity must also pass through a variety of tissues that attenuate it, and although the equipment is designed with very advanced technology, using differential amplifiers, it still captures a wide array of signals coming from outside the brain.

There are three main sources of artifacts:

1.- External.

2.- Instrumental, including the interface between the patient and the instrument (electrodes)

3.- Physiological.

No matter how meticulous the technique used to apply the electrodes, and even when the equipment is adequately calibrated and in ideal condition, even the best technician has problems reducing or eliminating artifacts.

When an EEGer performs a visual analysis of an EEG, he has learned to recognize artifacts, to take them into account, and to reliably extract the residual information in the data. But when one is doing quantitative EEG and brain mapping, the computer cannot do the same; in this case it is mandatory to introduce artifact-free data, which is very hard to do. Moreover, if one is using databases to analyze EEG activity, one has to make sure that the activity reflects brain function and not artifacts.

Recognizing artifacts is not an easy task. Although one may be familiar with the main sources of artifacts and with what they look like, one has to take into account different parameters of analysis in order to detect and characterize them. These parameters include localization, distribution, waveform, manner of occurrence, polarity and potential field distribution.

First of all, one has to know that brain activity propagates through brain tissue in a particular way. The brain is a volume conductor, and brain activity propagates differently than artifacts do, so it is important to analyze the distribution of the suspected artifact. It is also important to become familiar not only with the shape and distribution of artifacts, but also with all the electroencephalographic elements of pathological significance, and their variants at different ages and under different conditions, that could resemble artifacts. For example, slow paroxysmal monorhythmic activity during drowsiness can be normal in children but pathological if seen in adults, or during wakefulness.

Although true and efficient artifact rejection has to be performed by a well-trained EEGer, I will illustrate some principles, since it is not possible to demonstrate every example. (Please refer to Figs. 2a, 2b, 2c and 2d)

External:

The most important external artifact is 60 Hz interference. Almost any kind of electric or electronic equipment connected to the mains generates this artifact, because mains electricity is alternating current that alternates at 60 cycles per second.

With modern EEG equipment that uses differential amplifiers (common mode rejection), proper electrode application technique, and the help of notch filters, this 60 Hz artifact should not appear in routine work.
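
As an illustrative sketch (assuming a 256 Hz sampling rate and a quality factor of 30, values not taken from this chapter), a notch filter of this kind can be applied digitally with SciPy:

    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    fs = 256.0                    # sampling rate in Hz (assumed)
    f0, q = 60.0, 30.0            # notch center frequency and quality factor (assumed)

    # Design a second-order IIR notch filter centered on the mains frequency.
    b, a = iirnotch(f0, q, fs)

    # Synthetic record: 10 Hz "brain" activity contaminated by 60 Hz interference.
    t = np.arange(0, 2, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.8 * np.sin(2 * np.pi * 60 * t)

    # Zero-phase filtering removes the 60 Hz component without shifting the waveform in time.
    clean = filtfilt(b, a, eeg)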

In order to eliminate this artifact, all the equipment used should be connected to the same grounded terminal. A common ground for all the equipment is mandatory in order to protect the patient from any current leakage. Likewise, as discussed before, it is important to have similar electrode resistances; having similar resistances matters more than having low-value resistances.

Some of the most frequent sources of 60 Hz interference are:

a. Poor skin preparation and electrode contact.

b. A deficient grounded terminal.

c. Nearby generators, motors, elevators, etc.

d. Electromagnetic fields from fluorescent lights and air conditioning.

Other external artifacts can arise from electrostatic potentials developed by clothing (particularly nylon and other artificial fibers), movement of people in the recording room, nearby radio or TV stations, radar transmitters, radio paging, cellular phones, etc.

Instrumental

So many artifacts can arise from the instrument that it is impossible to describe them all. Any part of the instrument, such as the electrodes, cables, switches, amplifiers, oscillographs, computer or A/D converter, can generate them.

Electrodes can produce "popping" that can be misinterpreted as epileptic spikes; silver-silver chloride electrodes may occasionally produce rhythmic delta/theta patterns; dirty electrodes cause bizarre patterns.

Electrode wires produce artifacts from static and capacitive potentials that can mimic brain activity very closely. A broken wire also produces bizarre patterns.

An important and frequently overlooked source of artifacts is inaccurate calibration of the equipment. Deficient calibration can produce artificial asymmetries, which can lead to errors in interpretation.

Inadequate use of the filters and sensitivity controls of the equipment can introduce artificial elements into the recording of brain activity.

Physiological

The appearance of potentials from physiological sources other than the brain is an almost invariable feature of EEG recording. In visual analysis of EEG, they may provide useful information about the state of the patient, but for brain mapping, they obscure the results.

Eye movements. These are regular features of any EEG recording. Eye movement artifacts arise because there is a potential difference of some 5-10 mV between the cornea and the retina, the former being electropositive with respect to the latter; thus a dipole is created. Movement of the eye changes the orientation of this dipole, creating a large-amplitude AC field that is recorded by the neighboring electrodes. The frontal electrodes (Fp1 and Fp2) are more sensitive to vertical eye movements, and the frontotemporal electrodes (F7 and F8) to horizontal ones.

Eye movements are a common source of frontal slow waves and may easily be confused with activity of cerebral origin. It may be useful to monitor eye movements in order to differentiate the two.

Myogenic. These artifacts arise from the surface EMG of the muscles underlying the scalp (frontalis, occipitalis, temporalis, nuchal and auricular muscles) and of some muscles in the neck, particularly the sternocleidomastoid.

Muscle artifact varies from continuous high-amplitude deflections that obscure the EEG to small bursts that may mimic epileptic spiking.

Muscle artifacts usually comprise a profusion of fast, spike-like potentials of variable duration and amplitude, which can sometimes be indistinguishable from beta activity.

The localization and distribution of these artifacts depend on which muscle is contracting, since its activity is picked up by the neighboring electrodes.

Muscle artifacts are more easily confused with Beta activity in computer analysis of the EEG, because such analysis relies only on frequency, leaving out the analysis of shape and phase; these are important for differentiating the two, and are better evaluated by visual analysis.

Artifacts related to cardiac activity. These are of two different types:

1.- Directly related to cardiac electric potential (EKG).

2.- Related to pulsating arteries.

EKG artifacts. The electric potential of the heart can be recorded from practically any part of the body, due in part to its relatively large amplitude and to the fact that the body is an excellent conductor of electric current.

Particularly in obese individuals with short necks, the EKG is rather large over the head. The cardiac potential is of highest amplitude in the A1 and A2 leads, and is a frequent contaminant of ear-referential montages.

The R wave or the QRS complex can appear in the EEG record, usually taking the form of a repetitive spike or sharp wave in time with the pulse. An important feature differentiating the cardiac artifact from EEG activity is its periodicity, at about 1 per second (depending on the age and physical fitness of the individual). The repetition rate will vary in patients with cardiac arrhythmias.

Pulse artifacts. These artifacts look like slow periodic deflections of the EEG baseline, caused by variation in the contact resistance of an electrode in the vicinity of a pulsating artery. Usually the frequency, the waveform and a localized distribution identify the artifact.

Artifacts arising from the scalp and skin

Sweating and contraction or dilatation of the superficial skin capillaries give rise to changes in conductivity and in skin potential. These, in turn, produce slow potential changes at the amplifier inputs, typically registered as long-duration slow activity, smooth in outline and of high amplitude.

These changes in skin conductivity are mostly related to the high NaCl and lactic acid content of sweat.

A number of other artifacts exist, which can be further consulted in Brittenham, D.M. (1979).

Quantitative EEG and Brain Mapping

Not surprisingly, the earliest researcher to anticipate the use of numerical computation in EEG was Hans Berger. Berger collaborated with a physicist, G. Dietsch, of the Institute of Technology and Physics at Jena. Together they worked on the theoretical basis for calculating the frequency spectrum of the EEG using the Fourier transform.

Although the theoretical basis was previously established, quantitative analysis of EEG had to wait until computers were available.

Albert Grass, Frederick Gibbs (Grass and Gibbs, 1938), W. Grey Walter and Reginald G. Bickford (Bickford, R.G., 1979) were among the investigators who pioneered the area of computerized EEG.

Dr. John Knott built a frequency analyzer in 1949 at the University of Iowa, in collaboration with Drs. Charles Henry, F. Gibbs and Albert Grass. This group was the first to coin the term "CSA", for "continuous or compressed spectral array"; Reginald Bickford, at the University of California, San Diego (UCSD), developed and introduced the technique in 1972.

As computer technology developed and faster computers and color monitors became available, the processing and display of analyzed EEG went a step further, until brain mapping was created.

A number of investigators have contributed to the development of quantitative EEG. Some of the pioneers in this area were M.A.B. Brazier (1961), E.R. John (1988), M.R. Nuwer (1988a, 1988b), A.S. Gevins (1975), G. Dumermuth (1987) and F.H. Duffy (1979), just to mention a few.

I first became interested in the area of Brain Mapping in 1980, when I was in charge of the Department of Neurophysiology at the National Institute of Mental Health in Mexico City, after spending three years at UCLA doing basic research on the neurophysiology of memory. When I returned to Mexico, I had the opportunity to work with the first equipment in my country capable of performing spectral analysis of EEG.

Since then I have been actively involved, among other things, in the development of brain mapping software, with which all the graphics shown in this chapter were created.

Digitizing EEG:

In order to perform digital processing of EEG, one needs to transform the analog EEG signal into digits (numbers). An analog signal is defined as a signal that changes continuously over time (all natural signals are analog: sound, light, heat, movement, etc.).

Computers are digital instruments, so in order to process EEG, the first step is the digital transformation, which is performed by an analog-to-digital (A/D) converter.

What an A/D converter does is sample the voltage of the incoming EEG at fixed intervals of time, thus performing the digital conversion. An important feature of A/D conversion is the sampling rate, that is, the frequency at which the signal is sampled (the reciprocal of the interval between successive samples).

The sampling rate has to be chosen in accordance with the Nyquist theorem, which states that it should be at least double the highest frequency one wants to sample. For example, to digitize EEG activity ranging from 1 to 30 Hz, the sampling rate should be at least 60 Hz. As one increases the sampling rate, the shape of the signal is better reproduced; but if the intention is only to do frequency analysis, then sampling according to the Nyquist theorem is sufficient.
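
A minimal sketch of the theorem and of the aliasing error it guards against (the frequencies below are illustrative assumptions):

    import numpy as np

    f_signal = 30.0               # highest EEG frequency of interest (Hz)
    print(2 * f_signal)           # minimum sampling rate per the Nyquist theorem: 60 Hz

    # Undersampling demonstration: a 30 Hz sine sampled at only 40 Hz produces
    # samples identical (up to sign) to those of a 10 Hz sine, i.e., it aliases.
    fs = 40.0
    t = np.arange(0, 1, 1 / fs)
    undersampled = np.sin(2 * np.pi * f_signal * t)
    alias = np.sin(2 * np.pi * (fs - f_signal) * t)   # 10 Hz = fs - f_signal
    print(np.allclose(undersampled, -alias))          # True: indistinguishable after sampling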

Once the EEG signal is converted into numbers, mathematical calculations can be performed. A simple way of increasing the amplitude of the signal is to multiply the sampled voltages by a factor. By computer manipulation of the data, it is also easy to transform referential montages into bipolar ones, by simply subtracting the outputs of two appropriately chosen channels. For example, if F3-A1 and C3-A1 were collected, the bipolar channel F3-C3 can be obtained as (F3-A1)-(C3-A1).
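
A minimal sketch of this re-montaging arithmetic, with hypothetical digitized samples:

    import numpy as np

    # Hypothetical digitized referential channels (microvolts), both recorded against A1.
    f3_a1 = np.array([12.0, 10.5, 9.0, 11.0])   # channel F3-A1
    c3_a1 = np.array([4.0, 5.5, 6.0, 5.0])      # channel C3-A1

    # Sample-by-sample subtraction cancels the common A1 term: (F3-A1)-(C3-A1) = F3-C3.
    f3_c3 = f3_a1 - c3_a1

    # Amplitude scaling is equally simple once the signal is numeric.
    amplified = 2.0 * f3_c3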

Spectral Analysis.

One of the main features analyzed in the EEG is frequency.

Traditionally, EEG frequency has been separated into frequency bands:

Delta: from 0.1 to 4 Hz

Theta: from 4 to 8 Hz

Alpha: from 8 to 13 Hz

Beta: from 13 Hz up

Arranging EEG frequencies into bands was useful at the beginning of EEG analysis because of the limitations of visual analysis. Quantifying frequency visually is an almost impossible task: what we see in an EEG tracing is the result of a combination of frequencies, and visually quantifying a frequency would involve counting each component of a rhythm in 1-second intervals. One can imagine how time-consuming, if not impossible, it would be to count every rhythm in an 8- or 16-channel tracing for the whole duration of the EEG.

Fortunately, with computer analysis we are now able to quantify frequency very efficiently. Spectral decomposition of the EEG can be performed by Fourier analysis, which allows separation of the various rhythms and estimation of their frequencies independently of each other, a task difficult to perform visually when several rhythmic activities occur simultaneously. Spectral analysis can also quantify the amount of activity in a frequency band.
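
As an illustrative sketch (synthetic data and an assumed 256 Hz sampling rate, not values from this chapter), band quantification by Fourier analysis can be expressed in a few lines of Python:

    import numpy as np

    fs = 256.0                                  # sampling rate in Hz (assumed)
    t = np.arange(0, 4, 1 / fs)                 # a four-second epoch
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 6 * t)   # synthetic mixture

    # Fourier transform of the epoch; freqs holds the frequency of each spectral bin.
    power = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)

    # Summing power over each traditional band quantifies the activity in that band.
    bands = {"delta": (0.1, 4.0), "theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}
    band_power = {name: power[(freqs >= lo) & (freqs < hi)].sum()
                  for name, (lo, hi) in bands.items()}
    print(band_power)                           # the 10 Hz (alpha) component dominates here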

Spectral analysis is based on the Fourier theorem, developed in the 19th century by the French mathematician Joseph Fourier, who was obsessed with the idea of analyzing the propagation of heat through solids. In his treatise The Analytical Theory of Heat (1822), he employed a trigonometric series, usually called the Fourier series, by means of which discontinuous functions can be expressed as the sum of an infinite series of sines and cosines.
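
In modern notation (a standard formulation, included here for clarity), the Fourier series of a function f(t) with period T is

    f(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[ a_n \cos\!\left(\frac{2\pi n t}{T}\right) + b_n \sin\!\left(\frac{2\pi n t}{T}\right) \right],

with coefficients

    a_n = \frac{2}{T}\int_0^T f(t)\cos\!\left(\frac{2\pi n t}{T}\right)dt, \qquad b_n = \frac{2}{T}\int_0^T f(t)\sin\!\left(\frac{2\pi n t}{T}\right)dt.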

In order to understand what Fourier analysis does to the EEG, we can compare it to what happens to light when it passes through a glass prism: the beam of light decomposes into its main components, yielding its spectrum.

Since the EEG is composed of a mixture of frequencies, its spectrum can likewise be obtained when it is processed by Fourier analysis. (Fig. 3)

Spectral analysis is only one of a wide variety of EEG analysis techniques in the time and frequency domains: power spectrum, coefficient of variation, coherence, ratios, period-amplitude analysis and zero-crossing analysis, just to mention a few.

There are more than ten thousand pages in the literature, which cannot be summarized here, so the interested reader can consult: Bickford, 1973; Brazier, M.A.B., 1961; Dietsch, G., 1932; Duffy, F.H., 1986; Frost, J.D. Jr., 1987; Gevins, A.S., 1984; Hjorth, B., 1986; John, E.R., 1977; Kellaway, P., 1973; Nuwer, M.R., 1987; Lopes da Silva, F.H., 1977.

Brain Mapping:

After the EEG has been digitized and processed, a number of display formats are available, including colored bar displays, compressed spectral arrays, histograms, numerical tables and topographic maps.

CSA and Brain Mapping are the two types of display most frequently used for Neurofeedback; therefore we will focus only on these.

The CSA, developed by Reginald Bickford, consists of performing the spectral analysis of the EEG, sorting the mixed frequencies into an orderly sequence from low to high (0.25 to 16 or more Hz), and plotting the successive spectra stacked one upon another in sequential order. (See Fig. 4)

Brain Mapping involves the construction of a topographic map from the results of a multi-channel recording analysis. Interpolation is required to build these maps: starting from the values measured at each electrode, the values at intermediate locations are mathematically calculated by assuming smooth changes of the values between electrodes.
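
One simple interpolation scheme consistent with this description is inverse-distance weighting; the sketch below (hypothetical electrode coordinates and values, and only one of several schemes used by mapping software) illustrates the idea:

    import numpy as np

    # Hypothetical 2-D head coordinates (x, y) and band voltages for four electrodes.
    electrode_xy = np.array([[0.0, 1.0],      # frontal site
                             [0.0, -1.0],     # occipital site
                             [-1.0, 0.0],     # left temporal site
                             [1.0, 0.0]])     # right temporal site
    values = np.array([3.0, 8.0, 5.0, 5.0])   # measured values at those electrodes

    def interpolate(point, coords, vals, power=2.0):
        """Inverse-distance weighting: nearer electrodes dominate the estimate."""
        d = np.linalg.norm(coords - point, axis=1)
        if np.any(d == 0):                    # the point coincides with an electrode
            return vals[np.argmin(d)]
        w = 1.0 / d ** power
        return np.sum(w * vals) / np.sum(w)

    # Estimate the value at an intermediate scalp location, e.g., the center of the grid.
    print(interpolate(np.array([0.0, 0.0]), electrode_xy, values))   # 5.25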

Interpolated values can be displayed in different ways; currently the most popular is to relate each value to a color, most commonly using a color spectrum scale with the hues arranged in an orderly fashion. When no negative values are allowed, e.g., in frequency analysis (the phase is lost by Fourier analysis), blue hues represent low values and red hues high values. When both positive and negative values are present, as in voltage distribution maps, blue hues represent negative polarity and red hues positive polarity.

Brain maps can represent different types of analysis or information, e.g., voltage distribution at one instant of time, frequency data at one frequency or in frequency bands, or a z-score of such time or frequency activity. Other more complex representations are also possible (Nuwer, 1988).

In the time domain, a map can be displayed for one instant of time, which is useful for analyzing the potential field distribution of a phasic or transient event such as an epileptic spike. A series of maps can be displayed for progressive periods of time in order to assess how such events evolve over milliseconds. (Fig. 4)

In the frequency domain, maps can represent the voltage or power distribution at individual frequencies or in frequency bands. (Fig. 5)

In these maps, the color, or the intensity of gray on a gray scale, corresponds to the voltage or power at that specific frequency or band.

Displaying these maps on relative and absolute scales yields different and complementary information. (Fig. 6)

On a relative scale, all the bands or individual frequencies use the same scale, in which the lowest value corresponds to the lowest voltage encountered and the highest to the highest voltage, regardless of the band or frequency. This scale provides important information about topographic distribution as well as the relative magnitude of the frequencies or frequency bands.

On the absolute scale, each frequency or band uses an independent scale in which the lowest value corresponds to the lowest voltage and the highest to the highest voltage encountered in that specific frequency or band. This scale permits precise analysis of the distribution of each frequency or frequency band, including those of relatively low voltage, because every map will show the full color scale. It emphasizes the topographic distribution of the frequency or frequency bands.
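
A minimal sketch of the two scalings, using hypothetical maps for two bands:

    import numpy as np

    # Hypothetical interpolated maps: one small 2-D grid of voltages per band.
    maps = {"theta": np.array([[2.0, 3.0], [2.5, 4.0]]),
            "alpha": np.array([[8.0, 12.0], [9.0, 15.0]])}

    # Relative scale (as defined above): one common scale for all bands, preserving
    # the magnitude differences between them.
    all_values = np.concatenate([m.ravel() for m in maps.values()])
    lo, hi = all_values.min(), all_values.max()
    relative = {band: (m - lo) / (hi - lo) for band, m in maps.items()}

    # Absolute scale (as defined above): each band scaled independently, so every map
    # spans the full color range and its own topography is emphasized.
    absolute = {band: (m - m.min()) / (m.max() - m.min()) for band, m in maps.items()}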

Basis of EEG Analysis.

Analysis and interpretation of the EEG is both a science and an art. On one hand, it is a rational and systematic process involving a series of orderly steps to characterize the electrical activity of the brain in terms of specific parameters such as:

a. Frequency

b. Voltage

c. Locus

d. Interhemispheric Coherence (Symmetry and Synchrony)

e. Waveform

f. Regulation

g. Manner of Occurrence

h. Reactivity

On the other hand, the EEGer has to evaluate and correlate all these results in the light of the specific patient and conditions in order to derive a "clinical impression": an assessment of the probable significance of the EEG findings in relation to the patient's history and clinical findings.

EEG analysis is so complicated that it requires arduous and constant training. An EEGer has to know what a normal EEG looks like at different ages and in different states and conditions, how the different pathologies express themselves in the EEG, the normal variants, and the artifacts. And an EEGer who wants to include quantitative EEG techniques needs to be an expert in computer analysis of EEG as well. Interpretation of computerized EEG requires substantial clinical experience, and the interpreter must understand that the increased power of these techniques also increases their potential for misinterpretation.

Because of the complexity of visual analysis, which would be impossible to discuss properly in this chapter, I will instead compare the advantages and limitations of visual and quantitative analysis of EEG.

For a more detailed discussion of this topic, the reader is referred to Kellaway, P. (1990): An orderly approach to visual analysis. In: Current Practice of Clinical Electroencephalography, second edition, edited by D. Daly and T. Pedley, pp. 139-199. Raven Press, New York.

In general terms, computer analysis is superior to visual analysis for characterizing frequency, voltage, locus and interhemispheric coherence (symmetry and synchrony), whereas visual analysis has the advantage over computer analysis with respect to waveform, regulation, manner of occurrence and reactivity.

Although there are some computer algorithms for analyzing waveform, regulation, manner of occurrence and reactivity, these parameters are still better analyzed in the traditional way. For example, in terms of shape or waveform, once one has seen a spike-and-slow-wave complex, it is easy to recognize other spike-and-slow-wave complexes, irrespective of their variations. It is somehow easy for the human eye to generalize from a prototype, but writing a computer program to do the same is an almost impossible task. The same happens with regulation: although one can apply autocorrelation formulas to quantitative EEG in order to quantify variations of the activity over time, this does not give the information about the continuous variations of the EEG that visual analysis provides, which is clinically relevant.

One of the important advantages of brain mapping over visual analysis is its power for localization, for analyzing the distribution of frequency bands, and for making comparisons between homologous sites. For example, if one constructs a map of Alpha distribution, one might see a red area over the posterior head near the occipital lobes, and an asymmetry can be spotted more easily because the data are displayed in color over the surface of the head. An abnormal increase of low-voltage slow waves might be difficult to see in paper records but would be obvious in a color-coded map.

As mentioned before, there is a wide array of computerized analysis techniques: spectral analysis, coefficient of variation, coherence, ratios, period-amplitude and zero-crossing analysis, significance scores, peak frequency measures, relative frequency analysis, etc. These can provide meaningful data not available or obvious in the raw data, but they also have several disadvantages. They are predisposed to arbitrary, meaningless, irreproducible conclusions, and they can find statistical significance where there is none. This is especially true when discriminant analysis is used to evaluate large amounts of data in research studies. Oken and Chiappa (1986) discuss a specific example of this.

Normative Databases.

Because the use of normative databases is common practice in the field of Neurofeedback, I would like to discuss some of their advantages and limitations.

Normative databases are important to the EEGer for the information they provide about the quantification of different features of the EEG in normal populations, and because they are useful for making comparisons between population groups with different pathologies. Nevertheless, the assessment of the normality of an individual patient presents a substantial array of difficulties.

The first difficulty relates to the technical quality of the data, as discussed previously in relation to artifacts. It is of the utmost importance to have artifact-free samples of EEG: even the small amount of artifact easily tolerated in traditional EEG reading can wreak havoc in computer EEG analysis ("garbage in, garbage out") (Nuwer, 1988).
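
To make the comparison concrete, here is a minimal sketch (with invented normative statistics, not taken from any actual database) of how an individual value is typically expressed as a z-score against a normative database:

    # Hypothetical normative statistics (invented for illustration) for alpha-band
    # voltage at electrode O1 in the patient's age group.
    norm_mean, norm_sd = 22.0, 4.0     # microvolts

    patient_value = 31.0               # the patient's measured alpha voltage at O1

    # z-score: how many standard deviations the patient lies from the normative mean.
    z = (patient_value - norm_mean) / norm_sd
    print(z)                           # 2.25: well above this hypothetical norm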

Patients cannot always be compared to a normal database, even if the technical quality of a record is perfect. The "normal" subjects included in the databases may