Abstract
Purpose
The multifocal visual evoked potential (mfVEP) provides a topographical assessment of visual function, which has already shown potential for use in patients with glaucoma and multiple sclerosis. However, the variability in mfVEP measurements has limited its broader application. The purpose of this study was to compare several methods of data analysis to decrease mfVEP variability.
Methods
Twenty-three normal subjects underwent mfVEP testing. Monocular and interocular asymmetry data were analyzed. Coefficients of variability in amplitude were examined using peak-to-peak, root mean square (RMS), signal-to-noise ratio (SNR) and logSNR techniques. Coefficients of variability in latency were examined using second peak and cross-correlation methods.
Results
LogSNR and peak-to-peak methods had significantly lower intra-subject variability when compared with RMS and SNR methods. LogSNR had the lowest inter-subject amplitude variability when compared with peak-to-peak, RMS and SNR. Average latency asymmetry values for the cross-correlation analysis were 1.7 ms (CI 95 % 1.2–2.3 ms) and for the second peak analysis 2.5 ms (CI 95 % 1.7–3.3 ms). A significant difference was found between cross-correlation and second peak analysis for both intra-subject variability (p < 0.001) and inter-subject variability (p < 0.001).
Conclusions
For a comparison of amplitude data between groups of patients, the logSNR or SNR methods are preferred because of the smaller inter-subject variability. LogSNR or peak-to-peak methods have lower intra-subject variability, so are recommended for comparing an individual mfVEP to previous published normative data. This study establishes that the choice of mfVEP data analysis method can be used to decrease variability of the mfVEP results.
Introduction
The multifocal visual evoked potential (mfVEP) provides a topographical method for assessing visual function. Whereas the conventional pattern reversal visual evoked potential (VEP) represents the sum of the potentials across the tested field, the mfVEP can detect local changes because its multiple visual inputs each generate a VEP in the corresponding region of the visual cortex.
Significant mfVEP changes have been found in patients with multiple sclerosis [1], diabetes [2], optic neuritis [3, 4], ischemic optic neuropathy [5], compressive optic neuropathy [6], glaucoma [7, 8] and optic disk drusen [9].
High intra-subject (within-subject) and inter-subject (between-subject) variability has limited the application of the mfVEP in clinical practice. The variability is due, in part, to a combination of factors including cortical anatomy, skull thickness, the relationship between the cortex and external landmarks, environmental electrical noise, electrode placement, electrode impedance and the patient's attention.
Several studies have attempted to reduce the variability of mfVEPs by using interocular comparison [5], EEG-based scaling [10], selection of best channels [11] and multiple virtual channels [12, 13]. However, the different methods for mfVEP data analysis have not been directly compared, even though they differ between published mfVEP studies.
The two measures typically quantified when performing mfVEPs are amplitude and latency. To account for the effects of various factors (age, gender, etc.) on the amplitudes and latencies, previous work has used multiple regression models, providing waveforms consisting of t-statistics [14]. The quantification of mfVEP amplitude has been assessed using the peak-to-peak method [8, 10, 15–17], the root mean square (RMS) method [5, 18], the signal-to-noise ratio (SNR) [19, 20] and the logarithmic signal-to-noise ratio (logSNR) [21, 22]. Figure 1 illustrates the different methods for quantifying mfVEP amplitude.
The peak-to-peak method measures the amplitude between the largest peak (positive) and trough (negative). Several studies have used the peak-to-peak method, as it produces a value in nanovolts [8, 10, 15–17], and it is the fastest and simplest way of quantifying mfVEP output. However, the peak-to-peak responses can be contaminated by alpha waves and high-frequency noise interference. Patient cooperation and optimal experimental settings are therefore essential when relying on this method.
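As a minimal sketch (in Python/NumPy rather than the MATLAB used in this study), the peak-to-peak calculation reduces to taking the maximum minus the minimum of the trace inside an analysis window; the window bounds below are illustrative assumptions:

```python
import numpy as np

def peak_to_peak(trace, t, window=(70, 210)):
    """Peak-to-peak amplitude (same units as `trace`, e.g. nV):
    largest positive peak minus deepest trough within the
    analysis window (ms). Window bounds are illustrative."""
    seg = trace[(t >= window[0]) & (t <= window[1])]
    return seg.max() - seg.min()
```

Because this takes the raw extremes of the trace, any alpha-wave or high-frequency contamination inside the window inflates the result directly, which is why clean recordings matter most for this method.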
The RMS method uses the squared value of the amplitude in a given time interval to ensure that all waveforms analyzed are in a standardized positive format. Then, the square root of the mean amplitude is calculated to give the final output [5, 18]. The RMS method is not dependent on a specific waveform due to averaging and use of a specific time segment.
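The square-then-average-then-root sequence described above can be sketched as follows (Python/NumPy; the 45–150 ms segment is an illustrative assumption borrowed from the signal window quoted for SNR below):

```python
import numpy as np

def rms_amplitude(trace, t, window=(45, 150)):
    """RMS amplitude over a fixed time segment (ms): square each
    sample, average, then take the square root."""
    seg = trace[(t >= window[0]) & (t <= window[1])]
    return np.sqrt(np.mean(seg ** 2))
```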
Signal-to-noise ratio is the RMS in a “signal window” divided by the RMS in a “noise window.” The signal window for the mfVEP normally spans 45–150 ms and the noise window 325–430 ms, although the reported intervals vary. Most mfVEP studies use responses estimated by cross-correlation with the stimuli, which provides no standard error for the estimated waveform coefficients. As a workaround, noise levels are conventionally estimated from a section of the estimated waveform that is assumed to contain no response components but which may have small contributions from nonlinear interactions in the response, so-called kernel overlap [23]. Because both the signal-window and noise-window amplitudes include background noise, taking their ratio reduces the variation due to the factors causing that noise. In this way the SNR can reduce the variability between tests on different days and between individuals and laboratories, which makes it particularly useful in follow-up studies and when examining the same patient under different experimental settings. SNR is sometimes used as a direct measure of mfVEP amplitude [19, 20] but more often is calculated to ensure the responses are of a certain quality [12, 13, 24]. LogSNR is normally used when comparing mfVEP amplitude with visual field analysis because it is comparable to the log sensitivity reported by automated perimeters [21, 22].
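Using the window boundaries quoted above, SNR and logSNR might be sketched as below (Python/NumPy; note that whether logSNR is reported as a plain base-10 logarithm or in decibels varies between studies, so the scaling here is an assumption):

```python
import numpy as np

def _rms(trace, t, lo, hi):
    """RMS of the trace between lo and hi (ms)."""
    seg = trace[(t >= lo) & (t <= hi)]
    return np.sqrt(np.mean(seg ** 2))

def snr(trace, t, signal=(45, 150), noise=(325, 430)):
    """RMS in the signal window divided by RMS in the noise window."""
    return _rms(trace, t, *signal) / _rms(trace, t, *noise)

def log_snr(trace, t, **kw):
    """Logarithmic SNR; the base-10 log (no dB scaling) is an assumption."""
    return np.log10(snr(trace, t, **kw))
```

Because the noise window sits late in the epoch, where no stimulus-locked response is expected, this ratio cancels background noise common to both windows, which is the mechanism behind the reduced between-session variability described above.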
Multifocal visual evoked potential latency can be assessed using monocular and interocular analysis. For monocular analysis, a template from an age- and gender-matched control group can be used [24]. The interocular analysis compares the latency between the patient’s eyes [25]. The advantage of interocular analysis is that it eliminates factors such as cortical convolutions, which can otherwise cause inter-subject variability. Unfortunately, many retinal and optic nerve diseases affect both eyes, thereby negating the usefulness of inter-eye amplitude or latency measurements. The most commonly used methods for calculating monocular and interocular latency delays are the cross-correlation method [24–27] and the second peak method [15]. In Fig. 2, the different methods are illustrated.
For interocular studies, the second peak method compares the latency of the second peak between the two eyes. The cross-correlation method shifts the response of one eye along the x axis until it maximally overlaps (best correlates) with the response of the other eye; the amount of shift then represents the latency difference between the eyes. The cross-correlation can also be performed with a Gaussian wavelet transform, which yields similar results to direct interocular or monocular cross-correlation [28]. Cross-correlation is the more robust measure of latency because it avoids two problems evident with the second peak method. First, selecting the wrong peaks from mfVEP traces is not uncommon with the second peak method, resulting in falsely high values; to avoid this error, all traces must be reviewed and corrected manually whenever the algorithm has chosen incorrect peaks. Second, some traces have a “double-hump” morphology caused by recording artifacts: an artifact can create a false second peak, or a negative artifact signal can split a wide peak into a double hump. These artifacts produce false latency variation [28].
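A minimal sketch of the cross-correlation shift is shown below (Python/NumPy; the study's implementation was in MATLAB and may differ in detail, e.g. in windowing or sub-sample interpolation):

```python
import numpy as np

def latency_shift(ref, test, dt_ms):
    """Slide `test` against `ref` and return the shift (ms) that
    maximizes their overlap; positive means `test` is delayed
    relative to `ref`. `dt_ms` is the sampling interval in ms."""
    ref = ref - ref.mean()    # remove DC offset before correlating
    test = test - test.mean()
    xc = np.correlate(test, ref, mode="full")
    lag = int(np.argmax(xc)) - (len(ref) - 1)
    return lag * dt_ms
```

At the 600 Hz sampling rate used in this study, `dt_ms` would be 1000/600 ≈ 1.67 ms, which sets the resolution of the shift unless sub-sample interpolation is added.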
The large variability for both intra- and inter-subject measurements is the main reason why mfVEP has not moved beyond being a research tool and into clinical application. In particular, the high-amplitude variability makes the differentiation between real pathology in the visual system and random physiological fluctuations very difficult.
To the best of our knowledge, no previous studies have investigated the difference in variability between commonly reported methods of data analysis used in mfVEP studies. Hence, the aim of this study was to compare the inter-subject and intra-subject variability of the methods used to quantify mfVEP amplitude and latency.
Methods
Subjects
Twenty-three normal subjects (9 males and 14 females) were included. The median age of the subjects was 29 years (range 26–66 years). None of the subjects had previous or current ocular pathology, nor systemic diseases that could affect retinal or optic nerve function. All subjects were examined by slit lamp biomicroscopy and/or optical coherence tomography (OCT). Mean spectacle-corrected visual acuity was 0.88 (range 0.5–1.0). Informed consent was obtained from all participants. Procedures followed the tenets of the Declaration of Helsinki and were approved by the national research ethics committee (HREC 14855).
Stimulation
The visual stimuli were generated in a dimly lit room on a 22-in. high-resolution LCD display (Hitachi, Tokyo, Japan) set to 90 % brightness and 65 % contrast. The stimulus consisted of a 56-segment dartboard containing a checkered pattern of 16 checks in each segment. Segments and checks were cortically scaled to stimulate equal areas of the visual cortex. The head was positioned at a viewing distance of 30 cm from the screen, giving a radially subtended stimulus covering 24° of the visual field. The subjects were tested with non-dilated pupils and optimal refraction. The checks alternated between black and white according to a pseudorandom sequence. To maintain fixation, the central 1° of the stimulus screen served as a fixation area displaying arrows pointing right or left; the subjects responded to the arrows with a game controller, allowing the investigator to assess their degree of cooperation. Optimal near refraction was required to achieve high cooperation in all subjects.
Electrode position
A cross-shaped electrode holder with four gold cup electrodes (Grass Technologies, West Warwick, RI, USA) was placed over the inion. The hair under each electrode was separated and the scalp was cleaned. To obtain two recording channels, i.e., a horizontal and a vertical, the center of the cross was arranged over the inion with the electrodes in a horizontal and vertical pattern (one positive electrode 2.5 cm above the inion, one negative electrode 4.5 cm below the inion, one negative electrode 4 cm left of the inion and one positive electrode 4 cm right of the inion).
A ground ear clip gold cup electrode was attached to the ear.
Recording
The mfVEP was performed using VisionSearch1 (VisionSearch, Sydney, Australia). Commercially designed software (Terra™ software, ver. 1.6, VisionSearch, Sydney, Australia) was used to record and analyze the mfVEP.
Subjects were seated comfortably in front of the stimulus screen, with the non-tested eye covered by an eye patch. After correct positioning of the subject and the electrodes was confirmed, the impedance was measured for both channels. Only impedances below 25 kΩ were accepted, and they were normally below 10 kΩ. The subject was instructed to fixate centrally on the screen and respond to the fixation arrows. The test was repeated until the noise of the trace was reduced to 10 % or less; on average, 12 rounds of stimulation were required. The electrical signals were amplified 1 × 10⁵ times and band-pass filtered between 1 and 20 Hz. The data sampling rate was 600 Hz with a recording length of 1000 ms. The software automatically correlated the visual stimuli with the recorded electrical potentials to obtain the mfVEP responses. Of the two channels, the one whose waveform had the maximal peak-to-trough amplitude within 70–210 ms was automatically selected by the software as the best channel.
Data analysis
Peak-to-peak amplitude and second peak latency for each segment were automatically calculated in the software by a specially designed algorithm, and the chosen peaks were confirmed manually. Recordings from the best channel were exported for further analysis in Excel (version 15.0, Microsoft, Redmond, WA, USA). Custom-made programs written in MATLAB (R2012, The MathWorks Inc., Natick, MA, USA) were used to compute SNR, logSNR and RMS and to perform the cross-correlation analysis. Intra-subject and inter-subject coefficients of variability (CV) were used to compare the amplitude variables (peak-to-peak, SNR, logSNR and RMS) and the latency variable (second peak). The CV was obtained using the formula CV = standard deviation/mean.
Intra-subject CV of amplitude was calculated as the standard deviation of all sectors from the subject’s amplitude recording divided by the mean amplitude of all sectors. The term intra-subject variability in the study therefore referred to the regional differences in amplitude between the 56 segments from a single mfVEP output, and not to repeated measurements as seen in most studies.
The inter-subject CV of amplitude was calculated as the standard deviation of the mean amplitude of all subjects divided by the mean amplitude of all subjects.
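The two CV definitions above can be sketched as follows (Python/NumPy; the array layout — one row per subject, one column per sector — and the function names are assumptions for illustration):

```python
import numpy as np

def intra_subject_cv(sector_amps):
    """CV across one subject's sectors (here, the 56 mfVEP
    segments): SD of the sector amplitudes over their mean."""
    return np.std(sector_amps) / np.mean(sector_amps)

def inter_subject_cv(amps_by_subject):
    """CV across subjects: SD of the subjects' mean amplitudes
    over the mean of those means (one row per subject)."""
    subject_means = np.mean(amps_by_subject, axis=1)
    return np.std(subject_means) / np.mean(subject_means)
```

Note that the intra-subject CV as defined here captures regional (segment-to-segment) spread within a single recording, matching the study's usage, rather than test-retest spread.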
In the latency asymmetry analysis, the standard deviation (SD) was used as a measure of variability. The CV was not used for interocular latency analysis because the low means attained when subtracting the latency of one eye from the other made the CV misleading.
Statistical analysis
Only the right eye was used when testing for statistical significance, to avoid inter-eye correlation bias.
Distribution of data was visualized. A linear regression analysis was performed between the peak-to-peak and SNR methods. For each method, intra-subject CVs were estimated by their sample means. Pairwise comparisons were made by means of Z tests accounting for inter-marker correlation nonparametrically using the methodology described in [29]. Resulting alpha levels were adjusted by means of Bonferroni’s correction. For each method, inter-subject CVs were estimated as the ratio of sample standard deviation and sample mean. Pairwise comparisons were made by means of Z tests accounting for inter-marker correlation obtained by nonparametric bootstrap with 5000 bootstrap samples. Resulting alpha levels were adjusted by means of Bonferroni’s correction.
Intra-subject asymmetry variability using SD was compared with nonparametric Wilcoxon’s paired rank test, and inter-subject asymmetry variability using SD was compared with nonparametric bootstrap with 5000 bootstrap samples.
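The nonparametric bootstrap step used above might look like the following sketch (Python/NumPy; 5000 resamples as in the study, though the exact statistic being bootstrapped here — the standard error of the inter-subject CV — is a simplified assumption):

```python
import numpy as np

def bootstrap_cv_se(values, n_boot=5000, seed=0):
    """Bootstrap standard error of the inter-subject CV (SD/mean),
    given one summary value per subject: resample subjects with
    replacement, recompute the CV, and take the SD of the replicates."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    cvs = np.empty(n_boot)
    for i in range(n_boot):
        sample = rng.choice(values, size=len(values), replace=True)
        cvs[i] = np.std(sample) / np.mean(sample)
    return np.std(cvs)
```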
The predetermined level of statistical significance for the comparisons was p ≤ 0.05. The statistical analysis was performed using the SAS program for Windows (version 9.1, SAS Institute, Cary, NC, USA).
Results
The mean mfVEP amplitude using peak-to-peak values was 169.1 nV (CI 95 % 152.9–185.3 nV). Mean RMS was 51.8 (CI 95 % 47.0–56.5) and mean SNR was 4.6 (CI 95 % 4.3–5.0). There was a significant correlation between the peak-to-peak method and the SNR method (R² = 0.69, p < 0.001) (Fig. 3).
In Fig. 4, the different methods of assessing mfVEP amplitude and their CV are compared. Significant differences were found between peak-to-peak and SNR, peak-to-peak and RMS, logSNR and SNR, and logSNR and RMS. Overall, logSNR and peak-to-peak had a significantly lower intra-subject CV when compared with RMS and SNR. The inter-subject CV was 19.9 % in the peak-to-peak method, 21.1 % in the RMS method, 16.9 % in the SNR method and 11.2 % using logSNR method. Significant differences were found between logSNR and SNR, logSNR and RMS, and logSNR and peak-to-peak. Overall, logSNR had a significantly lower inter-subject CV when compared with SNR, RMS and peak-to-peak.
Mean mfVEP second peak latency was 147 ms (CI 95 % 146–149 ms). Second peak latency inter-subject CV was 3.2 %, and second peak latency intra-subject CV was 8.6 %.
Latency asymmetry value for the cross-correlation analysis was 1.7 ms (CI 95 % 1.2–2.3 ms) and for the second peak analysis 2.5 ms (CI 95 % 1.7–3.3 ms). Inter-subject asymmetry variability expressed as SD was 1.8 ms (95 % CI 1.4–2.5 ms) using the second peak method and 1.2 ms (95 % CI 1.0–1.8 ms) using cross-correlation. A significant difference in inter-subject asymmetry variability was found between the two methods (p < 0.001). Intra-subject asymmetry variability expressed as SD was 6.1 ms (95 % CI 5.2–7.0 ms) using cross-correlation and 12.6 ms (95 % CI 10.6–14.6 ms) using second peak analysis. A significant difference in intra-subject asymmetry variability was found between the two methods (p < 0.001).
Discussion
Data analysis is required to quantify mfVEP amplitude and latency recordings; however, no gold standard for this analysis has been published. If the mfVEP is to evolve into a useful clinical tool, it is important to understand how the data analysis can affect the quantitative outcomes, and to know the most reliable parameters for this quantification. Our study is, to our knowledge, the first to compare the variability of different methods for mfVEP data analysis. The results demonstrate what differences in intra-subject and inter-subject variability can be expected when using the most common mfVEP analysis methods.
Considerable overlap in mfVEP parameters has been shown when comparing normal controls with patients. Rodarte et al. [7] found no difference in mfVEP latency between normal controls and glaucoma patients, with the exception of one high-tension glaucoma patient who fell outside the control group range.
The low clinical reliability, especially in amplitude and in monocular analysis, remains a challenge for the mfVEP and makes latency the preferred parameter in most mfVEP studies. A study by Grippo et al. found a latency delay in patients with optic disk drusen, but did not assess amplitude, even though amplitude abnormalities would mainly be expected given the compressive nature of optic disk drusen [9]. Our study confirmed a low variability in latency values compared with amplitude values. While this may be useful in studying conditions such as multiple sclerosis, there may be valuable information to be gained by appropriate study of amplitude data.
The peak-to-peak amplitude inter-subject CV has previously been assessed in relation to electroencephalogram (EEG)-based scaling of mfVEPs by Klistorner et al. [10]. An inter-subject CV in peak-to-peak amplitude of approximately 14 % was seen after the application of EEG-based scaling. This is a similar outcome to our results. The slightly lower variability could be due to the more sophisticated normalization method. However, the study only assessed variability using the peak-to-peak method.
The results of our study show that logSNR and thereafter peak-to-peak are the preferred methods in detection of local defects. The smaller intra-subject CV indicates that the amplitude varies the least from segment to segment. This is also applicable when performing sectorial analysis.
LogSNR proved to have a small inter-subject CV, which is important when comparing groups. A low inter-subject CV makes it easier to find small differences between patients with and without abnormalities, as the confidence limits will be narrow.
Asymmetry analysis has been used in the assessment of monocular optic nerve damage. Most of these studies use the cross-correlation method to determine the difference in latency between the eyes [6, 7]. We found a significantly higher variability using second peak method when compared with cross-correlation method. Therefore, we recommend the use of cross-correlation method for asymmetry analysis.
The main limitation of this study is that the interpretation of the coefficient of variation is closely linked to the normal distribution. As a consequence, this quantity is, for instance, not invariant under monotone transformations of the marker for which it is calculated. This contradicts the intuition that a one-to-one transformation of a marker should not change its ability to differentiate samples. In the context of our research, logSNR and peak-to-peak are preferred based solely on a coefficient of variation evaluation. However, such a ranking should not be based on the coefficient of variation alone, but also on choosing a scale on which normality can be assumed.
In conclusion, this study emphasizes the importance of choosing the right method for mfVEP data analysis and establishes the choice of analysis method as another factor that can decrease variability. Different methods for quantifying mfVEP amplitude have different indications depending on the purpose of the study. For comparisons between groups, logSNR or SNR would be preferred because of their smaller inter-subject CV. When examining an individual mfVEP, as may be done in a clinical setting, the logSNR or peak-to-peak methods would be preferred.
References
Blanco R, Perez-Rico C, Puertas-Munoz I, Ayuso-Peralta L, Boquete L, Arevalo-Serrano J (2014) Functional assessment of the visual pathway with multifocal visual evoked potentials, and their relationship with disability in patients with multiple sclerosis. Mult Scler 20(2):183–191. doi:10.1177/1352458513493683
Wolff BE, Bearse MA Jr, Schneck ME, Barez S, Adams AJ (2010) Multifocal VEP (mfVEP) reveals abnormal neuronal delays in diabetes. Doc Ophthalmol Adv Ophthalmol 121(3):189–196. doi:10.1007/s10633-010-9245-y
Fraser CL, Klistorner A, Graham SL, Garrick R, Billson FA, Grigg JR (2006) Multifocal visual evoked potential analysis of inflammatory or demyelinating optic neuritis. Ophthalmology 113(2):323e321–323e322. doi:10.1016/j.ophtha.2005.10.017
Klistorner A, Arvind H, Nguyen T, Garrick R, Paine M, Graham S, O’Day J, Yiannikas C (2009) Multifocal VEP and OCT in optic neuritis: a topographical study of the structure-function relationship. Doc Ophthalmol Adv Ophthalmol 118(2):129–137. doi:10.1007/s10633-008-9147-4
Hood DC, Zhang X, Greenstein VC, Kangovi S, Odel JG, Liebmann JM, Ritch R (2000) An interocular comparison of the multifocal VEP: a possible technique for detecting local damage to the optic nerve. Investig Ophthalmol Vis Sci 41(6):1580–1587
Semela L, Yang EB, Hedges TR, Vuong L, Odel JG, Hood DC (2007) Multifocal visual-evoked potential in unilateral compressive optic neuropathy. Br J Ophthalmol 91(4):445–448. doi:10.1136/bjo.2006.097980
Rodarte C, Hood DC, Yang EB, Grippo T, Greenstein VC, Liebmann JM, Ritch R (2006) The effects of glaucoma on the latency of the multifocal visual evoked potential. Br J Ophthalmol 90(9):1132–1136. doi:10.1136/bjo.2006.095158
Moschos MM, Georgopoulos G, Chatziralli IP, Koutsandrea C (2012) Multifocal VEP and OCT findings in patients with primary open angle glaucoma: a cross-sectional study. BMC Ophthalmol 12:34. doi:10.1186/1471-2415-12-34
Grippo TM, Ezon I, Kanadani FN, Wangsupadilok B, Tello C, Liebmann JM, Ritch R, Hood DC (2009) The effects of optic disc drusen on the latency of the pattern-reversal checkerboard and multifocal visual evoked potentials. Investig Ophthalmol Vis Sci 50(9):4199–4204. doi:10.1167/iovs.08-2887
Klistorner AI, Graham SL (2001) Electroencephalogram-based scaling of multifocal visual evoked potentials: effect on intersubject amplitude variability. Investig Ophthalmol Vis Sci 42(9):2145–2152
Zhang X, Hood DC, Chen CS, Hong JE (2002) A signal-to-noise analysis of multifocal VEP responses: an objective definition for poor records. Doc Ophthalmol Adv Ophthalmol 104(3):287–302
Mazinani BA, Waberski TD, Weinberger AW, Walter P, Roessler GF (2011) Improving the quality of multifocal visual evoked potential results by calculating multiple virtual channels. Jpn J Ophthalmol 55(4):396–400. doi:10.1007/s10384-011-0040-4
Hood DC, Zhang X, Hong JE, Chen CS (2002) Quantifying the benefits of additional channels of multifocal VEP recording. Doc Ophthalmol Adv Ophthalmol 104(3):303–320
Sabeti F, James AC, Essex RW, Maddess T (2013) Dichoptic multifocal visual evoked potentials identify local retinal dysfunction in age-related macular degeneration. Doc Ophthalmol Adv Ophthalmol 126(2):125–136. doi:10.1007/s10633-012-9366-6
Klistorner A, Fraser C, Garrick R, Graham S, Arvind H (2008) Correlation between full-field and multifocal VEPs in optic neuritis. Doc Ophthalmol Adv Ophthalmol 116(1):19–27. doi:10.1007/s10633-007-9072-y
Alshowaeir D, Yannikas C, Garrick R, Van Der Walt A, Graham SL, Fraser C, Klistorner A (2014) Multifocal VEP assessment of optic neuritis evolution. Clin Neurophysiol. doi:10.1016/j.clinph.2014.11.010
Bengtsson M, Andreasson S, Andersson G (2005) Multifocal visual evoked potentials—a method study of responses from small sectors of the visual field. Clin Neurophysiol 116(8):1975–1983. doi:10.1016/j.clinph.2005.04.009
Hood DC, Greenstein VC, Odel JG, Zhang X, Ritch R, Liebmann JM, Hong JE, Chen CS, Thienprasiddhi P (2002) Visual field defects and multifocal visual evoked potentials: evidence of a linear relationship. Arch Ophthalmol 120(12):1672–1681
Fortune B, Zhang X, Hood DC, Demirel S, Johnson CA (2004) Normative ranges and specificity of the multifocal VEP. Doc Ophthalmol Adv Ophthalmol 109(1):87–100
Nakamura M, Ishikawa K, Nagai T, Negi A (2011) Receiver-operating characteristic analysis of multifocal VEPs to diagnose and quantify glaucomatous functional damage. Doc Ophthalmol Adv Ophthalmol 123(2):93–108. doi:10.1007/s10633-011-9285-y
Jayaraman M, Gandhi RA, Ravi P, Sen P (2014) Multifocal visual evoked potential in optic neuritis, ischemic optic neuropathy and compressive optic neuropathy. Indian J Ophthalmol 62(3):299–304. doi:10.4103/0301-4738.118452
Hood DC, Greenstein VC (2003) Multifocal VEP and ganglion cell damage: applications and limitations for the study of glaucoma. Progr Retinal Eye Res 22(2):201–251
Sutter EE (1992) A deterministic approach to nonlinear system analysis. In: RB Pinter, B Nabet (eds) Nonlinear vision: determination of neural receptive fields, function, and networks. CRC press, Boca Raton, pp 171–220
Hood DC, Ohri N, Yang EB, Rodarte C, Zhang X, Fortune B, Johnson CA (2004) Determining abnormal latencies of multifocal visual evoked potentials: a monocular analysis. Doc Ophthalmol Adv Ophthalmol 109(2):189–199
Hood DC, Zhang X, Rodarte C, Yang EB, Ohri N, Fortune B, Johnson CA (2004) Determining abnormal interocular latencies of multifocal visual evoked potentials. Doc Ophthalmol Adv Ophthalmol 109(2):177–187
de Santiago L, Klistorner A, Ortiz M, Fernandez-Rodriguez AJ, Rodriguez Ascariz JM, Barea R, Miguel-Jimenez JM, Boquete L (2015) Software for analysing multifocal visual evoked potential signal latency progression. Comput Biol Med 59:134–141. doi:10.1016/j.compbiomed.2015.02.004
De Santiago L, Fernandez A, Blanco R, Perez-Rico C, Rodriguez-Ascariz JM, Barea R, Miguel-Jimenez JM, Amo C, Sanchez-Morla EM, Boquete L (2014) Improved measurement of intersession latency in mfVEPs. Doc Ophthalmol Adv Ophthalmol 129(1):65–69. doi:10.1007/s10633-014-9438-x
Thie J, Sriram P, Klistorner A, Graham SL (2012) Gaussian wavelet transform and classifier to reliably estimate latency of multifocal visual evoked potentials (mfVEP). Vis Res 52(1):79–87. doi:10.1016/j.visres.2011.11.002
Pipper CB, Ritz C, Bisgaard H (2012) A versatile method for confirmatory evaluation of the effects of a covariate in multiple models. J R Stat Soc C-Appl 61:315–326. doi:10.1111/j.1467-9876.2011.01005.x
Acknowledgments
This research was partially supported by Værn om Synet, Synoptik-Fonden, Kleinsmed Svend Helge Arvid Schröder og Hustrus Fond and by the Spanish government Grant: TEC2011-26066.
Ethics declarations
Conflict of interest
All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers’ bureaus; membership, employment, consultancies, stock ownership or other equity interest; and expert testimony or patent-licensing arrangements) or non-financial interest (such as personal or professional relationships, affiliations, knowledge or beliefs) in the subject matter or materials discussed in this manuscript.
Funding
This research was partially supported by Værn om Synet, Synoptik-Fonden, Kleinsmed Svend Helge Arvid Schröder og Hustrus Fond and by the Spanish government Grant: TEC2011-26066 in the form of Ph.D. salary. The sponsors had no role in the design or conduct of this research.
Ethical approval
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Informed consent
Informed consent was obtained from all individual participants included in the study.
Statement of human rights
The study was performed in accordance with Universal Declaration of Human Rights.
Statement on the welfare of animals
This article does not contain any studies with animals.
Cite this article
Malmqvist, L., De Santiago, L., Fraser, C. et al. Exploring the methods of data analysis in multifocal visual evoked potentials. Doc Ophthalmol 133, 41–48 (2016). https://doi.org/10.1007/s10633-016-9546-x