Abstract
Background: In applied psychophysiology, bootstrapping
procedures are often used to classify individuals into one of two or
more mutually exclusive states (e.g., high risk vs. low risk). Although the
number of iterations required for a reliable bootstrap test is not
universally agreed upon, some research (Rosenfeld et al., 2017b)
suggests that 100 iterations is a sufficient number to obtain reliable
results when analyzing P300 from a concealed information test. However,
no study to date has evaluated the diagnostic consistency of the
100-iteration test across repeated examinations.
Methods: We evaluated the precision of the 100-iteration
test by repeating it 100 times per participant in a sample of 81
participants. The test was designed to classify participants as either
knowledgeable or not knowledgeable of critical information related to a
mock crime.
Results: We found that the test yielded variable
classifications in approximately a quarter of our sample (n = 19/81, or
23%), specifically when a participant’s score fell near the
diagnostic cutpoint. Moreover, the test’s diagnostic results varied by
as much as ±15% in certain cases.
Conclusion: Although the test provided reliable results
for the majority of our sample, this was not true for a notable number
of cases. We recommend that researchers report the variability of their
diagnostic metrics and integrate this variability when classifying
individuals. We discuss several simple examples of how to take
variability into account when making classifications, such as by
calculating the probability of one classification state over another
given the data.
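As an illustrative sketch only (not the authors' exact procedure, and with hypothetical function names and data), repeating a bootstrap classification test and converting the repeated outcomes into a probability of one classification state over another might look like:

```python
import random

def bootstrap_classification(probe, irrelevant, n_iter=100, rng=None):
    """One bootstrap test: resample trial amplitudes n_iter times and
    count how often mean(probe) > mean(irrelevant). Returns the
    proportion of iterations favoring 'knowledgeable'.
    (Illustrative criterion; real CIT analyses differ in detail.)"""
    rng = rng or random.Random()
    hits = 0
    for _ in range(n_iter):
        p = [rng.choice(probe) for _ in probe]            # resample with replacement
        i = [rng.choice(irrelevant) for _ in irrelevant]  # resample with replacement
        if sum(p) / len(p) > sum(i) / len(i):
            hits += 1
    return hits / n_iter

def classification_probability(probe, irrelevant, cutpoint=0.9,
                               n_repeats=100, seed=0):
    """Repeat the bootstrap test n_repeats times and report the
    fraction of repetitions whose statistic exceeds the diagnostic
    cutpoint, i.e., an estimate of P('knowledgeable' | data)."""
    rng = random.Random(seed)
    calls = [bootstrap_classification(probe, irrelevant, rng=rng) >= cutpoint
             for _ in range(n_repeats)]
    return sum(calls) / n_repeats
```

Reporting the resulting probability (rather than a single pass/fail call) makes the diagnostic variability near the cutpoint explicit to the reader.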
Keywords: Bootstrapping, Diagnostics, Precision, Concealed
Information Test, P300, Complex Trial Protocol