Figure 1. Illustration of the alternate encoding-imagery task.
Note: 餐厅自拍的母女: Mother and daughter taking selfies in a restaurant;
手拿网球的母女: Mother and daughter holding a tennis ball.
2.5. EEG recordings and analysis
EEG data were recorded using a 64-channel Neuro-Scan ERP workstation, with electrode
locations based on the extended international 10/20 system. Vertical and horizontal
EOG signals were recorded with electrodes placed above and below each participant's
left eye and at the outer canthi of both eyes, respectively. Electrode impedances were
kept below 10 kΩ. EEG signals were bandpass filtered between 0.05 and 100 Hz
and digitized at a sampling rate of 1000 Hz.
The data were processed using EEGLAB (version 2021b). The electrode signals were
filtered offline with a 0.1-Hz high-pass filter and a 30-Hz low-pass filter.
Independent component analysis (ICA) was used to detect and correct ocular and
muscle artifacts. Trials containing other artifacts (mean voltage exceeding
±100 μV) were also removed. Epochs (−200 to 2000 ms) were extracted time-locked
to the onset of the title cue, and the resulting data were baseline-corrected
(−200 to 0 ms). EEG signals were re-referenced offline to the average reference.
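The epoching, baseline-correction, and artifact-rejection steps above can be sketched in a few lines. This is a minimal NumPy illustration of the logic, not the EEGLAB pipeline itself; the array layout (channels × samples), the event-onset input, and the per-trial rejection criterion are assumptions for the sketch.

```python
import numpy as np


def extract_epochs(eeg, onsets, sfreq=1000, tmin=-0.2, tmax=2.0):
    """Epoch continuous EEG (channels x samples) around event onsets,
    then subtract the mean of the -200 to 0 ms baseline per channel."""
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    n_base = -start  # number of samples in the -200..0 ms baseline
    epochs = []
    for onset in onsets:
        epoch = eeg[:, onset + start:onset + stop].astype(float)
        baseline = epoch[:, :n_base].mean(axis=1, keepdims=True)
        epochs.append(epoch - baseline)
    return np.stack(epochs)  # trials x channels x samples


def reject_artifacts(epochs, threshold=100.0):
    """Drop trials whose absolute mean voltage exceeds the threshold
    (here read as +/-100 uV per the criterion described above)."""
    keep = np.abs(epochs.mean(axis=(1, 2))) <= threshold
    return epochs[keep]
```

At 1000 Hz, a −200 to 2000 ms epoch spans 2200 samples, and baseline correction guarantees the mean of the first 200 samples of every channel is zero.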
Based on previous research (Bauer & MacNamara, 2021; Marmolejo-Ramos et al.,
2015) and visual inspection of the topographic maps (see Figure 4), the LPP was
measured by averaging the amplitudes at centroparietal sites (Pz, CPz, Cz, C1,
and C2). Furthermore, the LPP was quantified in three distinct time windows:
400–800 ms (early), 800–1400 ms (middle), and 1400–2000 ms (late). EEG data were
entered into a 3 (emotion: happy, neutral, sad) × 3 (time window: early, middle,
late) × 2 (group: depression vs. control) mixed-model ANOVA.
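The site-and-window averaging just described reduces, per trial, to a mean over the centroparietal channels followed by a mean within each window. A minimal sketch, assuming epochs run from −200 to 2000 ms at 1000 Hz; the channel indices standing in for Pz, CPz, Cz, C1, and C2 are hypothetical placeholders for whatever montage is in use.

```python
import numpy as np

# Hypothetical indices for Pz, CPz, Cz, C1, C2 in a 64-channel montage.
LPP_SITES = [30, 31, 32, 33, 34]
WINDOWS = {"early": (0.4, 0.8), "middle": (0.8, 1.4), "late": (1.4, 2.0)}


def lpp_amplitudes(epochs, sfreq=1000, t0=0.2):
    """Mean LPP amplitude per trial in each time window.

    epochs: trials x channels x samples, epoched from -200 to 2000 ms,
    so cue onset sits t0 = 0.2 s into each epoch.
    """
    roi = epochs[:, LPP_SITES, :].mean(axis=1)  # average over sites
    out = {}
    for name, (start, stop) in WINDOWS.items():
        a, b = int((t0 + start) * sfreq), int((t0 + stop) * sfreq)
        out[name] = roi[:, a:b].mean(axis=1)
    return out
```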
Self-report ratings of imagery valence and vividness were entered into a
3 (emotion: happy, neutral, sad) × 2 (group: depression vs. control) mixed-model
ANOVA. The Greenhouse–Geisser adjustment was applied when the sphericity
assumption was violated. Partial eta squared (ηp²) values are reported for all
analyses, and Bonferroni correction was used for multiple comparisons.
Statistical analyses were performed in IBM SPSS Version 26.
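As a quick sanity check on reported effect sizes, partial eta squared can be recovered from an F statistic and its degrees of freedom via the standard identity ηp² = (F · df_effect) / (F · df_effect + df_error). The helper below is a generic illustration of that identity, not part of the SPSS output.

```python
def partial_eta_squared(F, df_effect, df_error):
    """Partial eta squared from an F statistic:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (F * df_effect) / (F * df_effect + df_error)
```

For example, the emotion main effect below, F(2, 164) = 230.19, gives ηp² ≈ 0.74, matching the reported value.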
3. Results
3.1. Behavioral results
The correct identification rate for the imagined exemplar on catch trials (7% of
all trials) was 94.74% (±0.07%), indicating that participants performed the task
accurately. Furthermore, VVIQ scores showed a significant positive correlation
with the averaged vividness ratings (r = 0.44, p < 0.001), indicating that
individuals with higher trait imagery vividness reported more vivid imagery
during the experiment.
For imagery valence, there was a main effect of group (F(1, 82) = 5.70, p = 0.02,
ηp² = 0.07), with lower scores for the depressed participants than for the healthy
controls. There was also a main effect of emotion (F(2, 164) = 230.19, p < 0.001,
ηp² = 0.74), with higher scores for happy imagery than for both sad and neutral
imagery (ps < 0.001), and for neutral imagery than for sad imagery (p < 0.001).
Furthermore, there was a significant interaction of group and emotion
(F(2, 164) = 3.11, p = 0.047, ηp² = 0.04; see Figure 2). Simple-effects analysis
showed that depressed individuals scored lower than controls for sad (p = 0.006)
and neutral imagery (p = 0.03). No significant group difference was found for
happy imagery (p = 0.97).
For imagery vividness, there was a main effect of emotion (F(2, 164) = 230.19,
p < 0.001, ηp² = 0.74), with higher scores for happy imagery than for both sad
and neutral imagery (ps < 0.001), and for neutral imagery than for sad imagery
(p < 0.001). No other effects reached significance (ps > 0.35).