MATERIAL AND METHODS
The development and reporting of the study were guided by the CHERRIES
checklist13 (Appendix 1).
An anonymous and open international e-survey was distributed from August
to October 2019 using SurveyMonkey (San Mateo, California, USA). The
primary outcome was the proportion of respondents aware of the utility
of various diagnostic tools to diagnose endometriosis. Secondary
outcomes explored respondents' understanding of these diagnostic tools
across different endometriosis disease states and clinical scenarios,
with emphasis on ultrasound as the typical first-line investigation for
endometriosis symptomatology.
Original questions were formulated by the research team and piloted on
a group of 10 lay women. The questions were then modified according to
their feedback to improve interpretability.
Once ethically approved by the Nepean Blue Mountains Local Health
District Human Research Ethics Committee (2019/ETH00444), the
survey (Appendix 2) was disseminated via the social media outlets
Facebook, Twitter, and Instagram, through collaborations with national
and international endometriosis community groups. Reminder posts were
sent out one and two months after the initial release of the survey. The
target population was a convenience sample; an international and diverse
population, encompassing all ethnicities, ages, and genders, was sought
to reflect that of the general population. The survey advertisement is
depicted in Appendix 3 and includes a recommended short statement for
ease of use on social media. Prior to initiating the survey, potential
respondents were shown the patient information sheet (Appendix 4).
Respondents were informed of the survey's expected completion time, data
storage processes and policies, data security, the name of the
investigator, and the purpose of the study. Completion of the survey was
voluntary and implied informed consent. No incentives were offered.
Respondents could take as much time as they needed before consenting to
provide information in the survey; however, consent could not be
withdrawn once the survey was submitted.
The survey consisted of 26 questions over 12 pages, with two to five
questions per page. Each respondent received the questions in the same
order. In multiple-choice questions, the potential responses were
displayed in a semi-structured fashion. When appropriate, questions
included an option to answer “I don’t know” or “other (please
specify)”. Adaptive questioning was used to reduce the number and
complexity of questions.
No “completeness check” was required before the survey was submitted.
Respondents could not review and change their answers (e.g. through a
back button or a review step displaying a summary of the responses) due
to the survey’s nature. On page 11 of the survey,
respondents were provided with educational information that would have
potentially resulted in a difference in their responses to the preceding
questions. At the conclusion of the survey, respondents were asked if
they learned anything from completing the survey. The multiple-responses
setting was turned “off”, allowing the survey to be taken only once from
the same device.
Once collected, the data were stored on the SurveyMonkey server. View,
participation, and completion rates were not sought. Completeness rate,
defined as the percentage of survey takers who completed the entire
survey, was captured by SurveyMonkey. Completed portions of incomplete
surveys (where, for example, respondents did not answer all questions)
were included in the analysis. Timestamps were not used, and no
timeframe for completion was used as a cut-off to determine inclusion in
the analysis. Descriptive statistics were reported using
medians/interquartile ranges (IQR), numbers and percentages, and
comparisons were made using chi-square tests. Analyses were performed in
SAS v9.4 (SAS Institute, Cary, NC, USA) and R (R Core Team, 2019)14.
Weighting of items and propensity scores were not used to adjust for a
“non-representative sample”.
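To illustrate the descriptive and comparative statistics described above, a minimal pure-Python sketch is given below. The actual analyses were performed in SAS and R; the function names and the example contingency table here are hypothetical and chosen only to show the form of a median/IQR summary and a Pearson chi-square test.

```python
import statistics

def median_iqr(values):
    """Return (median, Q1, Q3); Q1/Q3 via statistics.quantiles (exclusive method)."""
    q = statistics.quantiles(values, n=4)  # [Q1, Q2, Q3]
    return statistics.median(values), q[0], q[2]

def chi_square_2x2(table):
    """Pearson chi-square statistic (no continuity correction) for a 2x2 table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row, col in [(a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)]:
        expected = row * col / n  # expected count under independence
        stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical data: respondent ages and a 2x2 awareness-by-group table.
ages = [22, 25, 28, 31, 34, 40, 45]
m, q1, q3 = median_iqr(ages)
print(f"median {m}, IQR {q1}-{q3}")          # median 31, IQR 25-40
print(chi_square_2x2([[30, 20], [10, 40]]))  # chi-square statistic
```

In practice the chi-square statistic would be compared against the chi-square distribution with 1 degree of freedom to obtain a p-value (e.g. via `scipy.stats` in Python or `chisq.test` in R).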