Publications related to the fearbase
Data sharing holds promise for advancing and accelerating science by facilitating collaboration, reproducibility and the optimal use of sparse resources. We argue that, despite the existence of general data sharing guidelines (e.g., the FAIR principles), their translation and implementation require field-specific considerations. Here, we addressed this timely question for the field of experimental research on fear and anxiety and showcase the enormous prospects by illustrating the wealth and richness of a curated collection of publicly available datasets using the fear conditioning paradigm, based on 103 studies and 8,839 participants. We highlight challenges encountered when aiming to reuse the available data corpus and derive 10 simple steps for making data sharing in the field more efficient and sustainable, thereby facilitating collaboration, cumulative knowledge generation and large-scale mega-, meta- and psychometric analyses. We share our vision and first steps towards transforming such curated data collections into a homogenized and dynamically growing database that allows for easy contributions and living analysis tools for the collective benefit of the research community.
Here, we follow the call to target measurement reliability as a key prerequisite for individual-level predictions in translational neuroscience by investigating (1) longitudinal reliability at the individual and (2) at the group level, (3) internal consistency and (4) response predictability across experimental phases. One hundred and twenty individuals performed a fear conditioning paradigm twice, 6 months apart. Skin conductance responses, fear ratings and blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) were analysed under different data transformations and numbers of included trials. While longitudinal reliability was rather limited at the individual level, it was comparatively higher at the group level for acquisition but not extinction. Internal consistency was satisfactory. Higher responding in preceding experimental phases predicted higher responding in subsequent phases at a weak to moderate level, depending on data specifications. In sum, the results suggest that while individual-level predictions are meaningful for (very) short time frames, they also call for more attention to measurement properties in the field.
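One of the measurement properties named above, internal consistency, is commonly estimated via an odd-even split-half correlation with Spearman-Brown correction. The sketch below illustrates that idea on synthetic trial-wise scores; it is not the authors' analysis pipeline, and all variable names and parameters are hypothetical.

```python
# Illustrative sketch: split-half internal consistency of trial-wise
# scores (synthetic data; not the pipeline used in the paper).
import random
random.seed(0)

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_consistency(trials_per_subject):
    """Odd-even split-half correlation, Spearman-Brown corrected."""
    odd = [sum(t[::2]) / len(t[::2]) for t in trials_per_subject]
    even = [sum(t[1::2]) / len(t[1::2]) for t in trials_per_subject]
    r = pearson(odd, even)
    return 2 * r / (1 + r)  # Spearman-Brown prophecy formula

# Synthetic example: 50 subjects x 14 trials; a stable subject-level
# signal plus small trial-level noise yields high internal consistency.
subjects = []
for _ in range(50):
    base = random.gauss(0.5, 0.3)
    subjects.append([base + random.gauss(0, 0.1) for _ in range(14)])

print(round(split_half_consistency(subjects), 2))
```

Because the synthetic between-subject variance dominates the trial noise, the corrected coefficient here lands close to 1; real fear conditioning read-outs need not behave this way.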
Raw data typically require processing before they are ready for statistical analysis, and processing pipelines are often characterized by substantial heterogeneity. Here, we applied seven different approaches (trough-to-peak scoring by two different raters, script-based baseline correction, Ledalab, and four different models implemented in the software PsPM) to two fear conditioning data sets. Selection of the approaches was guided by a systematic literature search, using fear conditioning research as a case example. Our approach can be viewed as a set of robustness analyses (i.e., the same data subjected to different processing pipelines) aiming to investigate whether, and to what extent, these different quantification approaches yield comparable results given the same data. To our knowledge, no formal framework for the evaluation of robustness analyses exists to date, but we may borrow some criteria from a framework suggested for the evaluation of "replicability" in general. Our results from seven different SCR quantification approaches applied to two data sets with different paradigms suggest that there may be no single approach that consistently yields larger effect sizes and could be universally considered "best". Yet, at least some of the approaches employed show consistent effect sizes within each data set, indicating comparability. Finally, we highlight substantial heterogeneity also within most quantification approaches and discuss implications and potential remedies.
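To make "same data, different quantification approaches" concrete, the toy sketch below applies two heavily simplified scoring rules (a trough-to-peak amplitude and a baseline-corrected peak) to one synthetic response window. Both functions, the window, and the baseline definition are illustrative assumptions, not the seven approaches actually evaluated in the paper; the point is only that different rules can return different numbers for identical data.

```python
# Illustrative sketch of two simplified SCR scoring rules applied to
# the same synthetic response window (assumptions for illustration only).
def trough_to_peak(signal):
    """Amplitude from the minimum (trough) to the subsequent maximum."""
    trough_idx = signal.index(min(signal))
    return max(signal[trough_idx:]) - signal[trough_idx]

def baseline_corrected_peak(signal, baseline_samples=3):
    """Peak amplitude relative to the mean of the first few samples."""
    baseline = sum(signal[:baseline_samples]) / baseline_samples
    return max(signal) - baseline

# A synthetic skin conductance trace: small dip, then a rise and decay.
scr = [0.50, 0.49, 0.48, 0.47, 0.55, 0.70, 0.82, 0.78, 0.65, 0.55]

print(round(trough_to_peak(scr), 2))          # 0.35 (trough 0.47 -> peak 0.82)
print(round(baseline_corrected_peak(scr), 2))  # 0.33 (peak 0.82 - baseline 0.49)
```

Even in this two-rule toy case the scores diverge, which is the phenomenon the robustness analyses probe at scale.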
There is heterogeneity in, and a lack of consensus on, the preferred statistical analyses in light of a multitude of potentially equally justifiable approaches. Here, we introduce multiverse analysis for the field of experimental psychopathology research. We present a model multiverse approach tailored to fear conditioning research and, as a secondary aim, introduce the R package 'multifear', which allows running all the models through a single line of code. Model specifications and data reduction approaches were identified through a systematic literature search. The statistical models identified included Bayesian ANOVAs and t-tests as well as frequentist ANOVAs, t-tests and mixed models, combined with a variety of data reduction approaches. We illustrate the power of a multiverse analysis for fear conditioning data based on two pre-existing data sets with a partial (data set 1) and a 100% reinforcement rate (data set 2), using CS discrimination in skin conductance responses (SCRs) during fear acquisition and extinction training as case examples. Both the size and the direction of effects were impacted by the choice of model and data reduction technique. We anticipate that an increase in multiverse-type studies will aid the development of formal theories through the accumulation of empirical evidence and ultimately aid clinical translation.
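The core multiverse logic, crossing one data set with several defensible specifications and collecting one effect size per specification, can be sketched in a few lines. The grid below (three hypothetical data reduction choices, paired Cohen's d on synthetic CS+/CS- responses) is a deliberately tiny stand-in for the far larger model space the 'multifear' package covers; none of these names or choices come from the paper.

```python
# Minimal multiverse sketch: the same synthetic CS+/CS- data analysed
# under several data reduction choices, one effect size per specification.
import random
random.seed(1)

def cohens_d(diffs):
    """Cohen's d for paired CS+ minus CS- differences."""
    n = len(diffs)
    m = sum(diffs) / n
    sd = (sum((x - m) ** 2 for x in diffs) / (n - 1)) ** 0.5
    return m / sd

# Synthetic SCRs: 40 subjects x 10 trials per stimulus; CS+ responses
# are on average larger than CS- responses (successful acquisition).
cs_plus = [[random.gauss(0.6, 0.2) for _ in range(10)] for _ in range(40)]
cs_minus = [[random.gauss(0.4, 0.2) for _ in range(10)] for _ in range(40)]

# Each data reduction maps a subject's trial list to a single score.
reductions = {
    "all_trials_mean": lambda t: sum(t) / len(t),
    "last_half_mean": lambda t: sum(t[5:]) / 5,
    "last_trial": lambda t: t[-1],
}

multiverse = {}
for name, reduce_fn in reductions.items():
    diffs = [reduce_fn(p) - reduce_fn(m) for p, m in zip(cs_plus, cs_minus)]
    multiverse[name] = cohens_d(diffs)

for name, d in sorted(multiverse.items()):
    print(f"{name}: d = {d:.2f}")
```

Averaging over more trials suppresses trial-level noise, so the specifications typically yield different effect sizes for identical underlying data, which is exactly the variability a multiverse analysis is designed to expose.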
The so-called 'replicability crisis' has sparked methodological discussions in many areas of science in general, and in psychology in particular. This has led to recent endeavours to promote the transparency, rigour, and ultimately, replicability of research. Originating from this zeitgeist, the challenge to discuss critical issues on terminology, design, methods, and analysis considerations in fear conditioning research is taken up by this work, which involved representatives from fourteen of the major human fear conditioning laboratories in Europe. This compendium is intended to provide a basis for the development of a common procedural and terminology framework for the field of human fear conditioning. Whenever possible, we give general recommendations. When this is not feasible, we provide evidence-based guidance for methodological decisions on study design, outcome measures, and analyses. Importantly, this work is also intended to raise awareness and initiate discussions on crucial questions with respect to data collection, processing, statistical analyses, the impact of subtle procedural changes, and data reporting specifically tailored to the research on fear conditioning.
Why do only some individuals develop pathological anxiety following adverse events? Fear acquisition, extinction and return of fear paradigms serve as experimental learning models for the development, treatment and relapse of anxiety. Individual differences in experimental performance were, however, mostly regarded as 'noise' by researchers interested in basic associative learning principles. Our work presents, for the first time, a comprehensive literature overview and methodological discussion of inter-individual differences in fear acquisition, extinction and return of fear. We tell a story of noise that steadily develops into a meaningful tune and converges on a model of mechanisms contributing to individual risk and resilience with respect to fear and anxiety-related behavior. Furthermore, in light of the present 'replicability crisis', we identify methodological pitfalls and provide suggestions for study design and analyses tailored to individual difference research in fear conditioning. Ultimately, synergistic transdisciplinary and collaborative efforts hold promise not only to improve our mechanistic understanding but also to contribute to the development of specifically tailored ('individualized') intervention and targeted prevention programs in the future.