Advising on research methods: Selected topics 2015

Genre: Paper collection
Editors: Herman J. Adèr and Gideon J. Mellenbergh
First Edition: 2016
Softcover ISBN: 978-90-79418-39-8
Price: €10 ($11)
Website: www.jvank.nl/ARMSelected2015

Contents  
Advising on research methods: Selected topics 2015 results from the research master's course Methodological Advice, given at the University of Amsterdam in the fall of 2015 by Don Mellenbergh and Herman Adèr.

The course had the same format as previous courses.

The objectives of the course were (a) to acquire the methodological knowledge needed for advising researchers in the behavioral and social sciences, and (b) to gain experience with methodological consultancy.
The main material for the course was the book:

Advising on research methods: A consultant’s companion
by Herman J. Adèr and Gideon J. Mellenbergh (with contributions by David J. Hand).

The students in the course were given a number of different assignments. One of these was to write a paper on a topic that occurs in methodological consultancy and that would be published in a book. The intended audience of the book is fellow research master's students who give methodological advice on research in the behavioral sciences, as is done in the Methodology Shops of Dutch psychology departments. The procedure used to prepare the book resembles that of any edited book: the authors wrote a first draft of their paper, the draft was reviewed by other students and the course instructors, and the authors used the comments to rewrite it. The assignment proved to be a success: the students found it a hard job, but they appreciated the learning experience.

To start, the students each selected a topic from a list of methodological topics. One student wrote a paper on her own, and the other twelve worked in pairs.

Giovanni Giaquinto and Hester Sijtsma describe questionable research practices (QRPs). They focus on the QRP of improper sequential testing of a null hypothesis: a sample of participants is selected and a null hypothesis is tested; if the null hypothesis is not rejected, a new sample is selected, the data of the two samples are combined, and the null hypothesis is tested again. The authors cite studies showing that this QRP inflates the Type I error rate of statistical tests. Statisticians have developed proper sequential tests for this situation. The authors introduce a proper Bayesian method: the results of a sample are used to specify a prior distribution for a new sample, and the Bayes factor is used to decide whether a new sample has to be selected. They note that QRPs can be counteracted by preregistration of studies, open access to data, and more influence for methodologists and statisticians. They recommend that consultants do not criticize clients for applying QRPs, but instead explain the undesirable effects of QRPs on research.
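
To illustrate why this kind of optional stopping is problematic, the following base-R sketch (an editorial illustration, not taken from the paper; the group sizes, significance level, and number of replications are arbitrary choices) tests after 20 participants per group and, when the result is not significant, adds 20 more per group and retests on the combined data:

    ## Optional-stopping sketch: test after n = 20 per group; if not significant,
    ## add 20 more participants per group and retest on the combined sample.
    ## The null hypothesis is true, so the rejection rate should be 0.05.
    set.seed(1)
    n_sim <- 5000
    reject <- logical(n_sim)
    for (i in seq_len(n_sim)) {
      x <- rnorm(20); y <- rnorm(20)
      p <- t.test(x, y)$p.value
      if (p >= 0.05) {                      # "not significant": collect a new sample
        x <- c(x, rnorm(20)); y <- c(y, rnorm(20))
        p <- t.test(x, y)$p.value           # retest on the combined data
      }
      reject[i] <- p < 0.05
    }
    mean(reject)                            # clearly larger than the nominal 0.05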

Jonnemei Colnot and Susanne de Mooij discuss three threats to the quality of medical and psychological data. The first is the collection of data at different locations, for example, different hospitals or psychology institutes: data collection may differ between locations, which may cause differences in data quality. The second is missing data: data that are not missing at random affect study results. The third is low power of statistical tests. The authors compare these threats between medical and psychological studies. Differences in data quality between locations are more pronounced in medical studies than in psychological studies. Medical studies report missing data better than psychological studies do, but both types of studies hardly report how missing data are handled. Many psychological studies are underpowered, and, in general, the power of psychological studies is lower than that of medical studies. The authors recommend paying attention in consultancy to differences between locations, power, and the handling and reporting of missing data.
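
To make the power issue concrete, here is a small base-R illustration with power.t.test() (an editorial sketch, not from the paper; the effect size, group size, and significance level are illustrative assumptions):

    ## Power of a two-sample t test with 20 participants per group,
    ## a medium standardized effect (0.5), and a 5% significance level.
    power.t.test(n = 20, delta = 0.5, sd = 1, sig.level = 0.05)$power   # about 0.34
    ## Sample size per group needed to reach 80% power for the same effect.
    power.t.test(power = 0.80, delta = 0.5, sd = 1, sig.level = 0.05)$n # about 64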

Hannah Sigurðardóttir discusses response biases in questionnaires. A response bias is a respondent's tendency to answer a questionnaire item in a way that differs from his or her true answer. The author describes a number of response biases, such as socially desirable answering and agreeing (acquiescence) or disagreeing (dissentience) with items independently of their content. Respondents' cognitive abilities, their motivation, and the difficulty of the response task are factors that cause response biases. Test constructors cannot influence respondents' cognitive abilities, but they can influence their motivation and the difficulty of the response task. Methods are described to maximize respondents' motivation and to minimize the difficulty of the response task. Moreover, test construction methods are described that counteract response biases. The paper ends with recommendations for consultants who give advice on the construction of questionnaires.

Bobby Houtkoop and Simone Plak introduce readers to nonparametric item response theory (NIRT) for dichotomous items (e.g., correct/incorrect or agree/disagree answers). They describe Mokken's monotone homogeneity and double monotonicity models for the analysis of questionnaire and test data. Parametric and nonparametric item response models share a number of assumptions, but differ in their item response functions (i.e., the functions that relate the probability of giving a correct (agree) answer to respondents' latent trait values): parametric models assume logistic item response functions, whereas nonparametric models relax this assumption by assuming only nondecreasing functions. The authors apply the monotone homogeneity model to a test of transitive reasoning for children and demonstrate how the model is used in practice. They recommend NIRT when the assumption of a logistic item response function appears to be violated.
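
To give a rough impression of such an analysis in practice, the sketch below uses the R package mokken on simulated dichotomous items (an editorial illustration, not the transitive-reasoning data or code from the paper; the functions coefH() and check.monotonicity() from that package are assumed to be available):

    ## Simulate six dichotomous items driven by a single latent trait.
    library(mokken)
    set.seed(2)
    theta <- rnorm(300)                                  # latent trait values
    delta <- seq(-1.5, 1.5, length.out = 6)              # item locations
    X <- sapply(delta, function(d) rbinom(300, 1, plogis(theta - d)))
    colnames(X) <- paste0("item", 1:6)
    coefH(X)                        # Loevinger's scalability coefficients
    summary(check.monotonicity(X))  # check of nondecreasing item response functions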

Lotte Schuilenborg and Leonie Vogelsmeier introduce factor analysis to a broad audience. They describe the model, its assumptions, and model fitting in a nontechnical way, and they distinguish between exploratory and confirmatory factor analysis. Moreover, they discuss cross-validation and how it can be applied in factor analysis: the sample is randomly split into two subsamples, exploratory factor analysis is applied to one subsample, and the resulting model is tested with confirmatory factor analysis in the other subsample. The authors demonstrate factor analysis and cross-validation with Spearman's intelligence test data. They conclude that their two-factor solution differs from Spearman's one-factor model of intelligence. They recommend that consultants pay attention not only to the technical aspects of factor analysis but also to the substantive interpretation of its results.
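
The split-sample procedure might be sketched in R as follows (an editorial illustration with simulated data, not the authors' analysis of Spearman's data; the lavaan package with its cfa() and fitMeasures() functions is assumed for the confirmatory step):

    ## Simulate six variables loading on two correlated factors.
    library(lavaan)
    set.seed(3)
    n  <- 400
    f1 <- rnorm(n)
    f2 <- 0.3 * f1 + sqrt(1 - 0.3^2) * rnorm(n)          # factors correlate about .3
    X  <- data.frame(x1 = f1 + rnorm(n, sd = 0.7), x2 = f1 + rnorm(n, sd = 0.7),
                     x3 = f1 + rnorm(n, sd = 0.7), x4 = f2 + rnorm(n, sd = 0.7),
                     x5 = f2 + rnorm(n, sd = 0.7), x6 = f2 + rnorm(n, sd = 0.7))
    idx   <- sample(n, n / 2)                            # random split into two halves
    train <- X[idx, ]; test <- X[-idx, ]
    factanal(train, factors = 2, rotation = "promax")    # exploratory step on half 1
    model <- "F1 =~ x1 + x2 + x3
              F2 =~ x4 + x5 + x6"
    fit <- cfa(model, data = test)                       # confirmatory step on half 2
    fitMeasures(fit, c("cfi", "rmsea"))                  # fit of the cross-validated model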

Rogier Hetem and Bren Meijer discuss bootstrapping in regression analysis. They start with a description of the linear regression model. The model makes assumptions that are easily violated in practice, and violations may lead to incorrect confidence intervals and null hypothesis tests. The bootstrap is presented as the preferred method when the assumptions of a linear relation between the dependent and independent variables, independence of the residuals, or homogeneity of the residual variances are violated. The bootstrap can be applied to the residuals of the model or to participants' observed data; the authors focus on bootstrapping participants' observed data. They demonstrate how the bootstrap is performed in SPSS and R.
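
A base-R sketch of this case (observed-data) bootstrap follows (an editorial illustration, not the authors' SPSS or R code; the data-generating model, sample size, and number of bootstrap replications are arbitrary choices):

    ## Regression data with heteroscedastic residuals (violating homogeneity).
    set.seed(4)
    n <- 100
    x <- rnorm(n)
    y <- 2 + 0.5 * x + rnorm(n, sd = exp(x / 2))
    B <- 2000
    boot_slopes <- replicate(B, {
      idx <- sample(n, replace = TRUE)          # resample participants, not residuals
      coef(lm(y[idx] ~ x[idx]))[2]              # refit and keep the slope
    })
    quantile(boot_slopes, c(0.025, 0.975))      # percentile bootstrap 95% CI for the slope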

Don van den Bergh and Carmen Wolvius introduce readers to event history analysis. Survival analysis studies the time that elapses until an event (e.g., death) occurs, whereas event history analysis studies the timing of recurrent events (e.g., children's developmental stages). The dependent variable of event history analysis is the conditional probability (given the elapsed time) that an event (e.g., a developmental stage) occurs. This conditional probability is described by the hazard function and is predicted by one or more explanatory variables. Different models are described in the literature; the authors focus on the proportional hazard model. They simulate data and show how the model is applied in R.
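
A minimal sketch of such an analysis with the R package survival (an editorial illustration; the simulated data and parameter values are assumptions, not the authors' simulation):

    ## Simulate event times whose hazard depends on a binary covariate x,
    ## with independent random censoring.
    library(survival)
    set.seed(5)
    n <- 200
    x <- rbinom(n, 1, 0.5)
    t_event <- rexp(n, rate = 0.10 * exp(0.7 * x))   # true log hazard ratio = 0.7
    t_cens  <- rexp(n, rate = 0.05)
    time    <- pmin(t_event, t_cens)
    status  <- as.integer(t_event <= t_cens)         # 1 = event observed, 0 = censored
    fit <- coxph(Surv(time, status) ~ x)             # proportional hazard model
    summary(fit)                                     # exp(coef) estimates the hazard ratio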

 
