
Confronting Quasi-Separation in Logistic Mixed Effects for Linguistic Data: A Bayesian Approach

Published 31 Oct 2016 in stat.AP (arXiv:1611.00083v3)

Abstract: Mixed effects regression models are widely used by language researchers. However, these models are fit with iterative algorithms that may fail to converge on a solution. While convergence issues in linear mixed effects models can often be addressed through careful experiment design and model building, logistic mixed effects models introduce the possibility of separation or quasi-separation, which can cause model estimation to fail with convergence errors or to produce unreasonable estimates. These problems cannot be solved by experiment or model design. In this paper, we discuss (quasi-)separation with the language researcher in mind, explaining what it is, how it causes problems for model estimation, and why it can be expected in linguistic datasets. Using real linguistic datasets, we then show how Bayesian models can overcome convergence issues introduced by quasi-separation where frequentist approaches fail. On the basis of these demonstrations, we advocate adopting Bayesian models as a practical solution for convergence issues when modeling binary linguistic data.
