There has been considerable recent interest in methods of determining sample size for qualitative research a priori, rather than through an adaptive approach such as saturation. Extending previous literature in this area, we identify four distinct approaches to determining sample size in this way: rules of thumb, conceptual models, numerical guidelines derived from empirical studies, and statistical formulae. Through critical discussion of these approaches, we argue that each embodies one or more questionable philosophical or methodological assumptions, namely: a naïve realist ontology; a focus on themes as enumerable ‘instances’, rather than in more conceptual terms; an incompatibility with an inductive approach to analysis; inappropriate statistical assumptions in the use of formulae; and an unwarranted assumption of generality across qualitative methods. We conclude that, whilst meeting certain practical demands, determining qualitative sample size a priori is an inherently problematic approach, especially in more interpretive models of qualitative research.