
1 Methodological Background of Qualitative Content Analysis

The techniques of Qualitative Content Analysis have become a standard procedure of text analysis within the social sciences. In their bibliometric analysis of the Social Sciences Citation Index (SSCI, 1991–1998), Titscher et al. (2000) found Qualitative Content Analysis in seventh place (after Grounded Theory, Ethnography, Standardized Content Analysis, Critical Discourse Analysis, Conversation Analysis and Membership Categorization Device). On predominantly German-language databases (Psyndex, Sociofile, WISO-Social Science and MLA International Bibliography) they found qualitative, open content analysis in first place. A reason for this could be that it can be located between open hermeneutic approaches and quantitative measurement. Thus Hussy et al. (2010) discuss it as a hybrid qualitative–quantitative approach within the mixed methods framework.

But how could qualitative and quantitative methodologies come together? In the social sciences a “science war” has been diagnosed (Ross 1996). On the one hand stands a rigid positivist conception of research with quantitative, experimental methodology; on the other an open, explorative, descriptive, interpretive conception working with qualitative methods. Norman Denzin subtitled his Qualitative Manifesto (Denzin 2010) “a call to arms”, so for him it seems impossible to overcome the contradiction.

If we are looking at approaches to text analysis, we can differentiate the two positions as coming from different epistemological backgrounds (cf. Guba and Lincoln 2005):

  • The hermeneutical position, embedded within a constructivist theory, tries to understand the meaning of the text as an interaction between the preconceptions of the reader and the intentions of the text producer. Within the hermeneutical circle the preconceptions are refined and further developed in confrontation with the text. The result of the analysis remains relative to the reading situation and the reader.

  • The positivistic position tries to measure, record and quantify manifest aspects of the text. Those aspects can be detected automatically, and their frequencies can be analyzed statistically. The results of the analysis claim objectivity.

A strict adherence to either of these positions overlooks possible convergences: social constructivist theory formulates the possibility of agreement between different individual meaning constructions and thereby allows the concept of a socially shared, quasi-objective reality. Modern hermeneutical approaches try to formulate rules of interpretation, and by this the analysis gains objectivity. On the other hand, positivistic positions have been refined into post-positivism, or critical rationalism (Popper). Here only an approximation to reality is held to be possible, achieved through the critical efforts of researchers to refute hypotheses; again there is the notion of an agreement process in talking about reality rather than a naive copy of reality.

If there are possibilities of bringing together the opposing positions in the qualitative–quantitative debate, the floor is open for models of combination and integration, now discussed under the label of mixed methods (cf. Mayring et al. 2007; Creswell and Clark 2010). Qualitative Content Analysis tries to establish such a mixed methods approach to text analysis. We combine two fundamental steps of analysis: the first is a qualitative-interpretative step, following a hermeneutical logic, of assigning categories to text passages; the second is a quantitative analysis of the frequencies of those assignments (where the same categories are coded in several text passages) (cf. Mayring 2002).
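To make the two steps concrete, the following minimal Python sketch shows how category assignments that have been made interpretively by a researcher can afterwards be counted quantitatively. The passages and category labels are invented for illustration only.

from collections import Counter

# Hypothetical result of the qualitative-interpretative step: a researcher has
# read each passage and assigned a category label to it.
codings = [
    ("passage 01", "fear of failure"),
    ("passage 02", "joy of learning"),
    ("passage 03", "fear of failure"),
    ("passage 04", "social support"),
    ("passage 05", "joy of learning"),
    ("passage 06", "joy of learning"),
]

# Quantitative step: count how often each category has been assigned.
frequencies = Counter(category for _, category in codings)
for category, n in frequencies.most_common():
    print(f"{category}: {n}")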

2 Development and Definition of Content Analysis

As background to the procedures of Qualitative Content Analysis, we first define and characterize the basic ideas of (quantitative) Content Analysis. There is general agreement that the aim of content analysis is to analyze material derived from any kind of communication, but content analysis has not concerned itself solely with analyzing the content. On this point even the definition by the author of the first textbook on content analysis, Bernhard Berelson, is not precise: “Content analysis is a research technique for the objective, systematic, and quantitative description of the manifest content of communication” (Berelson 1952, p. 18). Not only description of content, but also formal aspects of communication and underlying meaning structures have become the object of analysis. Thus scripts of dialogues with psychotherapy patients are scrutinized for formal characteristics such as sentence corrections, incomplete sentences, word repetitions, “ers” and “erms”, etc., in order to register indications of a patient’s anxiety level (Pool 1959). Even American propaganda research during the Second World War, which was directed by Harold D. Lasswell and contributed significantly to the development of content analysis, did not restrict itself to the actual contents of communication. In fact, many analysts are altogether suspicious of the concept “content”, as they are more interested in the latent meanings than in the overt communicative content. Thus Budd et al. (1967) define as follows: “Content analysis is a systematic technique for analyzing message content and message handling” (p. 2), while George (1959) points in a different direction when he calls it “a diagnostic tool for making specific inferences about some aspects of the speaker’s purposive behaviour” (p. 7); or, again, in a more generalized form, Krippendorff (1969): “Content analysis may therefore be redefined as the use of replicable and valid methods for making specific inferences from texts to other states or properties of its source” (p. 11). As can be seen from this, content analysis has long ceased to concern itself solely with content. Pool (1959), in summary, identifies three objectives:

  • describing texts;

  • drawing inferences from texts to their antecedents;

  • drawing inferences from texts to their effects.

With this background two main techniques of quantitative content analysis have been developed. First, and primarily, frequency analyses and techniques derived from them. The simplest content-analytical procedure is to count certain elements in the material and compare their frequency with the occurrence of other elements. Of special importance here is the use of comprehensive category systems (so-called “dictionaries”), which are supposed to include all aspects of a text and form the basis for a computer count of language material. The General Inquirer (Stone et al. 1966) seems to have been the first attempt in this direction. Dictionaries now exist, for instance, for psychologically relevant issues (e.g. the Harvard Psychological Dictionary), the latest editions of which can be conveniently used on a PC (cf. Weber 1990). On this basis frequencies are computed and analyzed statistically (a minimal code sketch of such a dictionary-based count is given after the list below). The dictionary must also of course be able to recognize different grammatical forms of a word within the context of a sentence. This, however, can cause problems:

  • multiplicity of meaning (e.g. “madly” in the colloquial meaning, say, of “very”; or “madly” as pertaining to psychological disturbance);

  • the nuances and connotations conferred on terms by the context;

  • contextual modification of meaning (for instance in the case of “no anxiety”, “little anxiety” and “a lot of anxiety”, “anxiety” will be counted once in each case);

  • the contextual relationship of the term counted (e.g. with “I am afraid of X” or “X is afraid of me”, “afraid” is counted once in each case);

  • the problem of pro-forms (e.g. with “I didn’t notice any of that” the computer does not know what “of that” refers to);

  • dialect expressions (which occur regularly in interview transcripts) require a great deal of re-working.
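The following minimal sketch, with an invented mini-dictionary and invented sentences, illustrates both the dictionary-based counting itself and the kind of contextual problem listed above: the naive count registers “anxiety” in “no anxiety” just as it would in “a lot of anxiety”, and “she was afraid of me” is counted like “I am afraid of her”.

import re
from collections import Counter

# Invented mini-dictionary: category -> word forms to be counted.
dictionary = {
    "anxiety": ["anxiety", "anxious", "afraid"],
    "joy": ["joy", "happy", "glad"],
}

text = "I felt no anxiety before the exam. She was afraid of me. I am happy now."

tokens = re.findall(r"[a-z']+", text.lower())
counts = Counter()
for category, forms in dictionary.items():
    counts[category] = sum(tokens.count(form) for form in forms)

# "no anxiety" still increments the anxiety count, and the direction of
# "afraid" (who fears whom) is invisible to the count.
print(counts)   # Counter({'anxiety': 2, 'joy': 1})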

Several more problems could be added to the list. Attempts have in fact been made to check and control contextual influences of this kind (e.g. KWIC – Key Word In Context program, cf. Weber 1990). For this a list of the text passages within which a category was found, that is, the category in its different contexts, is drawn up for each concept or term counted. This, however, only makes it possible to recognize the problem, not to solve it. In any case, lists such as this are difficult to process with large quantities of text. One example of a more complex frequency analysis is the Gottschalk–Gleser Speech Content Analysis for the measurement of affective states (anxiety, aggressivity) (Gottschalk and Gleser 1969), which has also been adapted for the German language (Schoefer 1980).
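A key-word-in-context listing of the kind referred to above can be sketched as follows. This is a simplified illustration of the general idea, not the program discussed by Weber (1990); the example sentence is invented.

import re

def kwic(text: str, keyword: str, window: int = 4) -> list[str]:
    """List each occurrence of keyword together with a few words of context."""
    tokens = re.findall(r"[a-z']+", text.lower())
    lines = []
    for i, token in enumerate(tokens):
        if token == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"... {left} [{token}] {right} ...")
    return lines

text = ("I felt no anxiety before the exam, but a lot of anxiety afterwards, "
        "when the results were discussed in class.")
for line in kwic(text, "anxiety"):
    print(line)

Such a listing only exposes the context for human inspection; as noted above, it does not solve the interpretation problem.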

This brings us to the second group of tested techniques of content analysis: contingency analyses. The development of such techniques goes back above all to Charles Osgood (Osgood 1959). The objective here is to establish whether particular text elements (e.g. central concepts) occur with particular frequency in the same context and are connected with one another in any way in the text, that is, are contingent. The intention is that by discovering many such contingencies one may extract from the material a structure of text elements associated with one another. Examples of this are the classical contingency analysis of Osgood (1959) or semantic field analysis (Weymann 1973).
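The basic logic of a contingency analysis can be illustrated with a short sketch: assuming that central concepts have already been identified for each text unit (the units and concepts below are invented), one counts how often two concepts occur within the same unit.

from collections import Counter
from itertools import combinations

# Invented data: concepts already identified per text unit (e.g. per paragraph).
concepts_per_unit = [
    {"teacher", "anxiety"},
    {"teacher", "anxiety", "exam"},
    {"friends", "joy"},
    {"exam", "anxiety"},
]

# Count how often two concepts appear in the same unit, i.e. are contingent.
pair_counts = Counter()
for concepts in concepts_per_unit:
    for pair in combinations(sorted(concepts), 2):
        pair_counts[pair] += 1

for (a, b), n in pair_counts.most_common():
    print(f"{a} - {b}: {n}")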

However, there are fundamental criticisms of quantitative content analysis to the extent that, today, one can say that the methodology discussion has reached a point of stagnation. An increasing number of critical voices describe the technique as inadequate and unable to fulfill requirements. The joke about “discontent analysis” can be heard with increasing frequency. Koch et al. (1974), for example, tested six fairly recent journalistic content analyses from German-speaking countries according to customary standards of quality. From them, content analysis gets a bad report: “If conclusions are drawn on the basis of the work reviewed here, then it must be stated that up to now no one has succeeded in developing a handy instrument for describing and analysing news publications with the help of content analysis” (Koch et al. 1974, p. 83). Manfred Ruehl also denies that content analysis has a chance of achieving “social-scientific status capable of gaining general acceptability” (Ruehl 1976, p. 377). It achieves only superficial polish through quantitative techniques, and has pushed the problem of sense and meaning to one side, he argues. “The results of content analysis remain highly pseudo- and parascientific … as long as content analysts do not know how to equip their scientific criteria better for methodological testing” (Ruehl 1976, pp. 376–377). The fact that the quantification approach and orientation to manifest content tend to sidestep the problem of what language symbols actually mean is reason enough also for Ingunde Fuehlau to declare that content analysis is a failure: “This is why content analysis, if pursued strictly according to its own tenets, must inevitably lead to distorted results. If the method was stringently applied (which actually is almost never really the case) it must either produce irrelevant descriptions of the subject — albeit in a very “objective” manner — or, on the other hand, meaningful descriptions of communication content, to which, however, if judged according to its own criteria, it can only assign a highly subjective value. In either case, therefore, it fails as a method” (Fuehlau 1978, pp. 15–16; cf. also Fuehlau 1982).

3 Basics of Qualitative Content Analysis

Qualitative Content Analysis tries to retain the strengths of quantitative analysis and against this background to develop techniques of systematic qualitatively oriented text analysis. The following points are central:

3.1 Embedding of the Material Within the Communicative Context

A particular advantage of the content-analytical procedure as compared with other approaches to text analysis is the fact that it has a firm basis in the communication sciences. The material is always understood as relating to a particular context of communication. The interpreter must specify to which part of the communication process the conclusions from the material analysis are meant to relate. This content-analytical particularity should be retained at all costs for qualitative content analysis, because many quantitative content analyses have neglected this point. The text is thus always interpreted within its context, i.e. the material is examined with regard to its origin and effect.

3.2 Systematic, Rule-Bound Procedure

Preserving the systematic procedure of content analysis is one of the main concerns of the methods suggested here. Systematic procedure in this connection means, first and foremost, orientation towards rules of text analysis laid down in advance. Several points need to be made in this regard. The establishing of a concrete procedural model of analysis is of central importance. Content analysis is not a standardized instrument that always remains the same; it must be fitted to suit the particular object or material in question and constructed especially for the issue at hand. This is laid down in advance in a procedural model (examples of such models will be found below) which defines the individual steps of analysis and stipulates their order. But it is also continually necessary to establish additional rules. It is an axiom precisely of content analysis, in contrast to “free analysis”, that every analytical step and every decision in the evaluation process should be based on a systematic and tested rule. Finally, the systematic quality of content analysis is reflected also in its method of “dissection” or line-by-line analysis rather than a more holistic interpretation.

The definition of content-analytical units (coding units, context units, recording units) should in principle be retained also in qualitative analysis. Concretely this entails deciding in advance how the material is to be approached, which parts are to be analyzed in what sequence, and what conditions must obtain in order for a coding to be carried out. In the process of inductive category formation it can be useful to keep such content-analytical units very open-ended. Despite this, however, the process here also is characterized by dissection of the material carried out progressively from one passage to the next. Certainly, it is precisely this last point which has frequently been criticized by some proponents of the qualitative approach. Latent structures of meaning cannot be revealed in this way, they say. One answer to this, in the case of such an analytical objective, is to define the units more broadly. Nevertheless, it is important that such units be theoretically well founded, in order to allow other analysts access to the logic and method of the analysis. The system should be described in such a way that another interpreter may carry out the analysis in a similar way.
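As an illustration of how such unit decisions can be fixed in advance, the following sketch records them in a small configuration object. The field names and example values are our own and only stand for the kind of decisions described above; they are not a prescribed standard.

from dataclasses import dataclass

@dataclass
class AnalysisUnits:
    """Explicit, rule-bound decisions about the content-analytical units."""
    coding_unit: str     # smallest text component that may receive a code
    context_unit: str    # largest text component that may be consulted for a coding decision
    recording_unit: str  # portion of the material that is coded in one pass

units = AnalysisUnits(
    coding_unit="a clause expressing a learning emotion",
    context_unit="the complete answer to one interview question",
    recording_unit="one interview transcript",
)
print(units)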

3.3 Categories as the Focus of Analysis

The category system is the central point in quantitative content analysis. Even with qualitative analysis, however, an attempt should be made to concretize the objectives of the analysis in category form. The category system constitutes the central instrument of analysis. It also contributes to the inter-subjectivity of the procedure, helping to make it possible for others to reconstruct or repeat the analysis. In this connection qualitative content analysis will have to pay particular attention to category construction and substantiation. However, precious little help is given in this respect by standard works on content analysis. Krippendorff thus writes: “How categories are defined … is an art. Little is written about it” (Krippendorff 2004, p. 76). That of course is unsatisfactory. It is precisely the methods described in this work which may be of further assistance in this regard.

On this point also, some proponents of qualitative analysis make the objection that orientation to categories entails an analytically dissecting procedure which impedes more holistic comprehension of the material. In answer to this it can be said that qualitative content analysis also provides methods which accord prominence to synthetic category construction, that is, where the category system actually constitutes the findings of the analysis. This is the case for inductive category formation procedures and summarizing content analysis (see below). On the other hand, working with a category system is an important contribution to the comparability of findings and the evaluation of analysis reliability.

3.4 Object Reference in Place of Formal Techniques

On the other hand the methods of qualitative content analysis should not simply be techniques to be employed anywhere and everywhere. The alliance with the individual object of analysis is an especially important concern. This is seen in the fact that the procedures discussed here are oriented to the way language material is ordinarily experienced and dealt with in everyday life. The three basic techniques of summary, explication and structuring (see Sect. 4) are based on it. This clearly demonstrates that it is the object of analysis which is paramount. The methods are not intended to be conceived of as techniques which can be blindly and automatically transferred from one object to the other. The appropriateness of the method must be demonstrated with regard to the particular material in each individual case. This is why the methods suggested here must themselves always be adapted to suit the individual study.

3.5 Pilot Testing of the System of Categories and the Content Analytical Rules

Qualitatively oriented content analysis does not use fully standardized instruments. The category system and the related content-analytical rules are usually developed for the specific material with respect to the specific research question. At first this is a disadvantage compared with quantitative research, and it is why the methods should be tested in a pilot study. After working through a substantial part of the material the coder is asked to stop coding and to revise the category system and the coding rules: are they adequate to the material and the research question? If a revision is made as a consequence, the coding process has to start again from the beginning. In the procedural models (see below) these steps are represented by feedback loops. What is important here is that the trial runs are also documented in the research report.

3.6 Theory-Guided Character of the Analysis

It must by now have become clear that qualitative content analysis is not a rigidly delineated technique, but a process in which new decisions regarding basic procedure and individual stages of analysis constantly have to be made. What are such decisions based upon? In qualitatively oriented research it is repeatedly stressed that here theoretical arguments must be used. Technical fuzziness is compensated for by theoretical stringency. This applies above all to the explication of the particular issue, but it also concerns detailed analyses. Theory-guidedness means that in all procedural decisions systematic reference is made to the latest research on the particular subject and on comparable subject fields. In qualitative content analysis content-related arguments should always be given preference over procedural arguments.

3.7 Integrating Quantitative Steps of Analysis

As has already been emphasized above, efforts are made to combine qualitative and quantitative methods. Putting it more exactly, the chief task is to determine those points in the analytical process at which quantitative measures can be sensibly brought in. Reasons for their use should then be carefully explained and the results should be analyzed in detail. Quantitative steps of analysis will always gain particular importance when generalization of the results is required. In case study procedures it is important to show that a certain case recurs in similar form with particular frequency. But within content-analytical category systems, too, registration of how often a category occurs may give added weight to its meaning and importance. Of course, this must be given adequate justification in the respective case. A precisely based qualitative assignment of categories to a certain material (e.g. through the structuring method, cf. below) can also be supplemented by more complex statistical evaluation techniques, as far as these are appropriate to the purpose of analysis and suited to the object involved.
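As one illustration of such a quantitative supplement, the sketch below cross-tabulates category frequencies for two groups of cases and applies a chi-square test. The frequencies are invented, the chi-square test is only one of many conceivable evaluation techniques, and the sketch assumes the scipy package is available.

from scipy.stats import chi2_contingency

# Invented coding frequencies for two groups of cases.
#                 group A   group B
observed = [
    [18, 7],   # category 1: coded 18 times in group A, 7 times in group B
    [9, 14],   # category 2
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")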

3.8 Quality Criteria

It is precisely because here the harsh methodological standards of quantitative content analysis have been softened and applied more flexibly in some respects that the assessment of results according to quality criteria such as objectivity, reliability and validity is especially important even in qualitative content analysis. For quantitative content analysis it is inter-coder agreement which is of particular significance. Several content analysts work on the same material independently of one another and their findings are compared. In general this should also be attempted with qualitative content analysis, although negative findings do not necessarily have to lead to the immediate abandoning of the analysis. Here the main point, again, is to understand and interpret unreliabilities. Such a search for sources of error is especially important during the pilot phase, as it can lead to the instruments of analysis being modified. That is to say, it can lead to inquiry into arguments for reliability and validity while the process of analysis is actually going on, instead of leaving this exclusively to a single assessment at the close of the analysis.
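To give an idea of how such an agreement check can look in practice, here is a minimal sketch computing simple percentage agreement and Cohen's kappa between two coders. The kappa coefficient is one common choice, named here as our own addition; the codings of the two coders are invented.

from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Share of passages to which both coders assigned the same category."""
    hits = sum(a == b for a, b in zip(coder_a, coder_b))
    return hits / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders."""
    n = len(coder_a)
    p_observed = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))
    return (p_observed - p_expected) / (1 - p_expected)

# Invented codings of the same ten passages by two independent coders.
coder_a = ["joy", "joy", "anxiety", "boredom", "joy", "anxiety", "joy", "boredom", "joy", "anxiety"]
coder_b = ["joy", "anxiety", "anxiety", "boredom", "joy", "anxiety", "joy", "joy", "joy", "anxiety"]

print(f"agreement = {percent_agreement(coder_a, coder_b):.2f}")
print(f"kappa     = {cohens_kappa(coder_a, coder_b):.2f}")

Negative findings from such a check should, as argued above, first lead to an interpretation of the sources of disagreement and, if necessary, to a revision of the instrument.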

4 Basic Procedures or Techniques of Qualitative Content Analysis

From an analysis of common qualitatively oriented text analysis techniques (cf. Mayring 2010a, b) we can show that they can be reduced to three fundamental forms of interpretation: summary (text reduction), explication and structuring:

  • Reducing procedures: The object of the analysis is to reduce the material such that the essential contents remain, in order to create through abstraction a comprehensive overview of the base material which is nevertheless still an image of it.

  • Explicating procedures: The object of the analysis is to provide additional material on individual doubtful text components (terms, sentences, …) with a view to increasing understanding, explaining, interpreting the particular passage of text.

  • Structuring procedures: The object of the analysis is to filter out particular aspects of the material, to give a cross-section through the material according to pre-determined ordering criteria, or to assess the material according to certain criteria. In those procedures the categories are formulated in advance in the sense of a deductive category assignment.

These basic forms, however, must be further differentiated before an exact description of procedures is possible. In addition to the usual summaries, the same reductive process is useful for inductive category formation: a criterion for the categories is defined, and aspects meeting this criterion are gathered step by step from the material. Forms of explication are possible which use the textual context for the elucidation of a particular text passage (narrow context analysis); however, the most common method of hermeneutical interpretation is to use further material beyond the textual context for explication (broad context analysis). With structuring, too, subgroups must be distinguished. The categories which are brought deductively to the material can consist of a list of aspects (nominal scale), or the categories form an ordinal scale (e.g. more – less) and serve as a rating procedure for the text. In addition, some mixed procedures have been described (Mayring 2010a, 2013). In content structuring or theme analysis, for instance, the material is first assigned deductively to categories, and within the material of each category an inductive process of category formation is then performed. Type analysis is a similar procedure in which the categories in the first step have to meet a typologizing criterion (typical types, extreme types, frequent types, theoretical types). In category refinement a deductive category system is modified and supplemented with new categories in an inductive way. Parallel forms execute several procedures in a single pass through the material.

Through this differentiation we arrive at ten distinct forms of analysis:

Reductive:

(1) Summary

(2) Inductive category formation

Explicating:

(3) Narrow context analysis

(4) Broad context analysis

Structuring/Deductive:

(5) Nominal categories

(6) Ordinal categories

Mixed:

(7) Content structuring

(8) Type analysis

(9) Category refinement

(10) Parallel forms

These procedures have been described extensively elsewhere (Mayring 2010a, 2013). We now demonstrate two central techniques of Qualitative Content Analysis: inductive category formation and deductive category assignment (structuring).

4.1 Inductive Category Formation

On the basis of summarizing qualitative content analysis, a technique for inductive category formation can be developed. We have seen that category definition is a central step in content analysis, a very sensitive process, “an art” (Krippendorff 2004). There are two possible procedures: deductive category definition tries to develop categories out of theoretical considerations, with theories or theoretical concepts operationalized towards the material; inductive category formation develops categories directly out of the material. For qualitative content analysis the second is very fruitful. The inductive process has great importance within qualitative research. It aims at a true description without bias due to the preconceptions of the researcher, an understanding of the material in terms of the material. Inductive category formation is a central process within the approach of Grounded Theory (Strauss 1987; Strauss and Corbin 1990), where it is called “open coding”. Its proponents developed many rules of thumb for open coding and recommended a systematic, line-by-line procedure. For content analysis, nevertheless, inductive category formation has to be more systematic, and it can use the same logic, the same reductive procedures, as summarizing content analysis. The following process model (Fig. 13.1) will now be explained.

Fig. 13.1 Process model of inductive category formation

Within the logic of content analysis, the level or theme of the categories to be developed must be defined beforehand. There has to be a criterion for the selection process in category formation. This is a deductive element and is established within theoretical considerations about the subject matter and the aims of analysis. The second basic content-analytical rule for inductive category formation is the establishment of the level of abstraction. This comes from summarizing content analysis, which reduces the material from one abstraction level to the next. If this level is not defined, the categories remain chaotic and out of order. After those two rules are determined, the material is worked through line by line. The first time material fitting the category definition is found, a category has to be constructed; a term or short sentence which stays as close as possible to the material serves as the category label. The next time a passage fitting the category definition is found, it has to be checked whether it falls under the previous category, in which case it can be subsumed under this category (a reductive process); if not, a new category has to be formulated.
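The loop structure of this procedure can be sketched in code. The decisions themselves (does a passage meet the selection criterion? does it fall under an existing category?) are interpretive acts of the researcher, so they appear below only as crude placeholder functions; the passages are invented and the sketch illustrates the loop, not an automation of the interpretation.

def fits_selection_criterion(passage: str) -> bool:
    """Placeholder for the researcher's judgement: does the passage address the defined theme?"""
    return "learn" in passage.lower()

def subsumes(category_label: str, passage: str) -> bool:
    """Placeholder for the researcher's judgement that a passage falls under an existing category."""
    overlap = set(category_label.lower().split()) & set(passage.lower().split())
    return len(overlap) >= 2

def formulate_category(passage: str) -> str:
    """Placeholder: the researcher formulates a label staying close to the material."""
    return passage.strip().rstrip(".")

categories: dict[str, list[str]] = {}  # category label -> coded passages

passages = [
    "Happy that I finally understood the learning material.",
    "Learning with my friend was fun today.",
    "I was bored during the bus ride.",  # does not meet the selection criterion
    "Happy about learning something new in physics.",
]

for passage in passages:
    if not fits_selection_criterion(passage):
        continue
    for label in categories:
        if subsumes(label, passage):          # reductive step: subsume under an existing category
            categories[label].append(passage)
            break
    else:                                     # no fitting category found: formulate a new one
        categories[formulate_category(passage)] = [passage]

for label, coded in categories.items():
    print(label, "->", len(coded), "passage(s)")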

After working through a good deal of the material (c. 10–50 %), no new categories are usually to be found. This is the moment for a revision of the whole category system. It has to be checked whether the logic of the categories is clear (e.g. no overlaps) and whether the level of abstraction is adequate to the subject matter and the aims of analysis. Perhaps the category definition has to be changed. If there are any changes in the category system, the complete material of course has to be worked through once again. After this analysis we have a set of categories on a specific topic, connected with specific passages in the material. The further analysis can go different ways: the whole system of categories can be interpreted in terms of the aims of the analysis and the theories used, or the links between categories and passages in the material can be analyzed quantitatively (e.g. we can look at which categories occur most frequently in the material).

For example, in a study on learning emotions (Glaeser-Zikuda and Mayring 2003) we analyzed open-ended interviews and daily diaries on concrete learning experiences of 24 eighth-grade students. With inductive category formation we built up categories concerning positive learning experiences. We generalized those categories at a medium level of abstraction. Here are the most frequent categories:

C1: Happy about the interesting learning activities today (21 occurrences)

C2: Happy to master the subject and having understood everything (21 occurrences)

C3: Amazing subjects in the lesson (literature, poems) (16 occurrences)

C4: Enjoyed the positive feedback by the teacher (14 occurrences)

C5: Nice group work or partner work (11 occurrences)

C6: Interesting problems (electricity) in the lesson (3 occurrences)

Those inductive categories give an impression of positive emotions in learning processes. In a second step we identified two main categories within this list: positive emotions about the learning processes (C1, C2, C4, C5) and positive emotions about the learning content (C3, C6). We then compared the occurrences of those main categories between the two groups of high and low achievers and found a correlation between positive emotions about learning processes and high classroom achievement. This suggests that, for good teaching, it is more important to associate positive emotionality with successful learning processes than with learning content.

4.2 Deductive Category Assignment (Structuring)

This is the content-analytical method which is probably most often used. It has the goal of extracting a certain structure from the material. This structure is brought to bear on the material in the form of a category system. All text components addressed by the categories are then extracted from the material systematically. If one wishes to describe the structuring procedure quite generally, a few points are especially important: the fundamental structuring dimensions must be exactly determined; they must derive from the research question and must be theoretically based; these structuring dimensions can be further subdivided, split up into individual features or values; the dimensions and values are then brought together to form a category system.

The particular categorization of a given material component is something that must be determined precisely. A procedure for this, based on everyday processes of categorization, has proven useful (cf. Ulich et al. 1985). Within developmental psychology (the learning of categories in speech development) and within general psychology (categorization theories, cf. Murphy 2002) it has been shown that categories are represented in the form of explicit definitions, prototypes and demarcation rules. So a category can be defined best if all three determining approaches are used (a minimal sketch of such a coding guideline as a data structure follows the list below):

  • Definition of the categories

    It is precisely determined which text components belong in a given category.

  • Anchor samples

    Concrete passages belonging in particular categories are cited as typical examples to illustrate the character of those categories.

  • Coding rules

    Where there are problems of delineation between categories, rules are formulated for the purpose of unambiguous assignment to a particular category.
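Such a coding guideline can also be recorded as a simple data structure, which makes it easy to document and revise. The following sketch is our own illustration; the category contents are invented and only exemplify the three components named above.

from dataclasses import dataclass

@dataclass
class Category:
    """One entry of the coding guideline for deductive category assignment."""
    label: str
    definition: str             # which text components belong to this category
    anchor_examples: list[str]  # typical passages illustrating the category
    coding_rules: list[str]     # rules for delineation against neighbouring categories

coding_guideline = [
    Category(
        label="joy: much",
        definition="Explicit statements of strong positive affect related to the lesson.",
        anchor_examples=["I was really happy when I solved the problem on my own."],
        coding_rules=["Code only if the positive affect refers to the lesson itself, "
                      "not to the break or to events outside school."],
    ),
    Category(
        label="joy: some",
        definition="Mild or implicit positive affect related to the lesson.",
        anchor_examples=["The group work was quite nice."],
        coding_rules=["If the intensity is unclear, prefer 'some' over 'much'."],
    ),
]

for category in coding_guideline:
    print(category.label, "-", category.definition)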

Test extracts are taken from the material to check whether the categories are at all applicable and whether the definitions, anchor samples and coding rules make category assignment possible. This trial run-through, like the main run-through proper, is subdivided into two steps of operation. First of all the text passages in the material are marked in which the category concerned is addressed. These “points of discovery” (cf. Hausser et al. 1982) can be marked by noting the category number in the margin of the text or through differently colored underlinings in the text itself. In the second step the material thus marked is processed in accordance with the structuring intention (see below) and copied out of the text. As a rule this trial run-through results in a revision and partial reformulation of the category system and its definitions. Now the main material run-through can finally begin, again split up into the two stages of marking the points of discovery and extracting and processing them. This general description of a structuring content analysis can be shown in a procedural model (Fig. 13.2).

Fig. 13.2 Process model of deductive category application (structuring)

To further explain the procedure for all techniques of Qualitative Content Analysis, rules of interpretation have been formulated. Those step-models and content analytical rules are explained in detail in Mayring (2010a, 2013). Here we just demonstrate the idea of rule-orientated text analysis.

In the above-mentioned study on learning emotions (Glaeser-Zikuda and Mayring 2003), we developed a category system with 3-point ordinal scales (much – some – no) for four central learning emotions: joy, interest, anxiety and boredom. We established a coding guideline containing definitions, anchor examples and coding rules for those 12 categories. Each student was coded with one value (much – some – no) for each of the four emotions. We divided the sample into high and low achievers and compared the emotion results. In the material (interviews, learning diaries) of high achievers, significantly more joy was coded (p < 0.05; Mann–Whitney U = 42.00) and more interest (p < 0.05; Mann–Whitney U = 12.50), but no significant difference in boredom was found.
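A group comparison of this kind can be computed with standard statistics software. The following sketch uses invented ordinal codings (no = 0, some = 1, much = 2) for the joy category and scipy's Mann–Whitney U test; the numbers are purely illustrative and are not the data of the study.

from scipy.stats import mannwhitneyu

# Invented ordinal codings for the category "joy": no = 0, some = 1, much = 2.
high_achievers = [2, 2, 1, 2, 1, 2, 2, 1, 2, 2, 1, 2]
low_achievers = [1, 0, 1, 1, 0, 2, 1, 0, 1, 1, 0, 1]

u, p = mannwhitneyu(high_achievers, low_achievers, alternative="two-sided")
print(f"Mann-Whitney U = {u:.2f}, p = {p:.4f}")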

To conduct a qualitative content analysis (inductive or deductive) it is very helpful to use computer software, because most texts today already exist as data files and because we are normally handling large amounts of text. Several programs are available (Computer Assisted Qualitative Data Analysis Software, CAQDAS) and it is possible to use them for Qualitative Content Analysis, even if they are more oriented towards Grounded Theory. But it is not easy to implement all content-analytical steps and procedures adequately with them. In the meantime we have therefore developed dedicated open access software which supports exactly the steps of qualitative content analysis (www.qcamap.org).

5 Final Appraisal of the Qualitative Content Analysis

First, let us compare the procedures of Qualitative Content Analysis with similar approaches of qualitatively oriented social science text analysis (cf. Mayring 2010b).

Within media analysis, David Altheide (1996) has developed a procedure (“ethnographic content analysis”) working with deductive categories (codes), which were refined in the process of analysis. Then he summarizes the results for each category. This has similarities with our approach but is not so rule-oriented as Qualitative Content Analysis. In the USA there exists an approach coming from quantitative content analysis which is called Codebook Analysis (Neuendorf 2002). It is a deductive category application procedure, which defines all categories in the codebook and gives examples from the text. But this definition is not so systematic as the coding scheme (definitions, anchor examples and coding rules) in our procedure. In some ways similar is Thematic Text Analysis (Stone 1997), which looks in the text for central themes, using theoretical preconceptions or empirical word frequencies and word contingencies. In both cases Qualitative Content Analysis defines the procedure more precisely. The related concept of Theme Analysis covers more free, phenomenological procedures (Meier et al. 2008). Some similarities can be found between Qualitative Content Analysis and text analysis following Berg (2004). He describes deductive (“analytic”) and inductive (“grounded”) categories which have to be defined explicitly, but it remains unclear how this has to be done.

In comparison with those text-analytical approaches, Qualitative Content Analysis seems to be the broadest (describing a wide set of different procedures) and the most exact (prescribing clear step models and analytical rules). Thus Steigleder (2008), after a practical test of Qualitative Content Analysis, comes to the conclusion that “it has proven its worth in many studies. With its different techniques of analysis and its methodological concept it is excellently adapted to analyse qualitatively collected material” (Steigleder 2008, p. 197). But it should not be argued that Qualitative Content Analysis is the only legitimate text analysis procedure. Which procedure should be chosen depends on the concrete research question and the quality of the material. If the strict category relatedness and rule orientation of Qualitative Content Analysis were to neglect important deeper aspects of the material (e.g. repressions in the sense of psychoanalysis), other procedures (e.g. psychoanalytical text interpretation) would be more adequate.