Introduction

The International Business literature predominantly deals with an empirical and analytical research agenda. The focus is on quantitative methodologies, pursuing well-defined research problems with rigorous empirical investigations (Yang/Wang/Su 2006). We argue that research in International Business (IB) often deals with dynamic and volatile situations that demand creative and flexible research designs and methodologies (Ghauri/Grønhaug 2005, McDonald 1985). Many scholars suggest exploratory research and qualitative methodologies to capture multi-dimensional phenomena (Anderson 1983, Yin 2003) and non-linear, sometimes fuzzy, patterns of our realities (Firat 1997, Sinkovics/Penz/Ghauri 2005). These perspectives are supported by methodological arguments that qualitative methodologies can help to find the “meaning behind the numbers”, provide flexibility without requiring large samples (Sykes 1990) and offer a clear and holistic view of the context (Denzin/Lincoln 1994, Ghauri/Grønhaug 2005, Ruyter/Scholl 1998). Moreover, driven by the globalisation of markets, production, and diverse business environments, there is an increasing emphasis on comparative empirical research methodology (Cavusgil/Das 1997, Knight/Spreng/Yaprak 2003, Sekaran 1983). As a result, cross-national and/or cross-cultural perspectives which emerged in the psychological literature (e.g., Berry 1989, Pike 1966, Triandis/Berry 1980) were quickly adopted in other fields and extended to areas such as Management, International Business and Marketing (see e.g., Cavusgil/Das 1997, Peng/Peterson/Shyi 1991, Steenkamp/Baumgartner 1998, Earley/Singh 1995) (Footnote 1). However, despite calls for more integrative research, and attempts to break down the ‘positivist-epistemological’ divide, the adoption of qualitative methodologies in IB is still scarce (Parkhe 1993, Peterson 2004, Yang/Wang/Su 2006). It is against this background that we endeavour to consider methodological issues for qualitative International Business research, particularly research building on text narratives from interviews.

Problem and Purpose

The literature offers a vast collection of methods for qualitative inquiry (Bickman/Rog 1997, Denzin/Lincoln 1994, Miles/Huberman 1994, Peterson 2004, Strauss/Corbin 1994, Van Maanen 1983), and these methods appear particularly suitable for research where multiple actors and environments are involved. Current standards of qualitative data analysis, however, are often considered less rigorous, a half-formulated art (Miles 1979). Researchers acknowledge the need to analyse qualitative data and establish meaning in a systematic way. To make qualitative IB research a viable source of knowledge generation and dissemination, researchers are encouraged to systematize, regularize, and coordinate the work of observation, recording, and analysis (Ghauri/Grønhaug 2005, Miles 1979). This is particularly important in the International Business field, where coordinating multi-cultural research teams and integrating their joint efforts aggravate the challenges.

This paper is concerned with fundamental principles of research quality, particularly with how to make qualitative research findings more trustworthy. Some researchers suggest the adoption of alternative terminology and procedures for the qualitative research process. Strictly speaking, quantitative criteria such as objectivity and validity are not deemed applicable to qualitative inquiry (Denzin/Lincoln 1994, Lincoln/Guba 1984). Hence, to establish “trustworthiness” of qualitative research, credibility, dependability, transferability and confirmability need to be established. The purpose of this paper is to suggest specific research strategies for dealing with qualitative data, especially data that stem from interviews. We advocate formalised procedures of gathering, analysing and interpreting qualitative data and discuss these issues in view of the emergence of computer assisted qualitative data analysis software (CAQDAS). Although formalisation and the aim to establish trustworthy research results do not necessarily presuppose CAQDAS, we maintain that its application enhances the trustworthiness and thus the quality of qualitative inquiry. Out of a range of available CAQDAS packages, we decided to use N*Vivo. It not only helps researchers and managers in their pursuit to systematize and organise their work (Marshall 2001) but also offers group features (Richards 2000), which are particularly helpful in the coordination of IB research (Mangabeira/Armstrong/Sprokkereef 1996, Peterson 2004).

The contribution of this paper is on two levels. We contribute conceptually and methodologically, and the aim is to stimulate discussion in the field. We first develop a conceptual framework which links standards and stages in the analysis of interview-based data. We then exemplify our conceptual and methodological thinking by introducing a project on knowledge management. Therein company managers from various international consulting companies communicate their views on practices and procedures of knowledge sharing and knowledge management.

Adopting Formal Criteria for Interview-based Qualitative IB Research

Qualitative research may include multiple methods, such as case studies, ethnography and participant observation, grounded theory, and biographical and participative inquiries (Strauss/Corbin 1994). Within the field of qualitative IB research, the case study methodology is the most prevalent method (Pauwels/Matthyssens 2004). Similarly, there is a range of specific methods for collecting empirical material, such as interviewing, observational techniques and semiotic analysis. The analysis of text-based in-depth interviews is the most widely employed methodology for firm-level IB research, as published material in major journals (DuBois/Reeb 2000) such as the Journal of International Business Studies (JIBS), Management International Review (MIR), Journal of World Business (JWB) and International Business Review (IBR) reveals.

The Relevance of the Emic-Etic Discussion for Qualitative Research in International Business

IB research transcends political or cultural boundaries and is therefore inherently comparative in nature. A tension exists, however, regarding cross-cultural research traditions and the fundamental understanding of how to deal with comparative issues. Some scholars propose to work intensively within a single cultural context in order to discover and comprehend indigenous phenomena, while others advocate extensive research across cultures in order to produce results that are valid in international contexts (Berry 1989). The basic split in orientations to research stems from work in cross-cultural anthropology by the linguist Kenneth Pike (1966), who coined the terms “emic” and “etic” using the suffixes of the terms phonemic and phonetic. In linguistic analysis these terms distinguish sound structure, as analysed by a linguist (phonetics), from the meaning of the sounds to the native speaker (phonemics) (Morey/Luthans 1984, Pike 1966).

The terms “emic” and “etic” have since developed to denote general research orientations which were long understood to be dichotomous and contrasting views rather than equally applicable. Emic research centres on the native, that is, the insider’s view of reality. Thus, the emic approach emphasises phenomena which occur in a particular culture by using only concepts employed in that culture (Buckley/Chapman 1997). In contrast, etic designates the orientation which is taken by outside researchers. Behaviours and phenomena are described using external criteria which are imposed by the researcher. While researchers adopting an emic approach may obtain a very accurate within-culture description, often by employing qualitative techniques, they will not be able to compare emic results obtained in one culture with emic results from another culture. Emic research, which builds on subjectivist, idiographic, qualitative and insider perspectives (Morey/Luthans 1984), suggests designs that are not necessarily comparative in nature. Etic researchers, on the other hand, impose universal categories on their data and can, therefore, make comparisons (Davidson et al. 1976). This imposition of universal categories, however, may prove to be difficult because by choosing particular categories researchers might miss the most important aspects of the phenomena which they originally intended to study. The challenge of obtaining observations that are both adequate within the cultural description of a phenomenon and cross-culturally comparable has been described as the emic-etic dilemma (Davidson et al. 1976).

The dilemma is more pronounced in cross-cultural research, and attempts have been made to overcome the tension between these traditions. Conceptually, Berry’s (1989) “imposed etics-emics-derived etics” operationalisation, which basically presents a sequence of reasonable and necessary research steps for intercultural research, provides a bridging link. Buckley/Chapman (1997) contribute to the debate by exploring the usefulness of ‘native categories’ in management research. By native they mean categories that are self-generated and valid in the local context. They suggest that, in the search for objectivity, western management studies have often overemphasised rational, positivist perspectives. Contrasting interpretivist approaches, and thus native categories, can in fact be seen as vital steps towards adequate positivist research (Buckley/Chapman 1997). However, it appears as if the theoretical discussion has failed to inform methodology and research practice (Lonner 1999). It is therefore not surprising to learn from reviews of cross-cultural research that an overwhelming majority of published work (up to 93 percent) used “imposed-etic” designs (Jackson/Niblo 2003). This inevitably leads to ethnocentric cross-cultural comparisons and biases towards mostly Western perspectives. Moreover, while the conceptual underpinnings of the emic-etic dilemma relate equally well to quantitative and qualitative research, most of the work is effectively quantitative.

We argue that IB research should take more emic (i.e., subjectivist/qualitative/insider) perspectives, which could then be translated into etic (i.e., objectivist/quantitative/outsider) terms and used as valuable input for further studies. Both approaches are, however, considered complementary to each other and provide essential building blocks for IB research. We strongly believe that advocating one approach over the other is not beneficial, but rather a hindrance to the development of the field.

Bias and Equivalence – Relevant Concepts in Qualitative Research

Building on the choice of location on the emic-etic continuum, qualitative research needs to further consider two closely related concepts, equivalence and bias (Poortinga 1989). From a conceptual perspective, these two concepts lie at opposite ends of the same continuum. Measurements are considered equivalent when they are unbiased. A bias is therefore indicated by the presence of factors that challenge the validity of cross-cultural comparisons. Only when bias is absent and measures are equivalent can we compare scores; otherwise no useful comparison can be made (Brislin/Lonner/Thorndike 1973). Van de Vijver/Poortinga (1997) discuss three different biases and common causes for them, namely construct bias, method bias and item bias (Footnote 2). Construct bias can occur when there is an incomplete overlap of definitions of the construct across cultures. Measurement of this dimension by means of questionnaires or open-ended questions will therefore necessarily reveal different response patterns. Poor sampling of all the relevant behaviours may be another cause of construct bias. The Chinese Culture Connection (1987), for instance, raised concern about the incorporation of Western psychological knowledge into non-Western systems. Their criticism was subsequently incorporated into Hofstede’s cultural measure through the inclusion of the “Confucian Dynamism” dimension, which reflects a dynamic, future-oriented mentality (Hofstede/Bond 1988).

Method bias may develop out of different social desirability levels or differential response styles such as extremity scoring and acquiescence. Hui/Triandis (1989), for instance, found that Hispanics exhibit a stronger tendency for extreme checking on Likert scales than non-Hispanics. Drawing on this finding, it can be suggested that individuals from different countries will also demonstrate different response patterns in qualitative interview settings. Language proficiency and the language dynamics of cross-cultural researchers are another pragmatic concern, especially in interview situations (Piekkari/Welch 2004).

What van de Vijver and Poortinga (1997) call ‘item bias’ refers to poor item translations or inadequate item formulation. For our purposes, this is more appropriately termed stimulus bias. Biases can be introduced by wrongly translated stimulus questions or by the inappropriate use of contextual explanations, which might trigger different associations in respondents and ultimately lead to fundamentally different response statements.

Salzberger, Sinkovics, and Schlegelmilch (1999) look specifically into the equivalence aspect. They present a conceptual model which builds on Churchill and Iacobucci’s (1995) hierarchical view of relationships and stages in the empirical research process. Four stages are identified: (a) problem definition, (b) data collection, (c) data preparation and (d) data analysis. Within each of these stages equivalence issues are pertinent and have to be addressed accordingly in order to ensure comparability and, consequently, reliable and valid results. At the problem definition stage, equivalence of research topics represents the minimum requirement for meaningful comparisons across national or cultural borders. The international researcher needs to assess whether a phenomenon under investigation serves the same function in another cultural context.

At the data collection stage, qualitative researchers need to look into equivalence of research methods, research units and research administration in order to legitimately contrast findings. Usually researchers will aim to collect the qualitative data in the same way, e.g. by means of face-to-face interviews or observational methods. Recent developments in marketing research, for instance, involve the computerisation of focus groups, whereby group members interact with each other using specially designed software rather than through a moderator (Kiely 1998). However, while this approach may be well suited to tap into relevant focus group members in one country, it may be difficult to employ across borders due to technological and infrastructural differences (Craig/Douglas 2005) or language differences (Welch/Piekkari 2006).

As far as data preparation is concerned, measures to ensure equivalence involve the equal handling of qualitative interview responses, systematic and standardised coding across all cross-cultural groups and the development of coherent code-sets, which can be facilitated by the use of text-analysis programmes.

CAQDAS – Making Qualitative Analysis more Trustworthy

Many software tools are available for qualitative researchers, ranging from simple free-form text-storage and retrieval products such as askSAM and FileMaker Pro, or text-counting and sorting packages such as TextPack PC (Zuell/Landmann/Geis 2002), to advanced and state-of-the-art systems such as Atlas.ti, C-I-SAID, Decision Explorer, Ethnograph, HyperResearch, N*Vivo, Nud*ist, MaxQDA and the open-source programme Weft-QDA. The latter products allow for the grouping and linking of concepts by building on features such as code-banks, master-lists and family trees. Despite this abundance of software tools for qualitative researchers, their use is not widespread.

Many researchers express concerns related to the potential theoretical, political and methodological costs of computer use in qualitative research (Coffey/Holbrook/Atkinson 1996, Jack/Westwood 2006). There are also suggestions that the ease with which text can be coded in qualitative text analysis programmes and subsequently be incorporated into statistical software packages such as SPSS or SAS offers an inherent temptation to quantify qualitative research. Hesse-Biber (1996, p. 25) notes that qualitative research might be “colonized” by the reliability and validity criteria of quantitative research, thereby unleashing “Frankenstein’s monster”. This may give way to the “industrialisation” of the qualitative research process (Harbison/Myers 1959), where neopositivist methodology is suggested to be universally applicable. The epistemological consequences of the adoption of such methods “that fit the requirements for objectivity, neutrality and the separation of subject and object” (Jack/Westwood 2006, p. 484) are suggested to lead to decontextualisation and a superficial attempt to legitimise and depoliticise procedures that do not serve the qualitative research community well. As Jack and Westwood (2006) argue, “(Qualitative) research methods are not innocent: they are political” (Jack/Westwood 2006, p. 482). There is also concern that CAQDAS, by reinforcing and even inflating the tendency towards the code-and-retrieve process that underpins most approaches to qualitative data analysis, may result in the fragmentation of the textual materials on which researchers work and may distract researchers from providing creative input (Bryman/Bell 2003, Burgess 1996, Fielding/Lee 1998).

We do not share these concerns and believe that the danger of methodological biases and distortions arising from the use of certain software packages is exaggerated (Kelle 1997). In fact, CAQDAS facilitates transparency in the dialogue between researcher and textual data, and therefore improves confirmability through external audit (Maxwell 1997). We further support the idea that CAQDAS enhances creative views on data, which by its very nature requires “flexibility, fluidity and continual renegotiation” (Easterby-Smith/Thorpe/Lowe 2002). Moreover, while CAQDAS separates data organisation from researcher creativity, it ultimately increases creative freedom. Miles and Huberman (1994) argue that CAQDAS facilitates the organisation and analysis of large volumes of data and therefore potentially overcomes limitations and weaknesses associated with qualitative research. The use of CAQDAS thus formalises the way in which researchers can look at their text-data, thereby reinforcing methodological rigour.

This formalisation also serves as a mechanism to overcome anecdotalism in qualitative research (Silverman 1985). Some researchers have a tendency to use quotations from interview transcripts or field notes with little relationship to the prevalence of the underlying phenomenon which they are aiming to discuss. The formalised, collective negotiation of codes among multiple, international researchers not only makes coding and retrieval much faster and more efficient, it also enhances the transparency of the process of conducting qualitative data analysis and supports theory building which transcends descriptive-interpretative research.

An important operational issue which exemplifies this point arises in the effort to coordinate multi-lingual research teams. Mangabeira, Armstrong, and Sprokkereef (1996) argue that teams working on a project may experience problems and challenges in coordinating the coding of text when different people are involved in this activity. However, some analysis software, such as N*Vivo, offers appropriate features to cope with the international dimension of research; N*Vivo also features a software supplement that allows text merging (Richards 2000). Hence, research teams can merge their work and continue analysis in the combined project or ask questions across the whole merged project.

Overall, it is argued that the use of CAQDAS such as N*Vivo provides procedural advantages compared to traditional means of text analysis and ultimately helps in the formalisation of processes which contribute to more reliable research findings. While evidently there are tremendous advantages in the management of textual data and record keeping, the major contribution of CAQDAS is the application of standards, coding and searching procedures (King/Keohane/Verba 1994).

Adopting Formal Criteria for Qualitative Research

The traditional concepts of reliability and validity are universally accepted as playing key roles in the evaluation of rigour in research (Nunnally 1978). Quantitative researchers devote much of their attention to these fundamental concerns in their empirical examinations. However, reliability and validity have a somewhat uncertain place in the repertoire of the qualitative methodologist (Armstrong et al. 1997). Some researchers argue that these dimensions are grounded in a different paradigmatic view and are therefore not directly applicable to qualitative research. This perspective has not always served the qualitative research community well, because some qualitative research has proven inefficient and the uncertainty regarding validity standards has inhibited a clear evaluation of the merits and achievements of well-done qualitative research (Borman/LeCompte/Goetz 1986).

Against this background, some have sought to apply the concepts of reliability and validity to the practice of undertaking qualitative research (Kirk/Miller 1986, LeCompte/Goetz 1982, Lincoln/Guba 1984, Denzin 1994) and have proposed alternative terms and ways of assessing qualitative research, such as credibility, transferability, dependability and confirmability. Credibility is defined by Guba and Lincoln (1989) as being parallel to internal validity. It focuses on establishing a match between the constructed realities of respondents and those realities represented by the researcher(s). Transferability is considered parallel to external validity or generalisability in quantitative research. It depends on the degree to which salient conditions overlap or match (Crawford/Leybourne/Arnott 2000). Dependability is a criterion which is considered equivalent to reliability and is similarly concerned with the stability of the results over time. Confirmability is to qualitative research what objectivity is to quantitative research. Researchers need to demonstrate that their data and the interpretations drawn from them are rooted in circumstances and conditions outside the researchers’ own imagination and are coherent and logically assembled (Ghauri 2004).

Ultimately, the issue of validity in qualitative research “is … a question of whether the researcher sees what he or she thinks or thinks what he or she sees” (Kirk/Miller 1986), so that there is evidence in the data which describes clearly how the data were interpreted. Good scientific research requires explicit, codified, and public methods to generate and analyse data (Merton 1949). This is a truism for quantitative research but is equally important for qualitative research: the procedures and methods applied are meant to be public. If the method and logic of a researcher’s observations and inferences are left implicit, the scholarly community has no way of judging the validity of what was done. Learning from applied methods or attempting to arrive at similar results requires transparent processes (King/Keohane/Verba 1994).

Table 1 below builds on these discussions and offers a framework for enhancing the quality of International Business research. The rows of the table represent commonly acknowledged stages in a research process (see e.g., Churchill/Iacobucci 2005, Ghauri/Grønhaug 2005, Lee 1999, Salzberger/Sinkovics/Schlegelmilch 1999, Yin 2003); the columns point out standards and schemes (Potter 1996) which need to be addressed to enhance quality. Lee (1999, p. 146) argues that “the ideas of reliability and validity apply equally well to both [qualitative and quantitative research]”. This is why we use both the traditional terminology and the terms (credibility, dependability, transferability and confirmability) proposed by Guba and Lincoln (1989) and Denzin (1994). Our argument also holds that programs such as N*Vivo can in fact help to add rigour to the qualitative research process. This corresponds to the thinking of, e.g., Richards and Richards (1994), although others have critically disputed the use of computers (see e.g. Marshall 2001 for a good review of the arguments about CAQDAS as rigorous or rigid tools). The rightmost column in Table 1 focuses on the outcome, the research report. By making the analytic logic transparent and addressing the issues listed in the research report, researchers will greatly enhance the research outcome and the trust and confidence in its findings.

Table 1. Enhancing the Trustworthiness of Qualitative Research in International Business – Linking Standards and Stages

Methodology

To develop our discussions further, we use an empirical example from a knowledge-management study. N*Vivo software is used to guide the discussion of rigour and quality issues in an international context. The section headings follow the conceptual discussion in Table 1, enriched with results from the empirical knowledge management context.

“Getting started” – Conceptual Context for the Practical N*Vivo Application

We deem that the knowledge management study provides a timely and appropriate context for highlighting rigour and trustworthiness issues in international business research. With respect to credibility and confirmability, our study built on established theories and leading literature in the field. Knowledge management as a topic of managerial decision making has become increasingly popular for international companies (e.g., Bennett/Gabriel 1999, Buckley/Carter 2000, 2002, Mudambi 2002). The role of knowledge resources in developing and maintaining competitive advantages has been addressed in the literature (see e.g., Davenport/Prusak 1998, De Geus 1988, Kim 1993, Prahalad/Hamel 1990, Stata 1990), as have the flows and transfer of knowledge within multinationals (e.g., Foss/Pedersen 2002). In particular, key references on knowledge management (e.g., Bennett/Gabriel 1999, Buckley/Carter 2000, 2002, Mudambi 2002) and on human resource management and motivational aspects (e.g., Hislop 2003, Kim 1993, MacNeil 2003) with regard to ‘knowledge workers’ were included to gain further insight (Ardichvili/Page/Wentling 2003, Kubo/Saka 2002, Osterloh/Frey 2000, Osterloh/Frost/Frey 2002, Smith/Rupp 2003). However, with respect to the individual level of knowledge management, there is a shortage of empirical work on employees’ willingness to share knowledge. Touching on the issue of transferability, we emphasize that the concept of knowledge sharing needs to be studied in more depth and breadth. We start by assuming that the issue of knowledge sharing is related to the mechanics of interpersonal relations between managers and employees. The way in which managers and employees interact, the specific organisational and environmental context as well as social factors will impact on knowledge sharing. For firms such as IT consultancies, where knowledge, knowledge-sharing and its dissemination are crucial, management targets behaviour indirectly through norms, values, and culture. These depend on social relations, identity formation and ideology rather than on the behaviour of employees itself; it is therefore hard to measure the output of such management styles (Hislop 2003, Kärreman/Alvesson 2004). It is commonly agreed that the success of knowledge management rests on the willingness of employees to share their knowledge. However, people demonstrate an inherent resistance to doing so (Hislop 2003).

The empirical part of this study addresses this gap in research on knowledge-management and interpersonal relations and practically applies conceptual and methodological considerations on qualitative IB research. A knowledge-based industry context is chosen to empirically investigate the company-manager-employee interactions where a socio-ideological form of control is expected to manage the so-called knowledge-worker.

Research Design and Development of Research Questions

In order to facilitate the evaluation of the empirical grounding and credibility of our qualitative study, we followed Corbin/Strauss’ grounded theory approach (1990, 1998) as a general methodological road map. They suggest canons, procedures, and evaluative criteria for research that trigger a dynamic, yet interrelated, process of data collection and analysis. This process harmonizes well with our goal to inductively explore the interpersonal relationships in MNCs’ knowledge sharing without relying on established theoretical assumptions about knowledge sharing.

The grounded theory approach fits well with CAQDAS, as N*Vivo supports “[…] new modes of interaction and organization using methodology that is attentive to issues of interpretation and a process not binding itself too closely to longstanding assumptions” (Suddaby 2006, p. 633). The software is furthermore beneficial in terms of storing and organising large amounts of data in different formats and sizes. It enables creative and informed discussion among the researchers involved, who come from different cultural backgrounds.

Appropriate research questions were formulated to guide the research process, and care was taken to control for research bias (Footnote 3). The following research questions were developed from existing research on knowledge management and interpersonal relations:

  • What are managers’ perspectives of knowledge management in international companies?

  • How does the relation between manager/s and employee/s shape the management of knowledge? How do managers deal with motivation, rewards and mistakes?

  • What is the relation between the company (its culture, goals, etc.) and its human capital (managers, employees), as perceived by the managers?

Sample and Context

We concentrated on textual data which were collected through semi-structured interviews, observations, and company information. Initially, two companies were selected and studied. At the outset, three managers from two international consulting companies were approached via telephone or personally to participate in the project (which contributes to confirmability). The companies were selected on the basis of size, geography (Footnote 4) and their core businesses. Particular attention was given to international companies with a foothold in more than two technology markets, as their extended geographical reach suggested the implementation of certain procedures regarding knowledge management.

External validity was assessed by contrasting manager responses from different company sectors. According to the emerging theory, additional samples were selected (six managers in four companies) and studied in order to confirm or disconfirm aspects and conditions under which the model holds, making use of theoretical sampling, “which means that the investigator examines individuals who can contribute to the evolving theory” (Creswell 1997, p. 155). Subsequent interviews were therefore conducted with managers in similar companies (all consultancies) with complementary backgrounds. In order to strengthen credibility, we followed a theoretical sampling approach, and further companies were added to the interview list to probe for differences. One Italian manager from a non-technology based consultancy was interviewed because it was expected that his view on the relational aspect of knowledge management would differ. Overall, to report on dependability, nine interviews were conducted, involving companies in three different countries: four Austrian, two Italian and three German companies.

Data Collection

To ensure functional and conceptual equivalence, we had to find out whether knowledge management served the same function in the companies. Therefore, information about the target companies and their international subsidiaries (e.g., from their websites, media releases and presence at events on knowledge management) was collected and analysed regarding their understanding and application of knowledge management. This aspect was also considered in the data collection stage: the interview guideline began with a question on how knowledge management was seen in the company and what it involved. While all three managers from the same consultancy talked about information, the others used the term knowledge. The storage of information and knowledge enabled the support of employees and the creation of contacts among them. To sum up, the concept was understood and defined very similarly; slight differences were due to the responsibilities of the respective managers and the overall area of business. When developing the measurement instrument, its validity, i.e. the usefulness of the instrument (Nunnally 1978), was a concern. In practical terms, before applying the interview guideline we tested it by interviewing managers in two countries (Austria and Italy) and established construct validity (credibility).

With respect to data collection, we standardised the process in such a way that equivalence of research methods, units and administration was ensured. Cross-cultural issues were discussed in two extensive training meetings, and it was felt that the interviewers were satisfactorily equipped with the knowledge needed. Interviewers were informed about the managers’ background and position within the company (Welch et al. 2002) and briefed on language aspects (Marschan-Piekkari/Welch/Welch 1999). A laddering-type interview process (Grunert/Grunert 1995) was encouraged to facilitate the clarification of issues, the verification of interpretations of answers during the interview, and persistence in following up on emerging topics and themes (Arksey/Knight 1999, Kvale 1996, Lee 1999, Rubin 1995, Strauss/Corbin 1998).

Interview Agenda

Interviews took place from March 2003 to March 2004. Respondents were recruited from top and middle management and were responsible for knowledge management. With respect to the physical context, this was standardised to the managers’ offices, and the interviews lasted between one and two hours each. English, German and Italian were used. In order to guarantee dependability (repeatability), a protocol was followed. Interviews were tape recorded, digitized and transcribed in their original language. Additionally, each interviewer provided a short summary of the interview in English as a separate document (chain of evidence).

Specific Steps and Questions in Data Collection

Three proposed tactics were applied in order to guarantee construct validity (Lee 1999), namely (1) multiple sources of evidence, (2) chain of evidence, and (3) feedback to key informants (Yin 2003).

  1. Multiple sources of evidence: Interviewees’ comments, observations of the interview setting, and contextual factors (physical factors such as building, entrance, etc.) were noted. The infrastructure of the office was observed, taking note of the behaviour of employees within the building. Visual materials, such as advertisements and rules and regulations for employees on the respective company websites, were also included in the analysis. Using all these multiple sources of evidence increased the construct validity. In addition, researchers used memos to document the relevance of the collected information to the overall research question. Each researcher was instructed to rate the sources according to the concept being studied. The comments were then discussed and a common rating of the operational measures used was established.

  2. Establish a chain of evidence: As suggested by Yin (2003) and Lee (1999), a “chain of evidence” was applied to establish construct validity, i.e. assuring a logical, sequential process which can be reconstructed and followed by an external auditor. (a) When selecting companies, researchers were advised to make field notes about the contact situation, e.g. how many times they had to call before talking to the manager, what the atmosphere and first impression were like, and how the interview was scheduled. (b) Interviews were then conducted and transcripts prepared in the original languages (English, German, and Italian). The interviewers continued to write memos about the visits, which were later linked to the initial observations. (c) The observations of the researchers were an important part of the project, since they helped in developing an idea about the relationship between managers and employees even before the interviewees’ answers were analysed. Hence, multiple elements of information converged and indicated high construct validity.

  3. Feedback to key informants: The third tactic was to let key informants review the researchers’ preliminary reports and observations. Researchers were guided through the analytical schemes by a protocol, as suggested by Yin (2003), which increased the dependability of the study and facilitated the repeatability of procedures. The protocol was developed by the researcher team using N*Vivo’s modelling and memo functions and included study objectives, specific procedures and interview guidelines. Later, the memo was used as a rough guide for the documentation of the researchers’ experience in the field.

Data Analysis

The data analysis process involved the formalised steps of (i) organising, (ii) coding (data reduction), (iii) searching, and (iv) modelling and interpretation.

(i) Organising steps. A new project was created in N*Vivo, including rich textual material from the nine in-depth interviews in their original languages, written observations as recorded by the interviewers, and company information. Visual material such as company pictures and ads – illustrative examples of company culture – was also included. Three researchers formed the core team of analysts, each a native speaker of one of the languages used (English, German, and Italian). However, all of them were also knowledgeable in the other two languages, which was important for developing a common coding scheme.

One additional interview with an Italian company was conducted to allow for intra-country comparisons regarding specific questions such as the relation between managers and their employees. This procedure also enabled validity checks and served as an indicator of transferability. Section attributes were used within N*Vivo to organise company characteristics (see multiple sources of evidence above). Hence, quantitative information was included alongside the qualitative data, which provided overall insights into company background and organisational structure.
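
To make this organising step more concrete, the following minimal sketch (in Python rather than N*Vivo, with hypothetical document identifiers and attribute labels) illustrates the underlying idea of storing background attributes alongside each qualitative source, so that material can later be selected and compared by attribute:

```python
from dataclasses import dataclass, field

@dataclass
class SourceDocument:
    """One unit of raw material (interview transcript, observation memo, company info)."""
    doc_id: str
    text: str
    language: str           # "en", "de" or "it"
    doc_type: str           # "interview", "observation" or "company_info"
    attributes: dict = field(default_factory=dict)  # background/quantitative attributes

# Hypothetical entries mirroring the kind of material described above
project = [
    SourceDocument("int_at_01", "... transcript text ...", "de", "interview",
                   {"country": "Austria", "sector": "Consulting", "employees": 250}),
    SourceDocument("int_it_01", "... transcript text ...", "it", "interview",
                   {"country": "Italy", "sector": "E-Business", "employees": 80}),
]

# Attribute-based selection, e.g. all Italian material for intra-country comparison
italian_docs = [d for d in project if d.attributes.get("country") == "Italy"]
print([d.doc_id for d in italian_docs])
```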

(ii) Coding (data reduction). The collected data were subsequently coded, which was probably the most crucial step in the analytical process. The coding process is an ongoing interpretation and examination of textual data from different perspectives, and it depends on the number of researchers involved. Figure 1 illustrates the coding process for the multinational research team working in English, German, and Italian.

Figure 1. Coding Process

Two coding strategies were used: (a) a-priori and (b) a-posteriori categorisation of data. We started out with a-priori categorisation, which concerned the development of English categories prior to actual data collection, based on theory, literature and exploratory interviews. A-posteriori categorisation followed the data collection in Austria, Germany and Italy. Empirical indicators were developed based on the multilingual data and used in subsequent analysis stages.

Interview languages, as well as initial codes, were retained. As illustrated in Figure 1, the coding process started off with the team of three researchers, each of them fluent in a different language (English, German, and Italian). However, the codes were later transformed and merged into English (see Figure 1) as the common analysis language. This facilitated further analysis and comparability. It also augmented reliability, because categorisation decisions were discussed in English, a language that suited all researchers and participants in the project. The derived categorisation scheme was continually monitored and updated with the co-analysts. This derived “etic” approach safeguarded against the danger of a purely uniform coding scheme and facilitated the identification of country specificities and the equivalence of data.
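
The following minimal sketch illustrates, under simplifying assumptions and with hypothetical code labels (not the study’s actual code-set), the mechanics of merging initial codes assigned in different languages into a common English analysis scheme; codes without an agreed mapping are flagged for team discussion rather than silently translated:

```python
# Original-language code labels mapped onto a common English master scheme.
# All labels are hypothetical illustrations.
MASTER_CODES = {
    "Wissensaustausch": "knowledge sharing",
    "condivisione della conoscenza": "knowledge sharing",
    "flache Hierarchie": "flat hierarchy",
    "gerarchia piatta": "flat hierarchy",
    "knowledge sharing": "knowledge sharing",  # English codes are kept as-is
}

def harmonise(coded_segments):
    """Map per-language codes to the common English scheme.

    Codes without an agreed mapping are kept unchanged and reported back,
    so that the team can discuss them and extend the master list.
    """
    harmonised, unmapped = [], set()
    for segment_text, code in coded_segments:
        master = MASTER_CODES.get(code)
        if master is None:
            unmapped.add(code)
            master = code
        harmonised.append((segment_text, master))
    return harmonised, unmapped

segments = [
    ("Wir teilen Wissen über das Intranet.", "Wissensaustausch"),
    ("La gerarchia è piatta.", "gerarchia piatta"),
]
result, to_discuss = harmonise(segments)
print(result, to_discuss)
```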

Decisions about the size of the node system required us to find a balance between breadth and depth. The node system is a function of the stage of the research process and evolves over time (Marshall 2002). Hence, each text-section was analysed with more scrutiny, following (a) open, (b) axial and (c) selective coding processes, as suggested in the literature (Miles/Huberman 1994). Open coding is usually used for the discovery of categories and the identification of new concepts. In this stage of the categorisation process, each researcher freely added categories, which were discussed in a meeting afterwards. About 130 nodes were created initially by open coding. Axial coding applies categories and concepts to empirical data. Here, categories are related to their subcategories and intersections of related categories are identified. The objective of axial coding is to add depth to categories. As a result, nodes which expressed the same concepts were merged and the number of main nodes and sub-nodes was reduced to eight nodes (Footnote 5). Finally, (c) selective coding, the process whereby categories are integrated and refined in order to build a theory, was applied. Therein concepts were established and statements used to explain the phenomenon of interest. The textual data were reduced and, as suggested by Lee (1999) and Strauss and Corbin (1998), a desirable level of abstraction was reached for our research.
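
As a rough illustration of this reduction logic (the open codes and groupings below are hypothetical, not the study’s actual code-set of about 130 open codes), the sketch shows how open codes can be grouped into a handful of main nodes through an axial mapping and then integrated around a core category in the selective step:

```python
# Hypothetical open codes created freely by the researchers
open_codes = [
    "intranet use", "document templates", "lessons-learned database",
    "trust between colleagues", "informal networks", "team spirit",
    "rewards for sharing", "fear of losing expertise",
]

# Axial coding: relate open codes to a smaller number of parent categories
axial_map = {
    "intranet use": "knowledge infrastructure",
    "document templates": "knowledge infrastructure",
    "lessons-learned database": "knowledge infrastructure",
    "trust between colleagues": "interpersonal relations",
    "informal networks": "interpersonal relations",
    "team spirit": "interpersonal relations",
    "rewards for sharing": "motivation to share",
    "fear of losing expertise": "motivation to share",
}

main_nodes = sorted(set(axial_map[c] for c in open_codes))
print(f"{len(open_codes)} open codes -> {len(main_nodes)} main nodes: {main_nodes}")

# Selective coding: integrate the main nodes around one core category
core_category = "willingness to share knowledge"
theory_sketch = {core_category: main_nodes}
print(theory_sketch)
```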

The next step was to theorise about the content by means of (iii) search processes. N*Vivo enables researchers to trigger virtually unlimited searches. The software responds immediately with those text sequences which are related to the keywords. Our study was geared towards the company-manager-employee relation and aimed to examine managers’ perceptions regarding knowledge management. Consequently, we used N*Vivo’s search function to examine nodes such as “importance of employee”, “communication”, “utility of knowledge”, etc. We also generated matrix intersections of different nodes and companies and displayed the results as nodes.

Figure 2. Example of a Matrix Intersection (N*Vivo output)

Note: The first column displays the nodes (“building up networks”, etc.); column codes are 1=E-Business, 2=Consulting, 3=Electronic Equipment, 4=Computer. Numbers in the cells indicate the number of documents in which the respective node was identified.

All documents were included in the analysis and the results were displayed as a node. Figure 2 shows the number of documents in which the nodes were found, separated into the four fields (1 = E-Business, 2 = Consulting, 3 = Electronic Equipment, 4 = Computer). In addition, the number of characters coded or the number of coding references can be displayed. Areas with high frequencies are automatically highlighted. For companies specialising in E-Business, network-building was seen as a crucial element of knowledge management. Consulting companies stressed the importance of harvesting employees’ knowledge and diffusing it within the organisation.
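
The logic behind such a matrix intersection can also be sketched outside N*Vivo; the example below, using hypothetical documents, sectors and nodes, counts for each node the number of distinct documents per sector in which it was coded:

```python
import pandas as pd

# Hypothetical coding records: which node was coded in which document/sector
codings = pd.DataFrame([
    {"doc": "int_01", "sector": "E-Business",           "node": "building up networks"},
    {"doc": "int_02", "sector": "E-Business",           "node": "building up networks"},
    {"doc": "int_03", "sector": "Consulting",           "node": "harvesting knowledge"},
    {"doc": "int_03", "sector": "Consulting",           "node": "building up networks"},
    {"doc": "int_04", "sector": "Electronic Equipment", "node": "supporting employees"},
])

# One row per node, one column per sector; cells = number of distinct documents
matrix = (codings.drop_duplicates()
                 .groupby(["node", "sector"])["doc"]
                 .nunique()
                 .unstack(fill_value=0))
print(matrix)
```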

N*Vivo search results were also used to export frequency and response tables into SPSS, thus enabling simple descriptive analysis and further formalising structural results from the data. Searches and matrix intersections were also used to explore cross-country differences. Results revealed that only for one Italian consultant were the issues ‘knowledge management’ and ‘team orientation’ linked together. His response highlighted issues such as his company being a ‘people’s company’, a ‘spirit of togetherness’ within the company and the notion that ‘closer alignment with the international goals’ might threaten to make the workplace ‘less enjoyable’. In all the other statements, knowledge management was closely linked to issues of organisational structure, ‘flat hierarchy’ and the conflict between ‘technicians and management’. It appeared that these perspectives, although probably limited in their generality, pointed out differences pertaining to the different national contexts involved in the study.
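
As a simple illustration of this export step (with hypothetical node labels and counts), a frequency table of nodes by country can be written to a CSV file, which SPSS or any other statistics package can then read for descriptive analysis:

```python
import pandas as pd

# Hypothetical node frequencies per country (counts of coded documents)
freq = pd.DataFrame(
    {"Austria": [4, 3, 1], "Germany": [3, 2, 0], "Italy": [2, 1, 2]},
    index=["knowledge management", "flat hierarchy", "team orientation"],
)
freq.index.name = "node"

# Write a CSV file that SPSS (or any statistics package) can import
freq.to_csv("node_frequencies.csv")
print(freq)
```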

The final step in the N*Vivo analysis, (iv) modelling and interpretation, helped in the interpretation of the data and the building of models using documents, nodes, attributes and memos (see Figure 3). This supported the generation of nodes and facilitated the visual construction of a nodal system. Furthermore, it assisted in the development of a categorisation scheme which could be used in the subsequent coding process. Modelling was also instrumental for the conceptualisation of ideas that arose during the ongoing coding and search process. Modelling helped to design the project and supported the graphical exploration of the research process. Finally, visual representation helped to overcome codification or language problems within the researcher team. Models in N*Vivo were “living organisms”, i.e. evolving, continually changing, refined by the researchers and updated according to the research progress.

Figure 3. Example of a Model in N*Vivo

Note: Bullet points refer to the nodes of the node system. Lines between the nodes indicate relations which were drawn by the research team after discussing the node system. Numbers in brackets refer to the organisation of the node in the node system

Discussion

Having completed the six-step qualitative research process (see Table 1), we conclude that the researchers’ interpretation of the interpersonal relationships between managers and employees confirms existing knowledge. Knowledge sharing is part of the management of knowledge in international companies. Most managers see knowledge management as a means of ‘storing information’. Although this perspective was confirmed in our study, the analysis also points to the overarching importance of the relational dimension.

Relationship dynamics between managers and subordinates determine the amount and level of knowledge “sharing”. Technical awareness is considered a pre-requisite for making useful contributions to knowledge management systems. Given that pre-requisite, however, relationships between employees can be facilitated, the development of informal networks between colleagues assisted, and communication enhanced. In consultancies, some managers view employees as a “group of actors” who have to be controlled and should come together to solve problems. Conversely, managers in electronic equipment/computer companies wanted their employees to be supported by the knowledge-management system.

Overall, no best way of dealing with knowledge and its management could be identified. The management of knowledge depends strongly on the culture of the company (MNC), the cultures within which the individuals lead their lives and the way people in a company interact with each other. To account for these issues, cultural differences and similarities were integrated into the research process. In practical terms, this was relevant to the way managers were approached and how they reacted to the questions. The study also revealed that Austrian and Italian managers tend to think more in terms of flat organisations and not so much about controlling employees by means of knowledge management. In their view, the workplace should be enjoyable, and knowledge management could support that.

The grounded theory-driven approach and the use of computer software helped in three ways: (a) To convert tacit/implicit knowledge of researchers undertaking the interviews into explicit knowledge, which could be recorded, transcribed and analysed, (b) to convert explicit knowledge such as hierarchical dimensions in organisations and sector-differences (known from the outset of interviews) into implicit knowledge, exploited within the dynamic interview situation. Finally, (c) the continuous and updated dialogue between researchers, their analysis and their data (Strauss/Corbin 1994) helped to solidify qualitative procedures and improve their overall trustworthiness and quality.

Conclusions and Future Outlook

Qualitative International Business research deals with interactions of geographically dispersed organisations and consumers. The age of globalisation and the advent of technological innovation pose tremendous challenges for researchers and practitioners. Given the fundamentally different methodological and conceptual paradigms of International Business compared to “domestic” management studies, it is not surprising that established and structured methodologies and practices have not always been applied successfully (McDonald 1985). To this end, quality concerns regarding high levels of subjectivity and low reliability are understandable and have contributed to a somewhat sluggish adoption of qualitative methodologies in the field. However, the criticism is mostly a matter of “good housekeeping” (Marshall 2001), which we believe can easily be addressed.

In this paper we proposed a framework of quality standards which should be addressed during the qualitative research process, particularly during the analysis of textual data. If researchers properly adhere to concerns of credibility/validity, dependability/reliability, transferability/generalisability and confirmability/objectivity during the six-stage qualitative research process, their inherent analytic logic will become transparent and their research reports will gain in quality and trustworthiness. To this end, by following formalised procedures for organising textual analysis and addressing specific challenges of IB research, such as bias and equivalence, it is possible to counter criticism of qualitative research and in fact promote more qualitative research in International Business.

The use of software programmes (e.g., N*Vivo), which some researchers consider to inhibit creativity and to colonize qualitative research with the rigorous criteria of quantitative research (Hesse-Biber 1996), is not seen as a danger here. On the contrary, we argue that computer software supports researchers in the analytical process of coding and analysing textual data, makes data easily accessible to collaborators and thus strengthens the credibility, replicability and substance of research results. Lee (1999) and Yin (2003) both encourage the use of good protocols to substantiate qualitative research. The framework in this paper adds to their arguments and is specifically concerned with the logic and transparent documentation of the research report (see Table 1).

The empirical context provided in the paper relates to an ongoing study on knowledge management. Given that the purpose of this paper was to have a discussion on a methodological level, the documentation of this particular project has been truncated and data analysis is only provided to illustrate methodological considerations. Notwithstanding this limitation, we promote the idea that formalised processes have the potential to make qualitative inquiry of textual data more logical, transparent and trustworthy.

Our example demonstrated that CAQDAS such as N*Vivo is able to store large amounts of data and facilitates the transparency of the research process. Interview-based texts and any other textual and non-textual material can be made available for a multi-lingual, multi-cultural research team. Existing material can be incorporated easily and provides a comprehensive basis for search queries. Aspects which might not seem relevant at first glance are stored together with the raw data, which increases the comparability of data. From an international perspective, N*Vivo helps researchers to understand the research contexts from which the data originated, because both the original language and the derived interpretation system are retained. The use of protocols and memos by researchers supports the process by facilitating constructive discussion and interpretation. N*Vivo offers options to organise textual data in a transparent way; it offers outline views and highlighting options. To this end, there is the potential for unexpected insights which may emerge from re-contextualising material and from a live, computer-facilitated dialogue with the node system, codes and search processes.

Future research and any attempts to push the frontiers of knowledge largely depend on replications of research in other contexts under similar and/or different circumstances. We hope, therefore, that enhanced qualitative International Business research will encourage other researchers to build on existing qualitative research methodologies, extend original findings and ultimately bridge the traditional qualitative-quantitative divide.