

High quality empirical research in strategy may either report empirical regularities or aim to test theory. In both cases, high quality research is transparent in data presentation and analysis. In the case of theory-testing research, high quality work uses data that conform closely to the theory to be tested as well as empirical designs and methods that generate valid inferences.


Empirical analysis continues to be the mainstay of the strategy field. This reflects both the field’s origins in the practice of strategic management and its status as an applied social science field. We see two broad, appropriate uses of empirical analysis in strategy research.

The first role of empirical analysis is to identify interesting facts related to organizational performance and its antecedents—in other words, to find and describe the phenomena that need to be explained by strategy theory. This may seem like an obvious and important role, but in practice, it is remarkably rare to see strategy research papers that simply report empirical regularities. This gap reflects an apparent – and, we believe, perverse – requirement that all submissions to strategy and management journals contribute to theory. In a recent eloquent and impassioned editorial, Don Hambrick bemoans this theory fetish, which effectively “bans the reporting of facts – no matter how important or competently generated – that lack explanation but that, once reported, might stimulate the search for an explanation” (Hambrick, 2007: 1346).

Hambrick goes on to enumerate a variety of costs imposed on the field by this deification of theory, which is arguably absent from other disciplines and applied fields. A particularly salient cost to note here is the creation of incentives to contort data to conform to theoretical predictions. The most effective way to counter this tendency – and to place empirical analysis above suspicion on this front – is to be as complete and transparent as possible in displaying empirical data: presenting rich sample information, full descriptive statistics, and so on. Transparency in data presentation and analysis is therefore an important hallmark of high quality empirical research in strategy.

Another important cost of a theory fetish is the creation of incentives to contort theoretical predictions to conform to data. Scholars are sorely tempted to analyze their data, find patterns, and then construct theoretical arguments that “predict” those patterns with eerie accuracy. Loose verbal theorizing is flexible enough to provide a rationale for virtually any pattern one observes. The best protections against this form of low quality research are the characteristics of good theoretical research laid out in Section 1 above.

A second role for empirical research in strategy is to test the propositions that theories generate. A strong link between theory and empirical analysis is a fundamental prerequisite for generating useful normative guidance for managers: Normative statements derived from empirical analysis are not meaningful (and indeed may be quite misleading) if not grounded in theory. Conversely, elegant theory is unlikely to have a positive impact on practice unless it is tested and validated by empirical research.

High quality efforts to test theory require reliable data that conform closely to the theoretical constructs being tested. That one should strive for congruence between theory and measures may seem obvious to the point of being trite, but our reading of empirical research in strategy suggests that many studies fall short of this ideal. A common problem is a mismatch between the unit of analysis in the theoretical model and the level of aggregation of the data used in the study. For example, firm-level data may be (mis)used to test a theory that focuses on interactions among individual agents or project teams. Such a mismatch may obscure significant empirical regularities or generate spurious inferences. A second common problem is to choose as an empirical proxy for a theoretical construct some measure that others can interpret as a proxy for a very different construct. When proxies allow different interpretations, empirical studies cannot distinguish between alternative theories. Because it is often impossible to identify perfect proxies, strategy scholars must be sensitive to alternative explanations for empirical regularities.

Theory-testing efforts should also employ empirical designs and methods that generate valid inferences in support or refutation of the stated hypotheses. Empirical designs and methods can be flawed in countless ways, but one problem is pervasive in strategy research: a failure to account adequately for the inherent endogeneity of strategic action. Almost all strategy theories assume that managers choose strategic actions selectively, based on expected performance outcomes. This selection implies that firms whose managers choose different actions are likely to differ, often in ways the researcher cannot observe. It is common in empirical research in strategy to observe differences in actions and in performance across firms and to conclude that the differences in action caused the differences in performance. This causal conclusion may be deeply flawed if the researcher has not taken care: underlying, unobserved differences across firms may have caused both the differences in actions and the differences in outcomes. Such mistakes can lead us to reject sound theories, support faulty theories, and offer bad advice to managers. Only careful empirical designs and methods can protect the strategy field from such mistakes. 1
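The bias described above is easy to demonstrate with a small simulation (a hypothetical sketch, not drawn from any study discussed here): suppose an unobserved firm capability raises both the probability that managers take some strategic action and subsequent performance, while the action itself has no causal effect at all. A naive comparison of performance across firms that did and did not act will still show a large, spurious "effect."

```python
import math
import random

random.seed(42)

n = 100_000
took, not_took = [], []
for _ in range(n):
    # Unobserved firm capability (never available to the researcher).
    capability = random.gauss(0, 1)
    # Managers self-select: high-capability firms are more likely to act.
    acts = random.random() < 1 / (1 + math.exp(-2 * capability))
    # Performance depends only on capability plus noise:
    # the true causal effect of the action is exactly zero.
    performance = capability + random.gauss(0, 1)
    (took if acts else not_took).append(performance)

# Naive estimator: mean performance of actors minus non-actors.
naive = sum(took) / len(took) - sum(not_took) / len(not_took)
print(f"naive estimate of the action's effect: {naive:.2f} (true effect: 0)")
```

The naive difference in means comes out strongly positive even though the action does nothing, because selection on the unobserved capability contaminates the comparison. This is precisely why designs that address endogeneity (instruments, panel methods, selection models) matter for theory-testing.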

In sum, scholars should ask themselves the following questions as they design and execute empirical studies in strategy; affirmative answers to these questions set high quality empirical papers apart from the rest.
  • Are data presented and analyzed as transparently as possible?
  • Does the research employ reliable data that conform closely to theoretical constructs? In particular, do the level of analysis and the unit of observation in the study match those of the theory? Can the empirical proxies discriminate among alternative explanations?
  • Are the empirical designs and methods chosen carefully to generate valid inferences? In particular, have concerns about endogeneity and selection been addressed adequately?


1 See Hamilton & Nickerson (2003) and Bascle (2008) for discussion of potential sources of bias in strategy research, as well as suggested solutions. Note, however, that blind reliance on canned empirical ‘fixes’ for endogeneity problems, to the exclusion of conceptual discussion, can also be misleading, since choosing an empirical method often involves trading off different problems. For example, correcting for self-selection typically requires imposing fairly strict distributional assumptions, which are themselves often violated. Investigating and testing the validity of the assumptions underlying different empirical models in particular settings is a crucial step.