"A systematic review is a review of a clearly formulated question that uses systematic and explicit methods to identify, select, and critically appraise relevant research, and to collect and analyze data from the studies that are included in the review. Statistical methods (metaanalysis) may or may not be used to analyze and summarize the results of the included studies. Metaanalysis refers to the use of statistical techniques in a systematic review to integrate the results of included studies."
The main idea...
The objective of meta-analysis is to enhance the precision and generalisability of statistical parameter estimates generated in single studies by integrating data across multiple, independent studies with similar methodologies. It is an attractive strategy when a large-scale study is not feasible. It may also be seen as a quantitative and reproducible approach to systematic review which includes clear appraisal of the potential biases and weaknesses inevitably present. Careful integration of study results and data increases statistical power and allows more informative assessments of heterogeneity across studies. Indeed, meta-analysis may overturn the significance or non-significance of effects (roughly, differences in one variable that may be attributed to variation in another) reported by individual studies with small sample sizes. Further, meta-analysis offers the potential to broaden the scale of data interpretation. These are valuable properties in ecological research at large, where large sample sizes are often difficult to obtain.
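As a minimal illustration of the precision gain from pooling, consider a fixed-effect, inverse-variance combination of study estimates. The effect sizes and standard errors below are made-up values used only to demonstrate the calculation:

```python
import math

# Hypothetical effect estimates (e.g. log response ratios) and their
# standard errors from three independent studies; values are illustrative.
effects = [0.42, 0.31, 0.55]
ses = [0.20, 0.15, 0.25]

# Fixed-effect (inverse-variance) pooling: each study is weighted by the
# reciprocal of its variance, so more precise studies contribute more.
weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f}, SE = {pooled_se:.3f}")
# The pooled SE is smaller than any single study's SE, which is the
# precision gain that motivates meta-analysis.
```

Note that the pooled standard error (about 0.108 here) is below that of even the most precise constituent study, illustrating the gain in precision described above.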
As discussed by Stewart (2010) and Cadotte et al. (2012), meta-analysis appeared in classical ecological research in the 1990s and its application has been growing rapidly. Instances of meta-analysis are also found in the microbial ecology literature (e.g. Shade et al., 2013). While meta-analysis has many attractive attributes, integrating data from multiple studies is nontrivial and, if done recklessly, may easily lead to erroneous results. In his opinion piece, Stewart (2010), perhaps echoing observations by Moher et al. (2009), called for more sophisticated forms of meta-analysis in ecological research, in order to reach more stable conclusions, and for the abandonment of the simple (or perhaps simplistic) "vote counting" strategies that are frequently employed.
A brief recipe...
Following the definition of a testable review question, the first stages of a meta-analysis involve identifying studies appropriate for inclusion in its qualitative and quantitative components. The criteria for inclusion and rejection should be well defined and reported, and the rejected studies themselves should also be reported. This is perhaps the most critical phase of any meta-analysis: poor study selection will quickly doom the results of even the most sophisticated analytical procedures.
It is likely that some of the studies included in a meta-analysis will be of higher quality (e.g. greater sample size, more rigorous experimental design) than others. While some analysts will include only the best studies, others may choose to weight studies according to some measure of quality. Higher-quality studies receive more weight and are therefore more influential in any quantitative analysis performed (e.g. through weighted averaging). Naturally, the criteria for and justification of any such weighting must be clearly recorded and reported.
Following the selection of a set of studies, the potential bias (see below) of each study should be estimated. This estimation should be done systematically, with each factor affecting a study's risk of bias clearly defined and presented in, e.g., tabular form. The risk of bias can be established by examining study aspects such as the use of a randomised sampling design, data completeness, and reporting consistency. Common classes of bias include selection bias, performance bias, detection bias, reporting bias, and attrition bias.
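A tabular risk-of-bias assessment of this kind can be sketched as a simple data structure; the study names, bias domains, and ratings below are hypothetical placeholders, not a prescribed scheme:

```python
# Bias domains named in the text; ratings use a low/unclear/high scale,
# which is one common convention (an assumption here, not a requirement).
DOMAINS = ["selection", "performance", "detection", "reporting", "attrition"]

risk_of_bias = {
    "Study A": {"selection": "low", "performance": "low",
                "detection": "unclear", "reporting": "low",
                "attrition": "high"},
    "Study B": {"selection": "unclear", "performance": "low",
                "detection": "low", "reporting": "low",
                "attrition": "low"},
}

# Summarise the table: count high-risk domains per study so that the
# riskiest studies are easy to spot (and perhaps down-weight or exclude).
high_risk = {study: sum(rating == "high" for rating in ratings.values())
             for study, ratings in risk_of_bias.items()}
print(high_risk)  # {'Study A': 1, 'Study B': 0}
```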
It is only after the previous steps have been diligently completed that summary statistics should be computed. The choice of statistics should, of course, be appropriate to the data at hand; commonly used statistics include various odds ratios and the difference in means. The consistency of results from each constituent study, and the heterogeneity between them, are typically of the greatest interest to the analyst. Further, meta-regression, uni- and multivariate model building, and random-effects models may also be employed to extend the meta-analysis (see Implementations).
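Between-study heterogeneity is commonly quantified with Cochran's Q statistic and the derived I² (the proportion of total variation attributable to between-study differences rather than sampling error). A minimal sketch with illustrative, made-up values:

```python
# Illustrative study effects and standard errors (hypothetical values);
# the fourth study deliberately disagrees with the others.
effects = [0.42, 0.31, 0.55, -0.10]
ses = [0.20, 0.15, 0.25, 0.18]

# Fixed-effect (inverse-variance) pooled estimate.
weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations of study effects from the
# pooled estimate; under homogeneity it is chi-square with k-1 df.
Q = sum(w * (e - pooled)**2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I^2: excess of Q over its expectation under homogeneity, as a
# proportion of Q (floored at zero).
I2 = max(0.0, (Q - df) / Q)
print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.1%}")
```

Here Q exceeds its degrees of freedom and I² is around 50%, flagging moderate heterogeneity that the dissenting fourth study introduces.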
Finally, it is important to recognise (both qualitatively and quantitatively) that meta-analysis requires the analyst to make several assumptions and judgements. The impact of these can be evaluated using sensitivity analysis, which will indicate how robust the results of a given meta-analysis are in the context of the decisions taken. A well-performed sensitivity analysis is a key mark of a strong meta-analysis.
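One simple and widely used sensitivity check is leave-one-out re-pooling: the meta-analysis is repeated with each study removed in turn, and large swings in the pooled estimate flag studies that drive the overall result. A sketch with hypothetical values:

```python
# Illustrative study effects and standard errors (hypothetical values).
effects = [0.42, 0.31, 0.55, -0.10]
ses = [0.20, 0.15, 0.25, 0.18]

def pooled_effect(effs, errs):
    """Fixed-effect (inverse-variance) pooled estimate."""
    ws = [1 / s**2 for s in errs]
    return sum(w * e for w, e in zip(ws, effs)) / sum(ws)

overall = pooled_effect(effects, ses)

# Leave-one-out: re-pool with each study excluded in turn.
loo = []
for i in range(len(effects)):
    est = pooled_effect(effects[:i] + effects[i + 1:],
                        ses[:i] + ses[i + 1:])
    loo.append(est)
    print(f"without study {i}: {est:+.3f} (overall {overall:+.3f})")
```

In this fabricated example, dropping the dissenting fourth study shifts the pooled estimate noticeably upward, which is exactly the kind of dependence a sensitivity analysis is meant to expose.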
Bias
Publication bias
With some exceptions, journals tend to favour publication of positive results (i.e. where some null hypothesis has been rejected), creating a bias in subsequent meta-analyses (i.e. a meta-analyst is more likely to select studies which show significant effects). Conducting meta-analyses on a body of data that is biased in this way may lead to the overestimation of effect sizes: studies suggesting that a relationship is weak or absent are underrepresented and cannot temper the positive results. The published literature is not, then, a true random sample of the studies conducted.
Approaches to mitigate the influence of publication bias include acquiring data sets which are not necessarily associated with a (traditional) publication. Meta-analysts should make efforts to acquire such data sets from repositories or individuals in order to round out their study. Repositories such as PANGAEA and the Soil Genetic Network for Agriculture Sustainability are examples of resources that may be used to address publication bias.
One popular approach to detect bias is the use of funnel plots (Egger et al., 1997). In their basic form, these plots graph the effect estimates against the sample size of the studies included in a meta-analysis and rest upon the assumption that the precision of effect estimates increases with sample size. Thus, studies with small sample sizes will scatter at the 'base' of the funnel while those with larger sample sizes will concentrate at the 'tip' of the funnel. If there is notable asymmetry in the plot, there is likely to be bias present. Despite their intuitive appeal, there are concerns that the choice of axes in funnel plots can lead to variable conclusions (Sterne & Egger, 2001) and that users of this technique must be cautious in its application and interpretation (Terrin et al., 2005).
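Funnel-plot asymmetry is often assessed numerically with Egger's regression test, which regresses the standardised effect (effect divided by its standard error) on precision (the reciprocal of the standard error); an intercept far from zero suggests asymmetry. The sketch below uses fabricated inputs in which small studies are deliberately given inflated effects to mimic publication bias:

```python
# Fabricated effects and standard errors; the small (high-SE) studies
# have inflated effects, mimicking publication bias.
effects = [0.90, 0.70, 0.45, 0.35, 0.30]
ses = [0.40, 0.30, 0.15, 0.10, 0.08]

# Egger's test: regress standardised effect (effect / SE) on
# precision (1 / SE) by ordinary least squares.
z = [e / s for e, s in zip(effects, ses)]
precision = [1 / s for s in ses]

n = len(z)
mean_p = sum(precision) / n
mean_z = sum(z) / n
slope = (sum((p - mean_p) * (zi - mean_z) for p, zi in zip(precision, z))
         / sum((p - mean_p) ** 2 for p in precision))
intercept = mean_z - slope * mean_p

print(f"Egger intercept = {intercept:.2f}")
# The clearly positive intercept reflects the asymmetry built into the
# fabricated data; near-zero intercepts are consistent with symmetry.
```

A formal application would also compute a t-test on the intercept; the point here is only the direction of the diagnostic.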
Intentional bias
Unfortunately, it is always possible that the selection of studies included in a meta-analysis is driven by the intention to support or weaken a certain position. Naturally, this is detrimental to science and academic research at large, and most researchers would avoid such bias. When intentional bias is observed or suspected, it should be brought to the attention of journal editors and challenged via published comments.
Key assumptions
- There is sufficient information on the constituent studies to allow a meta-analyst to decide on the appropriateness of their inclusion.
- The studies included in the meta-analysis are independent of one another; that is, the results of one do not influence the results of another. It may be useful to avoid including multiple studies from a large research project or from the same laboratory.
- The results of constituent studies are exchangeable under the null hypothesis. This assumption is unlikely to be fully met, and it may be useful to semi-quantitatively assess how similar the selected studies are in terms of methodologies, designs, etc. A meta-analyst may then describe their rationale for excluding studies they feel are too dissimilar.
Warnings
- Simply appending data sets to form one, larger data set is not a valid meta-analytical approach and can lead to incorrect estimates. One must carefully determine whether the studies included in a meta-analysis are comparable.
- Studies with fewer samples are generally less likely to detect effects. Attempting to synthesise studies with comparable sample sizes is desirable, but may be impractical.
- Due to biases in the selection of constituent studies, it is entirely possible to produce results based on an unrepresentative sample of investigations. Where applicable, claims from meta-analyses should be tempered by this fact.
- If the constituent studies included in a meta-analysis are poorly replicated, the results of the meta-analysis will almost certainly suffer. Hurlbert (2004) discusses why meta-analysis is not a "panacea" for poor replication in individual studies.
- Meta-analysts must necessarily focus on quantitative results, but should not ignore qualitative information related to their investigation.
- Consider whether reanalysing specific study data with non-meta-analytic methods would be more informative than meta-analysis.
Implementations
R
- The mvmeta package offers functions for fixed- and random-effects multivariate and univariate meta-analysis and meta-regression.
- The MAc package offers a range of user-friendly functions to perform correlational meta-analysis.
- The MAd package offers functions for meta-analysis based on mean differences.
- The meta and rmeta packages allow fixed- and random-effects modelling in meta-analysis as well as tests for bias.
- The metamisc package allows the estimation of uni-, bi-, and multivariate models using frequentist or Bayesian approaches.
- A CRAN task view tracking packages relevant to meta-analysis is available.
References
- Cadotte MW, Mehrkens LR, Menge DNL (2012) Gauging the impact of meta-analysis on ecology. Evol Ecol. 26(5):1153–1167.
- Egger M, Smith GD, Schneider M, Minder C (1997) Bias in meta-analysis detected by a simple, graphical test. BMJ. 315(7109):629–634.
- Hurlbert SH (2004) On misinterpretations of pseudoreplication and related matters: a reply to Oksanen. Oikos. 104(3):591–597.
- Kroeker KJ, Kordas RL, Crim RN, Singh GG (2010) Meta-analysis reveals negative yet variable effects of ocean acidification on marine organisms. Ecol Lett. 13:1419–1434.
- Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, et al. (2009) The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration. PLoS Med. 6(7):e1000100.
- Liu J, Weinbauer MG, Maier C, Dai M, Gattuso J (2010) Effect of ocean acidification on microbial diversity and on microbe-driven biogeochemistry and ecosystem functioning. Aquat Microb Ecol. 61:291–305.
- Moher D, Liberati A, Tetzlaff J, Altman DG, The PRISMA Group (2009) Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med. 6(6):e1000097.
- Shade A, Caporaso JG, Handelsman J, Knight R, Fierer N (2013) A meta-analysis of changes in bacterial and archaeal communities with time. ISME J. 7(8):1493–1506.
- Sterne JAC, Egger M (2001) Funnel plots for detecting bias in meta-analysis: guidelines on choice of axis. J Clin Epidemiol. 54(10):1046–1055.
- Stewart G (2010) Meta-analysis in applied ecology. Biol Lett. 6(1):78–81.
- Terrin N, Schmid CH, Lau J (2005) In an empirical evaluation of the funnel plot, researchers could not visually identify publication bias. J Clin Epidemiol. 58(9):894–901.
