

This page is a stub
This page is under construction. If you would like to contribute to this page, please let us know through our user forum!

== The main idea ==

Overdetermination occurs when a model has more explanatory power than is necessary to account for a given effect in one's response data. In other words, one detects multiple potential causes for a single response and has no statistical way of deciding which of these are valid. This may happen when one has many explanatory variables and relatively few objects (e.g. samples) over which variation has been observed and recorded. In this scenario, the probability of observing 'chance' explanatory relationships increases, particularly if the samples are clustered in the same site or have all been affected by a single phenomenon (e.g. an organic matter fall in their vicinity). While not necessarily catastrophic, working with an overdetermined system can limit useful interpretation of one's results: if an effect can be accounted for by variation in many explanatory variables (or combinations thereof), determining which variables are causally linked to that effect can be difficult.
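The risk of 'chance' explanatory relationships can be illustrated with a minimal simulation. The sketch below (the sample and variable counts are illustrative assumptions, not values from any real data set) draws a response with no true relationship to any predictor, then screens many independent random "explanatory" variables against it: with few samples and many variables, at least one predictor will typically correlate strongly with the response purely by chance.

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

random.seed(42)
n_samples = 10   # few objects (e.g. samples) -- illustrative
n_vars = 200     # many explanatory variables -- illustrative

# A response generated independently of every explanatory variable,
# i.e. there is no true causal relationship to detect.
response = [random.gauss(0, 1) for _ in range(n_samples)]

# Strongest absolute correlation found among purely random predictors.
best = max(
    abs(pearson([random.gauss(0, 1) for _ in range(n_samples)], response))
    for _ in range(n_vars)
)
print(f"strongest chance correlation: r = {best:.2f}")
```

Even though every predictor is pure noise, the strongest correlation found is typically large, which is exactly the situation in which one cannot decide statistically which apparent causes are valid.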

One approach to addressing overdetermination is to increase the number of objects (e.g. samples, sites) in a data set relative to the number of explanatory variables present. This may be done, for example, by reducing the depth of sequencing in favour of sequencing more samples (keeping in mind the issue of pseudoreplication). If this is not possible, carefully reconsidering one's sampling design or target ecosystem may help focus on the causal influence of a few, hypothetically important explanatory variables.
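The benefit of adding objects relative to variables can also be sketched by simulation. In the self-contained example below (all sample and variable counts are illustrative assumptions), the largest spurious correlation found among random predictors shrinks markedly as the number of samples grows, even though the number of explanatory variables stays the same.

```python
import random

def max_chance_r(n_samples, n_vars, seed=0):
    """Largest |Pearson r| between a random response and n_vars random predictors."""
    rng = random.Random(seed)
    y = [rng.gauss(0, 1) for _ in range(n_samples)]
    my = sum(y) / n_samples
    yc = [v - my for v in y]
    sy = sum(v * v for v in yc) ** 0.5
    best = 0.0
    for _ in range(n_vars):
        x = [rng.gauss(0, 1) for _ in range(n_samples)]
        mx = sum(x) / n_samples
        xc = [v - mx for v in x]
        sx = sum(v * v for v in xc) ** 0.5
        r = abs(sum(a * b for a, b in zip(xc, yc)) / (sx * sy))
        best = max(best, r)
    return best

# Same number of explanatory variables; only the number of objects changes.
few_samples = max_chance_r(n_samples=10, n_vars=200)
many_samples = max_chance_r(n_samples=200, n_vars=200)
print(f"max chance |r|, 10 samples:  {few_samples:.2f}")
print(f"max chance |r|, 200 samples: {many_samples:.2f}")
```

With more objects per variable, chance relationships become weaker and easier to rule out, which is why increasing the sample count (e.g. trading sequencing depth for more samples) helps narrow the field of plausible causes.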