We offer a new approach to the \emph{information decomposition} problem in information theory: given a `target' random variable co-distributed with multiple `source' variables, how can we decompose the mutual information into a sum of non-negative terms that quantify the contributions of each random variable, not only individually but also in combination? We define a new way to decompose the mutual information, which we call the \emph{Information Attribution} (IA), and derive a solution using cooperative game theory. It can be seen as assigning a ``fair share'' of the mutual information to each combination of the source variables. Our decomposition is based on a different lattice from the usual `partial information decomposition' (PID) approach, and as a consequence the IA has fewer terms than PID: it has analogs of the synergy and unique information terms, but lacks separate terms corresponding to redundancy, instead sharing redundant information between the unique information terms. Because of this, it is able to obey equivalents of the axioms known as `local positivity' and `identity', which cannot be simultaneously satisfied by a PID measure.

Nihat Ay, Daniel Polani, and Nathaniel Virgo.

Contains bibliographical references.
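As a rough illustration of the game-theoretic idea (not the paper's actual IA construction, which allocates information over a lattice of source combinations), the sketch below treats the sources as players in a cooperative game with characteristic function $v(S) = I(T; S)$ and computes each source's Shapley value on a toy XOR distribution. All function names and the example distribution are our own illustrative choices.

```python
from itertools import combinations
from math import comb, log2

# Toy joint distribution p(t, x1, x2): t = x1 XOR x2,
# with x1, x2 independent and uniform on {0, 1}.
joint = {}
for x1 in (0, 1):
    for x2 in (0, 1):
        joint[(x1 ^ x2, x1, x2)] = 0.25

def mutual_info(joint, source_idx):
    """I(T; S), where S is the tuple of sources at positions source_idx."""
    pt, ps, pts = {}, {}, {}
    for (t, *xs), p in joint.items():
        s = tuple(xs[i] for i in source_idx)
        pt[t] = pt.get(t, 0.0) + p
        ps[s] = ps.get(s, 0.0) + p
        pts[(t, s)] = pts.get((t, s), 0.0) + p
    return sum(p * log2(p / (pt[t] * ps[s]))
               for (t, s), p in pts.items() if p > 0)

def shapley_attribution(joint, n_sources):
    """Shapley value of each source for the game v(S) = I(T; S)."""
    phi = [0.0] * n_sources
    for i in range(n_sources):
        others = [j for j in range(n_sources) if j != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                # Marginal contribution of source i to coalition `subset`.
                marginal = (mutual_info(joint, subset + (i,))
                            - mutual_info(joint, subset))
                # Standard Shapley weight: |S|! (n-|S|-1)! / n!
                phi[i] += marginal / (n_sources * comb(n_sources - 1, k))
    return phi

print(shapley_attribution(joint, 2))
```

For XOR, each source alone carries zero mutual information about the target while the pair carries one bit, so the Shapley allocation splits the purely synergistic bit evenly, giving each source 0.5 bit. The IA of the paper goes further by attributing shares to combinations of sources as well, which this per-source sketch does not capture.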