We present a general information-theoretic approach for identifying functional subgraphs in complex neuronal networks where the spiking dynamics of a subset of nodes (neurons) are observable. We show that the uncertainty in the state of each node can be written as a sum of information quantities involving a growing number of variables at other nodes. We demonstrate that each term in this sum is generated by successively conditioning mutual information on newly measured variables, in a manner analogous to a discrete differential calculus. The analogy to a Taylor series suggests efficient optimization algorithms for determining the state of a target variable in terms of functional groups of other nodes. We apply this methodology to electrophysiological recordings of cortical neuronal networks grown in vitro. Despite strong stochasticity, we show that each cell's firing is generally explained by the activity of a small number of other neurons. We identify these neuronal subgraphs in terms of their redundant or synergistic character and reconstruct neuronal circuits that account for the state of target cells.
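As a worked illustration (a minimal sketch, not the authors' implementation), the decomposition described above corresponds to the chain rule for mutual information, H(X) = I(X; Y_1) + I(X; Y_2 | Y_1) + ... + I(X; Y_n | Y_1, ..., Y_{n-1}) + H(X | Y_1, ..., Y_n), where X is the state of the target neuron and Y_1, ..., Y_n are states of candidate neurons. The Python sketch below assumes binarized spike trains and plug-in (empirical) probability estimates; the function names and greedy selection strategy are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: greedy chain-rule decomposition of a target neuron's
# entropy, H(X) = sum_k I(X; Y_k | Y_1..Y_{k-1}) + residual, estimated
# from binarized spike trains with plug-in probabilities. All names here
# (entropy, cond_mutual_info, greedy_expansion) are hypothetical.
from collections import Counter
from math import log2

def entropy(samples):
    """Plug-in Shannon entropy (bits) of a sequence of hashable states."""
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def cond_mutual_info(x, y, z):
    """I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(X,Y,Z) - H(Z), for aligned samples."""
    xz = list(zip(x, z))
    yz = list(zip(y, z))
    xyz = list(zip(x, y, z))
    return entropy(xz) + entropy(yz) - entropy(xyz) - entropy(z)

def greedy_expansion(target, candidates, k):
    """Pick k neurons, each maximizing the next conditional-information term.

    target     : list of 0/1 spike states of the target neuron, per time bin
    candidates : dict mapping neuron name -> list of 0/1 states (same bins)
    Returns the chosen neuron names and the successive terms of the sum,
    whose total plus the residual conditional entropy equals H(target).
    """
    chosen, terms = [], []
    context = [()] * len(target)  # empty conditioning set at first
    remaining = dict(candidates)
    for _ in range(k):
        best = max(remaining,
                   key=lambda name: cond_mutual_info(target, remaining[name], context))
        terms.append(cond_mutual_info(target, remaining[best], context))
        chosen.append(best)
        # Fold the chosen neuron's states into the conditioning context.
        context = [ctx + (b,) for ctx, b in zip(context, remaining[best])]
        del remaining[best]
    return chosen, terms
```

In this picture, a redundant subgraph shows up as successive terms that shrink quickly (later neurons add little beyond the first), while a synergistic one shows terms that grow once earlier neurons are in the conditioning set; plug-in estimators of this kind are biased for short recordings, so a real analysis would need bias correction or significance testing against shuffled spike trains.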