Abstract
One of the goals of artificial intelligence is to develop agents that learn and act in complex environments. Realistic environments typically feature a variable number of objects, relations amongst them, and non-deterministic transition behavior. While standard probabilistic sequence models provide efficient inference and learning techniques for sequential data, they typically cannot fully capture the relational complexity. On the other hand, statistical relational learning techniques are often too inefficient to cope with complex sequential data. In this paper, we introduce a simple model that occupies an intermediate position in this expressiveness/efficiency trade-off. It is based on CP-logic (Causal Probabilistic Logic), an expressive probabilistic logic for modeling causality. However, by specializing CP-logic to represent a probability distribution over sequences of relational state descriptions and employing a Markov assumption, inference and learning become more tractable and effective. Specifically, we show how to solve part of the inference and learning problems directly at the first-order level, while transforming the remaining part into the problem of computing all satisfying assignments for a Boolean formula in a binary decision diagram.
We experimentally validate that the resulting technique is able to handle probabilistic relational domains with a substantial number of objects and relations.
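The abstract's reduction hinges on enumerating all satisfying assignments of a Boolean formula via a binary decision diagram. The toy sketch below (not the paper's implementation; the example formula, variable names, and functions `build`/`sat_assignments` are all invented for illustration) builds a reduced BDD by Shannon expansion and then reads the satisfying assignments off its paths:

```python
# Toy sketch: enumerate all satisfying assignments of a Boolean formula
# by building a (reduced, ordered) BDD via Shannon expansion.
# Everything here is illustrative, not the system described in the paper.

VARS = ["a", "b", "c"]  # fixed variable order

def f(env):
    # Example formula: (a AND b) OR (NOT a AND c)
    return (env["a"] and env["b"]) or (not env["a"] and env["c"])

def build(i, env):
    """Shannon-expand f over VARS[i:]. A node is either a terminal
    (True/False) or a triple (var, low_child, high_child)."""
    if i == len(VARS):
        return f(env)
    var = VARS[i]
    low = build(i + 1, {**env, var: False})
    high = build(i + 1, {**env, var: True})
    if low == high:  # reduction: drop tests whose outcome is irrelevant
        return low
    return (var, low, high)

def sat_assignments(node, partial=None):
    """Yield every total assignment (dict) on a path to the True terminal,
    expanding variables skipped by the reduction rule both ways."""
    partial = partial or {}
    if node is False:
        return
    if node is True:
        free = [v for v in VARS if v not in partial]
        if not free:
            yield dict(partial)
        else:
            v = free[0]
            yield from sat_assignments(True, {**partial, v: False})
            yield from sat_assignments(True, {**partial, v: True})
        return
    var, low, high = node
    yield from sat_assignments(low, {**partial, var: False})
    yield from sat_assignments(high, {**partial, var: True})

bdd = build(0, {})
models = list(sat_assignments(bdd))  # 4 models for this example formula
```

In the paper's setting, the formula encodes which probabilistic choices are consistent with an observed state transition, and summing over the BDD's satisfying paths yields the transition probability; the sketch above only shows the Boolean-enumeration core of that step.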
Additional information
Editors: S.V.N. Vishwanathan, Samuel Kaski, Jennifer Neville, and Stefan Wrobel.
Cite this article
Thon, I., Landwehr, N. & De Raedt, L. Stochastic relational processes: Efficient inference and applications. Mach Learn 82, 239–272 (2011). https://doi.org/10.1007/s10994-010-5213-8