Abstract
Markov logic is a powerful representation that unifies first-order logic and probabilistic graphical models. However, scaling up inference in Markov Logic Networks (MLNs) is extremely challenging. Standard graphical model inference algorithms operate on the propositional Markov network obtained by grounding the MLN, and do not scale well as the number of objects in the real-world domain increases. Algorithms that perform inference directly at the first-order level, namely lifted inference algorithms, are more scalable than propositional algorithms but require the MLN to have specific symmetric structure. Worse still, evidence breaks these symmetries, reducing the performance of lifted inference to that of propositional inference (or sometimes worse, due to overhead). In this paper, we propose a general method for solving this “evidence” problem. The main idea in our method is to approximate the given MLN, having, say, n objects, by an MLN having k objects, where k ≪ n, such that the results obtained by running potentially much faster inference on the smaller MLN are as close as possible to those obtained by running inference on the larger MLN. We achieve this by finding clusters of “similar” groundings using standard clustering algorithms (e.g., K-means) and replacing all groundings in each cluster by their cluster center. To this end, we develop a novel distance (or similarity) function for measuring the similarity between two groundings, based on the evidence presented to the MLN. We evaluated our approach on many different benchmark MLNs utilizing various clustering and inference algorithms. Our experiments clearly show the generality and scalability of our approach.
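The clustering idea from the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's actual distance function: each constant is represented by a binary vector recording which (predicate, argument-position) slots it occupies in the evidence database, and a plain Lloyd's K-means groups constants with similar evidence profiles, so that each cluster can be replaced by a single representative. All names (`evidence_vector`, `kmeans`, the `Smokes` example) are hypothetical.

```python
import random

def evidence_vector(const, evidence, dims):
    """Binary vector for `const`: entry j is 1 iff the constant occurs at
    (predicate, position) dims[j] in some true evidence atom.  A crude
    proxy for the paper's evidence-based similarity function."""
    v = [0] * len(dims)
    for j, (pred, pos) in enumerate(dims):
        if any(p == pred and len(args) > pos and args[pos] == const
               for p, args in evidence):
            v[j] = 1
    return v

def kmeans(vectors, k, iters=20, seed=0):
    """Plain Lloyd's K-means with squared-Euclidean distance; returns a
    cluster label for each input vector.  Identical vectors always
    receive the same label, so fully symmetric objects are merged."""
    rng = random.Random(seed)
    centers = [list(v) for v in rng.sample(vectors, k)]
    labels = [0] * len(vectors)
    for _ in range(iters):
        for i, v in enumerate(vectors):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
        for c in range(k):
            members = [vectors[i] for i in range(len(vectors)) if labels[i] == c]
            if members:  # recompute centroid of non-empty clusters
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Toy usage: A and B smoke, C and D do not; their evidence vectors
# coincide pairwise, so clustering can shrink 4 objects to 2 representatives.
constants = ["A", "B", "C", "D"]
evidence = [("Smokes", ("A",)), ("Smokes", ("B",))]
dims = sorted({(p, i) for p, args in evidence for i in range(len(args))})
vecs = [evidence_vector(c, evidence, dims) for c in constants]
labels = kmeans(vecs, 2)
```

Because constants with identical evidence vectors are guaranteed to be assigned the same label, merging each cluster into its center restores (approximately) the symmetry that lifted inference algorithms exploit.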
Copyright information
© 2014 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Venugopal, D., Gogate, V. (2014). Evidence-Based Clustering for Scalable Inference in Markov Logic. In: Calders, T., Esposito, F., Hüllermeier, E., Meo, R. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2014. Lecture Notes in Computer Science, vol. 8726. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-44845-8_17
DOI: https://doi.org/10.1007/978-3-662-44845-8_17
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-662-44844-1
Online ISBN: 978-3-662-44845-8