Embeddings as Probabilistic Equivalence

Neurosymbolic Learning Under New Semantics for Probabilistic Logic Programming

Integrating logic programs with embedding models has produced a class of neurosymbolic frameworks that jointly learn symbolic rules and vector representations of the symbols appearing in the logic. The key idea enabling this integration is a differentiable relaxation of unification, the algorithm that instantiates variables during inference in logic programs. However, as we show, this soft unification has undesirable side effects in both learning and inference.
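
To make the underlying idea concrete, here is a minimal, illustrative sketch of soft unification: instead of requiring two symbols to match exactly, their embeddings are compared with a similarity kernel, so that a proof carries a soft matching score. The embedding values, symbol names, and the RBF-style kernel below are assumptions chosen for illustration, not the exact formulation used by any particular system.

```python
import numpy as np

# Toy embedding table for constant symbols (illustrative values only).
embeddings = {
    "grandfather": np.array([0.9, 0.1]),
    "grandpa":     np.array([0.85, 0.15]),
    "sister":      np.array([0.1, 0.9]),
}

def soft_unify(sym_a: str, sym_b: str, temperature: float = 1.0) -> float:
    """Soft unification score in [0, 1]: 1.0 for identical symbols,
    otherwise a similarity kernel over the symbols' embeddings."""
    if sym_a == sym_b:
        return 1.0
    dist = np.linalg.norm(embeddings[sym_a] - embeddings[sym_b])
    # RBF-style kernel; one common choice among several in the literature.
    return float(np.exp(-dist / temperature))

# In a soft proof, the score of a proof path is typically an aggregate
# (e.g. product or minimum) of the soft-unification scores along the path.
print(soft_unify("grandfather", "grandpa"))  # high: embeddings are close
print(soft_unify("grandfather", "sister"))   # low: embeddings are far apart
```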

To alleviate those side effects, we are the first to revamp the well-known possible world semantics of probabilistic logic programs into a new semantics, the equivalence semantics. Under our semantics, a probabilistic logic program induces a probability distribution over all possible equivalence relations between symbols, instead of a probability distribution over all possible subsets of probabilistic facts. We propose both exact and approximate techniques for reasoning in our semantics. Experiments on well-known benchmarks show that the equivalence semantics yields neurosymbolic models with results up to 42% higher than the Neural Theorem Prover (NTP), the Greedy Neural Theorem Prover (GNTP), the Conditional Theorem Prover (CTP), and DeepSoftLog.
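
To illustrate the contrast with possible world semantics, the sketch below enumerates all equivalence relations (partitions) over a tiny set of symbols and assigns each one a normalized weight derived from pairwise "merge" probabilities. The symbol names, the pairwise probabilities, and the product-then-normalize construction are illustrative assumptions; the paper's actual parameterization and its exact and approximate inference techniques are not reproduced here.

```python
from itertools import combinations

def partitions(items):
    """Enumerate all partitions (equivalence relations) of a list of symbols."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # Place `first` into each existing block ...
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # ... or into a new block of its own.
        yield [[first]] + part

# Illustrative pairwise probabilities that two symbols denote the same thing,
# e.g. derived from embedding similarity (hypothetical values).
symbols = ["grandfather", "grandpa", "sister"]
p_equal = {
    ("grandfather", "grandpa"): 0.9,
    ("grandfather", "sister"):  0.1,
    ("grandpa", "sister"):      0.1,
}

def weight(partition):
    """Unnormalised weight of one equivalence relation: product over symbol pairs."""
    block_of = {s: i for i, block in enumerate(partition) for s in block}
    w = 1.0
    for a, b in combinations(symbols, 2):
        p = p_equal[(a, b)]
        w *= p if block_of[a] == block_of[b] else 1.0 - p
    return w

# Normalise over equivalence relations only (not over all pairwise assignments),
# which is where this distribution differs from independent possible worlds.
parts = list(partitions(symbols))
Z = sum(weight(p) for p in parts)
for p in parts:
    print([sorted(block) for block in p], round(weight(p) / Z, 3))
```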

Repository

Python Library for Neurosymbolic Learning Under the Equivalence Semantics

Relevant publications

2025

  1. NeurIPS
    Embeddings as Probabilistic Equivalence in Logic Programs
    Jaron Maene and Efthymia Tsamoura
    In Proceedings of the Thirty-Ninth Conference on Neural Information Processing Systems (NeurIPS), 2025