In this paper, we exemplify compositionality issues of neural networks using logical theories. The idea is to implement first-order logic at the neural level using category-theoretic methods, which yields a variable-free representation of logic with only one operation, composition. More precisely, both logic and neural networks are represented as algebraic systems. On this shared algebraic level it is possible to study compositionality aspects of first-order formulas and their realization by a neural network. We demonstrate the approach on some well-known logical inferences using a straightforward implementation of a simple backpropagation network.
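To give a flavor of what a variable-free representation with composition as the only operation can look like, here is a minimal, hypothetical sketch in the relation-algebraic style: binary relations over a finite domain are encoded as 0/1 matrices, and relational composition is a boolean matrix product. This is only an illustration of the general idea, not the paper's category-theoretic construction; the domain, the `parent` relation, and the function name `compose` are invented for the example.

```python
import numpy as np

def compose(R, S):
    """Relational composition R ; S: (x, z) holds iff
    there exists some y with R(x, y) and S(y, z)."""
    return (R @ S > 0).astype(int)

# Hypothetical domain {0: Alice, 1: Bob, 2: Carol} with
# parent(Alice, Bob) and parent(Bob, Carol).
parent = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [0, 0, 0]])

# grandparent = parent ; parent -- derived without mentioning
# any individual variables, purely by composition.
grandparent = compose(parent, parent)
print(grandparent)
```

The point of the sketch is that the derived relation is obtained by a single algebraic operation on representations of relations, which is the kind of structure that can then be matched against the compositional structure of a network.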