1  Logics

Logic is the study of reasoning, that is, of the principles of valid inference and demonstration. Its aim is to describe and check reasoning. Formal logics introduce rigorous methods for describing knowledge and reasoning with it; in this sense, logic is a framework for representing knowledge.

1.1 Propositional logics (Zero-Order Logic)

Propositional logic is a mathematical model allowing reasoning about the truth and falsity of propositions.

A proposition is a claim about which one can say, objectively and for sure, that it is true or false.

Frege’s thesis states that every possible proposition is just another way of denoting one of the two truth values, True or False.

A logical connective is a function allowing the composition of propositions, assigning a specific truth value to each configuration of truth values of the propositions it connects. It can be schematized in a truth table, whose first columns list the truth values of the component propositions and whose last column gives the truth value of the compound proposition.

In principle, there are 4 monadic operators (operators with one argument) and 16 dyadic operators (operators with two arguments). The most common are:

  • Negation (NOT): \neg p

  • Conjunction (AND): p \land q

  • Disjunction (OR): p \lor q

  • Implication (IF…THEN): p \Rightarrow q: false only when p is true and q is false. It does not involve a causal relationship between premise and consequence, so they may be unrelated.

  • Equivalence (IFF): p \iff q: true if both have the same truth value. Notice that this is the negation of the XOR.

  • Exclusive OR (XOR): p \oplus q: true if one of the two is true, but not both. It is the negation of the equivalence.

  • NAND: negation of the conjunction.

  • NOR: negation of the disjunction.

A set of connectives is complete if all other connectives can be expressed by a suitable combination of this set. Negation and disjunction form a complete set, so NOR alone is complete. The same applies to negation and conjunction, and consequently, to NAND alone. This means that with a single connective, we have the expressiveness of the whole propositional logic.
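
As an illustration of completeness, the following minimal Python sketch (the names nand, not_, and_, or_ are our own) derives negation, conjunction, and disjunction from NAND alone and verifies the definitions over all truth assignments:

from itertools import product

def nand(p, q):
    return not (p and q)

# Derived connectives, built from NAND only
def not_(p):
    return nand(p, p)

def and_(p, q):
    return nand(nand(p, q), nand(p, q))

def or_(p, q):
    return nand(nand(p, p), nand(q, q))

# Exhaustive check against the native connectives
for p, q in product([True, False], repeat=2):
    assert not_(p) == (not p)
    assert and_(p, q) == (p and q)
    assert or_(p, q) == (p or q)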

The expressions allowed are defined as well-formed formulas (WFF), which are the syntactic objects of propositional logic. The rules for building WFFs are:

  1. A proposition P is a wff.
  2. If P and Q are wffs, then (\neg P), (P \land Q), (P \lor Q), (P \Rightarrow Q), (P \iff Q) are also wffs.

Note that in the second point, the parentheses are important in the definition.

To compute the value of a compound expression, given a truth assignment, we evaluate the individual logical connectives following the priority order, defined, from highest to lowest, as:

\neg, \land, \lor, \Rightarrow, \iff

A tautology is a logical expression which is always true, independently of the truth values assumed by its components.

A contradiction is a logical expression which is always false, independently of the truth values assumed by its components. One might argue that contradictions are useless compared to tautologies, but this is not so: negating a contradiction yields a tautology, so every contradiction provides a new tautology.
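
These notions can be checked mechanically by enumerating all truth assignments, as in the following minimal Python sketch (the helper names are our own; a formula is given as a function of its propositional variables):

from itertools import product

def truth_table(f, n):
    # Values of f over all 2^n assignments to its n variables
    return [f(*values) for values in product([True, False], repeat=n)]

def is_tautology(f, n):
    return all(truth_table(f, n))

def is_contradiction(f, n):
    return not any(truth_table(f, n))

# p or not p is a tautology; its negation is a contradiction
assert is_tautology(lambda p: p or not p, 1)
assert is_contradiction(lambda p: not (p or not p), 1)
# Implication, written as (not p) or q, is neither
assert not is_tautology(lambda p, q: (not p) or q, 2)
assert not is_contradiction(lambda p, q: (not p) or q, 2)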

The power of equivalence-based tautologies is that we can substitute equals for equals: the expressions on the right-hand side and left-hand side of an equivalence-based tautology can always be replaced one for the other in any expression. Alternatively, by the substitution principle, a tautology remains a tautology under any substitution of its variables: however we replace the variables with other expressions, the result is still a tautology, so the entire formula can be replaced with True.

1.1.1 Algebraic laws

We can compare some operators with some arithmetic operators; in particular, we can view negation as sign change, disjunction as sum, and conjunction as product. In this way, we can find analogies between these operators in terms of properties:

  • Commutativity of \land: p \land q = q \land p
  • Commutativity of \lor: p \lor q = q \lor p
  • Associativity of \land: (p \land q) \land r = p \land (q \land r)
  • Associativity of \lor: (p \lor q) \lor r = p \lor (q \lor r)
  • Distributivity of \land on \lor: p \land (q \lor r) = (p \land q) \lor (p \land r)
  • Neutral element of \land: p \land True = p
  • Neutral element of \lor: p \lor False = p
  • Null element of \land: p \land False = False
  • Null element of \lor: p \lor True = True
  • Elimination of double negation: \neg(\neg p) = p

Below we define properties that differentiate such logic operators from the corresponding arithmetic ones:

  • Idempotency of \land: p \land p = p
  • Idempotency of \lor: p \lor p = p
  • Distributivity of \lor on \land: p \lor (q \land r) = (p \lor q) \land (p \lor r)
  • Subsumption: subsumption consists of removing a part of the expression that is already covered by another part. In a sum of products, any product that includes another product may be dropped (replace the propositional variables in the first law below with products of literals); in a product of sums, any sum that includes another sum may be dropped (replace the propositional variables in the second law below with sums of literals). The superfluous literals cannot in any case affect the truth value taken by the expression.
    • P \lor (P \land Q) \iff P
    • P \land (P \lor Q) \iff P
  • Elimination of some negations:
    • P \land (\neg P \lor Q) \iff P \land Q
    • P \lor (\neg P \land Q) \iff P \lor Q

De Morgan’s laws, together with the rules for negating implication and equivalence:

  • \neg(P \land Q) \iff \neg P \lor \neg Q
  • \neg(P \lor Q) \iff \neg P \land \neg Q
  • \neg(P \Rightarrow Q) \iff P \land \neg Q
  • \neg(P \iff Q) \iff (P \land \neg Q) \lor (\neg P \land Q)

We can then state the duality principle: a tautology involving only \land and \lor is still a tautology if we swap the two connectives (and, where they occur, True with False). In this sense, conjunction is the dual of disjunction. The properties in which the logical operators differ from the arithmetic ones are exactly those that \lor needs, with respect to \land, for the duality principle to hold.

Using the algebraic analogy, we can define some laws on equivalence:

  • Reflexivity: P \iff P
  • Commutativity: (P \iff Q) \iff (Q \iff P)
  • Transitivity: (P \iff Q) \land (Q \iff R) \Rightarrow (P \iff R)
  • Equivalence of negated: (P \iff Q) \iff (\neg P \iff \neg Q)

We can then define some laws on implication:

  • Transitivity: ((P \Rightarrow Q) \land (Q \Rightarrow R)) \Rightarrow (P \Rightarrow R)
  • Weakening of equivalence: (P \iff Q) \Rightarrow (P \Rightarrow Q)
  • Elimination of implication: (P \Rightarrow Q) \iff (\neg P \lor Q)
  • Generalization to many premises: (P_1 \land P_2 \land \dots \land P_k \Rightarrow Q) \iff (\neg P_1 \lor \neg P_2 \lor \dots \lor \neg P_k \lor Q)

1.1.2 Proof methods

Propositional logic allows us to formally and rigorously check that proof methods are correct, starting from hypotheses and axioms and concluding with a theorem. To do this, we use a calculus, the portion of logic that produces new truths starting from other truths.

Given a set of true expressions (hypotheses) and a set of tautologies based on implication or equivalence (inference rules), a proof is a sequence of formulas, each of whose entries is a tautology, a hypothesis, or a formula derived from two previous elements in the sequence by using an inference rule. A formula is proven starting from the hypotheses, through the inference rules, if a proof including them can be built.

So this sequence of true formulas is obtained in the following way:

  • Adding a tautology
  • Adding axioms
  • Adding a formula guaranteed to be true given true formulas presented before in the proof

How to achieve this? There are many techniques:

  • Analysis by cases: if a conclusion can be drawn both from a proposition and from its negation, then the conclusion is always true:
    • e.g. (P \Rightarrow Q) \land (\neg P \Rightarrow Q) \Rightarrow Q
  • Modus Ponens: if the premise of an implication is true, then its conclusion is also true: \dfrac{P \Rightarrow Q, \quad P}{Q} This can be seen as (P \land (P \Rightarrow Q)) \Rightarrow Q (see the sketch after this list).
  • Modus Tollens: when a proposition is true only if another one is true, if the latter is not true, then neither is the former: \dfrac{P \Rightarrow Q, \quad \neg Q}{\neg P} This can be seen as (P \Rightarrow Q) \iff (\neg Q \Rightarrow \neg P).
  • Proof ad absurdum: if from a proposition one can deduce falsity, then its negation must be true: \dfrac{P \Rightarrow \bot}{\neg P} This can be seen as (\neg P \Rightarrow False) \iff P.
  • Reduction to True: a tautology can be transformed into True through substitution of equals by equals.
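
As a toy illustration of building such a proof sequence, the following Python sketch (the rule format and all names are our own) applies Modus Ponens exhaustively to a set of hypotheses and single-conclusion rules until nothing new can be derived:

# Rules are (premises, conclusion) pairs; hypotheses are atoms known to be true
rules = [({"p"}, "q"), ({"q", "r"}, "s")]
proved = {"p", "r"}              # hypotheses

changed = True
while changed:                   # repeat until no rule fires
    changed = False
    for premises, conclusion in rules:
        # Modus Ponens: if all premises are proved, add the conclusion
        if premises <= proved and conclusion not in proved:
            proved.add(conclusion)
            changed = True

print(proved)                    # {'p', 'q', 'r', 's'}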

1.1.3 Limitations

Propositional Calculus considers each proposition as a whole, based on its truth value, and it cannot capture relations between different propositions. From the propositions “Cisternino is in Italy” and “Ceglie is in Italy,” we cannot capture the connection that relates the towns (they are located in the same nation).

1.2 Predicate Logics (First-Order Logic)

The limitations of propositional logic are overcome by predicate logic by making explicit the objects that the propositions are about. From the previous example, we can obtain as predicates “Ceglie is in …”, “… is in …”, or “… is in Italy”. By replacing the empty slots with objects, one obtains several propositions whose truth value depends on the applied substitution.

Formally, a predicate is a proposition from which one or more objects have been removed and replaced by variables. Each of these variables may take a value from its domain, may depend on other objects (even from different domains) through functions, or may be fixed by constants. The number of variables appearing in a predicate or function is called its arity. The smallest unit that can receive a truth value is an atom.

Note that predicates are neither true nor false themselves, but their truth value depends on the objects assigned to them.

A proposition can be seen as a predicate with arity zero.

As in propositional logic, complex expressions can be built using logical connectives.

A literal is an atom (positive literal) or a negated atom (negative literal).

An atom, literal, or expression is ground if it does not involve any variable.

Predicates allow reasoning not only about specific objects but also on classes of objects, using quantifiers:

  • Universal quantifier: \forall (for all): it is true if the predicate is true for all objects in the domain. Over a finite domain, it is equivalent to a conjunction.
  • Existential quantifier: \exists (there exists): it is true if the predicate is true for at least one object in the domain. Over a finite domain, it is equivalent to a disjunction (see the sketch after this list).
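
On a finite domain this correspondence is immediate; a minimal Python sketch (the domain and the predicate are illustrative):

domain = [0, 1, 2, 3]
P = lambda x: x >= 0            # an illustrative predicate

# Universal quantifier = conjunction over the whole domain
forall_P = all(P(x) for x in domain)

# Existential quantifier = disjunction over the whole domain
exists_P = any(P(x) for x in domain)

print(forall_P, exists_P)       # True True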

They have the highest priority among all operators. The scope of a quantifier is the expression immediately following it. A variable falling within the scope of several quantifiers is associated with the innermost one; rather than relying on this, we can simply standardize apart (rettificare), that is, give each quantifier a variable that is different from all the others.

All quantifiers can be collected, in the order they appear, at the start of the predicate.

A quantified variable is bound; otherwise, it is free. An expression not involving free variables is closed.

An expression is a tautology if it is true for any possible interpretation. For this purpose, free variables are to be interpreted as universally quantified. All the rules and substitutions seen in propositional logics remain valid for predicate logic tautologies.

1.2.1 Skolemization

Skolemization allows the elimination of existential quantifiers: each existentially quantified variable is replaced by a new function (Skolem function) whose arguments are the universally quantified variables in whose scope the existential quantifier falls, so its arity equals the number of such universal quantifiers. If the existential quantifier is not in the scope of any universal quantifier, a constant is used. Symbols introduced by Skolemization must be fresh, meaning that they have not already appeared in the formula.

For example,

\forall w Q(w) \Rightarrow \forall x \{\forall y \{\exists z[P(x,y,z) \Rightarrow \forall u R(x,y,u,z)]\}\}

becomes

\forall w Q(w) \Rightarrow \forall x \forall y [P(x,y,g(x,y)) \Rightarrow \forall u R(x,y,u,g(x,y))]
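
A minimal Python sketch of this replacement, assuming the formula is already in prenex form (the representation — quantifiers as a list of (kind, variable) pairs, atoms as nested tuples — is our own):

from itertools import count

def apply_subst(t, s):
    # Replace variables with their Skolem terms inside a nested-tuple formula
    if isinstance(t, tuple):
        return tuple(apply_subst(a, s) for a in t)
    return s.get(t, t)

def skolemize(prefix, matrix):
    universals, subst, fresh = [], {}, count()
    for kind, var in prefix:
        if kind == 'forall':
            universals.append(var)
        else:  # 'exists': Skolem function of the universals seen so far
            symbol = f'sk{next(fresh)}'          # fresh symbol
            subst[var] = (symbol, *universals) if universals else symbol
    return [q for q in prefix if q[0] == 'forall'], apply_subst(matrix, subst)

# forall x exists z : P(x, z)   becomes   forall x : P(x, sk0(x))
print(skolemize([('forall', 'x'), ('exists', 'z')], ('P', 'x', 'z')))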

1.2.2 Interpretations

An interpretation of a predicate P is a function that, for any possible combination of values taken from its arguments, assigns the resulting atom the value True or False. It corresponds to truth assignment in propositional calculus; the difference is that here it depends on the objects the claim refers to.

Using interpretations, we can reason about the consistency of sets of expressions (theories). A set of expressions is satisfiable if there exists at least one interpretation that makes them all true at the same time. An interpretation that makes a set of formulas true is a model for that set. A set of formulas is valid if every interpretation is a model for it, that is, every interpretation makes all expressions in the set true.

To abstract from meaningful names, Herbrand defined the concepts of universe, base and least model:

  • Herbrand universe: the set of all ground terms that can be built from the constants and function symbols of the language

  • Herbrand base: the set of all ground atoms that can be built from the universe

  • Herbrand least model: the set of all ground atoms evaluated to true in all Herbrand models (i.e., the intersection of all Herbrand models)

The set of all Herbrand models of a program forms a lattice. The least element is the least Herbrand model, and the greatest element is the entire Herbrand base.
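
For a definite (Horn) program, the least Herbrand model can be computed by iterating the immediate-consequence operator from the empty set up to its fixpoint. A minimal Python sketch on a function-free program with ground rules (the data layout and the example program are our own):

# Ground rules: (body_atoms, head_atom); facts have an empty body
program = [
    (set(), ('in', 'cisternino', 'italy')),
    (set(), ('in', 'ceglie', 'italy')),
    ({('in', 'cisternino', 'italy'), ('in', 'ceglie', 'italy')},
     ('same_nation', 'cisternino', 'ceglie')),
]

model = set()
while True:
    # Immediate consequences of the atoms derived so far
    step = {head for body, head in program if body <= model}
    if step <= model:
        break                    # fixpoint reached: the least Herbrand model
    model |= step

print(model)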

1.2.3 Proof methods

The goal is to answer questions about a knowledge base expressed in a logic formalism.

The procedure is as in propositional logics; we have to add the handling of variables and quantifiers:

  • Variable substitution law: in the proof sequence, it is possible to add an expression obtained by replacing, in a previous expression of the sequence, all occurrences of a free variable with another variable or a constant.

  • Inference rule: E \Rightarrow subst(E), where subst is a function binding each free variable of the expression E to a constant from its domain, or to a variable.

For example, if the expression P(x, y) \lor (\exists z) Q(x,z) belongs to the sequence, then P(a,b) \lor (\exists z) Q(a,z) can be added too.

1.2.3.1 Clausal logic

In a particular kind of proof, the hypotheses are in one of two forms:

  • facts (ground atoms): claims about the objects in the world, known to be true
  • rules (implication-type expressions): general principles that may be applied to known facts to deduce more true facts

If all premises in a rule are true, it is possible to deduce its consequence and add it to the proof sequence.

A clause is a disjunction of literals, whose variables are assumed to be universally quantified. It can be represented in the following ways:

  • As a logical expression:
    • A_1 \lor A_2 \lor \dots \lor A_n \lor \neg B_1 \lor \neg B_2 \lor \dots \lor \neg B_m
    • (A_1 \lor \dots \lor A_n) \lor (\neg B_1 \lor \neg B_2 \lor \dots \lor \neg B_m)
  • As a set of literals: {A_1, A_2, …, A_n, \neg B_1, \neg B_2, …, \neg B_m}
  • As an inference rule: A_1, A_2, …, A_n :- B_1, B_2, …, B_m

It is to be interpreted as B_1 \land B_2 \land \dots \land B_m \Rightarrow A_1 \lor A_2 \lor \dots \lor A_n.

B_j’s are called the body, A_i’s are called the head. The head is the conclusion of the rule, while the body is the premise.

Any wff of predicate logic can be transformed into a set of clauses, to be interpreted as a conjunction; this is possible because, as noted above, NOT and OR (the only connectives appearing in clauses) form a complete set.

Transforming wffs into clauses
  1. Remove implication symbols: p \Rightarrow q replaced by \neg p \lor q.
  2. Reduce the scope of \neg using double negation or De Morgan, so as to apply negation only to atomic formulas:
    • \neg(p \land q) = \neg p \lor \neg q
    • \neg(p \lor q) = \neg p \land \neg q
    • \neg \forall x P(x) = \exists x \neg P(x)
    • \neg \exists x P(x) = \forall x \neg P(x)
  3. Standardize apart variables.
  4. Skolemize the expression.
  5. Move all quantifiers to the left-hand-side part of the formula without changing their mutual order. The result is a wff with a string of quantifiers (prefix) followed by a formula without quantifiers (stub).
  6. Turn the stub into conjunctive normal form, using the distributive property (see the sketch after this list):
    • (p \lor q) \land (p \lor r) = p \lor (q \land r)
    • (p \land q) \lor (p \land r) = p \land (q \lor r)
  7. Drop all universal quantifiers: all remaining variables at this point are universally quantified.
  8. Remove \land symbols, replacing expressions in the form p \land q by the set of wffs {p, q}.
  9. Rename variables: no variable symbol must appear in more than one clause.
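
For the propositional steps of this pipeline (removing implications, De Morgan, distribution), one can experiment with sympy's boolean algebra module; a minimal sketch (the formulas are illustrative):

from sympy import symbols
from sympy.logic.boolalg import Implies, to_cnf

p, q, r = symbols('p q r')

# Step 1 (remove implication) and step 2 (push negations inward)
print(to_cnf(~Implies(p, q)))        # p & ~q

# Step 6 (distribute OR over AND)
print(to_cnf(p | (q & r)))           # (p | q) & (p | r)
# Step 8 would then split this conjunction into the clause set {p or q, p or r}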

1.2.3.2 Resolution

Resolution is a proof method that allows deriving new clauses from existing ones. It is based on the idea that if two clauses contain a pair of complementary literals, then the disjunction of the remaining literals of the two clauses follows from them, i.e. the implication below is a tautology:

(P_1 \lor P_2 \lor ... \lor P_n) \land (\neg P_1 \lor Q_1 \lor Q_2 \lor ... \lor Q_m) \Rightarrow (P_2 \lor ... \lor P_n \lor Q_1 \lor Q_2 \lor ... \lor Q_m)

Premises are called parents, the consequence is called the resolvent, and P_1 and \neg P_1 are the literals resolved upon.

Resolution encompasses several other inference rules; consider, for instance, modus ponens:

\dfrac{P, \quad P \Rightarrow Q}{Q} \qquad \text{which, rewriting the implication, becomes} \qquad \dfrac{P, \quad \neg P \lor Q}{Q}

that is, exactly a resolution step on P.

Example

Hypotheses {P, \neg P \lor Q, \neg Q \lor R}

Prove R

  1. P : hypothesis
  2. \neg P \lor Q : hypothesis
  3. \neg Q \lor R : hypothesis
  4. Q : resolution of 1,2
  5. \neg P \lor R : resolution of 2,3
  6. R : resolution of 1,5 or of 3,4

One interesting application of resolution is the proof by refutation: assume the negation of the thesis is true, show that this negation, together with the hypotheses, leads to a contradiction by deriving the empty clause (that is, a clause having neither head nor body, \boxed{}), and conclude that the theorem is true. In this way the proof gains a direction: keep reducing the size of the clauses (see the sketch after the example below).

Example

Hypotheses {P, \neg P \lor Q, \neg Q \lor R}

Prove R \rightarrow Hypotheses {P, \neg P \lor Q, \neg Q \lor R, \neg R}

  1. P : hypothesis
  2. \neg P \lor Q : hypothesis
  3. \neg Q \lor R : hypothesis
  4. \neg R : hypothesis
  5. Q : resolution of 1,2
  6. \neg P \lor R : resolution of 2,3
  7. R : resolution of 1,6 or of 3,5
  8. \boxed{} : resolution of 4,7
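
The refutation above can be reproduced mechanically. A minimal propositional sketch in Python (clauses as frozensets of literals, literals as strings with '~' marking negation; all names are our own):

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(c1, c2):
    # One resolvent per pair of complementary literals
    return [frozenset((c1 - {l}) | (c2 - {negate(l)}))
            for l in c1 if negate(l) in c2]

def refute(clauses):
    # Saturate the clause set until the empty clause appears
    clauses = set(clauses)
    while True:
        new = {r for c1 in clauses for c2 in clauses
                 for r in resolvents(c1, c2)}
        if frozenset() in new:
            return True          # contradiction found: the theorem holds
        if new <= clauses:
            return False         # saturated without the empty clause
        clauses |= new

# Hypotheses {P, ~P or Q, ~Q or R} plus the negated thesis ~R
kb = [frozenset(c) for c in [{'P'}, {'~P', 'Q'}, {'~Q', 'R'}, {'~R'}]]
print(refute(kb))                # True: R is proved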

Resolution is defined for propositional calculus, but it can also be applied to clauses involving variables: look for substitutions of terms for the variables in the parent clauses such that the resulting clauses contain complementary literals. This is possible because clauses are universally quantified!

For predicate logic, assuming the two parent clauses are standardized apart, we can apply the substitution law: look for a substitution that makes a literal in the former complementary with a literal in the latter (unifier of the two literals), apply it to both expressions, and resolve the two resulting formulas.

1.2.4 Substitution

Substitution is a function from variables to terms; that is, it replaces a variable with a term. An example is \sigma = \{g(h(a), a)/x, h(a)/y\}, in which \sigma binds x to g(h(a), a) and y to h(a). The application of a substitution to an expression E is denoted \sigma(E).

Caution

It is not permitted to replace a variable with a term involving that variable, unless we admit infinite terms (the occur check in the unification algorithm below enforces this).

Given two substitutions, \sigma and \delta, we define their composition \sigma \circ \delta as the substitution obtained by applying \sigma to the terms of \delta, i.e. \{ \sigma(t_i)/x_i \mid t_i/x_i \in \delta \}, to which we then add the (term/variable) pairs of \sigma whose variables are not among the variables substituted by \delta. The result of applying a composition of substitutions to a term t is denoted by (\sigma \circ \delta)(t).

Example

Given

\delta = { g(y,z)/x, a/y }

\sigma = { b/y, c/z }

We have

\sigma \circ \delta = { g(b,c)/x, a/y, c/z }

It is easily checked that (\sigma \circ \delta)(t) = \sigma(\delta(t))
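
A minimal Python sketch of composition, with substitutions as dicts from variable names to terms (the nested-tuple term representation is our own):

def apply_subst(t, s):
    # Apply substitution s to term t: variables/constants are strings,
    # compound terms are tuples (functor, arg1, ..., argn)
    if isinstance(t, tuple):
        return (t[0],) + tuple(apply_subst(a, s) for a in t[1:])
    return s.get(t, t)

def compose(sigma, delta):
    # (sigma o delta)(t) = sigma(delta(t))
    out = {x: apply_subst(t, sigma) for x, t in delta.items()}
    # Add sigma's pairs for variables not substituted by delta
    out.update({x: t for x, t in sigma.items() if x not in delta})
    return out

delta = {'x': ('g', 'y', 'z'), 'y': 'a'}     # { g(y,z)/x, a/y }
sigma = {'y': 'b', 'z': 'c'}                 # { b/y, c/z }
print(compose(sigma, delta))   # {'x': ('g', 'b', 'c'), 'y': 'a', 'z': 'c'}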

Caution

In general, (\sigma \circ \delta) \neq (\delta \circ \sigma)

Given 3 substitutions \tau, \sigma and \delta, we have: ((\sigma \circ \delta) \circ \tau) = (\sigma \circ (\delta \circ \tau)) (associative property).

Two terms, s and t, are unifiable if there exists a substitution \sigma such that \sigma(s) = \sigma(t). \sigma is a unifier of s and t, and \sigma(s) is a unification of s and t.

There can be many unifiers for the same pair of terms. Given two terms, s and t, and a unifier \sigma of s and t, \sigma is the most general unifier (MGU) of s and t if, for any other unifier \delta of s and t, there exists a substitution \tau such that \delta = (\tau \circ \sigma). Intuitively, we may think of the MGU as the simplest among all unifiers of two given terms. The MGU is unique (up to variable renaming) unless the function symbols occurring in the terms satisfy at least one of commutativity (f(x,y) = f(y,x)) and associativity (f(f(x,y),z) = f(x, f(y,z))), which means the order of the arguments is not fixed.

Now the problem is: how do we find such a unifier? The unification problem is decidable in first-order logic, but in general undecidable for logics of order \alpha, \alpha \geq 2.

To unify well-formed formulas, we first need the concept of permutation, that is, any bijective function p: A \rightarrow A.

Now we can define unification for every part involved in first-order logic:

  • Unifiable atomic formulae: two atoms a = P(t_1, t_2, ..., t_n) and a^{'} = Q(t_1^{'}, t_2^{'}, ..., t_m^{'}) unify if there exists a substitution \sigma such that \sigma(a) = \sigma(a^{'}), which means:

    • P = Q
    • n = m (same arity)
    • \exists \sigma s.t. \forall i \in {1, …, n}: \sigma(t_i) = \sigma(t_i^{'}).
      • This is slightly modified if P is commutative, that is, the order of terms in P does not matter: there exists a permutation p of the first n natural numbers and a substitution \sigma such that \forall i \in {1, …, n}: \sigma(t_i) = \sigma(t_{p(i)}^{'}).
  • Unifiable literals: two literals unify if they have the same sign and their atoms unify.

  • Unifiability of conjunctions of literals: two conjunctions (AND) of literals unify if the number n of literals is the same and there exists a permutation p of the first n natural numbers and a substitution \sigma such that \forall i \in {1, …, n}: \sigma(l_i) = \sigma(l^{'}_{p(i)}).

  • Unifiability of WFFs in Normal Disjunctive Form (Disjunction of Literals): same condition as the conjunction of literals.

  • Unification of sets of literals: here the conditions are not the same, as the two sets need not have the same cardinality, because distinct literals may collapse into one through substitution.

    For example: c = P(f(x), y) \land P(f(a), b), the set of literals is L = {P(f(x), y), P(f(a), b)}; c^{'} = P(f(z), w), L^{'} = {P(f(z), w)}. There exists \sigma = {a/x, b/y, a/z, b/w} such that \sigma(L) = \sigma(L^{'}). However, according to the definition of unifiability for conjunctions of literals, they are not unifiable because the lengths do not match.

1.2.5 Unification Algorithm

We can represent a term in different ways:

  • String representation: a term is represented as a linear array, whose elements are function symbols, variables, constants, commas, and parentheses.
  • Tree representation: a term is represented as a tree, where function symbols are roots of subtrees and the child nodes are function’s arguments. The leaves are variables and constants occurring in the term.

These representations are good for handling simple terms, but not complex ones, because they ignore common structures (subterms). The alternative is a graph representation (DAG, Directed Acyclic Graph). If we allow cycles, we may handle terms of infinite length and have infinite unification. With this representation, subterms in common have to be unified only once. If subterms are not shared, it may be necessary to generate structures of exponential width during unification.

Below we present an algorithm based on the string representation.

The idea is to unify the two terms by recursively unifying their components, starting from the outermost function symbols and going down to the innermost ones. The algorithm uses a substitution list to keep track of the substitutions made during the unification process, in order to return the most general unifier (MGU) at the end. The algorithm is based on the following principles:

  1. If the two terms are identical, return an empty substitution.
  2. If one term is a variable and the other is a term, check if the variable occurs in the term. If it does, return failure; otherwise, return a substitution that replaces the variable with the term.
  3. If both terms are variables, return a substitution that replaces one variable with the other.
  4. If both terms are compound terms with the same functor and arity, recursively unify their arguments and return the composition of the substitutions.
  5. If the terms are not unifiable, return failure.
The algorithm, as runnable Python (a minimal sketch; the term representation — variables as strings starting with '?', constants as other strings, compound terms as tuples (functor, arg1, ..., argn) — is our own, and FAILURE is rendered as None):

def is_variable(t):
    return isinstance(t, str) and t.startswith('?')

def occurs_in(v, t):
    # Occur check: does variable v appear anywhere inside term t?
    return t == v or (isinstance(t, tuple) and any(occurs_in(v, a) for a in t[1:]))

def substitute(t, s):
    if is_variable(t):
        return s.get(t, t)
    if isinstance(t, tuple):
        return (t[0],) + tuple(substitute(a, s) for a in t[1:])
    return t

def unify(t1, t2):
    if t1 == t2:
        return {}                              # empty substitution
    if is_variable(t1):
        return None if occurs_in(t1, t2) else {t1: t2}
    if is_variable(t2):
        return None if occurs_in(t2, t1) else {t2: t1}
    # Functor check: same function symbol AND same arity
    if not (isinstance(t1, tuple) and isinstance(t2, tuple)) \
            or t1[0] != t2[0] or len(t1) != len(t2):
        return None                            # FAILURE
    subst = {}
    for a1, a2 in zip(t1[1:], t2[1:]):
        s = unify(substitute(a1, subst), substitute(a2, subst))
        if s is None:
            return None                        # FAILURE
        # Apply s to the bindings collected so far and compose
        subst = {v: substitute(t, s) for v, t in subst.items()}
        subst.update(s)
    return subst

# unify(P(f(?x), ?y), P(f(a), b))  ->  {'?x': 'a', '?y': 'b'}
print(unify(('P', ('f', '?x'), '?y'), ('P', ('f', 'a'), 'b')))

1.2.6 Pattern matching

Unification is too strict as a concept; an important variant is pattern matching: given two terms, s and t, we say that a matching exists between them if there exists a substitution \mu such that \mu(s) = t (or \mu(t) = s). In such a case, we say that \mu is a matcher of s and t, and \mu(s) (or \mu(t)) is a matching of s and t (and vice versa).

In other words, a matching problem is a unification problem in which substitutions are permitted in only one of the two terms.
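
A minimal Python sketch of matching, reusing the term representation of the unification algorithm above (substitutions are allowed on the first term only; all names are our own):

def is_variable(t):
    return isinstance(t, str) and t.startswith('?')

def match(pattern, target, subst=None):
    # One-directional unification: only variables of `pattern` get bound
    subst = dict(subst or {})
    if is_variable(pattern):
        if pattern in subst:                 # already bound: must agree
            return subst if subst[pattern] == target else None
        subst[pattern] = target
        return subst
    if isinstance(pattern, tuple) and isinstance(target, tuple) \
            and pattern[0] == target[0] and len(pattern) == len(target):
        for a, b in zip(pattern[1:], target[1:]):
            subst = match(a, b, subst)
            if subst is None:
                return None                  # FAILURE
        return subst
    return subst if pattern == target else None

# P(f(?x), ?y) matches the ground term P(f(a), b)
print(match(('P', ('f', '?x'), '?y'), ('P', ('f', 'a'), 'b')))
# {'?x': 'a', '?y': 'b'}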

1.2.7 Subsumption

Matching is a special case of subsumption. Let s and t be terms between which a matching \mu exists:

  • if \mu(s) = t, then s subsumes t
  • if \mu(t) = s, then t subsumes s

A necessary and sufficient condition for a matching to exist between terms s and t is that s subsumes t or t subsumes s.

So, matching is a special case of unification, in which the two terms are not necessarily identical, but one is a more general form of the other.

In other words, matching is a way to find a substitution that makes one term a more specific instance of another term.

For instance, if our model contains a bicycle and we observe a blue bicycle, we can say that it is still a bicycle: the fact that it is blue is just an extra detail.

A matching happens if, given a model, the observation is equal to it, while subsumption happens when the model is contained in the observation (as with the blue bicycle).

Formally, the model object matches a fact object if the formula describing the model unifies with some sub-conjunction of the formulas of the fact; we have a matching only if the formula describing the model can be derived from that describing the fact.

Example

Given our observation

\begin{aligned} &\text{shape}(c1, \text{rectangle}) \land \text{shape}(c2, \text{circle}) \land \text{shape}(c3, \text{circle}) \\ &\land \text{size}(c1, \text{medium}) \land \text{size}(c2, \text{small}) \land \text{size}(c3, \text{small}) \end{aligned}

And our model

\exists x \exists y \exists z \text{shape}(x, \text{rectangle}) \land \text{shape}(y, \text{circle}) \land \text{shape}(z, \text{circle})

We can notice that the model is a subset of our observation: a very simple car needs to be described only by its parts, while our observation also includes details about the size of the single parts.

1.2.8 Unification vs Pattern Matching

First-order unification fulfils the following properties:

  • Monotonicity: it adds properties, it does not subtract them. It is not possible to remove information from a structure by unifying it with another one.
  • Commutativity and associativity: the order of unifications is irrelevant.
  • It is not distributive with respect to generalization (and vice versa). (Generalization is the converse of unification.)
  • It is a process of constraint merging.

Unification is bi-directional, while matching is uni-directional.

1.2.9 Deduction vs Induction

  • Deduction: reasoning from general principles to specific instances, the main operator is unification.
  • Induction: reasoning from specific instances to general principles, the main operator is generalization.