First-order logic. Quantifiers, predicates, relations.
Self-serve tutorial - low prerequisites, straightforward concepts.
Propositional logic can say “(P ∧ Q) → R”, but it can’t say “for every user, if they reset their password then they can log in” without inventing a separate proposition for each user. Predicate logic (first-order logic) is the upgrade that lets you talk about objects, their properties, and relationships—at scale—using variables and quantifiers.
Predicate logic extends propositional logic with (1) a domain of discourse (objects), (2) terms that name objects (variables/constants/functions), (3) predicates/relations that make statements about objects, and (4) quantifiers ∀ and ∃ that range over the domain. You build formulas like ∀x (Human(x) → Mortal(x)) and reason by translating English carefully, tracking scope, and using sound inference patterns (e.g., universal instantiation, existential generalization) while avoiding common quantifier traps.
Propositional logic treats whole statements as indivisible atoms: P, Q, R. That’s great for reasoning about fixed, named facts, but it doesn’t let you open up a statement and talk about its internal structure.
Consider the English statement: “Every student submitted the assignment.”
In propositional logic, you’d need one proposition per student (“Alice submitted”, “Bob submitted”, …) and then conjoin them all. That’s not scalable and it misses the structure that these are all instances of the same pattern.
Predicate logic (often called first-order logic, FOL) solves this by introducing a domain of objects, variables that range over it, predicates that make claims about objects, and the quantifiers ∀ (“for all”) and ∃ (“there exists”).
This is the language used by mathematics (“∀ε > 0 ∃N …”), by program specification (“∀i: 0 ≤ i < n → a[i] ≥ 0”), and by many formal methods tools.
Predicate logic is built from a small set of syntactic building blocks.
The domain (also called the universe) is the set over which variables range. It must be nonempty.
Examples: the set of all people, the set of all integers ℤ, the set of all users of a system.
The same formula can mean different things under different domains.
A term denotes an element of the domain.
Terms come from: variables (x, y), constants (alice, socrates), and applications of function symbols to other terms (f(x), x + 1).
Examples of terms: x, alice, x + 1, f(g(y)).
Intuition: a term is like an expression that evaluates to “an object”.
A predicate is an n-ary relation symbol, like: Student(x) (unary), Loves(x, y) (binary), Between(x, y, z) (ternary).
When you apply a predicate to terms, you get an atomic formula: Student(alice), Loves(x, bob), Less(x + 1, y).
An atomic formula evaluates to True/False once you know what the symbols mean in your domain.
From atomic formulas, you build larger formulas using the same connectives as propositional logic: ¬, ∧, ∨, →, ↔.
And you add quantifiers: ∀x (“for all x”) and ∃x (“there exists an x”).
Crucially, quantifiers bind variables (like how parentheses bind subexpressions). The part of the formula a quantifier controls is its scope.
A variable occurrence is: bound if it falls inside the scope of a quantifier for that variable, and free otherwise.
Example: in ∀x Loves(x, y), the x is bound by ∀x, while y is free.
A formula with no free variables is called a sentence. Sentences are what we typically evaluate as True/False in a model.
To interpret a predicate-logic language, you need: a nonempty domain, an interpretation of each constant, function, and predicate symbol over that domain, and (for formulas with free variables) an assignment of domain elements to variables.
Then: every sentence comes out True or False in the model, with ∀ and ∃ ranging over all elements of the domain.
This “assignment of variables” idea is why quantifiers are powerful: they quantify over all objects in the domain, not just those you named.
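As a minimal sketch, a finite model translates directly into code. The domain, the predicate interpretations, and the sentence ∀x (Student(x) → HasID(x)) below are all invented for illustration:

```python
# Illustrative model: a nonempty domain plus an interpretation of each symbol.
domain = {"alice", "bob", "carol"}
student = {"alice", "bob"}          # interpretation of Student(.): a subset of the domain
has_id = {"alice", "bob", "carol"}  # interpretation of HasID(.)

# Evaluate the sentence ∀x (Student(x) → HasID(x)): quantify over the whole domain.
def all_students_have_ids():
    return all((x not in student) or (x in has_id) for x in domain)
```

Note that the quantifier loops over every element of the domain, including carol, who was never mentioned in the Student interpretation.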
| Feature | Propositional Logic | Predicate Logic (FOL) |
|---|---|---|
| Atomic units | Propositions P, Q, R | Predicates applied to terms, e.g., P(x), R(x,y) |
| Can talk about objects? | No | Yes (domain + terms) |
| Can express “all / some”? | Only by enumerating | Yes via ∀ and ∃ |
| Typical use | Circuit reasoning, SAT | Math, specs, verification, databases |
Predicate logic keeps propositional logic’s connectives, but gives you a language to express structure and patterns over a domain.
Most statements we care about in math and software are inherently quantified: “every array index stays in bounds,” “some input triggers the bug,” “for every ε > 0 there exists an N …”.
Quantifiers let you express these patterns directly. But they also introduce subtlety: the order and scope of quantifiers can change meaning dramatically.
The formula: ∀x φ(x)
means: “for every x in the domain, φ(x) is true.”
Example: ∀x (Student(x) → HasID(x))
This does not claim that every object is a student. It says: if an object is a student, then it has an ID.
A common pattern: ∀x (A(x) → B(x))
reads as: “All A’s are B’s.”
The formula: ∃x φ(x)
means: “there exists some x in the domain such that φ(x) is true.”
Example: ∃x (Student(x) ∧ Late(x))
This asserts at least one student is late.
Scope is usually controlled by parentheses.
Compare:
1) ∀x (P(x) → Q(x))
2) (∀x P(x)) → Q(x)
These are not the same: in (1) the quantifier scopes over the whole implication, while in (2) it scopes only over P(x), leaving the x in Q(x) free.
This is one of the biggest conceptual leaps from propositional logic.
Consider:
Assume the domain is people.
1) ∀x ∃y Loves(x, y)
2) ∃y ∀x Loves(x, y)
These are very different: (1) says everyone loves someone (possibly a different person for each lover), while (2) says there is a single person whom everyone loves. The second is much stronger.
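A quick finite check makes the asymmetry concrete; the two-person domain and the Loves relation below are invented for illustration:

```python
# Tiny domain where ∀x ∃y holds but ∃y ∀x fails.
people = {"alice", "bob"}
loves = {("alice", "bob"), ("bob", "alice")}  # each loves the other; neither loves themselves

# (1) ∀x ∃y Loves(x, y): everyone loves someone (maybe different someones).
everyone_loves_someone = all(any((x, y) in loves for y in people) for x in people)

# (2) ∃y ∀x Loves(x, y): a single y loved by everyone.
someone_loved_by_all = any(all((x, y) in loves for x in people) for y in people)
```

Here (1) is true but (2) is false: no single person is loved by both alice and bob.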
When translating, go slowly: fix the domain, name the predicates, choose the quantifiers, and only then decide between → and ∧.
Example: “Every user who reset their password can log in.”
Translation: ∀u (ResetPassword(u) → CanLogIn(u)), with the predicate names chosen to mirror the English.
Notice why it’s → and not ∧: we’re not claiming everyone reset their password.
Negation interacts with quantifiers in a structured way: ¬∀x φ ≡ ∃x ¬φ, and ¬∃x φ ≡ ∀x ¬φ.
Intuition: “not everyone does” means “someone fails to do,” and “no one does” means “everyone fails to do.”
Do the algebra carefully. For example:
¬(∀x (P(x) → Q(x)))
Step-by-step: let ψ(x) = (P(x) → Q(x)).
By ¬∀x ψ(x) ≡ ∃x ¬ψ(x), we get:
≡ ∃x ¬(P(x) → Q(x))
Now expand the implication: (P → Q) ≡ (¬P ∨ Q),
so ¬(P → Q) ≡ ¬(¬P ∨ Q)
Apply De Morgan: ¬(¬P ∨ Q) ≡ (P ∧ ¬Q)
Final:
≡ ∃x (P(x) ∧ ¬Q(x))
Reading: “It is not true that all P’s are Q’s” means “There exists a P that is not Q.”
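This equivalence can be sanity-checked by brute force on a small finite domain, trying every possible interpretation of the unary predicates P and Q:

```python
from itertools import product

# Check on a 3-element domain that
#   ¬∀x (P(x) → Q(x))  ≡  ∃x (P(x) ∧ ¬Q(x))
# under every interpretation of P and Q.
domain = [0, 1, 2]

def negation_law_holds():
    for p_bits in product([False, True], repeat=len(domain)):
        for q_bits in product([False, True], repeat=len(domain)):
            P = dict(zip(domain, p_bits))
            Q = dict(zip(domain, q_bits))
            lhs = not all((not P[x]) or Q[x] for x in domain)
            rhs = any(P[x] and not Q[x] for x in domain)
            if lhs != rhs:
                return False
    return True
```

The loop enumerates all 2³ × 2³ = 64 interpretations, so agreement here is exhaustive for this domain size.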
| English | Typical logic form |
|---|---|
| “All A are B” | ∀x (A(x) → B(x)) |
| “Some A are B” | ∃x (A(x) ∧ B(x)) |
| “No A are B” | ∀x (A(x) → ¬B(x)) (equiv. ¬∃x (A(x) ∧ B(x))) |
| “Not all A are B” | ∃x (A(x) ∧ ¬B(x)) |
Quantifiers are simple symbols with big consequences. Most real errors are scope/order errors, not symbol errors.
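The →-versus-∧ row of the table can be demonstrated on a tiny invented domain that contains a non-dog:

```python
# "All dogs are vaccinated" over a mixed domain; names are illustrative.
animals = {"rex", "whiskers"}   # rex is a dog; whiskers is a cat
dog = {"rex"}
vaccinated = {"rex"}

# Correct: ∀x (Dog(x) → Vaccinated(x)), conditional on being a dog.
correct = all((x not in dog) or (x in vaccinated) for x in animals)

# Wrong: ∀x (Dog(x) ∧ Vaccinated(x)), asserts everything is a vaccinated dog.
wrong = all((x in dog) and (x in vaccinated) for x in animals)
```

The ∧ version fails as soon as the domain contains anything that is not a dog, which is exactly the scope/translation error described above.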
Predicate logic splits the world into two roles: terms, which denote objects, and formulas, which denote truth values.
Predicates are what turn objects into truth claims.
This separation is why you can’t, for example, treat a predicate like an object unless your logic explicitly supports that (standard first-order logic does not—this is a doorway to higher-order logic).
Predicates and functions have an arity: Student(x) is unary (arity 1), Loves(x, y) is binary (arity 2), Between(x, y, z) is ternary (arity 3).
Similarly for functions: succ(x) is unary, while + in x + y is binary.
Arity matters because it encodes what kind of relationship you’re expressing.
If your domain is D, then: an n-ary predicate symbol is interpreted as a relation R ⊆ Dⁿ, i.e., a set of n-tuples of domain elements.
So the atomic formula R(a, b) is true exactly when (a, b) ∈ R.
This viewpoint connects predicate logic to: set theory (relations are sets of tuples), relational databases (tables are relations), and graphs (an edge set is a binary relation).
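As a sketch, the set-of-tuples view translates directly into code; here the binary relation Less is built over a small invented integer domain:

```python
# An n-ary predicate over domain D is interpreted as a subset of D**n.
D = {1, 2, 3}

# Interpretation of the binary relation Less: a set of ordered pairs.
less = {(a, b) for a in D for b in D if a < b}

# The atomic formula Less(a, b) is true exactly when (a, b) is in the relation.
def Less(a, b):
    return (a, b) in less
```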
Most first-order settings include equality “=”.
Equality lets you state things like uniqueness: ∃x (P(x) ∧ ∀y (P(y) → y = x)) says “there is exactly one x with P(x).”
Be careful: equality is a logical symbol with its own meaning (identity), not just another predicate you can interpret arbitrarily (in standard FOL with equality).
Function symbols let you talk about structured objects without introducing extra quantifiers.
Example domain: integers.
Then, with a binary function symbol +, you can write: ∀x (Even(x) → Odd(x + 1)).
Without functions, you’d need a relation Plus(x, 1, y) to express y = x + 1, and then quantify y.
You can encode a function f(x) using a relation F(x, y) meaning “y = f(x)” plus axioms: totality, ∀x ∃y F(x, y), and uniqueness, ∀x ∀y ∀z ((F(x, y) ∧ F(x, z)) → y = z).
But using function symbols is typically cleaner.
A fundamental operation is substituting a term into a formula.
If φ(x) is a formula with x free, and t is a term, then φ(t) is the result of substituting t for x.
Example: if φ(x) = Student(x) → HasEmail(x) and t = alex, then φ(alex) = Student(alex) → HasEmail(alex).
But you must avoid variable capture: substituting a term containing a variable that becomes accidentally bound.
Example of what can go wrong: let φ(x) = ∀y Loves(y, x) (“everyone loves x”) and substitute t = y.
Naively you get ∀y Loves(y, y), where the y inside t became bound by ∀y.
The safe approach is: rename the bound variable to something fresh before substituting.
So you’d rename: φ(x) becomes ∀z Loves(z, x).
Then substitute x := y: ∀z Loves(z, y), which says “everyone loves y,” as intended.
If the domain is finite D = {d₁, …, dₙ}, then: ∀x φ(x) behaves like the conjunction φ(d₁) ∧ … ∧ φ(dₙ), and ∃x φ(x) behaves like the disjunction φ(d₁) ∨ … ∨ φ(dₙ).
This is not how we compute it in general (domains can be infinite), but it’s an excellent intuition-builder.
Keep these categories separate: terms are object expressions (they denote elements of the domain), while formulas are truth expressions (they denote True/False).
If you can reliably tell “object expressions” from “truth expressions”, you’ll write correct formulas much faster.
Formal methods is about specifying and proving properties of systems. Predicate logic is the backbone for: specification languages, program contracts and invariants, SMT solver inputs, and database query languages.
Predicate logic is the first language where you can naturally express invariants like: ∀i (0 ≤ i < n → a[i] ≥ 0)
or existence claims like: ∃i (0 ≤ i < n ∧ a[i] = target).
Predicate logic is flexible because you choose the domain and interpretation.
Examples:
1) Data structures. Domain: array indices. A sortedness invariant: ∀i (0 ≤ i < n − 1 → a[i] ≤ a[i+1]).
2) Security policies. Domain: users and resources, with illustrative predicates like Admin(u) and CanDelete(u, r).
Then a policy might be: ∀u ∀r (CanDelete(u, r) → Admin(u)).
3) Databases (relational model)
A table like Enrolled(student, course) is literally a binary predicate Enrolled(·,·).
Queries often correspond to existential formulas.
Example: “Find students enrolled in CS101” corresponds to: Enrolled(x, cs101), with x left free as the “answer” variable.
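As a sketch with invented rows, the query is just a comprehension over the relation:

```python
# The table Enrolled(student, course) as a binary relation; rows are illustrative.
enrolled = {("alice", "CS101"), ("bob", "CS101"), ("carol", "MATH200")}

# "Find students enrolled in CS101": the values of x satisfying Enrolled(x, cs101),
# computed by filtering the relation and projecting the student column.
cs101_students = {s for (s, c) in enrolled if c == "CS101"}
```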
Even if you don’t do full theorem-prover work yet, you should recognize the standard moves.
From ∀x φ(x), you may conclude φ(t) for any term t.
Example:
1) ∀x (Human(x) → Mortal(x))
2) Human(socrates)
UI on (1) with t = socrates gives: Human(socrates) → Mortal(socrates).
Then by modus ponens with (2): Mortal(socrates).
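The same derivation can be mimicked mechanically; this sketch treats the universal fact as a rule instantiated for every known constant:

```python
# Forward chaining with one universal rule over explicit facts (names illustrative).
facts = {("Human", "socrates")}

# ∀x (Human(x) → Mortal(x)): instantiate for each constant we know about,
# then fire the resulting implication by modus ponens.
constants = {t for (_, t) in facts}
derived = set(facts)
for t in constants:
    if ("Human", t) in derived:        # premise Human(t) holds
        derived.add(("Mortal", t))     # conclude Mortal(t)
```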
From ∃x φ(x), you can introduce a fresh symbol k such that φ(k) holds, but k must be new (it can’t carry hidden assumptions).
This is subtle: you don’t get to pick a specific known constant unless you can prove it works.
From φ(t), infer ∃x φ(x) (replace t by a variable).
Example: from Student(alex), infer ∃x Student(x).
From φ(x) for an arbitrary x (with no special assumptions about x), infer ∀x φ(x).
This “arbitrary” requirement is exactly where many informal proofs go wrong.
You already know satisfiability in propositional logic. In FOL: a sentence is satisfiable if it is true in at least one model, and valid if it is true in every model.
Example (valid): ∀x P(x) → ∃x P(x)
This is valid because if P holds for all x in a nonempty domain, then certainly there exists an x where P holds.
Non-valid (not always true): ∃x P(x) → ∀x P(x)
True in some models, false in others.
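On a fixed two-element domain, this can be checked by enumerating every interpretation of P. A finite check like this can refute validity, but only illustrates (it does not prove) validity across all domains:

```python
from itertools import product

# Enumerate every interpretation of the unary predicate P on a 2-element domain.
domain = [0, 1]

def holds_in_every_interpretation(formula):
    return all(formula(dict(zip(domain, bits)))
               for bits in product([False, True], repeat=len(domain)))

# ∀x P(x) → ∃x P(x): true under every interpretation here.
valid_example = holds_in_every_interpretation(
    lambda P: (not all(P[x] for x in domain)) or any(P[x] for x in domain))

# ∃x P(x) → ∀x P(x): fails when P holds for exactly one element.
non_valid_example = holds_in_every_interpretation(
    lambda P: (not any(P[x] for x in domain)) or all(P[x] for x in domain))
```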
Pure FOL is expressive, but automated reasoning is harder than for propositional logic: validity of first-order sentences is undecidable in general.
So real tools often: restrict attention to decidable fragments, reduce questions to SAT/SMT solving, or rely on user-supplied proof steps.
The key takeaway: learning predicate logic is the gateway to reading specifications, understanding solver input languages, and following proofs about programs.
Domain: all animals in a shelter.
Predicates: Dog(x), Vaccinated(x), Adoptable(x).
Translate:
1) “Every dog is vaccinated.”
2) “Some dog is adoptable.”
3) “Not every dog is adoptable.”
1) “Every dog is vaccinated.”
We want: for all x, if x is a dog then x is vaccinated.
∀x (Dog(x) → Vaccinated(x))
2) “Some dog is adoptable.”
We want: there exists an x that is a dog and is adoptable.
∃x (Dog(x) ∧ Adoptable(x))
3) “Not every dog is adoptable.”
Start from “Every dog is adoptable”: ∀x (Dog(x) → Adoptable(x))
Negate it:
¬∀x (Dog(x) → Adoptable(x))
Now push negation inward using ¬∀x φ ≡ ∃x ¬φ.
Let φ(x) = (Dog(x) → Adoptable(x)).
¬∀x φ(x) ≡ ∃x ¬φ(x)
So:
¬∀x (Dog(x) → Adoptable(x))
≡ ∃x ¬(Dog(x) → Adoptable(x))
Eliminate implication:
(Dog(x) → Adoptable(x)) ≡ (¬Dog(x) ∨ Adoptable(x))
So:
¬(Dog(x) → Adoptable(x))
≡ ¬(¬Dog(x) ∨ Adoptable(x))
≡ Dog(x) ∧ ¬Adoptable(x)
Final:
∃x (Dog(x) ∧ ¬Adoptable(x))
Insight: “All dogs are adoptable” is an implication because it’s conditional on being a dog. Negating a universal statement turns it into an existential counterexample: “there exists a dog that is not adoptable.”
Domain: users.
Predicate Knows(u, p): user u knows password p.
We want to compare two statements:
A) ∀u ∃p Knows(u, p)
B) ∃p ∀u Knows(u, p)
Statement A:
∀u ∃p Knows(u, p)
Read carefully: for each user u, there exists some password p such that u knows p.
Different users may have different passwords.
This is typically true in a realistic system (each user knows at least one password).
Statement B:
∃p ∀u Knows(u, p)
Read carefully: there exists a single password p such that every user knows p.
This is far stronger: it implies a shared universal password.
Show non-equivalence by countermodel thinking.
Suppose there are two users: alice and bob.
Suppose alice knows password "a1" and bob knows password "b1".
Then: statement A is true (each user knows a password of their own), but statement B is false (no single password is known to both).
Insight: Mentally underline which variables must be the same across all choices. In ∃p ∀u, the password is chosen once and reused for every user; in ∀u ∃p, each user gets their own choice.
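The countermodel can be checked directly in code; the users and passwords below are the ones from the text:

```python
# Countermodel: two users, each knowing only their own password.
users = {"alice", "bob"}
passwords = {"a1", "b1"}
knows = {("alice", "a1"), ("bob", "b1")}

# A) ∀u ∃p Knows(u, p): each user knows some password (their own will do).
a_holds = all(any((u, p) in knows for p in passwords) for u in users)

# B) ∃p ∀u Knows(u, p): one password known to every user.
b_holds = any(all((u, p) in knows for u in users) for p in passwords)
```

A holds and B fails in this model, so the two statements cannot be equivalent.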
Given:
1) ∀x (Student(x) → HasEmail(x))
2) Student(alex)
Prove: HasEmail(alex)
Start with the universal statement:
1) ∀x (Student(x) → HasEmail(x))
Apply universal instantiation with t = alex:
From ∀x φ(x), infer φ(alex).
So we get:
3) Student(alex) → HasEmail(alex)
Use the fact:
2) Student(alex)
Apply modus ponens to (3) and (2):
If Student(alex) → HasEmail(alex) and Student(alex), then HasEmail(alex).
Therefore:
4) HasEmail(alex)
Insight: Universal facts are like templates. Instantiation turns the template into a concrete implication you can fire with propositional reasoning.
Predicate logic = propositional logic + a domain of objects + terms + predicates + quantifiers ∀ and ∃.
Terms denote objects; formulas denote truth. Predicates turn terms into atomic formulas.
Scope and quantifier order matter: ∀x ∃y R(x,y) is generally not equivalent to ∃y ∀x R(x,y).
Translate “All A are B” as ∀x (A(x) → B(x)), not ∀x (A(x) ∧ B(x)).
Negating quantifiers flips them: ¬∀x φ ≡ ∃x ¬φ and ¬∃x φ ≡ ∀x ¬φ.
A formula with no free variables is a sentence; sentences are what become True/False in a model.
Core inference patterns include universal instantiation, existential generalization, and careful existential instantiation with fresh witnesses.
Using ∧ instead of → when translating statements like “All A are B”, accidentally asserting everything is A.
Losing track of scope/parentheses, producing formulas with free variables or unintended bindings.
Assuming ∀x ∃y and ∃y ∀x mean the same thing; swapping quantifiers without checking meaning.
Negating quantified statements incorrectly (e.g., thinking ¬∀x P(x) means ∀x ¬P(x) instead of ∃x ¬P(x)).
Translate into predicate logic. Domain: all integers ℤ. Predicates/relations: Even(x), Odd(x), Less(x,y). Use arithmetic terms like x+1 if you want.
A) “Every even integer is not odd.”
B) “There exists an integer that is less than every integer.”
Hint: A uses implication. B is about a single special integer that compares to all others; watch quantifier order.
A) ∀x (Even(x) → ¬Odd(x))
B) ∃x ∀y Less(x, y)
Negate the sentence and simplify:
∀x (P(x) → ∃y (Q(y) ∧ R(x, y)))
Hint: Push ¬ inward step by step: ¬∀ → ∃¬, ¬∃ → ∀¬, and ¬(A → B) ≡ A ∧ ¬B. Then use De Morgan on ¬(Q ∧ R).
Start:
¬∀x (P(x) → ∃y (Q(y) ∧ R(x, y)))
≡ ∃x ¬(P(x) → ∃y (Q(y) ∧ R(x, y)))
≡ ∃x (P(x) ∧ ¬∃y (Q(y) ∧ R(x, y)))
≡ ∃x (P(x) ∧ ∀y ¬(Q(y) ∧ R(x, y)))
≡ ∃x (P(x) ∧ ∀y (¬Q(y) ∨ ¬R(x, y)))
Are these two formulas equivalent? If not, give a counterexample model (a small finite domain and an interpretation of R).
1) ∀x ∃y R(x, y)
2) ∃y ∀x R(x, y)
Hint: Try a domain with two elements {a,b}. Make R relate each element to itself only.
They are not equivalent.
Counterexample: domain {a, b}, with R = {(a, a), (b, b)}, so each element relates only to itself.
Then:
1) ∀x ∃y R(x,y) is True (for x=a choose y=a; for x=b choose y=b).
2) ∃y ∀x R(x,y) is False (no single y works for both x=a and x=b).
Next nodes you’re ready for:
Background refreshers: