The exponent of a number indicates how many times the number is multiplied by itself. For example:
\( 4^{3} = 4 \cdot 4 \cdot 4 = 64 \)
Here, 3 is the exponent (or power), and 4 is the base. In words, we say "4 to the power of 3."
Here are the laws of exponents:
Law | Example |
---|---|
\( x^{1} = x \) | \( 3^{1} = 3 \) |
\( x^{0} = 1 \) | \( 4^{0} = 1 \) |
\( x^{-1} = \frac{1}{x} \) | \( 5^{-1} = \frac{1}{5} \) |
\( x^{m} \cdot x^{n} = x^{m+n} \) | \( x^{2} \cdot x^{3} = (x \cdot x) \cdot (x \cdot x \cdot x) = x \cdot x \cdot x \cdot x \cdot x = x^{5} = x^{2+3} \) |
\( \frac{x^{m}}{x^{n}} = x^{m-n} \) | \( \frac{x^{4}}{x^{2}} = \frac{x \cdot x \cdot x \cdot x}{x \cdot x} = x \cdot x \cdot \frac{x \cdot x}{x \cdot x} = x \cdot x \cdot 1 = x^{2} = x^{4-2} \) |
\( (x^{m})^{n} = x^{m \cdot n} \) | \( (x^{2})^{3} = (x \cdot x) \cdot (x \cdot x) \cdot (x \cdot x) = x \cdot x \cdot x \cdot x \cdot x \cdot x = x^{6} = x^{2 \cdot 3} \) |
\( (x \cdot y)^{n} = x^{n} \cdot y^{n} \) | \( (x \cdot y)^{2} = (x \cdot y) \cdot (x \cdot y) = x \cdot x \cdot y \cdot y = x^{2} \cdot y^{2} \) |
\( (\frac{x}{y})^{n} = \frac{x^{n}}{y^{n}} \) | \( (\frac{x}{y})^{3} = (\frac{x}{y}) \cdot (\frac{x}{y}) \cdot (\frac{x}{y}) = \frac{x \cdot x \cdot x}{y \cdot y \cdot y} = \frac{x^{3}}{y^{3}} \) |
\( x^{-n} = \frac{1}{x^{n}} \) | \( x^{-3} = (x^{-1})^{3} = (\frac{1}{x})^{3} = \frac{1}{x} \cdot \frac{1}{x} \cdot \frac{1}{x} = \frac{1 \cdot 1 \cdot 1}{x \cdot x \cdot x} = \frac{1}{x^{3}} \) |
\( x^{\frac{1}{n}} = \sqrt[n]{x} \) | \( 9^{\frac{1}{2}} = \sqrt{9} = 3 \) |
\( x^{\frac{m}{n}} = \sqrt[n]{x^m} = (\sqrt[n]{x})^m \) | \( 8^{\frac{2}{3}} = \sqrt[3]{8^2} = \sqrt[3]{64} = 4 \) |
In cryptography, we often compute exponents in modular arithmetic:
\( a^b \mod n \) means raising \(a\) to power \(b\) and then taking the remainder when divided by \(n\)
Example: \( 3^4 \mod 7 = 81 \mod 7 = 4 \)
The same exponent rules apply in modular arithmetic:
\( (a^{m} \cdot a^{n}) \mod p = a^{m+n} \mod p \)
\( (a^m)^n \mod p = a^{m \cdot n} \mod p \)
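These rules can be spot-checked with Python's built-in three-argument `pow`, which performs modular exponentiation; a small illustrative sketch with arbitrary values:

```python
# Spot-check the modular exponent rules with small, arbitrary values.
a, m, n, p = 3, 4, 5, 7

# (a^m * a^n) mod p == a^(m+n) mod p
assert (pow(a, m, p) * pow(a, n, p)) % p == pow(a, m + n, p)

# (a^m)^n mod p == a^(m*n) mod p
assert pow(pow(a, m, p), n, p) == pow(a, m * n, p)

print(pow(3, 4, 7))  # 4, matching 3^4 mod 7 = 81 mod 7 = 4
```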
These exponent rules are fundamental to many cryptographic algorithms, such as RSA and the Diffie-Hellman key exchange.
For cryptographic applications, we need to efficiently compute large exponents. The square-and-multiply algorithm lets us calculate \(x^n\) with only \(O(\log n)\) multiplications:
Example: To compute \(3^{13}\), we use binary 13 = 1101:
result = 1
bit 1: result = \( 1^{2} \times 3 = 3 \)
bit 1: result = \( 3^{2} \times 3 = 27 \)
bit 0: result = \( 27^{2} = 729 \)
bit 1: result = \( 729^{2} \times 3 = 1,594,323 \)
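A minimal left-to-right square-and-multiply sketch (the function name and structure are ours):

```python
def square_and_multiply(x, n, modulus=None):
    """Compute x**n (optionally mod `modulus`) with O(log n) multiplications."""
    result = 1
    for bit in bin(n)[2:]:        # binary digits of n, most significant first
        result *= result          # square once per bit
        if bit == "1":
            result *= x           # multiply when the bit is 1
        if modulus is not None:
            result %= modulus
    return result

print(square_and_multiply(3, 13))     # 1594323, i.e. 3^13
print(square_and_multiply(3, 13, 7))  # same result as pow(3, 13, 7)
```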
You already use modulo computation when you look at a clock and, for example, need to figure out what time it will be 3 hours after 11 o'clock, which is 2 o'clock. In mathematics, we write this as:
\( (11 + 3) \: mod \: 12 = 2 \)
Here, 12 is the modulus because we want the time as an integer between 0 and 11 (in this case, 12 o'clock is denoted by 0). In words, we say "11 plus 3 modulo 12 equals 2." The result of a modulo computation is an integer between 0 and the modulus minus 1. For example, with modulus 3, we have \( 0 \: mod \: 3 = 0 \), \( 1 \: mod \: 3 = 1 \), \( 2 \: mod \: 3 = 2 \), \( 3 \: mod \: 3 = 0 \), \( 4 \: mod \: 3 = 1 \), and so on.
For example, if we look at \( 27 \: mod \: 5 \), modulo computes how many times 5 divides into 27 and then returns the remainder, which is 2 in this case. That is, \( 27 \: mod \: 5 = 2 \). But how do we get this result?
First, we compute how many times we can multiply 5 by an integer \( x \) so that the result is as close as possible to 27 without exceeding it. In other words, we find the maximum value of \( x \) such that \( 5 \cdot x \leq 27 \). In this case, \( x = 5 \) because \( 5 \cdot 5 = 25 \leq 27 \). Then, by subtracting 25 from 27, we get the answer: \( 27 - 25 = 2 \).
If the integer is negative, for example \( -27 \: mod \: 5 \), we have to do it slightly differently, and the answer is \( -27 \: mod \: 5 = 3 \). In this case, the integer \( x \) is negative: as before, we find the maximum value of \( x \) such that \( 5 \cdot x \leq -27 \). Here, \( x = -6 \) because \( 5 \cdot (-6) = -30 \leq -27 \), whereas \( 5 \cdot (-5) = -25 \) is greater than -27. Then, by subtracting -30 from -27, we get the answer: \( -27 - (-30) = -27 + 30 = 3 \).
It is important that \( x \) is an integer such as \( -14, 3, 17 \), etc., and NOT a fraction or decimal such as \( \frac{1}{4}, \frac{-3}{7}, 2.5, 5.1 \), etc.
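Python's `%` operator follows the same convention (the result always lies between 0 and the modulus minus 1), so the examples above can be checked directly:

```python
print(27 % 5)         # 2, because 5 * 5 = 25 <= 27 and 27 - 25 = 2
print(-27 % 5)        # 3, because 5 * (-6) = -30 <= -27 and -27 - (-30) = 3
print((11 + 3) % 12)  # 2, the clock example
```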
If two integers \( a \) and \( b \) modulo the same modulus \( c \) return the same remainder \( r \), then we say that \( a \) and \( b \) are congruent modulo \( c \). That is, if \( a \: mod \: c = r \) and \( b \: mod \: c = r \), then \( a \equiv b \: (mod \: c) \). Also, notice that if the modulus \( c \) is greater than the non-negative integer \( a \), i.e., \( c > a \geq 0 \), the result will always be \( a \: mod \: c = a \).
Modular arithmetic has several important properties that make it useful in cryptography: for example, \( (a + b) \: mod \: n = ((a \: mod \: n) + (b \: mod \: n)) \: mod \: n \) and \( (a \cdot b) \: mod \: n = ((a \: mod \: n) \cdot (b \: mod \: n)) \: mod \: n \), so intermediate results may be reduced modulo \( n \) at any point during a computation.
These properties allow us to perform calculations on smaller values rather than potentially very large ones, which is essential in cryptographic algorithms.
A modular inverse of an integer \(a\) with respect to modulus \(n\) is an integer \(a^{-1}\) such that:
\(a \cdot a^{-1} \equiv 1 \: (\text{mod} \: n)\)
The modular inverse exists if and only if \(a\) and \(n\) are coprime (i.e., their greatest common divisor is 1). Finding modular inverses is crucial in cryptography, particularly in algorithms like RSA where we need to find \(d\) such that \(e \cdot d \equiv 1 \: (\text{mod} \: \phi(n))\).
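Since Python 3.8, the built-in `pow` accepts an exponent of -1 and computes the modular inverse (raising `ValueError` when \(a\) and \(n\) are not coprime); a small sketch with illustrative values:

```python
a, n = 7, 40
a_inv = pow(a, -1, n)         # modular inverse, requires gcd(a, n) == 1
assert (a * a_inv) % n == 1
print(a_inv)                  # 23, since 7 * 23 = 161 = 4 * 40 + 1

# RSA-style use: find d with e * d = 1 (mod phi_n)
e, phi_n = 17, 3120
d = pow(e, -1, phi_n)
assert (e * d) % phi_n == 1
```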
Fermat's Little Theorem: If \(p\) is a prime number and \(a\) is not divisible by \(p\), then \(a^{p-1} \equiv 1 \: (\text{mod} \: p)\).
Euler's Theorem: If \(a\) and \(n\) are coprime, then \(a^{\phi(n)} \equiv 1 \: (\text{mod} \: n)\), where \(\phi(n)\) is Euler's totient function.
These theorems are fundamental to understanding and implementing many cryptographic algorithms, especially RSA.
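Both theorems are easy to spot-check numerically; a small sketch with arbitrarily chosen values:

```python
from math import gcd

# Fermat: a^(p-1) = 1 (mod p) for a prime p and a not divisible by p
p, a = 7, 3
assert pow(a, p - 1, p) == 1

# Euler: a^phi(n) = 1 (mod n) when gcd(a, n) = 1; here n = 10 and phi(10) = 4
n, a = 10, 3
assert gcd(a, n) == 1
assert pow(a, 4, n) == 1
```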
Modular arithmetic is fundamental to many cryptographic algorithms, including RSA, Diffie-Hellman key exchange, and the identification schemes described later.
A prime number is an integer greater than 1 that can only be divided evenly by 1 and itself. "Divided evenly" means the result is an integer, not a float. For example, if you divide 13 by 3, you get the float \( \frac{13}{3} \approx 4.33 \). We see that 13 is a prime number because it can only be divided evenly by 1 and itself: \( \frac{13}{1} = 13 \) and \( \frac{13}{13} = 1 \).
If an integer is not a prime number, it is called a composite number. For example, the integer 6 is a composite number because it can be divided evenly by 1, 2, 3, and 6:
\( \frac{6}{1} = 6 \), \( \frac{6}{2} = 3 \), \( \frac{6}{3} = 2 \), and \( \frac{6}{6} = 1 \)
The term "composite" means "something made by combining things." So, a composite number is an integer made by multiplying prime numbers:
Therefore, every integer greater than 1 is either a prime number or a product of prime numbers (a composite number).
The famous Greek mathematician Euclid proved that there are infinitely many prime numbers.
Michael Rabin and Gary Miller developed an algorithm that determines whether an integer is a prime or a composite number by testing the integer with multiple bases, denoted by \( a \). The algorithm is called the Rabin-Miller primality test.
Every integer greater than 1 can be expressed as a unique product of prime numbers (up to the order of the factors). This fundamental property makes prime numbers the "building blocks" of all natural numbers and is crucial for many cryptographic applications.
Prime numbers form the foundation of many cryptographic systems, most notably RSA, whose security rests on the difficulty of factoring a product of two large primes.
Determining whether a number is prime is crucial in cryptography. Several methods exist, such as trial division, the Fermat primality test, and the Miller-Rabin test.
The Miller-Rabin test is particularly useful because it's efficient and can achieve any desired level of certainty by increasing the number of test rounds.
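A compact sketch of the Miller-Rabin test with random bases; it is probabilistic, and increasing `rounds` lowers the error probability (each round catches a composite with probability at least 3/4):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin primality test (probabilistic)."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    # write n - 1 as 2^s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)      # random base
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                     # a witnesses that n is composite
    return True                              # probably prime

print(is_probable_prime(13), is_probable_prime(15))  # True False
```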
The greatest common divisor (GCD) of two or more integers is the largest positive integer that divides each of the given integers without a remainder. It is denoted as \(\gcd(a, b)\) or sometimes as \((a, b)\).
Formally, for two integers \(a\) and \(b\), not both zero, \(\gcd(a,b)\) is the largest positive integer \(d\) such that \(d|a\) and \(d|b\) (where \(d|a\) means "\(d\) divides \(a\)" or "\(a\) is divisible by \(d\)").
Before we describe what the greatest common divisor of two integers is, we first define what we mean by a divisor. In this context, a divisor of an integer \( x \) is any integer that divides \( x \) evenly, meaning the result is not a fraction. For example, if you divide \( x=12 \) by 5, you get the fraction \( \frac{12}{5} = 2.4 \), so 5 is not a divisor of \( x=12 \). For \( x=12 \), the divisors are 1, 2, 3, 4, 6, and 12 because \( \frac{12}{1} = 12 \), \( \frac{12}{2} = 6 \), \( \frac{12}{3} = 4 \), \( \frac{12}{4} = 3 \), \( \frac{12}{6} = 2 \), and \( \frac{12}{12} = 1 \).
Similarly, the divisors of 16 are 1, 2, 4, 8, and 16 because \( \frac{16}{1} = 16 \), \( \frac{16}{2} = 8 \), \( \frac{16}{4} = 4 \), \( \frac{16}{8} = 2 \), and \( \frac{16}{16}=1 \).
The greatest common divisor of 12 and 16 is therefore 4, because it is the largest integer among their common divisors. In mathematics, we write this as \( \gcd(12, 16) = 4 \).
Two integers whose greatest common divisor is 1 are called relatively prime numbers or co-primes. For example, 15 and 28 are relatively prime because \( \gcd(15, 28) = 1 \) (note that 28 is not a prime number).
If one of the two integers is a prime number \( p \) and \( p \) does not divide the other integer \( a \), the greatest common divisor will always be 1, i.e., \( \gcd(a, p) = 1 \); if \( p \) does divide \( a \), then \( \gcd(a, p) = p \).
One method to compute the greatest common divisor of two integers is by using the Euclidean algorithm, developed by the famous Greek mathematician Euclid. See "The extended Euclidean algorithm" for more information about how to compute the greatest common divisor of two integers.
The Euclidean algorithm is an efficient method to find the GCD of two integers. It works by repeatedly applying the division algorithm:
Example: Find \(\gcd(48, 18)\):
\( 48 = 2 \cdot 18 + 12 \)
\( 18 = 1 \cdot 12 + 6 \)
\( 12 = 2 \cdot 6 + 0 \)
Since the remainder is now 0, \(\gcd(48, 18) = 6\).
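A minimal iterative sketch of the Euclidean algorithm:

```python
def gcd(a, b):
    """Greatest common divisor via the Euclidean algorithm."""
    while b != 0:
        a, b = b, a % b   # replace (a, b) with (b, a mod b)
    return a

print(gcd(48, 18))  # 6
print(gcd(12, 16))  # 4
print(gcd(15, 28))  # 1, so 15 and 28 are relatively prime
```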
The GCD has several important applications in cryptography: for example, checking that two integers are relatively prime (as required when choosing the RSA public exponent \( e \) with \( \gcd(e, \phi(n)) = 1 \)) and computing modular inverses with the extended Euclidean algorithm.
The extended Euclidean algorithm is an extension of the Euclidean algorithm, which only returns the greatest common divisor of two integers. Given two integers \( a \) and \( b \), the extended Euclidean algorithm returns \( \gcd(a, b) \) together with two integers \( \lambda \) and \( \mu \) such that:
\( a \cdot \lambda + b \cdot \mu = \gcd(a, b) \)
Here, \( \lambda \) and \( \mu \) are called the Bézout coefficients for \( a \) and \( b \). If \( a \) and \( b \) are relatively prime, i.e., \( \gcd(a, b) = 1 \), then:
\( a \cdot \lambda + b \cdot \mu = 1 \)
In this case, \( \lambda \; mod \; b \) is the inverse of \( a \), denoted \( a^{-1} = \lambda \; mod \; b \), and \( \mu \: mod \: a \) is the inverse of \( b \), denoted \( b^{-1} = \mu \: mod \: a \) (see "Modulo computation" for more information about the \( mod \) operator). One useful property of an integer and its inverse is that \( a \cdot a^{-1} \; mod \; b = 1 \) and \( b \cdot b^{-1} \; mod \; a = 1 \).
You can easily compute \( \gcd(a, b) \), \( \lambda \), and \( \mu \) for example with \( a=5 \) and \( b=39 \) using a simple table. First, let us create a table with three columns (we do not yet know how many rows there will be). Let us denote the entry in the first row and first column as [1,1], the entry in the first row and second column as [1,2], the entry in the second row and first column as [2,1], and so on.
Next, we write \( b=39 \) in entry [1,1] and \( a=5 \) in entry [2,1]. Then we try to find the largest integer \( q_{1} \) such that \( q_{1} \cdot a \leq b \). We have \( q_{1}=7 \), which we write in entry [2,2], because \( 7 \cdot 5 = 35 \leq 39 \), and a remainder of \( r_{1}=4 \), which we write in entry [3,1].
Again, we try to find the largest integer \( q_{2} \) such that \( q_{2} \cdot r_{1} \leq a \). We have \( q_{2}=1 \), which we write in entry [3,2], because \( 1 \cdot 4 = 4 \leq 5 \), and a remainder of \( r_{2}=1 \), which we write in entry [4,1]. Notice that we are repeating the same process as before, just with the numbers in the next row.
The next computation returns a remainder of \( r_{3} = 0 \) because \( q_{3} \cdot r_{2} = 4 \cdot 1 = 4 \leq 4 = r_{1} \). We have now computed \( \gcd(5, 39)=r_{2}=1 \) since \( r_{3} = 0 \). Because 5 and 39 are relatively prime, we know that \( \lambda \) and \( \mu \) exist, and we can start using the last column.
First, we write \( x_{1}=0 \) in entry [4,3] and \( x_{2}=1 \) in entry [3,3]. Then we write \( x_{3}=q_{2} \cdot x_{2} + x_{1} = 1 \cdot 1 + 0 = 1 \) in entry [2,3]. For entry [1,3], we compute as before, just with numbers from the row above, i.e., \( x_{4}=q_{1} \cdot x_{3} + x_{2} = 7 \cdot 1 + 1 = 8 \).
Finally, we have that \( a \cdot x_{4} \pm b \cdot x_{3} = r_{2} \), where we need to decide whether it should be plus or minus between the two terms. Because \( a \cdot x_{4} = 5 \cdot 8 = 40 \), \( b \cdot x_{3} = 39 \cdot 1 \), and \( 40 \geq 39 \), we have \( 5 \cdot 8 - 39 \cdot 1 = 1 \) (which is the same as \( 5 \cdot 8 + 39 \cdot (-1) = 1 \)), so the Bézout coefficients are \( \lambda=8 \) and \( \mu=-1 \). Notice that \( a^{-1} = \lambda \; mod \; b = 8 \; mod \; 39 = 8\) and \( b^{-1} = \mu \; mod \; a = -1 \: mod \: 5 = 4\), where \( a \cdot a^{-1} \; mod \; b = 5 \cdot 8 \; mod \; 39 = 1 \) and \( b \cdot b^{-1} \; mod \; a = 39 \cdot 4 \; mod \; 5 = 1 \).
The table for computing \( 5 \cdot \lambda + 39 \cdot \mu = \gcd(5, 39) \) is:
Remainders (\( b \), \( a \), \( r_{i} \)) | Quotients \( q_{i} \) | Coefficients \( x_{i} \) |
---|---|---|
\( b=39 \) | | \( x_{4}=8 \) |
\( a=5 \) | \( q_{1}=7 \) | \( x_{3}=1 \) |
\( r_{1}=4 \) | \( q_{2}=1 \) | \( x_{2}=1 \) |
\( r_{2}=1 \) | \( q_{3}=4 \) | \( x_{1}=0 \) |
\( r_{3}=0 \) | | |
In Python, the recursive algorithm can be written as:

```python
def extended_gcd(a, b):
    """Return (d, x, y) with d = gcd(a, b) and d = a*x + b*y."""
    if b == 0:
        return (a, 1, 0)
    d, x1, y1 = extended_gcd(b, a % b)
    return (d, y1, x1 - (a // b) * y1)
```
The function returns a tuple (d, x, y) such that d = gcd(a, b) and d = ax + by.
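Using the function above with the worked example \( a=5 \) and \( b=39 \) from this section reproduces the Bézout coefficients:

```python
d, lam, mu = extended_gcd(5, 39)
print(d, lam, mu)   # 1 8 -1, so 5*8 + 39*(-1) = 1
print(lam % 39)     # 8, the inverse of 5 modulo 39
print(mu % 5)       # 4, the inverse of 39 modulo 5
```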
While the standard Euclidean algorithm computes only the GCD through successive divisions, the extended version keeps track of the coefficients at each step. This "extension" allows us to express the GCD as a linear combination of the original numbers, which is crucial for finding modular inverses.
The extended Euclidean algorithm has the same time complexity as the standard Euclidean algorithm: O(log min(a,b)). This makes it highly efficient even for very large integers, which is crucial for its application in cryptographic systems.
In abstract algebra, a ring is a set with two operations (usually addition and multiplication) where addition forms an abelian group and multiplication is associative and distributes over addition. A group is a set with a single operation that satisfies closure, associativity, identity, and inverse properties.
The set of integers \( \{ \dots, -2, -1, 0, 1, 2, \dots \} \) is denoted by the symbol \( \mathbb{Z} \), i.e., \( \mathbb{Z} = \{ \dots, -2, -1, 0, 1, 2, \dots \} \), and is called the ring of integers (under addition it is also an additive group). The group of integers modulo \( n \) is a subset of \( \mathbb{Z} \) and is denoted by \( \mathbb{Z}/n\mathbb{Z} \), but we use the shorthand \( \mathbb{Z}_{n} \). This subset contains the following elements (because we compute modulo \( n \)):
\( \mathbb{Z}_{n} = \{ 0, 1, 2, \dots, n - 1 \}\)
Notice that whenever we perform addition or multiplication in \( \mathbb{Z}_{n} \), we always compute the result modulo \( n \) to obtain an integer in \( \mathbb{Z}_{n} \). To illustrate this, let us look at \( n = 5 \):
\( \mathbb{Z}_{5} = \{ 0, 1, 2, 3, 4 \}\)
When adding \( 3 \) and \( 4 \) in \( \mathbb{Z}_{5} \), we do the following: \( (3 + 4) \: mod \: 5 = 7 \: mod \: 5 = 2 \). Likewise, when multiplying \( 3 \) by \( 4 \) in \( \mathbb{Z}_{5} \), we have: \( (3 \cdot 4) \: mod \: 5 = 12 \: mod \: 5 = 2 \).
An integer \( a \) in \( \mathbb{Z}_{n} \) has an inverse if the greatest common divisor of \( a \) and \( n \) is 1, i.e., \( \gcd(a, n) = 1 \) (see "The extended Euclidean algorithm" for more information). The integer \( a \) is then called a unit. The set of units (all integers with an inverse in \( \mathbb{Z}_{n} \)) is denoted by \( (\mathbb{Z}/n\mathbb{Z})^{*} \). Again, we use the shorthand \( \mathbb{Z}_{n}^{*} \). If \( a_{1} \) and \( a_{2} \) are units, then their product \( (a_{1} \cdot a_{2}) \: mod \: n \) is also always a unit (i.e., \( a_{1} \cdot a_{2} \) is in the group \( \mathbb{Z}_{n}^{*} \)), but the sum \( (a_{1} + a_{2}) \: mod \: n \) may NOT be a unit (i.e., \( a_{1} + a_{2} \) is in \( \mathbb{Z}_{n} \) but may not be in \( \mathbb{Z}_{n}^{*} \)). We see the difference in the two sets \( \mathbb{Z}_{8} \) and \( \mathbb{Z}_{8}^{*} \): \( \mathbb{Z}_{8} = \{ 0, 1, 2, 3, 4, 5, 6, 7 \} \), whereas \( \mathbb{Z}_{8}^{*} = \{ 1, 3, 5, 7 \} \).
If we choose \( n \) to be a prime number \( p \), then for all integers \( a \) except 0 in \( \mathbb{Z}_{p} \), we have \( \gcd(a, p) = 1 \), which means that \( \mathbb{Z}_{p}^{*} \) contains all integers from \( \mathbb{Z}_{p} \) except 0, i.e.:
\( \mathbb{Z}_{p}^{*} = \{ 1, 2, 3, \dots, p - 1 \}\)
For example, for \( p=7 \), the only difference between the two sets \( \mathbb{Z}_{7} \) and \( \mathbb{Z}_{7}^{*} \) is the integer 0: \( \mathbb{Z}_{7} = \{ 0, 1, 2, 3, 4, 5, 6 \} \) and \( \mathbb{Z}_{7}^{*} = \{ 1, 2, 3, 4, 5, 6 \} \).
The number of elements in \( \mathbb{Z}_{n}^{*} \) is denoted by \( \phi(n) \), named after the famous Swiss mathematician Euler, and is called Euler's phi function. As we saw previously, if \( n \) is a prime number \( p \), then \( \phi(p) = p-1 \). The number of elements in a group \( G \) is also called the order of \( G \) and is written as \( \left| G \right| \). For example, the order of \( \mathbb{Z}_{n} \) is \( \left| \mathbb{Z}_{n} \right| = n \), and the order of \( \mathbb{Z}_{n}^{*} \) is \( \left| \mathbb{Z}_{n}^{*} \right| = \phi(n) \).
Euler's phi function \(\phi(n)\) can be computed efficiently when the prime factorization of \(n\) is known:
For a prime power \(p^k\): \(\phi(p^k) = p^k - p^{k-1} = p^k(1 - \frac{1}{p})\)
For a product of prime powers: \(\phi(n) = n \prod_{p|n} (1 - \frac{1}{p})\)
For example, \(\phi(36) = \phi(2^2 \cdot 3^2) = 36 \cdot (1 - \frac{1}{2}) \cdot (1 - \frac{1}{3}) = 36 \cdot \frac{1}{2} \cdot \frac{2}{3} = 12\)
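A small sketch that computes \(\phi(n)\) from a trial-division factorization; this is fine for small \(n\), and the whole point of RSA is that it becomes infeasible when \(n\) is the product of two large unknown primes:

```python
def euler_phi(n):
    """Euler's totient via the product formula over the prime factors of n."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p   # multiply result by (1 - 1/p)
        p += 1
    if m > 1:                        # leftover prime factor
        result -= result // m
    return result

print(euler_phi(36))  # 12
print(euler_phi(7))   # 6, since phi(p) = p - 1 for a prime p
```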
If, for example, we choose the elements \( x \), \( a \), and \( b \) from the group \( \mathbb{Z}_{p}^{*} \) and want to compute \( x^{a + b} \: mod \: p \), then in the exponent we actually compute \( a + b \) modulo the order of the group, i.e., \( a + b \: mod \: (p-1) \) because \( \left| \mathbb{Z}_{p}^{*} \right| = \phi(p) = p-1 \). So, what we actually compute is \( x^{a + b \: mod \: (p-1)} \: mod \: p \). The same is true if we had chosen one of the other groups: we always compute modulo the order of the group in the exponent. Therefore, next time you look at an equation from a cryptosystem and wonder why they suddenly compute, for example, modulo \( p-1 \) instead of modulo \( p \), it is because the equation is used in the exponent of some integer.
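This can be checked directly: reducing the exponent modulo the group order \( p-1 \) does not change the result in \( \mathbb{Z}_{p}^{*} \). A sketch with arbitrary small values:

```python
p = 11                     # prime modulus, so the group order is p - 1 = 10
x, a, b = 7, 8, 9          # x in Z_p^*, arbitrary exponents

full = pow(x, a + b, p)                  # x^(a+b) mod p
reduced = pow(x, (a + b) % (p - 1), p)   # exponent reduced modulo p - 1
assert full == reduced
print(full, reduced)
```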
Units are particularly important in cryptography because they have multiplicative inverses, allowing operations like division in modular arithmetic. The existence of these inverses is crucial for many cryptographic algorithms, including RSA, where we need to find values \(e\) and \(d\) such that \(e \cdot d \equiv 1 \pmod{\phi(n)}\).
Fermat's Little Theorem: If \(p\) is a prime number and \(a\) is not divisible by \(p\), then \(a^{p-1} \equiv 1 \pmod{p}\).
Euler's Theorem: For any coprime integers \(a\) and \(n\), \(a^{\phi(n)} \equiv 1 \pmod{n}\). This generalizes Fermat's Little Theorem and forms the mathematical foundation of the RSA cryptosystem.
A generator (or primitive root) of a group is an element whose powers can produce every element in the group. In other words, if g is a generator of a group G, then by computing all possible powers of g, you will eventually generate all elements in G.
In the following, we use the group \( \mathbb{Z}_{p} \) as an example for simplicity, but any group could be chosen.
There exists an element \( g \) in the group of integers \( \mathbb{Z}_{p} \) (see "The group of integers and units" for more information), where \( p \) is a prime number, whose powers generate every element in the set of units \( \mathbb{Z}_{p}^{*} \). That is, there exists an integer \( g \) in \( \mathbb{Z}_{p} \) such that:
\( \mathbb{Z}_{p}^{*} = \{ g^{1} \: mod \: p, g^{2} \: mod \: p, g^{3} \: mod \: p, \dots, g^{p-1} \: mod \: p \} \)
where \( g^{p-1} \: mod \: p = 1 \) because \( p-1 \: mod \: (p-1) = 0 \) (remember that the order of \( \mathbb{Z}_{p}^{*} \) is \( p-1 \), and we compute modulo the order of the group in the exponent of \( g \) because \( g \) belongs to the group \( \mathbb{Z}_{p}^{*} \)), and \( g^{0} = 1 \). Such a \( g \) is called a generator or a primitive root of the group, and it's denoted as \( \left< g \right> = \mathbb{Z}_{p}^{*} \). We also say that the order of \( g \) is \( ord(g) = p-1 \) because \( g \) generates the group \( \mathbb{Z}_{p}^{*} \) with \( p-1 \) elements.
To clarify, let's look at an example with the prime number 5. Recall that \( \mathbb{Z}_{5}^{*} = \{ 1, 2, 3, 4 \} \) (see "The group of integers and units" for the example). The group \( \mathbb{Z}_{5}^{*} \) has 2 as a generator because the powers of 2 generate every element in the group, i.e., \( \left< 2 \right> = \mathbb{Z}_{5}^{*} \): \( 2^{1} \: mod \: 5 = 2 \), \( 2^{2} \: mod \: 5 = 4 \), \( 2^{3} \: mod \: 5 = 3 \), and \( 2^{4} \: mod \: 5 = 1 \).
The last power is 4 because \( p - 1 = 5 - 1 = 4 \). On the other hand, 4 is NOT a generator of \( \mathbb{Z}_{5}^{*} \) because the powers of 4 only generate the integers 1 and 4: \( 4^{1} \: mod \: 5 = 4 \), \( 4^{2} \: mod \: 5 = 1 \), \( 4^{3} \: mod \: 5 = 4 \), and \( 4^{4} \: mod \: 5 = 1 \).
If you don't know the factorization of the integer \( p-1 \), then the only way to find a generator is to perform the above computation, i.e., check that it generates every element in the group \( \mathbb{Z}_{p}^{*} \). However, if you know the factorization of \( p-1 \), then for every prime number \( q \) that evenly divides \( p-1 \), we check that \( g^{(p-1)/q} \: mod \: p \neq 1 \) for a random integer \( g \) in the group \( \mathbb{Z}_{p} \). If this is the case, then \( g \) is a generator of \( \mathbb{Z}_{p}^{*} \).
Because the prime factorization problem is hard (the problem of computing the factorization of the integer \( p-1 \)), we use a so-called safe prime \( p \). A safe prime \( p \) is of the form \( p = 2 \cdot q + 1 \) where \( q \) is a prime number. Now, the factorization of \( p-1 \) is always only \( 2 \) and \( q \). For example, if we compute the safe prime \( p = 2 \cdot 5 + 1 = 11 \) with the prime number \( q = 5 \), we see that \( g = 7 \) is a generator of the group \( \mathbb{Z}_{11}^{*} \) because \( 7^{10/2} \: mod \: 11 = 7^{5} \: mod \: 11 = 10 \neq 1 \) and \( 7^{10/5} \: mod \: 11 = 7^{2} \: mod \: 11 = 5 \neq 1 \),
where 2 and \( q=5 \) are the only prime numbers that divide \( p-1=11-1=10 \) evenly. For the same reason, \( g = 5 \) is not a generator of \( \mathbb{Z}_{11}^{*} \) because \( 5^{10/2} \: mod \: 11 = 5^{5} \: mod \: 11 = 1 \).
The group \( \mathbb{Z}_{n}^{*} \), where \( n \) is a composite number, may not have a generator, but if \( n \) is a prime number \( p \), then the group \( \mathbb{Z}_{p}^{*} \) has at least one generator.
For a prime p, the number of generators (primitive roots) in \(\mathbb{Z}_p^*\) is exactly \(\phi(\phi(p)) = \phi(p-1)\), where \(\phi\) is Euler's totient function. This means that generators are relatively common in prime-order groups.
For example, in \(\mathbb{Z}_{11}^*\), there are \(\phi(10) = 4\) generators: 2, 6, 7, and 8.
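A brute-force sketch that lists the generators of \(\mathbb{Z}_{11}^{*}\) and confirms there are \(\phi(10) = 4\) of them:

```python
p = 11

def is_generator(g, p):
    """g generates Z_p^* if its powers g^1, ..., g^(p-1) hit every element."""
    return {pow(g, k, p) for k in range(1, p)} == set(range(1, p))

generators = [g for g in range(1, p) if is_generator(g, p)]
print(generators)   # [2, 6, 7, 8]
```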
Generators are fundamental to many cryptographic protocols, such as Diffie-Hellman key exchange, ElGamal encryption, and the Schnorr protocols described later.
When an element g is not a generator of the entire group, it generates a subgroup. The order of this subgroup will be the smallest positive integer k such that \(g^k = 1\).
For example, in \(\mathbb{Z}_5^*\), the element 4 generates the subgroup \(\{1, 4\}\) of order 2, while the element 2 generates the entire group of order 4.
A group that can be generated by a single element is called a cyclic group. All groups \(\mathbb{Z}_p^*\) where p is prime are cyclic groups.
Let \( g \) be a generator of the group \( G \). Given the values \( g \) and \( H \), the discrete logarithm (DL) problem is to compute the exponent \( a \) in the following equation, which is considered a hard problem:
\( g^{a} = H \)
The exponent \( a \) is also called the discrete logarithm of \( H \) to the base \( g \).
For example, if we use the group \( \mathbb{Z}_{p}^{*} \), where \( p \) is a large prime number and \( g \) is a generator of the group, then given \( g \), \( p \), and \( H \), it is hard to compute \( a \) such that the following equation holds:
\( g^{a} \: mod \: p = H \)
In addition to the DL problem, there are two related problems: the Diffie-Hellman (DH) problem and the Decisional Diffie-Hellman (DDH) problem. Given the values \( g \), \( g^{a} \), and \( g^{b} \), the DH problem is to compute the exponent \( a \cdot b \) in \( g^{a \cdot b} \).
Similarly, given the values \( g \), \( g^{a} \), \( g^{b} \), and \( g^{c} \), the DDH problem is to decide whether \( c = a \cdot b \) or whether \( c \) is a random integer.
You may have noticed that if we can solve the DL problem, that is, compute \( a \) in \( g^{a} = H \), then we can also solve the DH problem: first compute \( a \) from \( g^{a} \), then \( b \) from \( g^{b} \), and finally calculate \( a \cdot b \). This also implies that we can solve the DDH problem: first compute \( a \cdot b \) as described above, then compute \( c \) from \( g^{c} \), and finally check whether \( c = a \cdot b \).
The discrete logarithm problem is believed to be computationally hard when proper parameters are chosen. For \( \mathbb{Z}_{p}^{*} \), the problem becomes harder as the size of the prime \( p \) increases, especially when \( p-1 \) has at least one large prime factor. The best known classical algorithms (Number Field Sieve) run in sub-exponential time, making the problem practically intractable for large enough parameters.
Consider a small example in \( \mathbb{Z}_{11}^{*} \) with generator \( g = 2 \): the powers \( 2^{1}, 2^{2}, \dots, 2^{10} \) modulo 11 are \( 2, 4, 8, 5, 10, 9, 7, 3, 6, 1 \).
If given \( H = 9 \), finding \( a \) such that \( 2^a \mod 11 = 9 \) means finding \( a = 6 \). For small groups like this, we can compute all powers, but for cryptographically secure groups, this approach becomes computationally infeasible.
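For such a toy group, the discrete logarithm can be found by exhaustive search; a minimal sketch:

```python
def brute_force_dlog(g, H, p):
    """Find a with g^a mod p == H by trying every exponent (feasible only for tiny groups)."""
    for a in range(1, p):
        if pow(g, a, p) == H:
            return a
    return None

print(brute_force_dlog(2, 9, 11))  # 6, since 2^6 mod 11 = 64 mod 11 = 9
```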
The hardness of these problems forms the security foundation for many cryptographic protocols, including Diffie-Hellman key exchange, ElGamal encryption, and Schnorr signatures.
While the examples above use the multiplicative group of integers modulo a prime, these problems are also defined in other groups, such as the groups of points on elliptic curves used in elliptic-curve cryptography.
Several algorithms exist for solving the discrete logarithm problem, such as baby-step giant-step, Pollard's rho, and index calculus, but none of them is efficient enough to threaten well-chosen parameters.
Let \( n \) be the product of two large prime numbers \( p \) and \( q \), i.e., \( n = p \cdot q \). Then, if you only know the value of \( n \), it is hard to compute \( p \) and \( q \) from \( n \).
The problem of computing \( p \) and \( q \) is called the prime factorization problem.
What makes the prime factorization problem difficult is that while multiplication is easy (computing \(n = p \cdot q\) is straightforward), the reverse operation of factoring has no known efficient classical algorithm: the running time of the best known methods grows sub-exponentially, but still far faster than any polynomial, in the number of bits of \(n\).
For small numbers, factorization is trivial: If \(n = 15\), we can easily find \(p = 3\) and \(q = 5\).
However, when the primes are large (hundreds of digits), factoring becomes practically impossible with current classical computing methods. For example, factoring a 2048-bit RSA modulus would take billions of years using the best-known algorithms.
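Trial division illustrates why only large moduli offer protection; a minimal sketch whose running time grows roughly with \(\sqrt{n}\), which is hopeless for a 2048-bit \(n\):

```python
def smallest_factor(n):
    """Return the smallest non-trivial factor of n, or None if n is prime."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return None

print(smallest_factor(15))     # 3, so 15 = 3 * 5
print(smallest_factor(55357))  # 197, the small modulus used in the Fiat-Shamir example later
```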
The security of the RSA cryptosystem fundamentally depends on the hardness of the prime factorization problem: an attacker who could factor the modulus \( n \) into \( p \) and \( q \) could compute \( \phi(n) \) and from it the private key.
In 1994, Peter Shor developed a quantum algorithm that can factor integers in polynomial time. If large-scale quantum computers become available, they could break RSA encryption by efficiently solving the prime factorization problem. This threat has motivated research into post-quantum cryptography algorithms that aren't vulnerable to quantum attacks.
As of 2023, the largest RSA number factored using general methods is RSA-250 (829 bits), which was factored in February 2020. Current cryptographic recommendations suggest using RSA keys of at least 2048 bits for security through the next decade.
Identification schemes allow one party to prove their identity to another securely. This page covers three important approaches: the Fiat-Shamir zero-knowledge proof, the Pedersen commitment scheme, and Schnorr's sigma protocol.
The goal of an identification scheme is to allow someone's identity to be confirmed; in our case, Peggy wants to prove to Victor that she is really Peggy. If Peggy and Victor meet face to face, Peggy might use her driver's license to identify herself, but the problem becomes more difficult when Peggy and Victor are communicating over an insecure channel that is being eavesdropped on by their adversary, Eve. An identification scheme should prevent Eve from impersonating Peggy. Even Victor should not be able to impersonate Peggy after Peggy has identified herself to him.
A zero-knowledge proof, such as the Fiat-Shamir zero-knowledge proof, is one way of performing identification and typically involves multiple rounds of challenge and response messages. In this process, Victor sends Peggy a challenge, and Peggy responds to the challenge using knowledge that only she possesses, thereby identifying herself to Victor. You can think of it as Peggy using her secret key to create the response and Victor verifying the response with her public key. The reason it is called a zero-knowledge proof is because Victor learns nothing new (zero knowledge) from the exchange of messages between him and Peggy (the challenge and response messages). In other words, Peggy convinces Victor that a certain claim is true without giving Victor any information he could use to convince others that the claim is true.
An identification scheme based on a zero-knowledge proof must satisfy the following three conditions: completeness (an honest Peggy can always convince an honest Victor), soundness (a cheating prover who does not know the secret can convince Victor only with negligible probability), and zero-knowledge (Victor learns nothing from the interaction beyond the fact that the claim is true).
So, why not just use a public key system such as RSA or ElGamal as an identification scheme with the zero-knowledge property, where Peggy wants to convince Victor that she is the owner of the secret key corresponding to the public key that Victor possesses? In this case, Victor could encrypt a message \( m \) with Peggy's public key and send the ciphertext to Peggy (the challenge). If Victor then receives the correct plaintext from Peggy (the response), he knows that Peggy has the secret key (because only the corresponding secret key to the public key can correctly decrypt the ciphertext).
If Victor is honest and chooses the message \( m \) himself, he learns nothing new from the exchange of messages, and the protocol would work as desired. However, if Victor is malicious, he could simply choose a previously seen ciphertext from Peggy and send it to her as the challenge. Now, when Victor receives the plaintext from Peggy (the response), he gains knowledge about the content of this plaintext, which he did not have before.
This problem can be solved if Peggy and Victor also use a commitment scheme such as the Pedersen commitment scheme, where Peggy makes a commitment to a value that is hidden from Victor until she chooses to reveal it by opening the commitment. In the above example, when Peggy receives the ciphertext from Victor (the challenge), she first decrypts the ciphertext, resulting in a plaintext \( m' \). She then makes a commitment to \( m' \) and sends the commitment to Victor instead of \( m' \) itself. Victor then sends the original plaintext \( m \) to Peggy, who checks that \( m' = m \). Only if this is the case does she know that Victor is honest, and she opens the commitment by sending \( m' \) (and the randomness used in the commitment) to Victor, who checks that the commitment is correct. If the commitment is correct, Peggy has convinced Victor that she is the owner of the secret key corresponding to the public key that Victor possesses. Otherwise, if \( m' \neq m \) in Peggy's last check, then Victor has tried to gain knowledge about the plaintext of the ciphertext.
Another classic example of a commitment scheme is the "coin flipping by telephone" problem: In this scenario, Peggy and Victor each flip a coin and need to agree on who the winner is. Peggy wins if both coins show the same side, and Victor wins if they are different. The obvious problem is that if Peggy tells the result of her coin first, Victor can respond in such a way that he wins, no matter what the actual result of his coin is. At the same time, Peggy has no way of verifying that Victor is telling the truth, because they are communicating over the telephone.
One solution to this problem is for Peggy to first flip her coin and then make a commitment to the result, which she sends to Victor. The commitment has two properties: first, the result of Peggy's coin is hidden from Victor, and second, the commitment prevents Peggy from changing her answer later on. Victor then flips his coin and sends the result to Peggy. When Peggy receives Victor's result, she opens the commitment, i.e., she reveals the result of her coin to Victor. Now Peggy and Victor can agree on the winner of the game in an honest way.
A commitment scheme comes in two different flavors: either it is unconditionally binding and computationally hiding, or it is computationally binding and unconditionally hiding. The four properties are defined as follows: unconditionally hiding means that not even an adversary with unlimited computing power can learn anything about the committed value before it is opened; computationally hiding means that a computationally bounded adversary cannot learn anything about it; unconditionally binding means that not even an adversary with unlimited computing power can open the commitment to a different value; and computationally binding means that a computationally bounded adversary cannot open it to a different value.
If the commitment scheme is unconditionally binding and computationally hiding, then Peggy generates the secret and public keys and sends the public key to Victor, who verifies that it was generated correctly. Otherwise, if the commitment scheme is computationally binding and unconditionally hiding, Victor generates the keys and sends the public key to Peggy, who verifies that it was generated correctly.
So, it might seem that an unconditionally binding and unconditionally hiding commitment scheme would be preferable. Why don't we create such a scheme? The answer is that it is impossible: Assume that we have a commitment scheme with both of these properties. Because it is unconditionally hiding, there must exist two different values \( m \) and \( m' \) that generate the same commitment. If this were not the case, an adversary (or Peggy and Victor) could simply compute the commitment for every possible value (remember that "unconditional" implies the adversary has infinite computing power/time) and compare them to the commitment, thereby finding the committed value. But the unconditional hiding property contradicts the unconditional binding property, because we now have two different values \( m \) and \( m' \) that generate the same commitment.
As described previously, an identification scheme based on a zero-knowledge proof performs multiple rounds of challenge and response messages, which is not always desirable. Therefore, we can use sigma protocols such as Schnorr's sigma protocol that require only one round of challenge and response messages.
In a sigma protocol, Peggy knows a witness \( w \) for a problem \( x \) in some relation \( R \). You can think of \( w \) as the solution to the problem \( x \), where the relation \( R \) is the set of all problems and their solutions. For example, \( R \) could be the relation of all discrete logarithm problems, where \( w \) is the discrete logarithm of \( x \), i.e., \( x = g^{w} \) for some \( g \). This is also denoted as \( (x,w) \in R \). A sigma protocol must satisfy the following four conditions: it consists of exactly three messages \( (a, e, z) \), called a conversation, where Peggy sends the first message \( a \), Victor sends a random challenge \( e \), and Peggy responds with \( z \); it satisfies completeness; it satisfies special soundness; and it satisfies special honest-verifier zero-knowledge (HVZK).
Some sigma protocols possess one or both of the following properties:
Protocol | Type | Security Property | Challenge-Response Rounds | Key Use Case |
---|---|---|---|---|
Fiat-Shamir | Zero-Knowledge Proof | Unconditionally sound, computationally zero-knowledge | Multiple | Identity verification without revealing secret information |
Pedersen | Commitment Scheme | Unconditionally hiding, computationally binding | Two-phase: commit & reveal | Secure commitments to values that must be revealed later |
Schnorr | Sigma Protocol | Special honest-verifier zero-knowledge | Single | Efficient authentication with minimal interaction |
The Fiat-Shamir zero-knowledge proof is a protocol proposed by Amos Fiat and Adi Shamir in 1986.
Peggy first chooses two large prime numbers \( p \) and \( q \), and computes \( n = p \cdot q \). She then selects a secret value \( x \) and computes the public value \( y = x^{2} \: mod \: n \). Peggy claims that \( y \) is a square modulo \( n \), and this is what she wants to convince Victor of, without revealing any information (specifically, the value of \( x \)) that would allow Victor to convince others that \( y \) is a square modulo \( n \). You can think of \( x \) as Peggy's secret key and \( y \) as her public key. Peggy's goal is to convince Victor that she is the owner of the secret key \( x \) corresponding to the public key \( y \) that Victor possesses.
In this protocol, a single round of challenge and response messages is not enough to convince Victor. Therefore, the following steps are performed \( k \) times, where \( k \) is chosen so that if Peggy does not know the secret value \( x \) and simply guesses, the probability that she guesses correctly every time is very small.
In the protocol, Peggy first chooses a random value \( r \) between \( 1 \) and \( n - 1 \), and then computes \( a = r^{2} \: mod \: n \), which she sends to Victor. Notice that \( a \) is a square modulo \( n \). Victor then chooses a random challenge \( b \), where either \( b=0 \) or \( b=1 \), and sends it to Peggy. Peggy responds by computing \( z = x^{b} \cdot r \: mod \: n \), where she only uses the secret value \( x \) when \( b = 1 \).
Finally, Victor verifies the response by checking that \( v = z^{2} \: mod \: n \) is equal to \( v' = y^{b} \cdot a \: mod \: n \) (we call these values \( v \) and \( v' \) rather than reusing the letter \( p \), which already denotes a prime factor of \( n \)). Only if \( v = v' \) does a new round start, where Victor sends a new challenge \( b' \) to Peggy; otherwise, the protocol stops.
We now have to check that the protocol satisfies the three conditions: completeness, soundness, and zero-knowledge.
The last check is always true when \( y \) is a square modulo \( n \) because:
\( \eqalign{ v &= z^{2} \: mod \: n &&(z = x^{b} \cdot r) \\ &= (x^{b} \cdot r)^{2} \: mod \: n &&(\mbox{exponent rule}) \\ &= x^{b \cdot 2} \cdot r^{2} \: mod \: n &&(\mbox{exponent rule}) \\ &= (x^{2})^{b} \cdot r^{2} \: mod \: n &&(y = x^{2} \: \mbox{and} \: a = r^{2}) \\ &= y^{b} \cdot a \: mod \: n = v' } \)
We assume for contradiction that \( y \) is not a square modulo \( n \), and yet Peggy is able, in a single round, to provide valid responses for both \( b = 0 \) and \( b = 1 \). That is, Victor's final check is always satisfied even when \( y \) is not a square modulo \( n \). For \( b = 0 \), let Peggy's response be \( z_{0} \), so \( z_{0}^{2} \: mod \: n = a \: mod \: n \). For \( b = 1 \), let her response be \( z_{1} \), so \( z_{1}^{2} \: mod \: n = y \cdot a \: mod \: n \). From the following equation, we see that \( y \) must be a square modulo \( n \), which contradicts our assumption:
\( \eqalign{ z_{1}^{2} \: mod \: n &= y \cdot a \: mod \: n &&(a = z_{0}^{2}) \\ z_{1}^{2} \: mod \: n &= y \cdot z_{0}^{2} \: mod \: n \\ z_{1}^{2} \cdot (z_{0}^{2})^{-1} \: mod \: n &= y \cdot z_{0}^{2} \cdot (z_{0}^{2})^{-1} \: mod \: n &&(z_{0}^{2} \cdot (z_{0}^{2})^{-1} = 1) \\ z_{1}^{2} \cdot (z_{0}^{2})^{-1} \: mod \: n &= y \: mod \: n &&(\mbox{exponent rule}) \\ z_{1}^{2} \cdot (z_{0}^{-1})^{2} \: mod \: n &= y \: mod \: n &&(\mbox{exponent rule}) \\ (z_{1} \cdot z_{0}^{-1})^{2} \: mod \: n &= y \: mod \: n } \)
In other words, if Peggy can produce a valid response for both \( b = 0 \) and \( b = 1 \), then \( y \) must be a square modulo \( n \) because \( y = (z_{1} \cdot z_{0}^{-1})^{2} \: mod \: n \). Therefore, when \( y \) is not a square modulo \( n \), Peggy can only produce a valid response for the challenge \( b = 0 \). This implies that in one round, Victor has a \( \frac{1}{2} \) probability of accepting Peggy's response when \( y \) is not a square modulo \( n \). Thus, if the protocol is repeated \( k \) times, the probability is \( (\frac{1}{2})^{k} \), and for example, with \( k = 80 \), the probability becomes acceptably small.
To prove that the protocol is zero-knowledge, we need to construct a simulator \( M \) that can convince Victor that \( y \) is a square modulo \( n \) (note that \( M \) does not know the value of \( x \)). The simulator \( M \) is constructed as follows: (1) \( M \) chooses a random bit \( c \) (a guess of Victor's challenge) and a random value \( r \), computes \( a_{M} = r^{2} \cdot (y^{-1})^{c} \: mod \: n \), and sends \( a_{M} \) to Victor; (2) Victor sends the challenge \( b \); (3) if \( b \neq c \), \( M \) rewinds Victor and starts over with fresh random choices, and if \( b = c \), \( M \) outputs the response \( z_{M} = r \).
Step (3), where \( M \) rewinds Victor, may seem a bit odd, but it is essentially the same as restarting a computer (in our case, Victor) when it freezes (in our case, when \( M \) receives a challenge from Victor that it cannot answer). We see that \( M \) still satisfies completeness because when \( c=b \), then:
\( \eqalign{ a_{M} \cdot y^{c} \: mod \: n &= (r^{2} \cdot (y^{-1})^{c}) \cdot y^{c} \: mod \: n &&(\mbox{exponent rule})\\ &= r^{2} \cdot (y^{c})^{-1} \cdot y^{c} \: mod \: n &&((y^{c})^{-1} \cdot y^{c} \: mod \: n = 1)\\ &= r^{2} \: mod \: n } \)
Remember that \( (a_{M},c,z_{M}) \) (the output of \( M \)) must have the same distribution as \( (a,b,z) \) (the output of the real protocol between Peggy and Victor): the challenge is a uniformly random bit in both cases; the response is a uniformly random value in both cases, because \( r \) is chosen uniformly at random (and multiplying by the fixed value \( x^{b} \) does not change this); and in both cases the first message is the unique value that makes Victor's verification equation hold for the given challenge and response.
The Fiat-Shamir protocol has been implemented in smart cards for secure authentication. The card proves it possesses a secret without revealing it, preventing card cloning.
Zero-knowledge proofs based on Fiat-Shamir are used in privacy-focused cryptocurrencies like Zcash to verify transactions without revealing sensitive details.
Step | Description |
---|---|
Initialization | Peggy chooses two large prime numbers \( p \) and \( q \) and computes \( n = p \cdot q \). She chooses the secret value \( x \) and computes the public value \( y = x^{2} \: mod \: n \). She also chooses a random value \( r \) between \( 1 \) and \( n-1 \) and computes \( a = r^{2} \: mod \: n \). Peggy sends the public values \( (n, y, a) \) to Victor. |
The following steps are repeated \( k \) times | |
Challenge | Victor chooses a random challenge \( b \), where either \( b=0 \) or \( b=1 \), and sends it to Peggy. |
Response | Peggy computes \( z = x^{b} \cdot r \: mod \: n \) and sends the response \( z \) to Victor. |
Verification | Victor checks that \( z^{2} \: mod \: n \) is equal to \( y^{b} \cdot a \: mod \: n \). |
Peggy first chooses the two prime numbers \( p = 197 \) and \( q = 281 \), and computes \( n = p \cdot q = 197 \cdot 281 = 55357 \). She then selects the secret value \( x = 57 \) and computes the public value \( y = x^{2} \: mod \: n = 57^{2} \: mod \: 55357 = 3249 \), which she wants to convince Victor is a square modulo \( n = 55357 \). Next, Peggy chooses the random integer \( r = 60 \) and computes \( a = r^{2} \: mod \: n = 60^{2} \: mod \: 55357 = 3600 \), which she sends to Victor.
In the first round, Victor chooses the challenge \( b = 0 \) and sends it to Peggy. Peggy responds to the challenge by computing \( z = x^{b} \cdot r \: mod \: n = 57^{0} \cdot 60 \: mod \: 55357 = 60 \), which she sends to Victor. Victor then verifies the response by checking that \( v = z^{2} \: mod \: n = 60^{2} \: mod \: 55357 = 3600 \) is equal to \( v' = y^{b} \cdot a \: mod \: n = 3249^{0} \cdot 3600 \: mod \: 55357 = 3600 \).
Because \( v = v' \), Victor accepts the response as valid and starts a new round by sending a new challenge \( b' \) to Peggy. If Peggy's response is valid in all \( k \) rounds, then Victor is convinced that \( y = 3249 \) is a square modulo \( n = 55357 \).
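A sketch of one round of the protocol with the numbers from this example (variable names are ours); the final assertion mirrors Victor's verification:

```python
import random

# Peggy's setup (values from the example)
p, q = 197, 281
n = p * q                       # 55357
x = 57                          # secret value
y = pow(x, 2, n)                # public value, 3249

# One round of the protocol
r = 60
a = pow(r, 2, n)                # 3600, sent to Victor
b = random.choice([0, 1])       # Victor's random challenge
z = (pow(x, b, n) * r) % n      # Peggy's response

# Victor's verification: z^2 = y^b * a (mod n)
assert pow(z, 2, n) == (pow(y, b, n) * a) % n
print(b, z)
```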
The Pedersen commitment scheme, proposed in 1991 by Torben P. Pedersen, is an unconditionally hiding and computationally binding commitment scheme based on the discrete logarithm problem. The setup of the scheme is the same as in the Schnorr signature and the Digital Signature Algorithm (DSA). Note that Victor generates the keys because the commitment scheme is unconditionally hiding and computationally binding.
First, Victor (or a trusted third party) chooses two prime numbers \( p \) and \( q \) such that \( p \: mod \: q = 1 \) (by first choosing \( q \) and then computing \( p = i \cdot q + 1 \) for \( i \geq 2 \) until \( p \) is a prime number), and a generator \( g \) of order \( q \) from the group \( \mathbb{Z}_{p}^{*} \). If a generator \( g \) has order \( q \), it means that \( g \) generates a subgroup \( G \) of \( \mathbb{Z}_{p}^{*} \) with \( q \) elements. Victor can easily find \( g \) by computing \( g = g_{1}^{(p-1)/q} \: mod \: p \), where \( g_{1} \) is a generator of the group \( \mathbb{Z}_{p} \).
Victor then chooses a secret key \( a \) between \( 1 \) and \( q-1 \) and computes the public key \( pk = h \), where \( h = g^{a} \: mod \: p \), which he sends to Peggy. Before Peggy makes a commitment to the value \( x \), she verifies the received public key and only continues if it was generated correctly. Peggy first chooses a random integer \( r \) between \( 1 \) and \( q-1 \), and then uses the received public key \( pk \) to compute the commitment \( c = commit_{pk}(x, r) = g^{x} \cdot h^{r} \: mod \: p \), which she sends to Victor.
At some later point, Peggy opens the commitment by sending the values \( x \) and \( r \) to Victor, who first computes the commitment \( c' = commit_{pk}(x, r) = g^{x} \cdot h^{r} \: mod \: p \) and then verifies that \( c = c' \). Only if this is true does he know that the received \( x \) is the same value that Peggy committed to earlier.
In the following, we will explain why the scheme is unconditionally hiding and computationally binding (in fact, Pedersen's commitment scheme is perfectly hiding). Perfect hiding means that the commitment \( c = commit_{pk}(x, r) = g^{x} \cdot h^{r} \: mod \: p \) reveals no information whatsoever about the value \( x \).
Rewriting the commitment, we have:
\(\eqalign{ c &= g^{x} \cdot h^{r} \: mod \: p &&(h = g^{a}) \\ &= g^{x} \cdot (g^{a})^{r} \: mod \: p &&(\mbox{exponent rule}) \\ &= g^{x} \cdot g^{a \cdot r} \: mod \: p &&(\mbox{exponent rule}) \\ &= g^{x + a \cdot r} \: mod \: p }\)
In other words, the commitment \( c \) is a random element in the group \( G \): \( g \) is an element in \( G \) (since \( g \) generates \( G \), it must also belong to the group), and the group \( G \) is closed under multiplication, so \( g^{x + a \cdot r} \) is an element in \( G \) (think of \( g^{x + a \cdot r} \) as \( g \) multiplied by itself \( x + a \cdot r \) times). Because we multiply by the random integer \( r \) in the exponent, the commitment \( c \) is a completely random element in \( G \), and therefore the scheme is perfectly hiding.
The computational binding property means that Peggy can only change the committed value \( x \) if she has infinite computing power or time; that is, she can compute a valid commitment \( c \) for two values \( x \) and \( x' \) where \( x \neq x' \). We can prove the computational binding property by showing that if Peggy can compute such a commitment \( c \), then she can solve the discrete logarithm problem: Assume that Peggy has computed a valid commitment \( c \) for both \( x \) and \( x' \) where \( x \neq x' \), i.e. \( commit_{pk}(x, r) = c = commit_{pk}(x', r') \). We then have:
\(\eqalign{ g^{x} \cdot h^{r} \: mod \: p &= g^{x'} \cdot h^{r'} \: mod \: p \\ g^{x} \cdot h^{r} \cdot (h^{r'})^{-1} \: mod \: p &= g^{x'} \cdot h^{r'} \cdot (h^{r'})^{-1} \: mod \: p &&(h^{r'} \cdot (h^{r'})^{-1} = 1) \\ g^{x} \cdot (g^{x})^{-1} \cdot h^{r} \cdot (h^{r'})^{-1} \: mod \: p &= g^{x'} \cdot (g^{x})^{-1} \: mod \: p &&(g^{x} \cdot (g^{x})^{-1} = 1) \\ h^{r} \cdot (h^{r'})^{-1} \: mod \: p &= g^{x'} \cdot (g^{x})^{-1} \: mod \: p &&(\mbox{exponent rule}) \\ h^{r} \cdot h^{-r'} \: mod \: p &= g^{x'} \cdot g^{-x} \: mod \: p &&(\mbox{exponent rule}) \\ h^{r - r'} \: mod \: p &= g^{x' - x} \: mod \: p \\ }\)
This implies that \( a \cdot (r-r') \: mod \: q = x'-x \: mod \: q \) (remember that \( h = g^{a} \) and that we compute modulo \( q \) in the exponent of \( g \) because the order of \( g \) is \( q \)). Because \( x \neq x' \), we must also have \( r \neq r' \) (otherwise the two commitments could not be equal), so \( r-r' \neq 0 \) and hence \( \gcd(r-r', q)=1 \) (recall that \( \gcd(m,q)=1 \) for every integer \( m \) between \( 1 \) and \( q-1 \) when \( q \) is a prime number), which means we can compute the inverse \( (r-r')^{-1} \) of \( r-r' \) using the extended Euclidean algorithm. By multiplying both sides of the congruence \( a \cdot (r-r') \equiv x'-x \: (mod \: q) \) by \( (r-r')^{-1} \), we find that the discrete logarithm of \( h \) is \( a = (x'-x) \cdot (r-r')^{-1} \: mod \: q \) (remember that \( h = g^{a} \: mod \: p \), so we have solved the discrete logarithm problem).
Why does this prove that the scheme is computationally binding? We know that the discrete logarithm problem is computationally hard in the group \( G \) (because \( G \) is a subgroup of \( \mathbb{Z}_{p}^{*} \) of order \( q \)). Therefore, since Peggy can only change her commitment by solving the discrete logarithm problem—which is computationally hard—it is also computationally hard for Peggy to change the committed value.
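The binding argument can be illustrated with toy parameters (chosen by us, not taken from the worked example below): a prover who knows the discrete logarithm \( a \) can open one commitment to two different values, and anyone who sees both openings recovers \( a \) exactly as in the proof above.

```python
# Toy parameters: g = 2 has order q = 11 in Z_23^* (2^11 mod 23 = 1)
p, q, g = 23, 11, 2
a = 5                          # Victor's secret key (unknown to an honest Peggy)
h = pow(g, a, p)               # public key

# A dishonest Peggy who somehow knows a opens the same commitment two ways
x, r = 3, 7
c = (pow(g, x, p) * pow(h, r, p)) % p
r2 = 2
x2 = (x + a * (r - r2)) % q    # second value chosen so the commitment matches
assert c == (pow(g, x2, p) * pow(h, r2, p)) % p

# From the two openings, anyone recovers a = (x' - x) * (r - r')^(-1) mod q
recovered = ((x2 - x) * pow(r - r2, -1, q)) % q
assert recovered == a
```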
Pedersen commitments are used in Monero, Grin, and other privacy coins to hide transaction amounts while ensuring they balance correctly. The homomorphic property allows verifying that inputs equal outputs.
Electronic voting systems use Pedersen commitments to ensure ballot secrecy while allowing voters to verify their vote was counted correctly. This provides both privacy and verifiability.
Step | Description |
---|---|
Public parameters | A trusted third party publishes two large prime numbers \( p \) and \( q \) such that \( p \: mod \: q = 1 \), and a generator \( g = g_{1}^{(p-1)/q} \: mod \: p \) of order \( q \) from the group \( \mathbb{Z}_{p}^{*} \) (where \( g_{1} \) is a generator of the group \( \mathbb{Z}_{p} \)). |
Initialization | Victor chooses a secret key \( a \) between \( 1 \) and \( q-1 \) and computes the public key \( h = g^{a} \: mod \: p \). Victor sends the public key \( pk = h \) to Peggy. |
Commitment | Peggy verifies the received public key, chooses a random integer \( r \) between \( 1 \) and \( q-1 \), and computes the commitment \( c = commit_{pk}(x, r) = g^{x} \cdot h^{r} \: mod \: p \). Peggy sends the commitment \( c \) to Victor. |
Opening | Peggy opens the commitment by sending the values \( x \) and \( r \) to Victor. Victor computes \( c' = commit_{pk}(x, r) = g^{x} \cdot h^{r} \: mod \: p \) and verifies that \( c = c' \). |
Victor chooses the two prime numbers \( p = 1447 \) and \( q = 241 \), and the generator \( g_{1} = 1252 \) of the group \( \mathbb{Z}_{1447} \). He then computes the generator \( g = g_{1}^{(p-1)/q} \: mod \: p = 1252^{(1447-1)/241} \: mod \: 1447 = 123 \), which has order \( q = 241 \). Next, he chooses the secret key \( a = 161 \) and computes the public key \( h = g^{a} \: mod \: p = 123^{161} \: mod \: 1447 = 944 \), which he sends to Peggy.
Peggy then chooses the random value \( r = 5 \) and makes a commitment to \( x = 52 \) by computing \( c = g^{x} \cdot h^{r} \: mod \: p = 123^{52} \cdot 944^{5} \: mod \: 1447 = 325 \), which she sends to Victor. When she wants to open the commitment, she sends \( x = 52 \) and \( r = 5 \) to Victor. Victor then computes the commitment \( c' = g^{x} \cdot h^{r} \: mod \: p = 123^{52} \cdot 944^{5} \: mod \: 1447 = 325 \) and checks that \( c' = 325 \) is equal to \( c = 325 \).
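The numbers in this example can be reproduced with a few lines of Python; a minimal sketch:

```python
# Public parameters and keys from the example
p, q, g1 = 1447, 241, 1252
g = pow(g1, (p - 1) // q, p)    # 123, generator of order q
a = 161                          # Victor's secret key
h = pow(g, a, p)                 # 944, public key sent to Peggy

# Peggy commits to x = 52 with randomness r = 5
x, r = 52, 5
c = (pow(g, x, p) * pow(h, r, p)) % p
print(g, h, c)                   # 123 944 325

# Opening: Victor recomputes the commitment from (x, r) and compares
c_check = (pow(g, x, p) * pow(h, r, p)) % p
assert c == c_check
```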
Schnorr's sigma protocol, developed by Claus P. Schnorr in 1989, is based on the discrete logarithm problem, like ElGamal. The setup of the protocol is the same as in the Schnorr signature and the Digital Signature Algorithm (DSA).
First, Peggy (or a trusted third party) chooses a prime number \( q \) and computes another prime number \( p \) by \( p = 2 \cdot q + 1 \) (so \( p \: mod \: q = 1 \)), and a generator \( g \) of order \( q \) from the group \( \mathbb{Z}_{p}^{*} \). If a generator \( g \) has order \( q \), it means that \( g \) generates a subgroup \( G \) of \( \mathbb{Z}_{p}^{*} \) with \( q \) elements. Peggy can easily find \( g \) by computing \( g = g_{1}^{(p-1)/q} \: mod \: p \), where \( g_{1} \) is a generator of the group \( \mathbb{Z}_{p} \). Peggy then chooses a secret key \( w \) between \( 1 \) and \( q-1 \) and computes the public key \( h = g^{w} \: mod \: p \), which she publishes. The goal of the protocol is for Peggy to convince Victor that she is really Peggy, i.e., she is the owner of the secret key \( w \) corresponding to the public key \( h \).
Remember that the messages sent in a sigma protocol must be in the special form \( (a,e,z) \), which is called a conversation, where Peggy sends the first message \( a \), Victor sends the challenge \( e \), and Peggy responds with \( z \): Peggy first chooses a random number \( r \) between \( 1 \) and \( q-1 \) and computes \( a = g^{r} \: mod \: p \), which she sends to Victor. Victor then chooses a random challenge \( e \) between \( 1 \) and \( q-1 \) and sends it to Peggy, who responds with \( z = r + e \cdot w \: mod \: q \), where she uses the secret key \( w \). Finally, Victor verifies Peggy's identity by checking that \( s = g^{z} \: mod \: p \) is equal to \( s' = a \cdot h^{e} \: mod \: p \).
We now need to check that Schnorr's sigma protocol satisfies completeness, special soundness, and special HVZK.
If Peggy knows the value of \( w \), it's easy to check completeness:
\(\eqalign{ s &= g^{z} \: mod \: p &&(z = r + e \cdot w) \\ &= g^{r + e \cdot w} \: mod \: p &&(\mbox{exponent rule}) \\ &= g^{r} \cdot g^{e \cdot w} \: mod \: p &&(\mbox{exponent rule}) \\ &= g^{r} \cdot (g^{w})^{e} \: mod \: p &&(a = g^{r}, h=g^{w}) \\ &= a \cdot h^{e} \: mod \: p =s' }\)
To prove special soundness, we need to show that if Peggy can produce two accepting conversations \( (a,e,z) \) and \( (a,e',z') \) where \( e \neq e' \), then she can efficiently compute the secret key \( w \). Since both conversations are accepting, the final verification holds for both: \( g^{z} \: mod \: p = a \cdot h^{e} \: mod \: p \) and \( g^{z'} \: mod \: p = a \cdot h^{e'} \: mod \: p \). By dividing one equation by the other, we obtain:
\(\eqalign{ \frac{g^{z}}{g^{z'}} \: mod \: p &= \frac{a \cdot h^{e}}{a \cdot h^{e'}} \: mod \: p \\ \frac{g^{z}}{g^{z'}} \: mod \: p &= \frac{h^{e}}{h^{e'}} \: mod \: p &&(\mbox{exponent rule}) \\ g^{z-z'} \: mod \: p &= h^{e-e'} \: mod \: p }\)
If we look at the exponents, we know that they are computed modulo \( q \) because \( g \) is an element in the group \( G \) of order \( q \). We also know that \( e \neq e' \), which implies that \( e - e' \neq 0 \), and therefore \( \gcd(e-e', q) = 1 \) (recall that \( \gcd(x,q)=1 \) for any integer \( x \) between \( 1 \) and \( q-1 \) when \( q \) is a prime number). Because \( \gcd(e-e', q) = 1 \), we can find the inverse \( (e-e')^{-1} \) of \( e-e' \) using the extended Euclidean algorithm. By raising both sides of the equation to the power \( (e-e')^{-1} \), we obtain:
\(\eqalign{ g^{(z-z') \cdot (e-e')^{-1}} \: mod \: p &= h^{(e-e') \cdot (e-e')^{-1}} \: mod \: p &&((e-e') \cdot (e-e')^{-1} = 1) \\ g^{(z-z') \cdot (e-e')^{-1}} \: mod \: p &= h^{1} \: mod \: p }\)
Recall that \( h = g^{w} \: mod \: p \), so we have efficiently computed \( w = (z-z') \cdot (e-e')^{-1} \: mod \: q \). This proves that Schnorr's sigma protocol satisfies special soundness. Therefore, if Peggy does not know the secret key \( w \), she can produce at most one accepting conversation, which only occurs with very small probability when the prime numbers are large.
Now, for special HVZK, we need to construct a simulator \( M \) which, given \( h \) and \( e \), can produce an accepting conversation \( (a_{M},e_{M},z_{M}) \) with the same distribution as a real conversation \( (a,e,z) \) between Peggy and Victor. The simulator \( M \) is constructed as follows: \( M \) sets \( e_{M} = e \), chooses a random value \( z_{M} \) between \( 1 \) and \( q-1 \), computes \( a_{M} = g^{z_{M}} \cdot h^{-e_{M}} \: mod \: p \), and outputs the conversation \( (a_{M},e_{M},z_{M}) \).
We see that \( M \) still satisfies completeness because:
\( \eqalign{ a_{M} \cdot h^{e_{M}} \: mod \: p &= (g^{z_{M}} \cdot h^{-e_{M}}) \cdot h^{e_{M}} \: mod \: p &&(h^{-e_{M}} \cdot h^{e_{M}} \: mod \: p = 1)\\ &= g^{z_{M}} \: mod \: p } \)
\( (a_{M},e_{M},z_{M}) \) also has the same distribution as \( (a,e,z) \) because: the challenge \( e_{M} = e \) is the same in both conversations; the response is a uniformly random value in both conversations, since \( z = r + e \cdot w \: mod \: q \) is uniformly distributed when \( r \) is chosen uniformly at random, just like \( z_{M} \); and in both conversations the first message is the unique value that makes the verification equation hold for the given challenge and response.
Bitcoin implemented Schnorr signatures (BIP 340) to enable more efficient multi-signature transactions and improved privacy. The signature scheme is derived directly from Schnorr's protocol.
Systems like IBM's Identity Mixer use Schnorr protocol variants to allow users to prove they possess valid credentials without revealing which specific credentials they have or identifying themselves.
Step | Description |
---|---|
Public parameters | A trusted third party publishes two large prime numbers \( p \) and \( q \) such that \( p \: mod \: q = 1 \), and a generator \( g = g_{1}^{(p-1)/q} \: mod \: p \) of order \( q \) from the group \( \mathbb{Z}_{p}^{*} \) (where \( g_{1} \) is a generator of the group \( \mathbb{Z}_{p} \)). |
Initialization | Peggy chooses a secret key \( w \) between \( 1 \) and \( q-1 \) and computes the public key \( h = g^{w} \: mod \: p \). Peggy sends the public key \( h \) to Victor. |
First message \( a \) | Peggy chooses a random value \( r \) between \( 1 \) and \( q-1 \) and computes \( a = g^{r} \: mod \: p \). Peggy sends \( a \) to Victor. |
Challenge \( e \) | Victor chooses a random challenge \( e \) between \( 1 \) and \( q-1 \). Victor sends the challenge \( e \) to Peggy. |
Response \( z \) | Peggy computes \( z = r + e \cdot w \: mod \: q \). Peggy sends the response \( z \) to Victor. |
Verification | Victor checks that \( g^{z} \: mod \: p \) is equal to \( a \cdot h^{e} \: mod \: p \). |
Peggy chooses the two prime numbers \( p = 467 \) and \( q = 233 \), and the generator \( g_{1} = 439 \) of the group \( \mathbb{Z}_{467} \). She then computes the generator \( g = g_{1}^{(p-1)/q} \: mod \: p = 439^{(467-1)/233} \: mod \: 467 = 317 \), which has order \( q = 233 \). Next, she chooses the secret key \( w = 20 \) and computes the public key \( h = g^{w} \: mod \: p = 317^{20} \: mod \: 467 = 47 \), which she publishes.
In the protocol, Peggy first chooses the random value \( r = 12 \) and computes \( a = g^{r} \: mod \: p = 317^{12} \: mod \: 467 = 23 \), which she sends to Victor. Victor then randomly chooses a challenge \( e = 117 \), which he sends to Peggy. Peggy responds by computing \( z = r + e \cdot w \: mod \: q = 12 + 117 \cdot 20 \: mod \: 233 = 22 \). Finally, Victor verifies Peggy's identity by checking that \( s = g^{z} \: mod \: p = 317^{22} \: mod \: 467 = 212 \) is equal to \( s' = a \cdot h^{e} \: mod \: p = 23 \cdot 47^{117} \: mod \: 467 = 212 \).
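A sketch that reproduces this run of the protocol (all values taken from the example):

```python
# Public parameters and keys from the example
p, q, g1 = 467, 233, 439
g = pow(g1, (p - 1) // q, p)    # 317, generator of order q
w = 20                           # Peggy's secret key
h = pow(g, w, p)                 # 47, public key

# One conversation (a, e, z)
r = 12
a = pow(g, r, p)                 # 23, Peggy's first message
e = 117                          # Victor's challenge
z = (r + e * w) % q              # 22, Peggy's response

# Victor's verification: g^z = a * h^e (mod p)
assert pow(g, z, p) == (a * pow(h, e, p)) % p
print(a, z, pow(g, z, p))        # 23 22 212
```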