[https://en.wikipedia.org/wiki/Boolean_algebra Boolean algebra] (after George Boole) is the mathematical study of statements that are true or false. Several decades later, Claude Shannon realized the same ideas could be used to build electronic circuits to compute things in binary.

== Boolean mathematics ==

Boolean algebra is similar to ordinary algebra: we can write down equations and solve them (or prove them right or wrong). However, instead of numbers, Boolean equations work on the "numbers" ⊤ or T (true) and ⊥ or F (false). In binary these are also written as 1 and 0, and we'll use that notation, because this is a wiki about calculating things in binary. Instead of +, -, ×, ÷ and so on, the operations you can apply to these values are ∧, ∨ and ¬. Let's take a look at them.

∧ is the "AND" operator (or, by its more fancy name, "conjunction"). 1∧1=1, but 0∧1=0, 1∧0=0 and 0∧0=0. A AND B is true only if A is true and B is true. Makes sense, right? Since the sides are only true or false, there are only 4 ways it can be, and the rule should be easy to remember - there's no need to memorize AND tables like how you had to memorize times tables at school. You have to remember the symbol: ∧ looks a bit like A for AND.

∨ is the "OR" operator (fancy name: "disjunction"). It's true if either side is true, even if both sides are true. So 1∨1=1, 1∨0=1, 0∨1=1, and 0∨0=0. To remember the symbol: it's the opposite of AND.

¬ is the "NOT" operator (fancy name: "complement" or "inversion" or "negation"). It's true if the thing that comes after it is false. ¬0=1 and ¬1=0.

There is also the ⊕ "XOR" ("exclusive or") operator, which is used more in digital logic than in pure mathematics. It's true if one side is true but not both.

Here's what each combination of x and y gives you:

{| class="wikitable"
! x !! y !! x ∧ y<br>(AND) !! x ∨ y<br>(OR) !! x ⊕ y<br>(XOR)
|-
| 0 || 0 || 0 || 0 || 0
|-
| 1 || 0 || 0 || 1 || 1
|-
| 0 || 1 || 0 || 1 || 1
|-
| 1 || 1 || 1 || 1 || 0
|}

{| class="wikitable"
! x !! ¬x<br>(NOT)
|-
| 0 || 1
|-
| 1 || 0
|}
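
These tables are small enough to check by machine. As a purely illustrative sketch, Python's bitwise operators &, | and ^ behave exactly like ∧, ∨ and ⊕ on the integers 0 and 1:

<syntaxhighlight lang="python">
# Reproduce the AND/OR/XOR and NOT truth tables using Python's
# bitwise operators on the integers 0 and 1.
print("x y  AND OR XOR")
for x in (0, 1):
    for y in (0, 1):
        print(x, y, "", x & y, " ", x | y, " ", x ^ y)

print()
print("x  NOT")
for x in (0, 1):
    print(x, "", x ^ 1)  # ¬x: XOR with 1 flips a single bit
</syntaxhighlight>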

== Some Boolean algebraic laws ==

Just like normal maths, we can write more complicated equations and solve them. For example, 1∧((0∧1)∨¬(1∧¬1)) is 1: working from the inside out, 1∧¬1 = 1∧0 = 0, so ¬(1∧¬1) = 1; then 0∧1 = 0, 0∨1 = 1, and finally 1∧1 = 1.

Like in normal algebra, we use letters to stand for things we don't know. So if we have a∧b we can't tell if it's 0 or 1 until we know whether a stands for 0 or 1, and whether b stands for 0 or 1. But we can tell that a∧0 is always 0, no matter what a stands for, and we can simplify equations like this. Even if a stands for another really complicated equation, we know that a∧0 is 0.

If we see ((a∧b)∨(c∧¬d∧0)) we know it's the same as just a∧b: the second part is always 0, because x∧0 is 0 no matter what x is, and then x∨0 is x no matter what x is.
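
Because a Boolean equation has only finitely many input combinations, a claimed simplification like this can also be verified by brute force. A small Python sketch (illustrative only) that checks it:

<syntaxhighlight lang="python">
from itertools import product

# With only four variables there are just 2**4 = 16 input
# combinations, so we can simply try them all.
for a, b, c, d in product((0, 1), repeat=4):
    lhs = (a & b) | (c & (d ^ 1) & 0)  # ((a∧b)∨(c∧¬d∧0))
    rhs = a & b
    assert lhs == rhs, (a, b, c, d)

print("((a∧b)∨(c∧¬d∧0)) = a∧b for every input combination")
</syntaxhighlight>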

(TODO: include a list of simplifying rules without just copying Wikipedia?)

== Digital logic circuits ==

Digital logic lets us build machines that calculate Boolean equations. Each operator becomes a little circuit called a ''logic gate'', and we can wire gates together to calculate complicated equations. We don't draw how each gate is made internally, just which gates are used and how they're wired together; each Boolean operator has its own gate symbol. For example, we can make a logic circuit so that if we tell it what a, b, c and d stand for, it will tell us what ((a∧b)∨¬(¬a∧c∧¬d)) is.

(insert a diagram, or even better, an interactive simulation, of a circuit that does this)
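
In the meantime, the same circuit can be written as a function - an illustrative Python sketch where each gate becomes one operator and the wiring becomes the data flow between variables:

<syntaxhighlight lang="python">
def circuit(a, b, c, d):
    """Compute ((a∧b)∨¬(¬a∧c∧¬d)) on 0/1 inputs."""
    not_a = a ^ 1              # NOT gate
    not_d = d ^ 1              # NOT gate
    left = a & b               # 2-input AND gate
    right = not_a & c & not_d  # 3-input AND gate
    return left | (right ^ 1)  # NOT gate feeding an OR gate

print(circuit(0, 0, 1, 0))  # -> 0: a∧b is 0 and ¬(¬a∧c∧¬d) is 0
print(circuit(1, 1, 0, 0))  # -> 1: a∧b is 1
</syntaxhighlight>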

== Doing something useful ==

We can make circuits that compute useful things - for example, a simple adding machine. If we have a number a and a number b, each of which is 0, 1 or 2, we can write a set of equations that calculates the sum a+b, and make a machine that calculates them. Here each number is represented in "one-hot" form: the signal a0 is 1 when a is 0, a1 is 1 when a is 1, a2 is 1 when a is 2, and the signals out0, out1, out2, ... represent the sum in the same way.

out0 = a0 ∧ b0
out1 = (a1 ∧ b0) ∨ (a0 ∧ b1)
out2 = (a2 ∧ b0) ∨ (a1 ∧ b1) ∨ (a0 ∧ b2)
etc...

(insert a diagram, or even better, an interactive simulation, of a circuit that does this)
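
Here's how those equations could be modelled in Python - a behavioural sketch, where the function name one_hot_add and the use of lists (a[i] standing for the signal a''i'') are illustrative choices, not part of the circuit:

<syntaxhighlight lang="python">
# One-hot adder sketch: the value n is represented by setting the
# n-th signal to 1 and all others to 0 (so 2 is a0=0, a1=0, a2=1).
def one_hot_add(a, b):
    """a and b are lists of 0/1 signals; out[n] is 1 exactly when
    the values encoded by a and b sum to n."""
    out = [0] * (len(a) + len(b) - 1)
    for n in range(len(out)):
        acc = 0
        for i in range(len(a)):
            j = n - i
            if 0 <= j < len(b):
                acc |= a[i] & b[j]  # one AND term per way of making n
        out[n] = acc
    return out

# 1 + 2: a encodes 1, b encodes 2; the output should encode 3.
print(one_hot_add([0, 1, 0], [0, 0, 1]))  # -> [0, 0, 0, 1, 0]
</syntaxhighlight>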

Of course, as you'll see later on, it's better to calculate with binary numbers.

== Memory ==

Digital logic can also use memory units. The most straightforward type is a "D-latch". A single D-latch remembers a true or false value: while its enable signal is true, it follows its data input, taking on whatever value the input has; when the enable signal goes false, it keeps whichever value it saw last. What does it start with when the power is turned on, before it gets an enable signal? Depending on the design, it might be made to start with true, start with false, or start with an unpredictable true or false. Some designs also have a reset signal which sets the remembered value to false.
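
As a rough behavioural sketch (the class name DLatch and its step method are illustrative, not a standard API), a D-latch could be modelled in Python like this:

<syntaxhighlight lang="python">
class DLatch:
    """Behavioural model of a D-latch that powers up storing 0."""
    def __init__(self):
        self.q = 0                 # the remembered value

    def step(self, d, enable, reset=0):
        if reset:                  # optional reset input: force 0
            self.q = 0
        elif enable:               # transparent while enable is 1
            self.q = d
        return self.q              # while enable is 0, q is held

latch = DLatch()
latch.step(d=1, enable=1)          # store 1
print(latch.step(d=0, enable=0))   # -> 1: enable is 0, value held
</syntaxhighlight>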

Being able to store the output from a circuit and feed it back into the same circuit is one of the most powerful capabilities of digital logic, because it allows for multiple computation steps. For example, if you feed the output of an adding circuit back into one of its inputs through a latch, you can reset the latch to 0, then keep feeding different numbers into the other input and calculate the sum total of all the numbers you put in.

(insert a diagram, or even better, an interactive simulation, of this non-working circuit)

But there's a big problem with this idea: pacing. If you put 1 into the second input and enable the latch, you meant to calculate 0+1=1 - but the adding circuit will calculate 0+1=1, then the 1 will loop around through the latch and it will calculate 1+1=2, then the 2 will loop around and it will calculate 2+1=3, and so on. The numbers race around the loop as fast as the adding circuit can calculate. Worse, if there are several bits involved (e.g. when adding numbers in binary), different bits of the same number can get out of sync and make the answer even more wrong.

You could solve this by providing a very short enable pulse - so short that the enable turns off before the next number loops around. But you have to make sure the pulse is extremely short, and the exact length needed can vary with temperature, voltage, and so on. It's much more reliable to build a latch that updates only once, at the moment the enable signal goes from 0 to 1 (a "rising edge"); then there's no need to be careful with signal speed. To make it update again, you turn the enable signal back to 0 and then to 1 again. Used this way, the enable signal is known as a ''clock'' signal and the latch is called a ''D flip-flop''. You know the adding-circuit loop will process the number exactly once each time the clock goes from 0 to 1 and back to 0.

(insert a diagram, or even better, an interactive simulation, of this better circuit)
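
As a behavioural sketch in Python (again, the class and method names are illustrative): the flip-flop captures its input only when it sees the clock change from 0 to 1, so the feedback loop advances exactly one step per clock pulse:

<syntaxhighlight lang="python">
class DFlipFlop:
    """Behavioural model of a rising-edge-triggered D flip-flop."""
    def __init__(self):
        self.q = 0
        self.prev_clock = 0

    def step(self, d, clock):
        if clock == 1 and self.prev_clock == 0:  # rising edge only
            self.q = d
        self.prev_clock = clock
        return self.q

# Accumulator loop: the adder's output feeds the flip-flop, and the
# flip-flop's output feeds back into the adder.
reg = DFlipFlop()
for value in (1, 2, 3):
    reg.step(d=reg.q + value, clock=1)  # rising edge: capture once
    reg.step(d=reg.q + value, clock=0)  # clock low: nothing captured
print(reg.q)  # -> 6
</syntaxhighlight>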

Most D flip-flops have an enable signal as well as a clock signal. The flip-flop only updates when the clock goes from 0 to 1 ''and'' the enable is 1 at that moment. This makes it even easier to design reliable circuits. When logic gates calculate an equation, they take a little time to update, and in a half-updated state they can produce a "glitch" - a brief pulse from 0 to 1 and back to 0 - even if the final, settled output is still 0. If you used logic gates to calculate a clock signal, or the enable signal of a plain latch, you'd have to be very careful not to let this happen - possible, but difficult. But glitches aren't a problem on a flip-flop's enable signal, since the flip-flop only looks at it at the exact moment of the clock's rising edge.

<!-- is it possible to do a glitch simulation? -->
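
To illustrate (with the same caveats as the sketches above - a behavioural model, not a real design), an enable input sampled only at the rising edge can be added to the flip-flop model, and a glitch on it while the clock is low is simply ignored:

<syntaxhighlight lang="python">
class DFlipFlopWithEnable:
    """Rising-edge D flip-flop with an enable input. The enable is
    only sampled at the clock's rising edge, so brief glitches on it
    at other times have no effect."""
    def __init__(self):
        self.q = 0
        self.prev_clock = 0

    def step(self, d, clock, enable):
        if clock == 1 and self.prev_clock == 0 and enable:
            self.q = d
        self.prev_clock = clock
        return self.q

ff = DFlipFlopWithEnable()
ff.step(d=1, clock=1, enable=0)  # edge, but enable is 0: q stays 0
ff.step(d=1, clock=0, enable=1)  # glitchy enable, clock low: ignored
print(ff.step(d=1, clock=1, enable=1))  # -> 1: edge with enable high
</syntaxhighlight>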