In the late 1930s, Claude Shannon showed that by using switches that close for "true" and open for "false," it was possible to carry out logical operations by assigning the number 1 to "true" and 0 to "false."
This information encoding system is called binary. It's the form of encoding that allows computers to run. Binary uses two states (represented by the digits 0 and 1) to encode information.
Since 2000 BCE, humans have counted using 10 digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9). This is called "decimal base" (or base 10). However, older civilizations, and even some current applications, used and still use other number bases: sexagesimal (60), used by the Sumerians, which is also used in our current system of timekeeping, for minutes and seconds; vicesimal (20), used by the Mayans; duodecimal (12), used in the monetary systems of the United Kingdom and Ireland until 1971; quinary (5), used by the Mayans; and binary (2), used by all digital technology.
The term bit (shortened to the lowercase b) means binary digit, that is, 0 or 1 in binary numbering. It is the smallest unit of information that can be manipulated by a digital machine. This binary information can be represented: with an electrical or magnetic signal which, beyond a certain threshold, stands for 1; by the roughness of bumps in a surface; or with flip-flops, electrical components that have two stable states (one standing for 1, the other for 0).
Therefore, a bit can be set to one of two states: either 1 or 0. With two bits, you can have four different states (2 × 2 = 4): 00, 01, 10, and 11.
With 3 bits, you can have eight different states (2 × 2 × 2 = 8):
| 3-bit binary value | Decimal value |
|---|---|
| 000 | 0 |
| 001 | 1 |
| 010 | 2 |
| 011 | 3 |
| 100 | 4 |
| 101 | 5 |
| 110 | 6 |
| 111 | 7 |
For a group of n bits, it is possible to represent 2^n different values.
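The states listed above can be enumerated programmatically. Here is a minimal sketch (the variable names are illustrative, not from the article):

```python
from itertools import product

# Enumerate every state a group of n bits can take.
n = 3
states = ["".join(bits) for bits in product("01", repeat=n)]
print(states)       # ['000', '001', '010', '011', '100', '101', '110', '111']
print(len(states))  # 8, i.e. 2**n
```

Changing `n` to 8 yields the 256 states of a byte.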
In a binary number, a bit's value depends on its position, starting from the right. Like tens, hundreds, and thousands in a decimal number, a bit's value grows by a power of two as it goes from right to left, as shown in the following chart:
| Value | 2^7 = 128 | 2^6 = 64 | 2^5 = 32 | 2^4 = 16 | 2^3 = 8 | 2^2 = 4 | 2^1 = 2 | 2^0 = 1 |
|---|---|---|---|---|---|---|---|---|
To convert a binary string into a decimal number, multiply the value of each bit by its weight, then add together the products. Therefore, the binary string 0101, in decimal, becomes:

2^3 × 0 + 2^2 × 1 + 2^1 × 0 + 2^0 × 1
= 8 × 0 + 4 × 1 + 2 × 0 + 1 × 1
= 5
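This weight-by-weight conversion can be sketched in a few lines of Python (the function name is illustrative):

```python
# Convert a binary string to decimal by summing bit × weight,
# mirroring the worked conversion of 0101 above.
def binary_to_decimal(bits: str) -> int:
    total = 0
    for position, bit in enumerate(reversed(bits)):
        total += int(bit) * 2 ** position  # weight grows right to left
    return total

print(binary_to_decimal("0101"))  # 5
print(int("0101", 2))             # 5, Python's built-in parser agrees
```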
A byte (shortened to the uppercase B) is a unit of information composed of 8 bits. It can be used to store, among other things, a character, such as a letter or number.
Grouping numbers in clusters of 8 makes them easier to read, much as grouping numbers in threes helps to make thousands clearer when working in base-10. For example, the number "1,256,245" is easier to read than "1256245".
A 16-bit unit of information is usually called a word. A 32-bit unit of information is called a double word (sometimes called a dword). For a byte, the smallest number possible is 0 (represented by eight zeroes, 00000000), and the largest is 255 (represented by eight ones, 11111111), making for 256 different possible values.
| 2^7 = 128 | 2^6 = 64 | 2^5 = 32 | 2^4 = 16 | 2^3 = 8 | 2^2 = 4 | 2^1 = 2 | 2^0 = 1 |
|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
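The byte's range can be verified directly, since Python accepts binary literals:

```python
# A byte holds 8 bits, so its values run from 0 to 2**8 - 1 = 255.
smallest = 0b00000000
largest = 0b11111111
print(smallest, largest)  # 0 255
print(2 ** 8)             # 256 distinct values
```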
Kilobytes and Megabytes
For a long time, computer science was unusual in that its units did not match those of the metric system (also called the International System of Units). Computer users would often learn that 1 kilobyte was made up of 1024 bytes. For this reason, in December 1998, the International Electrotechnical Commission weighed in on the issue (http://physics.nist.gov/cuu/Units/binary.html). Here are the IEC's standardized units: one kilobyte (kB) = 1000 bytes; one megabyte (MB) = 1000 kB = 1,000,000 bytes; one gigabyte (GB) = 1000 MB = 1,000,000,000 bytes; and one terabyte (TB) = 1000 GB = 1,000,000,000,000 bytes.
|Warning: some software (and even some operating systems) still use the pre-1998 notation, which is as follows:
- One kilobyte (kB) = 2^10 bytes = 1024 bytes
- One megabyte (MB) = 2^20 bytes = 1024 kB = 1,048,576 bytes
- One gigabyte (GB) = 2^30 bytes = 1024 MB = 1,073,741,824 bytes
- One terabyte (TB) = 2^40 bytes = 1024 GB = 1,099,511,627,776 bytes
The IEC has also defined binary kilo (kibi), binary mega (mebi), binary giga (gibi), and binary tera (tebi).
They are defined as: one kibibyte (KiB) is worth 2^10 = 1024 bytes; one mebibyte (MiB) is worth 2^20 = 1,048,576 bytes; one gibibyte (GiB) is worth 2^30 = 1,073,741,824 bytes; and one tebibyte (TiB) is worth 2^40 = 1,099,511,627,776 bytes.
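The gap between the decimal and binary prefixes can be made concrete with a short sketch (the dictionary names are illustrative):

```python
# Decimal (SI) prefixes vs the IEC binary prefixes defined in 1998.
si = {"kB": 1000, "MB": 1000**2, "GB": 1000**3, "TB": 1000**4}
iec = {"KiB": 1024, "MiB": 1024**2, "GiB": 1024**3, "TiB": 1024**4}

print(iec["MiB"])             # 1048576
print(iec["GiB"] - si["GB"])  # 73741824 bytes of difference per "gigabyte"
```

The discrepancy grows with each prefix, which is why a "1 TB" drive reports fewer "TB" in software that still counts in powers of 1024.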
In some languages, such as French and Finnish, the word for byte does not begin with "b", but the international community, on the whole, generally prefers the English term "byte." This gives the following notations for kilobyte, megabyte, gigabyte, and terabyte:
kB, MB, GB, TB
|Note the use of an uppercase B to distinguish Byte from bit.|
Simple arithmetic operations — such as addition, subtraction, and multiplication — are easily performed in binary.
Addition in Binary
Addition in binary follows the same rules as in decimal: start by adding the lowest-valued bits (those on the right). When the sum of two bits in the same position exceeds the largest value a single digit can hold (in binary: 1), the excess is carried over to the bit in the next position.
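The carry mechanism described above can be sketched as follows (the function name is illustrative):

```python
# Binary addition with carries, column by column from the right.
def add_binary(a: str, b: str) -> str:
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)  # pad to equal length
    carry, digits = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry  # sum of the two bits plus carry
        digits.append(str(total % 2))    # bit kept in this position
        carry = total // 2               # carry to the next position
    if carry:
        digits.append("1")
    return "".join(reversed(digits))

print(add_binary("0101", "0011"))  # 1000  (5 + 3 = 8)
```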
Multiplication in Binary
The multiplication table in binary is simple:

| × | 0 | 1 |
|---|---|---|
| 0 | 0 | 0 |
| 1 | 0 | 1 |
Multiplication is performed by calculating a partial product for each bit of the multiplier (only the non-zero bits give a non-zero result). When the bit of the multiplier is zero, the partial product is null; when it is equal to one, the partial product is the multiplicand shifted X places to the left, where X is the weight (position) of the multiplier bit.
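This shift-and-add procedure can be sketched in Python (the function name is illustrative):

```python
# Shift-and-add multiplication: for each 1 bit of the multiplier,
# add the multiplicand shifted left by that bit's weight.
def multiply_binary(multiplicand: str, multiplier: str) -> str:
    total = 0
    for shift, bit in enumerate(reversed(multiplier)):
        if bit == "1":  # zero bits give a null partial product
            total += int(multiplicand, 2) << shift
    return bin(total)[2:] if total else "0"

print(multiply_binary("101", "11"))  # 1111  (5 × 3 = 15)
```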
Latest update on June 19, 2018 at 07:24 PM by owilson.