Number Systems
There are infinitely many ways to represent a number. The four most commonly associated with modern computers and digital electronics are decimal, binary, octal, and hexadecimal.
Decimal (base 10) is the way most human beings represent numbers. Decimal is sometimes abbreviated as dec.
Decimal counting goes: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, and so on.
Binary (base 2) is the natural way most digital circuits represent and manipulate numbers. Binary numbers are sometimes represented by preceding the value with '0b', as in 0b1011. Binary is sometimes abbreviated as bin.
Binary counting goes: 0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, 1111, 10000, 10001, and so on.
Octal (base 8) was previously a popular choice for representing digital circuit numbers in a form that is more compact than binary. Octal is sometimes abbreviated as oct.
Octal counting goes: 0, 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14, 15, 16, 17, 20, 21, and so on.
Hexadecimal (base 16) is currently the most popular choice for representing digital circuit numbers in a form that is more compact than binary. Hexadecimal numbers are sometimes represented by preceding the value with '0x', as in 0x1B84. Hexadecimal is sometimes abbreviated as hex.
Hexadecimal counting goes: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, 10, 11, and so on.
All four number systems are equally capable of representing any number. Furthermore, a number can be perfectly converted between the various number systems without any loss of numeric value.
At first blush, it seems like using any number system other than human-centric decimal is complicated and unnecessary. However, because electrical and software engineers work with digital circuits, they need number systems that can best transfer information between the human world and the digital circuit world.
It turns out that the way in which a number is represented can make it easier for the engineer to perceive the meaning of the number as it applies to a digital circuit. In other words, the appropriate number system can actually make things less complicated.
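Most programming languages can render the same value in all four number systems directly. A minimal Python sketch (the value 229 is just an arbitrary example):

```python
# One value, four representations. Python uses the prefixes
# 0b (binary), 0o (octal), and 0x (hexadecimal).
value = 229  # decimal

print(value)       # 229
print(bin(value))  # 0b11100101
print(oct(value))  # 0o345
print(hex(value))  # 0xe5

# Converting between systems never loses numeric value:
assert int(bin(value), 2) == int(oct(value), 8) == int(hex(value), 16) == 229
```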
Binary Number Conversion
The digits that all four number systems use are shown below:

Decimal   Binary   Hexadecimal   Octal
0         0        0             0
1         1        1             1
2         10       2             2
3         11       3             3
4         100      4             4
5         101      5             5
6         110      6             6
7         111      7             7
8         1000     8             10
9         1001     9             11
10        1010     A             12
11        1011     B             13
12        1100     C             14
13        1101     D             15
14        1110     E             16
15        1111     F             17
16        10000    10            20

Binary to Octal
An easy way to convert from binary to octal is to group binary digits into sets of three, starting with the least significant (rightmost) digits.
Binary: 11100101

Group the digits in threes from the right, padding the most significant digits with zeros if necessary to complete a group of three:

11 100 101  ->  011 100 101

Then, look up each group in a table:

Binary: 000  001  010  011  100  101  110  111
Octal:    0    1    2    3    4    5    6    7

Binary = 011 100 101
Octal  =   3   4   5

= 345 oct
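The grouping procedure above is easy to automate. A short Python sketch (the function name is mine, not from the text):

```python
def binary_to_octal(bits: str) -> str:
    """Convert a binary string to octal by grouping bits in threes,
    padding the most significant end with zeros as needed."""
    bits = bits.zfill((len(bits) + 2) // 3 * 3)   # pad to a multiple of 3
    groups = [bits[i:i + 3] for i in range(0, len(bits), 3)]
    return "".join(str(int(group, 2)) for group in groups)

print(binary_to_octal("11100101"))  # 345
```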

Binary to Hexadecimal
An equally easy way to convert from binary to hexadecimal is to group binary digits into sets of four, starting with the least significant (rightmost) digits.
Binary: 11100101 = 1110 0101
Then, look up each group in a table:
Binary:       0000  0001  0010  0011  0100  0101  0110  0111
Hexadecimal:     0     1     2     3     4     5     6     7

Binary:       1000  1001  1010  1011  1100  1101  1110  1111
Hexadecimal:     8     9     A     B     C     D     E     F

Binary      = 1110 0101
Hexadecimal =    E    5

= E5 hex
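The same grouping idea, with groups of four bits, gives a hexadecimal converter. A Python sketch (again, the function name is illustrative):

```python
def binary_to_hex(bits: str) -> str:
    """Convert a binary string to hexadecimal by grouping bits in fours,
    padding the most significant end with zeros as needed."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)   # pad to a multiple of 4
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join("0123456789ABCDEF"[int(group, 2)] for group in groups)

print(binary_to_hex("11100101"))  # E5
```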

Binary to Decimal
Convert: 101101_{(2)} -> X_{(10)}
Index the digits of the number:
1 (position 5), 0 (position 4), 1 (position 3), 1 (position 2), 0 (position 1), 1 (position 0)
Multiply each digit by its place value:
1 * 2^{5} + 0 * 2^{4} + 1 * 2^{3} + 1 * 2^{2} + 0 * 2^{1} + 1 * 2^{0}
32 + 0 + 8 + 4 + 0 + 1 = 45_{(10)}
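The positional sum above can be sketched in Python (function name is mine):

```python
def binary_to_decimal(bits: str) -> int:
    """Sum digit * 2^position, indexing positions from the right."""
    total = 0
    for position, digit in enumerate(reversed(bits)):
        total += int(digit) * 2 ** position
    return total

print(binary_to_decimal("101101"))  # 45
```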
Decimal Number Conversion
A repeated division and remainder algorithm can convert decimal to binary, octal, or hexadecimal.
1. Divide the decimal number by the desired target radix (2, 8, or 16).
2. Append the remainder as the next most significant digit of the result.
3. Repeat with the quotient until it reaches zero.
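These three steps can be sketched as a single Python function that works for all three target radices (the function name is mine, not from the text):

```python
def decimal_to_base(number: int, radix: int) -> str:
    """Repeated division: each remainder becomes the next most
    significant digit of the result."""
    if number == 0:
        return "0"
    digits = "0123456789ABCDEF"
    result = ""
    while number > 0:
        number, remainder = divmod(number, radix)
        result = digits[remainder] + result   # prepend: more significant
    return result

print(decimal_to_base(1792, 2))    # 11100000000
print(decimal_to_base(1792, 8))    # 3400
print(decimal_to_base(1792, 16))   # 700
print(decimal_to_base(48879, 16))  # BEEF
```

The four printed results match the worked examples in the tables that follow.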
Decimal to Binary
Here is an example of using repeated division to convert 1792 decimal to binary:
Decimal Number   Operation   Quotient   Remainder   Binary Result
1792             ÷ 2 =       896        0           0
896              ÷ 2 =       448        0           00
448              ÷ 2 =       224        0           000
224              ÷ 2 =       112        0           0000
112              ÷ 2 =       56         0           00000
56               ÷ 2 =       28         0           000000
28               ÷ 2 =       14         0           0000000
14               ÷ 2 =       7          0           00000000
7                ÷ 2 =       3          1           100000000
3                ÷ 2 =       1          1           1100000000
1                ÷ 2 =       0          1           11100000000
0                done.

Decimal to Octal
Here is an example of using repeated division to convert 1792 decimal to octal:
Decimal Number   Operation   Quotient   Remainder   Octal Result
1792             ÷ 8 =       224        0           0
224              ÷ 8 =       28         0           00
28               ÷ 8 =       3          4           400
3                ÷ 8 =       0          3           3400
0                done.

Decimal to Hexadecimal
Here is an example of using repeated division to convert 1792 decimal to hexadecimal:
Decimal Number   Operation   Quotient   Remainder   Hexadecimal Result
1792             ÷ 16 =      112        0           0
112              ÷ 16 =      7          0           00
7                ÷ 16 =      0          7           700
0                done.

The only addition to the algorithm when converting from decimal to hexadecimal is that a table must be used to obtain the hexadecimal digit if the remainder is greater than decimal 9.
Decimal:     0   1   2   3   4   5   6   7
Hexadecimal: 0   1   2   3   4   5   6   7

Decimal:     8   9   10  11  12  13  14  15
Hexadecimal: 8   9   A   B   C   D   E   F

The addition of letters can make for funny hexadecimal values. For example, 48879 decimal converted to hex is:
Decimal Number   Operation   Quotient   Remainder   Hexadecimal Result
48879            ÷ 16 =      3054       15          F
3054             ÷ 16 =      190        14          EF
190              ÷ 16 =      11         14          EEF
11               ÷ 16 =      0          11          BEEF
0                done.

Octal Number Conversion
Octal to Binary
Converting from octal to binary is as easy as converting from binary to octal. Simply look up each octal digit to obtain the equivalent group of three binary digits.
Octal:  0    1    2    3    4    5    6    7
Binary: 000  001  010  011  100  101  110  111

Octal  =   3   4   5
Binary = 011 100 101

= 011100101 binary

Octal to Hexadecimal
When converting from octal to hexadecimal, it is often easier to first convert the octal number into binary and then from binary into hexadecimal. For example, to convert 345 octal into hex:
Octal  =   3   4   5
Binary = 011 100 101

= 011100101 binary

Drop any leading zeros, or pad with leading zeros, to get groups of four binary digits (bits):
Binary: 011100101 = 1110 0101
Then, look up the groups in a table to convert to hexadecimal digits.
Binary:       0000  0001  0010  0011  0100  0101  0110  0111
Hexadecimal:     0     1     2     3     4     5     6     7

Binary:       1000  1001  1010  1011  1100  1101  1110  1111
Hexadecimal:     8     9     A     B     C     D     E     F

Binary      = 1110 0101
Hexadecimal =    E    5

= E5 hex

Therefore, through a two-step conversion process, octal 345 equals binary 011100101 equals hexadecimal E5.
Octal to Decimal
Converting octal to decimal can be done with repeated multiplication and addition:
1. Start the decimal result at 0.
2. Remove the most significant (leftmost) octal digit and add it to the result.
3. If all octal digits have been removed, you're done. Stop.
4. Otherwise, multiply the result by 8 and go to step 2.
Octal Digits   Operation   Decimal Result   Operation   Decimal Result
345            + 3         3                × 8         24
45             + 4         28               × 8         224
5              + 5         229              done.

The conversion can also be performed in the conventional mathematical way, by showing each digit place as an increasing power of 8.
345 octal = (3 * 8^{2}) + (4 * 8^{1}) + (5 * 8^{0}) = (3 * 64) + (4 * 8) + (5 * 1) = 229 decimal
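The remove-add-multiply steps above are exactly Horner's method. A Python sketch (function name is mine):

```python
def octal_to_decimal(octal_digits: str) -> int:
    """Walk the digits left to right, multiplying the running result
    by 8 before adding each new digit (Horner's method)."""
    result = 0
    for digit in octal_digits:
        result = result * 8 + int(digit)
    return result

print(octal_to_decimal("345"))  # 229
```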
Hexadecimal Number Conversion
Hexadecimal to Binary
Converting from hexadecimal to binary is as easy as converting from binary to hexadecimal. Simply look up each hexadecimal digit to obtain the equivalent group of four binary digits.
Hexadecimal: 0     1     2     3     4     5     6     7
Binary:      0000  0001  0010  0011  0100  0101  0110  0111

Hexadecimal: 8     9     A     B     C     D     E     F
Binary:      1000  1001  1010  1011  1100  1101  1110  1111

Hexadecimal =    A    2    D    E
Binary      = 1010 0010 1101 1110

= 1010001011011110 binary

Hexadecimal to Octal
When converting from hexadecimal to octal, it is often easier to first convert the hexadecimal number into binary and then from binary into octal. For example, to convert A2DE hex into octal:
(from the previous example)
Hexadecimal =    A    2    D    E
Binary      = 1010 0010 1101 1110

= 1010001011011110 binary

Add leading zeros or remove leading zeros to group into sets of three binary digits.
Binary: 1010001011011110 = 001 010 001 011 011 110
Then, look up each group in a table:
Binary: 000  001  010  011  100  101  110  111
Octal:    0    1    2    3    4    5    6    7

Binary = 001 010 001 011 011 110
Octal  =   1   2   1   3   3   6

= 121336 octal

Therefore, through a two-step conversion process, hexadecimal A2DE equals binary 1010001011011110 equals octal 121336.
Hexadecimal to Decimal
Converting hexadecimal to decimal can be performed in the conventional mathematical way, by showing each digit place as an increasing power of 16. Of course, hexadecimal letter values need to be converted to decimal values before performing the math.
Hexadecimal: 0   1   2   3   4   5   6   7
Decimal:     0   1   2   3   4   5   6   7

Hexadecimal: 8   9   A   B   C   D   E   F
Decimal:     8   9   10  11  12  13  14  15

Convert: 2D_{(16)} -> X_{(10)}
Index the digits of the number; hexadecimal D is decimal 13:
2 (position 1), 13 (position 0)
Multiply each digit by its place value:
2 * 16^{1} + 13 * 16^{0}
32 + 13 = 45_{(10)}
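The same positional sum, with the letter-to-value lookup included, can be sketched in Python (function name is mine):

```python
def hex_to_decimal(hex_digits: str) -> int:
    """Each digit is scaled by an increasing power of 16;
    letters A-F stand for decimal 10-15."""
    values = {digit: value for value, digit in enumerate("0123456789ABCDEF")}
    total = 0
    for position, digit in enumerate(reversed(hex_digits.upper())):
        total += values[digit] * 16 ** position
    return total

print(hex_to_decimal("2D"))  # 45
```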
Calculation in Binary
Binary Addition
The four basic rules for adding binary digits are:
· 0 + 0 = 0
· 0 + 1 = 1
· 1 + 0 = 1
· 1 + 1 = 0, and carry 1 to the next more significant bit
For example,
00011010 + 00001100 = 00100110

          1 1             carries
      0 0 0 1 1 0 1 0     = 26 (base 10)
    + 0 0 0 0 1 1 0 0     = 12 (base 10)
      0 0 1 0 0 1 1 0     = 38 (base 10)

00010011 + 00111110 = 01010001

        1 1 1 1 1         carries
      0 0 0 1 0 0 1 1     = 19 (base 10)
    + 0 0 1 1 1 1 1 0     = 62 (base 10)
      0 1 0 1 0 0 0 1     = 81 (base 10)
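The single-bit rules and carry propagation can be sketched directly in Python (function name is mine):

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary strings using the four single-bit rules,
    propagating a carry from right to left."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry = 0
    result = []
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        total = int(bit_a) + int(bit_b) + carry
        result.append(str(total % 2))   # sum bit
        carry = total // 2              # carry to next bit
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_binary("00011010", "00001100"))  # 00100110  (26 + 12 = 38)
```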

Binary Subtraction
The four basic rules for subtracting binary digits are:
· 0 - 0 = 0
· 0 - 1 = 1, and borrow 1 from the next more significant bit
· 1 - 0 = 1
· 1 - 1 = 0
For example,
00100101 - 00010001 = 00010100

      0 0 1 0 0 1 0 1     = 37 (base 10)
    - 0 0 0 1 0 0 0 1     = 17 (base 10)
      0 0 0 1 0 1 0 0     = 20 (base 10)

(a borrow is needed at bit 4, taken from bit 5)

00110011 - 00010110 = 00011101

      0 0 1 1 0 0 1 1     = 51 (base 10)
    - 0 0 0 1 0 1 1 0     = 22 (base 10)
      0 0 0 1 1 1 0 1     = 29 (base 10)

(borrows are needed at bits 2 and 4)
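Borrow propagation can be sketched the same way as carry propagation (function name is mine; it assumes the minuend is at least as large as the subtrahend):

```python
def subtract_binary(a: str, b: str) -> str:
    """Subtract b from a (assumes a >= b) using the single-bit rules,
    borrowing from the next more significant bit when needed."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    borrow = 0
    result = []
    for bit_a, bit_b in zip(reversed(a), reversed(b)):
        diff = int(bit_a) - int(bit_b) - borrow
        if diff < 0:
            diff += 2      # take 1 from the next more significant bit
            borrow = 1
        else:
            borrow = 0
        result.append(str(diff))
    return "".join(reversed(result))

print(subtract_binary("00100101", "00010001"))  # 00010100  (37 - 17 = 20)
```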

Binary Multiplication
The four basic rules for multiplying binary digits are:
· 0 × 0 = 0
· 0 × 1 = 0
· 1 × 0 = 0
· 1 × 1 = 1, and no carry or borrow bits
For example,
00101001 × 00000110 = 11110110

        0 0 1 0 1 0 0 1     = 41 (base 10)
      × 0 0 0 0 0 1 1 0     = 6 (base 10)
        0 0 0 0 0 0 0 0
      0 0 1 0 1 0 0 1
    0 0 1 0 1 0 0 1
    0 0 1 1 1 1 0 1 1 0     = 246 (base 10)

00010111 × 00000011 = 01000101

        0 0 0 1 0 1 1 1     = 23 (base 10)
      × 0 0 0 0 0 0 1 1     = 3 (base 10)
        1 1 1 1 1           carries
        0 0 0 1 0 1 1 1
      0 0 0 1 0 1 1 1
      0 0 1 0 0 0 1 0 1     = 69 (base 10)
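Long multiplication produces one shifted partial product for each 1-bit of the multiplier, then sums them. A Python sketch (function name is mine):

```python
def multiply_binary(a: str, b: str) -> str:
    """Long multiplication: one shifted partial product per 1-bit
    of the multiplier, then sum the partial products."""
    total = 0
    for shift, bit in enumerate(reversed(b)):
        if bit == "1":
            total += int(a, 2) << shift   # partial product, shifted left
    return bin(total)[2:]

print(multiply_binary("00101001", "00000110"))  # 11110110  (41 × 6 = 246)
```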

Binary Division
Binary division is the repeated process of subtraction, just as in decimal division.
For example, 11 ÷ 10 (3 ÷ 2) gives a quotient of 1 and a remainder of 1:

         1      quotient
    10 ) 1 1
         1 0
         ───
           1    remainder
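Binary long division can be sketched as one quotient bit per dividend bit, subtracting whenever the running remainder is large enough (function name is mine):

```python
def divide_binary(dividend: str, divisor: str) -> tuple:
    """Binary long division by repeated subtraction: one quotient bit
    per dividend bit, most significant first."""
    d = int(divisor, 2)
    remainder = 0
    quotient = ""
    for bit in dividend:
        remainder = (remainder << 1) | int(bit)  # bring down the next bit
        if remainder >= d:
            remainder -= d                        # subtraction succeeds
            quotient += "1"
        else:
            quotient += "0"
    return quotient.lstrip("0") or "0", bin(remainder)[2:]

print(divide_binary("11", "10"))  # ('1', '1')  — 3 ÷ 2 = 1 remainder 1
```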
Complements Methods
In mathematics and computing, the method of complements is a technique used to subtract one number from another using only addition of positive numbers. This method was commonly used in mechanical calculators and is still used in modern computers.
Complements are used in digital computers for simplifying the subtraction operation and for logical manipulation. There are two types of complements for each base-r system: the r's complement and the (r - 1)'s complement. When the value of the base r is substituted in the name, the two types are referred to as the 2's and 1's complement for binary numbers and the 10's and 9's complement for decimal numbers.
One's Complement Method (Binary Subtraction)
Let's consider how we would solve our problem of subtracting 1_{10} from 7_{10} using 1's complement.

    0111     (7)
  - 0001   - (1)

First, take the 1's complement of the subtrahend:

    0001 -> 1110

Then add it to the minuend:

    0111     (7)
  + 1110   +(-1)
   10101     (?)

Add the overflow (end-around carry) bit back into the remaining four bits:

    0101
  +    1
    0110     (6)

Therefore:

    0111     (7)
  - 0001   - (1)
    0110     (6)
Now let's look at an example where our problem does not generate an overflow bit. We will subtract 7_{10} from 1_{10} using 1's complement.

    0001     (1)
  - 0111   - (7)

Take the 1's complement of the subtrahend:

    0111 -> 1000

Then add it to the minuend:

    0001     (1)
  + 1000   +(-7)
    1001     (?)

This time there is no overflow bit, so the result is negative and is itself in 1's-complement form: complementing 1001 gives 0110, so the answer is -6.
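Both cases, with and without the end-around carry, can be sketched in Python (the function name and the fixed 4-bit width are my choices for illustration):

```python
def ones_complement_subtract(a: int, b: int, width: int = 4) -> int:
    """Subtract b from a by adding the 1's complement of b.
    An end-around carry is added back; no carry means the result
    is negative and stored in 1's-complement form."""
    mask = (1 << width) - 1
    total = a + (~b & mask)        # a + 1's complement of b
    carry = total >> width
    total &= mask
    if carry:
        return total + carry       # end-around carry: positive result
    return -(~total & mask)        # no carry: re-complement, negate

print(ones_complement_subtract(7, 1))  # 6
print(ones_complement_subtract(1, 7))  # -6
```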

Two's Complement Method (Binary Subtraction)
Now let's consider how we would solve our problem of subtracting 1_{10} from 7_{10} using 2's complement.

    0111     (7)
  - 0001   - (1)

Take the 2's complement of the subtrahend (its 1's complement plus one):

    0001 -> 1110
          +    1
            1111

Then add it to the minuend:

    0111     (7)
  + 1111   +(-1)
   10110     (?)

Discard the overflow bit; the remaining four bits are the answer:

    0111     (7)
  - 0001   - (1)
    0110     (6)
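The 2's-complement version is what real hardware does: complement, add one, add, and simply drop the carry. A Python sketch (function name and 4-bit width are mine):

```python
def twos_complement_subtract(a: int, b: int, width: int = 4) -> int:
    """Subtract b from a by adding the 2's complement of b
    (1's complement plus one) and discarding the carry bit."""
    mask = (1 << width) - 1
    twos = ((~b & mask) + 1) & mask   # 2's complement of b
    return (a + twos) & mask          # discard the overflow carry

print(twos_complement_subtract(7, 1))  # 6
```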

Nine's Complement Method (Decimal Subtraction)
The 9's complement of a decimal number is found by subtracting each digit in the number from 9.
In 9's complement subtraction, when the 9's complement of the smaller number is added to the larger number, a carry is generated; this carry must be added back to the result. When the larger number is subtracted from the smaller one, there is no carry, and the result is negative and in 9's-complement form.
Subtract (normal):
   8
 - 2
=====
First step: take the 9's complement of the lower number: 9 - 2 = 7
Second step: add the upper number and the complemented number: 8 + 7 = 15 (here, the leading 1 is the overflow digit, so the answer is positive)
Third step: carry out the overflow digit and add it to the remaining number: 5 + 1 = 6
Subtract (negative value):
   4
 - 8
=====
First step: take the 9's complement of the lower number: 9 - 8 = 1
Second step: add the upper number and the complemented number: 4 + 1 = 5 (no overflow digit, which means the answer is negative)
Third step: complement the result: 9 - 5 = 4, so the answer is -4.
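Both 9's-complement cases can be sketched in Python (the function name and single-digit default width are my choices):

```python
def nines_complement_subtract(a: int, b: int, width: int = 1) -> int:
    """Subtract b from a via 9's-complement addition. A carry out
    is added back to the result; no carry means the answer is
    negative and must be re-complemented."""
    nines = 10 ** width - 1            # e.g. 9 for one digit, 99 for two
    total = a + (nines - b)            # add the 9's complement of b
    carry, total = divmod(total, 10 ** width)
    if carry:
        return total + carry           # end-around carry: positive
    return -(nines - total)            # no carry: re-complement, negate

print(nines_complement_subtract(8, 2))  # 6
print(nines_complement_subtract(4, 8))  # -4
```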
Ten's Complement Method (Decimal Subtraction)
The 10's complement of a decimal number is equal to the 9's complement plus 1. The 10's complement can be used to perform subtraction by adding the minuend to the 10's complement of the subtrahend and dropping the carry.
Subtract (normal):
   8
 - 2
=====
First step: take the 10's complement of the lower number: 10 - 2 = 8
Second step: add the upper number and the complemented number: 8 + 8 = 16 (here, the leading 1 is the overflow digit, so the answer is positive)
Third step: ignore the overflow digit; the remaining number is the answer: 6
Subtract (negative value):
   4
 - 8
=====
First step: take the 10's complement of the lower number: 10 - 8 = 2
Second step: add the upper number and the complemented number: 4 + 2 = 6 (no overflow digit, which means the answer is negative)
Third step: complement the result: 10 - 6 = 4, so the answer is -4.
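The 10's-complement steps can be sketched the same way; note that here the carry is simply dropped rather than added back (function name is mine):

```python
def tens_complement_subtract(a: int, b: int, width: int = 1) -> int:
    """Subtract b from a via 10's-complement addition, dropping any
    carry; no carry means the answer is negative."""
    base = 10 ** width
    total = a + (base - b)             # add the 10's complement of b
    carry, total = divmod(total, base)
    if carry:
        return total                   # drop the carry: positive result
    return -(base - total)             # no carry: re-complement, negate

print(tens_complement_subtract(8, 2))  # 6
print(tens_complement_subtract(4, 8))  # -4
```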
Codes: Absolute Binary, BCD, ASCII, EBCDIC, Unicode
A code is a rule for converting a piece of information (for example, a letter, word, phrase, or gesture) into another form or representation (one sign into another sign), not necessarily of the same type. Probably the most widely known data communications code in use today is ASCII. In one or another (somewhat compatible) version, it is used by nearly all personal computers, terminals, printers, and other communication equipment. It represents 128 characters with seven-bit binary numbers, that is, as strings of seven 1s and 0s. In ASCII a lowercase "a" is always 1100001, an uppercase "A" always 1000001, and so on. There are many other encodings that represent each character by a byte (usually referred to as code pages); the sections below cover binary code, ASCII, BCD, EBCDIC, and Unicode.
Binary Code
A binary code is a way of representing text or computer processor instructions using the binary number system's two binary digits, 0 and 1. This is accomplished by assigning a bit string to each particular symbol or instruction. For example, a binary string of eight binary digits (bits) can represent any of 256 possible values and can therefore correspond to a variety of different symbols, letters, or instructions.
In computing and telecommunication, binary codes are used for any of a variety of methods of encoding data, such as character strings, into bit strings. Those methods may be fixed-width or variable-width. In a fixed-width binary code, each letter, digit, or other character is represented by a bit string of the same length; that bit string, interpreted as a binary number, is usually displayed in code tables in octal, decimal, or hexadecimal notation. There are many character sets and many character encodings for them.
A bit string, interpreted as a binary number, can be translated into a decimal number. For example, the lowercase "a" as represented by the bit string 01100001, can also be represented as the decimal number 97.
For Example:
Hi = 1001000 1101001 (ASCII codes H=72 and i=105 decimal)
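The "Hi" example above can be reproduced in Python (function name is mine):

```python
# Render text as 7-bit ASCII codes, one space-separated group per character.
def to_ascii_bits(text: str) -> str:
    return " ".join(format(ord(ch), "07b") for ch in text)

print(to_ascii_bits("Hi"))  # 1001000 1101001
```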
ASCII Code
The American Standard Code for Information Interchange (ASCII) is a character-encoding scheme originally based on the English alphabet. ASCII codes represent text in computers, communications equipment, and other devices that use text. Most modern character-encoding schemes are based on ASCII, though they support many more characters than ASCII does. The standard ASCII character set uses just 7 bits for each character. There are several larger character sets that use 8 bits, which gives them 128 additional characters. The extra characters are used to represent non-English characters, graphics symbols, and mathematical symbols. Several companies and organizations have proposed extensions for these 128 characters. The DOS operating system uses a superset of ASCII called extended ASCII or high ASCII.
CHAR    DEC     CHAR   DEC     CHAR   DEC     CHAR    DEC
[NUL]     0     [SP]    32     @       64     `        96
[SOH]     1     !       33     A       65     a        97
[STX]     2     "       34     B       66     b        98
[ETX]     3     #       35     C       67     c        99
[EOT]     4     $       36     D       68     d       100
[ENQ]     5     %       37     E       69     e       101
[ACK]     6     &       38     F       70     f       102
[BEL]     7     '       39     G       71     g       103
[BS]      8     (       40     H       72     h       104
[HT]      9     )       41     I       73     i       105
[LF]     10     *       42     J       74     j       106
[VT]     11     +       43     K       75     k       107
[FF]     12     ,       44     L       76     l       108
[CR]     13     -       45     M       77     m       109
[SO]     14     .       46     N       78     n       110
[SI]     15     /       47     O       79     o       111
[DLE]    16     0       48     P       80     p       112
[DC1]    17     1       49     Q       81     q       113
[DC2]    18     2       50     R       82     r       114
[DC3]    19     3       51     S       83     s       115
[DC4]    20     4       52     T       84     t       116
[NAK]    21     5       53     U       85     u       117
[SYN]    22     6       54     V       86     v       118
[ETB]    23     7       55     W       87     w       119
[CAN]    24     8       56     X       88     x       120
[EM]     25     9       57     Y       89     y       121
[SUB]    26     :       58     Z       90     z       122
[ESC]    27     ;       59     [       91     {       123
[FS]     28     <       60     \       92     |       124
[GS]     29     =       61     ]       93     }       125
[RS]     30     >       62     ^       94     ~       126
[US]     31     ?       63     _       95     [DEL]   127

Binary-coded decimal
Binary-coded decimal (BCD) is a digital encoding method for numbers using decimal notation, with each decimal digit represented by its own binary sequence. In BCD, a numeral is usually represented by four bits which, in general, represent the decimal range 0 through 9. Other bit patterns are sometimes used for a sign or for other indications (e.g., error or overflow). Uncompressed (or zoned) BCD consumes a byte for each represented numeral, whereas compressed (or packed) BCD typically carries two numerals in a single byte by taking advantage of the fact that four bits can represent the full numeral range.
BCD's main virtue is ease of conversion between machine and humanreadable formats, as well as a more precise machineformat representation of decimal quantities. As compared to typical binary formats, BCD's principal drawbacks are a small increase in the complexity of the circuits needed to implement basic mathematical operations and less efficient usage of storage facilities.
BCD takes advantage of the fact that any one decimal numeral can be represented by a four bit pattern:
Decimal: 0 1 2 3 4 5 6 7 8 9
Binary : 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001
It is possible to perform addition in BCD by first adding in binary and then converting the result back to BCD afterwards:
1001 + 1000 = 10001 (binary) = 17 (decimal) = 0001 0111 (BCD)
   9 +    8 =    17
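The digit-by-digit encoding can be sketched in Python (function name is mine):

```python
def to_packed_bcd(number: int) -> str:
    """Encode each decimal digit of a number as its own 4-bit group
    (packed BCD: two digits fit in one byte)."""
    return " ".join(format(int(digit), "04b") for digit in str(number))

print(to_packed_bcd(17))     # 0001 0111
print(to_packed_bcd(9 + 8))  # same result: add in binary, re-encode as BCD
```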
EBCDIC Code
EBCDIC (Extended Binary Coded Decimal Interchange Code) is a binary code for alphabetic and numeric characters that IBM developed for its larger operating systems. It is the code for text files used in IBM's OS/390 operating system for its S/390 servers, and thousands of corporations use it for their legacy applications and databases. In an EBCDIC file, each alphabetic or numeric character is represented with an 8-bit binary number (a string of eight 0s and 1s). 256 possible characters (letters of the alphabet, numerals, and special characters) are defined.
Dec    Hex    Char
129    81     a
130    82     b
131    83     c
132    84     d
133    85     e
134    86     f
135    87     g
136    88     h
137    89     i
 :      :
145    91     j
146    92     k
147    93     l
148    94     m
149    95     n
150    96     o
151    97     p
152    98     q
153    99     r
 :      :
162    A2     s
163    A3     t
164    A4     u
165    A5     v
166    A6     w
167    A7     x
168    A8     y
169    A9     z

Dec    Hex    Char
193    C1     A
194    C2     B
195    C3     C
196    C4     D
197    C5     E
198    C6     F
199    C7     G
200    C8     H
201    C9     I
 :      :
209    D1     J
210    D2     K
211    D3     L
212    D4     M
213    D5     N
214    D6     O
215    D7     P
216    D8     Q
217    D9     R
 :      :
226    E2     S
227    E3     T
228    E4     U
229    E5     V
230    E6     W
231    E7     X
232    E8     Y
233    E9     Z
240    F0     0
241    F1     1
242    F2     2
243    F3     3
244    F4     4
245    F5     5
246    F6     6
247    F7     7
248    F8     8
249    F9     9

Unicode
Fundamentally, computers just deal with numbers. They store letters and other characters by assigning a number for each one. Before Unicode was invented, there were hundreds of different encoding systems for assigning these numbers. No single encoding could contain enough characters: for example, the European Union alone requires several different encodings to cover all its languages. Even for a single language like English no single encoding was adequate for all the letters, punctuation, and technical symbols in common use.
These encoding systems also conflict with one another. That is, two encodings can use the same number for two different characters, or use different numbers for the same character. Any given computer (especially servers) needs to support many different encodings; yet whenever data is passed between different encodings or platforms, that data always runs the risk of corruption.