Introduction to Computing

In this introductory chapter, you’ll answer the following questions:

  • What is a computer?
  • How is data converted to bits?
  • What’s inside of a computer?
  • What is programming?
  • What tools do I need?

Answering these questions will help you better understand what you’re going to learn in this course.

What is a computer?

A computer is a programmable machine. This means you can give it instructions ahead of time. The computer then executes these instructions on its own.

Each set of instructions is known as a program. You can download existing programs for your computer, but you can also create your own. This is what makes computers such versatile machines.

Computers are also digital machines. A digital machine stores and processes data — such as text, images, and sound — as numbers. Because of this, digital machines are much cheaper to build than analog machines, which store and process data in its original form.

Unlike you and me, computers don’t use decimal numbers. To reduce cost and complexity, computers use only two digits: 0 and 1. The resulting numbers are binary numbers and their digits are known as bits, a contraction of binary digits.

How is data converted to bits?

Because computers use binary numbers exclusively, all data must be converted to bits before it can be stored and processed by a computer. This section explains how decimal numbers, text, images, and sound are converted to bits.

Decimal numbers

Converting decimal numbers to binary is straightforward because both number systems are positional. For example, a decimal number is a series of decimal digits where each digit is multiplied by a power of 10 that increases from right to left:

475 = 4×10² + 7×10¹ + 5×10⁰

Similarly, a binary number is a series of bits where each bit is multiplied by a power of 2:

10011₂ = 1×2⁴ + 0×2³ + 0×2² + 1×2¹ + 1×2⁰ = 19

The subscript 2 distinguishes binary numbers from decimal numbers and isn’t part of the number.

To convert a decimal number to binary, you split the number into powers of 2. The corresponding binary number has a 1 bit for the powers included in the decimal number, and a 0 bit for all others.

Here’s an example that converts 19 to binary:

19 = 16 + 2 + 1 = 2⁴ + 2¹ + 2⁰ → 10011₂
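
You can check conversions like this in Swift, whose standard library can format an integer in any radix:

```swift
// Format an integer in base 2 using the standard library.
let decimal = 19
let binary = String(decimal, radix: 2)
print(binary)    // prints 10011

// And parse a binary string back into an Int.
let roundTrip = Int("10011", radix: 2)!
print(roundTrip)    // prints 19
```
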
Range and overflow

Computers allocate a fixed amount of memory for each number. This limits the range of numbers you can represent. For example, using eight bits, you can only represent 256 (2⁸) different numbers.

The following table shows some common bit sizes and the range of numbers they can represent:

Bits    Unsigned range
8       0 to 255
16      0 to 65,535
32      0 to 4,294,967,295
64      0 to 18,446,744,073,709,551,615

This table shows the range of unsigned numbers, which allocate the entire range to positive numbers.

The next table shows the range of signed numbers, which allocate half of the range to negative numbers:

Bits    Signed range
8       -128 to 127
16      -32,768 to 32,767
32      -2,147,483,648 to 2,147,483,647
64      -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807

In general, the range of numbers you can represent using n bits is 0 to 2ⁿ − 1 for unsigned numbers, and −2ⁿ⁻¹ to 2ⁿ⁻¹ − 1 for signed numbers.
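
Swift’s fixed-size integer types expose these bounds directly, so you can check the tables for yourself:

```swift
// Every fixed-size integer type publishes its own minimum and maximum.
print(UInt8.min, UInt8.max)    // 0 255
print(UInt16.max)              // 65535
print(Int8.min, Int8.max)      // -128 127
print(Int64.max)               // 9223372036854775807
```
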

Keep this range in mind when programming. An operation involving two numbers of the same size may result in a number that falls outside of the representable range for that size. This problem is known as overflow.

For example, adding the unsigned 8-bit numbers 160 and 140 results in a number that requires nine bits. Some programming languages discard this extra bit, leaving you with an incorrect result; however, Swift detects overflow and reports it as an error in your program.
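
Here’s what that looks like in Swift. The ordinary + operator reports overflow as an error, while the overflow operator &+ wraps around, silently discarding the extra bit:

```swift
let a: UInt8 = 160
let b: UInt8 = 140

// &+ wraps around: 300 needs nine bits, so the ninth bit is
// discarded, leaving 300 - 256 = 44.
let wrapped = a &+ b
print(wrapped)    // 44

// addingReportingOverflow returns the wrapped value together
// with a flag that tells you overflow occurred.
let (value, didOverflow) = a.addingReportingOverflow(b)
print(value, didOverflow)    // 44 true

// let sum = a + b    // this line would be reported as an overflow error
```
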

Floating-point numbers

Computers do not store fractional numbers as two whole numbers separated by a decimal point. Doing so would be wasteful as some numbers may not need digits before the decimal point, whereas others may not need digits after the decimal point. Instead, computers rely on scientific notation in which numbers are written as ±a×2ᵇ.

The benefit of this notation is that the decimal point can be in any position. The resulting binary numbers are therefore called floating-point numbers. Their bit pattern stores the following information:

  • A sign bit.
  • A significand a. This unsigned whole number holds only significant digits, meaning it ignores leading and trailing zeros. Its size determines the accuracy of the floating-point number.
  • An exponent b. This signed whole number determines the position of the significant digits relative to the decimal point. Its size determines the range of the floating-point number.

The IEEE 754 standard defines various floating-point formats. Two notable formats are single-precision, which uses 32 bits and has an accuracy of about seven significant decimal digits, and double-precision, which uses 64 bits and has an accuracy of about sixteen significant decimal digits.

This limited accuracy means many fractional numbers can’t be exactly represented as floating-point numbers. The result is a rounding error that grows with every operation you perform.

Even the exponent can be a limiting factor. If a number requires an exponent that’s outside of the representable range, this results in either overflow (if the exponent is too big) or underflow (if the exponent is too small).

Keep these issues in mind when using floating-point numbers, and don’t depend on them for scientific or financial calculations that require exact results.
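
You can see the rounding error with a one-line experiment in Swift:

```swift
// 0.1, 0.2, and 0.3 have no exact binary representation, so the
// sum picks up a tiny rounding error.
let sum = 0.1 + 0.2
print(sum)           // 0.30000000000000004
print(sum == 0.3)    // false

// Compare floating-point numbers using a tolerance instead.
print(abs(sum - 0.3) < 1e-9)    // true
```
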


Text

To convert text to binary, you map each character to a number and store the resulting numbers.

A mapping of characters to numbers is known as a character set. Swift uses the Unicode character set, which covers most of the world’s languages. It even includes historical languages, mathematical symbols, and emojis.

Because Unicode is large and complex, it’s not a straightforward mapping of characters to numbers. Instead, it has several encodings that offer a different balance of efficiency and performance. By far, the most common encoding is UTF-8, which uses only eight bits for common characters, and up to 32 bits for less common ones.
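
Swift lets you inspect a string’s UTF-8 bytes directly, which makes the variable width of the encoding easy to see:

```swift
// Common characters take one byte; less common ones take more.
print(Array("A".utf8))      // [65]
print("é".utf8.count)       // 2
print("€".utf8.count)       // 3
print("🙂".utf8.count)      // 4
```
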


Images

Digital images consist of tiny elements known as pixels, a contraction of picture elements. The number of pixels in an image is known as the resolution of that image. The higher the resolution, the clearer the image.

Each pixel has a single color. The number of bits per pixel determines the number of colors an image can have. This is known as the color depth of the image:

  • You can use one bit per pixel to create a black-and-white image.
  • You can use n bits per pixel to create an image with 2ⁿ shades of gray. Alternatively, you can create a color map of 2ⁿ colors and use n bits to select one of these predefined colors.
  • You can use n bits for each of the primary colors red, green, and blue. This creates an image with 3n bits per pixel and 2³ⁿ different colors. You can add an additional n bits per pixel to support 2ⁿ levels of transparency.
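
The arithmetic behind the last option is easy to check. A quick sketch, assuming the common choice of 8 bits per channel:

```swift
let bitsPerChannel = 8                  // n = 8
let bitsPerPixel = 3 * bitsPerChannel   // red + green + blue = 24 bits
let colors = 1 << bitsPerPixel          // 2^24
print(colors)    // 16777216

// An extra 8-bit alpha channel adds 2^8 levels of transparency.
print(1 << bitsPerChannel)    // 256
```
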


Sound

Digital sound consists of a series of samples, where each sample is a measurement of the sound wave at a specific point in time.

The quality of the sound is determined by two factors:

  • The number of samples per second, also known as the sample rate. A high sample rate is required to capture the shape of the sound accurately.
  • The number of bits per sample, also known as the bit depth. This determines the accuracy of the samples.

A typical audio CD, for example, has a sample rate of 44,100 samples per second and a bit depth of 16 bits.
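
Together, these two numbers determine how much data digital audio occupies. A quick calculation, assuming two channels (stereo, as on a CD):

```swift
let sampleRate = 44_100    // samples per second
let bitDepth = 16          // bits per sample
let channels = 2           // stereo (an assumption about the format)

let bytesPerSecond = sampleRate * bitDepth * channels / 8
print(bytesPerSecond)    // 176400

// Roughly 10 MB per minute of audio.
print(bytesPerSecond * 60)    // 10584000
```
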

What’s inside of a computer?

The physical components that make up a computer are known as its hardware. You don’t have to be a hardware expert to be a good programmer, but it helps to have a general understanding of what’s inside of a computer.

The central processing unit (CPU) is the component that does the actual computing. A CPU can perform a limited set of basic operations known as instructions. The programs you write tell the CPU which instructions to perform and on which data to perform each instruction.

The CPU loads instructions and data from memory, typically random access memory (RAM). Memory components are optimized to store the programs and data the CPU is currently working on. They aren’t suitable for long-term storage because they require a continuous power supply and lose their contents when the computer shuts down.

For long-term storage, a computer uses components such as solid-state drives (SSD), magnetic hard drives (HD), or optical disc drives (CD or DVD). These components don’t require a continuous power supply and keep their contents when the computer shuts down. Compared to memory components, storage components offer more capacity at a lower price. However, they can’t equal the performance of memory components, which is why computers use a combination of both.

You can categorize most other components as peripherals. Peripherals provide a way to input data to the computer or output data from the computer. Users can use a keyboard, mouse, or touch device to control the computer, which uses its screen and speakers to communicate back to the user. Other peripherals let the computer send and receive data over a network, communicate with other devices, print documents, and so on.

Even with all of this fancy hardware, a computer is useless without programs that tell it what to do. These programs are called software.

A central piece of software is the operating system. This program manages the computer’s resources and allocates them to other programs. An operating system shields programs from the underlying hardware and makes it possible to run multiple programs simultaneously. Operating systems you may be familiar with are macOS, Windows, Linux, iOS, and Android.

The programs that users interact with are known as applications. These programs are managed by the operating system and perform tasks such as creating and managing documents, playing music, and browsing the internet. As a programmer, you’ll most likely develop applications, not operating systems.

What is programming?

Programming is the task of designing and writing programs. Programmers write code — written instructions for the computer.

Each CPU has a set of instructions it can perform. At the lowest level, your code consists of instructions for the CPU to execute and the data these instructions operate on, all specified as binary numbers. This is called machine code.

As you can imagine, writing machine code is hard and tedious work. That’s why programmers rely on higher-level programming languages like Swift. These languages use English words instead of binary numbers and provide functionality at a much higher level than the CPU’s instruction set. Code written in a programming language is easier to read, write, understand, and maintain than machine code.

When you program in a higher-level language, you rely on tools to translate your code into machine code, ready for the CPU to execute.

What tools do I need?

You use an editor to create and edit the source files that contain your code. Any text editor can edit code, but programmers prefer editors purpose-built for programming. These provide additional features, such as colored highlights and auto-completion, that make your job a lot easier.

You use a compiler to translate your code into machine code. The compiler outputs an executable file known as a binary, which you use to run your program.

Not all programming languages use a compiler; some use an interpreter. An interpreter skips the compilation process and runs your code directly, translating it to machine code on-the-fly. Interpreted languages generally offer less performance than compiled languages, but they can be easier to learn.


Swift is a compiled language. However, it also comes with an interpreter known as a read-evaluate-print loop (REPL). Unfortunately, this REPL is quite unstable, which is why you won’t use it in this course.

Sooner or later, you’ll create your first bug, an error in your program. When that happens, you’ll use a debugger to figure out what went wrong. A debugger is an invaluable tool that steps through your code one instruction at a time and lets you inspect what’s going on while your program runs.

Finally, you may prefer to use an integrated development environment (IDE), which includes all of the tools you’ll need, such as an editor, a compiler and/or interpreter, and a debugger.

Up next

In the next chapter, you’ll learn about the tools you’ll use to program in Swift and write your first program.