Algorithms: What’s the problem?

A good programmer describes algorithms in a form that can be efficiently executed by machines and easily understood by humans.

Algorithms and programs are pretty much the same thing – the main difference is that we can talk about algorithms without relying on a programming language. An algorithm invented today will be just as useful when the last Java programmer has logged out. Here is an attempt at a formal definition.

An algorithm is a stepwise procedure of well-defined executable instructions intended to perform a task or solve a problem, often with the added requirement that the procedure must come to a stop.

A programmer’s job is to design and implement algorithms that are correct, clear and efficient.


Algorithms are commonly used to solve certain types of computational problems. We can often describe such a problem by specifying a relationship between input and output.

The sorting problem, for example, can be described like this:

Input: a sequence of n numbers a1, a2, …, an.
Output: a permutation of the input sequence such that the numbers appear in nondecreasing order.

As an example: given the input sequence 12, 24, 13, 12, 8 a sorting algorithm must produce the output sequence 8, 12, 12, 13, 24. Such a specific example of input for a problem is called an instance of the problem.

We say that an algorithm is correct if the algorithmic procedure stops for every problem instance and returns the required output. A correct algorithm solves the problem.


When talking and writing about algorithms, it’s convenient to use pseudocode instead of executable code.

Pseudocode is a compact and informal high-level description of an algorithm intended for humans rather than machines.

In pseudocode you can omit details that are easy to figure out for a human, such as variable declarations and system-specific function calls. You can also use natural language and mathematical notation.

For obvious reasons, there is no standard for writing pseudocode, but here is a typical example. First take a look at the code written in Go.

// Sum returns the sum 1 + 2 + ... + n, where n ≥ 1.
func Sum(n int) int {
    var sum int
    for i := 1; i <= n; i++ {
        sum += i
    }
    return sum
}

The same algorithm described in pseudocode might look like this.

// Returns the sum 1 + 2 + ... + n, where n ≥ 1.
Algorithm sum(n)
    sum ← 0
    for i ← 1 to n
        sum ← sum + i
    return sum
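Different procedures can compute the same result. As an aside, the sum above also has the well-known closed form n(n+1)/2, which needs no loop at all; a minimal Go sketch:

```go
package main

import "fmt"

// SumFormula returns the sum 1 + 2 + ... + n, where n ≥ 1,
// in constant time using the closed form n(n+1)/2.
func SumFormula(n int) int {
	return n * (n + 1) / 2
}

func main() {
	fmt.Println(SumFormula(100)) // 5050
}
```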


Different algorithms that solve the same problem can have large performance differences. Here is a typical situation: to sort n items, Insertion sort takes time roughly proportional to c1·n², while Quicksort takes roughly c2·n·lg n, where c1 and c2 are constants that depend on the implementation and the machine.

From this we can see that Quicksort is much faster than Insertion sort when n is large. This holds true even if we use an incompetent implementation of Quicksort running on a slow computer.


The differences become painfully obvious when we plug in some actual values in the formulas.

  1. Let's run Insertion sort on a fast computer that can execute 10⁹ instructions per second. The algorithm is implemented by a skillful programmer and the constant c1 = 5.
  2. Quicksort will run on a much slower computer that can only execute 10⁷ instructions per second. Also, the implementation is sloppy: c2 = 100.

For small instances of the sorting problem, the faster computer wins.

For larger instances, the picture is very different.

For even larger problem instances, the differences are even bigger and the choice of algorithm becomes even more critical.

Further reading

See Time complexity: Count your steps for more on estimating algorithm performance.

See Top 3 Quicksort optimizations for a fast Quicksort implementation that uses Insertion sort to improve performance.
