
Computational complexity

Why were computational complexities introduced?

Imagine two programmers: Bob and Josh. They worked in parallel to solve the same problem. Both succeeded, although they approached it in completely different ways: each used his own algorithm. Before their boss decides which solution should be included in the program, their speed and resource usage need to be analyzed.

Bob ran his code on his laptop and measured the runtime: the algorithm found a solution in 1 second. Josh did the same, but on his own machine. His algorithm did the job in 2 seconds.

Based on this, can we conclude that Bob's algorithm is faster? At first glance, yes, but we shouldn't, because the programs ran in different environments. It would be natural to require the algorithms to run on the same computer, but such a rule is impractical. Therefore, it was decided that algorithms should be compared not by their immediate execution time, but by the number of operations they perform and by how much more time and memory they need as the input data grows (e.g. a larger and larger list of data).

If doubling the program's input data made Bob's code run four times longer (4 seconds) while Josh's algorithm only took twice as long (2 seconds), it turns out that Josh presented the better algorithm.

It is similar with memory. An algorithm whose memory consumption quadruples when the input list doubles is worse than one whose memory usage does not grow at all for the same increase in data.

  • A measure for comparing the effectiveness of algorithms.
  • It can be measured in time (how long the algorithm takes) or in memory (how much memory space the algorithm uses while performing its calculations).

Computational complexity definition

Overall, summing up the considerations from the previous paragraph:

"Computational complexity" is a measure for comparing the effectiveness of algorithms. Its types are presented below:

  • time – the amount of time needed to execute the algorithm,
  • memory – the amount of memory needed to execute the algorithm.

Each of these can be estimated for different arrangements of the input data:

  • pessimistic (written with big O notation: O()) – how many resources are needed to execute the algorithm on unluckily arranged data,
  • expected (written with big Theta notation: Θ()) – the amount of resources needed to execute the algorithm on a typical arrangement of the data,
  • optimistic (written with big Omega notation: Ω()) – the amount of resources needed to execute the algorithm assuming luckily arranged data.

Depending on how the memory consumption and execution time of an algorithm grow with the size of the input, the following complexities are defined:

  • constant,
  • logarithmic,
  • linear,
  • linear-logarithmic,
  • polynomial,
  • exponential,
  • and factorial (permutational).

Each of the complexities listed above is described below, together with examples of operations.

Assumptions:

The input data is a list containing n elements.

c stands for an arbitrarily chosen numerical constant.

Constant complexity

Symbol: O(1)

The best case: time and memory are independent of the amount of input data.

Examples: returning the first element of an array, adding two numbers together, creating a new variable.
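
As a minimal sketch in Python (the function names are illustrative only), each of these operations takes the same amount of time regardless of how long the list is:

```python
def first_element(items):
    # Accessing an element by index does not depend on the list length: O(1).
    return items[0]

def add(a, b):
    # A single arithmetic operation is also O(1).
    return a + b

print(first_element([7, 3, 5]))  # 7
print(add(2, 3))                 # 5
```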

Logarithmic complexity

Symbol: O(log n)

A very desirable complexity: time or memory grows logarithmically with the size of the input data.

Examples: binary search (discussed in more detail in the next section).
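
Below is a minimal iterative sketch of binary search in Python (names are illustrative only). Each step halves the remaining search range, so a sorted list of n elements needs about log2(n) comparisons:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2           # halve the search range each step
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```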

Linear complexity

Symbol: O(n)

Time and/or memory increases in direct proportion to the size of the input data.

Examples: a single loop iterating over the items of a list.

Computing the sum of consecutive numbers in a list is an operation with linear complexity, because to do it you need to visit each item of the list. However, if the numbers form an arithmetic sequence (they have a constant difference), it is enough to use the formula for the sum of an arithmetic sequence and the complexity becomes constant, which significantly speeds up the program.
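
A sketch contrasting the two approaches (illustrative names; the O(1) variant assumes the list holds terms with a constant difference):

```python
def sum_linear(numbers):
    # O(n): every element is visited exactly once.
    total = 0
    for x in numbers:
        total += x
    return total

def sum_arithmetic(first, last, count):
    # O(1): closed-form formula for the sum of an arithmetic sequence.
    return (first + last) * count // 2

numbers = list(range(1, 101))        # 1, 2, ..., 100
print(sum_linear(numbers))           # 5050
print(sum_arithmetic(1, 100, 100))   # 5050
```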

Linear logarithmic complexity

Symbol: O(n * log n)

A combination of the previous two complexities (noticeably less attractive than either of them alone).

Examples: the quick sort algorithm.
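
A simple (not in-place) Python sketch of quick sort, just to illustrate the divide-and-conquer pattern behind the O(n * log n) average case:

```python
def quick_sort(items):
    # Average case O(n * log n); an unlucky pivot choice degrades to O(n^2).
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    larger  = [x for x in items if x > pivot]
    return quick_sort(smaller) + equal + quick_sort(larger)

print(quick_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```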

Polynomial complexity

Symbol: O(n^c)

With polynomial complexity, time/memory grows in proportion to the square, cube, etc. of the data size (c stands for a constant).

Examples: bubble sort; nested loops.
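
A short bubble sort sketch in Python; the two nested loops over the list are what gives the O(n^2) number of comparisons:

```python
def bubble_sort(items):
    # Nested loops: roughly n * n comparisons in the worst case.
    items = list(items)                  # work on a copy
    n = len(items)
    for i in range(n):
        for j in range(n - i - 1):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```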

Exponential complexity

Symbol: O(c^n)

The relationship between the data size and the time and memory used is exponential - from this complexity upwards, algorithms are considered bad and unwanted.

Examples: the recursive Fibonacci sequence - one function call recursively makes two more; after descending four levels, on the order of 2^4 calls to the Fibonacci function will have been made.
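
A naive recursive sketch in Python; the doubling of calls at every level is what makes it exponential:

```python
def fib(n):
    # Each call spawns two more, so the total number of calls
    # grows roughly like 2^n.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55 - already dozens of calls; fib(40) takes noticeably longer
```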

The complexity of the factorial

Symbol: O(n!)

The worst case: even a slight increase in the size of the data may mean that humanity will not live to see the algorithm finish (this is not a joke).

Examples: problems that require checking permutations (e.g. the brute-force traveling salesman problem) - in general, all problems where every possibility must be tried one after another and there is no way around it.
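
A brute-force sketch of the traveling salesman problem in Python (the distance matrix and function name are made up for this example). Trying every ordering of the remaining cities means (n-1)! tours to check:

```python
from itertools import permutations

def shortest_tour(distances):
    # distances[i][j] is the distance from city i to city j.
    # Checking all (n-1)! orderings gives factorial complexity.
    cities = range(1, len(distances))        # fix city 0 as the start
    best_length, best_tour = float("inf"), None
    for order in permutations(cities):
        tour = (0,) + order + (0,)
        length = sum(distances[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_length:
            best_length, best_tour = length, tour
    return best_length, best_tour

distances = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
print(shortest_tour(distances))  # (23, (0, 1, 3, 2, 0))
```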

Comparison chart

Here is how the listed complexities relate to each other on a graph showing the number of operations (and thus the execution time) as a function of the input size:

Chart

Comparison table

The table below shows how extremely important it is to minimize complexity, and thus to approach creating algorithms more cleverly. The results for the most complex algorithms are particularly striking: for exponential and factorial complexity, the execution time for ever-growing input data becomes enormous! Suffice it to say that the universe is approximately 13.8*10^9 years old.

Table