Comparing Algorithms - Big O

From TRCCompSci - AQA Computer Science
Revision as of 08:06, 18 May 2017 by JWright

Big O notation is a measure of how much time or memory is needed to execute an algorithm. It describes the worst case scenario, so that you get the maximum time or memory usage. It uses n as the number of items being processed.

Time complexities:

Constant complexity - O(1)

An algorithm with constant complexity takes the same amount of time regardless of how many items you process: a items take the same amount of time to compute as b items.
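A minimal sketch of a constant-time operation in Python (the function name is my own for illustration): indexing into a list takes the same time no matter how long the list is.

```python
def get_item(items, i):
    # List indexing is O(1): the lookup cost does not depend
    # on how many items the list contains.
    return items[i]

print(get_item([5, 10, 15], 1))  # 10
```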

Linear complexity - O(n)

An algorithm with linear complexity takes time proportional to the number of items it is given: the larger the n value, the longer it takes, with the time being n times larger than when n = 1. For example, planting 2n seeds takes roughly twice as long as planting n seeds.
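A linear search is a standard O(n) example: in the worst case (the target is last, or absent) every one of the n items is examined once. This is a minimal sketch; the function name is my own.

```python
def linear_search(items, target):
    # Worst case: every item is checked once, so time grows
    # in direct proportion to len(items).
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

print(linear_search([3, 7, 2, 9], 9))  # 3
```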

Logarithmic complexity - O(log n)

An algorithm with logarithmic complexity is better than linear complexity for searching, because its time still grows as the number of items grows, but far more slowly. For example, a binary search of a sorted list checks the middle item; if the target isn't there, it discards the half that cannot contain it and repeats on the remaining half, so each step halves the work left.
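The halving described above can be sketched as a binary search (a minimal illustration; it assumes the list is already sorted):

```python
def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1   # discard the lower half
        else:
            high = mid - 1  # discard the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```

Each pass halves the search range, so even a list of a million items needs only about 20 comparisons.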

Linearithmic complexity - O(nlog n)

As the input n grows, the time taken grows slightly faster than n itself, by an extra factor of log n, so the time will always grow faster than the quantity of input. Efficient comparison sorts such as merge sort are typical examples.
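A minimal merge sort sketch: the list is halved log n times, and each level of halving does O(n) work merging, giving O(n log n) overall.

```python
def merge_sort(items):
    # Base case: a list of 0 or 1 items is already sorted.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # log n levels of halving...
    right = merge_sort(items[mid:])
    # ...and O(n) merging work at each level.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```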

Polynomial complexity - O(n^k)

The time taken to process the input increases faster than the size of the input n. An example is shaking hands: between two people only one handshake is needed, but as the number of people n increases there have to be many more handshakes. The total number of handshakes is (n^2 - n)/2, which is O(n^2).
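The handshake count can be sketched with a nested loop (the function name is my own); the two nested loops over n people are what make it O(n^2).

```python
def count_handshakes(n):
    # Each distinct pair of people shakes hands exactly once.
    count = 0
    for i in range(n):
        for j in range(i + 1, n):  # avoid double-counting pairs
            count += 1
    return count

print(count_handshakes(10))  # 45, which equals (10**2 - 10) / 2
```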

Exponential complexity - O(k^n)

An algorithm with exponential complexity multiplies the time taken by k for every extra item, so it quickly becomes impractical for all but small inputs. An example is generating every subset of n items, of which there are 2^n.

Factorial complexity - O(n!)

An algorithm with factorial complexity grows even faster than exponential: each extra item multiplies the time taken by n. An example is generating every possible ordering of n items, as in a brute-force solution to the travelling salesman problem.
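A minimal sketch of how fast these last two classes grow (the power_set helper is my own illustration): the number of subsets of n items is 2^n, and the number of orderings is n!.

```python
from itertools import permutations

def power_set(items):
    # Each item doubles the number of subsets: 2**n in total.
    subsets = [[]]
    for item in items:
        subsets += [s + [item] for s in subsets]
    return subsets

print(len(power_set([1, 2, 3])))           # 8, i.e. 2**3
print(len(list(permutations([1, 2, 3]))))  # 6, i.e. 3!
```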