If you’ve had a university education in software engineering you have probably come across algorithmic time complexity. This classifies algorithms by how long they take to run given a known amount of input, normally written using big O notation. Picking an algorithm with a good time complexity can make an enormous difference to the performance of your code. (If you don’t want a refresher on algorithmic time complexity then skip to **Beyond algorithms**.)

## Common examples

These are some of the most common time complexities. Searching and sorting algorithms are commonly used as examples. There the size of the input is just the number of items in the array. Keep an eye on the number of operations with an input size of 10.

Name | Notation | Operations | Example |
---|---|---|---|
constant time | O(1) | 1 | Indexing an array |
logarithmic time | O(log n) | 3.3 | Binary search, finding a value in a sorted array |
linear time | O(n) | 10 | Linear search, finding a value in an unsorted array |
linearithmic time | O(n log n) | 33 | Fastest possible comparison sort |
quadratic time | O(n^2) | 100 | Bubble sort, simple array sorting |
cubic time | O(n^3) | 1000 | Naive matrix multiplication |
factorial time | O(n!) | 3628800 | Travelling salesman problem |
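
To make the first few rows concrete, here’s a minimal sketch (the function names and test data are my own) comparing a linear search with a binary search. The linear search may have to visit every element, O(n); the binary search halves the range each step, O(log n), but only works on sorted input.

```python
import bisect

def linear_search(items, target):
    """O(n): check each element in turn until a match is found."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """O(log n): halve the search range each step (requires sorted input)."""
    i = bisect.bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = list(range(0, 20, 2))   # [0, 2, 4, ..., 18], already sorted
print(linear_search(data, 8))  # index 4, found after 5 checks
print(binary_search(data, 8))  # index 4, found after ~3 halvings
```

Both return the same answer; the difference only shows up in how the work grows as the array gets bigger.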

If you’re not using a known algorithm then you can calculate your specific time complexity. This probably isn’t necessary unless you’re doing some formal analysis. Roughly speaking, look at how deeply your loops are nested when they process your main input. A single level of loops probably means O(n), a loop within a loop means O(n^2), and less nesting is better.
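
As a rough illustration of reading complexity off loop nesting (a sketch, with names of my own choosing):

```python
def sum_items(items):
    # One loop over the input: O(n).
    total = 0
    for x in items:
        total += x
    return total

def has_duplicate(items):
    # A loop within a loop over the same input: O(n^2).
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```

The nesting rule is only a heuristic, but it catches the common cases: the second function compares every pair of items, which is where the n^2 comes from.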

With simpler time complexities the number of operations is small but it ramps up quickly.
Anything O(n log n) or below shouldn’t take *too* long, even with larger inputs.
Around O(n^2) and above the input size really starts to matter.
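
A quick back-of-the-envelope script makes the ramp-up visible (I’m using log base 2 here, which only changes O(n log n) figures by a constant factor):

```python
import math

# Compare operation counts as the input size grows.
for n in (10, 1_000, 1_000_000):
    ops_nlogn = n * math.log2(n)
    print(f"n={n:>9,}  n log n ~ {ops_nlogn:>12,.0f}  n^2 = {n**14,}")
```

At n = 1,000 the gap is already about 10,000 operations versus a million; at n = 1,000,000 it’s roughly 20 million versus a trillion.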

## Special cases

However, it’s not *always* as simple as picking the lowest time complexity:

- Simple algorithms can have bad time complexities, but they are easy to write and verify. It might be better to go with something slower but easier to implement, at least initially.
- Time complexity calculations are simplified and reflect the behaviour as the input “gets large”. For “small” input sizes it might be better to use something different. I’m using the quotes very deliberately there because what counts as small and large depends on the algorithm.
- There is often a balance between time complexity and space complexity. A faster algorithm might need a lot more memory which isn’t always available.
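
A classic illustration of the time/space balance is memoisation (a sketch; the naive recursive Fibonacci is just a stand-in for any computation with repeated subproblems):

```python
from functools import lru_cache

def fib_slow(n):
    # No extra memory, but exponential time: subproblems are recomputed.
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    # O(n) time, but O(n) memory to cache every subresult.
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)
```

The cached version is dramatically faster, but only because it spends memory to remember work it has already done.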

For some algorithms it’s worth considering their time complexity in different situations: the average, best and worst cases can differ. One of the easiest sorting algorithms to write is bubble sort. It has a poor O(n^2) average-case time complexity. However, if the input is already sorted, or nearly sorted, then it can be O(n), which is great. One of the most common sorting algorithms is quicksort. It has a good O(n log n) performance on average, but with a naive pivot choice an already sorted or reverse-sorted input drops it to a poor O(n^2). It may be better to go with mergesort or heapsort, which have a consistent O(n log n).
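
The bubble sort best case depends on an early-exit check; here’s a sketch of one common way to write it (returning a new list rather than sorting in place is my choice):

```python
def bubble_sort(items):
    """Bubble sort with early exit: O(n^2) on average, but O(n)
    when the input is already sorted (one pass, no swaps)."""
    items = list(items)  # work on a copy
    for end in range(len(items) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:
            break  # no swaps this pass: the list is sorted
    return items
```

Without the `swapped` flag the best case is O(n^2) too, since every pass runs to completion even on sorted input.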

Some of our code is going to be performance critical, and there you really should pay attention to time complexity. Some of our code is not, and there it shouldn’t be a problem. It’s always good to keep in mind just in case. Have a rough idea of how your code behaves with the input sizes you see during testing, and how that might change with the input sizes of real use.

## Beyond algorithms

It occurred to me recently that time complexity calculations and big O notation might apply elsewhere. Specifically I was wondering about time estimation again.

Some software projects are obviously smaller and some are larger.
That’s the *n* that you would be using in the calculation.
After the fact this would probably relate fairly well to the number of lines of code, but
I don’t know how you’d quantify it beforehand.
You would have to factor in team size and the number of teams involved which introduces
communication complexity.

Your methodology would matter as well. If I had to guess, a basic waterfall model project might be O(n^2). That’s because it tries to plan ahead and go through stages one by one but, if there’s a problem, you might have to jump back to an earlier stage to resolve things. I tend to think problems are more common than no problems. So the stages themselves are O(n), but each could carry an O(n) rework penalty on top. It feels a bit like bubble sort: good in some situations but often slow. Another guess is that the spiral model or agile software development might be O(n log n). They are fundamentally based on iterating over the problem, but each iteration should make the problem smaller or better understood. It feels a bit like quicksort: generally good, but there could still be hidden issues.

Splitting a project up into smaller pieces and giving it to multiple teams would give you a smaller *n* but
introduces an integration stage.
That’s another loop that could slow things down.
You probably want to firm up how those pieces are going to connect early on and / or
make it flexible enough that small adjustments aren’t going to matter.
It would really start to matter how large each piece is, how many teams there are and how they communicate.

Debugging and testing feel like they could be a factor. If a feature is planned and implemented and there is a big delay before feedback and testing that’s another loop. Prototyping features before a final implementation might give early notice. Regular testing, say, overnight builds and unit testing would help. The smaller the gap between a problem appearing and that problem being noticed the smaller the loop.

This section has been filled with the words “guess”, “probably” and “feel”. I don’t have an answer for how to use this idea in practice. Online searches produce results for algorithmic complexity and project complexity, but not obviously both together. I think this is the sort of thing that could be, or already has been, the basis of academic research. From a practical point of view… maybe try to do simple things in batches and keep complicated things small.
