Eroxl's Notes
Big-O Notation (Computer Science)

In computer science, big-O notation generalizes the mathematical big-O notation to computer programs: it describes how a program's running time (or space) grows as the input size grows large.

General Rules

  • Single operations: constant time
  • Consecutive operations: sum operation times
  • Conditionals: condition check time plus the time of the branch taken (for worst-case analysis, the slower branch)
  • Loops: sum of loop body times
  • Function calls: time for the called function's body
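As a sketch of how these rules combine, consider a hypothetical sumAll function (my addition, not from the notes above): a constant-time assignment, then a loop whose constant-time body runs once per element, then a constant-time return.

```cpp
#include <vector>

// Hypothetical example: one constant-time assignment, then a loop whose
// constant-time body runs n times, then a constant-time return.
// Summing the parts gives c1 + n*c2 + c3, which is O(n).
int sumAll(const std::vector<int>& arr) {
    int total = 0;              // single operation: constant time
    for (int x : arr) {         // loop: sum of body times over n iterations
        total += x;             // constant-time body
    }
    return total;               // single operation: constant time
}
```

Each rule contributes one term; the loop's term dominates, so the whole function is linear.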

For Loop Executions

Loops don't always follow the simple pattern where the loop variable starts at 0 and counts up to n-1. When a loop iterates differently, we can use the following steps to determine its execution pattern:

  1. Determine the "range" of the loop variable
  2. Determine how many elements within that range will be "hit"

Examples

Simple Example

int findInd(vector<int>& arr, int q) {
	for (int i = 0; i < arr.size(); i++) {
		if (arr[i] == q) return i;
	}
	
	return -1;
}

Treating n = arr.size(), and treating assignments, comparisons, and external function calls as constant-time operations, we can determine the worst-case runtime of this program to be O(n).

We arrive at this conclusion because all our operations are treated as constant time (thus only adding or multiplying by constants), and the number of loop iterations grows proportionally to our input size n.

Nested Iterations

bool hasDuplicate(int arr[], int size) {
	for (int i = 0; i < size-1; i++) {
		for (int j = i+1; j < size; j++) {
			if (arr[i] == arr[j]) return true;
		}
	}
	
	return false;
}

Treating n = size, and again treating assignments, comparisons, and external function calls as constant-time operations, we can determine the worst-case runtime of this program to be O(n^2).

Looking at the for loops, in the worst case the outer loop runs i from 0 to size-2, and for each i the inner loop runs j from i+1 to size-1. Since the inner loop runs every time the outer loop does, we multiply the time demands, giving (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons and thus O(n^2).
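The triangular sum (n-1) + (n-2) + ... + 1 = n(n-1)/2 can be checked empirically; the sketch below (my addition, not part of the original notes) counts how often hasDuplicate's inner comparison runs in its worst case, an array with no duplicates.

```cpp
// Counts how many times hasDuplicate's inner comparison runs when the
// array has no duplicates (its worst case). The loop structure mirrors
// hasDuplicate exactly, so the count is (n-1) + (n-2) + ... + 1 = n*(n-1)/2.
long long duplicateCheckComparisons(int n) {
    long long count = 0;
    for (int i = 0; i < n - 1; i++)      // outer loop: i from 0 to n-2
        for (int j = i + 1; j < n; j++)  // inner loop: j from i+1 to n-1
            count++;                     // one comparison per inner iteration
    return count;
}
```

For n = 10 this gives 45 = 10*9/2, and multiplying n by 10 multiplies the count by roughly 100 — the signature of quadratic growth.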

Non-Trivial Iterators

Example 1

void candyApple(int n) {
	for (int i = 1; i < n; i *= 3) {
		cout << "iteration: " << i << endl;
	}
}

Here, the loop variable i starts at 1 and is multiplied by 3 every iteration, so after k iterations i = 3^k. The loop continues as long as:

3^k < n

Taking logarithms:

k < log_3(n)

So the runtime grows proportionally to log(n), as we can ignore the base of the logarithm in big-O notation: O(log n).
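As a quick check on the logarithmic count, this sketch (my addition) mirrors candyApple's loop and simply counts its iterations.

```cpp
// Counts candyApple's iterations: i = 1, 3, 9, 27, ... while i < n.
// The count is the smallest k with 3^k >= n, i.e. ceil(log_3(n)).
int iterationsBase3(int n) {
    int count = 0;
    for (int i = 1; i < n; i *= 3)
        count++;
    return count;
}
```

For n = 27 (a power of 3) the loop runs for i = 1, 3, 9, giving 3 iterations; for n = 1000 it runs only 7 times, because 3^7 = 2187 is the first power of 3 to reach 1000.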

Example 2

void caramelCorn(int n) {
	for (int i = 0; i * i < 6 * n; i++) {
		cout << "iteration: " << i << endl;
	}
}

Here, the loop condition is i * i < 6n. The loop continues until:

i^2 >= 6n

which means it stops once:

i >= sqrt(6n)

So the number of iterations is proportional to sqrt(n) (the constant factor sqrt(6) doesn't matter asymptotically): O(sqrt(n)).
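The square-root growth can also be verified by counting iterations directly; in this sketch (my addition), multiplying n by 100 multiplies the count by exactly 10.

```cpp
// Counts caramelCorn's iterations: i = 0, 1, 2, ... while i*i < 6*n.
// The count is the number of non-negative integers below sqrt(6n),
// which grows proportionally to sqrt(n).
int iterationsSqrt(int n) {
    int count = 0;
    for (int i = 0; i * i < 6 * n; i++)
        count++;
    return count;
}
```

For n = 6, 6n = 36 and the loop runs for i = 0 through 5, i.e. 6 = sqrt(36) iterations; for n = 600 it runs 60 times.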