In computer science, big-O notation is an adaptation of the mathematical big-O notation to computer programs: it describes how a program's running time (or memory use) grows as the input size grows large.
Loops don't always take the single-step approach where the counter starts at 0 and increases by 1 each iteration, but we will start with that familiar pattern before looking at loops whose counters grow differently. Consider a linear search:
int findInd(vector<int>& arr, int q) {
    // Linear search: scan left to right for the first occurrence of q.
    for (int i = 0; i < arr.size(); i++) {
        if (arr[i] == q) return i;
    }
    return -1; // q is not in arr
}
Treating each operation (the comparison, the increment, the return) as a constant-time step, findInd runs in O(n) time in the worst case, where n is the length of arr. We arrive at this conclusion because every individual operation is constant time (so it only adds or multiplies by a constant), while the number of loop iterations grows proportionally with the input size.
bool hasDuplicate(int arr[], int size) {
    // Compare every pair (i, j) with i < j exactly once.
    for (int i = 0; i < size - 1; i++) {
        for (int j = i + 1; j < size; j++) {
            if (arr[i] == arr[j]) return true;
        }
    }
    return false; // no pair matched
}
Treating each operation as constant time again, hasDuplicate runs in O(n²) time in the worst case, where n is size. This can be determined by looking at the for loops: in the worst case (no duplicates) the outer loop runs from i = 0 to size-2, and for each of those passes the inner loop runs from j = i+1 to size-1. Since the inner loop runs every time the outer loop does, we multiply their iteration counts, giving roughly n·(n-1)/2 comparisons in total, which is O(n²).
void candyApple(int n) {
    // i takes the values 1, 3, 9, 27, ... until it reaches n.
    for (int i = 1; i < n; i *= 3) {
        cout << "iteration: " << i << endl;
    }
}
Here, the loop variable i starts at 1 and is multiplied by 3 every iteration, so it takes the values 3^0, 3^1, 3^2, … and stops once it reaches n. That means the number of iterations is the number of powers of 3 below n. Taking logarithms: 3^k < n is equivalent to k < log₃ n. So the runtime grows proportionally to log₃ n, and since changing the base of a logarithm only changes a constant factor, the loop is O(log n).
void caramelCorn(int n) {
    // The loop runs while i * i stays below 6 * n.
    for (int i = 0; i * i < 6 * n; i++) {
        cout << "iteration: " << i << endl;
    }
}
Here, the loop condition is i * i < 6n. The loop continues until i² ≥ 6n, that is, until i reaches √(6n). So the number of iterations is proportional to √(6n) = √6 · √n, which is O(√n).