UNDERSTANDING TIME COMPLEXITY

Apurva
Aug 18, 2020

While programming, we often come across multiple ways to solve a problem. A problem can be solved in many ways, but knowing how to pick the most efficient solution is what sets a good software developer apart. A good program should have two qualities: it should consume as little space as possible and it should execute in as little time as possible. For example, if we are asked to search for an element in an array, let us try two approaches:

//linear search approach
int x;      // element which is to be searched
int n;      // length of the array
int arr[];  // array to search in
for (int i = 0; i < n; i++) {
    if (arr[i] == x)
        break;  // element found at index i
}

In this solution, we have used the linear search approach: we scan through the array until the element is found. For an array with a very large size, say 10,000,000 elements, the computer may have to examine every element. If the computer takes one unit of time to examine one element, then in the worst case the linear search takes 10,000,000 units of time. To search an element in an array faster, we need a better solution:

//binary search on a sorted array
int x;                       // element to be found
int n;                       // length of the array
int arr[];                   // sorted array
int low = 0;                 // index of the first element
int high = n - 1;            // index of the last element
int mid = (low + high) / 2;  // index of the middle element
while (low <= high) {
    if (arr[mid] < x)
        low = mid + 1;       // search the right half
    else if (arr[mid] == x)
        break;               // element found at index mid
    else
        high = mid - 1;      // search the left half
    mid = (low + high) / 2;  // recompute the middle index
}

Another approach to find an element in an array is binary search. In this technique, a complete traversal of the array is not required, but the array must be sorted. We look at the middle element and check whether it is greater than, equal to, or less than the element to be searched. Because the array is sorted, if the middle element is less than the target we raise the lower limit, and if it is greater we lower the upper limit. After each step, only half of the remaining elements are left to consider. Binary search therefore executes in much less time than linear search, since scanning through every element is not required: for the 10,000,000-element array above, it needs at most about 24 comparisons instead of 10,000,000.
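To make the difference concrete, here is a small self-contained sketch in C. It counts how many comparisons each approach performs when the element being searched is the last one in a sorted array; the array size, its contents, and the step counters are chosen purely for illustration.

//counting comparisons: linear search vs binary search (illustrative)
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n = 10000000;                    // array size chosen for illustration
    int *arr = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++)
        arr[i] = i;                      // sorted array: 0, 1, 2, ...
    int x = n - 1;                       // worst case: search for the last element

    long linear_steps = 0;
    for (int i = 0; i < n; i++) {        // linear search
        linear_steps++;
        if (arr[i] == x)
            break;
    }

    long binary_steps = 0;
    int low = 0, high = n - 1, mid = (low + high) / 2;
    while (low <= high) {                // binary search
        binary_steps++;
        if (arr[mid] < x)
            low = mid + 1;
        else if (arr[mid] == x)
            break;
        else
            high = mid - 1;
        mid = (low + high) / 2;
    }

    printf("linear search comparisons: %ld\n", linear_steps);  // 10000000
    printf("binary search comparisons: %ld\n", binary_steps);  // roughly 24
    free(arr);
    return 0;
}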

ASYMPTOTIC ANALYSIS

There may be multiple algorithms that solve a given problem, but we need to choose one that executes in the least amount of time and occupies the least space in memory. Since the space occupied and the raw running time vary from computer to computer, we focus on the time complexity of the algorithm: how the amount of work the program does grows as the size of its input grows. We can check the time complexity of algorithms using asymptotic notations.
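To see what we are counting, here is a small sketch with a hypothetical function (it is not one of the examples above) whose amount of work can be read off directly from its loops:

//counting the basic operations of a hypothetical function
#include <stdio.h>

int count_pairs(int n) {
    int count = 0;                    // executes once
    for (int i = 0; i < n; i++)       // outer loop runs n times
        for (int j = 0; j < n; j++)   // inner loop runs n times per outer iteration
            count++;                  // executes n*n times in total
    return count;                     // the work grows roughly as n²
}

int main(void) {
    printf("%d\n", count_pairs(100)); // prints 10000, i.e. n*n for n = 100
    return 0;
}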

There are three commonly used asymptotic notations to check the time complexity of an algorithm. They describe the worst case (taking the most time), the average case (taking average time), and the best case (taking the least time).

  1. BIG-OH (O):

Big-Oh is a simplified analysis of an algorithm’s efficiency. This asymptotic notation deals with the worst possible execution time of a program, which means that in no case can the program take more time than what is predicted by big-oh. It is defined in the following manner:

O(g(n)) = {f(n): there exist constants C and n1 such that f(n) <= C*g(n) for all n >= n1}. Let us consider an example:

f(n) = 2n² + n + 3

g(n) = n² // we ignore the constants and choose the highest power of n

C = 2 + 1 + 3 = 6 // sum of the coefficients, since n <= n² and 3 <= 3n² for n >= 1

f(n) <= 6n² for all n >= 1, so we can take n1 = 1.
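As a quick sanity check, here is a small sketch that compares f(n) with C*g(n) for a few sample values of n (the values are arbitrary) and confirms the inequality:

//checking that f(n) <= C*g(n) for the big-oh example
#include <stdio.h>

int main(void) {
    long long C = 6;
    for (long long n = 1; n <= 1000000; n *= 10) {
        long long f = 2*n*n + n + 3;   // f(n) = 2n² + n + 3
        long long g = n*n;             // g(n) = n²
        printf("n=%lld  f(n)=%lld  C*g(n)=%lld  f(n) <= C*g(n)? %s\n",
               n, f, C*g, f <= C*g ? "yes" : "no");
    }
    return 0;
}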

When we plot the graph of running time vs n, f(n) never rises above C*g(n) once n >= n1. Therefore, big-oh is also called an upper bound. There are certain rules that we should keep in mind while working with big-oh:

  1. Constants should be ignored.
  2. Certain terms dominate others: O(1)<O(log n)<O(n)<O(n log n)< O(n²)<O(2^n)<O(n!) (the sketch after this list shows how quickly the later terms grow).
  3. Lower order terms should be ignored.
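To get a feel for rule 2, here is a short sketch (the sample values of n are arbitrary) that prints how fast each term grows; the factorial column is omitted since it overflows almost immediately:

//how fast each term grows for a few values of n (rule 2)
#include <stdio.h>
#include <math.h>

int main(void) {
    int ns[] = {2, 8, 16, 32};
    printf("%6s %10s %8s %12s %10s %14s\n",
           "n", "log n", "n", "n log n", "n^2", "2^n");
    for (int i = 0; i < 4; i++) {
        double n = ns[i];
        printf("%6.0f %10.1f %8.0f %12.1f %10.0f %14.0f\n",
               n, log2(n), n, n * log2(n), n * n, pow(2.0, n));
    }
    return 0;
}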

2. OMEGA:

Omega is another asymptotic notation; it checks the minimum time a program requires to execute. This is called the best case, because in no scenario can the algorithm take less time than what is predicted by omega. It is defined in the following manner:

Omega(g(n)) = {f(n): there exist constants C and n1 such that C*g(n) <= f(n) for all n >= n1}. Let us consider the following example:

f(n) = 6n² + 7n + 1 = Omega(n²) // taking the highest power
g(n) = n²
C = 6 // coefficient of the highest power, n1 = 0 // lowest possible case
By the definition, 6n² <= f(n) for all n >= 0.
When we plot the graph of running time vs n, f(n) never falls below C*g(n). Therefore, omega is also called a lower bound.

3. THETA:

Theta is often used to describe the average case; more precisely, it bounds the running time from both sides, so the algorithm can take neither more time than an upper constant multiple of g(n) nor less time than a lower constant multiple of g(n) once n is large enough. It is defined as follows:

Theta(g(n)) = {f(n): there exist constants C1, C2 and n1 such that C1*g(n) <= f(n) <= C2*g(n) for all n >= n1}. Let us consider the following example:

f(n) = 5n²+3n+1 = Theta(n²) //highest power

g(n) = n²

C1 = 5 (coefficient of the highest power), C2 = 9 (sum of the coefficients), n1 = 1

By definition, 5n² <= f(n) <= 9n² for all n >= 1.
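Here is a similar sketch (sample values of n again arbitrary) that checks both sides of the bound for this example:

//checking C1*g(n) <= f(n) <= C2*g(n) for the theta example
#include <stdio.h>

int main(void) {
    long long C1 = 5, C2 = 9;
    for (long long n = 1; n <= 1000000; n *= 10) {
        long long f = 5*n*n + 3*n + 1;   // f(n) = 5n² + 3n + 1
        long long g = n*n;               // g(n) = n²
        printf("n=%lld  C1*g(n)=%lld  f(n)=%lld  C2*g(n)=%lld  within bounds? %s\n",
               n, C1*g, f, C2*g, (C1*g <= f && f <= C2*g) ? "yes" : "no");
    }
    return 0;
}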

We should note that among the three asymptotic notations, big-oh, omega and theta, theta gives the best idea about the rate of growth of an algorithm, because it provides a tight bound, unlike big-oh and big omega, which each give only one bound.
