The complexity of an algorithm is a function describing the efficiency of the algorithm in terms of the size of its input. There are two main measures of an algorithm's efficiency:
Space complexity
The space complexity of an algorithm is the amount of memory it needs to run to completion.
Generally, the space needed by an algorithm is the sum of the following two components:
Fixed part (c) - space that is independent of the input, such as the instruction space and the space for simple variables and constants.
Variable part (S_p) - space that depends on the particular problem instance, such as dynamically allocated memory and the recursion stack.
Total space - S(P) = c + S_p(I), the sum of the fixed and variable parts for a problem instance I.
Time complexity
The time complexity of an algorithm is the amount of computer time it needs to run to completion. Computer time is measured here as the number of basic operations executed, not as wall-clock time.
Time complexity is calculated for three types of cases: best case, average case, and worst case.
The number of operations performed by an algorithm is usually expressed as a function of the input size.
For example:
s = 0                      # 1
for i in range(n):         # n+1
    for j in range(n):     # n(n+1)
        s = s + 1          # n^2
print(s)                   # 1
The operation-count function for the given code is T(n) = 1 + (n+1) + n(n+1) + n^2 + 1 = 2n^2 + 2n + 3.
Ignore all the constants and coefficients and just look at the highest-order term in relation to n: the complexity is O(n^2).
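As a quick check, we can instrument the loop with a counter. This is a minimal sketch (the function count_operations and its counter are illustrative additions, not part of the lecture code) confirming that the innermost statement runs exactly n^2 times:

def count_operations(n):
    count = 0  # counts executions of the innermost statement
    s = 0
    for i in range(n):
        for j in range(n):
            s = s + 1
            count = count + 1
    return count

print(count_operations(10))   # 100
print(count_operations(100))  # 10000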
Asymptotic notations are mathematical notations that are commonly used to describe the time complexity of an algorithm, that is, the upper and lower bounds on how an algorithm's running time grows as the input size grows.
Big-Oh (O)
Big-Oh notation describes the upper bound of an algorithm's running time. Specifically, we use the notation T(n) = O(f(n)) to mean that T(n) <= c * f(n) for some constant c > 0 and all sufficiently large n.
Omega (Ω)
Omega notation, on the other hand, describes the lower bound of an algorithm's running time. Specifically, we use the notation T(n) = Ω(f(n)) to mean that T(n) >= c * f(n) for some constant c > 0 and all sufficiently large n.
Theta (Θ)
Theta notation describes both the upper and lower bounds of an algorithm's running time. Specifically, we use the notation T(n) = Θ(f(n)) to mean that T(n) is both O(f(n)) and Ω(f(n)).
Source - https://mitpress.mit.edu/9780262046305/introduction-to-algorithms/
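As a worked example using these definitions, take the operation count T(n) = 2n^2 + 2n + 3 from the code above. For all n >= 1, 2n^2 <= T(n) <= 2n^2 + 2n^2 + 3n^2 = 7n^2, so T(n) is Ω(n^2) (with constant 2) and O(n^2) (with constant 7), and therefore T(n) = Θ(n^2).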
It's important to note that big O notation describes the upper bound of the growth rate of a function. Some algorithms may have a best-case or average-case time complexity that is much faster than their worst-case time complexity. However, the worst-case time complexity is typically what is used to describe the time complexity of an algorithm.
We often use "big O" notation to describe the growth rate of functions. This notation allows us to compare the time complexity of algorithms and the behavior of functions as their input size grows to infinity.
Here are some common functions and their growth rates, listed from slowest-growing to fastest-growing:
O(1) - constant
O(log n) - logarithmic
O(n) - linear
O(n log n) - linearithmic
O(n^2) - quadratic
O(2^n) - exponential
O(n!) - factorial
The growth rate of a function is important when analyzing the time complexity of an algorithm. For example, if an algorithm has a time complexity of O(n^2), doubling the input size roughly quadruples its running time, so for large inputs it will be far slower than an O(n log n) algorithm.
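To see these growth rates concretely, here is a minimal sketch (not from the lecture code) that prints the value of each common function for a few input sizes:

import math

for n in [10, 20, 30]:
    # n, log n, n log n, n^2, 2^n
    print(n, round(math.log2(n), 1), round(n * math.log2(n)), n ** 2, 2 ** n)

Already at n = 30, 2^n is over a billion while n log n is still under 150, which is why exponential-time algorithms are impractical for all but tiny inputs.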
Complexity for simple statements
a = 10
b = 20
s = a + b
print(s)
Complexity ? O(1)
Complexity for single loop
s = 0
for i in range(n):
    s = s + 1
Complexity ? O(n)
Complexity for two nested loops
s = 0
for i in range(n):
    for j in range(n):
        s = s + 1
Complexity ? O(n^2)
Complexity for three nested loops
s = 0
for i in range(n):
    for j in range(n):
        for k in range(n):
            s = s + 1
Complexity ? O(n^3)
Complexity for a combination of the above
s = 0
for i in range(n):
    s = s + 1

s = 0
for i in range(n):
    for j in range(n):
        s = s + 1

s = 0
for i in range(n):
    for j in range(n):
        for k in range(n):
            s = s + 1
Complexity ? O(n) + O(n^2) + O(n^3) = O(n^3), since the highest-order term dominates
Complexity for recursive solution (factorial)
def factorial(n):
    if (n == 0):
        return 1
    return n * factorial(n - 1)
Recurrence relation ? T(n) = T(n-1) + O(1), with T(0) = O(1)
Complexity ? O(n)
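Unrolling the recurrence shows why: T(n) = T(n-1) + c = T(n-2) + 2c = ... = T(0) + n*c, which is O(n) since each of the n recursive calls does a constant amount of work.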
Complexity for recursive solution (merge sort)
def merge(A,B):
    # statement block for merging two sorted arrays (full implementation in the merge sort section below)
    ...

def mergesort(A):
    n = len(A)
    if n <= 1:
        return(A)
    L = mergesort(A[:n//2])
    R = mergesort(A[n//2:])
    B = merge(L,R)
    return(B)
Recurrence relation - T(n) = 2T(n/2) + O(n) (two subproblems of half the size, plus O(n) work to merge)
Complexity ? O(n log n)
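Unrolling: each level of the recursion tree does a total of c*n work for merging, and repeatedly halving n gives about log2(n) levels, so T(n) = c*n*log2(n) + lower-order terms = O(n log n).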
https://www.cs.usfca.edu/~galles/visualization/Search.html
Implementation
def naivesearch(L,v):
    # scan the list from left to right and return the first index where v occurs
    for i in range(len(L)):
        if v == L[i]:
            return i
    return(False)
Best Case - O(1) (target is the first element)
Average Case - O(n)
Worst Case - O(n) (target is the last element or absent)
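A minimal usage sketch (the example list and values are illustrative, not from the lecture):

L = [4, 7, 1, 9, 3]
print(naivesearch(L, 4))  # 0 -> best case: found at the first position
print(naivesearch(L, 3))  # 4 -> worst case: found at the last position
print(naivesearch(L, 5))  # False -> worst case: scans the whole list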
Binary search is an efficient search algorithm used to find the position of a target value within a sorted list. The algorithm compares the target value to the middle element of the list. If the target value is equal to the middle element, the search is complete. Otherwise, the algorithm recursively searches either the left or right half of the list, depending on whether the target value is greater or less than the middle element.
1. Start with a sorted list and a target value.
2. Set the low and high pointers to the first and last indices of the list, respectively.
3. While the low pointer is less than or equal to the high pointer:
A. Calculate the middle index as the average of the low and high pointers (round down if necessary).
B. If the target value is equal to the middle element of the list, return the middle index.
C. If the target value is less than the middle element, set the high pointer to the index before the middle index.
D. If the target value is greater than the middle element, set the low pointer to the index after the middle index.
4. If the target value is not found in the list, return False (or some other indication that the value is not present).
Iterative Implementation
def binarysearch(L, v): #v = target element
    low = 0
    high = len(L) - 1
    while low <= high:
        mid = (low + high) // 2
        if L[mid] < v:
            low = mid + 1
        elif L[mid] > v:
            high = mid - 1
        else:
            return mid
    return False
Code Execution Flow
Recursive Implementation without slicing
def binarysearch(L,v,low,high): #v = target element, low = first index, high = last index
    if high - low < 0:
        return False
    mid = (high + low)//2
    if v == L[mid]:
        return mid
    if v < L[mid]:
        return(binarysearch(L,v,low,mid-1))
    else:
        return(binarysearch(L,v,mid+1,high))
Code Execution Flow
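A minimal usage sketch (the list and targets are illustrative): the initial call passes the full index range, low = 0 and high = len(L) - 1.

L = [1, 3, 4, 7, 9]
print(binarysearch(L, 7, 0, len(L) - 1))  # 3
print(binarysearch(L, 5, 0, len(L) - 1))  # False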
Lecture Implementation (not recommended if you want to achieve O(log n))
*Due to the use of slicing, this implementation takes O(n) time overall, since each recursive call copies half of the list.
def binarysearch(L,v):
    if L == []:
        return(False)
    mid = len(L)//2
    if v == L[mid]:
        return True  # slicing loses the original indices, so we can only report presence, not position
    if v < L[mid]:
        return(binarysearch(L[:mid],v))
    else:
        return(binarysearch(L[mid+1:],v))
Best Case - O(1) (target is the middle element)
Average Case - O(log n)
Worst Case - O(log n) (for the implementations without slicing)
Selection sort is a simple sorting algorithm that works by repeatedly finding the minimum element of the unsorted part of the list and placing it at the end of the sorted part (the start of the unsorted part), growing the sorted part by one element on each pass.
Select the SELECTION SORT (SEL) option on the top bar, then run the visualization.
Source - https://visualgo.net/en/sorting
def selectionsort(L):
    n = len(L)
    if n < 1:
        return(L)
    for i in range(n):
        minpos = i  # assume position i holds the minimum of L[i:]
        for j in range(i+1,n):
            if L[j] < L[minpos]:
                minpos = j
        (L[i],L[minpos]) = (L[minpos],L[i])  # swap the minimum into position i
    return(L)
Code Execution Flow
Best Case - O(n^2) (the nested loops always run fully, even on sorted input)
Average Case - O(n^2)
Worst Case - O(n^2)
Stable - No
Sort in Place - Yes
Insertion sort is a simple sorting algorithm that builds the sorted list one element at a time: each new element is compared with its neighbours in the already-sorted prefix and swapped leftwards until it reaches its correct position.
Select the INSERTION SORT (INS) option on the top bar, then run the visualization.
Source - https://visualgo.net/en/sorting
def insertionsort(L):
    n = len(L)
    if n < 1:
        return(L)
    for i in range(n):
        j = i
        while(j > 0 and L[j] < L[j-1]):  # swap L[j] leftwards until it is in position
            (L[j],L[j-1]) = (L[j-1],L[j])
            j = j-1
    return(L)
Code Execution Flow
Best Case - O(n) (already sorted input: the while loop never runs)
Average Case - O(n^2)
Worst Case - O(n^2) (reverse-sorted input)
Stable - Yes
Sort in Place - Yes
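To see what stability means concretely, here is a minimal sketch (the functions selectionsort_pairs and insertionsort_pairs are illustrative adaptations of the code above that compare only the first component of each pair). A stable sort keeps equal keys in their original relative order; selection sort can break that order because of its long-distance swap:

def selectionsort_pairs(L):
    n = len(L)
    for i in range(n):
        minpos = i
        for j in range(i+1, n):
            if L[j][0] < L[minpos][0]:  # compare keys only
                minpos = j
        (L[i], L[minpos]) = (L[minpos], L[i])
    return L

def insertionsort_pairs(L):
    n = len(L)
    for i in range(n):
        j = i
        while j > 0 and L[j][0] < L[j-1][0]:  # compare keys only
            (L[j], L[j-1]) = (L[j-1], L[j])
            j = j - 1
    return L

data = [(2,'a'), (2,'b'), (1,'c')]
print(selectionsort_pairs(data[:]))  # [(1,'c'), (2,'b'), (2,'a')] -> equal keys reordered
print(insertionsort_pairs(data[:]))  # [(1,'c'), (2,'a'), (2,'b')] -> equal keys keep their order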
Merge sort is a popular sorting algorithm that uses the divide-and-conquer technique. It works by recursively dividing a list into two halves, sorting each half separately, and then merging them back together into a single sorted list.
1. If the list contains only one element or is empty, it is already sorted. Return the list.
2. Divide the list into two halves, using the middle index as the divider.
3. Recursively sort the left half of the list by applying the merge sort algorithm on it.
4. Recursively sort the right half of the list by applying the merge sort algorithm on it.
5. Merge the two sorted halves back together to form a single sorted list. To do this:
A. Initialize three pointers: one for the left half, one for the right half, and one for the merged list.
B. Compare the elements pointed to by the left and right pointers. Choose the smaller element, and add it to the merged list.
C. Move the pointer of the list from which the chosen element was taken to the next element.
D. Repeat steps B and C until one of the lists is completely processed.
E. Add any remaining elements from the other list to the merged list.
6. Return the merged list as the final sorted list.
Select the MERGE SORT (MER) option on the top bar, then run the visualization.
Source - https://visualgo.net/en/sorting
def merge(A,B): # Merge two sorted lists A and B
    (m,n) = (len(A),len(B))
    (C,i,j) = ([],0,0)
    # Case 1: both lists A and B still have elements to compare
    while i < m and j < n:
        if A[i] <= B[j]:
            C.append(A[i])
            i += 1
        else:
            C.append(B[j])
            j += 1
    # Case 2: list B is exhausted, move the remaining elements of A to C
    while i < m:
        C.append(A[i])
        i += 1
    # Case 3: list A is exhausted, move the remaining elements of B to C
    while j < n:
        C.append(B[j])
        j += 1
    # Return the sorted merged list
    return C
# Recursively divide the problem into sub-problems to sort the input list L
def mergesort(L):
    n = len(L)
    if n <= 1:  # if the list contains only one element or is empty, return it
        return(L)
    Left_Half = mergesort(L[:n//2])   # recursively sort the left half of the list
    Right_Half = mergesort(L[n//2:])  # recursively sort the right half of the list
    Sorted_Merged_List = merge(Left_Half, Right_Half)  # merge the two sorted halves
    return(Sorted_Merged_List)
Code Execution Flow
Recurrence relation - T(n) = 2T(n/2) + O(n)
Best Case - O(n log n)
Average Case - O(n log n)
Worst Case - O(n log n)
Stable - Yes
Sort in Place - No
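A minimal usage sketch (the input list is illustrative):

L = [5, 2, 9, 1, 5, 6]
print(mergesort(L))  # [1, 2, 5, 5, 6, 9]
print(L)             # [5, 2, 9, 1, 5, 6] -> the original list is unchanged, since merge sort here is not in place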
Keep the time complexity of Python's built-in operations in mind before using them in code where efficiency matters.
https://wiki.python.org/moin/TimeComplexity
List methods
Dictionary methods
Set methods
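As an illustration of why these costs matter, here is a minimal sketch (the sizes and timing harness are illustrative, not from the lecture) contrasting an O(n) membership test on a list with an O(1) average-case membership test on a set:

import time

n = 10**6
L = list(range(n))
S = set(L)

start = time.perf_counter()
-1 in L  # O(n): scans the whole list
print("list:", time.perf_counter() - start)

start = time.perf_counter()
-1 in S  # O(1) average: a single hash lookup
print("set:", time.perf_counter() - start)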