
PDSA - Week 2

Complexity

The complexity of an algorithm is a function describing the efficiency of the algorithm as a function of the amount of input data. There are two main complexity measures of the efficiency of an algorithm:

Space complexity

The space complexity of an algorithm is the amount of memory it needs to run to completion.

Generally, the space needed by an algorithm is the sum of the following two components:

  1. A fixed part - space for the code, constants, and simple variables, independent of the input size.

  2. A variable part - space whose size depends on the input, such as dynamically allocated memory and the recursion stack.

Time complexity

The time complexity of an algorithm is the amount of computer time it needs to run to completion. Computer time represents the number of operations executed by the processor.

Time complexity is calculated for three types of cases:

  1. Best case - the minimum time taken over all inputs of size n.

  2. Average case - the expected time over all inputs of size n.

  3. Worst case - the maximum time taken over all inputs of size n.

Growth rate of functions

The number of operations for an algorithm is usually expressed as a function of the input.

For Example:

The operation-count function for the given code is:

f(n) = 2n² + 2n + 3

Ignore all the constants and coefficients and look only at the highest-order term in relation to n. So f(n) is proportional to n².
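The code being counted is not reproduced here; a hypothetical snippet with roughly this operation count (the name `count_pairs_demo` is illustrative) might look like:

```python
def count_pairs_demo(n):
    """Hypothetical function whose cost is roughly f(n) = 2n^2 + 2n + 3."""
    total = 0                  # constant work
    for i in range(n):         # n iterations
        total += 1             # ~2 operations per iteration (add + assign)
        for j in range(n):     # n iterations for each i
            total += 1         # ~2 operations per inner iteration
    return total               # dominated by the n*n inner loop -> O(n^2)
```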

 

Notations to represent complexity

Asymptotic notations are mathematical notations commonly used to describe the time complexity of an algorithm, i.e., the upper and lower bounds on how an algorithm's running time grows as the input size grows.

Big-Oh(O) - Upper bound:

Big O notation describes the upper bound of an algorithm's running time. Specifically, we use the notation O(f(n)) to describe the maximum growth rate of an algorithm's running time. This means that the algorithm's running time will not grow faster than some constant multiple of f(n) as the input size grows.

Omega(Ω) - Lower bound:

Omega notation, on the other hand, describes the lower bound of an algorithm's running time. Specifically, we use the notation Ω(f(n)) to describe the minimum growth rate of an algorithm's running time. This means that the algorithm's running time will not grow slower than some constant multiple of f(n) as the input size grows.

Theta(Θ) - Tightly bound:

Theta notation describes both the upper and lower bounds of an algorithm's running time. Specifically, we use the notation Θ(f(n)) to describe the tight bound of an algorithm's running time. This means that the algorithm's running time will grow at the same rate as some constant multiple of f(n) as the input size grows.

 

Source - https://mitpress.mit.edu/9780262046305/introduction-to-algorithms/

It's important to note that big O notation describes the upper bound of the growth rate of a function. Some algorithms may have a best-case or average-case time complexity that is much faster than their worst-case time complexity. However, the worst-case time complexity is typically what is used to describe the time complexity of an algorithm.

We often use "big O" notation to describe the growth rate of functions. This notation allows us to compare the time complexity of algorithms and the behavior of functions as their input size grows to infinity.

Here are some common functions and their growth rates, listed from slowest-growing to fastest-growing:

O(1), O(log n), O(n), O(n log n), O(n²), O(n³), O(2ⁿ)

The growth rate of a function is important when analyzing the time complexity of an algorithm. For example, if an algorithm has a time complexity of O(n²), it means that its running time grows quadratically with the input size. If the input size doubles, the running time will increase by a factor of four.

 

Calculate complexity

Complexity for constant-time code

Complexity - O(1)
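A minimal sketch of the constant-time case (the function name is illustrative):

```python
def get_first(items):
    # A single indexing operation: the cost does not depend on len(items)
    return items[0]
```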

Complexity for a single loop

Complexity - O(n)
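A single-loop sketch: the loop body runs once per element, so the cost grows linearly.

```python
def sum_list(items):
    total = 0
    for x in items:   # body executes n times -> O(n)
        total += x
    return total
```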

Complexity for two nested loops

Complexity - O(n²)
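With two nested loops, the inner body executes n × n times:

```python
def count_pairs(n):
    count = 0
    for i in range(n):        # n iterations
        for j in range(n):    # n iterations for each i -> n*n total
            count += 1
    return count              # O(n^2)
```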

Complexity for three nested loops

Complexity - O(n³)
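With three nested loops, the innermost body executes n × n × n times:

```python
def count_triples(n):
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):   # n*n*n body executions -> O(n^3)
                count += 1
    return count
```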

Complexity for a combination of all of the above

Complexity - O(n³)
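When the pieces are combined, the costs add up, and the highest-order term dominates:

```python
def combined(n):
    ops = 0
    ops += 1                    # O(1) work
    for i in range(n):          # O(n) loop
        ops += 1
    for i in range(n):          # O(n^2) nested loops
        for j in range(n):
            ops += 1
    for i in range(n):          # O(n^3) triply nested loops
        for j in range(n):
            for k in range(n):
                ops += 1
    return ops                  # 1 + n + n^2 + n^3 -> O(n^3)
```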

 

Complexity for a recursive solution

Recurrence relation - T(n) = T(n-1) + O(1) = 1 + 1 + 1 + ... (n times)

Complexity - O(n)
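A recursion with this shape does constant work and then recurses on an input smaller by one (the function name is illustrative):

```python
def countdown(n):
    # T(n) = T(n-1) + O(1); unrolls to 1 + 1 + ... (n times) -> O(n)
    if n == 0:                  # base case: O(1)
        return 0
    return 1 + countdown(n - 1) # O(1) work, then recurse on n-1
```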

 

Complexity for a divide-and-conquer recursive solution

Recurrence relation - T(n) = 2T(n/2) + O(n)

Complexity - O(n log n)
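A recursion with this shape splits the input in half, recurses on both halves, and does linear work per level. As a hypothetical example, a recursive maximum using slices (the slicing itself copies O(n) elements per level):

```python
def recursive_max(items):
    # T(n) = 2T(n/2) + O(n): the two slices below each copy about n/2
    # elements, so splitting costs O(n) at every level -> O(n log n)
    if len(items) == 1:
        return items[0]
    mid = len(items) // 2
    left = recursive_max(items[:mid])    # recurse on the left half
    right = recursive_max(items[mid:])   # recurse on the right half
    return left if left > right else right
```

Merge sort (covered below) has the same recurrence, with the O(n) term coming from the merge step.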

 

 

Searching Algorithms

Linear search and Binary search working visualization

https://www.cs.usfca.edu/~galles/visualization/Search.html

 

Linear search is a simple search algorithm that checks each element of the list in order until the target value is found or the end of the list is reached.

Implementation
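A minimal linear search sketch matching the analysis below (the name `linear_search` and the -1 "not found" return are assumptions):

```python
def linear_search(items, target):
    # Scan left to right; worst case checks all n elements -> O(n)
    for index, value in enumerate(items):
        if value == target:
            return index     # best case: target at index 0 -> O(1)
    return -1                # target absent: n comparisons were made
```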

Analysis

Best Case - O(1)

Average Case - O(n)

Worst Case - O(n)

 

Binary search is an efficient search algorithm used to find the position of a target value within a sorted list. The algorithm compares the target value to the middle element of the list. If the target value is equal to the middle element, the search is complete. Otherwise, the algorithm recursively searches either the left or right half of the list, depending on whether the target value is greater or less than the middle element.

Algorithm:

  1. Start with a sorted list and a target value.

  2. Set the low and high pointers to the first and last indices of the list, respectively.

  3. While the low pointer is less than or equal to the high pointer:

    A. Calculate the middle index as the average of the low and high pointers (round down if necessary).

    B. If the target value is equal to the middle element of the list, return the middle index.

    C. If the target value is less than the middle element, set the high pointer to the index before the middle index.

    D. If the target value is greater than the middle element, set the low pointer to the index after the middle index.

  4. If the target value is not found in the list, return False (or some other indication that the value is not present).

 

Iterative Implementation
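The steps above can be sketched iteratively as follows (returning False for an absent target, as in step 4):

```python
def binary_search(items, target):
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2          # middle index, rounded down
        if items[mid] == target:
            return mid                   # found: return the index
        elif target < items[mid]:
            high = mid - 1               # search the left half
        else:
            low = mid + 1                # search the right half
    return False                         # target not present
```

Each iteration halves the search range, so at most O(log n) comparisons are made.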

 

Code Execution Flow

 

Recursive Implementation without slicing
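A recursive sketch that passes low/high indices instead of slicing, so each call does O(1) work and the overall time stays O(log n):

```python
def binary_search_recursive(items, target, low, high):
    if low > high:
        return False                     # empty range: target not present
    mid = (low + high) // 2
    if items[mid] == target:
        return mid
    elif target < items[mid]:
        return binary_search_recursive(items, target, low, mid - 1)
    else:
        return binary_search_recursive(items, target, mid + 1, high)
```

Called as `binary_search_recursive(items, target, 0, len(items) - 1)`.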

 

Code Execution Flow

 

 

Lecture Implementation (not recommended; does not achieve O(log n) time)

*Due to the use of slicing, this implementation takes O(n) time.
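A sketch of a slicing-based version illustrating the problem (this returns True/False rather than an index, since slicing loses the original positions):

```python
def binary_search_slicing(items, target):
    # Each recursive call copies half the list via slicing, which is
    # O(n) work per level, so this version is O(n) overall, not O(log n)
    if not items:
        return False
    mid = len(items) // 2
    if items[mid] == target:
        return True
    elif target < items[mid]:
        return binary_search_slicing(items[:mid], target)   # O(n) copy
    else:
        return binary_search_slicing(items[mid + 1:], target)
```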

 

Analysis

Best Case - O(1)

Average Case - O(log n)

Worst Case - O(log n)

 

Sorting Algorithms

Selection Sort

Selection sort is a simple sorting algorithm that works by repeatedly finding the minimum element from an unsorted part of the list and placing it at the beginning of the sorted part of the list.

Algorithm:

  1. Repeat steps 2-5 for the unsorted part of the list until the entire list is sorted.

  2. Set the first element as the minimum value in the remaining unsorted part of the list.

  3. Iterate through the unsorted part of the list, comparing each element to the current minimum value.

  4. If a smaller element is found, update the minimum value.

  5. After iterating through the unsorted part of the list, swap the minimum value with the first element of the unsorted part of the list.

 

Visualization

Select the SELECTION SORT (SEL) option on the top bar, then run the visualization.

Source - https://visualgo.net/en/sorting

 

Implementation
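The steps above can be sketched as an in-place implementation (names are illustrative):

```python
def selection_sort(items):
    n = len(items)
    for i in range(n):
        # Find the index of the minimum element in the unsorted part
        min_index = i
        for j in range(i + 1, n):
            if items[j] < items[min_index]:
                min_index = j
        # Swap the minimum into the front of the unsorted part
        items[i], items[min_index] = items[min_index], items[i]
    return items
```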

 

Code Execution Flow

 

 

Analysis

Best Case - n + (n-1) + (n-2) + ... + 2 + 1 = n(n+1)/2 = O(n²)

Average Case - n + (n-1) + (n-2) + ... + 2 + 1 = n(n+1)/2 = O(n²)

Worst Case - n + (n-1) + (n-2) + ... + 2 + 1 = n(n+1)/2 = O(n²)

Stable - No

Sort in Place - Yes

 

Insertion Sort

Insertion sort is a simple sorting algorithm that builds the sorted part of the list one element at a time, taking each new element and moving it backwards (by repeated swaps of adjacent elements) until it sits in its correct position among the elements already sorted.

Algorithm:

  1. Iterate through the list from the second element to the end (i.e., the element at index 1 to the last element at index n-1).

  2. For each element, compare it with the previous element.

  3. If the previous element is greater than the current element, swap the two elements.

  4. Continue to compare the current element with the previous element until the previous element is not greater than the current element or the current element is at the beginning of the list.

  5. Repeat steps 2-4 for all elements in the list.

 

Visualization

Select the INSERTION SORT (INS) option on the top bar, then run the visualization.

 

Source - https://visualgo.net/en/sorting

Implementation
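The steps above can be sketched as a swap-based, in-place implementation:

```python
def insertion_sort(items):
    for i in range(1, len(items)):       # element 0 is trivially sorted
        j = i
        # Swap the current element backwards until the element before it
        # is no longer greater, or it reaches the front of the list
        while j > 0 and items[j - 1] > items[j]:
            items[j - 1], items[j] = items[j], items[j - 1]
            j -= 1
    return items
```

On an already-sorted list the while loop never runs, which gives the O(n) best case.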

 

Code Execution Flow

 

Analysis

Best Case - 1 + 1 + ... + 1 (n times) = n = O(n) - if the list is already sorted

Average Case - n + (n-1) + (n-2) + ... + 2 + 1 = n(n+1)/2 = O(n²)

Worst Case - n + (n-1) + (n-2) + ... + 2 + 1 = n(n+1)/2 = O(n²)

Stable - Yes

Sort in Place - Yes

 

Merge Sort

Merge sort is a popular sorting algorithm that uses the divide-and-conquer technique. It works by recursively dividing a list into two halves, sorting each half separately, and then merging them back together into a single sorted list.

Algorithm:

  1. If the list contains only one element or is empty, it is already sorted. Return the list.

  2. Divide the list into two halves, using the middle index as the divider.

  3. Recursively sort the left half of the list by applying the merge sort algorithm on it.

  4. Recursively sort the right half of the list by applying the merge sort algorithm on it.

  5. Merge the two sorted halves back together to form a single sorted list. To do this:

    A. Initialize three pointers: one for the left half, one for the right half, and one for the merged list.

    B. Compare the elements pointed to by the left and right pointers. Choose the smaller element, and add it to the merged list.

    C. Move the pointer of the list from which the chosen element was taken to the next element.

    D. Repeat steps B and C until one of the lists is completely processed.

    E. Add any remaining elements from the other list to the merged list.

  6. Return the merged list as the final sorted list.

 

Visualization

Select the MERGE SORT (MER) option on the top bar, then run the visualization.

 

Source - https://visualgo.net/en/sorting

Implementation
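The steps above can be sketched as follows (this version returns a new list, matching the "not in place" analysis below):

```python
def merge_sort(items):
    if len(items) <= 1:                 # base case: already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])      # sort each half recursively
    right = merge_sort(items[mid:])
    # Merge the two sorted halves with three pointers
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:         # <= keeps equal elements in
            merged.append(left[i])      # order, so the sort is stable
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])             # append any remaining elements
    merged.extend(right[j:])
    return merged
```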

 

Code Execution Flow

 

 

Analysis

Recurrence relation - T(n) = 2T(n/2) + O(n)

Best Case - O(n log n)

Average Case - O(n log n)

Worst Case - O(n log n)

Stable - Yes

Sort in Place - No

 

Complexity of Python data structure methods

Keep these complexities in mind when choosing a method for efficiency.

https://wiki.python.org/moin/TimeComplexity

List methods

Dictionary methods

Set Methods
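A few of the costs from the page above, annotated on common operations (these are typical average-case costs in CPython; see the linked page for the full tables):

```python
items = [1, 2, 3]
items.append(4)        # O(1) amortized
items.insert(0, 0)     # O(n): shifts every element right
assert 0 in items      # O(n): linear scan of the list

d = {"a": 1}
d["b"] = 2             # O(1) average: hash table insert
assert "a" in d        # O(1) average: hash lookup

s = {1, 2, 3}
s.add(4)               # O(1) average
assert 2 in s          # O(1) average
```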