Algorithms: Concepts, Design, and Analysis (Essays, university), Computer Systems Networking and Telecommunications
2018/2019, uploaded on 02/11/2019 by dev-n

Toggle sidebar

This page cannot be seen from the preview

Don't miss anything!

bg1
DESIGN AND ANALYSIS OF ALGORITHMS
UNIT I
An algorithm is a set of steps of operations to solve a problem performing calculation, data processing,
and automated reasoning tasks. An algorithm is an efficient method that can be expressed within finite amount
of time and space.
An algorithm is the best way to represent the solution of a particular problem in a very simple and
efficient way. If we have an algorithm for a specific problem, then we can implement it in any programming
language, meaning that the algorithm is independent from any programming languages.
Algorithm Design:
The important aspects of algorithm design include creating an efficient algorithm to solve a problem in
an efficient way using minimum time and space.
To solve a problem, different approaches can be followed. Some of them can be efficient with respect to
time consumption, whereas other approaches may be memory efficient. However, one has to keep in mind that
both time consumption and memory usage cannot be optimized simultaneously. If we require an algorithm to
run in lesser time, we have to invest in more memory and if we require an algorithm to run with lesser memory,
we need to have more time.
Characteristics of Algorithms
The main characteristics of algorithms are as follows:
Algorithms must have a unique name
Algorithms should have explicitly defined set of inputs and outputs
Algorithms are well-ordered with unambiguous operations
Algorithms halt in a finite amount of time. Algorithms should not run for infinity, i.e., an algorithm must
end at some point
There are three commonly used tools to help to document program logic (the algorithm). These are
Flowcharts
Pseudo code.
Algorithm
Generally, flowcharts work well for small problems but Pseudocode is used for larger problems. Pseudo-
Code is simply a numbered list of instructions to perform some task.
Pseudocode gives a high-level description of an algorithm without the ambiguity associated with plain
text but also without the need to know the syntax of a particular programming language.
Statements are written in simple English without regard to the final programming language.
Each instruction is written on a separate line.
The pseudo-code is the program-like statements written for human readers, not for computers. Thus, the
pseudo-code should be readable by anyone who has done a little programming.
Implementation is to translate the pseudo-code into programs/software, such as “C++” language
programs. Here is a pseudo code which describes how the high level abstract process mentioned
above in the algorithm Insertion-Sort could be described in a more realistic way.
for i 1 to length(A)
x A[i]
j i
while j > 0 and A[j-1] > x
A[j] A[j-1]
j j - 1
A[j] x
Algorithm:
An algorithm is a formal definition with some specific characteristics that describes a process, which
could be executed by a Turing-complete computer machine to perform a specific task. Generally, the word
pf3
pf4
pf5
pf8
pf9
pfa
pfd
pfe
pff
pf12
pf13
pf14
pf15
pf16
pf17

Partial preview of the text

Download Algorithms: Concepts, Design, and Analysis and more Essays (university) Computer Systems Networking and Telecommunications in PDF only on Docsity!

DESIGN AND ANALYSIS OF ALGORITHMS

UNIT I

An algorithm is a set of steps of operations to solve a problem by performing calculation, data processing, and automated reasoning tasks. An algorithm is an effective method that can be expressed within a finite amount of time and space. An algorithm is a way to represent the solution of a particular problem simply and efficiently. If we have an algorithm for a specific problem, then we can implement it in any programming language, meaning that the algorithm is independent of any particular programming language.

Algorithm Design: The important aspects of algorithm design include creating an efficient algorithm that solves a problem using minimum time and space. Different approaches can be followed to solve a problem: some are efficient with respect to time consumption, whereas others are more memory efficient. However, time consumption and memory usage cannot always be optimized simultaneously. If we require an algorithm to run in less time, we may have to invest more memory, and if we require an algorithm to run with less memory, we may need more time.

Characteristics of Algorithms The main characteristics of algorithms are as follows:

  • Algorithms must have a unique name
  • Algorithms should have an explicitly defined set of inputs and outputs
  • Algorithms are well-ordered, with unambiguous operations
  • Algorithms halt in a finite amount of time; an algorithm must not run forever and must end at some point

There are three commonly used tools to help to document program logic (the algorithm). These are

  • Flowcharts
  • Pseudo code.
  • Algorithm

Generally, flowcharts work well for small problems, while pseudocode is used for larger problems. Pseudocode is simply a numbered list of instructions to perform some task. Pseudocode gives a high-level description of an algorithm without the ambiguity associated with plain text, but also without the need to know the syntax of a particular programming language. Statements are written in simple English without regard to the final programming language, and each instruction is written on a separate line. Pseudocode consists of program-like statements written for human readers, not for computers; it should be readable by anyone who has done a little programming. Implementation is translating the pseudocode into programs/software, such as C++ programs. Here is pseudocode describing how the high-level abstract process of the Insertion-Sort algorithm (given below) could be expressed in a more concrete way:

for i ← 1 to length(A) - 1
    x ← A[i]
    j ← i
    while j > 0 and A[j-1] > x
        A[j] ← A[j-1]
        j ← j - 1
    A[j] ← x
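As a sketch of the "implementation" step described above, the insertion-sort pseudocode translates almost line for line into Python (0-indexed list, so the loop starts at the second element):

```python
def insertion_sort(A):
    """Direct translation of the insertion-sort pseudocode above."""
    for i in range(1, len(A)):
        x = A[i]                        # element to insert
        j = i
        # shift larger elements of the sorted prefix one slot right
        while j > 0 and A[j - 1] > x:
            A[j] = A[j - 1]
            j -= 1
        A[j] = x                        # insert x into its correct position
    return A

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # → [1, 2, 3, 4, 5, 6]
```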

Algorithm: An algorithm is a formal definition with some specific characteristics that describes a process which could be executed by a Turing-complete machine to perform a specific task. Generally, the word "algorithm" can be used to describe any high-level task in computer science. For example, the following is an algorithm for Insertion Sort.

Algorithm: Insertion-Sort
Input: A list L of integers of length n
Output: A sorted list L1 containing those integers present in L
Step 1: Keep a sorted list L1 which starts off empty
Step 2: Perform Step 3 for each element in the original list L
Step 3: Insert it into the correct position in the sorted list L1
Step 4: Return the sorted list
Step 5: Stop

Flowchart: A flowchart is a visual representation of the sequence of steps and decisions needed to perform a process. Each step in the sequence is noted within a diagram shape. Steps are linked by connecting lines and directional arrows. This allows anyone to view the flowchart and logically follow the process from beginning to end.

Framework for Analysis:

We use a hypothetical model with following assumptions

  • Total time taken by the algorithm is given as a function of its input size
  • Logical units are identified as one step
  • Every step requires ONE unit of time
  • Total time taken = total number of steps executed

Input’s size: The time required by an algorithm is proportional to the size of the problem instance. For example, more time is required to sort 20 elements than to sort 10 elements.

Units for Measuring Running Time: Count the number of times an algorithm’s basic operation is executed. (Basic operation: the most important operation of the algorithm, the one contributing the most to the total running time.) The basic operation is usually the most time-consuming operation in the algorithm’s innermost loop.

Big Oh Notation (Ο): The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time. It measures the worst case time complexity, or the longest amount of time an algorithm can possibly take to complete.

For example, for a function f(n): Ο(f(n)) = { g(n) : there exist c > 0 and n0 such that f(n) ≤ c·g(n) for all n > n0 }

Omega Notation(Ω): The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best case time complexity or the best amount of time an algorithm can possibly take to complete.

For example, for a function f(n): Ω(f(n)) = { g(n) : there exist c > 0 and n0 such that g(n) ≤ c·f(n) for all n > n0 }

Theta Notation(θ): The notation θ(n) is the formal way to express both the lower bound and the upper bound of an algorithm's running time. It is represented as follows −

θ(f(n)) = { g(n) : g(n) = Ο(f(n)) and g(n) = Ω(f(n)) for all n > n0 }

Basic efficiency classes:

The time efficiencies of a large number of algorithms fall into only a few classes.

Divide and Conquer

In the divide and conquer approach, the problem at hand is divided into smaller sub-problems, and then each sub-problem is solved independently. When we keep dividing the sub-problems into even smaller sub-problems, we eventually reach a stage where no more division is possible. Those "atomic", smallest possible sub-problems are solved, and the solutions of all sub-problems are finally merged to obtain the solution of the original problem. A Divide and Conquer algorithm solves a problem using the following three steps.

  • Divide the problem into a number of sub problems that are smaller instances of the same problem.
  • Conquer the sub problems by solving them recursively. If they are small enough, solve the sub problems as base cases.
  • Combine the solutions to the sub problems into the solution for the original problem.

Maximum subarray problem: The maximum subarray problem is the task of finding the contiguous subarray within a one-dimensional array a[1...n] of numbers which has the largest sum. The list usually contains both positive and negative numbers along with 0. For example, for the array of values −2, 1, −3, 4, −1, 2, 1, −5, 4, the contiguous subarray with the largest sum is 4, −1, 2, 1, with sum 6. Using the Divide and Conquer approach, we can find the maximum subarray sum in O(n log n) time. Following is the Divide and Conquer algorithm.

  1. Divide the given array in two halves
  2. Return the maximum of following three a) Maximum subarray sum in left half (Make a recursive call) b) Maximum subarray sum in right half (Make a recursive call) c) Maximum subarray sum such that the subarray crosses the midpoint

Algorithm: MaxSubarray(A, low, high)
    if high == low    // base case: only one element
        return (low, high, A[low])
    else    // divide
        mid = floor((low + high) / 2)
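The pseudocode above breaks off after the divide step in the scanned text. A minimal Python sketch of the full divide-and-conquer scheme (returning only the maximum sum, not the bounds) might look like this:

```python
def max_crossing(A, low, mid, high):
    """Best sum of a subarray that crosses the midpoint."""
    left_sum, s = float("-inf"), 0
    for i in range(mid, low - 1, -1):      # extend leftwards from mid
        s += A[i]
        left_sum = max(left_sum, s)
    right_sum, s = float("-inf"), 0
    for j in range(mid + 1, high + 1):     # extend rightwards from mid+1
        s += A[j]
        right_sum = max(right_sum, s)
    return left_sum + right_sum

def max_subarray(A, low, high):
    if low == high:                        # base case: only one element
        return A[low]
    mid = (low + high) // 2
    return max(max_subarray(A, low, mid),          # a) left half
               max_subarray(A, mid + 1, high),     # b) right half
               max_crossing(A, low, mid, high))    # c) crossing subarray

A = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
print(max_subarray(A, 0, len(A) - 1))   # → 6
```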

In the above method, we do 8 multiplications for matrices of size N/2 × N/2 and 4 additions. Addition of two matrices takes O(N^2) time. So the time complexity can be written as T(N) = 8T(N/2) + O(N^2), which solves to O(N^3).

In the above divide and conquer method, the main contributor to the high time complexity is the 8 recursive calls. The idea of Strassen’s method is to reduce the number of recursive calls to 7. Strassen’s method is similar to the simple divide and conquer method in that it also divides the matrices into sub-matrices of size N/2 × N/2, but in Strassen’s method, the four sub-matrices of the result are calculated using the following formulae.
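The figure with the formulae did not survive the scan; what follows are the standard seven Strassen products and the four result blocks, sketched on scalars for brevity (in the real algorithm a, b, c, d and e, f, g, h would be N/2 × N/2 sub-matrices and * would be a recursive call):

```python
def strassen_2x2(A, B):
    """Strassen's seven products for a 2x2 block multiplication.
    A = [[a, b], [c, d]], B = [[e, f], [g, h]]."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # the four sub-matrices of the result
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # → [[19, 22], [43, 50]]
```

Only 7 multiplications are used instead of 8, at the cost of more additions and subtractions.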

Time Complexity of Strassen’s Method: Addition and subtraction of two matrices takes O(N^2) time, so the time complexity can be written as T(N) = 7T(N/2) + O(N^2). The time complexity of this method is O(N^(log2 7)), which is approximately O(N^2.8074).

Recurrences: A recurrence relation is an equation that uses recursion to relate terms in a sequence or elements in an array. It is a way to define a sequence or array in terms of itself. Recurrence relations have applications in many areas of mathematics:

  • number theory - the Fibonacci sequence
  • combinatorics - distribution of objects into bins
  • calculus - Euler's method

There are mainly three ways for solving recurrences.

  • Substitution Method
  • Recurrence Tree Method
  • Master Method

Substitution Method: We make a guess for the solution and then use mathematical induction to prove that the guess is correct or incorrect. For example, consider the recurrence T(n) = 2T(n/2) + n. We guess the solution to be T(n) = O(n log n), and use induction to prove the guess: we need to show that T(n) ≤ c·n·log n, assuming this holds for values smaller than n.

T(n) = 2T(n/2) + n
     ≤ 2(c·(n/2)·log(n/2)) + n
     = c·n·log(n/2) + n
     = c·n·log n − c·n·log 2 + n
     = c·n·log n − c·n + n
     ≤ c·n·log n    (for c ≥ 1, logarithm base 2)
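The guess can also be sanity-checked numerically. The sketch below evaluates the recurrence exactly for powers of two (assuming the base case T(1) = 1) and checks the bound with c = 2:

```python
from math import log2

def T(n):
    """T(n) = 2*T(n/2) + n with T(1) = 1, for n a power of two."""
    return 1 if n == 1 else 2 * T(n // 2) + n

# verify the guess T(n) <= c*n*log2(n) with c = 2 for n = 2 .. 2^15
for k in range(1, 16):
    n = 2 ** k
    assert T(n) <= 2 * n * log2(n)
print("guess holds")
```

The exact solution here is T(2^k) = 2^k·(k + 1), so the bound 2·n·log2 n holds for every n ≥ 2.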

Recurrence Tree Method: In this method, we draw a recurrence tree and calculate the time taken by every level of the tree. Finally, we sum the work done at all levels. To draw the recurrence tree, we start from the given recurrence and keep drawing until we find a pattern among the levels. The pattern is typically an arithmetic or geometric series. For example, consider the recurrence relation T(n) = T(n/4) + T(n/2) + cn^2:

              cn^2
             /    \
        T(n/4)    T(n/2)

If we further break down the expressions T(n/4) and T(n/2), we get the following recursion tree:

              cn^2
             /    \
     c(n^2)/16    c(n^2)/4
       /    \       /    \
  T(n/16) T(n/8) T(n/8) T(n/4)

Breaking down further gives us:

              cn^2
             /    \
     c(n^2)/16    c(n^2)/4
       /    \        /    \
c(n^2)/256 c(n^2)/64 c(n^2)/64 c(n^2)/16
   /  \     /  \      /  \     /  \

To know the value of T(n), we need to calculate the sum of the tree nodes level by level. Summing the above tree level by level, we get the series T(n) = c(n^2 + 5(n^2)/16 + 25(n^2)/256 + ...). This series is a geometric progression with ratio 5/16. To get an upper bound, we can sum the infinite series: the sum is (cn^2)/(1 − 5/16) = (16/11)cn^2, which is O(n^2).
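The geometric-series bound can be checked numerically. The sketch below evaluates the recurrence (with c = 1 and the assumption that work stops once the subproblem size drops below 1) and compares it against (16/11)·n^2:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = T(n/4) + T(n/2) + n^2 with c = 1, cutting off below size 1."""
    if n < 1:
        return 0.0
    return T(n / 4) + T(n / 2) + n * n

n = 2 ** 20
# level sums form a geometric series with ratio 5/16, so T(n) <= (16/11)*n^2
assert T(n) <= (16 / 11) * n * n
print("bound holds")
```

The cache keeps this fast: every subproblem size is n divided by a power of two, so only a few dozen distinct values are ever computed.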

Rod Cutting: Consider a rod of size 4 with prices p(1) = 1, p(2) = 5, p(3) = 8, p(4) = 9 for the pieces of sizes 1 to 4. The possible ways to cut the rod are:

4 rods of size 1 (achieved by 3 cuts) = 4 × p(1) = 4 × 1 = 4
2 rods of size 1 + 1 rod of size 2 (achieved by 2 cuts) = 2 × p(1) + 1 × p(2) = 2 × 1 + 5 = 7
2 rods of size 2 (achieved by 1 cut) = 2 × p(2) = 2 × 5 = 10
1 rod of size 1 + 1 rod of size 3 (achieved by 2 cuts) = 1 × p(1) + 1 × p(3) = 1 + 8 = 9
Original rod of size 4 (achieved by no cuts) = 1 × p(4) = 9

Thus, the maximum revenue possible is 10, achieved by making a cut at size 2, splitting the original rod into two rods of size 2 with no further cuts in either of them.
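The same answer falls out of a bottom-up dynamic program. This sketch uses the prices p(1)..p(4) from the worked example above (the full price table for larger rods is not given in the excerpt):

```python
def rod_cutting(prices, n):
    """prices[i] = price of a rod of length i+1; returns the maximum
    revenue obtainable from a rod of length n."""
    r = [0] * (n + 1)                 # r[j] = best revenue for length j
    for j in range(1, n + 1):
        # try every length i for the first piece, recurse on the rest
        r[j] = max(prices[i - 1] + r[j - i] for i in range(1, j + 1))
    return r[n]

# prices for lengths 1..4 as in the worked example
print(rod_cutting([1, 5, 8, 9], 4))   # → 10
```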

Matrix Chain Multiplication: Given a sequence of matrices, find the most efficient way to multiply these matrices together. The problem is not actually to perform the multiplications, but merely to decide in which order to perform them.

We have many options for multiplying a chain of matrices because matrix multiplication is associative: no matter how we parenthesize the product, the result is the same. For example, with four matrices A, B, C, and D, we have (ABC)D = (AB)(CD) = A(BCD) = .... However, the order in which we parenthesize the product affects the number of simple arithmetic operations needed to compute it, i.e., the efficiency. For example, suppose A is a 10 × 30 matrix, B is a 30 × 5 matrix, and C is a 5 × 60 matrix. Then:

(AB)C = (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 operations
A(BC) = (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 operations

Clearly the first parenthesization requires fewer operations.

Using dynamic programming: Take the sequence of matrices and separate it into two subsequences. Find the minimum cost of multiplying out each subsequence, add these costs together, and add the cost of multiplying the two result matrices. Do this for each possible position at which the sequence of matrices can be split, and take the minimum over all of them.
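The splitting scheme just described can be sketched as a bottom-up table computation (the function name is illustrative):

```python
def matrix_chain_order(p):
    """p: list of dimensions; matrix i has shape p[i-1] x p[i].
    Returns the minimum number of scalar multiplications."""
    n = len(p) - 1                          # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # try every split point k and take the cheapest
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]

# A: 10x30, B: 30x5, C: 5x60 -> best order is (AB)C
print(matrix_chain_order([10, 30, 5, 60]))   # → 4500
```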

Elements of dynamic programming: There are three basic elements that characterize a dynamic programming algorithm: optimal substructure, table structure (memoization), and overlapping sub-problems.

  1. Substructure (Optimal Substructure): Decompose the given problem into smaller (and hopefully simpler) subproblems. Express the solution of the original problem in terms of solutions for the smaller problems. Note that, unlike divide-and-conquer problems, it is not usually sufficient to consider one decomposition, but many different ones.
  2. Table Structure (Memoization): After solving the subproblems, store their answers (results) in a table. This is done because subproblem solutions are typically reused many times, and we do not want to repeatedly solve the same problem over and over again.
  3. Bottom-up Computation (Overlapping Sub-problems): Using the table, combine solutions of smaller subproblems to solve larger subproblems, and eventually arrive at a solution to the complete problem. The idea of bottom-up computation is as follows: start with the smallest subproblems; combining their solutions, obtain the solutions to subproblems of increasing size; continue until arriving at the solution of the original problem.

Longest Common Subsequence

The Longest Common Subsequence (LCS) problem is one more example of a problem that can be solved using Dynamic Programming. LCS Problem Statement: Given two sequences, find the length of the longest subsequence present in both of them. A subsequence is a sequence that appears in the same relative order, but is not necessarily contiguous.
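A sketch of the standard LCS dynamic program, filling a table of prefix answers bottom-up:

```python
def lcs_length(X, Y):
    """Length of the longest common subsequence of X and Y."""
    m, n = len(X), len(Y)
    # L[i][j] = length of the LCS of X[:i] and Y[:j]
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1   # characters match
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n]

print(lcs_length("AGGTAB", "GXTXAYB"))   # → 4  (the LCS is "GTAB")
```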

The cost of tree I is 34×1 + 50×2 = 134
The cost of tree II is 50×1 + 34×2 = 118

Example 2 Input: keys[] = {10, 12, 20}, freq[] = {34, 8, 50}

There can be five possible BSTs over these keys (the tree diagrams are not reproduced here). Among all possible BSTs, the cost of the fifth BST is minimum: its cost is 1×50 + 2×34 + 3×8 = 142.

Greedy Algorithm: Greedy is an algorithmic paradigm that builds up a solution piece by piece, always choosing the next piece that offers the most obvious and immediate benefit. Greedy algorithms are used for optimization problems. An optimization problem can be solved using a greedy approach if the problem has the following property: at every step, we can make the choice that looks best at the moment and still obtain the optimal solution of the complete problem. If a greedy algorithm can solve a problem, it generally becomes the best method to solve that problem, as greedy algorithms are in general more efficient than other techniques like dynamic programming. But greedy algorithms cannot always be applied.

Activity Selection Problem: Let us consider the Activity Selection problem as our first example of greedy algorithms. Following is the problem statement: you are given n activities with their start and finish times. Select the maximum number of activities that can be performed by a single person, assuming that a person can only work on a single activity at a time.

  1. Sort the activities according to their finishing time.
  2. Select the first activity from the sorted array and print it.
  3. For each remaining activity in the sorted array: if its start time is greater than or equal to the finish time of the previously selected activity, select this activity and print it.

Activity:    A1  A2  A3  A4  A5  A6  A7  A8
Start time:   1   0   1   4   2   5   3   4
Finish time:

Sort the activities according to their finishing time:

Activity:    A3  A1  A2  A7  A8  A4  A6  A5
Start time:   1   1   0   3   4   4   5   2
Finish time:

Select the first activity from the sorted array and print it. Applying the greedy rule gives:

Activity:    A3  A7  A6
Start time:   1   3   5
Finish time:  2   5   8

Example 1: Consider the following 3 activities sorted by finish time: start[] = {10, 12, 20}; finish[] = {20, 25, 30}. A person can perform at most two activities. The maximum set of activities that can be executed is {0, 2} (these are indexes in start[] and finish[]).

GREEDY-ACTIVITY-SELECTOR(s, f)
1  n ← length[s]
2  A ← {1}
3  j ← 1
4  for i ← 2 to n
5      do if s_i ≥ f_j
6          then A ← A ∪ {i}
7              j ← i
8  return A
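A Python sketch of the same selector, returning the indexes of the chosen activities (the function name is illustrative; sorting by finish time is done here rather than assumed):

```python
def activity_selection(start, finish):
    """Greedy activity selection: sort by finish time, then pick each
    activity whose start is at or after the last selected finish."""
    order = sorted(range(len(start)), key=lambda i: finish[i])
    selected = [order[0]]              # first activity always chosen
    last_finish = finish[order[0]]
    for i in order[1:]:
        if start[i] >= last_finish:    # compatible with previous choice
            selected.append(i)
            last_finish = finish[i]
    return selected

# Example 1 from the text
print(activity_selection([10, 12, 20], [20, 25, 30]))   # → [0, 2]
```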

Time Complexity: It takes O(n log n) time if the input activities are not sorted, and O(n) time when it is given that the input activities are already sorted.

Elements of the greedy strategy: A greedy algorithm obtains an optimal solution to a problem by making a sequence of choices. For each decision point in the algorithm, the choice that seems best at the moment is chosen. This heuristic strategy does not always produce an optimal solution. The elements of the greedy strategy are the greedy-choice property and optimal substructure.

Greedy-choice property: a globally optimal solution can be arrived at by making a locally optimal (greedy) choice. Here is where greedy algorithms differ from dynamic programming: in dynamic programming, we make a choice at each step, but the choice may depend on the solutions to subproblems; in a greedy algorithm, we make whatever choice seems best at the moment and then solve the subproblems arising after the choice is made.

Optimal substructure: a problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to subproblems.

Huffman Coding: Huffman coding is a lossless data compression algorithm. The idea is to assign variable-length codes to input characters, where the lengths of the assigned codes are based on the frequencies of the corresponding characters. The most frequent character gets the smallest code and the least frequent character gets the largest code. There are two major parts in Huffman coding:

  1. Build a Huffman Tree from input characters.
  2. Traverse the Huffman Tree and assign codes to characters.

Steps to build a Huffman Tree: The input is an array of unique characters along with their frequencies of occurrence, and the output is a Huffman Tree.
  1. Create a leaf node for each unique character and build a min heap of all leaf nodes. (The min heap is used as a priority queue; the frequency field is used to compare two nodes, so initially the least frequent character is at the root.)
  2. Extract the two nodes with the minimum frequency from the min heap.
  3. Create a new internal node with frequency equal to the sum of the two nodes' frequencies. Make the first extracted node its left child and the other extracted node its right child. Add this node to the min heap.
  4. Repeat steps 2 and 3 until the heap contains only one node. The remaining node is the root node and the tree is complete.

Let us understand the algorithm with an example:

character   frequency
a           5
b           9
c           12
d           13
e           16
f           45

Step 6: Extract the two minimum frequency nodes (45 and the internal node 55). Add a new internal node with frequency 45 + 55 = 100.

Now the min heap contains only one node (the internal node with frequency 100), so the algorithm stops here.

Steps to print codes from the Huffman Tree: Traverse the tree starting from the root, maintaining an auxiliary array. While moving to the left child, write 0 to the array; while moving to the right child, write 1 to the array. Print the array when a leaf node is encountered.

The codes are as follows:

character   code-word
f           0
c           100
d           101
a           1100
b           1101
e           111
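The construction above can be sketched with Python's heapq as the min heap. Note that different but equally optimal tie-breaking can produce different code words than the table above; what is invariant is the total encoded length, which for these frequencies is 45·1 + 12·3 + 13·3 + 16·3 + 5·4 + 9·4 = 224 bits:

```python
import heapq
from itertools import count

def huffman_codes(freq):
    """Build a Huffman tree with a min heap and return {char: code}."""
    tiebreak = count()              # avoids comparing tree nodes on ties
    heap = [(f, next(tiebreak), ch) for ch, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two minimum-frequency nodes
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (left, right)))
    codes = {}
    def walk(node, code):
        if isinstance(node, tuple):          # internal node
            walk(node[0], code + "0")        # left edge -> 0
            walk(node[1], code + "1")        # right edge -> 1
        else:
            codes[node] = code or "0"        # single-character edge case
    walk(heap[0][2], "")
    return codes

freq = {"a": 5, "b": 9, "c": 12, "d": 13, "e": 16, "f": 45}
codes = huffman_codes(freq)
print(sum(freq[ch] * len(codes[ch]) for ch in freq))   # → 224
```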

Matroid: A matroid is a combinatorial structure that provides an abstraction for many problems in areas such as linear algebra and graph theory. Let S be a finite set, and F a nonempty family of subsets of S, that is, F ⊆ P(S). We call (S, F) a matroid if and only if:
M1) If B ∈ F and A ⊆ B, then A ∈ F. (The family F is called hereditary.)
M2) If A, B ∈ F and |A| < |B|, then there exists x in B \ A such that A ∪ {x} ∈ F. (This is called the exchange property.)

TASK SCHEDULING PROBLEM: Given an array of jobs where every job has a deadline and an associated profit if the job is finished before the deadline. It is also given that every job takes a single unit of time, so the minimum possible deadline for any job is 1. How do we maximize total profit if only one job can be scheduled at a time?

A simple solution is to generate all subsets of the given set of jobs and check each subset for feasibility of the jobs in it, keeping track of the maximum profit among all feasible subsets. The time complexity of this solution is exponential. This is a standard greedy algorithm problem; following is the algorithm.

  1. Sort all jobs in decreasing order of profit.
  2. Initialize the result sequence as the first job in the sorted order.
  3. For each of the remaining n-1 jobs: if the current job can fit in the current result sequence without missing a deadline, add it to the result; else ignore it.

Example:

Index:     1   2   3   4   5
Job:       J1  J2  J3  J4  J5
Deadline:  2   1   3   2   1
Profit:    30  50  10  20  10

Sort all jobs in decreasing order of profit:

Index:     1   2   3   4   5
Job:       J2  J1  J4  J3  J5
Deadline:  1   2   2   3   1
Profit:    50  30  20  10  10

The resulting job schedule is J2, J1, and J3. The total profit is 50 + 30 + 10 = 90.
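The greedy job sequencing above can be sketched in Python. This sketch uses the common slot-array variant: each job is placed in the latest free time slot at or before its deadline, which produces the same schedule as the "add if it fits" rule on this example:

```python
def job_sequencing(jobs):
    """jobs: list of (name, deadline, profit) with unit-time jobs.
    Returns (scheduled job names in time order, total profit)."""
    jobs = sorted(jobs, key=lambda j: -j[2])     # decreasing profit
    max_deadline = max(d for _, d, _ in jobs)
    slots = [None] * (max_deadline + 1)          # slots[t] = job run at time t
    for name, deadline, profit in jobs:
        for t in range(deadline, 0, -1):         # latest free slot first
            if slots[t] is None:
                slots[t] = (name, profit)
                break                            # job placed; else dropped
    chosen = [s for s in slots if s is not None]
    return [n for n, _ in chosen], sum(p for _, p in chosen)

jobs = [("J1", 2, 30), ("J2", 1, 50), ("J3", 3, 10), ("J4", 2, 20), ("J5", 1, 10)]
print(job_sequencing(jobs))   # → (['J2', 'J1', 'J3'], 90)
```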

Step 5 − If data is greater than middle, search in right sub-list.
Step 6 − If data is smaller than middle, search in left sub-list.
Step 7 − Repeat until match.

Pseudocode:

Procedure binary_search
    A ← sorted array
    n ← size of array
    x ← value to be searched
    Set lowerBound = 1
    Set upperBound = n
    while x not found
        if upperBound < lowerBound
            EXIT: x does not exist
        set midPoint = lowerBound + (upperBound - lowerBound) / 2
        if A[midPoint] < x
            set lowerBound = midPoint + 1
        if A[midPoint] > x
            set upperBound = midPoint - 1
        if A[midPoint] = x
            EXIT: x found at location midPoint
    end while
end procedure

The time complexity of binary search can be written as T(n) = T(n/2) + c, which solves to O(log n).
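The pseudocode translates into an iterative Python function (0-indexed; returning -1 plays the role of "x does not exist"):

```python
def binary_search(A, x):
    """Iterative binary search on a sorted list; returns index or -1."""
    lo, hi = 0, len(A) - 1
    while lo <= hi:
        # lo + (hi - lo) // 2 mirrors the pseudocode and also avoids
        # overflow in fixed-width-integer languages
        mid = lo + (hi - lo) // 2
        if A[mid] < x:
            lo = mid + 1        # search right sub-list
        elif A[mid] > x:
            hi = mid - 1        # search left sub-list
        else:
            return mid          # x found at location mid
    return -1                   # x does not exist

print(binary_search([10, 20, 30, 40, 50], 30))   # → 2
```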

Sorting: Sorting refers to rearranging all the items in an array into increasing or decreasing order (where the meaning of increasing and decreasing can depend on the context). A sorting algorithm is an algorithm made up of a series of instructions that takes an array as input, performs specified operations on the array (sometimes called a list), and outputs a sorted array.

Selection Sort: Selection sort is a simple sorting algorithm. It is an in-place comparison-based algorithm in which the list is divided into two parts: the sorted part at the left end and the unsorted part at the right end. Initially, the sorted part is empty and the unsorted part is the entire list. The smallest element is selected from the unsorted part and swapped with its leftmost element, and that element becomes part of the sorted part. This process continues, moving the unsorted-part boundary one element to the right each time. This algorithm is not suitable for large data sets, as its average and worst case complexities are Ο(n^2), where n is the number of items.

Algorithm:
Step 1 − Set MIN to location 0
Step 2 − Search the minimum element in the list
Step 3 − Swap with value at location MIN
Step 4 − Increment MIN to point to next element
Step 5 − Repeat until list is sorted

Bubble Sort: Bubble sort is a simple sorting algorithm. It is a comparison-based algorithm in which each pair of adjacent elements is compared and the elements are swapped if they are not in order. This algorithm is not suitable for large data sets, as its average and worst case complexities are Ο(n^2), where n is the number of items.

Algorithm:
begin BubbleSort(list)
    for all elements of list
        if list[i] > list[i+1]
            swap(list[i], list[i+1])
        end if
    end for
    return list
end BubbleSort

Example: First Pass:
( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), the algorithm compares the first two elements and swaps since 5 > 1.
( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), swap since 5 > 4
( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), swap since 5 > 2
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), these elements are already in order (8 > 5), so the algorithm does not swap them.
Second Pass:
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 )
( 1 4 2 5 8 ) –> ( 1 2 4 5 8 ), swap since 4 > 2
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Now the array is already sorted, but our algorithm does not know it: the algorithm needs one whole pass without any swap to know it is sorted.
Third Pass:
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )

Insertion Sort: This is an in-place comparison-based sorting algorithm. Here, a sub-list is maintained which is always sorted: for example, the lower part of the array is kept sorted. An element to be inserted into this sorted sub-list has to find its appropriate place and then be inserted there, hence the name insertion sort. The array is searched sequentially, and unsorted items are moved and inserted into the sorted sub-list (within the same array). This algorithm is not suitable for large data sets, as its average and worst case complexities are Ο(n^2), where n is the number of items.
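The bubble sort passes traced above, including the early exit after a pass with no swaps, can be sketched in Python:

```python
def bubble_sort(lst):
    """Bubble sort with the early-exit check: stop once a whole pass
    completes without any swap."""
    n = len(lst)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):          # last i items already in place
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
                swapped = True
        if not swapped:                     # pass with no swaps: sorted
            break
    return lst

print(bubble_sort([5, 1, 4, 2, 8]))   # → [1, 2, 4, 5, 8]
```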

Algorithm:
Step 1 − If it is the first element, it is already sorted
Step 2 − Pick the next element
Step 3 − Compare with all elements in the sorted sub-list
Step 4 − Shift all elements in the sorted sub-list that are greater than the value to be sorted
Step 5 − Insert the value
Step 6 − Repeat until the list is sorted

Example: 12, 11, 13, 5, 6. Let us loop for i = 1 (second element of the array) to 4 (last index of the array):
i = 1. Since 11 is smaller than 12, move 12 and insert 11 before 12: 11, 12, 13, 5, 6
i = 2. 13 remains at its position, as all elements in A[0..i-1] are smaller than 13: 11, 12, 13, 5, 6
i = 3. 5 moves to the beginning, and all other elements from 11 to 13 move one position ahead of their current position: 5, 11, 12, 13, 6
i = 4. 6 moves to the position after 5, and the elements from 11 to 13 move one position ahead of their current position: 5, 6, 11, 12, 13

Quick Sort