Let’s start with the very first question that comes to mind when you hear the term sorting algorithm: what is it? A sorting algorithm rearranges a given array or list of data into a particular order.

In other words, it rearranges the elements inside a list or array data structure into a specific order. The order can be of different types: for integers, it can be ascending or descending; for characters and strings, alphabetical (also called lexicographical) order is used.

A comparison operator is used to decide the new order of elements in the respective data structure. Comparison operators include:

- **Less than (<)** - returns true if the value on the left is less than the value on the right, otherwise returns false.
- **Equal to (==)** - returns true if the value on the left is equal to the value on the right, otherwise returns false.
- **Greater than (>)** - returns true if the value on the left is greater than the value on the right, otherwise returns false.
- **Less than or equal to (<=)** - returns true if the value on the left is less than or equal to the value on the right, otherwise returns false.
- **Greater than or equal to (>=)** - returns true if the value on the left is greater than or equal to the value on the right, otherwise returns false.
- **Not equal to (!=)** - returns true if the value on the left is NOT equal to the value on the right, otherwise returns false.

## Why Do We Need Sorting Algorithms?

Now that you understand what a sorting algorithm is, the next obvious question is why we need one.

It is important to understand the need for sorting algorithms before moving on to other algorithms since they have direct applications in searching algorithms, database algorithms, divide and conquer methods, data structures and many more.

I’ve listed down some major reasons why we need sorting algorithms:

- Efficient sorting is essential to the performance of other algorithms (such as search and merge algorithms) that require their input to be sorted. For example, the primary condition for binary search to work is that the array must be sorted.
- Producing more human-readable output: a list presented in a meaningful order is easier to read and understand.
- It is easier and faster to locate items in a sorted list than an unsorted list.

## Real-World Applications of Sorting Algorithms

Since sorting can significantly reduce the time complexity of a coding problem, it has important applications throughout computer science and software engineering.

A quick Google search reveals that more than 40 distinct sorting algorithms are in use today. Quite crazy, right? You will be surprised when you understand just how helpful sorting algorithms are. Some examples are as follows:

- Bubble sort, one of the most famous algorithms, is reportedly used in TV programming to sort channels by audience viewing time!
- Quick sort is used to publish sports scores, organizing the data quickly in real time.
- Merge sort is the fundamental operation behind external sorting in databases, used for data sets too large to fit into memory at once.

Apart from this, telephone directories, in which names are sorted alphabetically, also illustrate the importance of sorting algorithms. The biggest advantage, however, is that search and retrieval take almost no time.

## Different Sorting Techniques

There are multiple kinds of sorting techniques used in data structures. A few among them are listed as follows:

**1. Comparison-based sorting**

In this type of sorting, elements of an array or data sample are compared with each other to determine their order. A few examples of comparison-based sorting algorithms are as follows:

### Bubble sort

Bubble sort, also known as sinking sort, is probably the most famous sorting algorithm. It uses a comparison-based approach to sort the N elements of an array in ascending order: in repeated passes, adjacent elements are compared and swapped if they are in the wrong order.

The process continues until the array is completely sorted. Just as air bubbles rise to the surface of water, the largest remaining element is pushed to the end of the array in each pass. Bubble sort is practical and useful for small data sets but should not be used on large ones.

A C++ implementation of the bubble sort algorithm is as follows:

```
// C++ program showcasing implementation of bubble sort
#include <bits/stdc++.h>
using namespace std;

// swap function to exchange values
void swap(int *x, int *y)
{
    int tmp = *x;
    *x = *y;
    *y = tmp;
}

// Function to implement bubbleSort
void bubbleSort(int array[], int N)
{
    for (int i = 0; i <= N - 2; i++)
        for (int j = 0; j < N - (i + 1); j++)
            if (array[j] > array[j + 1])
                swap(&array[j], &array[j + 1]);
}
```

The time complexity of the bubble sort algorithm falls into three cases:

- Best case time complexity: O(n) (for the optimized variant that stops when a pass makes no swaps)
- Average case time complexity: O(n²)
- Worst-case time complexity: O(n²)

In the best case, the array is already sorted; in the worst case, it is in reverse order. The average case is a randomly ordered array.

### Insertion Sort

Insertion sort is a simple sorting algorithm in which the array is virtually divided into two parts: sorted and unsorted. One by one, elements from the unsorted part are picked, compared with the elements of the sorted part, and placed at their correct position.

A C++ implementation of the insertion sort algorithm is as follows:

```
// C++ program showcasing implementation of insertion sort
#include <bits/stdc++.h>
using namespace std;

// insertionSort function
void insertionSort(int array[], int n)
{
    int i, hole, j;
    for (i = 1; i < n; i++)
    {
        hole = array[i];
        j = i - 1;
        /* Move elements of array[0..i-1] that are greater
           than hole one position ahead of their current one */
        while (j >= 0 && array[j] > hole)
        {
            array[j + 1] = array[j];
            j = j - 1;
        }
        array[j + 1] = hole;
    }
}
```

The time complexity of insertion sort follows the same three cases as bubble sort:

- Best case time complexity: O(n)
- Average case time complexity: O(n²)
- Worst-case time complexity: O(n²)

### Quick sort

The quick sort algorithm is based on the divide-and-conquer approach. A pivot element is picked, and the array is partitioned and sorted around it. The first, last, median, or any random element of the array can be chosen as the pivot. Quick sort revolves around the partition() function.

The partition function places the pivot at its correct position, ensuring that all elements to the left of the pivot are smaller than it and all elements to the right are greater.

A C++ implementation of the quick sort algorithm is as follows:

```
// C++ program showcasing implementation of quick sort
#include <bits/stdc++.h>
using namespace std;

// function to interchange values of two elements
void swap(int* x, int* y)
{
    int temp = *x;
    *x = *y;
    *y = temp;
}

/* function that chooses the last element as the pivot and then
   places the pivot element at its correct position */
int partition(int array[], int l, int h)
{
    int pi = array[h]; // pivot
    int i = (l - 1);
    for (int j = l; j <= h - 1; j++)
    {
        // move elements smaller than the pivot to its left
        if (array[j] < pi)
        {
            i++;
            swap(&array[i], &array[j]);
        }
    }
    swap(&array[i + 1], &array[h]);
    return (i + 1);
}

// Function to implement quicksort algorithm
void quickSort(int array[], int l, int h)
{
    if (l < h)
    {
        int pi = partition(array, l, h);
        quickSort(array, l, pi - 1);
        quickSort(array, pi + 1, h);
    }
}
```

The time complexity of the quick sort algorithm is given by the recurrence:

```
T(n) = T(k) + T(n-k-1) + Θ(n)
```

where k is the number of elements smaller than the pivot.

It follows three cases:

- Best case time complexity: O(n log n)
- Average case time complexity: O(n log n)
- Worst-case time complexity: O(n²)
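Briefly, these bounds follow from the recurrence: a balanced partition (k ≈ n/2) gives the best and average cases, while a pivot that is always the smallest or largest element (k = 0) gives the worst case:

```
Balanced partition:   T(n) = 2T(n/2) + Θ(n)  =>  O(n log n)
Unbalanced partition: T(n) = T(n-1) + Θ(n)   =>  O(n²)
```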

**2. Counting-based sorting**

Counting sort is a sorting technique based on keys that lie within a specific range. It works by counting the number of items with each distinct key value (a form of hashing), then using some arithmetic (prefix sums) to work out the position of each element in the final sorted array. A few counting-based sorting algorithms are as follows:

### Radix sort

Radix sort follows a digit-by-digit sorting approach: it arranges numbers in a specific order (ascending or descending) by processing one digit at a time, from the least significant digit to the most significant.

A C++ implementation of radix sort is as follows:

```
// C++ program showcasing implementation of Radix Sort
#include <iostream>
#include <vector>
using namespace std;

// Function to find maximum value in an array
int getMax(int array[], int size)
{
    int imax = array[0];
    for (int i = 1; i < size; i++)
        if (array[i] > imax)
            imax = array[i];
    return imax;
}

// counting sort on the digit selected by exp
void countSort(int array[], int size, int exp)
{
    vector<int> out(size); // output array
    int i, count[10] = { 0 };
    for (i = 0; i < size; i++)
        count[(array[i] / exp) % 10]++;
    for (i = 1; i < 10; i++)
        count[i] += count[i - 1];
    // build the output array (stable, so earlier digit passes are preserved)
    for (i = size - 1; i >= 0; i--) {
        out[count[(array[i] / exp) % 10] - 1] = array[i];
        count[(array[i] / exp) % 10]--;
    }
    for (i = 0; i < size; i++)
        array[i] = out[i];
}

// Radix Sort
void radixsort(int array[], int size)
{
    int max = getMax(array, size);
    for (int exp = 1; max / exp > 0; exp *= 10)
        countSort(array, size, exp);
}
```

The best, average, and worst-case time complexity of radix sort is O(d·(n + k)), where d is the number of digits in the largest key and k is the size of the count array (10 for decimal digits); when d is bounded, this is often quoted simply as O(n + k).

### Shell-sort

Shell sort can be viewed as a generalization of insertion sort. Insertion sort moves an element only one position at a time, while shell sort allows exchanges of elements that are far apart. Initially, an interval (gap) h is defined, and elements h positions apart are sorted as sub-arrays. The gap is then reduced step by step until it reaches 1, at which point the final pass is a plain insertion sort that produces the fully sorted array.

A C++ implementation of shell sort is as follows:

```
// C++ program showcasing implementation of Shell Sort
#include <iostream>
using namespace std;

int shellSort(int array[], int n)
{
    // start with a large gap and shrink it each iteration
    for (int h = n / 2; h > 0; h /= 2)
    {
        // gapped insertion sort for this gap size
        for (int i = h; i < n; i += 1)
        {
            int t = array[i];
            int j;
            for (j = i; j >= h && array[j - h] > t; j -= h)
                array[j] = array[j - h];
            array[j] = t;
        }
    }
    return 0;
}
```
```

The time complexity of shell sort depends mainly on the gap sequence used; with the halving sequence above, the best-case time complexity is O(n log n).

**3. In-place / out-of-place sorting**

A sorting technique is in-place if it uses no additional memory (beyond a constant amount) to sort the given array. With the exception of merge sort, the comparison-based algorithms above are in-place, while the non-comparison-based algorithms (such as counting and radix sort) are out-of-place, since they build an auxiliary output array.

## Conclusion

In this article, we have covered a number of sorting algorithms along with the techniques behind them. The main skill to develop, however, is judging which algorithm will be most efficient for a given real-life problem; for that, consider the time complexity analysis when choosing a sorting algorithm.