Algorithm Interview Questions
In computer programming terms, an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output.
 
Algorithms are widely used throughout all areas of IT. In mathematics and computer science, an algorithm usually refers to a small procedure that solves a recurrent problem. Algorithms are also used as specifications for performing data processing and play a major role in automated systems.
 
An algorithm to add two numbers :
* Take two number inputs
* Add numbers using the + operator
* Display the result
 
Qualities of a Good Algorithm :
* Input and output should be defined precisely.
* Each step in the algorithm should be clear and unambiguous.
* An algorithm should be the most effective among the many different ways of solving a problem.
* An algorithm shouldn't include computer code. Instead, the algorithm should be written in such a way that it can be used in different programming languages.
Algorithms can be expressed in natural language, programming languages, pseudocode, flowcharts and control tables. Natural-language expressions are rare because they are ambiguous. Programming languages are normally used to express algorithms executed by a computer.
 
Algorithms use an initial input along with a set of instructions. The input is the initial data needed to make decisions and can be represented in the form of numbers or words. The input data gets put through a set of instructions, or computations, which can include arithmetic and decision-making processes. The output is the last step in an algorithm and is normally expressed as more data.
 
For example, a search algorithm takes a search query as input and runs it through a set of instructions for searching through a database for relevant items to the query. Automation software acts as another example of algorithms, as automation follows a set of rules to complete tasks. Many algorithms make up automation software, and they all work to automate a given process.
There are several types of algorithms, all designed to accomplish different tasks. For example, algorithms perform the following :
 
* Search engine algorithm : This algorithm takes search strings of keywords and operators as input, searches its associated database for relevant webpages and returns results.

* Encryption algorithm : This computing algorithm transforms data according to specified actions to protect it. A symmetric key algorithm, such as the Data Encryption Standard, for example, uses the same key to encrypt and decrypt data. As long as the algorithm is sufficiently sophisticated, no one lacking the key can decrypt the data.

* Greedy algorithm : This algorithm solves optimization problems by finding the locally optimal solution, hoping it is also optimal at the global level. However, it does not guarantee a globally optimal solution.

* Recursive algorithm : This algorithm calls itself repeatedly until it solves a problem. Recursive algorithms call themselves with a smaller value every time a recursive function is invoked.

* Backtracking algorithm : This algorithm builds a solution to a given problem incrementally, one piece at a time, and abandons (backtracks from) partial solutions that cannot be completed.

* Divide-and-conquer algorithm : This common algorithm is divided into two parts. One part divides a problem into smaller subproblems. The second part solves these subproblems and then combines their solutions to produce a solution to the original problem.

* Dynamic programming algorithm : This algorithm solves problems by dividing them into subproblems. The results are then stored to be applied for future corresponding problems.

* Brute-force algorithm : This algorithm iterates all possible solutions to a problem blindly, searching for one or more solutions to a function.

* Sorting algorithm : Sorting algorithms are used to rearrange the elements of a data structure based on a comparison operator, which is used to decide the new order of the data.

* Hashing algorithm : This algorithm takes data and converts it into a fixed-length value (a hash) using a hash function.

* Randomized algorithm : This algorithm uses random elements as part of its logic, which can reduce the running time or time complexity.
The complexity of an algorithm is a way to classify how efficient it is compared to alternatives. Its focus is on how execution time grows with the size of the data set to be processed. The computational complexity of an algorithm is important in computing.
 
It is useful to classify algorithms by the relative amount of time or space they require, and to specify the growth of that time/space requirement as a function of the input size.
 
Time complexity : Time complexity is the running time of a program expressed as a function of the size of the input.
 
Space complexity : Space complexity analyzes the algorithms, based on how much space an algorithm needs to complete its task. Space complexity analysis was critical in the early days of computing (when storage space on the computer was limited).
 
Nowadays, the problem of space rarely occurs because storage on modern computers is plentiful.
 
We perform the following types of complexity analysis
 
Worst-case: f(n)
 
It is defined by the maximum number of steps taken on any instance of size n.
 
Best-case: f(n)
 
It is defined by the minimum number of steps taken on any instance of size n.
 
Average-case: f(n)
 
It is defined by the average number of steps taken on any instance of size n.
The mathematical foundation/framing of an algorithm's run time performance is defined by asymptotic analysis. We can easily determine the best case, average case, and worst-case scenarios of an algorithm using asymptotic analysis.
 
Best Case Scenario of an Algorithm : The best-case scenario for an algorithm is defined as the data arrangement in which the algorithm performs the best. Take a binary search, for example, where the best-case scenario is if the target value is in the very centre of the data we are looking for. The best-case scenario for binary search would have a time complexity of O(1) or constant time complexity.

Worst Case Scenario of an Algorithm : The worst collection of input for a given algorithm is referred to as the worst-case scenario of an Algorithm. For example, quicksort can perform poorly if the pivot value is set to the largest or smallest element of a sublist. Quicksort will degenerate into an algorithm with a time complexity of O(n^2), where n is the size of the list to be sorted.

Average Case Scenario of an Algorithm : The average-case complexity of an algorithm is the amount of some computational resource (usually time) used by the process, averaged over all possible inputs, according to computational complexity theory. For example, the average-case complexity of the randomised quicksort algorithm is O(n*log(n)), where n is the size of the list to be sorted.
Asymptotic analysis is a technique for determining the efficiency of an algorithm that does not rely on machine-specific constants and does not require implementing the algorithm and comparing running times. Asymptotic notation is the mathematical notation used to express the time complexity of algorithms in asymptotic analysis.
 
The following are the three most common asymptotic notations.
 
Big Theta Notation: (θ Notation) : The exact asymptotic behaviour is defined using the theta (θ) Notation. It binds functions from above and below to define behaviour. Dropping low order terms and ignoring leading constants is a convenient approach to get Theta notation for an expression.
Big O Notation : The Big O notation defines an upper bound for an algorithm by bounding a function from above. Consider the situation of insertion sort: in the best case scenario, it takes linear time, and in the worst case, it takes quadratic time. Insertion sort has a time complexity O(n^2). It is useful when we just have an upper constraint on an algorithm's time complexity.
Big Omega (Ω) Notation : The Ω Notation provides an asymptotic lower bound on a function, just like Big O notation does. It is useful when we have a lower bound on an algorithm's time complexity.
Quick Sort algorithm has the ability to sort list or queries quickly. It is based on the principle of partition exchange sort or Divide and conquer. This type of algorithm occupies less space, and it segregates the list into three main parts.
 
* Elements less than the Pivot element
* Pivot element
* Elements greater than the Pivot element
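The partition-based idea above can be sketched in Python; this illustrative (not in-place) version makes the three parts explicit:

```python
def quick_sort(items):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    # Segregate the list into the three parts described above.
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quick_sort(less) + equal + quick_sort(greater)
```

Production implementations usually partition in place to keep the space usage low; this sketch trades that for clarity.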
Explain what heap sort is.
Heap sort can be defined as a comparison-based sorting algorithm. It divides its input into a sorted and an unsorted region, and repeatedly shrinks the unsorted region by extracting the largest (or smallest) element and moving it to the sorted region.
A skip list is a method of structuring data that allows the algorithm to search, delete and insert elements in a symbol table or dictionary. In a skip list, each element is represented by a node. The search function returns the value associated with a given key. The insert operation associates a specified key with a new value, while the delete function deletes the specified key.
Both Red-Black Trees and B-Trees are balanced search trees that can be used on items that can have a comparison operator defined on them. They allow operations like minimum, maximum, predecessor, successor, insert, and delete, in O(log N) time (with N being the number of elements). Thus, they can be used for implementing a map, priority queue, or index of a database, to name a few examples.
 
Red-Black Trees are an improvement upon Binary Search Trees. Binary Search Trees use binary trees to perform the named operations, but the depth of the tree is not controlled - so operations can end up taking a lot more time than expected. Red-Black Trees solve this issue by marking all nodes in the tree as red or black, and setting rules of how certain positions between nodes should be processed. Without going into too much detail, this method guarantees that the longest branch isn’t more than twice as long as the shortest branch, so each branch is shorter than 2*log_base2(N).
 
This is the ideal structure for implementing ordered maps and priority queues. B-Trees branch into K-2K branches (for a given number K) rather than into 2, as is typical for binary trees. Other than that, they behave pretty similarly to a binary search tree. This has the advantage of reducing access operations, which is particularly useful when data is stored on secondary storage or in a remote location. This way, we can request data in larger chunks, and by the time we finish processing a previous request, our new request is ready to be handled. This structure is often used in implementing databases, since they have a lot of secondary storage access.
A field with a True value represents a 1x1 square on its own. If a field has free fields on its left, top, and top-left, then it’s the bottom-right corner of a 2x2 square. If those three fields are all bottom-right corners of a 5x5 square, then their overlap, plus the queried field being free, form a 6x6 square. We can use this logic to solve this problem. That is, we define the relation of size[x, y] = min(size[x-1, y], size[x, y-1], size[x-1, y-1]) + 1 for every free field. For an occupied field, we can set size[x,y] = 0, and it will fit our recursive relation perfectly. Now, size[x, y] represents the largest square for which the field is the bottom-right corner. Tracking the maximum number achieved will give us the answer to our problem.
FUNCTION square_size(board)
	M = board.height()
	N = board.width()
	size = Matrix[M + 1, N + 1]
	FOR i IN [0..M]
		size[i, 0] = 0
	END FOR
	FOR j IN [0..N]
		size[0, j] = 0
	END FOR
	answer = 0
	FOR i IN [0..M-1]
		FOR j IN [0..N-1]
			IF board[i, j] is free
				size[i+1, j+1] = min(size[i, j+1], size[i+1, j], size[i, j]) + 1
				answer = max(answer, size[i+1, j+1])
			ELSE
				size[i+1, j+1] = 0
			END IF
		END FOR
	END FOR
	RETURN answer
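A runnable Python version of the same relation, size[x, y] = min(size[x-1, y], size[x, y-1], size[x-1, y-1]) + 1, assuming the board is given as a list of rows of booleans (True meaning free):

```python
def square_size(board):
    # board: list of rows; True marks a free field.
    m = len(board)
    n = len(board[0]) if m else 0
    # size[i][j] = side of the largest square whose bottom-right
    # corner is field (i-1, j-1); row/column 0 act as zero padding.
    size = [[0] * (n + 1) for _ in range(m + 1)]
    answer = 0
    for i in range(m):
        for j in range(n):
            if board[i][j]:
                size[i + 1][j + 1] = min(size[i][j + 1],
                                         size[i + 1][j],
                                         size[i][j]) + 1
                answer = max(answer, size[i + 1][j + 1])
    return answer
```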
Encryption is the process of converting plaintext into a secret coded format referred to as "ciphertext". To transform the text, an algorithm uses a string of bits referred to as a "key" for its calculations. The larger the key, the greater the number of potential patterns for creating ciphertext. Most encryption algorithms operate on fixed-size blocks of input of about 64 to 128 bits, while some use the stream method.
It is a trick question that is frequently asked in the interviews of various companies. This problem can be solved in a variety of ways. However, while solving the problem, we must solve it without using a temporary variable, which is an essential condition. For this problem, if we can consider the possibility of integer overflow in our solution while coming up with an approach to solving it, we can make a great impression on interviewers.
 
Let us say that we have two integers a and b, with a's value equal to 5 and b's value equal to 6, and we want to swap them without needing a third variable. We will need to use Java programming constructs to solve this problem. Mathematical procedures such as addition, subtraction, multiplication, and division can be used to swap numbers. However, it is possible that it will cause an integer overflow problem. 
 
Let us take a look at two approaches to solve this problem :
 
Using Addition and subtraction :
a = a + b;
b = a - b; // this acts like (a + b) - b, so b now equals the original a.
a = a - b; // this acts like (a + b) - a, so a now equals the original b.
It is a clever trick. However, if the addition exceeds the maximum value of the int primitive type as defined by Integer.MAX_VALUE in Java, or if the subtraction is less than the minimum value of the int primitive type as defined by Integer.MIN_VALUE in Java, there will be an integer overflow.
 
Using the XOR method :
 
Another way to swap two integers without a third (temporary) variable is the XOR method. This is often regarded as the best approach because it avoids the overflow problem entirely and works the same way in languages such as Java, C, and C++. Java has a number of bitwise operators, and XOR (denoted by ^) is one of them.
x = x ^ y; 
y = x ^ y; 
x = x ^ y;
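The same XOR trick can be demonstrated in Python (Python integers never overflow, so here the technique is purely illustrative):

```python
def xor_swap(x, y):
    # Each XOR folds one value into the other; after three
    # steps the two values have exchanged places.
    x = x ^ y
    y = x ^ y  # (x ^ y) ^ y == x, so y now holds the original x
    x = x ^ y  # (x ^ y) ^ x == y, so x now holds the original y
    return x, y
```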
Some of the commonly used cryptographic algorithms are :
 
* 3-way
* Blowfish
* CAST
* CMEA
* GOST
* DES and Triple DES
* IDEA
* LOKI and so on
Algorithm to reverse a string.
 
Step 1 : Start.
 
Step 2 : Take two variables i and j.
 
Step 3 : Set j to length(string) - 1, so that j points to the last character.
 
Step 4 : Set i to 0, so that i points to the first character.
 
Step 5 : Interchange string[i] with string[j].
 
Step 6 : Increment i by 1.
 
Step 7 : Decrement j by 1.
 
Step 8 : If i < j, go to step 5.
 
Step 9 : Stop.
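The steps above can be sketched in Python; since Python strings are immutable, the sketch works on a list of characters:

```python
def reverse_string(s):
    chars = list(s)
    i, j = 0, len(chars) - 1
    while i < j:
        # Interchange the characters at positions i and j,
        # then move the two pointers towards each other.
        chars[i], chars[j] = chars[j], chars[i]
        i += 1
        j -= 1
    return "".join(chars)
```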
Bubble sort is the simplest of all sorting algorithms. It works by repeatedly swapping adjacent elements that are in the wrong order.
 
Ex : 
 
(72538) we have this array for sorting.
 
Pass1 :
(72538) -> (27538) swap 7 and 2.
(27538) -> (25738) swap 7 and 5.
(25738) -> (25378) swap 7 and 3.
(25378) -> (25378) algorithm does not swap 7 and 8 because 7<8.
 
Pass2 :
(25378) -> (25378) algorithm does not swap 2 and 5 because 2<5.
(25378) -> (23578) swap 3 and 5.
(23578) -> (23578) algorithm does not swap 5 and 7 because 5<7.
(23578) -> (23578) algorithm does not swap 7 and 8 because 7<8.

 

Here, the sorted array is (23578).
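A minimal Python sketch of bubble sort as described above:

```python
def bubble_sort(items):
    n = len(items)
    # After each pass, the largest remaining element has
    # bubbled to position `end`, so the range shrinks.
    for end in range(n - 1, 0, -1):
        for i in range(end):
            # Swap adjacent elements that are in the wrong order.
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
    return items
```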
Algorithm to insert a node in a sorted linked list.
 
Case 1 : If the linked list is empty, set the new node as the head and return it.
new_node->next = head;
head = new_node;
 
Case 2 : Insert the new node in the middle
while (p != insert_position)
{
    p = p->next;
}
store_next = p->next;
p->next = new_node;
new_node->next = store_next;

Case 3 : Insert the new node at the end
while (p->next != null)
{
    p = p->next;
}
p->next = new_node;
new_node->next = null;
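The three cases can be combined into a single routine; a Python sketch, with an illustrative Node class:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def sorted_insert(head, new_node):
    # Case 1: empty list, or the new node belongs before the head.
    if head is None or new_node.data <= head.data:
        new_node.next = head
        return new_node
    # Cases 2 and 3: walk to the last node smaller than the new one.
    p = head
    while p.next is not None and p.next.data < new_node.data:
        p = p.next
    new_node.next = p.next
    p.next = new_node
    return head
```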
Divide and Conquer is an algorithm paradigm, not an algorithm itself. It is set up in such a way that it can take a large amount of data, break it down into smaller chunks, and determine the solution to the problem for each of the smaller chunks. It then combines the piecewise solutions of the smaller chunks into a single global solution. This is known as the divide and conquer technique. The Divide and Conquer algorithmic paradigm employs the steps given below:
 
Divide : The algorithm separates the original problem into a set of subproblems in this step.
Conquer : The algorithm solves each subproblem individually in this step.
Combine : In this step, the algorithm combines the solutions to the subproblems to obtain the overall solution.
 
Some of the algorithms which use the Divide and Conquer Algorithmic paradigm are as follows:
 
* Binary Search
* Merge Sort
* Strassen's Matrix Multiplication
* Quick Sort
* Closest pair of points.
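Binary search, the first algorithm in the list, is a compact illustration of the paradigm: each step divides the search range in half and conquers only the half that can contain the target. A sketch:

```python
def binary_search(items, target):
    # items must be sorted; returns the index of target or -1.
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # divide the range in half
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1          # conquer the right half
        else:
            hi = mid - 1          # conquer the left half
    return -1
```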
A greedy algorithm is an algorithmic method that makes the locally optimal choice at each step, hoping that this eventually leads to a globally optimal solution. This means that the algorithm chooses the best answer available at the time, regardless of the consequences. In other words, when looking for an answer, the algorithm always selects the best immediate, or local, option. Greedy algorithms find the overall, ideal solution for some problems, but may produce less-than-perfect answers for instances of others.
 
The Greedy algorithm is used in the following algorithms to find their solutions :
 
* Travelling Salesman Problem
* Fractional Knapsack Problem
* Dijkstra's Algorithm
* Job Scheduling Problem
* Graph  Map Coloring
* Graph  Vertex Cover
* Prim's Minimal Spanning Tree Algorithm
* Kruskal's Minimal Spanning Tree Algorithm
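The fractional knapsack problem from the list illustrates the greedy choice well: always take the item with the best value-to-weight ratio first. A Python sketch (the item format is an assumption for illustration):

```python
def fractional_knapsack(items, capacity):
    # items: list of (value, weight) pairs.
    # Greedy choice: best value-per-weight ratio first.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)    # take a fraction if needed
        total += value * take / weight
        capacity -= take
    return total
```

Because items are divisible, this greedy strategy is provably optimal for the fractional variant (unlike the 0/1 knapsack, which needs dynamic programming).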
Prim's algorithm is a minimum spanning tree algorithm that takes a graph as input and finds the subset of the edges of that graph which
 
* form a tree that includes every vertex
* has the minimum sum of weights among all the trees that can be formed from the graph
 
How Prim's algorithm works

It falls under a class of algorithms called greedy algorithms that find the local optimum in the hopes of finding a global optimum.
 
We start from one vertex and keep adding edges with the lowest weight until we reach our goal.
 
The steps for implementing Prim's algorithm are as follows :
 
* Initialize the minimum spanning tree with a vertex chosen at random.
* Find all the edges that connect the tree to new vertices, find the minimum and add it to the tree
* Keep repeating step 2 until we get a minimum spanning tree
The pseudocode for Prim's algorithm shows how we create two sets of vertices, U and V-U. U contains the vertices that have been visited and V-U the vertices that haven't. One by one, we move vertices from set V-U to set U by connecting the least-weight edge.
T = ∅;
U = { 1 };
while (U ≠ V)
    let (u, v) be the lowest cost edge such that u ∈ U and v ∈ V - U;
    T = T ∪ {(u, v)}
    U = U ∪ {v}
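The pseudocode translates directly into runnable Python; this sketch uses a plain linear scan for the minimum edge to stay close to the pseudocode (a priority queue would be faster):

```python
def prim_mst(graph):
    # graph: {vertex: {neighbour: weight}}; returns total MST weight.
    start = next(iter(graph))
    visited = {start}               # the set U in the pseudocode
    total = 0
    while len(visited) < len(graph):
        # Find the lowest-cost edge leaving the visited set.
        u, v, w = min(
            ((u, v, w) for u in visited
             for v, w in graph[u].items() if v not in visited),
            key=lambda e: e[2])
        visited.add(v)              # move v from V-U into U
        total += w
    return total
```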
Kruskal's algorithm is a minimum spanning tree algorithm that takes a graph as input and finds the subset of the edges of that graph which
 
* form a tree that includes every vertex
* has the minimum sum of weights among all the trees that can be formed from the graph

How Kruskal's algorithm works

* It falls under a class of algorithms called greedy algorithms that find the local optimum in the hopes of finding a global optimum.
 
* We start from the edges with the lowest weight and keep adding edges until we reach our goal.
 
The steps for implementing Kruskal's algorithm are as follows :
 
* Sort all the edges from low weight to high
* Take the edge with the lowest weight and add it to the spanning tree. If adding the edge created a cycle, then reject this edge.
* Keep adding edges until we reach all vertices.
 
Any minimum spanning tree algorithm revolves around checking if adding an edge creates a loop or not.
 
The most common way to find this out is an algorithm called Union-Find. The Union-Find algorithm divides the vertices into clusters and lets us check whether two vertices belong to the same cluster, and hence decide whether adding an edge creates a cycle.
KRUSKAL(G):
A = ∅
For each vertex v ∈ G.V:
    MAKE-SET(v)
For each edge (u, v) ∈ G.E ordered by increasing weight(u, v):
    if FIND-SET(u) ≠ FIND-SET(v):
        A = A ∪ {(u, v)}
        UNION(u, v)
return A
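A Python sketch of the same idea, with a deliberately simple Union-Find (no path compression or union by rank):

```python
def kruskal_mst(vertices, edges):
    # edges: list of (weight, u, v) tuples; returns total MST weight.
    parent = {v: v for v in vertices}   # union-find forest

    def find(v):
        # Walk up to the root of v's cluster.
        while parent[v] != v:
            v = parent[v]
        return v

    total = 0
    for weight, u, v in sorted(edges):  # low weight to high
        ru, rv = find(u), find(v)
        if ru != rv:                    # no cycle: accept the edge
            parent[ru] = rv             # union the two clusters
            total += weight
    return total
```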
Kruskal's algorithm is another popular minimum spanning tree algorithm that uses a different logic to find the MST of a graph. Instead of starting from a vertex, Kruskal's algorithm sorts all the edges from low weight to high and keeps adding the lowest edges, ignoring those edges that create a cycle.
Prim's algorithm is another popular minimum spanning tree algorithm that uses a different logic to find the MST of a graph. Instead of starting from an edge, Prim's algorithm starts from a vertex and keeps adding lowest-weight edges which aren't in the tree, until all vertices have been covered.
Dijkstra's algorithm allows us to find the shortest path between any two vertices of a graph.
 
It differs from the minimum spanning tree because the shortest distance between two vertices might not include all the vertices of the graph.
 
How Dijkstra's Algorithm works : 

Dijkstra's Algorithm works on the basis that any subpath B -> D of the shortest path A -> D between vertices A and D is also the shortest path between vertices B and D.


Dijkstra used this property in the opposite direction, i.e., we overestimate the distance of each vertex from the starting vertex. Then we visit each node and its neighbours to find the shortest subpath to those neighbours.
 
The algorithm uses a greedy approach in the sense that we find the next best solution hoping that the end result is the best solution for the whole problem.
We need to maintain the path distance of every vertex. We can store that in an array of size v, where v is the number of vertices.
 
We also want to be able to get the shortest path, not only know the length of the shortest path. For this, we map each vertex to the vertex that last updated its path length.
 
Once the algorithm is over, we can backtrack from the destination vertex to the source vertex to find the path.
 
A minimum priority queue can be used to efficiently receive the vertex with least path distance.
function dijkstra(G, S)
    for each vertex V in G
        distance[V] <- infinite
        previous[V] <- NULL
        If V != S, add V to Priority Queue Q
    distance[S] <- 0
	
    while Q IS NOT EMPTY
        U <- Extract MIN from Q
        for each unvisited neighbour V of U
            tempDistance <- distance[U] + edge_weight(U, V)
            if tempDistance < distance[V]
                distance[V] <- tempDistance
                previous[V] <- U
    return distance[], previous[]
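A runnable Python sketch of the pseudocode, using heapq as the minimum priority queue:

```python
import heapq

def dijkstra(graph, source):
    # graph: {vertex: {neighbour: weight}}; returns shortest distances.
    distance = {v: float("inf") for v in graph}
    distance[source] = 0
    queue = [(0, source)]               # min-priority queue
    while queue:
        d, u = heapq.heappop(queue)     # extract MIN from Q
        if d > distance[u]:
            continue                    # stale queue entry, skip it
        for v, w in graph[u].items():
            if d + w < distance[v]:     # found a shorter subpath
                distance[v] = d + w
                heapq.heappush(queue, (d + w, v))
    return distance
```

Instead of removing entries from the queue on update, this sketch pushes duplicates and skips stale ones, a common simplification when the queue does not support decrease-key.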
 
A backtracking algorithm is a problem-solving algorithm that uses a brute force approach for finding the desired output.
 
The term backtracking suggests that if the current solution is not suitable, then backtrack and try other solutions. Thus, recursion is used in this approach.
 
This approach is used to solve problems that have multiple solutions. If you want an optimal solution, you must go for dynamic programming.
 
Backtracking Algorithm
Backtrack(x)
    if x is not a solution
        return false
    if x is a new solution
        add to list of solutions
    backtrack(expand x)
Example Backtracking Approach

Problem : You want to find all the possible ways of arranging 2 boys and 1 girl on 3 benches. Constraint: Girl should not be on the middle bench.
 
Solution : There are a total of 3! = 6 possibilities. We will try all the possibilities and get the possible solutions. We recursively try all the possibilities.
 
All the possibilities (with the girl not on the middle bench) are :
(boy1, boy2, girl), (boy2, boy1, girl), (girl, boy1, boy2) and (girl, boy2, boy1).
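The bench arrangement example can be generated with a short backtracking sketch (the names B1, B2 and G are illustrative):

```python
def arrangements(people, benches=3):
    # Recursively place people bench by bench, backtracking from
    # any branch that violates the constraint.
    results = []

    def place(seated, remaining):
        if len(seated) == benches:
            results.append(tuple(seated))
            return
        for person in remaining:
            # Constraint: the girl must not sit on the middle bench.
            if person == "G" and len(seated) == 1:
                continue                 # backtrack: abandon branch
            place(seated + [person], remaining - {person})

    place([], set(people))
    return results

solutions = arrangements(["B1", "B2", "G"])
```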
A space state tree is a tree representing all the possible states (solution or nonsolution) of the problem from the root as an initial state to the leaf as a terminal state.
An algorithm for counting the number of leaf nodes in a binary tree is given below :
 
Step 1 : If the current node is null, return a value 0.

Step 2 : If a leaf node is encountered, that is, if the current node's left and right nodes are both null, then return 1.

Step 3 : Calculate the number of leaf nodes recursively by adding the number of leaf nodes in the left subtree by the number of leaf nodes in the right subtree.
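These steps translate directly into a short recursive Python function (the TreeNode class is illustrative):

```python
class TreeNode:
    def __init__(self, left=None, right=None):
        self.left = left
        self.right = right

def count_leaves(node):
    if node is None:                # Step 1: empty subtree
        return 0
    if node.left is None and node.right is None:
        return 1                    # Step 2: this node is a leaf
    # Step 3: sum the leaf counts of both subtrees.
    return count_leaves(node.left) + count_leaves(node.right)
```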
A hash table is a data structure for storing values associated with keys of arbitrary type. A hash table computes an index into an array using a hash function, and the indexes are used to store the elements. We assign each possible element to a bucket by using the hash function. Multiple keys can be assigned to the same bucket, so all the key-value pairs are stored in lists within their respective buckets. The right hash function has a great impact on performance.
 
To find all anagrams in a dictionary, we have to group all words that contain the same set of letters in them. So, if we map words to strings representing their sorted letters, then we could group words into lists by using their sorted letters as a key.
FUNCTION find_anagrams(words)  
    word_groups = HashTable<String, List>  
    FOR word IN words  
        word_groups.get_or_default(sort(word), []).push(word)  
    END FOR  
    anagrams = List  
    FOR key, value IN word_groups  
        anagrams.push(value)  
    END FOR  
    RETURN anagrams
 
The hash table contains lists mapped to strings. For each word, we add it to the list at the suitable key, or create a new list and add it to it.
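The same grouping, as a runnable Python sketch using the built-in dict as the hash table:

```python
def find_anagrams(words):
    # Map each word's sorted letters (the key) to the list of
    # words that share those letters (the anagram group).
    groups = {}
    for word in words:
        key = "".join(sorted(word))
        groups.setdefault(key, []).append(word)
    return list(groups.values())
```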
BFS (Breadth First Search) is a graph traversal algorithm. It starts traversing the graph from the root node and explores all the neighboring nodes. It selects the nearest node and visits all the unexplored nodes. The algorithm follows the same procedure for each of the closest nodes until it reaches the goal state.
 
Algorithm
 
Step1 : Set status=1 (ready state)
 
Step2 : Queue the starting node A and set its status=2, i.e. (waiting state)
 
Step3 : Repeat steps 4 and 5 until the queue is empty.
 
Step4 : Dequeue a node N and process it and set its status=3, i.e. (processed state)
 
Step5 : Queue all the neighbors of N that are in the ready state (status=1) and set their status =2 (waiting state)
[Stop Loop]
 
Step6 : Exit
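The steps above can be sketched in Python using a queue from the standard library:

```python
from collections import deque

def bfs(graph, start):
    # graph: {node: [neighbours]}; returns nodes in visit order.
    visited = [start]
    queue = deque([start])             # waiting state
    while queue:
        node = queue.popleft()         # processed state
        for neighbour in graph[node]:
            if neighbour not in visited:   # still in ready state
                visited.append(neighbour)
                queue.append(neighbour)
    return visited
```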
The major difference between the singly linked list and the doubly linked list is the ability to traverse.
 
* You cannot traverse back in a singly linked list because in it a node only points towards the next node and there is no pointer to the previous node.
 
* On the other hand, the doubly linked list allows you to navigate in both directions in any linked list because it maintains two pointers towards the next and previous node.
Stack and queue are both non-primitive data structures used for storing data elements, and both are based on a real-world equivalent.
 
Let's have a look at key differences based on the following parameters.
 
Working principle : The significant difference between stack and queue is that stack uses LIFO (Last in First Out) method to access and add data elements whereas Queue uses FIFO (First in first out) method to obtain data member.
 
Structure : In Stack, the same end is used to store and delete elements, but in Queue, one end is used for insertion, i.e., rear end and another end is used for deletion of elements.
 
Number of pointers used : Stack uses one pointer whereas Queue uses two pointers (in the simple case).
 
Operations performed : A stack operates with push and pop while a queue operates with enqueue and dequeue.
 
Variants : Stack does not have variants while Queue has variants like a circular queue, Priority queue, doubly ended Queue.
 
Implementation : The stack is simpler while Queue is comparatively complex.
An algorithm for string reversal is as follows:
 
Step 1 : Start.
Step 2 : We take two variables l and r.
Step 3 : We set the values of l as 0 and r as (length of the string  - 1).
Step 4 : We interchange the values of the characters at positions l and r in the string.
Step 5 : We increment the value of l by one.
Step 6 : We decrement the value of r by one.
Step 7 : If the value of r is greater than the value of l, we go to step 4
Step 8 : Stop.
DFS or Depth First Search is a technique for traversing or exploring data structures such as trees and graphs. The algorithm starts at the root node (in the case of a graph, any random node can be used as the root node) and explores each branch as far as feasible before backtracking. So the basic idea is to start at the root or any arbitrary node, mark it, then advance to an adjacent unmarked node, and repeat until there are no more unmarked nodes to visit. After that, backtrack and check for any other unmarked nodes to traverse. Finally, print the nodes of the path. The DFS algorithm is given below :
 
Step 1 : Create a recursive function that takes the node's index and a visited array as input.
Step 2 : Mark the current node as visited and print it.
Step 3 : Traverse all adjacent and unmarked nodes, calling the recursive function with the index of each adjacent node.
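The recursive steps above can be sketched in Python:

```python
def dfs(graph, node, visited=None):
    # Recursive DFS: mark the node, then recurse into each
    # unvisited neighbour before moving on to the next branch.
    if visited is None:
        visited = []
    visited.append(node)
    for neighbour in graph[node]:
        if neighbour not in visited:
            dfs(graph, neighbour, visited)
    return visited
```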
Given a dictionary (with a method to look up whether a string is present in it) and an M x N board where every cell holds a single character, identify all possible words that can be formed by a sequence of adjacent characters. We can move to any of the 8 adjacent characters, but a word must not use the same cell more than once.
 
Example :
dictionary[] = {"FREE", "TIME", "LEARN"};
       boggle[][]   = {{'F', 'A', 'R'},
                       {'E', 'R', 'E'},
                       {'L', 'I', 'E'}};
      isWord(str): returns true if str is present in dictionary
                   else false.
 
Output : The following words of the dictionary are present : FREE
Merge sort (also known as mergesort) is a general-purpose, comparison-based sorting algorithm. Most of its implementations produce a stable sort, which means that the order of equal elements is the same in the input and output. The merge sort method, devised by John von Neumann in 1945, is a divide and conquer algorithm. Conceptually, a merge sort works as follows:
 
* Separate the unsorted list into n sublists, each with one element (a list of one element is considered sorted).
* Merge sublists repeatedly to create new sorted sublists until only one sublist remains. That remaining sublist is the sorted list.
 
The time complexity of the merge sort algorithm is O(n log(n)), where n is the number of elements to be sorted, while the space complexity of the merge sort algorithm is O(n), that is, linear space complexity.
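A Python sketch of the conceptual steps above, merging two sorted sublists by repeatedly taking the smaller front element:

```python
def merge_sort(items):
    # Separate: a list of one (or zero) elements is sorted.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge: repeatedly take the smaller front element.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:        # <= keeps the sort stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```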
A recursive algorithm is a way of tackling a difficult problem by breaking it down into smaller and smaller subproblems until a subproblem is small enough to be solved directly. It usually involves a function that calls itself (a property of recursive functions).
 
The three laws which must be followed by all recursive algorithms are as follows :
 
* There should be a base case.
* It is necessary for a recursive algorithm to call itself.
* The state of a recursive algorithm must be changed in order for it to return to the base case.
The process of visiting all the nodes of a tree is known as tree traversal.
 
Some of the algorithms to traverse a binary tree are as follows :
 
* Pre-order Traversal
* In-order Traversal
* Post-order Traversal
* Breadth First Search
* ZigZag Traversal
Insertion sort is an in-place sorting method, which means it requires no significant additional storage. In insertion sort, only a single list element must be stored outside of the input data, resulting in constant, or O(1), space complexity.
Selection sort is an in-place sorting method, which means it requires no significant additional storage. The selection sort algorithm therefore has constant, or O(1), space complexity.
Heap sort is a comparison-based sorting algorithm. Heapsort is similar to selection sort in that it separates its input into a sorted and an unsorted region, then successively decreases the unsorted part by taking the largest element from it and putting it into the sorted region. Unlike selection sort, heapsort does not waste time scanning the unsorted region in linear time; instead, heap sort keeps the unsorted region in a heap data structure to identify the largest element in each step more rapidly. Let us take a look at the heap sort algorithm :
 
The Heapsort algorithm starts by converting the list to a max heap. The algorithm then swaps the first and last values in the list, reducing the range of values considered in the heap operation by one, and filters the new first value into its heap place. This process is repeated until the range of values considered is only one value long.
 
* Call the buildMaxHeap() function on the list. This function, also known as heapify(), builds a max heap from a list in O(n) operations.
* Swap the first and last elements of the list. Reduce the considered range of the list by one.
* Call the siftDown() function on the list to sift the new first element down to its correct index in the heap.
* Go to step 2 unless the considered range of the list is one element.

Note : The buildMaxHeap() operation runs only one time with a linear time complexity or O(n) time complexity. The siftDown() function works in O(log n) time complexity, and is called n times. Therefore, the overall time complexity of the heap sort algorithm is O(n + n log (n)) = O(n log n).
Write a function to delete a given node from a Singly Linked List. The function must follow the following constraints :
 
* The function must accept a pointer to the start node as the first argument and node to be deleted as the second argument, i.e., a pointer to head node is not global.
* The function should not return a pointer to the head node.
* The function should not accept pointer to pointer to head node.

We may assume that the Linked List never becomes empty.
 
Suppose the function name is delNode(). In a straightforward implementation, the function would need to adjust the head pointer when the node to be deleted is the first node.
 
C program for deleting a node in Linked List
 
To handle the case where the node to be deleted is the first node, we copy the data of the next node into the head node and delete the next node instead. The other cases, where the node to be deleted is not the head node, can be handled generally by finding the previous node.
#include <stdio.h>   
#include <stdlib.h>   
struct Node   
{     
    int data;   
    struct Node *next;   
};   
  
void delNode(struct Node *head, struct Node *n)   
{   
    if(head == n)   
    {     
        if(head->next == NULL)   
        {   
            printf("The list can't be made empty because there is only one node.");   
            return;   
        }   
        head->data = head->next->data;   
        n = head->next;   
        head->next = head->next->next;   
        free(n);   
        return;   
    }   
        struct Node *prev = head;   
    while(prev->next != NULL && prev->next != n)   
        prev = prev->next;   
    if(prev->next == NULL)   
    {   
        printf("\nThis node is not present in the list");   
        return;   
    }   
    prev->next = prev->next->next;   
    free(n);   
    return;   
}   
void push(struct Node **head_ref, int new_data)   
{   
    struct Node *new_node =   
        (struct Node *)malloc(sizeof(struct Node));   
    new_node->data = new_data;   
    new_node->next = *head_ref;   
    *head_ref = new_node;   
}   
void printList(struct Node *head)   
{   
    while(head!=NULL)   
    {   
        printf("%d ",head->data);   
        head=head->next;   
    }   
    printf("\n");   
}   
int main()   
{   
    struct Node *head = NULL;   
    push(&head,3);   
    push(&head,2);   
    push(&head,6);   
    push(&head,5);   
    push(&head,11);   
    push(&head,10);   
    push(&head,15);   
    push(&head,12);   
    printf("Available Linked List: ");   
    printList(head);   
    printf("\nDelete node %d: ", head->next->next->data);   
    delNode(head, head->next->next);   
  
    printf("\nUpdated  Linked List: ");   
    printList(head);   
  
    /* Let us delete the first node */  
    printf("\nDelete first node ");   
    delNode(head, head);   
   
    printf("\nUpdated Linked List: ");   
    printList(head);   
  
    getchar();   
    return 0;   
}
  
Output :
Available Linked List: 12 15 10 11 5 6 2 3 
Delete node 10:
Updated Linked List: 12 15 11 5 6 2 3
Delete first node
Updated Linked List: 15 11 5 6 2 3 
We have two linked lists; insert the nodes of the second list into the first list at alternate positions of the first list.
 
Example : 
 
If the first list is 1->2->3 and the second is 12->10->2->4->6, the first list should become 1->12->2->10->3->2 and the second list should become 4->6. The nodes of the second list should only be inserted when there are positions available in the first list.
 
The use of extra space is not allowed, i.e., the insertion must be done in place. The expected time complexity is O(n), where n is the number of nodes in the first list.
#include <stdio.h>   
#include <stdlib.h>   
struct Node   
{   
    int data;   
    struct Node *next;   
};   
void push(struct Node ** head_ref, int new_data)   
{   
    struct Node* new_node =   
        (struct Node*) malloc(sizeof(struct Node));   
    new_node->data = new_data;   
    new_node->next = (*head_ref);   
    (*head_ref) = new_node;   
}   
void printList(struct Node *head)   
{   
    struct Node *temp = head;   
    while (temp != NULL)   
    {   
        printf("%d ", temp->data);   
        temp = temp->next;   
    }   
    printf("\n");   
}    
void merge(struct Node *p, struct Node **q)   
{   
    struct Node *p_curr = p, *q_curr = *q;   
    struct Node *p_next, *q_next;   
    while (p_curr != NULL && q_curr != NULL)   
    {  
        p_next = p_curr->next;   
        q_next = q_curr->next;   
        q_curr->next = p_next;  
        p_curr->next = q_curr;   
        p_curr = p_next;   
        q_curr = q_next;   
    }   
  
    *q = q_curr;  
}   
int main()   
{   
    struct Node *p = NULL, *q = NULL;   
    push(&p, 3);   
    push(&p, 2);   
    push(&p, 1);   
    printf("I Linked List:\n");   
    printList(p);   
  
    push(&q, 8);   
    push(&q, 7);   
    push(&q, 6);   
    push(&q, 5);   
    push(&q, 4);   
    printf("II Linked List:\n");   
    printList(q);   
  
    merge(p, &q);   
  
    printf("Updated I  Linked List:\n");   
    printList(p);   
  
    printf("Updated II Linked List:\n");   
    printList(q);   
            getchar();   
    return 0;   
}
 
Output :
I Linked List:        
1 2 3
II Linked List:      
4 5 6 7 8                
Updated I Linked List:         
1 4 2 5 3 6           
Updated II Linked List:          
7 8
Backpropagation is an algorithm used for training artificial neural networks, especially feedforward networks with a supervised learning approach. The backpropagation algorithm is a supervised learning method that uses gradient descent to update the weights of the network in order to minimize the difference between the network's predicted outputs and the actual outputs.

The backpropagation algorithm consists of the following steps :

Feedforward : The input data is fed into the network, and the activations are computed for each layer of the network, resulting in the final output.

Calculation of error : The difference between the actual output and the predicted output is calculated and used as the error signal.

Backpropagation of error : The error signal is backpropagated through the network, and the gradients of the error with respect to the weights are computed.

Weight update : The weights are updated using the gradients, with the aim of minimizing the error. This is typically done using gradient descent, where the weights are updated in the direction of the negative gradient of the error with respect to the weights.

Repeat : The process is repeated until the error between the predicted and actual outputs is minimized, or a predefined stopping criterion is met.

The backpropagation algorithm is an efficient and effective method for training artificial neural networks, and has been the foundation of many state-of-the-art deep learning models. The algorithm allows the network to learn the relationships between the inputs and outputs through iteratively adjusting the weights, resulting in a model that can make accurate predictions on unseen data.