Artificial Intelligence Interview Questions
Artificial intelligence is a branch of computer science that emphasizes creating intelligent machines that can mimic human behavior. Here, an intelligent machine is one that can behave like a human, think like a human, and is also capable of decision making. The term is made up of two words, "Artificial" and "Intelligence," which together mean "man-made thinking ability."
 
Artificial Intelligence is widely used today in areas and applications such as computer vision, speech recognition, decision-making, perception, reasoning, cognitive capabilities, bioinformatics, humanoid robots, computer software, and space and aeronautics.

With artificial intelligence, we do not need to pre-program a machine for every task; instead, we can build a machine with learning algorithms, and it can work on its own.
Self-Aware AI : AI that possesses human-like consciousness and reactions. Such machines have the ability to form self-driven actions.

Reactive Machines AI : Acts only on the present input; it cannot use previous experiences to inform current decisions or update its memory.
Example : Deep Blue

Limited Memory AI : Used in self-driving cars. They constantly detect the movement of vehicles around them and add it to their memory.

Theory of Mind AI : Advanced AI that has the ability to understand emotions, people and other things in the real world.

Artificial Narrow Intelligence (ANI) : AI designed for a single, narrow task, such as the virtual assistants like Siri.

Artificial General Intelligence (AGI) : Also known as strong AI; AI capable of performing any intellectual task a human can. The Pillo robot, which answers questions related to health, is often cited as a step in this direction.

Artificial Superhuman Intelligence (ASI) : AI that would possess the ability to do everything a human can do and more. The Alpha 2 humanoid robot is sometimes cited as an early example.
Strong Artificial Intelligence : 
* Widely applied, with vast scope
* Exhibits human-level intelligence
* Uses clustering and association to process data
* Ex : Advanced Robotics

Weak Artificial Intelligence : 
* Narrow application, with very limited scope
* Good at specific tasks
* Uses supervised and unsupervised learning to process data
* Ex : Siri, Alexa, etc.
Robotics : Robotics is a subset of AI that covers the different branches and applications of robots. These robots are artificial agents acting in a real-world environment. An AI robot works by perceiving its surroundings, moving, and manipulating objects to take relevant actions.

Machine Learning : It’s the science of getting computers to act by feeding them data so that they can learn a few tricks on their own, without being explicitly programmed to do so.

Neural Networks : They are a set of algorithms and techniques, modeled in accordance with the human brain. Neural Networks are designed to solve complex and advanced machine learning problems.

Expert Systems : An expert system is a computer system that mimics the decision-making ability of a human. It is a computer program that uses artificial intelligence (AI) technologies to simulate the judgment and behavior of a human or an organization that has expert knowledge and experience in a particular field.

Fuzzy Logic Systems : Fuzzy logic is an approach to computing based on “degrees of truth” rather than the usual “true or false” (1 or 0) boolean logic on which the modern computer is based. Fuzzy logic Systems can take imprecise, distorted, noisy input information.

Natural Language Processing : Natural Language Processing (NLP) refers to the Artificial Intelligence method that analyses natural human language to derive useful insights in order to solve problems.
* AI In Banking
* AI In Gaming
* AI In Marketing
* AI In Finance
* AI In Agriculture
* AI In HealthCare
* AI In Chatbots
* AI In Space Exploration
* AI In Artificial Creativity
* AI In Autonomous Vehicles
* R
* Lisp
* Java
* Prolog
* Python
An expert system is an AI-based program that has extensive (expert-level) knowledge of a particular field and can use this expertise to solve real problems. Expert systems are capable of replacing human experts in their areas.
 
The qualities of an AI expert system are :

* Fast
* Reliable
* Productive
* Understandable
The tools of artificial intelligence are :

* Auto ML
* CNTK
* Keras
* TensorFlow
* PyTorch
* Theano
* Caffe
* MXNet
* Google ML Kit
* Scikit-learn
* OpenNN
* H2O : Open-source AI platform
 
Artificial intelligence is the all-encompassing branch of computer science aimed at building machines capable of human-like intelligence. Example: Robotics.
 
Machine learning is a subset of AI. It is the practice of getting machines to make decisions without being explicitly programmed, by learning from data so that they can solve problems. Example: churn prediction, disease detection, text classification.
 
Deep Learning is a subset of Machine Learning. It uses neural networks that can learn from unstructured data. They learn through representation learning, which can be unsupervised, supervised, or semi-supervised. Deep learning aims to build neural networks that automatically discover the patterns needed for feature detection. Example: self-driving cars recognizing stop signs on the road.
TensorFlow is an open-source software library initially developed by the Google Brain Team for use in machine learning and neural networks research. It is used for data-flow programming. TensorFlow makes it much easier to build certain AI features into applications, including natural language processing and speech recognition.
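As a quick illustration of TensorFlow's data-flow style, here is a minimal sketch using the TensorFlow 2 Python API; the tensor values are arbitrary:

```python
import tensorflow as tf

# Two constant tensors
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0], [0.5]])

# A simple data-flow operation: matrix multiplication
product = tf.matmul(a, b)
print(product.numpy())  # [[2.], [5.]]

# Automatic differentiation, the building block for training neural networks
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2
print(tape.gradient(y, x).numpy())  # 6.0
```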
It is a knowledge representation scheme in which facts are represented as a set of relations. For example, knowledge about players can be represented using a relation called "player" having three fields: player name, height, and weight. This form of knowledge representation provides weak inferential capabilities when used standalone, but is useful as an input for more sophisticated inferential procedures.
Frames : A frame is a record-like structure that consists of a collection of attributes and their values to describe an entity in the world. Frames are an AI data structure that divides knowledge into substructures by representing stereotyped situations. A frame consists of a collection of slots and slot values; these slots may be of any type and size. Slots have names and values, which are called facets.

A frame is also known as slot-filler knowledge representation in artificial intelligence.
 
Scripts : A script is a structured representation describing a stereotyped sequence of events in a particular context. Scripts are used in natural language understanding systems to organize a knowledge base in terms of the situations that the system should understand.
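As a rough illustration, a frame can be sketched in Python as a dictionary of slots and fillers; the "hotel room" example and its slot names below are purely illustrative:

```python
# A "hotel room" frame represented as a dictionary of slots.
hotel_room = {
    "is_a": "room",
    "location": "hotel",
    "contains": ["hotel bed", "hotel phone"],
}

hotel_bed = {
    "is_a": "bed",
    "superclass": "hotel room",
    "size": "king",
    "use": "sleeping",
}

# A slot can be read or updated like any dictionary entry.
print(hotel_bed["size"])      # king
hotel_bed["size"] = "queen"   # update the slot value
```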
Q-learning is a popular algorithm used in reinforcement learning. It is based on the Bellman equation. In this algorithm, the agent tries to learn a policy that tells it the best action to perform in each circumstance in order to maximize its rewards. The agent learns this optimal policy from past experience.
 
In Q-learning, the Q is used to represent the quality of the actions at each state, and the goal of the agent is to maximize the value of Q.
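A minimal tabular Q-learning sketch is shown below; the toy chain environment, reward scheme, and hyperparameters are assumptions made for illustration:

```python
import random

# Toy chain environment: states 0..4, actions 0 (left) and 1 (right).
# Reaching state 4 gives a reward of 1 and ends the episode.
N_STATES, N_ACTIONS = 5, 2
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
        next_state, reward, done = step(state, action)
        # Bellman-style Q-value update
        best_next = max(Q[next_state])
        Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
        state = next_state

print(Q)  # Q-values should favour moving right, toward the goal state
```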
Deep learning is a subset of Machine learning that mimics the working of the human brain. It is inspired by the human brain cells, called neurons, and works on the concept of neural networks to solve complex real-world problems. It is also known as the deep neural network or deep neural learning.
 
Some real-world applications of deep learning are :
 
* Text generation
* Computer vision
* Deep-Learning Robots, etc.


Any deep neural network consists of three types of layers :

Input Layer : The input layer has neurons that take input from external sources like files, data sets, images, videos, and sensors. This part of the neural network does not perform any computation; it only transfers data from the outside world to the network.

Hidden Layer : The hidden layer receives the data from the input layer and uses it to derive results and train Machine Learning models. The layer can be further divided into sub-layers that extract features, make decisions, connect with other sources, and predict future actions based on events that have happened.

Output Layer : After processing, the data is transferred to the output layer, which delivers it to the outside environment.
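A minimal sketch of these three layer types using the Keras API; the input size, number of hidden units, and activations are arbitrary choices:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Input layer: 10 features; hidden layer: 16 units; output layer: 1 unit.
model = keras.Sequential([
    keras.Input(shape=(10,)),             # input layer (no computation)
    layers.Dense(16, activation="relu"),  # hidden layer
    layers.Dense(1, activation="sigmoid") # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```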
An Expert System is a computer program or system that can make decisions and judgments with an ability similar to that of human experts. An expert system follows an if-then pattern rather than procedural programming. Expert systems solve complex problems by drawing on knowledge stored in a knowledge base. The knowledge base is populated by human experts, and expert systems are used by non-expert humans to solve complex problems.

The Expert System has the following characteristics :
 
* Expert systems help to spread the expertise of one human to many.
* Expert systems provide more efficient solutions, as the knowledge base contains information from many human experts rather than a single one.
* Cost is reduced with expert systems, as they cost less than hiring consultants for problem solving.
* New facts can be deduced from existing knowledge-base facts, so expert systems can solve complex problems.
* An expert system is a permanent resource, whereas human experts are perishable or limited in lifespan.
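A toy sketch of the if-then pattern with simple forward chaining is shown below; the facts and rules are invented for illustration and do not come from any real knowledge base:

```python
# A tiny rule-based "expert system" sketch with forward chaining.
facts = {"fever", "cough"}

rules = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "fatigue"}, "recommend_rest"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # fire the rule and add the new fact
            changed = True

print(facts)  # {'fever', 'cough', 'possible_flu'}
```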
Expert systems in AI have certain advantages and disadvantages, as follows.
 
Expert System Advantages :
* It gives fast results with fewer chances of error.
* An expert system comes at a reduced cost.
* Expert systems are not affected by emotions, unlike humans.
* It is a permanent solution and does not perish.

Expert System Disadvantages :
* It does not support self-learning; the knowledge base must be updated manually.
* An expert system has no emotions and no common sense.
* An expert system makes decisions fast but may not be able to explain the reasoning behind them.
Natural Language Processing (NLP) is a subfield of Artificial Intelligence, computer science, and linguistics concerned with the interaction between human language and computers, and specifically with how to program computers to analyze and process large amounts of natural language data.
An Artificial Neural Network (ANN) is a computing system designed to simulate the way the human brain analyzes and processes information. It acts as a foundation of AI and solves problems that would be difficult or impossible by human or statistical standards. An ANN has a self-learning capability that allows it to produce better results as more data becomes available.
To solve a problem, there can be multiple Machine Learning algorithms with different approaches and constraints. However, a generic approach can be applied to most problems to find a suitable algorithm. Below are the steps to consider while choosing an algorithm :
 
Categorize the Problem : The first step in finding the right algorithm is to categorize the problem based on the type of input you have and the output you want from it. If the data is labeled, it is a supervised learning problem. If the data is not labeled, it is an unsupervised learning problem. Finally, if the problem aims to optimize a model through interaction and feedback, it is a reinforcement learning problem.
 
Similarly, you can categorize a problem based on the outcome you want from the algorithm. If the output is expected to be numerical, it is a regression problem. If a class is the output of the model, it is a classification problem, and grouping of the input values can be categorized as a clustering problem.
 
Understand the Data : Data plays an important role in selecting the right algorithm for your problem. This is because some algorithms can process tons of data, while some work better with smaller samples. Analyzing and transforming your data will also help you understand the constraints and challenges you want to overcome while solving the problem.
 
Find the Available Algorithms : Identify the available algorithms you can apply to solve the problem in a reasonable timeframe. Some of the factors that may affect your choice include the accuracy of the algorithm, its complexity, scalability, interpretability, build and training time, space requirements, and the time it takes to solve the problem.
 
Implement the Algorithm : After selecting the algorithm, define evaluation criteria by carefully selecting test values and subsets of the dataset. Also, check the time taken by each algorithm to solve the problem. The algorithm that provides accurate results in the given time while requiring less space would be the best algorithm for your problem.
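As a rough illustration of evaluating several candidate algorithms, the sketch below compares three classifiers with 5-fold cross-validation in scikit-learn; the dataset and candidate models are arbitrary choices:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(),
    "knn": KNeighborsClassifier(),
}

# 5-fold cross-validation accuracy for each candidate algorithm
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```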
The Turing Test is a method of inquiry in artificial intelligence (AI) for determining whether or not a computer is capable of thinking like a human being. The test is named after its originator, Alan Turing, an English computer scientist, cryptanalyst, mathematician, and theoretical biologist.
 
Turing proposed that a computer can be said to possess artificial intelligence if it can mimic human responses under specific conditions. The original Turing Test requires three terminals, each of which is physically separated from the other two. One terminal is operated by a computer, while the other two are operated by humans.
 
During the test, one of the humans functions as the questioner, while the second human and the computer function as respondents. The questioner interrogates the respondents within a specific subject area, using a specified format and context. After a preset length of time or number of questions, the questioner is then asked to decide which respondent was human and which was a computer.
 
The test is repeated many times. If the questioner makes the correct determination in half of the test runs or less, the computer is considered to have artificial intelligence because the questioner regards it as "just as human" as the human respondent.
Reinforcement Learning is a type of Machine Learning algorithm based on a feedback loop in which an agent and an environment are set up.
 
The agent learns to behave in the environment by performing actions and observing the rewards and results it gets from those actions. This technique is therefore behavior-driven and based on reinforcement learned via trial and error, for example, learning how to ride a bicycle. The method can be used to optimize the operational productivity of systems such as supply chain logistics, manufacturing, and robotics.
Stemming : Stemming is a rudimentary rule-based process of stripping the suffixes (“ing”, “ly”,  “es”, “s” etc) from a word.
 
Lemmatization : Lemmatization, on the other hand, is an organized, step-by-step procedure for obtaining the root form of a word. It uses vocabulary (the dictionary meaning of words) and morphological analysis (word structure and grammatical relations).
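A small sketch of the difference using NLTK; this assumes NLTK is installed and the WordNet corpus can be downloaded, and the sample words are arbitrary:

```python
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

# One-time corpus download (assumes internet access)
nltk.download("wordnet", quiet=True)

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for word in ["studies", "studying", "better"]:
    print(word,
          "| stem:", stemmer.stem(word),
          "| lemma:", lemmatizer.lemmatize(word, pos="v"))
# Stemming strips suffixes (e.g. "studi"); lemmatization returns dictionary forms (e.g. "study").
```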
Tower of Hanoi is a mathematical puzzle that shows how recursion is utilized to build an algorithm to solve a specific problem.  In Artificial Intelligence,  the Tower of Hanoi can be solved using a decision tree and a Breadth-First Search (BFS) algorithm.
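A minimal recursive sketch of the Tower of Hanoi in Python; the peg names are arbitrary:

```python
def tower_of_hanoi(n, source, auxiliary, target):
    """Move n disks from source to target using auxiliary as a spare peg."""
    if n == 1:
        print(f"Move disk 1 from {source} to {target}")
        return
    tower_of_hanoi(n - 1, source, target, auxiliary)   # move n-1 disks out of the way
    print(f"Move disk {n} from {source} to {target}")  # move the largest disk
    tower_of_hanoi(n - 1, auxiliary, source, target)   # move n-1 disks onto it

tower_of_hanoi(3, "A", "B", "C")  # 2**3 - 1 = 7 moves
```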
In a bidirectional search algorithm, two searches are run simultaneously. The first search begins forward from the initial state, and the second goes backward in reverse from the goal state. Both the searches meet to identify a common state, and this way, the search ends. The initial state is linked with the goal state in a reverse manner.
Both Breadth-First Search (BFS)  and Depth-First Search (DFS) algorithms are used to search tree or graph data structures.
 
Breadth-first search algorithm :
* It is based on the FIFO (first-in, first-out) method.
* It starts from the root node, proceeds through neighboring nodes, and then moves to the next level of nodes. It produces one tree at a time until the solution is found.
* This strategy gives the shortest path to the solution (in terms of the number of edges).
 
 
Depth-first search algorithm :
* It is based on the LIFO (last-in, first-out) approach.
* It starts at the root node and searches as far as possible along every branch before it performs backtracking.
* The path from the root to the leaf node is stored in each iteration, giving a linear space requirement.
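The sketch below contrasts the two strategies on a small, made-up graph: BFS uses a FIFO queue, while DFS uses a LIFO stack:

```python
from collections import deque

graph = {                      # a small undirected graph (adjacency list)
    "A": ["B", "C"],
    "B": ["A", "D", "E"],
    "C": ["A", "F"],
    "D": ["B"],
    "E": ["B", "F"],
    "F": ["C", "E"],
}

def bfs(start):
    """Breadth-first: FIFO queue, explores the graph level by level."""
    visited, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

def dfs(start):
    """Depth-first: LIFO stack, follows one branch before backtracking."""
    visited, stack, order = set(), [start], []
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        stack.extend(reversed(graph[node]))  # keep left-to-right exploration order
    return order

print("BFS:", bfs("A"))  # ['A', 'B', 'C', 'D', 'E', 'F']
print("DFS:", dfs("A"))  # ['A', 'B', 'D', 'E', 'F', 'C']
```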
Fuzzy Logic (FL) is a method of reasoning that resembles human reasoning. The approach of FL imitates the way of decision making in humans that involves all intermediate possibilities between digital values YES and NO.
 
The conventional logic block that a computer can understand takes precise input and produces a definite output as TRUE or FALSE, which is equivalent to human’s YES or NO.
 
The inventor of fuzzy logic, Lotfi Zadeh, observed that unlike computers, human decision making includes a range of possibilities between YES and NO, such as the following :
 
* CERTAINLY YES
* POSSIBLY YES
* CANNOT SAY
* POSSIBLY NO
* CERTAINLY NO
 
Fuzzy logic works on the levels of possibility of the input to achieve a definite output.
 
Implementation : 
It can be implemented in systems with various sizes and capabilities ranging from small micro-controllers to large, networked, workstation-based control systems.
 
It can be implemented in hardware, software, or a combination of both.
Some important applications of Fuzzy Logic are :
 
* Facial pattern recognition
* Weather forecasting systems
* Project risk assessment
* Medical diagnosis and treatment plans
* Stock market prediction and trading
* Control of subway systems and crewless helicopters
* Anti-skid braking systems and transmission systems
* Air conditioners, washing machines, and vacuum cleaners
* It is a technique to embody human-like thinking in a control system.
* It may not be designed to give accurate reasoning but it is designed to give acceptable reasoning.
* It can emulate human deductive thinking, that is, the process people use to infer conclusions from what they know.
* Any uncertainties can be easily dealt with the help of fuzzy logic.
Advantages of Fuzzy Logic System :
 
* The construction of Fuzzy Logic Systems is easy and understandable.
* The algorithms can be described with little data, so little memory is required.
* Fuzzy logic comes with mathematical concepts of set theory and the reasoning of that is quite simple.
* This system can work with any type of input, whether it is imprecise, distorted, or noisy.
* It provides a very efficient solution to complex problems in all fields of life as it resembles human reasoning and decision-making.
 
 
Disadvantages of Fuzzy Logic Systems :

* As fuzzy logic works on both precise and imprecise data, accuracy is often compromised.
* Many researchers have proposed different ways to solve a given problem through fuzzy logic, which leads to ambiguity; there is no systematic approach to solving a given problem through fuzzy logic.
* Proof of its characteristics is difficult or impossible in most cases because we do not always get a mathematical description of our approach.
In the context of AI and DL systems, game theory enables some of the key capabilities required in multi-agent environments, in which various AI programs need to interact to meet their goals.
 
Game Theory is a branch of mathematics used to model strategic interactions between multiple players with pre-defined rules and outcomes. It is also used to describe several situations in our daily life and in machine learning models.
The following are the four elements of Markov Decision Process(MDP) :
 
* A set of possible world states S.
* A set of Models.
* A set of possible actions A.
* A real-valued reward function R(s,a).
* A policy, which is the solution of the Markov Decision Process.
 
In this process, the agent performs actions A to transition from the start state to the end state, and while performing these actions, the agent receives rewards. The series of actions taken by the agent defines the policy.
First Order Predicate Logic (FOPL) is the backbone of Artificial Intelligence, as well as a method of formal representation of Natural Language (NL) text. The Prolog language for AI programming has its foundations in FOPL. FOPL involves translating NL into facts and rules, the use of quantifiers and variables, a defined syntax and semantics, and the conversion of predicate expressions to clause form.
The language of FOPL includes the following :
 
* A set of variables
* A set of function symbols
* A set of constant symbols
* A set of predicate symbols
* A special binary relation of equality
* The Universal Quantifier and Existential Quantifier
* The logical connectives
A Hidden Markov Model (HMM) is a statistical model used to describe the evolution of observable events that depend on internal factors which are not directly observable. We call the observed event a symbol and the invisible factor underlying the observation a state.
Uniform-cost search is an uninformed search algorithm that uses the lowest cumulative cost to find a path from the source to the destination. Nodes are expanded, starting from the root, according to the minimum cumulative cost. The uniform-cost search is then implemented using a Priority Queue.
 
Uniform Cost Search Algorithm :
 
* Insert the root node into the priority queue
 
* Repeat while the queue is not empty :
   a) Remove the element with the minimum cumulative cost (highest priority)
   b) If the removed node is the destination, print total cost and stop the algorithm
   c) Else, enqueue all the children of the current node to the priority queue, with their cumulative cost from the root as priority
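A minimal Python sketch of uniform-cost search with a priority queue; the example graph and edge costs are made up:

```python
import heapq

# Weighted graph as an adjacency list: node -> list of (neighbour, edge_cost).
graph = {
    "S": [("A", 1), ("B", 5)],
    "A": [("B", 2), ("G", 9)],
    "B": [("G", 2)],
    "G": [],
}

def uniform_cost_search(source, goal):
    # Priority queue ordered by cumulative path cost from the source.
    frontier = [(0, source)]
    visited = set()
    while frontier:
        cost, node = heapq.heappop(frontier)   # lowest cumulative cost first
        if node == goal:
            return cost
        if node in visited:
            continue
        visited.add(node)
        for neighbour, edge_cost in graph[node]:
            if neighbour not in visited:
                heapq.heappush(frontier, (cost + edge_cost, neighbour))
    return None

print(uniform_cost_search("S", "G"))  # 5  (S -> A -> B -> G: 1 + 2 + 2)
```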
Backpropagation is a Neural Network algorithm that is mainly used to process noisy data and detect unrecognized patterns for better classification. It is a full-state, iterative algorithm. As an ANN algorithm, backpropagation works with three layers: input, hidden, and output.
 
The input layers receive the input values and constraints from the user or the outside environment. After that, the data goes to the Hidden layer where the processing is done. At last, the processed data is transformed into some values or patterns that can be shared using the output layer.
 
Before processing the data, the following values should be there with the algorithm :
 
Dataset : The dataset which is going to be used for training a model.
Target Attributes : Output values that an algorithm should achieve after processing the data. 
Weights : In a neural network, weights are the parameters that transform input data within the hidden layer.
Biases : At each node (except input nodes), a value called the bias is added to the calculated sum.

Backpropagation is a simple ANN algorithm that follows a standard approach for training ML models. It does not require high computational performance and is widely used in speech recognition, image processing, and optical character recognition (OCR).
A cost function is a scalar function that quantifies the error factor of the neural network. The lower the cost function, the better the neural network. For example, while classifying images in the MNIST dataset, the input image may be the digit 2, but the neural network wrongly predicts it to be 3.
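Two common cost functions sketched in Python: mean squared error for regression and binary cross-entropy for classification; the sample values are arbitrary:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: average squared difference between actual and predicted values."""
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

def cross_entropy(y_true, y_prob, eps=1e-12):
    """Binary cross-entropy: heavily penalises confident but wrong class probabilities."""
    y_true, y_prob = np.asarray(y_true), np.clip(np.asarray(y_prob), eps, 1 - eps)
    return -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))

print(mse([3.0, 2.0], [2.5, 2.0]))        # 0.125
print(cross_entropy([1, 0], [0.9, 0.2]))  # ~0.164
```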
The three main hyperparameter optimization algorithms are :
 
Grid Search : It searches a family of models parameterized by a grid of hyperparameter values. It trains the model on every possible combination of the hyperparameter values provided.

Random Search : It randomly samples the search space and evaluates sets drawn from a probability distribution. Here, the model is run only a fixed number of times.

Bayesian Optimization : It uses Bayes theorem to direct the search to find the minimum or maximum objective function. It is most useful for objective functions that are complex, noisy, and/or expensive to evaluate.
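A short sketch of grid search and random search with scikit-learn; the estimator, parameter grid, and dataset are arbitrary, and Bayesian optimization usually needs a separate library (such as scikit-optimize), so it is not shown:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
param_grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}

# Grid search: tries every combination in the grid.
grid = GridSearchCV(SVC(), param_grid, cv=5).fit(X, y)
print("grid search best:", grid.best_params_)

# Random search: samples a fixed number of combinations from the grid.
rand = RandomizedSearchCV(SVC(), param_grid, n_iter=5, cv=5, random_state=0).fit(X, y)
print("random search best:", rand.best_params_)
```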
Gradient descent is an optimization algorithm used to minimize the cost function, which is the error term. It is an iterative method that converges to the optimal solution by moving in the direction of steepest descent, as defined by the negative of the gradient. The gradient descent technique has a hyperparameter called the learning rate, α, which specifies the size of the steps the algorithm takes toward the optimal solution.
The two paradigms of ensemble methods are:
 
Parallel ensemble methods : These methods build several estimators or models independently and then average their predictions for regression or take a majority vote for classification problems. Examples : Bagging methods, Random Forest.

Sequential ensemble methods : These fall under the family of boosting methods, where the base estimators are built sequentially so as to reduce the bias of the combined estimator. Examples : AdaBoost, Gradient Boosting, XGBoost.
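A brief sketch contrasting the two paradigms with scikit-learn; the dataset and estimator settings are arbitrary:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Parallel ensemble: Random Forest builds independent trees and averages their votes.
parallel = RandomForestClassifier(n_estimators=100, random_state=0)

# Sequential ensemble: AdaBoost builds estimators one after another, each correcting the last.
sequential = AdaBoostClassifier(n_estimators=100, random_state=0)

for name, model in [("random forest", parallel), ("adaboost", sequential)]:
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```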
A Bayesian network is a statistical model that represents a set of variables and their conditional dependencies in the form of a directed acyclic graph. 
 
On the occurrence of an event, Bayesian Networks can be used to predict the likelihood that any one of several possible known causes was the contributing factor.
 
For example, a Bayesian network could be used to study the relationship between diseases and symptoms. Given various symptoms, the Bayesian network is ideal for computing the probabilities of the presence of various diseases.
Overfitting occurs when a statistical model or machine learning algorithm captures the noise of the data. This causes an algorithm to show low bias but high variance in the outcome. 
 
Overfitting can be prevented by using the following methodologies :

Cross-validation : The idea behind cross-validation is to split the training data in order to generate multiple mini train-test splits. These splits can then be used to tune your model.
 
More training data : Feeding more data to the machine learning model can help in better analysis and classification. However, this does not always work.
 
Remove features : Many times, the data set contains irrelevant features or predictor variables that are not needed for analysis. Such features only increase the complexity of the model, thus leading to possibilities of data overfitting. Therefore, such redundant variables must be removed.
 
Early stopping : A machine learning model is trained iteratively, this allows us to check how well each iteration of the model performs. But after a certain number of iterations, the model’s performance starts to saturate. Further training will result in overfitting, thus one must know where to stop the training. This can be achieved by a mechanism called early stopping.
 
Regularization : Regularization can be done in a number of ways; the method will depend on the type of learner you're implementing. For example, pruning is performed on decision trees, the dropout technique is used on neural networks, and parameter tuning can also be applied to solve overfitting issues.
 
Use Ensemble models : Ensemble learning is a technique that is used to create multiple Machine Learning models, which are then combined to produce more accurate results. This is one of the best ways to prevent overfitting. An example is Random Forest, it uses an ensemble of decision trees to make more accurate predictions and to avoid overfitting.
Keras is an open source neural network library written in Python. It is designed to enable fast experimentation with deep neural networks.

TensorFlow is an open-source software library for dataflow programming. It is used for machine learning applications like neural networks.

PyTorch is an open source machine learning library for Python, based on Torch. It is used for applications such as natural language processing.
Computer vision is a field of Artificial Intelligence that is used to train computers to interpret and obtain information from the visual world, such as images. Computer vision uses AI technology to solve complex problems such as image processing, object detection, etc.
There have been many misconceptions about artificial intelligence since the start of its evolution. Some of these misconceptions are given below :
 
AI does not require humans : The first misconception about AI is that it does not require humans. In reality, every AI-based system depends on humans in some way and will continue to; for example, it requires human-gathered data to learn from.

AI is dangerous for humans : AI is not inherently dangerous for humans, and it has not yet reached super AI or strong AI, which would be more intelligent than humans. Like any powerful technology, it is harmful only if it is misused.

AI has reached its peak stage : We are still far away from the peak stage of AI. It will be a very long journey to reach that point.

AI will take your job : One of the biggest misconceptions is that AI will take most jobs; in reality, it is giving us more opportunities for new jobs.
A chatbot is Artificial Intelligence software or an agent that can simulate a conversation with humans or users using Natural Language Processing. The conversation can take place through an application, a website, or a messaging app. Chatbots are also called digital assistants and can interact with humans through text or voice.
 
AI chatbots are widely used by businesses to provide 24x7 virtual customer support, for example the HDFC Eva chatbot, Vainubot, etc.
Knowledge representation techniques are given below :

* Production Rules
* Frame Representation
* Logical Representation
* Semantic Network Representation
 
 
* Google Cloud AI platform
* Microsoft Azure AI platform
* IBM Watson
* Infosys Nia
* Rainbird
* Dialogflow
* TensorFlow
Bayesian networks are graphical models showing the probabilistic relationship between the set of variables.

It depicts the variables and their respective conditional probabilities in the form of a directed acyclic graph. Bayesian networks are based on Bayes' theorem, which uses conditional probabilities.

It is used for anomaly detection, classifying e-mail as spam, and medical diagnosis.

Bayes' rule : In Artificial Intelligence, Bayes' rule can be used to answer probabilistic queries conditioned on one piece of evidence.
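A worked example of Bayes' rule for a toy spam filter; all the probabilities below are made up for illustration:

```python
# Bayes' rule: P(spam | word) = P(word | spam) * P(spam) / P(word)
p_spam = 0.2                  # prior probability that an email is spam
p_word_given_spam = 0.6       # "offer" appears in 60% of spam
p_word_given_ham = 0.05       # "offer" appears in 5% of legitimate mail

p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 3))  # 0.75
```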
Alpha-Beta pruning is a search technique that reduces the number of nodes evaluated by the minimax algorithm in the search tree. It can be applied at any depth of the tree and can prune entire subtrees as well as individual leaves.
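A minimal sketch of minimax with alpha-beta pruning on a small, made-up game tree; the leaf scores and tree shape are arbitrary:

```python
import math

# Leaf scores of a depth-3 binary game tree (8 leaves), purely illustrative.
LEAVES = [3, 5, 6, 9, 1, 2, 0, -1]
VALUES = [None] * 8 + LEAVES          # heap-style indexing: root = 1, leaves = 8..15

def alphabeta(node, depth, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning; skips branches that cannot change the result."""
    if depth == 0:
        return VALUES[node]
    best = -math.inf if maximizing else math.inf
    for child in (2 * node, 2 * node + 1):
        value = alphabeta(child, depth - 1, alpha, beta, not maximizing)
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if beta <= alpha:
            break                      # cut-off: the opponent will never allow this branch
    return best

print(alphabeta(1, 3, -math.inf, math.inf, True))  # 5
```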
FNN (Feedforward Neural Network) is the first and simplest form of ANN, in which the connections between the nodes do not form a cycle. The data or input travels in one direction: it enters through the input nodes and exits through the output nodes. In this network, there may or may not be hidden layers.
CNN (Convolutional Neural Network) is a type of feed-forward artificial neural network, a variation of the multilayer perceptron designed to require minimal preprocessing. A convolutional neural network takes the input features batch-wise, like a filter, assigns importance (learnable weights and biases) to various aspects or objects in the image, and is able to differentiate one from the other. The network is able to remember images in parts and compute the operations. It is mostly used to analyze visual imagery and for signal and image processing.
Long Short-Term Memory (LSTM) networks are a type of recurrent neural network capable of learning order dependence in sequence prediction problems. In addition to the standard units, they use special units that include a "memory cell", which can maintain information in memory for long periods of time. LSTMs are a complex area of deep learning used in domains such as machine translation and speech recognition.
Bias is the difference between a model’s predicted values and the observed or the actual values. Variance of a model is the difference between predictions of the model when fit to the train and the test data set.
 
When a model is too simple, it cannot make perfectly accurate predictions; however, its predictions will be consistent. In such a case, the model is underfitting and has high bias and low variance. On the other hand, if the model is too complex, it can predict accurately but not consistently. In this case, the model is said to have high variance and low bias, indicating it will fit the training data much better than the test data. Such a model is an overfitted model.
Artificial Intelligence is used in Fraud detection problems by implementing Machine Learning algorithms for detecting anomalies and studying hidden patterns in data. 
 
The steps for detecting fraudulent activities are as follows :
 
Data Extraction : At this stage, data is collected either through a survey or by web scraping. If you're trying to detect credit card fraud, information about the customer is collected. This includes transactional, shopping, and personal details, etc.
 
Data Cleaning : At this stage, the redundant data must be removed. Any inconsistencies or missing values may lead to wrongful predictions, therefore such inconsistencies must be dealt with at this step.
 
Data Exploration & Analysis : This is the most important step in AI. Here you study the relationship between various predictor variables. For example, if a person has spent an unusual sum of money on a particular day, the chances of a fraudulent occurrence are very high. Such patterns must be detected and understood at this stage.
 
Building a Machine Learning model : There are many machine learning algorithms that can be used for detecting fraud. One such example is Logistic Regression, which is a classification algorithm. It can be used to classify events into 2 classes, namely, fraudulent and non-fraudulent.
 
Model Evaluation : Here, you basically test the efficiency of the machine learning model. If there is any room for improvement, then parameter tuning is performed. This improves the accuracy of the model.
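As a rough illustration of the model-building step, the sketch below trains a Logistic Regression classifier on synthetic, imbalanced data standing in for transaction records; the data and settings are assumptions made for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for transaction data: fraud (class 1) is the rare class.
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.97, 0.03],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# class_weight="balanced" compensates for how rare fraudulent transactions are.
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test),
                            target_names=["non-fraudulent", "fraudulent"]))
```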
Components of GAN :

* Generator
* Discriminator


Deployment Steps :

* Train the model
* Validate and finalize the model
* Save the model
* Load the saved model for the next prediction
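A minimal sketch of the train / save / load cycle using Keras; the toy model, the random training data, and the file name my_model.keras are assumptions, and the exact save format can vary by Keras version (validation is omitted here):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# 1. Train a toy model on random data, just to have something to save
model = keras.Sequential([keras.Input(shape=(4,)), layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(32, 4), np.random.rand(32, 1), epochs=1, verbose=0)

# 2./3. Validate and finalize (omitted), then save the model to disk
model.save("my_model.keras")

# 4. Load the saved model later for the next prediction
restored = keras.models.load_model("my_model.keras")
print(restored.predict(np.random.rand(1, 4), verbose=0))
```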
Gradient descent is an optimization algorithm that is used to find the coefficients of parameters that are used to reduce the cost function to a minimum.
 
Step 1 : Allocate weights (x,y) with random values and calculate the error (SSE)
 
Step 2 : Calculate the gradient, i.e., the variation in SSE when the weights (x,y) are changed by a very small value. This helps us move the values of x and y in the direction in which SSE is minimized
 
Step 3 : Adjust the weights with the gradients to move toward the optimal values where SSE is minimized
 
Step 4 : Use new weights for prediction and calculating the new SSE
 
Step 5 : Repeat Steps 2 and 3 until further adjustments to the weights do not significantly reduce the error
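A small sketch of these steps fitting a straight line by minimizing the SSE; the toy data, learning rate, and iteration count are arbitrary choices:

```python
import numpy as np

# Toy data for y = 2x + 1; we fit (slope, intercept) by gradient descent on the SSE.
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
Y = 2 * X + 1

slope, intercept = np.random.rand(), np.random.rand()   # Step 1: random initial weights
lr = 0.01                                               # learning rate (alpha)
prev_sse = float("inf")

for _ in range(5000):
    error = slope * X + intercept - Y
    sse = np.sum(error ** 2)                            # Steps 1 and 4: the error (SSE)
    if prev_sse - sse < 1e-12:                          # Step 5: stop when SSE stops improving
        break
    prev_sse = sse
    grad_slope = 2 * np.sum(error * X)                  # Step 2: gradient of SSE w.r.t. slope
    grad_intercept = 2 * np.sum(error)                  #         and w.r.t. intercept
    slope -= lr * grad_slope                            # Step 3: adjust the weights
    intercept -= lr * grad_intercept

print(round(slope, 3), round(intercept, 3))             # approximately 2.0 and 1.0
```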
To understand further, it is good to get familiar with the term MLaaS, which is Machine Learning as a Service. MLaaS offers multiple ML-related services as an additional component to cloud computing. There are multiple options in the market, so it would depend on the use-cases being tackled. Let's discuss a few of the popular ones :
 
Google's Vertex AI : Google recently unveiled their Unified MLOps & AI Platform – Vertex AI to help Data Scientists / ML Engineers increase experimentation, deploy faster and manage models better.
 
AWS Machine Learning : AWS is the most significant player in the cloud computing market. Their Machine Learning offerings have matured over time to cater to all machine learning needs from speech recognition, computer vision, AI, etc.
 
Azure Machine Learning : Azure has been grabbing more market share over the years because it has done a lot right in the machine learning space. Azure Machine Learning is a no-code, drag and drop interface to perform machine learning at Scale along with model management.
 
Artificial Intelligence Governance can be measured as the following components : 
 
Data : Tracks the data flow from start to end to ensure that data lineage and provenance are validated and that there are no loopholes.
 
Security : If someone can manipulate an AI system's results by tampering with the model, this can lead to severe issues. In the future, this could be tackled by using blockchain to imprint AI systems.
 
Cost and value of data : Key performance indicators to track the cost of the data and the value obtained from the algorithm help measure effectiveness continuously.
 
Bias : Exposing selection and measurement bias with continuous automated tracking can help understand when a model drifts from its initial purpose (through self-learning). You should monitor this constantly to ensure that AI Ethics are maintained.
 
Accountability : Clarity on the individuals responsible for the system and accountable for its decisions, covering everything from security loopholes to maintenance and monitoring, is part of the AI governance of the future.
 
Audit : Audit trails and third party reviews can ensure that systems that affect human life are held accountable. 
 
Time : Model drift and impact over time should be captured to ensure that the model is more efficient than the traditional implementation.
Ethical AI is the practice of leveraging AI with good intentions to empower employees and businesses. Ethical AI lets companies scale AI with confidence. Companies are currently in the development phase of determining what AI is going to be in the future. There is no single source of truth; each company has its own Ethical AI framework :
 
* Google 
* Microsoft
* Facebook
 
So, there is currently no one-stop solution as to what Ethical AI should look like at an enterprise level as it is a work in progress. However, some of the pillars of Ethical AI are as follows :
 
* Privacy & Security
* Fairness & Inclusion
* Robustness & Safety
* Transparency & Control
* Accountability & Governance
Artificial Intelligence (AI) can be broadly classified into the following categories :

Narrow or Weak AI : designed to perform a specific task, such as voice recognition or image classification.

General or Strong AI : designed to perform any intellectual task that a human can. It does not exist yet.

Supervised Learning : the system is trained on a labeled dataset and makes predictions based on the patterns it learned from the data.

Unsupervised Learning : the system is given an unlabeled dataset and finds patterns and relationships within the data on its own.

Reinforcement Learning : the system learns through trial and error, receiving rewards or punishments for specific actions.

Deep Learning : a subset of machine learning, it uses artificial neural networks to model complex patterns in data.
The history of Artificial Intelligence (AI) can be traced back to the mid-20th century. Some key milestones include:

1950-1956 : British mathematician and computer scientist Alan Turing introduced the concept of a machine that could perform tasks that would typically require human intelligence. The field of AI research was formalized in 1956, at a conference at Dartmouth College, where the term "Artificial Intelligence" was coined.

1966-1973 : The "AI Winter" a period of reduced funding and interest in AI research due to unrealistic expectations and a lack of progress.

1980s : Expert systems, AI applications that mimic the decision-making abilities of a human expert in a particular domain, became popular.

1997 : Deep Blue, a computer program developed by IBM, defeated world chess champion Garry Kasparov, marking a significant milestone in AI.

Late 2000s : The rise of big data, cloud computing, and advances in machine learning algorithms led to a resurgence of AI research and development.

2010s : AI began to be integrated into everyday technologies, such as smartphones, personal assistants, and home automation systems.

Today, AI is a rapidly growing field with a wide range of practical applications, including natural language processing, computer vision, robotics, and autonomous systems.
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and act like humans. It involves the development of algorithms and systems that can learn from data, make decisions, and perform tasks that typically require human intelligence, such as speech recognition, image classification, and language translation.

AI differs from traditional software in several ways. Traditional software is built with a specific set of rules and instructions that dictate how it should function. AI, on the other hand, is designed to learn and adapt over time, without the need for additional programming. Additionally, AI systems can handle large amounts of data and operate at a much faster pace than traditional software, making it a valuable tool for solving complex problems and automating tasks.
There are many algorithms used in AI coding, each with its own strengths and weaknesses. Here are some common algorithms used in AI :

Linear Regression : A supervised learning algorithm used to model the relationship between dependent and independent variables.

Logistic Regression : A supervised learning algorithm used for classification problems.

Decision Trees : A supervised learning algorithm that creates a tree-like model of decisions and their possible consequences.

Random Forest : An ensemble learning algorithm that combines multiple decision trees to improve accuracy and reduce overfitting.

Support Vector Machines (SVM) : A supervised learning algorithm used for classification and regression problems.

k-Nearest Neighbors (k-NN) : A supervised learning algorithm used for classification and regression problems.

Naive Bayes : A probabilistic algorithm used for classification problems.

Artificial Neural Networks : A family of algorithms inspired by the structure and function of the human brain, used for a variety of problems including classification, regression, and image and speech recognition.

Convolutional Neural Networks (CNNs) : A type of neural network specifically designed for image recognition.

Recurrent Neural Networks (RNNs) : A type of neural network specifically designed for sequential data analysis, such as natural language processing.

These are just a few examples of the algorithms used in AI coding. The choice of algorithm will depend on the problem being solved and the type of data being used. It is important to choose the appropriate algorithm for the problem to achieve the best results.