Apisero Interview Preparation and Recruitment Process


About Apisero


Apisero is a global consulting firm specializing in MuleSoft and Salesforce solutions. Founded in 2016 and headquartered in Chandler, Arizona, Apisero has grown to become a key player in the integration and digital transformation space.


Acquisition by NTT DATA


In October 2022, Apisero was acquired by NTT DATA, a global leader in IT and business services.
This acquisition aimed to enhance NTT DATA's capabilities in cloud, data, and engineering services, particularly in the Salesforce and MuleSoft ecosystems. Post-acquisition, Apisero operates as "Apisero, an NTT DATA company," maintaining its brand identity while leveraging NTT DATA's global resources.



Core Competencies

  • MuleSoft Expertise: Apisero is recognized as a strategic MuleSoft partner, offering services like system integration, connector development, and training.

  • Salesforce Services: The company provides comprehensive Salesforce solutions, including consulting, implementation, and support.

  • Industry Focus: Apisero serves various sectors such as healthcare, financial services, manufacturing, retail, and education.



Global Presence

Apisero has a significant global footprint, with a strong presence in India. Approximately 90% of its workforce is based in Indian cities like Pune, Mumbai, Delhi, Kolkata, Ranchi, Bangalore, Hyderabad, Guwahati, and Chennai.



Achievements

  • Eight-time MuleSoft Partner of the Year awardee.

  • Over 1,500 certified MuleSoft consultants and 500 Salesforce consultants.

  • Annual revenue of approximately $750 million as of April 2025.

Apisero's integration into NTT DATA has expanded its capabilities, allowing it to offer end-to-end digital transformation solutions to a broader client base.



Apisero Recruitment Process


Apisero was acquired by NTT DATA and now operates as NTT DATA Salesforce & MuleSoft Services, but the core recruitment process is likely to remain similar. Based on reports from candidates who interviewed with Apisero, the recruitment process typically involved the following stages:

1. Online Assessment (Coding and Aptitude Round):

  • This is usually the first step and is an elimination round.
  • It typically includes sections on:
    • Logical Reasoning: Tests your analytical and problem-solving abilities.
    • Aptitude: Evaluates general cognitive abilities.
    • Technical Evaluation:
      • Coding Questions: Basic to medium level, often in languages like C++, Java, or Python.
      • Multiple Choice Questions: Based on fundamental computer science concepts such as operating systems, database management systems, and computer networks.

2. Technical Interviews (Multiple Rounds):

  • Candidates who clear the online assessment proceed to one or more rounds of technical interviews. The number of rounds can vary (usually 2-3).
  • These rounds assess your technical skills in more depth. Questions may include:
    • Basic Computer Science Fundamentals: Expect questions on data structures, algorithms, operating systems, DBMS, and networking.
    • Coding: You might be asked to write code for medium to hard-level data structures and algorithm problems.
    • Programming Language Proficiency: Interviewers will likely delve into your chosen programming language(s) (e.g., Java, Python, C++).
    • MuleSoft/Salesforce Specific Questions (depending on the role): If you're applying for a role requiring these skills, expect questions related to the platform, APIs, integration concepts, and relevant technologies (like RAML, DataWeave for MuleSoft or Apex, Lightning for Salesforce).
    • SQL: You might be asked to write SQL queries for various scenarios.
    • Cloud Technologies: Questions on cloud platforms (like AWS, Azure, GCP) might be asked.
    • API Concepts: An understanding of how APIs work and how distributed systems are designed can be a plus.
    • Experience-Based Questions: For experienced candidates, questions will focus on previous projects, your roles, and the technologies you've worked with.
    • Resume Review: Be prepared to discuss everything mentioned in your resume.
    • Puzzles: Some rounds might include logical or technical puzzles.

3. HR/Behavioral Round:

  • This is usually the final round.
  • It focuses on assessing your soft skills, communication abilities, cultural fit, and overall personality.
  • Common questions include:
    • Tell me about yourself.
    • Why do you want to work for our company (now NTT DATA)?
    • What are your strengths and weaknesses?
    • Where do you see yourself in 5 years?
    • Why should we hire you?
    • How do you handle conflict or pressure?
    • Questions about your teamwork and problem-solving skills.
    • Discussion about salary expectations and benefits.

Important Points to Note:

  • The process can vary depending on the specific role, your experience level (fresh graduate vs. experienced professional), and the current hiring needs of the company (now NTT DATA).
  • Elimination at Each Stage: Each round is typically an elimination round, so you need to perform well to move to the next stage.
  • Preparation is Key: To succeed, it's crucial to have a strong foundation in computer science fundamentals, be proficient in at least one programming language, and if applicable, have knowledge of MuleSoft or Salesforce. Be prepared to explain your projects and experiences clearly.
  • Now Part of NTT DATA: Keep in mind that Apisero is now part of NTT DATA. While the core interview principles might remain similar, the branding and some specific questions might reflect the parent company.

To get the most accurate and up-to-date information about the current recruitment process, it's best to check the NTT DATA careers website or reach out to their recruitment team directly.


Apisero Interview Questions

1. What do you understand about the SLIP protocol?
SLIP is an acronym for Serial Line Internet Protocol. It is a fundamental TCP/IP-based protocol used for communication over serial ports and routers. It allows machines that were previously configured for direct communication to connect with one another.

For example, a customer could be connected to the Internet Service Provider (ISP) over a slower SLIP line. When the client needs a service, it contacts the ISP and makes a request. The ISP forwards the request to the Internet over high-speed multiplexed lines, then sends the results back to the client over the SLIP line. A SLIP frame has a straightforward format, consisting of a payload followed by a flag byte that serves as an end delimiter. A special character with a decimal value of 192 is commonly used as the flag. If this flag byte occurs within the data itself, it is replaced by an escape sequence so that the receiver does not misinterpret it as the end of the frame.
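
To make the framing concrete, here is a minimal C++ sketch of SLIP encoding as defined in RFC 1055 (the payload bytes are made up for the example). The flag value 192 (0xC0) appearing inside the data is replaced by the two-byte escape sequence 219, 220 (ESC, ESC_END):

#include <cstdint>
#include <iostream>
#include <vector>

// SLIP special bytes as defined in RFC 1055
const uint8_t END_FLAG = 0xC0; // 192: marks the end of a frame
const uint8_t ESC      = 0xDB; // 219: escape character
const uint8_t ESC_END  = 0xDC; // 220: escaped form of END_FLAG
const uint8_t ESC_ESC  = 0xDD; // 221: escaped form of ESC

// Encode a payload into a SLIP frame: escape any END/ESC bytes inside
// the data, then append the END flag as the frame delimiter.
std::vector<uint8_t> slipEncode(const std::vector<uint8_t>& payload) {
    std::vector<uint8_t> frame;
    for (uint8_t b : payload) {
        if (b == END_FLAG)  { frame.push_back(ESC); frame.push_back(ESC_END); }
        else if (b == ESC)  { frame.push_back(ESC); frame.push_back(ESC_ESC); }
        else                { frame.push_back(b); }
    }
    frame.push_back(END_FLAG);
    return frame;
}

int main() {
    std::vector<uint8_t> payload = {1, 192, 2}; // data containing the flag value
    for (uint8_t b : slipEncode(payload))
        std::cout << static_cast<int>(b) << " "; // prints: 1 219 220 2 192
    std::cout << "\n";
    return 0;
}
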
2. Highlight the differences between SLIP and the Point to Point Protocol (PPP).

  • SLIP is an acronym for Serial Line Internet Protocol; PPP is an acronym for Point to Point Protocol.
  • SLIP is the predecessor of PPP; PPP is the successor of SLIP.
  • SLIP encapsulates only Internet Protocol packets, whereas PPP can carry datagrams of multiple network-layer protocols.
  • Authentication mechanisms are not provided by SLIP; they are provided by PPP.
  • SLIP supports only static IP address assignment; PPP supports dynamic IP address assignment.
  • In SLIP, data is transferred in synchronous form; in PPP, data can be transferred in both synchronous and asynchronous form.
3. What are the key differences between TCP and UDP?

  • TCP (Transmission Control Protocol) is a connection-oriented protocol: the communicating devices must establish a connection before transmitting data and close it after the transfer. UDP (User Datagram Protocol) is a datagram-oriented protocol: there is no overhead of creating, maintaining, and terminating a connection, which makes UDP a good choice for broadcast and multicast transmission.
  • TCP is reliable because it ensures delivery of data to the destination. In UDP, delivery of data to the destination cannot be assured, so it is not reliable.
  • TCP has a number of error-checking mechanisms, since it provides flow control and acknowledgement of data. UDP has only a checksum-based error-checking mechanism.
  • TCP has an acknowledgement segment; UDP has no acknowledgement segment.
  • TCP is slower, more complicated, and less efficient than UDP. UDP is faster, simpler, and more efficient than TCP.
  • TCP sequences data, meaning packets arrive at the receiver in the order in which they were sent. UDP has no data sequencing; if ordering is required, the application layer must manage it.
  • TCP uses a variable-length header of 20 to 60 bytes. UDP has a fixed-length header of 8 bytes.
  • TCP allows for the retransmission of dropped packets; UDP does not.
  • Broadcasting is not supported by TCP but is supported by UDP.
  • TCP uses handshakes such as SYN, SYN-ACK, and ACK; UDP is a connectionless protocol and requires no handshake.
  • TCP is a heavyweight protocol; UDP is a lightweight protocol.
  • HTTP, HTTPS, FTP, SMTP, and Telnet all use TCP; DNS, DHCP, TFTP, SNMP, RIP, and VoIP all use UDP.
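
The contrast is also visible in code. Below is a minimal POSIX-sockets sketch in C++ (Linux/macOS; the loopback address and port 9000 are arbitrary values chosen for illustration): the UDP side sends a datagram with no handshake, while the TCP side must first complete connect(), which performs the SYN, SYN-ACK, ACK handshake.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9000);                 // assumed demo port
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    const char* msg = "hello";

    // UDP: connectionless -- a single sendto() is enough, no handshake.
    int udp = socket(AF_INET, SOCK_DGRAM, 0);
    sendto(udp, msg, strlen(msg), 0, (sockaddr*)&addr, sizeof(addr));
    close(udp);

    // TCP: connection-oriented -- connect() must complete the handshake
    // before send() can transfer any data.
    int tcp = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(tcp, (sockaddr*)&addr, sizeof(addr)) == 0) {
        send(tcp, msg, strlen(msg), 0);
    } else {
        perror("connect");                       // fails unless a server is listening
    }
    close(tcp);
    return 0;
}
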
4. State the advantages and disadvantages of the star topology in Computer Networks.

The star topology is a common network configuration where all devices (nodes) in the network are connected to a central hub or switch. Here are the advantages and disadvantages of this topology:

Advantages:

  • Easy to Install and Wire: Each device connects directly to the central hub with its own cable, making installation straightforward.
  • Easy Fault Detection and Isolation: If a connection fails, only that specific device is affected, and the rest of the network continues to function. Identifying the faulty device or cable is also relatively easy.
  • Reliable: The failure of one node or its connection does not impact the rest of the network.
  • Scalable: Adding or removing devices is simple and doesn't disrupt the existing network. You just need to connect or disconnect the cable at the central hub.
  • Centralized Management: The central hub provides a single point for network administration, making it easier to monitor and manage the network.
  • High Performance: In a switched star topology, each device has a dedicated connection to the switch, reducing the chances of data collisions and leading to better performance compared to topologies like bus topology.
  • Secure: Each device has a dedicated connection, which can enhance security compared to shared media topologies. It's easier to implement security measures at the central hub.
  • Cost-Effective for Small to Medium Networks: For many small to medium-sized networks, the cost of the hub or switch and the cabling can be reasonable, especially considering the ease of management and troubleshooting.


Disadvantages:

  • Single Point of Failure: The central hub or switch is a single point of failure. If this central device fails, the entire network goes down, and communication between all connected devices is disrupted.
  • More Cabling Required: Compared to a bus topology, the star topology requires more cable because each device needs a separate connection to the central hub. This can increase installation costs and complexity, especially for larger networks.
  • Dependent on Central Device Capacity: The performance and the number of nodes that can be added to the network are limited by the capacity of the central hub or switch. A less powerful central device can become a bottleneck as the network grows.
  • Higher Cost Compared to Bus Topology: The initial cost of implementing a star topology can be higher than a bus topology due to the cost of the central hub or switch.
  • Potential for Congestion at the Hub/Switch: If the central hub or switch is not powerful enough to handle the network traffic, it can become congested, leading to slower data transfer rates for all connected devices.
  • Limited Distance: The distance between each device and the central hub is limited by the type of cable used.
5. State your understanding of Distributed Database Management Systems with Transparency, also known as Transparent Distributed Database Management Systems.

My understanding of Distributed Database Management Systems (DDBMS) with Transparency, often referred to as Transparent Distributed Database Management Systems, is that they are systems designed to manage a database that is spread across multiple physical locations (sites) in a way that is invisible to the end-user or application. The goal of transparency is to make the distributed nature of the database appear as if it were a single, centralized database.

In essence, a transparent DDBMS aims to hide the complexities of distribution from users, allowing them to interact with the database as if it were a local, monolithic system. This simplifies application development and user interaction, as they don't need to be aware of where the data is physically stored or how it is accessed across the network.  

There are several types or levels of transparency that a DDBMS can aim to achieve:

  • Fragmentation Transparency: This hides the fact that a single logical relation (table) might be divided into several physical fragments stored at different sites. Users should be able to query the logical relation without knowing how it's fragmented or where the fragments reside. The DDBMS handles the task of locating the necessary fragments and assembling the result.  
  • Location Transparency (or Data Location Transparency): This hides the physical location of the data. Users don't need to know which site stores a particular data item or fragment. The DDBMS is responsible for locating the data based on the user's request.  
  • Replication Transparency: This hides the fact that data might be replicated (copied) across multiple sites for reasons like improved availability and performance. Users should not be concerned with which copy of the data is being accessed or how updates are propagated to all replicas. The DDBMS manages the consistency of the replicated data.
  • Access Transparency (or Network Transparency): This hides the network access mechanisms required to retrieve data from different sites. Users interact with the database using standard SQL or other data manipulation languages, and the DDBMS handles the underlying network communication.
  • Failure Transparency (or Transaction Transparency): This aims to ensure that transactions complete correctly even in the presence of site or communication failures. The DDBMS should handle issues like transaction commit protocols across multiple sites and recovery mechanisms to maintain data consistency despite failures.
  • Concurrency Transparency: This ensures that concurrent transactions executing at different sites do not interfere with each other and produce the same results as if they were executed serially on a single database. The DDBMS employs distributed concurrency control mechanisms (like distributed locking or timestamping) to achieve this.

Key Characteristics of a Transparent DDBMS:

  • Single Logical View: Presents a unified view of the distributed data to users and applications.
  • Automated Data Access: The system automatically determines the location of the requested data and retrieves it.
  • Hides Distribution Details: Users are shielded from the physical organization, fragmentation, replication, and location of data.  
  • Simplified Application Development: Developers can write applications as if they were interacting with a centralized database, reducing complexity.
  • Improved Data Independence: Changes in the physical distribution of data do not necessarily require changes in application programs.

In essence, the concept of transparency in DDBMS is about providing a seamless and user-friendly experience by abstracting away the underlying complexities introduced by data distribution. The more levels of transparency a DDBMS achieves, the easier it is for users and applications to interact with the distributed data as if it were a single, cohesive whole.

6. What are the differences between RDBMS (Relational Database Management Systems) and DBMS (Database Management Systems)?
  • An RDBMS stores data in the form of tables of rows and columns, whereas a DBMS stores data in the form of files.
  • In an RDBMS, related data is stored together in a single table; there is no relationship between the data stored in the files of a DBMS.
  • In an RDBMS, many data items can be fetched simultaneously using mechanisms like grouping; in a DBMS, only a single data item can be fetched at a time.
  • Relational tables can be normalized, which reduces data redundancy; a DBMS has no concept of normalization, so data redundancy is prevalent.
  • Distributed databases are supported by an RDBMS; a DBMS does not support distributed databases.
  • An RDBMS can support multiple users at the same time (concurrent access is possible); only one user can use a DBMS at a particular time.
  • Data fetching is faster in an RDBMS due to the relational approach; it is normally very slow in a DBMS, since there is no relation between the data in a file.
  • An RDBMS has substantial hardware and software requirements; a DBMS has very few.
  • Data is more secure in an RDBMS than in a DBMS, because security mechanisms are implemented at various levels; data in a DBMS is highly insecure.
  • Examples of RDBMSs include Oracle, MySQL, and PostgreSQL; examples of DBMSs include XML-based stores and the Windows Registry.
7. What are the various types of memory spaces that the Java Virtual Machine allocates in Java?
The following are the several types of memory spaces allocated by the Java Virtual Machine:

* The Class(Method) Area: The Class(Method) Area keeps track of per-class structures such as the runtime constant pool, fields, method data, and method code.

* Program Counter (PC) Register: The PC (program counter) register stores the address of the Java virtual machine instruction currently being executed.

* Stack: The Java Stack stores frames. It holds local variables and partial results, and plays a part in method invocation and return. Each thread has its own JVM stack, created at the same time as the thread. A new frame is created each time a method is invoked, and the frame is destroyed when its method invocation completes.

* The Native Method Stack: The Native Method Stack contains every single native method utilized in the application.

* Heap: This is where the objects' memory is allocated during runtime.
8. What do you understand about Polymorphism in Object Oriented Programming?
Polymorphism in Object-Oriented Programming is defined as the presence of several forms. The word is made up of two parts: "poly", meaning many, and "morph", meaning form. A polymorphic entity behaves differently depending on the situation.

This typically happens when we have numerous classes that are related to each other by inheritance. Consider a base class named Car, which has a method called carBrand().

Volkswagen, Ferrari, BMW, and Audi could be derived car classes, each providing its own implementation of carBrand(), as illustrated in the sketch after the list below. In C++, there are two types of polymorphism:

* Compile Time Polymorphism or Static Polymorphism.

* Runtime Polymorphism or Dynamic Polymorphism.
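
A minimal C++ sketch of both kinds, reusing the Car/carBrand() example from above (the overloaded add() function is an extra illustration of the compile-time case):

#include <iostream>

// Runtime (dynamic) polymorphism: the call is resolved at run time
// based on the actual type of the object.
class Car {
public:
    virtual void carBrand() { std::cout << "Generic car\n"; }
    virtual ~Car() = default;
};

class BMW : public Car {
public:
    void carBrand() override { std::cout << "BMW\n"; }
};

class Audi : public Car {
public:
    void carBrand() override { std::cout << "Audi\n"; }
};

// Compile-time (static) polymorphism: overload resolution happens
// entirely at compile time based on the argument types.
int add(int a, int b) { return a + b; }
double add(double a, double b) { return a + b; }

int main() {
    BMW bmw;
    Audi audi;
    Car* cars[] = {&bmw, &audi};
    for (Car* c : cars) c->carBrand();                      // prints BMW, then Audi
    std::cout << add(2, 3) << " " << add(2.5, 3.5) << "\n"; // prints 5 6
    return 0;
}
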
9. What do you understand about Memory Management in Operating Systems and why is it important?
In a multiprogramming computer, the operating system occupies a portion of memory, while the rest is used by multiple processes. Memory management is the practice of dividing memory among these processes. It is an operating system mechanism for coordinating data movement between main memory and disk during the execution of a task. The primary goal of memory management is to maximize memory utilization.

Memory management is necessary for the following reasons:

* Memory must be allocated before a process runs and deallocated after it finishes.
* To keep track of how much memory is being consumed by processes.
* To keep fragmentation to a minimum.
* To make the best use of the main memory.
* To keep data safe while a process is running.
10. State a few benefits and a few drawbacks of using threads with respect to Operating Systems.
A thread is a path of execution within a process. A process can have several threads. Within a process, it's a separate control flow. It consists of a context and a series of instructions that must be followed. Threads in the same process use shared memory space. Because threads aren't truly independent of one another, they share their code, data, and OS resources with other threads (like open files and signals).

The following are the main benefits of using threads in Operating Systems:

* A separate communication system is not required.
* Threads simplify software structure and improve readability.
* Threads have a faster context switching time (time to switch from one thread to another) than processes.
* The system gets more efficient as fewer system resources are used.

The following are the most significant drawbacks of using threads:

* Because threads are part of a single process, they cannot be reused.
* They interfere with the address space of their process.
* They require synchronization for concurrent read and write memory access.
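
The synchronization drawback is easy to demonstrate. In the following small C++11 sketch (typically compiled with -pthread), two threads of the same process share one counter, so every increment must be guarded by a mutex:

#include <iostream>
#include <mutex>
#include <thread>

// The counter lives in memory shared by all threads of the process.
int counter = 0;
std::mutex counterMutex;

void increment(int times) {
    for (int i = 0; i < times; i++) {
        std::lock_guard<std::mutex> lock(counterMutex); // synchronized access
        counter++;
    }
}

int main() {
    std::thread t1(increment, 100000);
    std::thread t2(increment, 100000);
    t1.join();
    t2.join();
    // Without the mutex, lost updates could make this print less than 200000.
    std::cout << counter << "\n"; // prints: 200000
    return 0;
}
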
11. What are your thoughts on virtual memory in terms of operating systems?
Virtual Memory is a storage allocation scheme that lets secondary memory be addressed as if it were part of main memory. Program-generated addresses are automatically translated into machine addresses, and they are distinct from the addresses the memory system uses to designate physical storage locations. The capacity of virtual storage is limited by the computer system's addressing scheme and by the amount of secondary memory available, rather than by the actual number of main storage locations.
12. What do you understand about Spooling in Operating Systems? Give an application of spooling.
Spooling stands for "Simultaneous Peripheral Operations Online". It is the practice of temporarily storing data so that it can be used and processed by a device, program, or system: data is sent to and held in memory or other volatile storage until a program or computer requests it for execution. The spool is typically kept in physical memory, in buffers, or via interrupts for Input and Output devices, and it is processed in ascending order using the FIFO (first in, first out) approach. Spooling thus collects data from many Input and Output activities into a buffer, a region of memory or hard disk that Input and Output devices can access. An operating system performs the following spooling tasks:

* Data spooling for Input and Output devices with varying data access rates is handled.

* Maintains the spooling buffer, which acts as a temporary data storage region while the slower device catches up.

* Spooling supports parallel computing, because a computer can perform input and output in parallel: it is possible for the computer to read data from a tape, write data to disk, and send output to a printer at the same time.

The most obvious application of spooling is printing. Documents to be printed are held in the spool before being added to the printing queue. During this period, several programmes can run and use the CPU without waiting for the printer to finish printing each document individually. Many additional features can be added to the spooled printing process, such as setting priorities, receiving notifications when printing is complete, and selecting different types of paper to print on based on the user's preferences.
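
As a toy illustration of the FIFO behaviour described above, this C++ sketch simulates a print spooler (the file names are made up for the example):

#include <iostream>
#include <queue>
#include <string>

int main() {
    // The spool: a FIFO buffer where documents wait for the slow printer.
    std::queue<std::string> spool;

    // Programs hand documents to the spool and continue immediately.
    spool.push("report.pdf");
    spool.push("invoice.txt");
    spool.push("photo.png");

    // The printer drains the spool in first-in, first-out order.
    while (!spool.empty()) {
        std::cout << "Printing " << spool.front() << "\n";
        spool.pop();
    }
    return 0;
}
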
13. Define IPSec and state its components.
IP security (IPSec) is a set of protocols developed by the Internet Engineering Task Force (IETF) to provide data authentication, integrity, and confidentiality between two communication points over an IP network. It also specifies the encryption, decryption, and authentication of packets. It defines the protocols for secure key management and key exchange.

It is made up of the following components:

Encapsulating Security Payload (ESP): The Encapsulating Security Payload provides data integrity, encryption, authentication, and anti-replay protection. It also supports payload authentication.

Authentication Header (AH): This header also provides data integrity, authentication, and anti-replay protection, but not encryption. Anti-replay protection guards against the unauthorized retransmission of packets. AH does not guarantee data confidentiality.

Internet Key Exchange (IKE): IKE is a network security protocol that allows two devices to dynamically exchange encryption keys and communicate over a Security Association (SA). The Security Association establishes shared security attributes between two network entities to enable secure communication. IKE protects message content and provides an open framework for implementing standard algorithms such as SHA and MD5. IPSec assigns a unique identifier to each packet; using this identifier, a device can assess whether a packet is legitimate. Unauthorized packets are discarded and never reach the intended receiver.
14. State a few functionalities of Operating Systems.
A few functionalities of Operating Systems are as follows:

* Provides a user interface: Operating systems serve as a link between computer hardware and the people who use it. It allows the user to access the hardware in a methodical fashion.

* File Management: To make navigation and usage more effective, a file system is organized into directories. These directories may include additional directories and files. Among other things, the operating system keeps track of where data is stored, user access settings, and the condition of each file.

* Security: Password protection and other security features are used by the operating system to protect user data. It also guards against unauthorized access to programmes and user data.

* Maintains system performance: The operating system monitors the overall health of the system to help get the most out of it. It keeps track of the time between service requests and system responses to build a complete picture of the system's health, which also provides important information for debugging.

* Memory Management: The operating system is in charge of the primary memory, also known as main memory. The main memory is made up of a large number of bytes or words, each with its own address. Main memory is a type of fast storage that the CPU can directly access. Before a programme can be executed, it must first be loaded into the main memory. An operating system manages memory by performing the following tasks:

* It keeps track of primary memory, that is, which bytes of memory are used by which user programmes, which memory addresses have already been allocated, and which have not yet been used.

* In multiprogramming, the OS sets the order in which processes are allowed memory access and for how long. When a process requests memory, it is allocated, and memory is released when the process quits or performs an I/O activity.

* Error detection: The operating system continuously monitors the system in order to detect errors and keep the machine from failing.

* Device Management: An operating system uses drivers to handle device connectivity. It keeps track of all of the devices connected to the system. The Input/Output controller manages all of the devices: it determines which process gets access to a device and for how long, allocates devices in an effective and efficient manner, and deallocates a device when it is no longer required.

* Processor Management: The operating system determines the order in which processes access the processor and the amount of processing time each process has in a multiprogramming environment.
15. What is a classloader in Java? State its various types.
The Java Virtual Machine's Classloader subsystem is in charge of loading class files. When we execute a Java application, the classloader loads it first.

The following are Java's three built-in classloaders:

Bootstrap ClassLoader: This is the first, default classloader and the superclass (parent) of the Extension ClassLoader. It loads the rt.jar file, which contains all the class files of the Java Standard Edition, including java.lang, java.net, java.util, java.io, and java.sql.

Extension ClassLoader: This is the child of the Bootstrap ClassLoader and the parent of the System ClassLoader. It loads the jar files located in the $JAVA_HOME/jre/lib/ext directory.

System or Application ClassLoader: This is the child classloader of the Extension ClassLoader. It loads class files from the classpath, which is set to the current directory by default. The classpath can be changed using the "-cp" or "-classpath" switches.
16. What do you understand about Socket Programming? State the advantages and disadvantages of Sockets in Java.
Socket programming is a technique for allowing two network nodes to communicate. One socket (node) listens for connections on a specified port at a specific IP address, while the other socket reaches out to it: the server creates the listener socket, and the client connects to it.

Some advantages of Java Sockets are as follows:

* Sockets are flexible and sufficient: socket-based programming is straightforward to implement for ordinary communications.

* Sockets generate low network traffic. Unlike HTML forms and CGI scripts, which construct and send entire web pages for each new request, Java applets can send only the information that has changed.


Some disadvantages of Java Sockets are as follows:

* Security constraints can be burdensome at times because a Java applet running in a Web browser can only connect to the machine from which it came and nowhere else on the network.

* Socket based connections, despite all of Java's advantages, are limited to the delivery of raw data packets between programs. Both the client and the server must provide mechanisms for turning the data into something useful.

* Because data formats and protocols are application-specific, socket-based systems are limited in their reusability.
17. Define Storage Classes in C. State the various storage classes which are present in C.
Storage Classes are used to define the properties of a variable or function. Scope, visibility, and longevity are all qualities that allow us to track the presence of a variable during the execution of a programme. There are four storage classes in the C programming language:

Auto: This is the default storage class for all variables specified inside a function or a block. Auto variables can only be utilized within the block/function in which they were defined; they can't be used outside (which defines their scope). They can, however, be accessed outside of their scope by using pointers, which point to the memory address where the variables are kept. They are given a garbage value by default whenever they are declared.

Static: This storage class is used to declare static variables, which are commonly used in C programs. Static variables retain their value even after control leaves their scope: they are initialized only once and exist until the programme terminates. No fresh memory is allocated on re-entry because they are not declared again. Their scope is local to the function in which they are defined; global static variables, however, can be accessed from anywhere in the programme. By default, the compiler assigns them the value 0.

Register: This storage class declares register variables, which are functionally comparable to auto variables. The only difference is that, if a free register is available, the compiler will attempt to store these variables in a CPU register; if no free register is available, the value is kept in memory alone. The register keyword is used for a few variables that are accessed frequently in a programme, which can speed up its execution. An important fact to note is that a register variable's address cannot be accessed via pointers.

Extern: The storage class extern means that the variable is defined outside of the block in which it is referenced. In essence, the value is assigned to it in another block, and it may be overwritten or changed in yet another block. Any function or block can refer to a global variable by placing the 'extern' keyword before its declaration, which indicates that instead of creating a new variable, we are simply accessing and using the global one. The extern keyword can therefore be used to extend the visibility of variables and functions. Because functions are visible throughout the programme by default, using extern in function declarations or definitions is unnecessary. Declaring a variable with extern does not actually define it.
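
A small sketch (valid as both C and C++) contrasting the default automatic storage class with static, the pair most often asked about:

#include <stdio.h>

// The automatic local is re-created and re-initialized on every call,
// while the static local is initialized once and keeps its value.
void counter(void) {
    int autoCount = 0;          // automatic storage class (the default)
    static int staticCount = 0; // static storage class
    autoCount++;
    staticCount++;
    printf("auto = %d, static = %d\n", autoCount, staticCount);
}

int main(void) {
    counter(); // auto = 1, static = 1
    counter(); // auto = 1, static = 2
    counter(); // auto = 1, static = 3
    return 0;
}
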
18. What are your thoughts on Structured Programming?
Structured Programming is a programming paradigm in which the control flow is completely structured. A structure is a block that has a set of rules and has a defined control flow, such as (if/then/else), (while and for), block structures, and subroutines. Nearly all programming paradigms, including the Object-Oriented Programming model, require structured programming.
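
For illustration, here is a short C++ fragment built entirely from such structures: sequence, selection (if), iteration (for), and a subroutine with a single entry and exit point:

#include <iostream>

// A subroutine: one entry point, one exit point.
int sumOfEvens(int limit) {
    int sum = 0;
    for (int i = 0; i <= limit; i++) { // iteration
        if (i % 2 == 0) {              // selection
            sum += i;
        }
    }
    return sum;
}

int main() {
    std::cout << sumOfEvens(10) << "\n"; // prints: 30 (0+2+4+6+8+10)
    return 0;
}
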
19. Explain a copy constructor with the help of an example.
A copy constructor is a member function that initializes an object using another object of the same class. We can define our own copy constructor; if none is defined, the compiler-supplied default copy constructor is used.
// Including all the header files
#include<bits/stdc++.h>
// Class showing the usage of a copy constructor
class Fun{
public:
   long long a,b;
   // Parameterized constructor
   Fun(long long _a, long long _b){
       this -> a = _a;
       this -> b = _b;
   }
   // User-defined copy constructor: initializes the new object
   // from an existing object of the same class
   Fun(const Fun &other){
       this -> a = other.a;
       this -> b = other.b;
   }
};
// Main function of the C++ program
int main(){
   Fun obj1(5LL,7LL);
   Fun obj2 = obj1; // In this line, the copy constructor is called
   return 0;
}

The above-mentioned code snippet shows how the copy constructor is used to define the object "obj2" using the already existing object "obj1".
20. Define Macros in C/C++. Explain with an example.
Macros in C/C++ are preprocessor definitions whose names are replaced by their bodies before compilation. A macro is thus a named fragment of code within a programme: whenever the preprocessor encounters the name, it replaces it with the actual piece of code. The disadvantage of macros is that they are textual substitutions rather than function calls; their advantage is the time saved when the same values are substituted repeatedly.

In the sample code snippet given below, all instances of the names HELLOWORLD, EVENNUMBER, ODDNUMBER, and ADD will be replaced with whatever is in their bodies.
#include <bits/stdc++.h>
using namespace std;
// Macros are being defined below
#define HELLOWORLD "HELLO WORLD!"
#define EVENNUMBER 4
#define ODDNUMBER 3
#define ADD (4 + 3)
// Main function of the C++ Program
int main()
{
  cout << "String: " << HELLOWORLD << "\n";
  cout << "Even Number is: " << EVENNUMBER << "\n";
  cout << "Odd Number is: " << ODDNUMBER << "\n";
  cout << "The sum of the given even and odd numbers is: " << ADD << "\n";
  return 0;
}