JP Morgan Interview Preparation and Recruitment Process


About JP Morgan


J.P. Morgan is a leading global financial services firm and a core component of JPMorgan Chase & Co. (NYSE: JPM), one of the world's oldest, largest, and most prominent financial institutions. Headquartered in New York City, it operates in over 100 countries, serving millions of consumers, small businesses, corporations, governments, and institutional clients. The firm is renowned for its comprehensive financial services, including investment banking, commercial banking, asset management, private banking, and financial transaction processing.


Overview:

  • Headquarters: New York City, USA

  • Founded: Originally in 1799 as the Bank of the Manhattan Company; became JPMorgan Chase after a series of mergers, most notably in 2000.

  • CEO: Jamie Dimon (as of 2024)

  • Ticker Symbol: JPM (traded on the NYSE)

  • Type: Public company

  • Industry: Financial services


Key Areas of Operation


JPMorgan Chase, under which J.P. Morgan operates, is structured into several core business segments:

* Consumer & Community Banking (CCB): Offers retail banking services under the Chase brand, including credit cards, mortgages, auto loans, and small business banking. It serves millions of U.S. and U.K. customers through branches, ATMs, and digital platforms like the Chase mobile app.

* Corporate & Investment Bank (CIB): Provides investment banking services, including mergers and acquisitions (M&A), corporate advisory, equity and debt underwriting, trading, and market-making. J.P. Morgan is a global leader in these areas, working with over 80% of Fortune 500 companies.

* Commercial Banking (CB): Serves midsize businesses, municipalities, real estate investors, and nonprofits with lending, treasury, and investment banking solutions.

* Asset & Wealth Management (AWM): Manages investments for institutional and retail clients, including pensions, endowments, and high-net-worth individuals. J.P. Morgan is among the world’s largest asset managers by total assets.


History and Evolution

* Origins: J.P. Morgan & Co. was founded in 1871 by John Pierpont Morgan, building on the legacy of his father, Junius S. Morgan, who established J.S. Morgan & Co. in London. The firm’s roots trace back to 1799 with the founding of The Bank of the Manhattan Company, one of JPMorgan Chase’s predecessor institutions.

* Key Milestones:

* Late 19th Century: J.P. Morgan financed major U.S. industrial consolidations, including the formation of U.S. Steel (the world’s first billion-dollar corporation) and General Electric.

* 1895: Supplied the U.S. government with $62 million in gold to stabilize the Treasury during a financial crisis.

* 1907: Led efforts to avert a financial collapse during the Panic of 1907, solidifying its influence.

* 2000: Merged with Chase Manhattan Bank to form JPMorgan Chase & Co., combining J.P. Morgan’s investment banking expertise with Chase’s retail banking strengths.

* Post-2000: Expanded through acquisitions like Bank One (2004), Bear Stearns (2008), Washington Mutual (2008), and First Republic Bank (2023).

* India Presence: J.P. Morgan has operated in India since 1945, with significant milestones like a $25 million workforce development commitment in 2019 and the opening of a major campus in Hyderabad in 2021.


Financial Performance

* Market Position: As of April 2025, JPMorgan Chase is the largest bank in the U.S. by assets and one of the top global investment banks by revenue.

* Stock: Traded as JPM on the NYSE, it is a key component of indices such as the S&P 500. Recent sentiment on X suggests market sensitivity to JPMorgan's economic forecasts, with posts citing a 60%+ recession probability and potential $9.4 trillion in losses for Americans, though these claims are inconclusive.

* Technology Investment: Spends ~$15 billion annually on technology, including digital banking, cybersecurity, and blockchain initiatives like Quorum (launched 2016).


Leadership and Values

* Leadership: Led by Jamie Dimon, Chairman and CEO, whose annual letters emphasize long-term investment, client focus, and global economic insights.

* Core Principles: Integrity, client-first service, and excellence, rooted in the Morgan family’s 150-year legacy of “first-class business in a first-class way.”

* Philanthropy: Committed $2 billion in global philanthropic capital by 2025, focusing on workforce readiness, small business development, and community empowerment.



JP Morgan Recruitment Process


The JP Morgan recruitment process typically involves several structured stages designed to assess candidates' skills, knowledge, and fit for the role. Here's a detailed overview of the process:


Eligibility Criteria

  • A relevant graduate degree (e.g., B.Tech) with a minimum 7.0 CGPA.

  • At least 60% aggregate in 10th and 12th grades.

  • No backlogs at the time of application.

  • Relevant technical knowledge as per the job description.


How to Apply

  • Direct application through the official JP Morgan careers website.

  • Participation in on-campus or off-campus recruitment drives.

  • Participation in the annual "Code for Good" hackathon.

  • Employee referrals, which can expedite the process.



Recruitment Process Rounds

Round 1: Online Assessment

  • Comprises aptitude and numerical reasoning questions (25-30 MCQs).

  • Coding questions focusing on data structures and algorithms (easy to moderate difficulty).

  • Duration: 45-50 minutes.

  • Purpose: Assess logical reasoning, problem-solving, and time management.


Round 2: Technical Interview

  • Duration: 30-40 minutes, can be online or on-site.

  • Focus on technical knowledge, coding, and discussion of past projects.

  • Key topics: Computer Science fundamentals, operating systems, DSA, OOPs, DBMS, computer networks.

  • Usually one round, but occasionally a second technical interview may be conducted.


Round 3: Behavioral Interview

  • Evaluates past behaviors and experiences to predict future job performance.

  • Questions relate to how candidates handled situations relevant to the role.


Round 4: HR Interview

  • Duration: 25-30 minutes.

  • Assesses personality traits, cultural fit, and general background.

  • May include leadership and management questions for relevant roles.

  • Considered crucial despite being non-technical.


Resume and Interview Preparation Tips

  • Tailor your resume to the specific job profile, highlighting relevant achievements, projects, and leadership roles.

  • Be prepared to discuss past projects in detail.

  • Stay updated on company, industry, and general news.

  • Prepare thoughtful questions for interviewers to demonstrate interest.

  • Conduct mock interviews and technical dry runs for virtual interviews.


Additional Notes

  • JP Morgan may use tools like pymetrics games during the application process to assess candidates beyond resumes.

  • The company emphasizes reasonable accommodations for applicants with disabilities.

Once all rounds are cleared successfully, candidates receive an offer letter, which they can accept, reject, or negotiate.

This comprehensive process ensures that JP Morgan selects candidates who are technically proficient, culturally aligned, and motivated to contribute effectively.

JP Morgan Interview Questions

1. What is the difference between atomicity and aggregation?
Atomicity and aggregation are two different concepts in the context of databases and data management:

Atomicity: Atomicity is the property of a database transaction that ensures either all of its operations are executed or none of them are. If a transaction consists of multiple operations or steps, either all of them succeed and the transaction is committed, or none of them take effect and the transaction is rolled back, leaving the database in its original state. A transaction is thus treated as a single, indivisible unit of work, and the database remains in a consistent state even when failures or errors occur mid-transaction.
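To make this concrete, here is a minimal Java/JDBC sketch of a funds transfer, assuming a hypothetical accounts table with id and balance columns. Either both updates commit together, or the rollback undoes both:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TransferExample {
    // Moves `amount` between two accounts as one atomic transaction.
    public static void transfer(Connection conn, int fromId, int toId, double amount)
            throws SQLException {
        conn.setAutoCommit(false); // group both updates into a single transaction
        try (PreparedStatement debit = conn.prepareStatement(
                 "UPDATE accounts SET balance = balance - ? WHERE id = ?");
             PreparedStatement credit = conn.prepareStatement(
                 "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
            debit.setDouble(1, amount);
            debit.setInt(2, fromId);
            debit.executeUpdate();

            credit.setDouble(1, amount);
            credit.setInt(2, toId);
            credit.executeUpdate();

            conn.commit();   // both updates become permanent together
        } catch (SQLException e) {
            conn.rollback(); // neither update takes effect
            throw e;
        }
    }
}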

Aggregation: Aggregation refers to the process of combining or summarizing data from multiple rows or records in a database into a single value or result. It is typically used to perform calculations or summarizations on groups of data, such as computing the average, sum, or count of a particular field across multiple records. Aggregation functions such as COUNT, SUM, AVG, MAX, and MIN are commonly used in SQL and other database query languages for this purpose.
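For instance, a typical aggregation query (using a hypothetical employees table) groups rows and reduces each group to summary values:

SELECT department, COUNT(*) AS num_employees, AVG(salary) AS avg_salary
FROM employees
GROUP BY department;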
2. What is an entity-relationship model and how does it work?
An Entity-Relationship (ER) model is a conceptual data model used to represent the relationships between entities (or objects) in a database system. It is a graphical representation that helps in understanding and designing the structure and relationships of data in a database.

The main components of an ER model are:

* Entities: Entities are objects or things that exist and have attributes (or properties) that describe them. In a database context, entities represent the real-world objects or concepts that we want to store and manage data about, such as customers, employees, products, or orders.

* Relationships: Relationships represent the associations or connections between entities in the database. They define how entities are related to each other and can be one-to-one, one-to-many, or many-to-many relationships. Relationships are usually depicted as lines connecting entities in an ER diagram, with labels indicating the type of relationship, such as "is-a," "has," "owns," "works-for," etc.

* Attributes: Attributes are the properties or characteristics of entities that describe them. Attributes represent the specific data or information we want to store about an entity, such as the name, age, address, or phone number of a customer.
3. What are transactions when it comes to DBMS?
In the context of a Database Management System (DBMS), transactions refer to a unit of work or a sequence of operations that are executed on a database and are treated as a single, indivisible operation. Transactions are used to ensure the integrity, consistency, and reliability of data in a database, particularly in multi-user environments where multiple users may access and modify the data concurrently.

Transactions in DBMS typically have four properties, known as the ACID properties:

* Atomicity: Transactions are atomic, which means that either all of their operations are executed successfully or none of them are. If any part of a transaction fails, the entire transaction is rolled back, and all changes made by the transaction are undone, leaving the database in its original state.

* Consistency: Transactions ensure that the database starts in a consistent state and ends in a consistent state. This means that the database transitions from one valid state to another valid state after a transaction is executed, maintaining data integrity and any defined constraints.

* Isolation: Transactions are isolated from each other, meaning that the intermediate state of a transaction is not visible to other transactions until it is committed. This prevents interference and conflicts among concurrent transactions that may access the same data simultaneously.

* Durability: Once a transaction is committed, its changes are permanently saved in the database and cannot be rolled back, even in the case of system failures. This ensures that the changes made by a committed transaction persist in the database and are not lost.
4. What is a collection of entities?
A collection of entities refers to a group or set of individual objects or items that are considered as a whole. It could be a gathering of similar or dissimilar things, such as objects, data, or concepts, grouped together based on a common characteristic or purpose. Collections are used in many fields, including computer science, mathematics, statistics, biology, and the social sciences. Examples include a library of books, a database of customer records, a collection of biological specimens in a museum, a portfolio of investments, or a set of images in a photo album. In the entity-relationship model specifically, a collection of entities of the same type is called an entity set.
5. What is a unique key?
A unique key, also known as a unique constraint, is a database concept that specifies that the values in one or more columns of a table must be unique across all rows in that table. In other words, a unique key ensures that no duplicate values are allowed in the specified column(s) of a table.

A unique key provides a way to uniquely identify each row in a table, and it can be used as a means of enforcing data integrity and consistency in a database. When a unique key is defined on one or more columns of a table, the database management system (DBMS) automatically checks and enforces the uniqueness of values in those columns. Any attempt to insert or update a row with a value that violates the unique key constraint will result in an error, preventing duplicate data from being stored in the table.
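As an illustration, a unique constraint can be declared when a table is created; the table and columns below are hypothetical:

CREATE TABLE users (
    id    INT PRIMARY KEY,
    email VARCHAR(255) UNIQUE  -- inserting a duplicate email raises an error
);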
6. What does it mean to be in a deadlock?
In the context of computer science and database management, a deadlock is a situation in which two or more processes or transactions wait for each other to release resources, creating a circular dependency that prevents any of them from progressing.

Deadlocks can occur in various computing systems, including databases, operating systems, distributed systems, and concurrent programming environments. They typically arise when processes or transactions are competing for shared resources, such as database locks, file locks, or other system resources, and are not properly managed or coordinated.

Deadlocks can cause the affected processes or transactions to hang indefinitely, leading to a system or application becoming unresponsive or stuck. Resolving deadlocks typically involves detecting the deadlock condition and then taking appropriate actions to break the deadlock, such as releasing resources, rolling back transactions, or applying deadlock avoidance or deadlock detection algorithms.

To prevent deadlocks, proper concurrency control mechanisms, such as locking protocols, can be implemented to manage shared resources effectively, ensuring that processes or transactions do not enter into a cyclic dependency that can lead to a deadlock. Careful design and implementation of systems and applications, including proper resource allocation and synchronization techniques, can help prevent deadlocks and ensure the smooth and efficient execution of concurrent processes or transactions.
7. What is meant by static SQL?
"Static SQL" refers to SQL (Structured Query Language) statements that are embedded directly in application programs or scripts, and are compiled or parsed by the application or database system during the compilation or execution of the program, rather than being generated dynamically at runtime.

In other words, static SQL refers to SQL statements that are hard-coded in the application code or script, and their text is known and fixed at compile-time or load-time, without changing during the runtime of the application. These SQL statements are typically written in the source code of an application or script and are compiled or parsed along with the rest of the application code.

Static SQL is often used in traditional database application development, where SQL statements are embedded directly in the source code of the application program or script, typically alongside the application's own language (e.g., Java, C++, C#, etc.). Because the statement text is fixed, it can be parsed, validated, and optimized ahead of time (often by a precompiler), and is passed to the database system for execution during the runtime of the application.
8. What are indexes and their role in databases?
Indexes in databases are data structures that provide a quick and efficient way to look up and retrieve data based on specific columns or fields in a table. Indexes serve as a reference or a pointer to the actual data stored in a database table, allowing for faster retrieval of data when querying the database.

The main role of indexes in databases is to improve query performance by reducing the amount of data that needs to be scanned or searched when executing a query. Indexes can significantly speed up query execution times, especially for large tables or complex queries, by providing a way to quickly locate relevant data based on the indexed columns. Indexes can be created on one or more columns of a table, and they are typically implemented using various data structures such as B-trees, hash indexes, or bitmap indexes, depending on the database management system (DBMS) being used.
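For example, an index on a frequently filtered column (hypothetical orders table) lets the DBMS locate matching rows without scanning the whole table:

CREATE INDEX idx_orders_customer ON orders (customer_id);

-- Queries filtering on customer_id can now use the index:
SELECT * FROM orders WHERE customer_id = 42;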
9. How is it that a database is not the same as a file processing system?
A database and a file processing system are not the same due to several key differences:

* Data structure and organization: In a file processing system, data is typically stored in individual files that are managed separately by different applications. Each file may have its own format, structure, and organization, and there may be redundancy and inconsistency in data storage. On the other hand, in a database, data is organized into tables with predefined schemas, where each table contains rows (records) and columns (fields) that define the structure and relationships of the data in a systematic and organized manner.

* Data integration and consistency: In a file processing system, data integration and consistency may be challenging, as data may be duplicated or inconsistent across multiple files or applications. In contrast, a database system provides mechanisms for data integration and consistency, such as data normalization, referential integrity, and transaction management, which help maintain the consistency, accuracy, and integrity of the data across the database.

* Data sharing and concurrency: In a file processing system, sharing and concurrent access to data by multiple users or applications may be complex and prone to conflicts, as files may be locked or accessed in an ad-hoc manner. In a database system, concurrent access to data is typically managed through database management systems (DBMS) that provide mechanisms such as locking, transactions, and isolation levels to ensure that multiple users can access and modify data concurrently without conflicts.

* Scalability and performance: Database systems are designed to handle large amounts of data and concurrent users efficiently and provide performance optimizations, such as indexing, caching, and query optimization, to improve query execution times. In contrast, file processing systems may lack such optimizations and may not be as scalable or performant when dealing with large or complex datasets.

* Data integrity and security: Database systems provide built-in mechanisms for ensuring data integrity, such as data validation, referential integrity, and access controls, to protect data from unauthorized access, modification, or corruption. File processing systems may lack these built-in mechanisms, making data integrity and security more challenging to manage.
10. What is the function of the DROP command?
The DROP command is used in computer programming and database management to remove or delete a database object, such as a table, view, index, or schema, from a database system. The DROP command is typically used to permanently delete database objects that are no longer needed or that need to be removed from the database system for various reasons, such as to free up storage space, clean up unnecessary data, or perform maintenance tasks.

In the context of SQL (Structured Query Language), which is a widely used language for managing relational databases, the DROP command is used to delete database objects. For example, the syntax for dropping a table in SQL would typically be:
DROP TABLE table_name;

This command would delete the specified table and permanently remove all data and metadata associated with it from the database.
11. What are checkpoints?
In the context of Database Management Systems (DBMS), a "checkpoint" refers to a mechanism used to periodically save the current state of a database system to stable storage, such as a disk, in order to provide a reliable point of recovery in case of system failures or crashes. Checkpoints are used to ensure the durability and consistency of data in a database.

Checkpoints typically involve writing the current state of the database, including any changes made by active transactions, to a persistent storage location. This creates a stable, consistent point that can be used as a reference during recovery in case of failures. Once the checkpoint is completed, the system can continue processing transactions from the updated state of the database.
12. What is meant by multi-threading?
Multithreading refers to a computing concept where a single process or program is divided into multiple threads of execution that can be executed concurrently by the operating system or a computing system with multiple processors or cores. Each thread represents a separate sequence of instructions that can be scheduled and executed independently, allowing for concurrent execution of multiple threads within a single process.

Threads are smaller units of a program that share the same memory space and system resources, such as CPU time, file handles, and network connections, with the parent process. Threads within a process can communicate with each other more easily and quickly compared to separate processes running in isolation, as they share the same memory space. This makes multithreading a popular approach for achieving parallelism and improving the performance and responsiveness of concurrent software applications.

Multithreading can be used in various types of applications, including desktop applications, server applications, embedded systems, and high-performance computing. Common use cases for multithreading include performing multiple tasks concurrently, handling concurrent user requests, improving performance in resource-intensive applications, and achieving responsiveness in user interfaces.
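A minimal Java sketch of multithreading: two threads created from the same task run concurrently within one process and share its memory:

public class MultithreadingDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> System.out.println(
                Thread.currentThread().getName() + " is running");

        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start();  // both threads now run concurrently within one process,
        t2.start();  // sharing the same heap and other process resources
        t1.join();   // wait for both to finish
        t2.join();
    }
}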
13. What is the distinction between the main key and the foreign key?
The main key, also known as the primary key, and the foreign key are two important concepts in relational Database Management Systems (DBMS) that are used to establish relationships between tables in a database. They have distinct roles and characteristics:

Primary Key:

* A primary key is a unique identifier that is used to uniquely identify each row or record in a table in a relational database.
* It must have a unique value for each row in the table, meaning no two rows in the same table can have the same primary key value.
* It must be not null, meaning it must have a value for every row in the table.
* It is used to uniquely identify each record in the table and ensure data integrity and consistency.
* A table can have only one primary key, although it can be a composite key composed of multiple attributes.

Foreign Key:

* A foreign key is a column or set of columns in a table that refers to the primary key of another table in a relational database.
* It establishes a relationship between two tables, where the table containing the foreign key is called the referencing table or child table, and the table referred to by the foreign key is called the referenced table or parent table.
* It is used to establish relationships between tables and enforce referential integrity, which ensures that data in the referencing table corresponds to the data in the referenced table.
* It does not have to be unique and can have duplicate values in the referencing table.
* It allows for navigation between related tables and enables the creation of relationships, such as one-to-many or many-to-many, between tables in a database.
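A short SQL sketch of the relationship, using hypothetical customers and orders tables:

CREATE TABLE customers (
    customer_id INT PRIMARY KEY  -- uniquely identifies each customer
);

CREATE TABLE orders (
    order_id    INT PRIMARY KEY,
    customer_id INT,
    FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
    -- many orders may reference the same customer (one-to-many)
);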
14. What is memory management in Java?
Memory management in the context of Java refers to the management of computer memory resources by the Java Virtual Machine (JVM), which is the component of the Java runtime environment (JRE) responsible for executing Java programs. Java uses an automatic memory management system known as garbage collection to automatically allocate and deallocate memory resources used by Java programs.

In Java, objects are created dynamically at runtime and stored in the heap memory, which is a region of memory used for storing objects and arrays. The JVM automatically allocates memory for objects when they are created using the "new" keyword, and deallocates memory for objects that are no longer reachable, i.e., objects that are no longer referenced by any live variables or reachable object graph. This automatic memory management system frees developers from explicitly allocating and deallocating memory, reducing the risk of memory-related bugs such as memory leaks and dangling pointers.
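A small sketch of the idea: once no live reference reaches an object, it becomes eligible for garbage collection. Note that System.gc() is only a hint to the JVM, not a command:

public class GcDemo {
    public static void main(String[] args) {
        byte[] data = new byte[1024 * 1024]; // 1 MB allocated on the heap
        data = null;                         // no live reference remains
        System.gc();                         // request (not force) a collection
    }
}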
15. What is the meaning of the term merge?
In the context of data, "merge" typically refers to the process of combining data from two or more sources into a unified dataset. Data merging is a common data integration technique used in data management and data analysis to combine data from different sources and create a consolidated and integrated view of the data.

Data merging may involve combining data from multiple databases, spreadsheets, files, or other data sources. This can be done using various techniques, such as matching records based on common fields or keys, aggregating data, resolving conflicts or inconsistencies in the data, and creating a merged dataset that retains relevant information from all the sources.

Data merging is commonly used in data integration scenarios, such as data warehousing, data consolidation, and data analysis, where data from multiple sources needs to be combined to create a single, unified dataset for further processing or analysis. This process may also involve data cleansing, data transformation, and data enrichment to ensure data quality and consistency in the merged dataset.
16. What is inheritance in Java?
In Java, inheritance is a concept related to Object-Oriented Programming (OOP) that allows one class to inherit properties and methods from another class. Inheritance enables code reuse and promotes code organization and modularity.

In Java, a class that inherits properties and methods from another class is called a "subclass" or "derived class", and the class from which properties and methods are inherited is called the "superclass" or "base class". The subclass inherits the public and protected properties and methods of the superclass, and can override or extend them as needed.

Inheritance in Java allows for the creation of a class hierarchy, where classes can be organized in a hierarchical manner based on their relationships. The superclass can provide common properties and methods that are inherited by its subclasses, and subclasses can provide specialized implementations or additional functionality.
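A minimal Java example of inheritance, with a subclass overriding a superclass method (the class names are illustrative):

class Animal {
    void speak() { System.out.println("Some sound"); }
}

class Dog extends Animal {                       // Dog inherits from Animal
    @Override
    void speak() { System.out.println("Woof"); } // specialized implementation
}

public class InheritanceDemo {
    public static void main(String[] args) {
        Animal a = new Dog();  // subclass object through superclass reference
        a.speak();             // prints "Woof" (dynamic dispatch)
    }
}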
17. Who is responsible for handling data looping?
In computer programming, data looping is typically handled by the software developer who writes the code. The responsibility for handling data looping lies with the programmer, who designs and implements the logic for iterating over data elements in a loop.

Data looping is a fundamental concept in programming and is used to iterate over collections of data, such as arrays, lists, or other data structures. The programmer is responsible for designing the loop structure, specifying the loop condition, and defining the loop body, which contains the code to be executed for each iteration of the loop.

The loop structure typically includes the loop initialization, loop condition, and loop update or increment. The programmer defines the loop condition, which determines when the loop should continue iterating, and the loop body, which contains the code to be executed for each iteration of the loop. The loop update or increment is used to modify the loop control variable, which determines the progression of the loop.

Here's an example of a simple loop in Java:

for (int i = 0; i < 10; i++) {
    System.out.println("Iteration: " + i);
}
18. Differentiate between String and StringBuffer?
* String is an immutable class; StringBuffer is a mutable class.
* String operations are slower and consume more memory space, since every modification creates a new object; StringBuffer is faster and takes less memory space.
* The String class uses the string pool area; StringBuffer uses heap memory.
* The String class overrides the equals() method of Object, so two strings can easily be compared by content; StringBuffer does not override equals().
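The difference is easy to see in a short Java snippet: methods on String return new objects, while StringBuffer modifies itself in place:

String s = "Hello";
s.concat(" World");             // returns a new String; s itself is unchanged
System.out.println(s);          // prints: Hello

StringBuffer sb = new StringBuffer("Hello");
sb.append(" World");            // modifies the buffer in place
System.out.println(sb);         // prints: Hello World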
19. What is a singleton class?
A singleton class is a class that can have only one object (instance) at a time. If you try to create another object of a singleton class, the new reference simply points to the object that was created first. As a result, whatever changes you make to fields of the class through any reference affect the single shared instance.
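A common way to write a singleton in Java, as a sketch (lazy initialization with a synchronized accessor; other variants such as eager initialization or enums also work):

public class Singleton {
    private static Singleton instance;  // the single shared instance

    private Singleton() { }             // private constructor blocks outside `new`

    public static synchronized Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton(); // created only on first use
        }
        return instance;
    }
}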
20. What is an object-oriented model?
An object-oriented model is a way of applying object-oriented concepts to all stages of the software development cycle. In an object-oriented model, we think about problems using models organized around real-world concepts.

The main objectives of the object-oriented model are the following:

* Testing an entity before actually building it.

* Coordination with the customers.

* Visualization.

* Reducing complexity that leads to scalable products.
21. Differentiate between thread and process?
* A thread is a segment of a process; a process is a program in execution.
* A thread generally takes less time to complete; a process takes longer to complete.
* Context switching between threads takes less time; context switching between processes takes more time.
* Threads share memory; processes are isolated from one another.
* Less time is required to create a thread; more time is required to create a process.
22. Differentiate between multitasking and multithreading?
* In multitasking, the CPU performs more than one task at a time; in multithreading, a process is divided into multiple threads and each is allowed to run concurrently.
* In multitasking, processes do not share resources; in multithreading, different threads share the same resources.
* Termination of a process takes more time; termination of a thread takes less time.
* Multitasking helps in the development of efficient programs; multithreading helps in the development of an efficient operating system.
23. How is a method different from a constructor?
* A method is used to express the behavior of an object; a constructor is used to initialize an object.
* Methods are invoked explicitly; constructors are invoked implicitly when an object is created.
* A method must declare a return type; a constructor has no return type.
* If the user does not specify a method, no default method is provided; if the user does not specify a constructor, the compiler provides a default constructor.
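A small illustrative class showing both: the constructor runs implicitly when new Account(...) is called, while getBalance() is invoked explicitly and declares a return type:

public class Account {
    private double balance;

    public Account(double openingBalance) { // constructor: no return type,
        this.balance = openingBalance;      // runs via `new Account(100.0)`
    }

    public double getBalance() {            // method: explicit call, returns a value
        return balance;
    }
}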
24. What is a deadlock, and what are the necessary conditions for deadlock?
Deadlock is a situation in which two or more processes each wait for the others to complete, but none of them ever can, because each is waiting for a resource held by another.

Let us consider a scenario with three resources, Resource1, Resource2, and Resource3, and three processes, Process1, Process2, and Process3. Resource1 is allocated to Process1, Resource2 to Process2, and Resource3 to Process3. After some time, Process1 requests Resource2, which is held by Process2; Process1 halts its execution because it cannot continue without Resource2. Process2 in turn requests Resource3, which is held by Process3, and halts as well. Finally, Process3 requests Resource1, which is held by Process1, so Process3 also halts. Each process now waits for a resource held by the next, and none can proceed.

The four necessary conditions for deadlock are listed below:

* Mutual Exclusion: A resource can be used only in a mutually exclusive manner; two or more processes cannot share the same resource at the same time.

* Hold and Wait: A process waits for a resource that is being held by another process while itself holding at least one resource.

* No preemption: A resource cannot be forcibly taken away from a process; it is released only when the holding process has finished with it.

* Circular wait: A logical extension of hold and wait; the processes form a cycle in which each process waits for a resource held by the next process in the cycle.

For example, if P[i] is a process and there are N processes in total, then process P[i] waits for a resource held by process P[(i + 1) % N].
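The circular-wait condition can be reproduced in a few lines of Java: each thread holds one lock and waits for the other's, so neither can proceed. This is a sketch; the sleep merely makes the problematic interleaving reliable:

public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {       // thread 1 holds lockA...
                pause();
                synchronized (lockB) { } // ...and waits for lockB
            }
        }).start();

        new Thread(() -> {
            synchronized (lockB) {       // thread 2 holds lockB...
                pause();
                synchronized (lockA) { } // ...and waits for lockA: circular wait
            }
        }).start();
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    }
}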
25. What does normalization mean?
Normalization is the process of organizing and structuring data in a database to eliminate redundancy and improve data integrity and consistency. It involves applying a set of rules or guidelines to design and structure the database tables in a way that minimizes data redundancy and ensures that each piece of data is stored in only one place.

The goal of normalization is to prevent anomalies and inconsistencies in the database that can arise from redundant data storage or data duplication. Normalization helps maintain data integrity and consistency by reducing redundancy and ensuring that data is stored in a well-structured and organized manner.

Normalization is typically done according to a set of normalization rules or normal forms, which are guidelines that specify the requirements for organizing data in a relational database. The most commonly used normal forms are:

* First Normal Form (1NF): Ensures that each column in a table contains atomic (indivisible) values, and there are no repeating groups or arrays.

* Second Normal Form (2NF): Builds on 1NF and requires that each non-primary key column is fully dependent on the primary key, eliminating partial dependencies.

* Third Normal Form (3NF): Builds on 2NF and requires that each non-primary key column is independent of other non-primary key columns, eliminating transitive dependencies.

There are higher normal forms such as Boyce-Codd Normal Form (BCNF) and Fourth Normal Form (4NF) that further eliminate redundancies, but they are less commonly used.

By normalizing the data, redundant data is eliminated, and the database becomes more efficient in terms of storage space, data retrieval, and data modification operations. Normalization helps maintain data integrity, consistency, and accuracy, which are critical aspects of database design and data management.
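As a small sketch (with hypothetical tables), normalization replaces customer details repeated on every order row with a single customers table referenced by key:

-- Unnormalized: customer details repeated on every order row
-- orders(order_id, customer_name, customer_phone, product)

-- Normalized: customer data stored once and referenced by key
CREATE TABLE customers (
    customer_id INT PRIMARY KEY,
    name        VARCHAR(100),
    phone       VARCHAR(20)
);

CREATE TABLE orders (
    order_id    INT PRIMARY KEY,
    customer_id INT REFERENCES customers (customer_id),
    product     VARCHAR(100)
);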
26. What are the notions of OOP?
The notions of Object-Oriented Programming (OOP) are the fundamental concepts and principles that form the basis of the OOP paradigm. OOP is a programming paradigm that uses objects, which are instances of classes, to represent and manipulate data and behavior in a software program. The main notions of OOP include:

* Classes and Objects: Classes are blueprints or templates that define the structure and behavior of objects, while objects are instances of classes that represent individual entities with their own state (data) and behavior (methods).

* Encapsulation: Encapsulation is the process of hiding the internal details and implementation of objects and exposing only the necessary information through well-defined interfaces. It helps in achieving data abstraction and information hiding.

* Inheritance: Inheritance is a mechanism that allows a class to inherit properties and behavior from another class, called the superclass or base class. It enables code reuse and promotes code organization and modularity.

* Polymorphism: Polymorphism allows objects of different classes to be treated as if they were of the same class, providing a common interface for interacting with objects of different types. It enables code flexibility, extensibility, and reusability.

* Abstraction: Abstraction is the process of simplifying complex systems by breaking them down into smaller, more manageable parts. It involves defining abstract classes, interfaces, and methods that provide common behavior and characteristics to a group of related classes.

* Message Passing: In OOP, objects communicate with each other by sending messages, which are requests for invoking methods on objects. Message passing is a way of achieving communication and interaction among objects in an OOP program.

* Polymorphic Relationships: Polymorphic relationships allow objects to be associated with one another through a common interface or abstract class, rather than through concrete classes. This promotes flexibility and extensibility in the design of software systems.

* Overloading and Overriding: Overloading is the ability to define multiple methods in a class with the same name but different parameter lists, while overriding is the ability of a subclass to provide a new implementation for a method that is already defined in its superclass.
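The last notion is easy to demonstrate in Java: print is overloaded within Printer (same name, different parameter lists) and overridden in FancyPrinter (same signature, new implementation); the class names are illustrative:

class Printer {
    void print(int x)    { System.out.println("int: " + x); }  // overloaded
    void print(String s) { System.out.println("text: " + s); } // overloaded
}

class FancyPrinter extends Printer {
    @Override
    void print(String s) { System.out.println("*** " + s + " ***"); } // overridden
}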
27. How does the process of exchanging data work?
The process of exchanging data typically involves transferring data between two or more entities, such as software applications, systems, devices, or users. The exchange of data can occur through various methods, protocols, and formats depending on the context and requirements of the data exchange.

Here is a general overview of the process of exchanging data:

Data Generation: The data to be exchanged is generated or created by the source entity. This can include user input, sensor readings, data processing results, or any other form of data that needs to be exchanged.

Data Representation: The data is represented in a format that can be understood by both the source and destination entities. This can involve converting data into a common data format, such as XML, JSON, CSV, or binary formats, that is agreed upon by both parties.

Data Transmission: The data is transmitted from the source entity to the destination entity over a communication channel or network. This can be done using various communication protocols, such as HTTP, FTP, TCP/IP, or custom protocols, depending on the nature of the data exchange and the communication medium being used.

Data Reception: The destination entity receives the transmitted data and decodes it to understand the original data format. This may involve parsing the data, decrypting it if necessary, and converting it into a format that can be processed by the destination entity.

Data Processing: The destination entity processes the received data according to its intended purpose. This can involve storing the data in a database, performing calculations, updating system states, or triggering actions based on the received data.

Data Acknowledgment: The destination entity may send an acknowledgment or response back to the source entity to confirm the successful receipt and processing of the data. This can be done using acknowledgment messages, response codes, or other means of communication to ensure data integrity and reliability.

Error Handling: If any errors or exceptions occur during the data exchange process, error handling mechanisms may be implemented to handle exceptions, retries, or notifications to ensure data integrity and reliability.

Security Considerations: Data exchange may involve sensitive or private information, and therefore security measures such as encryption, authentication, and authorization may be implemented to protect the data from unauthorized access or tampering.

More formally, data exchange is the act of taking data formatted according to a source schema and transforming it into data structured according to a target schema, in such a way that the target dataset is an accurate representation of the source data. The ability to transmit data between applications on a computer is what makes data sharing possible.
28. Describe an architecture with two levels.
An architecture with two levels typically refers to a design or structure with two distinct layers or tiers. Each tier has a specific purpose and functionality, and they interact with each other to achieve a particular goal. Here are some examples of architectures with two levels:

* Client-Server Architecture: This is a common architecture where clients, typically user interfaces or applications running on end-user devices, interact with servers, which are responsible for processing requests and providing services. The client layer handles user interactions and user interface rendering, while the server layer handles business logic, data processing, and storage. Communication between the client and server occurs over a network, often using standard protocols such as HTTP, TCP/IP, or other custom protocols.

* Presentation-Logic Architecture: This architecture involves separating the presentation (UI/UX) layer from the business logic layer. The presentation layer handles the user interface, user experience, and user interaction, while the logic layer handles the business rules, processing, and data manipulation. This separation allows for independent development and maintenance of the UI and business logic, making it easier to update or modify each layer without affecting the other.

* Database Application Architecture: In this architecture, there are two main layers: the database layer and the application layer. The database layer is responsible for managing data storage, retrieval, and manipulation, while the application layer handles the business logic, data processing, and user interface. The application layer communicates with the database layer to perform CRUD (Create, Read, Update, Delete) operations on the data.

* Two-Tier Web Application Architecture: This architecture involves a client layer (usually a web browser) that communicates directly with a server layer (which includes a web server and a database server). The client layer handles the user interface and user experience, while the server layer handles the business logic, data processing, and data storage. The client sends requests to the server, which processes the requests and returns responses to the client.

* Message-Queue Architecture: In this architecture, there are typically two main layers: the sender layer and the receiver layer. The sender layer is responsible for generating and sending messages, while the receiver layer is responsible for processing and consuming those messages. The sender and receiver layers communicate through a message queue, which acts as an intermediary for exchanging messages between the two layers.
29. Within the realm of DBMS, what is meant by the term 'Correlated Subquery'?
In the context of database management systems (DBMS), a correlated subquery refers to a type of SQL (Structured Query Language) subquery that is evaluated for each row of the outer query. It is called "correlated" because the subquery is dependent on the values of the outer query, and the values from the outer query are used as parameters or references in the subquery.

A correlated subquery is typically enclosed in parentheses and appears within a larger SQL query. It can reference columns from tables in the outer query, and the subquery's results are used in the evaluation of the outer query. The subquery is executed repeatedly, once for each row in the outer query, and the results of the subquery are used in the evaluation of the outer query's condition or expression.

Correlated subqueries are used to perform complex queries that involve data from multiple tables or require calculations or comparisons with data from the outer query. They can be used in various SQL clauses such as SELECT, WHERE, FROM, and HAVING to filter or retrieve data based on related data in other tables or based on conditions that depend on values from the outer query.
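A classic example (hypothetical employees table): find employees earning more than the average salary of their own department. The inner query references e.department_id from the outer row, so it is re-evaluated for each row of the outer query:

SELECT e.name, e.salary
FROM employees e
WHERE e.salary > (
    SELECT AVG(salary)
    FROM employees
    WHERE department_id = e.department_id  -- references the outer query's row
);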
30. What is the difference between the main key and the unique constraints?
In the context of databases, both primary keys and unique constraints are used to ensure data integrity and enforce uniqueness of values. However, there are some key differences between the two:

Definition: A primary key is a special type of unique constraint that uniquely identifies each row in a table. It is used to uniquely identify a specific row in a table and must have a unique value for each row. A table can have only one primary key. On the other hand, a unique constraint is used to ensure that a column or a combination of columns in a table contains unique values. A table can have multiple unique constraints.

Null values: Primary keys cannot contain null values, meaning that every row in a table must have a value for the primary key column. Unique constraints, on the other hand, can allow null values, meaning that multiple rows in a table can have null values for the columns with unique constraints.

Relationship with foreign keys: Primary keys are often used as references in other tables, creating relationships between tables in a database. Foreign keys in other tables refer to the primary key of a table, establishing relationships and enforcing referential integrity. Unique constraints can also be used as references in other tables as foreign keys, but they are not as commonly used for this purpose.

Number of columns: A primary key is typically defined on a single column in a table, although it can also be defined on multiple columns as a composite key. Unique constraints, on the other hand, can be defined on a single column or on multiple columns as well, providing flexibility in ensuring uniqueness based on different combinations of columns.

Modification: Primary keys are typically immutable and should not be modified once they are assigned to a row. Changing the value of a primary key is generally not recommended as it can lead to data integrity issues and can also affect relationships with foreign keys in other tables. Unique constraints, on the other hand, can be modified as long as the new value is unique, allowing for more flexibility in data modification.
31. How is Structured Query Language (SQL) designed?
Structured Query Language (SQL) is a domain-specific programming language designed for managing relational databases. SQL is based on the relational model, which was introduced by Dr. Edgar F. Codd in the 1970s, and it provides a standardized way to communicate with relational databases to store, retrieve, update, and manage data.

SQL is designed with the following key features:

* Declarative language: SQL is a declarative language, which means that users specify what they want to do with the data, rather than how to do it. Users define the desired outcome, and the database management system (DBMS) takes care of how to execute the query or operation.

* Set-based operations: SQL is optimized for working with sets of data, rather than individual rows or records. SQL provides powerful set-based operations, such as SELECT, INSERT, UPDATE, DELETE, and others, which allow users to manipulate data in batches, making it efficient for working with large datasets.

* Data definition and data manipulation: SQL provides both data definition language (DDL) and data manipulation language (DML) capabilities. DDL allows users to define and manage the structure of a database, including creating tables, defining constraints, and managing indexes. DML allows users to query, insert, update, and delete data in the database.

* Schema and data integrity: SQL allows users to define a database schema, which is a logical structure that defines the relationships between tables and the constraints that must be satisfied by the data. SQL provides mechanisms for enforcing data integrity, such as primary keys, foreign keys, unique constraints, check constraints, and triggers.

* Transaction management: SQL supports transaction management, allowing users to perform multiple operations as part of a single transaction that can be committed or rolled back as a unit. This ensures data consistency and integrity in multi-user environments.

* Client-Server architecture: SQL is designed to work in a client-server architecture, where a client application communicates with a server-based database management system (DBMS) to perform operations on the database. SQL provides mechanisms for connecting to databases, managing connections, and executing queries and operations from client applications.

* Extensibility and standardization: SQL provides a rich set of standard features that are supported by most relational database management systems (RDBMS). SQL also allows for extensibility through vendor-specific extensions, stored procedures, functions, and triggers, which provide additional functionality beyond the standard SQL features.
32. Why is it that Java's String object cannot be changed?
In Java, the String object is immutable, which means that its value cannot be changed after it is created. Once a String object is created, its state remains constant, and any operation that appears to modify a String actually creates a new String object with the desired value. There are several reasons why String objects are designed to be immutable in Java:

String Pool: Java maintains a special area in the heap memory called the "string pool" where it stores string literal values to conserve memory. Since strings are commonly used in Java programs, making String objects immutable allows them to be safely stored in the string pool and shared by multiple references, without the need for redundant copies. This helps to reduce memory usage and improve performance.

Security: Strings in Java are often used to store sensitive information such as passwords or encryption keys. Making String objects immutable ensures that once a string is created with sensitive information, its value cannot be changed inadvertently or maliciously by other parts of the code. This helps to improve the security of the sensitive data.

Thread-safety: Immutable objects, including String objects, are inherently thread-safe. Since their state cannot be changed after creation, they can be safely shared among multiple threads without the need for explicit synchronization. This simplifies concurrent programming and helps to prevent potential thread-safety issues.

Predictable behavior: Immutable objects, including String objects, have predictable behavior because their state does not change. This makes them easier to reason about and avoids potential bugs that can arise from unexpected changes in object state.

Performance optimizations: Immutability enables performance optimizations in Java compilers and runtime environments. For example, Java compilers can concatenate string literals at compile time, and JVMs can optimize string concatenation operations, knowing that String objects are immutable.
33. What is the key distinction between reading from files and reading from buffers?
The key distinction between reading from files and reading from buffers is the source of the data (external storage vs. in-memory), the data transfer mechanism (I/O operations vs. direct memory access), buffering, and the flexibility in handling different data sources and formats. Reading from files involves reading data from external storage devices, while reading from buffers involves reading data that is already in memory, which typically results in faster access and processing.
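A short Java sketch of the distinction: FileReader alone pulls characters from the file through I/O calls, while wrapping it in a BufferedReader reads a large chunk into memory once and serves subsequent reads from that buffer (the file name is hypothetical):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ReadDemo {
    public static void main(String[] args) throws IOException {
        // BufferedReader fills an in-memory buffer from the file, so most
        // readLine() calls are served from memory rather than from disk I/O.
        try (BufferedReader reader = new BufferedReader(new FileReader("data.txt"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}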

Frequently Asked Questions


1. Do interns get paid at JP Morgan?
Yes, JP Morgan pays its interns well. The average stipend is around Rs. 6,48,053 per year.


2. What are the eligibility criteria at JP Morgan?
The eligibility criteria for software developer roles are given below:

* BS/BA degree or equivalent experience.
* Proficiency in one or more modern programming languages.
* Advanced knowledge of application, data, and infrastructure architecture disciplines.
* Understanding of software skills such as business analysis, development, maintenance, and software improvement.
* Understanding of architecture and design across all systems.
* Working proficiency in developmental toolsets.
* Knowledge of industry-wide technology trends and best practices.

3. What is your biggest failure in your life and how did you handle it?
This is one of the most frequently asked questions in an HR interview. Such questions are asked to assess the honesty and attitude of the candidate. Be careful about the incident you choose to share; avoid mistakes that resulted in huge losses. After describing the incident, also share the lesson you learned from it.

For example,

“I was managing a project, and senior management wanted it completed within two weeks. Since I was quite excited about the project, I decided to take it up, but it ended up taking more than two weeks.

After this incident, I always analyze a project first, and if I think I need more time to complete it, I simply ask for that time up front.”

4. What is a HireVue interview?
After clearing the online assessment round, candidates are required to go through a HireVue interview. HireVue is video-interviewing software that assesses candidates on traits such as body language, eye movement, and more. If you are applying for a software engineer role, you should prepare JP Morgan interview questions to clear this round.

5. How much time does JP Morgan take to declare the result?
JP Morgan takes around 3 weeks to declare the final result.

6. How long is the JP Morgan interview process?
It is a fairly long process. Generally it consists of three rounds, but the overall process can take up to two months.

7. Is it hard to get into JP Morgan?
With a proper preparation strategy, anyone can make it into J.P. Morgan. To crack the interviews, you must prepare the following topics thoroughly:

To ace the online assessment and technical interview rounds:

Data Structures and Algorithms:

* Linked lists
* Recursion
* Dynamic Programming
* Sorting algorithms.

DBMS:


* This is the most asked topic after data structures and algorithms.
* Generally, keys and normalization concepts are asked from this subject.

Operating System:

* You must have a good knowledge of how the CPU schedules different tasks in a system.
* Generally, questions from deadlock are asked in the interview.

8. Why do you want to join JP Morgan?

While answering such questions, mention aspects of the company's work culture that inspire you to join, and tell them about your goal of learning new things while working at the company.

For example,

You can say, “I am a determined person and want to work for an organization where I get the opportunity to work on challenging problems. JP Morgan has a set of principles that I find quite impressive, and they will help me improve and grow at the same time.”