Manual Testing Interview Questions
Manual testing is the process of checking software for defects by hand: test cases are executed manually, without any automation tool, from the end user's perspective. It verifies whether the application works as specified in the requirement document. Test cases are planned and executed to cover as close to 100 percent of the software application as possible.
A test plan documents all planned testing activities needed to ensure a quality product. It gathers data from the product description, requirement, and use case documents.
 
The test plan document includes the following:
 
* Testing objectives
* Test scope
* Testing schedule
* Environment
* Reason for testing
* Deliverables
* Risk factors
* Entry and exit criteria
Quality control is the process of running a program to determine if it has any defects, as well as making sure that the software meets all of the requirements put forth by the stakeholders. Quality assurance is a process-oriented approach that focuses on making sure that the methods, techniques, and processes used to create quality deliverables are applied correctly.
Manual testing’s strengths are:
 
* It’s cheaper up front
* It’s great for testing UIs
* It’s perfect for ad hoc testing
* It’s ideal for testing minor changes
* You get visual feedback that’s accurate and quick
* Testers don’t have to know anything about automation tools
The testing activity ends when the testing team completes the following milestones.
 
Test case execution: The successful completion of a full test cycle after the final bug fix marks the end of the testing phase.
 
Testing deadline: The end date of the validation stage also declares the closure of validation if no critical or high-priority defects remain in the system.
 
Code coverage (CC) ratio: This is the amount of code covered by automated tests. If the team achieves the intended code coverage ratio, it can choose to end the validation.
 
Mean Time Between Failure (MTBF) rate: MTBF refers to the average amount of time that a device or product functions before failing. This measurement includes only operational time between failures and does not include repair times, assuming the item is repaired and begins functioning again. MTBF figures are often used to project how likely a single unit is to fail within a certain period of time.
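As a quick illustration (with made-up uptime figures), MTBF can be computed as total operational time divided by the number of failures:

```python
# Illustrative sketch: computing MTBF from recorded uptimes (hours of
# operation before each failure). The figures are made up for the example.

def mtbf(uptimes):
    """MTBF = total operational time / number of failures."""
    return sum(uptimes) / len(uptimes)

uptimes = [120.0, 95.5, 210.0, 74.5]  # operational hours before each of 4 failures
print(mtbf(uptimes))  # 125.0
```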
Quality control is a product-oriented approach of running a program to determine if it has any defects, as well as making sure that the software meets all of the requirements put forth by the stakeholders.
Some types of testing are conducted by software developers and some by specialized quality assurance staff. Here are a few different kinds of software testing, along with a brief description of each.

| Type | Description |
| --- | --- |
| Unit Testing | A programmatic test that checks the internal working of a unit of code, such as a method or a function. |
| Integration Testing | Ensures that multiple components of the system work as expected when they are combined to produce a result. |
| Regression Testing | Ensures that existing features/functionality that used to work are not broken by new code changes. |
| System Testing | Complete end-to-end testing done on the complete software to make sure the whole system works as expected. |
| Smoke Testing | A quick test performed to ensure that the software works at the most basic level and doesn’t crash when it’s started. Its name originates from hardware testing, where you just plug in the device and see if smoke comes out. |
| Performance Testing | Ensures that the software performs according to the user’s expectations by checking the response time and throughput under a specific load and environment. |
| User Acceptance Testing | Ensures the software meets the requirements of the clients or users. This is typically the last step before the software goes live, i.e., moves to production. |
| Stress Testing | Ensures that the performance of the software doesn’t degrade when the load increases. In stress testing, the tester subjects the software to heavy loads, such as a high number of requests or stringent memory conditions, to verify that it works well. |
| Usability Testing | Measures how usable the software is. This is typically performed with a sample set of end users, who use the software and provide feedback on how easy or complicated it is to use. |
| Security Testing | Now more important than ever. Security testing tries to break a software’s security checks to gain access to confidential data. It is crucial for web-based applications or any application that involves money. |
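To make the first row of the table concrete, here is a minimal unit test written with Python's built-in `unittest` module; the `apply_discount` function is a hypothetical unit under test, not from any real codebase:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: returns price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the test case programmatically instead of via unittest.main().
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```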
Test coverage is a quality metric that represents the amount (as a percentage) of testing completed for a product. It is relevant for both functional and non-functional testing activities, and it is used to identify missing test cases.
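As a sketch, the metric is simply the ratio of tested items (test cases executed, requirements covered, etc.) to the total, expressed as a percentage; the numbers here are illustrative:

```python
# Illustrative: test coverage as a percentage of items exercised.
def coverage_percent(covered, total):
    return round(100 * covered / total, 1)

print(coverage_percent(45, 60))  # 75.0 -> a quarter of the items still lack tests
```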
We need software testing for the following reasons:
 
* Testing assures the stakeholders that the product works as intended.
 
* Avoidable defects leaking to the end user/customer without proper testing give the development company a bad reputation.
 
* Defects detected in an earlier phase of the SDLC cost less and take fewer resources to correct.
 
* Testing saves development time by detecting issues in an earlier phase of development.
 
* The testing team adds another dimension to software development by providing a different viewpoint on the product development process.
Black box testing: The strategy of black box testing is based on requirements and specifications. It requires no knowledge of the internal paths, structure, or implementation of the software being tested.
 
White box testing: White box testing is based on the internal paths, code structure, and implementation of the software being tested. It requires full and detailed programming skills.
 
Gray box testing: In this type of testing, we look into the box being tested, but only to understand how it has been implemented. After that, we close the box and apply black box testing.

| Black box testing | Gray box testing | White box testing |
| --- | --- | --- |
| Does not need knowledge of the program's implementation. | Needs only limited knowledge of the program's internals. | Implementation details of the program are fully required. |
| It has a low granularity. | It has a medium granularity. | It has a high granularity. |
| Also known as opaque box testing, closed box testing, input-output testing, data-driven testing, behavioral testing, and functional testing. | Also known as translucent box testing. | Also known as glass box testing or clear box testing. |
| It is a form of user acceptance testing, i.e., it can be done by end users. | It is also a form of user acceptance testing. | Testers and programmers mainly do it. |
| Test cases are designed from the functional specifications, as internal details are not known. | Test cases are designed from high-level design documents and partial knowledge of the internals. | Test cases are designed from the internal details of the program. |
It is not possible to perform 100% testing of any software or product. However, we can take the following steps to come closer:
 
Setting a hard limit on:
            * Percentage of test cases passed
            * Number of bugs discovered
 
Setting a red flag in case:
            * There is a depletion of the test budget
            * There is a breach of deadlines
 
Setting a green flag in case:
            * The entire functionality is covered in test cases
            * All critical and major bugs have a ‘CLOSED’ status
Alpha Testing : It is a type of software testing performed to identify bugs before releasing the product to real users or to the public. Alpha Testing is a type of user acceptance testing.

Beta Testing : It is performed by real users of the software application in a real environment. Beta Testing is also a type of user acceptance testing.
The testbed is an environment configured for testing an application. It includes the hardware as well as any software needed to run the program under test: hardware, software, network configuration, the application under test, and other related software.
The manual testing process comprises the following steps:
 
* Planning and Control
* Analysis and Design
* Implementation and Execution
* Evaluating exit criteria and Reporting
* Test Closure activities
Unit testing has many names such as module testing or component testing.
 
Many times, it is the developers who test individual units or modules to check if they are working correctly.
 
Integration testing, on the other hand, validates how well two or more units of software interact with each other.
 
There are three ways to validate integration:
 
* Big Bang approach
* Top-down approach
* Bottom-up approach
The test driver is a section of code that calls the software component under test. It is useful in testing that follows the bottom-up approach.
 
The test stub is a dummy program that integrates with an application to complete its functionality. It is relevant for testing that uses the top-down approach.
 
For example:
 
* Assume we have to test the interface between Modules A and B, but only Module A has been developed. We can still test Module A if we have the real Module B or a dummy module standing in for it. In this case, the dummy module is the test stub.
 
* Now suppose Module B is ready but Module A, which calls it, is not. To pass data to Module B, we need some external piece of code to invoke it, called the test driver.
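The two roles can be sketched in code. Everything here is hypothetical: `TaxServiceStub` stands in for an undeveloped Module B, and `test_driver` plays the external caller:

```python
# Module A (developed) depends on a tax service, Module B (not yet developed).
def module_a_total(order_total, tax_service):
    """Module A under test: asks the tax service for a rate, returns the total."""
    rate = tax_service.get_rate()
    return round(order_total * (1 + rate), 2)

class TaxServiceStub:
    """Test stub: a dummy stand-in for Module B that returns a canned value,
    so Module A can be exercised top-down before B exists."""
    def get_rate(self):
        return 0.10

def test_driver():
    """Test driver: the external code that calls the component under test
    and checks its output (the bottom-up counterpart of the stub)."""
    total = module_a_total(100.0, TaxServiceStub())
    assert total == 110.0
    return total

print(test_driver())  # 110.0
```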
Data flow testing is one of the white-box testing techniques.
 
It focuses on designing test cases that cover the control flow paths around variable definitions and their uses in the modules. It expects test cases to have the following attributes:
 
* The input to the module
* The control flow path to be tested
* The expected outcome of the test case
* A pair of an appropriate variable definition and its use
The drawbacks of manual testing are:
 
* It is highly susceptible to human error and therefore risky
* Test types like load testing and performance testing are not feasible manually
* Regression tests are very time-consuming when done manually
* Its scope is very limited compared to automation testing
* It is not suitable for very large organizations and time-bound projects
* The cost adds up, so it’s more expensive to test manually in the long run
End to End testing is the process of testing a software system from start to finish. The tester tests the software just like an end-user would. For example, to test a desktop software, the tester would install the software as the user would, open it, use the application as intended, and verify the behavior. Same for a web application.
 
There is an important difference between end-to-end testing vs. other forms of testing that are more isolated, such as unit testing. In end-to-end testing, the software is tested along with all its dependencies and integrations, such as databases, networks, file systems, and other external services.
No, it is not possible. System testing should begin only once all of the modules are in place and working correctly. However, it is best performed before User Acceptance Testing (UAT).
Quality Assurance is a process-driven approach that checks if the process of developing the product is correct and conforming to all the standards. It is considered a preventive measure. This is because it identifies the weakness in the process to build software. It involves activities like document review, test case review, walk-throughs, inspection, etc.
Quality control is a product-driven approach that checks that the developed product conforms to all the specified requirements. It is considered a corrective measure as it tests the built product to find the defects. It involves different types of testing like functional testing, performance testing, usability testing, etc.
| # | Verification | Validation |
| --- | --- | --- |
| 1. | Verification is the process of evaluating the different artifacts as well as the process of software development, to ensure that the product being developed will comply with the standards. | Validation is the process of checking that the developed software product conforms to the specified business requirements. |
| 2. | It is a static process of analyzing the documents, not the actual end product. | It involves dynamic testing of the software product by running it. |
| 3. | Verification is a process-oriented approach. | Validation is a product-oriented approach. |
| 4. | Answers the question, “Are we building the product right?” | Answers the question, “Are we building the right product?” |
| 5. | Errors found during verification require less cost and fewer resources to fix than errors found during validation. | Errors found during validation require more cost and resources; the later an error is discovered, the higher the cost to fix it. |
A test bed is a test environment used for testing an application. A test bed configuration can consist of the hardware and software requirements of the application under test, including the operating system, hardware configuration, software configuration, Tomcat, database, etc.
A test plan is a formal document describing the scope of testing, the approach to be used, resources required and time estimate of carrying out the testing process. It is derived from the requirement documents (Software Requirement Specifications).
A test scenario is derived from a use case. It is used for end-to-end testing of a feature of an application. A single test scenario can cater to multiple test cases. Scenario testing is particularly useful when there is a time constraint on testing.
A test case is used to test the conformance of an application with its requirement specifications. It is a set of conditions with pre-requisites, input values and expected results in a documented form.
A test case can have the following attributes:
 
TestCaseId: A unique identifier of the test case.
 
Test Summary: One-line summary of the test case.
 
Description: Detailed description of the test case.
 
Prerequisite or pre-condition: A set of prerequisites that must be satisfied before executing the test steps.
 
Test Steps: Detailed steps for performing the test case.
 
Expected result: The expected result in order to pass the test.
 
Actual result: The actual result after executing the test steps.
 
Test Result: Pass/Fail status of the test execution.
 
Automation Status: Whether the test case is automated or not.
 
Date: The test execution date.
 
Executed by: Name of the person executing the test case.
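The attributes above can be modeled as a simple record. This dataclass sketch (field names adapted from the list, example values invented) shows one way a test-management tool might store them:

```python
from dataclasses import dataclass
from typing import List, Optional
import datetime

@dataclass
class ManualTestCase:
    test_case_id: str
    summary: str
    description: str
    precondition: str
    steps: List[str]
    expected_result: str
    actual_result: str = ""
    test_result: str = "Not Run"      # Pass / Fail / Not Run
    automated: bool = False           # automation status
    date: Optional[datetime.date] = None
    executed_by: str = ""

# Hypothetical example record.
tc = ManualTestCase(
    test_case_id="TC-101",
    summary="Valid login succeeds",
    description="Verify that a registered user can log in with valid credentials.",
    precondition="A registered, active user account exists.",
    steps=["Open the login page", "Enter valid credentials", "Click 'Log in'"],
    expected_result="The user lands on the dashboard.",
)
print(tc.test_result)  # Not Run
```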
When a bug occurs, we can follow the steps below.
 
* We can run more tests to make sure that the problem has a clear description.
* We can also run a few more tests to ensure that the same problem doesn’t exist with different inputs.
* Once we are certain of the full scope of the bug, we can add details and report it.
If the required specifications are not available for a product, then a test plan can be created based on the assumptions made about the product. But we should get all assumptions well-documented in the test plan.
If a product is in the production stage and one of its modules gets updated, is it necessary to retest it?
It is suggested to perform a regression testing and run tests for all the other modules as well. Finally, the QA should also carry out a system testing.
The differences between retesting and regression testing are as follows:
 
* We perform retesting to verify defect fixes, whereas regression testing assures that a bug fix does not break other parts of the application.
 
* Regression test cases verify the functionality of some or all modules.
 
* Regression testing involves re-executing passed test cases, whereas retesting involves re-executing test cases that are in a failed state.
 
* Retesting has a higher priority than regression testing, but in some cases both are executed in parallel.
Software testers employ black-box testing when they do not know the internal architecture or code structure. The techniques are:
 
* Equivalence Partitioning
* Boundary value analysis
* Cause-effect graphing
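The first two techniques can be illustrated against a hypothetical input field that accepts ages 18 through 60:

```python
def is_valid_age(age):
    """Hypothetical system under test: accepts ages 18 through 60 inclusive."""
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition.
partitions = {"below range": 10, "in range": 35, "above range": 70}
assert not is_valid_age(partitions["below range"])
assert is_valid_age(partitions["in range"])
assert not is_valid_age(partitions["above range"])

# Boundary value analysis: values just below, on, and just above each boundary.
boundary_values = [17, 18, 19, 59, 60, 61]
assert [is_valid_age(a) for a in boundary_values] == [False, True, True, True, True, False]
```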
Unlike black-box testing, white-box testing involves analyzing the system’s internal architecture and/or its implementation, in addition to its source code quality. Its techniques are:
 
* Statement Coverage
* Decision Coverage
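The difference between the two coverage levels can be seen with a one-branch function (illustrative code, not from any real system):

```python
def grade(score):
    """Illustrative function with a single decision point."""
    result = "fail"
    if score >= 50:        # decision point
        result = "pass"
    return result

# A single test with score=75 executes every statement (100% statement
# coverage), but only the True outcome of the decision.
assert grade(75) == "pass"

# Decision coverage additionally requires the False outcome to be exercised.
assert grade(30) == "fail"
```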
What is Sanity testing?
Sanity testing is testing done at the release level to test the main functionalities. It’s also considered an aspect of regression testing.
Non-functional testing examines the system's non-functional requirements: characteristics or qualities of the system that the client has specifically requested, such as performance, security, scalability, and usability.
 
Non-functional testing follows functional testing. It examines aspects that are unrelated to the software's functional requirements, and it assures that the program is safe, scalable, and fast, and that it will not crash under excessive pressure.
A test harness is a collection of software and test data used to test a program unit by running it under various conditions, such as stress, load, and data-driven scenarios, while monitoring its behavior and outputs.

A test harness contains two main parts:
 
* A Test Execution Engine
* Test script repository
| Positive Testing | Negative Testing |
| --- | --- |
| Positive testing ensures that your software performs as expected. The test fails if an error occurs during positive testing. | Negative testing guarantees that your app can gracefully deal with unexpected user behavior or invalid input. |
| In this testing, the tester always checks a single set of valid data. | Testers use as much ingenuity as possible when validating the app against invalid data. |
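As a sketch, here are positive and negative tests against a hypothetical username validator (3-12 alphanumeric characters; the rule is invented for this example):

```python
def validate_username(name):
    """Hypothetical validator: accepts 3-12 alphanumeric characters."""
    if not (3 <= len(name) <= 12) or not name.isalnum():
        raise ValueError("invalid username")
    return True

# Positive test: one set of valid data is expected to succeed.
assert validate_username("tester01") is True

# Negative tests: invalid inputs must be rejected gracefully, not crash the app.
for bad in ["ab", "x" * 13, "user name!"]:
    try:
        validate_username(bad)
        raise AssertionError("invalid input was accepted")
    except ValueError:
        pass  # expected: the validator handled the bad input gracefully
```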
According to the pesticide paradox, if the same tests are run repeatedly, the same test cases will eventually stop finding new bugs. Developers will be especially cautious in areas where testers discovered more flaws and may overlook other areas. Methods for avoiding the pesticide paradox include:
 
* Creating a completely new set of test cases to exercise different aspects of the software.
* Creating new test cases and incorporating them into the existing test cases.
 
With these methods, it is possible to detect more flaws in areas where defect levels had dropped.
| Phase | Explanation |
| --- | --- |
| Requirement Analysis | The QA team understands the requirements in terms of what will be tested and figures out the testable requirements. |
| Test Planning | In this phase, the test strategy is defined, and the objective and scope of the project are determined. |
| Test Case Development | Detailed test cases are defined and developed. The testing team also prepares the test data for testing. |
| Test Environment Setup | The setup of software and hardware for the testing team to execute test cases. |
| Test Execution | The process of executing the code and comparing the expected and actual results. |
| Test Cycle Closure | Involves a testing team meeting to evaluate the cycle-completion criteria based on test coverage, quality, cost, time, critical business objectives, and the software. |
Bug: A bug is a fault in the software that is detected at testing time. Bugs occur because of a coding error and lead the program to malfunction. They may also lead to a functional issue in the product. These are fatal errors that can block functionality, result in a crash, or cause performance bottlenecks.
 
Defect: A defect is a variance between expected and actual results, detected after the product goes into production. In simple terms, it refers to trouble with the software product, its external behavior, or its internal features.
 
Error: An error is a mistake, misunderstanding, or misconception on the part of a software developer (the category of developers includes software engineers, programmers, analysts, and testers). For example, a developer may misunderstand a design notation, or a programmer might type a variable name incorrectly; either leads to an error. An error arises in the software and can change the functionality of the program.
In a normal software development process, there are four steps, referred to as PDCA: Plan, Do, Check, Act.
 
* Plan defines the objectives and a comprehensive strategy to achieve them.
 
* Do executes the strategy finalized during the first stage.
 
* Check is the testing part of the software development stage. It is used to make sure that everything is happening according to the plan.
 
* Act is the step used to solve any issue arising during the check cycle.
 
While the developers take responsibility for planning and building the project, testers handle the check part of it.
Exploratory testing refers to testing in which test design and execution take place simultaneously against an application. In this testing type, the tester uses domain knowledge and testing experience to forecast where and under what conditions the system may behave in an unanticipated way.
Two major types of testing are important for web testing:
 
Performance Testing: A testing technique in which quality attributes such as responsiveness, scalability, and speed are evaluated under varying load conditions. Performance testing identifies the attributes that require improvement before launch.
 
Security Testing: A testing technique that identifies the resources and data that must be safeguarded from hackers or intruders.
The criticality of a bug can be low, medium, or high depending on the context.
 
* User interface defects – Low
* Boundary-related defects – Medium
* Error handling defects – Medium
* Calculation defects – High
* Misinterpreted data – High
* Hardware failures – High
* Compatibility issues – High
* Control flow defects – High
* Load conditions – High
Bug leakage: Bug leakage occurs when the testing team misses a bug during testing and it is discovered by the end user/customer. The defect exists in the application, goes undetected by the tester, and is eventually found by the customer/end user.
 
Bug release: A bug release is when a particular version of the software is released with a set of known bugs. These bugs are usually of low severity/priority. It is done when a software company can afford the existence of the bugs in the released software but not the time/cost of fixing them in that particular version.
A cause-effect graph testing technique is a black-box test design technique that uses a graphical representation of the input (cause) and output (effect) to construct the test. This method employs a variety of notations to describe AND, OR, NOT, and other relationships between the input and output conditions.
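A small sketch of the idea, using a hypothetical login rule: the effect "access granted" occurs only when the causes "valid user" AND "correct password" AND NOT "account locked" hold. The decision table below is derived from that graph:

```python
def access_granted(valid_user, correct_password, account_locked):
    """Hypothetical effect: valid_user AND correct_password AND NOT account_locked."""
    return valid_user and correct_password and not account_locked

# Decision table derived from the cause-effect graph: one test per combination.
cases = [
    ((True,  True,  False), True),   # all causes favorable -> effect occurs
    ((False, True,  False), False),  # unknown user
    ((True,  False, False), False),  # wrong password
    ((True,  True,  True),  False),  # account locked
]
for inputs, expected in cases:
    assert access_granted(*inputs) == expected
```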
A critical bug is a bug that impacts a major functionality of the application, and the application cannot be delivered without fixing the bug. It differs from a blocker bug in that it doesn’t affect or block the testing of other parts of the application.
A bug goes through the following phases in software development:
 
New: A bug or defect, when detected, is in the New state.
 
Assigned: The newly detected bug, when assigned to the corresponding developer, is in the Assigned state.
 
Open: When the developer works on the bug, the bug is in the Open state.
 
Rejected/Not a bug: A bug is in the Rejected state if the developer feels the bug is not genuine.
 
Deferred: A deferred bug is one whose fix is deferred for some time (to a later release) based on the urgency and criticality of the bug.
 
Fixed: When a bug is resolved by the developer, it is marked as Fixed.
 
Test: When fixed, the bug is assigned to the tester, and during this time the bug is marked as in Test.
 
Reopened: If the tester is not satisfied with the issue’s resolution, the bug is moved to the Reopened state.
 
Verified: After the Test phase, if the tester feels the bug is resolved, it is marked as Verified.
 
Closed: After the bug is verified, it is moved to the Closed status.
Structure-based test design techniques are also referred to as white box testing. In these techniques, knowledge of the code or internal architecture of the system is required to carry out the testing. The various kinds of structure-based (white box) testing techniques are:
 
Statement testing: A white box testing technique in which the test scripts are designed to execute the application’s code statements. Its coverage is measured as the lines of code or statements executed by the test scripts.
 
Decision testing/branch testing: A testing technique in which the test scripts are designed to execute the different decision branches (e.g., if-else conditions) in the application. Its coverage is measured as the percentage of decision outcomes exercised out of the total decision points in the application.
 
Condition testing: A testing approach in which we test the application with both the True and False outcomes of each condition. Hence, for n conditions we will have 2n test scripts.
 
Multiple condition testing: In multiple condition testing, the different combinations of condition outcomes are tested at least once. Hence, for 100% coverage we will have 2^n test scripts. This is very exhaustive, and 100% coverage is very difficult to achieve.
 
Condition determination testing: An optimized form of multiple condition testing in which the combinations that do not affect the outcome are discarded.
 
Path testing: Testing the independent paths in the system (paths are executable statements from entry to exit points).
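The 2n vs. 2^n counts above can be checked with a small two-condition example (n = 2; the discount rule is invented for illustration):

```python
from itertools import product

def grant_discount(is_member, total_over_100):
    """Illustrative decision with two conditions."""
    return is_member and total_over_100

# Condition testing: every condition must take both True and False outcomes.
# Two test cases suffice here: (True, True) and (False, False).
condition_tests = [(True, True), (False, False)]
assert grant_discount(*condition_tests[0]) is True
assert grant_discount(*condition_tests[1]) is False

# Multiple condition testing: every combination at least once -> 2**n cases.
all_combinations = list(product([True, False], repeat=2))
print(len(all_combinations))  # 4
for is_member, over_100 in all_combinations:
    assert grant_discount(is_member, over_100) == (is_member and over_100)
```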