What is Software Testing?
Software testing refers to the process of evaluating software with the intention of finding errors in it. It is a technique aimed at evaluating an attribute or capability of a program or product and determining whether it meets its quality requirements.
What are the types of Software Testing?
There are two types of software testing: manual and automated.
Manual testing refers to testing performed by humans, who search for bugs or anomalies in an application. Its most important advantage is that it allows for real-life scenario testing, often following conditions written in test cases.
Automated testing uses test scripts and specialized tools to automate the process of software testing.
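As a minimal sketch, an automated test can be written with Python's built-in `unittest` framework; the `add` function here is an illustrative stand-in for the code under test:

```python
import unittest

def add(a, b):
    # Function under test (illustrative example)
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_positive_numbers(self):
        # The script checks the result automatically -- no human inspection needed
        self.assertEqual(add(2, 3), 5)

    def test_add_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Once written, such a script can be re-run after every code change, which is what makes automation attractive for frequently retested areas.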
Can you explain Functional Testing?
Functional testing verifies each function of an application or software. The tester verifies functionality against a specified set of requirements, so the source code of the software does not play a major role in this case; testing the behaviour of the software is the main concern.
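For example, a functional test checks only the input/output behaviour against a requirement, without looking at how the code is implemented. The `apply_discount` function and the 10%-off requirement below are hypothetical:

```python
# Hypothetical requirement: orders over 100 get a 10% discount.
def apply_discount(total):
    return total * 0.9 if total > 100 else total

# Functional tests verify behaviour against the requirement,
# not how apply_discount is implemented internally.
assert apply_discount(200) == 180.0   # over the threshold: discounted
assert apply_discount(50) == 50       # under the threshold: unchanged
```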
What are the different types of Functional Testing?
The different types of functional testing include:
- Unit Testing
- Interface Testing
- Regression Testing
- Acceptance Testing
- Integration Testing
- System Testing
- Smoke testing
- Sanity testing
Can you explain Non-functional testing?
Non-functional testing determines if the product will provide a good user experience by measuring how fast the product responds to a request or how long it takes to perform an action. It mainly focuses on the performance, reliability, efficiency, speed, and other non-functional aspects of the software.
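As an illustration, a simple non-functional (performance) check can time an operation and compare it against a response-time budget. Both the `handle_request` stand-in and the 0.5-second budget below are assumptions for the example:

```python
import time

def handle_request():
    # Stand-in for the operation being measured (illustrative)
    return sum(range(10_000))

# A simple performance check: the 0.5 s budget is a hypothetical
# non-functional requirement.
start = time.perf_counter()
handle_request()
elapsed = time.perf_counter() - start
assert elapsed < 0.5, f"Response took {elapsed:.3f}s, exceeding the 0.5s budget"
```

Real performance testing uses dedicated tools and many virtual users, but the principle is the same: measure a non-functional attribute and compare it against a target.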
What are the different types of Non-functional Testing?
The different types of Non-functional testing include:
- Performance Testing
- Stress Testing
- Load testing
- Security Testing
- Volume Testing
- Documentation testing
- Recovery Testing
- Ergonomics Testing
- Compliance Testing
- Localization testing
- Interoperability Testing
- Availability Testing
- Baseline Testing
- Reliability Testing
- Usability testing
- Endurance testing
- Installation Testing
What is the difference between Black Box Testing and White Box Testing?
Black Box Testing
Black box testing is a method of software testing that checks the functionality of a software application without knowledge of its design, internal components, or structure. It is also referred to as specification-based testing.
White Box Testing
White box testing is a method of software testing that tests the internal programming structures of an application. It is also known as clear box testing, open box testing, glass box testing, structural testing, or transparent box testing. Its approach is the opposite of black-box testing, and it is used at the unit, integration, and system levels of the testing process.
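To illustrate the difference: a white-box test is designed from knowledge of the code's internal structure, for example one test per branch so that every path is executed. The `classify` function below is illustrative:

```python
def classify(n):
    # Function under test with two internal branches (illustrative)
    if n % 2 == 0:
        return "even"
    return "odd"

# White-box tests are derived from the code's structure:
# one test per branch, so every path through classify() is executed.
assert classify(4) == "even"   # exercises the `if` branch
assert classify(7) == "odd"    # exercises the fallthrough branch
```

A black-box tester would instead derive the same inputs purely from the specification ("returns 'even' for even numbers"), without ever seeing the branches.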
What do you know about Test Cases?
A test case is a set of conditions or variables under which a tester determines whether an application, software system, or one of its features is working as it should.
A test case is composed of a number of parts: the test input data, the system state at the beginning of the test, the resultant outputs, and the post-test system state. Some test case management tools include:
- Test Rail
- Testpad
- Qase
- Klaros
- TestCaseLab
- Test Collab
- PractiTest
- Meliora Testlab
- TestLodge
How to Write Test Cases?
A test case typically includes the following fields:
- Test Case Id
- Test Description
- Assumptions and Pre-Condition
- Test data
- Steps to be executed
- Expected Result
- Actual Result and Post-Condition
- Pass/Fail
- Comments
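The fields above can be illustrated with a small executable test case; all identifiers here (`TC_LOGIN_001`, the `login` function, the user data) are hypothetical:

```python
# Test Case ID: TC_LOGIN_001 (all identifiers are illustrative)
# Test Description: Verify login with valid credentials
# Pre-Condition: a user "alice" with password "secret" exists
# Test Data: username="alice", password="secret"
# Expected Result: login succeeds

def login(username, password, users):
    # Stand-in for the system under test
    return users.get(username) == password

def test_login_valid_credentials():
    users = {"alice": "secret"}               # pre-condition / test data
    actual = login("alice", "secret", users)  # steps to be executed
    assert actual is True                     # pass/fail: actual vs expected

test_login_valid_credentials()
```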
Basic Format of Test Case Statement
- Verify
- Using [tool name, tag name, dialog, etc]
- With [conditions]
- To [what is returned, shown, demonstrated]
- Verify: Used as the first word of the test statement.
- Using: Identifies what is being tested. Depending on the situation, you can use ‘entering’ or ‘selecting’ here instead of ‘using’.
What is a Test Scenario?
Test scenario is basically a documentation of a use case. In other words, it describes an action the user may undertake with a website or app. It may also describe a situation the user may find themselves in while using that software.
How to Write Test Scenario?
- User Story ID/Requirement ID
- Test Scenario ID
- Test Scenario Description
- Number of Test Cases
- Priority
As a tester, you can follow these five steps to write a test scenario:
Step 1: First, carefully study the requirement documents (Business Requirement Specification (BRS), Software Requirement Specification (SRS), Functional Requirement Specification (FRS)) of the system under test (SUT). Also refer to use cases, manuals, books, etc. of the application to be tested.
Step 2: For each requirement, find out how the user may use the software in all possible ways.
Step 3: List out test scenarios for each and every feature of the application under test (AUT).
Step 4: After listing the test scenarios, create a Traceability Matrix to verify that every requirement is mapped to a test scenario.
Step 5: Send the test scenarios to your supervisor for review and evaluation. Later, they are evaluated by the other stakeholders of the project.
What is a Test Plan?
A test plan is basically a dynamic document monitored and controlled by the testing manager. A test plan documents the what, when, why, how, and who of a testing project. It also defines the size of the test effort.
What Should a Test Plan Include?
- Test Strategy
- Test Objective
- Test Schedule and Time
- Test Scope
- Reason for Testing
- Exit/Suspension Criteria
- Resource Planning
- Test Deliverables
When do you stop testing?
The following points indicate when to stop testing:
1. All possible functionality has been covered, based on the BRD and FRD documents.
2. Scenarios have been created from the BRD and FRD, and the maximum possible number of test cases has been written.
3. All test cases have been executed and defects have been logged.
4. All logged defects have reached Closed status.
What are the basic testing steps?
- Define quantitative and qualitative test completion criteria
- Design test cases to cover the above criteria
- Build “executable” test cases
- Run the test cases
- Verify test results
- Verify test coverage against completion criteria
- Manage test libraries
- Manage reported incidents/defects
What is beta testing?
Beta testing is performed by the customer; it is also known as external acceptance testing.
What are the debugging categories?
Following are the debugging categories:
- Backtracking
- Brute force debugging
- Cause elimination method
- Fault tree analysis
- Program Slicing
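As a small illustration of brute-force debugging, one of the simplest categories above, intermediate state is logged (or printed) so the tester can trace where a fault appears. The `average` function is illustrative:

```python
import logging

logging.basicConfig(level=logging.DEBUG)

def average(values):
    total = 0
    for v in values:
        total += v
        # Brute-force debugging: log intermediate state to trace the fault
        logging.debug("value=%s running_total=%s", v, total)
    return total / len(values)

print(average([2, 4, 6]))  # the debug log shows each accumulation step
```

More systematic categories such as cause elimination or program slicing narrow the search instead of dumping all state, but brute force is often the first technique testers reach for.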
When is Automation Testing used?
Test automation should be used after considering the following aspects of the software:
- Mid-range, large, and critical projects
- Projects that require testing the same areas frequently
- Requirements that do not change frequently
- Software that is stable with respect to manual testing
- Assessing the application's load and performance with many virtual users
- Availability of time
How is error different from failure?
Error: An error appears due to a mistake, such as a logical mistake in the code, made by the developer.
Failure: A failure occurs when the software fails to perform in the real environment.
What are the differences between defect, fault, and bug?
Defect: It is a problem in the functioning of a software system during testing
Fault: It is an incorrect step, process or data definition in a software product.
Bug: It is a flaw in a software system that causes the system to behave in an unintended manner.
What is the rule of TDD?
TDD stands for Test-Driven Development. It focuses on creating test cases before writing the actual code, meaning you write the tests before you write the application code.
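A minimal TDD cycle might look like the following sketch, where the test for a hypothetical `slugify` function is written before the function itself:

```python
# Step 1 (Red): write the test first -- it fails because slugify() doesn't exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (Green): write just enough code to make the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

# Step 3 (Refactor): clean up the code while keeping the test green.
test_slugify()
```

The cycle then repeats: each new behaviour starts with a new failing test.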
What do you know about Traceability Matrix?
The Traceability Matrix, also known as the Requirement Traceability Matrix (RTM), is a table used to trace requirements during the Software Development Life Cycle (SDLC). It can be used for forward tracing (i.e., from requirements to design or coding) or backward tracing (i.e., from coding to requirements). There are many user-defined templates for the RTM.
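As a sketch, an RTM can be represented as a simple mapping from requirement IDs to the test cases that cover them; all IDs below are illustrative:

```python
# A minimal requirement traceability matrix: requirement ID -> covering test cases.
rtm = {
    "REQ-001": ["TC-001", "TC-002"],   # forward tracing: requirement -> tests
    "REQ-002": ["TC-003"],
    "REQ-003": [],                     # gap: requirement with no test coverage
}

# Coverage check: flag any requirement not mapped to at least one test case.
uncovered = [req for req, tests in rtm.items() if not tests]
print("Uncovered requirements:", uncovered)
```

In practice the RTM is usually kept in a spreadsheet or test management tool, but the gap-detection idea is the same.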
When and how do you know that testing is done?
It is difficult to determine when to stop testing, as testing is a never-ending process and no one can claim that a software product is 100% tested. The following aspects should be considered when stopping the testing process:
- Testing deadlines
- Completion of test case execution
- Completion of functional and code coverage to a certain point
- Bug rate falls below a certain level and no high-priority bugs are identified
- Management decision
The most common responses are:
- We test until we are out of time and resources.
- We test until all of the test cases we created have run successfully at least once and there are no outstanding severe defects.
How many test cases can we write in a day?
We can write anywhere between 2-5 test cases.
Initially, we typically write 2-5 test cases per day, but in later stages we write around 6-7, because by then we have better product knowledge, we start reusing test cases, and we have more experience with the product.
How many test cases can we run in a day?
We can run around 30-55 test cases per day.
How to write a test plan?
The IEEE 829 standard is a great resource for how to write a test plan.
List of the different parts of the IEEE 829:
- Test Plan Identifier
- References
- Introduction
- Test Items
- Features to Be Tested
- Approach
- Pass/Fail Criteria
- Suspension Criteria
- Test Deliverables
- Testing Tasks
- Environmental Needs
- Responsibilities
- Staffing and Training needs
- Schedule
- Risks and Contingencies
- Approvals
List out the Test Deliverables.
Some important test deliverables include:
- Test Strategy
- Test Plan
- Test Cases/Scripts
- Test Scenarios
- Test Data
- Effort Estimation Report
- Requirement Traceability Matrix (RTM)
- Defect Report/Bug Report
- Test Execution Report
- Graphs and Metrics
- Test summary report
- Test incident report
- Test closure report
- Release Note
- User guide
- Installation/configuration guide
- Test status report
- Weekly status report (Project manager to client)
What are some common mistakes that lead to major issues?
Some common mistakes include:
- Poor Scheduling
- Underestimating
- Ignoring small issues
- Not following the exact process
- Improper resource allocation