Tuesday, August 26, 2014

Software Testing Interview Questions A list of Top 50 Software Testing/SQA FAQs you may be asked in an Interview! So here it goes...

1. What is 'Software Quality Assurance'?

Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with.

2. What is 'Software Testing'?

Testing involves operation of a system or application under controlled conditions and evaluating the results. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should.

3. Does every software project need testers?

It depends on the size and context of the project, the risks, the development methodology, the skill and experience of the developers. If the project is a short-term, small, low risk project, with highly experienced programmers utilizing thorough unit testing or test-first development, then test engineers may not be required for the project to succeed. For non-trivial-size projects or projects with non-trivial risks, a testing staff is usually necessary. The use of personnel with specialized skills enhances an organization's ability to be successful in large, complex, or difficult tasks. It allows for both a) deeper and stronger skills and b) the contribution of differing perspectives.


4. What is Regression testing?

Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
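A regression test in practice can be as simple as pinning a fixed behavior with an assertion so a later change cannot silently reintroduce the fault. This is a minimal sketch; the discount function and its boundary bug are hypothetical:

```python
# Hypothetical example: a discount function whose boundary bug
# (using '> 100' instead of '>= 100') was fixed.  The regression
# test pins the corrected behavior for every future change.
def discount(quantity):
    """Return the discount rate for an order quantity."""
    if quantity >= 100:   # the old buggy version used '> 100'
        return 0.10
    return 0.0

def test_discount_boundary():
    assert discount(99) == 0.0
    assert discount(100) == 0.10   # the case the original bug missed
    assert discount(101) == 0.10

test_discount_boundary()
```

Re-running this test after each modification is the essence of regression testing: the suite guards the boundary even when the change being made has nothing to do with discounts.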

5. Why does software have bugs?

Some of the reasons are:
a. Miscommunication or no communication
b. Programming errors
c. Changing requirements
d. Time pressures

6. How can new Software QA processes be introduced in an existing Organization?
It depends on the size of the organization and the risks involved.
a. For small groups or projects, a more ad-hoc process may be appropriate, depending on the type of customers and projects.
b. For larger organizations, incremental self-managed team approaches can work well.
7. What is verification? Validation?
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed.


8. What is a 'walkthrough'? What's an 'inspection'?

A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required. An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything.

9. What kinds of testing should be considered?

Some of the basic kinds of testing include: black box testing, white box testing, integration testing, functional testing, smoke testing, acceptance testing, load testing, performance testing, and user acceptance testing.

10. What are 5 common problems in the software development process?
a. Poor requirements
b. Unrealistic Schedule
c. Inadequate testing
d. Changing requirements
e. Miscommunication
11. What are 5 common solutions to software development problems?
a. Solid requirements
b. Realistic Schedule
c. Adequate testing
d. Clarity of requirements
e. Good communication among the Project team


12. What is software 'quality'?

Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable.

13. What are some recent major computer system failures caused by software bugs?

Trading on a major Asian stock exchange was brought to a halt in November of 2005, reportedly due to an error in a system software upgrade. A May 2005 newspaper article reported that a major hybrid car manufacturer had to install a software fix on 20,000 vehicles due to problems with invalid engine warning lights and occasional stalling. Media reports in January of 2005 detailed severe problems with a $170 million high-profile U.S. government IT systems project. Software testing was one of the five major problem areas according to a report of the commission reviewing the project.

14. What is 'good code'? What is 'good design'? 
'Good code' is code that works, is bug free, and is readable and maintainable. Good internal design is indicated by software code whose overall structure is clear, understandable, easily modifiable, and maintainable; is robust with sufficient error-handling and status logging capability; and works correctly when implemented. Good functional design is indicated by an application whose functionality can be traced back to customer and end-user requirements.

15. What is SEI? CMM? CMMI? ISO? Will it help?

These are all standards and maturity models concerned with effectiveness in delivering quality software. They help organizations identify best practices useful in increasing the maturity of their processes.

16. What steps are needed to develop and run software tests?
a. Obtain requirements, functional design, and internal design specifications and other necessary documents
b. Obtain budget and schedule requirements.
c. Determine the project context.
d. Identify risks.
e. Determine testing approaches, methods, test environment, and test data.
f. Set schedules and prepare test documents.
g. Perform tests.
h. Perform reviews and evaluations
i. Maintain and update documents
17. What's a 'test plan'? What's a 'test case'?

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly.
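A test case from that definition can be sketched as a record pairing an input and action with an expected response. The field names and the login stub below are illustrative, not taken from any particular tool:

```python
# A test case as a simple record: input, action, expected response.
test_case = {
    "id": "TC-042",
    "description": "Login rejects an empty password",
    "input": {"username": "alice", "password": ""},
    "expected": "error: password required",
}

def run_login(username, password):
    # Stub standing in for the feature under test.
    if not password:
        return "error: password required"
    return "logged in"

# Executing the case: compare the actual response to the expected one.
actual = run_login(**test_case["input"])
assert actual == test_case["expected"], f"{test_case['id']} failed"
```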

18. What should be done after a bug is found?

The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere.

19. Will automated testing tools make testing easier?
It depends on the project size. For small projects, the time needed to learn and implement them may not be worth it unless personnel are already familiar with the tools. For larger projects, or ongoing long-term projects, they can be valuable.
20. What's the best way to choose a test automation tool?
Some of the points that can be noted before choosing a tool would be:
a. Analyze the current non-automated testing situation to determine which testing activities are being performed.
b. Identify testing procedures that are time-consuming and repetitive.
c. Consider the cost/budget of the tool, along with training and implementation factors.
d. Evaluate the chosen tool to confirm the expected benefits.
21. How can it be determined if a test environment is appropriate?
The test environment should match, as closely as possible, the hardware, software, network, data, and usage characteristics of the expected live environments in which the software will be used.
22. What's the best approach to software test estimation?
The 'best approach' is highly dependent on the particular organization and project and the experience of the personnel involved. Some of the approaches to be considered are:
a. Implicit Risk Context Approach
b. Metrics-Based Approach
c. Test Work Breakdown Approach
d. Iterative Approach
e. Percentage-of-Development Approach
23. What if the software is so buggy it can't really be tested at all?
The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs.


24. How can it be known when to stop testing?

Common factors in deciding when to stop are:
a. Deadlines (release deadlines, testing deadlines, etc.)
b. Test cases completed with certain percentage passed
c. Test budget depleted
d. Coverage of code/functionality/requirements reaches a specified point
e. Bug rate falls below a certain level
f. Beta or alpha testing period ends

25. What if there isn't enough time for thorough testing?
a. Use risk analysis to determine where testing should be focused.
b. Determine the important functionality to be tested.
c. Determine the high-risk aspects of the project.
d. Prioritize the kinds of testing that need to be performed.
e. Determine the tests that will have the best high-risk-coverage to time-required ratio.

26. What if the project isn't big enough to justify extensive testing?

Consider the impact of project errors, not the size of the project. The tester might then do ad-hoc testing, or write up a limited test plan based on the risk analysis.
27. How does a client/server environment affect testing?
Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers, especially in multi-tier systems. Load/stress/performance testing may be useful in determining client/server application limitations and capabilities.
28. How can World Wide Web sites be tested?
Some of the considerations might include:
a. Testing the expected loads on the server
b. Performance expected on the client side
c. Testing that the required security measures are implemented and verified
d. Testing the HTML specification and external and internal links
e. CGI programs, applets, JavaScript, ActiveX components, etc. to be maintained, tracked, and controlled
29. How is testing affected by object-oriented designs?
Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. If the application was well designed this can simplify test design.
30. What is Extreme Programming and what's it got to do with testing?
Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. For testing ('extreme testing'), programmers are expected to write unit and functional test code first, before writing the application code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance/black box testing.
31. What makes a good Software Test engineer?
A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful.
32. What makes a good Software QA engineer?
They must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.
33. What's the role of documentation in QA?
QA practices should be documented such that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. Change management for documentation should be used.
34. What is a test strategy? What is the purpose of a test strategy?
It is a plan for conducting the test effort against one or more aspects of the target system. A test strategy needs to be able to convince management and other stakeholders that the approach is sound and achievable, and it also needs to be appropriate both in terms of the software product to be tested and the skills of the test team.
35. What information does a test strategy capture?


It captures an explanation of the general approach that will be used and the specific types, techniques, and styles of testing.

36. What is test data?

It is a collection of test input values that are consumed during the execution of a test, and the expected results referenced for comparison during that execution.
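Test data can be sketched as (input, expected result) pairs: the inputs are consumed during execution and the expected results are referenced for comparison. The validation rule below is hypothetical:

```python
import re

def is_valid_code(s):
    """Hypothetical rule: two uppercase letters, a dash, three digits."""
    return bool(re.fullmatch(r"[A-Z]{2}-\d{3}", s))

# Test data: each input value paired with its expected result.
test_data = [
    ("AB-123", True),
    ("ab-123", False),   # lowercase letters are rejected
    ("AB-12", False),    # too few digits
    ("", False),         # empty input
]

# During execution the inputs are consumed and the expected
# results are used for comparison.
for value, expected in test_data:
    assert is_valid_code(value) == expected, value
```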

37. What is Unit testing?
It is implemented against the smallest testable elements (units) of the software, and involves testing the internal structure, such as logic and data flow, as well as the unit's function and observable behaviors.
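As a sketch, the smallest testable element here is one function, and the unit tests exercise both its internal logic branches and its observable behavior. The triangle-classification example is illustrative:

```python
import unittest

def classify_triangle(a, b, c):
    """Smallest testable unit: classify a triangle by its side lengths."""
    if a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class TriangleUnitTests(unittest.TestCase):
    # One assertion per internal branch, covering the unit's
    # logic and data flow as well as its observable output.
    def test_each_branch(self):
        self.assertEqual(classify_triangle(1, 1, 5), "invalid")
        self.assertEqual(classify_triangle(2, 2, 2), "equilateral")
        self.assertEqual(classify_triangle(2, 2, 3), "isosceles")
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")
```

Saved to a file, this suite would be run with `python -m unittest`.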
38. How can the test results be used in testing?
Test results are used to record the detailed findings of the test effort and to subsequently calculate the different key measures of testing.
39. What is Developer testing?
Developer testing denotes the aspects of test design and implementation most appropriate for the team of developers to undertake.
40. What is independent testing?
Independent testing denotes the test design and implementation most appropriately performed by someone who is independent from the team of developers.
41. What is Integration testing?
Integration testing is performed to ensure that the components in the implementation model operate properly when combined to execute a use case.
42. What is System testing?
A series of tests designed to ensure that the modified program interacts correctly with other system components. These test procedures typically are performed by the system maintenance staff in their development library.
43. What is Acceptance testing?
User acceptance testing is the final test action taken before deploying the software. The goal of acceptance testing is to verify that the software is ready, and that it can be used by end users to perform those functions and tasks for which the software was built
44. What is the role of a Test Manager?
The Test Manager role is tasked with the overall responsibility for the test effort's success. The role involves quality and test advocacy, resource planning and management, and resolution of issues that impede the test effort.
45. What is the role of a Test Analyst?
The Test Analyst role is responsible for identifying and defining the required tests, monitoring detailed testing progress and results in each test cycle and evaluating the overall quality experienced as a result of testing activities. The role typically carries the responsibility for appropriately representing the needs of stakeholders that do not have direct or regular representation on the project.
46. What is the role of a Test Designer?
The Test Designer role is responsible for defining the test approach and ensuring its successful implementation. The role involves identifying the appropriate techniques, tools, and guidelines to implement the required tests, and giving guidance on the corresponding resource requirements for the test effort.
47. What are the roles and responsibilities of a Tester?
The Tester role is responsible for the core activities of the test effort, which involves conducting the necessary tests and logging the outcomes of that testing. The tester is responsible for identifying the most appropriate implementation approach for a given test, implementing individual tests, setting up and executing the tests, logging outcomes and verifying test execution, analyzing and recovering from execution errors.
48. What are the skills required to be a good tester?
A tester should have knowledge of testing approaches and techniques, diagnostic and problem-solving skills, knowledge of the system or application being tested, and knowledge of networking and system architecture.
49. What is test coverage?
Test coverage is the measurement of testing completeness, and it's based on the coverage of testing expressed by the coverage of test requirements and test cases or by the coverage of executed code.
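Coverage of test requirements reduces to a simple completeness ratio: requirements exercised by at least one test, divided by the total. The requirement identifiers below are illustrative:

```python
# Coverage as a completeness ratio over test requirements.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
covered_by_tests = {"REQ-1", "REQ-2", "REQ-4"}

coverage = len(requirements & covered_by_tests) / len(requirements)
assert coverage == 0.75   # 3 of 4 requirements are covered
```

The same idea applies to code coverage, where the numerator and denominator are executed versus total statements or branches.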
50. What is a test script?
The step-by-step instructions that realize a test, enabling its execution. Test Scripts may take the form of either documented textual instructions that are executed manually or computer readable instructions that enable automated test execution.

Sunday, August 3, 2014

Top 15 "Manual Testing" Interview Questions and Answer :)

Q 1: What's the difference between QA and testing?


Ans.:
* TESTING means "quality control".
* QUALITY CONTROL measures the quality of a product.
* QUALITY ASSURANCE measures the quality of the processes used to create a quality product.

Q 2: What is black box/white box testing?


Ans.: Black-box and white-box are test design methods. Black-box test design treats the system as a "black box", so it doesn't explicitly use knowledge of the internal structure. Black-box test design is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box. White-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. Synonyms for white-box include: structural, glass-box, and clear-box. While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioral" and "structural". Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged.
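The distinction can be sketched with one function tested two ways. The black-box cases come from the specification alone ("return the magnitude of x"); the white-box cases are chosen with knowledge of the internal branches:

```python
def absolute(x):
    # Internal structure: two branches.
    if x < 0:
        return -x
    return x

# Black-box: derived from the specification, with no knowledge
# of the branches inside.
assert absolute(7) == 7
assert absolute(-7) == 7

# White-box: one case per internal branch, plus the boundary
# between them.
assert absolute(-1) == 1   # negative branch
assert absolute(0) == 0    # boundary value
assert absolute(1) == 1    # positive branch
```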

 Q 3: What's the difference between load and stress testing?


Ans.: Load testing checks whether the application works correctly under the load that results from a large number of simultaneous users and transactions, and determines whether it can handle peak usage periods. Stress testing is a form of testing used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. For example, a web server may be stress tested using scripts, bots, and various denial-of-service tools to observe the performance of a web site during peak loads. Stress testing is a subset of load testing.
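A minimal load-test sketch, assuming a stub in place of the real system: `handle_request` stands in for an endpoint, and a thread pool simulates simultaneous users. A real load or stress test would drive an actual server with a dedicated tool (e.g. JMeter or Locust):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    # Stand-in for the system under test.
    time.sleep(0.001)   # simulated service time
    return 200

def load_test(n_users, requests_per_user):
    # Simulate n_users concurrent clients issuing requests.
    start = time.time()
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        results = list(pool.map(handle_request,
                                range(n_users * requests_per_user)))
    return results, time.time() - start

# Load testing: the expected number of simultaneous users;
# stress testing would keep raising n_users until failures appear.
results, elapsed = load_test(n_users=10, requests_per_user=5)
assert all(status == 200 for status in results)
assert len(results) == 50
```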

Q 4: What is a 'Walk-Through'?


Ans.: A 'walk-through' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.

Q 5: Who is responsible for integration testing?


Ans: Team leader along with developers.

Q 6: Is performance testing part of System testing?

Ans: Yes

Q 7: Is system testing performed before unit testing?

Ans : No

Q 8: What is software Test plan? Mention major items of a test plan document.


Ans: A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:

1. Objective

2. Product Details

•           Product Identification
•           Features to be tested
•           Features not to be tested
•           Acceptance Criteria 

3. Test Plan

•           Test Item
•           Test Deliverable 
•           Testing Process
•           Overview of test cycle
•           Formal Review Points
•           Test Environment
•           Resource
•           Security
•           System Test Setup
•           Schedule
•           Test Procedure
•           Entrance Criteria
•           Suspension Criteria
•           Resumption Criteria
•           Exit Criteria
•           Error Measurement / Management System
•           Error Reporting
•           Error Classification

4. Testing Types

•           Types of testing to be performed for the project.

5. Appendix

•           Defect Summary
•           Reference Documents

 Q 9: What should be the entry criteria and deliverables for system testing?


Ans: Although this is subjective and specific to each project, generally the entry criteria should be:
•           All human resources must be available and assigned the task.
•           Unit test cases should be prepared before coding.
•           All developed code must be unit tested.
•           System test cases should be prepared before start of system testing.
•           All test hardware and environments must be in place, and free for System test use.

Deliverables should be the integration tested build along with installation guide and release notes.
           

Q 10: How can it be known when to stop testing?


Ans: This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
•           Deadlines (release deadlines, testing deadlines, etc.)
•           Test cases completed with certain percentage passed
•           Test budget depleted
•           Coverage of code/functionality/requirements reaches a specified point
•           Bug rate falls below a certain level
•           Beta or alpha testing period ends

Q 11: What is the priority and severity of a bug? Explain, with an example, a scenario with high priority and low severity, and vice versa.


Ans.: Severity of a bug describes the impact of a bug.

Blocker:          Blocks development and/or testing work
Critical:         Crashes, loss of data, severe memory leak
Major:            Major loss of function
Minor:            Minor loss of function, or other problem where an easy workaround is present
Trivial:          Cosmetic problem like misspelled words or misaligned text
Enhancement:      Request for enhancement

Priority describes the importance and order in which a bug should be fixed. This field is utilized by the programmers/engineers to prioritize their work to be done. The available priorities range from P1 (most important) to P5 (least important).

Example: A look-and-feel issue like a spelling mistake on a UI has low severity, but if the product is going for beta testing, the priority to fix the bug is higher.

Similarly, an exception raised on a particular operation has high severity, but if that module is not to be delivered right now, the bug can have low priority.


Q 12: Explain various States of bug, can a bug be new after it is resolved?


Ans:     Mainly a bug can be in either an open state or an end state. The open states comprise Unconfirmed, New, Assigned, and Reopened. The end states comprise Resolved, Verified, and Closed. No, a bug cannot be New after it is resolved; it should have the state Reopened.
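The life cycle can be sketched as a table of allowed state transitions (modeled loosely on Bugzilla-style states; the exact set varies by tracker). The key property from the answer above is encoded directly: a Resolved bug cannot go back to New, only to Reopened:

```python
# Bug life cycle as allowed state transitions (illustrative).
TRANSITIONS = {
    "UNCONFIRMED": {"NEW"},
    "NEW": {"ASSIGNED"},
    "ASSIGNED": {"RESOLVED"},
    "RESOLVED": {"VERIFIED", "REOPENED"},
    "VERIFIED": {"CLOSED", "REOPENED"},
    "CLOSED": {"REOPENED"},
    "REOPENED": {"ASSIGNED", "RESOLVED"},
}

def move(current, target):
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

assert move("RESOLVED", "REOPENED") == "REOPENED"   # allowed
try:
    move("RESOLVED", "NEW")                         # not allowed
    raise AssertionError("should have raised")
except ValueError:
    pass
```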

Q 13: What does the word 'deferred' mean in terms of the bug life cycle? When is a bug deferred?


Ans: Deferred means postponing the resolution of the bug. When a bug is found valid but, due to skill or time restrictions, it is planned to be resolved later, the bug is sent to the deferred state.

Q 14:  What are the various points a tester should ensure before entering a bug?


Ans:     1. Make sure the bug has not been previously reported.
            2. Be sure you’ve reproduced your bug using the latest build released.

Q 15: Explain any two qualities of a useful bug report.


Ans: A useful bug report has two qualities:
1. Reproducible: If the developer can't see it or conclusively prove that it exists, he'll mark it as "WORKSFORME" or "NOT REPRODUCIBLE". Every relevant detail you can provide helps.
2. Specific: The quicker the engineer can isolate the issue to a specific problem, the more likely it'll be expediently fixed.