T-76.5613 Software testing and quality assurance

The sole purpose of this resource is to prepare students for the exam of the course T-76.5613 Software testing and quality assurance, taught at Helsinki University of Technology. The exam consists mostly of definitions and questions from the lectures, which this resource tries to provide answers to.

Ideally, reading this page instead of the overly long course book would be more than enough to pass the course exam. So if you are a student taking this course, please contribute!

Lecture definitions

Lecture 1: Introduction to Software Quality and Quality Assurance
Software quality
  • The degree to which a system, component or process meets specified requirements.
  • The degree to which a system, component, or process meets customer or user needs or expectations.
Software quality assurance
  • Planned processes that provide confidence in a product's suitability for its intended purpose.
  • It is a set of activities intended to ensure that products satisfy customer requirements.
  • Create good and applicable methods and practices for achieving good enough quality level.
Software testing Testing is the execution of programs with the intent of finding defects.
Good enough quality
  • There are sufficient benefits.
  • No critical problems.
  • The benefits outweigh the problems.
  • Further improvements would be more harmful than helpful.
Lecture 2: Testers and Testing terminology
Verification Verification ensures that the software correctly implements the specification.
Validation Validation ensures that the software meets the customer requirements.
Black-box testing The software being tested is considered as a black box and there is no knowledge of the internal structure.
White-box testing Testing that is based on knowing the inner structure of the software and the program logic.
Functional testing Testing used to evaluate a system with its specified functional requirements.
Non-functional testing Testing to see how the system performs against its specified non-functional requirements, such as reliability and usability.
Dynamic quality practices Testing that executes code. Traditional testing methods.
Static quality practices Practices that do not execute code, such as reviews, inspections and static analysis.
Scripted testing Test case based, where each test case is pre-documented in detail with step-by-step instructions.
Non-scripted testing Exploratory testing without detailed test case descriptions.
Test oracle A document or a piece of software that allows the tester to decide if the test was passed or failed.
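An oracle can also be an executable reference. A minimal Python sketch, where `fast_sum` stands in for a hypothetical system under test and the built-in `sum` acts as the trusted oracle:

```python
def fast_sum(values):
    # hypothetical implementation under test
    total = 0
    for v in values:
        total += v
    return total

def run_test(inputs):
    # the oracle: a trusted reference (here, the built-in sum)
    # decides whether the test passed or failed
    return fast_sum(inputs) == sum(inputs)

assert run_test([1, 2, 3])
assert run_test([])
```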
Reliability The ability of a software to perform its required functions under stated conditions for a specified period of time.
Maintainability The effort needed to make changes to the software.
Testability Effort needed to test a software system to ensure it performs its intended functions.
Defect severity Severity of the consequences caused by a software fault.
Defect priority The order in which the found defects are fixed.
Regression testing
  • Running tests that have been run before, after changes have been made to the software.
  • To get confidence that everything still works and to reveal any unanticipated side effects.
Testing techniques
  • A testing technique is a definitive procedure that produces a test result.
  • Methods or ways of applying defect detection strategies.
Test case
  • A test case is an input with an expected result.
  • Normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps to follow, input, output, expected result, and actual result.
Test level
  • Group of test activities that focus on a certain level of the tested software.
  • Examples: Unit testing, Integration testing, System testing, Acceptance testing.
  • Can be seen as level of detail and abstraction.
Test type
  • Group of test activities that evaluate a system concerning some quality characteristics.
  • Examples: Functional testing, Performance testing, Usability testing.
Test phase Temporal parts of the testing process that follow each other sequentially, with or without overlapping.
Unit testing A unit (a basic building block) is the smallest testable part of an application.
Integration testing Individual software modules are combined and tested as a group. Communication between units.
System testing Testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements.
Acceptance testing Final stage of validation where the customer or end-user is usually involved.
Alpha testing Testing that is usually done before release to the general public.
Beta testing Beta versions are released to a limited audience outside of the programming team. Testing is performed in the user's environment.
Lecture 3: White-box testing
Control-flow testing Control flow refers to the order in which the individual statements, instructions or function calls of an imperative or functional program are executed or evaluated.
Data-flow testing
  • Data-flow testing looks at the life-cycle of a particular piece of data (variable) in an application.
  • By looking for patterns of data usage, risky areas of code can be found and more test cases can be applied.
Statement coverage
  • Percentage of executable statements (nodes in a flow graph) exercised by a test suite.
  • Statement coverage is 100% if each program statement is executed at least once by some test case.
Decision (branch) coverage Percentage of decision outcomes (edges in a flow graph) exercised by a test suite.
Condition coverage
  • Testing if each boolean sub-expression has evaluated both to true and false.
  • Example: (a<0 || b>0) => (true, false) and (false, true)
Decision/condition (multi-condition) coverage
  • Execution of all sub-predicate boolean value combinations.
  • Example: (a<0 || b>0) => (true, true), (true, false), (false, true) and (false, false)
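The criteria above can be illustrated by evaluating the example predicate for concrete inputs. A Python sketch (with `||` written as `or`; the test inputs are illustrative):

```python
def predicate(a, b):
    # the decision from the example: (a < 0 || b > 0)
    return a < 0 or b > 0

# condition coverage: each sub-condition evaluates to both True and False
condition_tests = [(-1, 0), (1, 1)]   # -> (True, False) and (False, True)
sub_values = {(a < 0, b > 0) for a, b in condition_tests}
assert sub_values == {(True, False), (False, True)}

# note: both tests make the whole decision True, so this test set does
# NOT achieve decision (branch) coverage -- the False outcome is missed
assert {predicate(a, b) for a, b in condition_tests} == {True}

# multi-condition coverage requires all four sub-value combinations
multi_tests = [(-1, 1), (-1, 0), (1, 1), (1, 0)]
assert {(a < 0, b > 0) for a, b in multi_tests} == \
       {(True, True), (True, False), (False, True), (False, False)}
```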
Lecture 4: Black-box testing techniques
Equivalence partitioning Dividing test inputs into partitions, where each partition contains similar inputs. Testing only one input of each partition leads to a lower number of test cases.
Boundary value analysis Values are chosen that lie along boundaries of the input domain.
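The two techniques are often combined. A sketch assuming a hypothetical function that accepts ages from 18 to 65 inclusive:

```python
def accepts_age(age):
    # hypothetical system under test: valid ages are 18..65 inclusive
    return 18 <= age <= 65

# equivalence partitioning: one representative input per partition
partitions = {"below": 10, "valid": 40, "above": 90}
assert not accepts_age(partitions["below"])
assert accepts_age(partitions["valid"])
assert not accepts_age(partitions["above"])

# boundary value analysis: values on and next to each boundary
boundaries = [17, 18, 19, 64, 65, 66]
expected = [False, True, True, True, True, False]
assert [accepts_age(a) for a in boundaries] == expected
```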
Cause and effect graphing
  • A directed graph that maps a set of causes to a set of effects.
  • Allows the tester to combine conditions.
Classification tree testing Can be used to visualise equivalence classes (in a hierarchical way) and to select test cases.
Function testing Testing one function (feature) at a time.
State transition testing State transition testing focuses on the testing of transitions from one state (e.g., open, closed) of an object (e.g., an account) to another state.
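The account example can be sketched as a transition table; the exact states and events below are assumptions for illustration. A test case is then a path through the states:

```python
# transition table: (state, event) -> next state
transitions = {
    ("closed", "open"): "open",
    ("open", "deposit"): "open",
    ("open", "close"): "closed",
}

def fire(state, event):
    # returns the next state, or rejects an invalid transition
    if (state, event) not in transitions:
        raise ValueError(f"invalid event {event!r} in state {state!r}")
    return transitions[(state, event)]

# walk one path through the state machine and check the final state
state = "closed"
for event in ["open", "deposit", "close"]:
    state = fire(state, event)
assert state == "closed"
```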
Specification testing
  • Testing based on some sort of specification document.
  • If there is ambiguity in the document, it is more probable that there is a defect.
Failure mode analysis Error guessing based on experience and previous or typical defects.
Scenario testing
  • Testing scenarios are complicated and realistic stories of real usage of the system.
  • The goal is to focus on business needs and realistic situations.
Soap-opera testing Extreme scenario testing. Substitute all input values with extreme values.
Combination testing
  • An algorithm for selecting combinations.
  • Makes it possible to test interactions.
  • Too many combinations. Need for systematic techniques.
Pair-wise testing Every possible pair of interesting values of any two parameters is covered. Reduces combinations radically.
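The reduction can be checked mechanically. For three boolean parameters, exhaustive testing needs 2^3 = 8 cases, while the four rows below (a standard orthogonal-array-style selection, not from the lecture) already cover every value pair for every pair of parameters:

```python
from itertools import combinations

# four test rows for parameters (A, B, C) chosen to cover all pairs
suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

# verify pair-wise coverage: each pair of parameters sees all 4 value pairs
for i, j in combinations(range(3), 2):
    seen = {(row[i], row[j]) for row in suite}
    assert seen == {(0, 0), (0, 1), (1, 0), (1, 1)}

# half the size of the exhaustive suite of 2**3 = 8 rows
assert len(suite) == 4
```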
Decision table testing
  • For modelling complicated business rules.
  • Different combinations lead to different actions.
    • Combinations are visualized. This can reveal unspecified issues.
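A decision table can be written down directly as a mapping. The business rule below (discounts by membership and order size) is a made-up illustration; enumerating all condition combinations makes unspecified rows stand out immediately:

```python
from itertools import product

# decision table rows: (member, large_order) -> action
decision_table = {
    (True, True): "20% discount",
    (True, False): "10% discount",
    (False, True): "5% discount",
    (False, False): "no discount",
}

# the "all possible combinations" approach: every condition combination
# must map to a specified action, otherwise the rule is incomplete
for member, large in product([True, False], repeat=2):
    assert (member, large) in decision_table
```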
Testing heuristics A way of making an educated guess. Can be used to generate ideas for good tests.
Lecture 5: Test Planning
Test Plan
  • Prescribes the scope, approach, resources and schedule of testing activities.
  • Includes features to test and tasks to perform.
Test Policy A company or product level test plan.
Test Strategy
  • Describes how testing is performed and how quality goals are achieved.
  • Includes test levels, test phases, test strategies and test completion criteria.
  • Plans how defect reporting and communication for testing is done.
Detailed Test Plan A more detailed test plan, typically drilling down from the overall test plan to a specific test level or phase.
Item pass/fail criteria
  • Completion criteria for the test plan.
  • Examples:
    • All test cases completed.
    • Code coverage tool indicates all code covered.
Test completion criteria (stop-test criteria) Criteria of when to stop testing activities.
Release criteria Criteria of when the product is in such a condition that it can be released. Usually not decided by testers.
Lecture 6: Test Case Design and Test Reporting
Test Case
  • A set of inputs, execution conditions and expected results.
  • In order to exercise a program path or to verify compliance with a requirement.
Test Summary Report
  • A report defined in the IEEE 829 test documentation standard that includes:
    1. Identifier.
    2. Summary.
    3. Variances.
      • Variances to the planned activities.
    4. Comprehensiveness assessment
      • Against the criteria specified in the plan.
      • Reasoning for untested features.
    5. Summary of results
      • Success of testing.
      • All resolved and unresolved defects.
    6. Evaluation
    7. Summary of activities
      • Testing activities and events.
    8. Approvals
      • Who approves the report.
Testing Dashboard A dashboard with coverage and quality for each functional area.
Risk-based testing
  1. Risk based test design techniques.
    • Analyse product risks and use that information to design good test cases.
  2. Risk based test management.
    • Estimate risks for each feature.
    • Prioritize testing according to these estimates.
Lecture 7: Exploratory testing
Exploratory Testing
  • Testing without predefined test cases.
  • Manual testing based on:
    • Knowledge of the tester.
    • Skills of the tester.
  • Exploring the software.
  • Allows tester to adjust plans.
  • Minimize time spent on (pre)documentation.
Session-Based Test Management
  • Enables planning and tracking exploratory testing.
  • Includes a charter that answers:
    • What? Why? How? What problems?
    • And possibly: Tools? Risks? Documents?
    • Allows reviewable results.
  • Done in sessions of ~90 minutes
    • Gets the testing done and allows flexible reporting.
  • Debriefing
Lecture 9: Reviews and Inspections
Formal Inspection
  • A meeting with defined roles and trained participants:
    • Author, reader, moderator, recorder and inspectors.
  • Complete focus on revealing defects.
  • No discussion on solutions or alternatives.
  • Formal entry and exit criteria.
  • Carefully measured and tracked.
Audit An independent evaluation to check conformance of software products and processes. (ISO 9001, CMM)
Scenario based reading Limits the attention of the reader to a certain area.
Joint Application Design (JAD) A workshop where knowledge workers and IT specialists meet, sometimes for several days, to define and review the business requirements for the system.
Individual checking Searching for defects alone.
Additional definitions found from old exams
Design by Contract Prescribes that software designers should define formal, precise and verifiable interface specifications for software components.

Lecture 1 Questions: Introduction to Software QA


Describe Garvin's five viewpoints to product quality and explain how these viewpoints apply to software quality.

  • Transcendent approach
    • Quality cannot be measured but can be learned to recognize.
    • Quality is therefore difficult to define because it is recognized only through experience. Similar to beauty, for example.
  • User-based approach
    • Focus on consumer preferences.
    • Products that satisfy consumer requirements are of highest quality.
  • Manufacturing based approach
    • Emphasizes the supply side; mainly concerned with “conforming to requirements”.
    • "99.5% are non-faulty".
  • Product-based approach
    • Quality as a measurable attribute.
    • More of the measured attribute => better quality.
  • Value-based approach
    • Quality is defined in terms of cost and prices.
    • Quality products provide performance at an affordable price.

Compare the ISO 9126 and McCall quality models for software quality.

  • The McCall Quality Model
    • Product revision: relates to the source code and development aspect of the system.
      • maintainability, flexibility, testability
    • Product transition: relates to reusing or re-purposing some or all of the system’s components.
      • portability, reusability, interoperability
    • Product operations: relates to qualities of the system while it is operational
      • correctness, reliability, efficiency, integrity, usability
  • ISO 9126 quality model
    • Functionality
    • Reliability
    • Usability
    • Efficiency
    • Maintainability
    • Portability

Describe different reasons that cause defects or lead to low quality software.

  • Software is written by people.
  • People are under pressure because of strict deadlines.
    • Reduced time to check quality.
    • Software will be incomplete.
  • Software is very complex.

Explain what the statement "software is not linear" means in the context of software defects and their consequences. Give two examples of this.

  • A small change in input or one-line error in the code may have a very large effect.
    • Example: Intel Pentium Floating Point Division Bug, several hospital systems
  • A change in code can also result in a minor inconvenience with no visible impact at all.

What are the tasks of software quality assurance?

  • Create good and applicable methods and practices for achieving good enough quality level.
  • Ensure that the selected methods and practices are followed.
  • Support the delivery of good enough quality deliverables.
  • Provide visibility into the achieved level of quality.

Describe and compare different definitions of software testing that have been presented. How do these definitions differ in terms of the objectives of testing?

  • Testing is the execution of programs with the intent of finding defects.
  • Testing is the process of exercising a software component using a selected set of test cases, with the intent of revealing defects and evaluating quality.
  • Testing is a process of planning, preparation, execution and analysing, aimed at establishing the characteristics of an information system and demonstrating the difference between actual and required status.

Describe the main challenges of software testing and reasons why we cannot expect any 'silver bullet' solutions to these challenges in the near future.

  • It is impossible to test a program completely. Too many inputs and outputs.
  • Requirements are never final.
  • Testing is seen as a last phase in the development cycle. This phase is often outsourced.

Testing is based on risks. Explain different ways of prioritizing testing. How is prioritizing applied in testing, and how can it be used to manage risks?

  • The higher the risk, the more testing is needed.
  • Focus the testing effort:
    • What to test first?
    • What to test the most?
    • How much to test each feature?
    • What not to test?
  • Possible ranking criteria:
    • Test where a failure is most severe, most likely or most visible.
    • Customer prioritizes requirements according to what is most critical to customer's business.
    • Test areas where there have been problems in the past or where things change the most.

Lecture 2 Questions: Software testers and testing terminology


Describe typical characteristics and skills of a good software tester. Why are professional testers needed? How can testers help developers to achieve better quality?

  • Skills:
    • Destructive attitude and mindset.
    • Excellent communication skills.
    • Ability to manage many details.
    • Knowledge of different testing techniques and strategies.
    • Strong domain expertise.
  • Why professional testers:
    • Developers can't find their own defects.
    • Skills, tools and experience.
    • Objective viewpoint.
  • Testers can help developers by giving constructive feedback.

Describe the V-model of testing and explain how testing differs between test levels.

  • Different levels:
    • Requirements <=> Acceptance testing
    • Functional specification <=> System testing
    • Architecture design <=> Integration testing
    • Module design <=> Unit testing
    • Coding
  • It is good to use each development specification as a basis for the testing.
  • It is easier to find faults in small units than in large ones.
  • Test small units first before putting them together to form larger ones.

Describe the purpose and main difference of Performance testing, Stress testing and Recovery testing.

  • Performance testing
    • Testing of requirements that concern memory use, response time, throughput and delays.
  • Stress testing
    • A form of testing that is used to determine the stability of a given system.
    • It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
  • Recovery testing
    • In software testing, recovery testing is the activity of testing how well the software is able to recover from crashes, hardware failures and other similar problems.
    • Example: while the application is running, restart the computer, and afterwards check the integrity of the application's data.

Lecture 3 Questions: White box testing


Describe branch coverage and condition coverage testing. What can you state about the relative strength of the two criteria? Is one of them stronger than the other?

  • Decision coverage
    • Decision coverage is 100% if each control-flow branch is taken at least once by some test case.
  • Condition coverage
    • Testing if each boolean sub-expression has evaluated both to true and false.
    • Example: (a<0 || b>0) => (true, false) and (false, true)
  • Neither criterion strictly subsumes the other: condition coverage can be satisfied while the whole decision evaluates to true in every test (as in the example above), so decision coverage is not guaranteed; conversely, decision coverage does not require each sub-expression to take both truth values.

How can coverage testing tools be used in testing? What kinds of conclusions can you draw based on the results of statement coverage analysis?

  • Without a good white-box testing tool, analyzing and controlling coverage testing is hopeless.
  • A tool can tell us:
    • If we are missing some tests.
    • If we need better tests (weak spots in tests).
    • Risky areas that are hard to cover. These areas can be covered by reviews, or other methods.
    • Unreachable code, old code, debug code...
  • Tools can be used to guide testing.
  • Tools highlight non-covered areas of code.

Give examples of defect types that structural (coverage) testing techniques are likely to reveal and five examples of defect types that cannot necessarily be revealed with structural techniques. Explain why.

  • Structural testing is good for:
    • Revealing errors and mistakes made by developers.
  • Defect types that cannot be revealed with structural techniques:
    • There are missing features or missing code.
    • Timing and concurrency issues.
    • Different states of the software.
      • Same path reveals different defects in different states.
    • Variations of environment and external conditions.
    • Variations of data.
    • Does not say anything about qualities: performance, usability...

Describe the basic idea of mutation testing. What are the strengths and weaknesses of mutation testing?

  • Involves modifying a program's source code in small ways.
  • These so-called mutants are based on well-defined mutation operators that mimic typical programming mistakes (such as using the wrong operator or variable name).
  • The purpose is to help the tester develop effective tests or locate weaknesses in the test data used for the program or in sections of the code that are seldom or never accessed during execution.
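The idea can be sketched by applying one mutation operator by hand; the function, the operator choice (`>=` changed to `>`) and the test suites are illustrative assumptions:

```python
def is_adult(age):
    # original program: adults are 18 and over
    return age >= 18

def is_adult_mutant(age):
    # mutant produced by the operator ">=" -> ">"
    return age > 18

def weak_suite_passes(fn):
    # a weak test suite that avoids the boundary value
    return fn(30) is True and fn(5) is False

# the weak suite passes on both versions: the mutant SURVIVES,
# revealing that a boundary test is missing
assert weak_suite_passes(is_adult) and weak_suite_passes(is_adult_mutant)

def strong_suite_passes(fn):
    # adding the boundary test kills the mutant
    return fn(18) is True

assert strong_suite_passes(is_adult)
assert not strong_suite_passes(is_adult_mutant)
```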

Lecture 4 Questions: Black-box testing techniques


Compare function testing and scenario testing techniques. For what kinds of purposes are these two techniques good? What are the shortcomings of the two techniques? How could function testing and scenario testing be used to complement each other?

  • Function testing is about testing one functionality of the software at a time.
    • Reveals errors that should be addressed early.
    • No interaction between functions is tested.
    • Does not say if the software solves the user's problem.
  • Scenario testing is about testing complicated and realistic stories of real usage of the system.
    • It is a story that is motivating, credible, complex, and easy to evaluate.
    • Easy to connect testing to documented requirements.
    • Scenarios are harder to automate(?).
  • They complement each other in the sense that one focuses on real-life user requirements and the other on functional requirements.

List and describe briefly different ways of creating good scenarios for testing?

  • Read "An Introduction to Scenario Testing" for 12 best ways.
  • Good testing scenarios are:
    • Based on a story about how the program is used.
    • Motivating: Stakeholders get more involved.
    • Credible: Can happen in the real world.
    • Complex: Complex use of the program data and environment.
    • Results are easy to validate.

Describe at least five testing heuristics and explain why they are good rules of thumb and how they help testing.

  • Test at the boundaries.
    • Reveals common defects.
  • Test with realistic data and scenarios.
    • Quality of software becomes better when basic operations work.
  • Avoid redundant tests.
    • Makes test cases easier to execute, read and understand.
  • Test configurations that are different from the developer's.
    • Developers rarely bother to change their configurations.
  • Run tests that are annoying to set up.
    • Developers tend to avoid tests that are laborious to set up.

What is Soap Opera Testing? Why is soap opera testing not the same as performing equivalence partitioning and boundary value analysis using extreme values?

  • Soap Opera testing can be considered as scenario testing with extreme values.
    • Testing scenarios are complicated and realistic stories of real usage of the system.
    • Stories based on the most extreme examples that could happen in practice.
    • The goal in scenario testing is to focus on business needs and realistic situations.
  • The main difference is that a soap opera test case tests the whole system from a practical point of view while EP and BVA test a specific function of the system with extreme values.

Describe different coverage criteria for combination testing strategies. How do these criteria differ in their ability to reveal defects and cover functionality systematically?

  • 1-wise coverage
    • Every input parameter is covered with some test case.
  • Pair-wise coverage
    • All possible combinations of each pair of input parameters.
  • t-wise coverage
    • All possible combinations of any t input parameters.
  • N-wise coverage
    • Test all combinations of all parameters.

Describe the basic idea of decision table testing and the two different approaches to applying it.

  • Basic idea:
    • Model complicated business rules.
    • Different combinations of conditions lead to different expected outcomes (actions).
      • Sometimes it is difficult to follow which conditions correspond to which actions.
    • Easy to observe that all possible conditions are accounted for.
  • Approaches:
    • Describe only interesting rules
    • Describe all possible combinations

Lecture 5 Questions: Test Planning


List and describe at least five different functions that a test plan can be used for.

  • Support quality assessment that enables wise and timely product decisions.
  • Support preparations, staffing, responsibilities, task planning and scheduling.
  • Support evaluation of the test project and test strategy.
  • Justify the test approach.
  • Benefits and limitations of the test approach.
  • For coordination.
  • For risk management.
  • Specify deliverables.
  • Record historical information for process improvement.

Describe six essential topics of test planning. What decisions have to be made concerning each of these topics?

  1. Why: Overall test objectives.
    • Quality goals.
  2. What?: What will and won't be tested.
    • Prioritize. Provide reasoning.
    • Analyze the product to make reasonable decisions.
  3. How?: Test strategy.
    • How testing is performed:
    • Test techniques, test levels, test phases.
    • Tools and automation.
    • Processes:
      • How test cases are created and documented.
      • How defect reporting is done.
  4. Who?: Resource requirements.
    • People.
      • Plan responsibilities.
    • Equipment.
    • Office space.
    • Tools and documents.
    • Outsourcing.
  5. When: Test tasks and schedule.
    • Connect testing with overall project schedule.
  6. What if?: Risks and issues.
    • Risk management of the test project, not the product.

How does estimating testing effort differ from estimating other development efforts? Why might planning relative testing schedules be a good idea?

  • Differences:
    • Testing is not an independent activity; it depends heavily on how development performs.
    • Critical faults can prevent further testing.
    • It is hard to predict how long it will take to find a defect.
  • Relative schedules are good because:
    • It is hard to know when testable items are done by development.
    • Some of the other phases might take more time than predicted.

How is test prioritization different from implementation or requirements prioritization? Why can we not skip all low-priority tests when time is running out?

  • Prioritization should be thought of as distribution of efforts rather than execution order.
  • Do not skip tests with low priority.
    • Risk of missing critical problems.
    • Prioritization might be wrong.

What defines a good test plan? Present some test plan heuristics. (6 well-explained heuristics will produce six points in the exam.)

  • Rapid feedback
    • Will get the bugs fixed faster.
    • Whenever possible testers and developers should work physically near each other.
  • Test plans are not generic.
    • Something that works for a project might not work for another.
    • A test plan should highlight the non-routine, project-specific aspects of the test strategy and test project.
  • Important problems fast.
    • Testing should be optimized to find important problems fast, rather than attempting to find all problems with equal urgency.
    • The later in the project that a problem is found, the greater the risk that it will not be safely fixed in time to ship.
  • Review documentation.
    • Enables more communication and the reasoning behind the document is understood better.
  • Maximize diversity.
    • Test strategy should address test platform configuration, how the product will be operated and how the product will be observed.
    • No single test technique can reveal all important problems in a linear fashion.
  • Promote testability.
    • The test project should consult with development to help them build a more testable product.

Lecture 6 Questions: Test case design and Test Reporting


Describe the difference between designing test ideas or conditions and designing test cases. How does this difference affect test documentation?

  • Test idea is a brief statement of something that should be tested.
    • Example: For a find function, "test that multiple occurrences of the search term in the document are correctly indicated".
  • Test case is a set of inputs, execution conditions and expected results.
    • Example: For a find function.
      • Input: Search term "tester" to a document containing 2 occurrences of the search term.
      • Conditions: Case insensitive search.
      • Expected results: The first occurrence is selected and both occurrences are highlighted with a green background.

Why is it important to plan testing in the early phases of software development project? Why could it be a good idea to delay the detailed test design and test case design to later phases, near the actual implementation?

  • Early test design is good because:
    • It finds faults quickly and early.
      • Defects are cheaper to fix early.
    • Faults will be prevented, not built in.
  • Not too early test case design is good because:
    • Test cases can then be designed in implementation order.
      • Test case design can be started from the most completed and best understood features.
    • Avoid anticipatory test design.

Give reasons why test case descriptions are needed and describe different qualities of good tests or test cases?

  • Test case descriptions are needed because:
    • They make test cases more repeatable.
    • Easier to track what features and requirements are tested.
    • Gives a proof of testing: evaluating the level of confidence.
  • Qualities of good test cases:
    • Power.
      • The test will reveal the problems.
    • Validity.
      • The problems are valid.
    • Value and credibility.
      • Knowledge of the problems brings value.
    • Coverage.
      • The test covers something that is not already covered.
    • Performable.
      • The test case can be performed as it is designed.
    • Maintainable.
      • Easy to make changes.
    • Repeatable.
      • Easy and cheap to run it.
    • Cost.
      • Does not take too much time or effort.
    • Easy evaluation.
      • It is easy to say if there is a defect or not.

What issues affect the needed level of detail in test case descriptions? In what kinds of situations are very detailed test case descriptions needed? What reasons can be used to motivate using less detailed, high level test case descriptions?

  • Very detailed test cases:
    • For inexperienced testers.
      • They need more guidance.
    • For specific testing techniques:
      • All pairs of certain input conditions need more details.
    • Motivations:
      • Repeatability.
      • Traceability.
      • Tracking progress.
      • Tracking coverage.
  • Less detailed test cases:
    • For experienced testers.
    • Motivations:
      • Maintainability.
      • Less cost of documentation.
      • More creative testing.
      • More satisfaction for testers.

How can defect reports be made more useful and understandable? What kinds of aspects should you pay attention to when writing defect reports?

  • Useful reports are reports that get the bugs fixed.
  • Minimal.
    • Just the facts.
  • Singular.
    • One report per bug.
  • Obvious and general.
    • Easy steps and show that the bug can be seen easily.
  • Reproducible.
    • A defect won't get fixed if it can't be reproduced.
  • Emphasize severity.
    • Show how severe the consequences are.
  • Be neutral when writing.

What is essential information in test reporting for management? How does management utilize the information that testing provides? Why is a list of passed and failed test cases with defect counts not sufficient for reporting test results?

  • Essential information for management:
    • Evaluation of quality of the software development project.
    • Problems and decisions that require management action.
    • Status of testing versus planned.

Lecture 7 Questions: Exploratory testing


What are the five differences that distinguish exploratory testing from test-case based (or scripted) testing?

  • Scripted testing:
    • Tests are first designed and recorded, and then executed.
    • Execution can be done by a different person.
    • Execution can be done later.
  • Exploratory testing:
    • Tests are designed and executed at the same time.
    • Tests are often not recorded.
    • Can be guided by previous testing results.
    • Focus is on finding defects by exploration.
    • Enables simultaneous learning of the system.
    • No planning, no tracking, no recording and no documentation.
    • Depends on the tester's skill, knowledge and experience.

What benefits can be achieved using the exploratory testing (ET) approach? What are the most important challenges of using ET? In what kinds of situations would ET be a good approach?

  • Benefits:
    • Avoids the high cost of writing and maintaining test cases.
    • Testing from the user's viewpoint.
    • ET goes deeper into the tested feature.
    • Effective way of finding defects.
    • Gives good overall view of quality.
    • Enables testing of the look and feel of the system.
  • Challenges:
    • Coverage
    • Planning and selecting what to test.
    • Reliance on expertise and skills.
    • Repeatability.
  • Good for situations:
    • The features can be used in many different combinations.
    • Agile situations, where things change fast.

Describe the main idea of Session-Based Test Management (SBTM). How are the needs for test planning, tracking, and reporting handled in SBTM?

  • Enables planning and tracking exploratory testing.
  • Includes a charter that answers:
    • What? Why? How? What problems?
    • And possibly: Tools? Risks? Documents?
    • Allows reviewable results.
  • Done in sessions of ~90 minutes.
    • Gets the testing done and allows flexible reporting.
  • Debriefing after each session, where the tester and the test lead go through the results.
  • Test planning, tracking, and reporting are done with the help of charters and session reports.

Give reasons that support the hypothesis that exploratory testing could be more efficient than test-case-based testing in revealing defects.

  • Test-case-based testing produces more false defect reports.
  • In ET, no effort is spent on designing and documenting test cases, so more time is available for actual testing.
  • The tester can immediately use what is learned from previous test results to guide further testing.

Lecture 9 Questions: Software Reviews and Inspections


Describe and compare reviews and dynamic testing (applicability, benefits, shortcomings, defect types)

  • Reviews:
    • A meeting or a process in which an artifact is presented to peers or the customer.
    • Benefits:
      • Identify defects and improve quality.
      • Can be done as soon as an artifact is ready (even for incomplete components).
      • Distribution of knowledge.
      • Increased awareness of quality issues.
      • The cost of fixing defects decreases radically when they are found early.
    • Shortcomings:
      • Can only examine static documents.
    • Defect types:
      • Quality attributes.
      • Reusability.
      • Security.

Present and describe briefly the four dimensions of inspections.

  • Process:
    • Planning.
    • Overview.
    • Defect detection.
    • Defect correction.
    • Follow-up.
  • Roles:
    • Leader.
    • Moderator.
    • Author.
    • Inspectors.
    • Reader.
    • Recorder.
    • Nobody from management.
  • Reading techniques:
    • Ad-hoc based.
    • Checklist based.
    • Abstraction based.
    • Scenario based.
  • Products
    • Requirements.
    • Design.
    • Code.
    • Test cases.

Explain the different types of reviews and compare their similarities and differences.

  • Team reviews:
    • Less formal with lots of discussion and knowledge distribution.
  • Inspection:
    • Very formal meetings that enable improvement.
  • Walkthrough:
    • The author presents the product to reviewers who have not prepared in advance.
  • Pair review & Pass-around:
    • Individual check of a product.
    • Code review with a pair.
  • Audits:
    • Evaluation by another independent company.
  • Management review:
    • Ensures project progress (e.g., iteration demos in Scrum).

Describe the costs, problems, and alternatives of reviews.

  • Cost:
    • 5-15% of development effort.
    • Planning, preparation, meeting, ...
  • Problems:
    • No process understanding.
    • Wrong people.
    • No preparation.
    • Focus on problem solving rather than defect detection.
  • Alternatives:
    • Pair programming
    • Joint Application Design

Lecture 10 Article Questions: Static Code Analysis and Code Reviews


Describe the taxonomy of code review defects, covering both functional and evolvability defects, and describe the types of defects actually found in code reviews. (Article: What Types of Defects Are Really Discovered in Code Reviews?)

  • Evolvability defects:
    • Documentation.
      • Documentation is information in the source code that communicates the intent of the code to humans (e.g., commenting and naming of software elements, such as variables, functions, and classes).
    • Visual representation.
      • Visual representation refers to defects hindering program readability for the human eye (e.g., indentation).
    • Structure.
      • Structure indicates the source code composition eventually parsed by the compiler into a syntax tree.
  • Functional defects:
    • Resource.
      • Resource defects refer to mistakes made with data, variables, or other resource initialization, manipulation, and release.
    • Check.
      • Check defects are validation mistakes or mistakes made when detecting an invalid value.
    • Interface.
      • Interface defects are mistakes made when interacting with other parts of the software, such as an existing code library, a hardware device, a database, or an operating system.
    • Logic.
      • The group logic contains defects made with comparison operations, control flow, and computations and other types of logical mistakes.
    • Timing.
      • The timing category contains defects that are possible only in multithread applications where concurrently executing threads or processes use shared resources.
    • Support.
      • Support defects relate to support systems and libraries or their configurations.
    • Larger defects.
      • Larger defects, unlike those presented above, cannot be pinpointed to a single, small set of code lines.
      • Larger defects typically refer to situations in which functionality is missing or implemented incorrectly and such defects often require additional code or larger modifications to the existing solution.

What is static code analysis, and what can be said about the pros and cons of static code analysis for defect detection? (Articles: Predicting Software Defect Density: A Case Study on Automated Static Code Analysis; Using Static Analysis to Find Bugs)

  • Static code analysis is the analysis of computer software that is performed without actually executing programs built from that software (analysis performed on executing programs is known as dynamic analysis).
  • Pros:
    • Finds defects early, before the code is ever run, and can cover the whole codebase systematically.
    • Finds defects that lead to security vulnerabilities, such as buffer overflows, format string vulnerabilities, SQL injection, and cross-site scripting.
    • Detects common bug patterns, such as software invoking a method but ignoring its return value.
  • Cons:
    • Produces false positives that must be reviewed manually.
    • Cannot detect defects that depend on runtime behavior or on whether the implemented functionality matches the requirements.
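The "ignored return value" pattern mentioned above can be detected purely by inspecting the code's structure. The following is a minimal, hypothetical sketch of such a checker in Python (real tools such as FindBugs cover far more patterns); the function name and the set of checked calls are illustrative, not from the articles:

```python
import ast

# Calls whose return value matters; discarding it is a likely bug
# (e.g., str.strip() returns a new string and leaves the original intact).
CHECKED_CALLS = {"strip", "replace", "sorted"}

def find_ignored_returns(source: str) -> list[int]:
    """Return line numbers where a checked call's result is discarded."""
    tree = ast.parse(source)  # static analysis: the code is never executed
    findings = []
    for node in ast.walk(tree):
        # An expression statement whose value is a bare call means
        # the return value is thrown away.
        if isinstance(node, ast.Expr) and isinstance(node.value, ast.Call):
            func = node.value.func
            name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", "")
            if name in CHECKED_CALLS:
                findings.append(node.lineno)
    return findings

# Line 2 calls strip() but ignores the result, so s stays unchanged.
buggy = "s = ' hi '\ns.strip()\nprint(s)\n"
print(find_ignored_returns(buggy))  # → [2]
```

Note that the checker flags the defect without running the program, which is exactly the strength of static analysis; the price is that a flagged line may be a false positive the developer must review.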

Describe the Cleanroom process model. What are the benefits of the Cleanroom model? How has the model been criticized?

  • The philosophy behind Cleanroom software engineering is to avoid dependence on costly defect-removal processes by writing code increments right the first time and verifying their correctness before testing. Its process model incorporates the statistical quality certification of code increments as they accumulate into a system.
  • It can improve both the productivity of developers who use it and the quality of the software they produce.
  • The focus of the Cleanroom process is on defect prevention, rather than defect removal.
  • Criticism: the model has been considered impractical and too rigid, for example because developers are not allowed to unit test their own code and because statistical usage-based testing may miss rarely used but critical functions.

Lecture 11 Article Questions: Test Automation


Describe data-driven and keyword-driven test automation. How do they differ from regular test automation techniques, and what are their benefits and shortcomings?

  • Data-driven:
    • Test scripts are executed and verified based on data values stored in one or more central data sources, such as CSV files, Excel files, or databases.
    • Variables are used for both input values and output verification values; navigation through the program, reading of the data sources, and logging of test status are all coded in the test script.
    • The logic executed in the script thus depends on the data values, and new test cases can be added by adding data rows without changing the script.
  • Keyword-driven:
    • Test cases are written as sequences of keywords (action words) with arguments, at a higher level of abstraction than the automation code that implements the keywords.
    • The advantages are the reusability, and therefore ease of maintenance, of tests that have been created at a high level of abstraction.
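The two approaches can be contrasted with a minimal sketch. Everything here is illustrative: the `add` function stands in for the system under test, and the keyword names and driver are hypothetical, not from any particular tool:

```python
# The "system under test": a trivial function, purely illustrative.
def add(a, b):
    return a + b

# --- Data-driven: one generic script, many data rows ---------------
# Inputs and expected outputs live in a data source (inline here; in
# practice a CSV/Excel file or database), not in the script itself.
test_data = [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
]

def run_data_driven():
    for a, b, expected in test_data:
        assert add(a, b) == expected, f"add({a}, {b}) != {expected}"

# --- Keyword-driven: test cases as sequences of action words -------
# Each step is (keyword, *arguments); a small driver maps keywords to
# the implementation code, so tests read at a higher abstraction level.
def run_keyword_test(steps):
    result = None
    for keyword, *args in steps:
        if keyword == "Add Numbers":
            result = add(*args)
        elif keyword == "Result Should Be":
            assert result == args[0], f"{result} != {args[0]}"
        else:
            raise ValueError(f"unknown keyword: {keyword}")

run_data_driven()
run_keyword_test([("Add Numbers", 2, 2), ("Result Should Be", 4)])
print("all tests passed")  # → all tests passed
```

Note how adding a data row extends the data-driven suite without touching code, while the keyword-driven test is readable to non-programmers but requires the driver and keyword implementations to be maintained separately.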

Software lifecycle V-model and testing tools: what kinds of tools are available for the different phases, and how do they improve software quality?


What problems are associated with automating testing? What kinds of false beliefs and assumptions do people make about test automation?

  • Assumptions:
    • Computers are faster, cheaper, and more reliable than humans; therefore, automate.
    • Testing means repeating the same actions over and over.
    • An automated test is faster, because it needs no human intervention.
    • We can quantify the costs and benefits of manual vs. automated testing.
    • Automation will lead to "significant labor cost savings", which ignores:
      • The cost of developing the automation.
      • The cost of operating the automated tests.
      • The cost of maintaining the automation as the product changes.
      • The cost of any other new tasks necessitated by the automation.
  • Problems:
    • Each time the suite is executed someone must carefully pore over the results to tell the false negatives from real bugs.
    • Code changes might mean changes to automated test cases.

Lecture 12 Article Questions: Agile Testing


What kinds of challenges does the agile development approach place on software testing? Describe contradictions between the principles of agile software development and traditional testing and quality assurance.

Agile Principle: Challenge:
Frequent deliveries of valuable software
  • Short time for testing in each cycle
  • Testing cannot exceed the deadline
Responding to change even late in the development
  • Testing cannot be based on completed specifications
Relying on face-to-face communication
  • Getting developers and business people actively involved in testing
Working software is the primary measure of progress
  • Quality information is required early and frequently throughout development
Simplicity is essential
  • Testing practices easily get dropped for simplicity's sake


Testing principle: Contradicting practices in agile methods:
Independency of testing
  • Developers write tests for their own code
  • The tester is one of the developers or a rotating role in the development team
Testing requires specific skills
  • Developers do the testing as part of the development
  • The customer has a very important and collaborative role and a lot of responsibility for the resulting quality
Oracle problem
  • Relying on automated tests to reveal defects
Destructive attitude
  • Developers concentrate on constructive QA practices, i.e., building quality into the product and showing that features work
Evaluating achieved quality
  • Confidence in quality through tracking conformance to a set of good practices

Read the experiences of David Talby et al. presented in their article "Agile Software Testing in a Large-Scale Project". Describe how they tackled the following areas in a large-scale agile development project: Test design and execution, Working with professional testers, Activity planning, and Defect management.

  • Test design and execution
    • Everyone tests.
      • Increased test awareness.
      • Testability increased because developers knew they had to write tests for their code.
    • Product size = test size.
      • Brings a strong message to the team: only features that have full regression testing at each iteration are counted as delivered product size.
    • Untested work = no work.
  • Working with professional testers
    • Easing the professional tester's bottleneck: developers simply code less and test more.
    • Encourage interaction over isolation.
      • Traditionally tester is seen as quite independent.
      • Integrate the tester into the team; otherwise they won't find enough bugs.
  • Activity planning
    • Planning game
      • The customer prioritizes the stories that the system should implement in the next iteration.
      • The team breaks the stories down into development tasks and estimates the effort for these tasks.
    • Integrate feature testing as coding.
      • No task is considered complete before tests are written and running.
    • Consider regression testing as global overhead that is done at the end of the iteration.
    • Allocate bug-fix time globally.
      • Planning defect resolution as an individual task results in high overestimates.
  • Defect management
    • Use a team-centered defect management approach.
      • Everybody knows each other's knowledge areas because of daily standup meetings.
    • Fix defects as soon as possible.
    • Fewer false defect reports due to everybody working in the same room.

Additional questions from old exams


Describe the relationship of equivalence partitioning (EP), boundary value analysis (BVA), and cause-and-effect graphing (CEG). What are the differences between the three techniques? Can the techniques be used together to complement each other, and why/why not?


Describe the basic idea of pair-wise testing. What kinds of testing problems is pair-wise testing good for, and why does it work? Describe also what shortcomings or problems you should pay attention to when applying pair-wise testing.


Describe the challenges that agile principles place on traditional testing. How do the short and tight development cycles of agile development affect testing?
