“If you can’t explain it simply, you don’t understand it well enough.” – Albert Einstein
Visit and follow our fan page to get information about new BoK’s, news and vacancies
Acceptance Testing is defined as testing that verifies the system is ready to be released to end users. Acceptance Testing may also be referred to as User Testing.
In general: whatever happens in reality as a consequence of something
Regarding the software: actual output
Ad Hoc Testing
Testing done without any preparation
Doing ad hoc testing, the tester just follows his or her heart to try to find bugs.
Alpha Testing is a phase of testing of an application when development is nearing completion and minor design changes may still be made as a result of such testing. When you hear “alpha testing,” it refers to when the testing is done, not how the testing was done. As a rule, alpha testing is done inside the company. In some cases, alpha testing is outsourced to other companies.
Also Notify (in BTS)
The list of persons who should get emails containing:
The heart of the system responsible for processing
The version of the application formatted according to the convention accepted at the company
For example, at ShareLane we have this convention:
“Character encoding based on the English alphabet. ASCII codes represent text in computers, communications equipment, and other devices that work with text.” (Wikipedia)
Assigned by (in BTS)
Alias of the person who assigned the bug to its current owner
Assigned to (in BTS)
Alias of the current bug owner
Attachment (in BTS)
File (usually an image) uploaded as an illustration to the bug
(In the context of this course) Test automation written for the regression testing of
Automated Testing is the management and performance of test activities that include the development and execution of test scripts so as to verify test requirements, using an automated test tool. Automated testing automates test cases that have traditionally been conducted by human testers. IBM Rational Robot and Mercury WinRunner are examples of automated testing packages.
Software and data hidden from the user; e.g., Web server, application core, the DB, log files, etc.
Compared to a car, the back end pieces are items like the engine and the electrical circuit of the car.
Beta Testing is a phase of testing when development is essentially completed and final bugs and problems need to be found before final release. The value of beta testing is allowing some representatives of our target audience to try the product and give us feedback before we release it in the open.
Benchmark Testing is defined as testing that measures specific performance results in terms of response times under a workload that is based on functional requirements. Benchmark results provide insight into how the system performs at the required load and can be compared against future test results. Benchmark Testing is a prerequisite for Stress Testing.
Black Box Testing
Black box testing is what most testers spend their time doing. Black box testing ignores the source code and focuses on the program from the outside, which is how the customer will use it. Black box thinking exposes errors that will elude glass box testers.
Black Box Testing Methodology
A set of black box testing techniques and approaches
Boundary Testing is testing the program’s response to extreme input values.
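Boundary testing can be sketched in a few lines of Python. The validator below is hypothetical; it assumes the ShareLane rule mentioned elsewhere in this glossary that a ZIP code must contain exactly 5 digits, and probes values at and around that limit:

```python
def is_valid_zip(zip_code: str) -> bool:
    # Hypothetical rule from the ShareLane example: exactly 5 digits.
    return zip_code.isdigit() and len(zip_code) == 5

# Boundary testing: probe inputs at and just around the length limit.
boundary_inputs = {
    "1234": False,    # one digit below the boundary
    "12345": True,    # exactly at the boundary
    "123456": False,  # one digit above the boundary
    "": False,        # null input (extreme low)
}

for value, expected in boundary_inputs.items():
    assert is_valid_zip(value) == expected, value
```

Bugs tend to cluster exactly at such limits (off-by-one comparisons like `<` instead of `<=`), which is why the values at the boundary itself are the most important ones to try.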
The most vital fundamental concepts and attitudes regarding the subject of study
Depending on the context:
Bug Owner (in BTS)
The person responsible for the next step in bug closure
Bug Fix Verification
A 2-step process:
The bug can be closed right after Step 1 if the bug is no longer reproducible.
A meeting to discuss why bugs that went to production weren’t caught during testing
This meeting should be about improvements, not about blaming.
See Bug Fix Verification
Bug Tracking Procedure (BTP)
The set of rules about how bug tracking should function from the moment a bug is found and filed (or re-opened) to the moment when a bug is closed
Bug Attributes (in BTS)
Attributes of bug report in BTS:
Bug Tracking System (BTS)
Infrastructure that enables testers to store, track, and share information about bugs.
Free, popular, and easy-to-use BTS software
A sub-version of the specific release
The unique identifier of a sub-version of the software for a concrete release.
The value shows how many builds have been created for a concrete release.
Build numbering starts with 1 and increases in increments of 1 every time a new build is created
Change History (in BTS)
A log of the changes that occur with a bug
The Change History usually includes:
Comments (in BTS)
This attribute has 2 main usages:
Compatibility Testing is testing that one product works well with another product
Component (in BTS)
The component where a bug was found – e.g., Checkout
Functional testing of a logical component
Also see Integration Testing; System Testing
A preset state of the environment
For example, we can try to register using:
“Email is not in database” and “Email is in database” are two conditions.
Configuration Testing is defined as testing that verifies that the system operates correctly in the supported hardware and software environments. Configuration testing is an ideal candidate for automation when the system must be tested in multiple environments.
Conversion Testing is testing upgrading from one version of the product to another version.
Free version control software
Documentation Testing is testing all associated documentation related to the development project. This may include online help, user manuals, etc.
Domain Testing utilizes specialized business knowledge relating to the program that is provided by subject matter experts.
A piece (or pieces) of information
Data usually comes in the form of text, a file, a value of a variable, or a value inside a DB record.
For each concrete situation, data is either:
For each concrete situation, Null input can be considered as either valid or invalid data.
“Correctness, completeness, wholeness, soundness, and compliance with the intention of the creators of the data” (definition taken from TechWeb.com)
If data integrity is seriously compromised in the test environment, testing might not make much sense until data integrity is reestablished.
Conceptually, the DB is a set of virtual containers required to organize and store data
The most popular DBs used in the Internet software industry are MySQL (free) and Oracle (not free).
Also see Web Site Architecture
Database Administrator (DBA)
A professional who manages databases
Date (in BTS)
Date and time when a bug was filed
DB (in BTS)
The DB version of the environment where a bug was found
If the application version is 1.0-23/34, the DB in the BTS must be 34.
The activity of a programmer to identify a buggy piece of code and fix it
A piece of code that has not been removed from the software, but should not be used
Often, code is not removed but deprecated to make a safe transition from an old (deprecated) code to a new code. Deprecated code serves as a backup in case something goes wrong with the new code. Once the new code has been there for a while and there have been many opportunities to analyze the consequences of removing the deprecated code, we might want to get rid of it.
Description (in BTS)
A detailed description of the bug
Here is the recommended format for Description:
Also see Summary
A professional who writes software code
(Also called “playground” or “sandbox”)
The software/hardware combo where a developer writes and tests his or her code
Dirty List – White List
A black box testing technique that consists of 2 parts:
Part 1: Brainstorming (black list)
Part 2: Selection of items that came up during Part 1 (white list)
Emergency Bug Fix (EBF)
A situation when a P1 bug is found in production and it’s necessary to create a patch release ASAP
Set of rules on how to proceed in case of EBF or EFR
Emergency Feature Request (EFR)
A situation when a certain feature must be released ASAP; e.g., in the case of a court ruling or to comply with a new law
Our competitor has won a patent case, and we have to change some piece of our software ASAP to make it work in a different way.
See System Testing
Certain conditions that must be met before something can begin
To make a phone call, you must have a working phone, a connection, and the phone number of the recipient. So we can say that the entry criteria for a phone call include 3 conditions:
– a working phone is available
– a connection is available
– the phone number is known
Also see Exit Criteria
A set of inputs that are treated by software the same way under certain conditions
In other words, under certain conditions, the software must apply the same logic to each element of the equivalent class.
In some cases, the equivalent class can consist of only one element.
Also see Boundary Values
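A minimal sketch of how equivalence classes cut down the number of inputs to test, again assuming the hypothetical 5-digit ZIP code rule: instead of testing every possible string, we pick one representative from each class, because the software must apply the same logic to every element of a class.

```python
def is_valid_zip(zip_code: str) -> bool:
    # Hypothetical rule from the ShareLane example: exactly 5 digits.
    return zip_code.isdigit() and len(zip_code) == 5

# One representative per equivalence class for the ZIP code field.
equivalence_classes = {
    "five digits (valid)": "90210",
    "fewer than five digits (invalid)": "123",
    "more than five digits (invalid)": "123456",
    "non-digit characters (invalid)": "9021O",
    "null input (invalid)": "",
}

results = {name: is_valid_zip(sample)
           for name, sample in equivalence_classes.items()}
```

If "90210" is accepted, we expect every other 5-digit string to be accepted too; testing "90211" as well would add little new information.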
1. How the system responds to errors made by users
An example is the way the system responds if a Web form is submitted with invalid data in a required field.
2. How the system reacts to errors that happen when the software is running
For example, this error message: Test Portal>More Stuff>Python Errors>register_with_error.py provides info that the file register_with_error.py calls the undefined function get_firstpage().
A message that provides information about error(s)
An error message is an important measure that:
– Guides users in case of mistakes
An error message is usually delivered via a Web page by code that is specifically written for handling user errors.
– Gives debugging info to developers
An error message is usually provided by an interpreter (or a compiler), or by some logging mechanism: e.g., the Apache Web server records errors in a special error log.
Error Recovery Testing
Error Recovery Testing involves testing the program's error messages by intentionally making as many errors as possible.
Certain conditions that allow something to be considered finished.
For example, lunch at a restaurant is finished when the bill is paid. The exit criteria for a meal at a restaurant are:
– The bill is paid
Also see Entry Criteria
Expected Pattern of User Behavior
Scenarios that we expect will be (OR are already) taking place as users use our software
In general: whatever is expected to happen in reality as a consequence of something
Regarding the software: expected output
Sources of expected results:
The real challenge is to find expected results that serve as a true indicator of whether the software works or not.
Exploration for the purpose of finding bugs
Also see Ad hoc testing
Browsing through the software UI to understand how things work.
Depending on the context, the term “feature” means:
– the ability to accomplish a specific task (i.e., functionality)
– a particular characteristic of the software
Formal (Documented) Testing
(In the context of this course) Test execution with the help of test documentation; e.g., test cases
Found On (in BTS)
The environment where a bug was found
For example: main.sharelane.com.
The interface that customers can see and use; e.g., text, images, buttons, or links
Compared to a car, the front end pieces are items like the steering wheel and the dashboard.
Also see Back End
Functional Audit (FA)
The FA compares the software system being delivered against the currently approved requirements for the system.
Functional testing is defined as testing that verifies that the system conforms to the specified functional requirements. Its goal is to ensure that user requirements have been met.
The ability to accomplish a certain task
For example, the functionality (ability) of a bottle opener is to open bottles.
Glass Box Testing
Glass box testing is part of the coding stage. The programmer uses his knowledge and understanding of the source code to develop test cases. Programmers can see internal boundaries in the code that are completely invisible to the outside tester. Glass box testing may also be referred to as White Box testing.
Grey Box testing
Testing that combines the elements of black and white box testing:
(In the context of this course) a type of test automation used to automate manual, repetitive tasks
The most popular example of a helper is the utility for the automated creation of new user accounts; e.g., the Account Creator used at ShareLane.
ID (in BTS)
The unique ID of the bug
Depending on the context:
Integration testing may be considered to have officially begun when the modules begin to be tested together. This type of testing, sometimes called gray box testing, implies limited visibility into the software and its structure. As integration proceeds, gray box testing approaches black box testing, which is more nearly pure functional testing, with no reliance on knowledge of the software's structure or the software itself.
Installation Testing involves testing whether the installation program installs the program correctly.
Data that should NOT be able to assist in accomplishing some task; e.g., registration
In the case of registration at ShareLane, a ZIP code must contain 5 digits. All other inputs are invalid data.
The word “legacy” is usually used in the terms “legacy feature” or “legacy user”. It means “existing”. For example, if production has a feature called “Shopping Cart”, then the “Shopping Cart” is a legacy feature.
A set of testing techniques designed to load the system or its components and then measure how the system or its components react
The usual purpose of load/performance testing is to find a bottleneck; i.e., a part of the system or its components that slows down response time.
The testing used to find bugs in the adaptation of the software by users from different countries
For example, if our Web site was created for an English-speaking audience and we want to localize it for a Japanese-speaking audience, we’ll have to determine whether Kanji symbols can be used to create a username.
A file that keeps track of some activity
The 3 most common actions regarding data in log files are:
1. Read (lines are read by humans or a program)
2. Append (new lines are added under old lines, if any)
3. Write (all old lines, if any, are purged and new lines are added)
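The three actions map directly onto Python's file modes ("r", "a", "w"); the log file name below is illustrative:

```python
import os
import tempfile

# A throwaway location for the illustrative log file.
path = os.path.join(tempfile.mkdtemp(), "app.log")

# Write ("w"): old lines, if any, are purged; new lines are added.
with open(path, "w") as log:
    log.write("server started\n")

# Append ("a"): new lines are added under old lines.
with open(path, "a") as log:
    log.write("user logged in\n")

# Read ("r", the default): lines are read by a human or a program.
with open(path) as log:
    lines = log.readlines()
```

Opening the same file with "w" a second time would silently discard the first line, which is exactly why the distinction between append and write matters when testing logging.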
A bug in how the software processes information
Mainstream Usage Testing
Mainstream Usage Testing involves testing the system by using it like customers would use it.
Major (or milestone) release that happens at the release stage of the SDLC, after the testing and bug fixes stage is over
The version of a major release is presented as an integer: 7.0
Manual testing is defined as testing that is conducted by human testers.
A release that takes place between major releases
A minor release can have one of three variants:
– Feature release
– Patch release
– Mixed release
A FEATURE RELEASE takes place when we need to:
– Add new features
– Modify/remove existing features
A PATCH RELEASE takes place when the code in production has a bug(s). In this case, we simply release the fixed code.
A MIXED RELEASE is a minor release with both feature-related changes and bug fixes.
The version of a minor release is presented as a number after the decimal point and incremented by one after each further minor release: 7.1
Module testing is a combination of debugging and integration. It is sometimes called glass box testing (or white box testing), because the tester has good visibility into the structure of the software and frequently has access to the actual source code with which to develop the test strategies. As units are integrated into their respective modules testing moves from unit testing to module testing.
Multi-user Testing involves testing the program while more than one user is using it at the same time.
Testing that checks situations involving:
– User error
– System failure
Also see Positive Testing
New Feature Testing (NFT)
The first stage of test execution where new and/or changed features are tested
Also see Regression Testing
No data is provided
For example, if we press “Continue” on the first page of the ShareLane registration without entering anything into the “ZIP code” field, this is null input.
Operational Testing involves functional testing of a system independent of specialized business knowledge.
Result produced by the software in response to input
Performance Testing is defined as testing that verifies that the system meets specific performance objectives in terms of response times under varying workloads. This may also be referred to as Load Testing. An example of a performance test requirement may be: Utilizing 400 virtual users, 90% of all transactions have an average response time of 10 seconds or less and no response time can exceed 30 seconds. Performance Testing encompasses Stress Testing and Benchmark Testing.
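The example requirement above can be expressed as a small Python check. The interpretation of the 90% clause (the fraction of transactions at or under the limit) and the sample response times are assumptions for the sketch:

```python
def meets_performance_requirement(response_times,
                                  avg_limit=10.0, hard_limit=30.0):
    """Check the example requirement: at least 90% of transactions
    respond within avg_limit seconds, and no response time exceeds
    hard_limit seconds."""
    if any(t > hard_limit for t in response_times):
        return False
    within = sum(1 for t in response_times if t <= avg_limit)
    return within / len(response_times) >= 0.9

# Simulated response times (in seconds) from a hypothetical test run.
sample_run = [2.1, 3.4, 8.9, 9.7, 4.2, 5.5, 7.8, 6.1, 12.0, 3.3]
ok = meets_performance_requirement(sample_run)
```

Here 9 of the 10 samples are at or under 10 seconds and none exceeds 30, so the run passes; a single 31-second response would fail it regardless of the others.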
Physical Audit (PA)
The PA is intended to assure that the full set of deliverables is an internally consistent set (i.e., the user manual is the correct one for this particular version of the software). It compares the final form of the code against the final documentation of that code.
PjM or Project Manager
A professional who manages projects
“They have the responsibility of the planning, execution, and closing of any project” (Wikipedia).
As a rule, in a start-up environment, the PjM’s role is assigned to the PM.
PM or Product Manager
A professional who manages products
“A product manager researches, selects, develops, and places a company’s products” (Wikipedia).
One of the main deliverables expected from a PM is well-written specs.
Testing that checks situations where:
– The software is used in a normal, error-free way and/or
– The system is assumed to be sound
Usage in a “normal, error-free way” can be defined as a scenario that accomplishes certain tasks needed to provide some value to a user. For example, registration is needed to create an account. So,
– Normal usage correctly completes all steps of the registration needed to create a new account.
– Abnormal usage in this case would be submitting a Web form during registration where certain fields (e.g., ZIP code) have invalid data.
Also see Negative Testing
Post Implementation Review (PIR)
The PIR is held once the software system is in production. The PIR is usually conducted 6 to 9 months after implementation. Its purpose is to determine whether the software has, in fact, met the user’s expectations for it in actual operation.
Priority (in BTS)
The magnitude of a bug’s impact on the company’s business
A Web site available to our users
The opposites of production are the development and test environments, where the software is being developed and tested.
A state or characteristic attributed to something (functionality, code, overall product, etc.) based on the degree of match between that "something" and someone's expectations about it.
Example #1: A tester says: “The quality of the checkout flow is good, because we fixed all the bugs.” (The expectation is: “Good software is bug-free software.”)
Example #2: A user says: “The quality of the checkout flow sucks, because the UI is very misleading.” (The expectation is: “Software should have an easy-to-use interface.”)
Quality Assurance (QA)
The set of activities targeted at bug prevention through process improvement
In theory: A professional specializing purely in process improvement
In reality: This term is used interchangeably with “test engineer” and “tester”.
Regression testing is defined as testing that verifies that the system functions as required and no new errors have been introduced into a new version as a result of code modifications. Regression testing is an iterative process conducted on successive builds and as a result is an ideal candidate for automation. Regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code to a program that may have inadvertently introduced errors. It is a quality assurance measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity. Regression Testing is also a phase of testing that occurs near the end of a testing cycle.
Release Engineer (RE)
The person responsible for creating the release engineering infrastructure and for pushing code to various environments; e.g., test environments or production
A Web page element that must be filled with valid data (e.g., a text box) or whose value must be selected (e.g., a value in a drop-down menu) in order to proceed to the next Web page.
For example, in the “Email” field, valid data must have one and only one “@” character.
Null input is always considered to be invalid for the required field.
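A minimal sketch of validating a required field under these rules; the function name and the exact checks are illustrative, not ShareLane's actual code:

```python
def is_valid_email(value):
    """Hypothetical required-field check built from the example rule:
    valid data for the "Email" field has one and only one "@"."""
    if value is None or value == "":
        # Null input is always invalid for a required field.
        return False
    return value.count("@") == 1
```

A negative test suite for this field would cover at least the null input, zero "@" characters, and two or more "@" characters.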
Resolution (in BTS)
A stage of a bug’s life
Explanations below are given with the assumption that the bug report has the Type "Bug":
(In the context of this course) A black box testing technique based on the evaluation of data or expectations with the purpose of setting priorities
Action(s) to undo unwanted changes.
A combination of actions and data applied to software under certain conditions.
The purpose of a scenario is to bring test execution to the point where an actual result can be retrieved and compared with an expected result.
In the example below, the verbs "Go", "Type", and "Click" point to actions. The data is represented by the word "expectations". If the scenario assumes that the book is in the DB, then the condition is: the DB has data about a book with the word "expectations" in its title.
1. Go to qabok.com.
2. Type the word "expectations" into the text field "Search".
3. Click the “Search” button.
A Scenario test simulates a real world situation where a user would perform a set of detailed steps to accomplish a specific task.
Testing for protection against security breaches
Manual testing done with partial usage of test automation, usually in the form of helpers
Also see Automated Testing, Manual Testing
Severity (in BTS)
The magnitude of a bug’s impact on the software
Smoke (Build Verification) Test
A Smoke test validates that a fundamental operation or area of the program is ready to undergo more complex Functional, or Scenario Testing.
“An error, flaw, mistake, failure, fault, or ‘undocumented feature’ in a computer program that prevents it from behaving as intended (e.g., producing an incorrect result)” (Wikipedia)
Software Development Life Cycle (SDLC)
A way to get
– from an idea for the desired software
– to the release of the actual software and its maintenance
Software Quality Systems Plan (SQSP)
The SQSP addresses the activities to be performed on the project in support of the quest for quality software. All activities to be accomplished in the software quality area should receive the same personnel, resource, and schedule discussion as in the overall SDP, and any special tools and methodologies should be discussed.
Software Test Plan (STP)
The Software Test Plan documents the test program, timelines, resources, and tests to be performed for a test cycle leading to the release of a product or completion of a project.
A set of activities and processes primarily targeted to find AND address software bugs
Status (in BTS)
The state of a bug:
Stress Testing is defined as testing that exercises the system to the point that the server experiences diminished responsiveness or breaks down completely with the objective of determining the limits of the system. This may also be referred to as Volume Testing. An example of stress testing may be to send thousands of queries to the database.
Structured testing involves the execution of predefined test cases.
Structured Query Language (SQL)
SQL is a language used to communicate with a DB.
“SQL is a database computer language designed for the retrieval and management of data in relational database management systems (RDBMS), database schema creation and modification, and database object access control management” (Wikipedia).
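A short sketch of retrieving data with SQL, using Python's built-in sqlite3 module and an in-memory database; the table and data are illustrative, not ShareLane's actual schema:

```python
import sqlite3

# An in-memory database: created fresh, discarded when the script ends.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO books (title) VALUES (?)",
             ("Great Expectations",))
conn.commit()

# Retrieval: ask for every book whose title contains "xpectations".
rows = conn.execute(
    "SELECT title FROM books WHERE title LIKE ?", ("%xpectations%",)
).fetchall()
```

Testers commonly run queries like this against the test DB to get an expected result independently of the application's UI.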
Summary (in BTS)
A quick synopsis of the bug
A bug in the syntax of the software code
Functional testing of a logically complete path
This term is usually applied to situations where two or more integrated components are involved.
Testing involves operating an application under controlled conditions and evaluating the results in order to confirm that the application fulfills its stated requirements.
Test Automation (TA)
In general: A myriad of different tools and techniques for a myriad of different purposes in software testing: code analysis, link checking, load/performance testing, code coverage, unit testing – this list goes on and on.
A Test Case is a specific set of steps to be executed in a program that are documented using a predefined format. Execution of the steps should result in a predefined expected result. If the expected result occurs the test cases passes. If the expected result does not occur the test case fails. Failure of a test case indicates a problem or defect with the application under test.
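A test case with predefined steps and a predefined expected result can be sketched with Python's unittest module; the ZIP code rule and the function under test are hypothetical:

```python
import unittest

def is_valid_zip(zip_code):
    # Hypothetical rule from the ShareLane example: exactly 5 digits.
    return zip_code.isdigit() and len(zip_code) == 5

class TestZipCodeField(unittest.TestCase):
    # One test case: documented steps plus a predefined expected result.
    def test_five_digit_zip_is_accepted(self):
        # Step: submit a valid 5-digit ZIP code.
        actual = is_valid_zip("12345")
        # Expected result: the value is accepted.
        # If actual != expected, the test case fails, indicating a
        # possible defect in the application under test.
        self.assertTrue(actual)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestZipCodeField)
)
```

The pass/fail decision is exactly the comparison of actual result against expected result described above.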
Test Case Attributes
Useful parts of the test case that assist with:
– Test case execution; e.g., an IDEA that clarifies what we are checking by using that test case
– Test case management; e.g., SETUP AND ADDITIONAL INFO that can contain data to make test cases more maintainable
The most common test case attributes are:
– Unique ID
– SETUP AND ADDITIONAL INFO
– Revision History
Test Case Index
A Test Case Index is a list of all Test Cases relating to a Test Plan.
Depending on the context, this means one of the following:
1. The coverage of possible scenarios
2. Test case execution coverage
A Test Cycle encompasses all the testing (Initial Testing, Alpha Testing, Beta Testing, and Regression Testing) that is conducted leading to the release of a product or completion of a project.
A professional specializing in software testing
The software/hardware combo where software is tested before being released to production
The second stage of the test cycle
A Test Program is the methodology utilized for testing a particular product or project. The details of the Test Program are documented in the Test Plan.
The master document containing information about activities regarding the testing of the certain features (or other possible subjects of testing – e.g., how the system handles load)
The first stage of the test cycle
Test Readiness Review (TRR)
The TRR is a formal phase-end review that occurs during the Coding Phase and prior to the onset of user (acceptance) testing. The TRR determines whether the system is ready to undergo user (acceptance) testing.
A collection of test cases, usually dedicated to the same spec or the same feature
A black box testing technique that involves creating tables with inputs and/or conditions, and then combining those tables into test scenarios
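Combining tables of inputs and conditions into scenarios can be sketched with itertools.product; the tables below reuse the hypothetical ZIP code and email examples from this glossary:

```python
from itertools import product

# Table 1: inputs for the ZIP code field.
zip_inputs = ["12345", "123", ""]

# Table 2: conditions -- is the email already in the database?
email_conditions = ["email in DB", "email not in DB"]

# Combine the tables into test scenarios:
# every input tried under every condition (3 x 2 = 6 scenarios).
scenarios = list(product(zip_inputs, email_conditions))
```

Writing the tables first and combining them mechanically makes it harder to forget a combination than enumerating scenarios by hand.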
Test Traceability Matrix (TTM)
A Test Traceability Matrix tracks the requirements to the test cases that demonstrate software compliance with the requirements.
Whether it makes sense to start testing
If a major flow (e.g., login) is not functioning, testing is blocked; therefore, we can say that the software is not testable.
A bug in how software presents information
The type of testing needed to find bugs in the user interface.
Unit Testing involves glass box testing of code conducted by the programmer that has written the code. Unit testing is primarily a debugging activity that concentrates on the removal of coding mistakes. It is part and parcel with the coding activity itself.
The evaluation of the user’s experience when he or she uses our software
A description of:
– How software will be (or is) used
– How software must respond to certain scenarios
User (Acceptance) Testing
User testing is primarily intended to demonstrate that the software complies with its requirements. This type of testing is black box testing, which does not rely on knowledge of the source code. This testing is intended to challenge the software in relation to its satisfaction of the functional requirements. These tests have been planned based on the requirements as approved by the user or customer.
Unstructured testing involves exploratory testing without the use of predefined test cases.
Data that should be able to assist in accomplishing some task; e.g., registration
In the case of registration at ShareLane, valid data for the ZIP code is 5 digits. Any other data is invalid.
Verifier (in BTS)
The alias of the person who must verify the bug after it was fixed
Version (in BTS)
The version of the environment where the bug was found
If the application version is 1.0-23/34, the version in the BTS must be 1.0
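Combining this convention with the DB and build entries above, the application version string can be split programmatically into the three BTS fields. This parser is a sketch that assumes the "1.0-23/34" format used in this course:

```python
def parse_app_version(app_version):
    """Split a version string like "1.0-23/34" into the fields a tester
    files in the BTS: Version ("1.0"), build (23), and DB (34)."""
    version, rest = app_version.split("-")
    build, db = rest.split("/")
    return {"version": version, "build": int(build), "db": int(db)}

fields = parse_app_version("1.0-23/34")
```

Filing the parsed values (rather than retyping them) removes one common source of mismatched Version/DB fields in bug reports.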
Web (World Wide Web)
“A computer network consisting of a collection of Internet sites that offer text and graphics and sound and animation resources through the hypertext transfer protocol” (definition taken from wordnet.princeton.edu)
“A computer program that is responsible for accepting HTTP requests from Web clients, which are known as Web browsers, and serving them HTTP responses along with optional data contents, which usually are Web pages such as HTML documents and linked objects (images, etc.)” (Wikipedia).
(In the context of this course) An Internet project with the UI available to users via the Web
Also see Web Site Architecture
Web Site Architecture
The typical Web site architecture looks like this:
– Web server (HTTP handling)
– Application core (processing)
– Database (storage)
White Box Testing
(Also called “glass box testing”, “clear box testing”, and “open box testing”)
A number of testing techniques that require a comprehensive understanding of the software code
A programmer can perform white box testing by comparing:
– The requirements from the spec
– A piece of Python code from ShareLane
Also see Black Box testing; Grey Box Testing
See Dirty List – White List
Action that bypasses a problem or a way to bypass a problem