Software Testing

Acceptance Testing
Testing the system with the intent of confirming readiness of the product and customer acceptance.
Ad Hoc Testing
Testing without a formal test plan or outside of a test plan. With some projects, this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. If testing occurs very late in the development cycle, this may be the only kind of testing that can be performed. Sometimes ad hoc testing is referred to as exploratory testing.

Alpha Testing
Testing after code is mostly complete or contains most of the functionality, and prior to users being involved. Sometimes a select group of users is involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.

Automated Testing
Software testing that uses a variety of tools to automate the testing process, typically replacing repetitive manual testing. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tool and of the software being tested to set up and manage the tests.
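
The idea can be sketched with Python's standard unittest module; the function add() and the test case below are hypothetical names used only for illustration, standing in for a real product and its suite:

```python
import unittest

# Hypothetical function under test (an assumption for illustration).
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-1, -1), -2)

# Run unattended, as a nightly build or CI job would:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The value of automation comes from runs like this being repeatable and schedulable; the skilled part is choosing what the suite should assert.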

Benchmark Testing
A test used to compare the performance of the software under test to a previous version, an arbitrary standard, or a competing product.

Beta Testing
Testing after the product is code complete by early adopters who have often obtained free copies of the product and are urged to provide feedback and purchase the generally available or final version of the product.
Black Box Testing
Testing software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document.
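
A minimal sketch of the idea, assuming a hypothetical specification that reads "discount(total) returns the payable amount; orders of 100.00 or more receive 10% off" (the function name and rule are assumptions, not a real product's spec):

```python
def discount(total):
    # The tester never reads this body; the cases below are derived
    # only from the (hypothetical) specification text above.
    return total * 0.9 if total >= 100 else total

# Black-box cases derived purely from the spec, not from the code:
assert discount(50.0) == 50.0     # below the threshold: no discount
assert discount(100.0) == 90.0    # at the threshold: 10% off
assert discount(200.0) == 180.0   # above the threshold: 10% off
```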

Bug Bash
A form of directed ad-hoc testing. Usually performed late in the test cycle, the software is released to non-testers, roughly anyone in the organization interested in helping out. The test team acts as coordinators, reviewing results and determining if the problems found are unique bugs.
Build Verification Test (BVT, Smoke Test)
A subset of the complete test suite, run to determine whether the build is generally functional and stable; also known as an acceptance test.
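
A hedged sketch of the shape of such a test; start_app() stands in for launching the real build, and the checks are assumptions chosen only to illustrate "generally functional and stable":

```python
# start_app() is a stand-in for launching the build under test.
def start_app():
    return {"status": "running", "version": "1.2.0"}

def smoke_test():
    app = start_app()
    return all([
        app["status"] == "running",  # the build launches
        "version" in app,            # core metadata is present
    ])

assert smoke_test()  # deeper functional suites run only if this passes
```

The point is breadth over depth: a few cheap checks gate whether the build is worth testing further.
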
Code Complete
A milestone in the lifecycle of product release. At the point where the schedule calls for code completion, the development cycle comes to an end. All features should function as described in the functional specification.
Code Coverage
A metric of completeness as it applies to the code being tested. Branch code coverage is the statistic for the number of branches of code executed at least once by some test. Statement code coverage would be the statistic for the number of individual code statements executed at least once by some test. Code Coverage does not apply to black box testing.
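
The difference between the two statistics can be seen in a short sketch (classify() is a hypothetical function; a real project would gather the numbers with a tool such as coverage.py in its branch-measurement mode):

```python
def classify(n):
    label = "small"
    if n > 10:
        label = "large"
    return label

# A single test with n=20 executes every statement (100% statement
# coverage) but only the True side of the if (50% branch coverage):
assert classify(20) == "large"

# A second case exercises the False branch, where the if body is
# skipped, bringing branch coverage to 100% as well:
assert classify(5) == "small"
```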
Compatibility Testing
Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.

Configuration Testing
Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.
Feature Complete
The point in the development cycle where the software is fully functional. This can also be applied as a milestone after which no new features can be added.

Functional Testing
Testing one or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that the module performs its intended functions as stated in the specification, and establishing confidence that a program does what it is supposed to do.

Globalization
Designing and implementing software so it can support all targeted locales and user interface languages without modification to the software source code itself. Also known as internationalization.
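
A minimal sketch of the idea: the source code looks messages up by key, and supporting a new locale means supplying a catalog rather than editing code (the catalog contents and keys below are assumptions for illustration, not a real localization workflow):

```python
# Message catalogs supplied per locale; adding a locale means adding
# an entry here, while the code that uses get_message() is untouched.
CATALOGS = {
    "en-US": {"greeting": "Hello"},
    "de-DE": {"greeting": "Hallo"},
}

def get_message(locale, key):
    # The UI code refers to keys, never to hard-coded display text.
    return CATALOGS[locale][key]

assert get_message("en-US", "greeting") == "Hello"
assert get_message("de-DE", "greeting") == "Hallo"
```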
Independent Verification and Validation (IV&V)
The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software. This term is often applied to public sector work where government regulations may apply (e.g. medical devices).

Installation – Setup Testing
Testing with the intent of determining if the product will install on a variety of platforms as well as determining how easily it installs on each of these platforms.

Integration Testing
Testing two or more modules or functions together with the intent of finding interface defects between them. Integration testing is often completed as a part of unit or functional testing, and sometimes becomes its own standalone test phase. On a larger scale, integration testing can involve combining groups of modules and functions with the goal of verifying that the completed system meets the system requirements.
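
A small sketch of two units and the interface between them; both functions and the tax rate are hypothetical, chosen only to show where an interface defect (wrong type, unit, or argument order) would surface:

```python
def parse_amount(text):
    """Unit 1: parse a string such as '100.00 USD' into a float."""
    value, _currency = text.split()
    return float(value)

def apply_tax(amount, rate=0.08):
    """Unit 2: add sales tax and round to cents."""
    return round(amount * (1 + rate), 2)

# The integration test exercises the seam between the two units:
# each may pass its own unit tests, yet still disagree about what
# flows across the interface.
def invoice_total(text):
    return apply_tax(parse_amount(text))

assert invoice_total("100.00 USD") == 108.0
```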

Load Testing
Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation.

Localization
Adapting software so it can be used in a different setting (often a different country and/or language).
Performance Testing
Testing with the intent of determining how quickly a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.
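
A hedged sketch of the shape of such a test using the standard timeit module: measure a hypothetical operation against a performance budget rather than a behavioral pass/fail (the function and the threshold are assumptions for one environment):

```python
import timeit

# Hypothetical operation whose speed we care about.
def build_report(n):
    return ",".join(str(i) for i in range(n))

# Time 50 repetitions and compare against an assumed budget.
elapsed = timeit.timeit(lambda: build_report(1000), number=50)
BUDGET_SECONDS = 5.0  # an assumed threshold for this environment
assert elapsed < BUDGET_SECONDS
```

Dedicated performance tools add profiling and trend tracking on top of this basic measure-and-compare loop.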

Pilot Testing
Testing that involves users just before actual release to ensure that they become familiar with the release contents and ultimately accept it. It is often considered a move-to-production activity for ERP releases or a beta test for commercial products. Pilot testing typically involves many users, is conducted over a short period of time, and is tightly controlled. (See also beta testing.)
Postmortem
A report done after a product is released in order to identify ways in which the testing or development process can be improved. This may also be referred to as lessons learned.
Private Testing
Sometimes called bootleg testing. A developer releases a private build to testing to determine whether the bug fixes are successful before releasing the new code to the builder.
Regression Testing
Testing with the intent of determining if bug fixes have been successful and have not created any new problems. Also, this type of testing is done to ensure that no degradation of baseline functionality has occurred.
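
A hedged sketch: suppose a reported bug was that average([]) raised ZeroDivisionError, and the fix guards the empty case (the function and the defect are hypothetical). The regression suite pins the fix and rechecks the baseline:

```python
def average(values):
    if not values:        # the fix: guard the empty-input case
        return 0.0
    return sum(values) / len(values)

# Regression case reproducing the original defect report:
assert average([]) == 0.0
# Baseline cases confirming no degradation of existing behavior:
assert average([2, 4]) == 3.0
assert average([10]) == 10.0
```

Keeping the reproduction case in the suite is what prevents the old defect from silently reappearing in a later build.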

Security Testing
Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.

Software Testing
The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The organization and management of individuals or groups doing this work is not relevant. This term is often applied to commercial products such as internet applications. (Contrast with independent verification and validation.)

Stress Testing
Testing with the intent of determining how well a product performs when a load is placed on the system resources that nears and then exceeds capacity.

System Integration Testing
Testing a specific hardware/software installation. This is typically performed on a COTS (commercial off-the-shelf) system or any other system comprised of disparate parts where custom configurations and/or unique installations are the norm.

White Box Testing
Testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose.
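
A minimal sketch of the contrast with black box testing: shipping_cost() is a hypothetical function whose source the tester can read, so the cases target each visible branch and boundary rather than being derived from a specification alone:

```python
def shipping_cost(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg < 5:      # internal branch visible to the tester
        return 4.99
    return 9.99

assert shipping_cost(2) == 4.99    # covers the light-parcel branch
assert shipping_cost(5) == 9.99    # boundary seen in the source: 5 kg
raised = False
try:
    shipping_cost(0)               # covers the error branch
except ValueError:
    raised = True
assert raised
```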
