General Testing

Are you familiar with any of the following testing terms or concepts, and which of them have you used?
Black box testing - Testing from the user interface with no knowledge of the underlying code or algorithms.

White box testing - Defining test cases and testing based on knowledge of the underlying code, structures, and algorithms used.

API testing - Testing by exercising direct programming interfaces into a program and testing for functionality and robustness
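
For instance, an API-level test drives the programming interface directly rather than the UI. The Python sketch below is illustrative only; the calclib module and its divide function are hypothetical stand-ins for whatever interface the product exposes.

    import unittest

    import calclib  # hypothetical library under test; no UI is involved


    class DivideApiTests(unittest.TestCase):
        def test_functionality(self):
            # Exercise the interface directly and check the documented behavior.
            self.assertEqual(calclib.divide(10, 2), 5)

        def test_robustness(self):
            # Robustness: a bad argument should raise a defined error, not crash.
            with self.assertRaises(ZeroDivisionError):
                calclib.divide(1, 0)


    if __name__ == "__main__":
        unittest.main()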


Build Verification Test (BVT) -
• This is a basic test carried out first, immediately after code is released to test.
• It is done just to verify basic functionality before any other testing starts.
• It is carried out to determine the integrity of any new build.
• You should develop a quality bar for the BVT.
• You should create a process for resolving BVT failures.
• Always automate the BVT test suite (see the sketch below).
• Always track and store BVT results.
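
For example, an automated BVT can be as small as a script that runs a couple of smoke checks against the new build and fails fast. The following Python sketch is illustrative only; the "myproduct" commands are hypothetical stand-ins for the real build's entry points.

    import subprocess
    import sys

    # Hypothetical smoke checks; substitute the real entry points of the build under test.
    SMOKE_CHECKS = [
        ["myproduct", "--version"],            # the build reports a version stamp
        ["myproduct", "selftest", "--quick"],  # basic functionality only, no deep testing
    ]

    def main() -> int:
        for cmd in SMOKE_CHECKS:
            result = subprocess.run(cmd)
            if result.returncode != 0:
                print(f"BVT FAILED: {' '.join(cmd)}")
                return 1  # fail fast and block the build from further testing
        print("BVT passed: build accepted for the full test pass")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

The per-build pass/fail results would then be logged to whatever tracking system the team uses, per the bullets above.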

Boundary Cases - Defining test cases based on the minimum and maximum values that are accepted by a UI element or parameter to an API.

Equivalency Classes - A way of breaking down test cases into a more manageable set of cases by grouping together cases that produce the same result.
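
As a rough illustration of both ideas, the Python sketch below tests a hypothetical validate_age function assumed to accept whole numbers from 0 to 120: the boundary cases probe the edges of that range, and the equivalence classes use one representative value per group of inputs expected to behave the same way.

    import unittest

    def validate_age(age):
        # Hypothetical function under test: accepts whole numbers 0-120 inclusive.
        return isinstance(age, int) and 0 <= age <= 120

    class AgeValidationTests(unittest.TestCase):
        def test_boundary_cases(self):
            # Minimum and maximum accepted values plus their immediate neighbors.
            cases = [(-1, False), (0, True), (1, True),
                     (119, True), (120, True), (121, False)]
            for age, expected in cases:
                with self.subTest(age=age):
                    self.assertEqual(validate_age(age), expected)

        def test_equivalence_classes(self):
            # One representative per class rather than every possible value.
            self.assertTrue(validate_age(35))    # class: valid mid-range ages
            self.assertFalse(validate_age(-40))  # class: negative values
            self.assertFalse(validate_age(500))  # class: values above the maximum

    if __name__ == "__main__":
        unittest.main()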

Requirements Tracing - Defining test cases by specifically targeting statements of fact in a Functional Specification.

Functional Specification - A document which describes the functionality that is exposed in a product to meet customer needs and usage patterns

Design Specification - A document which describes the implementation of the functionality defined in the Functional specification.

Stress Testing - Stressing a program by sending it large numbers of transactions repeatedly in very rapid succession in order to stress its computing power and its ability to handle very large loads.

Performance Testing - Testing a program specifically to determine how fast it runs, how many transactions it can complete in a given period of time, and how many resources (disk, memory, CPU) a transaction costs.
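
A very small sketch of the idea, in Python: drive a transaction as fast as possible for a fixed period and report throughput. The run_transaction function is a hypothetical stand-in for a real product transaction, and a real stress or performance test would also monitor disk, memory, and CPU while it runs.

    import time

    def run_transaction():
        # Hypothetical stand-in for one product transaction.
        sum(range(10_000))

    def measure_throughput(duration_seconds=5.0):
        """Drive transactions back-to-back and report transactions per second."""
        count = 0
        start = time.perf_counter()
        while time.perf_counter() - start < duration_seconds:
            run_transaction()
            count += 1
        elapsed = time.perf_counter() - start
        print(f"{count} transactions in {elapsed:.1f}s "
              f"({count / elapsed:.0f} transactions/second)")

    if __name__ == "__main__":
        measure_throughput()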

Scalability Testing - Testing a program's ability to scale in the number of transactions or users it can support as a function of increasing memory, disk space, and the number and speed of CPUs, as well as multiple machines in a networked environment.

MTTF/MTBF - Mean Time to Failure or Mean Time Between Failures. A measurement derived over long periods of moderate stress testing that describes the robustness of the system and its ability to function in a 24x7 environment.
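
As a purely illustrative calculation (the numbers are made up), MTBF is the total operating time divided by the number of failures observed during that time:

    \text{MTBF} = \frac{\text{total operating hours}}{\text{number of failures}} = \frac{1000 \text{ hours}}{4 \text{ failures}} = 250 \text{ hours}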

Functional Testing - Testing the functionality of an area or component strictly for meeting its functional requirements, outside of its interactions with other parts of the system.

System Testing - a.k.a. Integration/Interop testing or End-to-End testing. This tests all components of a project together, as they are intended to be used as a system.

Unit Testing - Testing individual components prior to integration

Integration Testing - Testing a subset or all of the components working together to validate that the build and system integration process is functional and produces a workable, testable product.
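
The distinction can be shown with a small Python sketch; the OrderService and payment gateway names are hypothetical. The unit test isolates OrderService by replacing its dependency with a mock, whereas an integration test would wire up the real components together.

    import unittest
    from unittest.mock import Mock

    # Hypothetical component under test: an OrderService that depends on a payment gateway.
    class OrderService:
        def __init__(self, gateway):
            self.gateway = gateway

        def place_order(self, amount):
            return self.gateway.charge(amount)

    class OrderServiceUnitTest(unittest.TestCase):
        def test_place_order_in_isolation(self):
            # Unit test: the gateway is mocked, so only OrderService's own logic runs.
            gateway = Mock()
            gateway.charge.return_value = "approved"
            self.assertEqual(OrderService(gateway).place_order(10), "approved")

    # An integration test would construct OrderService with the real payment gateway
    # (against a test account) to verify that the pieces actually work together.

    if __name__ == "__main__":
        unittest.main()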


Network Testing - Testing the network etiquette of an application looking at bandwidth used, external network system failure impact (such as non-responsive DNS, slow WAN links, etc.), and light interoperability testing.


Interoperability Testing - Testing how a program interacts with another program, network service, or platform

Standards Based Testing - Typically this means testing a product against an external standard, such as an RFC, an IEEE spec, etc.

Usability Testing - Instructing end users to perform abstract customer scenarios and monitoring how they do so. Example: have them create and save a document in Word and monitor how they navigate the UI.

Beta Testing - Instructing customers to exercise your product prior to final release (usually in ways that cannot be exercised in an internal lab).

Alpha Testing - Very early customer testing to validate that the product requirements and design satisfy the end goal of the product.


Give me an example of some of the problems you encountered designing tests and how you overcame them.

They need to draw on their experience and describe the problems encountered in test design. The answer can vary widely, from technical problems with automation, to lab design problems, to holes found in their testing either by review or by customers running into bugs that were not detected by adequate testing. Look for clarity, depth of experience, and troubleshooting methods used. Ask if they have more than one example.

Describe what makes a good bug report?

In general: Date found, who found it, environment/OS/platform and system condition bug occurs in, exact steps to reproduce it, results, expected results, area of the product the bug was found in, related bugs, scripts to repro the bug and/or environment, etc.

Describe the software product lifecycle?

In general terms: Product idea, product requirements spec, design spec, code, unit test, test planning, lab building, tool development, integration, integration test, system test, beta test, regression test, release, documentation review, release note review, etc.

Describe test automation you’ve created.

Need a clear description of the tool or automation designed, what it tested, why it was helpful, how bugs in both the product and the tool were found and fixed, what types of problems were encountered, etc.

How would you design the lights at a traffic intersection?

See how creative they are. In particular, synchronizing with lights at other intersections, how they provide smooth flow of traffic (turns etc). Do they consider traffic patterns (rush hour/ special event/ emergency vehicles)?


Describe the process you use for creating test plans, test cases, and/or test schedules.

What types of tools have you used, and how have you used them?

• GUI automation tools (e.g. VisualTest, QA Partner, etc.)
• Memory leak checkers (BoundsChecker, Purify, etc.)
• Test or path coverage analysis tools (C-Cover, PureCoverage, McCabe's, etc.)
• Bug tracking systems
• Call profilers
• Debuggers
• Network test tools (analyzers, packet generators)
• Database test tools
• Proprietary tools

Please give us any other information you feel will be helpful.
The typical candidates we’ve been meeting have had only “barely acceptable” SQL skills. The following is an example of a question we’d ask to help us assess a candidate’s SQL knowledge.

In the pubs db are tables as follows:
Authors: au_id, au_lname, au_fname, phone, address, city, state, zip, contract
TitleAuthor: au_id, title_id, au_ord, royaltyper
Titles: title_id, title, type, pub_id, price, advance, royalty, ytd_sales, notes, pubdate

Question: Write select statement(s) to return the name of the author who has the greatest number of titles.
“Unofficial” answers:

Simplest (acceptable) answer, which involves no joining of tables:

First, find the id of the author with the most titles:

    select au_id, count(*)
    from titleauthor
    group by au_id
    order by count(*) desc

Then use that/those id(s) to get the author name(s):

    select au_fname, au_lname
    from authors
    where au_id = <id from the first query>

Better answer, in one select statement by joining tables:

Option 1: ANSI join syntax

    select au_fname, au_lname, count(*)
    from authors a
    join titleauthor t on a.au_id = t.au_id
    group by au_fname, au_lname
    order by count(*) desc

Option 2: the same thing with “old” join syntax

    select au_fname, au_lname, count(*)
    from authors a, titleauthor t
    where a.au_id = t.au_id
    group by au_fname, au_lname
    order by count(*) desc

Notes: even better answers are possible (the interviewer must understand SQL to be able to tell whether any given answer is correct); the Titles table is not needed in the solution.

Question:
There is a publisher and a subscriber. On the publisher there is a stored proc that updates an order line record. The orderline table has primary keys (PK) orderID and productID. This table is linked to another table called comment, and the comment table has the same primary keys as the orderline table plus another field called comment. When you run the stored proc on the publisher, everything runs fine. When you replicate the process (publish it to the subscriber) and run the stored proc, the subscriber throws an exception "orderline data cannot be deleted". Where is the error?

Software Testing

Acceptance Testing
Testing the system with the intent of confirming readiness of the product and customer acceptance.
Ad Hoc Testing
Testing without a formal test plan or outside of a test plan. With some projects, this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. If testing occurs very late in the development cycle, this may be the only kind of testing that can be performed. Sometimes ad hoc testing is referred to as exploratory testing.

Alpha Testing
Testing after code is mostly complete or contains most of the functionality and prior to users being involved. Sometimes a select group of users are involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.

Automated Testing
Software testing that utilizes a variety of tools to automate the testing process, and/or testing used where the need for a person to test manually has become obsolete. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tool and the software being tested to set up and manage the tests.

Benchmark Testing
A test used to compare the performance of the software being tested to a previous version, an arbitrary standard or products published by the competition
Beta Testing
Testing after the product is code complete by early adopters who have often obtained free copies of the product and are urged to provide feedback and purchase the generally available or final version of the product.
Black Box Testing
Testing software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document.

Bug Bash
A form of directed ad-hoc testing. Usually performed late in the test cycle, the software is released to non-testers, roughly anyone in the organization interested in helping out. The test team acts as coordinators, reviewing results and determining if the problems found are unique bugs.
Build Verification Test (BVT, Smoke Test)
A subset of the complete set of tests, run to determine if the build is generally functional and stable; also known as an acceptance test.
Code Complete
A milestone in the lifecycle of product release. At the point where the schedule calls for code completion, the development cycle comes to an end. All features should function as described in the functional specification.
Code Coverage
A metric of completeness as it applies to the code being tested. Branch code coverage is the statistic for the number of branches of code executed at least once by some test. Statement code coverage would be the statistic for the number of individual code statements executed at least once by some test. Code Coverage does not apply to black box testing.
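
For example (a Python sketch, not tied to any particular product), the single test below executes every statement of apply_discount, giving full statement coverage, but it only takes the branch where is_member is true; branch coverage stays incomplete until a second test passes is_member=False. A coverage tool such as coverage.py can report both statistics.

    def apply_discount(price, is_member):
        # Two branches: the "if" taken and not taken.
        if is_member:
            price = price * 0.9
        return price

    def test_member_discount():
        # Covers all statements, but only one of the two branches.
        assert apply_discount(100, True) == 90.0
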
Compatibility Testing
Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.

Configuration Testing
Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.
Feature Complete
The point in the development cycle where the software is fully functional. This can also be applied as a milestone after which no new features can be added.

Functional Testing
Testing one or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that the module performs its intended functions as stated in the specification and establishing confidence that a program does what it is supposed to do.

Globalization
Designing and implementing software so it can support all targeted locales and user interface languages without modification to the software source code itself. Also known as internationalization.
Independent Verification and Validation (IV&V)
The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software. This term is often applied to public sector work where government regulations may apply (e.g. medical devices).

Installation – Setup Testing
Testing with the intent of determining if the product will install on a variety of platforms as well as determining how easily it installs on each of these platforms.

Integration Testing
Testing two or more modules or functions together with the intent of finding interface defects between the modules or functions. Often completed as part of unit or functional testing; sometimes it becomes its own standalone test phase. On a larger level, integration testing can involve combining groups of modules and functions with the goal of completing and verifying that the system meets the system requirements.

Load Testing
Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation.

Localization
Adapting software so it can be used in a different setting (often a different country and/or language).
Performance Testing
Testing with the intent of determining how quickly a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.

Pilot Testing
Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. Often is considered a move-to-production activity for ERP releases or a beta test for commercial products. Typically involves many users, is conducted over a short period of time and is tightly controlled. (See also beta testing.)
Postmortem
A report done after a product is released in order to identify ways in which the testing or development process can be improved. This may also be referred to as lessons learned.
Private Testing
Sometimes called bootleg testing: a developer releases a build to testing to determine whether the bugs are successfully fixed before releasing the new code to the builder.
Regression Testing
Testing with the intent of determining if bug fixes have been successful and have not created any new problems. Also, this type of testing is done to ensure that no degradation of baseline functionality has occurred.
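
A common way to guard against reintroduced bugs is to keep a test in the suite for each fixed defect. The Python sketch below is illustrative only; the parse_date function and the bug number are hypothetical.

    import unittest

    from myproduct import parse_date  # hypothetical function where a bug was fixed


    class RegressionTests(unittest.TestCase):
        def test_bug_1234_leap_day_accepted(self):
            # Repro from the (hypothetical) bug report, kept permanently so a
            # later change cannot silently reintroduce the failure.
            self.assertEqual(parse_date("2024-02-29").day, 29)


    if __name__ == "__main__":
        unittest.main()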

Security Testing
Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.

Software Testing
The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The organization and management of individuals or groups doing this work is not relevant. This term is often applied to commercial products such as internet applications. (Contrast with independent verification and validation.)

Stress Testing
Testing with the intent of determining how well a product performs when a load is placed on the system resources that nears and then exceeds capacity.

System Integration Testing
Testing a specific hardware/software installation. This is typically performed on a COTS (commercial off-the-shelf) system or any other system comprised of disparate parts where custom configurations and/or unique installations are the norm.

White Box Testing
Testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose.