Are you familiar with any of the following testing terms or concepts, and which have you used?
Black box testing - Testing from the user interface with no knowledge of the underlying code or algorithms.
White box testing - Defining test cases and testing based on knowledge of the underlying code, structures, and algorithms used.
API testing - Testing by exercising a program's direct programming interfaces and checking for functionality and robustness.
Build Verification Test (BVT) -
• This is a basic test carried out first, as soon as a build is released.
• It is done just to verify basic functionality before any further testing starts.
• It is carried out to determine the integrity of any new build.
• You should define a quality bar for the BVT.
• You should create a process for resolving BVT failures.
• Always automate the BVT test suite.
• Always track and store BVT results.
Boundary Cases - Defining test cases based on the minimum and maximum values accepted by a UI element or by a parameter to an API.
Equivalency Classes - A way of breaking down test cases into a more manageable set of cases by grouping together cases that produce the same result.
Requirements Tracing - Defining test cases by specifically targeting statements of fact in a Functional Specification.
Functional Specification - A document which describes the functionality that is exposed in a product to meet customer needs and usage patterns
Design Specification - A document which describes the implementation of the functionality defined in the Functional specification.
Stress Testing - Stressing a program by sending it large numbers of transactions repeatedly in very rapid succession in order to stress its computing power and its ability to handle very large loads.
Performance Testing - Testing a program specifically to determine how fast it runs, how many transactions it can accomplish in a certain period of time, and the amount of resources (disk, memory, CPU) that a transaction costs.
Scalability Testing - Testing a program's ability to scale in the number of transactions or users it can support as a function of increasing memory, disk space, number and speed of CPUs, as well as multiple machines in a networked environment.
MTTF/MTBF - Mean Time to Failure or Mean Time Between Failures. A measurement derived over long periods of moderate stress testing that describes the robustness of the system and its ability to function in a 24x7 environment.
Functional Testing - Testing the functionality of an area or component strictly for meeting its functional requirements, outside of its interactions with other parts of the system.
System Testing - a.k.a. Integration/Interop testing or End-to-End testing. This tests all components of a project as they are intended to be used together, as a system.
Unit Testing - Testing individual components prior to integration
Integration Testing - Testing a subset or all components working together to validate that the build and system integration process is functional and produces a workable, testable product.
Network Testing - Testing the network etiquette of an application looking at bandwidth used, external network system failure impact (such as non-responsive DNS, slow WAN links, etc.), and light interoperability testing.
Interoperability Testing - Testing how a program interacts with another program, network service, or platform
Standards Based Testing - Typically this means testing a product against an external standard, such as an RFC, an IEEE spec, etc.
Usability Testing - Instructing and monitoring end users as they perform abstract customer scenarios. Example: have them create and save a document in Word while monitoring how they navigate the UI.
Beta Testing - Instructing customers to exercise your product prior to final release (usually in ways that cannot be exercised in an internal lab).
Alpha Testing - Very early customer testing to validate that the product requirements and design satisfy the end goal of the product.
Give me an example of some of the problems you encountered when designing tests, and how you overcame them.
They need to draw on their experience and describe the problems encountered in test design. The answer can vary widely, from technical problems with automation, to lab design problems, to holes found in their testing, either by review or by customers running into bugs that were not detected because testing was inadequate. Look for clarity, depth of experience, and troubleshooting methods used. Ask if they have more than one example.
In general: Date found, who found it, environment/OS/platform and system condition bug occurs in, exact steps to reproduce it, results, expected results, area of the product the bug was found in, related bugs, scripts to repro the bug and/or environment, etc.
In general terms: Product idea, product requirements spec, design spec, code, unit test, test planning, lab building, tool development, integration, integration test, system test, beta test, regression test, release, documentation review, release note review, etc.
Need a clear description of the tool or automation designed, what it tested, why it was helpful, and how bugs in both the product and the tool were found and fixed. What types of problems were encountered, etc.
See how creative they are. In particular, how they synchronize with lights at other intersections and how they provide a smooth flow of traffic (turns, etc.). Do they consider traffic patterns (rush hour, special events, emergency vehicles)?
Describe the process you use for creating test plans, test cases, and/or test schedules.
What types of tools have you used, and how have you used them?
GUI automation tools (e.g. VisualTest, QA Partner, etc.).
Memory Leak checkers (BoundsChecker, Purify, etc.)
Test or path coverage analysis tools (C-Cover, PureCoverage, McCabes, etc.)
Bug tracking systems
Call profilers
Debuggers
Network Test tools (analyzers, packet generators).
Database test tools
Proprietary Tools
Please give us any other information you feel will be helpful.
The typical candidates we’ve been meeting have had only “barely acceptable” SQL skills. The following is an example of a question we’d ask to help us assess a candidate’s SQL knowledge.
In the pubs db are tables as follows:
Authors: au_id, au_lname, au_fname, phone, address, city, state, zip, contract
TitleAuthor: au_id, title_id, au_ord, royaltyper
Titles: title_id, title, type, pub_id, price, advance, royalty, ytd_sales, notes, pubdate
Question: Write a select statement (or statements) to return the name of the author who has the greatest number of titles.
“Unofficial” Answers:
simplest (acceptable) answer which involves no joining of tables:
find the id of the author with the most titles:
select au_id, count(*)
from titleauthor
group by au_id
order by count(*) desc
then use that id (or those ids) to get the author name(s):
select au_fname, au_lname
from authors
where au_id = ...   -- plug in the au_id(s) returned by the previous query
better answer in one select statement by joining tables:
option 1: ANSI syntax
select au_fname, au_lname, count(*)
from authors a
join titleauthor t on a.au_id=t.au_id
group by au_fname, au_lname
order by count(*) desc
option 2: same thing with “old” join syntax
select au_fname, au_lname, count(*)
from authors a, titleauthor t
where a.au_id=t.au_id
group by au_fname, au_lname
order by count(*) desc
notes: even better answers are possible (the interviewer must understand SQL to be able to tell whether a given answer is correct); the titles table is not used in the solution.
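For example, one stronger answer (a sketch only; other formulations are equally valid) returns just the author(s) tied for the greatest title count, instead of relying on the reader to scan an ordered list:
-- return only the author(s) with the greatest number of titles
-- (grouping by au_id as well guards against two authors sharing a name)
select a.au_fname, a.au_lname, count(*) as title_count
from authors a
join titleauthor t on a.au_id = t.au_id
group by a.au_id, a.au_fname, a.au_lname
having count(*) = (select max(cnt)
                   from (select count(*) as cnt
                         from titleauthor
                         group by au_id) as x)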
Question:
There is a publisher and a subscriber.
On the publisher there is a stored proc that updates an order line record.
The orderline table has a composite primary key (PK) of orderID and productID.
This table is linked to another table called comment; the comment table has the same primary key as the orderline table, plus another field called comment.
When you run the stored proc on the publisher, everything runs fine. When you replicate the process (publish it to the subscriber) and run the stored proc, the subscriber throws an exception: "orderline data cannot be deleted". Where is the error?
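To make the setup concrete, a minimal sketch of the schema the question describes is given below; the column types, constraint names, and the foreign-key link are assumptions (the question does not give the actual DDL), and the answer itself is left for the candidate:
-- hypothetical DDL matching the question's description (names and types assumed)
create table orderline (
    orderID    int not null,
    productID  int not null,
    constraint PK_orderline primary key (orderID, productID)
)

create table comment (
    orderID    int not null,
    productID  int not null,
    comment    varchar(255) null,
    constraint PK_comment primary key (orderID, productID),
    constraint FK_comment_orderline foreign key (orderID, productID)
        references orderline (orderID, productID)
)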