Wednesday, September 30, 2009

QA/Testing - Interview Questions

Automated testing vs. manual testing?

Modern web applications are characterized by a multitude of elements and possible interactions within the GUI, all of which have to be considered during testing.
Besides reducing the testing effort, there is another reason to choose automated web testing: extensive GUIs with many elements can push testers to their limits when tested manually. Entering the same or similar data into hundreds of input masks, for example, is not an intellectual challenge, but it requires full concentration and can lead to decreasing motivation and inattentiveness.

How would you decide what to automate and what not to automate?
1. Automate only that which needs automating.
2. Design and build for maintainability.
3. Whether or not to automate: rules of thumb
  1. GUIs are difficult to automate. Despite software vendors telling you how easy record-and-playback functionality is to use, graphical interfaces are notoriously difficult to automate with any sort of maintainability. Application interfaces tend to become static more quickly than Web pages and are thus a bit easier to automate. Also, I have found that using Windows hooks is more reliable than the DOM interface. The key things to look for when deciding whether to automate a GUI are how static it is (the less it changes, the easier it will be to automate) and how closely the application is tied to the standard Windows libraries (custom objects can be difficult to automate).
  2. If possible, automate at the command-line or API level. Removing the GUI from the equation dramatically improves the reliability of test scripts. If the application has a command-line interface, not only does it lend itself to reliable automation, but it is also naturally data-driven, which is another green light to go forward with automation (see the first sketch after this list).
  3. Automate those things that a human cannot do. If you suspect that a certain function causes a memory leak but can't reproduce it manually in a reasonable amount of time, automate it. Also particularly interesting to automate are time-sensitive actions (requiring precise timing to capture a state change, for example) and very rapid actions (e.g., loading a component with a hundred operations a second).
  4. Stick with the golden rule in automation: do one thing, and do it well. A robust test case that performs a single operation will pay off more than a test case that covers more but requires heavy maintenance. If you design your test cases (or, preferably, library functions) to do single actions, and you write them robustly, you can soon chain a series of them together to cover the broad scenarios you would otherwise have written as one monolithic test case (see the second sketch after this list).
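
As a minimal sketch of item 2, here is a data-driven test that drives a hypothetical command-line tool (`calc`, assumed here to print the result of the arithmetic expression passed to it) through its CLI rather than a GUI:

```python
# Minimal sketch of a data-driven command-line test (item 2 above).
# "calc" is a hypothetical CLI tool assumed to print the result of the
# arithmetic expression passed as its argument, e.g. `calc "2+2"` -> "4".
import subprocess

import pytest

CASES = [
    ("2+2", "4"),
    ("10/4", "2.5"),
    ("-3*7", "-21"),
]

@pytest.mark.parametrize("expression,expected", CASES)
def test_calc_cli(expression, expected):
    # Drive the tool through its command line; no GUI in the loop,
    # so the test is fast, repeatable, and far less brittle.
    result = subprocess.run(
        ["calc", expression], capture_output=True, text=True, check=True
    )
    assert result.stdout.strip() == expected
```

Because the inputs and expected outputs live in a plain data table, adding coverage is a matter of adding rows, not writing new test logic.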
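
And as a rough illustration of item 4, here are single-action helpers that each do one thing, composed into a broader end-to-end case. The `session` dict and all function names are hypothetical stand-ins for a real application driver:

```python
# Rough illustration of item 4: small single-action helpers chained into a
# broader test case. The "session" dict and all names are hypothetical
# stand-ins for a real application driver.
def login(session, user, password):
    # Do one thing: authenticate.
    session["user"] = user
    return True

def add_item(session, item_id):
    # Do one thing: add a single item to the cart.
    session.setdefault("cart", []).append(item_id)

def checkout(session):
    # Do one thing: place the order and return an order id.
    return "ORDER-1" if session.get("cart") else None

def test_purchase_flow():
    # A broad scenario composed from single-action helpers, so the
    # maintenance for each action lives in exactly one place.
    session = {}
    assert login(session, "alice", "secret")
    add_item(session, "SKU-123")
    assert checkout(session) is not None
```
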
Given a product to test with no product specification or functional specification, how would you go about testing it?
Go through the UI and look at the various flows. Run through the application, exercising various transactions and actions, and note down the behavior you observe; this can serve as your expected behavior. Meet with the development, product management, and user experience teams to learn about the most common operations.


How would you test a calculator?
Divide your test cases into positive test cases (valid inputs with verified results) and negative test cases (invalid inputs and error handling).
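
As a minimal sketch, assuming a simple `divide()` function stands in for one calculator operation, positive cases verify valid arithmetic and a negative case verifies error handling for division by zero:

```python
# Minimal sketch of positive vs. negative calculator test cases. divide()
# is a stand-in for one operation of the calculator under test.
import pytest

def divide(a, b):
    return a / b

# Positive cases: valid inputs, expected results.
@pytest.mark.parametrize("a,b,expected", [(10, 2, 5), (-9, 3, -3), (7, 2, 3.5)])
def test_divide_valid(a, b, expected):
    assert divide(a, b) == expected

# Negative case: invalid input, verify the error is reported rather than
# producing a wrong answer.
def test_divide_by_zero():
    with pytest.raises(ZeroDivisionError):
        divide(10, 0)
```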


What are some test scenarios for testing traffic lights?
Divide your test cases into positive test cases and negative test cases. These kinds of questions are asked more to learn about your thought process and imagination.


Given a website with two inputs, what test cases would you perform?
Same as above: divide the test cases into positive and negative test cases.


What is boundary value analysis?
Boundary value analysis is a software testing design technique in which tests are designed to include representatives of boundary values: values on the edge of an equivalence partition, or the smallest incremental value on either side of an edge. The values can come from either the input or output ranges of a software component. Since these boundaries are common locations for errors that result in software faults, they are frequently exercised in test cases.
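
A small sketch, assuming a hypothetical age field that must accept integers from 18 to 65 inclusive; the tests exercise each boundary and the value just outside it:

```python
# Small sketch of boundary value analysis for a hypothetical age field that
# must accept integers in the inclusive range 18..65.
import pytest

def is_valid_age(age):
    # Stand-in for the component under test.
    return 18 <= age <= 65

# Each boundary plus the value one step outside it.
@pytest.mark.parametrize("age,expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above the lower boundary
    (64, True),   # just below the upper boundary
    (65, True),   # upper boundary
    (66, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) == expected
```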


What metrics can you think of?
Bugs - major/show-stopper, open P0/P1, fixed/verified, open bugs, etc.
Automation results - pass/fail/not executed/blocked.
Load and performance - expected benchmark vs. actual benchmark, historical data/stats, results per package or function, memory leaks, CPU/disk information, etc.
Misc - test repository results, etc.


What should a QA dashboard look like? What should be on a QA dashboard?
Please refer to my blog post about the QA dashboard.