Thursday, October 18, 2007

Testing FAQ

What is an entry criterion? What is an exit criterion?

Entry criteria define when to start testing; exit criteria define when to stop testing; suspend/resume criteria define when to pause testing and when to resume it.

White box testing is also called clear box or glass box testing.

How much do black box and white box testing each contribute to grey box testing, in percentage terms?
As we all know, grey is a combination of black and white. The mixture varies from case to case; in a testing scenario, depending on the client's requirements, the split might be 60-40, 75-25, or 85-15.

Sanity testing and smoke testing are often considered the same: both check whether a build is sane enough to be released for testing, i.e. they cover the major business functionality. The usual distinction is that smoke testing is done with test cases or scripts in hand, whereas sanity testing is done from knowledge of the requirements. Sanity testing is a kind of BVT (build verification testing); it is usually narrow and deep compared to smoke testing. In short, smoke testing follows a written script while sanity testing does not ("script" here means either written test cases or automated scripts).
More on sanity testing:
1. It is sometimes also called standard testing.
2. It is usually done on an entry-level build, i.e. at the initial stage of testing.
3. Based on this test, the build is either accepted for end-to-end testing or rejected.
4. It may contain the most important test cases.
5. It saves testing time.
Sanity testing: performed to check the basic behavior of the application. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

Sanity testing, by example: check whether testing can be carried out on the system at all. If the application hangs frequently, it fails right there, and further testing cannot be carried out.
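
To make the idea concrete, here is a minimal sketch of what a smoke test might look like in Python. The login function is a hypothetical stand-in included inline so the example runs on its own; a real smoke test would exercise the actual build instead.

```python
# Minimal smoke-test sketch. The application code is a stand-in defined
# inline so the example is self-contained; a real build would be imported.

def login(user, password):
    # Hypothetical stand-in for the application's login function.
    return user == "admin" and password == "secret"

def test_smoke_login():
    # Positive flow only: a smoke test checks that the happy path works,
    # not that wrong credentials are rejected.
    assert login("admin", "secret")

if __name__ == "__main__":
    test_smoke_login()
    print("Smoke test passed: build is sane enough for full testing")
```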


Difference between load and stress testing: in load testing we gradually increase the load on the system and observe how performance degrades. In stress testing we know the system's limits, deliberately go beyond them, and check the output.
Thread testing: this technique demonstrates key functional capabilities by testing a string of units that together accomplish a specific function in the application. Thread testing and incremental testing are usually used together.


Load testing vs. stress testing: both are types of performance testing. In load testing we gradually increase the load in terms of throughput and check the system's response time; we also find the maximum load the system can support. In stress testing we already know the maximum capacity and apply a load beyond that limit, checking whether it corrupts the database or crashes the system.



Information about integration testing:
1. Most of the time, testers perform integration testing.
2. Stubs are used in the top-down approach; a stub is a dummy routine that simulates a lower-level module that has not yet been integrated.
3. Drivers are used in the bottom-up approach; a driver simulates a higher-level calling module.
4. In integration testing we test modules that depend on each other, such as one function calling another; the called function can be replaced by a stub that simply returns the expected output.
5. It is software integration testing.

Verification is the process of confirming that something (software) meets its specification. Validation is the process of confirming that it meets the user's requirements.

When to stop testing?
1. When no more critical bugs are present.
2. When the time (or budget) allotted for testing is over.
3. When the product is released to market.
4. When the project has been cancelled (stopped).
5. When further testing adds no value.

Difference between load testing and stress testing:
1. Stress testing is subjecting a system to an unreasonable load while denying it the resources (e.g., RAM, disk, MIPS, interrupts, etc.) needed to process that load. The idea is to stress the system to the breaking point in order to find bugs that would make that break potentially harmful. The system is not expected to process the overload without adequate resources, but to behave (e.g., fail) in a decent manner (e.g., not corrupting or losing data). Bugs and failure modes discovered under stress testing may or may not be repaired, depending on the application, the failure mode, the consequences, etc. The load (incoming transaction stream) in stress testing is often deliberately distorted so as to force the system into resource depletion.
2. Load testing is subjecting a system to a (usually) statistically representative load. The two main reasons for using such loads are software reliability testing and performance testing. The term "load testing" by itself is too vague and imprecise to warrant use: do you mean a "representative load," an "overload," a "high load," etc.? In performance testing, load is varied from a minimum (zero) to the maximum level the system can sustain without running out of resources or having transactions suffer (application-specific) excessive delay.
3. A third use of the term is a test whose objective is to determine the maximum sustainable load the system can handle. In this usage, "load testing" is merely testing at the highest transaction arrival rate in performance testing.

Quality assurance & quality control: Quality assurance is a staff function, responsible for implementing the quality policy defined through the development and continuous improvement of software development processes. Quality assurance is an activity that establishes and evaluates the processes that produce products; if there is no need for process, there is no role for quality assurance. Quality control activities focus on identifying defects in the actual products produced. These activities begin at the start of the software development process with reviews of requirements, and continue until all application testing is complete. Testing is a quality control activity.

Load vs. stress vs. volume:
Load testing = increase the load by increasing the number of users.
Volume testing = increase the number of transactions while keeping the number of users constant.
Stress testing = increase both the number of users and the number of transactions; this applies stress to the application, and we check at which point it fails.
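
As an illustration of the load/stress distinction, here is a small Python sketch. The transaction function is an inline stand-in for a real request to the system under test, and the user counts are illustrative only.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    # Stand-in for one user request; a real test would hit the system under test.
    time.sleep(0.01)

def run_load(users, transactions_per_user):
    # Run the given number of concurrent "users" and time the whole batch.
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users * transactions_per_user):
            pool.submit(transaction)
    return time.monotonic() - start

if __name__ == "__main__":
    # Load test: gradually increase the number of users, watch response time.
    for users in (1, 5, 10, 20):
        print(f"{users:>3} users -> {run_load(users, 10):.2f}s")
    # Stress test: jump well beyond the expected limit and observe behaviour.
    print(f"stress (200 users) -> {run_load(200, 10):.2f}s")
```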

Waterfall model & V-model. The waterfall model is a software development model (a process for the creation of software) in which development is seen as flowing steadily downwards, like a waterfall, through the phases of requirements analysis, design, implementation, testing (validation), integration, and maintenance. The phases are followed strictly in order: requirements specification, design, construction (implementation or coding), integration, testing and debugging (verification), installation, maintenance. The V-model is a further development of the waterfall model. Looked at closely, the individual steps of the process are almost the same as in the waterfall model, so the description of the waterfall steps substitutes for them. There is, however, one big difference: instead of going down the waterfall in a linear way, the process steps are bent upwards at the coding phase to form the typical V shape. The reason is that each design phase was found to have a counterpart among the testing phases, and the two correlate with each other.

Batch testing: regression testing can also be called batch testing since, theoretically, after each fix one must run the entire batch of test cases previously run against the system, to ensure that it has not been damaged in an obscure way.
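
A minimal sketch of such a batch run in Python; the test case names below are hypothetical placeholders, and a real suite would exercise the actual application.

```python
# A regression "batch" is the full set of previously passing test cases,
# re-run after every fix. The cases here are trivial placeholders.

def test_login(): assert True
def test_search(): assert True
def test_checkout(): assert True

REGRESSION_BATCH = [test_login, test_search, test_checkout]

def run_batch():
    # Run every case in the batch and collect the names of any failures.
    failures = []
    for case in REGRESSION_BATCH:
        try:
            case()
        except AssertionError:
            failures.append(case.__name__)
    return failures

if __name__ == "__main__":
    failed = run_batch()
    print("regression clean" if not failed else f"regressions in: {failed}")
```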


Priority & severity of bugs. On what basis does one decide the priority and severity of a bug? I know that priority is business-based and severity is technical, but I have read one statement I would like explained or justified: "If the schedule has drawn close to the release date, then even if the bug's severity is high from a technical perspective, its priority may be set low because the functionality mentioned in the bug is not critical to the business."

Bug life cycle means the workflow of a bug: a bug has to go through the life cycle to be closed, and a defined life cycle ensures the process is standardised. The different states in a bug's life cycle are: 1. New, 2. Open, 3. Assigned, 4. Test, 5. Verified, 6. Deferred, 7. Reopened, 8. Duplicate, 9. Rejected, 10. Closed. In manual testing, bugs are usually reported to developers using a defect-tracking tool such as Bugzilla.
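
The life cycle can be pictured as a set of allowed state transitions. Below is a small Python sketch based on the states listed above; the exact transitions vary between defect trackers, so treat this as illustrative.

```python
# Sketch of a bug life cycle as allowed state transitions. Real trackers
# (e.g. Bugzilla) differ in detail; this mapping is illustrative only.

ALLOWED = {
    "New":      {"Open", "Rejected", "Duplicate"},
    "Open":     {"Assigned", "Deferred", "Rejected", "Duplicate"},
    "Assigned": {"Test"},
    "Test":     {"Verified", "Reopened"},
    "Verified": {"Closed"},
    "Deferred": {"Open"},
    "Reopened": {"Assigned"},
}

def move(state, new_state):
    # Enforce the workflow: only transitions listed above are legal.
    if new_state not in ALLOWED.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

if __name__ == "__main__":
    state = "New"
    for step in ("Open", "Assigned", "Test", "Verified", "Closed"):
        state = move(state, step)
    print("bug reached state:", state)
```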

Difference between bug & defect. Bug: an error found in the development environment before the product is shipped or delivered to the customer. Defect: an error found in the final product after it has been shipped or delivered to the customer.

Difference between bug and defect (another view): a defect is a mismatch between the actual result and the expected result; a bug is a defect accepted by the developer.

One more on sanity testing: it is also called TAT (Tester Acceptance Test). Testers look for a stable build to carry out their testing work; testing a build for stability is called sanity testing.

Thread testing means: a variation of top-down testing where progressive integration of components follows the implementation of subsets of the requirements, as opposed to integrating components by successively lower levels.

Examples of severity & priority issues. These are examples of high severity with low priority, and vice versa:
1. When executing a very complicated and unlikely scenario, the server crashes. Here severity is high, but priority is not.
2. A typo in the product name appears on the installation splash screen. Here severity is minimal, but the defect is unacceptable, so priority is high.
In brief: severity is how much the bug will affect customers who encounter it; priority is how urgently the bug needs to be fixed (often influenced by how likely customers are to come across it). Priority is decided by project managers and severity by test engineers.

Integration testing

Info about integration testing:
1. Most of the time, testers perform integration testing.
2. Many unit-tested modules are combined into a sub-system, which is then tested.
3. Stubs are used in the top-down approach; a stub is a dummy routine that simulates a lower-level module.
4. Drivers are used in the bottom-up approach; a driver simulates a higher-level calling module.
5. In integration testing we test modules that depend on each other, such as one function calling another; the called function can be replaced by a stub that simply returns the expected output.
6. The main aim of integration testing is to check that the modules are combined properly and give the expected result.
7. It is software integration testing.
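
A small Python sketch may help here: the tax-calculator stub stands in for a lower-level module that is not yet integrated (top-down), while the driver stands in for a higher-level caller (bottom-up). The order functions are hypothetical.

```python
# Illustrative stub and driver for integration testing, using a made-up
# order module whose tax calculator is assumed not to be ready yet.

def calculate_tax_stub(amount):
    # STUB: stands in for a lower-level module that is not yet integrated
    # (top-down integration). It just returns a canned value.
    return round(amount * 0.10, 2)

def place_order(amount, tax_fn=calculate_tax_stub):
    # Module under test: calls a lower-level function that may be stubbed.
    return amount + tax_fn(amount)

def driver():
    # DRIVER: stands in for the higher-level module that would normally
    # call place_order (bottom-up integration). It feeds test inputs and
    # checks the results.
    assert place_order(100.0) == 110.0
    print("integration check passed")

if __name__ == "__main__":
    driver()
```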

What is the use of testing?
1. To assure the quality of the product, we need to do testing.
2. In testing we find the bugs or errors in the system.
3. Testing helps ensure the product fulfils the client's needs.
Note: testing by itself does not guarantee quality, but quality assurance requires testing. We can do testing without QA, but we cannot assure quality without testing.

Some additional info: the sandwich (or umbrella) approach. The umbrella approach is the best approach for the early release of limited functionality, which is not possible with pure top-down or bottom-up integration. The disadvantage is that the combined use of drivers and stubs complicates test management.

During smoke, sanity, or BVT (build verification) testing, the test engineer concentrates on the stability of the application, i.e. only the positive flow is tested. For example, a system has several modules, and the test engineer tests them one by one in sequence. Take login as an example: you enter your ID and password, and if they are accepted you log in. Here we consider only the positive flow; we are not yet concerned with whether the system also accepts a wrong ID or password, only with whether the flow goes through. The same applies to all the other modules. If the test fails, the test engineer reports a defect to the developer and asks for modifications. Unless this smoke/BVT test passes, further functional and system testing cannot be conducted.

Answer on retesting: retesting means testing only the changed set of scenarios on the same application build; we test only the particular module to which changes have been made. Retesting also means executing the same test case after a bug fix to confirm the fix, or testing the same application with multiple sets of test data.

Retesting as regression testing:
1. It is done on the basis of selected test cases.
2. It is done when changes have been made to the application.
3. We test whether the modified system still meets its requirements.
4. We test the application on the basis of the developer's reply to the defect.

Severity of a bug reflects how much it hampers the other functionality of the application. Priority of a bug reflects how much it affects the user's expectations of the application, and thus how early it should be fixed. Example, using Notepad. High priority: File > New (Ctrl+N) does not work; the tester cannot proceed to test other functions until it is fixed, so it should be fixed as early as possible. Low priority: Help > About Notepad does not work; it affects little else, so it can be fixed last, or whenever the developer has time.
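
As a tiny illustration, triage can be thought of as sorting the open bugs by priority first and severity second. The bug records below are illustrative, reusing the Notepad examples above (1 = highest).

```python
# Sketch of triage ordering: the fix queue is sorted by priority first,
# then severity. The bug data is illustrative only.

BUGS = [
    {"title": "File > New (Ctrl+N) does nothing", "priority": 1, "severity": 2},
    {"title": "Help > About Notepad does not open", "priority": 3, "severity": 3},
]

# Lower number = more urgent/serious (1 = highest).
fix_queue = sorted(BUGS, key=lambda b: (b["priority"], b["severity"]))

for bug in fix_queue:
    print(f"P{bug['priority']}/S{bug['severity']}: {bug['title']}")
```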

White box testing is also called structural testing, and black box testing is also called functional testing.

If a new requirement comes in the middle of performance testing: by the time you reach performance testing, functionality testing is already finished. Once the new requirement is identified, it must be reported to the developers, added to the requirement specification document, and the application code updated, since an important requirement cannot be ignored. The tester then has to regression-test the application's impacted modules after the new requirement has been added and the code updated. Say there are 10 modules and the first 5 are impacted: while the code is being updated, the tester can continue performance testing with the 6th, 7th, and remaining modules. If the 7th or 8th module also depends on, or is impacted by, the new requirement, skip those modules and continue performance testing the rest in parallel.

Involvement in documentation as a test engineer: as a test engineer, we are generally not involved in high-level documentation; the test plan, minutes of meetings, and other technical documents are prepared by the team lead and project manager. We may get involved in preparing some low-level documents, for instance at the beginning of a project, to show that we have understood the application. And, of course, the defect report is prepared by us and then verified and approved by the team leads.

answer to "portlet testing"Portlet is some part of the website pages where that particular fragment of information or say some ad's is collected from different source website.this is what i know about portlet . there are some checklist carried out for portlet testing. they are...This checklist summarizes key tests that should be performed on every portlet and provides basic information on troubleshooting. All the following tests should be performed in multiple implementations of the portal.1.Test alignment/size display with multiple stylesheets and portal configurations.2.Test all links and buttons within the Portlet display. (If there are errors, check that all forms and functions are uniquely named, and that the preference and gateway settings are configured correctly in the Portlet Web Service editor.)3.Test setting and changing preferences. (If there are errors, check that the preferences are uniquely named and that the preference and gateway settings are configured correctly in the Portlet Web Service editor.)4.Test communication with the backend application. Confirm that actions executed through the portlet are completed correctly. (If there are errors, check the gateway configuration in the Portlet Web Service editor.)5.Test localized portlets in all supported languages. (If there are errors, make sure that the language files are installed correctly and are accessible to the portlet.)6.If the portlet displays secure information or uses a password, use a tunnel tool to confirm that any secure information is not sent or stored in clear text.7.If backwards compatibility is supported, test portlets in multiple versions of portal.

What is the difference between exploratory and ad hoc testing? Please explain.

When no documents are available, the test engineer tests the application using previous knowledge and experience plus a rough idea of the application's functionality; this is called ad hoc testing. Exploratory testing is likewise conducted without proper documents, but in exploratory testing the test cases are written after exploring the application's functionality, and only then is the testing conducted.


Difference between verification and validation: people have already defined verification and validation above, but I am still a bit confused between the two. Can anyone provide the exact difference? Thank you!

Difference between verification and validation: verification confirms that the product is designed to deliver all the required functionality; validation confirms that the functionality is the intended behaviour of the product as defined in the specification. An analogy: suppose your mother cannot go to the market and asks you to go and buy the things she wants. You take a piece of paper, write down all the things she wants, and check the list with her to be sure everything has been written down; that is verification. After returning from the market, you check the list again and compare it against the things you actually bought, to see whether anything is missing; that is validation.

Sanity testing: also called testability testing or build verification testing. After receiving the initial build from the developer, the tester estimates the stability of that build: whether it is ready for real testing to start or not.

Sanity testing or smoke testing: soon after the build is released from the development department, an initial test is performed on the application to check whether the functionality works properly and whether all the features and objects are available.

Retesting: a process where we test an already-tested build with a different set of values.

Retesting: performing testing on an application that has already been tested, with different sets of data, in order to ensure that a defect is reproducible.

Retesting checks whether the functionality of the previous and present builds is stable. Regression testing: during testing, if we find any mismatch in the build, we send that mismatch (defect) to the developer; after correcting the defect, the developer sends the modified build back to the testing team. The testers then re-execute the modified build along with the previously failed and passed test cases related to the modification; this is called regression testing.

White box testing is also known as: 1. glass box testing, 2. clear box testing, 3. transparent testing.

Wednesday, October 17, 2007

TESTING METHODOLOGIES

Acceptance Testing
Testing the system with the intent of confirming readiness of the product and customer acceptance. Acceptance testing, which is a form of black box testing, will give the client the opportunity to verify the system functionality and usability prior to the system being moved to production. The acceptance test will be the responsibility of the client; however, it will be conducted with full support from the project team. The Test Team will work with the client to develop the acceptance criteria.

Ad Hoc Testing

Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Sometimes ad hoc testing is referred to as exploratory testing.

Alpha Testing

Testing after code is mostly complete or contains most of the functionality and prior to users being involved. Sometimes a select group of users are involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.

Automated Testing

Software testing that utilizes a variety of tools to automate the testing process, reducing the need for a person to test manually. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tool and the software being tested to set up the tests.

Beta Testing

Testing after the product is code complete. Betas are often widely distributed or even distributed to the public at large in hopes that they will buy the final product when it is released.

Black Box Testing

Testing software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document.

Compatibility Testing

Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.

Configuration Testing

Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.

End-to-End Testing

Similar to system testing, the 'macro' end of the test scale involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Functional Testing

Testing two or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that the module performs its intended functions as stated in the specification and establishing confidence that a program does what it is supposed to do.

Independent Verification and Validation
(IV&V)
The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software. A term often applied to government work or where the government regulates the products, as in medical devices.

Installation Testing

Testing with the intent of determining if the product will install on a variety of platforms and how easily it installs. Testing full, partial, or upgrade install/uninstall processes. The installation test for a release will be conducted with the objective of demonstrating production readiness. This test is conducted after the application has been migrated to the client's site. It will encompass the inventory of configuration items (performed by the application's System Administration) and evaluation of data readiness, as well as dynamic tests focused on basic system functionality. When necessary, a sanity test will be performed following the installation testing.

Integration Testing

Testing two or more modules or functions together with the intent of finding interface defects between the modules or functions. This testing is completed as a part of unit or functional testing and sometimes becomes its own standalone test phase. On a larger level, integration testing can involve putting together groups of modules and functions with the goal of completing and verifying that the system meets the system requirements. (see system testing)

Load Testing

Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation.

Parallel/Audit Testing

Testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.
Performance Testing

Testing with the intent of determining how quickly a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.

Pilot Testing

Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. Often is considered a Move-to-Production activity for ERP releases or a beta test for commercial products. Typically involves many users, is conducted over a short period of time and is tightly controlled. (see beta testing)

Recovery/Error Testing

Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Regression Testing

Testing with the intent of determining if bug fixes have been successful and have not created any new problems. Also, this type of testing is done to ensure that no degradation of baseline functionality has occurred.

Sanity Testing

Sanity testing will be performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It will normally include a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

Security Testing

Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.

Software Testing

The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The organization and management of individuals or groups doing this work is not relevant. This term is often applied to commercial products such as internet applications. (contrast with independent verification and validation)

Stress Testing

Testing with the intent of determining how well a product performs when a load is placed on the system resources that nears and then exceeds capacity.

System Integration Testing

Testing a specific hardware/software installation. This is typically performed on a COTS (commercial off the shelf) system or any other system comprised of disparate parts where custom configurations and/or unique installations are the norm.
Unit Testing

Unit testing is the first level of dynamic testing and is first the responsibility of the developers and then of the testers. Unit testing is considered complete when the expected test results are met or the differences are explainable/acceptable.
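
For illustration, here is a minimal unit test in Python's unittest style. The apply_discount function is a hypothetical unit included inline so the example is self-contained.

```python
import unittest

def apply_discount(price, percent):
    # Unit under test: a single function, checked in isolation.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class DiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        # Expected result computed from the specification: 25% off 200 is 150.
        self.assertAlmostEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        # Differences from expected behaviour must be explainable; here an
        # out-of-range input is expected to raise an error.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```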

Usability Testing

Testing for 'user-friendliness'. Clearly this is subjective and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

White Box Testing

Testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose.
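
As a contrast with the black box definition above, here is a small Python sketch of white box thinking: the test inputs are chosen by reading the code so that every branch of the (hypothetical) function below is exercised at least once.

```python
# White-box sketch: test cases derived from the code's internal structure,
# aiming to drive each branch of the function at least once.

def classify(age):
    if age < 0:
        return "invalid"      # branch 1
    elif age < 18:
        return "minor"        # branch 2
    else:
        return "adult"        # branch 3

def test_all_branches():
    assert classify(-1) == "invalid"   # drives branch 1
    assert classify(10) == "minor"     # drives branch 2
    assert classify(30) == "adult"     # drives branch 3

if __name__ == "__main__":
    test_all_branches()
    print("all branches covered")
```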