Methodology of Test Effort Estimation


Test estimation in the software testing industry is similar to the time management we do in our day-to-day lives. To understand what test effort estimation is, let us first understand the term estimation.

Estimation is the intelligent anticipation of the amount of work that must be done and the resources (human, financial, equipment and time) needed to perform that work at a future date, in a defined environment, using specific methods.

Most of you who think you have never done test estimation before have actually done it without realizing it.

Let's consider an example. You want to be at the office by 9:00 AM, and you estimate the commute at, say, 45 minutes. You also take 15 minutes for your morning walk, 30 minutes to get ready and 20 minutes for breakfast. So to reach the office on time you would need to wake up no later than 7:10 AM.
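
The arithmetic in the example above can be sketched in a few lines of Python (the times come from the example; the helper function is ours, for illustration only):

```python
from datetime import datetime, timedelta

def latest_wake_up(arrival, activity_minutes):
    """Work backwards from the required arrival time by subtracting
    the total estimated duration of all activities."""
    return arrival - timedelta(minutes=sum(activity_minutes))

# Office at 9:00 AM; commute 45 min, walk 15, getting ready 30, breakfast 20
arrival = datetime(2024, 1, 1, 9, 0)
wake_up = latest_wake_up(arrival, [45, 15, 30, 20])
print(wake_up.strftime("%H:%M"))  # 07:10
```

Test estimation works the same way: sum the estimates for the individual activities to arrive at the overall figure.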

Now let us understand test effort estimation.

Test estimation is the process of estimating the testing cost, testing effort, testing size and testing schedule for a particular software testing project in a specific environment, using specified methods, testing tools and test techniques.

Software test estimation is important because it is directly linked to the project cost and deadline. Estimation is based on:

# Documents/Knowledge available: Requirement Specification Document, Domain Knowledge, Tool Understanding

# Assumptions: Requirement Doc is complete, builds will be stable etc.

# Calculated Risks: Manpower cost, lack of system understanding, backup resources available or not etc.

# Past Experience: Past experience or historical data

There are different standard and non-standard methods of test estimation. Many managers/leads are not comfortable doing estimation because it is a time-consuming activity, so they follow a non-standard approach based on their past experience. But if they are asked to work on a new technology or domain, they find it difficult to produce a test estimate.

Methods of software test estimation:

1) Function Point Analysis / Test Point Analysis:
FPA is an ISO-recognized method that measures the functional size of an information system or application. Size reflects the amount of functionality from the functional, or user, point of view. It is independent of the technology used to implement the system and depends entirely on the Software Requirements Specification (SRS).

Formula to calculate FP:

FP = UFP * VAF

Where UFP is the sum of the complexities of the basic functions: Internal Logical Files (ILF), External Interface Files (EIF), External Inputs (EI), External Outputs (EO) and External Inquiries (EQ)

VAF is the value adjustment factor.

Number of test cases = FP^1.2 (Capers Jones formula: function points raised to the power 1.2)

Test effort = Number of test cases * (percentage of development effort / 100)

Drawback: Detailed SRS is required.
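
As an illustration only, the formulas above can be chained together. The figures below are hypothetical, and the Capers Jones rule is applied in its commonly cited form (function points raised to the power 1.2):

```python
def fp_estimate(ufp, vaf, dev_effort_pct):
    """Function Point based sizing and a rough test-effort figure.

    ufp            -- unadjusted function points (sum of ILF/EIF/EI/EO/EQ complexities)
    vaf            -- value adjustment factor
    dev_effort_pct -- testing effort expressed as a percentage of development effort
    """
    fp = ufp * vaf
    test_cases = fp ** 1.2                        # Capers Jones rule of thumb: FP^1.2
    test_effort = test_cases * (dev_effort_pct / 100)
    return fp, test_cases, test_effort

# Hypothetical project: 200 unadjusted function points, neutral VAF,
# testing budgeted at 50% of development effort.
fp, cases, effort = fp_estimate(ufp=200, vaf=1.0, dev_effort_pct=50)
print(f"FP={fp:.0f}, test cases ~{cases:.0f}, test effort ~{effort:.0f}")
```

The units of the resulting effort figure depend on the unit in which development effort was measured; the percentage merely scales the test-case count.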

2) Work Breakdown Structure (WBS):
Break each testing task down (mapping tasks to deliverables as per the requirement document) into the smallest chunks or sub-tasks, then estimate each sub-task. The estimate for a task is the sum of its sub-task estimates.
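
A minimal sketch of the WBS roll-up, with hypothetical tasks, sub-tasks and person-hour figures:

```python
# Each task is broken into sub-tasks; each sub-task gets its own estimate
# (person-hours, hypothetical numbers), and estimates are summed upward.
wbs = {
    "Test planning": {"Review SRS": 8, "Write test plan": 12},
    "Test design": {"Write test cases": 24, "Review test cases": 6},
    "Test execution": {"Smoke tests": 4, "Functional tests": 40},
}

task_totals = {task: sum(subs.values()) for task, subs in wbs.items()}
grand_total = sum(task_totals.values())
print(task_totals)
print(grand_total)  # 94
```

Estimating at the sub-task level is easier and more accurate than guessing a single figure for the whole project, which is the main appeal of WBS.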

3) Point Estimation Technique:
It resembles the WBS estimation method: break every testing task into sub-tasks and then make the following three estimates for each one:

# Best Case/ Positive scenarios: Where everything goes right, P

# Worst Case/Negative scenarios: Where everything goes wrong, N

# Average Case/Most likely scenarios: Where a few things go right and a few deviate from the actual plan, A

Test Effort Estimate = (P + 4*A + N) / 6
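
A small sketch of the weighted three-point (PERT-style) calculation, with hypothetical figures in hours:

```python
def three_point_estimate(best, worst, average):
    """Weighted three-point estimate: the average/most-likely case is
    weighted four times, and the whole sum is divided by six."""
    return (best + 4 * average + worst) / 6

# Hypothetical sub-task: best case 10h, worst case 28h, most likely 16h
print(three_point_estimate(best=10, worst=28, average=16))  # 17.0
```

Weighting the most likely case pulls the estimate toward it while still letting the extremes nudge the result.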

4) Delphi Method:
It is similar to the WBS estimation method; here tasks and sub-tasks are allocated to team members or experts, who each estimate how much time they will take to complete each task. Their estimates are then consolidated to reach the final estimate for each task.
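
One simple way to consolidate the experts' figures is to average them; a sketch with hypothetical estimates (real Delphi rounds iterate until the experts converge, which is omitted here):

```python
from statistics import mean

# Each expert's estimate (hours) per task; the final figure per task is a
# simple consolidation (here, the mean) of the individual estimates.
estimates = {
    "Login tests": [6, 8, 10],
    "Checkout tests": [12, 16, 14],
}
final = {task: mean(vals) for task, vals in estimates.items()}
print(final)  # Login tests: 8 hours, Checkout tests: 14 hours
```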

5) Use Case Points:
A use case is a document that describes the behavior of and interaction with the system in response to a specific query or action by an actor, where an actor is either an end user or a stakeholder. An interaction is initiated by the primary actor with a specific objective in mind, and the system responds while safeguarding the interests of all concerned actors. Depending on the requests made and the various conditions surrounding them, different system behaviors/flows open up. Test cases are then based on the use cases.

Conclusion: An estimate is not a closed document but a living one. You should keep your estimation document updated whenever you sense a deviation from what is actually happening in the field, and it should be reviewed continuously over the course of the project.

SAP Test Acceleration and Optimization (SAP TAO)

Purpose of SAP TAO:

The highly secure and mission-critical nature of enterprise resource planning (ERP) data requires many SAP customers to test their ERP applications and business processes regularly. Many SAP customers use expensive customized manual procedures to test processes and applications. Manual testing is time-consuming, and frequently requires a team of experienced quality assurance (QA) professionals. Subject matter experts also often need to spend a lot of time communicating the process data flow to testers. 
SAP Test Acceleration and Optimization streamlines the creation and maintenance of ERP business process testing. 

SAP Test Acceleration and Optimization helps QA specialists to break down a test into components which are:

Assembled into test cases in a simple interface, using drag and drop 
Parameterized for flexible reuse, such as reusing a test that has updated data 
Maintained easily and inexpensively, even when screens, flows, or service packs change 

SAP Test Acceleration and Optimization is designed for SAP-GUI-based applications. SAP Test Acceleration and Optimization users should be experienced quality managers who are familiar with the SAP Quality Center application by HP, SAP GUI ERP applications, and business process testing.

Automatic testing with SAP Test Acceleration and Optimization maximizes: 

Testing deployment 
SAP Test Acceleration and Optimization, with SAP Quality Center, dramatically reduces the amount of time required to build and execute test scripts. 

Reuse 
SAP Test Acceleration and Optimization eliminates the need to create new tests whenever a component changes. If one component in a group of tests changes, replace that component and re-consolidate the tests.

Maintenance 
SAP Test Acceleration and Optimization records component parameters. It provides a Microsoft Excel spreadsheet to save parameters for reuse and maintenance. SAP Test Acceleration and Optimization helps you to determine the need for repairs, and helps you to repair your components.

Robustness 
The SAP Test Acceleration and Optimization inspection process ensures that SAP Test Acceleration and Optimization tests are more robust during changes. Inspection examines the parameter in a component, not just the screen object behavior.
SAP TAO

The SAP Test Acceleration and Optimization client application runs on Windows systems. It performs six key functions:
1. PFA (Process Flow Analyzer)
2. Inspection/UI Scanner
3. Import/Export
4. Change Analysis
5. Repository
6. Consolidator

Configuration

License Validation:
* Once installation is complete, double-click the SAP TAO icon on the desktop. The SAP TAO UI appears.
* Click the "Configuration" link at the top right of the screen and select the "License" tab.
* Select the Solution Manager system from the system list.
* Click "Test SAP Connection" (the SAP TAO License text box on the right turns green, with the text "Enabled").
Connection Settings:

* Click the Connect module on the left-hand side panel of the SAP TAO UI.
* Select the SAP managed system from the list and provide valid credentials.
* Click the "Test SAP Connection" button and then click the "Save" button.
* The icon displaying the currently selected back-end system turns green.
* Provide valid QC connection details along with the domain and project details, and click the "Save" button.
* The icon displaying the QC connection state turns green.
Configuration Settings

Click on the Configuration Link and set the following in the respective tabs as described below.

Inspection /UI Scanner

The Inspection tab page in the SAP Test Acceleration and Optimization client selects multiple SAP GUI screens and transactions for testing, in an easy-to-use interface. It then determines whether these screens are valid, and sends them to SAP Quality Center as screen components. 

Set up Inspection

Upload components to SAP Quality Center:

Select this option to upload the screen components to SAP Quality Center. If this option is not selected, the parameters Overwrite components and Delimiter for Component/Unique Identifier on the Import/Export tab page are used to name new components.

Component path

The location of the components created during inspection. 
Duplicate components are overwritten without confirmation. 

UI Scanner: 
A plug-in module for HP QuickTest Professional that enables you to collect information from one screen at a time and send the screen objects to HP Quality Center as a component. The UI Scanner scans all objects on the screen, including dynamically generated objects. It requires an active QuickTest Professional installation on your local workstation.

Import/export 
Import/export exchanges components between the SAP Test Acceleration and Optimization client and SAP Quality Center. The import/export module does the following: 

Export components from the inspector in the SAP Test Acceleration and Optimization client to SAP Quality Center 
Import components from SAP Quality Center to the SAP Test Acceleration and Optimization client 
Export components from the local memory to SAP Quality Center 
Export a component from the UI scanner and send it to SAP Quality Center, in the background 

PFA also uses import/export while creating the test components

Procedure 
1. Choose Import/Export tab page. 
2. Select the required options. 

Process flow analyzer:
The process flow analyzer (PFA) records all user interactions, and the sequence of screens, in a business process, and stores them in the SAP Test Acceleration and Optimization repository. It automates inspection, retrieval of dynamic SAP GUI properties at runtime, and creation of components. 

Procedure 
On the PFA tab page, make the following settings: 
The PFA will create a second spreadsheet to store the recorded outputs.
Do Not Use Screen Components 
Whether screen components are to be inserted in the test. If selected, only default components will be used. Do not select this option unless required. 

Steps for process flow analysis:- 
1. Choose the option to add a transaction. 
2. Select a transaction, e.g. VA01. The name for the analysis is already populated, with a time stamp concatenated to it; you may change it. 
3. Choose "Start" for the process flow analysis.
4. Execute the process flow as a business user. 
5. After executing the transaction completely, stop the PFA by clicking the "Stop the PFA" button in the PFA controller window. 
6. A report is generated which captures the activities performed by the business user. 

Change analysis:
Change analysis analyzes the impact of changes due to upgrades, SAP patches, or custom development, on a test, component, or consolidated component. The impact is found by comparing the results of the technical bill of materials (TBOM), transport requests and SAP patches.

Set Folder for BPCA 
In this section, you specify: 
The folder created in SAP Quality Center to store the test set created by BPCA 
Whether to use BPCA result ID as the test set name 
Change analyzer functionality helps in repairing tests that are impacted by a software change. SAP Test Acceleration and Optimization relies on the SAP Solution Manager Business Process Change Analyzer (BPCA) result. The BPCA result ID can be searched on the basis of a solution or project.

Steps for change analysis:- 
1. Input your result ID and choose "Change Impact Analysis". 

Repository 
The SAP Test Acceleration and Optimization repository is part of the SAP Solution Manager system, and stores data required to create, optimize and maintain components and tests. The repository contains the following:
User interactions and the sequence of screens in a business process 
Information specific to SAP Test Acceleration and Optimization, that cannot be retrieved by other tools 
Results or states during process flow analysis, before the component is created 

The SAP Test Acceleration and Optimization repository tools comprise the following. 

Component Explorer 
The user can see a list of all the components he or she has created for a specified QC domain and QC project.

PFA Explorer 
The user can search for PFAs created so far, depending on search strings, and can specify the property to search on. The properties that can be searched on are:- 
• User 
• Analysis Name 
• Transaction 
• System 
• Client 
• Date Time 
• Language 

Once the search is done, the user can click on one of the PFAs to see more technical details.

Consolidation
Consolidation creates a single component from the objects and data in an SAP Quality Center test. The component contains all the code and screen elements in a test. It executes much faster than the individual components and helps you to maintain business processes. Consolidation also takes dependent tests into account.

GUI and Usability Test Scenarios



1. All fields on page (e.g. text box, radio options, dropdown lists) should be aligned properly
2. Numeric values should be right justified unless specified otherwise
3. Enough space should be provided between field labels, columns, rows, error messages etc.
4. Scroll bar should be enabled only when necessary
5. Font size, style and color for headline, description text, labels, infield data, and grid info should be standard as specified in SRS
6. Description text box should be multi-line
7. Disabled fields should be grayed out and user should not be able to set focus on these fields
8. Upon click of any input text field, mouse arrow pointer should get changed to cursor
9. User should not be able to type in drop down select lists
10. Information filled by users should remain intact when there is error message on page submit. User should be able to submit the form again by correcting the errors
11. Check if proper field labels are used in error messages
12. Dropdown field values should be displayed in defined sort order
13. Tab and Shift+Tab order should work properly
14. Default radio options should be pre-selected on page load
15. Field specific and page level help messages should be available
16. Check if correct fields are highlighted in case of errors
17. Check if dropdown list options are readable and not truncated due to field size limit
18. All buttons on page should be accessible by keyboard shortcuts and user should be able to perform all operations using keyboard
19. Check all pages for broken images
20. Check all pages for broken links
21. All pages should have title
22. Confirmation messages should be displayed before performing any update or delete operation
23. Hour glass should be displayed when application is busy
24. Page text should be left justified
25. User should be able to select only one radio option, but any combination of check boxes.
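
Scenarios 19 and 20 above (broken images and broken links) lend themselves to automation. A minimal sketch using only the Python standard library collects the link and image targets from a page; actually fetching each URL to verify its status is left out here:

```python
from html.parser import HTMLParser

class LinkAndImageCollector(HTMLParser):
    """Collect <a href> and <img src> targets from an HTML page, the first
    step toward checking for broken links and broken images."""
    def __init__(self):
        super().__init__()
        self.links, self.images = [], []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("href"):
            self.links.append(attrs["href"])
        elif tag == "img" and attrs.get("src"):
            self.images.append(attrs["src"])

page = '<a href="/home">Home</a><img src="logo.png"><a>no href</a>'
collector = LinkAndImageCollector()
collector.feed(page)
print(collector.links, collector.images)  # ['/home'] ['logo.png']
```

Each collected URL would then be requested, and any non-2xx response reported as a broken link or image.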

General Web Testing Scenarios


1. All mandatory fields should be validated and indicated by asterisk (*) symbol
2. Validation error messages should be displayed properly at correct position
3. All error messages should be displayed in same CSS style (e.g. using red color)
4. General confirmation messages should be displayed using CSS style other than error messages style (e.g. using green color)
5. Tool tips text should be meaningful
6. Dropdown fields should have first entry as blank or text like 'Select'
7. Delete functionality for any record on page should ask for confirmation
8. Select/deselect all records options should be provided if page supports record add/delete/update functionality
9. Amount values should be displayed with correct currency symbols
10. Default page sorting should be provided
11. Reset button functionality should set default values for all fields
12. All numeric values should be formatted properly
13. Input fields should be checked for max field value. Input values greater than specified max limit should not be accepted or stored in database
14. Check all input fields for special characters
15. Field labels should be standard e.g. field accepting user's first name should be labeled properly as 'First Name'
16. Check page sorting functionality after add/edit/delete operations on any record
17. Check for timeout functionality. Timeout values should be configurable. Check application behavior after operation timeout
18. Check cookies used in an application
19. Check if downloadable files are pointing to correct file paths
20. All resource keys should be configurable in config files or database instead of hard coding
21. Standard conventions should be followed throughout for naming resource keys
22. Validate markup for all web pages (validate HTML and CSS for syntax errors) to make sure it is compliant with the standards
23. Application crash or unavailable pages should be redirected to error page
24. Check text on all pages for spelling and grammatical errors
25. Check numeric input fields with character input values. Proper validation message should appear
26. Check for negative numbers if allowed for numeric fields
27. Check amount fields with decimal number values
28. Check functionality of buttons available on all pages
29. User should not be able to submit a page twice by pressing the submit button in quick succession.
30. Divide by zero errors should be handled for any calculations
31. Input data with first and last position blank should be handled correctly
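
Several of the general scenarios above (max field length in #13, numeric input in #25, negative numbers in #26, leading/trailing blanks in #31) can be expressed as one reusable validation check. A minimal illustrative sketch; the function and its rules are ours, not from any specific framework:

```python
import re

def validate_field(value, max_len=None, numeric=False, allow_negative=True):
    """Return a list of validation errors for a single input field."""
    errors = []
    if max_len is not None and len(value) > max_len:
        errors.append("exceeds maximum length")          # scenario 13
    if numeric:
        if not re.fullmatch(r"-?\d+(\.\d+)?", value):
            errors.append("must be numeric")             # scenario 25
        elif not allow_negative and value.startswith("-"):
            errors.append("negative values not allowed") # scenario 26
    if value != value.strip():
        errors.append("leading/trailing whitespace")     # scenario 31
    return errors

print(validate_field("abc", numeric=True))                      # ['must be numeric']
print(validate_field("-5", numeric=True, allow_negative=False)) # ['negative values not allowed']
print(validate_field(" John "))                                 # ['leading/trailing whitespace']
```

A test suite would drive such a check against every input field on a page, comparing the errors it reports with the messages the application actually displays.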

Cloud Testing: Issues and Challenges

Over the past few years, "cloud" has become a buzzword across the IT industry. Testing in the cloud has greatly reduced costs, especially for mobile applications. Cloud manifests itself in three forms: Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS). However, Software-as-a-Service and Infrastructure-as-a-Service have recently emerged as the most important for software testing services.

Jerry Gao, Xiaoying Bai and Wei-Tek Tsai, in a paper titled "Cloud Testing – Issues, Challenges, Needs and Practice" published in Software Engineering: An International Journal, highlight the major issues and challenges in cloud testing.

1.    On-Demand Test Environment Construction
The question that often arises is how to build a test environment for on-demand cloud testing services in a systematic or automatic way. According to Gao et al., current cloud technology does not offer supporting solutions to help engineers build a cost-effective cloud test environment.

2.    Scalability and Performance Testing
A survey by Gao and others found that many published papers discuss performance testing and solutions; however, they only "focus on scalability evaluation metrics and frameworks for parallel and distributed systems." The current metrics, frameworks and solutions do not support features such as dynamic scalability and cost models.

3.    On-Demand Testing Issues and Challenges
Software testing services in TaaS should be controlled and managed with on-demand testing requests in mind. This raises many issues and challenges, such as identifying a test process for TaaS that supports on-demand automated testing, or identifying approaches to help engineers cope with the breakdown of test cases or test scripts.

4.    Regression Testing Issues and Challenges
Software changes and bug fixes bring in regression testing issues and challenges, which on-demand cloud testing services should address. 

3 Reasons Behind Inadequate Testing of Mobile Apps


The present century ushered in the era of mobile technology, which has opened up many avenues for organizations to grow. This era also brought various innovations, one of which is the mobile applications we now depend on for almost everything. The Strategy Analytics App Ecosystem Opportunities (AEO) forecast – Mobile Apps Revenue Forecast: 2008–2017 – predicts that by 2017 the smartphone mobile app market will generate more than $35B, growing from less than $1B in 2009.

However, according to a recent study by Capgemini and Sogeti in conjunction with HP, two-thirds of mobile application companies are inadequately testing their apps. It was found that among organizations that conduct quality assurance on mobile apps, 64% focused on performance and 46% on functionality, while only 18% focused on security. "Consistent and reliable software applications have become critical to the operations of many organisations. Yet the lack of confidence in most companies' internal abilities to monitor and test the quality of their software is resounding, particularly when it comes to mobile applications," said Michel de Meijer, Global Service Line Testing Lead, Capgemini Group.

Jennifer Lent on searchsoftwarequality.com highlighted some of the reasons why many software testing organizations are not giving mobile app testing the priority it deserves. Take a look at what she has to say.

Testing organizations are not serious about mobile apps

Many testing organizations look at mobile apps as "mini smartphone apps". Steve Woodward of Cloud Perspectives said that many testing organizations have the mentality that the application should simply work, and that it wouldn't be a big problem if there are defects in it. Testing organizations shouldn't have this mentality, as many of the applications flooding the market today are designed with various business objectives in mind.

Evaluating app performance in various environments
Mobile applications are expected to perform the same way across different real-world conditions. But testing mobile applications is often restricted to the lab, and it is impossible to replicate there all the conditions mobile application users will experience in the real world, informed Matt Johnston of uTest.

A Low Price Tag
With the number of mobile applications increasing day by day, there is cut-throat competition among developers to deliver an app at a lower price before the release deadline. Thus, to bring down costs, many restrict testing to the performance and functionality aspects and ignore the need to security-test their apps.

5 Tips to Choose the Best Automated Mobile Testing Tool


Mobile applications have revolutionized our lives. Today, with the touch of a button, we can access any amount of information, get directions to a restaurant, book our flight tickets or even order a cab. The shift from the more traditional desktops to smartphones has resulted in an increase in the demand for mobile applications.

However, the number of smartphones with different features and specifications is increasing daily, and this poses many challenges to testers. Brian MacKenzie, in a blog post on northwaysolutions.com, highlighted some tips to help mobile app testers choose the best automation tools to test applications.

1) Reusable Scripts
Testers should ensure that test scripts can be reused across devices running the same operating system, irrespective of the version. There are various solutions in the market today that claim to meet this requirement; in reality many of them don't, although they come with taglines such as "object based scripting" and "cross OS scripting".

2) Support for Emulators/Simulators and Physical Devices
Testers should be able to record and run scripts on physical devices as well as on simulators and emulators.

3) Web Application Support
Hybrid apps, web-based apps and native apps each have their own advantages and disadvantages, not only for developers but also for testers and users. Hybrid and web-based apps are becoming popular and are likely to replace native apps. Before selecting a tool, the tester should ensure that it supports all types of apps.
 
4) Interruptions Shouldn't Cause the Test to Fail
Common interruptions such as phone calls and messages shouldn't cause a test to fail. Testers should ensure that the tool isn't affected by such interruptions and can resume once the interruption is over.

5) Integration with Performance Testing Tools
As poor application performance can affect revenue, testers should ensure that the solution they select can be integrated with other performance testing tools. Furthermore, the solution should be able to measure RAM, disk, battery and CPU usage.

5 Skills Every Tester Should Have


The ever-increasing complexity of applications and the need for software in various business organizations have changed the face of software testing. Users expect their applications to be not only user friendly but also defect free, and this has increased the tester's responsibility. Testers are no longer viewed only as people responsible for finding bugs and defects in software; they are now seen as people who can instill confidence in the minds of users.

Milind Limaye on beyondtesting highlighted some of the skills that every tester is expected to possess.

1.    Communication
Testers are expected to be not only good listeners but also good presenters. They need to communicate with management, users and developers before, during and after development; prepare test cases and test logs; and present test reports. A tester's communication skills include body language, tone, writing style and choice of words.

2.    Domain Knowledge
Although testers are not expected to be domain experts, they are expected to have a basic understanding of the application. This helps them identify the possible defects a user might face. According to Milind, the tester should keep the domain in mind when deciding the priority of bugs and defects, test cases and requirements. They should also be aware of the various domain complexities and challenges.

3.    Desire to Learn
Testers are expected to keep themselves up to date with new technologies, approaches, tools and techniques and apply them during testing. They should always remember that new tools may offer them new and exciting features that can enhance their testing capabilities.

4.    Differentiate the Defects
Testers should have the ability to identify and differentiate defects that need immediate attention from those that are severe but can wait. The test plan should include levels for the priorities and severities of bugs.

5.    Planning
Testers must be able to plan the testing process accordingly. The test plan should include the priorities of the various test cases, the number of defects being targeted, and all the functionalities, requirements and features. A well-planned test leads to high customer satisfaction.

To Unlock the QTP Script


Is there any way in which I can unlock QTP tests that are locked when QTP or QC is closed abruptly?

To unlock QTP scripts locked by some other user:



' Get the Quality Center OTA connection object from within QTP
Set QCConnection = QCUtil.QCConnection 

' Run a SQL command that removes the lock records for the given user
Set con = QCConnection.Command 
con.CommandText = "DELETE FROM LOCKS WHERE LK_USER = 'USERID'" 
Set recset = con.Execute 

*Replace USERID in the above code with your QC or ALM user ID

CATT stands for Computer Aided Test Tool

Although CATT is meant to be a testing tool, many SAP users now use it frequently to upload vendor master data and make changes to other master records.

SAP consultants and ABAPers tend to use it for creating test data.

With CATT, you don't have to create any ABAP upload programs, which saves development time. However, you still have to spend time mapping the data into the spreadsheet format.

The transactions run without user interaction. You can check system messages and test database changes. All tests are logged.

What CATT does is record you performing the actual transaction once.

You then identify the fields that you wish to change in that recording.

Then export this data to a spreadsheet and populate it with the data required.

This is then uploaded and executed, saving you from keying in the data manually.

To use CATT, it has to be enabled in your production environment (your system administrator should be able to do this via transaction SCC4).

You will also need access to your development system to create the CATT script.