END-TO-END SOFTWARE TESTING

End-to-end software testing is a methodology used to test whether the flow of an application performs as designed from start to finish. The purpose of carrying out end-to-end software testing is to identify system dependencies and to ensure that the right information is passed between the various components and systems involved. End-to-end software testing involves ensuring that the integrated components of an application function as expected. The entire application is tested in real-world scenarios, such as communicating with the database, the network, hardware, and other applications. For example, a simplified end-to-end test of an email application might involve: logging in to the application, accessing the inbox, opening and closing the mailbox, composing, forwarding, or replying to email, checking the sent items, and logging out of the application.
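
As a rough illustration, the flow above can be expressed as a single automated test. The following Python sketch uses a hypothetical in-memory EmailClient as a stand-in for the real application; an actual end-to-end test would drive the deployed system through its user interface or API.

class EmailClient:
    """Hypothetical stand-in for the real e-mail application."""
    def __init__(self):
        self.logged_in = False
        self.sent = []

    def login(self, user, password):
        self.logged_in = True                  # would authenticate against the server

    def open_inbox(self):
        return ["welcome mail"] if self.logged_in else []

    def compose(self, to, subject, body):
        self.sent.append((to, subject, body))  # would go through SMTP/the database

    def logout(self):
        self.logged_in = False

def test_email_end_to_end():
    client = EmailClient()
    client.login("alice", "secret")                       # log in to the application
    assert client.open_inbox()                            # access the inbox
    client.compose("bob@example.com", "Hi", "Hello Bob")  # compose an e-mail
    assert client.sent                                    # check the sent items
    client.logout()                                       # log out of the application

test_email_end_to_end()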

Response Time Testing
For an online system where users perform multiple tasks simultaneously, the response time should be 1 second or less, 90% of the time. If performance is evaluated specifically in terms of response time, its consistency is measured across several test runs.
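A minimal way to check these two criteria, assuming the response times have already been collected, is sketched below in Python; the timing values are illustrative, not real measurements.

import statistics

# Observed response times (seconds) from two illustrative test runs.
runs = [
    [0.42, 0.55, 0.61, 0.48, 0.95, 0.70, 0.52, 0.44, 0.63, 0.98],
    [0.40, 0.58, 0.66, 0.51, 0.88, 0.72, 0.49, 0.47, 0.60, 0.92],
]

for i, samples in enumerate(runs, start=1):
    p90 = statistics.quantiles(samples, n=10)[8]    # 90th percentile
    print(f"run {i}: 90th percentile = {p90:.2f}s, within 1s = {p90 <= 1.0}")

# Consistency across runs: compare the per-run mean response times.
means = [round(statistics.mean(s), 2) for s in runs]
print("mean response time per run:", means)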

Throughput Testing
Throughput software testing measures the throughput of a server in a Web-based system. It is a measure of the number of bytes serviced per unit time. The throughput of the various servers in the system architecture can be measured in kilobits/second, database queries/minute, transactions/hour, or any other time-bound characteristic.
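The calculation itself is straightforward, as the following Python sketch shows; serve_request is a hypothetical stand-in for the server under test.

import time

# Sketch: throughput as bytes serviced per unit time.
def serve_request() -> int:
    return 32 * 1024                      # pretend each response is 32 KB

start = time.monotonic()
total_bytes = sum(serve_request() for _ in range(500))
elapsed = (time.monotonic() - start) or 1e-9

print(f"throughput: {total_bytes / 1024 / elapsed:.1f} KB/s")
print(f"requests:   {500 / elapsed:.1f} requests/s")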
Capacity Testing
Software capacity testing (see Miller, 2005) measures the overall capacity of the system and determines at what point response time and throughput become unacceptable. Capacity testing is conducted at normal load to determine the extra capacity available, whereas stress capacity is determined by overloading the system until it fails; this overload, also called a stress load, establishes the maximum capacity of the system.
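One simple way to locate that point is to keep raising the load until the measured response time crosses an agreed threshold. The Python sketch below models this; measure_response_time is a hypothetical model of the system under test, not a real measurement.

import random

# Sketch of capacity testing: raise the load until response time becomes unacceptable.
def measure_response_time(concurrent_users: int) -> float:
    base = 0.3                                           # seconds under light load
    return base * (1 + concurrent_users / 400) + random.uniform(0, 0.05)

THRESHOLD = 1.0                                          # acceptable response time (s)
load = 50
while measure_response_time(load) <= THRESHOLD:
    load += 50                                           # step the load up

print(f"capacity reached near {load} concurrent users")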
Myths on Software Performance Testing
Perception of performance testing (PT) differs from user to user, designer to designer, and system to system. However, due to a lack of knowledge, people understand PT in many different ways, leading to confusion among the user as well as the developer community. Some of the myths about PT are:
• Client server performance problems can usually be fixed by simply plugging in a more powerful processor.
• If features work correctly, users do not mind a somewhat slow response.
• No elaborate plans are required for testing; it is intuitively obvious how to measure the system’s performance.
• Just a few hours are needed to check performance before deployment.
• PT does not require expensive tools.
• Anyone can measure and analyze the performance; it does not require any specialized skills.

However, the real picture of PT is entirely different. It is a complex and time-consuming task. Testing only a few performance parameters does not yield proper results. Complex parameters and different approaches are required to test the system properly.

Performance Testing: “LESS” Approach
Performance of Web applications must be viewed from different objectives: fast response against a query, optimal utilization of resources, all-time availability, future scalability, stability, and reliability. However, most of the time, only one or a few of these objectives are addressed while conducting performance testing. Whatever the objective, the ingredients of the testing system are the same: the number of concurrent users, the business pattern, the hardware and software resources, the test duration, and the volume of data. The results from such performance tests are response time, throughput, and resource utilization. Based on these results, indirect measures such as the reliability, capacity, and scalability of the system are derived. These results help in drawing a conclusion or making a logical judgment on the basis of circumstantial evidence and prior conclusions rather than on direct observation alone. Such reasoning is required to justify whether the system is stable or unstable, available or unavailable, or reliable or unreliable. This can be achieved by adopting the LESS (Load, Endurance, Stress, and Spike) testing approach.
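
To make the shared ingredients and results concrete, the following Python sketch captures them as simple records; the field names and values are illustrative, not a standard schema.

from dataclasses import dataclass

# The same "ingredients" feed every LESS test; only the load profile and duration change.
@dataclass
class TestIngredients:
    concurrent_users: int        # load on the system
    business_pattern: str        # e.g. "browse-heavy", "checkout-heavy"
    duration_hours: float        # how long the test runs
    data_volume_gb: float        # volume of data in the system

@dataclass
class TestResults:
    response_time_p90: float     # seconds
    throughput_kbps: float       # kilobits per second
    cpu_utilization: float       # fraction of capacity used

load_test      = TestIngredients(1000, "browse-heavy", 12, 50.0)
endurance_test = TestIngredients(1000, "browse-heavy", 48, 50.0)   # same load, longer run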

Load Testing

Load testing (see Menascé, 2003) is a term that finds wide usage in industry circles for performance testing. Here, load means the number of concurrent users or the traffic in the system. Load testing is used to determine whether the system is capable of handling various anticipated activities performed concurrently by different users. This is done by using a test tool to map the different types of activities and then, through simulation, recreating real-life conditions.

To illustrate this, consider a Web-based application for online shopping which is to be load tested for a duration of 12 hours. The anticipated user base for the application is 1,000 concurrent users during peak hours. A typical transaction would be that of a user who connects to the site, looks around for something to buy, completes the purchase (or does not purchase anything), and then disconnects from the site.

Load testing for the application needs to be carried out for various loads of such transactions. This can be done in steps of 50, 100, 250, and 500 concurrent users, and so on, until the anticipated limit of 1,000 concurrent users is reached. Consider, for example, a system tested with constant loads of 10 and 100 users for a period of 12 hours: throughout those 12 hours there is a constant 10 or 100 active transactions. For load testing, the inputs to the system have to be maintained so that there is a constant number of active users. During the execution of the load test, the goal is to check whether the system performs well under the specified load. To achieve this, system performance should be captured at periodic intervals throughout the load test.
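
The stepping scheme can be sketched as follows in Python; shopping_transaction is a hypothetical placeholder for the connect-browse-purchase-disconnect journey, and a real load test would use a dedicated load-generation tool rather than plain threads.

import time
from concurrent.futures import ThreadPoolExecutor

# Run the same transaction with 50, 100, 250, 500 and 1,000 concurrent users
# and record the mean response time at each step.
def shopping_transaction() -> float:
    start = time.monotonic()
    time.sleep(0.01)                       # placeholder for the real user journey
    return time.monotonic() - start

def run_step(concurrent_users: int, transactions: int = 200) -> float:
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(lambda _: shopping_transaction(), range(transactions)))
    return sum(times) / len(times)

for users in (50, 100, 250, 500, 1000):
    print(f"{users:>5} users: mean response {run_step(users):.3f}s")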

Performance parameters like response time, throughput, memory usage, and so forth should be measured and recorded. This will give a clear picture of the health of the system.
The system may be capable of accommodating more than 1,000 concurrent users, but verifying that is not within the scope of load testing. Load testing establishes the level of confidence with which the customer can use the system efficiently under normal conditions.

Endurance Testing
Endurance testing deals with the reliability of the system. This type of testing is conducted for different durations to find out the health of the system in terms of its consistent performance. Endurance testing is conducted either at a normal load or at a stress load; in either case, the duration of the test is its focus. Tests are executed for hours or sometimes even days. A system may be able to handle a surge in the number of transactions, but if the surge continues for some hours, the system may break down. Endurance testing can reveal system defects such as slow memory leaks or the accrual of uncommitted database transactions in a rollback buffer, both of which impact system resources.

When an online application is subjected to endurance testing, the system is tested for a longer duration than the usual testing duration. Unlike other testing, where execution lasts for a shorter time, endurance testing is conducted for a long duration, sometimes more than 36 hours. Consider, for example, an endurance test on a system with a normal load of 10 active users and a peak load of 1,000 active users, running for a duration of 48 hours. Such a sustained load can make the system unreliable and can lead to problems such as memory leaks. Stressing the system for an extended period reveals its tolerance level. Again, system performance should be captured at periodic intervals of the test, and performance parameters like response time, throughput, and memory usage should be measured and recorded.
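
The shape of such a test, greatly shortened in time, might look like the following Python sketch; the deliberately leaky transaction and the sampling interval are illustrative.

import time
import tracemalloc

# Drive a constant load for a long duration and sample memory periodically
# to spot slow leaks. A real endurance test would run for 36-48 hours.
def transaction(cache: list) -> None:
    cache.append("x" * 1024)              # deliberately leaks ~1 KB per call

tracemalloc.start()
leaky_cache: list = []
end = time.monotonic() + 5                # stands in for a 48-hour window

while time.monotonic() < end:
    transaction(leaky_cache)
    if len(leaky_cache) % 500 == 0:       # periodic sampling point
        current, peak = tracemalloc.get_traced_memory()
        print(f"live: {current / 1024:.0f} KB, peak: {peak / 1024:.0f} KB")
    time.sleep(0.001)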

Another important difference among these types of testing lies in the inputs to the system during the testing period. Load and stress testing differ in the number of concurrent users: load testing is performed with a constant number of users, whereas stress testing is carried out with a variable number of concurrent users. Stress testing provides two scenarios. In the first, the variable factor is the number of users, varied within the bandwidth of the system to check its capacity while the other inputs are kept constant. In the second, hardware/software resources are varied to stress the system, again keeping the other inputs constant.
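
A stress profile can therefore be expressed simply as a varying series of user counts, as in the Python sketch below; the numbers are illustrative.

# Unlike a load test's constant user count, a stress profile varies the number
# of concurrent users over the run, pushing towards (and past) the expected limit.
def stress_profile(expected_max: int = 1000, steps: int = 8) -> list[int]:
    return [int(expected_max * (i + 1) / steps * 1.5) for i in range(steps)]

print(stress_profile())   # ends well above the 1,000-user expectation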

Spike testing deals with a surge in load over a short duration, with uncertainty in the input business pattern. The uncertainty in the business pattern depends on factors external to the system, such as a sudden change in business, a political change affecting the business, or other unforeseen circumstances.
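
A spike profile differs only in shape: a steady background load with a short, sudden surge, as in the following illustrative Python sketch.

# A normal background load with a brief, sudden surge injected part-way through.
def spike_profile(normal: int = 100, spike: int = 1000,
                  minutes: int = 60, spike_at: int = 30, spike_len: int = 5) -> list[int]:
    return [spike if spike_at <= m < spike_at + spike_len else normal
            for m in range(minutes)]

profile = spike_profile()
print(profile[25:40])     # the surge appears only between minutes 30 and 34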

In endurance testing, the load may also be increased beyond expectations for a long duration of time, and the system under test (SUT) is observed for its reliability. Here there is a need to choose the specific business pattern that may impact performance during the endurance test.
By adopting the LESS approach, it is easy to understand the performance behavior of the system from different points of view. The inferences drawn from such tests help to verify the availability, stability, and reliability of the system; taken together, they indicate that LESS ensures complete performance testing.

Web-based applications have evolved from client-server applications; the client-server concept is maintained through a Web client and a Web server. These are software applications that interact with users or other systems using the Hypertext Transfer Protocol (HTTP). For a user, the Web client is a browser such as Internet Explorer or Netscape Navigator; for another software application, it is an HTTP user agent which acts as an automated browser.
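
Acting as such an automated user agent requires little more than issuing HTTP requests directly, as in the following Python sketch; the URL is a placeholder, and any reachable Web server would do.

from urllib.request import Request, urlopen

# An HTTP "user agent" acting as an automated browser.
req = Request("http://example.com/", headers={"User-Agent": "perf-test-agent/0.1"})
with urlopen(req, timeout=10) as response:
    body = response.read()
    print(response.status, len(body), "bytes")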

The use of Web applications can range from simple to highly complex tasks. With the advent of e-commerce, business enterprises are simplifying processes and speeding up transactions using the Web as an effective medium. Web-based applications are extensively used for both B2B and B2C e-commerce. The e-commerce technology used for Web applications is developing rapidly; a more open-system culture is followed, and systems are vulnerable to performance problems. Simple applications are built with a Common Gateway Interface (CGI) program typically running on the Web server itself and often connecting to a database on the same server.
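
A CGI program of that kind can be very small, as the following Python sketch suggests; the query parameter name is illustrative.

#!/usr/bin/env python3
# A simple CGI program: it runs on the Web server, reads a query parameter,
# and emits an HTML page.
import os
from urllib.parse import parse_qs

params = parse_qs(os.environ.get("QUERY_STRING", ""))
item = params.get("item", ["(none)"])[0]

print("Content-Type: text/html")
print()                                   # blank line ends the CGI headers
print(f"<html><body><p>You asked for: {item}</p></body></html>")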

Modern Web applications are mounted on complex tiers comprising the Web server, application server, database server, firewall, routers, and switches. Each tier is exposed and vulnerable to performance problems.
PT is a complex and time-consuming activity. The testing process should start from the requirements collection phase itself. PT requires simulating several hundred concurrent users, which calls for automated tools that are expensive. The lack of a proper environment, in terms of bandwidth, system configuration, and concurrent users, hinders the production of adequate performance results. The production environment cannot be simulated, as doing so requires investment, and a subdued environment may not produce relevant results. The skill set needed to plan and conduct PT is not adequately available. PT is a late activity, and the duration available for software testing is often inadequate. Nevertheless, testing the system for optimal performance is both a challenge and an opportunity.

Summary
The World Wide Web is an irresistible entity when it comes to business and information access in the modern world. All business systems utilize the WWW to perform effectively in their own domains of business. Worldwide surveys show that a business system is unacceptable if it is not performance conscious. Ensuring high performance is the main criterion for Web users to repeatedly use the same site for their business.
The performance of any system is attributed to many parameters, such as the response time to a user query, high throughput from the system, and availability at all times. To ensure this, each system or piece of software must undergo performance testing before its deployment. PT can be conducted in many ways, such as load, endurance, stress, and spike testing.
Technical peculiarities have a severe impact on the overall performance of the system.
Choice of technology platform for the product plays an important role in its performance.
To illustrate, developing the product on a new and untried technology in the belief that it will boost performance may lead to complications in other quality attributes such as security, consistency, compatibility, and integrity. As an example, client-side scripting is a new technology that can be used to reduce the number of interactions with the server, but browser-level scripting does not ensure total security. This may result in the possibility of exposing sensitive data to malicious access.
Likewise, many challenges have to be faced by the development team on the technology front. These are described in the following sections.
Security Threat
Security threats to Web applications can make users worry about online transactions.
Their privileged information may be leaked to the outside world. They may lose their credit card information to others and lose money.
While developing Web applications, the major focus is on functionality rather than on security. If Web users notice a security lapse in the system, the product loses its market in spite of other interesting features, because users no longer trust the Web site. Thus, a significant part of the site's performance is directly related to the security of its applications. Threats such as SQL injection, cross-site scripting, and sniffing can expose valuable, sensitive user information. Security and performance are usually at odds with each other. Current implementations of security on the Web have been adopted at the extreme end of the spectrum, where strong cryptographic protocols (see Paulson, 2004) are employed at the expense of performance. The SSL protocol is not only computationally intensive, but it also makes Web caching impossible, thus missing out on potential performance gains.
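
To make the SQL injection threat mentioned above concrete, the following Python sketch, which uses an in-memory SQLite database with an illustrative schema, contrasts a query built from raw user input with a parameterized one.

import sqlite3

# Building queries from raw user input lets attackers change the query itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-...')")

user_input = "x' OR '1'='1"                      # a classic injection payload

# Vulnerable: the payload becomes part of the SQL and returns every row.
unsafe = conn.execute(f"SELECT card FROM users WHERE name = '{user_input}'").fetchall()

# Parameterized: the payload is treated as plain data and matches nothing.
safe = conn.execute("SELECT card FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(unsafe), len(safe))                    # 1 vs 0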
Developer’s Negligence on Performance
It is a common practice among many developers not to optimize code at the development stage. To illustrate, a developer may reserve more buffer space for a variable in the program than is actually required. This additional reserved space may later impact the performance of the system by denying space to others. Optimized code exhibits the same functional behavior as unoptimized code, but it is preferred for the better performance of the system. Unoptimized code may utilize scarce system resources such as memory and processor time unnecessarily. Such coding practice may lead to severe performance bottlenecks such as memory leaks, array bound errors, inefficient buffering, too many processing cycles, a larger number of HTTP transactions, too many file transfers between memory and disk, and so on. These problems are difficult to trace once the code is packaged for deployment.
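
The over-allocation habit described above can be as simple as the following Python sketch, in which the sizes are purely illustrative.

# Reserving far more buffer space than a task needs denies memory to others.
needed = 4 * 1024                               # the record being processed is ~4 KB

careless_buffer = bytearray(64 * 1024 * 1024)   # 64 MB reserved "just in case"
right_sized     = bytearray(needed)             # allocate only what is required

print(len(careless_buffer) // len(right_sized), "times more memory reserved")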

Complex User Interface
Web users appreciate an elegant, simple user interface. A complex interface may prompt a user to abandon the site due to difficulties in interacting with it. Greater complexity also impacts the performance of the site. Only the necessary user identification information should be made mandatory, and all other fields should be optional. Users do not like lengthy, complex interfaces that are irritating to use.
Web Site Contents
In the early days of the Internet's evolution, Web contents were static and online traffic was comparatively light. Now, Web contents are dynamic, warranting powerful Web servers and robust techniques to handle data transfer. Database-driven Web sites such as e-commerce applications typically display dynamic contents: the visual appearance of the Web page depends on the code executed, which in turn is based on the data stored in the tables. These dynamic Web pages require more processing power, which puts stress on the server. The environment of the client system also has a significant bearing on the performance of the product. Though the contents are stored at the server end, users always interact closely with client systems to get the required information. Clients must be properly configured and connected; they must have the required hardware resources and the necessary operating system. Since a designer of Web applications has no control over the client system and its resources, Web applications must work with heterogeneous clients. Thus the client's environment may consist of different browsers, divergent platforms, different security settings, multisite caching, different communication protocols, and different network topologies, all of which have a significant influence on the performance of the product.
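
The server-side cost of the database-driven pages described above comes from the fact that every page view involves a query plus rendering work, as in the following Python sketch with an illustrative schema.

import sqlite3

# A dynamic page: the HTML produced depends on what is currently in the tables,
# so every request costs a query and rendering work on the server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("keyboard", 25.0), ("monitor", 180.0)])

rows = conn.execute("SELECT name, price FROM products ORDER BY name").fetchall()
page = "<ul>" + "".join(f"<li>{n}: ${p:.2f}</li>" for n, p in rows) + "</ul>"
print(page)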