Challenges of SOA Testing

Why is SOA testing such a different beast from previous forms of browser, client/server, and mainframe testing? Many of the benefits of SOA become challenges when testing an SOA application.

1. SOA is distributed by definition

Services are based on heterogeneous technologies. No longer can we expect to test an application that was developed by a unified group, as a single project, sitting on a single application server and delivered through a standardized browser interface. The ability to string together multiple types of components into a business workflow allows for unconstrained thinking from an architect's perspective - and paranoia from a tester's.

In SOA, application logic is in the middle tier, operating within any number of technologies, residing outside the department, or even outside the company.

Think of today's service components as "headless" applications (most with no user interface) that may rely on other services, or be consumed by other services, to make up any number of business workflows within an SOA. You can rigorously test many of these components in isolation as they are developed, but what happens when they interact at deployment time? It becomes much harder to find the source of a problem when you cannot get direct insight into why two or more disparate technologies fail to form a cohesive application.

The post-SOA world offers a vast array of options for how you assemble or consume business workflows across multiple technologies, both inside and outside your core team. In an SOA, more points of connection mean an exponential increase in possible failure points.
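The exponential claim can be made concrete with a little arithmetic. A minimal sketch (the connection counts and the 1% failure rate below are hypothetical, chosen only for illustration):

```python
# If a workflow chains n independent connections, each succeeding with
# probability (1 - p), the whole workflow succeeds with probability
# (1 - p) ** n -- so end-to-end reliability decays exponentially in n.

def workflow_success_rate(connections: int, failure_rate: float) -> float:
    """Probability that every connection in the chain succeeds."""
    return (1.0 - failure_rate) ** connections

# Assume each service connection is 99% reliable (hypothetical figure):
for n in (1, 5, 10, 25):
    print(f"{n:2d} connections -> {workflow_success_rate(n, 0.01):.1%} workflow success")
```

Even with every individual link at 99%, a 25-step workflow succeeds well under 80% of the time - which is why each added connection point deserves its own test coverage.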

2. Need to ensure high service levels and excellent exception management

Quality has become a governor on the enterprise's success in delivering SOA applications. Ensuring quality in a singularly developed application was difficult enough to spawn an entire discipline: QA. With an SOA, application "stress points" can be anywhere, and they will shift as individual services are added to the workflow or modified.

There is a quality chasm between unit and acceptance testing. Finding the root causes of problems across the middle tiers of SOA applications is difficult. Testing a front-end user interface becomes irrelevant when it provides no insight into what is actually happening on the back end. And expecting developers to find missed requirements by conducting more unit testing at the code level doesn't get the team there either - it may find some bugs in component-level code, but it won't demonstrate why a business requirement isn't being met.

Service "wrappers" - for instance, SOAP/WSDL around an existing RMI object - promise better interoperability by presenting a common set of controls, allowing legacy systems and custom components to be pulled together as steps in an SOA workflow. However, these wrappers may not map every aspect of the original component, making them opaque to testing. If we automate only unit testing ("white box" testing) and acceptance/system testing ("black box" testing), we miss the area where most potential errors occur: the unpredictable interaction space between components.
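One way to probe that interaction space is to test a wrapper at the wire level rather than through its generated client stubs. The sketch below is hypothetical throughout: it stands up a local HTTP stub playing the role of a SOAP-style wrapper, posts raw XML, and inspects the raw response - which is how you notice a field the wrapper silently fails to map (here, an invented `errorCode` field from the legacy component):

```python
# Grey-box check of a hypothetical service wrapper: post raw XML and inspect
# the wire-level response instead of trusting a generated client binding.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubWrapper(BaseHTTPRequestHandler):
    """Stands in for a SOAP/WSDL wrapper around a legacy component."""
    def do_POST(self):
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # The wrapper maps <status>, but not the legacy <errorCode> field.
        body = b"<response><status>OK</status></response>"
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # keep test output quiet

server = HTTPServer(("127.0.0.1", 0), StubWrapper)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/legacyService"
raw = urllib.request.urlopen(urllib.request.Request(url, data=b"<request/>")).read()
server.shutdown()

assert b"<status>OK</status>" in raw   # what the wrapper exposes
assert b"errorCode" not in raw         # what it silently drops
print("wrapper response inspected:", raw.decode())
```

A real grey-box test would point at the deployed wrapper endpoint and compare its responses against the legacy component's full contract; the value is in asserting on what actually crosses the wire.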

3. Prioritizing new design vs. component reuse efforts

Companies don't implement an SOA strategy to try out the latest technology. They do it to attain new business capabilities. Complexity is driven into software by the natural process of competition, which forces new business rules and logic into business systems. According to the 2005 Aberdeen report, "It's no surprise that the top factor for implementing SOA, which 50% of survey respondents cited, is development of new capabilities."

Timeline and budget both constrain quality, placing serious limits on the scope of functionality that can be tested by conventional means. In addition, the business must prioritize functionality as the proposed scope expands, so the project may not come together in the expected order.

No SOA is a "flip the switch" single technology change

In selecting an SOA approach, there are components that are simply not worth the money and effort to bring into the SOA world - for instance, a data feed that supplies a relatively unchanging piece of information to the business workflow. Whether the answer on some lower-priority technologies is "if it ain't broke, don't fix it" or "it's just not worth changing," you will likely find yourself supporting and testing some relics in any SOA.

We know that to test SOA, you need to go far beyond merely testing a user interface or browser screen. Web services (WSDL/SOAP) are an important component for many SOAs, but if you're only testing web services, you are not likely testing the entire technology stack that makes up the application. What transactions are happening at the messaging (JMS) layer? Is the right entry being reflected in the database? In fact, many perfectly valid SOA applications house business logic entirely outside of web services - for instance a Swing UI talking to EJBs connected with messaging applications.
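Asserting below the UI means checking the messaging layer and the database directly after a workflow step runs. The sketch below uses in-memory stand-ins (a `queue.Queue` for the messaging layer, `sqlite3` for the database); the `orders` table, the `OrderPlaced` message, and the `place_order` step are all invented for illustration - a real SOA test would point at the actual JMS destinations and schema:

```python
# Back-end assertions for a hypothetical workflow step, using in-memory
# stand-ins for the messaging layer and the database.
import queue
import sqlite3

jms_stand_in = queue.Queue()                 # stand-in for a JMS queue
db = sqlite3.connect(":memory:")             # stand-in for the app database
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")

def place_order(order_id: int) -> None:
    """The middle-tier step under test: emits a message and writes a row."""
    jms_stand_in.put({"type": "OrderPlaced", "id": order_id})
    db.execute("INSERT INTO orders (id, status) VALUES (?, ?)",
               (order_id, "PLACED"))
    db.commit()

place_order(42)

# Assert at the messaging layer: did the right message go out?
msg = jms_stand_in.get_nowait()
assert msg == {"type": "OrderPlaced", "id": 42}

# Assert at the database layer: is the right entry reflected?
row = db.execute("SELECT status FROM orders WHERE id = 42").fetchone()
assert row == ("PLACED",)
print("message and database both verified")
```

The point is the shape of the test, not the stand-ins: a workflow step passes only when every middle-tier side effect - message emitted, row written - checks out, with no browser screen involved.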

Are you ready to test? SOA offers great implementation advantages, but to ensure quality, you must contend with constant change.

How can you test consistently when you are trying to hit a moving target with fragile manual tests? The only way to overcome SOA project uncertainty is through highly reusable test automation that can talk to every middle-tier layer - whether your team has built it according to your overall strategy or not.

Jason has more than 13 years of experience in executing marketing plans, re-engineering business processes and meeting customer requirements for technology and consumer companies such as HP, IBM, EDS, Delphi, TaylorMade, Sun, Realm, Adaptec, Motorola and Sprint, and is currently with iTKO. Jason writes articles on a variety of subjects including software testing for http://www.itko.com