Are you agile

The more agile you try to be, the less agile you become

Being lean means creating more value with less work: avoid spending resources on tasks that do not create value for the customer.

The lean principles stem from the Toyota corporation; translated to software, they include:

  1. do not produce more than is required, manage the requirements
  2. focus on communication, and flow of information
  3. remove stuff that is not adding value
  4. automate test, build and deployment tasks
  5. reduce complexity, limit tools and platforms to strictly needed
  6. do not produce defects (it’s that easy)
  7. limit dependencies

to lean or not to lean

Everybody knows that if they don’t learn, accept and embrace agile methodologies, they are doomed to very soon become everybody’s laughing stock, not to mention their careers are going to go down the drain. The problem is that the PHBs that decided that they absolutely have to “go agile”, at the same time decided that they don’t really want or need to change their habits – instead they will just re-dress and rename their usual processes and culture.

  • buying an expensive, yet “agile” tool set, so as to justify inflated budgets
  • getting a costly certification or two

Instead of concentrating on spreading the word about the principles – which is hard, because traditionally-bred teams and managers have trouble understanding, not to mention converting – the agile community has concentrated on proliferating certifications. Certified Scrum Master? Scrum Practitioner? Scrum Trainer? Grand Klan Master? Chief Lizard Wrangler? Cappo di Tutti Cappi?

When is the agile maturity model coming along?

The more agile you are, the less agile you try to be

What these people are lacking is a toolset for making their existing processes work with this new Agile thing. And as Agile is not only about the continual improvement of a product but of the team, it is also about the continual improvement of the process itself. Agile is not a static object; it should be continually changing and improving like a self-mutating sci-fi virus poised to take over the world that can only be stopped by Nicolas Cage (straight to DVD).

The more agile you try to be, the less agile you are

not to mention scrum.


XML debatching in Oracle ESB


In a SOA system both batching and debatching of messages are useful concepts, depending on the overall architecture. Batching of messages is one way of achieving asynchronous communication, but asynchronous communication can easily be achieved without batching.
Batching can be used for high-volume data processing, to utilize resources, and to support typical bulk operations.

XML debatching

XML files containing multiple messages can be debatched to process the messages separately. Debatching of XML files is supported by the Oracle SOA Suite ESB FTP adapter.
To activate the debatching behaviour of the adapter, you have to add the "PublishSize" attribute to the jca:operation tag in the adapter's WSDL.
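For illustration, the jca:operation element in the adapter WSDL could then look along these lines. The surrounding attributes are only indicative of a typical FTP adapter configuration; the point is the PublishSize attribute, here publishing 10 messages per batch:

```xml
<jca:operation
    ActivationSpec="oracle.tip.adapter.ftp.inbound.FTPActivationSpec"
    PhysicalDirectory="/incoming"
    DeleteFile="true"
    PublishSize="10"/>
```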

PublishSize as part of ftp adapter wsdl

Configure debatching in the JDeveloper FTP adapter wizard

But trying this out we got the following error:

Payload Record Element is not DOM source.

The feature of debatching XML files was added in a SOA Suite patchset (see the 10133 technotes).
XML debatching is not supported out of the box; some additional configuration needs to be done to add StAX support.


The Streaming API for XML (StAX) is an API for reading and writing XML documents, specified in JSR-173. The stream-based parser walks through the XML file instead of reading the whole DOM into memory. It was introduced in Java 6 and is often a better choice than SAX or DOM.
For examples, go to Lars Vogel’s blog.
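As a quick illustration of the streaming model (the batch and message element names are made up), here is a minimal StAX reader that counts the messages in a batch without ever building a DOM:

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StaxDemo {
    public static void main(String[] args) throws Exception {
        // A batch document containing multiple <message> elements
        String xml = "<batch><message>a</message><message>b</message></batch>";
        XMLStreamReader reader =
            XMLInputFactory.newInstance().createXMLStreamReader(new StringReader(xml));
        int messages = 0;
        // Pull events one at a time; only the current event is held in memory
        while (reader.hasNext()) {
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "message".equals(reader.getLocalName())) {
                messages++;
            }
        }
        System.out.println("messages: " + messages);
    }
}
```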

Setting up XML debatching requires you to download and configure the XML pull-parsing library (StAX), but the technotes refer to invalid URLs.

All the steps are listed in the "XML Debatching" section of the 10133 technotes.
Remember: you must also add the jars in server.xml and set "PublishSize" in the adapter WSDL.

Oracle Metalink ("Where to Find jsr173_1.0_api.jar and jsr173_1.0_ri.jar Needed for XML Debatching? [ID 736703.1]") states that the jars can be found in the jsr173.jar file, but this is not working either.

In general, if you are looking for a jar file and cannot find it in the central Maven repository, have a look at the jar catalog.
Here they are:

There are several JSR-173 implementations around, and Sun has included one in Java 6. The BEA reference implementation (1.0) is really outdated, but to make sure you do not break Oracle support policies, please follow the Oracle procedures.


Asynchronous messaging and HermesJMS

Hermes JMS console

Hermes displays the activity on a JMS queue or topic.

In a SOA solution asynchronous messaging can be used to decouple service provider from consumer, allowing service provider and consumer to process messages independently. The intermediate message buffer, typically JMS, will enable a more robust and reliable architecture.
During development, testing and production alike, it is extremely useful to be able to browse or search queues and topics, copy messages around, and delete them.
This can be done with HermesJMS. Please note that HermesJMS is bundled with SoapUI 3.5 and later versions, so instead of installing a standalone HermesJMS, you get it as part of the SoapUI installation. The application is the same.
HermesJMS is a very useful tool supporting a wide range of JMS providers. The documentation is very good, with tutorials on how to configure it for the different providers. There is no need to duplicate that documentation in this blog, but just to show you how excellent this tool is, I have made some screenshots demonstrating the setup and management capabilities using Oracle Enterprise Messaging Service (OEMS) provided by OC4J in a SOA Suite installation. There are some differences from an OC4J standalone installation, as detailed here.
I repeat, the HermesJMS documentation is excellent, and there is also an older blog entry with useful input. But some details needed to be modified, so I add these screenshots to make the point: if you are even close to JMS, you need this tool.

First it is a matter of adding the OEMS client libraries to the classpath:

showing how to configure the provider classpath for OC4J Oracle Enterprise Messaging System (OEMS)

Adding libraries to support OC4J JMS provider (OEMS). Add the optic.jar library to the list of libraries so that Hermes can understand the managed OracleAS process management environment (OPMN).

Now create a JNDI InitialContext:

Configure JNDI InitialContext, giving it a name.

Configuration to access Oracle SOA suite JNDI InitialContext

Creating a session towards the recently configured provider

Using session to discover destinations

Listing the 20 different destinations existing in this SOA suite instance

Now double-clicking on a destination (e.g. jms/demoQueue) will list the contents.

Java app, JMS send message

Here is a sample Java application that sends a message to a queue. Include these entries in the classpath:


package no.gwr.util.jms;

import java.util.Hashtable;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class Send {

  public static void main(String[] args) {
    QueueConnection queueCon = null;
    try {
      Hashtable env = new Hashtable();
      env.put(Context.INITIAL_CONTEXT_FACTORY, "com.evermind.server.rmi.RMIInitialContextFactory");
      env.put(Context.PROVIDER_URL, "opmn:ormi://localhost:6003:oc4j_soa/default");
      env.put(Context.SECURITY_PRINCIPAL, "oc4jadmin");
      env.put(Context.SECURITY_CREDENTIALS, "***");
      Context ctx = new InitialContext(env);
      QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("jms/QueueConnectionFactory");
      queueCon = qcf.createQueueConnection();
      QueueSession queueSession = queueCon.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
      Queue queue = (Queue) ctx.lookup("jms/demoQueue");
      QueueSender sender = queueSession.createSender(queue);
      Message msg = queueSession.createTextMessage("a test message, go to");
      sender.send(msg);
      System.out.println("message sent");
    } catch (Exception e) {
      e.printStackTrace();
    } finally {
      if (queueCon != null) {
        try {
          queueCon.close();
        } catch (Exception e) {
          // ignore failure on close
        }
      }
    }
  }
}

Intermediate firewall killing tcp connections, regarding WSM

As a follow-up to the post "firewall and TCP connection":

For Oracle WSM there exists a workaround from Oracle to make the application handle idle TCP connections before the firewall kills them. The real scenario is an Oracle WSM gateway in the DMZ.

1. Each policy step instance creates one or two long-lived connections to the Active Directory or LDAP directory. In a production environment, this may cause connection overloading during user authentication against an LDAP or Active Directory server.
The default value of the connection lifetime parameter, 0 milliseconds, ensures that the connection never times out.
The problem is that idle TCP connections will be killed by the intermediate firewall.

To work around this behavior, tune the connection lifetime parameter as follows:

a) Open the file ORACLE_HOME/opmn/conf/opmn.xml

b) Find the process-type id whose value is the name of the instance in which Oracle Web Services Manager is installed. This may be "home", or it could be another instance name. For example:

<ias-component id="default_group">
<process-type id="home" module-id="OC4J" status="enabled">

c) Find the data id="java-options" in the category id="start-parameters" section of the file:

<category id="start-parameters">
<data id="java-options" value="-server -XX:MaxPermSize=128M ..."/>

d) Add the connection lifetime parameter under java-options.

e) Restart the server for the configuration changes to take effect.

The timeout property sets the time to live; it is provided for client context invalidation. It can be set as a system property against the OC4J JVM, and its value is given in milliseconds.

Letting the application handle TCP connections before the firewall interferes is the most isolated option, and should be preferred.
But as mentioned in "firewall and TCP connection", this is not always a feasible option.

Monitor your web services using Cruisecontrol

Active monitoring means verifying your services with respect to performance, availability and scalability. It focuses on the non-functional requirements that are too often forgotten. Governance starts at design time, and the development phase should not forget the deployment and production phases.

Combining SoapUI with some kind of scheduler gives a lot of options for active monitoring. It can be used both for verifying quality and non-functional requirements before going live, and for monitoring response times and availability in the production phase.

Luntbuild was the CI system suggested by SoapUI (surveillance testing web services using SoapUI), but due to a Luntbuild issue, actually more related to the Quartz job scheduler library, a different scheduler was chosen: the well-known CruiseControl.

First of all, CruiseControl is very easy to install and configure, in contrast to what many blog postings claim.

Use the "CC-Config" link in the lower right corner of the dashboard to start the CruiseControl configuration tool.
It is an intuitive interface for getting an initial grip on setting up a new project.
But after playing around in this GUI, you end up doing the configuration in config.xml. For the two projects below, the complete config.xml is 37 lines long, so nothing to worry about.

Actually it was as easy as Luntbuild to configure initially, and even easier to install. Given the rich feature set, the choice should be simple.
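For illustration, a config.xml along these lines could schedule a SoapUI run. The project name, paths and test suite name are hypothetical, and you should check the CruiseControl plugin reference for the exact builder syntax:

```xml
<cruisecontrol>
  <project name="ws-monitoring">
    <!-- no source repository: trigger a build on every schedule tick -->
    <modificationset>
      <alwaysbuild/>
    </modificationset>
    <!-- run the SoapUI test suite every 5 minutes (300 s) -->
    <schedule interval="300">
      <exec command="/opt/soapui/bin/testrunner.sh"
            args="-s MonitoringSuite /projects/ws-monitoring-soapui-project.xml"/>
    </schedule>
  </project>
</cruisecontrol>
```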


So after configuring a couple of projects, the builds start running, and reports are received.
Now we can say something about the non-functional requirements: stability, scalability, availability, response times, and variation over the day and week.
As SOA is very much about distribution of resources, top-down active monitoring is a very valuable approach to knowing the quality of your services as experienced by the service consumers.


Monitor your web services using luntbuild and soapui

Halt: Since Luntbuild stops building after a couple of builds, you should consider the post "Monitor your web services using CruiseControl" instead. Meanwhile, monitor this Luntbuild bug if you prefer Luntbuild as the scheduler.

In a SOA system some service providers could be implemented as web services. Proactive monitoring of these web services gives a top-down approach to monitoring the overall system as seen from a service consumer, with early warnings on response times, downtime, etc.
This is useful in a production environment as well as in test and development. Be aware that proactive monitoring causes additional load, but the information you get from it is extremely valuable: you can act on incidents instead of reacting to errors.

soapui and luntbuild

But a lightweight and cheap initial solution is to combine SoapUI tests and Luntbuild. If you don't use SoapUI and are working on web services, well, you should start using it. SoapUI can be used for functional testing, load testing, mocking, monitoring, etc. Combine the capabilities of SoapUI with a continuous integration tool (scheduler and notification) for a complete web service monitoring solution.

There is no need to give full configuration details here, only to say that of the continuous integration tools, Luntbuild has no initial configuration cost. You could of course also use Hudson, CruiseControl, AntHill or Continuum, but for this specific scheduled execution of web service requests, Luntbuild is the fastest to configure. There is detailed documentation on the SoapUI site on how to combine SoapUI projects with Luntbuild for surveillance testing. See the SoapUI documentation on automated web service testing.

After installing Luntbuild, follow the steps in the Microsoft article "Create your own user-defined services (Windows NT/2000/XP/2003)" to run Luntbuild as a system service.

Example of proactive web service monitoring

Then it is a matter of configuring Luntbuild, and this is really the easy part. You will have proactive monitoring up and running in 30 minutes!


And you get notifications when a web service fails, with easy access to the full request-response cycle, including errors.


Now that the infrastructure for proactive monitoring is set up, it can of course also be used for scheduled load testing, automated web service testing on code repository commits, etc.

But actually, to me, the most useful part is the proactive monitoring of the active production system.


Martin Fowler on continuous integration
Test-driven development in an SOA environment
Passive vs. Active Monitoring
Create your own user-defined services Windows NT/2000/XP/2003
SoapUI documentation on automated web service testing (surveillance testing)

firewall and TCP connection

firewall killing idle tcp connections

An application deployed in the DMZ is configured to authenticate towards an LDAP server located in the internal zone.
The initial authentication works fine: a new TCP connection is established through the firewall, and the authentication hits the LDAP server. The TCP connection is kept open by the operating system after being established. The operating system will try to keep this TCP connection alive for a given time, typically configured to 7200 seconds (2 h). The connection is left open for reuse by subsequent requests, avoiding the TCP handshake overhead.

But a new authentication request 31 minutes after the first one fails.
The problem is that the intermediate firewall kills the idle TCP connection after a configured timeout, in this case 30 minutes.
So what are the options to fix this?

  1. generate traffic to avoid idle TCP connections; a type of proactive monitoring
  2. set the firewall timeout higher than the OS-specific keepalive
  3. set the OS-specific keepalive to something less than the firewall timeout
  4. handle killed TCP connections at the application layer

Option 1 is fine because it adds the possibility of proactive monitoring. The downside is that the infrastructure becomes dependent on some arbitrary traffic generator or monitoring tool.

If the application behaviour can be controlled (through code change or configuration), option 4 is relevant. But very often a wide range of applications (open-source, commercial and proprietary) will experience the same problem of firewalls killing TCP connections after an idle timeout, so controlling every application can become difficult.

Increasing the firewall timeout as mentioned in option 2 could be a way to go. But firewalls are there for security reasons, so consider issues like denial-of-service (DoS) attacks and TCP session hijacking before increasing the firewall timeout.

Option 3 is detailed below.

hardening the TCP/IP stack

Microsoft recommends a KeepAliveTime of 300 seconds (5 min) as part of hardening the TCP/IP stack against denial of service attacks. See How to harden the TCP/IP stack against denial of service attacks in Windows Server 2003 , Microsoft Technet and Microsoft Windows Security Resource Kit.

The same kind of information is found in Securing and Optimizing Linux: RedHat Edition -A Hands on Guide. The keepAlive is changed from 7200 to 1800 seconds (30 min).

Operating system recommendations for keepalive values thus move inside the firewall timeout window. The downside of reducing the keepalive is more keepalive packets on the network, possibly increasing congestion.
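If you control the application code, note that Java exposes the OS keepalive mechanism per socket. A minimal sketch, using a loopback socket pair just to have a real, connected TCP socket:

```java
import java.net.ServerSocket;
import java.net.Socket;

public class KeepAliveDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {
            // SO_KEEPALIVE asks the OS to probe this connection while idle;
            // the probe interval itself is an OS-level setting
            // (e.g. tcp_keepalive_time on Linux, KeepAliveTime on Windows)
            client.setKeepAlive(true);
            System.out.println("SO_KEEPALIVE enabled: " + client.getKeepAlive());
        }
    }
}
```

This only enables the probing; the keepalive interval still has to be tuned at the operating system level, as described above.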

typical errors

ldap.DirContextHolder – Unsolicited exception thrown for directory context com.sun.jndi.ldap.LdapCtx@
at com.sun.jndi.ldap.LdapClient.processConnectionClosure

ldap.LDAPAuthenticatorStep – Failed to connect to ldap server.
at com.sun.jndi.ldap.Connection.readReply

ldap.DirContextHolder – Created directory context com.sun.jndi.ldap.LdapCtx

Covering Linux, OS X, Solaris and Windows on TCP keepalive configuration
Firewall Session Problems
Preventing disconnection due to network inactivity
How can I change the TCP/IP tuning parameters?
TCP keepalive overview
Using TCP keepalive under Linux

WSM redesign 10g->11g (WSM discontinued)

Fusion Middleware 11g
Oracle WSM has been completely redesigned and has an entirely different architecture in 11g. It is packaged in the Oracle Fusion Middleware (FMW) 11g release, more precisely fully integrated with WebLogic Server 11g.
FMW 11g was released July 1st, 2009, and marks the first full suite integration of Oracle and BEA products.

WSM changes are significant. See Examining the Rearchitecture of Oracle WSM in Oracle Fusion Middleware

This marks a major rearchitecture from 10g, where Oracle WSM was a standalone product but also included in the SOA Suite.
The only component surviving the redesign is the WSM policy manager. The policies themselves are completely restructured; see Comparing Oracle WSM 10g and Oracle WSM 11g Policies.
Note: Oracle Fusion Middleware 11g Release 1 (11.1.1) does not include a Gateway component.
Meanwhile, Oracle sales can co-sell Gateway from one of our ecosystem partners – Vordel, Sonoa, Intel, Layer7

“When will WSM Gateway in 11g be available?
WSM 11g Gateway is not part of the planned releases in the next 12 months. Timeframe beyond that can’t be shared with the customer.”

Oracle WSM 10g supported policy enforcement for third-party application servers, such as IBM WebSphere and Red Hat JBoss. Oracle Fusion Middleware 11g Release 1 (11.1.1) supports only Oracle WebLogic Server. There is no separate UI in 11gR1; in 11g, everything in WSM is administered through Enterprise Manager.

So for securing web services you can either accept this Oracle suite bundling, with the price tag and reduced flexibility, or look for alternatives. In a suite you can easily end up paying for unused functionality and risking tight coupling to a specific vendor. On the other hand, the product data sheet diagram looks cute.

discussion on wsm future
YouTube video on 11g OWSM functionality

WSDL contract contradictions, and SOAP header

The WSDL is the contract between service consumer and service provider. A WSDL describes the web service and defines:

  element    description
  type       the data types used by the web service
  message    the messages used by the web service
  portType   the operations performed by the web service
  binding    the communication protocols used by the web service

The semantics of the SOAP body are thus defined within the WSDL, which describes the input/output at the executing endpoint of the service. But as web services have moved from being a platform-independent RPC mechanism to an integration enabler across business boundaries, the security aspect has become of major importance.
There are several standardization initiatives on the meta-information of a web service, especially related to security: WS-Security, SAML identity propagation, etc. all end up as metadata in the SOAP header. But the WSDL does not cover the SOAP header content at all.
As the transport mechanism is defined in the WSDL (e.g. SOAP over HTTP), one would expect the meta-information related to intermediate security steps also to be agreed between consumer and provider.
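For illustration, a typical secured request looks along these lines. The WSDL describes only what goes into the Body; the wsse:Security header travels outside the contract (the token content here is made up):

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <wsse:UsernameToken>
        <wsse:Username>consumer</wsse:Username>
      </wsse:UsernameToken>
    </wsse:Security>
  </soap:Header>
  <soap:Body>
    <!-- the only part the WSDL actually describes -->
  </soap:Body>
</soap:Envelope>
```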

Agreeing on the transport and content of a web service request without the important security-related metadata makes little sense. Of course it is difficult to specify a WS-Security block explicitly when it comes to encryption, digital signatures, etc., but it is typically through joint specification efforts that this kind of interoperability can be established.
And even if the metadata part that is related to security, payment, auditing etc is not well

Today it is obvious that WSDL stems from RPC, with XML interoperability added, and that usage across business domains and in ever more heterogeneous environments has forced security specifications to be bolted on.
Will the WSDL specification catch up, and include support for this?

From a consumer perspective, a WSDL describing message content, transport protocol and physical service endpoint is of no use if the required metadata is communicated via mail or over the phone, but not mentioned in the WSDL.

As a side note: in Oracle ESB, if you do an XSL transformation you have to set the attribute passthruheaders=true to make sure the SOAP header is propagated to the underlying service. That is the tool support you end up with when the information is not agreed in specifications.

Open Source risks – Java Enterprise rootkits

Open source is ubiquitous. There is literally no project being run these days that does not use open source to some extent; libraries are downloaded and installed as needed.
If you are working on a J2EE project, what if the taglibs library or the Spring library is trojaned? Or what if a black hat developer is given commit access to one of the open source libraries in use? Remember, the black hat developer is most likely an employee of the Russian Business Network or some other big cybercrime business. They relocate and rebrand, but profit makers don't disappear voluntarily.

There was an excellent presentation by Jeff Williams on Enterprise Java Rootkits at the recent Black Hat USA conference. It should be mandatory reading for every Java developer out there.

Malicious developer

The malicious inside developer is not what you should worry about most. They already have access to your premises and your network, so leaving malicious code behind seems like a detour for an insider. A cybercrime company would rather get the code in themselves than pay off an untrusted inside developer; it is the risk/reward ratio that makes cybercrime inclined to attack from anywhere, anonymously. Involving an insider would be an unnecessary risk. If the possibilities of remotely inserting malicious code are limited or removed, then recruiting an insider could become the only feasible option. But we are not there yet.

In a software project the developers consist of internal, outsourced, commercial and open source developers, and the amount of trust you can put in them follows the same trajectory. Internal and outsourced developers you trust; they even have a face attached. The commercial ones you can at least get to one way or the other, even through court. But what about your open source developers? They do it for free, in their own spare time. How can we make sure they did not put some money-making code in there?

control open source usage

So make sure you only use open source libraries that are widely used, that undergo proper reviewing, and where upgrades and commit access are well controlled. Be conservative with updates and with the selection of open source libraries you use. If not, you are inviting every black hat developer out there to provide you with software. It is like asking the Mob to take care of your savings.

ongoing security verification

In the coding process, the build process and the operational environment there are security issues to consider. The number of libraries and APIs deployed in the runtime environment is a risk: is it all needed, and in use? Reflection, class loading and instrumentation are powerful APIs that can easily be exploited by a black hat developer.
Seal your jars, and build the open source libraries yourself as well. And finally, consider the JDK versus the JRE, and the extensions folder.
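Sealing is declared in the jar manifest; a minimal example:

```
Manifest-Version: 1.0
Sealed: true
```

A sealed package guarantees that all its classes are loaded from the same jar, which makes it harder to sneak extra classes into your packages via the classpath.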

Wikipedia – Russian Business Network
Enterprise Java Rootkits – Blackhat presentation