What happened to the Conficker virus?

It has been quiet for months, but is a comeback to be expected? As the worm is self-updating, and has switched from “phone-home” logic to peer-to-peer logic, it is considered autonomous in many ways.
But the software is still being updated, there are still more than 5 million infected workstations and servers, and all these zombies are just hanging around waiting for instructions.

Mikko Hypponen (F-Secure): Conficker presentation at the BlackHat conference

loss of attention

During the first quarter of 2009 the peak estimate was roughly 10 million infected machines. The Norwegian police got infected, as did a London hospital and a major Norwegian hospital, among others. The cost of the worm, including indirect costs due to loss of productivity, cannot be estimated. Did it take longer to get back the results from the cancer test? Affecting administrative systems also affects patient care.
Conficker was all over the place, but it has lost attention. The media have moved on to Twitter, Facebook and other social networking security issues that are more media-friendly than a boring botnet that is idle but growing.
And loss of attention is just what they are waiting for. What is the motivation? What are they going to use this botnet for?

The Conficker Working Group has been fighting the Conficker battle since the initial infections in late November 2008, but 8 months later they are actually more on the defensive. Rodney Joffe, director of the Conficker Working Group, says: “Even if we lose against Conficker, there are things we’ve learned that will benefit us in the future.”

Motivation

The resources spent on producing Conficker can be illustrated by the cryptographic algorithm used in the Conficker family. Conficker.B started using the MD6 algorithm just weeks after the algorithm was first published, and Conficker.C updated it just two weeks after the revision fixing a buffer overflow was published. This is slick, leading-edge worm development.
There has been speculation on whether they have funding from an intelligence agency, the military, or even a country. Conficker.A employs two checks to avoid infecting systems located within Ukraine; this code is removed in later versions of the Conficker family. During the Conficker.C update some code was added for a popup marketing fake antivirus software. Most likely a disguise to make it look like an everyday for-profit worm.
The best-case scenario would be if they are only in it for the money.

Useful tools

  • F-Secure Easy Clean – a free tool from F-Secure to remove Conficker
  • Windows Malicious Software Removal Tool – scans your system for infections
  • Make sure you configure Microsoft Update to get the latest security patches.

RIA – client-side performance

RIAs

The web browser is the most common user interface, familiar to every end-user, and available on any system out there. So it is expected that more and more applications end up with a web front end. As there is demand for rich functionality, a set of libraries and plugins has grown up around the browser engine to overcome the limited request-response cycle. RIA (Rich Internet Application) is the common term, and RIAs are normally driven by usability requirements. But what is often forgotten is the performance aspect.

performance

And there is nothing as important to usability as response times. You will never have an end-user accepting a 20-second wait just because the functionality she is given is so excellent. Yes, I have heard that argument from front-end developers over and over again. Testers, as expected, just laugh at it, and the end-user normally starts crying.
So make sure you focus on performance, also on the last mile from the server, over the network, and into the browser execution engine. The different browser engines do have their differences, so do the performance tuning on different browsers, and at least on the browser most commonly used by your end-users. No need to climb onto your high horse and show that the latest version of some fancy browser, running on a developer’s hardcore PC, performs OK. Think about your end-users. You have to take the requirement of acceptable response times seriously. There are tools to support performance tuning of the front-end.

tools
  • Firebug and YSlow
  • Fiddler
  • HttpWatch
  The tools should be combined, because they all have their pros and cons, and support different browsers.

    conclusion

    Performance does matter, and it is the most important factor when it comes to end-user experience. In SOA, the server-side performance must be analyzed, see the previous blog entry. But client-side performance must not be forgotten. Make sure you use the available tools to analyze performance as the end-user will experience it.

    automated tests, load testing and functional testing

    Automate your tests.

    You will need it for functional testing, performance testing, regression testing, and bug fixing. The quality of the runtime application very much comes down to the quality of your testing and the coverage of your tests. Most people know about JUnit for testing their logic. But what about web interface testing? And load testing?

    The sooner you find your errors, the better. You could let the test team smoke out all the mistakes, which tends to happen very late in the project. You could of course put it into production and let your end-users find them, which tends to be too late.
    Automated tests are the way to go.
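As a minimal sketch of that advice, here is the JUnit idea expressed in Python’s unittest; the `apply_discount` function is a hypothetical piece of business logic invented for the example:

```python
# A minimal automated-test sketch: unit tests guarding a piece of logic.
import unittest

def apply_discount(price, percent):
    """Hypothetical logic under test: reduce a price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price * (100 - percent) / 100

class DiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# Run the suite programmatically so it can be wired into any build script.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same tests then double as regression tests: any later change that breaks the logic fails the build instead of reaching the test team or production.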

    load testing/performance tuning

    When doing performance tuning you must be able to put load on your system. There are commercial tools available, but one tool that I would recommend is Grinder. It does have a startup cost, but if you spend some time in the initial phase you will have a tool with extreme flexibility and power. Normally you start out with some recordings, and then you pick up the generated script and tailor it to suit your needs. What is very useful is the controller/agent architecture: you control the tests from one machine, but have a range of agent machines performing the actual tests. This way the client-side limitations on the number of sockets, or on hardware, are not an issue. Using just one machine for client-side testing, you would normally hit the socket limitations before the server side is loaded properly.
    The reporting part of Grinder is limited, but for the price (free software) it is definitely worth investing some hours in learning about its capabilities. Using a JMX console, and extensive performance logging, will give you the bottlenecks and the distribution of response times.
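This is not Grinder itself, but a self-contained Python sketch of the same idea: several client threads generating HTTP load and collecting response times, here against a throwaway local server so the example runs on its own:

```python
# Minimal load-generation sketch: N threads fire requests and record timings.
import threading, time, urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # keep the console quiet
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # port 0 = pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

timings, lock = [], threading.Lock()

def agent(requests):
    """One simulated agent firing a fixed number of requests."""
    for _ in range(requests):
        start = time.perf_counter()
        urllib.request.urlopen("http://127.0.0.1:%d/" % port).read()
        with lock:
            timings.append(time.perf_counter() - start)

agents = [threading.Thread(target=agent, args=(5,)) for _ in range(4)]
for t in agents:
    t.start()
for t in agents:
    t.join()

timings.sort()
print("%d requests, median %.1f ms" % (len(timings), timings[len(timings) // 2] * 1000))
server.shutdown()
```

In Grinder the agent function would live in a Jython script distributed to the agent machines, which is exactly what sidesteps the single-machine socket limitation described above.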

    Functional testing

    For functional web testing there are other tools: Watij and Selenium. There are pros and cons to both, but both are easy to get going with, so you can compare them yourself. Maybe have them both available in your test setup?

    SOA governance

    How can SOA become a competitive advantage? Everybody is doing SOA, so kicking off a SOA project does not in itself give you a competitive advantage. It is all about utilizing the SOA advantages, and this is where governance is required.
    How can the established services that cover the core business processes be reused to improve existing processes, or even to introduce new ways of doing business?
    A well-governed SOA system will be adaptive to rapidly changing business requirements, and even be a mandatory tool for the business analysts, and different business units, when improving business processes.

    One typical mistake is to start out doing a SOA solution on a quiet and limited part of the IT systems stack, and then being very consumer-driven about which services to provide. Usually the decision is about reducing risk and cost, and it is of course a fast way of establishing some services. But as this is a quiet corner, the reusability factor will most likely be low, and the advantage of SOA is never seen.
    It is important to focus on the core business processes. This is where reusability, and improvement to business processes, will occur and make a difference.
    When doing information modelling of the main business processes, the core services will stand out. Information modelling is the key to finding the core services. What is the essential information for performing the everyday business?

    But even if core business processes are being SOA-enabled, it is still no competitive advantage as long as the same business processes were already automated before SOA. It is when existing business processes are improved, more business processes are automated, or new business processes are identified that the competitive advantage occurs. And this can happen only if the SOA services are well documented and communicated across departments, and to the business developers.
    A SOA repository is not only for the IT department; they would be interested only in the technical side. The actual information modelling side matters just as much: what information is exposed as a service, and is there information that can be self-serviced and reused in a different business process?

    As reuse begins to take place, proper lifecycle management is important. How are the quality, reliability and performance? What about version upgrades and end-of-life? Are the security aspects covered? Do we know and trust all consumers and providers in this process? Are there weak spots in the process chain?

    Now that is what SOA governance is all about.

    I have seen large SOA programs, with a corresponding budget, but no plan for SOA governance; the success factor was measured by counting the number of services created and the number of people and man-hours spent in the program. That really says nothing about the competitive advantage at all.

    Business analysts and information modelling are the place to start. Then pick some information that is already used across at least two different business processes, and service-wrap that information. And do not introduce a new service every time you need some additional information.

    Three typical pitfalls:

    • Skipping the information modelling and creating consumer-driven services from the beginning. This will most often end up as a set of end-to-end integrations using web services for communication, with very limited reuse.
    • Introducing new services to fit each need instead of modifying existing services, because it is so easy to toss in a new service, while modifying tested services could have side-effects and unknown consequences. This ends up with larger lifetime costs and a huge number of non-reused services.
    • Postponing SOA governance until a certain level of SOA maturity is reached. It will be very difficult to establish governance, and to consolidate the service stack, later.

    WS-Security and WSM

    In a SOA solution the security requirements will change as more and more business-critical information is exposed on the service bus. The growing complexity as the number of providers and consumers increases will require attention to securing the web services. In this article WS-Security and Oracle WSM are introduced as a way of encapsulating the business-critical information in a standardized way.

    Article: SOA security, WSM and WS-Security


    WSM installation

    I have created a detailed installation guide on how to install WSM, Oracle Web Services Manager.

    The installation guide starts out by installing a separate J2EE server from the SOA Suite install, and then creating a separate OC4J instance to hold the WSM components. This separates the administrative component (ascontrol) from the WSM components.
    The WSM installation is then done as an advanced install against this OC4J instance.
    There are also details on how to create a dedicated WSM database instance.

    Even if there are already existing J2EE servers running SOA Suite components, it is general advice to keep WSM separate from the SOA Suite.

    Article: WSM detailed installation guide

    And now you have a WSM to secure your web services.


    SOA performance, cpu-bound or memory-bound

    The non-functional requirements are too often postponed, or even forgotten, until the later stages of a project. But if the general system design, the prototyping, and all the functional requirements are already delivered, it can get quite expensive to deal with poor performance at the tail end of the project.

    Is the system memory-bound, cpu-bound, network-bound, or disk-bound?

    It is mandatory to establish benchmark tests before performance tuning can start. There are both open-source tools and commercial software (YourKit, JProfiler, …) that can help identify the issues. But a simple JMX console such as jconsole, or JRockit Mission Control, will give a first overview.
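For that first overview, the standard JDK command-line tools are enough; a sketch, where `<pid>` is a placeholder for the process id reported by jps:

```shell
# Quick first look at a running JVM with the stock JDK tools.
jps -l                      # list local JVM processes with their main class
jstat -gcutil <pid> 1000    # sample heap/GC utilisation every second
jconsole <pid>              # attach the graphical JMX console
```

If the GC columns from jstat are constantly busy, the system is likely memory-bound rather than genuinely CPU-bound.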

    In general, SOA systems are most often memory-bound, and a busy garbage collector can generate extensive CPU load. The two are closely related in a system with automatic garbage collection.

    JVM tuning would normally be the place to start. The Java heap size is one of the most important tuning parameters of your JVM.
    The garbage collector algorithm is another important tuning parameter. Make sure you get into the details of the actual JVM, whether it is Sun JVM, JRockit, or something else.
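As an illustration, heap size and collector choice are set with JVM flags along these lines; this is a sketch for the Sun/HotSpot JVM of that era, flag names differ between JVMs, and `com.example.MyServer` is a placeholder:

```shell
# Illustrative HotSpot startup flags (verify against your JVM's documentation):
# -Xms/-Xmx equal: a fixed heap avoids resize pauses under load
# -XX:+UseConcMarkSweepGC: a low-pause concurrent collector
# GC logging feeds later analysis of collector behaviour
java -Xms2g -Xmx2g \
     -XX:+UseConcMarkSweepGC \
     -XX:+PrintGCDetails -Xloggc:gc.log \
     com.example.MyServer
```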

    You could start buying your way out of the problem by adding more memory or CPU power per server, or by adding parallel servers in a load-balanced environment. But hardware is a cost.

    Rewriting the system and rethinking the design could anyway become a necessity.

    Important factors are:

    • Message size of web services
      Influences IO latency, memory requirements and parsing/transformation cost (CPU). Rethink the message size, or rethink XML for communication.
    • The number of XML transformations and routing decisions
      Again, parsing and transformation cost. Could the XML header be used, allowing partial transformations?
    • Transport
      Influences IO latency. Use asynchronous protocols, verify the network routing setup, limit external IO, use compression.
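The second factor, routing on the XML header instead of transforming the full message, can be sketched with a pull parser; the message layout and element names here are made up for the example:

```python
# Sketch: take a routing decision from a message header without building
# a tree for the whole payload, using ElementTree's incremental parser.
import io
import xml.etree.ElementTree as ET

# A large message with a small routing header (hypothetical layout).
message = (b"<msg><header><route>invoice</route></header><body>"
           + b"<item>x</item>" * 10000
           + b"</body></msg>")

def route_from_header(xml_bytes):
    """Pull-parse until the header is complete, then stop."""
    for event, elem in ET.iterparse(io.BytesIO(xml_bytes), events=("end",)):
        if elem.tag == "header":
            return elem.findtext("route")  # return before walking the body
    return None

print(route_from_header(message))  # -> invoice
```

The same early-exit idea applies to a service bus: content-based routing that only needs the header should never pay the parsing cost of the full body.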

    There are also product-specific tuning options for the Service Bus and BPEL engine in use.
    The Oracle SOA Suite tuning options will be covered in a later blog post.

    SOA service repository

    Establishing a SOA service repository can be done by utilizing products like Oracle Enterprise Repository and Oracle Service Registry. The roadmap for these two products looks promising, but there are also open-source alternatives with considerable momentum.
    It must be a live service repository, attached to documentation, so that it is made visible and available to other business units. Could other systems and business processes take advantage of a service to improve their process?
    The last issue is too often forgotten, but reusing services to gain a competitive business advantage, and to improve business processes intra-business or business-to-business, is really the whole SOA idea.

    How suitable are the services for reuse? What about the service granularity? Who are the business owners, and the consumers? What about security and performance? Make sure the services are well documented and communicated.

    By the way, consumer-driven services will most often not be as reusable as information-driven services; more on that issue in the next blog post.

    SOA governance should not be postponed

    Should a SOA program reach a certain level of maturity before SOA governance is implemented, or should SOA governance be implemented from day 1?
    Definitely start thinking about governance from the beginning, because consolidating a set of services under common governance control at a later stage will be time-consuming, if possible at all. There will always be governance decisions taken anyway, so it is not possible to ignore governance. Someone must exercise control over the services: when to upgrade, what to upgrade, when to retire; the whole SOA life-cycle from a business perspective. But if there is no official governance, the decisions will be taken at different spots in the organisation, and a range of unofficial policies will be the result.
    There is a fine line between establishing policies up front and letting consensus between key stakeholders improve the policies in a feedback loop, while still making sure the official guidelines are followed during that consensus-driven improvement.
    A SOA program has a broader range of stakeholders than a monolithic solution; the keyword is communication. Different business units are involved, along with a range of consumers, and there is interoperability between business units and across businesses. As SOA maturity grows, governance consolidation gets difficult, not to say impossible, due to the increasing range of stakeholders. So, establish your SOA governance from day 1.

    When SOA governance is lacking, what often happens is that services are tailored to each consumer, and duplication of almost identical services is the result. The lifetime cost of such a scenario is not something to present to the CTO, and it really breaks the whole SOA plan.

    Eclipse RCP

    When working on an Eclipse RCP solution, Lars Vogel’s tutorial series boosted the startup time of the project. And Eclipse RCP is in my opinion the user interface framework to stick to for rich client applications: impressive, well documented, and rich in functionality.

    Check out Lars Vogel’s site: http://www.vogella.de/