The Whole – More than the sum of all parts

A lot of people are following the buzz around OSGi and are excited about all the things it can do for you – a great world of new opportunities. Now, after the first draft of OSGi 4.2, it very much looks like we are getting even more as an official OSGi standard (with RFC 124): the framework formerly known as Spring Dynamic Modules (Spring DM for short) is going to be OSGi-fied. Today, I want to take a short peek at what this can actually mean and how it could affect software and component reuse in your stack even today.

First things first: when I talk to people about OSGi, the most common complaint is not OSGi-specific at all. Basically, it is the problem of every API out there – if you implement against an open or proprietary API, you are bound to it to some extent. That is besides the complaint of OSGi being too complex, which, when done right (including using all the versioning features OSGi offers), is unfortunately still kind of true; we are lacking the right tooling support here. Anyway, let's stick to the first issue (I'll cover the tooling problem in a later post). Speaking of proprietary: it is true you have to add custom headers to your jar files, but this alone doesn't hurt you. You can still use the jar as – yeah, well – a simple jar in your web container or any other Java application without any problems at all. That was the simple part, but there is more, of course – OSGi doesn't only specify manifest headers. There is also an API you program against. Activators, for instance, can be used to set up or shut down your bundle, open or close connection pools to databases, and register services signaling that your bundle is there and ready to do its work. Assuming you are not using any of the compendium APIs, the imposed layer is rather small and can be encapsulated in a single package per bundle, often even in a single class.
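To make the manifest part concrete, here is a sketch of the kind of headers we are talking about (bundle and package names are made up for illustration):

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.billing
Bundle-Version: 1.0.0
Bundle-Activator: com.example.billing.osgi.Activator
Import-Package: org.osgi.framework;version="1.4"
Export-Package: com.example.billing.api;version="1.0.0"
```

A plain JVM simply ignores these headers, which is exactly why the jar still works as an ordinary jar outside OSGi; only the Bundle-Activator line points at actual OSGi-specific code.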

However, as soon as you start using those few classes, they become part of your jar and you are bound to them. So we are still stuck, right? Well, not really. First of all, every OSGi expert (and probably any other sane software engineer) will tell you not to intermingle your domain logic with middleware APIs of any kind. As already mentioned, with OSGi you can usually keep this layer pretty thin. With Spring DM we can now make it even thinner and separate domain concerns from the container entirely. So, how can we achieve this? Of course, Spring is not doing any magic, although it does a pretty good job. All it actually does is provide an IoC framework that declaratively wires beans together – plainly speaking, that's about it. You could also use other IoC frameworks like Google Guice or PicoContainer if you like, but Spring's integration with OSGi is pretty unique. The big question remaining is how we can best separate these two concerns.

To start with, you design your domain logic in a bean-driven way. By that you not only get well-testable domain code, you also remove hard-wired dependencies that are not relevant to your core needs – a very common practice in Domain-Driven Design, to throw in some buzzwords ;-) Package all that code in one or more bundles, depending on your needs. What you get is a code base you can reuse in any environment. Now we need the glue code to actually make it work in OSGi – the package with the few OSGi-specific classes. Those you put into a different bundle or fragment (more on this later). Thanks to this separation you no longer deliver container-specific code if you don't need to. Even better, depending on your deployment scenario, you can change and adjust the configuration to your specific needs, all separated in different bundles/fragments. Removing configuration from your domain bundles has the advantage of independent versioning of your actual code and its configuration. The domain code, if used for instance in a software product line, is no longer tied to the configuration of the products based on it and can be versioned independently – enabling you to see what really changes in your application.
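To illustrate what such bean-driven domain code can look like, here is a minimal sketch (all class and interface names are made up): the domain class imports neither OSGi nor Spring, and its only collaborator is handed in from the outside, so it runs in a plain JVM, a unit test or an OSGi container alike.

```java
// Purely domain-level code: no OSGi, no Spring imports anywhere.
// InvoiceService and TaxRate are illustrative names, not a real API.
interface TaxRate {
    int percentFor(String country);
}

class InvoiceService {
    private final TaxRate taxRate; // injected, never looked up via a static factory

    InvoiceService(TaxRate taxRate) {
        this.taxRate = taxRate;
    }

    long grossCents(long netCents, String country) {
        return netCents + netCents * taxRate.percentFor(country) / 100;
    }
}

public class Demo {
    public static void main(String[] args) {
        // In OSGi, Spring DM would do this wiring from a descriptor in the
        // glue bundle; here we simply wire by hand.
        InvoiceService service = new InvoiceService(country -> 19);
        System.out.println(service.grossCents(10_000, "DE")); // prints 11900
    }
}
```

The glue bundle only needs to know how to construct InvoiceService; the domain bundle itself stays completely free of container code.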

Fig.1 - Sample Application Architecture

This approach is shown in figure 1. You have two applications using the same library with different configurations. You can fix bugs or add features in the library without touching the configurations, and vice versa. Interesting now is the difference between fragments and bundles – when should I use which? The advantage of a bundle is that it is the way OSGi intends code to be deployed: you can use Activators and define your dependencies. The fragment, in contrast, was actually intended to only carry optional data like i18n information and the like. Its advantage, however, is that the fragment and its host share the same class loader (which is also its biggest disadvantage and should be considered really carefully!). This distinction basically determines when to use which. Whenever the configuration is loaded by the library itself and can't be injected by another bundle, you need access to the bundle's class path in order to provide the required resource. But wait a minute – you can't use Activators in fragments, so how do we do the initialization? Here Spring DM comes in. You can just provide a Spring configuration in your fragment, and as soon as the fragment is attached, Spring will recognize it and initialize the beans defined in the descriptor – and you have your Activator again. You even have access to the BundleContext if required. However, this approach is only intended for code that otherwise wouldn't work or can't be changed for some reason. The best way is certainly to use bundles for that kind of thing.
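A sketch of what such a descriptor shipped inside the fragment could look like – the class and bean names are invented, and the exact namespace details depend on the Spring DM version in use:

```xml
<!-- META-INF/spring/module-context.xml, shipped inside the fragment -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- init-method/destroy-method take over the job of an
         Activator's start()/stop() -->
    <bean id="poolInitializer"
          class="com.example.config.PoolInitializer"
          init-method="start"
          destroy-method="stop"/>
</beans>
```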

The main point I was trying to make here is that with the help of Spring and its upcoming manifestation in OSGi R4.2 (if accepted), we have even more opportunities to build reusable and coherent artifacts, perfectly suited for software product lines. However, all the presented possibilities also open up potential for misuse, and you should choose your way carefully. In many cases the way that looks simplest at first sight eventually becomes a dead end, so be careful and farsighted when making your decisions.



OSGi dynamics and legacy code – taming the beast in the future?

Legacy code in OSGi has always been a problem. OSGi has such a dynamic nature that most libraries are either not aware of the potential errors that can be caused by suddenly disappearing bundles, or they use techniques not suitable for the OSGi programming model: absolute paths, file references instead of URLs, context class loaders and the like. Apart from these – I would call them generally bad programming practices, which can be avoided even in non-OSGi environments (maybe except for the context class loader problem) – there is one thing that is typical for regular applications and just doesn't quite work within OSGi: static, implementation-based APIs.

So, what do I mean by that? In short, the common APIs you are usually faced with when dealing with plain Java applications. You have a class which represents your API, maybe some configuration files to set your system up, and that's it – plain and simple. Logging is a perfect example. Take any of your preferred non-OSGi logging APIs: a configuration file or some system properties set up the basic configuration and then you are good to go. Just call a static factory, obtain your logging object and start doing whatever you want with it.
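With the JDK's own logging API, the pattern looks like this (any static-factory logging framework behaves much the same way):

```java
import java.util.logging.Logger;

public class StaticApiDemo {
    public static void main(String[] args) {
        // The static factory IS the API: it reads its configuration once,
        // from system properties or a config file, for the whole JVM.
        Logger log = Logger.getLogger("com.example.app");
        Logger sameLog = Logger.getLogger("com.example.app");

        // ...and it hands out process-wide singletons.
        System.out.println(log == sameLog); // prints true

        log.info("ready to go");
    }
}
```

It is exactly this JVM-global state that gets in the way once bundles start coming and going.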

Now, what is the problem with this in an OSGi context? At first glance, nothing. You can set up your system and it behaves like plain Java. Problems only occur when you start trying to use it in a dynamic, typically OSGi way. Changing a configuration at run-time, or providing the configuration in a different bundle or fragment, requires you to take additional things like start order into account. This in turn means your bundles no longer follow a core principle of OSGi: as soon as a bundle is resolved, it can be used.

Solving this seems hard. You can't change a public API, so the usual approach – extract interfaces and make the implementation available as a service – doesn't simply apply. Well, in some cases it might be possible, but it surely won't be in all of them. Looking at this issue, what is the core problem? The export of the API packages! Assuming you only depend on packages (not using Require-Bundle), bundles depending on a certain API become resolved as soon as that API becomes available as an exported package.

A possible solution to this dilemma is treating exported packages like services. At run-time, a bundle could explicitly add or remove exported packages. With this, you would be able to only export a certain API once you are certain it will work as expected. Of course, this needs more thinking, because it imposes several problems (security is just one of them), but the potential is great. To be honest, I am currently not sure if it is worth the risk or even the effort, but I am definitely sure it is worth exploring. Just think of the possibility of actually eliminating the dynamic import header with all of its drawbacks, and of removing proprietary workarounds like buddy class loading. So, what do you think?



Some thoughts on the OSGi R4.2 early draft

Last week the OSGi website[1] published the early draft of the OSGi R4.2 specification[2]. Reason enough to have a short look at what is covered in the upcoming release.

First of all, one has to notice that this is not a minor release, as the version number may suggest. Release 4.2 is actually far more significant than the R4.1 release last year. In some respects I would even say it is more important than the R4 release, because it makes usage much easier, especially for non-OSGi experts. But first things first: what is actually in the new draft?

Core design changes/enhancements:
RFC 120 – Security Enhancements
RFC 121 – Bundle Tracker
RFC 125 – Bundle License
RFC 126 – Service Registry Hooks
RFC 128 – Accessing Exit Values from Applications
RFC 129 – Initial Provisioning Update
RFC 132 – Command Line Interface and Launching
RFC 134 – Declarative Services Update

Enterprise design changes:
RFC 98   – Transactions in OSGi
RFC 119 – Distributed OSGi
RFC 124 – A Component Model for OSGi

Covering all of these RFCs in this post won’t be possible, but I want to pick some of them and go a little bit into the details.

RFC 120 – Security Enhancements (P. 5)
The major change here is the introduction of the concept of a black list. Now you are able to say something like "everyone is allowed to do such and such, unless…" – and here comes the clause. Before, only white lists were possible. Although this is a tremendous improvement in terms of simplicity, there is a reason why black lists are not as secure as white lists and why certain systems don't implement black-list approaches. I think this feature should be treated with extreme caution, and I hope it won't come back to haunt OSGi when security flaws based on it start surfacing – which they most certainly will.

RFC 125 – Bundle License (P. 62)
Licenses are always a problem in software development, especially when a lot of 3rd-party libraries are involved. The OSGi now addresses these issues by defining a manifest header with a distinct syntax, which allows vendors not only to express the kind of license a certain bundle is published under, but also which portion of the code within a bundle is associated with which license. In my opinion, this is a really cool feature. Conceptually it is certainly not rocket science, but the benefits are manifold. Now it is possible to display which licenses are actually contained in your container and, even more importantly, you can define rules for certain licenses not being deployed in your environment. I am just thinking of the typical viral problem of GPL[3]-like licenses.
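The draft's exact grammar may still change, but a header along these lines is what is being proposed (license URL and description are illustrative; continuation lines in a manifest start with a single space):

```
Bundle-License: http://www.apache.org/licenses/LICENSE-2.0.txt;
 description="Apache License, Version 2.0";
 link="http://www.apache.org/licenses/LICENSE-2.0.html"
```

A management agent can read this header without loading a single class from the bundle, which is what makes license-based deployment rules feasible.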

RFC 126 – Service Registry Hooks (P. 71)
This is going to be a mechanism mainly used for framework-related services, not applications in general. The idea is to allow hooks to tweak the service registry behavior. One very likely use case is remote services, as discussed for quite a while in the OSGi community. Although very useful and powerful, I can see the problem that you get indirect dependencies with this one and eventually have problems similar to those with plain Java jar files: a bundle doesn't express which tweaked service manipulation it needs (similar to the "which jar also needs to be at which position in your class path" problem). Of course, this is not the aim of the spec, but as soon as something is possible, people start using it, even if it is not intended by the designers. Fragments are the most famous example of this pattern.

RFC 129 – Initial Provisioning Update (P. 97)
The initial provisioning specification as published in the R4.1 spec has a major drawback. It assumes that the created jar can be manipulated in terms of setting properties on the zipped content (information about the MIME type). Although this can easily be done in Java, all major build tools I know of lack this feature, so custom implementations had to be used instead – a reason why I once chose not to go with this solution. This is now addressed in the RFC. A very useful update, if you ask me.

RFC 132 – Command Line Interface and Launching (P. 103)
Until now, all OSGi implementations differed in the way they were started and in how one could extend the console. This will change with this new RFC, and for me it is the right direction. We really need clear interfaces to instrument the container, so that exchanging the container becomes easier. Of course, every framework provider will still have its own "enhanced" features, which is a good thing, but you can more easily avoid the typical lock-in. This should boost transparency between frameworks.

RFC 134 – Declarative Services Update (P. 144)
I haven't looked very closely into this one, because Spring (as well as iPOJO[4] or SAT[5]) is way superior to and more widely used than DS, but in general the reliability of obtaining a service was increased, and some oddities have been removed or solved.

RFC 98 – Transactions in OSGi (P. 154)
Transactions are being introduced to provide true ACID-level support. For this, JTA is going to be used, together with the Distributed Transaction Processing protocol (XA+ Specification Version 2, The Open Group, ISBN: 1-85912-046-6), tailored to meet OSGi's needs. I haven't dug really deep into this, but I hope that with this RFC the issues users have reported with libraries like Hibernate – an issue popping up quite often – will finally be solved.

RFC 119 – Distributed OSGi (P. 169)
The goal should be clear, but here is a citation taken from the spec:

  • An OSGi bundle deployed in a JVM to invoke a service in another JVM, potentially on a remote computer accessed via a network protocol
  • An OSGi bundle deployed in a JVM to invoke a service (or object, procedure, etc.) in another address space, potentially on a remote computer, in a non-OSGi environment
  • An OSGi service deployed in another JVM, potentially on a remote computer, to find and access a service running in the “local” OSGi JVM (i.e. an OSGi deployment can accept service invocations from remote OSGi bundles)
  • A program deployed in a non-OSGi environment to find and access a service running in the “local” OSGi JVM (i.e. an OSGi deployment can accept service invocations from external environments)

[OSGi Spec R4.2 early draft (P.174) – August 22nd, 2008]

I really like the idea of being able to use services from other JVMs and even other machines without having to worry about how to implement this. A very compelling idea! Unfortunately, I don't believe in a flat world, and so I don't believe in such a simple solution. Network connections tend to have latency, drop unexpectedly or even return corrupted data. If your service consumer is not aware of that, I think there will be more and more problems popping up. However, I am open to this and will watch the development closely. Eric Newcomer[6] gave a pretty impressive demo of how this works at the OSGi Community Event[7], so I am looking forward to getting my hands dirty. Call me idealistic, but I actually want this to work and to be proven wrong!

RFC 124 – A Component Model for OSGi (P. 216)
The most important change, in my opinion, is described in RFC 124. Here, the approach formerly known as Spring DM[8] is translated into an OSGi standard. After a first quick look, it seems like it is indeed the Spring DM specification OSGi-fied, topped with some additional semantic sugar, but I have to look closer to be sure. The only question which remains for me: if this RFC really makes it into the final spec, what will happen to DS? In my opinion, RFC 124 is way superior to DS and there are no good reasons to keep DS. If I am wrong, maybe someone can shed some light.

Besides the facts mentioned in the early draft, I find it particularly interesting what actually didn't make it into it. For me, the whole EE issue with the mapping of web.xml files, for instance, is completely missing. The enterprise features are the things I was eager to read about, but instead: total silence. Hopefully they are still working on it. Releasing the next version of OSGi without a convincing Java EE story would be like driving a car with the handbrake on. Let's hope the EEG will manage to finalize their EE story in time!

As a final note, I want to say that I think it is a great move by the OSGi Alliance to actually grant early access to the things they are working on. This makes it much more transparent for the community what is going on behind the closed doors of the OSGi specification process. Maybe in the future we'll see an equally open way of interacting with the whole JSR process, so that even Sun will eventually be satisfied. Well, time will tell…



[1] OSGi Homepage:
[2] OSGi R4.2 early draft:
[3] GPL License:
[4] iPOJO:
[5] SAT:
[6] Eric Newcomer’s blog:
[7] OSGi Community Event:
[8] Spring DM:


Getting SVN running under Eclipse 3.4 Ganymede

I have been using Eclipse since version 2.1 and I can't imagine working without it anymore. However, some things, like the missing SVN integration in the default distributions, have annoyed me for quite some time. Now, with Eclipse 3.4, we have built-in SVN support, but unfortunately the installation is anything but convenient. That's why I decided to go through the process in my blog. Maybe this helps some of you out there to save some time.

1. If you haven’t done it already, download Eclipse from (which version depends on your personal needs, but I usually go for the PDE version)

2. After having downloaded and “installed” Eclipse, go to the Eclipse update site (within Eclipse: “Help > Software Updates…”). Select the “Available Software” tab, then go to “Ganymede > Collaboration Tools > Subversive SVN Team Provider” and install the SVN client.
(Optional – for me it worked, so I would recommend it when running under Windows: install the Windows SVN client from Tortoise. Under OS X, however, you can safely skip this step.)

3. Now you need to add the actual SVN connector from the following update site (no idea why this one hasn’t been included in the first place!):
To do so, click “Help” > “Software Updates…”. Select the “Available Software” tab and then click “Add Site…”. Copy and paste the update site URL into the dialog box and press “OK”. You should now see the new update site in the list as a URL. Open this update site and go to “Subversive SVN Connectors”. You’ll find a list with all available connectors. You can certainly install all of them and try out, one by one, which fits your environment best. For me, however, the “Native JavaHL 1.4.5” connector together with the “SVNKit 1.1.7” connector worked. (For those who haven’t installed the Tortoise client: I think you have to install the Windows binaries, but I can’t confirm that without having tested it – maybe some of you who tried it can comment on this one.)

4. Once you have this installed (and restarted your Eclipse), select the installed connector in the SVN preferences (“Window > Preferences > Team > SVN”); in the “SVN Connector” tab choose your connector from the drop-down menu.

That should be pretty much it. I really wonder why it has to be such an effort just to get SVN to work. Really not the best user experience, if you ask me.


Update Sept., 04th 2008: Fixed a menu name and extended the explanation for step 3.
Update Sept., 30th 2008: Updated the URL to the Polarion update-site according to feedback from Johan.


The notion of an application: Fitting just in

I had actually been planning to write something about this topic for months now, but I never found the time to do so. Now that Peter Kriens has picked up the topic in his current post [1], I just felt the need to write about my thoughts on this issue.

What is it all about?

When working on computer programs for a while, you pretty soon realize that you don't want to reinvent the wheel over and over again. So people came up with concepts driving software reuse further and further: starting with functional programming, finding its way to the fine world of objects, and in our days moving more and more towards component-based programming. Nothing surprising, though. With the help of OSGi, I'd say Java has a pretty good standing among its competitors and could already be considered component-ready.

However, there is one problem, at least as far as I am concerned. Looking at software product lines (I know, I know, they are considered more a fancy dream than a real-world solution, but please hang in there for a moment and let me explain), the core concept – as I would call it – is the idea of divide and conquer. In a domain as complex as computer science, it is not feasible to believe one can manage everything down to the finest granularity. So what you end up doing is creating the big picture and drilling down on an “as needed” basis – a typical top-down approach. Of course, this assumes you already have your main building blocks to work with. OSGi by its nature generally supports you in doing so: you just pick your functionality in terms of simple bundles, throw them into an OSGi container and everything just works – well, almost.

Unfortunately, that's just the simplest scenario in which things like this work. Peter says in his blog that applications (bundles belonging to one domain and providing a certain functionality through their collective interaction, as I would define it) are simple to manage because they express their dependencies. Tools like the OBR[2] make installation issues disappear, and the remaining requirements for an “application” concept derive, for instance, from the use of Class.forName() in legacy code to extend applications and from the sharing of implementation classes rather than interfaces. Here I have to disagree. Of course, the whole Class.forName() ideology is a problem within OSGi, but I don't think it is the major reason for the demand for an application model.

Why is the concept of an application worthwhile?

Figure 1 - Sample Application Stack

Let's assume you're building a pretty complex application with 400+ different bundles or so. Referring to software product line design, you're no longer managing all of these bundles individually, but some sort of application stack. If you're interested in creating a new application with “Domain Logic 1” and “Domain Logic 3” (see figure 1), you just pick those, and all the underlying plumbing should be taken along as well. Of course (and here it becomes tricky), this requires loose coupling, which excludes dependencies on a specific implementation – or, even worse, implementer – in the domain logic stack. Everything is abstracted from the implementation, and everything depends only on interfaces and services.

Figure 2 - Dependency path problem with separated API and implementation

One of my core problems here is that there is no distinct bundle representing an application. There is no syntax, no convention, nothing. I need to know which bundle in the application is my “core” (and I am not even talking about finding out what applications I can choose from). Having figured out which one to use, we are still not in a safe harbor. The problem continues with the separation of API and implementation. Once both are separated by design (as shown in figure 2), I can happily resolve all my bundles, but nothing works when relying solely on Import-Package statement scoping (which I – again! – highly recommend). Automatic dependency discovery tools just can't find a path to the implementation. In figure 2 you can see the problem, shown by the arrow: starting from the “Core Bundle”, I only have a dependency on the API. The API doesn't have any dependency on the implementation. The implementation bundle does have a dependency on the API, but for a dependency resolution tool it is not clear whether that bundle is just using the API or implementing it.
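A sketch of the three manifests involved makes the dead end visible (all names are invented, and the parenthetical labels are annotations, not valid manifest syntax):

```
Bundle-SymbolicName: com.example.logging.api        (the API bundle)
Export-Package: com.example.logging

Bundle-SymbolicName: com.example.logging.impl       (the implementation)
Import-Package: com.example.logging

Bundle-SymbolicName: com.example.core               (the consumer)
Import-Package: com.example.logging
```

From the headers alone, com.example.logging.impl is indistinguishable from any other importer of the package – nothing says “I provide an implementation of this API”.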

As you can see, there is a real problem, and I wasn't actually surprised when I heard about Spring's approach to it. Although Spring's way is a fast solution for the immediate problem, it introduces dependencies one might not want. Bundling components together the Spring way ties the interface to the implementation, limiting the flexibility of the whole design.

So, basically we are doomed, right?

Well, the question is: what do we REALLY need? As I mentioned before, in my opinion it is a set of just a few features:

  1. an application identifier (this is great to figure out what functionality I actually have)
  2. a way to express simple dependencies (with Import-Package, I think we pretty much have that)
  3. some way to express dependencies on our environment, like the existence of a certain service implementation.

Looking closely at the core spec, we can actually find almost everything we need – unfortunately, we can't use it right away. Feature number one, for instance, can very easily be achieved by just introducing a new property in the manifest; even a version could be introduced, although I am not yet sure this is a good idea.
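Such a property is purely hypothetical at this point, but it would not need to be more than an extra manifest line or two, for instance (header names invented for illustration):

```
Application-Name: com.example.billing
Application-Version: 1.2.0
```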

The tricky part comes with feature number three. Looking at the latest official OSGi core spec, version 4.1, there is no metadata concept reflecting services*. Fortunately, the old specs are still available for download [3], and in version 3 we can find the “Export-Service” and “Import-Service” headers, which were removed from the current specification. Assuming all bundles are correctly tagged with the right metadata, tools like the OBR can easily create a dependency tree and figure out which bundles to pick. Even better, such tools are then able to pick the best implementation based on its other dependencies and potential version conflicts on the imports of each implementation bundle. I think such a dynamic solution fits the OSGi way of doing things far better than any static one.
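In R3 syntax, the idea looked roughly like this (service interface names are illustrative): a bundle declares which service interfaces it registers and which ones it expects to find at run-time.

```
Export-Service: com.example.logging.LogService
Import-Service: com.example.config.ConfigService
```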

Ok, I know, R3 is so “old school” and we are living in a brave new world. Swimming back is not an option in a buzz-, hype- and in general fast-pace-driven world. Fortunately, we don't have to. With Spring DM [4], there is a way to express dependencies on services declaratively. Here you can define which services you are consuming and which ones you are providing, giving 3rd-party tools the ability to analyze the dependencies in a much more subtle way. If the Spring DM approach is widely used – as a new “official” standard – tools like the OBR can work again: they are able to assemble the right product configuration and deploy a runnable application based on the additional metadata from the Spring DM configuration. The only drawback so far: we will only get the best results if everyone uses this configuration. If it is used only partially, the results may not provide any significant improvement, depending on the very bundle constellation.
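In Spring DM terms, the providing and the consuming side each come down to one declaration (the interface and bean names are made up):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:osgi="http://www.springframework.org/schema/osgi"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/osgi
           http://www.springframework.org/schema/osgi/spring-osgi.xsd">

    <!-- we provide this service to the registry... -->
    <osgi:service ref="logServiceBean"
                  interface="com.example.logging.LogService"/>

    <!-- ...and we depend on this one from elsewhere -->
    <osgi:reference id="configService"
                    interface="com.example.config.ConfigService"/>
</beans>
```

Both declarations are metadata a tool like the OBR could read without ever starting the bundle.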

Something I intentionally left out until now is the new idea of providing nested frameworks. I haven't thought that entirely through, I have to admit. In general I like the idea, but for other reasons. The main reason why I don't like it for this particular problem – the lack of a notion of applications – is that, looking at figure 1, you might end up with several layers of applications, which would ultimately require multiple layers of nested frameworks. And speaking about complexity, I am not sure a framework in a framework in a framework in a… makes life so much easier. Besides, this still doesn't solve the problem of discovering the necessary dependencies for an application – not counting manual configuration of such a sub-framework, of course.


For me, there is a clear need for an application-like notion in OSGi. As shown above, there are valid use cases even now which make the use of OSGi somewhat limited in the long run. However, I actually don't see the need for an entirely new concept above or beyond a bundle. The only thing we really need is a clear statement on how to provide and obtain the information required for the extended dependency analysis described above. With this information at hand, tools can be developed by 3rd parties and make us all equally happy. In the end, we should keep in mind that OSGi is (currently) about defining a component standard, which should be open and not enforce any lock-in. This is in contrast to most adopters' wish for a simple technology solving their immediate problems. I think now is the time for great tools that support working with OSGi more easily, helping developers to adopt and embrace the functionality and freedom offered by OSGi.


*) Actually, I was cheating a little bit here. The OSGi R4 compendium contains the Declarative Services (DS) definition, which in fact defines valuable metadata. The reason why I didn't mention it here is basically that Spring DM is superior to DS in many ways and DS is not part of the core specification, so I favored Spring DM for the sake of functionality.


[1] OSGi Blog:
[2] OBR:
[3] OSGi Spec Download Site:
[4] Spring DM:


OSGi Community Event – my personal resume

After being involved in OSGi for a couple of years now, I finally had the chance to attend one of the community events. I have to admit, I was kind of nervous. I had talked to so many people in the community for so long through the mailing list or personal mails, but had never actually met any of them, so this was the moment to meet them in person. It was kind of weird; it felt a bit like finally meeting someone you got to know through one of the online flirt portals (not that I ever did something like that, but that's how I would imagine it ;-) ). You know the prejudice that computer scientists are all nerds and freaky to some extent, so this was really exciting. In short, I wasn't disappointed – quite the opposite! I haven't met anyone I didn't like! Especially Richard Hall (Apache Felix lead), Jan Rellermeyer (ETH Zuerich), Michael Keith (Oracle) and Jo Ritter (ProSyst) seem not only to be really nice guys, but are also very good speakers. So if you have the chance to attend one of their talks, it is worth it!

Besides the talks of the already mentioned speakers, there were several other talks I liked, namely the talk by Hans Bossenbroek of Luminis, Jon Bostrom (MobiNoir Consulting) and, of course, the keynote by Peter Kriens. Going into every single talk would just be too much for this post, but all of them either gave me some good insights or were just fun to watch. Well done! I am more technically oriented, so unfortunately the first day wasn't as interesting for me as the last one, but I think this is the trade-off of such an event.

Socializing-wise, this conference was pretty good as well. I talked to many interesting people and gained a lot of insight into the community. You can’t imagine how many companies have been using OSGi for years now, but are just not talking about it. It is really impressive to see how far they have come and what they have achieved with OSGi. I really think that in the near future we can expect to see many new areas where OSGi will become the de facto standard.

Concerning the talk I gave on Wednesday, I can only say that I am more than happy with the feedback. The room was so packed that some people even had to stand, which didn’t really help with my nervousness ;-) After my talk, I had the chance to talk to several people about security, their experiences and new ways to tackle the problem I outlined during my presentation. I think we are well on our way to coming up with suitable solutions and I am looking forward to more interesting discussions. People are now starting to use the features OSGi offers, and this will drive more and even better solutions, which we will need for broader adoption. Security is crucial: once we finally start deploying multiple applications side by side in one OSGi container, we can no longer assume that everyone is playing nice. We have to enforce security – that’s what we owe our customers/users. If you didn’t have time to talk to me during the conference, or didn’t have the chance to attend the event but are dealing with similar problems, I am more than happy to get in contact with you. Just drop me a mail. I think the more input we get, the better the solutions we can come up with. Of course, I’ll keep you posted on how things develop along the way.

Till then – cheers,


Firefox download day

This time it’ll be a very short post ;-) I just want to support my favorite web browser, which I have been using for years now and which has become the first program I usually install on a new computer. I hope this way I can give something back to the community.

The Firefox team is currently trying to set a Guinness World Record for the most downloads within one day. To accomplish this, they created a website collecting potential downloaders and coordinating the event. If you’re a supporter of Firefox (or want to become one) and want to give something back (besides being part of a world record), you might consider registering. Of course, giving away your e-mail address is always something that concerns me, but I actually trust these guys and just hope they won’t do anything stupid with my data.

Besides being part of the effort, I think the latest version of Firefox sports a lot of cool new features which are just worth checking out. Even the latest beta of Firefox 3 is already great and you might want to have a look at the features it already provides. As far as I can tell, this version is already stable enough to give it a test drive. So give it a shot and check it out!



A little bit of talking at the OSGi Community Event

Two weeks ago I received great news: the proposal I submitted with my fellow coworkers (Boris Terzic and Markus Gumbel) got accepted at the OSGi Community Event, so I hope I’ll see some of you I have only met virtually so far. Concerning the topic – I think it’ll be interesting for all of you working, or at least intending to work, with security and OSGi. Now that I am allowed to talk about it, I will be able to share some of the experiences we have gained.

In the talk, titled “Do not disturb my circles! Secure Application Isolation with OSGi”[1], Markus Gumbel[2] and I are going to talk about how to isolate several application domains within one JVM – based on OSGi mechanisms. No big deal, some of you might say, but depending on your requirements, it actually might become a big deal or even a serious issue. In our case, we are facing a Common Criteria (CC)[3] evaluation for our JVM-based components ([4] gives a nice introduction to this topic – unfortunately only in German). But first things first…

InterComponentWare AG (ICW)[5], the company I am currently working for, is one of the (if not the) leading eHealth providers in the business, with a wide-ranging product portfolio. The core product is called LifeSensor[6], an electronic health record like the ones Google[7] and Microsoft[8] recently introduced in their first versions. If you, like me, are a little paranoid about privacy issues, I can certainly recommend “our” service. We take security pretty seriously and have a whole department dedicated to it. Medical data is always a big issue: if your credit card data is lost, you can get reimbursed by your credit card company, but once access to your medical data is compromised, it can’t be undone. Pretty serious from my point of view! Anyway, this is not actually what I am aiming at. We (and my group in particular) are involved in the implementation of the German Telematic Infrastructure project, specified by the Gematik[9] and commissioned by the German government.

In our talk, we will use the “Konnektor” we are developing as the sample application to illustrate the usage of the OSGi security features. The Konnektor (we use the Cisco AXP platform[10] underneath) is the key device deployed at every medical practice and pharmacy. It is responsible for creating secure connections to the telematic infrastructure back end, as well as for other security-relevant tasks like reading eHealthCards, signing prescriptions or uploading emergency data onto the electronic health cards, for instance. [11] gives a nice overview of how things are done in more detail. Additionally, the specification allows third-party applications to be deployed on the Konnektor as well, to extend its functionality. A concrete example would be the integration of the LifeSensor mentioned above to directly upload your X-ray images to your eHealthRecord. You might understand that security plays a very important role in such a scenario.

The key problem we are faced with is that the functionality of the Konnektor has to be certified according to the CC, which I mentioned at the beginning of this post. Well, certifying software in general is not easy, especially if you are talking about security and something as complex as this. The real problem in our case, however, is not only certifying that the software does exactly what is specified – no more, no less – but also allowing third-party “plug-ins” to extend its features without compromising its certified functionality or the security of the whole system. In a simple scenario, a doctor uploads a new “feature” from a malicious source to the Konnektor and we have to ensure beforehand that nothing serious can happen – a tough one! Well, we not only found a way, we found one that uses OSGi’s security features to do the trick even within the same JVM: the Konnektor and potentially dangerous third-party extensions running side by side in the same JVM (without restarting, of course – there is a reason why we are using OSGi!).
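To make this a little more concrete: OSGi’s Conditional Permission Admin lets you bind permission sets to conditions such as the location a bundle was installed from or its signer. The following sketch uses the textual policy encoding from the current R4.2 draft; the location and package names are invented for illustration and are of course not our actual policy:

```
ALLOW {
  [org.osgi.service.condpermadmin.BundleLocationCondition "https://plugins.example.org/*"]
  (org.osgi.framework.ServicePermission "com.example.konnektor.api.*" "get")
  (org.osgi.framework.PackagePermission "com.example.konnektor.api" "import")
} "third-party extensions may only consume the public connector API"
```

Combined with a running SecurityManager, something along these lines keeps a malicious plug-in from opening its own network connections or touching the Konnektor’s internal services – it only ever sees the API we explicitly grant.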

Like every other cutting-edge software project, we also found ourselves struggling with various problems no one has experienced before, so I am pretty sure we can contribute some important insights in the domain of secure OSGi application development. I hope this will be as interesting for you as it is for me to work on this topic. It is really something you can’t find a lot published about – if someone has done something similar or knows about related work, I would be happy to hear about it!

Although the sample we chose is pretty unique domain-wise, the basic techniques we will present are applicable in many different domains as well (banking, insurance, development, personal lifestyle, …). Just think of your Eclipse installation. Right now, you install your plug-ins without a SecurityManager and hope that each plug-in only does what it is supposed to do… What if it doesn’t – or worse, doesn’t intend to play by the rules? In the future I can see OSGi frameworks running as general platforms combining various applications in one JVM – a kind of meta operating system. As soon as this vision becomes reality, I don’t want applications to be able to communicate with each other without restrictions. My health insurance provider together with my personal medical record manager in one container… you never know what an insurer might do when they get hold of your sensitive medical records. At least I certainly don’t want to find out!

I realize that parts of this post might sound like a commercial. If so, this is certainly not intended, but I felt the need to explain in more detail than I will be able to at the talk why we did what we did and why this is so important. In the talk, we will stick to the technology only and avoid any relation to a concrete product as much as possible – I really hate talks pretending to point out new technologies or lessons learned while actually trying to sell a product, so you’re safe here for sure!

For those of you attending the conference, don’t hesitate to talk to me if you have any questions, or just say hello. I was and certainly am a big fan of technical discussions. All of you who won’t be able to attend the event but are interested in the topic, don’t worry – I will certainly blog more about this topic in the near future ;-) Till then, stay tuned!


UPDATE: for everyone interested, the slides can be downloaded from the OSGi website (pdf).




SpringSource Application Platform – the next step forward for OSGi?

At the beginning of this month[1], SpringSource published the first beta release of their so-called SpringSource Application Platform[2], an (as I would call it) extension of the existing OSGi platform. The move came as a pretty big surprise to me, I have to admit. No announcements prior to the release, no actual hints – nothing. Pretty impressive: a company grown that popular being able to work on something of this magnitude without even the glimpse of a rumor leaking. Good job.

Well, I guess eventually the guys behind Spring are trying to make money. A very natural thing for a company, of course. After having provided the community with a great framework for so many “supposed to be” JEE applications for free, it is time to gain some revenue. I think they deserve it after all they have done for the community. I actually find it particularly interesting how the strategic move of choosing the GPL[3] as the license of choice will work out. On the one hand, I can totally understand the point of view: choosing a license which forces competitors to play with open cards and actually contribute to the community. On the other hand, I have to say I think twice before putting myself through the hassle of dealing with a viral license. For instance, when I was recently involved in the decision between LOGBack[4] and Log4j[5] as a logging back end, the decision was based not only on features, but mostly on licenses. In some scenarios, licenses are just a show stopper, so I honestly hope this will not be the case for my fellow colleagues at SpringSource.

Enough philosophizing… SpringSource released a very interesting extension to the existing OSGi specification. Summarizing all the new features they provide would certainly go beyond the scope of this blog; besides, I am by far not as knowledgeable as the developers themselves, so I will just refer to their great introductions [6], [7] and [8]. Instead, I want to point out some things I consider worth mentioning.

First of all, the repository[9] is just great. From the experience I’ve gained migrating existing applications into modular, OSGi-based bundles, I can tell you this can be pretty time consuming and frustrating if done accurately (not just adding some headers to the manifest, but also changing the existing code to work within OSGi where necessary).

The second thing I’d like to mention is two-sided… On the one side, OSGi is missing another abstraction layer, I think. Defining fully functional domain modules to help people jump-start in OSGi is needed – no getting lost in dozens of bundles, versions and services. I think all of you who have been working with OSGi for a while have faced the problem where you resolved and started every bundle appropriately, but your program just won’t work, because you forgot to deploy the actual implementation of a service. If you know your system, finding the problem won’t be a big deal, but in an unfamiliar application with hundreds of bundles deployed and more potentially ready to be deployed from a repository, just waiting to be picked… which one is the one to choose? An abstraction – bundling those, as Spring has done with its PAR – might help.

But there is also another side to this story. Maybe all we need is some sort of indicator, some metadata to indicate that a specific bundle provides/consumes a distinct service? This would remove the provider lock-in problem caused by solutions like “Require-Bundle”, for instance. All we actually need is a (somewhat) abstract concept of a working entity (or application, or domain module, as you might prefer to call it), which is more abstract than ordinary bundles. This could just be the aggregation of certain packages and service consumers as well as providers. Only APIs, no actual implementation dependencies and certainly no Bundle-SymbolicName dependencies. An ideal application would then only consist of a couple of domain modules, which have to be deployed in the OSGi container. The container (with the help of the repository) just picks the required bundles and assembles the application. No need to specify a particular vendor; everything is resolved by the framework. That would be my dream…
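Sketched as (invented) manifest excerpts, the difference between the two coupling styles looks roughly like this – the first header ties you to one concrete provider bundle by its symbolic name, while the second is satisfied by any bundle exporting a compatible version of the API package:

```
Require-Bundle: com.acme.billing.impl;bundle-version="1.2.0"

Import-Package: com.acme.billing.api;version="[1.2,2.0)"
```

A service-level indicator, as proposed above, would push this one step further: not even the origin of the API package would matter, only the fact that some deployed bundle provides the service.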

Anyway, I think this is something which should be discussed in more detail in official sources. The approach Spring chose is certainly a way to go and test drive, but honestly, I doubt it’ll turn out to be the right fit in the long run. All the solutions I can think of which were created to solve a certain pain turn out to be a serious burden and an actual blocker in the long run, like Eclipse buddy class loading, the OSGi Require-Bundle* header or the (unlimited) backwards compatibility required by the jar specification.

The last thing to mention is the introduction of the new headers. In general, I am all in favor of doing so and I am not surprised about the chosen names. Of course, they might clash with headers introduced later by the OSGi consortium, but this is nothing new in Java. If you take a look around, developers, it seems, consider the reverse domain nomenclature a burden and just ignore it. It is by now better established on the package level (although there are still rather well-known exceptions like the IAIK[10] or even Peter Kriens’ FileInstall[11] bundle, for instance). On the manifest level, however, most of us (including myself) can plead guilty of silently ignoring Java best practices. Unfortunately, the magnitude of such a decision haunts you later. The recent discussion about the namespace issues of the OSGi headers was the perfect reminder for me to give these small things more thorough thought, as we usually tend to liberally ignore them in favor of fast development. Well, at least I pledge to do better and hopefully so does every responsible developer ;-)
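For illustration, take the platform’s Import-Library header (the vendor-prefixed variant below is hypothetical, not something SpringSource actually uses, and the version range is made up): the short form reads nicely but could collide with a future official header, while a prefixed form would be clumsy but safe from such clashes:

```
Import-Library: org.springframework.spring;version="[2.5,3.0)"

SpringSource-Import-Library: org.springframework.spring;version="[2.5,3.0)"
```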


*) Sorry, but I just can’t help it. In my opinion this header is not only a hurdle, but a real drawback from a componentization point of view. From a migration strategy point of view, I guess the opposite applies, but what lasts longer – migration or maintenance?






What’s the next killer app for the iPhone?

The release of the iPhone SDK was some weeks ago and I have actually had time to think about the whole hype. Of course, one of my big disappointments was that Java is most likely not going to make it onto the iPhone. I actually understand the concerns regarding the UI, but I could live quite happily with a headless version. Ok, I have to admit this wouldn’t be a standard-conformant JVM, but when I look at current environments, I really have to ask myself why such a version doesn’t already exist. I mean a headless JVM, designed only to work on servers, for instance. Maybe just as a profile, nothing distinct, but optimized for 24/7 server scenarios. I am convinced that in such an environment OSGi could play an important role even on iPhones. Just forget for a moment that the current license of the SDK doesn’t allow background processes or “other languages” than Objective-C, and focus on the opportunities. OSGi offers a great componentization approach, which the new “Universal OSGi” RFP is trying to port to other platforms. Especially the expressiveness of the metadata provided by a bundle is very interesting for other languages as well. Of course, the question is to what extent this is possible: does it make sense to define a subset of functionality which everyone can meet, or does this make the whole idea worthless? Questions which, at least for me, are not easy to answer yet, but I am very interested in the outcome of the expert group’s work.

Now, regardless of whether Java or even OSGi ever makes it onto the iPhone, what are these new, cool and so desired applications we can’t wait to see on our handheld devices? I gave it some thought and here is the list of “killer apps” I would love to see on the iPhone sooner or later.

  • Google Maps with GPS: No need to explain much, this is THE application I am waiting for on the iPhone. No more expensive navigation software extensions for different countries will be needed. You always have it at hand! (Ideally with offline capabilities to enable its use even without internet access.)
  • Instant Messaging: Something like fring has been due for a long time. You never need to be offline, no need for expensive “text messages” – just write your messages, without length limitations and even with pictures and other gimmicks. Not to forget features like Voice over IP via Skype, for instance.
  • Environment controller: Let’s call it the ubiquitous remote. You could not only control all your HiFi and TV equipment with it (ok, we need Bluetooth here, but maybe in 3-5 years…), but also open your garage door or your car, dim the light in the room (the one you are in, based on location information) or control your favorite computer game with the gyroscope in the iPhone. Why have tens of different applications if one can do everything for you?
  • Black box: Ok, this can be a good and a bad thing, I have to admit, but looking at it from a purely exploratory, positive perspective, an iPhone could be used as a sort of black box in case of accidents. Later, officials could be enabled to reconstruct the precise movements of the iPhone – a little bit like a flight recorder in airplanes. One could record where, at which speed and with which movements the phone was moved.
  • Emergency tool: Not only a phone which calls 911, 110 or whatever the emergency number of the country you are in is, but also a life saver for others – with applications helping people apply first aid step by step, with pictures and animated videos showing how it is done properly. An application capable of automatically submitting emergency data to a doctor nearby in an ad-hoc network, as soon as the doctor has authorized himself.
  • Interactive games: Games are nothing new – no big deal about it. What is new is the kind of games introduced by the Nintendo Wii. Equipment like a gyroscope or a position tracking system is new and animates people to actually play together. The iPhone can push this even further. With a mobile device, you can play anywhere with everyone, thus making playing a social experience. Just imagine a game you can play on your way to work with someone on your train. Based on an ad-hoc network, you probably don’t even know the person, but can still play against them. Even better are big sport events, where thousands of people could be enabled to play against each other in some sort of parallel game. A very interesting field, I have to admit. So many things are involved or have to be thought about here: how to handle ad-hoc networks, how to negotiate the right score between two parties and avoid cheating, how to handle online/offline behavior, how to secure the communication and manage trust in spontaneous networks, … Maybe I have to elaborate on this one in a later post; this topic just fascinates me, although I don’t actually play games.
  • My personal buddy: An application I have been imagining for years now. An “intelligent” agent helping me organize my life by proposing appointments according to my daily behavior, my favorites and so on. I think Peter Kriens once wrote something more detailed about some of the functionality I would expect.

As you probably noticed, I am convinced of the success of the new iPhone and I already have some ideas about what might come in the future. However, the iPhone is surely not the answer to all questions and still has several drawbacks, but it is certainly a move in the right direction. Let’s see what’s coming up next.

Stay tuned,