Some thoughts on the OSGi R4.2 early draft

Last week the OSGi website[1] published the early draft of the OSGi R4.2 specification[2]. Reason enough to take a quick look at what the upcoming release covers.

First of all, note that this is not the minor release the version number might suggest. Release 4.2 is far more significant than last year's R4.1. In some respects I would even call it more important than R4 itself, because it makes OSGi much easier to use, especially for non-experts. But first things first: what is actually in the new draft?

Core design changes/enhancements:
RFC 120 – Security Enhancements
RFC 121 – Bundle Tracker
RFC 125 – Bundle License
RFC 126 – Service Registry Hooks
RFC 128 – Accessing Exit Values from Applications
RFC 129 – Initial Provisioning Update
RFC 132 – Command Line Interface and Launching
RFC 134 – Declarative Services Update

Enterprise design changes:
RFC 98   – Transactions in OSGi
RFC 119 – Distributed OSGi
RFC 124 – A Component Model for OSGi

Covering all of these RFCs in one post isn't possible, so I want to pick a few and go into a bit more detail.

RFC 120 – Security Enhancements (P. 5)
The major change here is the introduction of the concept of a black list. Now you are able to say something like "everyone is allowed to do such and such, unless…" and add the exception clause. Before, only white lists were possible. Although this is a tremendous improvement in terms of simplicity, there is a reason why black lists are considered less secure than white lists and why certain systems don't implement black-list approaches. I think this feature should be treated with extreme caution, and I hope it won't come back to haunt OSGi when security flaws based on it surface, which they almost certainly will.
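To make the idea concrete, a deny rule in the conditional permission table might be encoded roughly like this (a sketch based on the draft's ALLOW/DENY encoding; the exact syntax may still change before the final spec, and the bundle location is made up):

```
DENY {
    [org.osgi.service.condpermadmin.BundleLocationCondition "http://untrusted.example.org/*"]
    (java.security.AllPermission)
} "block-untrusted-bundles"
```

In words: bundles loaded from the given location are denied everything, while all other rules in the table stay white-list style.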

RFC 125 – Bundle License (P. 62)
Licenses are always a problem in software development, especially when a lot of third-party libraries are involved. OSGi now addresses this by defining a manifest header with a distinct syntax, which allows vendors not only to express under which license a bundle is published, but also which portion of the code within a bundle is associated with which license. In my opinion this is a really cool feature. Conceptually it is certainly not rocket science, but the benefits are manifold. Now it is possible to display which licenses are actually contained in your container and, even more important, to define rules that keep bundles under certain licenses out of your environment. I am thinking of the typical viral problem of GPL[3]-like licenses.
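For illustration, such a manifest entry might look like the following (a sketch of the proposed header; the license name, link, and description here are only examples, not taken from the draft):

```
Bundle-License: Apache-2.0;
 link="http://www.apache.org/licenses/LICENSE-2.0.txt";
 description="Apache License, Version 2.0"
```

Because the value follows a defined syntax instead of being free text, tooling can parse it and, for example, refuse to deploy bundles whose license matches a forbidden pattern.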

RFC 126 – Service Registry Hooks (P. 71)
This is going to be a mechanism mainly used for framework-related services, not applications in general. The idea is to allow hooks that tweak the behavior of the service registry. One very likely use case is remote services, as discussed in the OSGi community for quite a while. Although very useful and powerful, I can see the problem that this introduces indirect dependencies, eventually causing problems similar to those of plain Java jar files: a bundle doesn't express which tweaked service manipulation it needs (similar to the "which jar also needs to be at what position in your class path" problem). Of course, this is not the aim of the spec, but as soon as something is possible, people start using it, even if the designers never intended it. Fragments are the most famous example of this pattern.
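Details of the draft aside, the underlying idea can be sketched as a toy model (this is my own illustration, not the draft API; all class and method names are made up): the registry consults every registered hook before returning lookup results, so a hook can silently hide services, e.g. remote proxies, from ordinary clients.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a registry hook (NOT the real draft API): before a lookup
// result is handed to the caller, every registered hook may shrink it.
interface FindHook {
    void find(String lookupName, List<Object> candidates);
}

class ToyServiceRegistry {
    private final List<Object> services = new ArrayList<Object>();
    private final List<FindHook> hooks = new ArrayList<FindHook>();

    void register(Object service) { services.add(service); }
    void addHook(FindHook hook) { hooks.add(hook); }

    List<Object> find(String lookupName) {
        List<Object> result = new ArrayList<Object>(services);
        for (FindHook hook : hooks) {
            hook.find(lookupName, result); // each hook may remove entries
        }
        return result;
    }
}

public class HookDemo {
    public static void main(String[] args) {
        ToyServiceRegistry registry = new ToyServiceRegistry();
        registry.register("localService");
        registry.register("remoteProxy");
        // A hook that hides remote proxies from ordinary lookups.
        registry.addHook(new FindHook() {
            public void find(String lookupName, List<Object> candidates) {
                candidates.remove("remoteProxy");
            }
        });
        System.out.println(registry.find("anything")); // prints [localService]
    }
}
```

The sketch also shows why the indirect dependency worries me: the caller has no way to tell that a hook changed its result behind the scenes.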

RFC 129 – Initial Provisioning Update (P. 97)
The initial provisioning specification as published in R4.1 has a major drawback: it assumes that the created jar can be manipulated to attach properties to the zipped content (information about the MIME type). Although this can easily be done in Java, all major build tools I know of lack this feature, so custom implementations had to be used instead; that is one reason why I once chose not to go with this solution. This is now addressed in the RFC. A very useful update, if you ask me.

RFC 132 – Command Line Interface and Launching (P. 103)
Until now, all OSGi implementations have differed in how they are started and how the console can be extended. This will change with this RFC, and for me it is the right direction. We really need clear interfaces to instrument the container, so that exchanging one container for another becomes easier. Of course, every framework provider will still have its own "enhanced" features, which is a good thing, but the typical lock-in becomes easier to avoid. This should boost transparency between frameworks.

RFC 134 – Declarative Services Update (P. 144)
I haven't looked very closely into this one, because Spring (as well as iPOJO[4] or SAT[5]) is superior to and more widely used than DS. In general, though, the reliability of obtaining a service has been improved, and some oddities have been removed or resolved.

RFC 98 – Transactions in OSGi (P. 154)
Transactions are being introduced to provide true ACID-level support, based on JTA. Additionally, the Distributed Transaction Processing protocol (XA+ Specification Version 2, The Open Group, ISBN: 1-85912-046-6) will be used, adapted to meet OSGi's needs. I haven't dug really deep into this, but I hope this RFC will finally solve the issues users have reported with libraries like Hibernate, an issue that pops up quite often.

RFC 119 – Distributed OSGi (P. 169)
The goal should be clear, but here is a citation taken from the spec:

  • An OSGi bundle deployed in a JVM to invoke a service in another JVM, potentially on a remote computer accessed via a network protocol
  • An OSGi bundle deployed in a JVM to invoke a service (or object, procedure, etc.) in another address space, potentially on a remote computer, in a non OSGi environment
  • An OSGi service deployed in another JVM, potentially on a remote computer, to find and access a service running in the “local” OSGi JVM (i.e. an OSGi deployment can accept service invocations from remote OSGi bundles)
  • A program deployed in a non OSGi environment to find and access a service running in the “local” OSGi JVM (i.e. an OSGi deployment can accept service invocations from external environments)

[OSGi Spec R4.2 early draft (P. 174) – August 22nd, 2008]

I really like the idea of being able to use services from other JVMs and even other machines without having to worry about how to implement this. A very compelling idea! Unfortunately, I don't believe in a flat world, and so I don't believe in such a simple solution. Network connections tend to have latency, drop unexpectedly, or even return corrupted data. If your service consumer is not aware of that, I think more and more problems will pop up. However, I am open to it and will watch the development closely. Eric Newcomer[6] gave a pretty impressive demo of how this works at the OSGi Community Event[7], so I am looking forward to getting my hands dirty. Call me idealistic, but I actually want this to work, so prove me wrong!

RFC 124 – A Component Model for OSGi (P. 216)
The most important change, in my opinion, is described in RFC 124. Here, the approach formerly known as Spring DM[8] is turned into an OSGi standard. After a first quick look, it seems to be indeed the Spring DM specification, OSGi'ified and topped with some additional semantic sugar, but I have to look closer to be sure. The only question that remains for me: if this RFC really makes it into the final spec, what will happen to DS? In my opinion, RFC 124 is way superior to DS, and there are no good reasons to keep DS. If I am wrong, maybe someone can shed some light.
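For readers who haven't seen the Spring DM style this RFC builds on, publishing and consuming an OSGi service declaratively looks roughly like this (a sketch in Spring DM 1.x syntax, which the RFC may still rename; the interface and class names are made up):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:osgi="http://www.springframework.org/schema/osgi">

    <!-- Publish a local bean as an OSGi service -->
    <bean id="greeterImpl" class="com.example.internal.GreeterImpl"/>
    <osgi:service ref="greeterImpl" interface="com.example.Greeter"/>

    <!-- Consume a matching service published by some other bundle -->
    <osgi:reference id="greeter" interface="com.example.Greeter"/>
</beans>
```

The bundle itself contains no OSGi API calls at all; the container wires the service in and out of the registry, which is exactly what makes this approach so accessible to non-experts.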

Besides what is in the early draft, I find it particularly interesting what didn't make it in. The whole EE issue, the mapping of web.xml's for instance, is completely missing. The enterprise features are what I was most eager to read about, but instead: total silence. Hopefully they are still working on it. Releasing the next version of OSGi without a convincing Java EE story is like driving a car with the handbrake on. Let's hope the EEG will manage to finalize their EE story in time!

As a final note, I want to say that I think it is a great move by the OSGi Alliance to grant early access to the things they are working on. It makes what is going on behind the closed doors of the specification process way more transparent for the community. Maybe in the future we'll see a similarly open way of interacting with the whole JSR process, so that even Sun will eventually be satisfied. Well, time will tell…



[1] OSGi Homepage:
[2] OSGi R4.2 early draft:
[3] GPL License:
[4] iPOJO:
[5] SAT:
[6] Eric Newcomer’s blog:
[7] OSGi Community Event:
[8] Spring DM:


13 thoughts on “Some thoughts on the OSGi R4.2 early draft”

  1. Agreed, nice summary!

    Agree that 4.2 will be a big step forward for OSGi.

    That said, I’m skeptical of RFC119 and the vendor “spin” associated with it. As you note, RFC119 ignores the real issues concerning distributed computing (i.e. Deutsch’s 8 Fallacies). I also found the RFC119 demo somewhat underwhelming.

    If you are interested in distributed OSGi-based systems, check out our approach ‘Newton’; I’d be interested in your thoughts. Newton-based systems have been dynamically scaled to 1000 nodes, so I guess this has somewhat redefined my expectation of what an impressive OSGi demo should be :)



  2. Yeah, looking into the Newton project has been on my TODO list for quite some time, and I will definitely look into it. The only problem is that when you’re not directly working on a given topic, it is not always easy to find spare time to dig deeper. The project has popped up pretty often in interesting discussions, so even if you don’t want me to, I’ll take a closer look ;-) Do you have any documentation other than the project doc, like a white paper on the mentioned deployment scenario, lessons learned, or something similar, that you would recommend? You can send me a mail at mirkojahn “at” gmail “dot” com if you don’t want to distribute it in public.


  3. I feel this move forward actually hurts OSGi. OSGi is heading more into the application domain, which means a lot of overlap between app servers and OSGi. The reason, IMHO, that OSGi is popular can be summed up as “the better classloader”. Once OSGi gets bloated with features and does too much, it will no longer be an option for use in other platforms. The reason is simple: OSGi would be the platform rather than aiding others. Fine if that is the plan for OSGi, but then be open and say so. Up to this point I had thought even the service API was over the line, but these features really push it over.

  4. Very nice summary.

    I can assure you that the EEG is actively working on the JEE story, but it is not likely that it will be addressed in the next version of the spec. Time for that is running out, and the JEE problem space is very complex and comprehensive.

    RFC 119 is about the integration of remote communication systems (like ESBs) into the OSGi service programming model. It purposefully does not address in any way how the transport is implemented. We do agree that trying to hide the remote semantics from the clients is a fallacy we want to avoid. Nothing in the spec requires the remote invocation to magically work for existing clients. However, this is the first step in supporting new SOA applications using the OSGi service programming model. Very exciting :-)

    Tim Diekmann
    co-chair Enterprise Expert Group, OSGi Alliance
    co-spec lead of RFC 119 – Distributed OSGi

  5. Mirko,

    Great summary, thanks very much. This is indeed a major release, despite the minor version number.

    I think you are right, the Spring-DM design may very well be the most significant thing we are doing in this release.

    What’s going to happen to DS is definitely a good question, but it is not really going anywhere. That said, if the Spring-DM stuff catches on, I agree with you that it will be the way most people will want to configure OSGi services.

    On the distributed design, as you might imagine we spend a lot of time debating things like the level of transparency to shoot for (we do not believe distributed computing is transparent, and it should not be promoted as such), but the main goal we have is very simple: extend OSGi properties to support the configuration, discovery, and provisioning of remote services.


    If we really wanted to include a vendor spin in 119 we would have adopted the Paremus product ;-) We specifically avoid endorsing any one approach to distributed computing, since OSGi is all about lots of ways of doing lots of things, and we did not want (for example) to require things that could only be done with RMI and Java. Another important requirement is interoperability with non-OSGi environments.

    However we also expect the Paremus design to work with the extensions we are defining – please let us know if you don’t think that’s the case.


  6. Hi Mirko,

    While I see your point about black lists being less secure, what I find in practice (I’ve been doing security for quite a while now) is that the bigger threat to security is complexity. Without “black lists”, security policies become more complex, meaning that as a practical matter the judicious use of “black lists” can make your system *more* secure rather than less, by making your policies simpler.

    Furthermore the use of “black lists” is optional and any framework administrator can always choose to not use them at all if they feel it will lead to weaker security.

    I believe that system vendors (such as OSGi framework vendors or application server vendors) would like to be able to provide default policies that give users a “secure” system, at the very least for the services they themselves provide. This is not possible without the use of “black lists”! For example, today there is no way for an OSGi vendor to say “only the system bundle can advertise the PackageAdmin service” with an out-of-the-box policy file!

    I see the option to use black lists as another tool in the security professional’s arsenal of tools. And while they can certainly be abused or used incorrectly, *not* having them has caused people to abandon the use of J2SE security completely! I know this from personal experience unfortunately.

    I really hope that “black lists” (or deniable permissions as we call them) along with the dynamic nature of the OSGi model will help foster the use of J2SE security in real systems!

    John Wells (Aziz)

  7. @Tim and Eric
    Please don’t get me wrong. I am looking forward to see how this will work in real live applications. And I actually hope that it’ll be a great success. Right now, I am investigating issues with Secure WebServices and this is no fun, so having something within the realm of OSGi would be greatly beneficial. My only concern here is that the unawareness of the remote nature of a service will introduce problems that could have been avoided. Sometimes, doing this explicit reminds people to take more into account. For instance if remote services are not found, unless a specific property is set in the filter of the tracker. In this case, you consciously decide that you want to have remote services as well. Ok, this is just a stupid example from the top of my head, but I hope you understand, where I was aiming at. Without knowing people might get things that behave different as they expect (and test for). Always a nasty root of errors.

    Looking at it from your point of view, I have to agree that there is a clear advantage to having “deniable permissions”. There is a reason for white lists, but applied with caution I have nothing against black lists at all. The point I was trying to make was somewhat different. When I look at fragments, for instance (I just like this example, sorry), they can be extremely useful as well, and I am using them quite often. Unfortunately, people find out about the potential and use them in a “not intended” manner (like creating a global-class-space-like extension mechanism), just because it is possible.

    What I am trying to say (and I think we agree on this one) is that managing security in OSGi is not a trivial task. However, I am looking for methods that make the job of the user/developer/admin easier. In our team, we actually came up with an approach that solves some of the typical problems without introducing black lists, while still allowing unknown components to be added dynamically, safely, and more or less self-managed. For the Common Criteria evaluation we’ll be facing next year, such black lists are pretty much a no-go for us, because they are harder to evaluate and increase the attack surface unnecessarily.

    As a last note I want to add that, until now, working with OSGi and security has been pretty painful (as with plain Java). In my opinion, it doesn’t have to be this way. The thing to start with is tooling support: a framework deploying a basic set of bundles with minimum permissions, plus a UI (web, Java, command line, whatever) that allows new bundles to be installed safely in a secure context by dynamically applying permissions. Such a simple tool would help people just “play” with security and familiarize themselves with it. Right now, getting started is really painful and in many cases a show stopper.

  8. @Mirko,

    yes, we discuss this issue a lot. And as you might expect we get opinions on both sides.

    Some folks say “I just want to find a service, I don’t care whether it’s remote or not.”

    Others say “I really don’t want to end up using a remote service by accident.”

    What we’re doing has the potential to go either way, but the current model tips more toward the conscious use of remote services (you have to give them special properties and configuration metadata and can easily filter them out if you want to).

    Although the possibility exists that someone might end up using a remote service accidentally, I think it’s more of an edge case than the typical situation.

    Thanks again for your comments!


  9. I don’t agree that Spring-DM is way superior to DS. They simply use different approaches.

    We switched our project from Spring-DM to DS. The problem with Spring-DM is that it relies on proxies, and there was no tool to find a missing dependency. We often had to wait until the dependency timed out (default 5 minutes?). With DS it is very simple to find the dependency issue (scr info ). DS also forces us to define one component.xml per service, which is correct. DS is also more lightweight and low level (no proxies…) = better control and easier debugging.
