Wednesday, November 10, 2010

How to use Gemini Naming with Eclipse Virgo (Part 1)

First, let me get some definitions down.

What is Gemini Naming? It is a bridge for using JNDI in an OSGi runtime. More information is available on the Gemini Naming site. You can also read up on it in the OSGi Enterprise Specification, section 126.

What is Eclipse Virgo? It is the donated SpringSource dmServer: a really nice modular OSGi runtime/app server.

    Now let me explain why I, being of sound mind, would try to use JNDI in OSGi.

    Long version: I am very lazy and I also like to use other people's stuff. That stuff is usually not OSGi-aware, and I am currently trying to use it in Virgo. Most of the time other people's stuff also needs to be configured with database connection information. Well, at least the stuff that I wanted to use. That configuration can be conveniently provided as 5 to 10 entries in a property file or as a single JNDI location string. I already have 5 to 10 property-file entries to maintain, so I did not want to duplicate them into another property file that could be missed during deployment.

    Short version: Because I can and hacking/learning stuff is fun.

    So without further delay, let me explain how to deploy Gemini Naming and use it to look up entries from a traditional Java app in the Virgo Web Server.

    Some preparation is needed:

    1. Download the Virgo Web Server

    2. Unzip that bad boy to a location that we will call $VIRGO_HOME

    3. Download Gemini Naming (M1 at the time of the post - so artifact names might change)

    4. Unzip that bad boy as well.

    5. Copy org.eclipse.gemini.naming.impl-1.0.0.M01-incubation.jar to $VIRGO_HOME/lib/kernel

    6. Download the OSGi enterprise API bundle from the Maven repo and save it to $VIRGO_HOME/lib/kernel

    Now we need to modify a few files. Use your editor of choice and modify $VIRGO_HOME/lib/ to add the OSGi enterprise API bundle to the list of launcher bundles:

    launcher.bundles =\
    This takes care of the kernel region (read on for more info on Virgo regions). Now let's take care of the user region.
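    The resulting property ends up looking roughly like this sketch (the existing entries are elided, and the enterprise API jar name is an assumption based on the Maven artifact; match whatever file you actually downloaded):

```properties
# illustrative sketch - keep the existing launcher bundles and append the
# enterprise API jar at the end of the list
launcher.bundles =\
 <existing entries>,\
 lib/kernel/osgi.enterprise-4.2.0.jar
```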

    We need to modify $VIRGO_HOME/config/

    1. Let's add Gemini Naming to the list of baseBundles in the user region

    baseBundles = \
    ... ,\

    2. We need to import a package into the user region from the kernel and that is done by modifying the packageImports property:

    packageImports =\
    ..... ,\

    This package comes from the OSGi enterprise bundle deployed in the kernel region, and importing it allows us to keep a single copy of the bundle serving both regions vs. each region having its own copy.
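    The import is a one-line addition; something along these lines (the version range is an assumption, adjust it to the bundle you deployed):

```properties
# illustrative sketch - expose the JNDI API package from the kernel region
# so bundles in the user region can resolve it
packageImports =\
 <existing entries>,\
 org.osgi.service.jndi;version="[1.0.0,2.0.0)"
```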

    3. Let's enable the OSGi console at the same time so we can examine the list of services:
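    A sketch of the relevant setting (this is the standard Equinox console property; I am assuming Virgo exposes it in this config file, and 2401 matches the telnet step below):

```properties
# illustrative: open a telnet console on port 2401
osgi.console = 2401
```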


    Now you can start the server and see the JNDI services deployed.

    cd $VIRGO_HOME/bin/
    ./ -clean

    Connect to telnet console:

    $ telnet localhost 2401
    osgi> vsh service list

    You should see entries:

    51 javax.naming.spi.ObjectFactory                                    4
    52 javax.naming.spi.InitialContextFactoryBuilder                     4
    53 org.osgi.service.jndi.JNDIContextManager                          4
    54 org.osgi.service.jndi.JNDIProviderAdmin                           4

    That is it. Now you can use JNDIContextManager and JNDIProviderAdmin to get a reference to an InitialContext, or look up services with the "osgi:service/" URL scheme.
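    Inside a bundle, the cleanest route is to grab the JNDIContextManager service and call its newInitialContext() method; once Gemini Naming is running, a plain InitialContext with the "osgi:service/" scheme works too. Here is a minimal sketch of the lookup shape (the DataSource interface name is just an example; outside an OSGi runtime there is no JNDI provider, so the lookup simply fails):

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class OsgiJndiLookup {

    /**
     * Looks up an OSGi service through the "osgi:service/<interface>" scheme.
     * Returns null when no JNDI provider is available (e.g. outside OSGi).
     */
    public static Object lookupService(String interfaceName) {
        try {
            InitialContext ctx = new InitialContext();
            return ctx.lookup("osgi:service/" + interfaceName);
        } catch (NamingException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // inside Virgo this would return the registered service instance
        System.out.println(lookupService("javax.sql.DataSource"));
    }
}
```

    Inside Virgo the same call returns the registered service instance; in a plain JVM it returns null because no initial context factory is installed.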

    Well... You almost can. But that is a topic for part 2. Stay tuned. Hint: Tomcat gets in the way (see here)

    Tuesday, November 9, 2010

    Mule says no thanks to OSGi.

    Ross Mason of MuleSoft posted an article where he argues that OSGi is too complex for the end-user developer. Although I can sympathize, I do not completely agree. Yes, there are areas where OSGi is middleware-centric. Yes, there are always things that should be simplified. Yes, there are pains in moving to OSGi.

    OSGi is a Dynamic Module System for Java. The key word is Module. Modularity is hard, not OSGi. Properly drawing boundaries between components is hard. Getting communication protocols and APIs right is hard. Unlearning habits of the past is hard.

    Dynamic is not something that the regular developer ever learned to deal with properly. For years we have been in a servlet world, where one never really worried about resources/services disappearing, or even cared about writing code to account for multi-threading. I still see code that treats HttpSession as a private HashMap.

    Think about all the iterations that enterprise Java went through and all of the technologies involved. CORBA, EJB1, EJB2, RMI, JINI, JNDI, JSP, JDBC, JMS, JCA, JPA, WS-(death)*, and on and on. How many books, articles, man-years of effort and learning went into getting developers to understand that set of technologies? All sorts of stacks and frameworks were built to hide the complexities of that set of technologies. Struts, WebWork, iBatis, Hibernate, Spring (later validated with EJB3/JEE5 and 6), Facelets, Seam, etc. That is a lot of effort from a lot of very smart people to get us to where we are now. It is very easy now to write an app in 3-4 weeks that is useful, pretty and functional. The same would be one to two years of manpower with EJB+JSP+"name the container you hate the most" just seven, six or even five years ago.

    With OSGi the situation is very similar to the days of EJB1. Until recently, very little information was available in dead-tree format, and not too many people you knew were using OSGi. Advice was hard to come by. The surrounding ecosystem was narrowly focused on middleware, embedded devices or Eclipse plugins. The situation has been improving dramatically lately. Books are getting published (check out OSGi in Action and Spring DM in Action). The OSGi Alliance has started a wiki to spread the jungle knowledge. Many OSS projects are picking up OSGi and provide feedback and information on project-specific pages.

    OSGi is a great technology, but it is a fairly low-level technology. It is like coding your whole application's persistence logic with raw JDBC all over again. There are a number of OSGi specs that address most of the perceived complexities of OSGi: Declarative Services (DS) and Blueprint (Eclipse/Spring DM and Apache Aries) bring DI capabilities to OSGi. Building OSGi bundle metadata becomes simpler with the tooling support provided by bnd, bundlor and the IDEs. Containers like Apache Karaf and Eclipse Virgo are simplifying the use of OSGi with each release.
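    To give a flavor of how small the Blueprint programming model is, here is a minimal, illustrative descriptor (the com.example.* class and interface names are made up) that publishes a plain bean as an OSGi service with no OSGi API in the code:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- illustrative Blueprint descriptor; com.example.* names are hypothetical -->
<blueprint xmlns="">
    <!-- an ordinary POJO, no OSGi imports required -->
    <bean id="greeter" class="com.example.GreeterImpl"/>
    <!-- published to the service registry under its interface -->
    <service ref="greeter" interface="com.example.Greeter"/>
</blueprint>
```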

    Having said that, there is one issue with OSGi. Please bear with me while I address it in a roundabout way.

    I think it was 2007 or 2008. I was at the SpringExperience/SpringOne conference. There was a lot of talk about data grids, super-duper distributed caching, mass scaling. All that sounded like candy to me. During one of the BOFs I asked Rob Harrop of SpringSource fame - "what is stopping adoption of this technology for use in an everyday project". His answer was very interesting and I wish I wrote it down, but it boiled down to (paraphrasing) :
    Developers want to use APIs and tools that they already know. If there are serious limitations on usage of those tools and APIs, developers tend to scream bloody murder and bail.
    OSGi is a different programming model from what we have learned to use so far. It has different quirks (TCCL handling is undefined in the spec for example) and not all libraries that we grew up with play nicely in that different programming model. Until all/most of the current cream of the crop libraries work seamlessly, or with very limited, configuration-only changes (no recompile/re-bundling required), there will be a push back on OSGi adoption.

    I do not consider myself an OSGi evangelist or a zealot. I really do not think that OSGi is a golden hammer or a nail. OSGi and modularity force architects and developers to think about the overall architecture in much more detail. There are projects that just don't need it. But there are applications that will benefit greatly from OSGi support. Those applications will tend to have longer lifespans without complete rewrites and will be less complex after 2 years in production than the usual package tangle found in other so-called "enterprise" applications.

    Wednesday, October 20, 2010

    News flash from SpringOne2GX

    This is a very interesting development. (get source at

    A way to use Spring when building social/mobile apps, including OAuth and integration with Twitter/LinkedIn/etc.
    There are the Spring Mobile and Spring Social projects.
    Very, very interesting. Also Grails/Spring Data support for NoSQL datastores - very inventive of them to
    revitalize the Spring Framework in a post-JEE world.

    Tuesday, March 2, 2010

    Compiling Hibernate 3.5.x

    A few days ago I wanted to build Hibernate's new 3.5.x trunk to see the documentation and basically eyeball the differences from prior versions. I found this page ( and followed the set-up steps. This is where the pain started.

    First off, the build failed with a Maven enforcer plugin error. I am running on JDK 1.6, specifically "1.6.0_17". The enforcer has a configuration that stops the build when building with a 1.6 JDK, i.e. [1.5,1.6) - at least 1.5, but exclusive of 1.6.
    I ended up changing that to [1.5,1.7)
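    For reference, the rule after the change looks roughly like this (a sketch of the maven-enforcer-plugin's requireJavaVersion rule from memory, not the exact Hibernate POM):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-enforcer-plugin</artifactId>
    <configuration>
        <rules>
            <requireJavaVersion>
                <!-- was [1.5,1.6): at least 1.5, but refuse 1.6 -->
                <version>[1.5,1.7)</version>
            </requireJavaVersion>
        </rules>
    </configuration>
</plugin>
```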

    Secondly, I am working on a Mac, and Hibernate builds its documentation with po2xml, which is not available on the Mac out of the box. You need to use ports (Fink, MacPorts, etc.). I had an Ubuntu VM, so instead of messing around with ports I just moved to the VM.

    Thirdly, the build was doing some very strange things - i.e. jar artifacts were not getting published to the local .m2/repository. For some reason they were renamed to $artifact.jdocbook-style. This was a totally bewildering result. I spent a bit of time looking at the debug log from Maven, but that provided no immediate results. A hint came from :


    I was using Maven 2.2.1. So I thought - would it help to downgrade to 2.0.11? Well - yes, it did.
    After repointing M2_HOME to the location of 2.0.11, the build worked fine.
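    The switch itself is just environment plumbing; something like this (the install path is illustrative):

```shell
# point M2_HOME at the older Maven install and put its bin dir first on PATH
export M2_HOME="$HOME/tools/apache-maven-2.0.11"
export PATH="$M2_HOME/bin:$PATH"
# 'mvn -version' should now report 2.0.11
```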

    The moral of the story: to build Hibernate you MUST use Maven 2.0.11 until something funky is fixed in the maven-jdocbook-plugin.

    Saturday, February 6, 2010

    Common local repository location for spring-build ivy artifacts

    Spring Framework version 3 came out with a common Ant + Ivy build system across most of its projects.
    One thing that was driving me insane is that this build pulls artifacts into a project-specific repository directory during Ivy resolution vs. a common directory (like Maven does). As I was playing with different projects and different versions of those projects, disk usage was going through the roof.

    The only way that I found to have a common location is to provide a -D flag to ant:

    export ANT_OPTS="-Divy.cache.dir=$HOME/.spring-build-ivy-cache"

    Single location, minimized disk usage. But there is always a but in there somewhere. The dm-server code base, for one, makes some assumptions about where that repository is located, i.e. "../ivy-cache/repository/[artifact]". I ended up creating a symbolic link for those situations. Not the best solution, but it freed about 1G of space on my machine.
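    The workaround is a sketch along these lines, run from the directory where the build expects to find its cache (paths are illustrative):

```shell
# shared cache used via -Divy.cache.dir (see ANT_OPTS above)
CACHE="$HOME/.spring-build-ivy-cache"
mkdir -p "$CACHE"
# make the hard-coded relative path resolve to the shared cache
ln -sfn "$CACHE" ivy-cache
```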

    Friday, January 22, 2010

    WebFlow on dmServer

    It should be easy to use WebFlow on dmServer, right? Well, not if you depend on Maven to copy dependencies.
    I just ran into a few issues and wanted to share:

    1. Bundle-Import both org.springframework.webflow and org.springframework.binding (too many packages to import individually).
    The WebFlow xml config will try to add a ConversionService from binding to your app context, and if you don't have the binding packages imported - BOOM...
    2. Don't forget the EL libraries.
    The WebFlow libd (library) has an optional dependency on and does not even list ognl. org.springframework.binding imports javax.el, org.jboss.el and ognl as optional, so resolution does not fail until you try to create an app context and WebFlow can't find any EL parser to use and throws up.
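    Put together, the bundle manifest ends up with something along these lines (the version ranges are assumptions; check the versions in your repository):

```text
Import-Bundle: org.springframework.webflow;version="[2.0.0,3.0.0)",
 org.springframework.binding;version="[2.0.0,3.0.0)"
Import-Package: org.jboss.el;version="[2.0.0,3.0.0)"
```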

    That is it for now.