Writing a Diigo/Delicious plugin for Conduit

Tonight I started trying to write a bookmark syncing tool for my Delicious and Diigo bookmarks. Being of the Python persuasion, I gave Conduit 0.3.11 a spin as the framework. This turned out to be a good choice, except for the incorrect documentation, which suggests that custom modules should be placed in ~/.conduit/modules. In fact they should go in ~/.config/conduit/modules.

I got as far as writing a Bookmark data type (from the marketing I would have thought this already existed), and a Diigo data source. When I turned to the Delicious side of the equation I discovered that:

  1. The API has just changed; and
  2. The library I wanted to use (DeliciousAPI) doesn’t seem to support adding/updating entries.

So the project is now on hold until the library catches up with the latest Delicious API. Either way, Conduit seems like a very flexible framework and I hope it prospers long term. Perhaps in due course it could natively support Bluetooth SyncML 😉

Posted in gnome, linux, python | 2 Comments

MyProxy server segfaulting

While setting up a new MyProxy server (v3.9 12 Jun 2007 PAM OCSP) from VDT 1.8.1, I ran across an annoying segfault when configuring the server to run as a CA, in debug mode.

# myproxy-server -d
max_proxy_lifetime: 43200 seconds
PAM enabled, policy required
CA enabled
max certificate lifetime: 43200 seconds
using storage directory /var/myproxy
Starting myproxy-server on localhost:7512...
Connection from
using trusted certificates directory /opt/vdt/globus/TRUSTED_CA
Authenticated client <anonymous>
applying trusted_retrievers policy
trusted retrievers policy matched
applying authorized_retrievers policy
applying authorized_renewers policy
Program received signal SIGSEGV, Segmentation fault.

Note that the client can connect, and the failure only occurs once the client responds with its credentials. Using GDB I got a stack trace of the issue:

#0  0x008f3950 in strip_newlines (string=0x901d13 "unknown error\n") at myproxy_log.c:72
#1  0x008f3a95 in myproxy_log_verror () at myproxy_log.c:141
#2  0x0804ca27 in myproxy_authorize_accept (context=0x9fcc020, attrs=0x9fcc008, client_request=0x9fd70c0, client=0xbfd892e0) at myproxy_server.c:1445
#3  0x0804ae5a in handle_client (attrs=0x9fcc008, context=0x9fcc020) at myproxy_server.c:465
#4  0x0804a932 in main (argc=2, argv=0xbfd897e4) at myproxy_server.c:308

Noting that the crash is in the myproxy_log_verror function, and looking at the MyProxy code, I found this function is only exercised in debug mode. So, trusting that my config was good, I ran myproxy-server proper and all was well.

Posted in grid, Work | Leave a comment

Some thoughts on a dynamic (lazy) data access layer for web services

The problem I’ve been addressing in my most recent block of work has been to develop an interface to a relational data store which can be used either against a local DB, or via web services. The concept is quite straightforward: we want CRUD operations on a set of data objects. The schema is fairly straightforward too, with a core hierarchy of elements, a few enumerated lists, and a few cuts across the hierarchy.

We want to use this with web services, as well as being able to serialise to an XML document, so we laid out the schema in XSD and applied HyperJAXB3 to it using a JAXB annotations document. Other than a few tweaks to the HJ3 code to add some extra features we needed, everything thus far is vanilla.

We then laid out a generic DAO interface, and implemented a JPA client and a web services client. The web services server actually just wraps the JPA client with a security layer.

All is well so far, with the base functionality passing all the unit tests. But this is a fairly clumsy, class-by-class CRUD interface. With JPA/Hibernate you can lazily fetch the parents and children of an object, and replicating this functionality would be really useful. The web services link, however, really kills things.

One approach we tried was to override the getter methods on the beans to allow dynamic retrieval of data when the stored attribute is null. But we needed the web services client to create these new (extended) beans. This is actually harder than it sounds, as CXF with JAXB does not allow you to replace the context factory (see https://jax-ws.dev.java.net/issues/show_bug.cgi?id=282).

We also tried writing our own proxies, with some success. But we ran into trouble when proxying single elements (as opposed to lists), as you lose the annotations which CXF and JPA require.
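To make the annotation problem concrete, here is a minimal, self-contained sketch (all names made up for illustration) of a JDK dynamic proxy that fetches its target lazily on first access, and a demonstration of why this approach falls over: the generated proxy class does not carry the implementation class's annotations, which is exactly what CXF and JPA inspect.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class LazyProxyDemo {

    // Stand-in for a class-level JAXB/JPA annotation (hypothetical).
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface XmlLikeAnnotation {}

    interface Sample {
        String getTitle();
    }

    @XmlLikeAnnotation
    static class SampleBean implements Sample {
        public String getTitle() { return "loaded"; }
    }

    // Handler that fetches the real bean on first use (the "lazy" part).
    static class LazyHandler implements InvocationHandler {
        private Sample target; // null until first access
        public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
            if (target == null) {
                target = new SampleBean(); // stands in for a web-service fetch
            }
            return m.invoke(target, args);
        }
    }

    static Sample lazySample() {
        return (Sample) Proxy.newProxyInstance(
                Sample.class.getClassLoader(),
                new Class<?>[] { Sample.class },
                new LazyHandler());
    }

    public static void main(String[] args) {
        Sample s = lazySample();
        System.out.println(s.getTitle()); // triggers the lazy fetch
        // The real class carries the annotation; the generated proxy does not:
        System.out.println(SampleBean.class.isAnnotationPresent(XmlLikeAnnotation.class));
        System.out.println(s.getClass().isAnnotationPresent(XmlLikeAnnotation.class));
    }
}
```

Subclass-based proxies (e.g. CGLIB) suffer the same way, since Java annotations are not inherited by generated subclasses unless marked @Inherited.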

The obvious answer, which I haven’t mentioned yet, is to put this logic in the getters proper. This would be nice, but because we are generating the classes from a schema we don’t want to change the generated code. There are many reasons, not least that the schema is evolving and it hurts to merge changes. We attempted to write our own JAXB plugin to alter the methods which were already generated, but XJC and the Java code model do not allow you to remove code from a method, only append to it. Similarly, you can’t read the annotations on a method, nor modify existing ones. This is a huge problem for us: that capability would save us from modifying the JAXB plugins which create this code, since we could simply apply our changes to the generated output.
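For the record, an XJC add-on plugin skeleton looks roughly like this (the plugin class and option name are hypothetical; the com.sun.tools.xjc Plugin API is real). The append-only limitation lives in the code-model objects that run() hands you:

```java
import org.xml.sax.ErrorHandler;
import com.sun.codemodel.JMethod;
import com.sun.tools.xjc.Options;
import com.sun.tools.xjc.Plugin;
import com.sun.tools.xjc.outline.ClassOutline;
import com.sun.tools.xjc.outline.Outline;

public class TouchGettersPlugin extends Plugin {

    @Override
    public String getOptionName() { return "Xtouch-getters"; }

    @Override
    public String getUsage() { return "  -Xtouch-getters :  post-process generated getters"; }

    @Override
    public boolean run(Outline outline, Options opt, ErrorHandler errorHandler) {
        for (ClassOutline co : outline.getClasses()) {
            for (JMethod m : co.implClass.methods()) {
                if (m.name().startsWith("get")) {
                    // m.body() is append-only: you can add statements, but the
                    // code model gives you no way to inspect or remove what XJC
                    // already generated, and no way to read or edit the method's
                    // existing annotations.
                }
            }
        }
        return true;
    }
}
```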

Perhaps there is a way around this, but it certainly isn’t obvious, or indexed by Google.

The closing remarks are these. With a bit of backfilling, JAXB/XJC will be an immensely powerful tool, able to support a wide range of data models. (I’m working on the tickets now.) Dynamic getter access to attributes via web services would be nice, but it is too much hard work really, and if it is required then creating a second set of domain objects is your only real hope, IMHO. When this project wraps up I’ll release the code here so we can all think about how this approach really fared.

Posted in java, JPA, Work | 1 Comment

I want the last 2 hours back!

I just spent a few hours writing an Acegi filter to get a certificate from a form post and put it into a modified UserDetails principal. All well and good, except after a redirect back to / I got the error below. Normally this would redirect you back into the application using a JSP page. It turned out that Acegi 1.0.5, which I’m using, has a bug when there is no access-denied page handler defined. The reason this occurred only when I was logged in was that I had neglected to add ROLE_ANONYMOUS to the authentication authority list. If only the error meant something, debugging wouldn’t have been so hard.




Caused by:

	at org.acegisecurity.ui.ExceptionTranslationFilter.handleException(ExceptionTranslationFilter.java:229)
	at org.acegisecurity.ui.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:176)
	at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:275)
	at org.acegisecurity.providers.anonymous.AnonymousProcessingFilter.doFilter(AnonymousProcessingFilter.java:125)
	at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:275)
	at org.acegisecurity.ui.AbstractProcessingFilter.doFilter(AbstractProcessingFilter.java:271)
	at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:275)
	at org.acegisecurity.ui.logout.LogoutFilter.doFilter(LogoutFilter.java:110)
	at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:275)
	at org.acegisecurity.context.HttpSessionContextIntegrationFilter.doFilter(HttpSessionContextIntegrationFilter.java:249)
	at org.acegisecurity.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:275)
	at org.acegisecurity.util.FilterChainProxy.doFilter(FilterChainProxy.java:149)
	at org.acegisecurity.util.FilterToBeanProxy.doFilter(FilterToBeanProxy.java:98)
	at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1089)
	at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:365)
	at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
	at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
	at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
	at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
	at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:211)
	at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
	at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
	at org.mortbay.jetty.Server.handle(Server.java:295)
	at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:503)
	at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:827)
	at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:511)
	at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:210)
	at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:379)
	at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:361)
	at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442)
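For the record, the anonymous-authentication wiring I had missed looks roughly like this (bean ids and the shared key are illustrative; the classes are from Acegi 1.0.x):

```xml
<bean id="anonymousProcessingFilter"
      class="org.acegisecurity.providers.anonymous.AnonymousProcessingFilter">
  <property name="key" value="sharedAnonymousKey"/>
  <!-- principal name, then the granted authorities -->
  <property name="userAttribute" value="anonymousUser,ROLE_ANONYMOUS"/>
</bean>

<bean id="anonymousAuthenticationProvider"
      class="org.acegisecurity.providers.anonymous.AnonymousAuthenticationProvider">
  <property name="key" value="sharedAnonymousKey"/>
</bean>
```

The two key values must match, so that tokens minted by the filter are accepted by the provider.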

Powered by ScribeFire.

Posted in java, Spring | Leave a comment

Spring and JNDI (Tomcat or Jetty)

Recently I needed to deploy some Spring webapps which required pre-deploy configuration. This being the first time I had to find a serious answer, I looked to the mythical JNDI. This document is meant to complement the other Spring JNDI documents out there.

Essentially the problem is this. We need to deploy a webapp. The webapp needs configuration (database and web service endpoint locations). Editing properties files or XML config within the webapp isn’t nice, because on a redeploy the config will be lost. Inside containers like Tomcat I am not aware of a way to easily add extra items to the classpath that won’t get nuked unexpectedly, so solutions like PropertyPlaceholderConfigurer don’t really fly, as the properties file would end up within the webapp. And I don’t like the idea of setting environment variables to locate such things.

In steps JNDI. JNDI is the Java answer to namespaced, centralised configuration. Application containers like Tomcat, Jetty, Glassfish, etc. all allow you to export objects via JNDI. This may not be a completely correct description, but it is sufficient for this demonstration. The trick is how to use these. I’ll show Jetty configs (which in Maven live in src/main/webapp/WEB-INF/jetty-env.xml) as well as some references to Tomcat (in $CATALINA_HOME/conf/server.xml or better still, in $CATALINA_HOME/conf/Catalina/[engine]/<webapp>.xml) (more on Tomcat here). This means that the config lives OUTSIDE the webapp, and is immune to inadvertent changes, making hot-patching sites easier as the WAR/webapp is independent of the site config.

First, exposing a DB.
This exposes a Postgres DB under the name icatDB. Note there is a special JDBC namespace. Also note I am not using the normal Postgres connection class; rather, I’m using the connection-pooling class.
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN"
    "http://jetty.mortbay.org/configure.dtd">
<Configure class="org.mortbay.jetty.webapp.WebAppContext">
  <New id="icatDB" class="org.mortbay.jetty.plus.naming.Resource">
    <Arg>jdbc/icatDB</Arg>
    <Arg>
      <New class="org.postgresql.ds.PGPoolingDataSource">
        <Set name="serverName">localhost</Set>
        <Set name="databaseName">icat2</Set>
        <Set name="user">nigel</Set>
        <Set name="password"></Set>
      </New>
    </Arg>
  </New>
</Configure>

Tomcat: an example from a different project:
<Context path="/continuum">
  <Resource name="jdbc/users"
            url="jdbc:derby:database/users;create=true" />
</Context>

Spring: I am going to pass this into an entity manager:
<bean id="entityManagerFactory"
      class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
  <property name="dataSource" ref="dataSource" />
  <property name="jpaVendorAdapter">
    <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
      <property name="database" value="POSTGRESQL" />
      <property name="showSql" value="true" />
      <property name="generateDdl" value="true" />
    </bean>
  </property>
</bean>

<bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
  <property name="jndiName" value="java:comp/env/jdbc/icatDB"/>
</bean>

So I cheated here: the data source already has a JNDI entry point, so Spring isn’t really involved. However, in this next example I need to pass in a String which is a web service endpoint address.

Passing a String:
These kinds of elements are passed via the env namespace. The Jetty JNDI page tells me we can only pass in these types:

  • java.lang.String
  • java.lang.Integer
  • java.lang.Float
  • java.lang.Double
  • java.lang.Long
  • java.lang.Short
  • java.lang.Character
  • java.lang.Byte
  • java.lang.Boolean

This is fine for configuration work, which is all we are doing.
<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN"
    "http://jetty.mortbay.org/configure.dtd">
<Configure class="org.mortbay.jetty.webapp.WebAppContext">
  <New class="org.mortbay.jetty.plus.naming.EnvEntry">
    <Arg>icatWebservice</Arg>
    <Arg type="java.lang.String">http://hostname:8081/ws/ICAT</Arg>
  </New>
</Configure>

And the Tomcat equivalent:
<Context path="/icat">
  <Environment name="mcatextWebservice"
               type="java.lang.String"
               value="http://hostname:8081/ws/ICAT" />
</Context>

Now Spring.

First, import the jee namespace into your Spring config:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:jee="http://www.springframework.org/schema/jee"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
           http://www.springframework.org/schema/jee
           http://www.springframework.org/schema/jee/spring-jee-2.0.xsd">

Now we can use the jee:jndi-lookup element in place of a value element:
<bean id="icatConnectionManagerBase" scope="session"
      class="...">
  <constructor-arg index="0">
    <jee:jndi-lookup jndi-name="java:comp/env/icatWebservice"/>
  </constructor-arg>
</bean>

There has also been discussion of writing a PropertyPlaceholderConfigurer-like bean which could bring all the JNDI entries into the properties scope, so we could just use ${env.property} notation.
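A minimal sketch of that idea, assuming Spring 2.x (the class name is made up; resolvePlaceholder is the PropertyPlaceholderConfigurer hook being overridden):

```java
import java.util.Properties;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import org.springframework.beans.factory.config.PropertyPlaceholderConfigurer;

public class JndiPlaceholderConfigurer extends PropertyPlaceholderConfigurer {

    /** Try java:comp/env first, then fall back to normal properties. */
    @Override
    protected String resolvePlaceholder(String placeholder, Properties props) {
        try {
            Object value = new InitialContext().lookup("java:comp/env/" + placeholder);
            if (value != null) {
                return value.toString();
            }
        } catch (NamingException ignored) {
            // not bound in JNDI -- fall through to the properties file
        }
        return super.resolvePlaceholder(placeholder, props);
    }
}
```

Registered as a bean, this would let ${icatWebservice} resolve straight from the container's JNDI environment.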

Posted in java, Maven, Spring | 9 Comments

Maven classpath issues at compile time

Here’s a very weird Maven/Java issue. The error message (below) occurs in my build phase, where JAXB is called to produce some Java objects from XML. JAXB calls HyperJAXB, and on some systems it crashes.

[ERROR] XJC while compiling schema(s): org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
Caused by: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log

But not on all. My main dev machine works, my deployment machine doesn’t. My other deployment machine also works, but Dave’s machine doesn’t. All of us are using the same Maven version (2.0.8), mostly the same Java versions (1.5.0_xx), and some are amd64, others x86. However, the crash/work divide does not fall along this line. In fact, on the deployment machine both local users can compile OK, but the LDAP users can’t.

On Dave’s machine, if we move the local repo to /tmp (-Dmaven.repo.local=/tmp/repository) it starts working. I captured the classpath on a few occasions, and it seems to be in quite a random order. I noticed that commons-logging is far closer to the start of the classpath when the compile works, but I don’t have enough samples to confirm this.
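One workaround worth trying, assuming the culprit is a commons-logging version clash on the plugin's classpath, is to pin commons-logging explicitly in the plugin's own dependency block so a single known copy wins (the artifact coordinates and version here are illustrative):

```xml
<plugin>
  <groupId>org.jvnet.hyperjaxb3</groupId>
  <artifactId>maven-hyperjaxb3-plugin</artifactId>
  <dependencies>
    <!-- force one known commons-logging early on the plugin classpath -->
    <dependency>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
      <version>1.1</version>
    </dependency>
  </dependencies>
</plugin>
```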

Another curiosity I saw was that if your home directory is in a non-standard place, like /var/home/, then Maven puts your repository in a directory called ? in the current working directory. That’s right, a question mark. Oh dear.

Some machine details below:

Working machine:
Maven version: 2.0.8
Java version: 1.5.0_13
OS name: “linux” version: “” arch: “i386” Family: “unix”

Broken machine:
Maven version: 2.0.8
Java version: 1.5.0_11
OS name: “linux” version: “2.6.18-8.1.3.el5xen” arch: “amd64”

Username: test.user
Maven version: 2.0.8
Java version: 1.5.0_13
OS name: “linux” version: “” arch: “i386” Family: “unix”

Username: tomcat – LDAP/NFS
Didn’t work

Username: root

Username: srb


Posted in Archer, java, Maven | 2 Comments

Improving Trac – Version, milestones, tickets and reports.

We’ve been using Trac for some time as a development and project management tool. It does have its shortcomings, but it is very easy to extend. The most recent issue I’ve had is trying to retrofit a hierarchy onto the components and tickets.

I started by using a naming convention of [Component]:[Subcomponent]. This is nice as it groups the components, BUT when I then tried to use milestones I discovered that they work by the naming convention [Component]:[Milestone]. Oh dear. I would have thought a core module would be a bit more robustly implemented, but no matter. I am considering writing a replacement for the milestone module which incorporates this change, and some other things I will discuss now.

The way milestones seem to be used is to look to future versions; i.e., versions are historic, milestones are in the future. This is fine, but it seems odd that we hold two separate lists for these: when we make a release we don’t delete the milestone, we add a new version. Why doesn’t the milestone move over automatically? My suggestion is this:

Replace milestones and versions with one type, called milestones. Give the milestone a type field with “Project” and “Release” as the default values. The version selection in the ticket screen would then show only release milestones. We can then produce a report for any milestone showing the pending tickets and, if we want, Gantt-chart the project milestones.

This is designed to work with the sprint plugin which Andrew Sharpe wrote, which is just yet another way of grouping tickets into buckets of work. Doing away with milestones would free up the aforementioned naming convention, and allow us to create a component hierarchy which would again assist in reporting. I would probably wrap all of this up in the sprint plugin, as subcomponents which could be used. At least then we have one working package, as opposed to many disjoint ones.

Finally we could add some of the timing and estimations functionality in as well, which would round off the project management requirements.

I guess this brings up a philosophical argument about reuse vs reimplementation. While I usually make a song and dance about reuse (i.e. use the existing implementation), I think some of this functionality is so small and fine-grained that integration and maintenance would become a headache. We can take the good ideas and wrap the whole thing up in a neat little ball.

Just a thought anyway.


Posted in Archer, programming, python | 2 Comments

ManyToOne reference loading with JPA/Hibernate

In this example an InvestigationType has many SampleTypes. If I load an InvestigationType via em.find() (entity manager), it also loads the samples.

InvestigationType inv2 = (InvestigationType) em.find(
        InvestigationType.class, inv.getId());
System.out.println("Inv2 ID=" + inv2.getId());
System.out.println("Inv2 Title=" + inv2.getTitle());
assertTrue(inv2.getSample().size() == 2);

However, if I load a SampleType the investigation field is null, unless I refresh the object.

// Get a sample
SampleType s3 = (SampleType) em.find(
        SampleType.class, inv2.getSample().get(0).getId());
em.refresh(s3);
System.out.println(convertDataToXmlString(s3));
InvestigationType inv3 = s3.getInvestigation();
assertNotNull(inv3);

From my understanding I should not have to do this, as the fetching should be the same in both instances, i.e. resolving the parent by default. Similarly, if I query for both objects the references are correctly loaded.

List items = em.createQuery(
        "SELECT i, s FROM au.edu.archer.schemas.datadeposition.SampleType s, "
        + "IN(s.investigation) i WHERE s.id=2").getResultList();

If someone could tell me why this is so, please let me know. Else, at least I have a working example now.
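For what it’s worth, the JPA spec’s default fetch type for @ManyToOne is already EAGER, so a null parent looks like a mapping quirk rather than expected behaviour. One thing worth trying is making the fetch explicit on the generated mapping (the field and column names here are assumptions, not from the real schema):

```java
@ManyToOne(fetch = FetchType.EAGER)     // EAGER is the spec default; stating it rules out overrides
@JoinColumn(name = "INVESTIGATION_ID")  // column name assumed for illustration
private InvestigationType investigation;
```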


Posted in Archer, java, JPA | 1 Comment

Slashdot | Is Apple Killing Linux on the Desktop?

This is a summary of my experiences walking the path between Linux (Ubuntu/Gnome) and OSX. For background, I’ve been using Linux on the desktop for about 7 years, and only in the past 2 months have I purchased a MacBook.

For years I’ve been using Linux on the desktop, first Debian/Gnome and more recently Ubuntu/Gnome. I must say I really appreciated how well configured the Ubuntu desktop was out of the box and, through good things coming together, how well the dynamic monitor configuration worked. As a programmer, web monkey, researcher and academic writer I had no complaints about the Linux desktop per se. I didn’t really miss any applications, and VMware could certainly facilitate whatever was required.

What did continually bug me was hardware issues, most notably on recently released hardware. This was well exemplified when I got my MacBook and proceeded to repartition it and put Ubuntu on half the hard drive. Being Intel on the inside it essentially worked, but when I first went to suspend it to RAM (which one does all the time on a laptop) it didn’t come back to life. I don’t recall the specifics, but I eventually had to revert to an older kernel to resolve this. Even that didn’t overcome the issue entirely; it just reduced the frequency of problems. There were similar issues with the wireless card, which have been fixed in the SVN version of the wireless driver. In time I wanted suspend-to-disk as well, via TuxOnIce. I tempted fate by upgrading to the latest stable kernel, applied all the Mactel patches, etc. This just reignited all the old suspend-to-RAM issues. Sigh.

So I decided to give OSX a try. As an aside, the first thing I did was install VMware and X11 so I could continue to use my Linux dev environment. This works well except for the messed-up copy/paste buffer.

My first few hours could only be described as swimming through honey: slow, painful, and I thought I was going to drown. OSX uses a completely different navigation philosophy, which took a while to adapt to. For instance, using Finder and search to locate applications, and the application-centric window grouping. This second example is especially annoying as a developer, as I routinely have many windows of many applications open at one time. To navigate with the keyboard you have to first (Apple)-TAB to the application and then (Apple)-~ to get to the appropriate window. This makes development especially tedious on OSX, which is why I just run all the required applications from my VM (Firefox, Eclipse, gEdit, terminal, etc.) and then just use (Apple)-~ to switch in a less convoluted way. It is still annoying that it cycles through all the windows, but the keyboard is a poor cousin to the mouse in OSX.

And for the record, Exposé does not improve this situation; it is more frustrating to have to visually locate the window, grab the mouse and click on it. Having said that, for non-coding tasks like checking email and then checking the news I can see Exposé would be useful, especially if there are a lot of windows open.

What I do really appreciate about OSX is that I can just close the lid of my laptop and it suspends. Then I open the lid, and 99% of the time it resumes. And that really sums up why I’ve stuck with it. I need to be able to reliably suspend/resume my laptop, and if I can run Linux in a VM then that is fine. When the next kernel (2.6.24) is released I’ll give Ubuntu another try, and if it works then I’ll switch back to it.

For non-programmers who just want a reliable desktop experience, I always point them to a Mac, partly because I wouldn’t have a clue how to fix it, so that gets me off the hook. But also because, as pre-packaged units, they are quite reliable software-wise. On the hardware side there is an average of one “logic board” replaced a month in my department, where there are about 20-odd Macs. But they are all under warranty, so that’s good.

In relation to the referenced article, I think OSX is retarding the growth of Linux on the desktop because it provides a more reliable alternative. But I think Linux is evolving at a far greater rate than OSX or Windows, and in the longer term it will be a player. Functionally it’s there; it just lacks a happy home in terms of hardware. Vendors are getting on board, and compatibility is improving. And maybe I’ve missed the product lines which are FULLY supported by Linux, but it wasn’t through lack of trying. I should also qualify “supported” by saying that driver support needs to be in the stock kernel, so it is available to any distro. Ubuntu supports the MacBook better than most distros because they apply all kinds of patches. This is all well and good, but it certainly doesn’t make the MacBook any more Linux compatible.


Posted in linux, OSX, technology | Leave a comment

Hibernate, spring, and different jars

The situation is this. I had a working application which used Hibernate-annotated classes and JPA for data bindings, within a Spring framework. Then I moved the annotated classes into a JAR file, and the application stopped working. Hibernate knew nothing about the classes, because the PersistenceAnnotationBeanPostProcessor does not traverse into the JARs on the classpath. Further, neither the entity manager nor the HibernateJpaVendorAdapter have options to specify explicit paths to the classes.
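For reference, the stock JPA answer to entity classes living in a separate JAR is to list them in persistence.xml, either individually or via a <jar-file> element (the unit name and paths here are hypothetical):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="icatPU">
    <!-- scan an extra jar for annotated classes -->
    <jar-file>lib/domain-classes.jar</jar-file>
    <!-- or name classes explicitly -->
    <class>au.edu.archer.schemas.datadeposition.SampleType</class>
  </persistence-unit>
</persistence>
```

Note that <jar-file> paths are resolved relative to the persistence unit root, which can be fiddly inside a WAR.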

So, after heaps of trials and tests I’ve had to abandon the JPA approach and use the direct Spring-Hibernate bindings, as they allow you to specify the path to a mappings file. This is a pain, as all my code is written to use an EntityManager and now I have to have a duplicate implementation which uses the Hibernate SessionFactory. Perhaps there is a Hibernate implementation of the entity manager, but I could not find it.

I used this blog entry for most of the config code.


Posted in java, programming, Spring | Leave a comment