Monday, December 13, 2010

Show me your license!

In the follow-up of open-sourcing our project, I spent some time today figuring out which license would suit us best. This was completely new to me, so I started comparing the different kinds of licenses available. Apparently there are two major kinds. On the one hand you have the 'GPL-based' licenses, which force code that uses the open-source GPL code to also be free and to continue under the GPL 'flag'. They even have a clause stating that you cannot add additional restrictions on the 'redistribution of either the original work or a derivative work'. The goal is not only to guarantee the 'freedom' of the open-source software, but also to encourage software that uses open-source software to do the same.

On the other hand you have a bunch of 'proprietary-compatible' licenses, like the MIT/X, BSD and Apache licenses, which pretty much allow everything (use, modify, redistribute, ...) without enforcing extra constraints. Chances are high that we will go with the Apache license. Somewhere in between those major kinds there is also a more pragmatic version of the GPL, called the 'Lesser GPL', which is essentially a copy of the GPL except that it drops the requirement that proprietary software using the library must itself be GPL.

I also read a lot about copyright, copyleft, dual licensing, trademark protection, and credit enforcement, but I'm not gonna bore you with that...

Tuesday, November 9, 2010

eID

At work a colleague and I just finished the 1.0 version of a new eID security module. It's based on the open-source framework of Fedict that uses an applet to allow you to sign in using your digital ID. This framework was brought to life due to a lot of problems (stability, configuration, ...) with the current eID Fedict middleware solution.

For now it's still implemented as a stand-alone war, but it will be integrated into our larger, existing security module very soon. This larger module copes with authentication, authorization and identity management and has been mature for some time now. It came to life several years ago to replace a product from Sun called Access Manager. Once the integration with eID is finished, the entire module will be open-sourced. Exciting stuff!

Sunday, October 31, 2010

PKCS#12

PKCS#12 is one of the PKCS (Public Key Cryptography Standards) family published by RSA Security. It is designed as the Personal Information Exchange Syntax Standard. This means that it serves as a standard to exchange sensitive key information between one kind of keystore and another. In Java you can instantiate a KeyStore in the standard way with KeyStore.getInstance("JKS"), or with a PKCS12 SPI implementation via KeyStore.getInstance("PKCS12"). Both implementations store private keys and certificates in a single file. OpenSSL, however, stores them in separate files. Via the 'openssl pkcs12' command you can merge them into a PKCS#12 file, bridging the gap towards a KeyStore. For instance:

openssl pkcs12 -export -inkey newSignedRequest.pem -certfile myCertFile.pem -name "TEST CERTIFICATE" -out myPkcs12Cert.p12
Certificates coming from a CA are almost always delivered in PKCS#12 format.
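To illustrate the two KeyStore flavours from the Java side, here's a minimal sketch (class name, file name and password are made up for the example) that creates an empty PKCS#12 keystore, writes it to disk and loads it back, the same way you would load the .p12 file produced by the openssl command above:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.security.KeyStore;

public class Pkcs12Demo {

    public static void main(String[] args) throws Exception {
        // Instantiate a PKCS#12 keystore via the JCA, exactly like you would a JKS one.
        KeyStore p12 = KeyStore.getInstance("PKCS12");
        p12.load(null, null); // start from an empty store

        char[] password = "changeit".toCharArray();
        File file = File.createTempFile("demo", ".p12");
        try (OutputStream out = new FileOutputStream(file)) {
            p12.store(out, password);
        }

        // Load it back, as you would with a CA-delivered .p12 file.
        KeyStore reloaded = KeyStore.getInstance("PKCS12");
        try (InputStream in = new FileInputStream(file)) {
            reloaded.load(in, password);
        }
        System.out.println(reloaded.getType() + ", entries: " + reloaded.size());
    }
}
```

Swapping "PKCS12" for "JKS" is all it takes to switch keystore formats on the Java side.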

Sunday, September 12, 2010

Can you fetch me this thingie? Just for me please...

Kodo is a JDO implementation that is not widely adopted and a bit more complex than Hibernate. Nevertheless I found at least one feature that it had long before Hibernate did, called 'fetch groups'. With a fetch group you can, for instance, eagerly fetch a collection that is mapped as lazy by default, for a certain scenario.

Let's say a bunch of addresses are mapped 'lazy' by default on a customer. We can then define a fetch group (e.g. 'addressFetchGroup') to indicate that this collection should be eagerly fetched when this group id is used in the query.

Here's what this would look like:

The package.jdo (mapping) file:

<class name="Customer">
    <extension vendor-name="kodo" key="data-cache" value="false" />
    <extension vendor-name="kodo" key="jdbc-sequence-factory" value="native" />
    <extension vendor-name="kodo" key="jdbc-sequence-name" value="MYSEQUENCE" />
    <field name="name" persistence-modifier="persistent" />

    <field name="addresses" default-fetch-group="false">
        <collection element-type="address" />
        <extension vendor-name="kodo" key="fetch-group" value="addressFetchGroup"/>
    </field>

...

The code:

public Customer lookUpCustomerWithAddresses(String customerId) {

    Collection<String> fetchGroups = new ArrayList<String>();
    fetchGroups.add("addressFetchGroup");
    fetchGroups.add("otherFetchGroup");
    ...
    Collection<Customer> customers = retrieveWithFetchGroups(fetchGroups, customerId);
    return customers.iterator().next();
}

@SuppressWarnings("unchecked")
public <S> Collection<S> retrieveWithFetchGroups(Collection<String> fetchGroups, Object... parameters) {
    KodoQuery query = (KodoQuery) persistenceManager.newQuery();
    try {
        query.getFetchConfiguration().addFetchGroups(fetchGroups);
        return (Collection<S>) query.executeWithArray(parameters);
    } catch (Exception ex) {
        ...
    }
}

When used correctly (and, more importantly, measured correctly with a profiler), this can greatly improve the performance of a specific use case without affecting other use cases. As of Hibernate 3.5 this feature is also included and known as 'fetch profiles'.
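For comparison, the Hibernate 3.5 equivalent looks roughly like this. A sketch rather than code from the project, with made-up entity and profile names:

```java
// Illustrative sketch of a Hibernate 3.5 fetch profile; names are hypothetical.
@Entity
@FetchProfile(name = "addressFetchProfile", fetchOverrides = {
    @FetchProfile.FetchOverride(entity = Customer.class,
                                association = "addresses",
                                mode = FetchMode.JOIN)
})
public class Customer {

    // lazily mapped by default
    @OneToMany
    private Set<Address> addresses;
}

// Enabling the profile for one specific use case:
// session.enableFetchProfile("addressFetchProfile");
```

Just like Kodo's fetch groups, the profile only changes the fetching strategy for the session that explicitly enables it.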

Sunday, August 15, 2010

Thou shall respect the carriage return

As usual it was fairly late notice when we, the development team, heard our certificates were only valid for another day and a half before expiring. Luckily the procedure to prolong them had already been executed. We had sent them to our CA, Fedict, which had extended their validity and sent them all back in one big zip.

Besides the complete lack of any naming convention in the cert file names, a few things surprised me in the renewal process. We had at least 16 or more certificates to import for different environments and customers, and had to chain each of them with an intermediate- and root-level certificate, also provided by the CA. Although this is a trivial task, it made me wonder why they couldn't have done it for us, you know, being a customer of their services. Second, the certificates were in PEM format, which was nice, but they looked something like this:

Ulv6GtdFbjzLeqlkelqwewlq822OrEPdH+zxKUkKGX/eN
...
...
9801asds3BCfu52dm7JHzPAOqWKaEwIgymlk=

Importing them into our JKS was impossible unless we added the begin and end phrases to make them look like this:

-----BEGIN CERTIFICATE-----
Ulv6GtdFbjzLeqlkelqwewlq822OrEPdH+zxKUkKGX/eN
...
...
9801asds3BCfu52dm7JHzPAOqWKaEwIgymlk=
-----END CERTIFICATE-----

Although this is also a known practice and was quickly done, it was again something they could have done for us, you know, being a customer of their services. Lastly, while doing all this, I noticed another subtlety, this time about the JKS. It only allows the import if there is a carriage return after the last phrase. So you had to explicitly write '-----END CERTIFICATE-----CR' before the import succeeded. I wonder if this also is a security feature.
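Since we had a whole pile of them, a tiny helper to do the wrapping is trivial to write. Here's a sketch (class name, method name and sample data are made up) that adds the header, the footer and the trailing newline the JKS import insists on:

```java
public class PemWrapper {

    // Wrap a bare base64 certificate body in the PEM header/footer,
    // making sure the footer is followed by a newline (the JKS import
    // refused certificates without it).
    static String wrapPem(String base64Body) {
        return "-----BEGIN CERTIFICATE-----\n"
                + base64Body.trim() + "\n"
                + "-----END CERTIFICATE-----\n";
    }

    public static void main(String[] args) {
        String wrapped = wrapPem("Ulv6...base64...lk=\n");
        System.out.print(wrapped);
    }
}
```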

Thursday, July 29, 2010

The eagerness of JPA

JPA came to life as a result of JSR 220. This request originated to provide a uniform API and way of working across different ORM solutions such as Hibernate, TopLink and JDO. One member of the expert group leading JSR 220 was, fittingly, Gavin King, the main inventor of Hibernate. Since Hibernate is the most widely used implementation, I thought they would follow most of its important design decisions. For instance, in Hibernate the default fetching strategy is 'lazy'. Although this can sometimes lead to difficulties with casts and object equality (if you received a proxy instead of the real object), it has the advantage that you only fetch dependencies into memory when you need them. I was surprised to see that in JPA the default fetch of a @ManyToOne, for instance, is 'eager'?!

This led to someone in my team unwittingly dragging half the DB into memory :) If anybody knows why they decided to change this, please let me know. Thanks!
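The fix itself is a one-liner: override the default on the mapping. A hypothetical sketch (the entity names are made up):

```java
// JPA defaults @ManyToOne to EAGER, so lazy loading must be requested explicitly.
@Entity
public class Order {

    @ManyToOne(fetch = FetchType.LAZY) // the default would be FetchType.EAGER
    private Customer customer;
}
```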

Wednesday, June 9, 2010

There can be only one

Before Java release 1.5 you had to do the following to create a Singleton:

public class ImASingleton {

    private static final ImASingleton INSTANCE = new ImASingleton();

    private ImASingleton() {
    }

    public static ImASingleton getInstance() {
        return INSTANCE;
    }
}

And to be able to serialize this singleton, you had to do 3 things:
1. Implement the Serializable interface.
2. Mark the instance fields transient if you didn't want them to be serialized.
3. Define a readResolve() method to return the single instance. This is a hook method called during the deserialization process. If you don't provide this method, the default implementation will always create a new instance instead of returning your unique instance!

public class ImASingleton implements Serializable {

    private static final ImASingleton INSTANCE = new ImASingleton();
    private transient Object nonSerializableField;

    private ImASingleton() {
    }

    public static ImASingleton getInstance() {
        return INSTANCE;
    }

    private Object readResolve() {
        return INSTANCE;
    }
}

As of Java 1.5 the preferred way to create a singleton is simply by using an enum. This gets you the serialization safety for free.


public enum MySingleton {
    INSTANCE;
}
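You can see the guarantee at work in a quick round-trip: serialize the constant, deserialize it, and you get the exact same instance back, no readResolve() required. A self-contained sketch (the class name is made up):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class EnumSingletonDemo {

    public enum MySingleton {
        INSTANCE;
    }

    public static void main(String[] args) throws Exception {
        // Serialize the enum constant to a byte array...
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(MySingleton.INSTANCE);
        }

        // ...and deserialize it: enum deserialization resolves to the
        // existing constant, so the singleton property is preserved.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            MySingleton copy = (MySingleton) in.readObject();
            System.out.println(copy == MySingleton.INSTANCE); // same instance
        }
    }
}
```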

Friday, June 4, 2010

I want it now

On my current project I'm, among other things, doing JSF front-end development again. An older module of the project combines JSF with JSPs, and a more recent one with Facelets (xhtml). During my first front-end story, I was confronted with a JSF attribute that I either used to understand and forgot, or never really got at all: the 'immediate' attribute.

Contrary to what a lot of developers think (myself included until I read the article), the immediate flag does not skip the 'Process Validations' phase. It causes a component to be processed in the 'Apply Request Values' phase, which is the second life cycle phase instead of the third.

Even for developers who completely understand it, it seems difficult to explain in a clear manner. I found a good article summarizing the use cases for it: http://wiki.apache.org/myfaces/How_The_Immediate_Attribute_Works
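The classic use case is a cancel button that should navigate away without triggering validation of the form's inputs. A hypothetical Facelets fragment (bean and action names are made up):

```xml
<h:form>
    <h:inputText value="#{bean.name}" required="true" />
    <h:commandButton value="Save" action="#{bean.save}" />
    <!-- immediate="true": the action fires in Apply Request Values,
         before the required-field validation of the inputText runs -->
    <h:commandButton value="Cancel" action="#{bean.cancel}" immediate="true" />
</h:form>
```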

Friday, February 5, 2010

Let's conversate about the essence of Agile

It was bound to happen for me to have a post on Agile, and now here it is...

Many people have many different opinions on this hyped topic, full of buzzwords and passionate feelings. I've spent some years on different kinds of Agile projects. 'Different' in the way they adhered to the Agile principles. And this allows me to distill the essence. For me it's not about a complete set of 'rules', although they help if you still need to grow as a developer (see the first of my two points).

In my first Agile project, there was a strict following of all the sub-methodologies defining Agile (like burn-down, TDD, stories, scrum, pairing, the manifesto, ...). The projects after that were also Agile, but less strict, or less compliant to the rules if you will. They did not, for instance, adopt all of the practices, such as burn-down or pair programming.
Each project had valid reasons for its level of compliance. For instance, one of the goals of the strictest project was that half of the developers (junior Java developers) needed to be trained to later perform maintenance on the project after the senior developers left. Pairing serves this need perfectly. The other projects had more senior members, which has an impact on my first essential point.

Colleagues who have never worked according to the methodology often ask me - while barely hiding their scepticism - what 'Agile' exactly is. In my view there are two essential things that all Agile projects have in common:

1. First of all, you need a team of *disciplined* and *courageous* developers. Seriously. Disciplined enough to test and doubt everything, even themselves. Courageous enough to refactor that complicated piece of code or to say 'no' to your boss (refuse to harm the code cfr Uncle Bob).

2. Second of all, the way the development team interacts with the team that determines the requirements. This can be the analysts, proxy customers or the customers themselves. I read an excellent Martin Fowler blog post which explains this in a short, clear and to-the-point way: Conversational stories.

In the post you can see that Agile deals with requirements in conversation, sometimes even negotiation. The early developer feedback can lead to changing, adding or dropping some requirements.

In the waterfall approach, the analysts are superhuman creatures who can never fail. They will never be wrong, nor forget the big picture, and they always use the proper level of detail. Surely they must be, since a developer never sees them but needs to implement what's described in their documents.

There are always questions about details that only pop up when you start the implementation. Even if the use case accurately described 90 percent of the cases, how do you handle the other 10%? If there are 3 steps described, what does the software need to do when step 2 fails? And so on.

As explained in the post, conversation is the answer.

Friday, January 1, 2010

Spring Webservices

On my current project I'm using Spring Web Services to interact with external companies. This contract-first framework is easy to use and hides most of the boilerplate code, so you can fully focus on the business part.

Here's a basic example that uses XMLBeans to marshal/unmarshal the XML to and from Java objects. (You can also use JiBX, XStream, Castor, JAXB, ... .)
All you need to do to use the XMLBeans marshaller is pass it to the super constructor. The AbstractMarshallingPayloadEndpoint is useful when you only need to process the payload XML: only the payload is passed in as an argument to the invokeInternal() method.

public class MyEndpoint extends AbstractMarshallingPayloadEndpoint {

    private final MyService service;

    public MyEndpoint(MyService service, Marshaller xmlBeanMarshaller) {
        super(xmlBeanMarshaller);
        this.service = service;
    }

    protected Object invokeInternal(Object request) throws Exception {

        XmlBeanRequestDocument requestXml = (XmlBeanRequestDocument) request;
        Object result = service.findById(requestXml.getRequest().getId());

        //marshall the result
        XmlBeanResponseDocument responseXml = XmlBeanResponseDocument.Factory.parse(result);
        return responseXml;
    }
}

You can declare your endpoint in a number of endpoint mappers. One of the easiest to use, is the payload endpoint mapper. The mapper is an interceptor that looks at the root element of the payload to route the message to the proper endpoint.

<bean class="org.springframework.ws.server.endpoint.mapping.PayloadRootQNameEndpointMapping">
    <property name="mappings">
        <props>
            <prop key="{http://mycompany.com/hr/schemas}MyRequest">myEndpoint</prop>
        </props>
    </property>
</bean>

<bean id="myEndpoint" ... />

The advantage of this kind of mapping, is that you can host all your webservices on the same url.

Let's say you want access to more than just the payload, meaning you also need to deal with attachments. Here is a simple example giving you an idea of how easy it is to implement a custom endpoint that does this:

public class DetachEndpoint implements MessageEndpoint {

    public void invoke(MessageContext messageContext) throws Exception {

        WebServiceMessage requestMessage = messageContext.getRequest();

        if (requestMessage instanceof AbstractSoapMessage) {

            Iterator<Attachment> attachments = ((AbstractSoapMessage) requestMessage).getAttachments();

            while (attachments.hasNext()) {
                Attachment attachment = attachments.next();

                ... //insert business logic here

                //System.out.println("attachment = " + attachment);
                //System.out.println("attachment datahandler = " + attachment.getDataHandler());
                //System.out.println("attachment content-id: " + attachment.getContentId());
                //System.out.println("attachment content type: " + attachment.getContentType());
                //System.out.println("attachment size: " + attachment.getSize());
            }
        } else {
            //throw some sort of exception
        }
    }
}

The messageContext gives you access to the attachments and to the payload XML via requestMessage.getPayloadSource(). This is very basic, just to show you the principle. A better way to do this is to use an AbstractMarshallingPayloadEndpoint with a custom interceptor for incoming messages that performs logic similar to my DetachEndpoint.

The class implementing the EndpointInterceptor interface can be declared in your config like this:

<bean id="myEndPointMapping" class="org.springframework.ws.server.endpoint.mapping.PayloadRootQNameEndpointMapping">
    <property name="mappings">
        <props>
            ...
        </props>
    </property>
    <property name="interceptors">
        <list>
            <ref bean="myInterceptor" />
            ....
        </list>
    </property>
</bean>

<bean id="myInterceptor" ... />

The interceptor has access to the messageContext via the handleRequest(MessageContext messageContext) method.