Monday, October 29, 2007

My problems putting RichFaces to work

There are many open source JSF component libraries on the market, so I had to choose one for my JSF projects. After some exhaustive exploration, I chose Ajax4jsf and RichFaces (the two projects have since been merged) as my component library, because of the Ajax-enabled components and because I think it offers enough JSF components to work with.

I read the RichFaces Developer Guide and followed both the installation and setup instructions, yet I still had trouble getting the library installed.
These problems drove me crazy for a long time, but I didn't want to call off my attempt (although I must confess I came close to giving up on RichFaces). I actually ran into different problems with different versions of the library, so I'm describing them here in the hope of helping somebody else avoid this nightmare.

My Runtime Environment

Before going any further, I should make my runtime environment explicit.
I have tried different versions and combinations (avoiding mismatched combinations such as Tomcat 5.5 and JSF 1.2, of course):

  • JDK 5 and JDK 6 (update 1 and 2).
  • Tomcat 5.5 and Tomcat 6.
  • JSF RI (Sun's implementation), versions 1.1 and 1.2.

Configuring RichFaces 3.0.1

This is no longer the current version of RichFaces, so you will probably want to skip this part; if not, here is my experience with this version.

RichFaces 3.0.1 doesn't officially include Ajax4jsf. However, its lib directory seems to indicate otherwise.

RichFaces 3.0.1 included jar files:

  • ajax4jsf-1.1.1.jar
  • richfaces-3.0.1.jar

So I decided to work with these files in my web application (and not to include any other version of Ajax4jsf). As soon as I deployed my application I got an error. It seems to happen right when the Ajax4jsf filter starts:
org.apache.catalina.core.StandardContext filterStart
java.lang.NoClassDefFoundError: org/apache/commons/collections/map/LRUMap
Soon I realized that the lib directory of Ajax4jsf 1.1.1 contained one file, whereas version 1.1.0 contained two:

Ajax4Jsf 1.1.1 included jar file:

  • ajax4jsf-1.1.1.jar

Ajax4Jsf 1.1.0 included jar files:

  • ajax4jsf-1.1.0.jar
  • oscache-2.3.2.jar

After that observation, I removed the ajax4jsf-1.1.1.jar file from RichFaces 3.0.1 and added the jar files from Ajax4jsf 1.1.0, resulting in the following set of files:
  • richfaces-3.0.1.jar
  • ajax4jsf-1.1.0.jar
  • oscache-2.3.2.jar

And this is the workaround (based on mixing jar files from both releases) that actually worked for RichFaces 3.0.1.
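For reference, the Ajax4jsf filter mentioned in the stack trace is the one registered in web.xml. This is only a hedged sketch of a typical declaration: org.ajax4jsf.Filter is the class name used in the RichFaces 3.1.x documentation, while earlier Ajax4jsf releases used a different class name, so check the Developer Guide of your exact version before copying it.

<filter>
    <display-name>Ajax4jsf Filter</display-name>
    <filter-name>ajax4jsf</filter-name>
    <filter-class>org.ajax4jsf.Filter</filter-class>
</filter>
<filter-mapping>
    <filter-name>ajax4jsf</filter-name>
    <servlet-name>Faces Servlet</servlet-name>
    <dispatcher>REQUEST</dispatcher>
    <dispatcher>FORWARD</dispatcher>
    <dispatcher>INCLUDE</dispatcher>
</filter-mapping>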

Configuring RichFaces 3.1.x

Finally, as of version 3.1.0, RichFaces and Ajax4jsf have been merged into one single project.

It comes with three jar files. These are their names for version 3.1.2:
  • richfaces-api-3.1.2.GA.jar
  • richfaces-impl-3.1.2.GA.jar
  • richfaces-ui-3.1.2.GA.jar

However, running my web application with these files raised another java.lang.NoClassDefFoundError.
This time, I discovered that RichFaces has an undocumented dependency on several Apache Commons libraries. The error vanished as soon as I added the latest versions of these Commons jar files.

For clarification, these are the Commons versions (and their jar files) I have included:

  • commons collections 3.2 (commons-collections-3.2.jar)
  • commons beanutils 1.7.0 (commons-beanutils.jar)
  • commons digester 1.8 (commons-digester-1.8.jar)
  • commons logging 1.1 (commons-logging-1.1.jar, commons-logging-adapters-1.1.jar, commons-logging-api-1.1.jar)
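Putting it all together, the WEB-INF/lib directory of my working RichFaces 3.1.2 setup contained roughly the following jars (the JSF implementation jars are omitted for brevity, and see the next section for cases where the Commons jars must live in the container's lib directory instead):

  • richfaces-api-3.1.2.GA.jar
  • richfaces-impl-3.1.2.GA.jar
  • richfaces-ui-3.1.2.GA.jar
  • commons-collections-3.2.jar
  • commons-beanutils.jar
  • commons-digester-1.8.jar
  • commons-logging-1.1.jar (plus commons-logging-adapters-1.1.jar and commons-logging-api-1.1.jar)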

Sharing commons files between multiple libraries

Be careful not to include these Apache Commons jar files more than once!

So if any other library you use in addition to RichFaces (e.g. Hibernate) also ships these files, you need to ensure that only one occurrence (the latest version) is actually available; otherwise you'll get a java.lang.NoClassDefFoundError again.

You also need to ensure that all libraries sharing the Commons files are loaded by the same class loader. For instance, if you have the Hibernate jar files inside $CATALINA_HOME/common/lib and the RichFaces jar files within your war file, then you have no other option but to put the Apache Commons files inside $CATALINA_HOME/common/lib as well in order to avoid the java.lang.NoClassDefFoundError.

As a consequence, if you're using Tomcat 5.5 you should know that this version of Tomcat already includes some Commons files within $CATALINA_HOME/common/lib; in that case you'll have to put the Apache Commons files in $CATALINA_HOME/common/lib and not within your war file, even though your RichFaces jar files are inside your war file!

As of Tomcat 6, Tomcat no longer uses the Apache Commons libraries itself, so you can choose where to put your latest version of the Commons jars. As a rule of thumb, though, $CATALINA_HOME/lib is the safest place.


Due to the lack of documentation, all these conclusions have been drawn from empirical tests.
I'd really appreciate any comments and similar experiences. It'd be nice to know whether other people have been through this same mess.

Sunday, September 30, 2007

instanceof doesn't work with Generics!

Information about generic type parameters is not accessible at runtime. Contrary to what happens in other languages such as C++, this can be seen as a limitation that we Java programmers must live with.

Simple scenario

As a sample scenario to show the problem, we'll write a simple generic Pair class containing a pair of objects of the same type parameter T.
Suppose you try to write code such as this (overriding the hashCode/equals methods):

public class Pair<T> {

    private final T first, second;

    public Pair(T first, T second) {
        this.first = first;
        this.second = second;
    }

    @Override public int hashCode() {
        return first.hashCode() + second.hashCode();
    }

    @Override public boolean equals(Object obj) {
        if (!(obj instanceof Pair<T>))
            return false;
        final Pair<T> other = (Pair<T>) obj;
        if (this.first != other.first && (this.first == null || !this.first.equals(other.first))) {
            return false;
        }
        if (this.second != other.second && (this.second == null || !this.second.equals(other.second))) {
            return false;
        }
        return true;
    }

    public T getFirst() {
        return first;
    }

    public T getSecond() {
        return second;
    }
}
You'll get a compiler error on the first line of the equals() method:
illegal generic type for instanceof
This error has to do with the way generics are implemented in Java and with the fact that the instanceof operator is evaluated at runtime (it checks the runtime type information of an object).

The way Java handles generics begins and ends in the compiler. This was a design decision made to keep backward compatibility with pre-generics code, and it is commonly known as Type Erasure. At compile time, when a generic type is instantiated, all information about the actual type parameter is removed, so instantiations such as List<Employee> and List<Invoice> end up at runtime as the same type, that is, the raw type List.
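This is easy to observe: under erasure, two differently parameterized lists report the same runtime class. A minimal snippet (Java 5 syntax, matching the era of this post) illustrating it:

import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        List<Integer> integers = new ArrayList<Integer>();
        // Both parameterizations share the same runtime class: java.util.ArrayList
        System.out.println(strings.getClass() == integers.getClass()); // prints true
    }
}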

Therefore, you can't use the instanceof operator with generics. Instead, we replace it with a getClass() comparison in the implementation of the equals() method, as follows:
@Override public boolean equals(Object obj) {
    if (obj == null) {
        return false;
    }
    if (getClass() != obj.getClass()) {
        return false;
    }
    final Pair<T> other = (Pair<T>) obj;
    if (this.first != other.first && (this.first == null || !this.first.equals(other.first))) {
        return false;
    }
    if (this.second != other.second && (this.second == null || !this.second.equals(other.second))) {
        return false;
    }
    return true;
}
In addition, we'll get an "unchecked or unsafe operations" warning whenever we mix raw and generic types. In the above sample code we still have this warning at compile time:
Pair.java uses unchecked or unsafe operations
In order to get rid of this warning, we need to place the SuppressWarnings annotation just before the cast to Pair<T>:
...
@SuppressWarnings("unchecked")
final Pair<T> other = (Pair<T>) obj;
...
Conclusion

At runtime, the JVM has no information about generics; it is removed by the strategy known as Type Erasure.

However, not being able to use the instanceof operator with generics is only one sad consequence of Type Erasure. We can list some more:

  • You can't create an array of T (where T is a type parameter).
  • You can't create an array of a generic type, such as new List<Employee>[7].
  • You can't access the runtime class of a type parameter, such as T.class.
  • You can't access the runtime Class of a generic type, such as List<Employee>.class.
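A common workaround for some of these restrictions (not covered in the original post, shown here only as a hedged sketch) is to pass an explicit Class<T> token to the generic class, so the type information survives erasure:

import java.lang.reflect.Array;

public class Box<T> {

    private final Class<T> type; // explicit class token, since T.class is illegal

    public Box(Class<T> type) {
        this.type = type;
    }

    @SuppressWarnings("unchecked")
    public T[] newArray(int length) {
        // works around "new T[length]" being illegal
        return (T[]) Array.newInstance(type, length);
    }

    public boolean isInstance(Object obj) {
        // replaces the illegal "obj instanceof T"
        return type.isInstance(obj);
    }
}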

You can read more consequences of Type Erasure in this blog.

The main benefit of Type Erasure is that the JVM didn't have to change in order to support generics, which is how backward compatibility was achieved. However, not everybody likes it, and there's a new proposal for Java generics (perhaps for Java 7!): Reified Generics, where generic type parameters would be available at runtime.

Thursday, August 2, 2007

Managing JPA EntityManager lifecycle

Managing the EntityManager lifecycle when using JPA in a non-enterprise environment (e.g. Tomcat) is a task you must take care of yourself, because you don't have the IoC (Dependency Injection) facilities of a Java EE 5 container to manage the EntityManager lifecycle for you. The same was true when we discussed the EntityManagerFactory lifecycle in a previous post. Therefore, in this situation, you should consider which scope is the right one for your EntityManager instances.

In order to decide your best approach, you need to keep in mind that:

  1. An EntityManager is not a heavyweight object.
  2. It's not safe to traverse lazy-loaded relationships once the EntityManager is closed (this situation will change as of JPA 2.0).

Because of the first point, there is no need to keep the same EntityManager around longer than necessary, so there is no room for application scope or session scope. Moreover, you can't use a single EntityManager instance for the whole application lifecycle (application scope) anyway, because the EntityManager is not thread-safe.

Only two scopes are left:

  • Method scope (i.e. instantiate/destroy one EntityManager in each business method).
  • Request scope.

Using EntityManager with method scope


You create and destroy the EntityManager instance within a business method.
Be careful to ensure the EntityManager is always closed. When dealing with transactions this can be a little tricky, as discussed in this post.

Dealing with this scope is easy; here is a sample:
public void aBusinessMethod() {

    EntityManagerFactory emf = ... ;
    EntityManager em = emf.createEntityManager();
    try {
        ...
    } finally {
        em.close();
    }
}
However, this method scope is not enough for every situation. There are scenarios where you'll need a wider scope, such as the following:

  • When transactions span multiple business methods.
  • When you need to traverse lazy-loaded relationships outside a method (e.g. in a JSF page).

In these scenarios you can choose to keep one EntityManager for the whole life of a HTTP request as we are about to describe.

Before stepping further into the request scope strategy, consider a little workaround that lets you avoid it altogether (it is not my favorite alternative, though). It is based on eagerly forcing the loading of entity relationships, and it can be useful when you need to traverse them after the EntityManager has been closed. A typical such scenario is traversing a relationship from within a JSF page.

Loading eagerly relationships

As a reminder, the JPA relationships that by default are lazily loaded are: OneToMany and ManyToMany.

Loading eagerly these relationships can be done in two different ways:

  • Changing the domain model
  • Explicit loading of relationships

The first approach consists of explicitly using the fetch parameter of the JPA annotations @OneToMany or @ManyToMany, as in this sample:

@OneToMany(fetch=FetchType.EAGER)

The problem with this solution is that you impose a requirement on your domain model instead of solving the problem only where you really need it.

At first sight, loading relationships explicitly could be done by simply invoking the entity method that gives access to the collection of related entities; however, this is not very reliable. For instance, when we traverse a OneToMany relationship with Hibernate EM as the JPA engine, the returned collection (a proprietary implementation) has not loaded its entities at all. Even though a collection is returned, its elements are not loaded until you actually access each one (or invoke the collection's size() method to force the loading)!
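As an illustration, here is a hedged sketch (the Customer/Order entities and the getOrders() accessor are hypothetical) of forcing a lazy collection to load before the EntityManager is closed, so that a JSF page can traverse it afterwards:

public List<Order> findCustomerOrders(EntityManagerFactory emf, Integer customerId) {
    EntityManager em = emf.createEntityManager();
    try {
        Customer customer = em.find(Customer.class, customerId);
        // Touching the collection (size()) forces the provider to load it
        // while the EntityManager is still open.
        customer.getOrders().size();
        return customer.getOrders();
    } finally {
        em.close(); // the orders are now detached but fully loaded
    }
}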

Using EntityManager with request scope


This is a one EntityManager per HTTP request strategy with the following features:

  • Creation on demand of the EntityManager.
  • Lazy closing of the EntityManager.

We'll provide on-demand creation of the EntityManager instance used within the request. So if no EntityManager is needed for a given request, none will be created at all!

The main benefit of this scope derives from the delayed closing of the EntityManager (it lasts as long as the HTTP request is being processed). Every queried entity remains managed until the end of the request, and therefore throughout the presentation phase (the render phase in JSF, for instance). This allows you to traverse lazy-loaded relationships transparently: you no longer have to force the loading of relationships as described before (they will be loaded only if they are actually traversed, i.e. on demand).

Another benefit is the possibility of sharing the EntityManager instance among several business methods within a request (with no need to pass the EntityManager around as a parameter). This is not a limitation: should you need more than one EntityManager instance, you can always bypass this utility and create an EntityManager instance yourself.

Transparent Design for an EntityManager with a lazy closing behavior

As a requirement, our approach will be transparent from a client perspective. That is, the client code should not have to change in order to get this behavior (this way, you can easily adapt your existing web application). This is the typical client code using the EntityManager inside a method:
EntityManagerFactory emf = ...
EntityManager em = emf.createEntityManager();
try {
    ...
} finally {
    em.close();
}
In order to provide this transparency, we'll create proxies for the EntityManagerFactory and EntityManager classes. The following UML class diagram, based on the Proxy pattern, shows the abstract proxies and the concrete classes:
Instead of creating one class per proxy, we have chosen to create an abstract proxy class used as a generic multi-purpose base class, and we provide the actual proxy as a concrete class extending the abstract proxy.

You can take a look at the EntityManagerFactoryProxy class:
abstract class EntityManagerFactoryProxy implements EntityManagerFactory {

    protected final EntityManagerFactory delegate;

    protected EntityManagerFactoryProxy(EntityManagerFactory emf) {
        this.delegate = emf;
    }

    public EntityManager createEntityManager() {
        return delegate.createEntityManager();
    }

    public EntityManager createEntityManager(Map map) {
        return delegate.createEntityManager(map);
    }

    public boolean isOpen() {
        return delegate.isOpen();
    }

    public void close() {
        delegate.close();
    }
}
The EntityManagerProxy class (not shown here, see the complete source in the Resources section) is created likewise.

The ScopedEntityManagerFactory class is just a factory for LazyCloseEntityManager instances. The LazyCloseEntityManager class acts on behalf of the client and overrides the EntityManager.close() method in order to delay the call to the close method of its delegate (the actual EntityManager).

Binding Thread and EntityManager

Our design relies on a property every application server must guarantee: for a given request, the ServletRequestListener's methods and the Servlet's service method are executed in the same thread (regardless of whether a thread pool is used to serve requests).

Taking advantage of this, we're going to create a request listener whose only responsibility is to close the EntityManager bound to the request thread (if one was previously created).
In order to delay the closing of the actual EntityManager, we'll wrap it in a LazyCloseEntityManager (a transparent proxy whose close() method is overridden) and bind the wrapper to the current request thread, so that the actual EntityManager is closed later by the request listener when the request processing ends.

Binding the EntityManager to the current thread can be done easily using a ThreadLocal object.

The class responsible for creating LazyCloseEntityManager instances is ScopedEntityManagerFactory, and it is therefore the one that uses the ThreadLocal. When a client asks for an EntityManager, ScopedEntityManagerFactory first looks in its ThreadLocal object. If the current thread has already obtained an EntityManager, no new EntityManager is created; otherwise, ScopedEntityManagerFactory creates a new LazyCloseEntityManager and binds it to the thread for future use.

Somehow ScopedEntityManagerFactory must be notified when the HTTP request finishes, because it has to forget the LazyCloseEntityManager bound to its ThreadLocal object.
One way to do this in a loosely coupled fashion is through a listener: ScopedEntityManagerFactory implements LazyCloseListener in order to be notified by the LazyCloseEntityManager.

Finally, the HTTP request listener is responsible for eventually closing the LazyCloseEntityManager and triggering this notification.

The following UML class diagram exposes these ideas:
To make this concrete, here are some of the core classes that implement this design.
We'll start with the factory of LazyCloseEntityManager instances, ScopedEntityManagerFactory. This is the only class using ThreadLocal. Note that it is notified when each LazyCloseEntityManager instance is really closed.
public class ScopedEntityManagerFactory extends EntityManagerFactoryProxy
        implements LazyCloseListener {

    private final ThreadLocal<LazyCloseEntityManager> threadLocal;

    protected ScopedEntityManagerFactory(EntityManagerFactory emf) {
        super(emf);
        this.threadLocal = new ThreadLocal<LazyCloseEntityManager>();
    }

    public EntityManager createEntityManager(Map map) {
        LazyCloseEntityManager em = threadLocal.get();
        if (em == null) {
            em = new LazyCloseEntityManager(super.createEntityManager(map));
            createEntityManager(em);
        }
        return em;
    }

    public EntityManager createEntityManager() {
        LazyCloseEntityManager em = threadLocal.get();
        if (em == null) {
            em = new LazyCloseEntityManager(super.createEntityManager());
            createEntityManager(em);
        }
        return em;
    }

    private void createEntityManager(LazyCloseEntityManager em) {
        threadLocal.set(em);
        em.setLazyCloseListener(this);
    }

    protected LazyCloseEntityManager getEntityManager() {
        return threadLocal.get();
    }

    public void lazilyClosed() {
        threadLocal.set(null);
    }
}
Below you can see the LazyCloseEntityManager class. Note that this class is a wrapper around the actual EntityManager: it is created by ScopedEntityManagerFactory (through the ThreadLocal) and used by the HTTP request listener PersistenceAppRequestListener:
public class LazyCloseEntityManager extends EntityManagerProxy {

    private LazyCloseListener listener;

    public LazyCloseEntityManager(EntityManager delegate) {
        super(delegate);
    }

    public void setLazyCloseListener(LazyCloseListener listener) {
        this.listener = listener;
    }

    public LazyCloseListener getLazyCloseListener() {
        return listener;
    }

    @Override
    public void close() {
    }

    protected void lazyClose() {
        super.close();
        if (listener != null) listener.lazilyClosed();
    }
}

The important thing to highlight is that the public close() method has no effect: if client code invokes it, nothing happens. The lazyClose() method is the one that closes the actual EntityManager, and it is invoked by the HTTP request listener, as you can see next.

Here is the HTTP request listener. Note it only closes the LazyCloseEntityManager bound to the current thread.

public class PersistenceAppRequestListener implements ServletRequestListener {

    public void requestInitialized(ServletRequestEvent evt) {
    }

    public void requestDestroyed(ServletRequestEvent evt) {
        PersistenceManager pm = PersistenceManager.getInstance();

        if (pm instanceof ScopedPersistenceManager) {
            LazyCloseEntityManager em = ((ScopedEntityManagerFactory) pm
                    .getEntityManagerFactory()).getEntityManager();

            if (em != null)
                em.lazyClose();
        }
    }
}

Client code using the request scope EntityManager


The only requirements (besides including the library jar file) for a client web application to use the request-scoped EntityManager are:

  • Define a web listener.
  • Determine the name of the Persistence Unit to use (optional).

Both settings go in your web.xml deployment descriptor.

Configuring the web.xml file

The name of the Persistence Unit you want this utility to use can be defined as an init parameter in the web.xml deployment descriptor:
<context-param>
    <param-name>es.claro.persistence.PERSISTENCE_UNIT</param-name>
    <param-value>MyPersistenceUnit</param-value>
</context-param>
Alternatively, you can define the name of the Persistence Unit programmatically by using this code:
PersistenceManager.setPersistenceUnit("MyPersistenceUnit");
If no Persistence Unit name is defined, either in web.xml or through this code, a default name is assumed: "DefaultPU".

You also need to explicitly define a HTTP listener in the deployment descriptor file web.xml as follows:
<listener>
    <description>Listener for managing EntityManager with request scope</description>
    <listener-class>es.claro.persistence.PersistenceAppRequestListener</listener-class>
</listener>
This listener actually acts both as a request listener and as a servlet context listener.
From a user perspective, this is the listener that lazily closes the EntityManager of the current HTTP request, so forgetting to define it would be fatal.

Sample code

The only class you need to know about is PersistenceManager.
Through this singleton, you can obtain the EntityManagerFactory instance needed to start working with JPA (what you get from PersistenceManager is not actually a plain EntityManagerFactory object but a proxy subclass of it).

Below is a sample code using this approach:
EntityManagerFactory emf = PersistenceManager.getInstance().getEntityManagerFactory();
EntityManager em = emf.createEntityManager();
try {
    EntityTransaction t = em.getTransaction();
    try {
        t.begin();
        ...
        t.commit();
    } finally {
        if (t.isActive()) em.getTransaction().rollback();
    }
} finally {
    em.close();
}
Note that this client code is the same as for method scope, so switching between the scoped and non-scoped EntityManager strategies is completely transparent to the client code!

Simplifying client code

An optional enhancement to this design is possible:
just add PersistenceManager.getScopedEntityManager() as a shortcut method.
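The actual implementation ships with the project linked in the Resources section; purely as a hedged sketch (the method body is assumed here, not taken from the real sources), the shortcut could simply delegate to the proxy factory:

public EntityManager getScopedEntityManager() {
    // Returns the request-scoped (lazy-close) EntityManager bound to the current thread.
    return getEntityManagerFactory().createEntityManager();
}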

By using this method in client code, you avoid having to close the EntityManager each time you use it:
EntityManager em = PersistenceManager.getInstance().getScopedEntityManager();

try {
    em.getTransaction().begin();
    ...
    em.getTransaction().commit();
} finally {
    if (em.getTransaction().isActive()) em.getTransaction().rollback();
}
Note that this code becomes trivial if no transaction is involved.

A little drawback of coding this way is that you can no longer switch between the scoped and non-scoped EntityManager strategies without changing the client code!

Resources


The whole source code of the request-scoped entity manager has been developed with NetBeans and hosted as a little open source project at Google Code, so you can access both the source code and a jar file with the library ready to use.

Thursday, July 26, 2007

Working with a non JTA DataSource in Toplink Essentials

As is commonly known, Toplink Essentials, as a JPA engine, can be used both inside a Java EE 5 application server and outside an EJB container in a Java Standard Edition (Java SE) 5 application.
However, if you are working with JPA outside an EJB container, you will have noticed that by default you can't define a data source to work with in your persistence.xml file; you just define the JDBC connection data, as shown in the sample below:

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
    <persistence-unit name="SamplePU" transaction-type="RESOURCE_LOCAL">
        <provider>oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider</provider>
        <class>sample.MyEntity</class>
        <properties>
            <property name="toplink.jdbc.url" value="someURL"/>
            <property name="toplink.jdbc.driver" value="oracle.jdbc.OracleDriver"/>
            <property name="toplink.jdbc.user" value="someUser"/>
            <property name="toplink.jdbc.password" value="somePassword"/>
        </properties>
    </persistence-unit>
</persistence>
For many people, not being able to use a data source is a little frustrating. However, before you think of switching to another JPA engine (such as Hibernate EM, which does allow you to define a non-JTA data source in a Java SE environment, e.g. Tomcat), let me tell you there is a way to define a non-JTA data source in Toplink Essentials too! It's just not straightforward.

Transparent approach

Toplink Essentials allows you to customize some JPA behavior through several extensions to the JPA standard that can be defined in the persistence.xml configuration file, so we'll take advantage of some of these features to achieve our goal.
The extension we need for our purpose can be found here.

It's important to keep in mind that this workaround doesn't compromise the application using JPA: you won't have to write any proprietary code using classes other than the standard JPA ones.

The only requirements for a client application are:

  1. Attach a jar archive as a library (containing the proprietary code for managing a DataSource).
    This can be done in the same way you attach your Toplink Essentials library: either in your war file (under /WEB-INF/lib) or within your application server's library directory (e.g. $CATALINA_HOME/common/lib for Apache Tomcat).

  2. Define a property in the persistence.xml configuration file.

Create the customized code as a library

The only reason we can't define a data source in persistence.xml for the engine to use in a Java SE environment is that, by default, the data source is not looked up by name. Fortunately, we can change this behavior through the JNDIConnector class.

If you take a look at the methods of this class, you'll notice you could even set a DataSource directly by invoking the setDataSource() method. However, we won't go that far. The only method we need is setLookupType(), to set the lookup type to the constant JNDIConnector.STRING_LOOKUP.

The JNDIConnector instance we need to modify is accessed through a Toplink session. Toplink Essentials allows us to customize this session before it is used, by implementing a SessionCustomizer (we'll need to specify its class name via the previously mentioned property in persistence.xml).

Here is the one-class implementation we need:
package es.claro.commons.ds;

import oracle.toplink.essentials.tools.sessionconfiguration.SessionCustomizer;
import oracle.toplink.essentials.sessions.Session;
import oracle.toplink.essentials.jndi.JNDIConnector;

public class DataSourceSessionCustomizer implements SessionCustomizer {

    public DataSourceSessionCustomizer() {
    }

    public void customize(Session session) throws Exception {
        JNDIConnector conn = (JNDIConnector) session.getLogin().getConnector();
        conn.setLookupType(JNDIConnector.STRING_LOOKUP);
    }
}
Note that this code is indeed proprietary, but it doesn't live inside your web application; it is packaged as a library file. Think of it as an extension of your JPA engine: I suggest compiling this class (with the Toplink Essentials library on the compilation classpath), packing the compiled code into a jar file, and adding that jar wherever you put the Toplink Essentials jar files.

Customizing the persistence configuration file

Now that we have implemented the SessionCustomizer interface, we just have to define the toplink.session.customizer property in the persistence.xml configuration file.

Below is a persistence.xml where this property is defined:
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
    <persistence-unit name="SamplePU" transaction-type="RESOURCE_LOCAL">
        <provider>oracle.toplink.essentials.ejb.cmp3.EntityManagerFactoryProvider</provider>
        <non-jta-data-source>java:comp/env/jdbc/DefaultDS</non-jta-data-source>
        <class>sample.MyEntity</class>
        <properties>
            <property name="toplink.session.customizer" value="es.claro.commons.ds.DataSourceSessionCustomizer"/>
        </properties>
    </persistence-unit>
</persistence>

Using the data source in your web application

Now Toplink Essentials will use your data source whenever you use JPA code! With JPA, you never need to touch the DataSource directly.

Note that in this sample we're using the reference jdbc/DefaultDS, not an actual data source previously defined in the application server (e.g. Tomcat). As you may guess, the data source name referenced in persistence.xml is just that: a reference. It is declared in the web.xml deployment descriptor of your web application and mapped to an actual data source in a container-specific file. In the case of Tomcat, this mapping is defined in the context.xml file, as shown in the following sample:
<?xml version="1.0" encoding="UTF-8"?>
<Context path="/sample">
    <ResourceLink global="jdbc/MyRealDS" name="jdbc/DefaultDS" type="javax.sql.DataSource"/>
</Context>
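For completeness, the matching resource reference declared in web.xml would look roughly like this (a hedged sketch following the standard servlet descriptor syntax):

<resource-ref>
    <description>Default DataSource used by the persistence unit</description>
    <res-ref-name>jdbc/DefaultDS</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
</resource-ref>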

Thursday, June 28, 2007

How to close a JPA EntityManager in web applications

When working with resources it's always important to ensure they are closed when no longer needed.
When working with JPA, there are two kinds of resources we must take care of: EntityManagers and transactions.

Context

In Java EE, you can use IoC to inject an EntityManager, so the container is the one that manages the whole EntityManager lifecycle.
The context we are referring to here is when no such IoC exists and you need to create/destroy EntityManagers yourself each time you use them in a business method. So, if the following describes your scenario, you can take an approach similar to the ones described below:

  • Non-enterprise applications (e.g. a servlet container), and therefore no injection of EntityManagers.
  • Method scope for the EntityManager.

Approaches

Because we are creating an EntityManager inside a business method that uses the Java Persistence API, we'll assume the existence of an attribute emf, which is the single, application-wide instance of EntityManagerFactory.

An easy but insufficient approach

At first sight, you might think of creating and closing the EntityManager as follows:
public Customer getBestCustomerOfMonth() {

    EntityManagerFactory emf = ... ;
    EntityManager em = emf.createEntityManager();
    // business logic
    em.close();
}
However, as you can guess, this code doesn't ensure the EntityManager is always closed when the method finishes.
If a RuntimeException occurs in the business logic, the em EntityManager remains open!
You'll always want to avoid this sort of code.

Second approach

You can nest the call that closes the EntityManager inside a finally block, so you can rewrite the previous snippet of code as:
public Customer getBestCustomerOfMonth() {

    EntityManagerFactory emf = ... ;
    EntityManager em = emf.createEntityManager();
    try {
        // business logic
    } finally {
        em.close();
    }
}
Do you think this is enough to dispose of all the EntityManager's resources?
Well, it actually is, as long as no transaction is involved. Note that the getBestCustomerOfMonth() method doesn't create any transaction from the EntityManager.

In presence of transactions

If you need transactions, you'll have to begin and end them explicitly (as long as you're not using an enterprise application server), in a similar way to what you already do with the EntityManager.

At OTN, there is a simple tutorial showing how to use JPA that uses the previous approach in the presence of transactions!

This is a snippet of code extracted from the mentioned OTN tutorial:
public void alterOrderQuantity(long orderId, int newQuantity) {

    EntityManager em = jpaResourceBean.getEMF().createEntityManager();
    try {
        em.getTransaction().begin();
        Order order = em.find(Order.class, orderId);
        order.setQuantity(newQuantity);
        em.getTransaction().commit();
    } finally {
        em.close();
    }
}
However, the previous approach is not enough. It is not completely right because, as the JPA javadocs state, when the EntityManager.close() method is invoked, the persistence context associated with the EntityManager remains managed until the underlying transaction completes. So what happens to the EntityManager instance if an exception occurs before the transaction is closed?

Although the close() method of the EntityManager is invoked (in the finally clause), its persistence context will remain managed because the transaction hasn't committed!

Final approach

So, when using transactions outside an enterprise application server, you'll have to end the transaction (commit or rollback) in the same way you close EntityManagers.
In order for both resources (the EntityManager and the underlying transaction) to be released, you'll need an additional level of nesting, writing your code similar to this:
public Customer updateCustomer(Customer cust) {

    EntityManagerFactory emf = ... ;
    EntityManager em = emf.createEntityManager();
    try {
        EntityTransaction t = em.getTransaction();
        try {
            t.begin();
            // business logic to update the customer
            cust = em.merge(cust);
            t.commit();
        } finally {
            if (t.isActive()) t.rollback();
        }
    } finally {
        em.close();
    }
    return cust;
}
Perhaps this nested structure looks like a bit of a mess, but it is really needed in the presence of transactions.

I hope Java 7 arrives soon to help! With closures, all this nesting could be avoided!
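In the meantime, one way to hide the nesting (not part of the original post; the class and interface names below are made up for illustration) is to factor it into a small callback-based helper, roughly like this:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.EntityTransaction;

public final class TxTemplate {

    public interface TxCallback<T> {
        T execute(EntityManager em);
    }

    private final EntityManagerFactory emf;

    public TxTemplate(EntityManagerFactory emf) {
        this.emf = emf;
    }

    public <T> T run(TxCallback<T> callback) {
        // Same nested try/finally structure as above, written only once.
        EntityManager em = emf.createEntityManager();
        try {
            EntityTransaction t = em.getTransaction();
            try {
                t.begin();
                T result = callback.execute(em);
                t.commit();
                return result;
            } finally {
                if (t.isActive()) t.rollback();
            }
        } finally {
            em.close();
        }
    }
}

Business methods then pass an anonymous TxCallback (on Java 5/6) containing only the persistence logic, and the boilerplate lives in a single place.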

Monday, May 21, 2007

Bug in Hibernate implementation of JPA: Persisting an entity

I've found a bug in the Hibernate implementation of JPA when persisting an entity.

I'm working with JPA using two different JPA implementations, Hibernate EM and Toplink Essentials, and I've found certain interesting conditions (which I'm about to describe) under which Hibernate throws an unexpected exception that I don't think it should.

The observation

The exception I obtained was this:

javax.persistence.PersistenceException: org.hibernate.PersistentObjectException: detached entity passed to persist

It is thrown when persisting an entity a second time, after the first attempt failed. The whole scenario is as follows:

Suppose a user is going to add a new entity, such as a customer, to the database, but enters some data that is invalid for the database (it violates some constraint, for instance a unique constraint). When the user clicks the "add" button to add the customer, the business logic calls the EntityManager's persist method, which throws a PersistenceException. In fact, with Hibernate this exception is thrown lazily, when the transaction tries to commit.

In any case, for the business logic to be robust, the exception is caught and the user is informed, so he/she can correct the wrong data and try again to add the same (but now modified) entity.
So when the user, after modifying some values of the customer, clicks the "add" button again, the business logic executes em.persist(entity) a second time (with the same entity object). This time, although no constraint is violated in the database (the user fixed that), Hibernate throws the exception with the message: detached entity passed to persist.

The business logic code I was referring to above is this:
public void addCustomer(Customer cust) throws ValidateException, CustomerException {

    validate(cust);
    EntityManagerFactory emf = PersistenceManager.getInstance().getEntityManagerFactory();
    EntityManager em = emf.createEntityManager();

    try {
        EntityTransaction t = em.getTransaction();
        try {
            t.begin();
            cust.setStartedDate(new Date());
            em.persist(cust);
            t.commit();
        } finally {
            if (t.isActive())
                em.getTransaction().rollback();
        }
    } catch (PersistenceException ex) {
        Throwable lastCause = ex;
        while (lastCause.getCause() != null)
            lastCause = lastCause.getCause();
        if (lastCause.getMessage().startsWith("ORA-00001"))
            throw new CustomerException("The Customer Id already exists.");
        else
            throw ex;
    } finally {
        em.close();
    }
}
Note that I'm making sure both the EntityManager em and the transaction are closed before leaving this method!

My Customer Entity class is using a database sequence to feed its primary key:
@Entity
@Table(name = "CUSTOMER")
@SequenceGenerator(name="customerGen", sequenceName="SEQ_CUSTOMER", initialValue=1, allocationSize=1)
public class Customer implements Serializable {

    @Id
    @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="customerGen")
    private Integer id;

    ...
}
And my client code (snippet from a JSF Backing bean) where the business logic is invoked is:
public String addCustomerAction() {

    try {
        model.addCustomer(customer);
        return "done";
    } catch (ValidateException ex) {
        FacesContext.getCurrentInstance().getExternalContext().getRequestMap()
                .put("validationErrors", ex.getMessages());
        return "error";
    } catch (CustomerException ex) {
        FacesContext.getCurrentInstance().getExternalContext().getRequestMap()
                .put("errors", ex.getMessages());
        return "error";
    }
}
Right after invoking the addCustomer() method for the second time, the unexpected PersistenceException is thrown!
No exception should be thrown here, because the entity hasn't been persisted yet (we tried once, but that attempt was rolled back due to the database constraint).
As I have checked, this scenario works fine using Toplink Essentials as the JPA engine.

The explanation

So, what is happening? Why am I getting this error when the method is invoked under these conditions?

After a little research, I've discovered what seems to be happening with Hibernate and how to work around the problem.
Once you have tried to persist an entity and obtained an exception, the primary key of the entity is remembered and marked as a sort of "already used", and therefore you shouldn't try to persist an entity with that primary key again. The workaround is to assign another primary key to the entity! This way you won't get the unexpected exception with the message: detached entity passed to persist.

My workaround client code (snippet from a JSF Managed bean)
public String addCustomerAction() {

    try {
        model.addCustomer(customer);
        return "done";
    } catch (ValidateException ex) {
        FacesContext.getCurrentInstance().getExternalContext().getRequestMap()
                .put("validationErrors", ex.getMessages());
        return "error";
    } catch (CustomerException ex) {
        FacesContext.getCurrentInstance().getExternalContext().getRequestMap()
                .put("errors", ex.getMessages());
        customer.setId(null); // This is the line we need to work around the problem with Hibernate!
        return "error";
    }
}
The actual value passed to setId doesn't matter, as long as it is not the same one. In this snippet it is set to null because this way the sequence will be used again to obtain the next value at the next persist attempt.

In my humble opinion, this is a bug: if the transaction is rolled back, you should be able to keep working with the same primary key (as long as it isn't in the database yet).

I would like to know your opinion on this bug, so I encourage you to post comments below!

I'm using the following product versions:

  • Hibernate Core 3.2
  • Hibernate EntityManager 3.2.1 GA

Friday, May 4, 2007

JPA EntityManagerFactory in web applications

Coding Java Persistence web applications that run outside a Java EE server (e.g. on Apache Tomcat) is slightly different from writing JPA applications inside an application server. The main difference has to do with the responsibility for managing the EntityManagerFactory lifecycle: because there is no Dependency Injection outside the Java EE server, this responsibility falls directly on the programmer. So let's discuss an easy way to deal with it.


As already mentioned in Design Choices in a Web-only Application Using Java Persistence, the EntityManagerFactory class is thread-safe, so it's very convenient to create it once with application scope and to destroy it when the web application eventually ends (e.g. during a server shutdown).


Closing the EntityManagerFactory must not be forgotten; it's very important to keep this in mind, otherwise you may run into unexpected side effects. These effects depend on the JPA provider: with Toplink Essentials, for instance, I've seen problems arise in subsequent redeployments when closing the EntityManagerFactory is forgotten.

So, it's common practice to create only one EntityManagerFactory when the web application is deployed, share it with application scope, and ensure its destruction at the end of the web application's lifecycle.

A practical design for managing an EntityManagerFactory this way is based on a web application listener (ServletContextListener) together with the Service Locator design pattern, the latter providing access to the EntityManagerFactory from wherever it is needed. The only purpose of the application listener is to ensure that the shared EntityManagerFactory is always closed (if it was created).

Note we'll use a lazy strategy for the creation of the EntityManagerFactory.

The following UML class diagram shows us this approach:



Managing the EntityManagerFactory's lifecycle using an application listener

The following class could be a sample of this listener.

public class PersistenceAppListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent evt) {
    }

    public void contextDestroyed(ServletContextEvent evt) {
        PersistenceManager.getInstance().closeEntityManagerFactory();
    }
}
We also need to define this listener in the web application descriptor file web.xml.
<web-app ...>
    ...
    <listener>
        <description>ServletContextListener</description>
        <listener-class>tmpl.web.PersistenceAppListener</listener-class>
    </listener>
    ...
</web-app>

Getting the EntityManagerFactory from a singleton in the PersistenceManager class

public class PersistenceManager {

    public static final boolean DEBUG = true;

    private static final PersistenceManager singleton = new PersistenceManager();

    protected EntityManagerFactory emf;

    public static PersistenceManager getInstance() {
        return singleton;
    }

    private PersistenceManager() {
    }

    public EntityManagerFactory getEntityManagerFactory() {
        if (emf == null)
            createEntityManagerFactory();
        return emf;
    }

    public void closeEntityManagerFactory() {
        if (emf != null) {
            emf.close();
            emf = null;
            if (DEBUG)
                System.out.println("\n*** Persistence finished at " + new java.util.Date());
        }
    }

    protected void createEntityManagerFactory() {
        this.emf = Persistence.createEntityManagerFactory("OrderPU");
        if (DEBUG)
            System.out.println("\n*** Persistence started at " + new java.util.Date());
    }
}

Using the EntityManagerFactory class from a business class

The client code that uses this pattern is straightforward.

The following snippet of code applies when you are using the EntityManager in a transaction:
EntityManagerFactory emf = PersistenceManager.getInstance().getEntityManagerFactory();
EntityManager em = emf.createEntityManager();

try {
    EntityTransaction t = em.getTransaction();
    try {
        t.begin();
        ...
        t.commit();
    } finally {
        if (t.isActive()) t.rollback();
    }
} finally {
    em.close();
}
When no transaction is required the client code is as follows:
EntityManagerFactory emf = PersistenceManager.getInstance().getEntityManagerFactory();
EntityManager em = emf.createEntityManager();

try {
    ...
} finally {
    em.close();
}


Friday, April 13, 2007

Publishing JSF application from behind a proxy

Problem

Sometimes application servers sit behind a web proxy, so that the content they publish is available from a general URL space. For example, the web server www.acme.es acts as a proxy for the web application server srv3.acme.es, so that every web page published at http://srv3.acme.es/* is accessible from http://www.acme.es/srv3/*.

The problem is that the links JSF generates inside HTML pages always use server-relative paths, which start with a slash, and these paths reach the client verbatim. For example, if an application called apl exists on srv3.acme.es, with a page p that references another page q, an external client would make a request to www.acme.es/srv3/apl/p, which is transformed into a request to srv3.acme.es/apl/p; a page containing a link such as /apl/q is then generated and returned to the client. When the client follows this link, the browser makes a request to www.acme.es/apl/q (the /srv3 prefix has disappeared), which is a different page, one that may not even exist.

Solution

The URLs of links generated by JSF components are built by the view handler from the view identifier. By default, the view identifier is the path of the JSP page within our web application, and the view handler builds the URL by concatenating the context path, the JSF servlet path and the view identifier (i.e. the path to the JSP page).

The behaviour of the view handler can be redefined, and this way we can build the link URLs ourselves. Next is a step-by-step description of how to do this redefinition for the example shown above.

Step by step solution

A view handler is a class that extends javax.faces.application.ViewHandler. This is an abstract class with many methods, but we are only interested in redefining the getActionURL method; for the other ones, we will delegate to the default view handler. If we create a constructor in our view handler that accepts another view handler as a parameter, JSF will hand us the default view handler at the moment ours is created. So our view handler could look like this:

import java.io.*;
import java.util.*;
import javax.faces.*;
import javax.faces.context.*;
import javax.faces.application.*;
import javax.faces.component.*;

public class AcmeViewHandler extends ViewHandler
{
    ViewHandler defaultHandler;

    public AcmeViewHandler(ViewHandler defaultHandler)
    {
        this.defaultHandler = defaultHandler;
    }

    public Locale calculateLocale(FacesContext context)
    {
        return defaultHandler.calculateLocale(context);
    }

    public String calculateRenderKitId(FacesContext context)
    {
        return defaultHandler.calculateRenderKitId(context);
    }

    public UIViewRoot createView(FacesContext context, String viewId)
    {
        return defaultHandler.createView(context, viewId);
    }

    public String getActionURL(FacesContext context, String viewId)
    {
        String actionURL = defaultHandler.getActionURL(context, viewId);
        String prefijoProxy = context.getExternalContext().getInitParameter("es.acme.faces.PROXY_PREFIX");

        if (prefijoProxy == null)
            return actionURL;
        else
            return prefijoProxy + actionURL;
    }

    public String getResourceURL(FacesContext context, String path)
    {
        return defaultHandler.getResourceURL(context, path);
    }

    public void renderView(FacesContext context, UIViewRoot viewToRender)
        throws IOException, FacesException
    {
        defaultHandler.renderView(context, viewToRender);
    }

    public UIViewRoot restoreView(FacesContext context, String viewId)
    {
        return defaultHandler.restoreView(context, viewId);
    }

    public void writeState(FacesContext context)
        throws IOException
    {
        defaultHandler.writeState(context);
    }

}

In JSF 1.2 we can extend ViewHandlerWrapper instead, which saves us from implementing every method (a sketch of that variant is shown at the end of this post). The path returned by the getActionURL method is composed of a prefix (in our example, /srv3), defined by the es.acme.faces.PROXY_PREFIX parameter, to which the original URL (which is relative to the application server) is appended. The value of this parameter is defined in the web.xml file:

<web-app>
    ...
    <context-param>
        <param-name>es.acme.faces.PROXY_PREFIX</param-name>
        <param-value>/srv3</param-value>
    </context-param>
    ...
</web-app>

Now the only step left is to declare in the faces-config.xml file that our class is the one we want to use as the view handler:

<faces-config>
    ...
    <application>
        <view-handler>es.acme.faces.AcmeViewHandler</view-handler>
    </application>
    ...
</faces-config>
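
As mentioned above, under JSF 1.2 the same idea can be written more compactly by extending ViewHandlerWrapper, so only getActionURL has to be overridden. This is a hedged sketch (class name and structure assumed, not taken from a tested project):

import javax.faces.application.ViewHandler;
import javax.faces.application.ViewHandlerWrapper;
import javax.faces.context.FacesContext;

public class AcmeViewHandler12 extends ViewHandlerWrapper
{
    private final ViewHandler defaultHandler;

    public AcmeViewHandler12(ViewHandler defaultHandler)
    {
        this.defaultHandler = defaultHandler;
    }

    // Every method we don't override is delegated to this handler by ViewHandlerWrapper.
    public ViewHandler getWrapped()
    {
        return defaultHandler;
    }

    public String getActionURL(FacesContext context, String viewId)
    {
        String actionURL = defaultHandler.getActionURL(context, viewId);
        String prefijoProxy = context.getExternalContext().getInitParameter("es.acme.faces.PROXY_PREFIX");

        return (prefijoProxy == null) ? actionURL : prefijoProxy + actionURL;
    }
}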