JSON Processing (JSON-P)

JSON (JavaScript Object Notation) is a compact text format for storing and transferring data. It has become very popular over the years, as it is very simple to read and parse. Beyond that, each JSON construct should be valid JavaScript, so it should be possible to evaluate it using JavaScript’s eval() function. The latter has made it popular in the web developer community, as JSON data received from the server can easily be evaluated on the client side.
The popularity of JSON has created the demand to use it on the server side as well. Over time, different Java libraries have been developed that support reading and writing JSON data (e.g. google-gson, flexjson or Jackson). Hence it was only a question of time until JSON processing became part of the Java EE specification.
In the last article I shed some light on the new concurrency features of Java EE 7. In this article we want to explore the new JSON Processing feature.
After these introductory words it is time to put our hands on some code. The following code shows how to write a very simple JSON file:

JsonObject model = Json.createObjectBuilder()
		.add("firstName", "Martin")
		.add("phoneNumbers", Json.createArrayBuilder()
				.add(Json.createObjectBuilder()
						.add("mobile", "1234 56789"))
				.add(Json.createObjectBuilder()
						.add("home", "2345 67890")))
		.build();
try (JsonWriter jsonWriter = Json.createWriter(new FileWriter(Paths.get(System.getProperty("user.dir"), "target/myData.json").toString()))) {
	jsonWriter.write(model);
} catch (IOException e) {
	LOGGER.severe("Failed to create file: " + e.getMessage());
}

The builder API makes it easy to append the next call to the previous one. I still need to get used to the fact that you have to call createArrayBuilder() or createObjectBuilder() every time you want to create an array or an object.
When reading a JSON file, you have to determine the type of the structure you are currently reading by calling getValueType():

try (JsonReader jsonReader = Json.createReader(new FileReader(Paths.get(System.getProperty("user.dir"), "target/myData.json").toString()))) {
	JsonStructure jsonStructure = jsonReader.read();
	JsonValue.ValueType valueType = jsonStructure.getValueType();
	if (valueType == JsonValue.ValueType.OBJECT) {
		JsonObject jsonObject = (JsonObject) jsonStructure;
		JsonValue firstName = jsonObject.get("firstName");
		...
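
For completeness, a hedged sketch of how the values could then be read from the parsed object; JsonObject extends Map<String, JsonValue> and offers typed accessors like getString() and getJsonArray():

String firstName = jsonObject.getString("firstName");
JsonArray phoneNumbers = jsonObject.getJsonArray("phoneNumbers");
for (JsonValue phoneNumber : phoneNumbers) {
	// Each array entry is itself a JsonObject with a single key like "mobile".
	for (Map.Entry<String, JsonValue> entry : ((JsonObject) phoneNumber).entrySet()) {
		LOGGER.info(entry.getKey() + ": " + entry.getValue());
	}
}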

Functionality to map a complete JSON file to an existing structure of Java classes (something like JAXB for XML files) is not part of the standard. Instead, JSON-P offers a complete streaming API for reading huge amounts of JSON data:

try (FileWriter writer = new FileWriter(Paths.get(System.getProperty("user.dir"), "target/myStream.json").toString())) {
	JsonGenerator gen = Json.createGenerator(writer);
	gen
	.writeStartObject()
		.write("firstName", "Martin")
		.writeStartArray("phoneNumbers")
			.writeStartObject()
				.write("mobile", 123456789)
				.write("home", "2345 67890")
			.writeEnd()
		.writeEnd()
	.writeEnd();
	gen.close();
} catch (IOException e) {
	LOGGER.severe("Failed to write to file: " + e.getMessage());
}

The code for the streaming API reads very similarly to the code for the normal API. The main difference is that you have to mark the end of an array or object with writeEnd().
Reading a JSON file looks very similar to the [Streaming API for XML (StAX)](http://stax.codehaus.org/Home):

try (FileReader fileReader = new FileReader(Paths.get(System.getProperty("user.dir"), "target/myStream.json").toString())) {
	JsonParser parser = Json.createParser(fileReader);
	while (parser.hasNext()) {
		JsonParser.Event event = parser.next();
		switch (event) {
			case START_OBJECT:
				LOGGER.info("{");
				break;
			case END_OBJECT:
				LOGGER.info("}");
				break;
...
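
To give a more complete picture, a hedged sketch of an event loop that also extracts the actual data; KEY_NAME and the VALUE_* events are the ones that carry key names and values:

while (parser.hasNext()) {
	switch (parser.next()) {
		case KEY_NAME:
			LOGGER.info("key: " + parser.getString());
			break;
		case VALUE_STRING:
			LOGGER.info("string value: " + parser.getString());
			break;
		case VALUE_NUMBER:
			LOGGER.info("number value: " + parser.getLong());
			break;
		default:
			// structural events like START_OBJECT/END_OBJECT carry no data
			break;
	}
}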

Having just said that the above code looks very similar to StAX code, let’s compare both implementations in terms of file size and runtime. The following diagram shows the average execution times for writing and reading a JSON/XML file with 100,000 entries.
[Chart: average execution times for writing and reading a file with 100,000 entries using JSON-P and StAX]
One entry consists of a simple object with a name and an array of two telephone numbers:

{"firstName":"Martin","phoneNumbers":[{"mobile":123456789,"home":"2345 67890"}]}

This is how it looks in XML:

<firstName>Martin</firstName><phoneNumbers><mobile>123456789</mobile><home>234567890</home></phoneNumbers>

It is surprising that the StAX implementation is so much slower when writing big files. When we take a closer look at the file size, we see that the XML file is about 1.31 times bigger than the JSON file, although it contains the same data.

Conclusion: JSON-P provides a standardized API for JSON processing together with a StAX-like streaming API. Although it still lacks a binding to a Java class structure, it performs even better than the StAX implementation while producing a more compact file.

Source code can be found on my github repository.

PS: The implementations used in the comparison are listed below:

	<dependency>
		<groupId>org.glassfish</groupId>
		<artifactId>javax.json</artifactId>
		<version>1.0</version>
	</dependency>
	<dependency>
		<groupId>stax</groupId>
		<artifactId>stax</artifactId>
		<version>1.2.0</version>
	</dependency>

Using Java EE’s ManagedExecutorService to asynchronously execute transactions

A year has passed since the Java EE 7 specification was published. Now that WildFly 8 Final has been released, it is time to take a closer look at the new features.

One thing that had been missing since the beginning of the Java EE days is the ability to work with fully-fledged Java EE threads. Java EE 6 already brought us the @Asynchronous annotation, with which we could execute single methods in the background, but a real thread pool was still out of reach. All this is now history, as Java EE 7 introduced the ManagedExecutorService:

@Resource
ManagedExecutorService managedExecutorService;

Like the well-known ExecutorService from the Standard Edition, the ManagedExecutorService can be used to submit tasks that are executed within a thread pool. One can choose whether the submitted tasks implement the Runnable or the Callable interface.
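
A Callable, for example, returns a Future through which the result can be retrieved once the task has finished. A minimal sketch (exception handling omitted for brevity):

Future<String> future = managedExecutorService.submit(new Callable<String>() {
    @Override
    public String call() {
        return "computed within a managed thread";
    }
});
String result = future.get(); // blocks until the task has finished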

In contrast to a normal SE ExecutorService instance, the ManagedExecutorService provides threads that can, for example, access UserTransactions from JNDI in order to execute JPA transactions during their work. This feature is a huge difference to threads started in a plain SE environment.

It is important to know that the transactions started within the ManagedExecutorService’s thread pool run outside the scope of the transaction of the thread that submits the tasks. This makes it possible to implement scenarios in which the submitting thread inserts some information about the started tasks into the database while the long-running tasks execute their work within independent transactions.

Now that we have learned something about the theory, let’s put our hands on some code. First we write a @Stateless EJB that gets the ManagedExecutorService injected:

@Stateless
public class MyBean {
    @Resource
    ManagedExecutorService managedExecutorService;
    @PersistenceContext
    EntityManager entityManager;
    @Inject
    Instance<MyTask> myTaskInstance;

    public void executeAsync() throws ExecutionException, InterruptedException {
        for(int i=0; i<10; i++) {
            MyTask myTask = myTaskInstance.get();
            this.managedExecutorService.submit(myTask);
        }
    }

    public List<MyEntity> list() {
        return entityManager.createQuery("select m from MyEntity m", MyEntity.class).getResultList();
    }
}

The tasks that we will submit to the ManagedExecutorService are retrieved from CDI’s Instance mechanism. This lets us use the power of CDI within our MyTask class:

public class MyTask implements Runnable {
    private static final Logger LOGGER = LoggerFactory.getLogger(MyTask.class);
    @PersistenceContext
    EntityManager entityManager;

    @Override
    public void run() {
        UserTransaction userTransaction = null;
        try {
            userTransaction = lookup();
            userTransaction.begin();
            MyEntity myEntity = new MyEntity();
            myEntity.setName("name");
            entityManager.persist(myEntity);
            userTransaction.commit();
        } catch (Exception e) {
            try {
                if(userTransaction != null) {
                    userTransaction.rollback();
                }
            } catch (SystemException e1) {
                LOGGER.error("Failed to rollback transaction: "+e1.getMessage());
            }
        }
    }

    private UserTransaction lookup() throws NamingException {
        InitialContext ic = new InitialContext();
        return (UserTransaction)ic.lookup("java:comp/UserTransaction");
    }
}

Here we can inject the EntityManager to persist some entities into our database. The UserTransaction that we need for the commit has to be retrieved from JNDI; an injection using the @Resource annotation is not possible within a normal managed bean.

To avoid the manual UserTransaction handling, we could of course call the method of another EJB and use that EJB’s transaction to commit the changes to the database. The following code shows an alternative implementation using an injected EJB to persist the entity:

public class MyTask implements Runnable {
    private static final Logger LOGGER = LoggerFactory.getLogger(MyTask.class);
    @PersistenceContext
    EntityManager entityManager;
    @Inject
    MyBean myBean;

    @Override
    public void run() {
        MyEntity myEntity = new MyEntity();
        myBean.persist(myEntity);
    }
}
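
The persist() method called above is not part of the MyBean listing shown before. A minimal sketch of how it could look; as a method of the @Stateless EJB it runs within a container-managed transaction, which is started automatically if none is active:

public void persist(MyEntity myEntity) {
    // The container commits the transaction when the method returns.
    entityManager.persist(myEntity);
}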

Now we only need to utilize JAX-RS to call the functionality over a REST interface:

@Path("/myResource")
public class MyResource {
    @Inject
    private MyBean myBean;

    @Path("list")
    @GET
    @Produces("text/json")
    public List<MyEntity> list() {
        return myBean.list();
    }

    @Path("persist")
    @GET
    @Produces("text/html")
    public String persist() throws ExecutionException, InterruptedException {
        myBean.executeAsync();
        return "<html><h1>Successful!</h1></html>";
    }
}

That’s it. With these few lines of code we have implemented a fully working Java EE application whose functionality can be called over a REST interface and that executes its core functionality asynchronously within worker threads with their own transactions.

Conclusion: The ManagedExecutorService is a great feature for integrating asynchronous functionality into enterprise applications while using all the standard Java EE features like JPA and transactions. I would say the wait was worthwhile.

Example source code can be found on github.

Injecting configuration values using CDI’s InjectionPoint

Dependency injection is a great technology for the organization of class dependencies. All class instances you need in your current class are provided at runtime by the DI container. But what about your configuration?

Of course, you can create a “Configuration” class and inject this class everywhere you need it and get the necessary value(s) from it. But CDI lets you do this even more fine-grained using the InjectionPoint concept.

If you write a @Produces method, you can let your CDI container inject some information about the code into which the newly created/produced value is injected. A complete list of the available methods can be found in the javadoc of the InjectionPoint interface. The interesting point is that you can query this class for all the annotations the current injection point has:

Annotated annotated = injectionPoint.getAnnotated();
ConfigurationValue annotation = annotated.getAnnotation(ConfigurationValue.class);

As the example code above shows, we can introduce a simple @Qualifier annotation that marks all the injection points where we need a specific configuration value. In this blog post we just want to use strings as configuration values, but the whole concept can of course be extended to other data types as well. The already mentioned @Qualifier annotation looks like the following one:

@Target({ElementType.FIELD, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Qualifier
public @interface ConfigurationValue {
	@Nonbinding ConfigurationKey key();
}

public enum ConfigurationKey {
	DefaultDirectory, Version, BuildTimestamp, Producer
}

The annotation of course has the retention policy RUNTIME because the CDI container has to evaluate it while the application is running. It can be used for fields and methods. Beyond that, we also create a key attribute, which is backed by the enum ConfigurationKey. Here we can introduce all configuration values we need; in our example these are, among others, a configuration value for a default directory and one for the version of the program. We mark this attribute as @Nonbinding to prevent the CDI container from using the value of this attribute to choose the correct producer method. If we did not use @Nonbinding, we would have to write a @Produces method for each value of the enum. But here we want to handle all values within one method.

The @Produces method for strings that are annotated with @ConfigurationKey is shown in the following code example:

@Produces
@ConfigurationValue(key=ConfigurationKey.Producer)
public String produceConfigurationValue(InjectionPoint injectionPoint) {
	Annotated annotated = injectionPoint.getAnnotated();
	ConfigurationValue annotation = annotated.getAnnotation(ConfigurationValue.class);
	if (annotation != null) {
		ConfigurationKey key = annotation.key();
		if (key != null) {
			switch (key) {
				case DefaultDirectory:
					return System.getProperty("user.dir");
				case Version:
					return JB5n.createInstance(Configuration.class).version();
				case BuildTimestamp:
					return JB5n.createInstance(Configuration.class).timestamp();
			}
		}
	}
	throw new IllegalStateException("No key for injection point: " + injectionPoint);
}

The @Produces method gets the InjectionPoint injected as a parameter so that we can inspect its values. As we are interested in the annotations of the injection point, we check whether the current injection point is annotated with @ConfigurationValue. If this is the case, we look at the @ConfigurationValue’s key attribute and decide which value to return. That’s it. In a more complex application we could of course load the configuration from files or some other kind of data store, but the concept remains the same.
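
As a sketch of such an extension, the switch statement could be replaced by a lookup in a properties file. The file name configuration.properties and the use of the enum names as property keys are assumptions made for this example:

private static final Properties CONFIGURATION = new Properties();

static {
	try (InputStream input = Configuration.class.getResourceAsStream("/configuration.properties")) {
		if (input == null) {
			throw new IllegalStateException("configuration.properties not found on classpath");
		}
		CONFIGURATION.load(input);
	} catch (IOException e) {
		throw new IllegalStateException("Failed to load configuration: " + e.getMessage(), e);
	}
}

@Produces
@ConfigurationValue(key = ConfigurationKey.Producer)
public String produceConfigurationValueFromFile(InjectionPoint injectionPoint) {
	ConfigurationValue annotation = injectionPoint.getAnnotated().getAnnotation(ConfigurationValue.class);
	if (annotation != null) {
		// The enum name serves as the key within the properties file.
		return CONFIGURATION.getProperty(annotation.key().name());
	}
	throw new IllegalStateException("No key for injection point: " + injectionPoint);
}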

Now we can easily let the CDI container inject the configuration values we need simply with these two lines of code:

    @Inject @ConfigurationValue(key = ConfigurationKey.DefaultDirectory)
    private String defaultDirectory;

Conclusion: Making a set of configuration values accessible throughout the whole application has never been easier.

Implementing dynamic proxies – a comparison

Sometimes there is the need to intercept certain method calls in order to execute your own logic every time the intercepted method is called. If you are not within Java EE’s CDI world and don’t want to use AOP frameworks like AspectJ, you have a simple and similarly effective alternative.

Since version 1.3 the JDK comes with the class java.lang.reflect.Proxy, which allows you to create a dynamic proxy for a given interface. The InvocationHandler that sits behind the dynamically created class is called every time the application invokes a method on the proxy. Hence you can control dynamically what code is executed before the code of some framework or library is called.

Next to the JDK’s Proxy implementation, bytecode frameworks like javassist or cglib offer similar functionality. Here you can even subclass an existing class and decide which methods you want to forward to the superclass’s implementation and which methods you want to intercept. This of course comes with the burden of another library your project depends on, one that may have to be updated from time to time, whereas the JDK’s Proxy implementation is already included in the runtime environment.

So let’s take a closer look and try these three alternatives out. In order to compare javassist’s and cglib’s proxies with the JDK implementation, we need an interface that is implemented by a simple class, because the JDK mechanism only supports interfaces and no subclassing:

public interface IExample {
    void setName(String name);
}

public class Example implements IExample {
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

In order to delegate the method calls on the proxy to some real object, we create an instance of the Example class above and call it within the InvocationHandler via a variable declared final:

final Example example = new Example();
InvocationHandler invocationHandler = new InvocationHandler() {
	@Override
	public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
		return method.invoke(example, args);
	}
};
return (IExample) Proxy.newProxyInstance(JavaProxy.class.getClassLoader(), new Class[]{IExample.class}, invocationHandler);

As you can see from the code sample, the creation of a proxy is rather simple: call the static method newProxyInstance() and provide a ClassLoader, an array of interfaces that should be implemented by the proxy as well as an instance of the InvocationHandler interface. For the sake of demonstration, our implementation only forwards the call to the instance of Example we have created before. But in real life you can of course perform more advanced operations that evaluate, for example, the method’s name or its arguments.
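
As a small sketch of such an extension, the following handler logs the name of each invoked method together with its execution time before delegating to the Example instance:

final Example example = new Example();
InvocationHandler timingHandler = new InvocationHandler() {
	@Override
	public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
		long start = System.nanoTime();
		try {
			return method.invoke(example, args);
		} finally {
			// Log the invoked method's name and how long the call took.
			System.out.println(method.getName() + " took " + (System.nanoTime() - start) + " ns");
		}
	}
};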

Now we take a look at the way the same is done using javassist:

ProxyFactory factory = new ProxyFactory();
factory.setSuperclass(Example.class);
Class aClass = factory.createClass();
final IExample newInstance = (IExample) aClass.newInstance();
MethodHandler methodHandler = new MethodHandler() {
	@Override
	public Object invoke(Object self, Method overridden, Method proceed, Object[] args) throws Throwable {
		return proceed.invoke(newInstance, args);
	}
};
((ProxyObject)newInstance).setHandler(methodHandler);
return newInstance;

Here we have a ProxyFactory that wants to know for which class it should create a subclass. Then we let the ProxyFactory create a whole class that can be reused as many times as necessary. The MethodHandler is, analogous to the InvocationHandler, the one that gets called for each method invocation on the instance. Here we forward the call to the superclass’s implementation via the proceed method.

Last but not least let’s take a look at cglib’s proxy:

final Example example = new Example();
IExample exampleProxy = (IExample) Enhancer.create(IExample.class, new MethodInterceptor() {
	@Override
	public Object intercept(Object object, Method method, Object[] args, MethodProxy methodProxy) throws Throwable {
		return method.invoke(example, args);
	}
});
return exampleProxy;

In the cglib world we have an Enhancer class that we can use to implement a given interface with a MethodInterceptor instance. The implementation of the callback method looks very similar to the one in the javassist example: we just forward the method call via the reflection API to the already existing instance of Example.

Now that we have seen three different implementations, we also want to evaluate their runtime behavior. Therefore we write a simple unit test that measures the execution time of each of these implementations:

@Test
public void testPerformance() {
	final IExample example = JavaProxy.createExample();
	long measure = TimeMeasurement.measure(new TimeMeasurement.Execution() {
		@Override
		public void execute() {
			for (long i = 0; i < JavassistProxyTest.NUMBER_OF_ITERATIONS; i++) {
				example.setName("name");
			}
		}
	});
	System.out.println("Proxy: "+measure+" ms");
}
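
The TimeMeasurement class used above is a small helper of the example project, not a JDK class. A minimal sketch of how it might be implemented:

public class TimeMeasurement {
	public interface Execution {
		void execute();
	}

	public static long measure(Execution execution) {
		long start = System.currentTimeMillis();
		execution.execute();
		// Elapsed wall-clock time in milliseconds.
		return System.currentTimeMillis() - start;
	}
}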

We choose a huge number of iterations in order to stress the JVM and to let the HotSpot compiler create native code for the often-executed passages. The following chart shows the average runtime of the three implementations:

[Chart: average execution times of the JDK, javassist and cglib proxies compared to a plain method invocation]

To show the impact of a proxy implementation at all, the chart also shows the execution times for the standard invocation of the method on the Example object (“No proxy”). First of all we can put on record that the proxy implementations are about 10 times slower than the plain invocation of the method itself. But we also notice differences between the three proxy solutions. The JDK’s Proxy class is surprisingly nearly as fast as the cglib implementation. Only javassist falls behind, with about twice the execution time of cglib.

Conclusion: Runtime proxies are easy to use, and you have different ways of creating them. The JDK’s Proxy only supports proxies for interfaces, whereas javassist and cglib allow you to subclass existing classes. A proxy is about 10 times slower than a standard method invocation, and the three solutions themselves also differ in runtime.

Securing a JSF application with Java EE security and JBoss AS 7.x

A common requirement for enterprise applications is to have all JSF pages protected behind a login page. Sometimes you even want protected areas inside the application that are only accessible to users who own a specific role. The Java EE standards come with all the means you need to implement a web application that is protected by such security constraints. In this blog post we want to develop a simple application that demonstrates the usage of these means and shows how you can build a complete JSF application for two different roles. Although the solution might look straightforward at first glance, there are a few pitfalls one has to pay attention to.

The first point we have to care about is the folder layout of our application. We have three different kinds of pages:

  • The login page and the error page for the login should be accessible by all users.
  • We have a home page that should only be accessible for authenticated users.
  • We have a protected page that should only be visible for users of the role protected-role.

These three types of pages are therefore put into three different folders: the root folder src/main/webapp, the folder src/main/webapp/pages and the protected page resides in src/main/webapp/pages/protected:

[Screenshot: folder layout of the project in the IDE]

The web.xml file is the place to define which roles we want to use and how to map the accessibility of some URL pattern to these roles:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>pages</web-resource-name>
        <url-pattern>/pages/*</url-pattern>
        <http-method>PUT</http-method>
        <http-method>DELETE</http-method>
        <http-method>GET</http-method>
        <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
        <role-name>security-role</role-name>
        <role-name>protected-role</role-name>
    </auth-constraint>
</security-constraint>
<security-constraint>
    <web-resource-collection>
        <web-resource-name>protected</web-resource-name>
        <url-pattern>/pages/protected/*</url-pattern>
        <http-method>PUT</http-method>
        <http-method>DELETE</http-method>
        <http-method>GET</http-method>
        <http-method>POST</http-method>
    </web-resource-collection>
    <auth-constraint>
        <role-name>protected-role</role-name>
    </auth-constraint>
</security-constraint>

<security-role>
    <role-name>security-role</role-name>
</security-role>
<security-role>
    <role-name>protected-role</role-name>
</security-role>

As you can see, we define two roles: security-role and protected-role. URLs matching the pattern /pages/* are only accessible by users who own the role security-role or protected-role, whereas the pages under /pages/protected/* are restricted to users with the role protected-role.

Another point you may stumble upon is the welcome page. At first guess you would want to specify the login page as the welcome page. But this does not work, as the login module of the servlet container automatically redirects all unauthorized accesses to the login page. Therefore we specify the home page of our application as the welcome page. This is already a protected page, but the user will be redirected to the login page automatically when he calls its URL directly.

<welcome-file-list>
    <welcome-file>pages/home.xhtml</welcome-file>
</welcome-file-list>

Now we are nearly done with the web.xml file. All we have to do is define the authentication method as well as the login page and the error page, which is shown in case the user enters invalid credentials. One has to pay attention that both pages don’t include protected URLs (e.g. CSS or JavaScript files), otherwise even the access to these two pages is forbidden and the user gets an application-server-specific error page to see.

<login-config>
    <auth-method>FORM</auth-method>
    <form-login-config>
        <form-login-page>/login.xhtml</form-login-page>
        <form-error-page>/error.xhtml</form-error-page>
    </form-login-config>
</login-config>

As we are going to deploy the application to the JBoss Application Server, we provide a file named jboss-web.xml that connects our application to a security-domain:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
    <security-domain>java:/jaas/other</security-domain>
</jboss-web>

The “other” security-domain is configured inside the standalone.xml. The default configuration requires that the user passes the “RealmUsersRoles” login module, which gets its user and role definitions from the two files application-users.properties and application-roles.properties inside the configuration folder. You can use the provided add-user script to add a new user to this realm:

What type of user do you wish to add?
 a) Management User (mgmt-users.properties)
 b) Application User (application-users.properties)
(a): b

Enter the details of the new user to add.
Realm (ApplicationRealm) :
Username : bart
Password :
Re-enter Password :
What roles do you want this user to belong to? (Please enter a comma separated list, or leave blank for none) : security-role,protected-role
About to add user 'bart' for realm 'ApplicationRealm'
Is this correct yes/no? yes

Here it is important to choose the correct realm (ApplicationRealm), as this realm is configured by default in the standalone.xml for the “other” login module. This is also the place where you provide the roles the user possesses as a comma-separated list.

The next step is to implement a simple login form that submits its data to the login module:

<form method="POST" action="j_security_check" id="">
	<h:panelGrid id="panel" columns="2" border="1" cellpadding="4" cellspacing="4">
		<h:outputLabel for="j_username" value="Username:" />
		<input type="text" name="j_username"/>
		<h:outputLabel for="j_password" value="Password:" />
		<input type="password" name="j_password"/>
		<h:panelGroup>
			<input type="submit" value="Login"/>
		</h:panelGroup>
	</h:panelGrid>
</form>

Note the names of the input fields (j_username and j_password) as well as the action of the form (j_security_check). This way the form is posted to the login module, which extracts the entered username and password from the request. JSF developers may wonder why we use a standard HTML form instead of the <h:form> element. The reason for this is that the JSF form element spans its own namespace and therefore prefixes the ids of its input fields with the id of the form (and this form id cannot be empty).

Once the user has passed the login form, we present him a home page. But the link to the protected page should only be visible to users who own the role protected-role. This can be accomplished with the following rendered condition:

<h:link value="Protected page" outcome="protected/protected" rendered="#{facesContext.externalContext.isUserInRole('protected-role')}"/>

Last but not least we need the logout functionality. For this we implement a simple backing bean like the following one, which invalidates the user’s session and redirects him back to the login page:

@Named(value = "login")
public class Login {

    public String logout() {
        FacesContext.getCurrentInstance().getExternalContext().invalidateSession();
        return "/login";
    }
}
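
On the home page this method can then be triggered, for example, with a simple command button. A sketch; here we can use <h:form>, as this form posts back to JSF and not to the login module:

<h:form>
    <h:commandButton value="Logout" action="#{login.logout}"/>
</h:form>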

As usual the complete source code can be found on github.

Building and testing a websocket server with undertow

The upcoming version of the JBoss Application Server will no longer use Tomcat as its integrated webserver, but will replace it with undertow. The architecture of undertow is based on handlers that can be added dynamically to the server via a Builder API. This approach is similar to the way a webserver is constructed in Node.js. It allows developers to easily embed the undertow webserver into their applications. As the addition of features is done via the Builder API, one can add only the features that are really required in one’s application. Beyond that, undertow supports WebSockets and the Servlet API in version 3.1. It can be run as a blocking or non-blocking server, and it is said that first tests have shown undertow to be the fastest webserver written in Java.

All of this sounds very promising, so let’s try to set up a simple websocket server. As usual we start by creating a simple Java project and adding the undertow Maven dependency:

<dependency>
  <groupId>io.undertow</groupId>
  <artifactId>undertow-core</artifactId>
  <version>1.0.0.Beta20</version>
</dependency>

With undertow’s Builder API our buildAndStartServer() method looks like this:

public void buildAndStartServer(int port, String host) {
    server = Undertow.builder()
            .addListener(port, host)
            .setHandler(getWebSocketHandler())
            .build();
    server.start();
}

We just add a listener that specifies the port and host to listen on for incoming connections and afterwards set a websocket handler. As the websocket handler code is a little more comprehensive, I have put it into its own method:

private PathHandler getWebSocketHandler() {
    return path().addPath("/websocket", websocket(new WebSocketConnectionCallback() {
        @Override
        public void onConnect(WebSocketHttpExchange exchange, WebSocketChannel channel) {
            channel.getReceiveSetter().set(new AbstractReceiveListener() {
                @Override
                protected void onFullTextMessage(WebSocketChannel channel, BufferedTextMessage message) {
                    String data = message.getData();
                    lastReceivedMessage = data;
                    LOGGER.info("Received data: "+data);
                    WebSockets.sendText(data, channel, null);
                }
            });
            channel.resumeReceives();
        }
    }))
    .addPath("/", resource(new ClassPathResourceManager(WebSocketServer.class.getClassLoader(), WebSocketServer.class.getPackage()))
            .addWelcomeFiles("index.html"));
}

Let’s go through this code snippet line by line. First of all we add a new path: /websocket. The second argument of the addPath() method lets us specify what kind of protocol we want to use for this path. In our case we create a new WebSocket. The anonymous implementation has an onConnect() method in which we set an implementation of AbstractReceiveListener. Here we have the convenient method onFullTextMessage() that is called when a client has sent us a text message. A call of getData() fetches the actual message we have received. In this simple example we just echo this string back to the client to validate that the roundtrip from the client to the server and back works.

To perform some simple manual tests we also add a second resource under the path /, which serves some static HTML and JavaScript files. The directory that contains these files is given as an instance of ClassPathResourceManager. The call of addWelcomeFiles() tells undertow which file to serve when the client asks for the path /.

The index.html looks like this:

<html>
<head><title>Web Socket Test</title></head>
<body>
  <script src="jquery-2.0.3.min.js"></script>
  <script src="jquery.gracefulWebSocket.js"></script>
  <script src="websocket.js"></script>
  <form onsubmit="return false;">
    <input type="text" name="message" value="Hello, World!"/>
    <input type="button" value="Send Web Socket Data" onclick="send(this.form.message.value)"/>
  </form>
  <div id="output"></div>
</body>
</html>

Our JavaScript code is swapped out to the file websocket.js. We use jQuery and the jQuery plugin gracefulWebSocket to ease the client-side development:

var ws = $.gracefulWebSocket("ws://127.0.0.1:8080/websocket");
ws.onmessage = function(event) {
    var messageFromServer = event.data;
    $('#output').append('Received: ' + messageFromServer);
};

function send(message) {
    ws.send(message);
}

After having created a WebSocket object by calling $.gracefulWebSocket(), we can register a callback function for incoming messages. In this function we only append the message string to the DOM of the page. The send() method is just a call to gracefulWebSocket’s send() method.

When we now start our application and open the URL http://127.0.0.1:8080/ in our web browser, we see the following page:
[Screenshot: test page with a text input and the “Send Web Socket Data” button]
Entering some string and hitting the “Send Web Socket Data” button sends the message to the server, which in response echoes it back to the client.

Now that we know that everything works as expected, we want to protect our code against regressions with a JUnit test case. As the websocket client I have chosen the library jetty-websocket:

<dependency>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-websocket</artifactId>
    <version>8.1.0.RC5</version>
    <scope>test</scope>
</dependency>

In the test case we build and start the websocket server and open a new connection to the websocket port. The WebSocket implementation of jetty-websocket allows us to implement two callback methods for the open and close events. Within the open callback we send the test message to the server. The rest of the code waits for the connection to be established, closes it and asserts that the server has received the message:

@Test
public void testStartAndBuild() throws Exception {
    subject = new WebSocketServer();
    subject.buildAndStartServer(8080, "127.0.0.1");
    WebSocketClient client = new WebSocketClient();
    Future connectionFuture = client.open(new URI("ws://localhost:8080/websocket"), new WebSocket() {
        @Override
        public void onOpen(Connection connection) {
            LOGGER.info("onOpen");
            try {
                connection.sendMessage("TestMessage");
            } catch (IOException e) {
                LOGGER.error("Failed to send message: "+e.getMessage(), e);
            }
        }
        @Override
        public void onClose(int i, String s) {
            LOGGER.info("onClose");
        }
    });
    WebSocket.Connection connection = connectionFuture.get(2, TimeUnit.SECONDS);
    assertThat(connection, is(notNullValue()));
    connection.close();
    subject.stopServer();
    Thread.sleep(1000);
    assertThat(subject.lastReceivedMessage, is("TestMessage"));
}
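
The stopServer() method invoked in the test is the counterpart to buildAndStartServer(). A minimal sketch, assuming the Undertow instance is kept in the server field:

public void stopServer() {
    server.stop();
}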

As usual you can find the source code on github.

Conclusion: Undertow’s Builder API makes it easy to construct a websocket server, and more generally an embedded webserver that fits your needs. This also eases automated testing, as you do not need any specific Maven plugin that starts and stops your server before and after your integration tests. Beyond that, the jQuery plugin jquery-graceful-websocket lets you send and receive messages over websockets with only a few lines of code.

Analyze package dependencies with structure101

One key to a stable application is a well-structured codebase. We know that we should build as many black boxes as possible, because as soon as one black box is finished, we no longer have to think about its interior. You just use the code you or another team member has written through a well-defined interface. This gives you the possibility to concentrate on the next feature you want to add.

When we think about black boxes, we often have classes or whole jar packages in mind. Classes should of course be black boxes, no discussion about that. The same is true for jar packages. But in between classes and jar packages there is another level of structure that is often not seen as directly as a black box: packages.

Packages are often second-class citizens, and their interrelationships are not analyzed as thoroughly. But there is a great tool for such an analysis: Structure101. In general it helps you to monitor and verify the dependency structures and the complexity of your project by means of well-organized diagrams.

So let’s start with a sample project. I have taken one of my own projects for this: japicmp is a tool to compute the differences between the APIs of two jar archives in terms of which classes and methods have changed. structure101 has a great composition view, which shows you the dependencies between the packages of a project. This is how it looks for the current version of japicmp:

[Structure101 composition view of japicmp’s packages]

Clearly we can see, for example, that the cli package, which is responsible for the command-line parsing, uses the exception as well as the config package and is itself used by the main package, where the main() method resides. With the cli package everything seems to be OK. But what about the three packages cmp, util and model? The difference computation between the classes and methods, i.e. the business logic, resides in the package cmp. Hence it should use the model as well as the util package, but these two packages should not have any backward dependencies on it. This problem also shows up in the matrix view:

[Structure101 matrix view showing the tangle between the packages cmp, util and model]

When we take a closer look at the tangle between these three packages, we see that the class AccessModifier, which is located in the cmp package, is used by the util package:

[Structure101 dependency view: AccessModifier in the cmp package is used by the util package]

Beyond that, this class is also used in the model. This clearly indicates that the class should rather live in the model package than in the cmp package. That makes sense, as the access modifiers of a class or method are part of the model of a jar archive and do not belong to the business logic. If we move this class to the model package, we get the following result:

[Structure101 composition view after moving AccessModifier to the model package]

This looks much better. We no longer have any tangles within the package structure. The nice layout also shows clearly that the whole application depends on the model, as that package is located at the bottom of the diagram. The business logic, which resides within cmp, is called from the main package and uses util, config and the model, as it should be. The same is true for the output package, where the implementations for the cli and xml output reside. This package uses the config as well as the model, once the latter has been computed.

Conclusion: Packages should not be second-class citizens; they should help you to structure your application such that it is easy to get an overview of the code and to separate functionality. Tools like structure101 help you to analyze the dependencies between packages and thus make packages an important level of structure between the jar on the one side and the class on the other.
