2009-12-23

Fajr Center for Arabic Language - How to get there

In three days, on the 26th of December, the Intensive Winter Program starts at the Fajr Center for the Arabic Language in Medinet Nasr (Nasser City) in Cairo (fajr.com). This will be my second Intensive Winter Program at Fajr; two years ago I studied there too, at a lower level. In a small series of blog posts I will provide some information for future students, because, although the teaching of the language is good, a lot of other things aren't quite professional. In the evaluation form after the previous course I mentioned several points, and none of these seem to have changed.

I'm studying in the Nasser City branch, so all information on this page refers to that branch.

How to reach them
They have an e-mail address on their website, but they rarely respond to mail. Calling them is the better option. I only ever got someone on the phone by calling the second phone number.

How to get there
The route to Fajr on their site is currently confusing, outdated and simply wrong. Getting there by taxi, using the information they provide, is almost impossible. They mention a Pizza Hut nearby. Two years ago there were at least two Pizza Huts along Mostafa el Nashas Street, one near Fajr and one about two or three kilometers before it, so I ended up at the wrong place at first. The one Fajr refers to doesn't exist anymore. Coming from the city center (and I guess also from the airport) on Mostafa el Nashas ('nagas' with the 'g' pronounced somewhere between a 'g' and a 'k' in Egyptian Arabic) Street, make sure you pass Manhal School on your right-hand side. After the Total petrol station you pass a T-junction. After the El Tawheed & El Nour store (*) (lots of furniture on the pavement) you go right. On the opposite corner is the location of the former Pizza Hut. At the end of the street turn right and immediately left. The first building on the right-hand side has a plaque with 'Fajr Center for Arabic Language' on it.

(*) There are at least two El Tawheed & El Nour stores in Medinet Nasr, so be careful mentioning it to a taxi driver: you may end up at the store near Serag Mall.

Placemarks on Google maps tend to drift, but maybe this map is helpful:
http://maps.google.com/maps/ms?ie=UTF8&hl=en&msa=0&msid=108044428407275649825.00046a7288fc441af637a&ll=30.055918,31.365237&spn=0.003074,0.006899&t=h&z=17

2009-05-26

Create and startup a virtual machine with KVM under (K)Ubuntu Linux 9.04

This text describes how you can create a virtual machine with Ubuntu 8.04 server edition running under (K)Ubuntu Linux 9.04 desktop edition using kvm. There are a lot of tools to manage virtual machines under Ubuntu. I tried some of these, but in the end some simple shell scripts given to me by a colleague of mine were the best source of information.

I used commands from the scripts and a lot of information on the Internet to create a virtual machine running Ubuntu 8.04 server edition, with its own ip-address and with ssh access. The virtual machine is owned by user vosf (me) and run by user vosf. The name of the machine will be 'mugamma'. I'll explain the name Mugamma below.

Mugamma will be used for system management, running Puppet and Subversion.
It will store its data files on a NAS, so there's no need to give the machine lots of gigabytes for storage.


Why Mugamma


Host 'mugamma' is named after an enormous building in Cairo, near Tahrir Square. It's the biggest public administrative building on the African continent. As a tourist you go there for visa extensions. You need a visa extension if you stay in Egypt for more than one month plus two weeks. Going to Mugamma is an interesting experience. Last time I went to Egypt I was there for 6 1/2 weeks.

I didn't know about the 2 extra weeks, so I thought I needed the visa extension. I experienced the chaos and mostly unfriendly staff over there. Thanks to a helpful lady (some of them are okay) I learned that I was going to leave Egypt just before I needed a visa extension; one day longer in Egypt and I'd really need it. They had already provided me with two sets of forms and I had already filled in the forms. I had my photos ready and visited three or four front offices, fighting my way through crowds of shouting Egyptians and foreigners, until I learned I didn't need the visa extension. I can say I've had my share of the Mugamma experience anyway :)

The Mugamma building was a present from the Soviet Union to Egypt in the early 1950s. In the Al Ahram newspaper it was once called "Egyptian bureaucracy's answer to Kafka's Castle". Most people think it's an ugly building. Not me. I think most people find it ugly because it is a Soviet-style building and people have learned that everything from the Soviet Union is bad. Many people just do not want to admit they like it, or at least find it interesting.

Notes


I will present my steps as instructions for creating the machine.

Most output I present is a little bit different from the actual output of the commands.

Please read the Ubuntu documentation on virtualization to test if your machine supports virtualization.

I will not use the Virtual Machine Manager (libvirt and tools).

I'm not providing a list of packages you need to install.
You'll find out yourself which packages are missing.

I created the machine and a startup script in directory ~/VirtualMachines. In the instructions I tell you to do this too, but of course you must make your own choices here.

Steps


Network bridge


To give the virtual machine access to the network, we need a network bridge. This doesn't seem to work with Wi-Fi interfaces. Change file /etc/network/interfaces from:


auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

into:



auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
bridge_ports eth0
bridge_fd 9
bridge_hello 2
bridge_maxage 12
bridge_stp off

After a restart of the network (sudo /etc/init.d/networking restart) there's a network interface called 'br0' with the ip-address the 'eth0' interface had before the change. Interface 'eth0' no longer has an ip-address attached.


Make yourself member of group kvm


If there's no group 'kvm', you probably need to install the package 'kvm' first. Add yourself to the group 'kvm', either on the command line or using a GUI user management tool in (K)Ubuntu. Then, in an open terminal, you can use 'newgrp kvm' to add 'kvm' to the list of groups you're in for the current terminal session. In a new session the membership will be active automatically.


$ groups                                                                                                                                
vosf [...] admin [...]
$ newgrp kvm
$ groups
vosf [...] admin kvm [...]

Create location to store the virtual machine


In a terminal:


$ mkdir ~/VirtualMachines
$ cd ~/VirtualMachines

Create image file


$ qemu-img create -f qcow2 mugamma 2048M
$ ls -l

-rw-r--r-- 1 vosf vosf 20480 2009-05-21 13:02 mugamma

2 GB is enough for the basic OS, some extra packages, configuration and log-files. Program data is stored on a NAS. The current size of the image file is much smaller than its maximum size of 2 GB.
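The qcow2 image grows on demand, much like a sparse file. The apparent-versus-actual size effect can be illustrated with a plain sparse file (a generic sketch using truncate and du, not qemu-img):

```shell
# Create a file with an apparent size of 2 GB that occupies almost no
# disk space; qcow2 images behave similarly, growing as the guest writes.
cd "$(mktemp -d)"
truncate -s 2G disk.img
apparent=$(stat -c%s disk.img)     # 2147483648 bytes
actual=$(du -k disk.img | cut -f1) # close to 0 KB on most filesystems
echo "apparent: $apparent bytes, actual: $actual KB"
```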


Install Ubuntu


First download Ubuntu Server Edition 8.04 for 32-bit Intel processors to $HOME/path/to/ubuntu-8.04.2-server-i386.iso

Then:
$ kvm -boot d -cdrom $HOME/path/to/ubuntu-8.04.2-server-i386.iso -hda mugamma -m 128M
This starts up the guided install.

I chose:
* enter, enter, enter
* Country = 'other', then 'Netherlands'
* 'No' for keyboard layout
* enter, enter
* 'mugamma' (without quotes) for machine name
* 'Guided - use entire disk' for partitioning
* enter
* 'Yes' for writing to disk (this starts the install process and takes some time)
* 'Fred Vos' for full name of new user
* 'vosf' for account
* 'secret' for password and again for verification (or maybe something else)
* Blank for HTTP proxy question
* Selected 'OpenSSH server' as software to install
* Enter to continue to boot the OS


This is an example. Change at least the machine name, username and account for your situation.
After this it looks as if things went wrong, with the message 'FATAL: No bootable device'. Just close the Qemu window here.


Prepare for startup by user


Later I will move this virtual machine to a new physical machine and on that new machine it will be owned by and be started by root. While setting it up and experimenting, it will be owned by me and be started by me.


Create a script file called 'qemu-ifup' in the current directory ($HOME/VirtualMachines):


#!/bin/sh
set -x

switch=br0

if [ -n "$1" ]; then
    /usr/bin/sudo /usr/sbin/tunctl -u `whoami` -t $1
    /usr/bin/sudo /sbin/ip link set $1 up
    sleep 0.5s
    /usr/bin/sudo /usr/sbin/brctl addif $switch $1
    exit 0
else
    echo "Error: no interface specified"
    exit 1
fi

Then make it executable:


$ chmod +x qemu-ifup

Create a random MAC address for the machine's network interface. I want to reuse that MAC address, so my DHCP server will always hand out the same IP-address. The MAC address I will present here is 00:11:22:33:44:55, but in reality it's something else.
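If you need to invent such an address, you can generate one in the shell. Below is an illustrative sketch using QEMU's conventional locally-administered 52:54:00 prefix, which avoids collisions with real hardware addresses:

```shell
#!/bin/sh
# Take three random bytes and append them to the 52:54:00 prefix.
suffix=$(od -An -N3 -tx1 /dev/urandom | tr -d ' \n')
mac=$(printf '52:54:00:%s:%s:%s' \
  "$(echo "$suffix" | cut -c1-2)" \
  "$(echo "$suffix" | cut -c3-4)" \
  "$(echo "$suffix" | cut -c5-6)")
echo "$mac"
```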


Create a small executable script 'mugamma-startup.sh' to startup mugamma:


#!/bin/sh

iface=$(sudo tunctl -b -u vosf)
sleep 1
kvm -hda mugamma -m 128M -daemonize -net nic,macaddr=00:11:22:33:44:55 -net tap,ifname=$iface,vlan=0,script=qemu-ifup

Replace 'vosf' with your username (or something more generic) and make the script executable:


$ chmod +x mugamma-startup.sh

The image has grown now. In my case it was 541 MB after this step:


$ ls -l

-rw-r--r-- 1 vosf vosf 567226368 2009-05-24 21:42 mugamma
-rwxr-xr-x 1 vosf vosf 161 2009-05-24 22:04 mugamma-startup.sh
-rwxr-xr-x 1 vosf vosf 310 2009-05-20 19:51 qemu-ifup

Time to start up the system for the first time


You can now startup the virtual machine:



$ ./mugamma-startup.sh

and log in to the system in the VNC session. After retrieving the ip-address of the machine in that session, you can log in to the machine with ssh from a normal terminal window.


Final steps


When you let the DHCP server of your router assign a fixed ip-address to the machine, based on the MAC address of the virtual machine, you do not need the VNC window anymore. Adding '-vnc none' to the kvm command in the startup script starts the machine without a VNC window.


Now the system is ready for installing the necessary stuff.

2008-12-30

Telex documentation online

Telex is the result of a weekend project of mine. It reads news items from several news feeds and sends the new titles to IRC channels. The software has been in use for some months now and seems to work nicely at one location. Most of the documentation is ready and the software can be downloaded.



See http://www.mokolo.org/telex/introduction.html for more information.

2008-11-09

Markdown and tools

Markdown [1] is a wiki-like text format that makes it easy to enter text in an editor. Using a translator, such text can be transformed into HTML, PDF, et cetera.

To transform a markdown formatted text to HTML, the best choice under Debian or Ubuntu is the package 'markdown'. I'll use it to generate HTML for future blog entries. Unfortunately the 'markdown' package cannot produce PDF, nor can 'markdownj', a Java implementation.

Pandoc [2] is another choice. It offers more output formats, but I doubt the HTML it produces can be parsed by any decent browser. The 'pandoc' package provides the program 'markdown2pdf'. This program does produce a PDF, but it doesn't parse block quotes from markdown correctly and links get lost. I guess I'll remove the pandoc package soon and look for ways to turn the markdownj [3] Java code into software that transforms markdown into XSL-FO [4], so I can use Apache FOP [5], the XSL-FO converter from XMLmind [6] or another XSL-FO converter to generate PDF, RTF or whatever output format.

To install pandoc under Debian/Ubuntu:

% sudo apt-get install pandoc libghc6-pandoc-dev pandoc-doc texlive-latex-base texlive-latex-recommended

References



[1] http://daringfireball.net/projects/markdown/

[2] http://johnmacfarlane.net/pandoc/

[3] http://sourceforge.net/projects/markdownj/

[4] http://en.wikipedia.org/wiki/XSL-FO

[5] http://xmlgraphics.apache.org/fop/

[6] http://www.xmlmind.com/foconverter/

2008-10-28

Dependencies between Maven dependencies and Linux package dependencies

Introduction


I'm working on ways to make Java software installable under Linux via packages, i.e. generate .deb or .rpm files of software systems or libraries that, if possible, respect the Filesystem Hierarchy Standard [1]. One aspect that makes generating these packages difficult is dependencies. Using Maven [2] as the build system, every so-called 'artifact' usually depends on other artifacts. A software product depends on libraries (.jar files) and libraries depend on other libraries. These dependencies between Maven artifacts should be reflected in dependencies between the packages we create and the packages on which our packages depend.


It is generally not a good idea to ship external libraries with our products if these libraries can be stored in a central place on our systems. This saves us multiple copies of exactly the same artifact, and/or the same artifact in different versions. If a bug is fixed in a specific library and a new version is released, we do not want to upgrade every product that depends on this library merely because it ships that library with the full product.


More and more Java libraries become available as Debian or RPM packages. We should use these packages if possible.



Dependencies on multiple levels



Classes


A Java class can use instances of external classes or static methods from such classes. A dependency can be required or optional; which of the two often depends on the way the class is used or the environment in which it is used. The dependency can be limited to specific versions of these other classes, because a necessary method was introduced in a specific version, a bug has been fixed since a certain version, or a method that is used is only present up to a certain version. A class can also depend on components like configuration files, XSL files et cetera.



Libraries and end products (jar, war and brothers)


A library or end product is a collection of classes and resources like configuration files, XSL scripts, scripts to start and stop a service et cetera. Usually these products are released in specific versions. Dependencies of components in such a product on components in another product are reflected in dependency rules like 'requires product X version Y.Z or better'.



Maven artifacts


A Maven artifact is a library or end product as described above. A Maven artifact carries a lot of metadata and also a list of dependencies on other Maven artifacts. Many Maven artifacts are available in public repositories. The biggest repository is called 'Central'; it is the de facto standard repository. Artifacts can also be stored in a locally maintained repository or in one or more organizational repositories. During the build of a product, all direct and indirect (transitive) dependencies can be resolved to build a product with all dependencies included.



Packages


A package (RPM, DEB or Slackware package) with Java libraries can contain one or more jars. If it contains more than one jar, these are usually interrelated: a core jar, a jar with samples, a jar with a web application. All jars in such a package have the same version and normally are all included in a zip-file one can download from the product's website.



Relationships between Maven artifacts and software packages and dependencies


Figure 1 shows a UML object relationship diagram explaining the relationships.

Figure 1: relationship diagram


On the left hand side you see the Maven artifacts and their dependencies. A Maven artifact has a groupId, an artifactId and a version. For example, the current version of XOM, an XML API for Java, has groupId 'xom', artifactId 'xom' and version '1.1'. It provides 'xom-1.1.jar'. Another example is JSAP, a library for parsing command line parameters. The current version is groupId='com.martiansoftware', artifactId='jsap' and version='4.2'. It provides 'jsap-4.2.jar'. An artifact has 0 or more dependencies. Dependencies do not mention exact versions, but version constraints. More on version constraints can be found at [3]. Every dependency should match at least one artifact.


On the right hand side you see the software packages and their dependencies. A software package has a name and version and it has 0 or more dependencies. Just like in the Maven artifacts situation, a dependency should match at least one software package.



A Java library software package provides one or more Maven artifacts. Debian package 'libxom-java' currently provides 'xom-1.1.jar'. Installing it leads to this file installed in directory '/usr/share/java/', with a symbolic link 'xom.jar' pointing to 'xom-1.1.jar'.



References


[1] http://www.pathname.com/fhs/

[2] http://maven.apache.org/

[3] http://docs.codehaus.org/display/MAVEN/Dependency+Mediation+and+Conflict+Resolution

2008-10-26

Generating Linux packages from Maven artifacts

Introduction

Java applications usually ship all dependent libraries (jars) with the application. This has advantages, like fewer problems getting the application to work and no problems with slightly different versions of libraries. But shipping all libraries with an application has disadvantages too, like numerous copies (and versions) of the same library installed on a single system, and the necessity to upgrade every software package using a specific library if that library turns out to contain a serious bug.



More and more Java applications become available as Debian packages. Debian policies state that statically linked libraries must not be used, for the reasons mentioned above. Providing external jars with a program is like using statically linked libraries in C or C++. Debian Java packages typically use external jars that are contained in separate packages. After installing such a package (via dependencies), a jar is installed in directory /usr/share/java. If we look at XOM, the best XML API for Java, package 'libxom-java', at the time of writing, installs xom-1.1.jar and a symlink xom.jar in directory /usr/share/java:


$ ls -l /usr/share/java/*xom*
-rw-r--r-- 1 root root 265858 2008-01-15 06:59 /usr/share/java/xom-1.1.jar
lrwxrwxrwx 1 root root 11 2008-07-08 12:14 /usr/share/java/xom.jar -> xom-1.1.jar
-rw-r--r-- 1 root root 160069 2008-01-15 07:00 /usr/share/java/xom-samples-1.1.jar
lrwxrwxrwx 1 root root 19 2008-07-08 12:14 /usr/share/java/xom-samples.jar -> xom-samples-1.1.jar

There's more in this package, but for now we concentrate on the XOM jar. After installing the 'libxom-java' package, we can add /usr/share/java/xom.jar to our classpath, and if XOM version 1.1.1 is released because of some bug, or version 1.2 is released with new features, we get an automatic update and our program will still work.
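This upgrade-through-symlink mechanism is easy to see in action. A scratch-directory sketch (dummy files and illustrative names only):

```shell
# Programs reference the unversioned xom.jar symlink; repointing it
# upgrades every user of the library at once.
cd "$(mktemp -d)"
touch xom-1.1.jar
ln -s xom-1.1.jar xom.jar      # what libxom-java sets up
touch xom-1.2.jar              # a new upstream release arrives
ln -sfn xom-1.2.jar xom.jar    # the upgraded package repoints the link
readlink xom.jar               # -> xom-1.2.jar
```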

A lot of libraries are now available as Debian packages. If a library is available as a Debian package, we should use that package instead of a library we ship with our software, at least if we distribute to Debian or Ubuntu systems. But a lot of libraries are not (yet) available as packages. How do we cope with such a mixed environment? Below I'll propose a way to at least make things work for your local environment.


Sample application 'Telex' and sample library PircBot

Telex is a one-day weekend software project of mine. It is an application that can run several IRC robots on different IRC servers and channels. It sends the titles of new items from as many RSS feeds as you like to these robots. It's very flexible; you can send different feeds to different robots, for instance.

Telex depends on two libraries, PircBot [1] and Rome [2]. PircBot is used for the IRC bots and Rome is used to check RSS feeds. Neither library is available as a Debian package for Ubuntu 8.04, at least not in the default repositories. I'm going to show you how you can make a Debian package of PircBot. We'll do that in small steps.


Get PircBot


The pom.xml file of Telex contains the following dependency:


    <dependency>
      <groupId>pircbot</groupId>
      <artifactId>pircbot</artifactId>
      <version>1.4.2</version>
    </dependency>

If I run the command % mvn package, or another Maven command that results in fetching dependencies and transitive dependencies, the PircBot jar is fetched from some external Maven repository and placed in my local repository if it is not already there.



Version constraints in Maven


The version constraint in the dependency above is a so-called soft constraint. It doesn't mean that this exact version will be fetched, though most Maven dependencies mention the exact version of an existing jar as a soft constraint. The latter is a pity; they should mention all versions of the library that meet the requirements. More information on version constraints in Maven can be found at [3]. There you will find that a version constraint can contain a series of ranges, like '[2.1,2.4.3),(2.5,)', meaning every version greater than or equal to 2.1 and smaller than (not including) 2.4.3, or greater than (not including) 2.5. Versions 2.1, 2.2, 2.4.2 and 6.7 meet this sample constraint; versions 2.0, 2.4.3 and 2.5 do not.
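The range semantics can be mimicked in the shell for a quick sanity check. Here is a sketch using GNU sort -V for version ordering; this is not how Maven implements it, just an illustration of the '[2.1,2.4.3)' semantics:

```shell
#!/bin/sh
# in_range VERSION LOWER UPPER succeeds when LOWER <= VERSION < UPPER.
in_range() {
  v=$1; lo=$2; hi=$3
  # LOWER <= VERSION: the lower bound must sort first (or be equal).
  [ "$(printf '%s\n%s\n' "$lo" "$v" | sort -V | head -n1)" = "$lo" ] || return 1
  # VERSION < UPPER: not equal to the bound, and sorting before it.
  [ "$v" != "$hi" ] || return 1
  [ "$(printf '%s\n%s\n' "$v" "$hi" | sort -V | head -n1)" = "$v" ]
}
in_range 2.4.2 2.1 2.4.3 && echo "2.4.2 is in [2.1,2.4.3)"
in_range 2.4.3 2.1 2.4.3 || echo "2.4.3 is not"
```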



Install file with make


It is easy to install the PircBot jar using GNU Make [4] once this jar is in your local Maven repository. Here's the simplest Makefile you can think of to do this. As you can see, my home directory is /home/vosf-dev. Store the following contents in a file called Makefile and change the path to the PircBot jar for your situation. Use a tab at the beginning of the second line, not spaces, otherwise it won't work.


install:
	install --owner=root --group=root /home/vosf-dev/.m2/repository/pircbot/pircbot/1.4.2/pircbot-1.4.2.jar /usr/share/java/
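Since the tab is invisible, it is easy to get wrong. One way to write the Makefile with a guaranteed literal tab is via printf (a sketch; substitute your own path to the jar):

```shell
# printf's \t expands to a real tab, so the recipe line is always valid.
cd "$(mktemp -d)"   # demo in a scratch directory
printf 'install:\n\tinstall --owner=root --group=root %s /usr/share/java/\n' \
  "$HOME/.m2/repository/pircbot/pircbot/1.4.2/pircbot-1.4.2.jar" > Makefile
grep -c "^$(printf '\t')" Makefile   # 1 line starts with a tab
```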

Make sure you have installed GNU Make and then run make install as root. Sample:


$ sudo make install
install --owner=root --group=root /home/vosf-dev/.m2/repository/pircbot/pircbot/1.4.2/pircbot-1.4.2.jar /usr/share/java/

Throw the installed jar away immediately, because we are now going to install it via a package:


$ sudo rm /usr/share/java/pircbot-1.4.2.jar

Make a package


Using Checkinstall [5], it is not difficult to create a Debian package. If you haven't installed Checkinstall yet, do that first via $ sudo apt-get install checkinstall.

In the directory where Makefile is located, create a file called description-pak containing a description of the package. Here's a sample:


Library for building IRC robots
.
PircBot is a Java framework for writing IRC bots quickly and
easily. Its features include an event-driven architecture to handle
common IRC events, flood protection, DCC resuming support, ident
support, and more.

We want a symbolic link called pircbot.jar pointing to the pircbot-1.4.2.jar file (and we want it removed when the package is removed). One way to achieve that is via scripts that are executed before and after installation and before and after removal. We'll use two of these scripts. Create a file called postinstall-pak with the following content:


#!/bin/sh
cd /usr/share/java
ln -s pircbot-1.4.2.jar pircbot.jar
exit $?

Create a file called preremove-pak:


#!/bin/sh
cd /usr/share/java
rm pircbot.jar
exit $?

Both scripts return 0 if creation or removal of the link is successful. You do not need to make the scripts executable with chmod.
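Note that the ln -s in postinstall-pak fails if the link already exists, for instance on a reinstall. GNU ln's -sfn flags make the step repeatable; a scratch-directory sketch of the difference:

```shell
# -f replaces an existing link and -n treats an existing symlink as a
# plain file, so running the step twice succeeds instead of failing.
cd "$(mktemp -d)"
touch pircbot-1.4.2.jar
ln -sfn pircbot-1.4.2.jar pircbot.jar
ln -sfn pircbot-1.4.2.jar pircbot.jar   # second run is harmless
readlink pircbot.jar                    # -> pircbot-1.4.2.jar
```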

Now issue the following command:
$ sudo checkinstall --maintainer="Fred Vos \<fred.vos\@mokolo.org\>" --pkgname="libpircbot-java" --pkgversion=1.4.2 --pkggroup=libs --nodoc --install=no

You'll use other settings, of course. This produces a list of settings, mostly as you provided them on the command line. You need this settings list, because the architecture setting on the command line doesn't seem to work properly. Press <7><Enter>all<Enter> to change 'Architecture' from 'i386' to 'all' and <Enter> to accept. If the architecture setting works for you (--pkgarch=all), you can add --default to the command line options.

Basically, what Checkinstall does here is run the make install command, intercept all actions GNU Make was about to perform, and generate a package out of that: a Debian package by default on a Debian (or Ubuntu) system, an RPM on SuSE and Red Hat systems, and a Slackware package on a Slackware system. But you can make RPMs on a Debian system too, with an extra parameter; check the Checkinstall man page for that.


Files postinstall-pak and preremove-pak are included and will work in both .deb and .rpm packages.


Test it


$ ls -l /usr/share/java/pircbot*
ls: cannot access /usr/share/java/pircbot*: No such file or directory
$ sudo dpkg -i libpircbot-java_1.4.2-1_all.deb
Selecting previously deselected package libpircbot-java.
(Reading database ... 124836 files and directories currently installed.)
Unpacking libpircbot-java (from libpircbot-java_1.4.2-1_all.deb) ...
Setting up libpircbot-java (1.4.2-1) ...
$ ls -l /usr/share/java/pircbot*
-rwxr-xr-x 1 root root 74259 2008-08-10 20:08 /usr/share/java/pircbot-1.4.2.jar
lrwxrwxrwx 1 root root 17 2008-08-10 20:19 /usr/share/java/pircbot.jar -> pircbot-1.4.2.jar
$ sudo apt-get remove libpircbot-java
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
libpircbot-java
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 102kB disk space will be freed.
Do you want to continue [Y/n]?
(Reading database ... 124836 files and directories currently installed.)
Removing libpircbot-java ...
$ ll /usr/share/java/pircbot*
ls: cannot access /usr/share/java/pircbot*: No such file or directory
$ sudo rm libpircbot-java_1.4.2-1_all.deb


Make available for apt


The next step is to make the package available to apt for internal use, so you can do something like:
$ sudo apt-get install libpircbot-java
That is beyond the scope of this page. There are plenty of pages on the Internet on that subject.



Final thoughts


Though you can install and remove the packages you create this way, these are Debian packages only in a technical sense. These 'packages' make it possible to install some files in the right locations. Making a true Debian package requires a lot more files to be distributed. Still, installing external libraries using this technique is acceptable in cases where you need one or two external libraries to be available as packages.


I didn't handle the case where the external library has dependencies of its own. I'll cover the relationships between Maven dependencies and package dependencies in another article, and show there how this can be handled.



Further steps


In following articles, I'll try to join Maven dependencies with package dependencies and show you how to make Debian packages from our own Java software.



References


[1] http://www.jibble.org/pircbot.php

[2] https://rome.dev.java.net/

[3] http://docs.codehaus.org/display/MAVEN/Dependency+Mediation+and+Conflict+Resolution

[4] http://www.gnu.org/software/make/

[5] http://www.asic-linux.com.mx/~izto/checkinstall/

2008-07-02

OpenID authentication with Spring Security

This page describes an experiment with adding OpenID authentication to a web application using Spring Security. It describes OpenID a little and gives detailed instructions on how I was able to add OpenID authentication to a web application. I hope this page is useful for you. If you find any mistakes I made or if I'm not clear, please respond.

OpenID

OpenID authentication makes it possible to use the same 'username' and password for multiple sites. The 'username' is in fact a URL to a web page, typically the URL of a homepage or personal blog. You can register an ID at several sites, or run an OpenID server yourself. I registered my OpenID at http://claimid.com/; my OpenID, registered at the claimid.com site, is https://openid.claimid.com/fred-vos. I can use this ID at different sites, but if I ever stop liking claimid.com, I don't want to change my ID everywhere. So I used a delegation feature OpenID authentication allows: I made a web page at http://openid.fredvos.nl/ and added the following lines to the head of the HTML page:
  <link rel="openid.server" href="https://openid.claimid.com/server" />
  <link rel="openid.delegate" href="https://openid.claimid.com/fred-vos" />
This tells an authenticating agent to use https://openid.claimid.com/fred-vos as my ID and https://openid.claimid.com/server as the server. If I want to switch to another OpenID server, or forget my password, I can create an ID at another site, change the two lines in my http://openid.fredvos.nl/ page, and continue to use my personal OpenID. More information on OpenID can be found at many sites on the Internet, for instance at the OpenID homepage.
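A consumer discovers the server by parsing exactly these link tags from the claimed URL. A rough sketch of that extraction step with standard tools (HTML discovery only, not the full protocol):

```shell
#!/bin/sh
# Pull the openid.server endpoint out of a delegation page; in reality
# the page would first be fetched from the claimed URL, e.g. with curl.
page='<link rel="openid.server" href="https://openid.claimid.com/server" />
<link rel="openid.delegate" href="https://openid.claimid.com/fred-vos" />'
server=$(printf '%s\n' "$page" | \
  sed -n 's/.*rel="openid.server" href="\([^"]*\)".*/\1/p')
echo "$server"   # https://openid.claimid.com/server
```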

I not only want to use my OpenID at several sites, I also want to use OpenID authentication for my own web applications. Spring Security, formerly known as 'Acegi', supports a lot of authentication mechanisms. Recently Spring Security added support for OpenID authentication, so I tried Spring Security for my experiment.

Kora

First I created a small working web application called 'kora'. Maybe I'm going to develop this thing further into something useful, but for now it's just a sample application.

I used Maven as the build tool for this. The web application consists of an index.jsp page (URL=http://localhost:8180/kora-1.0-SNAPSHOT/index.jsp) with a link to a servlet that responds to http://localhost:8180/kora-1.0-SNAPSHOT/kora?request=play with a web page showing the text 'Pling!'. Tomcat listens to port 8180 on my machine. At this stage Kora didn't use authentication/authorization.

Then I tried to get a part of this web application accessible only via my OpenID.

Added some jars to the local repository

Some of the jars I needed were not available in Maven repositories, so I had to add them to my local repository by hand.

I downloaded Spring Security version 2.0.2 as a zip-file at the Spring Framework downloads page, unpacked the zip-file, did a cd to the spring-security-2.0.2/dist directory and added the core and openid jars to my local Maven repository, using:

$ mvn install:install-file \
      -Dfile=spring-security-core-2.0.2.jar \
      -DgroupId=org.springframework.security \
      -DartifactId=spring-security-core \
      -Dversion=2.0.2 \
      -Dpackaging=jar \
      -DgeneratePom=true
$ mvn install:install-file \
      -Dfile=spring-security-openid-2.0.2.jar \
      -DgroupId=org.springframework.security \
      -DartifactId=spring-security-openid \
      -Dversion=2.0.2 \
      -Dpackaging=jar \
      -DgeneratePom=true

Added dependencies to pom.xml

Then I added the following dependencies to my Maven pom.xml:

  <dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-core</artifactId>
    <version>2.0.2</version>
  </dependency>

  <dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-openid</artifactId>
    <version>2.0.2</version>
  </dependency>

  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>2.5.5</version>
  </dependency>

  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-aop</artifactId>
    <version>2.5.5</version>
  </dependency>

  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>2.5.5</version>
  </dependency>

  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-dao</artifactId>
    <version>2.0.2</version>
  </dependency>

  <dependency>
    <groupId>org.openid4java</groupId>
    <artifactId>openid4java</artifactId>
    <version>0.9.3</version>
  </dependency>

  <dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>jstl</artifactId>
    <version>1.2</version>
  </dependency>

  <dependency>
    <groupId>taglibs</groupId>
    <artifactId>standard</artifactId>
    <version>1.1.2</version>
  </dependency>

File applicationContext.xml

This file describes the access rights. I copied a file called applicationContext.xml from an unpacked samples zip found here, to src/main/webapp/WEB-INF/applicationContext.xml. Here's a slightly edited version:

<?xml version="1.0" encoding="UTF-8"?>

<b:beans xmlns="http://www.springframework.org/schema/security"
         xmlns:b="http://www.springframework.org/schema/beans"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://www.springframework.org/schema/beans
                             http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
                             http://www.springframework.org/schema/security
                             http://www.springframework.org/schema/security/spring-security-2.0.1.xsd">

  <http>
    <!-- Patterns are matched in declaration order and the first match wins,
         so the unrestricted pages must come before the catch-all pattern. -->
    <intercept-url pattern="/index.jsp*" filters="none" />
    <intercept-url pattern="/openidlogin.jsp*" filters="none" />
    <intercept-url pattern="/**" access="ROLE_USER" />

    <logout />
    <openid-login login-page="/openidlogin.jsp" />
  </http>

  <authentication-manager alias="authenticationManager" />

  <user-service id="userService">
    <user name="http://openid.fredvos.nl/" password="notused"
          authorities="ROLE_SUPERVISOR,ROLE_USER" />
  </user-service>

</b:beans>

As you can see, the pages openidlogin.jsp (discussed below) and index.jsp can be accessed by anyone, while all other pages in all directories (/**) require ROLE_USER. The person with OpenID http://openid.fredvos.nl/ has the roles ROLE_USER and ROLE_SUPERVISOR, so this person can access all Kora pages. Finally, the openid-login element designates openidlogin.jsp as the login page for OpenID login.

File openidlogin.jsp

This is the login page. I copied a file called openidlogin.jsp from the sample application mentioned before to src/main/webapp/openidlogin.jsp. I haven't made any changes to this file. Here's the source:

<%@ taglib prefix='c' uri='http://java.sun.com/jstl/core_rt' %>
<%@ page import="org.springframework.security.ui.AbstractProcessingFilter" %>
<%@ page import="org.springframework.security.ui.webapp.AuthenticationProcessingFilter" %>
<%@ page import="org.springframework.security.AuthenticationException" %>

<html>
  <head>
    <title>Open ID Login</title>
  </head>

  <body onload="document.f.j_username.focus();">
    <h3>Please Enter Your OpenID Identity</h3>

    <%-- this form-login-page form is also used as the
         form-error-page to ask for a login again. --%>
    <c:if test="${not empty param.login_error}">
      <font color="red">
        Your login attempt was not successful, try again.<br/><br/>
        Reason: <c:out value="${SPRING_SECURITY_LAST_EXCEPTION.message}"/>.
      </font>
    </c:if>

    <form name="f" action="<c:url value='j_spring_openid_security_check'/>" method="POST">
      <table>
        <tr>
          <td>OpenID Identity:</td>
          <td><input type='text' name='j_username'
                     value='<c:if test="${not empty param.login_error}"><c:out value="${SPRING_SECURITY_LAST_USERNAME}"/></c:if>'/></td>
        </tr>
        <tr><td colspan='2'><input name="submit" type="submit"></td></tr>
        <tr><td colspan='2'><input name="reset" type="reset"></td></tr>
      </table>
    </form>
  </body>
</html>

File web.xml

To activate the filtering process through Spring Security, I added the following text to my src/main/webapp/WEB-INF/web.xml file:

<filter>
  <filter-name>springSecurityFilterChain</filter-name>
  <filter-class>
    org.springframework.web.filter.DelegatingFilterProxy
  </filter-class>
</filter>

<filter-mapping>
  <filter-name>springSecurityFilterChain</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

<servlet>
  <servlet-name>context</servlet-name>
  <servlet-class>
    org.springframework.web.context.ContextLoaderServlet
  </servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
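For context, here's a sketch of the complete deployment descriptor these elements sit in. This is my own minimal reconstruction (Servlet 2.4 style, as commonly used with Spring 2.5), not the actual Kora web.xml; the Kora servlet's own declaration is left out, and only the Spring Security wiring is shown.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sketch of the surrounding web.xml; the Kora servlet's
     declaration and mapping are omitted. -->
<web-app xmlns="http://java.sun.com/xml/ns/j2ee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee
                             http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"
         version="2.4">

  <!-- Delegates every request to the filter chain that Spring Security
       builds from the bean definitions in applicationContext.xml. -->
  <filter>
    <filter-name>springSecurityFilterChain</filter-name>
    <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
  </filter>

  <filter-mapping>
    <filter-name>springSecurityFilterChain</filter-name>
    <url-pattern>/*</url-pattern>
  </filter-mapping>

  <!-- Loads the Spring context (by default /WEB-INF/applicationContext.xml)
       when the web application starts. -->
  <servlet>
    <servlet-name>context</servlet-name>
    <servlet-class>org.springframework.web.context.ContextLoaderServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>

</web-app>
```

The DelegatingFilterProxy itself contains no security logic; it only forwards to a bean named springSecurityFilterChain, which the security namespace configuration registers automatically.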

Test

After

$ mvn tomcat:deploy

everything was compiled, a war file was assembled and sent to the Tomcat instance listening on port 8180 on localhost. Opening http://localhost:8180/kora-1.0-SNAPSHOT/index.jsp showed the page with the link to the protected http://localhost:8180/kora-1.0-SNAPSHOT/kora?request=play page in my browser. Clicking this link didn't show the protected page directly: the request was intercepted and openidlogin.jsp was shown instead. Entering my OpenID and clicking the 'Submit query' button redirected the browser to the claimid.com site. After I logged in there with fred-vos as username and my password, I was redirected to the protected page that says 'Pling!'.

Not that easy - please respond

It may look as if adding the authentication was an immediate and simple success, but it wasn't; it was quite difficult. Only after I found the sample application was I able to get everything working. There are a lot of pages on the Internet with outdated instructions, and the instructions on this page will become outdated too if I don't add updates. So please respond if it doesn't work anymore, if possible with the necessary changes.