Archive for January, 2006

Google creating its own Linux distribution?

Tuesday, January 31st, 2006

Apparently, quite a lot of blogs (BlogORabais, Je Hais le Printemps) and news sites (Slashdot, The Register) are relaying the information…

So, what is Google currently preparing? Is this information pure FUD started by Google, just to make the company even more popular?

In any case, this can only benefit the Ubuntu, Debian and, more generally, Linux communities. So, long live Google…

ServiceMix JBI Container and PXE BPEL: Theory and Practice

Thursday, January 26th, 2006

While I was talking about some work I did last summer involving ServiceMix (a JBI container) and PXE (a BPEL engine that can be embedded inside ServiceMix), a friend of mine came up with the following conclusion:

In theory, theory and practice are the same whereas in practice, they are not.

I love that sentence :-)

So, here is a small HOWTO, taken from the email I sent to the ServiceMix mailing list:

How to get a BPEL process running with the ServiceMix JBI Container and FiveSight's PXE

1) The first step is to create a BPEL process with the corresponding WSDL files.
Examples bundled with PXE can serve as a quickstart.

2) Remove any concrete bindings from the WSDL files (the binding and service
XML tags). Indeed, the endpoints are JBI proxies, so SOAP over HTTP bindings
are useless here. PXE and ServiceMix will take care of registering the ports
as JBI service endpoints.

3) Compile your BPEL process and WSDL files.
Let's say the main WSDL file describing the process is
MissionPlanningProcess.wsdl (this file must import the other WSDL files that
are used):

REM add the resources to PXE's Resources Repository MissionPlanning.rr
rradd -wsdl file:MissionPlanningProcess.wsdl MissionPlanning.rr

REM compile the BPEL
bpelc -rr MissionPlanning.rr -wsdl file:MissionPlanningProcess.wsdl

4) Create a pxe-system.xml file (PXE's deployment descriptor) that describes
how to bind the BPEL process to actual JBI endpoints.

Let's say that the MissionPlanning process provides 3 portTypes:
proc:ProcessPT, proc:CallbackPT and resp:ResponderPT.

We want to expose 2 services:
ProcessSVC, which exposes the proc:ProcessPT and proc:CallbackPT portTypes
ResponderSVC, which exposes the resp:ResponderPT portType.

(same names as the Async example bundled with PXE)

the corresponding pxe-system.xml file would be:

Pay attention to using the same value for the "name" attribute in the
system-descriptor tag as the name of the BPEL process. (This is a current
limitation of PXE and should be fixed in the future.)

5) We now have all the necessary artifacts to create a SAR (System Archive)
file, which is just a container for all these files:

sarcreate -common MissionPlanning.rr -sysd pxe-system.xml MissionPlanning.cbp

6) JBI needs deployable components (the SAR in this case) to be contained in a
zip file. This zip file is referred to later as a Service Unit (hence the -su).

=> create the output directory
=> jar cf pxe.sar
(or use any tool that can create a .zip)
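If jar is not at hand, any tool that produces a zip works. Here is a minimal Python sketch using only the standard library (the function name and the example paths are mine, not part of PXE or ServiceMix):

```python
import zipfile
from pathlib import Path

def package_service_unit(src_dir: str, zip_path: str) -> None:
    """Zip every file under src_dir into zip_path (the Service Unit)."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in Path(src_dir).rglob("*"):
            if path.is_file():
                # Store paths relative to src_dir, so the archive root
                # matches the layout JBI expects inside the Service Unit.
                zf.write(path, path.relative_to(src_dir))
```

For example, `package_service_unit("output", "pxe.sar")` would produce the same kind of archive as the jar command above.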

7) Package this Service Unit inside a so-called Service Assembly (SA), which is
just a set of Service Units plus a jbi.xml descriptor.
For example, create the following output\META-INF\jbi.xml file:

Service Assembly containing just the BPEL

BPEL Service Unit

and create the jar:
cd output
echo creating Service Assembly
jar cf ..\MissionPlanning-sa.jar *
cd ..

component-name refers to the name of the BPEL engine deployed in the JBI container.

8) Create a servicemix.xml file that launches a JBI container.
An example is bundled with ServiceMix's AsyncDemo example.
Pay attention to the following:


In the installationDirPath, you will have to drop PXE's JBI component
(bundled with ServiceMix). If ServiceMix neither detects PXE nor installs it,
there is a problem in your installationDirPath. (For example, if
ServiceMix is integrated inside Geronimo, the "." directory refers to

The deploy directory is where you will drop the Service Assembly.

9) Launch ServiceMix (either standalone, or by sourcing the Spring file).
If you source the Spring file, make sure you use ServiceMix's Spring version:
the XML extension mechanism is not yet available in upstream Spring, so
stock Spring won't recognize ServiceMix's specific Spring syntax.

10) Here you go: you can now talk to your BPEL process from other JBI
components (more information in another HOWTO).
I hope that this HOWTO will help someone some day…

FreeNX is damn crazy!

Thursday, January 26th, 2006

Wow… Have you ever tried FreeNX, a free version of NoMachine's NX server? This piece of software is incredible! In short, it's a better, secure VNC server.

A brief look at NoMachine NX explains some of the technical details… Good luck understanding the gory details ;-)

FUSE is the future: a small HOWTO for FUSE on Ubuntu

Wednesday, January 25th, 2006

Yes, FUSE is the future !!

I personally think that it does NOT make sense to implement every protocol known to man in kernel space just to be able to mount remote folders. Implementing things in kernel space implies complexity and bloat.

On top of that, why reinvent the wheel? If libProtocol already exists, it is somewhat silly to re-implement it just for the sake of having something in kernel space.

That's why FUSE was invented. This post describes how to mount, for example, an SSH directory using FUSE.

So now, let's imagine some great things we could do with FUSE… The current /var/log totally sucks. Text files are handy for the system administrator, who can use his usual UNIX guru commands (grep, awk, perl, whatever). However, they are NOT handy at all for system utilities that have to parse all the different log formats in order to produce reports, statistics, etc. (Awstats, for example, does that for Apache logs.)

So we would live in a better world if, for example, all logs were written to a database and we had a virtual /var/log that reflected the database, just so that people could still use grep and perl on it… Not only would this make stats tools more efficient, but we would also keep the current compatibility…
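To make the idea concrete, here is a toy Python sketch (the table layout and log lines are invented for illustration): the logs live in an SQLite database, and a small function renders them back as classic text lines, which is exactly what a FUSE-backed virtual /var/log would expose to grep and perl:

```python
import sqlite3

def render_log(conn: sqlite3.Connection, facility: str) -> str:
    """Render database rows back into classic text log lines,
    the way a virtual /var/log file would expose them."""
    rows = conn.execute(
        "SELECT ts, host, message FROM logs WHERE facility = ? ORDER BY ts",
        (facility,),
    )
    return "\n".join(f"{ts} {host} {msg}" for ts, host, msg in rows)

# Invented sample data: one HTTP hit logged straight to the database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (ts TEXT, facility TEXT, host TEXT, message TEXT)")
conn.execute(
    "INSERT INTO logs VALUES ('Jan 25 10:00:01', 'http', 'www1', 'GET /index.html 200')"
)
print(render_log(conn, "http"))
```

Stats tools would query the database with indexes, while humans would keep grepping the rendered text view.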


Linux = 0.53% of internet users

Wednesday, January 25th, 2006

It is a shame to see that only 0.53% of internet users run Linux (see zdnet [fr])… It was obvious it didn't represent many users, but so few?

Anyways, good luck to our lovely OS…

10 things that still suck under Linux

Sunday, January 22nd, 2006

I have recently set up a Linux server, which was a good occasion to point out the few things that still suck on this beautiful operating system. Even though the distribution was Ubuntu Linux Server, the most famous Debian GNU/Linux derivative, all of these remarks apply to most other UNIXes such as FreeBSD.

This article is not meant to conclude that some other OS is better than Unix. It is just a series of remarks that will, hopefully, contribute to making it better in the future.

  1. Lack of consistency: Anyone who has administrated a Linux machine has been faced with the general lack of consistency. I am not referring to the often-criticized lack of consistency in the user interface, but to the heterogeneity of the miscellaneous components.
    In fact, each component (software, library, daemon) does not integrate with its environment, and no effort is made to ensure a smooth integration. For example, there is no generalized notion of a "virtual host" on the system, whereas it is clear to the system administrator that Apache's VirtualHost, Postfix's aliases, ProFTPd's virtual host, and all of their respective logs are somewhat related. Why are all tools so application-centric, instead of being service-oriented?
    Linux would be a better place if all those applications shared, to some extent, a set of configuration parameters, log formats and conventions. When looking at awstats logs, only the web-specific part of the domain appears, whereas the system administrator would like a global vision (HTTP, SMTP, FTP, SQL, etc.).
    Of course, the virtual host is only one example of a disparate setting. There are lots of others, such as the lack of generalized identifiers and passwords for people's accounts. As usual, nothing here is technically impossible, and the solutions already exist (LDAP, for example, but not necessarily); but once again, to make it possible, people have to agree on some conventions.
    If you want to provide FreeNX access to your users, you will have to maintain two sets of users/passwords. So would you if you wanted to give a MySQL database to each of your users. Additionally, you will have to define a set of conventions to link a user to his database, since MySQL is one planet and the system is the rest of the universe: there is no link between the two.
    Finally, even after all these years of editing and modifying the configuration files in /etc, I still wonder why no two files in /etc share the same syntax. There is no pattern; every single file looks like a different world. History has its part of the responsibility, but sometimes people should be able to correct their mistakes. I am not speaking about drastically changing the whole /etc, but maybe about progressively migrating the rarely-touched configuration files (how often have you modified /etc/inittab by hand?) to some common scheme. (Not necessarily XML, but there should be some consistency in the choice. Consistency is not just about eye candy; it is also, and more importantly, about writing, once and for all, a generic parser that can be optimized and on which all applications would rely.)
  2. Logging is most probably one of the worst parts of a UNIX system. The current syslog system is old and needs to be replaced by something better and cleaner. People could argue that it still works fine, and that syslog-ng solves part of the problem.
    However, it is an inconsistent system: why is it that we can say mail.* or uucp.* (which few people actually use), but not jabber.*, http.*, samba.*, etc.?
    The answer is simple: the system is way too static. Many details were hardcoded into it a long time ago, and the only extensible part is the localX.* facilities, which are limited anyway. The proof? Any decent application (Apache, Samba, ProFTPd, …) implements its own logging mechanism. The consequence is bloated, rather than componentized, applications.
    A solution would be to implement a flexible, extensible logging framework that allows any application to fill in a set of user-defined attributes, not static ones. The framework should log to a database (SQL, native XML, OO, whatever), and indexes should be there to help log analyzers perform their job efficiently. Text files are not machine-friendly, so any log which is to be analyzed by an application should not be written as a mere text file. Of course, system administrators are used to accessing files, so a possible solution is to use something like FUSE to implement a virtual /var/log on which UNIX gurus will be able to tail -f, grep, vi and less. UNIX not-so-gurus will, on the other hand, enjoy better graphical applications focusing on the user experience, search, etc., instead of focusing on parsing and optimizing access to big files.
    Additionally, FUSE would allow tools such as logrotate to keep working.
  3. Everything is based on the polling paradigm. Why would man-db run every week, even though I haven't touched any man page for years? Why would awstats re-analyze my logs every night even though several virtual hosts haven't received a single query all day?
    The problem is one of both elegance and performance. The polling paradigm gives the impression of a dumb system that resorts to ugly hacks to minimize the performance hit caused by its inefficiency.
    If my server only uses 1% of its CPU during the day to serve Apache queries, I do not want to wait until the end of the day for my awstats to be updated. Conversely, if at the end of the day my Apache is still eating 100% of the CPU, I do not want awstats to start analyzing logs.
  4. Permissions. Since sensitive data is disseminated everywhere (passwords all over the configuration files, private keys for some daemons, etc.), it is nearly impossible to ensure that a consistent set of permissions is applied. Instead, there should be a central repository where all critical information would be stored, and which could be safely protected and watched by the system administrator. Passwords should not be scattered across /root/.my.cnf, /etc/freenx, /etc/apache/*, etc.

    Additionally, no distribution currently takes advantage of ACLs by default. It is always possible to mount the filesystem with ACLs enabled, but no package will, by default, set ACLs instead of standard permissions. Yet this could be useful in some cases, such as setting default ACLs on /usr/local/stow (for those who use this system), to ensure that any file created later in this directory will be readable by the staff members, regardless of the umask of the creator.
    A lot of other files could benefit from ACLs, and more specifically default ACLs. They could be used to enforce stricter permissions, such as forbidding everyone access to /var/log and only authorizing specific users to rotate logs, etc. A lot of things could be rethought and re-engineered.

  5. Useless bindings all over the place. There are many languages; it is a fact. Since every language must communicate with libraries written in other languages, everyone creates bindings all over the place. However, it would be a little smarter to take advantage of the current .Net platform, implemented by the Mono project. For example, there are bindings for Gtk and all the Gnome libraries on the .Net platform, so why are people developing Gtk/Gnome bindings for Python, when there already is a Python compiler targeting the .Net platform?
    Developing less and concentrating on the already-developed architectural blocks would help homogenize the system as a whole. I am not against the diversity of languages, but since a platform exists to make all these languages communicate, it should be used.
  6. There should be standard communication patterns between processes. It looks like everybody reinvents the wheel to communicate with other processes. Some applications (pop-before-smtp) watch the logs of others (courier-imap, etc.), some use IPC, some others prefer UNIX sockets… It looks like more and more people are adopting dbus these days. Maybe all applications should take the same path, so that the system administrator can monitor communications (logging, permissions, etc.).
  7. Limits and max settings are hard to tune. The maximum number of Apache threads, for example, is pretty hard to configure, since there is no easy way to calculate it. It is even harder to set a reasonable value when other services may use the CPU as well…
    So, I believe that there should be global parameters instead of application-specific parameters. It does not make sense to set the number of threads/processes in Apache without regard for the other daemons running.
  8. Applications cannot communicate with users: the only communication means between applications and users is email. However, email is a specific communication means, and not everybody wants to use it. Some system administrators may prefer getting paged when an error shows up somewhere on the system (a log, whatever).
    There is simply no dedicated means of alerting a user, so people resort to quick and dirty hacks (calling a specific shell script that sends a message to a cell phone, setting up an email <-> phone bridge, etc.).

    So, there should simply be an abstraction for alerting and sending messages to the system administrator. The middleware would then use the appropriate plugins to communicate with the user, and such a system would free every application from implementing its own means of notification.

  9. Too many legacy insecure systems. Whenever an application ships with SSL/encryption, the encryption is optional. Why aren't things encrypted by default? Having applications that already implement encryption communicate securely by default does not seem hard to do, so why do we still stay with all those legacy services, unencrypted just because the system administrator is too lazy to configure the SSL certificates and such?
    SSH is a good example to follow: keys are generated by default, making the system usable right after installation.
    SSL is a bad example: its limitations prevent it from being used easily with virtual hosts, so it should be improved.
  10. Running an encrypted / is hackish. It is particularly hackish (init ramdisk, etc.) to run a system where / is encrypted. This should be fixed so that people with laptops can carry their computers around without fearing that their data might be stolen.
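A minimal sketch of the logging framework imagined in item 2, in Python (the class, the schema and the SQLite backing store are my assumptions, not an existing tool): applications log arbitrary user-defined attributes, entries land in an indexed database, and analyzers query instead of parsing text:

```python
import json
import sqlite3

class StructuredLog:
    """Toy logging framework: each entry carries arbitrary
    application-defined attributes, stored in an indexed database."""

    def __init__(self, db_path: str = ":memory:"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute("CREATE TABLE IF NOT EXISTS log (app TEXT, attrs TEXT)")
        # The index is what lets analyzers work efficiently,
        # instead of re-reading whole text files.
        self.conn.execute("CREATE INDEX IF NOT EXISTS idx_app ON log (app)")

    def write(self, app: str, **attrs) -> None:
        self.conn.execute("INSERT INTO log VALUES (?, ?)", (app, json.dumps(attrs)))

    def query(self, app: str):
        return [
            json.loads(a)
            for (a,) in self.conn.execute("SELECT attrs FROM log WHERE app = ?", (app,))
        ]

log = StructuredLog()
log.write("jabber", user="alice", event="login")  # jabber.* works as well as mail.*
entries = log.query("jabber")
```

Nothing is hardcoded: a new daemon just picks a name and its own attributes, no localX.* escape hatch needed.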
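The event-driven alternative suggested in item 3 can be sketched in a few lines of Python (all names invented): analyzers subscribe once and are driven by actual writes, so an idle virtual host triggers no work at all:

```python
class EventDrivenLog:
    """Push model: analyzers subscribe once and are notified on each
    write, instead of re-scanning the whole log on a timer."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def append(self, line: str):
        # Every write drives the analyzers immediately; no nightly cron run.
        for callback in self._subscribers:
            callback(line)

stats = {"hits": 0}
log = EventDrivenLog()
log.subscribe(lambda line: stats.update(hits=stats["hits"] + 1))

log.append("GET / 200")  # stats are updated as the hit happens
```

A real system would also want to throttle under load, which addresses the "Apache at 100% CPU" case: the analyzer can simply defer its callbacks.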
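As an illustration of the global parameters advocated in item 7 (all numbers hypothetical): derive a worker limit from the memory actually left over by the other daemons, instead of hardcoding it per application:

```python
def max_workers(total_ram_mb: int, reserved_mb: int, per_worker_mb: int) -> int:
    """Derive a worker limit from global resources instead of
    configuring it per application in isolation."""
    available = total_ram_mb - reserved_mb
    # Always allow at least one worker, even on a starved box.
    return max(1, available // per_worker_mb)

# Hypothetical 2 GB box, 512 MB reserved for MySQL and friends,
# roughly 15 MB per Apache process.
limit = max_workers(2048, 512, 15)
```

A system-wide resource budget like this is what a "global parameter" would encode, with each daemon deriving its own limit from it.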
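The notification abstraction proposed in item 8 could look like this in Python (the plugin names are invented): applications call a single notify() and the middleware routes the message through the right transport, so no daemon has to embed its own email or SMS code:

```python
class Notifier:
    """Middleware that routes alerts to pluggable transports, so that
    applications never deal with email/SMS/pager directly."""

    def __init__(self):
        self._transports = {}

    def register(self, name: str, send):
        self._transports[name] = send

    def notify(self, message: str, via: str = "email"):
        # The transport choice is the administrator's policy,
        # not the application's.
        self._transports[via](message)

sent = []
notifier = Notifier()
notifier.register("email", lambda msg: sent.append(("email", msg)))
notifier.register("sms", lambda msg: sent.append(("sms", msg)))

notifier.notify("disk /var is 95% full", via="sms")
```

The administrator configures the transports once; every application gains paging, email or anything else for free.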

Once again, I am not criticizing the work done by all the volunteers on this planet. I am just pointing out the items that I think should be enhanced, in case people did not realize which few things are still problematic today.

I would love to see other people list their wishes and remarks too.

Evolution hacker position at Red Hat

Sunday, January 22nd, 2006

It looks like Red Hat has an open position for an Evolution hacker… The more open source positions there are, the better!

Trackbacks are great !

Sunday, January 22nd, 2006

I think that trackbacks are a great invention. Everyone should turn trackbacks on.

This post explains everything about trackbacks in WordPress.

GObject hell!

Sunday, January 22nd, 2006

I find it particularly interesting to see people actually enjoying GObject development.

My personal experience with GObject has been a total pain, and I cannot believe that some people find fake OO nice…

So far, I see two ways of developing classes on top of the GObject system:

  • Directly writing the classes and interfaces. This is a pain, not only to write, but also to maintain. It involves writing tons of #directives that are mostly governed by conventions… And forget about ever renaming a class…
  • Using a higher-level language and an intermediate tool (Gob, codegen) to generate the appropriate code. Writing the classes the first time is easier, but then development becomes a nightmare and debugging is hell…

So, congrats to those who enjoy GObject, since they are the ones who allow us to write OO applications on top of their libraries using real high-level languages, such as Java and C#.

Document-Driven-Development (DDD)

Saturday, January 21st, 2006

Here is a nice post about Document Driven Development.

Something to think about…