Ruby, MySQL and Snow Leopard

It should have been a simple exercise. I just wanted to set up a dev environment for a Ruby on Rails app that uses MySQL on my Snow Leopard MacBook 5,5 – easy, right? I wish… after much trial and error, and many encounters with the dreaded error message:

"uninitialized constant MysqlCompat::MysqlRes"

… I finally got everything to work using the following steps:

  1. Install the 64-bit version of MySQL 5.5.14 using the .dmg installer.
  2. Remove any pre-existing versions of the mysql gem:
    sudo gem uninstall mysql
  3. Install the gem with the correct architecture flags:
    sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config
  4. Set the linker path – a trick that is missing from most howtos:
    export DYLD_LIBRARY_PATH=/usr/local/mysql/lib/
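
Note that the export only affects the current shell session; to make the setting permanent, append that line to your ~/.bash_profile (assuming bash, the Snow Leopard default shell).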

After this, you can use irb to verify that the installation works:

Motoko:project-rails karsten$ irb
>> require 'rubygems'
=> false
>> require 'mysql_api'
=> true
>> Mysql.get_client_info
=> "5.5.14"

Having done this, the rake db:create and db:migrate tasks went smoothly.

Developing a VR tank

Last week I finished the most exciting project of my professional life (so far): I got to develop a virtual reality system with a 180° projection surface, more catchily dubbed a VR tank. It features:

  • a curved projection surface measuring more than 4 meters in width and 2 meters in height
  • an HD stereo rear-projection system consisting of 4 projectors
  • a 10-camera Vicon motion capture system
  • optionally, a treadmill to give users an infinite walk-space

My part in the project focused on developing a VR application framework in C++, integrating the open-source Horde3D engine, interfacing with the real-time Vicon data stream and developing a new perspective-based rendering pipeline. It was most satisfying!

Below you can see a video of the system in action (filmed in infrared). The devices atop the screen are Vicon cameras; their bright glare is infrared-only and normally invisible to the naked eye. You can see how the user perspective is dynamically calculated based on head position: the areas that stay black lie outside his field of view, so the user never sees them.

The objects the user is interacting with appear to him to float in front of the screen. The Vicon system does full-body skeleton reconstruction, and we’re tracking the user’s hand position to detect object interactions: the spheres “bounce” and disappear when he touches them. They seem blurry in the video because the two stereo-separated views are superimposed.

How does it work?

We’re using the Vicon system to implement a 6-degrees-of-freedom head tracker, with which we can approximate the user’s field of view in the virtual world. Using standard OpenGL cameras, we then render two fixed-fov views, one per eye. The “magical ingredient” is a GLSL shader that raytraces these world views onto a theoretical cylinder representing our projection surface. The final images are then drawn (quad-buffered) on a fullscreen quad spanning the projection screen.
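
The warp itself lives in that GLSL shader, but the core math fits in a few lines. Here’s a sketch of it as a plain C++ function – the coordinate conventions (vertical cylinder axis through the origin, render camera sitting at the eye and looking down -z with a symmetric field of view) are simplifying assumptions for illustration, not our exact setup:

#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// For one output pixel, given as quad coordinates (qu, qv) in [0,1], find
// the texture coordinate to sample in the fixed-fov planar render. The
// cylinder has radius 'radius', spans 'arc' radians and 'height' meters.
Vec2 cylinderToViewUV(float qu, float qv, Vec3 eye,
                      float radius, float arc, float height, float fov)
{
    // World-space point on the physical cylinder this pixel represents.
    float angle = (qu - 0.5f) * arc;
    Vec3 p = { radius * std::sin(angle),
               (qv - 0.5f) * height,
               -radius * std::cos(angle) };

    // Ray from the tracked eye through that point...
    Vec3 d = { p.x - eye.x, p.y - eye.y, p.z - eye.z };

    // ...perspective-projected onto the image plane of the fixed-fov
    // render camera, yielding normalized texture coordinates.
    float s = 1.0f / (std::tan(fov * 0.5f) * -d.z);
    return { 0.5f + 0.5f * d.x * s, 0.5f + 0.5f * d.y * s };
}

Evaluating this per fragment, once per eye, and presenting the two results quad-buffered gives a distortion-free view from wherever the tracked head happens to be.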

The video unfortunately shows only a rather boring environment view – it’s amazing how real and immersive more complex environments feel when experienced through our setup.

I’d like to thank my esteemed colleague Martin Löffler for the productive collaboration and all the fun we had!

Emotional expressions

I recently made some short prototype stimulus clips for the EC TANGO research project. The motion capture was done by a colleague; I then processed the data in MotionBuilder and rendered it in 3ds Max. I’m quite pleased with the overall aesthetics of the clips, and surprised by how emotionally expressive they are despite using such a sparse avatar!

Here are some samples I can share, along with a little game: Identify the emotions! :-)

Find of the day: cmd.exe’s REPLACE

Here’s a surprisingly tough little problem: imagine you have a directory with roughly 90,000 image files – in my case, the frame stack of a video sequence. A selection of these has been processed, and the edited files now reside in a different directory. Now I want to rebuild the updated frame stack by copying the files from the old directory to the new one without prompting and without overwriting.

Seems easy, right? It would be, if Explorer didn’t crash when trying to copy such a massive number of files. On the command line, copy, xcopy and robocopy aren’t built for this purpose either: you can make them overwrite without prompting, but not skip existing files without prompting.

Enter replace. I’d never heard of the command until today, but

replace <source> <destination> /a

does the trick…
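
The /a switch makes replace add only those files that don’t yet exist in the destination directory – nothing is overwritten, and nothing prompts. (One caveat: /a can’t be combined with the /s switch for recursing into subdirectories.)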

Elder Scrolls fun

Here’s a fun little thing I came up with after solving the Game Informer translation riddle: an online Dovahkiin rune writer. You can input arbitrary text, and the website will find the appropriate runes, as far as they are known. You can try it out here!

At the moment it only does phonetic transcription, no translation. Maybe I’ll add a little translation function later – that shouldn’t be too hard if the language is just a 1:1 mapping to English…
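
Under the hood, the transcription is little more than a greedy, longest-match-first table lookup – the dragon alphabet has single runes for certain letter pairs, so digraphs have to be tried before single letters. Here’s a sketch of the idea (sketched in C++ here, though the site itself is plain JavaScript; the table entries below are placeholders, not the real rune set):

#include <cstddef>
#include <map>
#include <string>

// Transliterate text into rune identifiers, greedily preferring two-letter
// matches over single letters. The table is a stand-in; the real alphabet
// maps many more letters and digraphs.
std::string toRunes(const std::string& text)
{
    static const std::map<std::string, std::string> runes = {
        { "aa", "rune-aa" }, { "ei", "rune-ei" },  // digraphs first
        { "a",  "rune-a"  }, { "b",  "rune-b"  }   // ...then single letters
    };
    std::string out;
    for (std::size_t i = 0; i < text.size(); ) {
        std::size_t len =
            (i + 1 < text.size() && runes.count(text.substr(i, 2))) ? 2 : 1;
        const auto it = runes.find(text.substr(i, len));
        out += (it != runes.end()) ? it->second + " " : text.substr(i, len);
        i += len;
    }
    return out;
}

A word-by-word translation feature would essentially just add a second, dictionary-style table in front of this.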

Technologies: pure HTML, CSS and JavaScript. Time from conception to completion: 4-5 hrs…

WordPress update woes

Bettina and I have organized our wedding on a WordPress-powered website. Being quite new to WordPress, we’ve been customizing our site theme (twentyten) by directly editing the style.css file. This worked great… until the WordPress 3.0.4 update came along.

The update restored the style.css file back to default, undoing all our changes without so much as making a backup copy!
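
The obvious lesson, in hindsight: theme customizations belong in a child theme (or at the very least under version control), where updates can’t silently wipe them out.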

WordPress and jQuery UI

While working on a quite large WordPress plugin, I’ve started to use jQuery UI for all sorts of widgets on the admin panels. At first I was delighted that WordPress (3.0.1) had the UI libraries pre-packaged, thinking it would save me the trouble of including them myself. Well, turns out that was wishful thinking!

For some reason the WordPress version of jQuery UI is severely stripped down. Out of the box, I couldn’t use Dialog, Datepicker and Accordion elements. I couldn’t find a CSS file for it, either. First, I worked around this by downloading the official jQuery UI distro and selectively including the missing elements. What I should have done much sooner is simply ditch the packaged version and use the official one!

To get it to work:

  • download jQuery UI 1.7.3, with all components selected
  • in the admin_init action callback, register the script and stylesheet with wp_register_script() / wp_register_style(), declaring 'jquery' as a dependency
  • in the admin_menu callback, enqueue both by handle:
    wp_enqueue_script( 'my-jquery-ui' );
    wp_enqueue_style( 'my-jquery-ui' );

That way you don’t even have to explicitly enqueue the base jQuery library, and I haven’t hit a snag since.

The Vicon System explained

I just finished this a few days ago, and I rather like the final product!

This clip is intended for use in our section’s presentations, to explain the operation of the Vicon motion capture system: how the cameras infrared-illuminate the markers, capture their 3D positions, reconstruct a virtual skeleton and compute the joint-center trajectories.

Software used: 3ds Max for modeling and animation, Composite for assembly and effects (self-illumination glow, z-buffer unsharp masking, lens effects).

A virtual Vicon camera

We needed a “virtual motion lab” for our presentations, so I had to make a virtual Vicon camera in 3ds Max. Fun times!

No materials yet...

Time from start to finished model: roughly two days… Because this is not a realtime model, I used TurboSmooth modeling.

Screen-space Ambient Occlusion

Shading with texture or flat color plus SSAO only – no lighting or other shading used.

Screen-space ambient occlusion is all the rage today, and for good reason: it’s hard to find an approximation of accessibility shading with similar run-time characteristics (most importantly, independence of scene complexity). Results are fast and consistent, and it looks a lot like global illumination (which, of course, it is not).

So, when tasked with making our tech demos prettier, this was a pretty obvious avenue to explore. Since we already use Horde3D, a shader-driven game engine framework, it was mostly a matter of finding sample algorithms and adapting them to our engine. In the picture above you can see the shader in action.

In the demo, you can control the character and walk/run around the famous Sponza atrium (which, for some inexplicable reason, has filled up with giant blue cubes). Smooth animation blending comes courtesy of the sweet Horde3D framework. Developing the shader took about a week, the demo application about a day.

The cubes appear a bit out of focus because the ambient occlusion map is rendered at a low resolution, then filtered with a Gaussian blur kernel and up-scaled to reduce the noise inherent to the method. Combining the SSAO shader with some actual lighting (which would accentuate the edges) would fix that, but it wasn’t in the scope of the demo.
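
For the curious, the occlusion estimate at the heart of it is simple. Here’s a rough CPU-side C++ sketch of the principle – the actual implementation is a GLSL shader adapted from published samples, and the sample pattern and parameters below are made up for illustration:

#include <vector>

// Estimate ambient occlusion for the pixel at (x, y) from a linear depth
// buffer: sample a small ring around the pixel and count how many samples
// are significantly closer to the camera, i.e. likely occluders.
float occlusionAt(const std::vector<float>& depth, int w, int h,
                  int x, int y, int radiusPx, float bias)
{
    static const int ring[8][2] = {
        { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 },
        { 1, 1 }, { -1, -1 }, { 1, -1 }, { -1, 1 }
    };
    const float centre = depth[y * w + x];
    int samples = 0, occluders = 0;
    for (const auto& o : ring) {
        const int sx = x + o[0] * radiusPx;
        const int sy = y + o[1] * radiusPx;
        if (sx < 0 || sy < 0 || sx >= w || sy >= h)
            continue;                       // off-screen, skip
        ++samples;
        if (centre - depth[sy * w + sx] > bias)
            ++occluders;                    // sample lies in front of us
    }
    // 1.0 = fully open, towards 0.0 = heavily occluded.
    return samples > 0 ? 1.0f - float(occluders) / samples : 1.0f;
}

The map of these values is what gets blurred and up-scaled before being combined with the scene.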

Code and binary downloads will follow in a few days!