cv.jit is now compatible with 64-bit Max

May 4, 2015

I have had a lot of requests from people who wanted to use cv.jit with the latest version of Max. Unfortunately, I have been too busy to rebuild the externals for 64-bit compatibility.

Fortunately, Cycling ’74 has been kind enough to do the port themselves.

Here’s the new Github repository:

Also, you can check out the Maxology website, which features cv.jit as part of its first Starter Kit.

A Simple Eye Tracker For Jitter

October 20, 2011

A few years ago, I made a rough eye tracker in Jitter with cv.jit for my own use. It wasn’t the most thoroughly advanced approach to the problem but it did the job, and it’s relatively lightweight.

I moved on to other projects and forgot about the patch almost entirely, but from time to time people write to me asking questions about face tracking (which is different from face detection) and eye tracking. And so, here it is, my eye tracker, for anyone to use.

Download “Eye tracker” and necessary abstractions.

As a bonus, you also get a number of useful abstractions, although they’re undocumented and will remain this way. Also, don’t ask me how they work; I don’t remember. The names are fairly descriptive, though.

You will need the standard cv.jit objects for this patch to work.

As always, hope this helps.

jit.freenect.grab Release Candidate 4

October 15, 2011

A number of people have been writing me about what appeared to be a memory leak in jit.freenect.grab. Indeed, there was. It has been found and, as far as I can tell, squashed. If any of you wish to compile the external on your own, please be aware that it is the OSX version of libusb that leaks. Be sure to read the libfreenect README carefully to find out how to patch the problem.

I also get people asking me about a Windows external. Unfortunately, it's not going to happen soon, at least not unless some kind soul does the work for us, because I just don't have enough time to dedicate to a port.


jit.freenect.grab Release Candidate 3

January 20, 2011

Release candidate 3 is out!


It’s now possible to output the raw image from the infrared camera instead of the RGB image. This is the raw data the Kinect uses to estimate depth. This means that by blocking the infrared projector (on the left side of the device) it’s now possible to use the Kinect as an infrared camera. Note, however, that the infrared camera was calibrated to work with the infrared projector, and you may find ambient infrared light to be either too dim or too bright to be usable.

Even with RC2, some users reported stalling issues. It appears that stalling occurs under heavy CPU load, although I haven’t verified that this is true in all cases.

jit.freenect.grab Release Candidate 2 is out (finally)

January 10, 2011

Taking a short break from what has been a truly hectic and near-sleepless month, I found the time to upload a slightly improved jit.freenect.grab. Several users have reported to me that jit.freenect.grab stops working after a while. I think I’ve identified the culprit and this new version may solve the problem. I hope so.


Some people have also asked me about a Windows version of the external. libfreenect, on which this external is based, now works on Windows, and a port should be possible. Or at least, it would be if I had the time. Currently, my main priority is getting enough sleep to keep a sane mind. However, the code is on Github, and if anyone wants to take the time to work on a Windows version, that would be fantastic.

Some people are also interested, of course, in NITE, which is a skeleton tracking system developed by Primesense, the company that designed the Kinect’s depth sensor. As far as I know, NITE requires Primesense’s own drivers, which would require a new external altogether to work in Jitter. It would be a very useful project but, unfortunately, my current engagements don’t allow me the time to work on something like this at the moment.

Here’s hoping rc2 solves everybody’s problems.

jit.freenect.grab page now live

December 3, 2010

I just published a page for the jit.freenect.grab external I have been working on. It allows users of Max/MSP/Jitter to get depth and image data from their Kinect.

Head over to for more info!

Minor update to cv.jit

August 16, 2010

I have just released cv.jit version 1.7.1, which fixes two issues. A bug in cv.jit.shift, which also affected cv.jit.touches, was causing memory leaks. Furthermore, some Windows users were seeing 14001 errors when trying to use externals that make use of OpenCV functions. Both problems have now been fixed.

Using cv.jit.touches: recycling index values

June 29, 2010

cv.jit.touches, a new object added in the 1.7.0 release, tracks the movement of bright spots in a greyscale image and reports “touch”, “drag” and “release” information as regions appear, move or disappear. Each region is assigned a unique ID number, so that even if regions move, you can always know where an object is from frame to frame. This is unlike using cv.jit.label-based blob analysis, where you are never guaranteed that the same object in a scene will end up with the same index.

By default, cv.jit.touches outputs ever-increasing indices, although the counter is always reset to 0 when there are no active regions. For some applications, this is a reasonable approach: every touch event has its own unique ID. However, some may wish to recycle IDs. If you’re in this situation, I made an abstraction that renumbers the IDs coming out of cv.jit.touches to re-use released IDs and keep their values as low as possible. It will always output the smallest value that isn’t currently assigned to another active region.
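The renumbering idea can be sketched outside of Max. The following Python class is an illustrative analogue, not the abstraction's actual implementation: raw, ever-increasing IDs are mapped to the smallest recycled ID not currently in use, and a recycled ID is freed again on release.

```python
class IdRecycler:
    """Illustrative sketch of the ID-recycling logic: map ever-increasing
    raw region IDs to the smallest free low-valued ID."""

    def __init__(self):
        self.mapping = {}    # raw ID -> recycled ID
        self.in_use = set()  # recycled IDs currently assigned

    def touch(self, raw_id):
        # Assign the smallest recycled ID not held by an active region.
        new_id = 0
        while new_id in self.in_use:
            new_id += 1
        self.in_use.add(new_id)
        self.mapping[raw_id] = new_id
        return new_id

    def drag(self, raw_id):
        # A moving region keeps the ID it was assigned on touch.
        return self.mapping[raw_id]

    def release(self, raw_id):
        # Free the recycled ID so a future touch can re-use it.
        recycled = self.mapping.pop(raw_id)
        self.in_use.discard(recycled)
        return recycled
```

For example, if regions 100 and 101 are touched, they get IDs 0 and 1; once 100 is released, the next touch re-uses ID 0 rather than counting up to 2.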

Download cv.jit.touches.recycle.

In order to use this file, just copy it to the “Max5/Cycling ’74/cv.jit-support/Abstractions” folder.

You can see how it works by copying the following code and selecting the “new from clipboard” option from Max’s file menu.


cv.jit 1.7 is finally out!

June 7, 2010

The latest version of cv.jit, 1.7.0, is finally out. I say finally because it has been on the brink of release for several months now, but life being what it is, I only now managed to put the finishing touches on it.

The most obvious change is that the help files have been completely re-written in Max 5 format. cv.jit 1.6 and earlier help files did not display properly in Max 5, owing to some issue with double-byte comments.

A few objects have also been added. cv.jit.opticalflow combines the functionality of cv.jit.LKflow and cv.jit.HSflow. These were two of the earliest externals I wrote, and I now somewhat regret the decision to keep them separate: they essentially do the same thing, albeit in different ways. cv.jit.opticalflow also adds support for two newer optical flow estimation algorithms: block-matching and a brand-new bleeding-edge real-time algorithm by Gunnar Farnebäck. To go along with this new external (and the two older optical flow objects), I also added a drawing utility, cv.jit.flow.draw, which displays the optical flow field using hue for direction and saturation for distance.
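The hue/saturation mapping can be illustrated with a small sketch. This is not cv.jit.flow.draw's actual code, just the same idea applied to a single flow vector in plain Python: the vector's angle picks the hue and its magnitude, clamped to an assumed maximum, picks the saturation.

```python
import colorsys
import math

def flow_to_rgb(dx, dy, max_mag=10.0):
    """Map one optical-flow vector (dx, dy) to an RGB colour:
    hue encodes direction, saturation encodes magnitude.
    max_mag is an assumed normalization constant."""
    angle = math.atan2(dy, dx)               # direction, -pi..pi
    hue = (angle + math.pi) / (2 * math.pi)  # remap to 0..1
    magnitude = math.hypot(dx, dy)
    sat = min(magnitude / max_mag, 1.0)      # clamp to full saturation
    return colorsys.hsv_to_rgb(hue, sat, 1.0)
```

A zero vector comes out white (no saturation), while fast motion in a given direction produces a fully saturated colour whose hue tells you which way the pixel moved.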

Farnebäck optical flow, visualized with cv.jit.flow.draw

I often get questions about tracking blobs, or about dealing with the fact that cv.jit.label doesn’t always give the same label to what we would perceive as being the same object. The new object cv.jit.touches sort of addresses these issues. It’s a greyscale/binary region tracker. It assumes that the regions are of roughly the same size and don’t overlap. As the name implies, it was packaged with multi-touch interfaces in mind, and it outputs information such as “touch”, “drag” and “release”, but it can be used with other kinds of inputs.

cv.jit.threshold implements adaptive thresholding, in which each pixel is compared to the average brightness of its surroundings instead of a fixed value. This is especially useful when working under slightly uneven lighting situations.
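To make the idea concrete, here is a minimal pure-Python sketch of adaptive thresholding, assuming a square neighbourhood with clamped borders; the parameter names are illustrative, not cv.jit.threshold's actual attributes.

```python
def adaptive_threshold(img, radius=1, offset=0):
    """Binarize a 2-D greyscale image (list of rows of ints): a pixel
    becomes 1 if it is brighter than the mean of its (2*radius+1)^2
    neighbourhood plus an offset, else 0."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the neighbourhood, clamping coordinates at the borders.
            vals = []
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    vals.append(img[yy][xx])
            local_mean = sum(vals) / len(vals)
            # Compare against the local mean rather than a global constant.
            out[y][x] = 1 if img[y][x] > local_mean + offset else 0
    return out
```

Because each pixel is judged against its own surroundings, a dim region under weak lighting can still segment correctly where a single global threshold would fail.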

Finally, cv.jit.grab is a ridiculously simple but very useful abstraction that wraps either jit.qt.grab or jit.dx.grab, depending on your platform. This allows you to write cross-platform patches.
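The pattern is just a platform switch. As a rough analogue in Python (the abstraction itself is a Max patch, and the function name here is hypothetical), the logic amounts to:

```python
import sys

def pick_grabber():
    """Illustrative analogue of cv.jit.grab's platform switch:
    QuickTime-based capture on Mac OS, DirectX-based capture elsewhere."""
    if sys.platform == "darwin":
        return "jit.qt.grab"
    return "jit.dx.grab"
```

A patch that instantiates cv.jit.grab instead of either grabber directly will load the appropriate object on both platforms without modification.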

Another big change, apart from the help files, is that I moved the cv.jit site to my own domain. As mentioned in every help file and abstraction, IAMAS, the great media art institution in Gifu, Japan, has provided support for my work on cv.jit, in the form of computers, software, time, unwitting testers and advice from teachers and colleagues. Alas, my contract having reached its end, I don’t work there anymore, and I thought it might be best to gather all my work under the same roof, here. The actual files are hosted on Sourceforge, so that those who are interested in actually doing some development can dig into the SVN repository.

Head over to the new cv.jit page for downloads!

Gragra DSP Exhibition 2010 – Kyoto

March 22, 2010

Shinobu Toma – Ghost in the Space

Kazuomi Eshima – Remind

Kei Shiratori – twist suburbia

Daichi Misawa – Skies

Leo Kikuchi – Landscape in my Arms

Kaori Takemoto – Hunter-gatherer Colorist

Kanna Komaki – Utopian Babble