Ugo.san

Human-computer Interaction, Software Development and things in between

G.a.m.b.i.t – sketching with multiple devices

I’m happy with the final outcome of my system, now that I’m approaching the final steps. G.a.m.b.i.t (Gatherings and Meetings with Beamers and Interactive Tablets) was built to let people prototype interfaces using any device. In the picture below I’m sketching on a large projected surface while the system is running on a Sony e-reader!

gambit sketching system


Gambit is described in this paper: GAMBIT: Addressing multi-platform collaborative sketching with HTML5

 

User Interface Engineering map

Here is a map positioning UI Engineering (the main area of my PhD thesis) among other disciplines. I’ve defined UIE as a combination of Usability Engineering and Information Visualization, so my work addresses both Interface Design and Navigation. This illustration is based on the original in Dan Saffer’s Designing for Interaction.

User Interface Engineering


Visual Design is concerned with form – the spatial organization of information and layout within user interfaces. Information Architecture is concerned with the structure of content – how best to organize and label content so that users find the information they need. Interaction Design is focused on behavior and context, as is Human-Computer Interaction; they differ, however, in how they address the human aspect of computer systems, since HCI’s methods are more those of engineering and computer science than of design.

 

How do we perceive lag? Part 1

Humans can perceive individual frames only up to around 10 to 12 frames per second. Early silent film projections had frame rates varying from 16 to 24 FPS, and nowadays the standard rate for movies is at least 24 FPS. But how is this perceived when we interact with whatever is being shown to us? This series of posts will report part of my research on lag perception while drawing.

What is lag?

The phenomenon behind human frame rate perception is called flicker fusion, i.e. the point at which a flickering image appears to be steady.

Flicker fusion at 15 frames per second

Within the research domain of human vision, there are works addressing the frame rate limit that human beings are able to perceive: it is estimated that humans can perceive roughly 10 to 12 individual frames per second; above this limit the illusion of motion takes over, and individual frames can no longer be distinguished.

However, this limit applies when no interaction is required. So I started to investigate whether it also holds for the specific case of interacting with devices, more specifically drawing on electronic devices. The paper is called “Assessing lag perception in electronic sketching” and it’s available here and here.

15 vs. 30 vs. 60 FPS

Can you tell the difference between the three rates below? Bo Allen has done a simple yet very interesting comparison between frame rates:


(Three embedded animations comparing 15, 30 and 60 FPS)

(source: http://boallen.com/fps-compare.html)

* update: There is another great (interactive) tool for comparing refresh rates: http://frames-per-second.appspot.com/

Interaction and Lag

One of my research interests is investigating sketching (human drawing) on electronic devices. In fact, electronic sketching was one of the first input modalities to be considered in computing: take a look at these YouTube videos where Ivan Sutherland demonstrates his electronic sketching system... in 1963! I’m also maintaining a playlist about the history of electronic sketching, with some interesting videos; check it out.

Before touchscreen devices became mainstream, the only viable option people had for sketching was to use Wacom tablets. Now we have many different devices, with different performance and screen sizes. And we also have HTML5, which allows us to build a single system that runs on all those devices through a browser.

So I implemented a sketching system and started trying different upper limits for the rendering, as you can see in the video below, where the sketch visibly lags behind the pen.
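The cap itself is quite simple to implement. Here is a minimal sketch of the idea for an HTML5 canvas loop (an illustration only, not the actual experiment code; the variable names and the 15 FPS value are just assumptions): input points are collected as they arrive, but the stroke is only redrawn when the frame budget allows.

```typescript
// Minimal sketch of capping the rendering rate of a canvas drawing loop.
// Illustrative only: names and values here are assumptions, not the
// actual experiment code.
const canvas = document.querySelector('canvas')!;
const ctx = canvas.getContext('2d')!;

const maxFps = 15;                     // upper limit under test (15, 30, 60...)
const frameInterval = 1000 / maxFps;   // minimum time between redraws, in ms
let lastRender = 0;

const strokePoints: { x: number; y: number }[] = [];

// Input events are collected immediately...
canvas.addEventListener('pointermove', (e) => {
  strokePoints.push({ x: e.offsetX, y: e.offsetY });
});

// ...but the stroke is only redrawn when enough time has passed since the
// last frame, which is what makes the ink visibly lag behind the pen at
// low caps.
function render(now: number) {
  if (now - lastRender >= frameInterval) {
    lastRender = now;
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.beginPath();
    strokePoints.forEach((p, i) => {
      if (i === 0) ctx.moveTo(p.x, p.y);
      else ctx.lineTo(p.x, p.y);
    });
    ctx.stroke();
  }
  requestAnimationFrame(render);
}
requestAnimationFrame(render);
```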

How would people perceive the different rates while sketching, without (of course) knowing them in advance? I set up an experiment and tested it with 35 subjects; that’s in Part 2 of this series.

The experiment

That will be in Part 2, soon. =)

 

Mammooth Fuzz

This pedal is based on the ZVEX Woolly Mammoth:

 

Photos:

(Re)making a Conference Poster with Inkscape

Last month I needed to prepare a poster for the DIS2012 and EICS2012 conferences describing my research (Addressing Multi-platform Collaborative Sketching), and I wanted to do it with Inkscape. Then I came across this amazing design by Felix Breuer.

But I needed it in a vertical format. So, in case you are in the same situation, here it is:

Download the ZIP file
GitHub repository

You can also see the original (horizontal) version here:

FPSAC poster rasterized

* Update * There is a new “vampire red” version by Mathieu Zen:

POSTER vampire red template

Lot of space with Fullscreen plugin

I’ve just forked an old, abandoned plugin project created to make Eclipse go fullscreen. Originally, the plugin maximizes the usable space by hiding everything but the editors and the navigator, which means hiding even the status bar and making it really hard for developers to use.

The only available option was to show/hide the menu bar, which is useful for RCP apps. But for IDE usage the status line MUST show up (it’s impossible to debug a stack trace without knowing which line you are on, for instance). My buddy Robson is an Arch Linux packager and they were facing this issue with Eclipse, so he asked me to fix it. :)

Get the plugin

You can get the .jar here and drop it into your plugins/ folder. If you are an Eclipse plugin developer, you might also want to check out the sources on my GitHub.

Toggle Fullscreen

You can use Ctrl+Alt+Z to toggle fullscreen, or go to Window -> Fullscreen.

Configure

You can choose to hide or show the Menu Bar and Status Bar in Window -> Preferences -> General -> Full Screen.

Enjoy! :)

Android Lightsword *updated*

Lightsword just reached 100,000 installs, which is awesome. The new version has 5 hilt options, gesture support, CLASH sounds and color options. Want a pink saber? OK!

Use the force with Lightsword, an app that simulates a lightsaber on Android. I’ve spent some time fine-tuning the accelerometer handling to provide a smooth interaction: you just have to activate the sword and swing your phone around. Works great at bars :) Trust me on this one. Now go try it.
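In case you are curious about how that kind of smoothing is usually done: an exponential low-pass filter over the raw samples goes a long way. A minimal sketch of the idea (not the app’s actual code; the alpha value is an assumption):

```typescript
// Exponential low-pass filter: a common way to smooth noisy accelerometer
// samples before mapping them to swing intensity. Illustrative sketch only;
// alpha = 0.2 is an assumed tuning value, not the one used in Lightsword.
const alpha = 0.2;
const smoothed = { x: 0, y: 0, z: 0 };

function onAccelerometerSample(raw: { x: number; y: number; z: number }): number {
  smoothed.x = alpha * raw.x + (1 - alpha) * smoothed.x;
  smoothed.y = alpha * raw.y + (1 - alpha) * smoothed.y;
  smoothed.z = alpha * raw.z + (1 - alpha) * smoothed.z;

  // The swing "strength" can then be taken from the filtered magnitude.
  return Math.sqrt(smoothed.x ** 2 + smoothed.y ** 2 + smoothed.z ** 2);
}
```

Lower alpha values give smoother but laggier motion, so tuning is mostly about finding the point where swings still feel immediate.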

Android Theremin is out!

Theremin is finally released on the Android Market. It’s in an alpha stage, but I will be updating it quite a lot. For now it can be used to produce sounds based on the magnetic field around your phone. You can manipulate it using a metal object such as a coin or your earphones. It can also serve as a metal detector! :)

There is a video showing Theremin in action.

I would really appreciate some feedback and feature requests. I hope you enjoy it.

Android Theremin prototype *updated*

"I'm so much cooler than you guys playing violins..."

Theremins are weird musical instruments. They consist basically of two antennas; the player moves their hands around them, altering the electromagnetic field. The device then registers this change and plays creepy sounds. Take a look at this Pato Fu song and a lesson on how to operate a theremin.

So I’ve bought an HTC Tattoo and of course, decided to try to develop apps for it. The whole SDK is amazing!

The concept of a theremin can be implemented on Android, since the phones have a digital compass and register the magnetic field on 3 axes (!). Here is the first prototype and my very first Android app. Yay! \o/

It’s possible to change the magnetic field registered by the phone using a coin. The closer the coin gets, the lower the note. Actually, real theremins work the opposite way :)

*Update!*

Click here for the first version

For now, the notes are pre-rendered .ogg files of C scales =/ Android doesn’t have a MIDI toolkit, right? I want to be able to produce my own frequencies, so the user can manipulate them like on a real theremin.
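Just to illustrate what I mean by producing my own frequencies: conceptually it is enough to map the field reading to a pitch and synthesize a sine wave sample by sample. A rough sketch of the idea (the mapping constants are made up, and this is not the app’s code):

```typescript
// Rough sketch: map a magnetic field reading to a pitch and synthesize one
// buffer of a sine wave at that frequency. All constants here are made-up
// assumptions for illustration, not values from the app.
const sampleRate = 44100;

function fieldToFrequency(fieldMicroTesla: number): number {
  // Stronger field (coin closer) -> lower note, as in the prototype,
  // which is the opposite of a real theremin.
  const minHz = 110;   // A2
  const maxHz = 880;   // A5
  const t = Math.min(Math.max(fieldMicroTesla / 100, 0), 1); // clamp to [0, 1]
  return maxHz - t * (maxHz - minHz);
}

function synthesizeSine(frequencyHz: number, durationSec: number): Float32Array {
  const samples = new Float32Array(Math.floor(sampleRate * durationSec));
  for (let i = 0; i < samples.length; i++) {
    samples[i] = Math.sin(2 * Math.PI * frequencyHz * (i / sampleRate));
  }
  return samples;
}

// e.g. a 50 µT reading gives a short tone somewhere mid-range:
const buffer = synthesizeSine(fieldToFrequency(50), 0.1);
```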

Using string distance to compare sketches

post-it with the map

There is an article called Trainable Sketch Recognizer for Graphical User Interface Design by A. Coyette and others showing an approach to recognizing pen-made sketches based on the Levenshtein distance algorithm for string comparison.

The article talks about recognizing user interface elements such as buttons, combo boxes and windows when they are sketched by a designer. Well, they do that by using Levenshtein’s string distance algorithm. That’s right, an algorithm created for string comparison, which measures how close two strings are in terms of character edits. You know, when there is a typo in your Google search like “algoritm” and it says “did you mean algorithm?”, that’s a string distance algorithm working. It is really simple, and I was quite amazed at how well it worked for sketches.

So, what did the authors of the article do to transform drawings into words? First you assign a number to each cardinal direction, to compose your words. Let’s say 1 for north, 2 for northeast, 3 for east, and so on. Then take the (x,y) points of the sketch and compare each point to the next one: if a point is north of the previous one, then it’s a gesture going up and the character for north, 1, is added to the word.


Figuring out which cardinal direction each (x,y) point lies in, relative to the previous one, is easy; take a look at the post-it at the beginning of this post (it’s actually a post-it, it’s on my wall). Take two points A and B: if (B.x – A.x) is positive and (B.y – A.y) is zero, then B is east of A. If they are negative and positive respectively, then B is southwest of A (remember that in screen coordinates y grows downwards).
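Putting the two steps together, the encoding is essentially a chain code over consecutive points. A small sketch of the idea (the digit-per-direction mapping follows the numbering above, but the code itself is only an illustration):

```typescript
// Encode a stroke as a "word": each pair of consecutive points becomes one
// character naming the direction of the movement (a chain code).
// 1 = N, 2 = NE, 3 = E, 4 = SE, 5 = S, 6 = SW, 7 = W, 8 = NW.
type Point = { x: number; y: number };

function directionChar(a: Point, b: Point): string {
  const dx = Math.sign(b.x - a.x); // -1, 0 or 1
  const dy = Math.sign(b.y - a.y); // screen coordinates: positive y is down
  const table: Record<string, string> = {
    '0,-1': '1', '1,-1': '2', '1,0': '3', '1,1': '4',
    '0,1': '5', '-1,1': '6', '-1,0': '7', '-1,-1': '8',
  };
  return table[`${dx},${dy}`] ?? ''; // identical points add nothing
}

function encodeStroke(points: Point[]): string {
  let word = '';
  for (let i = 1; i < points.length; i++) {
    word += directionChar(points[i - 1], points[i]);
  }
  return word;
}
```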

So a square would be something like: 3333333355555557777777111111111
A triangle would be like: 45454544537777777782232312


But people draw things differently; you may start a square by moving your pen south instead of east, for example.
Since the algorithm is so fast, you can compare your sketch to several samples of squares, i.e. several words, or even better, let users tell your application what they meant with what they just drew:
“this is a square, learn the way I do”
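The comparison itself is then just the classic Levenshtein distance between those words. A minimal sketch of matching an encoded stroke against a few stored samples (the template words below are invented placeholders, not real training data):

```typescript
// Classic dynamic-programming Levenshtein distance between two words.
function levenshtein(a: string, b: string): number {
  const dist: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dist[i][j] = Math.min(
        dist[i - 1][j] + 1,       // deletion
        dist[i][j - 1] + 1,       // insertion
        dist[i - 1][j - 1] + cost // substitution
      );
    }
  }
  return dist[a.length][b.length];
}

// Pick the shape whose stored samples are closest to the sketched word.
// These sample words are invented placeholders, not real training data.
const templates: Record<string, string[]> = {
  square: ['3333555577771111', '5555777711113333'],
  triangle: ['4444447777772222'],
};

function classify(word: string): string {
  let best = { shape: 'unknown', distance: Infinity };
  for (const [shape, samples] of Object.entries(templates)) {
    for (const sample of samples) {
      const d = levenshtein(word, sample);
      if (d < best.distance) best = { shape, distance: d };
    }
  }
  return best.shape;
}
```

With more samples per shape, and samples recorded from the users themselves, the matching naturally adapts to the way each person draws.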

It’s a powerful tool which, combined with other algorithms such as corner finding, could give a fingerprint of the user’s sketch.

This is implemented as the single algorithm responsible for recognition in the Sketch Shapes application. There is more about this on the way as the project matures: new algorithms will be added, but I think it couldn’t get any simpler than Levenshtein’s.

Kudos to Coyette and the team.