Sunday, January 24, 2010

SemFeel: A User Interface with Semantic Tactile Feedback for Mobile Touch-screen Devices

Comments posted on Jill's Blog and Chris's blog.

Touch screens are present on most new mobile devices. While they are versatile and useful, you have to be looking at the screen to know whether you have touched the right button. Recently, makers of mobile devices have been experimenting with vibration to inform users when they press a button on the screen. In their paper, Koji Yatani and Khai Truong go further, using multiple vibration actuators in a mobile device to tell the user which button he or she is pressing.


SemFeel is a sleeve that fits over a mobile device and contains five vibration actuators: one in the middle and one each at the top, bottom, left, and right. Using these actuators, eleven different vibration patterns can be produced: one at each of the five actuators, one sweeping left to right and vice versa, one sweeping top to bottom and vice versa, one moving clockwise, and one moving counterclockwise. Experiments with each of these patterns showed that users were easily able to recognize all of them except counterclockwise.
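To get a feel for how few moving parts this takes, here is a rough Python sketch of how the eleven patterns might be encoded as ordered sequences of actuator activations. The actuator names, timings, and the vibrate() stub are my own inventions for illustration, not the authors' implementation.

```python
def vibrate(actuator, duration_ms):
    # Stand-in for a real actuator driver: just log the pulse.
    print(f"buzz {actuator} for {duration_ms} ms")

ACTUATORS = ["top", "bottom", "left", "right", "middle"]

PATTERNS = {
    # Single-position patterns: one burst at each of the five actuators.
    **{name: [name] for name in ACTUATORS},
    # Directional sweeps.
    "left_to_right": ["left", "middle", "right"],
    "right_to_left": ["right", "middle", "left"],
    "top_to_bottom": ["top", "middle", "bottom"],
    "bottom_to_top": ["bottom", "middle", "top"],
    # Circular sweeps (counterclockwise was the hardest for users to recognize).
    "clockwise": ["top", "right", "bottom", "left"],
    "counterclockwise": ["top", "left", "bottom", "right"],
}

def play(pattern_name, pulse_ms=100):
    """Fire the actuators for a pattern, one short pulse at a time."""
    for actuator in PATTERNS[pattern_name]:
        vibrate(actuator, pulse_ms)

assert len(PATTERNS) == 11  # five positional + four linear + two circular
play("clockwise")
```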



One of the tests required users to enter four-digit numbers on the mobile device without looking at its screen. Participants who used SemFeel completed the task far more accurately than those who used a device with only one vibration actuator or none at all.

I think this device would be very useful. Even on my iPod, I always press the wrong button when I'm not looking at the screen. A device like this would help me know whether I'm pressing the right button. I don't like that you have to put a large sleeve on the device to make it work, though; I would try to find a way to integrate the SemFeel system into the device itself.

Saturday, January 23, 2010

Disappearing Mobile Devices

Mobile devices are getting smaller all the time, and with this comes a growing problem: people's hands stay the same size and are too big for the tiny interfaces. In this paper, Tao Ni and Patrick Baudisch look for ways to interact with very small mobile devices. They decided that a gesture recognition system would work best, and to record the gestures they used motion scanners, touch scanners, and direction scanners. One problem that came up when implementing the gesture system was that a person's finger could be too small: errors occurred when the sensor lost track of the finger mid-gesture. To compensate, users made gestures with their entire hand instead of just a finger.
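Here is a rough Python sketch of the kind of check I imagine for the finger-too-small problem, dropping a stroke when the sensor loses contact partway through. The sample format and gap threshold are assumptions on my part, not details from the paper.

```python
# Reject a stroke when the scanner loses contact for too long mid-gesture.
# The (x, y, contact) sample format and max_gap value are made up for this example.

def segment_stroke(samples, max_gap=2):
    """samples: list of (x, y, contact) tuples from the scanner.

    Returns the stroke's (x, y) points, or None if contact was lost for
    more than max_gap consecutive samples (the finger-too-small problem).
    """
    points, gap = [], 0
    for x, y, contact in samples:
        if contact:
            points.append((x, y))
            gap = 0
        else:
            gap += 1
            if gap > max_gap:
                return None  # sensor lost the finger; discard the gesture
    return points

print(segment_stroke([(0, 0, True), (1, 0, True), (2, 0, False),
                      (3, 0, False), (4, 0, False), (5, 0, True)]))
```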


They tested the system with two different unistroke input systems, Graffiti and EdgeWrite. Participants made far fewer errors with EdgeWrite than with Graffiti on this interface. By the end of the study, the authors had come up with a list of design implications for a disappearing mobile device, including:

  • Use the entire hand for input to allow for larger motion amplitude; this reduces errors with complex gestures.
  • Prefer unistroke gestures that do not rely on correct recognition of relative position features. (This caused many of the errors for users of Graffiti.)
  • Design devices so that they glance over irregularities and gaps between fingers, but limit the focal range so that motion after lift-off is not recognized as part of a gesture.


While I thought it was a cool idea, I think it would be very difficult to use; I would struggle to remember the gestures. Also, the system gives the user no visual feedback, so you don't know whether the gesture you made was recognized correctly. That is the problem I would try to fix if I were to continue work on this project.

Wednesday, January 20, 2010

Pressure Sensitive Keyboards

Comments on Jill's blog and Brandon's blog

We all use keyboards just about every day; in fact, I'm using one right now! Over the many years we've used them, though, they have kept the same functionality: typing and the occasional keyboard shortcut. This paper discusses making a keyboard with pressure-sensitive keys. Instead of the simple contact points found under the keys of most keyboards, this keyboard uses small domes that detect how much surface area is touching: the harder a key is pressed, the more surface area makes contact.



In gaming, this could be used to control how fast the player's character runs depending on how hard a key is pressed. When typing, pressing the keys harder could increase the font size. Pressure could also be used to detect accidental keystrokes by filtering out presses that are significantly softer than the rest.
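Here is a small Python sketch of how that last idea, filtering out unusually soft presses, might look. The pressure scale, threshold, and filtering rule are made up for the example and are not from the paper.

```python
# Drop keystrokes that are much softer than the others in a typing burst,
# on the assumption that unusually soft presses were accidental.
from statistics import median

def filter_accidental(keystrokes, ratio=0.4):
    """keystrokes: list of (key, pressure) pairs, pressure in [0.0, 1.0].

    Keeps only keystrokes whose pressure is at least `ratio` times the
    median pressure of the burst; softer ones are treated as accidental.
    """
    typical = median(p for _, p in keystrokes)
    return [(k, p) for k, p in keystrokes if p >= ratio * typical]

# Example: the stray soft 'j' (perhaps a resting finger) gets dropped.
print(filter_accidental([("h", 0.8), ("e", 0.7), ("j", 0.1), ("y", 0.75)]))
```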

I thought the concept was pretty cool, and I won't deny that it has applications. I just don't see the feature catching on very quickly. I might use the font-size feature in a chat application. If I were to work on this project, I would probably try to find additional, more useful applications for the keyboard.

Tuesday, January 19, 2010

3D Sketching


EverybodyLovesSketch is an intuitive program that lets the user create 3D models and sketches using concepts from perspective drawing. Artists and designers have several techniques that help them draw things correctly in a perspective view (vanishing points, perspective grids, etc.); these help artists visualize the planes of a 3D space on a 2D surface. EverybodyLovesSketch lets the user specify, select, and draw directly onto such planes, so 3D curves can be drawn with 2D techniques (like drawing on a graphics tablet).
Making a small tick mark on a curve creates a horizontal sketch plane at that curve's height in space. Making two tick marks across curves creates a vertical plane that passes through both marks, and three tick marks define an arbitrary plane. By selecting one or more curves and using flicking gestures, the user can create orthographic sketch planes or orthographic extruded sketch planes. Users can also select one or more curves, copy them, and project them onto a different sketch plane, or select multiple curves that form a closed loop, create a surface from them, and then draw curves directly on that surface.
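To make the tick-mark idea concrete, here is a toy Python/NumPy sketch of how one, two, or three tick marks could map to a plane (a point plus a unit normal). This is my own illustration of the geometry, not EverybodyLovesSketch's actual code.

```python
# Toy mapping from tick marks to sketch planes: one mark gives a horizontal
# plane at that height, two marks give a vertical plane through both, and
# three marks give an arbitrary plane. Coordinate conventions are assumed.
import numpy as np

def plane_from_ticks(points):
    """points: list of one to three (x, y, z) tick-mark positions on curves.

    Returns (point_on_plane, unit_normal).
    """
    pts = [np.asarray(p, dtype=float) for p in points]
    if len(pts) == 1:                        # horizontal plane at the curve's height
        normal = np.array([0.0, 0.0, 1.0])   # assuming z is "up"
    elif len(pts) == 2:                      # vertical plane through both marks
        d = pts[1] - pts[0]
        normal = np.cross(d, [0.0, 0.0, 1.0])
    else:                                    # arbitrary plane through three marks
        normal = np.cross(pts[1] - pts[0], pts[2] - pts[0])
    return pts[0], normal / np.linalg.norm(normal)

point, normal = plane_from_ticks([(0, 0, 1), (2, 1, 1)])
print(point, normal)
```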

A study was conducted to see just how intuitive the program is: 49 high schoolers were taught how to use it and worked with it for 75 minutes a day over 11 days. In most cases, students were able to create decent 3D sketches within the first 3 or 4 days. Many students began with an actual 2D sketch and then translated it to 3D using the program.

Having taken drawing classes, I am familiar with the techniques used in perspective drawing, and I found this paper really fascinating. The program takes the ways artists create the illusion of 3D space and uses them to create something that actually exists in a 3D space; another beautiful blend of art and technology. If I were to add a feature, it would probably be a texture and shader brush that lets you paint on the surfaces of your sketches, since the program is already able to create surfaces between curves and draw on them.

Link to the paper