Saturday, March 20, 2010

Foldable Interactive Displays


Portability has become very important in electronic devices. It used to be that making a device smaller was enough to make it more portable, but our electronic devices can only get so small. Johnny Chung Lee, Scott E. Hudson and Edward Tse have done research on creating foldable displays.

In this paper, they created four different displays: a piece of paper, a scroll, a folding fan and an umbrella. Each had IR emitters placed on it so that a camera could easily track it and a projector could display a screen on it. Since the camera they were using could track four IR emitters at once, placing the emitters carefully let the system detect when the paper had been folded and adjust the size of the projection accordingly. The emitters also let the camera determine the orientation of the display.
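A tiny sketch of how I imagine the fold detection could work. This isn't from the paper; the emitter spacing, threshold and the idea of comparing edge distances are my own guesses at the general approach:

```python
import math

# Known distance between edge emitters on the unfolded sheet.
# Illustrative value, not from the paper.
FULL_WIDTH = 200.0

def detect_fold(corners):
    """Given tracked (x, y) positions of the four corner IR emitters,
    guess whether the sheet is folded and return a projection scale.
    Folding brings opposite edge emitters closer together."""
    tl, tr, bl, br = corners
    top = math.dist(tl, tr)
    bottom = math.dist(bl, br)
    observed = (top + bottom) / 2
    scale = observed / FULL_WIDTH   # shrink the projection to match
    folded = scale < 0.6            # heuristic threshold
    return folded, scale

# Emitters observed only 100 units apart: sheet is roughly half-folded.
folded, scale = detect_fold([(0, 0), (100, 0), (0, 150), (100, 150)])
```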


I think this would be good groundwork for creating usable foldable displays once the device itself can display the screen instead of needing an image projected onto it. As it stands, you would need to carry the camera and projector around with you, which isn't very portable.

Extending 2D Object Arrangement with Pressure-Sensitive Layering Cues

Comment on Jacob's blog

Layers are a very important tool in many graphic design applications. In this paper Philip Davidson and Jefferson Han look at a new way to help users reorder layers that makes use of a pressure-sensitive multi-touch display. The user lifts an image or layer by resting a finger lightly on it. The layer also becomes slightly lighter in color to provide visual feedback. In this mode, the user can move the layer above existing ones. When the user applies more pressure, the layer darkens to give the impression of being pushed down, and the user can slide it underneath existing layers.


The user can also press on the edge of a layer. This lowers the pressed edge and raises the opposite one, so the layer can be slid on top of or underneath other layers. The user can also push down on some layers while lifting others in order to rearrange them. To place a layer between stacked layers, the user can peel the stack back with a finger, as though it were a stack of papers, and drop the new layer into the middle.
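The core idea of mapping pressure to layering could be sketched like this. The threshold values and mode names are made up for illustration; the paper's actual mapping is surely more nuanced:

```python
# Normalized pressure thresholds; illustrative values, not from the paper.
LIGHT, HEAVY = 0.2, 0.6

def layer_mode(pressure):
    """Map normalized finger pressure in [0, 1] to a layering mode:
    a light touch lifts the layer above its neighbors, a hard press
    pushes it underneath, and anything in between drags it in place."""
    if pressure < LIGHT:
        return "lift"   # layer drawn lighter, slides over others
    elif pressure > HEAVY:
        return "push"   # layer drawn darker, slides under others
    return "drag"
```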

I would love to try this out. I use layers extensively when I am drawing on my computer. I would love to use this to see if it makes reordering the layers any easier. Not that it is difficult now, it could just be faster.

Friday, March 19, 2010

Annotating Gigapixel Images

Comment on Jill's Blog

Gigapixel images are huge. Really huge. They consist of billions of pixels. The upside is that they can capture an insane amount of detail. The downside is that not nearly all of it is visible at once. Annotations are a useful way to provide the viewer with extra information about the image, but how do you make annotations on a massive image in such a way that they are easily readable and don't completely clutter up the screen when you zoom out? This is the issue tackled in this paper by Qing Luan, Steven Drucker, Johannes Kopf, Ying-Qing Xu and Michael Cohen.


Annotations can be attached to any sized region of the image, and each annotation has a "depth." As the user zooms in, it appears that they are getting closer to whatever they are zooming in on, and the annotations grow larger along with the image. Once the user zooms past an annotation's depth, however, it is no longer displayed. Many annotations also have a minimum zoom level: they do not appear until the user has zoomed in sufficiently close.

I feel that this is pretty similar to what Google Earth does already. However, I believe this works in real time, and Google Earth doesn't use gigapixel images. I've never worked with gigapixel images, but I can see how this would be useful.

ILoveSketch: As-Natural-As-Possible Sketching System for Creating 3D Curve Models

Comment on Jill's Blog

Seok-Hyung Bae, Ravin Balakrishnan and Karan Singh from the Department of Computer Science at the University of Toronto are the minds behind ILoveSketch. Their purpose was to develop a 3D sketching system that captures the affordances of pen and paper to make things easier for designers. They created several features that aid the designer both in 2D and 3D.

One of the 2D features is an aid for drawing 2D curves. When drawing a curve, designers will usually make several light strokes and darken them as they approach the desired curve. ILoveSketch looks at these multiple strokes and, after a certain timeout period, draws a NURBS curve that best fits them. This helps the designer easily create smooth curves. The same technique can also join multiple curves when the designer makes strokes connecting them. Another 2D feature is the automatic rotation of the paper based on the angle or curvature of the marks made, the goal being to rotate the paper into a more comfortable position for the designer. However, in the user study, the automatic rotation was noted as an undesirable feature.
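The consolidation idea could look something like the sketch below. The real system fits a NURBS curve, which is well beyond a blog post; this just shows the simpler idea of resampling each light stroke to the same number of points and averaging them into one clean curve:

```python
def merge_strokes(strokes, samples=32):
    """Very simplified stand-in for ILoveSketch's stroke consolidation:
    resample each light stroke (a list of (x, y) points) to the same
    number of points, then average corresponding points. The actual
    system fits a NURBS curve; this only captures the averaging idea."""
    def resample(stroke):
        out = []
        for i in range(samples):
            # Linear interpolation along the stroke's point list.
            t = i * (len(stroke) - 1) / (samples - 1)
            lo = int(t)
            hi = min(lo + 1, len(stroke) - 1)
            f = t - lo
            x = stroke[lo][0] * (1 - f) + stroke[hi][0] * f
            y = stroke[lo][1] * (1 - f) + stroke[hi][1] * f
            out.append((x, y))
        return out

    resampled = [resample(s) for s in strokes]
    n = len(resampled)
    return [
        (sum(s[i][0] for s in resampled) / n,
         sum(s[i][1] for s in resampled) / n)
        for i in range(samples)
    ]
```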

Some of the 3D features were familiar because they were refined in the follow-up work I already posted about, "EverybodyLovesSketch." Sketch planes are indicated using the axis widget. Making a gesture across two axes selects the plane that contains both of those axes. Making a flick across a single axis selects the plane through that axis and parallel to the flick mark. One can also indicate an arbitrary sketch plane by flicking from the origin of the axis widget in any direction.


I am glad I read this paper, even though I had already read the paper that expands on it. The other paper comes up with more and better ways to indicate sketch planes, and it was cool to see where EverybodyLovesSketch came from. I thought the coolest feature was building the NURBS curve from multiple strokes. I feel it would be very useful for designers.

Thursday, March 4, 2010

Data-Driven Exploration of Musical Chord Sequences

Comment on Gus's Blog

Eric Nichols, Dan Morris and Sumit Basu developed a tool that lets people who are writing music easily build harmonies on top of it. Using lots of sample data, they can take a melody and generate several pleasing chord progressions that fit it, letting the user explore harmonies quickly and easily. From the input data, a widget is generated with several sliders, each representing a different artist or genre of music. By adjusting these sliders, the user can change the style of the chords that are generated. The users in the study found the tool very useful and fun, though some expressed that they would like the ability to manually change specific measures and the frequency of chord changes.
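My guess at what the sliders do under the hood: each artist or genre contributes a chord-transition probability table, and the slider positions blend those tables into the one used to rank candidate progressions. The table format and numbers here are invented for illustration:

```python
def blend_transitions(tables, weights):
    """Blend several chord-transition probability tables into one,
    weighted by slider positions. Assumed (not actual) table format:
    {(prev_chord, next_chord): probability}."""
    blended = {}
    total = sum(weights)
    for table, w in zip(tables, weights):
        for transition, p in table.items():
            blended[transition] = blended.get(transition, 0.0) + w * p / total
    return blended

# Made-up example tables: a "rock" style favors C -> G,
# a "jazz" style favors C -> F. Sliders set to an even mix.
rock = {("C", "G"): 0.7, ("C", "F"): 0.3}
jazz = {("C", "G"): 0.2, ("C", "F"): 0.8}
mix = blend_transitions([rock, jazz], [0.5, 0.5])
```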

I think this has potential to be a good music writing tool. I would also like to see extra features added like the ones some of the users specified. I'd love to toy with it for a while and see what I could come up with.

Multi-touch Interaction for Robot Control

Comment on Ross's Blog

Mark Micire, Jill Drury, Brenden Keyes and Holly Yanco explored how people would interact with and use a multi-touch interface to control a robot. Items on the control display include forward and backward cameras, top-down and isometric maps of the area being navigated, four directional control buttons, a brake button and a button for the lights. The user study examined how users interacted with the multi-touch controller, and the researchers were surprised to find that every user handled the controls in a very different way.

I thought this was pretty neat, although I think it would be better if there were some kind of tactile feedback from the controls. From this study, a lot could be gleaned about how people perceive the affordances of controllers.

Monday, March 1, 2010

Emotional Design

Donald Norman has definitely taken a different direction with this book than he did in "The Design of Everyday Things." In TDOET, good design was everything. A user should be able to know how to use a device or object quickly without having to think about it. Nothing was more important than good design. Now, it seems as though there is something just a little more important than design. People have to like the things that they use.

Something can be very poorly designed and yet still be enjoyable to use. If someone enjoys using something, they are more inclined to use it again and again, and might even convince their friends to buy the product as well. People will buy and use poorly designed products if those products are fun and enjoyable. Norman goes into great detail about the three levels of human thinking: the visceral (instinctive) level, the behavioral level and the reflective level.

He also talks about how affective computing (emulating emotion in computing) could help us design devices that interact with people better, because of how important emotions are to humans understanding one another. There is so much information you can glean from a conversation just by listening to the tone of voice and observing facial cues and body posture. If our machines could interpret and communicate emotions the way humans do, they would be better equipped to handle their tasks and easier for humans to interact with. In the last chapter, Norman discusses concerns that could be raised in the future as affective computing becomes more advanced and more integrated into our everyday lives.

I thought it was a very interesting read, and certainly different from his last work that we read. He took a very different direction this time, going from pure design to "make it enjoyable to use." The last chapter also felt very different from the rest of the book, I suppose because he saves all the cautions and thoughts about the repercussions of affective computing for the ending. And now, the search for what is more important than emotional design!