Thursday, October 27, 2011

Blog Post #24 - Gesture Avatar: A Technique for Operating Mobile User Interfaces Using Gestures

References

Authors:
Hao Lü     University of Washington, Seattle, Washington, USA
Yang Li     Google Research, Mountain View, California, USA
 

Published In:
CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems

Summary


Hypothesis:
Modern smartphones rely on touch-based GUIs for navigating programs.  This leads to several problems, the two biggest being the "fat-finger problem," i.e. the target UI element is too small for a finger to press accurately, and the "occlusion problem," i.e. the finger activating a control blocks the user's view of it while pressing.  The authors hypothesize that the system they developed, which they call "Gesture Avatar," will perform better than normal touch input and better than the "Shift" system.
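To make the core idea concrete: in Gesture Avatar, the user draws a gesture (such as a letter) near a cluster of small targets, the system matches it to the most likely target, and a large proxy "avatar" then stands in for that tiny control. The sketch below is a hypothetical simplification of that matching step, not the authors' implementation; the target list, labels, and matching rule are all invented for illustration.

```python
import math

def pick_target(targets, recognized_char, gesture_center):
    """Match a recognized gesture character to a small on-screen target.

    targets: list of (label, (x, y)) tuples for small controls (e.g. links).
    Picks the target whose label starts with the recognized character and
    whose position is closest to where the gesture was drawn.
    """
    candidates = [(label, pos) for label, pos in targets
                  if label.lower().startswith(recognized_char.lower())]
    if not candidates:
        return None  # no plausible target; a real system might show all matches
    return min(candidates, key=lambda t: math.dist(gesture_center, t[1]))

links = [("Home", (10, 5)), ("Help", (200, 5)), ("About", (120, 40))]
print(pick_target(links, "h", (190, 10)))  # → ('Help', (200, 5))
```

Both "Home" and "Help" match the drawn "h" here, so proximity to the gesture breaks the tie; the chosen target would then be operated through the enlarged avatar rather than pressed directly.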

Testing the Hypothesis:
To test their hypothesis, the authors built the system they outlined earlier in the paper in Java on Android 2.2, with additional image-processing components written in C++.  They recruited twelve participants, eight male and four female, from an unnamed company.  Each participant learned both the Shift and Gesture Avatar systems; half learned Shift first and the other half learned Gesture Avatar first, to prevent a bias in favor of whichever system was learned first.  Participants were then given a specific task to complete: acquiring a specific target area on the screen using each system.  The target varied in several ways throughout the experiment, and the participants' completion times were measured.  Halfway through the task, participants moved from a sitting position to a treadmill, where they completed the other half while walking.

Hypothesis Results:
The results of the experiment were run through an ANOVA to analyze the variance in the data.  The analysis showed that Shift was faster when the target size was 20px, slower at 10px, and no different at 15px.  This falls in line with the authors' hypothesis that Gesture Avatar would be better at smaller target sizes but slower for larger ones.  Gesture Avatar also showed no difference in completion times between walking and sitting, whereas Shift's times went up significantly while the user was moving.
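For readers unfamiliar with the analysis mentioned above, the sketch below shows the general shape of comparing completion times across target sizes with a one-way ANOVA. The timing numbers are invented for illustration and are not the paper's data, and the paper's actual analysis (likely a repeated-measures design) is more involved than this:

```python
# One-way ANOVA over task-completion times grouped by target size.
# All timing values below are made up for demonstration purposes.
from scipy.stats import f_oneway

times_10px = [2.9, 3.1, 3.4, 2.8, 3.0]  # seconds per trial, hypothetical
times_15px = [2.2, 2.4, 2.1, 2.5, 2.3]
times_20px = [1.6, 1.8, 1.5, 1.9, 1.7]

f_stat, p_value = f_oneway(times_10px, times_15px, times_20px)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates mean completion time differs across target sizes.
```

This only tells you *that* the groups differ; deciding which technique wins at which size, as the authors report, requires comparing the techniques' means at each size.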



Discussion

In my opinion the authors had a well-defined hypothesis that was easy to test and evaluate.  The data from their experiment clearly fell in line with their hypothesis and showed that the Gesture Avatar system is at least worth investigating further.  The system the authors created seems to me a very unique and promising method of user input; I have long struggled with the problems they addressed with Gesture Avatar (moving the caret while typing, selecting tiny hyperlinks).  The authors could have run a more extensive experiment, i.e. a larger number of participants, more diverse participants, and more tasks to perform.  Also, I think having the users complete some of the tasks on a treadmill is pointless because it cannot be generalized to normal walking: on a treadmill you don't have to pay attention to where you are going or what is in front of you, only the pace at which you walk.  A more apt experiment would be to have users navigate an obstacle course or walk on a semi-busy sidewalk (if that is allowed within experimentation rules).

Thursday, October 13, 2011

Blog Post #17 - Biofeedback Game Design

Reference

Authors:  
Lennart Erik Nacke     University of Saskatchewan, Saskatoon, Saskatchewan, Canada
Michael Kalyn     University of Saskatchewan, Saskatoon, Saskatchewan, Canada
Calvin Lough     University of Saskatchewan, Saskatoon, Saskatchewan, Canada
Regan Lee Mandryk     University of Saskatchewan, Saskatoon, Saskatchewan, Canada

Published In:
CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems

Summary

The authors of the paper "Biofeedback Game Design" approach the design of electronic games using "physiological" input methods, by which they mean things such as heart rate (HR) monitors and galvanic skin response (GSR) sensors.  Traditional attempts to combine video games with these forms of input end up as simple games that use HR or GSR sensors as the primary input.  According to the authors, this is a faulty method of game design because these sensors are "passive": the user cannot actively control the responses they measure, leading either to boring gameplay or to simple relaxation-based games.  To correct this discrepancy, the authors created a simple game that utilizes what they call direct physiological inputs.  Examples of these inputs are muscle flexion, breathing patterns, eye gaze, and change of temperature (e.g., blowing warm air), all of which can be directly controlled by the user.  The authors then spend about one to two pages explaining the game they created and how the various physiological sensors interact with it.

To test the use of physiological inputs, the authors created three versions of the game: two with different kinds of sensors and one without any, to act as a control.  The study used ten subjects (seven of them male) who had varying degrees of experience with video games and novel forms of input (e.g., the Nintendo Wii or the Rock Band guitar).  The results showed that the users all preferred the two games with novel inputs over the control game.  Furthermore, when asked which novel inputs they enjoyed the most, the direct sensors received far more votes than the passive sensors across the board.
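The distinction above between direct and passive inputs can be sketched in code. The following is a hypothetical illustration, not the authors' implementation: the sensor name, normalization, thresholds, and game actions are all assumptions, chosen only to show how a deliberately controllable signal (breathing) maps cleanly onto a game control in a way a passive signal like heart rate cannot.

```python
def breath_to_action(chest_expansion: float,
                     inhale_threshold: float = 0.7,
                     exhale_threshold: float = 0.3) -> str:
    """Map a normalized chest-expansion reading (0.0 to 1.0) to a game action.

    Because the player can deliberately inhale or exhale, breathing is a
    "direct" input in the paper's terminology, unlike heart rate, which
    the player cannot consciously set to a target value.
    """
    if chest_expansion >= inhale_threshold:
        return "charge_jump"   # deep inhale charges a jump
    if chest_expansion <= exhale_threshold:
        return "release_jump"  # full exhale releases it
    return "idle"              # normal breathing does nothing

print(breath_to_action(0.9))  # → charge_jump
print(breath_to_action(0.1))  # → release_jump
```

The design point is that the thresholds give the player an action they can reliably trigger on purpose, which is what makes the input feel like a control rather than ambient biofeedback.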


Discussion

Right off the bat I can say the authors did not have nearly enough participants in their study.  Ten is such a low number that it cannot capture the general audience of video games or eliminate outliers who, for one reason or another, dislike novel forms of input in video games.  Also, as mentioned later in the paper, all of the participants were "casual gamers" (aka really, really bad players); four of them even admitted to playing video games only a few times per year.  Given the limited number of subjects, I would think the authors would have stuck with players who play at least once a week rather than people who hardly game at all.  Other than this, I found the concepts in the paper agreeable.  In discussing the difference between direct and passive forms of physiological input, I agreed that direct inputs seem more natural and rewarding to the player (even without having played the game the authors developed).  I could definitely see eye tracking making its way into future games, but I am not sure about the others.  Someone will eventually try them, but whether they are successful remains to be seen.