Thursday, November 10, 2011

Blog Post #27- Sensing cognitive multitasking for a brain-based adaptive user interface

Reference

 Authors:   
    Erin Treacy Solovey     Tufts University, Medford, Massachusetts, USA
    Francine Lalooses     Tufts University, Medford, Massachusetts, USA
    Krysta Chauncey     Tufts University, Medford, Massachusetts, USA
    Douglas Weaver     Tufts University, Medford, Massachusetts, USA
    Margarita Parasi     Tufts University, Medford, Massachusetts, USA
    Matthias Scheutz     Tufts University, Medford, Massachusetts, USA
    Angelo Sassaroli     Tufts University, Medford, Massachusetts, USA
    Sergio Fantini     Tufts University, Medford, Massachusetts, USA
    Paul Schermerhorn     Indiana University, Bloomington, Indiana, USA
    Audrey Girouard     Queen's University, Kingston, Ontario, Canada
    Robert J.K. Jacob     Tufts University, Medford, Massachusetts, USA

Published In:
    CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems






Summary

Hypothesis:
The authors of this paper set out to accomplish a few things with their paper:
  1. Prove that fNIRS is a reliable means of measuring various brain states, as compared to fMRI
  2. Show that fNIRS can be used in different study models
  3. Build a proof-of-concept system in which a user controls a robot while an fNIRS-based adaptive interface manages distractions

    Testing the Hypothesis:
    The authors performed quite a few experiments in order to fully test their hypothesis.  They first constructed a simple test to determine whether fNIRS can distinguish between various states of multitasking in the brain.  This experiment had twelve participants press a certain key depending on the order of capital and lowercase letters in a series of words.

    The next two experiments they performed were somewhat related to each other.  They both involved a human controlling a robot to perform the task of sorting rocks.  The user then had to press particular buttons based on characteristics of the rock samples.

    Results of the Hypothesis:
    The conclusion from the studies was that, using response time alone, the delay, branching, and dual-task conditions are statistically impossible to differentiate; however, it was considerably easier to tell them apart by comparing the levels of oxygenated blood in various regions of the brain.
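As a toy sketch of why oxygenation features can separate conditions that response times cannot, here is a minimal nearest-centroid classifier. All numbers (response times, oxygenated-hemoglobin values, probe regions) are invented for illustration and are not the paper's data or method:

```python
# Toy illustration: response times overlap across conditions, while
# (invented) oxygenated-hemoglobin features separate them cleanly.
# All numbers below are made up; they are NOT from the paper.

def nearest_centroid(sample, centroids):
    """Return the label of the centroid closest to `sample` (Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Mean response times (s) per condition: nearly identical -> no separation.
rt_centroids = {"delay": (1.02,), "branching": (1.03,), "dual-task": (1.01,)}

# Mean oxygenated-Hb change in two hypothetical probe regions: well separated.
oxy_centroids = {"delay": (0.1, 0.1), "branching": (0.9, 0.2), "dual-task": (0.4, 0.8)}

new_trial_oxy = (0.85, 0.25)  # a new trial's features
print(nearest_centroid(new_trial_oxy, oxy_centroids))  # -> branching
```

The point of the sketch is only the geometry: conditions whose response-time centroids nearly coincide cannot be told apart on that axis, while well-separated oxygenation centroids make classification easy.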


    Discussion

    In my opinion, the authors definitely accomplished what they set out to do.  I am very confident about this because they were especially thorough in the case-study portion of their paper.  Most papers that involve some sort of experiment have maybe one example with a few participants.  In this paper, however, the authors performed three separate studies, each with twelve or more people.  As far as relevancy goes, as our smartphones become smarter, the problem of multitasking will only increase.  It is very important to look into ways to increase human efficiency while multitasking takes place and to determine whether it is possible to reduce distractions in today's society.

    Thursday, October 27, 2011

    Blog Post #24-Gesture avatar: a technique for operating mobile user interfaces using gestures

    References

    Authors:
    Hao Lü     University of Washington, Seattle, Washington, USA
    Yang Li     Google Research, Mountain View, California, USA
     

    Published In:
    CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems


    Summary


    Hypothesis:
    Modern smartphones use touch-based GUIs for navigating programs.  This leads to several problems, the biggest two being the "fat-finger problem," i.e. the target UI element is too small for fingers to press accurately, and the "occlusion problem," i.e. the finger that will activate the control blocks the user's view of it while pressing.  The authors hypothesize that the system they developed, called "Gesture Avatar," will perform better than normal input methods and better than the "Shift" system.

    Testing the Hypothesis:
    In order to test their hypothesis, the authors built the system they outlined earlier in the paper in Java on Android 2.2, with additional image-processing components written in C++.  They recruited twelve participants, eight male and four female, from an unnamed company.  In the study, users learned both the Shift and Gesture Avatar systems; half learned Shift first and the other half learned Gesture Avatar first, to prevent a bias in favor of whichever system was learned first.  The participants were then given a specific task to complete: targeting a specific area on the screen using both systems.  The target area varied in many ways throughout the experiment, and the users' performance times were measured.  Halfway through the task, the users moved from a sitting position to a treadmill, where they completed the other half while walking.

    Hypothesis Results:
    The results of the experiment were run through an ANOVA in order to analyze the variance within the results.  The data showed that the Shift system was faster when the size of the target was 20px but slower at 10px, with no difference at 15px.  This falls in line with the authors' hypothesis that Gesture Avatar would be better at smaller target sizes but slower for larger ones.  Also, Gesture Avatar showed no difference in response times between walking and sitting, whereas Shift's times went up significantly while the user was moving.
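For a sense of what the ANOVA is computing, here is a hand-rolled one-way F-statistic comparing two groups of selection times. The timing numbers are invented placeholders, not the paper's measurements:

```python
# Hand-rolled one-way ANOVA F-statistic, sketching the kind of analysis the
# authors describe. The timing numbers below are invented, NOT the paper's.

def f_statistic(groups):
    """One-way ANOVA F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)                    # number of groups
    n = sum(len(g) for g in groups)    # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical selection times (s) at a 10 px target, one list per system.
shift_times = [2.9, 3.1, 3.4, 3.0, 3.2]
avatar_times = [2.0, 2.2, 1.9, 2.1, 2.3]
print(round(f_statistic([shift_times, avatar_times]), 1))
```

A large F means the between-condition differences dwarf the within-condition noise; in practice one would use a library routine such as SciPy's `f_oneway` rather than writing this by hand.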



    Discussion

    In my opinion, the authors had a well-defined hypothesis that was easy to test and evaluate.  The data from their experiment clearly fell in line with their hypothesis and showed that the Gesture Avatar system is at least worth investigating further.  The system the authors created seems to me a very unique and promising method of user input.  I have long struggled with the problems they addressed with Gesture Avatar (moving the caret while typing, selecting tiny hyperlinks).  The authors could have run a more extensive experiment, i.e. a larger number of participants, more diverse participants, and more tasks for them to perform.  Also, I think having the users complete some of the tasks while on a treadmill is pointless because it cannot be generalized to normal walking.  While on a treadmill you don't have to pay attention to where you are going or what is in front of you, only the pace at which you walk.  A more apt experiment would be to have the user navigate an obstacle course or walk on a semi-busy sidewalk (if that is allowed within experimentation rules).

    Thursday, October 13, 2011

    Blog Post #17- Biofeedback Game Design

    Reference

    Authors:  
    Lennart Erik Nacke     University of Saskatchewan, Saskatoon, Saskatchewan, Canada
    Michael Kalyn     University of Saskatchewan, Saskatoon, Saskatchewan, Canada
    Calvin Lough     University of Saskatchewan, Saskatoon, Saskatchewan, Canada
    Regan Lee Mandryk     University of Saskatchewan, Saskatoon, Saskatchewan, Canada

    Published In:
    CHI '11 Proceedings of the 2011 annual conference on Human factors in computing systems







    Summary

    The authors of the paper "Biofeedback Game Design" wish to approach the design of electronic games using "physiological" input methods, by which they mean things such as heart rate (HR) monitors and galvanic skin response (GSR) monitors.  Traditional attempts to combine video games with these forms of input end up with simple games that use HR or GSR sensors as the primary input.  According to the authors, this is a faulty method of game design because these sensors are "passive": the user cannot actively control the responses they measure, leading to either boring gameplay or simple relaxation-based games.  To correct this, the authors created a simple game that uses what they call direct physiological inputs, such as muscle flexion, breathing patterns, eye gaze, and changes of temperature (a la blowing hot air).  These are all inputs that can be directly controlled by the user.  The authors then spent about one to two pages explaining the game they created and how the various physiological sensors interact with it.

    To test the use of physiological inputs, the authors created three different games: two with different kinds of sensors, and one without any to act as a control.  The study used ten subjects (seven of them male) with varying degrees of experience with video games and novel forms of input (i.e. the Nintendo Wii or the Rock Band guitar).  The results showed the users all preferred the two games with novel inputs over the control game.  Furthermore, when asked which novel inputs they enjoyed most, the direct sensors received far more votes than the passive sensors across the board.
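The direct-vs-passive distinction can be sketched in a few lines: a directly controllable signal (here, breathing rate) drives a game parameter immediately. The mapping, threshold values, and function names are all hypothetical, not the paper's implementation:

```python
# Sketch of a "direct" physiological input: deliberate breathing rate maps
# straight onto an avatar's speed. The 12 breaths/min baseline and the
# clamp bounds are assumptions for illustration, not values from the paper.

def clamp(value, lo, hi):
    """Pin `value` into the closed interval [lo, hi]."""
    return max(lo, min(hi, value))

def avatar_speed_from_breathing(breaths_per_min, base_speed=1.0):
    """Faster deliberate breathing -> faster avatar, within sane bounds."""
    return base_speed * clamp(breaths_per_min / 12.0, 0.5, 2.0)

print(avatar_speed_from_breathing(24))  # deliberate fast breathing -> 2.0
print(avatar_speed_from_breathing(6))   # slow, calm breathing -> 0.5
```

A passive signal like heart rate could not be mapped this way, because the player cannot change it on demand; that is exactly the authors' argument for preferring direct inputs.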


    Discussion

    Right off the bat I can say the authors did not have nearly enough participants in their study.  Ten is such a low number that it cannot capture the general audience of video games or eliminate the outliers who, for one reason or another, dislike novel forms of input in video games.  Also, as mentioned later in the paper, all of the participants are "casual gamers" (aka, really, really bad players).  Four of the participants even admitted to playing video games only a few times per year.  Given the limited number of subjects, I would think the authors would have stuck with players who play at least once a week rather than people who hardly game at all.  Other than this, I found the concepts in the paper agreeable.  When discussing the difference between direct and passive forms of physiological input, I agreed that direct inputs seem more natural and rewarding to the player (even without having played the game they developed).  I could definitely see eye tracking making its way into future games, but the others I am not sure of.  I am sure someone will eventually try them, but whether they are successful remains to be seen.

    Thursday, September 29, 2011

    Blog Post #13- Combining multiple depth cameras and projectors for interactions on, above and between surfaces

    Reference

    Authors:
    Andrew D. Wilson     Microsoft Research, Redmond, WA, USA
    Hrvoje Benko     Microsoft Research, Redmond, WA, USA


    Published In:
    UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology






    Summary

    This paper is another of the technical-possibilities type of article.  Instead of forming a hypothesis and setting out to test it, the authors are testing whether a particular type of technology is reasonable to construct and use.  In this paper they evaluate a prototype they call "LightSpace."  LightSpace is an interactive room powered by depth cameras and projectors.  Together these can project onto any surface at a very high rate, allowing interactions such as projecting an image onto a user's hand as he walks, or using any flat surface in the room as a projection board.  The system itself handles all projection and combines the entire worldspace into one reference grid, so developers do not have to worry about implementation details such as which camera or projector covers what.  The idea of projecting onto the human body is called "simulated surfaces" in this paper.  With this, LightSpace keeps a coarse mesh of the human bodies within the room and, much like the Microsoft Kinect, tracks this mesh and allows for easy projection, like a menu onto the user's hand or a graphic that the user wants to move from surface to surface.

    Overall, the paper primarily showcases the various implementation methods the authors used in constructing their LightSpace room.  They also show how one can project a 3D image onto a 2D surface, which can then be analyzed by standard image-processing techniques.
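The 3D-to-2D projection idea can be sketched with a basic pinhole-camera model. The focal length, image center, and sample point below are invented for illustration, and this is a generic textbook projection rather than LightSpace's actual pipeline:

```python
# Minimal pinhole projection: map a 3D camera-space point onto a 2D image
# plane, so ordinary 2D image-processing can run on the result.
# Focal length, principal point, and the sample point are all assumptions.

def project(point3d, focal=500.0, cx=320.0, cy=240.0):
    """Project (x, y, z) in camera space to pixel coordinates (u, v)."""
    x, y, z = point3d
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (cx + focal * x / z, cy + focal * y / z)

# A depth-camera point one meter away, slightly right of the optical axis:
print(project((0.1, 0.0, 1.0)))  # -> (370.0, 240.0)
```

Once every depth sample is flattened this way, "which blobs are touching the tabletop" becomes a plain 2D segmentation problem, which is presumably why the authors found the trick useful.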


    Discussion

    I think the authors accomplished what they set out to do at the beginning of this paper.  They wanted to outline their methodology for creating LightSpace, and I feel they did a good job of explaining the various implementation details.  Moving on, LightSpace is a very interesting piece of technology.  Frankly, many people would be interested in this because of their perceptions of the future: whether from books or movies, many people envision computers as an interactive unit like the one portrayed in this paper.  An interesting point made by the authors is that it would be completely feasible to emulate the functionality of a Microsoft Surface on any flat surface.  What would be even more interesting, and probably useful, would be to allow interaction between the program behind the projectors AND a Microsoft Surface.  Since the Surface has such a high resolution compared to the projectors, users might be more comfortable exploring the interactions between one Surface and another, or between a Surface and a board mounted on the wall like in the paper.

    Tuesday, September 27, 2011

    "Gang Leader for a Day" Review

    I went into the book a little skeptical.  I had already taken two sociology classes and had not had good experiences with the field.  So when I read an excerpt by the author where he claims he is a "rogue sociologist," I was nervous.  After finishing the book, I am very relieved.  He, unlike most sociologists, focused on the actual data and situations of the people rather than turning it into a "bash the opposite political party" piece.

    Another interesting point about this book is the location.  It takes place in Chicago, so I cannot truly be surprised by this level of corruption.  It has long been a running joke in popular culture to make fun of Chicago politics, but in reality it isn't funny.  It is actually a serious matter.  Corruption is always present in any form of government, but the level that exists in Chicago is unacceptably high for a city in the USA.

    Near the end, I was kind of disappointed by the book.  Sudhir felt that he had to distance himself from JT for no reason, even going as far as to say he was never his friend.  Now, I don't understand how you could hang around someone for seven years, go through so much, and basically make your career off of his life, and then not even consider him a "friend."  He didn't have to say that he was his bestiest-best-friend of all time, but it is not hard to call someone a friend, and considering all the things JT did for Sudhir, it is just downright cruel not to even give him that label.

    Monday, September 26, 2011

    Blog Post #11- Multitoe: high-precision interaction with back-projected floors based on high-resolution multi-touch input

    Reference Information

    Authors:
    Thomas Augsten     Hasso Plattner Institute, Potsdam, Germany
    Konstantin Kaefer     Hasso Plattner Institute, Potsdam, Germany
    René Meusel     Hasso Plattner Institute, Potsdam, Germany
    Caroline Fetzer     Hasso Plattner Institute, Potsdam, Germany
    Dorian Kanitz     Hasso Plattner Institute, Potsdam, Germany
    Thomas Stoff     Hasso Plattner Institute, Potsdam, Germany
    Torsten Becker     Hasso Plattner Institute, Potsdam, Germany
    Christian Holz     Hasso Plattner Institute, Potsdam, Germany
    Patrick Baudisch     Hasso Plattner Institute, Potsdam, Germany

    Published In:
    UIST '10 Proceedings of the 23rd annual ACM symposium on User interface software and technology





    Summary

    The authors of this paper have set out to, in their words, "explore the design decisions" of large high-resolution input devices.  Specifically, they explore a system that senses pressure to detect minute foot presses on a floor.  The system they used is called FTIR, and it is able to detect any number of simultaneous pressure points.  A big concern of the paper is whether the authors can eliminate unwanted inputs: since the input system is the floor itself, users have to stand on it and walk across it to do anything, so the system must disregard these incidental foot presses and respond only to deliberate ones.

    To determine the best method of disregarding unwanted foot presses while highlighting intended ones, the group ran a study with thirty volunteers.  The authors created two simulated buttons on the ground and asked the users to show how they would activate one while ignoring the other.  After observing how they behaved, the authors interviewed the volunteers to understand how and why they acted.  Through this study, the authors concluded the best method was to have users tap, double tap, jump, or stomp on a button to activate it.  A few more studies were performed, each with a different question in mind, for example: which part of the foot should cause activation, and which point should be considered the "hotspot," or primary means of activating buttons.
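One of the activation rules above (double tap) is easy to sketch: treat two presses on the same on-floor button within a short window as deliberate, and everything else as walking. The 0.4-second window and the function name are assumptions for illustration, not values from the paper:

```python
# Sketch of one activation rule: two presses on the same floor button
# within a short window count as a deliberate "double tap"; widely spaced
# presses are treated as ordinary walking. The 0.4 s threshold is an
# assumption, NOT a value from the paper.

DOUBLE_TAP_WINDOW = 0.4  # seconds; hypothetical

def detect_double_taps(press_times, window=DOUBLE_TAP_WINDOW):
    """Return the times at which a double tap completes."""
    activations = []
    for earlier, later in zip(press_times, press_times[1:]):
        if later - earlier <= window:
            activations.append(later)
    return activations

# Walking steps are spaced far apart; only the last two presses are a quick tap-tap.
presses = [0.0, 0.9, 1.8, 2.6, 2.85]
print(detect_double_taps(presses))  # -> [2.85]
```

The appeal of this kind of rule is exactly what the study found: it filters out the constant stream of incidental presses that walking produces, at the cost of requiring a slightly unnatural gesture.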

    Hypothesis:
    There isn't a hypothesis so much as a question of usability: is it possible to create a large multi-input floor that can detect such small differences in foot presses that it could tell users apart by the differences in their shoes' soles?  And is it feasible to create a system that users can walk on and activate controls on comfortably?  Ultimately the authors collected enough data from the user studies to form the beginnings of ideas for user input.  They have not yet constructed the actual room with the floor-based input device, but they have a prototype and detailed construction plans.

    Discussion

    As always with these discussions, I must be careful to comment on the technical merit of the paper and not just focus on the feasibility/usefulness of the ideas presented.  So I will only mention this once: I really don't see the point of this.  It has to be implemented into the construction of the room, so home use is out of the window.  In commercial use, an actual keyboard will probably be better 95% of the time; I can only see this relegated to cheap, gimmicky games or other advertising ploys.  Also, as a side note, I have played a LOT of DDR.  I mean a lot, and I never, ever liked using the pad to navigate through the menus.  Many times my friends or I would just pick up the controller or use the keyboard to make selections and leave the dance pad for the actual game.  Even then, playing in arcades was much different from playing at home, because the pads at the arcade were always worn down from too many users putting too much pressure on them, which is a problem I see for this device.

    That aside, I think this technology is pretty interesting.  The ability to differentiate users based on their shoes is very intriguing.  Typing with your feet seems pretty frustrating though; aside from the length of time it would take, the motions sound like they would wear on you.  I think they were spot on with the methods of telling deliberate input apart from walking, though if I were them, I would not force users to jump to do a simple task like bringing up a context menu.  This seems ill-suited to older or less athletic users.

    Thursday, September 15, 2011

    HCI Initial Write-up

    Prior Perceptions


    To integrate myself, I plan to take the route of the typical new member.  I will show up at the meeting, express my interest in joining, and also mention my complete lack of experience in this area.  I suspect that only half of their new members join like this, though.  The other half have probably been fencing for a while in high school and might know the people in the club through major fencing tournaments.  Regardless, as a newbie in the club and in the field of fencing, I hope the group accepts me and 'shows me the ropes' instead of being annoyed by an unskilled person who tags along with the group.

    I think to fully understand a group, one must experience said group without the spectre of being an outsider hanging over them the entire time.  Thus, I plan to actually join the group and take the route of a new member instead of introducing myself as having a project and asking to observe them.  By being on the inside, I hope I can gain an acute understanding of this group, something that would be very difficult as a foreign observer.  Honestly, I think the best way to bring up the project would be just to say that I have to write a few reports over "groups" for a class and let them imagine the rest.  They will probably assume it is for a sociology or psychology class and think nothing of it.  The knowledge that I had this assignment for a while and that I specifically sought out and joined this club solely for this class would probably be harmful to my membership and friendship in this group.  And truthfully, I wanted to join this club anyway.  I took the opportunity to complete this assignment and get involved in something that seemed interesting to me; in all honesty, I probably would have joined this club even without this assignment.


    As far as prior perceptions of this group go... I expect that it will be mostly males.  They will probably all be in what most people would call "nerdy" majors.  I expect a lot of the people in the club will have been doing this since before they came to A&M.  As a result, I assume many will be from upper-middle-class backgrounds.  I am not sure how they will treat new members.  There is the possibility that they will be critical and cold towards newer members, though they also might not be so elitist and instead welcome new blood with open arms.  Of these two possibilities, I cannot know which is true until I actually attend the meeting.



    Initial Results


    The initial interaction with the group was extremely satisfying.  Not only did I have tons of fun, but I was able to gather a ton of good information.  The first thing I noticed was that the gender gap was not as wide as I had originally envisioned: at the beginning of the meeting, there were about 12 people total, with 4 females.  The atmosphere of the group seemed laid back, and they were very happy to have new members, which was heartening.

    The first thing we did once everyone was ready was warm-up exercises.  This activity kind of solidified my initial expectations about the kind of members present in this group; that is, everyone struggled with the exercises.  Basically, this was not a group of hardcore athletes.  After the exercises, during the stretches, we all introduced ourselves so the new members (Trevor and me) could get to know everyone without it being awkward.  At this point, another assumption was proven true: most of these guys were in either engineering or scientific fields.  Next we did footwork drills, and since I was new and did not know how to do them, one of the members was nice enough to take me aside and go over them one on one.

    After these were complete, the rest of the members gathered their equipment and began sparring.  The same member who went over the footwork drills with me continued her kindness, took me to the armory, and showed me where all of the equipment is and what items I needed.  We got back to the practice room, several people helped me suit up, and I was immediately thrust into a sparring lane.  The dominant theory of learning in this group seemed to be trial by fire.  Armed with a foil and almost no knowledge, I was put to the test against one of the experienced members.  At first I was nervous and worried, but these feelings soon faded and I enjoyed myself.  Like all good sports, the best way to learn is to do.  You don't sit in a classroom or have someone explain how to catch a ball or make a free throw; you just go out there and do it.  We spent the rest of the four-hour meeting sparring with different members, getting tips and pointers, and generally having fun.

    Overall, I felt really welcomed into the group, and I expect this to be a very good group to complete my ethnography on.