Thursday, February 14, 2013

PaleoSketch: Accurate Primitive Sketch Recognition and Beautification




Howdy!
            In this blog post, I am going to write a brief summary of the above-mentioned paper and my opinion on it. 

           The authors intend to recognize a user's intention in making a stroke and to use that information in the recognition process. They present a new low-level recognition and beautification system that recognizes eight primitive shapes: Line, Polyline, Circle, Ellipse, Arc, Curve, Spiral, and Helix. The authors also introduce two new features: Normalized Distance Between Direction Extremes (NDDE) and Direction Change Ratio (DCR). 

          NDDE is the length of the stroke between the point with the highest direction value and the point with the lowest direction value, divided by the total length of the stroke. DCR is the maximum change in direction divided by the average change in direction. NDDE is high for curved shapes such as arcs and low for shapes like polylines, while DCR is higher for polylines than for curves.
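A minimal sketch of how these two features could be computed, assuming a stroke is simply a list of (x, y) points and the direction graph is the angle of each segment. This is my own illustration, not the paper's implementation; in particular, a real implementation would unwrap direction angles so that a stroke crossing the ±π boundary does not produce spurious extremes.

```python
import math

def directions(points):
    """Direction (angle) of each segment along the stroke."""
    return [math.atan2(y2 - y1, x2 - x1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def stroke_length(points, i=0, j=None):
    """Arc length of the stroke between point indices i and j."""
    if j is None:
        j = len(points) - 1
    return sum(math.dist(points[k], points[k + 1]) for k in range(i, j))

def ndde(points):
    """Normalized Distance between Direction Extremes: stroke length
    between the points of max and min direction, over total length."""
    d = directions(points)
    i, j = d.index(max(d)), d.index(min(d))
    i, j = min(i, j), max(i, j)
    return stroke_length(points, i, j) / stroke_length(points)

def dcr(points):
    """Direction Change Ratio: max direction change over mean change."""
    d = directions(points)
    changes = [abs(b - a) for a, b in zip(d, d[1:])]
    return max(changes) / (sum(changes) / len(changes))
```

On a quarter-circle arc the direction extremes sit at the endpoints, so NDDE is close to 1, while on a right-angle polyline the single sharp corner drives DCR well above the arc's value.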


Figure 1

To recognize a stroke, it is first passed through a pre-recognition stage, then tested against each primitive shape, converted to a beautified version for each passing shape, and finally passed through a hierarchy stage where each interpretation is ranked. The algorithm for ranking competing interpretations is a very important contribution of this paper.
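The overall flow can be sketched as a small pipeline skeleton. Everything here is hypothetical scaffolding of my own (the `testers` mapping, the `(passed, error, beautified)` triple, and ranking by fit error); PaleoSketch's actual hierarchy stage uses a more elaborate set of ranking rules than a single error score.

```python
def preprocess(stroke):
    """Pre-recognition placeholder: drop consecutive duplicate points
    (a real system would also resample and build direction/curvature
    graphs here)."""
    out = [stroke[0]]
    for p in stroke[1:]:
        if p != out[-1]:
            out.append(p)
    return out

def recognize(stroke, testers):
    """Hypothetical PaleoSketch-style pipeline.

    `testers` maps a shape name to a function returning a
    (passed, error, beautified) triple for the stroke."""
    stroke = preprocess(stroke)
    candidates = []
    for name, test in testers.items():
        passed, error, beautified = test(stroke)
        if passed:
            candidates.append((name, error, beautified))
    # Hierarchy stage, crudely approximated: rank passing
    # interpretations by fit error, best first.
    candidates.sort(key=lambda c: c[1])
    return candidates
```

A higher-level recognizer would then consume the full ranked list rather than just the top entry.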


Figure 2

The results indicate that this new algorithm recognizes primitive shapes very effectively.

I find the ideas presented in this paper very interesting. The ranking of different interpretations is a particularly useful feature: a higher-level recognition system can use context to determine which ranked interpretation is the best fit. 


I used the following sources for this blog post:
[1] B. Paulson and T. Hammond. PaleoSketch: Accurate Primitive Sketch Recognition and Beautification. In Proceedings of the 13th International Conference on Intelligent User Interfaces (IUI '08), pages 1-10. ACM, New York, NY, USA, 2008.

Thanks for reading my blog! Have a great and blessed day!

Gig'em!!!



What!?! No Rubine Features?: Using Geometric-based Features to Produce Normalized Confidence Values for Sketch Recognition



Brandon Paulson, Pankaj Raja, Pedro Davalos, Ricardo Gutierrez-Osuna, Tracy Hammond
Sketch Recognition Lab
Pattern Recognition and Intelligent Sensor Machines Lab
Spacecraft Technology Center
Texas A&M University
3112 TAMU
College Station, TX 77843 USA
{bpaulson, pankaj, p0d9861, rgutier, hammond}@cs.tamu.edu

Howdy! 
             This blog post contains important points from the above-mentioned paper and a brief section of my opinion on it. 

              As I mentioned in my previous blog posts, gesture-based user interfaces are becoming more and more popular. The problem is that existing recognition algorithms are often not accurate enough to interpret a gesture correctly. Two approaches are normally used to identify a gesture: gesture-based recognition and geometry-based recognition. In this paper, the authors present a hybrid of these two methods and demonstrate its high accuracy. The algorithm recognizes primitive single-stroke gestures but can easily be extended to support more complex gestures by using languages like LADDER to describe complex shapes in terms of primitive ones. 

          The authors used a total of 44 features: gesture-based features drawn from Rubine's feature set and geometry-based features. They use a quadratic classifier, and their algorithm returns a ranked list of interpretations that a higher-level classifier can use, together with context, to pick an appropriate interpretation. The rank is assigned based on normalized confidence values for each interpretation, which is a big contribution of this paper.
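To make the "normalized confidence" idea concrete, here is a deliberately tiny sketch of a quadratic (per-class Gaussian) classifier on a single feature; the posteriors it returns sum to 1, so they can serve as normalized confidences for a ranked list. The paper's classifier works over all 44 features with full covariance matrices, so this one-feature version is only my illustration of the principle.

```python
import math

class QuadraticClassifier1D:
    """Toy quadratic classifier on one feature: fits a Gaussian per
    class and returns normalized posterior confidences."""

    def fit(self, xs, labels):
        self.params = {}
        for c in set(labels):
            vals = [x for x, l in zip(xs, labels) if l == c]
            mu = sum(vals) / len(vals)
            var = sum((v - mu) ** 2 for v in vals) / len(vals)
            # (mean, variance, class prior); floor variance to avoid /0
            self.params[c] = (mu, max(var, 1e-9), len(vals) / len(xs))
        return self

    def confidences(self, x):
        """Ranked list of (confidence, class); confidences sum to 1."""
        scores = {c: p * math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(var)
                  for c, (mu, var, p) in self.params.items()}
        total = sum(scores.values())
        return sorted(((s / total, c) for c, s in scores.items()),
                      reverse=True)
```

Because every interpretation carries a probability rather than a raw score, a higher-level recognizer can meaningfully compare or threshold the entries in the ranked list.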

         They started out with the following set of features:
Figure 1

They evaluated a variety of feature subsets to determine which features contributed negatively to recognition and which were most effective. The features shown in bold in the figure above were found to be the most important, and that subset proved optimal for recognizing gestures.
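The paper does not spell out its subset-search procedure in this summary, but one common way to run such a search is greedy forward selection: keep adding whichever feature most improves accuracy until nothing helps. This is a generic sketch of that idea, with a caller-supplied `accuracy` function standing in for cross-validated recognition accuracy.

```python
def greedy_feature_selection(features, accuracy):
    """Greedy forward selection: repeatedly add the feature that most
    improves accuracy(subset), stopping when no feature helps."""
    chosen = []
    best = accuracy(chosen)
    improved = True
    while improved:
        improved = False
        for f in features:
            if f in chosen:
                continue
            score = accuracy(chosen + [f])
            if score > best:
                best, best_f, improved = score, f, True
        if improved:
            chosen.append(best_f)
    return chosen, best
```

A search like this naturally drops features that hurt accuracy (they never get added), which mirrors the paper's finding that some candidate features contributed negatively.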

I find the idea of using a hybrid approach very interesting and useful. It allows users to freely draw without worrying about the underlying technical details and without being constrained. 

I used the above-mentioned paper as the source for this blog post.

Thanks for reading my blog! Have a great and blessed day!

Gig'em!!!



Sunday, February 10, 2013

Visual Similarity of Pen Gestures


A. Chris Long, Jr., James A. Landay, Lawrence A. Rowe, and Joseph Michiels
Department of Electrical Engineering and Computer Science
University of California at Berkeley
Berkeley, CA 94720-1776

{allanl, landay, rowe, cujoe}@cs.berkeley.edu
+1-510-643-7106
http://www.cs.berkeley.edu/~{allanl, landay, rowe}

Howdy! 
             In this post, I'm going to briefly summarize the above research paper and express my opinion on it. 

             In the paper, the authors discuss their research on pen gestures and, in particular, their perceived similarity. Pen gestures are becoming more and more popular every day, as they are easier to use and remember than text commands. But users of a gesture-based input system often confuse gestures, or the system may recognize two different gestures as the same one. The authors designed two experiments to identify the underlying features that make a gesture uniquely perceivable by the user. 

            For the first experiment, they created gestures that varied widely from one another in terms of how a user might perceive them. 


             The users were shown all possible triads (groups of three gestures at a time) from the above gesture set and asked to mark the most different gesture in each triad. The authors analyzed the collected data and compiled a list of 22 features that they thought might be distinguishing factors for a gesture. 
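One simple way to aggregate such triad judgments, sketched below, is to tally how often each gesture is picked as the odd one out across all triads. This is my own minimal illustration (the study's actual analysis derives pairwise similarity scores and fits them against candidate features, which is more involved than a raw tally).

```python
from itertools import combinations

def triad_dissimilarity(gestures, judge):
    """Tally how often each gesture is marked 'most different' across
    all triads; judge(triad) returns the odd one out.  Higher counts
    suggest a gesture is perceived as less similar to the rest."""
    counts = {g: 0 for g in gestures}
    for triad in combinations(gestures, 3):
        counts[judge(triad)] += 1
    return counts
```

Note that the number of triads grows as n choose 3, which is exactly why the second experiment (described next) tried to keep its gesture sets small.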


             One of the goals of the second experiment was to figure out how different features affect the perceived similarity of gestures. The authors made three different gesture sets: the first was designed to explore absolute angle and aspect, the second to explore length and area, and the third to explore rotation. The authors picked two gestures from each set and added them to a fourth set to keep the number of triads manageable. The procedure was similar to the first experiment. 

            The model learned in each experiment was used to predict similarities between pairs of gestures from the other experiment's gesture set. The results showed that the model from the first experiment was slightly better than the model from the second. The first model's predictions agreed with the users' judgments with a correlation of 0.56. They also obtained some interesting results: neither the length nor the area of the bounding box is a very strong distinguishing feature, while the logarithm of the aspect is a strong influence on the perceived similarity of gestures.
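For reference, an agreement figure like the 0.56 above is typically a Pearson correlation between the model's predicted similarities and the users' observed similarities, computed with the standard formula (this is a generic sketch, not code from the paper):

```python
import math

def pearson(xs, ys):
    """Pearson correlation between predicted and observed values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A value of 1 means perfect agreement and 0 means no linear relationship, so 0.56 indicates a moderate, clearly-better-than-chance fit between model and users.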

I found the paper very interesting. Such analysis can be very helpful in designing gestures that are easy to remember and convenient to use, and it can go a long way toward enhancing the user experience of a pen-based interface. 

I used the following sources for this blog post:
[1] A. C. Long, J. A. Landay, L. A. Rowe, and J. Michiels. Visual Similarity of Pen Gestures. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '00), pages 360-367. ACM, New York, NY, USA, 2000.

Thanks for reading my blog! Have a great and blessed day!

Gig'em!!!


Thursday, February 7, 2013

“Those Look Similar!” Issues in Automating Gesture Design Advice



Howdy!!!
                In this blog post, I'd like to talk briefly about the above-mentioned paper and express my views and opinions on it. 

                The authors of the above-mentioned paper designed and implemented a system that recognizes gestures that are similar to one another with respect to human perception and/or gesture-recognition algorithms. Based on that information, it gives advice on how the gestures can be modified so that they do not cause confusion. Novel techniques for human-computer interaction, such as pen-based gesture recognition and human-body-movement-based gesture recognition, are becoming common nowadays, and they are very useful because gestures are quicker and easier to remember than text-based commands. It would be very convenient if interface designers could integrate those techniques into the design of their interfaces, but unfortunately, not many designers have access to tools that can help them with this. Also, the gestures created can sometimes be so similar that the gesture-recognition algorithm cannot detect differences between the gesture classes. 

           The authors developed a system called quill, a tool for designing gesture-based interfaces built on the Rubine algorithm for gesture recognition. It also provides advice when two gesture classes are similar. The designer stores templates for each gesture, forming a gesture class; several gesture classes can be combined to form a gesture group, and the gesture groups and gesture classes together form the gesture set. Each gesture class normally has about 10-15 examples of the gesture.
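The class/group/set hierarchy could be modeled with a few small containers like the ones below. These names and fields are my own hypothetical rendering of the structure described above, not quill's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class GestureClass:
    """One gesture with its training examples (typically 10-15 strokes)."""
    name: str
    examples: list = field(default_factory=list)

@dataclass
class GestureGroup:
    """A named collection of related gesture classes."""
    name: str
    classes: list = field(default_factory=list)

@dataclass
class GestureSet:
    """Top-level container: all gesture groups for one interface."""
    groups: list = field(default_factory=list)

    def all_classes(self):
        """Flatten the hierarchy, e.g. for pairwise similarity checks."""
        return [c for g in self.groups for c in g.classes]
```

A background similarity analysis like quill's would iterate over pairs from `all_classes()` whenever a new class is added, which is exactly the step described next.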

Figure 1

         Whenever the designer creates a new gesture class, the program runs a background analysis to determine whether the gesture would be confusing to the recognizer or be perceived as similar to some other gesture. If so, it warns the designer and provides an explanation and advice on how to modify the gesture to make it more distinguishable. 


In my opinion, it is a very good tool: it makes the creation of gesture-based interfaces easier and ensures that the gestures created have distinctive characteristics, so that different gestures can be reliably told apart. It could be improved to produce more accurate results and extended to support hand gestures and other kinds of gestures.

I used the following sources for this blog post:
[1] A. C. Long, J. A. Landay, and L. A. Rowe. "Those Look Similar!" Issues in Automating Gesture Design Advice. In Proceedings of the 2001 Workshop on Perceptive User Interfaces (PUI '01), pages 1-5. ACM, New York, NY, USA, 2001.

Thanks for reading my blog! Have a great and blessed day!

Gig'em!!!