Eye movement analysis of reading from computer displays, eReaders and printed books
Authors: Daniela Zambarbieri, Elena Carniglia
The aim was to compare eye movements during silent reading of three eBooks and a printed book. The three eReading tools were a desktop PC, an iPad tablet and a Kindle eReader.
Video-oculographic technology was used for recording eye movements.
For reading from the computer display, the recordings were made by a video camera placed below the screen. For reading from the iPad tablet, the eReader and the printed book, the recording system was worn by the subject and had two cameras: one recording the movement of the eyes, the other recording the scene in front of the subject.
Data analysis provided quantitative information in terms of the number of fixations, their duration, and the direction of movement, the latter used to distinguish fixations from regressions. Mean fixation duration differed only for reading from the computer display and was similar for the tablet, eReader and printed book. The percentage of regressions with respect to the total number of fixations was comparable across the eReading tools and the printed book.
The analysis of eye movements during reading an eBook from different eReading tools suggests that subjects’ reading behaviour is similar to reading from a printed book.
The study was published in Ophthalmic & Physiological Optics, the journal of the College of Optometrists, 2012.
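The regression measure reported above can be tallied directly from a fixation sequence. A minimal sketch (hypothetical data and function name, not the study's actual pipeline), assuming a left-to-right script so that a leftward movement between consecutive fixations counts as a regression:

```python
def classify_fixations(fixations):
    """Split fixations into forward fixations and regressions based on the
    horizontal direction of the preceding movement (left-to-right script).
    `fixations` is a list of (x_position, duration_ms) tuples in reading order.
    The first fixation has no preceding movement and is not classified.
    """
    forward, regressions = [], []
    for prev, curr in zip(fixations, fixations[1:]):
        # A jump to a smaller x-position means the eyes moved back in the text.
        (regressions if curr[0] < prev[0] else forward).append(curr)
    return forward, regressions

# Toy example: four fixations, one of which jumps back to re-read.
fix = [(10, 220), (60, 200), (40, 180), (90, 210)]
fwd, reg = classify_fixations(fix)
pct_regressions = 100 * len(reg) / (len(fwd) + len(reg))
```

Mean fixation duration per condition would then simply be the average of the duration values within each classified set.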
Apply Web-based Analytic Tool and Eye Tracking to Study the Consumer Preference of DSLR Cameras
Authors: Jih-Syongh Lin, Shih-Yen Huang
Consumers’ preferences and purchase motivations often lie in the purchasing behaviors generated by the synthetic evaluation of a product’s form features, color, function, and price. If an enterprise can bring these criteria under control, it can grasp opportunities in the marketplace. In this study, the product form, brand, and prices of five DSLR digital cameras from Nikon, Lumix, Pentax, Sony, and Olympus were investigated through image evaluation and eye tracking. A web-based two-dimensional analytical tool was used to present information on three layers. Layer A provided information on product form and brand name; Layer B presented product form, brand name, and product price for the evaluation of purchase intention (X axis) and product form attraction (Y axis). On Layer C, Nikon J1 image samples in five color series were presented for the evaluation of attraction and purchase intention. The results revealed that, among the five Japanese camera brands, the LUMIX GF3 is most preferred and serves as the major competitive product, with a product price of US$630. The eye-tracking visual-focus data show that the lens, the curved handle grip, the curved part and shutter button above the lens, as well as the flexible flash of the LUMIX GF3 are the parts that attract consumers’ eyes. From the verbal descriptions, it is found that consumers emphasize the functions of 3D lens support, continuous focusing while shooting video, iA intelligent scene mode, and full manual control support. In the color preference for the Nikon J1, red and white are most preferred while pink is least favored. These findings can serve as references for designers and marketing personnel in new product design and development. The study was published in the International Journal of Business Research and Management, Vol. 4, Issue 4, 2013.
Driving behavior pattern analysis for elderly people
Authors: Guan-Lun Chen, Jia-Yuarn Guo, Chia-Tso Huang
The study aims at evaluating factors associated with driving patterns and self-reported driving difficulty, with particular attention to vision and cognitive impairment. It uses cross-sectional data from 10 elderly participants (aged 65 or older) and 10 young participants in a driving simulation program, with comparisons made while participants wore a mobile eye tracker. Neurocognitive tests, driving simulation, and road tests provide complementary sources of evidence to evaluate driver safety; no single test is sufficient to determine who should drive and who should not. Finally, we compare the concentration ability and reaction ability of the elderly and young participants.
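A group comparison of reaction ability like the one described above typically reduces to comparing sample means. A minimal sketch using Welch's t statistic on hypothetical reaction-time data (the values and function name are illustrative, not the study's data):

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples with possibly
    unequal variances, e.g. simple reaction times in milliseconds."""
    var_a = stdev(a) ** 2 / len(a)
    var_b = stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / (var_a + var_b) ** 0.5

# Hypothetical reaction times (ms) for the two groups.
young = [420, 450, 430, 410, 440]
elderly = [520, 560, 540, 500, 530]
t = welch_t(young, elderly)  # negative: the young group reacts faster
```

A full analysis would also compute degrees of freedom and a p-value (e.g. with `scipy.stats.ttest_ind(..., equal_var=False)`), but the statistic above captures the core comparison.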
Case study: Portable lab - know-how in a briefcase / MyBOOM relies on Mangold technology
Authors: Thorsten Voß
Why Westphalian Internet service provider MyBOOM relies on Mangold technology.
When the Fingers do the Talking: A Study of Group Participation with Varying Constraints to a Tabletop Interface
Authors: Paul Marshall, Eva Hornecker, Richard Morris, Nick Sheep Dalton and Yvonne Rogers
A user study is presented that investigates how different configurations of input can influence equity of participation around a tabletop interface. Groups of three worked on a design task requiring negotiation in four interface conditions that varied the number (all members can act or only one) and type (touch versus mice) of input. Our findings show that a multi-touch surface increases physical interaction equity and perceptions of dominance, but does not affect levels of verbal participation. Dominant people still continue to talk the most, while quiet ones remain quiet. Qualitative analyses further revealed how other factors can affect how participants contribute to the task. The findings are discussed in terms of how the design of the physical-technological set-up can affect the desired form of collaboration.
Behavioural Analysis of the Tower Controller Activity
Authors: Ella Pinska and Marc Bourgois
In this paper we report on an initial study concerning the importance of direct observation for control tower activity. The results confirm that looking out of the window is the most frequent and longest activity of the tower controller, occupying him for roughly 30-40% of the time. Two other significant activities were scanning the radar image and the flight strips. Attention shifts frequently between these three information sources, but not in a defined order.
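Time-share figures like the 30-40% reported above come from tallying labeled observation intervals. A minimal sketch of such a tally (hypothetical log format and function name, not the study's actual coding scheme):

```python
from collections import defaultdict

def attention_shares(events):
    """Compute each activity's share of total observed time, in percent.
    `events` is a list of (activity, start_s, end_s) observation intervals."""
    totals = defaultdict(float)
    for activity, start, end in events:
        totals[activity] += end - start
    grand_total = sum(totals.values())
    return {activity: 100 * t / grand_total for activity, t in totals.items()}

# Hypothetical 100-second observation log with three coded activities.
log = [("out-of-window", 0, 40), ("radar", 40, 70), ("strips", 70, 100)]
shares = attention_shares(log)
```

Counting the transitions between consecutive activities in the same log would similarly quantify how often attention shifts between the information sources.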
Virtual cognitive model for Miyazawa Kenji based on speech and facial images recognition
Authors: Hamido Fujita, Jun Hakura and Masaki Kurematsu
In this paper we present a virtual interactive model based on a cognitive model of Miyazawa Kenji. We created a computer model based on the cognitive thinking behind Kenji's storytelling literature, and the user can interact with the virtual Kenji in real time. Facial gestures were collected and analyzed through a motion capture system consisting of six cameras, set up to capture all emotional facial gestures of people who read and performed recorded Kenji manuscripts assigned for the experiment. Each person had 50 markers of 5 mm size attached to all parts of the face (lips, mouth, eyebrows, moustache, eyelashes, forehead). The emotional linkage between these facial parts and cognitive emotion was analyzed and recorded. We have proposed a database, called the facial recognition database, based on the FACS model. Correspondingly, a speech synthesis component analyzes the emotional part of human speech. These two synthesized parts are reconstructed on a hologram that cognitively represents the character of the virtual Kenji model, whose facial gestures harmonize with the speech and facial images generated by the system. The system also interacts with the human user in real time, based on observed responses collected from the user and inference by the system.
Labeling of Gestures in SmartKom − Concept of the Coding System
Authors: Silke Steininger
The SmartKom project is concerned with the development of an intelligent computer-user interface that allows a user to communicate almost naturally with an adaptive and self-explanatory dialogue system. Among other things, the system will be able to analyze the gestural input of the user. To train a gesture analyzer, data is required, preferably realistic data. One of the tasks of our institute in the project is the collection and annotation of such data. Since the machine does not yet exist, the data collection is done with the help of so-called Wizard-of-Oz experiments: the system is simulated by humans (the "wizards") and the subjects are made to believe that they interact with an existing machine. We record the subjects (video and audio) as they solve short tasks. The recordings are labeled off-line with respect to the gestures that the subjects used.
Evaluating Software Support for Video Data Capture and Analysis in Collaborative Design Studies
Authors: Linda Candy, Zafer Bilda, Mary Lou Maher and John S. Gero
In order to understand the implications of introducing new digital tools into design practice, research into how designers work collaboratively using both traditional and digital media is being undertaken. For that purpose it is necessary to gather large quantities of empirical data, and this poses problems as to how to manage and analyse that data effectively. This paper describes the evaluation of a software system for capturing and analysing video data in the context of collaborative design studies. These studies will generate large amounts of data, and support for its management and analysis is vital to the successful completion of the work. In order to find a match to our specific requirements, we conducted a survey from which the software application INTERACT was identified. A study of its use and suitability was carried out in conditions as near as possible to the intended research. We found that INTERACT met our requirements and provided significant efficiency gains for the analysis of the data.
Comparing Collaborative Design Behavior in Remote Sketching and 3D Virtual Worlds
Authors: Mary Lou Maher, Zafer Bilda and David Marchant
The aim of this study is to compare two architects’ collaborative design behaviour while using a shared whiteboard application in one design session and a 3D virtual world in a second design session. Our preliminary analysis shows that designers spend more time discussing design ideas while sketching and more time creating the design model and inspecting spatial relationships while in a 3D virtual world.
Example using the MangoldVision Eye Tracker in Augmented Reality Based E-Commerce Platform
Authors: Min-Chai Hsieh and Hao-Chiang Lin
This Taiwanese presentation shows an example of using the MangoldVision Eye Tracker in studies on an augmented reality based e-commerce platform.
Studies on Visual Illusion Figures using the MangoldVision Eye Tracker
Authors: Mei-Chi Chen and Hao-Chiang Lin
This Taiwanese presentation shows an application of the MangoldVision Eye Tracker in psychological studies on Visual Illusion Figures.