In this new era of computing, where the iPhone, iPad, Xbox Kinect, and similar devices have changed the way we interact with computers, many questions have arisen about how modern input devices can be used for more intuitive user interaction. Interaction Design for 3D User Interfaces: The World of Modern Input Devices for Research, Applications, and Game Development addresses this paradigm shift by looking at user interfaces from an input perspective.
Book Topics
Theory of input devices and user interfaces, with an emphasis on multi-touch interaction
Advanced topics on reducing noise in input devices using Kalman filters (see the sketch after this list)
A collection of hands-on approaches that allows the reader to gain experience with several input devices
A case study examining speech as input
Exercises
Most of the chapters contain exercises that provide practical experience and enhance knowledge of the material in the related chapter. With its hands-on approach and the affordability of the required hardware, this book is an excellent, flexible resource for both the novice and the expert in 3D user input device development. Researchers and practitioners will gain a much deeper understanding of user input devices and user interfaces. Game developers and software designers will find new techniques to improve their products by adding intuitive user interaction mechanisms to their games and applications. In addition to the resources provided in the book, its companion website (http://3DInputBook.com) offers additional exercises and project ideas, additional chapters, source code, and class instructors’ resources, among others. These resources will be updated as new technologies become available and new research emerges, to help you stay up to date.
Features
Provides in-depth discussions of 3D user interface programming
Provides multiple chapters about multi-touch interfaces
Covers advanced topics on reducing noise in sensors, in particular motion sensors (e.g., gyroscopes)
Uses hardware that is affordable and widely available
Contains exercises that provide practical experience and enhance knowledge of the material
Includes code that is downloadable from the book’s website
Provides side notes in the chapters offering tips, commentary, or additional information about related topics
Offers additional reading suggestions in each chapter and a complete bibliography with over 900 references
Includes additional online chapters and resources on its companion website (http://3DInputBook.com)
Biographies
A postdoctoral research fellow at Florida International University, Miami, where he received his PhD in computer science. He is the current director of the Open Human-Interface Device Laboratory at Florida International University (http://openhid.com). He was a member of the Digital Signal Processing Laboratory at FIU, and has over 17 years of experience in software development and systems integration. His interests are in 3D user interfaces, input interfaces, human–computer interaction, 3D navigation, and input modeling. He has multiple publications in journals, lecture notes, and conference proceedings.

Received her PhD in electrical engineering from Florida International University, Miami, where she was also a research assistant in the Digital Signal Processing Laboratory, focusing on sensor fusion for human motion tracking. She is currently a fraud risk data scientist, focusing on financial data analysis. Her research interests are data mining, data analysis, statistical modeling, sensor fusion, and wearable devices. She is a former Open Science Data Cloud PIRE National Science Foundation Fellow.

A faculty member of the Electrical and Computer Engineering Department at Florida International University, Miami, as well as the director of FIU’s Digital Signal Processing Laboratory. He earned his PhD in electrical engineering from the University of Florida, Gainesville. His work has focused on applying DSP techniques to the facilitation of human–computer interactions, particularly for the benefit of individuals with disabilities. He has developed human–computer interfaces based on the processing of signals and has developed a system that adds spatialized sounds to the icons in a computer interface to facilitate access by individuals with “low vision.” He is a senior member of the Institute of Electrical and Electronics Engineers and the Association for Computing Machinery.

Eminent Chair Professor of Computer Science at Florida International University, Miami. He has authored three books on database design and geography and has edited five books on database management and high-performance computing. He holds four US patents on database querying, semantic database performance, Internet data extraction, and computer medicine. He has also authored 300 papers in journals and proceedings on databases, software engineering, Geographic Information Systems, the Internet, and life sciences. His TerraFly project, a 50-terabyte database of aerial imagery and Web-based GIS, has been extensively covered by the worldwide press.

A professor with the Department of Electrical and Computer Engineering at Florida International University, Miami. He received his PhD from the Electrical Engineering Department at the University of Florida, Gainesville. He is the founding director of the Center for Advanced Technology and Education, funded by the National Science Foundation. His earlier work on computer vision to help persons with blindness led to his testimony before the US Senate Committee on Veterans’ Affairs on the subject of technology to help persons with disabilities. His research interests are in image and signal processing with applications in neuroscience and assistive technology research.
Contents
I Theory
1. Introduction
1.1 The Vision
1.2 Human–Computer Interaction
1.2.1 Usability
1.2.2 The Sketchpad
1.2.3 The Mouse
1.2.4 The Light Pen and the Computer Mouse
1.2.5 Graphical User Interfaces and WIMP
1.2.6 3D User Interfaces
1.3 Definitions
Further Reading
2. Input: Interfaces and Devices
2.1 Introduction
2.2 Input Technologies
2.2.1 Transfer Function
2.2.2 Direct-Input Devices
2.2.3 Input Device States
2.2.4 Input Considerations
2.3 User Interfaces: Input
2.3.1 3D User Interfaces: Input Devices
2.4 Input Devices
2.4.1 Keyboard
2.4.2 The Mouse and Its Descendants
2.4.3 Joystick and the GamePad
2.4.4 3D Mouse and 3D User-Worn Mice
2.4.5 Audio
2.4.6 Inertial Sensing
2.4.7 Vision-Based Devices
2.4.8 Data Gloves
2.4.9 Psychophysiological Sensing
2.4.10 Tracking Devices
2.4.11 Treadmills as Input Devices
2.5 Input Recognition
2.6 Virtual Devices
2.7 Input Taxonomies
Further Reading
3. Output Interfaces and Displays
3.1 3D Output: Interfaces
3.1.1 Human Visual System
3.1.2 Visual Display Characteristics
3.1.3 Understanding Depth
3.2 Displays
3.2.1 Near-Eye Displays
3.2.2 Three-Dimensional Displays
Further Reading
4. Computer Graphics
4.1 Computer Graphics
4.1.1 Camera Space
4.1.2 3D Translation and Rotations
4.1.3 Geometric Modeling
4.1.4 Scene Managers
4.1.5 Collision Detection
Further Reading
5. 3D Interaction
5.1 Introduction
5.2 3D Manipulation
5.2.1 Classification of Manipulation Techniques
5.2.2 Muscle Groups: Precision Grasp
5.2.3 Isomorphic Manipulations
5.2.4 Pointing Techniques
5.2.5 Direct and Hybrid Manipulations
5.2.6 Non-Isomorphic Rotations
Further Reading
6. 3D Navigation
6.1 3D Travel
6.1.1 3D Travel Tasks
6.1.2 Travel Techniques
6.2 Wayfinding
6.2.1 Training versus Transfer
6.2.2 Spatial Knowledge
6.2.3 Navigation Model
6.2.4 Wayfinding Strategies
6.3 3D Navigation: User Studies
6.3.1 Search during Navigation
6.3.2 Additional User Studies for Navigation
Further Reading
7. Descriptive and Predictive Models
7.1 Introduction
7.2 Predictive Models
7.2.1 Fitts’ Law
7.2.2 Choice Reaction Time: Hick–Hyman Law
7.2.3 Keystroke-Level Model (KLM)
7.2.4 Other Models
7.3 Descriptive Models
7.3.1 Bi-Manual Interaction
7.3.2 Three-State Model for Graphical Input
Further Reading
8. Multi-Touch
8.1 Introduction
8.2 Hardware
8.2.1 Projected Capacitive Technology
8.2.2 Optical Touch Surfaces
8.2.3 Vision-Based Optical
8.3 Multi-Touch and Its Applications
8.3.1 Basics of Multi-Touch
8.3.2 Multi-Touch Gestures and Design
8.3.3 Touch Properties
8.3.4 Multi-Touch Taxonomy
8.3.5 Are Multi-Touch Gestures Natural?
8.3.6 Touch: Multi-Modality
8.3.7 More about Touch
8.3.8 Multi-Touch Techniques
8.4 Figures of Large Tabletop Displays
Further Reading
9. Multi-Touch for Stereoscopic Displays
Dimitar Valkov
9.1 Understanding 3D Touch
9.1.1 Problems with Stereoscopic Touch Interfaces
9.1.2 Parallax Problem
9.1.3 Design Paradigms for Stereoscopic Touch Interaction
9.2 Touching Parallaxes
9.3 Multi-Touch Above the Tabletop
9.3.1 Triangle Cursor
9.3.2 Balloon Selection
9.3.3 Triangle Cursor vs. Balloon Selection
9.3.4 Design Considerations
9.4 Interaction with Virtual Shadows
9.5 Perceptual Illusions for 3D Touch Interaction
Further Reading
10. Pen and Multi-Touch Modeling and Recognition
10.1 Introduction
10.2 The Dollar Family
10.2.1 $1 Recognizer
10.2.2 $1 Recognizer with Protractor
10.2.3 $N Recognizer
10.2.4 $ Family: $P and Beyond
10.3 Proton++ and More
10.4 FETOUCH
10.4.1 FETOUCH+
10.4.2 Implementation: FETOUCH+
Further Reading
11. Using Multi-Touch with Petri Nets
11.1 Background
11.1.1 Graphical Representation
11.1.2 Formal Definition
11.2 PeNTa: Petri Nets
11.2.1 Motivation and Differences
11.2.2 HLPN: High-Level Petri Nets and IRML
11.2.3 PeNTa and Multi-Touch
11.2.4 Arc Expressions
11.2.5 A Tour of PeNTa
11.2.6 Simulation and Execution
Further Reading
12. Eye Gaze Tracking as Input in Human–Computer Interaction
12.1 Principle of Operation
12.2 Post-Processing of POG Data: Fixation Identification
12.3 Emerging Uses of EGT in HCI: Affective Sensing
Further Reading
13. Brain–Computer Interfaces: Considerations for the Next Frontier in Interactive Graphics and Games
Frances Lucretia Van Scoy
13.1 Introduction
13.2 Neuroscience Research
13.2.1 Invasive Research
13.2.2 EEG Research
13.2.3 fMRI Research
13.3 Implications of EEG and fMRI-Based Research for the Brain–Computer Interface
13.3.1 Implications of Constructing Text or Images from Brain Scan Data
13.3.2 Implications of Personality Models for Digital Games
13.4 Neuroheadsets
13.4.1 Some Available Devices
13.4.2 An Example: Controlling Google Glass with MindRDR
13.5 A Simple Approach to Recognizing Specific Brain Activities Using Low-End Neuroheadsets and Simple Clustering Techniques
13.6 Using EEG Data to Recognize Active Brain Regions
13.7 Conclusion
Further Reading
II Advanced Topics
14. Math for 3D Input
Steven P. Landers and David Rieksts
14.1 Introduction
14.2 Axis Conventions
14.3 Vectors
14.3.1 Equality
14.3.2 Addition
14.3.3 Scalar Multiplication
14.3.4 Negation and Subtraction
14.3.5 Basis Vectors
14.3.6 Magnitude
14.3.7 Unit Vector and Normalization
14.3.8 Dot Product
14.3.9 Cross Product in R³
14.4 Matrices
14.4.1 Transposition
14.4.2 Trace
14.4.3 Addition
14.4.4 Scalar Multiplication
14.4.5 Matrix Multiplication
14.4.6 Identity Matrix
14.4.7 Determinant
14.4.8 Transformation Matrices
14.4.9 Reflection Matrices
14.4.10 Eigenvalues, Eigenvectors
14.5 Axis Angle Rotations
14.6 Two Vector Orientation
14.7 Calibration of Three Axis Sensors
14.7.1 Bias
14.7.2 Scale
14.7.3 Cross-Axis Effect and Rotation
14.8 Smoothing
14.8.1 Low-Pass Filter
14.8.2 Oversampling
Further Reading
15. Introduction to Digital Signal Processing
15.1 Introduction
15.2 What Is a Signal?
15.3 Classification of Signals
15.4 Applications of Digital Signal Processing
15.5 Noise
15.6 Signal Energy and Power
15.7 Mathematical Representation of Elementary Signals
15.7.1 The Impulse Function
15.7.2 The Unit Step Function
15.7.3 The Cosine Function
15.7.4 Exponential Function
15.7.5 Ramp Function
15.7.6 Gaussian Function
15.8 Sampling Theorem
15.9 Nyquist–Shannon Theorem
15.10 Aliasing
15.11 Quantization
15.12 Fourier Analysis
15.12.1 Discrete Fourier Transform
15.12.2 Inverse Discrete Fourier Transform
15.13 Fast Fourier Transform
15.14 z-Transform
15.14.1 Definitions
15.14.2 z-Plane
15.14.3 Region of Convergence
15.15 Convolution
Further Reading
16. Three Dimensional Rotations
16.1 Introduction
16.2 Three Dimensional Rotation
16.3 Coordinate Systems
16.3.1 Inertial Frame
16.3.2 Body-Fixed Frame
16.4 Euler Angles
16.4.1 Rotation Matrices
16.4.2 Gimbal Lock
16.5 Quaternions
16.5.1 What Are Quaternions?
16.5.2 Quaternion Rotation
Further Reading
17. MEMS Inertial Sensors and Magnetic Sensors
17.1 Introduction
17.2 Inertial Sensors
17.2.1 Accelerometers
17.2.2 Gyroscopes
17.3 MEMS Inertial Sensor Errors
17.3.1 Angle Random Walk
17.3.2 Rate Random Walk
17.3.3 Flicker Noise
17.3.4 Quantization Noise
17.3.5 Sinusoidal Noise
17.3.6 Bias Error
17.3.7 Scale Factor Error
17.3.8 Scale Factor Sign Asymmetry Error
17.3.9 Misalignment (Cross-Coupling) Error
17.3.10 Non-Linearity Error
17.3.11 Dead Zone Error
17.3.12 Temperature Effect
17.4 Magnetometers
17.5 MEMS Magnetometer Errors
Further Reading
18. Kalman Filters
18.1 Introduction
18.2 Least Squares Estimator
18.3 Kalman Filter
18.4 Discrete Kalman Filter
18.5 Extended Kalman Filter
Further Reading
19. Quaternions and Sensor Fusion
19.1 Introduction
19.2 Quaternion-Based Kalman Filter
19.2.1 Prediction Step
19.2.2 Correction Step
19.2.3 Observation Vector Using Gradient Descent Optimization
19.2.4 Observation Vector Determination Using Gauss–Newton
19.3 Quaternion-Based Extended Kalman Filter
19.3.1 Measurement Process
19.4 Conversion between Euler and Quaternion
Further Reading
III Hands-On
20. Hands-On: Inertial Sensors for 3D Input
Paul W. Yost
20.1 Introduction
20.2 Motion Sensing and Motion Capture
20.2.1 Motion Sensing
20.2.2 Motion Capture
20.3 Types of Motion Sensing Technology
20.3.1 Marker-Based Optical Systems
20.3.2 Marker-Less Optical Systems
20.3.3 Mechanical Systems
20.3.4 Magnetic Systems
20.3.5 Inertial Systems
20.4 Inertial Sensor Configurations for Input
20.4.1 Single Sensor Configurations
20.4.2 Multiple Sensor Configurations
20.4.3 Full-Body Sensor Configurations
20.5 Hands-On: YEI 3-Space Sensors
20.5.1 Overview
20.5.2 Using a Single YEI 3-Space Sensor
20.5.3 Installing a Sensor
20.5.4 Communicating with a Sensor Using Command and Response
20.5.5 Communicating with a Sensor Using Streaming Mode
20.5.6 Using the 3-Space Sensor API
20.5.7 Hands-On: Single 3-Space Sensor Applications