What is it about?

In this new era of computing, where the iPhone, iPad, Xbox Kinect, and similar devices have changed the way people interact with computers, many questions have arisen about how modern input devices can be used for more intuitive user interaction. Interaction Design for 3D User Interfaces: The World of Modern Input Devices for Research, Applications, and Game Development addresses this paradigm shift by looking at user interfaces from an input perspective.

Book Topics

Exercises

Most of the chapters contain exercises that provide practical experience with the material covered in that chapter. With its hands-on approach and the affordability of the required hardware, this book is an excellent and flexible resource for both the novice and the expert in 3D user input device development. Researchers and practitioners will gain a much deeper understanding of user input devices and user interfaces. Game developers and software designers will find new techniques to improve their products by adding intuitive user interaction mechanisms to their games and applications. In addition to the resources provided in the book, its companion web site (http://3DInputBook.com) offers additional exercises and project ideas, additional chapters, source code, and class instructors’ resources, among others. These resources will be updated as new technologies become available and new research emerges, to help you stay up to date.

Features

Biographies

A postdoctoral research fellow at Florida International University, Miami, where he received his PhD in computer science. He is the current director of the Open Human-Interface Device Laboratory at Florida International University (http://openhid.com). He was a member of the Digital Signal Processing Laboratory at FIU, and has over 17 years of experience in software development and systems integration. His interests are in 3D user interfaces, input interfaces, human–computer interaction, 3D navigation, and input modeling. He has multiple publications in journals, lecture notes, and conference proceedings.
Received her PhD in electrical engineering from Florida International University, Miami, where she was also a research assistant in the Digital Signal Processing Laboratory, focusing on sensor fusion for human motion tracking. She is currently a fraud risk data scientist, focusing on financial data analysis. Her research interests are data mining, data analysis, statistical modeling, sensor fusion, and wearable devices. She is a former Open Science Data Cloud PIRE National Science Foundation Fellow.
A faculty member of the Electrical and Computer Engineering Department at Florida International University, Miami, as well as the director of FIU’s Digital Signal Processing Laboratory. He earned his PhD in electrical engineering from the University of Florida, Gainesville. His work has focused on applying DSP techniques to the facilitation of human–computer interactions, particularly for the benefit of individuals with disabilities. He has developed human–computer interfaces based on the processing of signals and has developed a system that adds spatialized sounds to the icons in a computer interface to facilitate access by individuals with “low vision.” He is a senior member of the Institute of Electrical and Electronics Engineers and the Association for Computing Machinery.
Eminent Chair Professor of Computer Science at Florida International University, Miami. He has authored three books on database design and geography and has edited five books on database management and high performance computing. He holds four US patents on database querying, semantic database performance, Internet data extraction, and computer medicine. He has also authored 300 papers in journals and proceedings on databases, software engineering, Geographic Information Systems, Internet, and life sciences. His TerraFly project—a 50-terabyte database of aerial imagery and Web-based GIS—has been extensively covered by worldwide press.
A professor with the Department of Electrical and Computer Engineering at Florida International University, Miami. He received his PhD from the Electrical Engineering Department at the University of Florida, Gainesville. He is the founding director of the Center for Advanced Technology and Education funded by the National Science Foundation. His earlier work on computer vision to help persons with blindness led to his testimony before the US Senate Committee on Veterans’ Affairs on the subject of technology to help persons with disabilities. His research interests are in image and signal processing with applications in neuroscience and assistive technology research.

Contents

I Theory

  1. 1. Introduction

  2. 1.1 The Vision
  3. 1.2 Human–Computer Interaction
    1. 1.2.1 Usability
    2. 1.2.2 The Sketchpad
    3. 1.2.3 The Mouse
    4. 1.2.4 The Light Pen and the Computer Mouse
    5. 1.2.5 Graphical User Interfaces and WIMP
    6. 1.2.6 3D User Interfaces
  4. 1.3 Definitions
  5. Further Reading
  1. 2. Input: Interfaces and Devices

  2. 2.1 Introduction
  3. 2.2 Input Technologies
    1. 2.2.1 Transfer Function
    2. 2.2.2 Direct-Input Devices
    3. 2.2.3 Input Device States
    4. 2.2.4 Input Considerations
  4. 2.3 User Interfaces: Input
    1. 2.3.1 3D User Interfaces: Input Devices
  5. 2.4 Input Devices
    1. 2.4.1 Keyboard
    2. 2.4.2 The Mouse and Its Descendants
    3. 2.4.3 Joystick and the GamePad
    4. 2.4.4 3D Mouse and 3D User-Worn Mice
    5. 2.4.5 Audio
    6. 2.4.6 Inertial Sensing
    7. 2.4.7 Vision-Based Devices
    8. 2.4.8 Data Gloves
    9. 2.4.9 Psychophysiological Sensing
    10. 2.4.10 Tracking Devices
    11. 2.4.11 Treadmills as Input Devices
  6. 2.5 Input Recognition
  7. 2.6 Virtual Devices
  8. 2.7 Input Taxonomies
  9. Further Reading
  1. 3. Output Interfaces and Displays

  2. 3.1 3D Output: Interfaces
    1. 3.1.1 Human Visual System
    2. 3.1.2 Visual Display Characteristics
    3. 3.1.3 Understanding Depth
  3. 3.2 Displays
    1. 3.2.1 Near-Eye Displays
    2. 3.2.2 Three-Dimensional Displays
  4. Further Reading
  1. 4. Computer Graphics

  2. 4.1 Computer Graphics
    1. 4.1.1 Camera Space
    2. 4.1.2 3D Translation and Rotations
    3. 4.1.3 Geometric Modeling
    4. 4.1.4 Scene Managers
    5. 4.1.5 Collision Detection
  3. Further Reading
  1. 5. 3D Interaction

  2. 5.1 Introduction
  3. 5.2 3D Manipulation
    1. 5.2.1 Classification of Manipulation Techniques
    2. 5.2.2 Muscle Groups: Precision Grasp
    3. 5.2.3 Isomorphic Manipulations
    4. 5.2.4 Pointing Techniques
    5. 5.2.5 Direct and Hybrid Manipulations
    6. 5.2.6 Non-Isomorphic Rotations
  4. Further Reading
  1. 6. 3D Navigation

  2. 6.1 3D Travel
    1. 6.1.1 3D Travel Tasks
    2. 6.1.2 Travel Techniques
  3. 6.2 Wayfinding
    1. 6.2.1 Training versus Transfer
    2. 6.2.2 Spatial Knowledge
    3. 6.2.3 Navigation Model
    4. 6.2.4 Wayfinding Strategies
  4. 6.3 3D Navigation: User Studies
    1. 6.3.1 Search during Navigation
    2. 6.3.2 Additional User Studies for Navigation
  5. Further Reading
  1. 7. Descriptive and Predictive Models

  2. 7.1 Introduction
  3. 7.2 Predictive Models
    1. 7.2.1 Fitts’ Law
    2. 7.2.2 Choice Reaction Time: Hick–Hyman Law
    3. 7.2.3 Keystroke-Level Model (KLM)
    4. 7.2.4 Other Models
  4. 7.3 Descriptive Models
    1. 7.3.1 Bi-Manual Interaction
    2. 7.3.2 Three-State Model for Graphical Input
  5. Further Reading
  1. 8. Multi-Touch

  2. 8.1 Introduction
  3. 8.2 Hardware
    1. 8.2.1 Projective Capacitive Technology
    2. 8.2.2 Optical Touch Surfaces
    3. 8.2.3 Vision-Based Optical
  4. 8.3 Multi-Touch and Its Applications
    1. 8.3.1 Basics of Multi-Touch
    2. 8.3.2 Multi-Touch Gestures and Design
    3. 8.3.3 Touch Properties
    4. 8.3.4 Multi-Touch Taxonomy
    5. 8.3.5 Are Multi-Touch Gestures Natural?
    6. 8.3.6 Touch: Multi-Modality
    7. 8.3.7 More about Touch
    8. 8.3.8 Multi-Touch Techniques
  5. 8.4 Figures of Large Tabletop Displays
  6. Further Reading
  1. 9. Multi-Touch for Stereoscopic Displays - Dimitar Valkov

  2. 9.1 Understanding 3D Touch
    1. 9.1.1 Problems with Stereoscopic Touch Interfaces
    2. 9.1.2 Parallax Problem
    3. 9.1.3 Design Paradigms for Stereoscopic Touch Interaction
  3. 9.2 Touching Parallaxes
  4. 9.3 Multi-Touch Above the Tabletop
    1. 9.3.1 Triangle Cursor
    2. 9.3.2 Balloon Selection
    3. 9.3.3 Triangle Cursor vs. Balloon Selection
    4. 9.3.4 Design Considerations
  5. 9.4 Interaction with Virtual Shadows
  6. 9.5 Perceptual Illusions for 3D Touch Interaction
  7. Further Reading
  1. 10. Pen and Multi-Touch Modeling and Recognition

  2. 10.1 Introduction
  3. 10.2 The Dollar Family
    1. 10.2.1 $1 Recognizer
    2. 10.2.2 $1 Recognizer with Protractor
    3. 10.2.3 $N Recognizer
    4. 10.2.4 $ Family: $P and Beyond
  4. 10.3 Proton++ and More
  5. 10.4 FETOUCH
    1. 10.4.1 FETOUCH+
    2. 10.4.2 Implementation: FETOUCH+
  6. Further Reading
  1. 11. Using Multi-Touch with PetriNets

  2. 11.1 Background
    1. 11.1.1 Graphical Representation
    2. 11.1.2 Formal Definition
  3. 11.2 PeNTa: Petri Nets
    1. 11.2.1 Motivation and Differences
    2. 11.2.2 HLPN: High-Level Petri Nets and IRML
    3. 11.2.3 PeNTa and Multi-Touch
    4. 11.2.4 Arc Expressions
    5. 11.2.5 A Tour of PeNTa
    6. 11.2.6 Simulation and Execution
  4. Further Reading
  1. 12. Eye Gaze Tracking as Input in Human–Computer Interaction

  2. 12.1 Principle of Operation
  3. 12.2 Post-Processing of POG Data: Fixation Identification
  4. 12.3 Emerging Uses of EGT in HCI: Affective Sensing
  5. Further Reading
  1. 13. Brain–Computer Interfaces: Considerations for the Next Frontier in Interactive Graphics and Games

    Frances Lucretia Van Scoy
  2. 13.1 Introduction
  3. 13.2 Neuroscience Research
    1. 13.2.1 Invasive Research
    2. 13.2.2 EEG Research
    3. 13.2.3 fMRI Research
  4. 13.3 Implications of EEG and fMRI-Based Research for the Brain–Computer Interface
    1. 13.3.1 Implications of Constructing Text or Images from Brain Scan Data
    2. 13.3.2 Implications of Personality Models for Digital Games
  5. 13.4 Neuroheadsets
    1. 13.4.1 Some Available Devices
    2. 13.4.2 An Example: Controlling Google Glass with MindRDR
  6. 13.5 A Simple Approach to Recognizing Specific Brain Activities Using Low-End Neuroheadsets and Simple Clustering Techniques
  7. 13.6 Using EEG Data to Recognize Active Brain Regions
  8. 13.7 Conclusion
  9. Further Reading

II Advanced Topics

  1. 14. Math for 3D Input

    Steven P. Landers and David Rieksts
  2. 14.1 Introduction
  3. 14.2 Axis Conventions
  4. 14.3 Vectors
    1. 14.3.1 Equality
    2. 14.3.2 Addition
    3. 14.3.3 Scalar Multiplication
    4. 14.3.4 Negation and Subtraction
    5. 14.3.5 Basis Vectors
    6. 14.3.6 Magnitude
    7. 14.3.7 Unit Vector and Normalization
    8. 14.3.8 Dot Product
    9. 14.3.9 Cross Product in R3
  5. 14.4 Matrices
    1. 14.4.1 Transposition
    2. 14.4.2 Trace
    3. 14.4.3 Addition
    4. 14.4.4 Scalar Multiplication
    5. 14.4.5 Matrix Multiplication
    6. 14.4.6 Identity Matrix
    7. 14.4.7 Determinant
    8. 14.4.8 Transformation Matrices
    9. 14.4.9 Reflection Matrices
    10. 14.4.10 Eigenvalues, Eigenvectors
  6. 14.5 Axis Angle Rotations
  7. 14.6 Two Vector Orientation
  8. 14.7 Calibration of Three Axis Sensors
    1. 14.7.1 Bias
    2. 14.7.2 Scale
    3. 14.7.3 Cross-Axis Effect and Rotation
  9. 14.8 Smoothing
    1. 14.8.1 Low-Pass Filter
    2. 14.8.2 Oversampling
  10. Further Reading
  1. 15. Introduction to Digital Signal Processing

  2. 15.1 Introduction
  3. 15.2 What Is a Signal?
  4. 15.3 Classification of Signals
  5. 15.4 Applications of Digital Signal Processing
  6. 15.5 Noise
  7. 15.6 Signal Energy and Power
  8. 15.7 Mathematical Representation of Elementary Signals
    1. 15.7.1 The Impulse Function
    2. 15.7.2 The Unit Step Function
    3. 15.7.3 The Cosine Function
    4. 15.7.4 Exponential Function
    5. 15.7.5 Ramp Function
    6. 15.7.6 Gaussian Function
  9. 15.8 Sampling Theorem
  10. 15.9 Nyquist–Shannon Theorem
  11. 15.10 Aliasing
  12. 15.11 Quantization
  13. 15.12 Fourier Analysis
    1. 15.12.1 Discrete Fourier Transform
    2. 15.12.2 Inverse Discrete Fourier Transform
  14. 15.13 Fast Fourier Transform
  15. 15.14 z-Transform
    1. 15.14.1 Definitions
    2. 15.14.2 z-Plane
    3. 15.14.3 Region of Convergence
  16. 15.15 Convolution
  17. Further Reading
  1. 16. Three Dimensional Rotations

  2. 16.1 Introduction
  3. 16.2 Three Dimensional Rotation
  4. 16.3 Coordinate Systems
    1. 16.3.1 Inertial Frame
    2. 16.3.2 Body-Fixed Frame
  5. 16.4 Euler Angles
    1. 16.4.1 Rotation Matrices
    2. 16.4.2 Gimbal Lock
  6. 16.5 Quaternions
    1. 16.5.1 What Are Quaternions?
    2. 16.5.2 Quaternion Rotation
  7. Further Reading
  1. 17. MEMS Inertial Sensors and Magnetic Sensors

  2. 17.1 Introduction
  3. 17.2 Inertial Sensors
    1. 17.2.1 Accelerometers
    2. 17.2.2 Gyroscopes
  4. 17.3 MEMS Inertial Sensor Errors
    1. 17.3.1 Angle Random Walk
    2. 17.3.2 Rate Random Walk
    3. 17.3.3 Flicker Noise
    4. 17.3.4 Quantization Noise
    5. 17.3.5 Sinusoidal Noise
    6. 17.3.6 Bias Error
    7. 17.3.7 Scale Factor Error
    8. 17.3.8 Scale Factor Sign Asymmetry Error
    9. 17.3.9 Misalignment (Cross-Coupling) Error
    10. 17.3.10 Non-Linearity Error
    11. 17.3.11 Dead Zone Error
    12. 17.3.12 Temperature Effect
  5. 17.4 Magnetometers
  6. 17.5 MEMS Magnetometer Errors
  7. Further Reading
  1. 18. Kalman Filters

  2. 18.1 Introduction
  3. 18.2 Least Squares Estimator
  4. 18.3 Kalman Filter
  5. 18.4 Discrete Kalman Filter
  6. 18.5 Extended Kalman Filter
  7. Further Reading
  1. 19. Quaternions and Sensor Fusion

  2. 19.1 Introduction
  3. 19.2 Quaternion-Based Kalman Filter
    1. 19.2.1 Prediction Step
    2. 19.2.2 Correction Step
    3. 19.2.3 Observation Vector Using Gradient Descent Optimization
    4. 19.2.4 Observation Vector Determination Using Gauss–Newton
  4. 19.3 Quaternion-Based Extended Kalman Filter
    1. 19.3.1 Measurement Process
  5. 19.4 Conversion between Euler and Quaternion
  6. Further Reading

III Hands-On

  1. 20 Hands-On: Inertial Sensors for 3D Input

    Paul W. Yost
  2. 20.1 Introduction
  3. 20.2 Motion Sensing and Motion Capture
    1. 20.2.1 Motion Sensing
    2. 20.2.2 Motion Capture
  4. 20.3 Types of Motion Sensing Technology
    1. 20.3.1 Marker-Based Optical Systems
    2. 20.3.2 Marker-Less Optical Systems
    3. 20.3.3 Mechanical Systems
    4. 20.3.4 Magnetic Systems
    5. 20.3.5 Inertial Systems
  5. 20.4 Inertial Sensor Configurations for Input
    1. 20.4.1 Single Sensor Configurations
    2. 20.4.2 Multiple Sensor Configurations
    3. 20.4.3 Full-Body Sensor Configurations
  6. 20.5 Hands-On: YEI 3-Space Sensors
    1. 20.5.1 Overview
    2. 20.5.2 Using a Single YEI 3-Space Sensor
    3. 20.5.3 Installing a Sensor
    4. 20.5.4 Communicating with a Sensor Using Command and Response
    5. 20.5.5 Communicating with a Sensor Using Streaming Mode
    6. 20.5.6 Using the 3-Space Sensor API
    7. 20.5.7 Hands-On: Single 3-Space Sensor Applications
    8. 20.5.8 Hands-On: Multiple 3-Space Sensor Applications
  7. 20.6 Hands-On: YEI Prio for Whole-Body Input
    1. 20.6.1 Using the Prio API
    2. 20.6.2 Hands-On: Prio for Full-Body Immersion in Unity
    3. 20.6.3 Hands-On: Prio for Full-Body Motion Capture
  8. Further Reading
  1. 21 Simple Hands-On Project with Unity 3D and Oculus Rift

    Nonnarit O-larnnithipong
  2. 21.1 Installation and System Requirements
  3. 21.2 Getting Started
    1. 21.2.1 Creating a New Project
  4. 21.3 Creating Game Scene
  5. 21.4 Lighting, Camera and Skybox
  6. 21.5 GameObject and Basic Action Script
  7. 21.6 Graphical User Interface (GUI)
  8. 21.7 Oculus Rift Integration for Unity
    1. 21.7.1 Installation and Package Import
    2. 21.7.2 Oculus Rift Prefab
  9. Further Reading
  1. 22 Hands-On Approach with Leap Motion

    Frank E. Hernandez
  2. 22.1 What Is Leap Motion
  3. 22.2 Installation
  4. 22.3 Hands-On Mini-Project
  5. Further Reading
  1. 23 Hands-On Approach with Kinect Sensor v2

    Frank E. Hernandez
  2. 23.1 What Is the Kinect Sensor
  3. 23.2 Installation
  4. 23.3 Hands-On Mini-Project
  5. Further Reading
  1. 24 Creating Home-Brew Devices with Arduino Microcontrollers

    Sudarat Tangnimitchok
  2. 24.1 Microcontroller
  3. 24.2 Analog Sensor
  4. 24.3 Serial Communication
    1. 24.3.1 Universal Synchronous Receiver/Transmitter
  5. 24.4 Hands-On Project: Ultrasonic Proximity Sensor
    1. 24.4.1 Introduction to Arduino
    2. 24.4.2 Ultrasonic Sensor
    3. 24.4.3 Connecting Circuit
    4. 24.4.4 Coding (Sketch)
    5. 24.4.5 Testing the Project
  6. Further Reading
  1. 25 Autonomous Bicycle with Gyroscope Sensor

    Panuwat Janwattanapong and Mercedes Cabrerizo
  2. 25.1 Introduction
  3. 25.2 AU Self-Balancing Bicycle (AUSB)
    1. 25.2.1 Mechanical Structure
    2. 25.2.2 Controller: dsPIC30F4011
    3. 25.2.3 Gyroscope Sensor: MicroStrain 3DM-GX1
  4. 25.3 Data Processing
    1. 25.3.1 Structure of Data Processing
    2. 25.3.2 Analog to Digital Converter
  5. 25.4 System Implementation and Results
    1. 25.4.1 Control System of AU Self-Balancing Bicycle (AUSB)
    2. 25.4.2 Analysis of AU Self-Balancing Bicycle (AUSB) System
    3. 25.4.3 Result
  6. 25.5 Conclusion
  7. Further Reading
  1. 26 Input Implementation Details

  2. 26.1 Input Devices
    1. 26.1.1 Device Listeners and Common Interfaces
    2. 26.1.2 3D Mouse
    3. 26.1.3 Inertial Navigation System
    4. 26.1.4 Microsoft Kinect
    5. 26.1.5 Keyboard and Mouse
    6. 26.1.6 GamePad
  3. 26.2 Multi-Touch Implementation
  4. 26.3 Working with a 3D Graphics Engine: OGRE
  5. 26.4 ECHoSS: Experiment Module
  6. Further Reading

IV Case Study: Speech as Input

  1. 27 Multimodal Human-Like Conversational Interfaces

    Ugan Yasavur and Christine Lisetti
  2. 27.1 Dialogue Management Overview
    1. 27.1.1 Dialog Management Based on Machine Learning
    2. 27.1.2 Dialog Management and Reinforcement Learning
  3. 27.2 Dialogue Management in Health Dialogue Systems
  4. 27.3 Task-Based Spoken Dialog Systems
  5. 27.4 Embodied Conversational Agents
  6. 27.5 Brief Interventions for Alcohol Problems
  7. 27.6 Conclusion
  1. 28 Adaptive Dialogue Systems for Health

    Ugan Yasavur and Christine Lisetti
  2. 28.1 Approach
  3. 28.2 Reinforcement Learning Background
  4. 28.3 Markov Decision Processes
  5. 28.4 Modeling World with Interconnected MDPs
  6. 28.5 Agent and Dialogue Strategy Learning
  7. 28.6 Reward Function Design
  8. 28.7 Speech Recognition and Language Model
  9. 28.8 Dialog Corpus
  10. 28.9 Conclusion
  11. Further Reading

Appendices

  1. Displays

    Jorge Chernicharo
  2. A.1 Fixed Displays
    1. A.1.1 Single Display
    2. A.1.2 Multiple Displays
  3. A.2 Portable Displays
    1. A.2.1 Tablets and Smartphones
    2. A.2.2 Portable Projectors
  4. A.3 Hybrid Systems
    1. A.3.1 Fixed Displays + Smartphones or Tablets
    2. A.3.2 Fixed Displays + Portable Projectors
  1. Creating Your Own Virtual Reality Headset

    Karrel Muller
  2. B.1 Introduction
  3. B.2 Google Cardboard