Touch input for assistive technologies

Registered by Luke Yelavich

With the push to have a converged user interface on mobile, tablet, and desktop, consideration should be given to the extra requirements of those who use assistive technologies such as a screen reader or screen magnifier.

This blueprint focuses on the touch gestures required by assistive technology users. Such gestures would either augment or replace the standard set of gestures that users who do not require assistive technologies use to navigate the GUI. The technical implementation is also outlined, as several core pieces of desktop infrastructure will need to be extended.

Blueprint information

Status:
Not started
Approver:
Jason Warner
Priority:
Undefined
Drafter:
Luke Yelavich
Direction:
Approved
Assignee:
Luke Yelavich
Definition:
Drafting
Series goal:
Proposed for trusty
Implementation:
Not started
Milestone target:
later

Whiteboard

Other touch platforms, i.e. iOS and Android, already have differing levels of assistive technology support, as per the lists below:

iOS:
Screen reader: VoiceOver on the iOS platform replaces the standard iOS gestures with its own, allowing the user to explore what is on the screen and perform different actions on the currently selected item, where "selected" means the last item the user located with their finger and whose name or description was spoken.
Magnifier: The iOS magnifier augments the existing set of gestures with its own, allowing the user to turn magnification on and off, zoom in and out, and navigate the screen with three-finger drags to find the content they are looking for. The user can then use standard iOS gestures to perform the desired action on the content they are working with.

Android:
Screen reader: The Android screen reader, TalkBack, as of Android 4.1 replaces the standard Android gestures with its own, along similar lines to VoiceOver on iOS.
Magnifier: As of Android 4.2, screen magnification is included.

Given what the other touch platforms provide, it would be best to follow a similar approach to iOS. Magnification could be provided by the eZoom plugin in Compiz, and Orca would provide screen reader functionality.
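
As a rough illustration of the magnification piece on the current Compiz-based Unity stack, the sketch below enables the eZoom plugin by editing the active plugin list through GSettings. The org.compiz.core schema, the Unity profile path, and the active-plugins key are assumptions about how Compiz exposes its configuration and may differ between releases; this is only a sketch, not part of the proposed implementation.

{{{
# Hedged sketch: enable the Compiz eZoom plugin via GSettings.
# Assumes the Unity profile stores its configuration under the
# relocatable org.compiz.core schema at the path below; adjust if
# the installed schemas differ.
from gi.repository import Gio

CORE_SCHEMA = "org.compiz.core"
UNITY_CORE_PATH = "/org/compiz/profiles/unity/plugins/core/"

def enable_ezoom():
    settings = Gio.Settings.new_with_path(CORE_SCHEMA, UNITY_CORE_PATH)
    plugins = settings.get_strv("active-plugins")
    if "ezoom" not in plugins:
        plugins.append("ezoom")
        settings.set_strv("active-plugins", plugins)
        # Force the write out before the script exits.
        Gio.Settings.sync()

if __name__ == "__main__":
    enable_ezoom()
}}}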

Implementation:
Proper implementation of this functionality requires several pieces of desktop infrastructure to be extended; a preliminary outline follows:
 * Unity Next needs to accept touch gestures as commands.
 * A screen magnifier needs to be written, either as a part of Mir, or Unity Next, depending on what part of the stack has enough access to the screen buffer to perform magnification.
 * The at-spi registry daemon needs to be extended to snoop touch input events, to allow assistive technologies such as Orca to perform various actions based on gestures (see the sketch after this list).
 * Orca needs to be extended to allow explore by touch, reporting back what is under the user's finger.
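
To make the at-spi and Orca items more concrete, the sketch below shows how a screen reader might handle an "explore by touch" event once the registry can deliver touch input. The gesture listener registration is hypothetical, standing in for whatever API the extended at-spi registry ends up exposing, and speak() is a placeholder for Orca's speech output; getAccessibleAtPoint() is an existing AT-SPI Component method, used here to find the element under the user's finger.

{{{
# Hedged sketch: how an assistive technology such as Orca might consume
# touch events once the at-spi registry can deliver them.
import pyatspi

def speak(text):
    # Placeholder for Orca's speech output.
    print(text)

def accessible_under_finger(x, y):
    """Return the accessible object at desktop coordinates (x, y), if any."""
    desktop = pyatspi.Registry.getDesktop(0)
    for app in desktop:
        for window in app:
            try:
                component = window.queryComponent()
            except NotImplementedError:
                # This child does not implement the Component interface.
                continue
            acc = component.getAccessibleAtPoint(x, y, pyatspi.DESKTOP_COORDS)
            if acc is not None:
                return acc
    return None

def on_touch_explore(x, y):
    """Speak the name and role of whatever is under the user's finger."""
    acc = accessible_under_finger(x, y)
    if acc is not None:
        speak("%s, %s" % (acc.name or "unnamed", acc.getRoleName()))

# HYPOTHETICAL registration call; the real name and signature depend on
# how the at-spi registry daemon is extended to snoop touch input:
# pyatspi.Registry.registerTouchGestureListener(on_touch_explore)
# pyatspi.Registry.start()
}}}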

=== Base set of gestures ===
The gestures below are based on those used on the iOS platform for navigation and interaction with onscreen elements; a sketch of how they might map to screen reader actions follows the list.

 * Navigate and locate onscreen elements - Single finger drag
 * Move to next onscreen element - Single finger flick right
 * Move to previous onscreen element - Single finger flick left
 * Activate currently focused element - Single finger double tap
 * Scroll down - 3 finger swipe up
 * Scroll up - 3 finger swipe down
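
As an illustration only, a screen reader could represent the base gesture set above as a simple dispatch table from recognised gestures to actions. The gesture identifiers and action functions below are hypothetical placeholders, not an existing Orca or at-spi API; the real mapping would live in Orca and depend on the gesture descriptions the extended at-spi delivers.

{{{
# Hedged sketch: dispatch the base gesture set to screen reader actions.
# Gesture identifiers and action names are hypothetical placeholders.

def explore(x, y): pass          # speak the element under the finger
def next_element(): pass         # move to the next onscreen element
def previous_element(): pass     # move to the previous onscreen element
def activate(): pass             # activate the currently focused element
def scroll_down(): pass          # scroll the current view down
def scroll_up(): pass            # scroll the current view up

GESTURE_ACTIONS = {
    "one-finger-drag": explore,
    "one-finger-flick-right": next_element,
    "one-finger-flick-left": previous_element,
    "one-finger-double-tap": activate,
    "three-finger-swipe-up": scroll_down,
    "three-finger-swipe-down": scroll_up,
}

def dispatch(gesture, *args):
    """Run the action bound to a recognised gesture, if any."""
    handler = GESTURE_ACTIONS.get(gesture)
    if handler is not None:
        handler(*args)
}}}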

Work Items

Work items:
[themuso] Initiate discussion with upstream at-spi developers about extending at-spi to snoop for touch input events: TODO
[themuso] Work with Orca upstream to extend Orca to accept touch input, in accordance with discussions: TODO
