Apple Announces Assistive Touch for Apple Watch, Eye-Tracking Specifications on iPad Among Other Accessibility Updates
Apple has announced a range of accessibility features designed for people with mobility, vision, hearing, and cognitive limitations. These features will be available through software updates later this year. One of the most interesting additions comes to the Apple Watch, which will let people with limb differences navigate its interface using AssistiveTouch. iPhone and iPad users will also receive new accessibility-focussed features. Additionally, Apple has announced SignTime, a new sign language interpreter service for communicating with AppleCare and retail customer care.
AssistiveTouch on watchOS will allow Apple Watch users to navigate a cursor on the display through a series of hand gestures, such as a pinch or a clench. Apple says the Apple Watch will use built-in motion sensors like the gyroscope and accelerometer, along with the optical heart rate sensor and on-device machine learning, to detect subtle differences in muscle movement and tendon activity.
New gesture control support through AssistiveTouch will let people with limb differences more easily answer incoming calls, control an onscreen motion pointer, and access Notification Centre and Control Centre — all on an Apple Watch — without requiring them to touch the display or move the Digital Crown. However, the company has not provided any details about which Apple Watch models will be compatible with the new features.
In addition to gesture controls on the Apple Watch, iPadOS will bring support for third-party eye-tracking devices that allow users to control an iPad with their eyes. Apple says compatible MFi (Made for iPad) devices will track where a person is looking on the screen and move the pointer to follow their gaze. This will allow users to perform different actions on the iPad, including a tap, without touching the screen.
Apple is also updating its preloaded screen reader — VoiceOver — with the ability to let people explore more details in images, including text, table data, and other objects. People will also be able to add their own descriptions to images with Markup for a more personalised experience.