If you’ve been following my work or this blog for a while, you’ll know I’m always stressing that authors need to support both mouse users and keyboard users. What I haven’t mentioned much is touch. Touch devices have given us a number of different modalities for interacting with applications and the web. The problem arises when users don’t have the ability to touch the screen, only have a single pointer when multiple are needed, or perhaps can’t perform the gesture at all. That is the point of this success criterion: providing an alternative for those users. Let’s check out how we do this.
Even though this criterion primarily targets touch input, it applies to all interactions, including the mouse. When we need to perform a drag-n-drop style function or move things with the mouse, we need to provide a way that supports keyboard-only interaction.
And you just paused. The thought that ran through your head is, “Wait, they said keyboard access for mobile devices?” Yes, I did. When users have motor control difficulties, they may rely on a keyboard to interact with the device. They may also rely on a switch device, a head-mounted mouse or pointer, or perhaps eye tracking. Most of these alternative control types are built upon how the keyboard interacts with the program or site. This is why we need to get keyboard interaction right.
Some interactions have common design patterns that are considered standardized. Others may not. The short of it is that the gesture must be replicable by a single-pointer interaction that is also accessible by keyboard.
Drag-N-Drop
This is one of the most common control types I see having issues. In a mouse-based interaction, the user identifies the handle (usually an icon of a couple of horizontal lines or up-and-down arrows, with an accessible name to match), clicks and holds the mouse button, and drags the item to a different location in the list. Releasing the mouse button drops the item in the new location.

For the touch-based user the interaction is the same, but instead of clicking and holding with the mouse, we are touching and holding with a finger. So how do we do it with a keyboard?
For keyboard interaction we need to control the states of our tools. We still have the handle, and we make sure it is a button. When it is activated with Space or Enter, we set aria-pressed="true" on the button. While the button is pressed, the user can use the arrow keys to move the item in the list. Lists are usually vertical, so we would focus on the up and down arrows. Once the item is in the right location, pressing Space or Enter changes aria-pressed back to false and drops the item in its new position. Every time the item is moved, its new position must be announced via a live region.
But not everyone will use a keyboard with a touch device, and they still may not be able to drag. To compensate, we need to provide a single-pointer method to move the item. We already have our point to grab and activate: the trigger handle. We could provide buttons to move the item up or down. If you supply instructions, you can offer another method, like touching above or below the activated item to move it up or down.
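To make the states concrete, here is a minimal sketch of the reorder logic, kept separate from the DOM so it is easy to follow. The names (createReorderable, announce) are my own, not from any library: announce stands in for writing to an aria-live region, the handle button's keydown handler would call toggleGrab on Space/Enter and mirror the return value into aria-pressed, and the arrow keys (or the on-screen move buttons) would call move.

```javascript
// Sketch of keyboard-operable reordering. The DOM wiring is omitted;
// announce() is a stand-in for updating an aria-live region.
function createReorderable(items, announce) {
  let grabbed = null; // index of the item currently "picked up"
  return {
    // Space/Enter on the handle toggles the grabbed state; the caller
    // mirrors the returned boolean into aria-pressed on the handle button.
    toggleGrab(index) {
      if (grabbed === null) {
        grabbed = index;
        announce(`${items[index]} grabbed`);
      } else {
        announce(`${items[grabbed]} dropped at position ${grabbed + 1}`);
        grabbed = null;
      }
      return grabbed !== null;
    },
    // ArrowUp/ArrowDown (or the on-screen move buttons) shift the item.
    move(delta) {
      if (grabbed === null) return;
      const target = grabbed + delta;
      if (target < 0 || target >= items.length) return;
      [items[grabbed], items[target]] = [items[target], items[grabbed]];
      grabbed = target;
      announce(`${items[grabbed]} moved to position ${grabbed + 1} of ${items.length}`);
    },
    items,
  };
}
```

Because every move announces the new position, a screen reader user always knows where the grabbed item is, whether they moved it with arrow keys or with the single-pointer buttons.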
Moving Within an Area
Many innovative and traditional online tools now rely on the <canvas> element to draw dynamic images and charts on the screen. The problem is that everything inside a canvas is a bitmap drawing. This means there are no controls unless we supply them.
We can do this by putting native HTML controls inside the canvas element to allow the interaction. If the canvas is something like a map where there is movement in the X and Y directions, you will need to establish which point is the focal point of the view (usually the center), inform the user when they move in a direction and by how much, and alert them to any new content or controls displayed.
Interaction here should include buttons to move in X and Y. Maps also require zoom buttons. When moving in these directions we also want to tie in keyboard commands, usually assigned to the arrow keys. But if you have more interaction, you may need to add other shortcuts. For example, in mapping we usually have this set of controls:
- Moving North and South: Up and Down Arrow
- Moving East and West: Left and Right Arrow
- Zoom: Plus (+) and minus (-) keys
- Rotate the view (3D): A and D keys
- Horizon line (3D): W and S keys
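As a sketch, those commands can be wired up through a simple key map. The view object and its pan, zoom, rotate, and tilt methods are hypothetical placeholders for whatever your mapping library exposes, and the step sizes are assumed values; the important part is that your on-screen buttons call the very same functions, so single-pointer users get identical actions.

```javascript
const STEP = 50; // pixels to pan per key press (an assumed value)

// Map each key from the list above onto a view action.
const keymap = {
  ArrowUp:    view => view.pan(0, -STEP),  // north
  ArrowDown:  view => view.pan(0, STEP),   // south
  ArrowLeft:  view => view.pan(-STEP, 0),  // west
  ArrowRight: view => view.pan(STEP, 0),   // east
  '+':        view => view.zoom(1),
  '-':        view => view.zoom(-1),
  a:          view => view.rotate(-15),    // degrees, 3D views
  d:          view => view.rotate(15),
  w:          view => view.tilt(5),        // horizon line, 3D views
  s:          view => view.tilt(-5),
};

// Call from a keydown listener on the canvas's focusable wrapper.
// Returns true when the key was handled (caller should preventDefault).
function handleMapKey(key, view) {
  const action = keymap[key];
  if (!action) return false;
  action(view);
  return true;
}
```

Each of these actions should also announce the result (new center, zoom level, or heading) through a live region, just as described above.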
Drawing
Drawing on screen usually means working inside a <canvas> element. To facilitate drawing, we need to establish a grid in the canvas, as the single-pointer user will rely on that grid to move about the interface. The user finds the coordinate point where they want to start drawing and drops a point with the keyboard, likely using Enter or Space. They can then use the grid to move to the next point they want to drop. Just as with mapping, we want to provide both keyboard and button methods for navigating the X, Y, and Z axes.
Using zoom controls, the grid can be scaled to make drawing easier. For instance, say we wanted to draw a curve in a bitmap and dropped a point. At the default scale, moving right one and down one gives us a straight line. If we zoom in so the grid only covers 4 CSS pixels, the same moves produce a smoother curve. Make sure you announce the coordinates as the user moves, including the density of the grid.
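One way the grid cursor could be sketched, assuming one arrow press moves the cursor by the current grid step and zooming in halves that step. The names and step sizes are illustrative, not from any particular library, and announce again stands in for an aria-live region.

```javascript
// Sketch of a grid cursor for keyboard drawing on a canvas.
// Arrow keys call move(); Enter/Space calls drop(); zoom controls
// change the grid density, making finer (smoother) curves possible.
function createGridCursor(announce, step = 8) {
  let x = 0, y = 0;
  const points = []; // dropped drawing points, in canvas pixels
  const say = () => announce(`x ${x}, y ${y}, grid ${step} pixels`);
  return {
    move(dx, dy) { x += dx * step; y += dy * step; say(); },
    zoomIn()  { step = Math.max(1, step / 2); say(); }, // finer grid
    zoomOut() { step *= 2; say(); },                    // coarser grid
    drop()    { points.push([x, y]); announce(`point at x ${x}, y ${y}`); },
    points,
  };
}
```

Notice that every movement and zoom change re-announces both the coordinates and the grid density, which is exactly what the paragraph above asks for.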
Multi-point Gestures
Phones and tablets allow us to perform multi-point gestures. These are gestures that cannot be replicated by a mouse, as they require two or more contact points along with a movement. Examples of this are pinch-n-zoom, rotation, and double- or triple-finger swipes.
The best solution in these cases is to provide buttons for these controls. On-screen buttons with accessible labels allow the user to activate the control with a single finger. If they are native HTML buttons, they will automatically be in the focus order and operable by keyboard. For example, one button could open the menu the multi-point gesture triggers, and others could move around in the menu. The expectation is that once a menu item is selected, the menu goes away and the action is taken.
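A sketch of what button-backed alternatives to pinch-n-zoom and two-finger rotation might look like. The limits and step sizes here are made up for illustration; in the markup, each function would be the click handler of a native HTML button with an accessible label like "Zoom in" or "Rotate right".

```javascript
// Single-pointer, keyboard-operable replacements for multi-point gestures.
// Zoom limits and rotation step are assumed values, not from any spec.
function createGestureAlternatives(state = { zoom: 1, rotation: 0 }) {
  return {
    // Replaces pinch-out / pinch-in:
    zoomIn()  { state.zoom = Math.min(state.zoom * 2, 8);    return state.zoom; },
    zoomOut() { state.zoom = Math.max(state.zoom / 2, 0.25); return state.zoom; },
    // Replaces two-finger rotation, in 45-degree steps:
    rotateLeft()  { state.rotation = (state.rotation + 315) % 360; return state.rotation; },
    rotateRight() { state.rotation = (state.rotation + 45) % 360;  return state.rotation; },
  };
}
```

Because these are plain functions behind native buttons, they are reachable by switch devices, keyboards, and single-finger touch alike.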
Path Based Interaction
Path-based interaction requires a minimum of two points and a direct path between them. Passing through these two points in the specific order outlined (left to right, or right to left) triggers the action.
This is not to be confused with dragging. When dragging, you touch point A and can move anywhere on the screen before dropping on point B. A path-based gesture must pass through A and B in a straight-ish line. Swiping is the most common example: maps, carousels, and tables all use a path-based system. These gestures aren’t strictly necessary, but they are natural ways humans interact with touch screens.
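For a carousel, the single-pointer alternative to a path-based swipe is simply a pair of Previous and Next buttons that step through the same slides. A minimal sketch, with announce once more standing in for a live region update:

```javascript
// Previous/Next buttons as the single-pointer, keyboard-operable
// alternative to swiping a carousel. Wrapping behavior is a choice;
// some carousels stop at the ends instead.
function createCarousel(slideCount, announce) {
  let current = 0; // zero-based index of the visible slide
  const say = () => announce(`Slide ${current + 1} of ${slideCount}`);
  return {
    next() { current = (current + 1) % slideCount; say(); return current; },
    prev() { current = (current - 1 + slideCount) % slideCount; say(); return current; },
  };
}
```

With native buttons calling next and prev, the same slides remain reachable by keyboard, switch, and a single finger, with no path-based gesture required.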
Summary
If you have an action that can be triggered by a movement of the mouse or a gesture, you must provide both a keyboard method and a single-pointer method of replicating that behavior.
Want to continue the conversation? Hit me up on BlueSky or LinkedIn.
