UITouchType.Stylus
Supporting the Pencil
I'm happy to finally be able to share what I've been working on: supporting the Apple Pencil.
In this post I want to cover the basics of supporting UITouchType.Stylus
, some conveniences to make it easier to support iOS 8 and iOS 9.0, and a hole in the current processing of events in iOS that can make supporting the Pencil tricky (plus how you can work around it).
Note: I'm not going to cover UIPress
in this post; that's another issue entirely.
The Basics
You may already be aware that in iOS 9.0 UITouch
gained some new properties like force
and type
. The UITouchType
enumeration also shipped with Direct
and Indirect
to distinguish between finger touches on the iPhone/iPad and indirect touches on the AppleTV.
As of iOS 9.1 we now have three touch types:
Type | Description | iOS |
Direct | Classic finger touches | 9.0 |
Indirect | AppleTV Siri Remote | 9.0 |
Stylus | Apple Pencil | 9.1 |
UIGestureRecognizer
gains allowedTouchTypes
, which does exactly what it says. If you want to detect which touch type triggered the gesture, you'll need to create duplicate recognizers with different allowed touch types, because the information is no longer available once the gesture selector fires.
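For example, here's a minimal sketch of the duplicate-recognizer approach (iOS 9.1 or later, say in a view controller's viewDidLoad); the handler names handlePencilTap: and handleFingerTap: are hypothetical:

let pencilTap = UITapGestureRecognizer(target: self, action: Selector("handlePencilTap:"))
pencilTap.allowedTouchTypes = [NSNumber(integer: UITouchType.Stylus.rawValue)]   // Pencil only
view.addGestureRecognizer(pencilTap)

let fingerTap = UITapGestureRecognizer(target: self, action: Selector("handleFingerTap:"))
fingerTap.allowedTouchTypes = [NSNumber(integer: UITouchType.Direct.rawValue)]   // fingers only
view.addGestureRecognizer(fingerTap)

Each handler now knows exactly which input type fired it, without having to inspect the touches.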
UITouch
gains preciseLocationInView
which returns the pixel-perfect location of the touch. For stylus input this is probably the method you want to use because it gives you very accurate location information.
The docs warn against using
preciseLocationInView
for hit-testing because finger touches can often lie a bit outside a view but overlap it enough to be considered a hit.
UITouch
also gains some angle-related properties that are only valid for stylus input:
azimuthAngleInView and azimuthUnitVectorInView: These give the rotation of the Pencil around an axis extending from the tip to the cap, or what I like to think of as the "yaw".
altitudeAngle: The angle (in radians) of the Pencil with respect to the iPad's screen; π/2 means the Pencil is perfectly perpendicular to the screen, while 0 means it is lying flat on the screen.
The force of the tip pressing against the screen comes from force
, the same as 3D Touch on the iPhones.
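To pull those values together, here's a rough sketch of a touchesMoved:withEvent: override, assumed to live in a UIViewController subclass; addSample is a hypothetical helper that feeds your drawing code:

override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
    guard let touch = touches.first where touch.type == .Stylus else { return }
    let location = touch.preciseLocationInView(view)           // pixel-perfect point
    let azimuth  = touch.azimuthAngleInView(view)              // "yaw" around the barrel
    let altitude = touch.altitudeAngle                         // π/2 = upright, 0 = flat on the glass
    let pressure = touch.force / touch.maximumPossibleForce    // normalized 0...1
    addSample(location, azimuth: azimuth, altitude: altitude, pressure: pressure)
}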
Don't forget: by enabling multi-touch you can let the user perform completely different simultaneous actions with fingers and the Pencil. Try using the ruler and the Pencil in the Notes app for an example.
A Word on the Run Loop
There is one important thing you must know or the next two sections won't make any sense at all. The main run loop aims to complete one iteration every 16.67ms (or 60fps). If there is no work to do, or handling events takes less time than that, the run loop will idle until it is time to display the next frame. Think about it... if the run loop were just a while true { }
loop then it would spend the majority of its time chewing up battery life asking "Are there new events? No? Cool. Are there new events yet? No? Cool. Are there new events yet?...". To the CPU, it is very likely the user will take an eternity to provide the next input.
The problem arises when you have fancy touch digitizers that scan input 120 or 240 times per second. Do you wake the run loop every time new input is available? The animation system (among many other things) assumes 60fps. Even if it didn't, it would be a waste to double the number of animation frames you need to draw just because a finger happens to be touching the screen.
iOS takes a different approach to handling all that extra touch input: coalescing.
Coalescing
Touch coalescing takes the extra touch input received during the previous frame and makes it available to you during the next frame.
Coalescing is actually very simple to use. There's a method on UIEvent
called coalescedTouchesForTouch
. In touchesMoved:withEvent:
just pass the current touch you're handling to that method and you'll get back all the touch events you missed between the last time touchesMoved:withEvent:
was called. On a 120Hz touch-input display that means two touch events are received every frame; on the iPad Pro the 240Hz scan rate gives you four touch events per frame. For a continuous touch that means coalescedTouchesForTouch
should return 2 or 4 touches respectively (the array includes the current touch, so you can use a simple loop to process them all).
If your app is drawing a shape then just call your addPoint
method in a loop on the coalesced touches array; if there is no array then fall back to the current touch.
if let touches = event?.coalescedTouchesForTouch(touch) {
    for t in touches {
        addPoint(t.preciseLocationInView(self.view))
    }
} else {
    addPoint(touch.preciseLocationInView(self.view))
}
One word of caution: For non-coalesced touch handling, the same UITouch
object corresponding to the same input device or finger is re-used for every event. Coalesced touches don't do that; they are always new objects for every touch (think of them as snapshots). If you were using the trick of checking pointer equality to track different fingers, keep using the non-coalesced touch object for that, then grab its coalesced touches afterwards.
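Here's a sketch of what that looks like in practice, again inside a view controller; strokes is a hypothetical per-finger store whose entries are created in touchesBegan:

var strokes = [UITouch: [CGPoint]]()   // keyed by the long-lived, re-used touch objects

override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
    for touch in touches {   // identity-stable objects, safe to use as dictionary keys
        // Fetch the short-lived coalesced snapshots separately.
        let snapshots = event?.coalescedTouchesForTouch(touch) ?? [touch]
        strokes[touch]?.appendContentsOf(snapshots.map { $0.preciseLocationInView(view) })
    }
}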
Prediction
The Pencil introduces a new wrinkle (or rather exposes one that was always present): Any lag between touch input and the display updating is perceived and makes the app feel less like a physical thing and more like some disconnected virtual entity mired in molasses.
iOS 9 partly handles this by reducing the latency for touch processing and Core Animation updates, from around 60ms to 30ms. To put it another way, if the user touches the screen at the start of an animation frame you can update the display in response within two animation frames.
Another technique used by iOS is touch prediction. Touches can provide estimated values (where iOS thinks the value will be in the future), then update those values once input has caught up. Prediction itself isn't new to iOS 9.1; it was part of iOS 9.0. However, in 9.0 there was no way to tell which of a touch's values were predictions; you just used an API similar to coalesced touches: predictedTouchesForTouch
.
I won't go into full detail on prediction here but I can warn you that the docs are extremely thin. The best information is probably in this WWDC 2015 Video: Advanced Touch Input on iOS.
The basic idea is that you should store the predicted touch points separately but draw them as if they were real. When you get the next update throw away all the old predicted points and replace them with the coalesced points (which are what the user actually did, not the system's guess), then tack-on the new predicted touch points.
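Here's a rough sketch of that bookkeeping, assuming committedPoints and predictedPoints are arrays of CGPoint owned by a view controller that redraws both every frame:

var committedPoints = [CGPoint]()   // what the user actually did
var predictedPoints = [CGPoint]()   // the system's guesses, drawn but never kept

override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
    guard let touch = touches.first, event = event else { return }

    // Throw away last frame's guesses...
    predictedPoints.removeAll()

    // ...replace them with the coalesced touches (what actually happened)...
    for t in event.coalescedTouchesForTouch(touch) ?? [touch] {
        committedPoints.append(t.preciseLocationInView(view))
    }

    // ...then tack the new predictions onto the end, for display only.
    for t in event.predictedTouchesForTouch(touch) ?? [] {
        predictedPoints.append(t.preciseLocationInView(view))
    }
}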
If you're wondering how iOS handles prediction when there are various values to predict (like force, azimuth angle, etc), read on!
Estimating
In iOS 9.1 Apple added several properties related to prediction to UITouch
. These properties let you see which values are predictions and which ones are known. (For some confusing reason these are all called estimated values rather than predicted values.)
estimationUpdateIndex: Remember how coalesced and predicted touches are always new instances? This index is monotonically increasing and allows you to match up a future touch with the original touch that provided an estimated value. In other words, newer touches with the same estimationUpdateIndex replace values from older ones with that index.
estimatedProperties: These flags indicate which values are currently predicted or estimated values on the touch object (which may not be all the properties expected to have future updates).
estimatedPropertiesExpectingUpdates: These are properties that are expected to have updated values in the future. If a property is not included here then the current value is the final estimated value and you can stop looking for updates to it.
Using these properties you can now store estimated force, angle, etc separately and track their predicted vs actual values. A touch may in fact have a precise force and position but a predicted angle. In the next moment it may gain a predicted location while having a known angle. (For UIResponder
s you can receive estimated property updates via touchesEstimatedPropertiesUpdated
.)
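Here's a minimal sketch of matching refinements back up by estimationUpdateIndex, again inside a hypothetical view controller. StrokeSample and pendingSamples are made-up names, and only force gets patched to keep things short:

// Hypothetical per-point record; a real app would track more than force.
class StrokeSample {
    var location: CGPoint
    var force: CGFloat
    init(location: CGPoint, force: CGFloat) {
        self.location = location
        self.force = force
    }
}

var samples = [StrokeSample]()
var pendingSamples = [NSNumber: StrokeSample]()   // keyed by estimationUpdateIndex

override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
    guard let touch = touches.first else { return }
    for t in event?.coalescedTouchesForTouch(touch) ?? [touch] {
        let sample = StrokeSample(location: t.preciseLocationInView(view), force: t.force)
        samples.append(sample)
        // If refined values are still on the way, remember the sample so we can patch it later.
        if let index = t.estimationUpdateIndex where !t.estimatedPropertiesExpectingUpdates.isEmpty {
            pendingSamples[index] = sample
        }
    }
}

override func touchesEstimatedPropertiesUpdated(touches: Set<UITouch>) {
    for t in touches {
        guard let index = t.estimationUpdateIndex, sample = pendingSamples[index] else { continue }
        sample.force = t.force   // swap the estimate for the refined value
        if t.estimatedPropertiesExpectingUpdates.isEmpty {
            pendingSamples.removeValueForKey(index)   // nothing more is coming for this sample
        }
    }
}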
You can find examples for most of this touch handling stuff in the TouchCanvas example project.
Supporting iOS 8.0 / 9.0
Although normally I'd check for a feature by using respondsToSelector
, in this case there isn't a convenient way to do that because iOS 9.1 just introduces a new enum constant. I suppose you could take the address of the UITouchTypeStylus
case in Objective-C, but an OS version check seems easier... plus it will simplify what we need to do to support iOS 8.
Apple finally bowed to the inevitable and implemented their own version checking mechanism, so at least we can be reasonably certain the check is correct and forward-compatible:
let hasStylus = NSProcessInfo.processInfo().isOperatingSystemAtLeastVersion(
    NSOperatingSystemVersion(majorVersion: 9, minorVersion: 1, patchVersion: 0))
Next we'll map preciseLocationInView
to locationInView
on earlier operating systems. This lets us be clear in our intent by using preciseLocationInView
when we want points to draw but use locationInView
when performing hit testing (regardless of input method). You don't have to do it this way - you could add your own category methods to UITouch
if you wanted.
// Only needed before iOS 9.1, where preciseLocationInView: doesn't exist yet.
if !hasStylus {
    let touchClass = UITouch.self
    let locInView = class_getInstanceMethod(touchClass, Selector("locationInView:"))
    let locationInViewImp = method_getImplementation(locInView)
    let locationInViewTypes = method_getTypeEncoding(locInView)
    class_addMethod(touchClass, Selector("preciseLocationInView:"),
                    locationInViewImp, locationInViewTypes)
}
Since you'll probably re-use them a lot it may also make sense to add a category to UITouch
that provides class methods with the various combinations of touch types. It can detect if you're on iOS 9.0 and just return an empty array if asked for touchTypesStylusOnly
.
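A sketch of what that kind of category might look like, written as a Swift extension with hypothetical method names (remember allowedTouchTypes itself only exists on iOS 9.0 and later, so call sites still need guarding on iOS 8):

extension UITouch {
    // Empty before iOS 9.1, so a recognizer restricted to this list simply never fires.
    static func touchTypesStylusOnly() -> [NSNumber] {
        let stylusVersion = NSOperatingSystemVersion(majorVersion: 9, minorVersion: 1, patchVersion: 0)
        guard NSProcessInfo.processInfo().isOperatingSystemAtLeastVersion(stylusVersion) else { return [] }
        return [NSNumber(integer: UITouchType.Stylus.rawValue)]
    }

    static func touchTypesFingersOnly() -> [NSNumber] {
        return [NSNumber(integer: UITouchType.Direct.rawValue),
                NSNumber(integer: UITouchType.Indirect.rawValue)]
    }
}

Then recognizer.allowedTouchTypes = UITouch.touchTypesStylusOnly() reads clearly at every call site.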
Event Routing 101
You might be tempted to route touch events to different views based on the input device. If you can't imagine why then let me back up and talk about touch handling and routing on iOS.
UIKit uses a multi-phase routing system to determine who can receive touch events. Input first comes to the UIWindow
which performs hit testing on its root view, following the chain down subviews until it locates the UIView
it believes should own the touch.
You can modify this by returning a different view at any level, so subviews can choose to handle all touches for their children by returning self
from hitTest:withEvent:
. The default implementation of hitTest:withEvent:
calls pointInside:withEvent:
to quickly check that the touch location is inside the bounds of the UI object. If so, hitTest:withEvent:
is called recursively on all subviews to determine which view should own the touch.
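For example, here's a minimal sketch of a container view that claims every touch inside its bounds rather than letting its subviews own them:

class TouchOwningContainerView: UIView {
    override func hitTest(point: CGPoint, withEvent event: UIEvent?) -> UIView? {
        // Skip the recursive search entirely; anything inside our bounds is ours.
        return pointInside(point, withEvent: event) ? self : nil
    }
}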
Once an "owning" view has been determined, all the gesture recognizers attached to that view and its entire superview chain are given a chance to handle the touch. If any of them have delaysTouchesBegan
set then UIKit will cache the touches on the UIEvent
and the view won't see them until the gesture recognizer fails. Otherwise, if a gesture recognizer later succeeds, the view will get touchesCancelled:withEvent:
to let it know the previous touches should be ignored.
Assuming no gesture recognizers are involved (or at least none is delaying touches), the event handling chain begins. UIWindow.sendEvent
is called and routes the touch event to the owning view. If the owning view doesn't handle the event, it is passed to the nextResponder
in the UIResponder
chain.
The docs warn you not to call the
nextResponder
directly, but to call super
and let it forward to the next responder.
The Ugly Parts
Notice how I refer to "owning view"? UIKit was not really built with the idea that anyone would want to actually use the responder chain with touches and in fact most built-in UIView
s do not behave properly when you attempt to forward touches. Once a view "owns" the touches you have lost the opportunity to redirect the touches elsewhere. You'll also notice gesture recognizers happen in an earlier phase so a lot of behavior (like UIScrollView
pan/zoom) couldn't even be handled by forwarding anyway.
Imagine you have a view on top of a scroll view and you want to direct all stylus input to the overlaying view, but let finger touches control the scroll view. It simply isn't possible to set up any combination of forwarding or allowedTouchTypes
to make this work because as far as UIKit is concerned the overlaying view owns the touches. The scroll view can't see them and even if you forward them the scroll view's gesture recognizers can't see them. If you try to manually feed the touches into the gesture recognizers you'll get some really odd behavior and everything will break.
In theory you can perform this kind of routing at the UIWindow
level by controlling hit testing in hitTest:withEvent:
. The touches don't have an owning view at that point so the set of touches is empty. The UIEvent
has a type which does have Direct
and Indirect
, but unfortunately there's a hole in the implementation: Stylus
is absent so it isn't possible to route the touches this way to control the view that will ultimately own them.
Workaround
The workaround in our example scenario is to disable user interaction on the overlaying view and add a custom UIGestureRecognizer
subclass to the scroll view. Set the custom recognizer to only allow Stylus
input, then set the scroll view's pan gesture recognizer to only allow Direct
and Indirect
. Then your custom recognizer can forward all the touch events on to the overlaying view. As long as your own views are built to anticipate forwarded events it is safe to do so (you just can't do it with built-in UIKit views).
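Here's a rough sketch of that setup (iOS 9.1). StylusForwardingGestureRecognizer, overlay, and scrollView are hypothetical names, and the overlay must be written to accept forwarded touch events:

import UIKit.UIGestureRecognizerSubclass

class StylusForwardingGestureRecognizer: UIGestureRecognizer {
    // The overlaying view (with user interaction disabled) that should receive the Pencil's touches.
    weak var overlayView: UIView?

    override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent) {
        overlayView?.touchesBegan(touches, withEvent: event)
    }
    override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent) {
        overlayView?.touchesMoved(touches, withEvent: event)
    }
    override func touchesEnded(touches: Set<UITouch>, withEvent event: UIEvent) {
        overlayView?.touchesEnded(touches, withEvent: event)
    }
    override func touchesCancelled(touches: Set<UITouch>, withEvent event: UIEvent) {
        overlayView?.touchesCancelled(touches, withEvent: event)
    }
}

And the wiring:

overlay.userInteractionEnabled = false

let forwarder = StylusForwardingGestureRecognizer()
forwarder.overlayView = overlay
forwarder.allowedTouchTypes = [NSNumber(integer: UITouchType.Stylus.rawValue)]
scrollView.addGestureRecognizer(forwarder)

scrollView.panGestureRecognizer.allowedTouchTypes = [
    NSNumber(integer: UITouchType.Direct.rawValue),
    NSNumber(integer: UITouchType.Indirect.rawValue)]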
Alas, the Simulator
Although the simulator does have the ability to simulate force presses, as far as I am aware it cannot simulate Stylus input, so until you snag your iPad Pro and Apple Pencil you'll have to take your best guess at how this all fits together.
Conclusion
Using the Pencil is a fantastic experience. I once owned a PocketPC running Windows CE; it required a stylus to use. Later I had a chain of Windows Mobile phones that also required a stylus to use. The Apple Pencil is nothing like those awful plastic sticks and it makes me want to slap anyone trying to use the Steve Jobs stylus quote in reference to the Pencil.
I'm also glad the Pencil has arrived now that finger-based touch has a proven track record. I remember when the iPhone was announced; many people said touch-based UIs just couldn't work (or at least couldn't be taken seriously). If Apple had shipped a stylus back then, developers would have taken the lazy way out and simply designed UIs that required a stylus.
Now go forth and get your Pencil on!
P.S. I still haven't seen a Smart Keyboard; it seems no amount of string-pulling or pleading can make one of those things appear.
This blog represents my own personal opinion and is not endorsed by my employer.