<![CDATA[jeffwatkins.dev]]> https://jeffwatkins.dev/ Sun, 10 May 2020 00:00:00 -0700 en-us Copyright 2026 3600 <![CDATA[Building an adaptive button]]> https://jeffwatkins.dev/articles/building-an-adaptive-button https://jeffwatkins.dev/articles/building-an-adaptive-button Sun, 10 May 2020 00:00:00 -0700 I know. I started this series by extolling the virtue of using a UIButton instead of building something completely custom. You’d think for this next instalment you’d be settling in to learn the secrets of how to make your UIButtons look like this…

image-20200407073922753

Or more interestingly this…

image-20200407073958596

If we wanted to create buttons with only a single line of text or with only manual layout, this would be absolute simplicity. However, I specifically want multiple lines of text and auto layout is a requirement for any reasonable component at this point. But given how jealously UIButton manages its constraints, the only way to build a component similar to what I want would be to completely ignore its titleLabel and imageView. After exploring a number of possibilities, I reluctantly concluded I couldn’t create a UIButton-derived button at all.

But just because I’m not using UIButton doesn’t mean I’m giving up and doing everything from scratch. There’s still no excuse for using UITapGestureRecognizer to solve this problem. The solution lies in moving one step up the class tree to UIControl.

By subclassing UIControl we get touch tracking implemented for us. All we need to do is properly configure our constraints for titleLabel, subtitleLabel, and imageView and, when our button receives touches, update our colours appropriately in response to isHighlighted.

public override var isHighlighted: Bool {
    didSet {
        self.updateColors()
    }
}
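The updateColors method isn’t shown here; a minimal sketch (the property names and colour treatment below are assumptions, not the actual implementation) might be:

```swift
// Hypothetical sketch: dim the content while highlighted so the control
// gives the same visual feedback a UIButton would. The property names
// are assumptions based on the description above.
private func updateColors() {
    let alpha: CGFloat = self.isHighlighted ? 0.3 : 1.0
    self.titleLabel.alpha = alpha
    self.subtitleLabel.alpha = alpha
    self.imageView?.alpha = alpha
}
```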

The result is a fully functional button as you can see here:

Because I’ve configured both titleLabel and subtitleLabel to adjust their text size for Dynamic Type, all I need to do is tweak the size of the image when the content size category changes.

func updateImageHeightConstraint() {
    guard let imageView = self.imageView else { return }
    guard let image = imageView.image else { return }

    let bodyFont = UIFont.preferredFont(forTextStyle: UIFont.TextStyle.body)
    let lineHeight = bodyFont.lineHeight

    let size = image.size
    let aspectRatio = size.width / size.height
    let width = lineHeight * aspectRatio

    if self.imageWidthConstraint == nil {
        self.imageWidthConstraint = imageView.widthAnchor.constraint(equalToConstant: width)
        self.imageWidthConstraint?.priority = UILayoutPriority.defaultHigh
        self.imageWidthConstraint?.isActive = true
    } else {
        self.imageWidthConstraint?.constant = width
    }
}
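For the image to track Dynamic Type, this method needs to run whenever the content size category changes; one way to wire that up (a sketch):

```swift
public override func traitCollectionDidChange(_ previousTraitCollection: UITraitCollection?) {
    super.traitCollectionDidChange(previousTraitCollection)
    // Only rebuild the constraint when the type size actually changed.
    guard self.traitCollection.preferredContentSizeCategory != previousTraitCollection?.preferredContentSizeCategory else { return }
    self.updateImageHeightConstraint()
}
```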

The result is a button that’s both adaptive to different layouts and accessible.

Of course, when the button reaches accessibility text sizes it gets rather unusably large, so it might make sense to limit the point size of the fonts for the title and subtitle labels. We want the button to grow with Dynamic Type but not become crazy large.

var titleFont: UIFont {
    let largestContentSizeCategory = UITraitCollection(preferredContentSizeCategory: UIContentSizeCategory.accessibilityMedium)
    let normalContentSizeCategory = UITraitCollection(preferredContentSizeCategory: UIContentSizeCategory.large)

    let largestFont = UIFont.preferredFont(forTextStyle: UIFont.TextStyle.body,
                                           compatibleWith: largestContentSizeCategory)
    let baseFont = UIFont.preferredFont(forTextStyle: UIFont.TextStyle.body,
                                        compatibleWith: normalContentSizeCategory)

    let fontMetrics = UIFontMetrics(forTextStyle: UIFont.TextStyle.body)
    return fontMetrics.scaledFont(for: baseFont,
                                  maximumPointSize: largestFont.pointSize,
                                  compatibleWith: normalContentSizeCategory)
}

var subtitleFont: UIFont {
    let largestContentSizeCategory = UITraitCollection(preferredContentSizeCategory: UIContentSizeCategory.accessibilityMedium)
    let normalContentSizeCategory = UITraitCollection(preferredContentSizeCategory: UIContentSizeCategory.large)

    let largestFont = UIFont.preferredFont(forTextStyle: UIFont.TextStyle.footnote,
                                           compatibleWith: largestContentSizeCategory)
    let baseFont = UIFont.preferredFont(forTextStyle: UIFont.TextStyle.footnote,
                                        compatibleWith: normalContentSizeCategory)

    let fontMetrics = UIFontMetrics(forTextStyle: UIFont.TextStyle.footnote)
    return fontMetrics.scaledFont(for: baseFont,
                                  maximumPointSize: largestFont.pointSize,
                                  compatibleWith: normalContentSizeCategory)
}

The one drawback to using a UIControl instead of a UIButton is there doesn’t seem to be a good way to convince it to send UIControl.Event.primaryActionTriggered when the user taps the button. Instead you’ll need to add your actions to UIControl.Event.touchUpInside.1 This is a bummer, but not the end of the world. I’ll probably add a bit more code to simulate the .primaryActionTriggered control event.
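One way to simulate it (a sketch; the action name is my own invention) is to register an internal touchUpInside action that re-broadcasts the event:

```swift
// Sketch: in the control's initialiser, register an internal action.
self.addTarget(self, action: #selector(forwardPrimaryAction), for: .touchUpInside)

// Re-send the tap as .primaryActionTriggered so callers can register
// for the same event they would use with a UIButton.
@objc private func forwardPrimaryAction() {
    self.sendActions(for: .primaryActionTriggered)
}
```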

An additional drawback when using a UIControl in Interface Builder is that dragging a connection from the button to a view controller to set up an action will wire it to the UIControl.Event.valueChanged event. I wish I had some clue how to fix this, but I don’t use Interface Builder in my regular workflow.

Using UIControl to replace UIButton seems completely viable. The result is accessible to Dynamic Type and VoiceOver as well as meeting my design goals. I admit I’m disappointed I wasn’t able to find a solution using UIButton, but that doesn’t mean you shouldn’t use UIButton for simpler tasks.

If you’d like to take a look at the code, please hop over to GitHub. I make no promises or warranties. I might even update it as I explore new solutions.


  1. I find lots of iOS developers still use UIControl.Event.touchUpInside anyway, at least when configuring their target/actions in code. So this might not be too big a problem. ↩︎

]]>
<![CDATA[Constraints and UIButton]]> https://jeffwatkins.dev/articles/constraints-and-uibutton https://jeffwatkins.dev/articles/constraints-and-uibutton Thu, 09 Apr 2020 00:00:00 -0700 While UIControl has contentHorizontalAlignment and contentVerticalAlignment, neither of these properties is sufficient to allow us to specify a re-alignment of the icon and title in the button. So let’s add an enumeration to our Button class.

public enum ContentLayout {
    /// Arrange content horizontally with the traditional layout:
    /// icon followed by title.
    case horizontal
    /// Arrange content horizontally with the reverse of the traditional
    /// layout with the title followed by the icon.
    case horizontalReversed
    /// Arrange content vertically with the icon followed by the title.
    case vertical
    /// Arrange content vertically with the title followed by the icon.
    case verticalReversed
}
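Eventually the enumeration has to drive constraint generation. A condensed sketch of how the cases might map onto anchor relationships (only the icon-to-title constraint is shown; the view names and spacing values are assumptions):

```swift
// Condensed, hypothetical sketch: centring and edge constraints omitted.
func iconTitleConstraints() -> [NSLayoutConstraint] {
    switch self.contentLayout {
    case .horizontal:
        return [titleLabel.leadingAnchor.constraint(equalTo: imageView.trailingAnchor, constant: 8)]
    case .horizontalReversed:
        return [imageView.leadingAnchor.constraint(equalTo: titleLabel.trailingAnchor, constant: 8)]
    case .vertical:
        return [titleLabel.topAnchor.constraint(equalTo: imageView.bottomAnchor, constant: 4)]
    case .verticalReversed:
        return [imageView.topAnchor.constraint(equalTo: titleLabel.bottomAnchor, constant: 4)]
    }
}
```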

That would allow us to specify horizontal buttons like the following:

image-20200407073922753

And also vertical buttons like the following:

image-20200407073958596

If we look at UIButton there are methods that stand out as potential candidates we might implement to get icon and label layouts like we want:

open func contentRect(forBounds bounds: CGRect) -> CGRect
open func titleRect(forContentRect contentRect: CGRect) -> CGRect
open func imageRect(forContentRect contentRect: CGRect) -> CGRect

The first, contentRect(forBounds:), allows us to modify the location of the content of the button given the bounds of the UIView. This is helpful if we want to draw a fancy border around the button. Honestly, though, you’re probably better off just setting alignmentRectInsets and letting auto layout figure this out for you.

The other two — titleRect(forContentRect:) and imageRect(forContentRect:) — allow us to move the title and image around within the content of the button. This seems like exactly what we want to do. However, there’s a catch: these methods don’t execute at all when we’re using auto layout — which is necessary to make multiline labels work. We’ll need an alternate solution.

By adding two method overrides to our button we can add breakpoints to determine what constraints are being added and when:

public override func addConstraint(_ constraint: NSLayoutConstraint) {
    super.addConstraint(constraint)
}

public override func addConstraints(_ constraints: [NSLayoutConstraint]) {
    super.addConstraints(constraints)
}

We discover all of the (questionable) constraints are added to our button in updateConstraints. In fact, UIButton inserts a couple new instances of a private _UIButtonContentCenteringSpacer class to handle arranging the views. This isn’t how we’d probably do things today, but auto layout wasn’t quite as sophisticated when UIButton first adopted it.

If we take a look at the view hierarchy at the start of updateConstraints we see mostly what we expect:

<Button: 0x7f8c65e0fb50; baseClass = UIButton; frame = (150 114.5; 54 34)>
   | <UIImageView: 0x7f8c65c0ef20; frame = (20 10; 14 14)>
   | <UIButtonLabel: 0x7f8c65d306f0; frame = (1.5 6.5; 51 21); text = 'Button'>

However, immediately after updateConstraints executes we can see the addition of the _UIButtonContentCenteringSpacer views1:

<Button: 0x7f8c65e0fb50; baseClass = UIButton; frame = (150 114.5; 54 34)>
   | <UIImageView: 0x7f8c65f05100; frame = (0 0; 54 34)>
   | <UIImageView: 0x7f8c65c0ef20; frame = (20 10; 14 14)>
   | <UIButtonLabel: 0x7f8c65d306f0; frame = (1.5 6.5; 51 21)>
   | <_UIButtonContentCenteringSpacer: 0x7f8c65f2f790; frame = (0 0; 0 0); hidden = YES; tag = 12000274>
   | <_UIButtonContentCenteringSpacer: 0x7f8c65f2fed0; frame = (0 0; 0 0); hidden = YES; tag = -12000274>

Apple’s documentation includes this warning about implementing an override for updateConstraints:

image-20200407084922973

This puts us in a somewhat tricky situation. We should always heed admonishments like this; however, the constraints UIButton creates in its updateConstraints aren’t quite fit for our purpose. Because my testing revealed all constraints were added in updateConstraints, I’d be tempted to simply skip the UIButton implementation of updateConstraints and call UIControl’s implementation directly.

The following bit of clever runtime manipulation is courtesy of Dave DeLong, but I also received a bunch of help from the Seattle Xcoders group. Basically, we need to create a call to super that bypasses UIButton. We can do that using the Objective-C runtime:

struct objc_super skipSuper;
skipSuper.receiver = self;
skipSuper.super_class = [[[self class] superclass] superclass];

void (*callSuper)(struct objc_super *, SEL) = (void (*)(struct objc_super *, SEL))objc_msgSendSuper;
callSuper(&skipSuper, _cmd);

I’d need to translate this into Swift — which I suppose I’m capable of — or rewrite my button class in Objective-C — which would make me rather happy. But for an example of the right way to do things, this feels wrong somehow. :)

Instead, I’m tempted to allow UIButton to create its awkward constraints on its empty titleLabel and imageView and create our own views instead. That means we’ll be losing the default support UIButton gives us for highlighting the button when touches begin, displaying an alternate appearance when disabled, and possibly some other things. To recover the functionality UIButton offers out of the box, we’ll need to implement at least the following:

open override var isHighlighted: Bool {
    didSet { … }
}

open override var isEnabled: Bool {
    didSet { … }
}

But I wouldn’t be surprised if we wind up having to implement some combination of the following as well to ensure everything works correctly:

open func beginTracking(_ touch: UITouch, with event: UIEvent?) -> Bool

open func continueTracking(_ touch: UITouch, with event: UIEvent?) -> Bool

open func endTracking(_ touch: UITouch?, with event: UIEvent?)

open func cancelTracking(with event: UIEvent?)

This feels like a lot of work and maybe it’s not any different from just creating a subclass of UIControl and doing everything from scratch. We’ll also want to make certain our button is accessible by setting the accessibilityLabel at some point, but we were already doing that anyway.
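Making a UIControl-based replacement read as a single button to VoiceOver takes only a few lines; for example (a sketch, assuming title and subtitle properties like those described earlier in this series):

```swift
// Sketch: expose the whole control as one accessible button element.
self.isAccessibilityElement = true
self.accessibilityTraits = .button
self.accessibilityLabel = [self.title, self.subtitle]
    .compactMap { $0 }
    .joined(separator: "\n")
```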

There’s a lot of work ahead here. Not an unreasonable amount for a first-class UI component, but certainly not something you’d want to knock out in an evening. Still, if your application calls for buttons of this kind, it would definitely make sense to invest a moderate amount of time to create a reusable component like this.


  1. Did you spot the tag value? I certainly did. How many times have you been told not to use viewWithTag because it performs a recursive search of the view hierarchy? I mean you should still avoid it, but really, I have to wonder why not just add two additional view pointers? ↩︎

]]>
<![CDATA[Dressing up your UIButton]]> https://jeffwatkins.dev/articles/dressing-up-your-uibutton https://jeffwatkins.dev/articles/dressing-up-your-uibutton Thu, 02 Apr 2020 00:00:00 -0700 Before we start decorating our buttons with faux leather backgrounds reminiscent of the days before iOS 7, let’s take a moment to see how our buttons behave for users with vision limitations. By default, Apple exempts buttons from scaling along with the text of your UI. I think this is a mistake. You can change this behaviour quite easily:

button.titleLabel?.adjustsFontForContentSizeCategory = true

However, in the previous example with multiline buttons, this won’t work because the fonts are embedded in the NSAttributedString. We’ll need to do a tiny bit more work. Really, it’s nothing.

public override func traitCollectionDidChange(_ previousTraitCollection: UITraitCollection?) {
    guard self.traitCollection.preferredContentSizeCategory != previousTraitCollection?.preferredContentSizeCategory else { return }

    self.updateTitle()
}

So when the trait collection changes, if we notice that the preferred content size category has changed, we update the title. I suppose I could have been smarter by just updating the fonts in the NSAttributedString, but to be honest, most folks don’t change their type size. They set it and go. This will work just fine.

Multiline button at AX4 size

Now let’s take a look at bringing back the nice rounded button styles we’re starting to see in some iOS apps. There are a number of ways we could go about this, but in keeping with how I like to approach things, we’re going to take advantage of the features UIButton already offers rather than building something new.

Unfortunately, when we add a background to UIButton the code that adds spacing between the image and the title stops working. I’m not certain exactly why1, but you can see the result here:

rounded-background-button

In a future article I’m planning to show you how to move the image all around the button, so for now, I’ll simply remove the image leaving a nice (non-broken) button with a rounded background image:

rounded-background-button

I’m going to get a little tricky with this button and overload the meaning of the backgroundColor property. Instead of backgroundColor determining the background colour of the button itself, I’m going to use that to drive the colour of the background image. To do this, I’ll override the backgroundColor property:

var _buttonBackgroundColor: UIColor?
public override var backgroundColor: UIColor? {
    get { _buttonBackgroundColor }
    set {
        _buttonBackgroundColor = newValue
        self.setupButton()
    }
}

Notice I never call super here, so the button will never know its background colour has changed. Instead, I store the colour and call setupButton.

In the previous article about UIButton, I failed to mention the setupButton method. This wasn’t out of any nefarious intent, but rather because setupButton did simple things like setting up constraints to ensure the titleLabel didn’t extend beyond the top or bottom of the button when it began to wrap.

if self.titleConstraints.isEmpty, let titleLabel = self.titleLabel {
    titleLabel.numberOfLines = 0
    titleConstraints.append(titleLabel.topAnchor.constraint(greaterThanOrEqualTo: self.topAnchor))
    titleConstraints.append(titleLabel.bottomAnchor.constraint(lessThanOrEqualTo: self.bottomAnchor))
    NSLayoutConstraint.activate(titleConstraints)
}

But now I need to add a bit more code in there to handle configuring the background image based on the new background colour:

if let backgroundColor = self.backgroundColor {
    self.contentEdgeInsets = UIEdgeInsets(top: self.cornerRadius / 2.0,
                                          left: self.cornerRadius,
                                          bottom: self.cornerRadius / 2.0,
                                          right: self.cornerRadius)
    let backgroundImage = self.backgroundImage(fill: backgroundColor)
    let highlightImage = self.backgroundImage(fill: backgroundColor.withAlphaComponent(0.3))
    self.setBackgroundImage(backgroundImage, for: UIControl.State.normal)
    self.setBackgroundImage(highlightImage, for: UIControl.State.highlighted)
} else {
    // Clear both states so a stale highlighted image doesn't linger.
    self.setBackgroundImage(nil, for: UIControl.State.normal)
    self.setBackgroundImage(nil, for: UIControl.State.highlighted)
}

This background image is created using the following function, which takes a fill colour and returns a resizable image based on the cornerRadius of the button.

func backgroundImage(fill fillColor: UIColor) -> UIImage {
    let radius = self.cornerRadius
    let size = CGSize(width: radius * 2 + 1, height: radius * 2 + 1).floorToPixel()
    let rect = CGRect(origin: CGPoint.zero, size: size)

    let renderer = UIGraphicsImageRenderer(size: size)

    let backgroundImage = renderer.image { _ in
        let path = UIBezierPath(roundedRect: rect, cornerRadius: radius)
        fillColor.set()
        path.fill()
    }

    return backgroundImage.resizableImage(withCapInsets: UIEdgeInsets(all: radius))
}
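The function relies on two small helpers, floorToPixel() and UIEdgeInsets(all:), which aren’t shown in the article. Plausible implementations might look like this:

```swift
extension CGSize {
    /// Round the size down to a whole pixel on the main screen.
    /// (Assumed implementation; the original isn't shown here.)
    func floorToPixel() -> CGSize {
        let scale = UIScreen.main.scale
        return CGSize(width: floor(width * scale) / scale,
                      height: floor(height * scale) / scale)
    }
}

extension UIEdgeInsets {
    /// Convenience initialiser applying the same inset on every edge.
    init(all inset: CGFloat) {
        self.init(top: inset, left: inset, bottom: inset, right: inset)
    }
}
```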

Unfortunately, this code has one flaw: at large accessibility sizes, the text bumps up against the side of the background image as you can see here:

rounded-button-ax4

To fix this we’ll add some constraints to the titleLabel to ensure it respects the contentEdgeInsets and keeps a nice amount of padding around the button:

if self.currentBackgroundImage != nil {
    if let titleLabel = self.titleLabel, self.backgroundConstraints.isEmpty {
        self.backgroundConstraints = [
            titleLabel.leadingAnchor.constraint(greaterThanOrEqualTo: self.leadingAnchor,
                                                constant: self.cornerRadius),
            titleLabel.topAnchor.constraint(greaterThanOrEqualTo: self.topAnchor,
                                            constant: self.cornerRadius / 2.0),
            self.trailingAnchor.constraint(greaterThanOrEqualTo: titleLabel.trailingAnchor,
                                           constant: self.cornerRadius),
            self.bottomAnchor.constraint(greaterThanOrEqualTo: titleLabel.bottomAnchor,
                                         constant: self.cornerRadius / 2.0)
        ]
        NSLayoutConstraint.activate(self.backgroundConstraints)
    }
} else {
    if !self.backgroundConstraints.isEmpty {
        NSLayoutConstraint.deactivate(self.backgroundConstraints)
        self.backgroundConstraints = []
    }
}

These constraints ensure our button looks perfect even at accessibility sizes.

rounded-button-ax4-correct

We don’t build skeuomorphic apps any more, but that doesn’t mean we can’t have nice things. Building a great app has always meant putting in a bit more work than just putting standard controls on the screen and hoping for the best. Adding a rounded background to your buttons is no different. But just like adding multiline support, do it once and you’ll have that functionality across your entire app.


  1. Yes, I know I should file a Feedback and maybe I even will. But ever since Apple switched from Radar to Feedback it’s felt like even more of a black hole. Although I have to say the iOS app for filing feedback is definitely much nicer than the old web site. ↩︎

]]>
<![CDATA[Nobody loves UIButton]]> https://jeffwatkins.dev/articles/nobody-loves-uibutton https://jeffwatkins.dev/articles/nobody-loves-uibutton Tue, 31 Mar 2020 00:00:00 -0700 I’m rather particular about ensuring my apps follow accessibility best practices. I really want to ensure as many people are getting the benefit from my work as possible. That’s why I cringe when I see code like this:

let tapRecognizer = UITapGestureRecognizer(target: self,
                      action: #selector(doSomethingGreat))
complicatedView.addGestureRecognizer(tapRecognizer)

In this example complicatedView is a view containing maybe a label or two and an image. You know, rather like a button.

Chances are you’ve seen something like this in code you work on. Maybe you’ve even written something just like it. There are a number of accessibility problems with using a UITapGestureRecognizer as if it were a button. Let’s start with the first and most obvious one: it isn’t a button, so your views won’t behave like a button.

We’re all familiar with buttons. So familiar in fact that they’ve blended into the background of iOS user interfaces. We’ve probably forgotten many of the intricacies of how buttons interact with users. First, they provide feedback as demonstrated in the video below.

As you can see, tapping on a button causes the text to highlight while tapping on the views using a UITapGestureRecognizer does not. This is an important bit of feedback. It’s one of the many things customers look for when they determine whether your app feels like a native app.

I’ve also heard complaints about how hard it is to work with multiple lines of text in UIButton. As you can see, the demo I’m using has two lines of text, each in a different font. To make this easier on myself, I subclassed UIButton.1 My subclass exposes two additional properties, title and subtitle, and makes setTitle(_:for:) and setAttributedTitle(_:for:) unavailable to ensure code uses the new properties.

The properties are both very simple:

public var title: String? {
    didSet {
        self.updateTitle()
    }
}

public var subtitle: String? {
    didSet {
        self.updateTitle()
    }
}

All the complexity lives in the updateTitle method. After fetching the paragraph style to use based on the content alignment, we set to work creating an attributed string for the button text:

if let title = self.title {
    let font = UIFont.preferredFont(forTextStyle: UIFont.TextStyle.body)
    let attributes = [NSAttributedString.Key.font: font]
    let attributedTitle = NSAttributedString(string: title,
                                             attributes: attributes)
    buttonText.append(attributedTitle)
    accessibilityParts.append(title)
}

Starting with the title property, I create an attributed string using the preferred font for body text. You’ll note I also record the title for use in the accessibility label later. Next, I do the same thing for the subtitle property:

if let subtitle = self.subtitle {
    let font = UIFont.preferredFont(forTextStyle: UIFont.TextStyle.footnote)
    let attributes = [NSAttributedString.Key.font: font]
    let attributedSubtitle = NSAttributedString(string: subtitle,
                                                attributes: attributes)
    if buttonText.length > 0 {
        buttonText.append(NSAttributedString(string: "\n"))
    }
    buttonText.append(attributedSubtitle)
    accessibilityParts.append(subtitle)
}

Finally, I apply the paragraph style corresponding to the value of UIControl.ContentHorizontalAlignment and use a wee bit of trickery to ensure the title updates without any unneeded blinking:

let range = NSRange(location: 0, length: buttonText.length)
buttonText.addAttribute(NSAttributedString.Key.paragraphStyle,
                        value: paragraphStyle,
                        range: range)

// Prevent an unwanted title update animation by turning off animations
// and calling layoutIfNeeded.
UIView.performWithoutAnimation {
    super.setAttributedTitle(buttonText, for: UIControl.State.normal)
    self.layoutIfNeeded()
}
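The paragraph style itself comes from the control’s contentHorizontalAlignment; a sketch of that mapping (the exact cases handled in the real code are an assumption):

```swift
// Sketch: derive a text alignment from the control's content alignment.
// Note .trailing maps to .right here, which is only correct for
// left-to-right layouts; a fuller version would consult the layout direction.
var paragraphStyle: NSMutableParagraphStyle {
    let style = NSMutableParagraphStyle()
    switch self.contentHorizontalAlignment {
    case .left:
        style.alignment = .left
    case .leading:
        style.alignment = .natural
    case .right, .trailing:
        style.alignment = .right
    default:
        style.alignment = .center
    }
    return style
}
```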

And because I want to ensure my multi-line button sounds just right I set the accessibilityLabel value when I’m all done2:

self.accessibilityLabel = accessibilityParts.joined(separator: "\n")

That probably seems like a lot of work just to get multiple lines of text working in a UIButton, but it’s worth it. First, now I have a multiline button that behaves like a button. And it’s accessible as you can see in the following video.

VoiceOver users expect to be given a hint that the item they’re interacting with can be activated — that it isn’t just static text. In the case of buttons, that hint is the additional announcement of “Button” after the phrase “Edit something, you know you want to.” Furthermore, the entire phrase is one VoiceOver element, while the multi-view UITapGestureRecognizer exposes each label individually — and each can be activated to perform the action of the pseudo-button.

Of course, if you’ve gone all in on SwiftUI you won’t be looking back at UIButton at all, but if you’re like most of us and can’t yet adopt SwiftUI there’s still a lot to love about UIButton. I’d urge you to take a closer look. I bet you’ve misjudged it.


  1. So scandalous! There once was a time when we were told not to do this, because of the dangers of class clusters. But now this seems to be totally fine as long as you’re using a standard button. ↩︎

  2. In theory, UIButton should use the attributedTitle for the accessibilityLabel; however, it doesn’t seem to pause correctly for the newline. This just makes it sound a little better. ↩︎

]]>
<![CDATA[A Native Server-driven UI]]> https://jeffwatkins.dev/articles/native-server-driven-ui https://jeffwatkins.dev/articles/native-server-driven-ui Sun, 15 Mar 2020 00:00:00 -0700 I was really impressed with Kate Castellano’s talk on day two of UIKonf 2019. She explained how Clue was using server-generated JSON to configure native UI to provide a highly flexible experience to their users. If you haven’t seen her talk, you should take a moment to watch it.

She gave a follow up talk at Pragma Conf later in 2019 with even more information including a demo of adding a video component to the templated system. I’ve been mulling over how this technique changes building native apps for a while and her talk gave me a lot to think about.

When I led the front-end team at the Apple Online Store, I tried to move more and more of the logic into the JavaScript on the visitor’s browser. This worked well enough for components like the Valentine’s Day custom engraving feature, but we discovered running a lot of JavaScript client-side was slow during the checkout process and unbearably — sale-preventingly — slow on slower browsers. Of course, all this was before the shadow DOM made what we were trying to do so much faster and more efficient. Everyone’s using React or Vue or something client-side these days.

Years later, as the primary developer working on the iOS versions of iTunes Connect and App Store Connect, I struggled with this same balance of where the code should execute. Except this time I argued for the intelligence to be on the server; I’d learnt my lesson. The client didn’t have any place interpreting the business logic and rules that might change at — what seemed to us — a moment’s notice. Far better for the client to receive an already rendered value than to attempt to concatenate an application’s platform, bundle version, and status together to form a string like “iOS 2020.3.13 Pending”. The more the client served to simply display data sent from the server, the more I could focus on building better infrastructure which would allow us to deploy more features1.

The natural extension of this approach is a server-defined user interface like Kate describes in her UIKonf & Pragma Conf talks. I think building the UI from the server is particularly appropriate when an application is driven by campaigns or other timely information (as is the case in Kate’s examples). When you think about it, most of our applications are all about timely information unless you’re building a reference library. Giving your marketing and product teams greater control over how the application changes may seem like giving up that control, but it really gives you the flexibility to focus on what you do best: building great software.

I really like this idea and would love to explore it a bit. I’ve got the bare beginnings of an idea for a sample app that I’m going to bounce off a few folks. This might give me the opportunity to explore a couple of other technologies I’ve been curious about, like Sign in with Apple and CloudKit.


  1. I know it didn’t seem like either app iterated quickly on features. I had a wish list as long as my arm of features I wanted to implement and our design team had another list they wanted to jump on, but for various reasons we could just never get them scheduled. Let’s say it was a frustrating job. ↩︎

]]>
<![CDATA[Misusing Appearance Selectors]]> https://jeffwatkins.dev/articles/misusing-appearance-selectors https://jeffwatkins.dev/articles/misusing-appearance-selectors Fri, 07 Feb 2020 00:00:00 -0800 Whether it’s right or wrong, I tend to approach app development as if I were still an Apple framework developer. Perhaps my background as an Objective-C developer is why I feel particularly comfortable extending UIKit classes with additional functionality. I’ll often find myself writing an extension of UIView where I’d like to know when the view has been added to the view hierarchy. I could just implement willMove(toSuperview:) (AKA -willMoveToSuperview:), of which the documentation says:

The default implementation of this method does nothing. Subclasses can override it to perform additional actions whenever the superview changes.

But how can I be certain no one in our app or any library we use implements willMove(toSuperview:), or ever will implement it? Because they’re not expecting the implementation in UIView to do anything, they might not call super. Then my code won’t run.

Clearly, what I need are life-cycle hooks instead. I’m pretty certain I filed a radar about this while I was at Apple, but I’m not going to hold my breath waiting for them to be implemented.

Instead, let’s consider UI_APPEARANCE_SELECTOR.

The documentation on UI_APPEARANCE_SELECTOR is nearly non-existent, but it marks an Objective-C selector as participating in the shadowy world of UIAppearance. UIKit evaluates appearance selectors on views as they are added to the view hierarchy. We can abuse this behaviour for our own nefarious ends.

I’m going to use Objective-C code here, because I can’t be bothered to figure this out in Swift. In my UIView categories, I include the following bogus UI_APPEARANCE_SELECTOR:

@interface UIView (Sample)

@property (nonatomic, setter=set_jw_bogus) BOOL jw_bogus UI_APPEARANCE_SELECTOR;

@end

Because we’re dealing with a category and the compiler can’t add properties to UIView, we’ll need to implement the getter & setter. But I don’t actually care about the value of jw_bogus. It’s purely there for the side effect of being called.

@implementation UIView (Sample)

- (void)set_jw_bogus:(BOOL)jw_bogus
{
    // Here's where we can trigger code that needs to run when the view is added to the view hierarchy.
}

- (BOOL)jw_bogus
{
    return YES;
}

@end

None of this will do any good unless we tell UIKit views should have a custom “appearance”. We can do that with a static constructor which will be executed as your app starts up. This grabs the UIAppearance proxy for UIView and sets the jw_bogus property to YES. This tells UIKit every instance of UIView should have its jw_bogus property set to YES after it is added to the view hierarchy.

__attribute__((constructor))
static void SetupBogusAppearance()
{
    [UIView appearance].jw_bogus = YES;
}

Of course, there are problems with this approach. Because it’s merely tricky and not evil, it’s not quite as efficient as it could be. If you move a tree of views from one view hierarchy to another, every view in the moving tree will be notified when it’s added to the new hierarchy, not just the root view. In a recent project where I tried to use this, the bottom-up notification of UI_APPEARANCE_SELECTORs caused an infinite notification loop in my code. I couldn’t be bothered to figure out how to solve it and reached for a bigger hammer instead.

However, with a little cleverness UI_APPEARANCE_SELECTOR can be a tool for a lot more than just setting colours and fonts on your custom views.

]]>
<![CDATA[Considerate Apps Sample Code]]> https://jeffwatkins.dev/articles/considerate-apps-sample-code https://jeffwatkins.dev/articles/considerate-apps-sample-code Tue, 14 Jan 2020 00:00:00 -0800 While the concepts behind the Considerate Apps talk were first tested in the App Store Connect for iOS app, I wanted to make certain I had something to share with developers to help clarify the ideas. That meant building what I’ve heard described as a micro-framework.

Let’s start with Component. This is a basic UIView subclass which arranges its subviews in a contentView according to constraints vended from constraintsForHorizontalOrientation or constraintsForVerticalOrientation. A Component has two values for orientation, preferredOrientation and effectiveOrientation. The preferredOrientation is the orientation the component would desire under ideal circumstances, e.g. type size is small and there’s plenty of space. The effectiveOrientation is the orientation determined by computeIdealOrientation in an attempt to make the content fit best.
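
To make that description concrete, here’s a rough sketch of the shape a Component might have. The type, property, and method names follow the article; the bodies, access levels, and default values are my assumptions, not the actual framework source:

```swift
import UIKit

// A rough sketch of the shape described above; the bodies and defaults are
// assumptions rather than the actual source.
open class Component: UIView {
    public enum Orientation {
        case horizontal
        case vertical
    }

    /// The orientation the component would use under ideal circumstances.
    open var preferredOrientation: Orientation = .horizontal

    /// The orientation chosen by computeIdealOrientation() to best fit the content.
    public private(set) var effectiveOrientation: Orientation = .horizontal

    /// Subviews are arranged within this view by the vended constraints.
    public let contentView = UIView()

    /// Set when effectiveOrientation needs recomputing.
    var needsUpdateEffectiveOrientation = false

    /// Subclasses vend the constraints for each orientation.
    open func constraintsForHorizontalOrientation() -> [NSLayoutConstraint] { [] }
    open func constraintsForVerticalOrientation() -> [NSLayoutConstraint] { [] }

    /// Subclasses override this to choose the orientation that fits best.
    open func computeIdealOrientation() -> Orientation { preferredOrientation }

    /// Recomputes effectiveOrientation (and, in the real class, swaps constraints).
    open func updateEffectiveOrientation() {
        effectiveOrientation = computeIdealOrientation()
    }
}
```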

When a Component receives either traitCollectionDidChange or didMoveToSuperview, it calls setNeedsUpdateEffectiveOrientation to update the value of effectiveOrientation. This ensures it will have the ideal layout as the content size category changes and when it is first added to the view hierarchy. It’s important not to call updateEffectiveOrientation too often, because it might not be the fastest thing we could do on the main thread.
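
Assuming a needsUpdateEffectiveOrientation flag alongside the updateEffectiveOrientation method, one plausible way to implement that coalescing is to defer the work to the next main-queue pass so several triggers in one run-loop turn collapse into a single update. This body is my sketch, not the actual source:

```swift
import UIKit

// A sketch of the coalescing described above; the flag and method names follow
// the article, but the implementation here is an assumption.
extension Component {
    public func setNeedsUpdateEffectiveOrientation() {
        // If an update is already pending, don't schedule another one.
        guard !needsUpdateEffectiveOrientation else { return }
        needsUpdateEffectiveOrientation = true

        // Defer the potentially expensive work so traitCollectionDidChange and
        // didMoveToSuperview arriving together trigger only one update.
        DispatchQueue.main.async { [weak self] in
            guard let self = self, self.needsUpdateEffectiveOrientation else { return }
            self.needsUpdateEffectiveOrientation = false
            self.updateEffectiveOrientation()
        }
    }
}
```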

Then we have either FlexibleHeader or AdaptiveHeader. These do essentially the same thing, but with a slightly different implementation of computeIdealOrientation.

In the FlexibleHeader implementation of computeIdealOrientation, all it does is determine whether the content size category represents an accessibility category:

public override func computeIdealOrientation() -> Orientation {
    if traitCollection.preferredContentSizeCategory.isAccessibilityCategory {
        return .vertical
    } else {
        return .horizontal
    }
}

On the other hand, AdaptiveHeader uses the more sophisticated approach of determining whether the title label will truncate to determine which orientation to return:

public override func computeIdealOrientation() -> Orientation {
    if self.titleWillTruncate() {
        return .vertical
    } else {
        return .horizontal
    }
}

The implementation of titleWillTruncate is really quite simple:

func titleWillTruncate() -> Bool {
    guard let title = self._title, let text = title.text else {
        return false
    }

    guard !text.isEmpty else {
        return false
    }

    let bounds = title.bounds
    let measureSize = CGSize(width: bounds.width,
        height: CGFloat.greatestFiniteMagnitude)
    let measureBounds = CGRect(origin: CGPoint.zero, size: measureSize)
    let fullRect = title.textRect(forBounds: measureBounds,
        limitedToNumberOfLines: 0)
    let titleRect = title.textRect(forBounds: measureBounds,
        limitedToNumberOfLines: title.numberOfLines)
    return fullRect.height > titleRect.height
}

In essence, if the height of the full, unbounded rectangle for the title’s text is greater than that of the actual title rectangle, then the title label will truncate.

Take a look at the source on GitHub and feel free to let me know what you think.

]]>
<![CDATA[Considerate Apps Use Dynamic Type]]> https://jeffwatkins.dev/articles/dynamic-type https://jeffwatkins.dev/articles/dynamic-type Tue, 07 Jan 2020 00:00:00 -0800 Arguably one of the most important features Apple has added to their platform in years, Dynamic Type enables users with vision limitations, such as myself, to enlarge the text on screen to make it more easily readable. Let’s take a look at how the Apple Health app scales as we adjust the type size.

As you can see, content starts out primarily horizontal, but as the text size increases and space becomes more constrained, Apple Health switches to a vertical layout. We can do the same with our apps. Of course we’re all getting great, detailed designs like the following:

Standard

With subheader

Max 2 lines of title

But we also need to get designs for accessibility sizes which layout the content vertically like this:

Dynamic Reflow: AX Sizes 1 – 5

Then we can implement traitCollectionDidChange to determine whether our component needs to update its effective orientation:

public override func traitCollectionDidChange(_ previous: UITraitCollection?) {
    super.traitCollectionDidChange(previous)

    let currentSizeCategory = traitCollection.preferredContentSizeCategory
    let previousSizeCategory = previous?.preferredContentSizeCategory

    if currentSizeCategory != previousSizeCategory {
        self.needsUpdateEffectiveOrientation = true
    }
}

And then when we determine the effective orientation, it’s a simple matter of checking whether the size category represents an accessibility category:

public override func effectiveOrientation(for availableSize: CGSize) -> Orientation {
    if traitCollection.preferredContentSizeCategory.isAccessibilityCategory {
        return .vertical
    } else {
        return .horizontal
    }
}

And then, in our method updating the effective orientation, if the returned orientation is different from the preferred orientation, we create new constraints.
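
A minimal sketch of that update might look like the following. The stored currentOrientation (initially the preferred orientation) and the activeConstraints array are bookkeeping details of my own, not from the article:

```swift
import UIKit

// A minimal sketch. currentOrientation holds the orientation whose constraints
// are currently installed, and activeConstraints tracks those constraints;
// both are assumed bookkeeping properties.
func updateEffectiveOrientation() {
    let newOrientation = effectiveOrientation(for: bounds.size)
    guard newOrientation != currentOrientation else { return }
    currentOrientation = newOrientation

    // Swap the installed constraints for those matching the new orientation.
    NSLayoutConstraint.deactivate(activeConstraints)
    activeConstraints = (newOrientation == .vertical)
        ? constraintsForVerticalOrientation()
        : constraintsForHorizontalOrientation()
    NSLayoutConstraint.activate(activeConstraints)
}
```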

This gives us a flexible header that will adapt to the dynamic type size and switch into a vertical orientation when necessary. Here’s how that looks.

Non accessibility sizes

Accessibility sizes

If all we needed to worry about was the English-speaking market, this would probably work just fine. But really, that’s not good enough any more.

Apple makes it so easy for us to distribute our apps worldwide, but in order for them to succeed, it’s important to localise them. This means more than just translating the strings in our application — although that’s an important first step. We need to ensure our app adapts to new languages and continues to provide a great experience for our users. Let’s see how our new flexible layout works when our header is translated into Malay:

Non accessibility sizes

Accessibility sizes

As you can see, before the header switches to vertical layout in the accessibility sizes, the title begins to truncate. That’s definitely not great. We could negotiate with our designers to allow the title to expand to three lines, but what happens if we localise to another language with even longer translations? Do we expand to four lines? Where do we draw the line? Fortunately, we’ve already found a solution: we can display everything vertically.

This does mean we need to implement a slightly trickier bit of code, because auto layout doesn’t handle multi-line text especially well. However, because we’re only executing this code when the size category changes, it’s not a significant performance impact. Let’s take a look at the updated effectiveOrientation(for:) method.

public override func effectiveOrientation(for availableSize: CGSize) -> Orientation {
    if self.titleWillTruncate() {
        return .vertical
    } else {
        return .horizontal
    }
}

As expected, we’re going to return vertical orientation when the title would truncate and horizontal orientation when the title would not truncate. The implementation of titleWillTruncate is super simple:

func titleWillTruncate() -> Bool {
    guard let title = self._title, let text = title.text else {
        return false
    }

    guard !text.isEmpty else {
        return false
    }

    let bounds = title.bounds
    let measureSize = CGSize(width: bounds.width,
        height: CGFloat.greatestFiniteMagnitude)
    let measureBounds = CGRect(origin: CGPoint.zero, size: measureSize)
    let fullRect = title.textRect(forBounds: measureBounds,
        limitedToNumberOfLines: 0)
    let titleRect = title.textRect(forBounds: measureBounds,
        limitedToNumberOfLines: title.numberOfLines)
    return fullRect.height > titleRect.height
}

Ultimately, it comes down to comparing the height of the text without truncation (fullRect) against the height of the text constrained to the title’s number of lines (titleRect). If the height of the unconstrained rectangle is greater, we know the title will be truncated and we can use a vertical orientation.

This yields a more adaptive layout that works better when translations need more space, but also when they’re more compact. It’s also advantageous when we’re receiving text from the server and can’t anticipate how much space we’ll need.

Your components might have more complex needs than simply determining whether the title will truncate, however the approach is the same and will yield the same adaptive results. In the end, this provides a better experience for the user, because there’s no need to worry whether the text will be truncated.

]]>
<![CDATA[Considerate Apps Sound Good]]> https://jeffwatkins.dev/articles/voice-over https://jeffwatkins.dev/articles/voice-over Mon, 06 Jan 2020 00:00:00 -0800 Voice Over is the screen reader technology Apple pioneered with macOS and iOS. For the most part, if you use standard controls, Voice Over just works. However, what you get by default can be a bit clunky, and it can be improved by grouping views together and using custom accessibilityLabel values.

The Trulia online real estate application is an example of an app that has gone beyond simple Voice Over support. As we can hear in the following example, they’ve implemented a custom accessibilityLabel for each cell in their open house list. This really isn’t too much extra work, but it makes a big difference. In the end, Voice Over should sound like a trusted assistant reading the information to you, not some clunky tool for reading the screen.

Listen to the information for an open house around the corner from where I live:

In case you had a hard time catching that, I’ve transcribed what Voice Over announced:

One open house in ninety eight thousand, one hundred and ten. For sale. Zero dollars minus one point three million dollars. Two plus bee dee. Two plus bah. Two thousand plus ess cue eff tee. Just now. Open house? Eight hundred and ninety nine thousand dollars. Three bee dee four bah three thousand and thirteen ess cue eff tee. Four thousand, six hundred and fifty two Island av en ee. Bainbridge Island. Double You ay. Open. The fourteenth of December 13 and the fifteenth of December 13.

That’s not bad, but as a native English speaker there are a couple of things that make me cringe or simply confuse me. The first comes right away:

One open house in ninety eight thousand, one hundred and ten.

As I said, this house is right around the corner from where I live. If I wasn’t sighted, I wouldn’t know that “98110” represents the ZIP or Postal Code for the house. It should be announced as individual digits instead of a numeral. The best way to get this is to use UIAccessibilitySpeechAttributeSpellOut in the accessibilityAttributedLabel. There are a number of other great attributes we can use, like UIAccessibilitySpeechAttributePunctuation for IP addresses and UIAccessibilitySpeechAttributeIPANotation for words or phrases that Voice Over just never gets correct.
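
For example, assuming we’re assembling the cell’s accessibility label ourselves (cell here stands in for the open house cell being configured), spelling out the postal code looks something like this:

```swift
import UIKit

// Mark the postal code with the spell-out speech attribute so VoiceOver reads
// "nine eight one one zero" rather than a numeral. `cell` is assumed to be the
// open house cell being configured.
let label = NSMutableAttributedString(string: "One open house in ")
label.append(NSAttributedString(
    string: "98110",
    attributes: [.accessibilitySpeechSpellOut: true]))
cell.accessibilityAttributedLabel = label
```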

The next sentence has trouble too:

Zero dollars minus one point three million dollars.

I get it. Sometimes we just don’t have enough space on screen and we take typographical shortcuts (although I’d argue that’s not the case here). Sometimes we use a mathematical minus when we should use a dash. It doesn’t make a big difference visually1, but Voice Over will distinguish between the two. But that’s not all. This sentence just doesn’t sound right. That’s not how someone would say the same thing. It gets worse when describing the physical characteristics of the house:

Two plus bee dee. Two plus bah. Two thousand plus ess cue eff tee.

This isn’t Voice Over suffering a brain aneurysm. It’s once again reading just what’s on screen. However, the developers of Trulia have used abbreviations which make no sense to Voice Over. This and the previous example would be a great opportunity to use separate formatters for auditory messages. In the case of the price range a better auditory message would be:

Up to one point three million dollars.

And for physical characteristics, instead of abbreviating it’s better to use full words:

Two or more bedrooms, two or more bathrooms, two thousand plus square feet.
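
One way to achieve that, sketched here with hypothetical helper names, is to derive the display and spoken strings from the same data; NumberFormatter’s spellOut style supplies the number words:

```swift
import Foundation

// Hypothetical helper: a compact string for the screen and a natural-sounding
// one for the accessibility label. The names are illustrative, not from the
// Trulia app or any framework.
func bedroomStrings(minimumBedrooms: Int) -> (display: String, spoken: String) {
    let formatter = NumberFormatter()
    formatter.numberStyle = .spellOut
    let count = formatter.string(from: NSNumber(value: minimumBedrooms))
        ?? "\(minimumBedrooms)"
    return (display: "\(minimumBedrooms)+ bd",
            spoken: "\(count) or more bedrooms")
}
```

In an English locale, bedroomStrings(minimumBedrooms: 2) yields “2+ bd” for display and “two or more bedrooms” for speech.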

I’m glad to hear Trulia has implemented better-than-average support for Voice Over in their app. That’s why I didn’t feel too bad about calling them out. They’re already better than most applications.


  1. I mean it does make a difference, but only to those of us who care about that sort of thing. ↩︎

]]>
<![CDATA[The Case for Considerate Apps]]> https://jeffwatkins.dev/articles/case-for-considerate-apps https://jeffwatkins.dev/articles/case-for-considerate-apps Sun, 05 Jan 2020 00:00:00 -0800 Unless you’re building an open source app like NetNewsWire, you need to make money from your app. One way or another: either you make money by convincing your users to pay you for goods and services, or you show advertisements for someone who wants to sell your users goods and services. If you’re making your money by harvesting and selling user data, you should reconsider your life choices.

Maybe you’re an independent developer and you’re trying to decide between enhancing your app to support Voice Over or diving into a feature that can take advantage of CreateML. Maybe you’re not the one prioritising the features in your app. Maybe that’s your boss, the product owner, or a client. I’d like to give you enough information to open a discussion with them. Maybe you can change their mind.

Who uses smartphones?

Ten years ago, the answer might have been “young people”, but the market for smartphones is nearly saturated in all age groups. The graph below shows global usage for 2017. It isn’t until you get to people between 45 and 54 that usage drops below 90%, to 82%. Again, this is from 2017. Two years ago. I would expect these numbers to show continued growth in 2020.

Who Uses Smartphones?

Why should you care what age groups use smartphones? If we look at average American salaries, we can see folks don’t start earning much until they hit “middle age” or around 35 years old.

Average American Salaries

Average American Salaries

Yes, it’s true. Average Americans do not make the same amount of money as software developers do. Additionally, it isn’t until they’ve reached 35 or so that they begin to accumulate any wealth. Presumably this is because they’ve paid off their student loans, stopped partying all the time1, and settled in to save up for a house.

Average American Net Worth

Average American Net Worth

The lesson to learn from this is that if you want to target Americans with some disposable income and a bit of accumulated wealth, a good age range to consider is between 35 and 55 years old. It’s probably sensible to market to them at earlier ages to develop an aspiration for your product, or simply to bring them into the revenue-generating funnel.

However, if you want people to part with their money, you also want to make it as easy for them as possible. That means making your app as pleasant to use as possible. As much as a smooth animation impresses all your developer friends, average consumers are more swayed by whether an app is easy to use. One factor in whether an app is easy to use is whether they can see it clearly.

53% of Americans between 18-44 wear corrective lenses

Between the ages of 45 and 54, that number goes up to 67%. And more than ¾ of Americans over 55 wear corrective lenses. Needing reading glasses becomes super common after age 40 regardless of other vision problems. Although I need my glasses to see things far away, I can’t count the number of times I’ve had to take them off to read my phone in order to get directions or interact with an app while outside. This is particularly frustrating when I’m wearing my prescription sunglasses.

Americans with Vision Impairments

More than 6% of Americans have vision impairments even with corrective lenses

In this case, a vision impairment is defined as having less than 20/40 vision even with corrective lenses. I know I’ve been in product pitches where the phrase “All we need is 5% of the market and we’re set!” was bandied about. I’m not saying you should refocus your app to exclusively target users with vision limitations, but it does seem bonkers to just write off a large part of the market.

Legal repercussions to inaccessibility

Recently Domino’s Pizza fought against making their web site and app accessible for people with vision limitations. This is because the people who run Domino’s Pizza are terrible people. After a prolonged battle, they were dealt a resounding defeat in court with the following judgement:

The “inaccessibility of Domino’s website and App impedes access to the goods and services of its physical pizza franchises — which are places of public accommodation.”
— 9th U.S. Circuit Court of Appeals, Domino’s Pizza v. Guillermo Robles, No. 18-1539.

While the Americans with Disabilities Act clearly defines a “place of public accommodation”, if our apps represent the only way for customers to interact with our products, we should expect to come under similar scrutiny. Furthermore, jurisdictions like the European Union, which are known for genuinely caring about human rights, may have more stringent requirements for the accessibility of web sites and apps.

Maybe you’re young and don’t wear glasses today, but chances are good you will by the time you’re in your 40s. An investment in building a considerate app is an investment in your own future.


  1. I don’t think the inability to accumulate wealth among younger folks is really a case of partying all the time. Instead I think it’s a lack of focus. I know in my own case, saving money for the future simply didn’t occur to me until I was in my 30s. ↩︎

]]>
<![CDATA[Introducing Considerate Apps]]> https://jeffwatkins.dev/articles/introducing-considerate-apps https://jeffwatkins.dev/articles/introducing-considerate-apps Sat, 04 Jan 2020 00:00:00 -0800 When the iPhone first launched, the platform offered so little by comparison to today that I believe we put more of our energy into building applications to enable our users to do things they’d never done before (or been frustrated by doing manually). Today, in order to be competitive we must consider how we’ll use interruptible animations, machine learning, augmented reality, declarative user interfaces, and other platform advances. Additionally, we spend more of our time building grand architectural edifices to manage our data models and networking. Writing an app today is nothing like the days of iPhone OS 2.0, for good and for ill.

We’ve become distracted from honing the minimum viable product which inspired our users in the first place. In order to stay on schedule implementing planned features, fixing bugs, and adopting new “requirements” announced at WWDC each year, I believe we’ve failed to take advantage of opportunities to make our apps more adaptive for our users.

Given the choice between implementing a new feature with an exciting technology like machine learning or ensuring rock solid support for Voice Over, most companies feel the competitive pressure to choose the new feature. How many of your daily applications were updated to support Dark Mode, but still don’t support Dynamic Type? Localisation was one of the most welcome “features” we added to App Store Connect/iTunes Connect in my six years on the team. Many US-developed applications aren’t localised – not just from indie developers, but from large companies – because it’s hard and just not as glamorous or important as feature work.

Building a considerate app means putting the user first: accepting the user, along with their limitations, all while providing the rich, immersive experience we’ve come to expect from iOS applications. But accepting the user comes first. That means we should expect our applications to be accessible to those with motor control limitations, cognitive limitations, hearing limitations1, and vision limitations. It’s also important to remember that even if your user can understand the language you’ve developed your app in, it may not be their primary or most fluent language2.

Making your app adaptive to your users will go a long way to making it stand out among your competitors.


  1. I’ve got significant hearing loss in both ears, but only in the vocal range. If there’s a lot of background noise, I won’t “hear” you unless I can see you talking to me. I don’t read lips, but I think my brain realises it needs to boost the vocal range signal when it sees you talking. Brains. How do they even work? ↩︎

  2. I’m always humbled by my friends in Europe for whom English is often their third or fourth language. I speak German at the level of a toddler. And I struggle with that. ↩︎

]]>