<![CDATA[Stories by Besher Al Maleh on Medium]]> https://medium.com/@almalehdev?source=rss-1d15c542811f------2 https://cdn-images-1.medium.com/fit/c/150/150/1*2aPhtx8VUrCiv6AgyVi4_g.png Stories by Besher Al Maleh on Medium https://medium.com/@almalehdev?source=rss-1d15c542811f------2 Medium Tue, 14 Apr 2026 13:01:20 GMT <![CDATA[The Nested Closure Trap]]> https://medium.com/@almalehdev/the-nested-closure-trap-356a0145b6d?source=rss-1d15c542811f------2 https://medium.com/p/356a0145b6d Thu, 12 Mar 2020 12:29:52 GMT 2020-03-13T18:59:47.236Z

Suppose we have code that executes an animation closure nested inside a Dispatch Work Item.

Here are 3 versions of this code. Can you tell which one introduces a retain cycle? (click for mobile-friendly gist)

Spot the leak
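The gist isn't embedded in this feed, so here is a representative sketch of one of the versions (version A, with [weak self] on the innermost closure). The class and property names here are assumed for illustration:

```swift
import UIKit

class FadeViewController: UIViewController {
    var workItem: DispatchWorkItem?

    func fadeOut() {
        // Version A: [weak self] only on the innermost (animation) closure
        let workItem = DispatchWorkItem {
            UIView.animate(withDuration: 1.0) { [weak self] in
                self?.view.alpha = 0
            }
        }
        self.workItem = workItem
        DispatchQueue.main.asyncAfter(deadline: .now() + 1, execute: workItem)
    }
}
```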

The answer is, all three versions do! That’s right, even version B, the one that creates a reference to the view outside the closure, introduces a retain cycle.

In a previous article, I wrote at length about [weak self] and the fact that it’s not always needed. But there is a case I did not cover — it’s when you have multiple nested closures.

You’ve probably run into a scenario where you ended up with nested closures and needed [weak self] to avoid a retain cycle. In that situation, are you supposed to use [weak self] inside every one of the nested closures, just the innermost one, or the outermost?

Going back to the example from before, there are two different closures in this code:

My instinct initially was to add [weak self] to the innermost UIView.animate closure. After all, this is where we are using self to access the view property, so it makes sense to add it here, right?

Turns out this introduces a retain cycle, and we need to move [weak self] up one level to the Dispatch Work Item closure.

The work item in this example contains the closure of interest — self stores a strong reference to the work item closure (via self.workItem = workItem), while the closure also stores a strong reference to self.

This strong reference to self is created even in version B, where the only mention of self is the [weak self] capture itself, inside the nested closure. Merely adding [weak self] there has inadvertently created a strong reference to self, leading to a retain cycle.

Conclusion

When you have nested closures, if one of them happens to require [weak self] (as per the diagram from my previous article), then be sure to add [weak self] at the level of that closure of interest (or somewhere higher). If you add it at a lower level (i.e. inside a nested closure), you will introduce a memory leak.

Here’s what a cycle-free version of our example looks like:
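A sketch (class and property names assumed for illustration), with [weak self] moved up to the work item's own closure:

```swift
import UIKit

class FadeViewController: UIViewController {
    var workItem: DispatchWorkItem?

    func fadeOut() {
        // [weak self] at the work item level breaks the cycle;
        // the inner closure now captures only the weak reference
        let workItem = DispatchWorkItem { [weak self] in
            UIView.animate(withDuration: 1.0) {
                self?.view.alpha = 0
            }
        }
        self.workItem = workItem
        DispatchQueue.main.asyncAfter(deadline: .now() + 1, execute: workItem)
    }
}
```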

Alternatively, we can create a reference to the view from outside the closure (like version B above), and avoid using [weak self] altogether:
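A sketch of that variant (names assumed), where nothing captures self at all:

```swift
import UIKit

class FadeViewController: UIViewController {
    var workItem: DispatchWorkItem?

    func fadeOut() {
        let view = self.view // capture the view, not self
        let workItem = DispatchWorkItem {
            UIView.animate(withDuration: 1.0) {
                view?.alpha = 0
            }
        }
        self.workItem = workItem
        DispatchQueue.main.asyncAfter(deadline: .now() + 1, execute: workItem)
    }
}
```

Since self never appears inside either closure, no cycle can form, and no capture list is needed.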

I updated my weak-self app with an additional case that covers nested closures. Download the app here.

That’s it for this post, a pretty short one! Thanks for reading, and be sure to check out my longer deep dive into [weak self] if you haven’t already. As always, if you have any comments, I’d love to hear them either here or on Twitter!

Thanks for reading. If you enjoyed this article, feel free to hit that clap button 👏 to help others find it. If you *really* enjoyed it, you can clap up to 50 times 😃

Check out some of my other articles:

]]>
<![CDATA[Concurrency Visualized — Part 3: Pitfalls and Conclusion]]> https://medium.com/@almalehdev/concurrency-visualized-part-3-pitfalls-and-conclusion-2b893e04b97d?source=rss-1d15c542811f------2 https://medium.com/p/2b893e04b97d Wed, 29 Jan 2020 02:01:40 GMT 2020-02-20T13:58:49.699Z Concurrency Visualized — Part 3: Pitfalls and Conclusion
Thanks to Pablo Stanley for these amazing illustrations!

This is part 3 of my concurrency series. Check out Part 1 and Part 2 if you missed them.

In my earlier discussion of sync, async, serial, and concurrent, I alluded to some pitfalls that you might encounter while working with concurrency. That’s our main topic for this article. Afterwards, I will wrap up this series with a summary and some general advice.

Pitfalls

Priority Inversion and Quality of Service

Priority inversion happens when a high priority task is prevented from running by a lower priority task, effectively inverting their relative priorities.

This situation often occurs when a high QoS queue shares a resource with a low QoS queue, and the low QoS queue gets a lock on that resource.

But I wish to cover a different scenario that is more relevant to our discussion: submitting tasks to a low QoS serial queue, then submitting a high QoS task to that same queue. This scenario also results in priority inversion, because the high QoS task has to wait on the lower QoS tasks to finish.

GCD resolves priority inversion by temporarily raising the QoS of the entire queue that contains the low priority tasks which are ‘ahead’ of, or blocking, your high priority task. It’s kind of like having cars stuck in front of an ambulance. Suddenly they’re allowed to cross the red light just so that the ambulance can move (in reality the cars move to the side, but imagine a narrow (serial) street or something, you get the point :-P)

To illustrate the inversion problem, let’s start with this code:
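The embedded gist isn't included in this feed; a sketch of the setup might look like this (queue labels and the printing helper are assumed):

```swift
import Foundation

let starterQueue = DispatchQueue(label: "starter", qos: .userInteractive)
let utilityQueue = DispatchQueue(label: "utility", qos: .utility)
let backgroundQueue = DispatchQueue(label: "background", qos: .background)

// Prints an equal number of circles of one colour
func printCircles(_ circle: String) {
    for _ in 1...20 { print(circle, terminator: " ") }
}

starterQueue.async {
    backgroundQueue.async { printCircles("⚪️") } // background QoS: white
    utilityQueue.async { printCircles("🔵") }    // utility QoS: blue
}

sleep(2) // keep the process alive long enough to observe the output
```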

We create a starter queue (where we submit the tasks from), as well as two queues with different QoS, then we dispatch tasks to each of these two queues, each task printing out an equal number of circles of a specific colour (utility queue is blue, background is white.)

Because these tasks are submitted asynchronously, every time you run the app, you’re going to see slightly different results. However, as you would expect, the queue with the lower QoS (background) almost always finishes last. In fact, the last 10–15 circles are usually all white.

No surprises there

But watch what happens when we submit a sync task to the background queue after the last async statement. You don’t even need to print anything inside the sync statement, just adding this line is enough:
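A sketch of that change (queue labels assumed; the circle-printing tasks from before are elided):

```swift
import Foundation

let starterQueue = DispatchQueue(label: "starter", qos: .userInteractive)
let backgroundQueue = DispatchQueue(label: "background", qos: .background)

starterQueue.async {
    // ...the async circle-printing tasks dispatched as before...

    backgroundQueue.sync {} // empty, yet enough to trigger priority inversion
}
```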

Priority inversion

The results in the console have flipped! Now, the higher priority queue (utility) always finishes last, and the last 10–15 circles are blue.

To understand why that happens, we need to revisit the fact that synchronous work is executed on the caller thread (unless you’re submitting to the main queue.) In our example above, the caller (starterQueue) has the top QoS (userInteractive.) Therefore, that seemingly innocuous sync task is not only blocking the starter queue, but it’s also running on the starter’s high QoS thread. The task therefore runs with high QoS, but there are two other tasks ahead of it on the same background queue that have background QoS. Priority inversion detected!

As expected, GCD resolves this inversion by raising the QoS of the entire queue to temporarily match the high QoS task; consequently, all the tasks on the background queue end up running at user interactive QoS, which is higher than the utility QoS. And that’s why the utility tasks finish last!

Side-note: If you remove the starter queue from that example and submit from the main queue instead, you will get similar results, as the main queue also has user interactive QoS.

To avoid priority inversion in this example, we need to avoid blocking the starter queue with the sync statement. Using async would solve that problem.

Although it’s not always ideal, you can minimize priority inversions by sticking to the default QoS when creating private queues or dispatching to the global concurrent queue.

Thread explosion

When you use a concurrent queue, you run the risk of thread explosion if you’re not careful. This can happen when you try to submit tasks to a concurrent queue that is currently blocked (e.g. with a semaphore, sync, or some other way.) Your tasks will run, but the system will likely end up spinning up new threads to accommodate these new tasks, and threads aren’t cheap.

This is likely why Apple suggests starting with a serial queue per subsystem in your app, as each serial queue can only use one thread at a time. Remember that serial queues are concurrent in relation to other queues, so you still get a performance benefit when you offload your work to a queue, even if it isn’t concurrent.

Race conditions

Swift Arrays, Dictionaries, Structs, and other value types are not thread-safe by default. For example, when you have multiple threads trying to access and modify the same array, you will start running into trouble.

There are different solutions to the readers-writers problem, such as using locks or semaphores, but the relevant solution I wish to discuss here is the use of an isolation queue.

Let’s say we have an array of integers, and we want to submit asynchronous work that references this array. As long as our work only reads the array and does not modify it, we are safe. But as soon as we try to modify the array in one of our asynchronous tasks, we will introduce instability in our app.

It’s a tricky problem because your app can run 10 times without issues, and then it crashes on the 11th time. One very handy tool for this situation is the Thread Sanitizer in Xcode. Enabling this option will help you identify potential race conditions in your app.

This option is only available on the simulator

To demonstrate the problem, let’s take this (admittedly contrived) example:
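The gist isn't embedded here; a contrived sketch along those lines (names assumed) could be:

```swift
import Foundation

var array = [Int]()
let concurrentQueue = DispatchQueue(label: "concurrent", attributes: .concurrent)

// Several tasks read the array concurrently...
for _ in 1...5 {
    concurrentQueue.async {
        _ = array.last
    }
}

// ...while one task mutates it: an unsynchronized write, i.e. a data race
concurrentQueue.async {
    for i in 0..<1_000 { array.append(i) }
}
```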

One of the async tasks is modifying the array by appending values. If you try running this on your simulator, you might not crash. But run it enough times (or increase the loop count), and you will eventually crash. If you enable the thread sanitizer, you will get a warning every time you run the app.

To deal with this race condition, we are going to add an isolation queue that uses the barrier flag. This flag allows any outstanding tasks on the queue to finish, but blocks any further tasks from executing until the barrier task is completed.

Think of the barrier like a janitor cleaning a public restroom (shared resource.) There are multiple (concurrent) stalls inside the restroom that people can use. Upon arrival, the janitor places a cleaning sign (barrier) blocking any newcomers from entering until the cleaning is done, but the janitor does not start cleaning until all the people inside have finished their business. Once they all leave, the janitor proceeds to clean the public restroom in isolation. When finally done, the janitor removes the sign (barrier) so that the people who are queued up outside can finally enter.

Here’s what that looks like in code:
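A sketch of that isolation-queue pattern (the wrapper type and label are assumed):

```swift
import Foundation

class Container {
    private var storage = [Int]()
    private let isolationQueue = DispatchQueue(label: "isolation",
                                               attributes: .concurrent)

    var array: [Int] {
        get {
            // sync: we must return the value to the caller directly
            isolationQueue.sync { storage }
        }
        set {
            // async + barrier: the write waits for in-flight reads to finish,
            // and later tasks wait for the write to complete
            isolationQueue.async(flags: .barrier) {
                self.storage = newValue
            }
        }
    }
}
```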

We have added a new isolation queue, and restricted access to the private array using a getter and setter that will place a barrier when modifying the array.

The getter needs to be sync in order to directly return a value. The setter can be async, as we don’t need to block the caller while the write is taking place.

We could have used a serial queue without a barrier to solve the race condition, but then we would lose the advantage of having concurrent read access to the array. Perhaps that makes sense in your case, you get to decide.

Conclusion

Thank you so much for reading this series! I hope you learned something new along the way. I will leave you with a summary and some general advice.

Summary

  • Queues always start their tasks in FIFO order
  • Queues are always concurrent relative to other queues
  • Sync vs Async concerns the source
  • Serial vs Concurrent concerns the destination
  • Sync is synonymous with ‘blocking’
  • Async immediately returns control to caller
  • Serial uses a single thread, and guarantees order of execution
  • Concurrent uses multiple threads, and risks thread explosion
  • Think about concurrency early in your design cycle
  • Synchronous code is easier to reason about and debug
  • Avoid relying on global concurrent queues if possible
  • Consider starting with a serial queue per subsystem
  • Switch to concurrent queue only if you see a measurable performance benefit

I like the metaphor from the Swift Concurrency Manifesto of having an ‘island of serialization in a sea of concurrency’. This sentiment was also shared in this tweet by Matt Diephouse:

Matt Diephouse on Twitter

The secret to writing concurrent code is to make most of it serial. Restrict concurrency to a small, outer layer. (Serial core, concurrent shell.) e.g. instead of using a lock to manage 5 properties, create a new type that wraps them and use a single property inside the lock.

When you apply concurrency with that philosophy in mind, I think it will help you achieve concurrent code that can be reasoned about without getting lost in a mess of callbacks.

If you have any questions or comments, feel free to reach out to me on Twitter



almaleh/Dispatcher


Further reading:

WWDC Videos:

]]>
<![CDATA[Concurrency Visualized — Part 2: Serial vs Concurrent]]> https://medium.com/@almalehdev/concurrency-visualized-part-2-serial-vs-concurrent-fd04e32c20a9?source=rss-1d15c542811f------2 https://medium.com/p/fd04e32c20a9 Wed, 29 Jan 2020 02:00:26 GMT 2020-02-16T08:04:24.144Z Concurrency Visualized — Part 2: Serial vs Concurrent

This is part 2 of my concurrency series. If you missed part 1, check it out here.

In part 1, we explored the differences between synchronous and asynchronous execution when dispatching tasks using GCD. Now, we’re going to focus on what happens in the queue after you dispatch your task.

Serial vs Concurrent

Serial and concurrent affect the destination — the queue on which your submitted work runs. This is in contrast to sync and async, which affect the source.

A serial queue will not execute its work on more than one thread at a time, regardless of how many tasks you dispatch on that queue. Consequently, the tasks are guaranteed not only to start, but also to finish, in first-in, first-out order. Moreover, when you block a serial queue (using a sync call, semaphore, or some other tool), all work on that queue will halt until the block is over.

From Dispatcher on Github

A concurrent queue can spawn multiple threads, and the system decides how many threads are created. Tasks always start in FIFO order, but the queue does not wait for tasks to finish before starting the next task, therefore tasks on concurrent queues can finish in any order. When you perform a blocking command on a concurrent queue, it will not block the other threads in this queue. Additionally, when a concurrent queue gets blocked, it runs the risk of thread explosion. I will cover this in more detail later on.

From Dispatcher on Github

The main queue in your app is serial. All the global pre-defined queues are concurrent. Any private dispatch queue you create is serial by default but can be set to be concurrent using an optional attribute as discussed in part 1.

It’s important to note here that the concept of serial vs concurrent is only relevant when discussing a specific queue. All queues are concurrent relative to each other. That is, if you dispatch work asynchronously from the main queue to a private serial queue, that work will be completed concurrently with respect to the main queue. And if you create two different serial queues, and then perform blocking work on one of them, the other queue is unaffected.

To demonstrate the concurrency of multiple serial queues, let’s take this example:
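The gist isn't included in this feed; a sketch of such an example (labels assumed) might be:

```swift
import Foundation

let queue1 = DispatchQueue(label: "queue1")
let queue2 = DispatchQueue(label: "queue2")

queue1.async { for i in 0..<10 { print("🔵", i) } }
queue2.async { for i in 0..<10 { print("⚪️", i) } }
// The output interleaves: each queue is serial,
// but they run concurrently relative to each other

sleep(1) // keep the process alive long enough to observe the output
```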

Both queues here are serial, but the results are jumbled up because they execute concurrently in relation to each other. The fact that they’re each serial (or concurrent) has no effect on this result. Their QoS levels determine which one will generally finish first (order not guaranteed.)

If we want to ensure the first loop finishes first before starting the second loop, we can submit the first task synchronously from the caller:
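Sketched with the same assumed labels:

```swift
import Foundation

let queue1 = DispatchQueue(label: "queue1")
let queue2 = DispatchQueue(label: "queue2")

queue1.sync  { for i in 0..<10 { print("🔵", i) } } // caller waits here
queue2.async { for i in 0..<10 { print("⚪️", i) } } // starts only afterwards

sleep(1)
```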

This is not necessarily desirable, because we are now blocking the caller while the first loop is executing.

To avoid blocking the caller, we can submit both tasks asynchronously, but to the same serial queue:
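A sketch (label assumed):

```swift
import Foundation

let serialQueue = DispatchQueue(label: "serial")

serialQueue.async { for i in 0..<10 { print("🔵", i) } }
serialQueue.async { for i in 0..<10 { print("⚪️", i) } }
// The caller is not blocked, and FIFO on a serial queue means
// all blue circles print before any white ones

sleep(1)
```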

Now our tasks execute concurrently with respect to the caller, while also keeping their order intact.

Note that if we make our single queue concurrent via the optional parameter, we go back to the jumbled results, as expected:
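A sketch of that change (label assumed):

```swift
import Foundation

let concurrentQueue = DispatchQueue(label: "concurrent", attributes: .concurrent)

concurrentQueue.async { for i in 0..<10 { print("🔵", i) } }
concurrentQueue.async { for i in 0..<10 { print("⚪️", i) } }
// Jumbled again: the queue may run both tasks at the same time

sleep(1)
```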

Sometimes you might confuse synchronous execution with serial execution (at least I did), but they are very different things. For example, try changing the first dispatch from our previous example to a sync call:
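A sketch (label assumed):

```swift
import Foundation

let concurrentQueue = DispatchQueue(label: "concurrent", attributes: .concurrent)

concurrentQueue.sync  { for i in 0..<10 { print("🔵", i) } } // caller blocks here
concurrentQueue.async { for i in 0..<10 { print("⚪️", i) } }

sleep(1)
```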

Suddenly, our results are back in perfect order, but this is a concurrent queue, so how could that happen? Did the sync statement somehow turn it into a serial queue?

The answer is no!

This is a bit sneaky. What happened is that we did not reach the async call until the first task had completed its execution. The queue is still very much concurrent, but inside this zoomed-in section of the code, it appears as if it were serial. This is because we are blocking the caller, and not proceeding to the next task until the first one is finished.

If another queue somewhere else in your app tried submitting work to this same queue while it was still executing the sync statement, that work would run concurrently with whatever we have running here, because it’s still a concurrent queue.

Which one to use?

Serial queues take advantage of CPU optimizations and caching, and help reduce context switching. Apple recommends starting with one serial queue per subsystem in your app — for example one for networking, one for file compression, etc. If the need arises, you can later expand to a hierarchy of queues per subsystem using the setTarget method or the optional target parameter when building queues.

If you run into a performance bottleneck, measure your app’s performance, then see if a concurrent queue helps. If you do not see a measurable benefit, it’s better to stick to serial queues.

— End of Part 2—

Check out Part 3 here:

Concurrency Visualized — Part 3: Pitfalls and Conclusion





]]>
<![CDATA[Concurrency Visualized — Part 1: Sync vs Async]]> https://medium.com/@almalehdev/concurrency-visualized-part-1-sync-vs-async-c433ff7b3ebe?source=rss-1d15c542811f------2 https://medium.com/p/c433ff7b3ebe Wed, 29 Jan 2020 01:59:13 GMT 2020-04-27T17:35:47.752Z Concurrency Visualized — Part 1: Sync vs Async

Throughout this concurrency series, I am going to cover the different types of queues in Grand Central Dispatch (GCD.) More specifically, I will explore the differences between serial and concurrent queues, as well as the differences between synchronous and asynchronous execution.

If you’ve never used GCD before, this series is a great place to start. If you have some experience with GCD, but are still curious about the topics mentioned above, I think you will still find it useful, and I hope you will pick up one or two new things along the way.

I created a SwiftUI companion app to visually demonstrate the concepts in this series. The app also has a fun short quiz that I encourage you to try before and after reading this series. Download the source code here, or get the public beta here.

Part 1 starts with an introduction to GCD, followed by a detailed explanation on sync vs async. In Part 2, I will discuss serial and concurrent queues. And Part 3 will cover pitfalls that one might encounter with concurrency, before ending with a summary and some general advice.

Introduction

Let’s start with a brief intro on GCD and dispatch queues. Feel free to skip to the Sync vs Async section if you are already familiar with the topic.

Concurrency and Grand Central Dispatch

Concurrency lets you take advantage of the fact that your device has multiple CPU cores. To make use of these cores, you will need to use multiple threads. However, threads are a low-level tool, and managing threads manually in an efficient manner is extremely difficult.

Grand Central Dispatch was created by Apple over 10 years ago as an abstraction to help developers write multi-threaded code without manually creating and managing the threads themselves.

With GCD, Apple took an asynchronous design approach to the problem. Instead of creating threads directly, you use GCD to schedule work tasks, and the system will perform these tasks for you by making the best use of its resources. GCD will handle creating the requisite threads and will schedule your tasks on those threads, shifting the burden of thread management from the developer to the system.

A big advantage of GCD is that you don’t have to worry about hardware resources as you write your concurrent code. GCD manages a thread pool for you, and it will scale from a single-core Apple Watch all the way up to a many-core Mac Pro.

Dispatch Queues

These are the main building blocks of GCD, letting you execute arbitrary blocks of code using a set of parameters that you define. The tasks in dispatch queues are always started in a first-in, first-out (FIFO) fashion. Note that I said started, because the completion time of your tasks depends on several factors, and is not guaranteed to be FIFO (more on that later.)

Broadly speaking, there are three kinds of queues available to you:

  • The Main dispatch queue (serial, pre-defined)
  • Global queues (concurrent, pre-defined)
  • Private queues (can be serial or concurrent, you create them)

Every app comes with a Main queue, which is a serial queue that executes tasks on the main thread. This queue is responsible for drawing your application’s UI and responding to user interactions (touch, scroll, pan, etc.) If you block this queue for too long, your iOS app will appear to freeze, and your macOS app will display the infamous beach ball/spinning wheel.

When performing a long-running task (network call, computationally intensive work, etc), we avoid freezing the UI by performing this work on a background queue, then we update the UI with the results on the main queue:
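A sketch of that pattern (fetchData and updateUI are hypothetical stand-ins):

```swift
import Foundation

func fetchData() -> Data { Data() }   // hypothetical long-running work
func updateUI(with data: Data) {}     // hypothetical UI update

DispatchQueue.global(qos: .utility).async {
    let data = fetchData()            // heavy work off the main queue
    DispatchQueue.main.async {
        updateUI(with: data)          // UI update back on the main queue
    }
}
```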

As a rule of thumb, all UI work must be executed on the Main queue. You can turn on the Main Thread Checker option in Xcode to receive warnings whenever UI work gets executed on a background thread.

In addition to the main queue, every app comes with several pre-defined concurrent queues that have varying levels of Quality of Service (an abstract notion of priority in GCD.)

For example, here’s the code to submit work asynchronously to the user interactive (highest priority) QoS queue:
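Something along these lines:

```swift
import Foundation

DispatchQueue.global(qos: .userInteractive).async {
    // work submitted at the highest global QoS
    print("user interactive task")
}
```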

Alternatively, you can call the default priority global queue by not specifying a QoS like this:
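For example:

```swift
import Foundation

DispatchQueue.global().async {
    // runs at the default QoS
    print("default priority task")
}
```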

Additionally, you can create your own private queues using the following syntax:
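For instance (the label here is an assumed example):

```swift
import Foundation

let transformQueue = DispatchQueue(label: "com.besher.image-transform")
```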

When creating private queues, it helps to use a descriptive label (such as reverse DNS notation), as this will aid you while debugging in Xcode’s navigator, lldb, and Instruments.

By default, private queues are serial (I’ll explain what this means shortly, promise!) If you want to create a private concurrent queue, you can do so via the optional attributes parameter:
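For example (label assumed):

```swift
import Foundation

let concurrentQueue = DispatchQueue(label: "com.besher.concurrent-work",
                                    attributes: .concurrent)
```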

There is an optional QoS parameter as well. The private queues that you create will ultimately land in one of the global concurrent queues based on their given parameters.

What’s in a task?

I mentioned dispatching tasks to queues. Tasks can refer to any block of code that you submit to a queue using the sync or async functions. They can be submitted in the form of an anonymous closure:
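For example:

```swift
import Foundation

DispatchQueue.global().async {
    print("Anonymous closure, submitted as a task")
}
```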

Or inside a dispatch work item that gets performed later:
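For example:

```swift
import Foundation

let workItem = DispatchWorkItem {
    print("Work item task")
}

// ...performed later:
DispatchQueue.global().async(execute: workItem)
```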

Regardless of whether you dispatch synchronously or asynchronously, and whether you choose a serial or concurrent queue, all of the code inside a single task will execute line by line. Concurrency is only relevant when evaluating multiple tasks.

For example, if you have 3 loops inside the same task, these loops will always execute in order:
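A sketch of that task:

```swift
import Foundation

DispatchQueue.global().async {
    for i in 0..<10 { print(i, terminator: " ") }     // digits 0 through 9
    for _ in 0..<10 { print("🔵", terminator: " ") }  // then blue circles
    for _ in 0..<10 { print("💔", terminator: " ") }  // then broken hearts
}
```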

This code always prints out ten digits from 0 to 9, followed by ten blue circles, followed by ten broken hearts, regardless of how you dispatch that closure.

Individual tasks can also have their own QoS level as well (by default they use their queue’s priority.) This distinction between queue QoS and task QoS leads to some interesting behaviour that we will discuss in the priority inversion section.

By now you might be wondering what serial and concurrent are all about. You might also be wondering about the differences between sync and async when submitting your tasks. This brings us to the crux of this series, so let’s dive in!

Sync vs Async

When you dispatch a task to a queue, you can choose to do so synchronously or asynchronously using the sync and async dispatch functions. Sync and async primarily affect the source of the submitted task, i.e. the queue where it is being submitted from.

When your code reaches a sync statement, it will block the current queue until that task completes. Once the task returns/completes, control is returned to the caller, and the code that follows the sync task will continue.

Think of sync as synonymous with ‘blocking’.

An async statement, on the other hand, will execute asynchronously with respect to the current queue, and immediately returns control back to the caller without waiting for the contents of the async closure to execute. There is no guarantee as to when exactly the code inside that async closure will execute.

Current queue?

It may not be obvious what the source, or current, queue is, because it’s not always explicitly defined in the code. For example, if you call your sync statement inside viewDidLoad, your current queue will be the Main dispatch queue. If you call the same function inside a URLSession completion handler, your current queue will be a background queue.

Going back to sync vs async, let’s take this example:
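A sketch:

```swift
import Foundation

// Called from some current queue, e.g. the main queue inside viewDidLoad
DispatchQueue.global().sync {
    print("Inside")
}
print("Outside")
// Guaranteed order: Inside, then Outside
```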

The above code will block the current queue, enter the closure and execute its code on the global queue by printing “Inside”, before proceeding to print “Outside”. This order is guaranteed.

Let’s see what happens if we try async instead:
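The same sketch with async:

```swift
import Foundation

DispatchQueue.global().async {
    print("Inside")
}
print("Outside")
// Likely (but not guaranteed) order: Outside, then Inside
```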

Our code now submits the closure to the global queue, then immediately proceeds to run the next line. It will likely print “Outside” before “Inside”, but this order isn’t guaranteed; it depends on the QoS of the source and destination queues, as well as other factors that the system controls.

Threads are an implementation detail in GCD — we do not have direct control over them and can only deal with them using queue abstractions. Nevertheless, I think it can be useful to ‘peek under the covers’ at thread behaviour to understand some challenges we might encounter with GCD.

For instance, when you submit a task using sync, GCD optimizes performance by executing that task on the current thread (the caller.) There is one exception, however: submitting a sync task to the main queue will always run the task on the main thread and not the caller. This behaviour can have some ramifications that we will explore in the priority inversion section.

From Dispatcher on Github

Which one to use?

When submitting work to a queue, Apple recommends using asynchronous execution over synchronous execution. However, there are situations where sync might be the better choice, such as when dealing with race conditions, or when performing a very small task. I will cover these situations shortly.

One large consequence of performing work asynchronously inside a function is that the function can no longer directly return its values (if they depend on the async work that’s being done), and must instead use a closure/completion handler parameter to deliver the results.

To demonstrate this concept, let’s take a small function that accepts image data, performs some expensive computation to process the image, then returns the result:
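A sketch of such a function (upscaleAndFilter here is a hypothetical stand-in for the expensive operation):

```swift
import UIKit

// Hypothetical stand-in for an expensive upscale-and-filter operation
func upscaleAndFilter(image: UIImage) -> UIImage {
    // ...several seconds of processing...
    return image
}

func processImage(data: Data) -> UIImage? {
    guard let image = UIImage(data: data) else { return nil }
    return upscaleAndFilter(image: image) // blocks until the work is done
}
```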

In this example, the function upscaleAndFilter(image:) might take several seconds, so we want to offload it into a separate queue to avoid freezing the UI. Let’s create a dedicated queue for image processing, and then dispatch the expensive function asynchronously:
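A sketch of that broken attempt, which the next paragraph dissects (names assumed; this does not compile):

```swift
import UIKit

let imageProcessingQueue = DispatchQueue(label: "com.besher.image-processing")

func processImageAsync(data: Data) -> UIImage? {
    imageProcessingQueue.async {
        guard let image = UIImage(data: data) else { return nil } // ❌ returns from the closure, not the function
        return upscaleAndFilter(image: image)                     // ❌ likewise
    }
    // ❌ the function reaches the end of its body with nothing to return
}
```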

There are two issues with this code. First, the return statement is inside the async closure, so it is no longer returning a value to the processImageAsync(data:) function, and currently serves no purpose. But the bigger issue is that our processImageAsync(data:) function is no longer returning any value, because the function reaches the end of its body before it enters the async closure.

To fix this error, we will adjust the function so that it no longer directly returns a value. Instead, it will have a new completion handler parameter that we can call once our asynchronous function has completed its work:
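A sketch of the fixed version (names assumed, with a stand-in for the expensive operation):

```swift
import UIKit

func upscaleAndFilter(image: UIImage) -> UIImage { image } // hypothetical stand-in

let imageProcessingQueue = DispatchQueue(label: "com.besher.image-processing")

func processImageAsync(data: Data, completion: @escaping (UIImage?) -> Void) {
    imageProcessingQueue.async {
        guard let image = UIImage(data: data) else {
            completion(nil)
            return
        }
        completion(upscaleAndFilter(image: image))
    }
}
```

The caller now passes a closure, e.g. `processImageAsync(data: data) { image in /* use the result */ }`, and handles the result whenever it arrives.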

As evident in this example, the change to make the function asynchronous has propagated to its caller, who now has to pass in a closure and handle the results asynchronously as well. By introducing an asynchronous task, you can potentially end up modifying a chain of several functions.

Concurrency and asynchronous execution add complexity to your project as we just observed. This indirection also makes debugging more difficult. That’s why it really pays off to think about concurrency early in your design — it’s not something you want to tack on at the end of your design cycle.

Synchronous execution, by contrast, does not increase complexity; it allows you to continue using return statements as you did before. A function containing a sync task will not return until the code inside that task has completed. Therefore it does not require a completion handler.

If you are submitting a tiny task (e.g. updating a value), consider doing it synchronously. Not only does that help you keep your code simple, it will also perform better — async dispatch incurs an overhead that can outweigh the benefit of doing the work asynchronously for tiny tasks that take under 1ms to complete.

If you are submitting a large task, however, like the image processing we performed above, then consider doing it asynchronously, to avoid blocking the caller for too long.

Dispatching on the same queue

While it is safe to dispatch a task asynchronously from a queue into itself (e.g. you can use .asyncAfter on the current queue), you cannot dispatch a task synchronously from a queue into that same queue. Doing so will result in a deadlock and immediately crash the app! (unless your queue is concurrent, that is. We will discuss concurrent queues in Part 2.)

This issue can manifest itself when performing a chain of synchronous calls that lead back to the original queue, i.e. you sync a task onto another queue, and when the task completes, it syncs the results back into the original queue, leading to a deadlock. Use async to avoid such crashes.

Blocking the main queue

Dispatching tasks synchronously from the main queue will block that queue, thereby freezing the UI, until the task is completed. Thus it’s better to avoid dispatching work synchronously from the main queue, unless you’re performing really light work.

prefer to use async from the main queue

— End of Part 1 —

Check out Part 2 here:

Concurrency Visualized — Part 2: Serial vs Concurrent





]]>
<![CDATA[Fireworks — A visual particles editor for Swift]]> https://medium.com/@almalehdev/fireworks-a-visual-particles-editor-for-swift-618e76347798?source=rss-1d15c542811f------2 https://medium.com/p/618e76347798 Mon, 16 Sep 2019 02:09:41 GMT 2019-10-10T01:03:12.852Z Fireworks — A visual particles editor for Swift
Fireworks!

I’m incredibly excited to share a new developer tool I’ve been working on called Fireworks, available to download here 😃

Fireworks generates Swift code on the fly for macOS and iOS as you build particle effects, allowing you to interactively design and iterate without having to constantly rebuild your app.

Once you’re satisfied with your particles design, implementing it in your project only takes 2 steps:

  1. Add the texture to your assets catalog
  2. Paste the code generated by Fireworks

No third-party libraries or dependencies needed!

I put together this video to show what it looks like:

Fireworks uses the CAEmitterLayer and CAEmitterCell APIs that are part of Core Animation. If you have used Apple’s SpriteKit framework in the past, you may have seen the SpriteKit particle editor that is built-in to Xcode. It’s a really cool editor, but unfortunately it only works for SKEmitterNodes.

With Fireworks, I wanted to create something similar, but for CAEmitterCells instead, so that people can use it in any iOS or macOS project, without having to import SpriteKit.
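For a sense of what CAEmitterCell code looks like, here is a hedged sketch of a simple falling-particles effect (the texture name "spark" is a placeholder for an asset-catalog image; this is not Fireworks' actual output):

```swift
import UIKit

// Sketch: a basic snow-style emitter built from CAEmitterLayer/CAEmitterCell.
func makeSnowLayer(in view: UIView) -> CAEmitterLayer {
    let cell = CAEmitterCell()
    cell.contents = UIImage(named: "spark")?.cgImage // placeholder texture
    cell.birthRate = 40
    cell.lifetime = 8
    cell.velocity = 60
    cell.emissionRange = .pi / 4
    cell.scale = 0.1

    let layer = CAEmitterLayer()
    layer.emitterPosition = CGPoint(x: view.bounds.midX, y: -10)
    layer.emitterShape = .line
    layer.emitterSize = CGSize(width: view.bounds.width, height: 1)
    layer.emitterCells = [cell]
    view.layer.addSublayer(layer)
    return layer
}
```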

Fireworks offers several ways to adjust values:

  • Click and drag the controls on the right to quickly scrub through values
  • Left click on the preview window to move the particles around
  • Hold down CTRL or right mouse button and drag on the preview to resize the emitter
  • Enter values directly into the code editor on the left

The app comes with several built-in templates for the usual suspects (snow, rain, confetti, fire, etc). You can export the built-in textures to use freely in your own projects, and you can also import your own textures.

Creating particles visually using Fireworks is a very different experience from building them using code. As you scrub various values back and forth and see immediate results in the preview, you will run into interesting ways to mix and match values, and you will inevitably produce effects that you normally wouldn’t think to program when building particles in code.

After building Fireworks, I spent hours just playing around with the tool, sometimes creating some really strange effects 😅

I can’t wait to see what other people will create!

Upcoming Features

I have several features planned for future updates including:

  • Adding more controls to change birthrate and other values at the CAEmitterLayer level (which acts as a multiplier for cells)
  • More controls for tweaking cells (such as color speed)
  • Add ability to save multiple templates
  • A new animations section that lets you animate values over time
  • [implemented as of v1.1] Add support for Core Animation Archives for a simpler import process (thanks Guilherme Rambo for the tip!)

If there’s a feature you would like to see, I’d love to hear from you on Twitter!

Open Source and other help

If you’d like to contribute to the code, or just want to look at it, then please let me know. I am thinking of open-sourcing the project if there’s enough interest 😀

If you like Fireworks and want to support its development, there are many ways you can help!

  • Download it from the Mac App Store
  • Review it on your site or blog
  • Share it with friends and colleagues or on social media

Thank you so much for reading, I hope you enjoy using Fireworks and I look forward to hearing your feedback!

Download Fireworks here

https://www.fireworksapp.xyz

🎉🎉🎉

p.s: special thanks to John Sundell for his Splash library. All syntax highlighting in the app is powered by Splash 😊

]]>
<![CDATA[You don’t (always) need [weak self]]]> https://medium.com/@almalehdev/you-dont-always-need-weak-self-a778bec505ef?source=rss-1d15c542811f------2 https://medium.com/p/a778bec505ef Mon, 17 Jun 2019 00:16:39 GMT 2020-03-04T13:42:39.095Z
Thanks to Arpit Sharma for the inspiration!

Cycles… no, not the fun kind shown above. I mean strong reference cycles, the kind that causes entire view controllers to leak in your iOS app. More specifically, I want to talk about the use of [weak self] inside of Swift closures to avoid reference (or retain) cycles, and explore cases where it may or may not be necessary to capture self weakly.

I learned about the topics discussed in this article from reading Apple docs, a variety of blog posts and tutorials, and through trial & error and experimentation. If you think I made a mistake somewhere, feel free to reach out in the comments or on Twitter.

I also put together a small app that demonstrates different memory leak scenarios and shows where using [weak self] may be unnecessary:

GitHub - almaleh/weak-self: This app demonstrates where [weak self] may and may not be needed

Automatic Reference Counting

Memory management in Swift is handled by ARC (Automatic Reference Counting), which works behind the scenes to free up the memory used by class instances once they’re no longer needed. ARC works mostly by itself, but sometimes you need to provide it with some extra information to clarify the relationships between your objects.

For example, if you have a child controller that stores a reference to its owner/parent in a property, that property would need to be marked with the weak keyword to prevent a circular reference / retain cycle.

If you suspect there might be a memory leak, you can:

  • Look for the deinitializer callback after your object gets dismissed. If deinit never gets called, you might have a problem
  • If you have an optional object, verify that it equals nil after dismissal
  • Observe your app memory consumption to see if it’s steadily increasing
  • Use the Leaks and Allocations Instruments

As for closures, let’s consider this code:

let changeColorToRed = DispatchWorkItem { [weak self] in
    self?.view.backgroundColor = .red
}

Notice how self was captured weakly in that closure, which subsequently turned it into an optional in the body of the closure.

Do we really need [weak self] here? If we don’t use it, would that introduce a memory leak? 🤔

The answer, as it turns out, is “it depends”, but first let me share some history.

Unowned, Weak, and the Strong-Weak Dance

Closures can strongly capture, or close over, any constants or variables from the context in which they are defined. For example, if you use self inside a closure, the closure scope will maintain a strong reference to self for the duration of the scope’s life.

If self also happens to keep a reference to this closure (in order to call it at some point in the future), you will end up with a strong reference cycle.

Luckily, there are tools such as the keywords unowned and weak (as well as other tools discussed below) that can be used to avoid this circular reference.

When I first learned Swift, I used [unowned self] in all my closures. Later on (and after several crashes 😅), I discovered that this is the equivalent to force unwrapping self and trying to access its contents even after it gets deallocated. In other words, it’s very unsafe!

[weak self] accomplishes the same task (preventing reference cycles) in a much safer manner, but it also turns self into an optional in the process. To deal with this optionality, you can prefix your calls with self?. (optional chaining). However, a more popular approach is to create a temporary strong reference to self at the start of the closure by using guard let syntax.

In earlier iterations of the Swift language, it was common to perform what was known as the Strong-Weak dance, where you would assign self to a temporary non-optional strongSelf constant like this:

Then, later on, people started using (or abusing 😛 ) a compiler bug with backticks to simplify the code further:

Eventually, with Swift 4.2, the language added official support for guard let self = self syntax, so this became possible:

Erica Sadun endorses the guard let self = self pattern in her book Swift Style, Second Edition, so I’d say it’s pretty safe to use it 😃
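Here is a sketch of the old and new styles side by side (the class and method names are made up):

```swift
final class Loader {
    var onComplete: (() -> Void)?

    func load() {
        // Pre-Swift 4.2: the "strong-weak dance" with a temporary constant
        onComplete = { [weak self] in
            guard let strongSelf = self else { return }
            strongSelf.finish()
        }

        // Swift 4.2 and later: guard let self = self is supported directly
        onComplete = { [weak self] in
            guard let self = self else { return }
            self.finish()
        }
    }

    func finish() { print("finished") }
}
```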

It may be tempting to use unowned over weak to avoid dealing with optionality, but generally speaking, only use unowned when you are certain that the reference will never be nil during the execution of the closure. Again, it’s like force unwrapping an optional, and if it happens to be nil, you will crash. [weak self] is a far safer alternative.

Here’s what a crash caused by unowned looks like:

note that ‘WeakSelf’ was the name of the app that crashed

So now that we’ve established the benefits of [weak self], does that mean we should start using it in every closure?

This was pretty much me for a while:

But as it turns out, I was introducing optionality in many places in my code where it wasn’t really needed. And the reason comes down to the nature of the closures I was dealing with.

Escaping vs non-escaping closures

There are two kinds of closures, non-escaping and escaping. Non-escaping closures are executed in scope — they execute their code immediately, and cannot be stored or run later. Escaping closures, on the other hand, can be stored, they can be passed around to other closures, and they can be executed at some point in the future.

Non-escaping closures (such as the closures passed to higher-order functions like compactMap) do not pose a risk of introducing strong reference cycles, and thus do not require the use of weak or unowned.

Escaping closures can introduce reference cycles when you don’t use weak or unowned, but only if both of these conditions are met:

  • The closure is either saved in a property or passed to another closure
  • An object inside the closure (such as self) maintains a strong reference to the closure (or to another closure that it was passed to)

I made this flowchart to help illustrate the concept:
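A minimal sketch of both conditions being met (the names are illustrative):

```swift
final class Owner {
    var work: (() -> Void)?     // condition 1: the closure is stored in a property

    func setup() {
        work = {
            print(self)          // condition 2: the closure captures self strongly
        }
    }

    deinit { print("Owner deallocated") }
}

var owner: Owner? = Owner()
owner?.setup()
owner = nil                      // deinit never runs: self → work → self leaks
```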

Delayed Deallocation

You may have noticed the box at the left of the flowchart that mentions delayed deallocation. This is a side effect that comes with both escaping and non-escaping closures. It is not exactly a memory leak, but it might lead to undesired behavior (e.g. you dismiss a controller, but its memory doesn’t get freed up until all pending closures/operations are completed.)

Since closures, by default, will strongly capture any objects referenced in their body, this means they will impede these objects from getting deallocated from memory for as long as the closure body or scope is alive.

The lifetime of the closure scope can range from under a millisecond up to several minutes or more.

Here are some scenarios that can keep the scope alive:

  1. A closure (escaping or non-escaping) might be doing some expensive serial work, thereby delaying its scope from returning until all the work is completed
  2. A closure (escaping or non-escaping) might employ some thread blocking mechanism (such as DispatchSemaphore) that can delay or prevent its scope from returning
  3. An escaping closure might be scheduled to execute after a delay (e.g. DispatchQueue.asyncAfter or UIViewPropertyAnimator.startAnimation(afterDelay:))
  4. An escaping closure might expect a callback with a long timeout (e.g. URLSession timeoutIntervalForResource)

There are probably other cases I have missed, but this should at least give you an idea of what might happen. Here’s an example from my demo app showing URLSession delaying deallocation:

Let’s break down the example above:

  • I am purposely calling port 81 (a blocked port) to simulate a request timeout
  • The request has a 999 seconds timeout interval
  • No weak or unowned keyword is used
  • self is referenced inside the task closure
  • The task is not stored anywhere; it’s executed immediately

Based on the last point above, this task should not cause a strong reference cycle. However, if you run the demo app with the above scenario, and then dismiss the controller without canceling that download task, you will receive an alert saying the controller memory was not freed.

So what exactly happened here?

We are running into Scenario #4 from the list mentioned earlier. That is, we have an escaping closure that is expecting to be called back, and we gave it a long timeout interval. This closure keeps a strong reference to any objects referenced inside it (self in this case), until it gets called or reaches the timeout deadline, or if the task gets canceled.

(I’m not sure how URLSession works behind the scenes, but I’m guessing it keeps a strong reference to the task until it gets executed, canceled or reaches the deadline.)

There is no strong reference cycle here, but this closure will keep self alive for as long as it needs it, thereby potentially delaying self’s deallocation if the controller gets dismissed while the download task is still pending.

Using [weak self] (along with optional chaining or guard let syntax) would prevent the delay, allowing self to get deallocated immediately. [unowned self] on the other hand, would cause a crash here.

‘guard let self = self’ vs Optional Chaining

When using [weak self], there is a potential side effect to using guard let self = self instead of accessing self using optional chaining self?. syntax.

In closures that can delay deallocation due to expensive serial work, or due to a thread blocking mechanism like a semaphore (scenarios #1 and #2 in the list mentioned earlier), using guard let self = self else { return } at the start of the closure would not prevent this deallocation delay.

To illustrate why let’s say we have a closure that performs several expensive operations in succession on a UIImage:

We are using [weak self] along with guard let syntax at the start of the closure. What guard let actually does here is it checks whether self is equal to nil, and if it isn’t, it creates a temporary strong reference to self for the duration of the scope.

By the time we reach the expensive work (lines 5 and below), we have already created a strong reference to self, which will prevent self from deallocating until we reach the end of the closure scope. Said differently, guard let will guarantee that self stays around for the life of the closure.

If we do not use guard let syntax, and instead use optional chaining with self?. notation to access the methods on self, the nil check for self would happen at every method call instead of creating a strong reference at the start. This means that if self happens to be nil at any point during the execution of the closure, it will silently skip that method call and go to the next line.

It’s a rather subtle difference, but I think it’s worth pointing out for cases where you may want to avoid unnecessary work after a view controller gets dismissed, and on the flip side for cases where you want to ensure that all the work gets completed before an object gets deallocated (e.g. to prevent data corruption.)
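A sketch of the same work written both ways (the class, the steps, and the queue are hypothetical):

```swift
import Dispatch

final class Processor {
    func step1() { /* stand-in for expensive work */ }
    func step2() { /* stand-in for expensive work */ }

    func runWithGuard(on queue: DispatchQueue) {
        queue.async { [weak self] in
            // One strong reference up front: self cannot deallocate until
            // the closure finishes, so both steps always run.
            guard let self = self else { return }
            self.step1()
            self.step2()
        }
    }

    func runWithChaining(on queue: DispatchQueue) {
        queue.async { [weak self] in
            // nil-checked at each call: if self deallocates between steps,
            // the remaining calls are silently skipped.
            self?.step1()
            self?.step2()
        }
    }
}
```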

Examples

I will go through some examples from the demo app showing common situations where [weak self] may or may not be needed.

Grand Central Dispatch

GCD calls generally do not pose a risk of reference cycles, unless they are stored to be run later.

For example, none of these calls will cause a memory leak, even without [weak self], because they are executed immediately:

However, the following DispatchWorkItem will cause a leak, because we are storing it in a property and referencing self inside the closure without [weak self]:
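The embedded gist is not reproduced here; this sketch reconstructs the pattern described (the names mirror the demo app but are made up):

```swift
import Dispatch

final class PresentedController {
    var workItem: DispatchWorkItem?

    func setupLeakyWorkItem() {
        let workItem = DispatchWorkItem {
            self.doWork()            // strong capture of self
        }
        self.workItem = workItem     // self → workItem → self: a retain cycle
    }

    func doWork() {}
}
```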

UIView.Animate and UIViewPropertyAnimator

Similar to GCD, animation calls generally do not pose a risk of reference cycles, unless you store a UIViewPropertyAnimator in a property.

For example, these calls are safe:

The following method, on the other hand, will cause a strong reference cycle, because we are storing the animation for later use without using [weak self]:
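Since the gist does not survive here, this hedged sketch shows the leaky pattern and its fix (the controller is hypothetical):

```swift
import UIKit

final class FadeController: UIViewController {
    var animator: UIViewPropertyAnimator?

    func setupLeakyAnimation() {
        // Leak: self stores the animator, and the animation closure
        // captures self strongly.
        animator = UIViewPropertyAnimator(duration: 2, curve: .easeIn) {
            self.view.alpha = 0
        }
    }

    func setupSafeAnimation() {
        // Fixed: [weak self] breaks the cycle.
        animator = UIViewPropertyAnimator(duration: 2, curve: .easeIn) { [weak self] in
            self?.view.alpha = 0
        }
    }
}
```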

Storing a function in a property

The following example demonstrates a cunning memory leak that can hide in plain sight.

It can be useful to pass closures or functions of one object to a different object, to be stored in a property. Let’s say you want object A to call some method from object B anonymously, without exposing object B to A. Think of it like a lightweight alternative to delegation.

As an example, here we have a presented controller that stores a closure in a property:

We also have a main controller (which owns the above controller), and we want to pass one of the main controller’s methods to be stored in the presented controller’s closure:

printer() is a function on the main controller, and we assigned this function to the closure property. Notice how we didn’t include the () parentheses in line 6 because we are assigning the function itself, not the return value of the function. Calling the closure from inside the presented controller will now print the main controller’s description.

Cunningly, this code introduces a strong reference cycle even though we didn’t explicitly use self. The self is implied here (think of it like self.printer), therefore the closure will maintain a strong reference to self.printer, while self also owns the presented controller which in turn owns the closure.

To break the cycle, we need to modify setupClosure to include [weak self]

Note that we are including the parentheses after printer this time, because we want to call that function inside the scope.
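The embedded gists are missing here, so this sketch reconstructs the idea with made-up names:

```swift
final class PresentedVC {            // stand-in for the presented controller
    var closure: (() -> Void)?
}

final class MainVC {                 // stand-in for the main controller
    let presented = PresentedVC()

    func printer() { print("main controller") }

    func setupLeakyClosure() {
        // Leak: assigning the method itself implicitly captures self strongly
        presented.closure = printer  // read as self.printer
    }

    func setupSafeClosure() {
        // Fixed: capture self weakly, and call printer() with parentheses
        // inside the closure scope.
        presented.closure = { [weak self] in
            self?.printer()
        }
    }
}
```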

Timers

Timers are interesting, as they can cause issues even if you don’t store them in a property. Let’s take this timer for example:

  1. This timer repeats
  2. self is referenced in the closure without using [weak self]

As long as these two conditions are met, the timer will prevent the referenced controller/objects from deallocating. So technically, this is more of a delayed deallocation rather than a memory leak; the delay just happens to last indefinitely.

Be sure to invalidate your timers when they’re no longer needed to avoid keeping their referenced objects alive indefinitely, and don’t forget to use [weak self] if any of the referenced objects keep a strong reference to the timer.
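A sketch of a repeating timer handled safely (the class is hypothetical):

```swift
import Foundation

final class Ticker {
    var count = 0
    var timer: Timer?

    func start() {
        // A repeating timer whose closure captured self strongly would keep
        // self alive until invalidated; [weak self] avoids that.
        timer = Timer.scheduledTimer(withTimeInterval: 1, repeats: true) { [weak self] _ in
            self?.count += 1
        }
    }

    deinit { timer?.invalidate() }   // stop the timer once self goes away
}
```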

Demo App

There are other examples in the demo app, but I think this article is already long enough as it is, so I won’t cover all of them. I encourage you to clone the app and open it in Xcode, then check out the different leak scenarios in PresentedController.swift (I added comments explaining each scenario.)

Notice how when you run the app with a leaky scenario, your app memory usage will steadily increase as you present and dismiss controllers

The staircase is not a good sign!

Alternatives to [weak self]

Before I conclude, I just want to mention a couple of tricks that you can use if you don’t want to bother dealing with [weak self] (I learned about them from these excellent articles by objc.io and swiftbysundell)

First, instead of passing self directly to the closure and having to deal with [weak self], you can create a reference to the property that you want to access on self, and then pass that reference to the closure.

Let’s say we wanted to access the view property on self inside an animation closure. Here’s what that could look like:

We create a reference to the view property on line 2 and use it inside the closures in lines 4 and 7 instead of using self. The animation then gets stored as a property on self in line 9, but the view object doesn’t have a strong reference to the animation, so no circular reference takes place.

If you want to reference multiple properties on self in the closure, you can group them all together in a tuple (let’s call it context), then pass that context to the closure:
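A sketch of the context-tuple idea (the controller and property names are made up):

```swift
import UIKit

final class CardController: UIViewController {
    var animator: UIViewPropertyAnimator?
    var tintShade = UIColor.red

    func setupAnimation() {
        // Capture only what the closure needs, instead of self:
        let context = (view: view, color: tintShade)

        animator = UIViewPropertyAnimator(duration: 2, curve: .easeIn) {
            context.view?.backgroundColor = context.color
            context.view?.alpha = 0
        }
        // self → animator → (view, color); neither the view nor the color
        // refers back to the animator, so no cycle forms.
    }
}
```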

Conclusion

Congratulations on making it this far! The article ended up being far longer than I originally planned for 😅

Here are some key takeaways:

  • [unowned self] is almost always a bad idea
  • Non-escaping closures do not require [weak self] unless you care about delayed deallocation
  • Escaping closures require [weak self] if they get stored somewhere or get passed to another closure and an object inside them keeps a reference to the closure
  • guard let self = self can lead to delayed deallocation in some cases, which can be good or bad depending on your intentions
  • GCD and animation calls generally do not require [weak self] unless you store them in a property for later use
  • Be careful with timers!
  • If you aren’t sure, deinit and Instruments are your friends

I think the flowchart I posted earlier also provides a helpful recap of when to use [weak self]

Update: I revisited the topic of [weak self] in my new post here, which covers nested closures in Swift.


Check out some of my other articles:

References used for this article:

Demo App:

GitHub - almaleh/weak-self: This app demonstrates where [weak self] may and may not be needed

]]>
<![CDATA[My first Swift project]]> https://medium.com/@almalehdev/my-first-swift-project-c44d22ac2a29?source=rss-1d15c542811f------2 https://medium.com/p/c44d22ac2a29 Wed, 27 Mar 2019 01:49:13 GMT 2024-03-20T12:37:31.434Z
Photo by Patrick Ward on Unsplash

In my earlier posts, I mentioned how personal projects helped me stay motivated while I was learning programming. In this post I will talk about the first personal project that I built back in 2017, covering all steps from idea to final version.

I covered the purpose of this project back in this post, but as a quick recap it is a macOS application intended for my previous day job as an IT support specialist. The app was meant to save us time by displaying relevant information about the user we were helping (their access rights, software licenses, etc).

Here’s what it looked like in action:

almaleh/LionSearchDemo

It’s called LionSearch (Lion was the logo of our company, and many internal services were named after it).

Some of its features:

  • Launches instantly, no loading
  • Instantly look up any user in the organization (over 60,000 users)
  • Auto-complete as you are typing the username (no need to memorize full username)
  • Call user via Skype by tapping directly on their phone number
  • Filter group memberships to find specific info (like VPN access)

Getting started

The idea for this app is based on a bash script written by one of my colleagues. The script polls the Active Directory and retrieves info about the user I’m helping. So as I was learning Swift, I started thinking how cool it would be to have a lightweight Mac app that lets me look up any user in the company and shows all of the relevant IT information in one centralized location.

To get started I had to research several topics:

  • How do I build a Mac app? (at this point, I could barely make iOS apps)
  • How could I possibly interact with the Active Directory via Swift?
  • How do I filter the info that I get back from the Active Directory to get something useful? (it returns a massive text file for each user)

I started tackling these points one by one. Luckily, it turns out macOS is not that different from iOS. Many of the elements I was familiar with had Mac counterparts (UIView -> NSView, UITableView -> NSTableView, etc), although the implementation details were often different (e.g. NSTableView can have multiple columns)

After reading a few tutorials on macOS app development and getting a bit familiar with the platform, I turned my attention to the next problem. How in the world do I communicate with Microsoft’s Active Directory?

The answer I settled on was to execute a shell script using the Process and Pipe Foundation classes 😅

Shell scripts in Swift

It was difficult to find resources online on running arbitrary shell scripts with Swift; basically I wanted to execute this Terminal command:

dscl localhost -read "Active Directory/LL/All Domains/Users/<User>"

After some online research, and lots of trial and error, I finally got this code to do it:
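The original gist is missing here, but a Process/Pipe helper along these lines would do the job (the code is a reconstruction, not the app's actual source):

```swift
import Foundation

// Sketch: run a shell command and capture its standard output.
func runShell(_ arguments: [String]) -> String {
    let process = Process()
    process.launchPath = "/usr/bin/env"   // resolve the command via env
    process.arguments = arguments

    let pipe = Pipe()
    process.standardOutput = pipe
    process.launch()

    let data = pipe.fileHandleForReading.readDataToEndOfFile()
    process.waitUntilExit()
    return String(data: data, encoding: .utf8) ?? ""
}

// e.g. runShell(["dscl", "localhost", "-read",
//                "Active Directory/LL/All Domains/Users/<User>"])
```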

The dscl localhost -read command returns a massive text file (thousands of lines!) for the requested user. So next up I had to search through this file to get the relevant information. I was looking for stuff like their email address, password expiration time, VPN access level, etc.

As an example, the last password change date would be listed next to the string PasswordLastSet: inside the file, access memberships would be listed after memberOf:, etc. So after some research, I decided that Regular Expressions would be my best option here.

What’s a Regex?

This was the first time I’d heard about regular expressions, and I had absolutely no idea how they worked! (let alone how to run them in Swift, which is kind of a pain in itself).

So I went back to study mode and started learning about Regex. After a few days of reading, and lots of practice with Regex 101, I felt comfortable enough with Regex to continue my work on the project. But then I hit another road block, running Regex in Swift.

After yet more research and trial & error, I settled on this code:
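The gist doesn't survive here; a helper along these lines would work (the original presumably read the dscl output from a stored property, whereas this sketch takes it as a parameter):

```swift
import Foundation

// Sketch: return the first match of `pattern` in `output`.
func reg(_ pattern: String, in output: String) -> String? {
    guard let regex = try? NSRegularExpression(pattern: pattern) else { return nil }
    let range = NSRange(output.startIndex..., in: output)
    guard let match = regex.firstMatch(in: output, range: range),
          let matchRange = Range(match.range, in: output) else { return nil }
    return String(output[matchRange])
}
```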

It’s not pretty, but it gets the job done! For instance, to find the last password change date, I can do this:

let lastPassRegex = "(?<=PasswordLastSet: )[^(\n)]+"
reg(lastPassRegex)

Next up I had to build the regex for all the different values I wanted to extract. At this point I knew what I had to do to finish this project, it just took some time to build it (lots of edge cases and small details to consider).

If I count all the research time, the initial version of the app took me about 2 weeks to complete. This version was positively received, and my IT colleagues started requesting features, such as last logon time, software licenses, group memberships, etc.

The app started as a single screen. I gave the app window a fixed size to save myself the headaches of resizing, and I tried to keep the design as simple as possible. Here’s what it looked like:

Group membership

After ‘shipping’ the app, the next major feature I added was displaying group memberships. Every user in our organization was a member of several Active Directory groups, such as the VPN access group (which gave them access to our company VPN), the Adobe Creative Suite group, etc.

While troubleshooting a user problem, it was very useful for me to know what kind of memberships they had. For instance, if someone complained that they can’t use the office Distribution List (which lets them email everyone in a certain office), I can easily confirm if they’re part of the group that can use the DL and inform them if they’re missing access.

Users could be members of 90+ groups, so I had to add a second screen to the app to fit this information. This second screen would be presented using the ‘Sheet’ style.

By default, Sheet controllers are resize-able on the Mac, even if the base App window isn’t! This caused all sorts of weird behavior, as you could make the sheet so tiny the Close button would disappear, or make it as big as your entire screen.

To prevent these issues, I added the following line in viewDidAppear to disable resizing:

view.window!.styleMask.remove(NSWindow.StyleMask.resizable)

The rest of the Groups controller was pretty straightforward. There’s an NSTextField to filter for specific groups. An NSTableView to display the results, and a clipboard button.

Getting the Groups data was just a matter of applying the correct Regex to the Active Directory text file.

Auto-complete

This feature was much more difficult to implement than I had anticipated. First, the app needs to have an up-to-date list of all the employee usernames within our company (over 60,000 users). Then it has to display the closest matches in a tableview, and let the user seamlessly navigate between the results and the search field with their keyboard, then hit Enter to choose a result. (or use the mouse)

To get the users list, I tried a variety of Terminal commands, to no avail. The most I could get was 1000 (seemingly random) users out of 60,000. From what I’ve read online, it is not possible on the Mac to list more than 1000 users from the Active Directory.

Luckily, I had access to a Windows Server instance. So I googled how to create PowerShell scripts and wrote one that downloads the usernames of all Active Directory users. I then scheduled a daily Windows task to run that script every morning, generating an updated usernames list and uploading it to a Box folder on the cloud under a specific URL.

I configured LionSearch (the Mac app) to automatically fetch this list of users from the above URL upon launch.

Now that I had the data I needed in the app, I started building the tableView to display the auto-complete results. I added some conditions to show the table, for example it would stay hidden unless 2+ characters are entered in the search field, and at least 1 match was found.

At the time I didn’t know how to make the table resize itself dynamically based on its content, so I hard coded the different size values based on the results count, then adjusted its origin to compensate for the change 😅

Finally, I needed to find a way to move between cells and select a result using the keyboard (in addition to the mouse.) The solution I settled on was to monitor keyboard events, then perform work based on the event code.

You can monitor events with this line:

NSEvent.addLocalMonitorForEvents(matching: NSEvent.EventTypeMask.keyDown, handler: myKeyDownEvent)

This gist should give you an idea of how the myKeyDownEvent function worked: https://gist.github.com/almaleh/8265e69e9d7a28199276b89b785f530f

I think the result looked pretty sweet 🙂

Some surprises along the way

  1. NSSearchField does not have a built-in shake animation that you can use for invalid entries. I had to use a custom animation (thanks HWS!) for this purpose:

2. It took me a while to figure out the proper syntax to call a phone number directly from the app (using FaceTime, Skype, or other software you might have installed). Here it is:

NSWorkspace.shared.open(URL(string: "tel:" + number)!)

3. The responder chain is a much bigger deal on the Mac than iOS

4. Working with Regex in Swift is not a pleasant experience! Although Raw Strings in Swift 5 will make it slightly less painful

Overall, I really enjoyed this project and I learned a ton of things along the way. As mentioned above, the initial version of the app took me about 2 weeks to complete, but the additional features and bug fixes were completed in the months following release.

The source code for the app is available here:

almaleh/LionSearchDemo

]]>
<![CDATA[High performance drawing on iOS — Part 2]]> https://medium.com/@almalehdev/high-performance-drawing-on-ios-part-2-2cb2bc957f6?source=rss-1d15c542811f------2 https://medium.com/p/2cb2bc957f6 Sat, 19 Jan 2019 22:28:53 GMT 2019-06-19T19:07:19.446Z High performance drawing on iOS — Part 2
Credit: Sharon Pittaway

In Part 1 I discussed two different ways to perform 2D drawing on iOS, both of which are CPU-based. The first one performed ok on the iPhone but very poorly on the iPad (17 fps average), while the second one delivered great performance and only incurred about 25% CPU utilization on average.

I mentioned at the end of Part 1 that I wasn’t very happy with the single-threaded nature of my final result. In this post, I will cover two different ways to draw that leverage the parallel nature of the GPU.

Core Graphics uses the CPU for rendering, while Core Animation uses the GPU. The first two techniques I covered in Part 1 used Core Graphics extensively (hence the high CPU utilization). Now in order to leverage the GPU, we are going to use Core Animation instead, which is responsible for all CALayer-based classes.

The two techniques I am going to discuss are similar in terms of performance. I encourage you to check both out, and choose the one that makes more sense to you. And of course, if you want to share a different way to draw, don’t hesitate to post in the comments or on Twitter!

I uploaded an app to github that demonstrates all the techniques discussed in this series. And at the bottom of this post, I posted gifs showing the incredible gains from using GPU-based drawing.

Sublayer GPU-based drawing

This technique is centered around the use of UIBezierPaths and CALayers to draw and store the drawings within a UIImageView. In touchesBegan, we store the new touch point in a property like this:

Then in touchesMoved, we call a drawing function (drawBezier) to which we pass the previous and new touch points:
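The gist itself isn't reproduced here, but a minimal sketch of those two methods could look like this (names such as previousTouchPoint and drawBezier follow the surrounding prose; the exact code in the gist may differ):

import UIKit

class CanvasView: UIImageView {
    private var previousTouchPoint: CGPoint = .zero

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        // store the new touch point in a property
        previousTouchPoint = touch.location(in: self)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        let newPoint = touch.location(in: self)
        // pass the previous and new touch points to the drawing function
        drawBezier(from: previousTouchPoint, to: newPoint)
        previousTouchPoint = newPoint
    }
}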

Now let’s look at the implementation for drawBezier:

  1. Create a new drawing layer if one doesn’t already exist, to be used as the ‘parent’ layer. This will be assigned to the drawingLayer property
  2. This CAShapeLayer will be added as a sublayer to the drawing layer
  3. Set the scale to match the device scale (2x, 3x, etc). This is really important, since the default scale for newly created CALayers is 1, and that will give you pixellated drawing on 2x and 3x devices (virtually all devices right now)
  4. If we exceed 400 sublayers, we flatten to improve performance
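The numbered steps above can be sketched roughly as follows (a sketch only, assuming a drawingLayer property and the method names used in this article; line color and width are placeholders):

func drawBezier(from start: CGPoint, to end: CGPoint) {
    setupDrawingLayerIfNeeded()                  // 1. ensure the 'parent' layer exists

    let line = CAShapeLayer()                    // 2. one shape layer per segment
    line.contentsScale = UIScreen.main.scale     // 3. match device scale (2x, 3x)
    let path = UIBezierPath()
    path.move(to: start)
    path.addLine(to: end)
    line.path = path.cgPath
    line.fillColor = nil
    line.strokeColor = UIColor.black.cgColor     // any color
    line.lineWidth = 5                           // any width
    line.lineCap = .round
    drawingLayer?.addSublayer(line)

    if let count = drawingLayer?.sublayers?.count, count > 400 {
        flattenToImage()                         // 4. keep the sublayer count bounded
    }
}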

Here’s the setupDrawingLayerIfNeeded method used above:

And finally, here is the implementation for flattenToImage():

First, we get the previously flattened image (if one exists), and we merge it with the newly added drawings/layers. Then we generate a new image from the merged result, and assign it to our UIImageView’s image property.
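Sketched in code, that flattening step might look like this (again, a sketch under the same assumed property names, not the exact gist):

func flattenToImage() {
    let renderer = UIGraphicsImageRenderer(size: bounds.size)
    image = renderer.image { ctx in
        // merge the previously flattened image (if any)...
        image?.draw(in: bounds)
        // ...with the newly added drawings/layers
        drawingLayer?.render(in: ctx.cgContext)
    }
    // start over with an empty drawing layer
    drawingLayer?.removeFromSuperlayer()
    drawingLayer = nil
}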

This flattening process is not ideal, as it relies more heavily on the CPU than I'd like. Here's what the final class looks like.

The frame rate here is pretty stable while drawing, but let’s look at the CPU utilization:

Not bad

As you can see, this is a good improvement over the 25% we got in Part 1. There are tiny, recurrent spikes to 15% whenever the flattenToImage method gets called. Ideally, I want to get rid of that CGContext in the flattening phase and rely exclusively on Core Animation. This brings us to my next technique.

draw(_layer:ctx:) GPU-based drawing

This approach is somewhat similar to the draw(rect:) approach we did in Part 1, but instead of draw(rect:), we will be working with draw(_layer:ctx:).

We start with a UIView subclass; in touchesMoved, we store the new touch locations in a line array, and then we call layer.setNeedsDisplay(rect:) to perform the drawing (note that we are calling setNeedsDisplay on the layer this time, and not on the view itself.)

We are carrying over the optimizations from Part 1. Namely, we are calling layer.setNeedsDisplay(rect:) to only update the dirty area of the view, and we are flattening the image/emptying the array once we reach a certain number of points.

Here’s what touchesMoved looks like:

Next up, we override draw(layer:ctx:) as discussed earlier to leverage the GPU. Inside of that method, we need to create a CAShapeLayer (a CALayer subclass) to perform the actual drawing. We will not use the CGContext that was passed in by the method signature, because we don’t want to rely on the CPU for rendering.

Our method will look like this:

Here are the important points as numbered above:

  1. Reuse the existing CAShapeLayer to perform drawing, or create a new one if it doesn’t exist
  2. Match device scale to avoid pixellation
  3. Create a UIBezierPath that will guide the drawing
  4. Loop through the existing points in our line array, and build the path accordingly
  5. Now that the path has been stroked and assigned to the shape layer, we assign the layer to a property, and add it as a sublayer to the main view’s layer. This is only performed once (per flatten)
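The five steps above might be sketched like this (a sketch only; stroke color and width are placeholders, and drawingLayer and line are the assumed stored properties):

override func draw(_ layer: CALayer, in ctx: CGContext) {
    // 1. reuse the existing shape layer, or create one if it doesn't exist
    let drawLayer = drawingLayer ?? CAShapeLayer()
    drawLayer.contentsScale = UIScreen.main.scale   // 2. avoid pixellation

    let path = UIBezierPath()                       // 3. the path guiding the drawing
    for (index, point) in line.enumerated() {       // 4. build the path from stored points
        if index == 0 {
            path.move(to: point)
        } else {
            path.addLine(to: point)
        }
    }
    drawLayer.path = path.cgPath
    drawLayer.fillColor = nil
    drawLayer.strokeColor = UIColor.black.cgColor
    drawLayer.lineWidth = 5
    drawLayer.lineCap = .round

    if drawingLayer == nil {                        // 5. attach only once (per flatten)
        drawingLayer = drawLayer
        self.layer.addSublayer(drawLayer)
    }
}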

The last thing left to discuss with this approach is the flattening process, which is necessary to prevent performance deterioration over time as you add more points. We add a property observer as we did before:

That in turn calls this method:

Notice how we’re flattening after collecting only 25 points this time. This is because we no longer use a CGContext in the flattening process, and since we are no longer CPU-bound, we can do this more often (every 25 points). Here’s the flattening method:

This may look a bit cryptic at first, and it’s because we’re trying to copy a CALayer, which is not an intuitive process (much like copying a UIView). Copying a layer this way lets us bypass the requirement to use a CGContext, therefore we no longer need to render the existing layers into a context then generate a bitmap image out of the context. This saves us many CPU cycles.

To summarize the steps in the code:

  1. Access the optional drawingLayer, which contains everything we drew earlier in draw(layer:ctx:)
  2. Encode the layers from that drawingLayer into a data object, then decode that object into a brand-new layer (a copy)
  3. Access the optional value of that brand-new layer
  4. Add that new layer as a sublayer on the view’s layer to display it
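The archive-then-unarchive copy described in those steps can be sketched like this (NSKeyedArchiver used as a "deep copy"; these particular archiving APIs are deprecated in newer SDKs but illustrate the idea):

func flattenToLayer() {
    // 1. access the optional drawingLayer
    guard let drawingLayer = drawingLayer else { return }
    // 2. encode the layer tree into data, then decode it into a brand-new copy
    let data = NSKeyedArchiver.archivedData(withRootObject: drawingLayer)
    // 3. access the optional value of the new layer
    if let copy = NSKeyedUnarchiver.unarchiveObject(with: data) as? CALayer {
        layer.addSublayer(copy)   // 4. display the copy on the view's layer
    }
    self.drawingLayer = nil       // continue drawing into a fresh layer
    line.removeAll()              // empty the points array
}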

Now that the drawing layers are safely copied to a separate layer, and the points array has been emptied, we can continue drawing new points without losing our existing drawing.

And if you ever want to clear the existing drawing, you can call this function:

That will safely loop through all the sublayers that are of type CAShapeLayer, and remove them from the main view’s layer. (‘for case let x as’ is one of my favourite patterns in Swift 😄)
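Sketched out, that clearing function is just a few lines (the function name is an assumption):

func clearDrawing() {
    guard let sublayers = layer.sublayers else { return }
    // remove only the CAShapeLayer sublayers, leaving everything else intact
    for case let shapeLayer as CAShapeLayer in sublayers {
        shapeLayer.removeFromSuperlayer()
    }
}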

Here is the link to what the full class looks like if you want to check it out.

The frame rate on my 11" iPad Pro was a steady 120 FPS while drawing with this approach. And here’s the CPU utilization:

Looking good!

I think 10–11% is a pretty good improvement from the 25% we were seeing in Part 1! Notice how we no longer get spikes to 15% as we did in the first technique discussed above.

Summary

Throughout these two articles, I covered the following techniques:

  • Low performance CPU-based drawing
  • High performance CPU-based drawing
  • Sublayer GPU-based drawing (what I ended up using in my game)
  • Draw(layer:ctx:) GPU-based drawing

The final image that gets drawn is identical amongst the four. In terms of performance on an iPad, the first technique is far too slow, averaging below 20FPS. But the other 3 will all give you a smooth drawing experience (at the max supported frame rate for your device).

Things will start getting trickier if you have other stuff running in the background, or if your app contains multiple drawing canvases. That was the case in my game, which lets 4 players play on the same device, meaning you can have up to 8 drawing canvases simultaneously:

Draw with Math

In those cases, it can be worth the effort to try and minimize the CPU utilization of your drawing to get better performance. Your users will also get extra battery life as an added bonus.

As mentioned in Part 1, I made this small app to demonstrate the techniques discussed in this series. If you build and run the app, you'll see two buttons that will draw a spiral for you. The left button (Spiral Link) uses a CADisplayLink to draw, meaning it gets called once every frame, while the second button (Max Speed) uses a Timer with an extremely small interval (0.00001 seconds), so it draws about as fast as the device will allow.

The ‘Max Speed’ button clearly demonstrates the difference in performance between the CPU-based and GPU-based approaches. I am going to leave you with 4 gifs showing what happens after I tapped on it with each technique:

Slow CPU-based
Fast CPU-based
Sublayer GPU-based
draw(layer:ctx:) GPU-based

Thanks for reading. If you enjoyed this article, feel free to hit that clap button 👏 to help others find it. If you *really* enjoyed it, you can clap up to 50 times 😃

]]>
<![CDATA[High performance drawing on iOS  — Part 1]]> https://medium.com/@almalehdev/high-performance-drawing-on-ios-part-1-f3a24a0dcb31?source=rss-1d15c542811f------2 https://medium.com/p/f3a24a0dcb31 Thu, 17 Jan 2019 02:09:50 GMT 2019-06-17T00:42:23.252Z High performance drawing on iOS — Part 1
Source: Tim Arterbury — Unsplash

As I was building my recently released drawing game, I explored different ways to perform 2D drawing on iOS. Throughout this 2-part series, I will share what I’ve learned by comparing the performance of 4 distinct ways to draw on the platform. All 4 techniques will look exactly the same in terms of the final drawn image, but as we will see, their implementation and performance will vary wildly.

This is by no means a definitive guide for 2D drawing. I am still learning about all of this, and I’ll be updating this article as I learn new information. If you wish to share a better way to draw, I would love to hear your thoughts in the comments section or on Twitter!

Part 1 will cover:

  • Low performance CPU-based drawing
  • High performance CPU-based drawing

Part 2 (available here) will cover:

  • Draw(layer:ctx:) GPU-based drawing
  • Sublayer GPU-based drawing (what I ended up using in my game)

Low performance CPU-based drawing

This approach was probably the simplest to implement. Here you would subclass UIImageView and use it as your drawing canvas, and then you call this drawing code inside touchesMoved:

let renderer = UIGraphicsImageRenderer(size: bounds.size)
image = renderer.image { ctx in
    image?.draw(in: bounds)               // redraws the entire existing image
    lineColor.setStroke()                 // any color
    ctx.cgContext.setLineCap(.round)
    ctx.cgContext.setLineWidth(lineWidth) // any width
    ctx.cgContext.move(to: previousTouchPosition)
    ctx.cgContext.addLine(to: newTouchPosition)
    ctx.cgContext.strokePath()
}

You can view the entire canvas class here if interested.

note: be sure to set isUserInteractionEnabled = true on your UIImageView

That code lets you draw stuff like this:

Draw with Math

This approach got me pretty far in my development of the game. In fact, I only noticed its drawbacks once I launched the game on my iPad to test the UI on the bigger screen (I spent the first month of development exclusively on my iPhone 6s 😅)

I was getting a smooth 60 frames per second while drawing on my iPhone 6S, but bizarrely, when I tried drawing on my brand-new 11" iPad Pro the frame rate plummeted to the teens! (17 fps average)

Something was terribly wrong here. I looked at Xcode, and noticed that the CPU was sitting at 100% utilization throughout my drawing

Ouch!

After some investigation, the culprits turned out to be:

image = renderer.image { ctx in ... }

and

image?.draw(in: bounds)

First, the renderer is creating a UIImage based on a set of drawing instructions inside the closure. This is a CPU-intensive operation.

Second, that closure above also includes image?.draw, which itself is drawing the entire image to that graphics context. This is another CPU-intensive operation.

Now imagine calling both of these methods hundreds of times a second; no wonder the frame rate is plummeting!

You might be wondering why the iPhone 6s, a nearly four-year-old phone, doesn't have this issue while the iPad Pro does. There are actually two reasons for this:

  • The 11" iPad Pro has a resolution of 2388x1668 (about 4 million pixels), while the iPhone 6s has a resolution of 1334x750 (about 1 million pixels)
  • The iPad Pro has a 120 Hz display while the iPhone's is just 60 Hz. In other words, touchesMoved gets called twice as often on the iPad

Overall, we’re looking at a ~8x increase in CPU demand. The iPad Pro’s A12X CPU is certainly far more advanced than the 6s’s A9. Unfortunately, CPU-based drawing is a single-threaded operation (note the screenshot above shows 100% used out of potential 800), so the multicore setup in the A12X doesn’t give it any benefit here. And if we compare the single-threaded performance between the two CPUs, the A12X is just over twice as fast as the A9 according to Geekbench.

Now that the performance mystery has been solved, it’s time to explore a better way to draw!

High performance CPU-based drawing

This approach is centered on the draw(_ rect:) method for UIView, which you cannot call directly. Instead, you invoke that method by calling setNeedsDisplay() on the view. The idea for drawing here is that you would store the points you want to draw inside of an array (this is done in touchesMoved), and then you would loop over these points inside draw(_ rect:) to perform the actual drawing.

The initial class looks like this:

If you run this code on an iPad, you'll see an immediate improvement over the first approach. This time you'll start at 120 FPS, and CPU utilization will no longer be maxed out at 100%; however, it's still somewhat high, hovering around 60%. Moreover, if you continue drawing on the canvas, you'll notice the CPU % gradually rising over time. After a short while it will reach 100%, and soon after the frame rate will plummet to the teens. Uh oh, not good!

Back to where we started

But fear not, there are a couple of easy optimizations we can make that will greatly improve the performance here. First up, instead of calling setNeedsDisplay, which marks the entire view as needing a redraw, we really should be calling its sibling, setNeedsDisplay(_ rect:), which marks only a small rect within that view as dirty and in need of a redraw.

To do that, we would need to calculate the area that needs to be updated within the view. This is a rectangle between the last touch location and the new one, something like this:

We should also account for the drawing line width, so the actual rectangle will be slightly bigger. We calculate it with this method:
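A sketch of that calculation (the method name follows the prose; lineWidth is the stored stroke width):

func calculateRectBetween(lastPoint: CGPoint, newPoint: CGPoint) -> CGRect {
    // the rect spanning both points, padded on every side by half the line width
    let originX = min(lastPoint.x, newPoint.x) - (lineWidth / 2)
    let originY = min(lastPoint.y, newPoint.y) - (lineWidth / 2)
    let width = abs(lastPoint.x - newPoint.x) + lineWidth
    let height = abs(lastPoint.y - newPoint.y) + lineWidth
    return CGRect(x: originX, y: originY, width: width, height: height)
}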

That method takes the lastTouchLocation, so we need to retrieve that by modifying our touchesMoved method slightly. Here’s what it looks like now:

Try launching the app on the iPad again, and you’ll find that the CPU utilization has dropped all the way down to 20% while drawing now. Nice!!

However, we’re not out of the woods yet; you’ll notice that the CPU % quickly climbs back up as you continue drawing. To deal with this, we will have to flatten the image every once in a while (i.e empty the lines array and convert the drawing to bitmap.)

We will flatten the image whenever we collect 200 points, or when touchesEnded is called. A nice side effect of this is that we no longer need a multi-dimensional array of lines to save the points. A simple [CGPoint] array will do the job, since we’re no longer storing multiple lines (they get flattened now).

We need to add some convenience methods here (summary below):

To summarize what was added, we’re basically flattening the image when touchesEnded is called. We also flatten the image when we reach 200 points (this is checked in the property observer), however we leave one point in the array, to ensure no interruption to the drawing.
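The plumbing just described might be sketched like this (updateFlattenedImage is a stand-in name for the rendering step; the real class may differ):

var line = [CGPoint]() {
    didSet { checkIfTooManyPoints() }   // the property observer mentioned above
}

private func checkIfTooManyPoints() {
    if line.count > 200 {
        flattenImage()
    }
}

private func flattenImage() {
    updateFlattenedImage()              // render the current points into the bitmap
    // keep the last point so the stroke continues without a visible gap
    let lastPoint = line.last
    line.removeAll(keepingCapacity: true)
    if let lastPoint = lastPoint {
        line.append(lastPoint)
    }
}

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    flattenImage()
}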

The draw(_ rect:) method has also changed a bit, since we need to render the flattened image now if it exists:

You’ll notice that we’re using image.draw() again (this caused a massive slowdown above), but don’t worry, because this time we’re calling it on a small subset of the view. It won’t apply to the entire size of the view.

If you run the app on the iPad, you’ll see that the CPU will now stay around 25% utilization no matter how long you keep drawing. And the frame rate is a steady 120 FPS (or 60 FPS depending on device.) Cool!

Hurray for improved battery life!

So, to recap this approach:

  1. In touchesMoved, we store the touch points inside of an array
  2. We call setNeedsDisplay(rect:) to mark the dirty area for redraw
  3. In draw(rect:), we loop over the array of points and perform the actual drawing
  4. When the array reaches 200 points, we flatten the image and empty the array

At this point we have a very workable system, and I can see myself shipping the game like this. But something still bothered me about this approach; it’s the fact that it’s single-threaded. I wanted to know if it were possible to utilize the full power of the hardware instead of just one CPU core.

In Part 2, I will talk about my attempts to leverage the GPU to perform parallel high-performance drawing that is not limited by one core. It actually turned out to be not that difficult. By the time I released the game, I had successfully managed to lower the CPU utilization to just 10%! Check out Part 2 here!

I uploaded a small app to github that demonstrates the drawing concepts described in this series. It has an FPS counter in the corner to help you compare the performance. You can either draw yourself, or tap the “Start Drawing” button to have it draw a spiral for you. You can find it here

Be sure to run it on a real device, as the simulator won’t give you accurate performance measurements.

Available on GitHub

This is the drawing game that I built based on the concepts discussed in this series:

‎Draw with Math

I learned about the optimizations discussed in this article from this WWDC video; I highly recommend watching it if you’re interested in the topic:

Optimizing 2D Graphics and Animation Performance - WWDC 2012 - Videos - Apple Developer

Thanks for reading. If you enjoyed this article, feel free to hit that clap button 👏 to help others find it. If you *really* enjoyed it, you can clap up to 50 times 😃

]]>
<![CDATA[How I built my first open source library]]> https://medium.com/@almalehdev/how-i-built-my-first-open-source-library-97d8bb2cc254?source=rss-1d15c542811f------2 https://medium.com/p/97d8bb2cc254 Thu, 01 Nov 2018 02:43:27 GMT 2019-05-11T02:26:46.613Z
Credit: Susan Yin

Last week I published my first open source library, QuickTicker. It’s a Swift library that lets you create simple ticker animations using one line of code. The result looks like this:

In this post, I’d like to talk about this project and cover:

  • Why I created this library
  • How I built it (the coding part)
  • Final details (example project, unit tests, README.md, Cocoapods)
  • Summary of major points and general advice (aka TLDR)

Why open source?

Starting with the obvious question, the reason I decided to build this library was because this is a functionality that I usually end up incorporating into most of my projects anyway; so I figured I might as well make it a bit more generic and package it up into a library that can be easily added to any project, as opposed to copy & pasting code between projects.

Building this library was also an opportunity for me to practice thinking in terms of APIs and practice building modular code while hiding implementation details. I also had to use generics, which I hadn’t used in any of my previous projects!

What does it do?

When I started this project, my goal was to build a simple library that lets you animate labels similar to the gif above. I ended up adding a few additional features over the course of the project, although the core concept is still the same. Here is the list of features I ended up with:

  1. Start an animation using just one line of code
  2. Familiar syntax to anyone who’s used UIView’s animate methods
  3. Accepts any Numeric value as end value, you don’t have to convert or typecast
  4. It works even if there is text mixed in with numbers in the same label. Text remains intact while the digits get animated 👍
  5. Completion handler lets you safely queue-up actions following the animation
  6. You can optionally specify animation curves and decimal points for the label
  7. Works on both UILabel and UITextField (I intend to expand this later)

Label animation, the main purpose of this library, is something I'd learned about from a YouTube video by Brian Voong. In the video, Brian talks about CADisplayLink and how you can use it to animate text in UILabels to create counters and other similar effects. There is some boilerplate code required to get CADisplayLink working (including selectors and @objc methods), and that's where I think this library can be useful.

Before I go on, I’d like to mention a talk given at Swift & Fika 2018 by Daniel Kennett about API design. During that talk, Daniel spoke about the API boundary, which is the line between the code of the API and the code of the user. As an API designer, you get to choose where this boundary goes, and this decision can have great ramifications.

The closer the boundary gets to the user code, the more work you have to do as an API designer. In return, the API becomes much easier for the user to implement (think Crashlytics with their ridiculously simple install process). Daniel showed this image to convey that point:

Credit: Daniel Kennett

On the other hand, if you decide to place the boundary much closer to your code, you end up having to write less code, but you give the user much more work to do. In return, they usually end up having more control over the API. Daniel offered the example of Spotify Metadata:

Credit: Daniel Kennett

I don’t think either approach is wrong. It all depends on what you’re trying to accomplish with your API. In my case with QuickTicker, my goal was to make it as simple as possible for the user to get started, ideally a one-line function call. So I was interested in making something more similar to the first image (Crashlytics).

How I built it (the technical part)

The early versions of the library did not use a dedicated type. Instead, I built it as an extension on the UILabel type (you can still see it in the early commits on github if you’re curious what it looked like). So you would call the API using dot syntax directly on your label, something like this:

let someLabel = UILabel()
someLabel.startTicker(duration: 2, endValue: 250)

There were a few things I didn’t like about this approach. I wasn’t a big fan of extending the entire UILabel type, as I didn’t want to pollute the namespace for all UILabels for the user, when they probably only needed to animate one or two labels. This approach also meant that I couldn’t extend the same functionality to other types like UITextField without duplicating all my code.

As an aside, during this early experiment with type extensions, I learned that it is actually possible to add stored properties to type extensions in Swift. To do this, you define a computed property backed by an associated object, then access that object using an association key, which is a unique pointer to that association. The end result looks like this:

Sure the syntax is ugly, and very “unswifty”, but it gets the job done, and lets you do something that is otherwise not possible in Swift. I think it’s really cool!
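For reference, a sketch of the associated-object trick (the tickerDuration property is an invented example, not QuickTicker's actual code):

import UIKit
import ObjectiveC

// the association key only needs to be a unique, stable pointer
private var tickerDurationKey: UInt8 = 0

extension UILabel {
    // a "stored" property on an extension, backed by the Objective-C runtime
    var tickerDuration: Double {
        get { return objc_getAssociatedObject(self, &tickerDurationKey) as? Double ?? 0 }
        set { objc_setAssociatedObject(self, &tickerDurationKey, newValue,
                                       .OBJC_ASSOCIATION_RETAIN_NONATOMIC) }
    }
}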

Anyway, back to the main topic 😁 as I mentioned earlier, I ended up scrapping this type extension approach for my API, and I went with a dedicated type. Two types to be precise. One of them publicly available, one internal (not visible to the user).

Now you might be wondering why I didn’t go with a protocol, as it may seem like a good solution here. The problem is that I needed to store properties (you’ll find out why when I discuss the implementation below), and protocols don’t allow that (as far as I know!). They only let you add computed properties, and the trick I mentioned above with associated objects doesn’t work on protocols.

This brings us to my custom types. One of them is a public class named QuickTicker, which contains three class methods. The first method contains all possible parameters, while the other two are convenience methods that call into the main one with some default values. The syntax for these methods is inspired by UIView’s animate methods, which will be familiar to many users.

I could have provided default values to all parameters on the main method instead of writing 3 methods, but that didn’t give me the desired level of granularity when testing out auto-complete options. Another advantage to writing separate methods is you get to specify the auto-complete tooltip text for each one, explaining the default values to the user (e.g. 2 second duration). The auto-complete tooltip by the way is added via /// (3 forward slashes) above the method.

I used the class keyword instead of static for these methods, because I wanted to give the user the option to override them if they wanted to. This class also contains an Options enum. The options include 3 animation curves, as well as a decimalPoints case with an associated value denoting the number of decimals. This options array is a nice way to future-proof the API, as it lets me safely add further options down the line without breaking existing users’ code.

The main method accepts two generic parameters, one for the end value, which could be any Numeric value, and one for the animated label, which (for now) could be a UILabel or a UITextField. It looks like this:

It took me a while to figure out how I could create a function that accepts any Numeric value in a manner that is transparent to the user (thinking of the API boundary here). After some research online, I ended up using this answer on StackOverflow. This answer was posted before the Numeric protocol was made official in Swift, so at first I tried to adapt the answer to the existing Numeric protocol in the language, but after several unsuccessful attempts I ended up creating a NumericValue protocol that worked out beautifully (if you know of a way to make this work with Numeric, I would love to know how!)
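A sketch of the shape this takes (these are illustrative declarations, not QuickTicker's exact protocol or method signatures):

import UIKit

// every supported numeric type funnels through Double
protocol NumericValue {
    var toDouble: Double { get }
}

extension Int:     NumericValue { var toDouble: Double { return Double(self) } }
extension Float:   NumericValue { var toDouble: Double { return Double(self) } }
extension Double:  NumericValue { var toDouble: Double { return self } }
extension CGFloat: NumericValue { var toDouble: Double { return Double(self) } }

// a second protocol lets both UILabel and UITextField act as the animated view
protocol TickerView: AnyObject {
    var text: String? { get set }
}
extension UILabel: TickerView {}
extension UITextField: TickerView {}

class QuickTicker {
    class func animate<V: TickerView, T: NumericValue>(label: V,
                                                       toEndValue endValue: T,
                                                       duration: TimeInterval = 2) {
        // hand endValue.toDouble and the label off to the internal QTObject
    }
}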

Next we have the internal QTObject class. This class is necessary because I want to store several values over the course of the animation. Most notably, a weak reference to the animated label, the end value, a completion handler, a reference to the ongoing CADisplayLink (in order to stop it), and other options.

This object has an interesting method called getStartingValue(from: animationLabel):

As I mentioned in the list of features, QuickTicker updates the digits in your label while leaving the text intact. In other words, if you feed it a UILabel with the text “Temperature: 13F”, only the 13 part will get animated. This contributes to placing the boundary closer to the user’s code (less work for the user, no need to worry about mixing text with digits, but more code in the API to handle different scenarios).

To achieve this, the method above works like this:

  1. Try to convert to Double (using Double’s failable initializer)
  2. If it succeeds, return the digit (there is no text in the label)
  3. If it fails (there is text in the label), then find out where the digit starts and ends within the text
  4. Convert the value behind that start…end range to Double, that’s your starting value (the rest is either text, or a second digit, but we’re only concerned with the first digit we find)

To accomplish the above, I make use of another method called getFirstAndLastDigitIndexes. This one is pretty big, so I’m not gonna paste it in the article (you can see it here), but essentially what it does is define an NSCharacterSet with decimalDigits plus “.” then loop over the text looking for any matching characters (while accounting for edge cases, like a “.” in the middle of text)
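A simplified, self-contained take on that idea (not the library's actual implementation, which handles more edge cases) could look like this:

import Foundation

// pull the first numeric value out of a label's text, ignoring surrounding text
func startingValue(fromText text: String) -> Double? {
    if let value = Double(text) { return value }   // no text, just a number
    var digits = ""
    var seenDigit = false
    for char in text {
        if char.isNumber || (char == "." && seenDigit) {
            digits.append(char)
            seenDigit = seenDigit || char.isNumber
        } else if seenDigit {
            break                                   // the first run of digits has ended
        }
    }
    return Double(digits)
}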

Moving on to the QTObject initializer, when this gets called (by the class methods in QuickTicker), and once all the properties are initialized (including the starting value mentioned above), the CADisplayLink gets added to the default mode run loop, thereby starting the animation.

Small note regarding run loops: I intend to switch to common mode run loop in my next release, as I just learned that default mode can get blocked by touch events (i.e. scrolling your finger across the screen).

The CADisplayLink is tied to the device refresh rate, and calls its selector (the handleUpdate method in our case) on every screen refresh (every 16.7ms on a 60 Hz device); this gives you the entire 16.7ms window to execute that method. Generally speaking, if you have CPU-intensive code and wish to take advantage of that entire 16.7ms, a CADisplayLink is a better fit than a Timer, since timers can fire slightly ahead of or behind their intended time, leaving you less time to execute your code.

Next up is handleUpdate:

On every frame update, the handleUpdate method gets called. This method will check the elapsed time since the animation has started, and will calculate the current value for the label based on:

  1. The percentage of elapsed time / full duration
  2. Animation curve stated by the user (default is linear)
  3. End value stated by the user
  4. Number of decimals stated by the user

This is the crux of how the label animation works. Every frame update, you calculate how much time has passed since the animation has started, compare that against the full duration of the animation, and update the label accordingly. If we reached max duration, the displayLink is invalidated, and the completion handler is called.

For example, let’s say you wanted to animate a label from 0 to 40 over the course of 2 seconds. After 0.5 second has elapsed, you are 25% into the animation. With a linear curve, that means the label must show 25% of 40, i.e. 10 at that point in time.

There are a few interesting methods called over the course of the above process. First up is getValueFromPercentage:

This will return a value based on the curve stated by the user. The easeIn curve is easy to calculate, since the percentage is a range from 0–1, so it’s just a matter of raising that percentage to the power of 2.8 (an arbitrary number I chose through trial and error to get a nice curve). To get the inverse curve (easeOut), I subtract the percentage from 1 before raising it to the power of 2.8, then subtract from 1 again.
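The curve math just described can be condensed into a few lines (a sketch with an invented function name; the percentage runs from 0 to 1, and 2.8 is the author's hand-tuned exponent):

import Foundation

enum Curve { case linear, easeIn, easeOut }

func value(forPercentage p: Double, curve: Curve, endValue: Double) -> Double {
    switch curve {
    case .linear:  return endValue * p
    case .easeIn:  return endValue * pow(p, 2.8)             // slow start
    case .easeOut: return endValue * (1 - pow(1 - p, 2.8))   // slow finish
    }
}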

The next method I wish to explore is updateLabel:

This will apply the desired number of decimal places to the calculated value before updating the label. The user can request a specific number of decimal places, such as 5 zeroes. If they don't specify anything, the number will be inferred from the requested end value (e.g. if you specify 157.83 as the animation end value, then 2 decimal places will be maintained throughout the animation).

By default, Swift will discard extra zeroes after the decimal point. So if you enter something like let x = 0.983010000 it will be converted to 0.98301 and we wouldn’t be able to respect the desired decimal places. To get around this, we call a function called padValueWithDecimalsIfNeeded that looks like this:

This method does the following:

  1. Infer the number of decimal places from the calculated value (at that point in the animation)
  2. Compare it against the requested number of decimals
  3. If we are short (i.e. Swift discarded extra zeroes), then we pad it out to match the requested number
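Those three steps might be sketched as follows (a simplified stand-in for the library's method, returning the padded string rather than mutating state):

import Foundation

func padValueWithDecimalsIfNeeded(_ value: Double, requestedDecimals: Int) -> String {
    var text = String(value)               // Swift drops trailing zeroes here
    // 1. infer the number of decimal places from the calculated value
    let parts = text.split(separator: ".")
    let currentDecimals = parts.count == 2 ? parts[1].count : 0
    // 2.–3. compare against the requested number and pad with zeroes if short
    for _ in currentDecimals..<max(currentDecimals, requestedDecimals) {
        text += "0"
    }
    return text
}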

The second method used inside updateLabel is updateDigitsWhileKeepingText, which ensures that only the digits in the label get animated; the rest of the text stays intact. (just as we did in the beginning to infer the starting value)

This pretty much covers all the major parts of how the library works. There’s a total of 2 classes and 2 protocols, but I placed all of them in a single QuickTicker.swift file to make installation easier (just drag & drop one file). I like it when libraries let you do that (e.g. SwiftyJSON)

Example project

I decided to create an example project to showcase the library. This can be useful to generate some screenshots for the readme, and also to demonstrate to the user how the API calls work with some examples.

I personally find example apps really useful. For instance, when I implemented Google’s Cloud Firestore realtime database in one of my projects, the example helped me out a lot, especially to get acquainted with the API calls. I kept repeatedly referring back to the example app while working on my project to check how certain things are done.

For QuickTicker, I ended up building a single-page app with some sliders to let the user experiment with the different settings, as well as some pre-set counters at the bottom. It looks like this:

Some lessons learned while building the example app:

  1. If you intend to create a Cocoapod (more on that below), don’t build an example app manually. Let Cocoapods create it for you, then change it as you please. You’ll thank me later! 😃
  2. Use UIFont.monospacedDigitSystemFont for animated labels to prevent wobbling during the animation (especially when combining text with digits)

Unit Tests

This was an interesting one. I have limited experience writing unit tests; it’s definitely an area I need to work on 😄

When I started writing tests for this library, I quickly learned two things:

  • In your tests file, you need to add the @testable attribute before your import statement in order to access internal classes
  • You cannot access any private or fileprivate methods from your tests, even with @testable
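Both points can be seen in a minimal test file (the test names here are hypothetical; QTObject is the internal class discussed below):

```swift
import XCTest
@testable import QuickTicker  // exposes internal (but NOT private) symbols

class QuickTickerTests: XCTestCase {
    func testInternalAccessIsAllowed() {
        // QTObject is internal, so @testable makes it visible here.
        let object = QTObject()
        XCTAssertNotNil(object)
        // A `private` or `fileprivate` method on QTObject would still
        // fail to compile if we tried to call it from this file.
    }
}
```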

The second one was pretty big for me. At that point, all of the classes and methods I had written were private except for the 3 class methods that I exposed to the user. I wanted to expose only the bare minimum so I wouldn’t pollute the user’s namespace with meaningless things like QTObject.

But in order to test the methods inside QTObject, I had to remove the private restriction, including the initializer for QTObject. I added a comment to the initializer, alerting the user not to call it directly and to use the class methods instead. And later on, I found out that since this is an internal class, if you install QuickTicker via Cocoapods, you can’t access it directly anyway. Perfect!

Here are the tests I wrote if you’re curious. Not comprehensive by any means, but still serviceable I think.

How to build a README.md

In the same excellent Swift & Fika 2018 conference I mentioned earlier, Roy Marmelstein gave a talk about open source in which he emphasized the importance of having a good readme page for your project on Github.

A good readme starts by explaining what the library is and what you can do with it. Ideally, it showcases something that’s usually hard but becomes easy with your library. Roy recommends creating a custom logo, and adding screenshots and especially gifs if applicable (i.e. if the library is UI-related).

Other important sections include installation instructions and requirements, as well as an example project.

I started with a readme template, then looked at a few existing repos for inspiration.

Cocoapods

Next up was Cocoapods. I have used Cocoapods extensively in my projects before, but never actually created one. The process ended up being surprisingly straightforward.

To create the Cocoapod I followed tutsplus’ tutorial, along with Cocoapod’s own guides on their website.

One piece of advice I’d like to reiterate: if you’re planning to include an example project in your library, let Cocoapods create the initial version (along with the test files), then modify it as you please. I had created the project myself, so when I built the Cocoapod I ended up with a second project and had to manually move files from the old project to the new one. It’s a bit of a hassle.

Conclusion

I think I’ve covered all the major steps I went through while building this library! This was a fun weekend project, and I certainly learned a lot while doing it. To summarize some of the important points + some general advice:

  • If there is a certain functionality that you keep adding to your projects over and over, consider packaging it up into an open source library
  • Open source helps you build modular code and think in terms of APIs
  • Before you start, consider where you want to place the API boundary
  • If you place the boundary close to the user’s code (like I did with QuickTicker), expect to do more work on your side
  • Try to future-proof your design, and plan for additive changes in the future as opposed to breaking users’ code in future updates (e.g. I added an options array that I can safely expand without breaking any existing code)
  • If you intend to add an example project, and you wish to create a Cocoapod, let Cocoapods create the project + tests file for you
  • Including an example project helps you generate screenshots (if applicable), and it helps introduce users to your API
  • Don’t forget about Unit Tests! Libraries with no tests get a lower CocoaPods Quality score
  • And finally, having a good README.md file can be vital for the success of your library!

Hopefully, this article has inspired you to create your own open source library and share it with the world! Thanks for reading!

If you have any questions or suggestions, please leave a comment or tweet @BesherMaleh

You can find QuickTicker here.

If you’re curious, here are a couple of my apps that use this library

InstaWeather
Find My Latte

Thanks for reading. If you enjoyed this article, feel free to hit that clap button 👏 to help others find it. If you *really* enjoyed it, you can clap up to 50 times 😃

]]>