<![CDATA[Anton’s Substack]]>https://antongubarenko.substack.comhttps://substackcdn.com/image/fetch/$s_!mzA2!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1eb672de-f6cd-47a2-98b2-6de3f5be6ffe_2316x3088.jpegAnton’s Substackhttps://antongubarenko.substack.comSubstackTue, 14 Apr 2026 06:37:33 GMT<![CDATA[Spec-Driven Development with OpenSpec Promo]]>https://antongubarenko.substack.com/p/spec-driven-development-with-opensec-9eahttps://antongubarenko.substack.com/p/spec-driven-development-with-opensec-9eaTue, 07 Apr 2026 09:40:08 GMT

AI-assisted coding became fast much earlier than it became predictable. That is exactly why spec-driven development is getting more attention now: instead of keeping requirements scattered across chat history, you turn them into artifacts the human and the assistant can both follow. OpenSpec is one of the tools built around that idea. It adds a lightweight workflow where you agree on the change first, generate planning artifacts, and then implement against them.


What is Spec-Driven Development?

Spec-driven development, or SDD, is a workflow where the specification becomes the source of truth before implementation starts. In OpenSpec’s wording, it helps you and your AI coding assistant agree on what to build before any code is written. Rather than relying on one long prompt, you break work into artifacts such as a proposal, specs, design, and tasks.

That matters even more in AI-heavy workflows. A good assistant can generate a lot of code quickly, but if the requirements are vague, the result is usually inconsistent, incomplete, or hard to evolve. SDD adds structure without forcing a huge process. The idea is simple: describe the change, refine the behavior, plan the work, then implement it against an agreed spec.

One of the nice ideas behind OpenSpec is that the workflow is not treated as rigid phase gates. Its docs explicitly frame the model as actions rather than locked phases, so you can create, implement, update, and archive as needed. That makes it fit better with real development, where requirements often change while implementation is already in progress.

SDD's birthplace is NASA in the 1960s, where it emerged as a mix of formal development methods and multiple layers of specification. So we can all feel like space explorers while using it :)


What is OpenSpec, anyway?

OpenSpec is an open-source tool for spec-driven development aimed at AI coding assistants. The project describes itself as a lightweight spec layer that helps teams agree on what to build before code is written, keeps each change organized in its own folder, and works with more than 20 AI assistants through generated slash commands and skills.

The primary goal of OpenSpec is to solve the problem of unpredictability and inconsistency that can arise when using AI tools for coding. Since AI tools are non-deterministic, they can struggle to consistently follow complex requirements. OpenSpec provides a structured approach that allows teams and AI assistants to agree on specifications before coding begins. This leads to more efficient development, fewer errors, and improved code quality.

After initialization, OpenSpec generates instructions that compatible assistants can auto-detect. In its docs, that means skills in locations like .claude/skills/, plus a project config file at openspec/config.yaml if you choose to create one. The default schema is spec-driven, and the standard artifact set includes proposal, specs, design, and tasks.

A particularly useful concept in OpenSpec is the idea of delta specs. Instead of rewriting the whole product spec every time, you describe what is changing. The docs present this with sections such as ADDED Requirements and MODIFIED Requirements, which makes OpenSpec more practical for brownfield projects where the codebase already exists. We will get back to this in later sections.

Full article here.

Thanks for reading Anton’s Substack! Subscribe for free to receive new posts and support my work.

]]>
<![CDATA[Spec-Driven Development with OpenSpec]]>https://antongubarenko.substack.com/p/spec-driven-development-with-opensechttps://antongubarenko.substack.com/p/spec-driven-development-with-opensecTue, 07 Apr 2026 09:38:51 GMT

AI-assisted coding became fast much earlier than it became predictable. That is exactly why spec-driven development is getting more attention now: instead of keeping requirements scattered across chat history, you turn them into artifacts the human and the assistant can both follow. OpenSpec is one of the tools built around that idea. It adds a lightweight workflow where you agree on the change first, generate planning artifacts, and then implement against them.


What is Spec-Driven Development?

Spec-driven development, or SDD, is a workflow where the specification becomes the source of truth before implementation starts. In OpenSpec’s wording, it helps you and your AI coding assistant agree on what to build before any code is written. Rather than relying on one long prompt, you break work into artifacts such as a proposal, specs, design, and tasks.

That matters even more in AI-heavy workflows. A good assistant can generate a lot of code quickly, but if the requirements are vague, the result is usually inconsistent, incomplete, or hard to evolve. SDD adds structure without forcing a huge process. The idea is simple: describe the change, refine the behavior, plan the work, then implement it against an agreed spec.

One of the nice ideas behind OpenSpec is that the workflow is not treated as rigid phase gates. Its docs explicitly frame the model as actions rather than locked phases, so you can create, implement, update, and archive as needed. That makes it fit better with real development, where requirements often change while implementation is already in progress.

SDD's birthplace is NASA in the 1960s, where it emerged as a mix of formal development methods and multiple layers of specification. So we can all feel like space explorers while using it :)


What is OpenSpec?

OpenSpec is an open-source tool for spec-driven development aimed at AI coding assistants. The project describes itself as a lightweight spec layer that helps teams agree on what to build before code is written, keeps each change organized in its own folder, and works with more than 20 AI assistants through generated slash commands and skills.

The primary goal of OpenSpec is to solve the problem of unpredictability and inconsistency that can arise when using AI tools for coding. Since AI tools are non-deterministic, they can struggle to consistently follow complex requirements. OpenSpec provides a structured approach that allows teams and AI assistants to agree on specifications before coding begins. This leads to more efficient development, fewer errors, and improved code quality.

After initialization, OpenSpec generates instructions that compatible assistants can auto-detect. In its docs, that means skills in locations like .claude/skills/, plus a project config file at openspec/config.yaml if you choose to create one. The default schema is spec-driven, and the standard artifact set includes proposal, specs, design, and tasks.

A particularly useful concept in OpenSpec is the idea of delta specs. Instead of rewriting the whole product spec every time, you describe what is changing. The docs present this with sections such as ADDED Requirements and MODIFIED Requirements, which makes OpenSpec more practical for brownfield projects where the codebase already exists. We will get back to this in later sections.


How to install OpenSpec

OpenSpec requires Node.js 20.19.0 or higher. The main README and installation docs show the global package install via npm, plus alternatives for pnpm, yarn, bun, and nix.

node --version
npm install -g @fission-ai/openspec@latest

If you use another package manager, the official docs also list these options:

pnpm add -g @fission-ai/openspec@latest
yarn global add @fission-ai/openspec@latest
bun add -g @fission-ai/openspec@latest

How to use OpenSpec

The quick-start flow is straightforward. Install the CLI, move into your project, run openspec init, and then start the workflow from your AI assistant with /opsx:propose <what-you-want-to-build>. The default quick path in the docs is propose → apply → archive.

cd your-project
openspec init

This command will create the following directory structure in your project:

openspec/
├── specs/              # Source of truth (your system's behavior)
│   └── <domain>/
│       └── spec.md
├── changes/            # Proposed updates (one folder per change)
│   └── <change-name>/
│       ├── proposal.md
│       ├── design.md
│       ├── tasks.md
│       └── specs/      # Delta specs (what's changing)
│           └── <domain>/
│               └── spec.md
└── config.yaml         # Project configuration (optional)

Two key directories:

  • specs/ - Source of Truth. These specifications describe the current behavior of your system. They are organized by domain (e.g., specs/auth/, specs/payments/).

  • changes/ - Proposed Changes. Each change gets its own folder with all related artifacts. When a change is completed, its specifications are merged into the main specs/ directory.

OpenSpec Artifacts

Each change folder contains artifacts that guide the work. The main ones are the proposal (why the change is happening), specs (what the behavior should be), design (how it will be built), and tasks (the implementation steps). The artifacts build on each other: each later one refines and depends on the earlier ones.

Then in your AI assistant (Claude, in this example):

/opsx:propose Add dark theme support to the iOS app

If you want a more expanded workflow, OpenSpec also supports commands such as /opsx:new, /opsx:continue, /opsx:ff, /opsx:verify, /opsx:sync, /opsx:bulk-archive, and /opsx:onboard. The docs say these are enabled by selecting a workflow profile with openspec config profile and then applying it with openspec update.

openspec config profile
openspec update

OpenSpec can also inject project context and rules through openspec/config.yaml. The official docs show fields like schema, context, and per-artifact rules, which is useful when you want the AI to consistently follow your stack, style, testing, or architecture expectations.

Example config:

schema: spec-driven

context: |
  Platform: iOS
  Language: Swift 6
  UI: SwiftUI
  Architecture: MVVM
  Testing: XCTest
  Style: Keep features modular and accessibility-friendly

rules:
  proposal:
    - Mention user-visible impact
    - Mention rollback strategy if theme breaks visual contrast
  specs:
    - Use Given/When/Then scenarios
  design:
    - Note persistence approach and affected views
  tasks:
    - Keep tasks small and implementation-ready

What is Delta and why is it needed?

One of the most useful OpenSpec ideas is the delta. In simple terms, a delta is the change itself, not the entire product spec rewritten from scratch. OpenSpec’s concepts docs define delta specs as specifications for what is being added, modified, or removed.

That matters because most real work happens in an existing codebase. You are usually not writing a greenfield product spec from zero. You are adding dark theme, changing onboarding, refining analytics, or replacing a legacy flow. Rewriting the full spec every time would add noise and make reviews harder. Delta specs keep the focus on what changed.

This is also why delta works well with AI assistants. Instead of telling the model to reinterpret the whole product, you give it a precise target: implement this difference. That makes output easier to review, easier to evolve, and less likely to drift away from the original intent. This last point is an inference from OpenSpec’s artifact model and workflow design, rather than a direct quote from the docs.

A simple way to think about it:

  • full spec = what the product is

  • delta spec = what this feature changes

A typical delta can contain sections like:

## ADDED Requirements
### Requirement: Manual Theme Override

## MODIFIED Requirements
### Requirement: Accessible Theme Colors

## REMOVED Requirements
### Requirement: Legacy Fixed Light Theme

So if your app already exists and you want to add dark mode, the delta does not describe the whole app. It describes only the new or changed behavior related to theme support. That is what makes spec updates manageable over time.


Example: iOS spec to add Dark Theme

Here is the kind of change request OpenSpec fits well. Imagine an existing SwiftUI app that currently supports only a light theme, and now you want to add dark mode properly rather than as a quick visual patch.

A strong OpenSpec workflow would usually begin with a proposal, then move into specs, design notes, and tasks. OpenSpec’s docs position these as the standard artifacts in the default schema, and their examples recommend small, structured tasks and Given/When/Then-style scenarios for requirements.

1. Proposal

# Proposal: Add Dark Theme Support

## Summary
Add full dark theme support to the iOS app so users can use the interface comfortably in low-light environments and align the app with modern iOS expectations.

## Motivation
The app currently uses a light-only visual style. This creates poor visual comfort at night, makes the product feel less native on iOS, and limits accessibility for users who prefer darker interfaces.

## Goals
- Support system light/dark appearance
- Allow optional manual theme override in Settings
- Persist user preference between launches
- Update key screens and reusable components

## Non-Goals
- Rebuild the design system from scratch
- Add seasonal or branded themes
- Change unrelated layout or navigation behavior

## Risks
- Some screens may still use hard-coded colors
- Charts, overlays, and custom backgrounds may need separate tuning
- Contrast issues may appear on older screens

## Success Criteria
- Main user flows look correct in light and dark mode
- Theme switches without broken contrast
- Manual selection persists after relaunch

2. Specs

# Delta for Theme

## ADDED Requirements

### Requirement: System Theme Support
The application MUST support both light and dark appearance modes.

#### Scenario: Follow system appearance
- GIVEN the user has not selected a manual theme override
- WHEN the iOS system appearance changes between light and dark
- THEN the application updates its appearance automatically

### Requirement: Manual Theme Override
The application MUST allow the user to choose Light, Dark, or System in Settings.

#### Scenario: Select dark theme manually
- GIVEN the user opens Settings
- WHEN the user selects Dark theme
- THEN the application immediately applies dark appearance
- AND the selection persists across app launches

#### Scenario: Revert to system theme
- GIVEN the user previously selected a manual theme
- WHEN the user selects System
- THEN the application follows the current iOS appearance again

### Requirement: Accessible Theme Colors
The application MUST use theme-aware colors that preserve readable contrast in both appearances.

#### Scenario: Readability in dark mode
- GIVEN the app is displayed in dark mode
- WHEN the user opens any primary screen
- THEN text, icons, separators, and controls remain clearly readable

3. Design

# Design: Dark Theme Support

## Overview
Implement a centralized theme layer for SwiftUI that supports:
- system appearance
- manual override
- persistent storage
- theme-aware semantic colors

## Architecture
- Add `AppTheme` enum with cases: `system`, `light`, `dark`
- Add `ThemeStore` as an observable state holder
- Persist selected theme using `@AppStorage`
- Inject `ThemeStore` at the app root
- Map theme selection to `preferredColorScheme`

## UI Impact
Affected areas:
- app root scene
- settings screen
- reusable buttons
- cards and list rows
- onboarding screens
- charts and branded surfaces

## Edge Cases
- legacy UIKit-hosted screens may ignore SwiftUI preference
- hard-coded `Color.white` and `Color.black` usages must be replaced
- image assets may require dark variants
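
The architecture above can be sketched in SwiftUI. This is a minimal illustration under the design's assumptions; the `"appTheme"` storage key and the stub `ContentView` are hypothetical names for this sketch, not code generated by OpenSpec:

```swift
import SwiftUI

enum AppTheme: String, CaseIterable {
    case system, light, dark

    // Map the selection to a preferredColorScheme value;
    // nil lets the system appearance decide.
    var colorScheme: ColorScheme? {
        switch self {
        case .system: return nil
        case .light:  return .light
        case .dark:   return .dark
        }
    }
}

struct ContentView: View {
    var body: some View { Text("Hello") }
}

@main
struct ThemedApp: App {
    // @AppStorage persists the raw string value across launches.
    @AppStorage("appTheme") private var theme: AppTheme = .system

    var body: some Scene {
        WindowGroup {
            ContentView()
                .preferredColorScheme(theme.colorScheme)
        }
    }
}
```

A Settings picker would then simply write to the same `@AppStorage` key, and the root modifier picks the change up immediately.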

4. Tasks

# Tasks

## 1. Theme Infrastructure
- [ ] 1.1 Create `AppTheme` enum with `system`, `light`, `dark`
- [ ] 1.2 Create `ThemeStore` observable object
- [ ] 1.3 Persist selected theme with `@AppStorage`
- [ ] 1.4 Inject theme store into the app entry point

## 2. App Integration
- [ ] 2.1 Apply `preferredColorScheme` at root level
- [ ] 2.2 Add theme selector to Settings
- [ ] 2.3 Ensure changes update UI immediately

## 3. UI Migration
- [ ] 3.1 Replace hard-coded colors with semantic colors
- [ ] 3.2 Audit reusable components
- [ ] 3.3 Update custom backgrounds and overlays
- [ ] 3.4 Test charts and illustrations in dark mode

## 4. Validation
- [ ] 4.1 Test launch persistence
- [ ] 4.2 Test system appearance switching
- [ ] 4.3 Check accessibility contrast on key screens

[img: iOS dark theme concept illustration: iPhone screens transitioning from light mode to dark mode]


Example Implementation Prompt

Once the planning artifacts are in place, you can move to implementation. In OpenSpec’s workflow, that is the role of /opsx:apply, which implements tasks while allowing artifacts to be updated as needed.

/opsx:apply Implement the dark theme change for the SwiftUI iOS app.
Respect the existing architecture.
Use semantic colors where possible.
Add manual theme selection in Settings.
Persist the selected option.
Flag any screens that still depend on hard-coded colors.

Further edit: refining the spec after first implementation

This is where OpenSpec becomes more useful than a single big prompt. After the first pass, you often discover missing details. Maybe the theme works, but charts still look wrong, onboarding illustrations are too bright, or a UIKit bridge ignores the chosen mode. OpenSpec’s docs explicitly support updating artifacts during work, and its delta-spec model is built for describing those changes rather than rewriting everything.

For example, after reviewing the first implementation, you might extend the spec like this:

## MODIFIED Requirements

### Requirement: Accessible Theme Colors
The application MUST use theme-aware colors that preserve readable contrast in both appearances, including charts, onboarding graphics, and empty states.

#### Scenario: Charts in dark mode
- GIVEN the app is displayed in dark mode
- WHEN the user opens an analytics or chart-based screen
- THEN chart axes, grid lines, labels, and highlighted values remain readable
- AND color choices preserve sufficient contrast against the background

And you could add follow-up tasks:

## 5. Dark Mode Polish
- [ ] 5.1 Tune chart palette for dark backgrounds
- [ ] 5.2 Add dark-compatible illustration assets where needed
- [ ] 5.3 Audit UIKit-hosted views for color-scheme mismatch
- [ ] 5.4 Re-test Settings, onboarding, and analytics screens

That iterative loop is probably the strongest part of the OpenSpec approach. You do not just tell the assistant “fix dark mode.” You update the source of truth, then implement again from a better artifact set.


Why OpenSpec is interesting for iOS work

For iOS development, OpenSpec feels especially useful in feature work that is bigger than a one-file change but smaller than a full RFC process. Dark theme, onboarding redesigns, analytics instrumentation, subscription flows, paywall experiments, or widget updates all benefit from a shared definition of behavior before the assistant starts touching code.

It also fits the reality of app development better than prompt-only coding. Mobile work usually includes UI states, accessibility, persistence, migration concerns, asset variants, platform conventions, and sometimes legacy UIKit or extension targets. A structured spec makes those details much easier to preserve across multiple AI-assisted implementation steps.

Competitors

Honestly, there is Kiro. It positions itself as an AI development environment built around spec-driven development, custom agents, and structured workflows instead of prompt-only coding. The most interesting part is the emphasis on moving from fast prototypes to production work with executable specs, steering, and IDE plus CLI support. But it uses a credits system.


Final thoughts

Spec-driven development is really a response to a new problem: code generation got fast, but reliable intent transfer did not. OpenSpec addresses that gap with a lightweight workflow built around proposals, specs, design, and tasks, plus a set of assistant-friendly commands and project configuration. It is not about replacing coding. It is about making AI-generated coding more intentional, reviewable, and repeatable.

For an iOS team or even a solo developer, that can be enough process to improve predictability without turning every feature into a documentation project.





One More thing…

Through years of mentoring, I’ve built a small bookshelf of titles that are worth reading — or at least being aware of. The Grokking series from Manning Publications is one of them. The style, the language, and the illustrations all combine to explain complex topics in an easy and engaging way.

You’ll find yourself wanting to learn more about things that once felt intimidating or difficult to understand. One of the latest additions is Grokking Data Structures by Marcello La Rocca.

Sharing the link with you.

]]>
<![CDATA[SwiftUI: Charts Axis Scale Promo]]>https://antongubarenko.substack.com/p/swiftui-charts-axis-scale-promohttps://antongubarenko.substack.com/p/swiftui-charts-axis-scale-promoMon, 30 Mar 2026 08:22:04 GMTHappy to share a new article with you.


]]>
<![CDATA[SwiftUI: Charts Axis Scale]]>https://antongubarenko.substack.com/p/swiftui-charts-axis-scalehttps://antongubarenko.substack.com/p/swiftui-charts-axis-scaleMon, 30 Mar 2026 08:20:12 GMTWe’ve all had this feeling at some point in a developer’s life. You picked the right architecture, managed navigation, fixed isolation. The only thing left is a small specification detail: display data on a chart exactly as described and according to Figma. You try one modifier, then another, then different parameters, and it still doesn’t work. What is domain? Why is range not working? There should be no barriers to building a nice chart.

Here is where chartXScale(range:type:) becomes interesting. Apple describes it as the modifier that configures “the range of x positions that correspond to the scale domain.” By default, that range is determined by the size of the plot area.

The same applies to chartYScale(range:type:), which configures the y positions in exactly the same way.

In other words, this modifier is not about changing your data. It is about changing how the x-axis uses the available horizontal space.

Why Would You Use It?

Most charts look fine with the default x-scale behavior. But sometimes the chart feels just a little off:

• the first bar is too close to the leading edge

• the last point looks glued to the trailing border

• the whole plot feels cramped even though the data itself is correct

That is usually not a domain problem. It is a range problem.

A useful mental model is this:

domain = what values exist on the axis

range = where those values are drawn in the plot area

Apple’s chart scale docs make exactly that separation: domain is about possible data values, while range is about the axis positions used to render them.


Basic Chart

Let’s start with a simple bar chart:

import SwiftUI
import Charts

struct SalesPoint: Identifiable {
    let id = UUID()
    let month: String
    let value: Int
}

struct ChartRangeDemoView: View {
    let data: [SalesPoint] = [
        .init(month: "Jan", value: 12),
        .init(month: "Feb", value: 19),
        .init(month: "Mar", value: 7),
        .init(month: "Apr", value: 15),
        .init(month: "May", value: 23)
    ]

    var body: some View {
        Chart(data) { item in
            BarMark(
                x: .value("Month", item.month),
                y: .value("Sales", item.value)
            )
        }
        .frame(height: 220)
        .padding()
    }
}
Basic Bar Chart

With no explicit range configuration, Swift Charts uses the plot area dimension automatically. That is the default behavior Apple documents for chartXScale(range:type:).


The Most Practical Use of plotDimension

The easiest and most useful version is applying padding to the plotting dimension.

Apple documents plotDimension(padding:) as “a scale range that fills the plot area with the given padding value at start and end.”

Chart(data) { item in
    BarMark(
        x: .value("Month", item.month),
        y: .value("Sales", item.value)
    )
}
.chartXScale(
    range: .plotDimension(padding: 20)
)
.frame(height: 220)
.padding()
range: .plotDimension(padding: 20)

This is the version I’d reach for first. It is simple, visual, and immediately improves charts that feel too tight near the edges.


Different Padding at the Start and End

Sometimes equal padding is not enough. You may want the chart to start tighter on the left, but leave a bit more breathing room on the right for a label, annotation, or simply for better visual balance.

Apple also provides a PositionScaleRange option that fills the plot area with different padding values at the start and end.

Chart(data) { item in
    BarMark(
        x: .value("Month", item.month),
        y: .value("Sales", item.value)
    )
}
.chartXScale(
    range: .plotDimension(startPadding: 0, endPadding: 28)
)
.frame(height: 220)
.padding()
range: .plotDimension(startPadding: 0, endPadding: 28)

This is where the modifier starts feeling less like “some axis API” and more like a visual tuning tool.


Example with an Actual Range

The interesting part is that chartXScale(range:type:) is not only for categorical data like month names. It also applies when your x-axis contains real continuous values such as dates or numbers.

Here is a day-based line chart:

import SwiftUI
import Charts

struct TemperaturePoint: Identifiable {
    let id = UUID()
    let day: String
    let temperature: Double
}

struct TemperatureChartView: View {
    let data: [TemperaturePoint] = [
        .init(day: "Mon", temperature: 18),
        .init(day: "Tue", temperature: 21),
        .init(day: "Wed", temperature: 19),
        .init(day: "Thu", temperature: 24),
        .init(day: "Fri", temperature: 22)
    ]

    var body: some View {
        Chart(data) { item in
            LineMark(
                x: .value("Day", item.day),
                y: .value("Temperature", item.temperature)
            )

            PointMark(
                x: .value("Day", item.day),
                y: .value("Temperature", item.temperature)
            )
        }
        .chartXScale(
            range: .plotDimension(startPadding: 12, endPadding: 12)
        )
        .frame(height: 220)
        .padding()
    }
}
Basic Line Chart

Even though this chart uses day labels rather than true Date values, the job of range is still the same: define how the domain maps into the horizontal plot area. Apple’s documentation for the modifier does not limit it to categorical charts; it defines the range generally as the x positions that correspond to the scale domain.


What is Type?

This is the part that often feels vague at first.

Apple’s ScaleType documentation describes it as “the ways you can scale the domain or range of a plot.”

That means the type parameter tells Swift Charts how to interpret the scale, while range tells it how much horizontal plotting space to use and how to distribute it.

A typical explicit example looks like this:

.chartXScale(
    range: .plotDimension(startPadding: 12, endPadding: 12),
    type: .category
)

If your x-axis values are strings like weekdays, month labels, product names, or enum-backed display values, .category makes the intent very clear. Apple exposes category as one of the available scale types.


Why Might Type be Needed?

In many charts, you do not have to set type manually because Swift Charts can infer the right behavior from the values you pass in. But there are still good reasons to use it.

1. It makes the axis behavior explicit

If you are using strings as x-axis values, they are typically meant to be treated as categories, not as a continuous scale. Writing type: .category makes that obvious in code. That follows directly from Apple’s scale model and the presence of .category in ScaleType.

2. It can make chart code easier to reason about

Even if inference works, explicit scale configuration can make the chart easier to read and maintain. This is not a direct Apple quote, but it is a practical consequence of how the API separates the idea of scale type from the scale range itself.

3. It helps you think in the correct model

When you pass categorical values, you are working with discrete positions. When you pass dates or numbers, you are usually working with a continuous axis. type reinforces that distinction.


Category Example with Explicit Type

Here’s a complete example that makes the scale type explicit.

import SwiftUI
import Charts

struct TemperaturePoint: Identifiable {
    let id = UUID()
    let day: String
    let temperature: Double
}

struct TemperatureChartView: View {
    let data: [TemperaturePoint] = [
        .init(day: "Mon", temperature: 18),
        .init(day: "Tue", temperature: 21),
        .init(day: "Wed", temperature: 19),
        .init(day: "Thu", temperature: 24),
        .init(day: "Fri", temperature: 22)
    ]

    var body: some View {
        Chart(data) { item in
            LineMark(
                x: .value("Day", item.day),
                y: .value("Temperature", item.temperature)
            )

            PointMark(
                x: .value("Day", item.day),
                y: .value("Temperature", item.temperature)
            )
        }
        .chartXScale(
            range: .plotDimension(startPadding: 12, endPadding: 12),
            type: .category
        )
        .frame(height: 220)
        .padding()
    }
}
Type with .category

This is a good example of where range and type work together:

• range adds breathing room inside the plot

• type: .category makes the discrete x-axis behavior explicit


How Does This Differ From Domain?

This is the distinction that matters most.

If you write:

.chartXScale(domain: ["Jan", "Feb", "Mar", "Apr", "May"])
Domain override

you are configuring which values belong to the x-axis domain.

If you write:

.chartXScale(range: .plotDimension(padding: 20))

you are configuring where those values are drawn horizontally.

And if you write:

.chartXScale(
    range: .plotDimension(padding: 20),
    type: .category
)

you are also telling Swift Charts how that scale should behave.

Apple’s chart scale documentation separates these concerns clearly: domain is about possible axis values, range is about axis positions, and ScaleType describes how the domain or range of a plot is scaled.
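
All three concerns can also be expressed in a single call. This sketch assumes the combined `chartXScale(domain:range:type:)` overload and reuses the month data from the bar-chart example:

```swift
Chart(data) { item in
    BarMark(
        x: .value("Month", item.month),
        y: .value("Sales", item.value)
    )
}
.chartXScale(
    domain: ["Jan", "Feb", "Mar", "Apr", "May"],  // which values exist
    range: .plotDimension(padding: 20),           // where they are drawn
    type: .category                               // how the scale behaves
)
```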

A Practical Rule of Thumb

Here’s the version I’d keep in mind while building charts:

• use domain when you want to control what values are visible

• use range when you want to control spacing and plotting space

• use type when you want to make the scale behavior explicit

That makes chartXScale(range:type:) much easier to understand.


Final Thoughts

chartXScale(range:type:) is one of those modifiers that does not look exciting at first glance, but it becomes very useful the moment you start polishing a chart. The biggest win is usually visual: better edge spacing, more balanced composition, and a cleaner plot without changing any data at all.

And the nice part is that it scales from simple to advanced quite naturally:

• start with .plotDimension(padding:)

• move to startPadding and endPadding when you need more control

• add type when you want the axis model to be explicit

Once you think of it that way, the modifier stops feeling obscure. It is really just a small but important piece of chart layout.



]]>
<![CDATA[Swift Bits: Restore ChatGPT macOS App Chats]]>https://antongubarenko.substack.com/p/swift-bits-restore-chatgpt-macoshttps://antongubarenko.substack.com/p/swift-bits-restore-chatgpt-macosFri, 13 Mar 2026 10:20:25 GMTIt’s been a while since I’ve posted Bits, all because of some physical obstacles. However, the recent ChatGPT macOS app update and the 5.4 model reveal made my interaction with AI (OpenAI only) more complicated. Let me tell you why…

Problem Begins

After happily updating to Version 1.2026.048 (1771630681)

the main view welcomed me with this:

ChatGPT macOS app

Chats were not loading and new ones were blocked from being created. Not very appealing, so I went to check the web version at ChatGPT.com, where there was a list of my chats: pinned and unpinned. After checking the latest ones, everything seemed fine. Now it was time to check the pinned ones. And it was the correct guess! One of them was not loading, and I got a toast at the top of the screen:

Could not load conversation <uuid>!

The Safari console looked like this:

500 Server error on conversation load

Applied Fixes

Googling and prompting led to a couple of suggestions. First of all, report it to OpenAI via the help form or chat in the app

OpenAI help form from web

This may trigger a possible fix from the server side.

At the moment of publication, no fixes have been made.

From the client side:

  • Refresh/Restart: Hard refresh your browser or restart the app

  • Log Out/In: Sign out and back into your account to refresh the session.

  • Clear Data: Clear your browser’s cache and cookies for chat.openai.com.

  • Check Extensions: Disable browser extensions (especially ad blockers) that might interfere with page loading.

  • Try Incognito/New Browser: Use an incognito/private window or a different browser to rule out extension issues.

  • Switch Network: Toggle your VPN off or switch to a different network.

Would I have written this post if any of this had worked?)

Tracing the Problem

As a developer, I can see how the loading of chats might be implemented. For a conversation ID, the app should load the list of chats and retrieve the pinned ones. If any of them fail → show a warning sign with a popup on tap containing details and a “Reload” button. Yeah, right.

Instead, if any chat fails → the macOS app shows none of them. That is because the app does not fetch each conversation with a separate GET-conversation-by-ID request; it loads them in one batch. That brings speed, but the whole batch request fails if a single chat is broken. What can we do about it? Exclude the problematic chat. Pin/Unpin will not help. The only option is to archive it. This prevents it from loading and moves it further into Settings:

Archive action for chat


Later you can find it here:

Archived chats in Data Control

And that’s it! The chat is prevented from loading on the initial start.

Quite interesting that in the API, chats are called conversations. Even the browser URL has the https://chatgpt.com/c/ prefix.

Thanks for reading Anton’s Substack! Subscribe for free to receive new posts and support my work.

]]>
<![CDATA[Fortify Your App: Essential Strategies to Strengthen Security Q&A]]>https://antongubarenko.substack.com/p/fortify-your-app-essential-strategieshttps://antongubarenko.substack.com/p/fortify-your-app-essential-strategiesMon, 09 Mar 2026 07:05:14 GMTMeetups and webinars continue, and this time Apple hosted a session on Security Strategies, covering topics such as Enhanced Security, Memory Integrity, and writing secure code with Swift. Previously, I was already surprised by a four-hour duration. Now it’s almost six hours (six!) with three breaks, and it requires a lot of focus.

In the survey after the session, I mentioned that splitting it into separate sections would be better, in my opinion. Finding that much time during either the night or regular working hours is not always possible. Let’s hope they are actually reading the feedback.

As usual, the questions are split into rough thematic sections. Grammar and punctuation are kept as is.


🔐 Security & Memory Safety

What strategies can we use to evaluate third-party libraries for security vulnerabilities?

Hi! Evaluating third-party libraries can be tricky and require a lot of domain expertise, as there is a wide variety of issues to look for. This is especially true if you only have access to the libraries in binary form. Certain features—like EMTE—serve as mitigations and can be applied (in some cases) without a recompile, which can help in cases where you cannot easily audit the source. When auditing the source, some good hints for finding security-relevant bugs are: 1. Memory-unsafe code (C/C++, Swift code which uses unsafe, etc.), especially decoders and deserialization routines (e.g. bespoke file/request format parsing). 2. File path construction and archive expansion (zip files, etc.), which are a common source of problems due to path traversal. 3. Whether or not the code is memory safe, take special care to understand where it comes from and whether you trust it.


What are the potential security vulnerabilities associated with storing data in UserDefaults and plist files, and what best practices should be followed to protect against tampering or exposure?

In terms of pure data security, both approaches are fairly similar. The data is stored in the file system and is protected the same way any other file is. However, using your own plist file does mean you can directly set the file’s protection level, which is another reason to avoid storing sensitive data in UserDefaults. Similarly, both use the same on-disk format and data parser. Theoretically, that’s an attack vector just like any file parser. However, this particular parser is relatively simple compared to other formats (for example, SQL) and so widely used throughout the entire system that I’d consider it quite safe.
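As a small illustration of the plist point above, here is a sketch of writing your own plist with an explicit protection level. The `saveSettings` helper and its settings dictionary are hypothetical; on iOS, the `.completeFileProtection` writing option corresponds to NSFileProtectionComplete.

```swift
import Foundation

// Hypothetical helper: persist a small settings dictionary to a custom plist
// with an explicit file protection level (something UserDefaults doesn't offer).
func saveSettings(_ settings: [String: String], to url: URL) throws {
    let data = try PropertyListEncoder().encode(settings)
    #if os(iOS)
    // Keep the file encrypted whenever the device is locked.
    try data.write(to: url, options: [.atomic, .completeFileProtection])
    #else
    try data.write(to: url, options: [.atomic])
    #endif
}
```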


For a standalone iOS app with no backend, what’s Apple’s recommended baseline for protecting sensitive user-entered data at rest? Is ‘Complete Protection’ (NSFileProtectionComplete) plus Keychain for secrets the correct default?

The baseline for most data should be NSFileProtectionCompleteUntilFirstUserAuthentication (for files) and kSecAttrAccessibleAfterFirstUnlock (keychain). The issue with storing data at higher security levels is that data stored at “Complete” can really only be safely accessed while your app is in the foreground. More specifically, the data will only be accessible if the device is unlocked, but that state changes independently from background execution, meaning data can suddenly become inaccessible anytime your app is running in the background. You can and should use “Complete” whenever you can, but it’s a choice that needs to be made as part of your app’s overall design, not as an automatic default.
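For the Keychain half of that baseline, a sketch might look like this. The service and account strings are hypothetical; the relevant part is the `kSecAttrAccessibleAfterFirstUnlock` accessibility attribute mentioned in the answer.

```swift
import Foundation
import Security

// Hypothetical helper: store a secret with the recommended baseline
// accessibility (readable after the first unlock following boot).
func storeToken(_ token: Data) -> OSStatus {
    var query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.myapp", // hypothetical service
        kSecAttrAccount as String: "authToken"          // hypothetical account
    ]
    SecItemDelete(query as CFDictionary) // replace any existing item

    query[kSecValueData as String] = token
    query[kSecAttrAccessible as String] = kSecAttrAccessibleAfterFirstUnlock
    return SecItemAdd(query as CFDictionary, nil)
}
```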


Is it safe to use the Data type’s withUnsafeBytes method? I find myself using it a lot when parsing values from Bluetooth devices.

Methods with “unsafe” in the name can be used safely, but it requires more care and attention to do so. You won’t always be able to avoid using such methods, but you can carefully isolate such code and of course make sure to devote more effort to reviewing and verifying any code that uses unsafe methods or types.
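In the Bluetooth-parsing spirit of the question, one way to apply that advice is to isolate the single `withUnsafeBytes` call behind a small, bounds-checked helper so the rest of the parser stays in safe Swift. `readUInt16LE` here is a hypothetical helper, not an API from the session.

```swift
import Foundation

// Hypothetical helper: read a little-endian UInt16 from a Data payload.
// All bounds checking happens before any raw-memory access.
func readUInt16LE(from data: Data, at offset: Int) -> UInt16? {
    guard offset >= 0, data.count - offset >= MemoryLayout<UInt16>.size else {
        return nil // out of bounds: fail safely instead of trapping
    }
    let raw = data.withUnsafeBytes { buffer in
        // loadUnaligned tolerates arbitrary byte offsets (Swift 5.7+)
        buffer.loadUnaligned(fromByteOffset: offset, as: UInt16.self)
    }
    return UInt16(littleEndian: raw)
}
```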


I’m confused of the purpose of blast door, given the validation of the image was ultimately down to messages… surely blast door would protect and / or sanitise the object/data recieved? Or have I missed the point?

Properly validating images can itself be a perilous task and so it’s one best not performed inside a (relatively) privileged process like Messages. BlastDoor validates and unpacks attachments in a highly restricted environment and then passes a much safer, simpler version of it to Messages. So, for example, Messages may give BlastDoor a JPG and BlastDoor will ultimately hand back a simple bitmap representation of it. This indemnifies Messages against memory corruption bugs in the JPG parser.


Apple on messages for example never denies uploads of different file types, but then how does it protect against file-based vulnerabilities?

Messages allows sending (mostly) arbitrary files but—notably—it does not parse/preview/unpack arbitrary files. Messages and BlastDoor work together to ensure that attachments which are processed are processed in a way that’s safe. It is generally safe to accept arbitrary files so long as they are not processed.


Memory safe languages like Swift or Rust have sometimes to bypass security checks (e.g. to access the kernel). Which protections and tools can be used in that case?

On Apple platforms, EMTE is a great way to mitigate issues caused by this gap. Even kernel accesses to user memory are tag checked, and so out-of-bounds/use-after-free accesses to user memory will still be reported according to the process’ enforcement mode. Other techniques such as fuzzing with libfuzzer and either ASan or EMTE on can also serve as strategies to gain confidence that unsafe code has the desired memory safety properties.


This question may be covered today, but I’m curious what resources and guidance exists for maximizing security in apps when we build across platforms. SwiftUI is write once, tweak, deploy. How about security? Sandboxing is different as an example on Mac vs iPhone.

Hi! Thanks for joining. The event today covers many topics about security and memory safety. Most of the guidance is platform agnostic, and some are programming language agnostic. For additional resources specific to your app and project environments, please refer to Apple Platform Security: https://support.apple.com/guide/security/welcome/web


What techniques help verify if a third-party app has adopted security enhancements, hardware memory tagging, security extensions?

If you have the application bundle available to you, you can use the codesign tool to view an application’s entitlements:

codesign -d --entitlements - /path/to/application/binary

As EMTE is controlled by entitlement, you can use this technique to see if EMTE is enabled for a given executable in the app.


What additional features (settings) required for fully Swift apps? is Enhanced Memory safety required (new Capability + config)? How much of security guarantees Swift provides? anything in the talk required to be enabled or is enabled by default?

Enabling enhanced security features like PAC, EMTE, and typed allocations is still useful in Swift apps. In certain language modes, Swift apps which do not use unsafe can still have memory corruption issues due to data races (concurrently modifying a reference can cause reference counting errors). Similarly, although your application may be fully safe Swift, it may interact with libraries (provided by Apple or third parties) which are not fully memory safe, and so turning on enhanced security features will help protect you against issues not caused by your code.


🛡️ Enhanced Security & Capabilities

For the Pointer Authentication feature, supporting arm64e is required. I understand this applies to third party SDKs, but how about an app that is completely modularized (with all modules in the same workspace), do we need to configure this per module or only the main app target?

arm64e is indeed required, and every target that contributes binary code that’s linked or dynamically loaded into an app does need to have arm64e added as an architecture. When enabling the Enhanced Security capability, Xcode adds the ENABLE_POINTER_AUTHENTICATION build setting (that adds arm64e) as needed, but you may need to add that separately as well.


Is there a cSettings SPM construct to enable PAC, MTE (and other features that do not require the final top-level signing/capabilities entitlement) in a Swift package that has C/C++ targets, so that compiled libraries can be included in an Xcode project using these enhanced security capabilities?

Any clang compiler option can be included in cSettings. Caveat: Prior to Swift 6.2, only some clang options were allowed, but starting with Swift 6.2, you can put any option.


Does PAC specifically (enhanced security include PAC and more) work for iOS 14+ deployment targets? You can enable each part of enhanced security individually, so like we know that typed memory allocator compiles nicely for iOS16 or 17+, but we can get PAC compiled for iOS 14+?

PAC is tied to the arm64e architecture. arm64e is first supported in iOS 17.4, and generally supported starting with iOS 26 and macOS 26. Universal binaries can be built for arm64e + arm64, and arm64 will be used when arm64e isn’t supported. When building the universal binary, both architectures can be compiled for an older deployment target, but keep in mind that arm64e will only be used on newer iOS.


Do these features require Xcode 26.4 to debug, run and test with? I’ve noticed the change in entitlements mentioned in the Xcode 26.4 beta release notes, does that mean that you need to test and run on Apple 26.4 OSes to try out Enhanced Security capabilities now?

Most of the capabilities of Enhanced Security are supported on and can be tested on Apple OS versions starting from 26.0. The “hard mode” of MTE (under which tag-check violations result in an immediate crash) is only supported beginning in 26.4, so to test with that capability on hardware with MTE support you’ll need to test with 26.4 or later OS versions (described in the 26.4 release notes [1]). The new enhanced-security-version-string and platform-restrictions-string version entitlements you noted, described in the Xcode 26.4 release notes [2], are set automatically by Xcode 26.4, but can be set manually in your entitlements plists using a text editor if you need to stay on an earlier Xcode version.
[1] https://developer.apple.com/documentation/ios-ipados-release-notes/ios-ipados-26_4-release-notes [2] https://developer.apple.com/documentation/xcode-release-notes/xcode-26_4-release-notes


I’m having trouble finding the advanced security… is it under signing in capabilities or is it under the settings of Xcode? Also. Can you build an outlet already contained like I was building a journal app I don’t want it to be connected to any network services by default or any services

Hi Elijah, please refer to the documentation below for steps to enable the Enhanced Security capability: https://developer.apple.com/documentation/xcode/enabling-enhanced-security-for-your-app


I’ve noticed that when enabling MTE, my app keeps crashing on launch with reports pointing to 3rd party SDKs. Whenever I take out one SDK, it just crashes in the next. It seems the reported tag is always 0, which makes me believe it’s not just violations but maybe a configuration problem on my end?

Additionally, arm64 binaries produced by older versions of clang may have issues where the tag is incorrectly stripped from the pointer. Recompiling the binary with a recent compiler should remediate the issue.


Will enabling enhanced security extension from code signing help to find bugs for debug builds on simulators as well?

Yes, starting in the 26.4 OS versions, applications that enable MTE (checked-allocations) as part of Enhanced Security will run with MTE enabled in the simulator when running on macOS hardware that supports MTE.


Is enabling standard library hardening recommended for all build types not just for debug? Will it cause any latency issues when enabled for prod configuration? Does $(__LIBRARY_HARDENING_DEFAULT_VALUE) mean no hardening set for project?

Yes, we recommend enabling at least the fast hardening mode, even in production builds. It has been designed to have minimal performance impact, but if you have very performance sensitive workloads, as always: benchmark before and after. For configurations with optimization disabled (level set to 0 — normally Debug configs), __LIBRARY_HARDENING_DEFAULT_VALUE defaults to “debug” (the more extensive checks). For optimized configurations (e.g. Release), __LIBRARY_HARDENING_DEFAULT_VALUE defaults to “fast” if Enhanced Security is enabled, “none” otherwise.


🔍 Hardware Support & Compatibility

Which of the devices announced this week support memory integrity enforcement?

Memory Integrity Enforcement is supported on A19, A19 Pro, M5, M5 Pro, and M5 Max, which power iPhone 17e, the new MacBook Air (M5), and the new MacBook Pro (M5 Pro or M5 Max).


Is the memory tag exposed in memgraphs or Instruments in any way?

When running under the Allocations template in Instruments, the memory tags do show up in object addresses. In memgraphs and the CLI tools like heap, these tags aren’t currently exposed, with the addresses correlating directly to their containing VM regions. If you’re interested to see which regions have tagging enabled, this is available with vmmap --attributes


Does Memory Integrity Enforcement support pre-Swift 6.0?

Yes, Memory Integrity Enforcement can be used with any Swift version.


🔄 C/C++ Interoperability

In some C/C++ functions the Swift interop uses unsafe pointers to bytes. Is there something like a Span API over the bytes of a Swift var to ensure a memory-safe and bounds-enforced buffer pointer to pass through to C/C++ code instead of using unsafe methods such as withUnsafeMutableBytes()?

Great question! Automatically generated wrapper functions that safely unwrap Span types and pass along the pointer to C/C++ is a feature available since Xcode 26 when the experimental feature SafeInteropWrappers is enabled. This requires annotating std::span parameters with __noescape, or pointer parameters with both __noescape and __counted_by/__sized_by, directly in the header or using API notes. Note that this is only safe if Swift can accurately track the lifetime of the unwrapped pointer, which is why the Span wrapper is not generated without the __noescape annotation. More resources are available at https://developer.apple.com/videos/play/wwdc2025/311/ and https://www.swift.org/documentation/cxx-interop/safe-interop/. Since this is an experimental feature with ongoing development, questions and feedback on https://forums.swift.org are extra welcome to help us shape and stabilize this feature!


💉 Pointer Authentication

How does PAC work with ObjC method swizzling at runtime?

When you use the functions provided by the ObjC runtime, they ensure that any necessary pointer signing is correctly handled.


How does pointer authentication different from other memory defense ways mentioned as part of whole app protection?

Pointer authentication makes it more difficult to create a pointer (from an integer) or to modify an existing pointer. This complements technologies such as MTE (which can catch many bound and lifetime errors) and typed allocation (which mitigates the effects of memory re-use).


Why is Pointer Authentication a compile-time, opt-in feature, and not a platform-enforced, runtime-enabled feature? I imagine something like that should be possible if something as magical as Rosetta is also possible 🙂

Additionally, PAC is a compile time change as it requires different instructions throughout the program.


Where are the cryptographic tags used for pointer authentication stored on Apple devices? Are they kept in the Secure Enclave or in another hardware component?

The signatures are, however, stored in the upper bits of the pointer itself.


📦 Allocators & Memory Management

Are type and alignment independent for typed allocators?

Mostly, yes, but not entirely. The typed allocators segregate and isolate allocations by size class and, within each size class, by type space partition. Let’s call the (size class, type space partition) combination the “type bucket” that serves a particular allocation. Requesting aligned allocations (e.g. via aligned_alloc()) can change the effective size class of an allocation because of implementation details of the allocators, and so can change the type bucket that the allocation is served from.


What is best approach to use system allocator for third party SDK e.g. Braze if enabled enhanced security extension for app and memory corruption is noticed?

Third-party SDKs linked in to your app/program will generally be using the system allocator automatically and benefit from Memory Integrity Enforcement automatically. If there are memory corruption bugs in those SDKs that Memory Integrity Enforcement features like MTE detect and turn into crashes, you would want to work with the developers of those SDKs to have them fix the underlying bugs. You could use MTE soft mode [1] to avoid having those memory corruptions crash your app while you wait for fixes from the developers, at the cost of the relative reduction in security that that entails. https://developer.apple.com/documentation/BundleResources/Entitlements/com.apple.security.hardened-process.checked-allocations.soft-mode


📏 Bounds Safety

How do bounds safety checks prevent out-of-bounds (OOB) accesses in practice? Is this implemented as a built-in feature in Clang? When developers add annotations, what mechanisms are applied internally to enforce or adjust the bounds checks?

Yes, this is built into Clang. With -fbounds-safety enabled Clang will emit bounds checks wherever pointers are dereferenced or reassigned (exception: assigning to __bidi_indexable does not trigger a bounds check, since __bidi_indexable can track the fact that the pointer is out of bounds and defer the bounds check). If the bounds check fails the program will jump to an instruction that traps the process. Clang uses a combination of static analysis and runtime checks to enforce that pointer bounds are respected.


Why is CoreFoundation missing all bounds-checking annotations? Do I have to use __unsafe_forge_single for all initializers?

Yes, that is the recommended approach when interoperating with libraries that do not have bounds annotations, when you want to be explicit about the fact that you’re interacting with unsafe code. This makes it easy to grep for “unsafe” in your code base when doing a security audit. If you are confident that the API adheres to a bounds safe interface but simply lacks the annotations, you can redeclare the signature in your local header with added bounds annotations, like this:

//--- system_header.h
bar_t * /* implicitly __unsafe_indexable */ foo();

//--- project_header.h
#include
#include
bar_t * __single foo();

📱 App Store & Privacy

Are there common App Store review pitfalls for apps that store sensitive contact data locally (even if it’s not medical)? Anything you recommend we avoid to prevent rejection or privacy concerns?

Hi Gloria! Please see the following pages on user privacy and data use:

- https://developer.apple.com/app-store/user-privacy-and-data-use
- https://developer.apple.com/app-store/app-privacy-details/
- https://developer.apple.com/documentation/uikit/protecting-the-user-s-privacy/
- https://developer.apple.com/documentation/uikit/requesting-access-to-protected-resources
- https://developer.apple.com/documentation/uikit/encrypting-your-app-s-files


🏆 Acknowledgments

A huge thank-you to everyone who joined in and shared thoughtful, insightful, and engaging questions throughout the session — your curiosity and input made the discussion truly rich and collaborative.

Special thanks to:

Larry Wang, Chandrachud Patil, Christopher Sheats, Gloria Glazebrook, Patrick Hoekstra, Chris CL, Eric Dorphy, AAron Wangugi, Quinten Johnson, Nicholas Levin, Kim Ahlberg, Jason Brooks, Patrick Cousot, Alex Infanti, Ilia, nikolay dubina, Danylo, Pablo Butron, Elijah Cody Bain Black, Paul Floyd, Kim Kyoungsu, Zaid Al-Timimi, Rupinder, Tanner Bennett, Sanket P, Steven Joubanian and Gordon Leete.

Finally, a heartfelt thank-you to the Apple team and moderators for leading the session, sharing expert guidance, and offering such clear explanations of app security techniques. Your contributions made this session an outstanding learning experience for everyone involved.


Thanks for reading Anton’s Substack! Subscribe for free to receive new posts and support my work.

]]>
<![CDATA[SwiftUI Foundations: Build Great Apps with SwiftUI Q&A Promo]]>https://antongubarenko.substack.com/p/swiftui-foundations-build-great-apps-163https://antongubarenko.substack.com/p/swiftui-foundations-build-great-apps-163Tue, 17 Feb 2026 07:04:17 GMTRecently Apple returned with webinars format where engineers answered questions directly from developers while hosts were talking about different parts of some topic.

This time it was SwiftUI: from basics to state behavior. If the last session about “Coding Agents in Xcode” ran for 16 minutes and had 3 questions, this one is totally different. It took more than 3 hours (!) and included more than 150 questions.

But! The questions were not always linked to the session topic. You can also find info about Concurrency and SwiftData.

Even formatting and gathering the session questions took hours for me. So, this is truly unique content. There was even a question about where the Q&A session transcript could be found later. The answer is: nowhere except here)

This huge amount, with threads (which were also not so common previously), goes beyond the email size limit. Follow the link below for the full list of answers:
SwiftUI Foundations: Build Great Apps with SwiftUI Q&A

Thanks for reading Anton’s Substack! Subscribe for free to receive new posts and support my work.

]]>
<![CDATA[SwiftUI Foundations: Build Great Apps with SwiftUI Q&A]]>https://antongubarenko.substack.com/p/swiftui-foundations-build-great-appshttps://antongubarenko.substack.com/p/swiftui-foundations-build-great-appsTue, 17 Feb 2026 06:58:41 GMTRecently, Apple returned to the webinar format, where engineers answer developers’ questions directly while hosts cover different parts of a topic.

This time it was SwiftUI foundations: Build great apps with SwiftUI. If the previous session about “Coding Agents in Xcode” lasted 16 minutes and had only 3 questions, this one was completely different. It ran for more than 3 hours (!) and included over 100 questions.

But! The questions were not always tied strictly to the session topic. You’ll also find discussions about Concurrency and SwiftData.

Even formatting and gathering the session questions took hours for me, so this is truly unique content. Grammar and punctuation were slightly adjusted, but the authors’ formatting was kept as is.

And without further ado — here’s the session Q&A.


💾 SwiftData & Backend

Do you think it would be possible to write a ModelContainer targeting a backend other than iCloud, such as Firebase, and still be compatible with SwiftData framework annotations?

This is such an awesome question. The answer, personally, is I do not know, but I would love to try to see if it can be done. I think it is possible to write a ModelContainer targeting a backend other than iCloud; however, Firebase is a 3rd-party framework, so we can’t provide compatibility with the SwiftData framework. SwiftData provides a way to abstract the backend and focus on defining models and relationships.

Start by defining your SwiftData models with the necessary annotations to specify relationships and attributes. Configure the ModelContainer to use your chosen backend.

It sounds like you can try to make it work: in your application, configure the ModelContainer to use Firebase as the persistent store, but the data will then not be stored in SwiftData. You can then perform CRUD operations on your models using SwiftData, leveraging the backend of your choice, but I do not know if that can work. I would love to see that working, personally.

Implementing synchronization and conflict resolution mechanisms between SwiftData and your backend sounds like a hard project. This may require additional configuration or logic tailored to your use case. While challenging, I believe the customization and migration could be way too complex. Migration is the key word, along with keeping it working in future versions.


Does SwiftData support data virtualization for large row counts to avoid loading all items

SwiftData Query doesn’t currently support partial fetching. When rendering a large SwiftData dataset, consider using FetchDescriptor with appropriate predicates to paginate the data so you only load the data of the current page.
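A pagination sketch based on that answer might look like the following. The `Trip` model and page size are hypothetical; the relevant pieces are the `fetchLimit` and `fetchOffset` properties on `FetchDescriptor`.

```swift
import SwiftData

// Hypothetical model, for illustration only.
@Model
final class Trip {
    var name: String
    init(name: String) { self.name = name }
}

// Load one page of trips instead of the whole table.
func fetchTripsPage(context: ModelContext, page: Int, pageSize: Int = 50) throws -> [Trip] {
    var descriptor = FetchDescriptor<Trip>(sortBy: [SortDescriptor(\.name)])
    descriptor.fetchLimit = pageSize          // rows per page
    descriptor.fetchOffset = page * pageSize  // skip earlier pages
    return try context.fetch(descriptor)
}
```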


Can the sorting option in Swift Data queries take a user sort preference from @AppStorage?

Yes, SortDescriptor is codable, and so you can definitely persist a sort descriptor to AppStorage, or other kind of storage, and then retrieve it from the storage when needed, and use it with a SwiftData query.


What if I want to store the sort property per trip, so each different trip’s sort setting is preserved

In that case, if your trips grow unbounded, you might want to consider persisting the sort options in your data model using SwiftData.


SwiftData and SwiftUI....I’ve found using @Model and @Query a reliable and easy to use approach for simple CRUD operations for a view. Is this the first go-to recommendation for implementing SwiftData in SwiftUI apps, when basics only are needed?

That’s correct. You can start with @Model and @Query. If you have questions when going deeper, consider asking in the Developer Forums.


🪟 SwiftUI Views & Layout

Is List backed by UICollectionView? What is the most “CollectionView”- like View for SwiftUI?

List is the most “CollectionView”-like View for SwiftUI.


Given a list of cards that are mixed media (list of text rows, images, charts). Should you use Collection, List, ScrollView, or something else?

It’s possible you would need all of them! For direct answers that are relevant to your project, I would make a post on the developer forums and link your code.


What happens with a ViewThatFits when no option actually fits? Does it render neither?

Are you talking about https://developer.apple.com/documentation/swiftui/viewthatfits ?

If a ViewThatFits option in SwiftUI does not find any view that fits within the specified size constraints, it typically renders nothing. This behavior is designed to conserve space by not displaying any content if no suitable view can be accommodated. The ViewThatFits view automatically adjusts its layout based on the available space, but when no view fits, it simply does not render. Check the link I sent you as the documentation is pretty good at how ViewThatFits works.

And check the sample code in the doc https://developer.apple.com/documentation/swiftui/viewthatfits
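One way to avoid the “nothing fits, nothing renders” case is to end with a child that is far more likely to fit. This is a minimal sketch with hypothetical names, not code from the session:

```swift
import SwiftUI

// ViewThatFits tries its children in order; a final option that can
// shrink makes it much more likely something is rendered.
struct AdaptiveLabel: View {
    let text: String

    var body: some View {
        ViewThatFits(in: .horizontal) {
            Text(text)                   // preferred: natural size
            Text(text)
                .minimumScaleFactor(0.5) // fallback: scales down to fit
                .lineLimit(1)
        }
    }
}
```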


What might cause Text to sometimes be truncated? Using .fixedSize(horizontal: false, vertical: true) seems to always fix this issue. Is it due to ambiguous layout?

It might not have a frame large enough to contain the content until you give it one. You can share the code and project on the forums to get input from engineers around the world. See here https://developer.apple.com/forums/
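To make the fix mentioned in the question concrete, here is a minimal sketch (the view and message are hypothetical):

```swift
import SwiftUI

// Without fixedSize, a Text given too little vertical space may truncate.
// Accepting the proposed width but insisting on the ideal height lets it wrap.
struct WrappingLabel: View {
    let message: String

    var body: some View {
        Text(message)
            .fixedSize(horizontal: false, vertical: true)
    }
}
```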


Silly question, for adaptive views why cannot I use ZStack over .overlay() or .background()?

I don’t know what you mean — can you provide an example? Oh sorry, use the forums to provide the example, as the answer will be better if we can see what you are trying to accomplish. Using ZStack over .overlay() or .background() for adaptive views isn’t necessary, because both ZStack and these modifiers are part of SwiftUI’s declarative layout system and can work seamlessly together to manage stacking and overlaying of views.

Please provide your sample at the apple forums https://developer.apple.com/forums/


What is the best way to make a view inside a ScrollView at least the content size of the ScrollView itself and then it’s scrollable if larger than that. (Ex. I want to centre some empty/error state text).

To achieve this effect, you’ll want to ensure that the view inside the ScrollView has a minimum height equal to the height of the ScrollView itself and grows if the content exceeds this size. Use GeometryReader once outside the ScrollView to capture the screen height available for scrolling, and once inside to measure the content height. This setup will ensure the text is centered and the view is scrollable if the content exceeds the default screen/scrollable area height.

I can’t write code in chat very well, but something like that?

GeometryReader { geometry in
    ScrollView {
        VStack {
            Spacer()
            Text("test test.")
                .multilineTextAlignment(.center)
                .padding()
            Spacer()
        }
        // Without a minimum height, the Spacers collapse inside a ScrollView.
        .frame(minHeight: geometry.size.height)
    }
}

🔷 Observable & State Management

Question about project from webinar

Why is Trip and Activity a class? I would have made them structs and only decorate DataSource with @Observable.

You can only place @Observable with a class (not struct). This is because the @Observable macro is designed specifically for reference types (class) to enable observation of property changes. For value types like struct, you should use @State in your view to manage data changes, as structs are designed to be copied rather than shared.


To rephrase my question from earlier: I understand that only classes can be annotated with @Observable. My question was more pointed towards why you chose class also for Trip and Activity, given that DataSource is already observable, so all its properties are. Or am I getting it wrong?

For the sample code, this allowed us to pass the objects to views as references instead of copies, so that they could be mutated in place (when editing a Trip, adding activities to a trip, or marking an activity as completed). This meant we didn’t need to mutate the whole model when a change was made.


Will @StateObject be updated to work with an @Observable class without needing to conform the class to Combine’s ObservableObject protocol? We can’t use @State because it leaks heap memory, i.e. it inits a new initial object on every View init. And using an optional @State that’s initialized in .onAppear is painful.

If the object is created in View init and set to a @State property via State(initialValue:), the new object instance is expected to be immediately freed if the view’s identity didn’t change, so leaking heap memory is not expected, and the leak might have been caused by other reasons like retain cycles. If there are concerns to create the @Observable object many times, an alternative would be to create the object higher up and pass it down to this view via a parameter in the init or via the environment.
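A minimal sketch of the “create the object higher up” alternative, assuming iOS 17’s @Observable macro; CounterModel and the view names are hypothetical:

```swift
import SwiftUI

@Observable
class CounterModel {
    var count = 0
}

struct RootView: View {
    // The object is owned here, so child views never re-create it.
    @State private var model = CounterModel()

    var body: some View {
        ChildView()
            .environment(model) // pass it down via the environment
    }
}

struct ChildView: View {
    @Environment(CounterModel.self) private var model

    var body: some View {
        Button("Count: \(model.count)") { model.count += 1 }
    }
}
```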


@Environment causes all views that contain it to update? So it is hard to make the views that use it stable? Any and every change causes all views to update ... Correct?

@Environment in SwiftUI causes views containing it to re-evaluate their state when the associated value changes.

If the environment value changes frequently or is read across the app, consider scoping the state directly within the relevant view hierarchy to encapsulate logic and data handling, and use conditional rendering to avoid unnecessary re-renders.

Yes, changes cause those views to update, but while @Environment provides a powerful mechanism for data sharing, manage its usage carefully to maintain UI stability and responsiveness. By leveraging SwiftUI’s declarative nature and state management capabilities, you can minimize unnecessary re-renders and ensure a smooth user experience.

Its main benefit is sharing state objects accessed by multiple views.


Can you subclass an @Observable object so that parent and child would have the macro? Example:

@Observable
class Trip {}

@Observable class ExtendedTrip: Trip {}

Are you talking about the sample, or in general in SwiftUI? You can subclass an @Observable class in Swift to create a custom observable object that can be used across the view hierarchy. This allows you to encapsulate your data and logic in a way that can be observed by any views that depend on it.

This pattern allows you to encapsulate your data and logic in a reusable observable object that can be shared across different views in your SwiftUI app. Like this?

@ObservedObject var observableObject: CustomObservableObject

In SwiftUI lists, should item model be value types (struct) and rely on parent ViewModel updates, or reference types (Observable/ObservableObject) so each row can update independently? What’s the recommended trade-off for performance and architecture?

This question is being answered in the current session. Please tune into the live Data Flow session to learn more.

The answer is… it depends! Use a struct when your data is a simple value, and use a class when your data needs reference semantics (shared, mutable state). When using a class, you typically make it Observable, so a change to the data can update related SwiftUI views.


I want to ask a quite basic question. If we are creating a model we should always be using class instead of struct? And there is no such thing as “View Model” then?

You normally use a struct when your data is a simple value, and a class when your data needs reference semantics (shared, mutable state). When using a class, you typically make it Observable so a change to the data can trigger relevant SwiftUI updates.


I have an ObservedObject with Published Properties associated with a View. This is extended into 3 ObservedObjects conforming to a common Protocol. But the properties of the protocol would actually be Published Properties, which is not possible? Can we define a Protocol with such properties?

Swift doesn’t allow adding a stored property in a protocol extension, so if you intend to add @Published properties to a protocol extension so that the types conforming to the protocol automatically gain the properties, you can’t do so. You might consider using an @Observable class instead.


Could you explain the difference and why using let and var inside Views. It looks like with let the whole View is redrawn when the value change in the caller View.

When developing SwiftUI views, understanding the differences between let and var is crucial for efficient view updates and correct behavior. let creates a constant value that cannot be modified after initialization. var allows a property to change over time, triggering view updates.

Use let for stable values that do not change during view updates, ensuring predictable view behavior. Dynamic data in SwiftUI can be managed using var with @State for interactive or changing data, enabling reactive UI updates.

Mastering let and var in SwiftUI views is crucial for efficient, responsive user interfaces: use let for immutable data and var with @State for mutable state.

I would recommend posting this question to the forums if the basics of let and var in views aren’t enough detail for you.

Visit Apple Developer Forums here: https://developer.apple.com/forums/


🚏 Navigation

How did you migrate from UIKit to SwiftUI, specifically in navigation? Did you have a coordinator pattern?

It depends on what exactly you’re trying to achieve here. The coordinator pattern works as one way to handle navigation, but it’s not required. The main thing to note is that if you’re in a UIKit UINavigationController, you will want to use the standard push/present calls and present a UIHostingController; inside a SwiftUI context, you will want to use SwiftUI presentation. Mixing the navigation stacks can have unintended consequences. If a coordinator pattern helps manage that, it works perfectly, but you can also just call the views directly, depending on the complexity of the UI and what fits best.


Migrating from UIKit to SwiftUI has two major difficulties, and they are not on View, it’s navigation and database. Do you have a presentation or a study case for this? How to move from segue navigation to SwiftUI? How to move from Core Data context to Swift Data?

A great video for that and resource I like is: https://developer.apple.com/videos/play/wwdc2023/10189/

Migrating from UIKit’s segue-based navigation to SwiftUI’s navigation system involves understanding the differences between the two approaches and adapting your navigation logic accordingly.

To migrate to SwiftUI, you’ll need to create a SwiftUI view that represents the same flow. Instead of using a navigation controller, you’ll use SwiftUI’s navigation views.

Also this is my go to https://developer.apple.com/documentation/swiftui/building-a-document-based-app-using-swiftdata

Bring Core Data to SwiftUI: https://developer.apple.com/videos/play/wwdc2021/10017/

And remember to model your data https://developer.apple.com/videos/play/wwdc2023/10195/

I am not an expert in SwiftData, so I would recommend asking that as another question, being specific about what you want to migrate, for our expert to pick up.


How can we best provide async navigation links? This seems to be a fundamental requirement. I built a custom built AsyncNavigationLink that overlays a button on a NavigationLink, triggers an async call, overlays a ProgressView(). The async call triggers the NavigationLink. Is this the best way?

NavigationLinks themselves can’t be responsible for asynchronous handling. The destination view can have a ProgressView and react to asynchronous changes to your model. But since NavigationLink is a view itself, it needs to be processed on the main actor. If you want to present the ProgressView before moving to the new view, you can make a Button that triggers the asynchronous work, then update your navigation path (set with NavigationStack(path: $path)) when it finishes. The .navigationDestination(for:) modifier then decides which view to move to. Not sure if this entirely answers your question.
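A rough sketch of that Button-plus-path approach; the one-second sleep stands in for your real async call, and all names here are hypothetical:

```swift
import SwiftUI

struct AsyncNavigationExample: View {
    @State private var path = [String]()
    @State private var isLoading = false

    var body: some View {
        NavigationStack(path: $path) {
            Button("Load details") {
                isLoading = true
                Task {
                    // Placeholder for your real asynchronous work.
                    try? await Task.sleep(for: .seconds(1))
                    path.append("details") // triggers navigation when done
                    isLoading = false
                }
            }
            .overlay { if isLoading { ProgressView() } }
            .navigationDestination(for: String.self) { value in
                Text("Destination: \(value)")
            }
        }
    }
}
```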


Navigation in Swift is one of the topics I’ve struggled with for a while. Whether using NavigationDestination or building your own Router to navigate. What is your recommendation for doing such, especially when you have multiple views that you want to push? Any suggestions?

The developer website has tons of documentation on recommended navigation styles: See https://developer.apple.com/documentation/swiftui/bringing-robust-navigation-structure-to-your-swiftui-app and https://developer.apple.com/documentation/swiftui/navigation


🛠️ UIKit & SwiftUI Integration

HostingConfiguration or HostingController ? On TableView and CollectionView , what’s best way to wrap SwiftUI code in cells? Is there difference between hosting config and hosting controller?

For wrapping a SwiftUI hierarchy in UICollectionViewCell or UITableViewCell objects, UIHostingConfiguration is the intended class. Please see the documentation for details: https://developer.apple.com/documentation/swiftui/uihostingconfiguration/ UIHostingController is intended to be used more broadly for SwiftUI views in general.


What is the best way to integrate SwiftUI into an existing UIKit App, keeping the performance of SwiftUI Previews high? I can use Swift Package Manager to integrate packages that contain SwiftUI components into the project. Is this the best way or are there simpler paths to go for best speed?

To use SwiftUI views in your UIKit app, see the UIKit integration documentation: https://developer.apple.com/documentation/swiftui/uikit-integration


I’d like to re-ask the question by Joshua Arnold about resizing with UIViewRepresentable, as the link provided does not give an answer as to how to animate the size change

The reference is at the bottom of the linked page, see Adding UIKit to SwiftUI View Hierarchies https://developer.apple.com/documentation/swiftui/uiviewrepresentable#Adding-UIKit-views-to-SwiftUI-view-hierarchies

You can also post your code, images and more on the forums to receive direct help https://developer.apple.com/forums/


When using UIViewRepresentable, is it possible for the UIView to internally resize itself, and have SwiftUI animate the change in size? If I call invalidateIntrinsicContentSize() on the view, SwiftUI resizes the view immediately without an animation.

You must communicate size changes back to SwiftUI state for animations to work. For information on how, see “Adding UIKit views to SwiftUI view hierarchies”: https://developer.apple.com/documentation/swiftui/uiviewrepresentable


What is Apple’s recommended approach for detecting when a UIHostingConfiguration / ContentView changes size internally? If I update a UIHostingConfiguration with a larger view, the view will just truncate/clip instead of bubbling up to UIKit to resize itself.

When working in a hybrid UIKit and SwiftUI environment, SwiftUI views may change size internally without automatically updating their containing UIKit views, because SwiftUI may not notify UIKit of size changes beyond the initial setup. To address this, you can use SwiftUI view modifiers and UIKit mechanisms to propagate size changes.

Use GeometryReader to read the current size of a SwiftUI view and communicate changes back to UIKit.

I would personally use a coordinator to listen for size preference changes and update the UIKit view’s size accordingly.

In the UIKit view controller or view, observe those changes and adjust the layout accordingly.

Visit Apple Developer Forums here: https://developer.apple.com/forums/


🔷 ScrollView & Visibility

Is there any further documentation or reliable ways to know if something is actually visible on screen? Based on our experience, onAppear behaves differently for elements of a VStack in a ScrollView, a List, and a LazyVStack.

Please check out the .onScrollVisibilityChange modifier.
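A small sketch of how that modifier might be used, assuming iOS 18’s onScrollVisibilityChange; the threshold and row content are arbitrary:

```swift
import SwiftUI

struct VisibilityExample: View {
    var body: some View {
        ScrollView {
            LazyVStack {
                ForEach(0..<100, id: \.self) { index in
                    Text("Row \(index)")
                        .frame(height: 60)
                        // Fires when at least half of the row
                        // enters or leaves the visible area.
                        .onScrollVisibilityChange(threshold: 0.5) { isVisible in
                            print("Row \(index) visible: \(isVisible)")
                        }
                }
            }
        }
    }
}
```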


Is there a suggested way to determine the difference between .onAppear being called in a ‘forward’ transition vs a ‘return’ transition? i.e. list displayed vs returning to list from details view.

Yes, it is possible to differentiate between .onAppear being called during a forward transition versus a return transition in SwiftUI, though it requires some manual state management, since SwiftUI itself does not provide a built-in distinction for these cases directly within .onAppear. Here is some sample code; feel free to follow up with more details about your situation:

// Requires a state property: @State private var appearedOnce = false
.onAppear {
    if appearedOnce {
        print("Main View reappeared (back navigation)")
    } else {
        print("Main View appeared (forward navigation)")
        appearedOnce = true
    }
}

If I have a database with 100k+ rows, how do I manage paging/infinite scroll in SwiftUI’s List, to manage memory appropriately? Similar to UITableView’s prefetch handler.

Similar to the VStack and HStack shown on stage, there are LazyVStack and LazyHStack, which automatically allocate and deallocate views as they are needed on screen. For more info see “Creating performant scrollable stacks”: https://developer.apple.com/documentation/swiftui/creating-performant-scrollable-stacks
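One common way to approximate prefetching on top of a lazy stack is to trigger a load when the last loaded item appears. This is only a sketch; the item data and loadMoreItems() are hypothetical stand-ins for your database paging:

```swift
import SwiftUI

struct InfiniteList: View {
    @State private var items = Array(0..<50)

    var body: some View {
        ScrollView {
            LazyVStack {
                ForEach(items, id: \.self) { item in
                    Text("Item \(item)")
                        .onAppear {
                            // When the last loaded item becomes visible,
                            // fetch the next page.
                            if item == items.last {
                                loadMoreItems()
                            }
                        }
                }
            }
        }
    }

    private func loadMoreItems() {
        // Hypothetical: replace with your database page fetch.
        let next = items.count
        items.append(contentsOf: next..<(next + 50))
    }
}
```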


🦋 Animation & Effects

For the symbolEffect(_:options:value:) method for SF symbols, the animation only runs when the value changes. Is there a way to implement this behavior (only run something once for each unique value) idiomatically in SwiftUI?

Depending on the use case, this might not require explicit treatment to run an animation (e.g. the animation(value:) modifier might be all you need). But if you do need to perform side effects based on value changes, onChange(of:...) might be right to reach for.


I’ve added an animation to a Button but when it’s placed inside a NavigationStack’s ToolbarItem, the animation doesn’t run. When the Button is used elsewhere, the animation works as expected. Can you share how to use this animating Button inside a .toolbar?

This is a good question for the forums. Given a Button inside a .toolbar { } modifier, we’d need to see a fuller code sample to know things like where the animation is declared.


How can I start an animation after another animation? (without using hard coded durations)

A KeyframeAnimator or PhaseAnimator can be used for more complex animations! In KeyframeAnimator, for example, you can have separate steps in a KeyframeTrack, and separate tracks in a KeyframeTimeline. Helpful docs: https://developer.apple.com/documentation/swiftui/controlling-the-timing-and-movements-of-your-animations https://developer.apple.com/documentation/swiftui/keyframetimeline
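As an illustration, the phaseAnimator modifier (the PhaseAnimator counterpart mentioned above) steps through phases one after another without hard-coded delays; the phase values and animations here are arbitrary:

```swift
import SwiftUI

struct ChainedAnimation: View {
    var body: some View {
        Text("Hello")
            .phaseAnimator([0, 1, 2]) { view, phase in
                view
                    .scaleEffect(phase == 1 ? 1.5 : 1.0)
                    .opacity(phase == 2 ? 0.3 : 1.0)
            } animation: { phase in
                // Each phase can use its own timing; the next phase
                // starts when the previous animation finishes.
                switch phase {
                case 1: .spring(duration: 0.4)
                default: .easeInOut(duration: 0.3)
                }
            }
    }
}
```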


⛏️ Modifiers & Customization

Is it possible to write modifier functions that can configure a custom view property even if it is contained in other container views? This would be similar to how .font works, in that it can be applied to a view and any contained Text will be reconfigured?

One possible way to achieve this is to use a type such as Group that can create a collection of views, such that its modifiers apply to all of its member views: https://developer.apple.com/documentation/swiftui/group
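A tiny sketch of that Group behavior (view and text content are arbitrary):

```swift
import SwiftUI

struct GroupedText: View {
    var body: some View {
        Group {
            Text("First")
            Text("Second")
        }
        // Modifiers applied to the Group apply to each member view.
        .font(.headline)
    }
}
```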


UIKit is inheritance-driven, so it is possible to subclass an UIButton, for example. The same is not possible with SwiftUI. What is the recommended alternative for a similar approach (mainly to customize appearance)? Style, viewModifiers, custom view, something else?

Custom view modifiers are one way in which you can reuse multiple view modifiers across different views. You can even extend the View protocol with a function that applies your custom modifiers. Check out “Reducing view modifier maintenance” for how to do that.
https://developer.apple.com/documentation/swiftui/reducing-view-modifier-maintenance
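A minimal sketch of that pattern; CardStyle and cardStyle() are hypothetical names:

```swift
import SwiftUI

struct CardStyle: ViewModifier {
    func body(content: Content) -> some View {
        content
            .padding()
            .background(.quaternary, in: RoundedRectangle(cornerRadius: 12))
    }
}

extension View {
    // Convenience so call sites read like a built-in modifier.
    func cardStyle() -> some View {
        modifier(CardStyle())
    }
}

// Usage:
// Text("Trip to Tokyo").cardStyle()
```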


Is it possible to add a view modifier for the text width and set it to expanded as well as making it italic? The following code does not work, and italic is only applied if the width is .standard:

.font(.caption2.width(.expanded).italic())
.fontWeight(.bold)
.frame(maxWidth: width, alignment: .leading)
.italic(italic)

It is possible to combine a text width and italic styling in SwiftUI. The issue with your current code is that the font modifiers don’t support applying the width and italic variants in the combined way you’re attempting. Instead, you can create a custom view modifier to achieve this behavior.

If I put it together, it should be something like:

content
    .font(.caption2)
    .fontWeight(.bold)
    .frame(maxWidth: width, alignment: .leading)
    .italic(italic)

Disclaimer: I didn’t test it, but I think that’s the idea. It lets you adjust the text’s width and apply italic styling independently within a single view modifier.


What’s the best approach to building modifiers in SwiftUI? I am specifically interested in how to handle the inert state. Is it better to apply the modifier conditionally depending on whether it’s in the inert state or not, or is it better to deal with the state inside the modifier implementation?

Generally, try to avoid conditionally applying modifiers (specifically when the condition can change at runtime). It is better to handle an “inert” state from within the modifier. If you have a specific question about building a modifier that handles some state, feel free to send a question for that as well!


🔎 Performance & Debugging

I’m trying to debug a log about “Attribute Graph Cycle”. I’ve tried instruments, but it’s hard to identify the source. In my case, I think it has to do with frames/geometry negotiating sizes. I don’t notice a major slowdown - Is this even a problem?

Check if you have a custom layout or onGeometryChange that could be affecting your View’s sizing. Instruments can help show the catalyst of an update (through the Cause-and-Effect graph). Hitting a cycle may lead to unexpected behavior (aside from just performance issues/hangs), so it is still important to identify if you can. If you don’t see anything in your program that could be causing a loop, consider filing a bug report on Feedback Assistant!


Is there a way to see SwiftUI’s dependency graph for a view, so we can figure out what’s causing excessive updates etc.? Is Instruments the best way to do this?

Instruments is the best way to figure out what is causing excessive updates and other issues with performance. Check out “Optimize SwiftUI performance with Instruments” for a lot of great information about SwiftUI and how to use Instruments to profile and optimize your app.


Can we see the view’s dependency graph in Xcode? Via debugging?

If you profile using the SwiftUI template in Instruments, it shows you information such as long view body updates, the cause-and-effect graph, update groups, etc. Please review the “Understanding and improving SwiftUI performance” article to learn how to use Instruments to understand and debug your SwiftUI views.


When is it better to extract a view into a separate ViewBuilder function/computed property versus creating a separate struct View? What are the performance implications of each approach?

Extracting your views into separate view structs is a good way to reduce dependencies to only those that matter to the view. This makes your code easier to read, and your dependencies become apparent at the use site. Ultimately you want to use your best judgment here, because not every dependency deserves to be scoped to a separate view struct.

When views are scoped to separate view structs, try reducing view values to only the data they actually depend on. This can mean fewer updates, which implies better performance when data changes in your app.


During SwiftUI view reordering or layout updates, what typically causes temporary disappearance of views, and how can state consistency be preserved during rapid UI changes?

Having views temporarily disappear isn’t typical. Check out the tips in “Optimize SwiftUI performance with Instruments” to determine if there are some performance issues or tuning that you can do to improve the view. If that doesn’t help, please post your specific issue on the Developer Forums for more assistance.


One of the great things I find in working with SwiftUI is that you can build most of your app staying in Xcode with Preview. I loved the last section about using print in view, but it is only at runtime. Is there a library that let you use a debug overlay modifier to display debug data in a view?

You can output debug values into a Label, and there are many views where such output works. NSLog is the way to go for the console. You can also add a GeometryReader within a view’s .overlay() to display layout data.

SwiftUI provides convenient ways to display debug data in place.

There is a video that may help you: WWDC23: Debug with structured logging

I would recommend this doc as well https://developer.apple.com/library/archive/documentation/UserExperience/Conceptual/AutolayoutPG/DebuggingTricksandTips.html


Are there tools for diagnosing layout issues? On macOS, my app behaves very strangely, growing the window arbitrarily even though subviews’ sizes aren’t changing.

Hi, you might find the following guide helpful: “Diagnosing issues in the appearance of a running app”.


🏡 Dismissal & Environment

What would be the difference between using @Environment(\.dismiss) and making it a binding when dismissing a sheet? Is there a performance benefit?

Both @Environment(\.dismiss) and a Binding can dismiss a sheet in SwiftUI, but they differ in functionality and performance. @Environment(\.dismiss) is a property wrapper that exposes the dismiss action for a view. It’s often used to dismiss views or sheets from within their own hierarchy, and it is efficient because it doesn’t create additional state or UI components; it simply exposes an existing action in the view hierarchy. A Binding involves defining presentation state and passing it in, which is useful when you need more control or are integrating with custom views that require explicit dismissal. Creating a binding has some overhead, but it’s usually negligible; the impact is minimal compared to other UI operations.

In summary, @Environment(\.dismiss) is efficient and straightforward for dismissing a presented view, while a binding provides more control and flexibility for custom dismissal logic. Performance differences are minimal, I think, and the choice depends on your application’s requirements.
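For reference, a minimal sheet using each approach; the view names are hypothetical:

```swift
import SwiftUI

// Approach 1: @Environment(\.dismiss) inside the sheet content.
struct SheetContent: View {
    @Environment(\.dismiss) private var dismiss

    var body: some View {
        Button("Done") { dismiss() }
    }
}

// Approach 2: pass the presentation Binding in explicitly.
struct BoundSheetContent: View {
    @Binding var isPresented: Bool

    var body: some View {
        Button("Done") { isPresented = false }
    }
}

struct HostView: View {
    @State private var showingSheet = false

    var body: some View {
        Button("Show") { showingSheet = true }
            .sheet(isPresented: $showingSheet) {
                SheetContent()
                // or: BoundSheetContent(isPresented: $showingSheet)
            }
    }
}
```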


🏛️ Architecture & Patterns

Is MVVM organization recommended for a SwiftUI project?

Different apps need different architectures. An app with heavily server-driven content should make architectural decisions specific to that design. An app focused more on displaying data locally (and might not be updating its content as frequently) should use an architecture that makes that use case easier to build for. SwiftUI is architecture agnostic to allow deliberate architectures that work for your app.

When passing in an action into a view is it better to use closure or function/method reference? Example:

SomeView(saveAction: { viewModel.performSave() }) 
vs
SomeView(saveAction: viewModel.performSave)

Great question, and thanks for providing the code. Whether to use a closure or a function/method reference when passing an action into a view depends on your specific use case and personal preference. Both approaches have advantages, and the choice might be influenced by factors like readability, performance, and Swift features. With a closure, you can capture specific variables from the surrounding context using a capture list if needed, providing more flexibility, and it allows quick inline modifications or additional logic without creating a new method elsewhere. A method reference has cleaner syntax, especially for simple method calls, improving readability, and is potentially slightly more performant since it directly references the function without additional closure overhead.


🐞 iOS Issues & Bug Reporting

Since iOS26, dismissing the keyboard stopped working as expected. If I have a TextField focused in a navigationDestination view or in the content view of a fullScreenCover, and then dismiss the view, the keyboard would sometimes leave a white bar at the bottom in the safeArea. Is this a known issue?

If you come across any visual issues while using SwiftUI, please feel free to file a bug report via Feedback Assistant. Thanks for letting us know about this! https://developer.apple.com/bug-reporting/


🎓 Learning Resources & Miscellaneous

What’s the best way to learn how to use Metal, particularly for visual effects rather than graphics?

This is just my approach, I like to take apart and study sample code. For Metal, there is an entire database of sample code here https://developer.apple.com/metal/sample-code/

Also, please check out the segment on Metal in this talk from WWDC24, “Create custom visual effects with SwiftUI”.


Love to learn more about SwiftUI and Metal. For video rendering, would the performance be faster using a custom shader with [[stitchable]] and SwiftUI, versus an NSView or MTKView?

It depends on the purpose! [[stitchable]] Metal shaders can be used to apply an effect to an existing SwiftUI view via the colorEffect(_:isEnabled:), distortionEffect(_:maxSampleOffset:isEnabled:), and layerEffect(_:maxSampleOffset:isEnabled:) modifiers. If you’re looking for a more flexible view that displays Metal objects (e.g. drawing your own shapes with vertex and fragment shaders), you should probably lean on MTKView. MTKView is provided by MetalKit and can be placed in an NSViewRepresentable or UIViewRepresentable. One is not strictly faster than the other, and you should choose whichever method matches your vision! MTKView: https://developer.apple.com/documentation/metalkit/mtkview SwiftUI Shader: https://developer.apple.com/documentation/swiftui/shader


What do your designers (in Apple) use to design for passing on to developers? Whenever I’ve had designs from Figma, Sketch, Azure, Miro, whatever, they aren’t true to platform and don’t correctly confine the designer to use native components.

This is a good question! It can be hard for design tools to match the fidelity of the look and feel of Apple platforms. Apple provides design resources for Sketch and Figma at https://developer.apple.com/design/resources/ and the materials are adjusted to resemble iOS as closely as possible.

In your partnership with developers, check in with the engineers and ensure that the designs are well understood before they are implemented. It also helps when everyone communicates using the same language, so we recommend that both developers and designers (and even other members of the team) familiarize themselves with the Human Interface Guidelines for the platform your app supports. The Human Interface Guidelines are available at https://developer.apple.com/design/human-interface-guidelines/


I had some trouble finding the entry point for sessions like “Code-along: Start building with Swift and SwiftUI” on the developer website. Is there an entry point where I can navigate through all the sessions, like “code along” and today’s or previous sessions?

Thanks for the question. I recommend starting with this video: https://developer.apple.com/videos/play/wwdc2021/10062, then a code-along is always good, like: https://developer.apple.com/videos/play/meet-with-apple/237/, and do not forget the what’s-new sessions to avoid using older API: https://developer.apple.com/videos/play/wwdc2024/10136/


As an absolute beginner, what would be the first go-to step to go for training? or documentation? on Apple.

A great place to learn how to develop for Apple is through Pathways (https://developer.apple.com/pathways/?cid=ht-pathways), your first step to creating for Apple platforms! For documentation, here is the top-level page: https://developer.apple.com/documentation/


Do you have any advice for someone with Flutter/Dart experience starting SwiftUI?

Thanks for the question. I think we got something similar from someone else, but welcome to Swift coming from Flutter! To start, I would recommend this video: https://developer.apple.com/videos/play/wwdc2021/10062/, then a code-along is always good, like: https://developer.apple.com/videos/play/meet-with-apple/237/, and do not forget the what’s-new sessions to avoid using older API: https://developer.apple.com/videos/play/wwdc2024/10136/ Welcome to Xcode! You are going to love it.


Any recommendations for resources (books, WWDC sessions, sample repos, or articles) on doing test-driven development with Swift/SwiftUI—especially patterns for testing view models/presenters and keeping UI code thin?

There are several WWDC sessions that may be beneficial to you: https://developer.apple.com/videos/play/wwdc2024/10195 and https://developer.apple.com/videos/play/wwdc2019/413


There are tons of modifiers; how are we supposed to memorize all of them, or even know that they exist? 😊

The Developer Documentation is always a great place to start: https://developer.apple.com/documentation/swiftui You can also look into bundling your commonly used modifiers into custom view modifiers; check out “Reducing view modifier maintenance”.


🔷 Wishlist App Specific (App from the webinar)

Where can I download the wishlist app xcode project?

Thank you for your question and attending the event. The sample code for “SwiftUI foundations: Build great apps with SwiftUI” will be made available on the Apple developer website sometime after the event concludes. Stay tuned!


Will the code for Wishlist app be made available?

Thank you for your question and attending the event. The sample code for “SwiftUI foundations: Build great apps with SwiftUI” will be made available on the Apple developer website sometime after the event concludes. Stay tuned!


Why is cardsize provided for TripCard in the example of the Wishlist struct presented in the motion presentation?

TripCard can be represented in different sizes, either as a compact card or as an expanded card with a larger width.


I noticed the Wishlist sample app has a customized font in the NavigationBar’s title. Curious how this was accomplished?

For a navigation bar title with styling, you can create a custom view containing a Text with styling modifiers, then wrap it in ToolbarItem(placement: .largeTitle) or ToolbarItem(placement: .title) on the .toolbar.


Is the Wishlist app MultiPlatform for iPhone, Watch, TV, Mac, etc?

Thanks for the question. The app is currently an iOS / iPadOS app. You can run it on macOS and visionOS in Designed for iPad mode.


💬 Text & Fonts

Any recommendations to have a more readable or idiomatic alternative to .frame(maxWidth: .infinity, alignment: .leading)?

Certainly! If you’re looking for a more readable or idiomatic alternative to .frame(maxWidth: .infinity, alignment: .leading), it depends on the context and the layout you’re working with. If you have a container view that naturally supports full-width content, you can let the container manage the layout. If you need more control over the frame, you can wrap it in a custom modifier with a descriptive name. Choose the method that best fits your design system and the constraints of your project. But I personally like adding a Spacer() into the view!
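For comparison, both of these produce leading-aligned, full-width text (a small sketch):

```swift
import SwiftUI

struct LeadingAlignedExamples: View {
    var body: some View {
        VStack {
            // The frame-based version from the question.
            Text("Hello")
                .frame(maxWidth: .infinity, alignment: .leading)

            // The HStack + Spacer alternative: the Spacer pushes
            // the text to the leading edge and fills the width.
            HStack {
                Text("Hello")
                Spacer()
            }
        }
    }
}
```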


I would like to follow up on the question of how to add a different font in the navigationTitle. While it is true that you can add a styled text to the modifier, the result will compile but not change its appearance, and shows an error: “Only unstyled text can be used with navigationTitle(_:)”.

You can create a ToolbarItem(placement: .title/.largeTitle) that is a Text styled with AttributedString.
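To make that concrete, here is a minimal sketch using the long-standing .principal placement (the .title and .largeTitle placements mentioned above may require a newer SDK; the font and titles are just examples):

```swift
import SwiftUI

struct StyledTitleScreen: View {
    var body: some View {
        NavigationStack {
            List {
                Text("Rome")
                Text("Kyoto")
            }
            .toolbar {
                // Replaces the default title with a styled Text,
                // centered in the navigation bar.
                ToolbarItem(placement: .principal) {
                    Text("Wishlist")
                        .font(.system(.title2, design: .serif).bold())
                        .foregroundStyle(.indigo)
                }
            }
        }
    }
}
```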


📐 Geometry & Sizing

What if a subview prefers a bigger HxW than what is offered by the parent view? How are the conflicts handled?

I’d recommend testing this out, then adding other modifiers to build intuition for SwiftUI’s layout system!

struct BiggerChild: View {
    var body: some View {
        VStack {
            Color.orange
                .frame(width: 200, height: 200)
                .opacity(0.3)
        }
        .frame(width: 100, height: 100)
        .background(.purple)
    }
}

You’ll find that the parent won’t clip the child, and the child will extend beyond the parent. From there, check out the clipShape() modifier.


Is there a canonical way to set .frame sizes that are dynamic to device sizes? For example, .frame(height: oneThirdOfDeviceHeight)? UIScreen? Geometry reader? Is there a preferred approach?

To set dynamic frame sizes for different devices, you can calculate dimensions based on the screen width and a predefined ratio to maintain aspect ratios. The GeometryReader approach is modern and powerful, defining views in terms of the available space. For simplicity and direct access, UIScreen or similar methods work well for basic dynamic sizing. For more complex layouts and better responsiveness in UIKit, Auto Layout provides flexible constraints that adapt to device sizes. Constants and computed properties help keep the code organized and maintainable.

I think I understand your question about frame usage, but please feel free to ask the question again with more details.
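As a small sketch of the GeometryReader route, here is a header sized to one third of the available height (the view name and colors are illustrative):

```swift
import SwiftUI

struct ThirdHeightLayout: View {
    var body: some View {
        GeometryReader { proxy in
            VStack(spacing: 0) {
                // Header takes one third of whatever the parent offers.
                Color.blue
                    .frame(height: proxy.size.height / 3)
                // Remaining two thirds for the content.
                Color.gray
            }
        }
    }
}
```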


I have the following:

LazyVGrid { 
  ForEach { 
    VStack { 
      Image() 
      Text() 
    } 
   } 
}

Now, because the text of each view differs (sometimes it takes 2 lines, sometimes 1 line and sometimes 3 lines), the views are no longer aligned on a baseline. How can I get the views aligned again at a common baseline?

It could be the way your LazyVGrid is laying out views. I think this would be a great question for the Developer Forums. I encourage you to post it there so we can more easily share code, and others with the same issue can find help. Visit the Apple Developer Forums here: https://developer.apple.com/forums/


If I have a view that’s 50px x 50px, and I want to display many of them in an HStack, evenly spaced so they are edge to edge, what’s the most performant way to achieve this? I’ve been using a Spacer() in between each, but wondering if that’s causing overhead.

The best way to find the most performant approach is to debug and profile your UI with Instruments, which has SwiftUI instruments as well. Using Spacer() between views in an HStack is a common approach, but if you’re concerned about performance due to a large number of views, you might consider a more direct calculation to distribute space evenly. Lately I have been a huge fan of GeometryReader: use it to get the available width of the parent view, calculate the total width occupied by the views and subtract it from the available width to get the total spacing needed, then divide this by the number of gaps (which is one less than the number of views).

But Spacer() is really good too; I would really recommend you make Instruments your best friend.

Visit Apple Developer Forums here: https://developer.apple.com/forums/
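As a sketch of an alternative to interleaved Spacer() views, each fixed-size tile can sit in an equally flexible slot, which keeps the distribution even without manual math:

```swift
import SwiftUI

struct EvenTileRow: View {
    var body: some View {
        HStack(spacing: 0) {
            ForEach(0..<5) { _ in
                // The outer flexible frame claims an equal share of
                // the row; the 50x50 tile centers inside its slot.
                Color.teal
                    .frame(width: 50, height: 50)
                    .frame(maxWidth: .infinity)
            }
        }
    }
}
```

If the first and last tiles should touch the container edges exactly, the outer slots can be given .leading and .trailing alignments instead of the default centering.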


Any performant alternatives to GeometryReader? I have to put a vertically centered text field with a helper text below it, but also add pull to refresh, so I have added a ScrollView with .refreshable. One alternative is to create a manual drag gesture, but can we achieve this using ScrollView?

This depends on what you’re trying to achieve. For example, if you’re trying to respond to scroll view changes, you should check out the onScrollGeometryChange, onScrollTargetVisibilityChange, onScrollVisibilityChange, and onScrollPhaseChange APIs.
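A sketch of one of these, assuming the iOS 18-era ScrollGeometry APIs, reacting to the vertical content offset without a GeometryReader:

```swift
import SwiftUI

struct OffsetAwareList: View {
    @State private var offsetY: CGFloat = 0

    var body: some View {
        ScrollView {
            LazyVStack {
                ForEach(0..<50, id: \.self) { i in
                    Text("Row \(i)")
                }
            }
        }
        .refreshable { /* pull-to-refresh work */ }
        // Transform the scroll geometry into the value you care about;
        // the action fires only when that derived value changes.
        .onScrollGeometryChange(for: CGFloat.self) { geometry in
            geometry.contentOffset.y
        } action: { _, newValue in
            offsetY = newValue
        }
    }
}
```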


📼 Images & Media

I’d like to have a zoomable and draggable image within a scrollview, like you would see in the photos app. Can this be done with SwiftUI?

We can provide information on how such controls generally work: you can achieve a zoomable and draggable image within a ScrollView in SwiftUI using gesture modifiers such as DragGesture.

.gesture(
    DragGesture()
        .onChanged { value in
            print("Dragging at: \(value.location)")
        }
        .onEnded { value in
            print("Drag ended at: \(value.location)")
        }
)

On a related note, ScrollView(.horizontal, showsIndicators: false) creates a horizontal ScrollView without showing the scroll indicator bar.
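Putting those pieces together, here is a rough sketch of a zoomable, draggable image; a production version would accumulate the final scale and offset in onEnded and clamp the values:

```swift
import SwiftUI

struct ZoomableImage: View {
    @State private var scale: CGFloat = 1
    @State private var offset: CGSize = .zero

    var body: some View {
        Image(systemName: "photo") // stand-in for your own image
            .resizable()
            .scaledToFit()
            .scaleEffect(scale)
            .offset(offset)
            .gesture(
                MagnificationGesture()
                    .onChanged { value in scale = value }
                    .simultaneously(
                        with: DragGesture().onChanged { value in
                            offset = value.translation
                        }
                    )
            )
    }
}
```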

If I present a series of Image views in a LazyVStack -- Image views that need to be resized -- how can I make sure all images get resized to the same width/height (a common denominator) and don’t end up looking different?

To ensure consistent sizing of Image views in a series, explicitly set frame modifiers on each view to define a specific width and height for all images. Decide whether you want to maintain the aspect ratio by using modifiers to control how images fit within their frames. This makes images scalable, sets a fixed size for each image, and clips images that exceed the frame bounds for neatness.

Great question for sure! When displaying a series of Image views in a LazyVStack, make sure they all share the same dimensions so you do not need to set constraints.

Visit Apple Developer Forums here: https://developer.apple.com/forums/
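A sketch of those modifiers combined, giving every image in the stack one common size (the 300x200 frame and the property name are arbitrary):

```swift
import SwiftUI

struct PhotoFeed: View {
    let imageNames: [String] // asset catalog names, illustrative

    var body: some View {
        ScrollView {
            LazyVStack {
                ForEach(imageNames, id: \.self) { name in
                    Image(name)
                        .resizable()      // allow the image to scale
                        .scaledToFill()   // keep aspect ratio, fill the frame
                        .frame(width: 300, height: 200) // common size for all
                        .clipped()        // trim the overflow for neat edges
                }
            }
        }
    }
}
```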


🎨 Colors & Themes

How do you create themes in a SwiftUI app? A theme that can hold typography, colors, spacing. How do we handle a case with multiple themes? And a case where the theme is fetched from the backend?

You can create a file with all your application constants, and use conditional logic in the View to select which constant you need on a view-by-view basis, or pick up on environment values such as light mode, dark mode or orientation.

If you need help, make a post on the forums!
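One way to sketch that constants file is as a custom environment value, so themes can be swapped per subtree; every name here is hypothetical, including a theme you might decode from a backend payload:

```swift
import SwiftUI

// Holds typography, colors and spacing; could be decoded from JSON.
struct Theme {
    var accent: Color
    var titleFont: Font
    var spacing: CGFloat

    static let standard = Theme(accent: .blue, titleFont: .title, spacing: 12)
    static let contrast = Theme(accent: .orange, titleFont: .title.bold(), spacing: 16)
}

private struct ThemeKey: EnvironmentKey {
    static let defaultValue = Theme.standard
}

extension EnvironmentValues {
    var theme: Theme {
        get { self[ThemeKey.self] }
        set { self[ThemeKey.self] = newValue }
    }
}

struct ThemedTitle: View {
    @Environment(\.theme) private var theme

    var body: some View {
        Text("Trips")
            .font(theme.titleFont)
            .foregroundStyle(theme.accent)
            .padding(theme.spacing)
    }
}

// Switching themes for a whole subtree:
// ThemedTitle().environment(\.theme, .contrast)
```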


If you use a non-semantic color how do you make sure it works in dark mode?

I’d consider checking the colorScheme environment and using an appropriate color based on the current dark/light mode. See the following doc for more details.


For designing, is there a best practice here? For example, the title in the trip detail is white, but what if the photo has white where the text is going to render? Or is there a way to make it adapt to the background to create the right contrast?

If the title sits on top of scrollable content, for example a toolbar or safe area bar, the color automatically adapts to the content beneath it.


Maho mentioned a neon color. Since it is not part of the system colors, how do we create it? Do we have to provide the light, dark and increased contrast versions?

How to add a color to an Xcode project’s asset catalog is discussed in the Xcode documentation. After adding a color to your asset catalog, you can set up the color variants and also set the hex value of the color. The Wishlist sample will be made available on the Apple developer website sometime after the event concludes; you can download it and look into the details of the color.


What impact does use of Color have on battery life? How can we override dark mode?

Have you used Instruments from Xcode? I think colors with higher saturation tend to require more brightness to be perceived clearly, which can drain a battery faster. The battery-impact instrument can give you clearer answers comparing using a color versus avoiding it. Remember that while overriding dark mode can help with visibility and comfort in bright environments, it may increase battery consumption; Instruments is your best friend here.

Visit Apple Developer Forums here: https://developer.apple.com/forums/


📊 Charts & Performance

Following up on my Chart question. The chart is showing live data showing over time. I tested with instruments and tracked it down to the chart that is causing it. My current plan is to use Canvas instead for better performance but wanted to confirm I’m not missing something with Swift Charts

If you have found that Swift Charts is the performance bottleneck, then yeah, drawing with Canvas and your own code may help, because that way you only draw the content specific to your needs. But again, we would be interested in looking into more details about your use case. I suggest that you ask this question in our Developer Forums with your code snippets and analysis so SwiftUI folks can follow up from there. Thanks.


I have been trying to test Swift Charts with some sample tests. My use case requires updating the chart continually (at least a few times a second). For 200 data points I’m seeing around 40% CPU usage which is a lot for my use case where I might have multiple charts displaying at the same time.

For a performance issue, I’d typically start with using Instruments to profile the app and figure out the performance bottleneck, and then see how it can be improved. The following doc may be worth reading: “Understanding and improving SwiftUI performance”. In your case, you might also consider whether “at least a few times a second” is really needed and brings a lot of value. Refreshing a chart that frequently does consume resources.


🚟 Platform Specific

I have a SwiftUI view which displays a List, with a few labels and an image. The list row height should be consistent for each item. When the list items grows large, the same code performs just fine on iOS, but starts lagging on macOS (26.2). What explains this?

It is difficult to say for a specific app what would cause the difference. In general, first use Instruments to profile the app and optimize its performance. Check out “Optimize SwiftUI performance with Instruments” for detailed information and tips. If you’re still having issues, please post on the Developer Forums for more assistance.


When building a native Mac app using SwiftUI how do I debug the title bar looking different on my machine (built via Xcode or exported) vs a testers machine even when on the same macOS version (26.2)? Ex: The title bar being transparent as expected in some places or just greyed at the top instead.

Hello, you can troubleshoot visual inconsistencies across different devices by making sure all of your Appearance settings are the same between them, e.g. Dark Mode, Liquid Glass, Accessibility display settings, etc. Also make sure that the content underneath the titlebar is consistent. If you are still experiencing issues despite this (or the appearance is not what you would expect), please file a bug report via Feedback Assistant: https://developer.apple.com/bug-reporting/


What is the best way to make an app that works for all the devices (iPhone, iPad, Mac, TV, Watch, Vision Pro)? Making a single “empty” project in Xcode then adding each platform target, or a separate Xcode project per platform?

Create a new Multiplatform app. Then in the target list in Xcode, add a new target; Add a Watch app that is a companion to the existing app target. In the Watch app target properties, check or uncheck the box for “Supports Running without iOS App Installation” depending on your app’s requirements.


In macOS, SwiftUI provides some default menus. How can I specify keyboard equivalents for default menu items without having to replace the implementation, specifically for Edit->Delete? Even if I provide an implementation, it seems I’m forced to replace all of the pasteboard commands, not just Delete.

You can find a full guide on configuring the Mac menu bar here: https://developer.apple.com/documentation/swiftui/building-and-customizing-the-menu-bar-with-swiftui


Is SwiftUI ideal for extensions?

Hello, if you’re interested in building app extensions with SwiftUI, it’s a snap when paired with WidgetKit!
https://developer.apple.com/documentation/swiftui/app-extensions


Can SwiftUI be used to develop a 2D game? If not what is the suitable apple framework to use? (SwiftUI or SpriteKit or ARKit or something else?)

SpriteKit is a framework you could use to add characters to your game. For setting up the game and its logic, I would start with GameKit! See this documentation to get started: https://developer.apple.com/documentation/GameKit/creating-real-time-games

And ARKit is if you want to make an immersive 3D app using the device camera.


Will the custom shape overlay MKMultiPolygon from UIKit MapKit be included in SwiftUI MapKit? It’s a major polygon type overlay for map engineering and developers currently have access to only Polygon,Circle,Polyline etc… any graceful suggestions in the mean time?

Depending on how you’re currently using MKMultiPolygon, you could use a ZStack with multiple Polygons in your Map, and apply your style modifiers to the ZStack to apply them to all of the Polygons in the ZStack.


🎹 TextField & Keyboard

Is there any way to present an editable TextField that never presents a keyboard or responds to a USB keyboard? I’m building a calculator app with its own keypad that appends to an expression. But I’d like to give the user the ability to select parts of the expression or place the insertion point.

SwiftUI doesn’t provide a way to replace the system-provided keyboard with an input view. I’d suggest that you file a feedback report for that. As a rescue, you can use UITextField + UIViewRepresentable (assuming iOS / iPadOS), and give the UITextField instance an inputView.
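A rough sketch of that rescue path; setting an empty inputView suppresses the system keyboard while keeping selection and the insertion point working (the type name is illustrative):

```swift
import SwiftUI
import UIKit

// A text field driven only by a custom on-screen keypad.
struct KeypadTextField: UIViewRepresentable {
    @Binding var text: String

    func makeUIView(context: Context) -> UITextField {
        let field = UITextField()
        field.borderStyle = .roundedRect
        // An empty input view replaces the system keyboard with nothing,
        // so tapping only moves the cursor and selection.
        field.inputView = UIView()
        return field
    }

    func updateUIView(_ field: UITextField, context: Context) {
        field.text = text
    }
}
```

Note that a hardware keyboard may still deliver key events; intercepting those is a separate exercise.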


TextField - inline formatting. Currently, the TextField format API doesn’t support inline formatting. What is the recommended way to format inline forms in SwiftUI text fields?

I would start by looking at the prompt of a TextField, see: https://developer.apple.com/documentation/swiftui/textfield/init(_:value:format:prompt:)


🛡️ Security

Where is the best place to keep access tokens and refresh tokens for handling the most secure session in Swift apps?

Use the Keychain to store tokens and other secrets. See the API reference for Keychain items for more information.
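A minimal sketch of storing a token as a generic-password Keychain item; error handling and update-if-exists logic are omitted, and the account name is illustrative:

```swift
import Foundation
import Security

func storeToken(_ token: Data, account: String) -> OSStatus {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: account,
        kSecValueData as String: token,
        // Item is readable only while the device is unlocked.
        kSecAttrAccessible as String: kSecAttrAccessibleWhenUnlocked,
    ]
    return SecItemAdd(query as CFDictionary, nil)
}

// Usage (illustrative):
// let status = storeToken(Data("abc123".utf8), account: "refresh-token")
```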


🏎️ Concurrency

Say the code is already on Main queue, I used a check Thread.isMainThread and then dispatch on main if false, if true, execute as it is. I want to achieve similar behaviour using modern concurrency. Is it possible? Task {@MainActor} will always behave like main.async

You can await MainActor.run to execute a synchronous piece of code on the MainActor, which should avoid suspending execution if you are already on the MainActor. If you need to know specifically which actor you are on, you can use the #isolation function argument to introspect the current execution context (it returns a value of type isolated (any Actor)?). When you are nonisolated, this will be nil. Conversely, when you are isolated, and on the MainActor, it will be .some(MainActor).
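Sketched out, with the #isolation piece assuming a recent Swift toolchain:

```swift
// Executes the closure synchronously when already on the main actor,
// otherwise hops to it first: the modern analogue of the
// Thread.isMainThread check-and-dispatch pattern.
func refreshUI() async {
    await MainActor.run {
        // synchronous main-actor work here
    }
}

// Introspecting the current isolation via a defaulted argument.
func logIsolation(isolation: isolated (any Actor)? = #isolation) {
    if isolation === MainActor.shared {
        print("running on the MainActor")
    } else if isolation == nil {
        print("nonisolated")
    }
}
```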


🖌️ Icons & App Design

Can I build an app icon from scratch in the Icon Composer app, or do I need to design the icon in Figma/Sketch first and then import it to Icon Composer to add the Liquid Glass design?

Hi, have you checked out the guide Creating your app icon using Icon Composer? You can download an App Icon Template to create the layers in Figma and Sketch, before importing to Icon Composer. Template available here: https://developer.apple.com/design/resources/


I am having trouble getting the new app icon process to work. My icons in the assets catalog work fine, but icons that I create in IconMaker do not appear at runtime after I add them to my project, whether iOS or macOS.

Hi, you can review the Creating your app icon using Icon Composer article. It’s a great resource to learn about using Icon Composer and adding your Icon Composer file to your Xcode project.


🎬 Toolbar & Actions

Is there a heuristic for the number of toolbar actions to show before moving them into a “more” menu in the toolbar?

You can’t predict the number of toolbar actions that can be shown before they will be displayed in a “more” menu. This will be highly dependent on the size of the device, the size of the window or view, and the size of the toolbar items.


On iPad, DocumentGroup’s title/rename UI gets clipped when I fill .topBarLeading, .principal, and .topBarTrailing (trailing is an HStack). Any recommended toolbar structure to preserve the system rename affordance without dropping .principal or using a menu?

I think this would be a great question for the Developer Forums. I encourage you to post this question there so we can more easily share code and others with the same issue can find help for the same issue. Visit Apple Developer Forums here: https://developer.apple.com/forums/


Apple encourages us to use the navigation bar, but some Apple apps are starting to use more controls in the bottom of the screen, i.e. Safari. Should we focus on using the navigation bar or try to add buttons at the bottom. Thank you!

I would encourage you to find what works for you. Recommendations are one thing, but we do not know your requirements; if the navigation bar fulfills them, use it.

Custom buttons have their merits too and can be effective depending on the context and design goals of your app.

The navigation bar is a traditional element in iOS apps and is widely recognized by users. It typically occupies less screen space at the top, providing more room for content than bottom buttons.

Consider how users naturally interact with the device. I don’t think this is what you wanted to hear, but personal requirements are hard to fulfill in one quick answer. I would recommend the forums if you have examples of what you’re trying to accomplish.


🔷 Miscellaneous

Is SwiftUI good at managing memory within Messages or keyboard extensions? Lots of bandwidth issues causing crashes when loading a lot of data. The host app is fine… anything helps!

Extensions typically have a hard memory limit. The system doesn’t allow extensions to consume more memory than the limit. Simply switching to SwiftUI doesn’t change the limit, and probably doesn’t help reduce the memory consumption either. Memory consumption is typically related to data management, so focusing on loading and using data more effectively may help more. I’d suggest that you ask this question in our Developer Forums with more details of how your extension consumes memory, and we can have a deeper discussion from there. Thanks.


How to best abstract the model for views? A custom view like SearchRow(tripName:, photoName:) is already its own view model. But SwiftUI components like Button don’t give us access to their model. The scene would be like ForEach(searchRows) and ForEach(buttons) or more generally ForEach(rows)

I’m trying to understand your question, and I hope I got what you mean; “the best way to abstract the model for a view” is hard to answer, so I would recommend following up on the Apple forums as well. When designing SwiftUI views with models, for lists with interactive components like buttons, clearly separate concerns and manage state effectively: a data model representing the information you display and interact with, and a view model holding instances of your data model plus the necessary business logic or state. Define custom views for each item’s presentation logic, keep state in the view model or pass closures to handle actions, and let your main view iterate over the view model’s data, passing state and actions to child views. Ensure state changes in interactive elements are reflected in the view model, which updates the view through its observable properties.


I want to add a half screen mostly transparent dialog over the main window of a UI photo editing app. Examples or suggestions?

I’d probably consider using ZStack to lay out a view above the main window and to the target position, make it almost transparent, and use it as the dialog.


In the example of let _ = print(), will that print get called every time the view updates or just on initial creation?

It depends on where you put the print. If you put it at the beginning of the body, every time the SwiftUI recalculates the view body, the print will be called.
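A quick sketch to see this in action; the print fires on every recomputation triggered by the state change:

```swift
import SwiftUI

struct CounterView: View {
    @State private var count = 0

    var body: some View {
        // Evaluated each time SwiftUI recalculates this body,
        // so it prints on every tap.
        let _ = print("body evaluated, count = \(count)")

        Button("Count: \(count)") { count += 1 }
    }
}
```

A related trick: `let _ = Self._printChanges()` at the top of a body logs which dependency caused the update.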


I use SwiftData and the @Query annotation to display my data. Each Item is rendered in a subview. Is there a way to pass the Item to be modifiable using either an annotation or something else? Let’s say a pinned attribute is being flipped and updating my @Query without an explicit call to SwiftData

Not sure if I understand your question correctly, but if your intent is to pass a SwiftData model to a view and change the model in the subview, you can definitely do so, and @Query will automatically detect the changes on the model. If you are seeking a way to change an attribute of the model without calling the setter of the attribute, no, there is no way to do that.


Why are views discarded? The dependency tracking- is this feature’s efficiency directly dependent on the discarded views? The Apple Developer Library-the extent of the resources pulled from there?(Customizable?)

SwiftUI efficiently manages memory by discarding views that are no longer visible or needed. This process optimizes performance by keeping only visible or recalculated views in memory. SwiftUI uses a declarative syntax to define the UI state, and the framework automatically updates views when the state changes. Instead of manually updating each view, SwiftUI only updates the affected parts of the view hierarchy. Views are part of a hierarchy, and when their state changes, SwiftUI recalculates the entire affected hierarchy. If a view is no longer visible or necessary, it is removed from memory, reducing memory usage and improving rendering performance.

There are numerous examples and code snippets demonstrating how to implement various features in SwiftUI and related technologies.

Yes, the efficiency of SwiftUI’s dependency tracking and rendering can depend on how effectively views are discarded. SwiftUI keeps track of which views depend on which other views.

I don’t know what you want exactly from the library.


Is there a way to only draw a portion of a view at a time? With AppKit I’d use -[NSView drawRect:] to only draw visible parts of the view. I have a View whose body is a large Path, and I’d like to reduce the amount of drawing I need to do at once by only drawing the section of the Path visible.

SwiftUI is a different approach, and the view body is updated as a whole based on changes to its data. You might be able to split this into multiple views that each represent part of the Path which can each be individually updated when the underlying data changes. The Developer Forums are a great resource to get more assistance with your specific use case.


🏆 Acknowledgments

A huge thank-you to everyone who joined in and shared thoughtful, insightful, and engaging questions throughout the session — your curiosity and input made the discussion truly rich and collaborative.

Special thanks to:

Aap, Aaron Brooks, Abdalla Mohammed, Alexis Schotte, Andrew Hall, Arpad Larrinaga Aviles, Ash, Ativ Patel, Brian Leighty, Cyril Fonrose, Dan Fabulich, Dan Stoian, Dan Zeitman, David Ko, David Mehedinti, David Veeneman, Dipl.-Ing. Wilfried Bergmann, Eleanor Spenceley, Elif Arifoglu, Elilan, Etkin, Evelyn, Farhana Mustafa, Fred Baker, Greg C, Ian Shimabukuro, James Clarke, Jan Armbrust, Jason Howlin, Jay Graham, Jesse Wesson, Jobie, John Ellenich, John Gethoefer, John Lawter, Jon Bailey, Jon Judelson, Jonas Helmer, Joshua Arnold, Julien, Kaan, Kaylee, Lloyd W Sykes, Lucy, Luis Rebollo, Maic Lopez Saenz, Maisomage, Malcolm Hall, Marin Joe Garcia, Markus, Matthew Brown, Michael Bachand, Michael K., mustii, Muthuveerappan Alagappan, Nathan Odendahl, Nicolas Dunning, Oliver Dieke, Omar zuñiga, Oshua Arnold, Pete Hoch, Rayan Khan, Ricardo Moreno Martinez, Rick Mann, Rupinder kaur, Sam Wu, Satish Muthali, Siddhartha Khanna, Simon McLoughlin, SRINIVASA Potnuru, Tom Brodhurst-Hill, Tula W, Ulien, Vijay Ram, Walid Mohamed Nadim, Will Loew-Blosser, Willian Zhang, Yafes Duygulutuna, Yassin El Mouden, Yuriy Chekan and Zulfi Shah.

Now that’s a list!

Finally, a heartfelt thank-you to the Apple team and moderators for leading the session, sharing expert guidance, and offering such clear explanations of app optimization techniques. Your contributions made this session an outstanding learning experience for everyone involved.

Thanks for reading Anton’s Substack! Subscribe for free to receive new posts and support my work.

]]>
<![CDATA[Swift Bits: Git Worktree]]>https://antongubarenko.substack.com/p/swift-bits-git-worktreehttps://antongubarenko.substack.com/p/swift-bits-git-worktreeTue, 10 Feb 2026 08:17:36 GMTIt’s funny how a tiny CLI command can shift your entire development workflow. For years I was stuck in the same pattern: git stash, git checkout, git stash pop, and repeat. Then I worked a lot with the ChatGPT macOS app and its connectors. Next, the Codex macOS app came along and, alongside the Skills tab, there was a picker for the task run destination. The bottom bar is very informative:

Task destination in Codex macOS app

You can see a “New worktree“ option, and that caught my attention. But that was not the first time I had used worktrees. I just didn’t know about them.

The Claude app also uses them for parallel sessions:

Work in parallel with sessions

Click + New session in the sidebar to work on multiple tasks in parallel. For Git repositories, each session gets its own isolated copy of your project using worktrees, so changes in one session don’t affect another until you commit them. Worktrees are stored in ~/.claude-worktrees/ by default.
Docs

And that’s why worktrees are hitting the spotlight lately: AI-powered tools (like Conductor) and AI coding assistants increasingly rely on running parallel workflows. And worktrees are the unsung hero that makes those workflows fast and sane.


What Is Git Worktree, Really?

Imagine this: you could check out multiple branches of the same repo at the same time without cloning, without stashing, and without jumping between contexts.

That’s exactly what Git Worktrees let you do. Technically, worktrees create separate working directories that are all linked back to the same .git repository. You can have one directory on main, another on feature/foo, and another on hotfix/bar, all open at once. Each has its own working set of files, but they share the same Git history and object database — so commits, tags, and branches stay in sync without duplication.

It’s not magic! It’s Git’s native way of handling multiple working copies without wasting space. If you’ve ever cloned the same repo three times just to manage multitasking, consider this your apology from the universe.

Flow diagram

This approach also solves the old merging problem, where you need a quick commit on the feature branch just to jump back to main/master. Just make a worktree from the problematic branch and keep working.

Also, you might know that we normally can’t have multiple branches of a repo checked out at once. Well, it seems that we can. Not sure why you might need that? The next section will tell you more.


How You Can Use It (Beyond “Just Another Git Trick”)

At first glance, worktrees are an efficiency hack for the multitasker. But their real superpower emerges in two increasingly common workflows:

🛠 Parallel Feature Development

Instead of constantly switching context, you can open separate features in separate worktrees. Want to build a UI component in one and debug an urgent bug in another? Done.

Our multiple-branches issue from above

🤖 AI-Powered Development

With AI tools that spin up separate agents to run against your codebase (like Conductor does with parallel Claude Codes, each in its own sandboxed copy), worktrees make it possible to do this locally without cloning or spinning up separate VMs. Essentially, each agent gets a dedicated branch + working copy — without you having to manage massive clones.

This paradigm is quickly becoming the backbone of AI-augmented workflows: by avoiding stash chaos and enabling isolated contexts, worktrees let you let the machines do their thing without disrupting your flow.


A Simple Worktree Example: Create & Remove

Here’s the minimal set of commands you actually need. Think of these as your “git worktree starter pack”.

Create a Worktree for a Feature

# Create a new worktree for a feature branch
git worktree add ../feature-awesome -b feature/awesome

What this does:

  • Makes a new folder ../feature-awesome

  • Creates and checks out feature/awesome there

  • Links it back to your central .git repository

    No stashing, no switching — everything stays in your current directory.

When You’re Done: Remove It

git worktree remove ../feature-awesome

Boom — worktree gone. Git cleans up references, and the branch stays in your repo just like any other.
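Here is the whole lifecycle end to end, as a throwaway shell sketch (the temp paths and branch names are illustrative):

```shell
set -e
# Scratch repository so the demo leaves no traces behind.
tmp=$(mktemp -d)
cd "$tmp"
git init -q main-repo
cd main-repo
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"

git worktree add ../feature-awesome -b feature/awesome  # create
git worktree list                                       # main repo + new worktree
git worktree remove ../feature-awesome                  # clean up
git branch --list feature/awesome                       # the branch survives
```

`git worktree list` is the command worth memorizing: it shows every checkout linked to the repository, so you always know where your branches live.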

💡If you are keen on working in these apps: the Codex app has a task location picker in the bottom bar, and Claude has a “New Session“ button, both of which, as we now know, use worktrees.


Final Thoughts

Git Worktree feels like one of those features that should have existed all along. It solves a real pain point: context switching chaos. And now that AI tools and agent managers are building their workflows on top of it, it’s fast becoming a tooling standard.

If you’re still in the mindset of git stash → git checkout → git stash pop, give Git Worktree a try. Your future self (and your future AI tools) will thank you.

Happy branching 🚀!



References

]]>
<![CDATA[Swift Bits: Transition vs Transaction]]>https://antongubarenko.substack.com/p/swift-bits-transition-vs-transactionhttps://antongubarenko.substack.com/p/swift-bits-transition-vs-transactionMon, 02 Feb 2026 07:01:32 GMTSwiftUI’s animation system is powerful but often misunderstood. Even Xcode, when you start typing a keyword, shows a load of almost identical options. What about when, in a tech interview, you are asked about the difference?

Two concepts that frequently come up when building polished, animated interfaces are Transitions and Transactions. They sound similar (and they’re both related to motion and state changes) but they serve very different purposes.


Transition

A Transition in SwiftUI defines how a view appears and disappears when it is inserted into or removed from the view hierarchy. It’s about enter and exit animation — the visual effect used when a view changes state and is added or removed.

Transitions don’t control timing or easing. They describe the type of animation that happens. The actual animation timing comes from a surrounding animation context.

Built-in Transitions

SwiftUI provides several built-in transitions:

Text("Hello SwiftUI!")
    .transition(.opacity)

You can also combine transitions:

.transition(.scale.combined(with: .opacity))

Or define asymmetric transitions, where insertion and removal behave differently:

.transition(.asymmetric(
    insertion: .slide,
    removal: .scale
))

Or create a custom transition relying on the AnyTransition type. It’s pretty straightforward. How about a blur-and-scale transition?
We need to make a transition ViewModifier (yeah):

struct BlurAndScaleTransition: ViewModifier {
    let progress: CGFloat

    func body(content: Content) -> some View {
        content
            .scaleEffect(progress)
            .blur(radius: (1 - progress) * 10)
            .opacity(progress)
    }
}

Then wrap in AnyTransition:

extension AnyTransition {
    static var blurAndScale: AnyTransition {
        .modifier(
            active: BlurAndScaleTransition(progress: 0),
            identity: BlurAndScaleTransition(progress: 1)
        )
    }
}

// and use it! A plain example is shown in the next section
.transition(.blurAndScale)

How SwiftUI Applies Transitions

Transitions take effect only when views are inserted into or removed from the view hierarchy. This usually happens inside conditional statements such as if blocks or when modifying collections in List or ForEach.

SwiftUI compares the previous and current view tree, detects insertions and removals, and applies the transition accordingly.

Importantly, a transition alone does not animate. It must be wrapped in an animation context using withAnimation or .animation(_:).

Example: Using a Transition

struct MyView: View {
    @State private var show = false

    var body: some View {
        VStack {
            if show {
                RoundedRectangle(cornerRadius: 20)
                    .fill(.blue)
                    .frame(height: 100)
                    .transition(.move(edge: .top))
            }

            Button("Toggle") {
                withAnimation(.easeInOut) {
                    show.toggle()
                }
            }
        }
    }
}

Here:

  • The rectangle is inserted and removed from the hierarchy

  • The transition defines the movement

  • The animation defines timing and easing


Transaction

A Transaction in SwiftUI represents the context of a state change. It carries information about how SwiftUI should process that change, including animation behavior.

Every time SwiftUI processes a state update, it creates a Transaction and propagates it through the view hierarchy.

A transaction can include:

  • The animation associated with the update

  • Whether animations are disabled

  • Metadata used internally by SwiftUI during rendering

Why Transactions Matter

Transitions define what happens visually. Transactions define how that visual change is executed.

If you use withAnimation, SwiftUI attaches the animation to the transaction. That transaction then flows down to child views unless explicitly modified.

This mechanism explains why animations sometimes affect views you didn’t expect — they are all responding to the same transaction.

Let’s track what the transaction does in our example:

struct SwiftBitsTransactionView: View {
    
    @State private var show = false
    
    var body: some View {
        VStack {
            if show {
                RoundedRectangle(cornerRadius: 20)
                    .fill(.blue)
                    .frame(height: 200)
                    .transition(.move(edge: .top))
                    //Tracking transaction for animation
                    .transaction { thx in
                        print(thx as Any)
                    }
            }
            
            Button("Toggle Rect") {
                withAnimation(.easeInOut) {
                    show.toggle()
                }
            }
        }
    }
}

The console will reveal this:

Transaction(plist: [TransactionPropertyKey<AnimationKey> = Optional(AnyAnimator(SwiftUI.BezierAnimation(duration: 0.35, curve: (extension in SwiftUI):SwiftUI.UnitCurve.CubicSolver(ax: 0.52, bx: -0.78, cx: 1.26, ay: -2.0, by: 3.0, cy: 0.0))))])

And if we change it to Linear:

Transaction(plist: [TransactionPropertyKey<AnimationKey> = Optional(AnyAnimator(SwiftUI.BezierAnimation(duration: 0.35, curve: (extension in SwiftUI):SwiftUI.UnitCurve.CubicSolver(ax: -2.0, bx: 3.0, cx: 0.0, ay: -2.0, by: 3.0, cy: 0.0))))])

It really matches the result on screen. Nice!

Modifying Transactions

SwiftUI provides the .transaction(_:) modifier:

Text("No animation")
    .transaction { thx in
        thx.animation = nil
    }

this removes animation for that view and all of its children, even if the state change was wrapped in withAnimation.

Or, it can be used to override the original animation behavior:

struct SwiftBitsTransactionView: View {
    
    @State private var show = false
    
    var body: some View {
        VStack {
            if show {
                RoundedRectangle(cornerRadius: 20)
                    .fill(.blue)
                    .frame(height: 200)
                    .transition(.move(edge: .top))
                    .transaction { thx in
                        thx.animation = thx
                            .animation?
                            .delay(2.0)
                            .speed(2)
                    }
            }
            
            Button("Toggle Rect") {
                withAnimation(.linear) {
                    show.toggle()
                }
            }
        }
    }
}

Important Transaction Properties

  • animation: The animation applied to this update, if any

  • disablesAnimations: A Boolean that disables animations for this subtree

  • addAnimationCompletion: A method that registers a completion closure to run when the animations created with this transaction are all complete (iOS 17+)

Transactions allow very fine-grained control over animation behavior and are especially useful in complex view hierarchies.
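As an illustration of the last point, a completion can be attached from the same .transaction modifier used earlier. This is a minimal sketch assuming iOS 17+, where addAnimationCompletion(criteria:_:) became available; the view name is hypothetical:

```swift
import SwiftUI

struct CompletionTrackingView: View {
    @State private var show = false

    var body: some View {
        VStack {
            if show {
                RoundedRectangle(cornerRadius: 20)
                    .fill(.blue)
                    .frame(height: 200)
                    .transition(.move(edge: .top))
                    .transaction { thx in
                        // Fires once the insertion/removal animation logically completes
                        thx.addAnimationCompletion(criteria: .logicallyComplete) {
                            print("Transition animation finished")
                        }
                    }
            }
            Button("Toggle Rect") {
                withAnimation(.easeInOut) { show.toggle() }
            }
        }
    }
}
```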


Transition vs Transaction: Key Differences

Conceptual Difference

  • Transitions answer: “How should this view appear or disappear?”

  • Transactions answer: “How should this state change be animated?”

Transitions are declarative and visual. Transactions are contextual and behavioral.


When to Use Which

Use Transitions When:

  • A view is conditionally shown or hidden

  • A view is inserted or removed from a collection

  • You want a clear visual effect for appearance or disappearance

Examples include modals, banners, expanding panels, or list rows.

Use Transactions When:

  • You need to override or suppress animations

  • You want different animation behavior in specific subtrees

  • You need programmatic control over animation propagation

  • Default animation behavior is too broad or unpredictable

Transactions are more advanced and typically used when standard .animation and .transition modifiers are not enough.

Happy coding!

Thanks for reading Anton’s Substack! Subscribe for free to receive new posts and support my work.

]]>
<![CDATA[Swift Bits: CMMotionManager with Swift 6]]>https://antongubarenko.substack.com/p/swift-bits-cmmotionmanager-with-swifthttps://antongubarenko.substack.com/p/swift-bits-cmmotionmanager-with-swiftTue, 20 Jan 2026 07:07:24 GMTSome journeys start with a first step. Others start with falling into a crash stack frame with placeholders…

For one of my apps, I was building a device motion tracking feature. Imagine a small circle-aligning game on screen to prevent unnecessary actions. All that was needed, mostly, was to track pitch and roll, and judging by the examples around the web, it’s pretty straightforward. But what is amazing about development work is how everything changes, forcing you to rewind and update the knowledge you already have. No more guessing: a crash is waiting.

I’ve spent quite some time figuring it out and applying a patch. I hope it saves someone time.


Tracking Motion

To do this we need to use CMMotionManager (docs) and a couple of methods:

  • startDeviceMotionUpdates: to get CMDeviceMotion with attitude, rotation rate, and user acceleration

  • startAccelerometerUpdates: to get CMAccelerometerData with acceleration split by x/y/z axes

  • startMagnetometerUpdates: to get CMMagnetometerData with magnetic field

Each of these methods can be used with a handler (yes, the API hasn’t been migrated to async/await) or as a plain call followed by polling the CMMotionManager properties. Polling is not very convenient, so the handler-based variants are the usual choice.

Apple already gives a hint in the documentation for the queue parameter:

queue

An operation queue provided by the caller. Because the processed events might arrive at a high rate, using the main operation queue is not recommended.

Indeed, we don’t want to flood the main thread with a high-rate stream of events. How was this done before Swift 6 and Strict Concurrency?


Swift 5

It was a plain class with a separate OperationQueue. ObservableObject is used as the common binding tool across the whole app (migrating to Observable wouldn’t bring a big performance gain here, since only two fields are updated).

final class MotionManager: ObservableObject {
    private let manager = CMMotionManager()
    private let queue: OperationQueue = {
        let q = OperationQueue()
        q.name = "MotionManager.queue"
        q.qualityOfService = .userInitiated
        q.maxConcurrentOperationCount = 1
        return q
    }()
    
    private var isRunning = false    

    @Published var roll: Double = 0
    
    @Published var pitch: Double = 0
    
    func start() {
        guard !isRunning else {return}
        guard manager.isDeviceMotionAvailable else { return }
        manager.deviceMotionUpdateInterval = 1.0 / 60.0
        
        isRunning = true
        self.manager.startDeviceMotionUpdates(to: self.queue) { [weak self] motion, error in
            guard let motion else { return }
            DispatchQueue.main.async {
                self?.roll = motion.attitude.roll
                self?.pitch = motion.attitude.pitch
            }
        }
    }
    
    func stop() {
        isRunning = false
        manager.stopDeviceMotionUpdates()
    }

    
    deinit {
        stop()
    }
}

Swift 6

And this is where migration hits the isolation block. I turned on Approachable Concurrency, set Default Isolation to MainActor, and set Strict Concurrency Checking to Complete.

First of all, this refactoring was made:

self.manager.startDeviceMotionUpdates(to: self.queue) { [weak self] motion, error in
    guard let motion else { return }
    let r = motion.attitude.roll
    let p = motion.attitude.pitch
    
    // This is a real MainActor hop
    Task { @MainActor [weak self] in
        self?.roll = r
        self?.pitch = p
    }
}

No warnings, compiler check passed. Runtime is the next step. And now we are getting this:

First frame of the crash

Lovely, assembly commands with placeholders on non-Main thread. Other stacks are also informative:

Second frame trace

What do we see:

  • LIBDISPATCH assert: our CMMotionManager block was expected to execute on a different queue.

  • Serial executor is checking that it matches the MainExecutor

To be more certain, we can inspect the iOS crash report on the device (Organizer → Devices):

and it will reveal:

ASI found [libdispatch.dylib] (sensitive) ‘BUG IN CLIENT OF LIBDISPATCH: Assertion failed: Block was expected to execute on queue [com.apple.main-thread (0x1fa25ca40)]’

Solution

We need to hop off the MainActor onto another executor.

First, keep the manager’s isolation and copy the pitch and roll values from the handler into constants, which removes the data-race error on motion.

let r = motion.attitude.roll
let p = motion.attitude.pitch

//Sending 'motion' risks causing data races
Task { @MainActor [weak self] in
     self?.roll = r
     self?.pitch = p
}

And finally, the best idea that comes to mind is to call startDeviceMotionUpdates from the same queue. That takes us off the MainActor and prevents the crash at runtime.

final class MotionManager: ObservableObject {
    private let manager = CMMotionManager()
    private let queue: OperationQueue = {
        let q = OperationQueue()
        q.name = "MotionManager.queue"
        q.qualityOfService = .userInitiated
        q.maxConcurrentOperationCount = 1
        return q
    }()
    
    private var isRunning = false    

    @Published var roll: Double = 0
    
    @Published var pitch: Double = 0
    
    func start() {
        guard !isRunning else {return}
        guard manager.isDeviceMotionAvailable else { return }
        manager.deviceMotionUpdateInterval = 1.0 / 60.0
        
        isRunning = true
        queue.addOperation {
            self.manager.startDeviceMotionUpdates(to: self.queue) { [weak self] motion, error in
                guard let motion else { return }
                let r = motion.attitude.roll
                let p = motion.attitude.pitch
                
                // This is a real MainActor hop that the compiler understands.
                Task { @MainActor [weak self] in
                    self?.roll = r
                    self?.pitch = p
                }
            }
        }
    }
    
    func stop() {
        isRunning = false
        manager.stopDeviceMotionUpdates()
    }

    
    deinit {
        stop()
    }
}

If you have a better idea, please write in the comments. Happy coding!

Thanks for reading Anton’s Substack! Subscribe for free to receive new posts and support my work.

]]>
<![CDATA[Swift Bits: Menus]]>https://antongubarenko.substack.com/p/swift-bits-menushttps://antongubarenko.substack.com/p/swift-bits-menusTue, 06 Jan 2026 08:49:01 GMTMenus in iOS are no longer just contextual pop-ups hidden behind long presses. Starting from iOS 14, Apple introduced a unified Menu system that works across buttons, navigation bars, toolbars, and context interactions.

Recently, for one of the apps (built with UIKit, with some parts in SwiftUI), there was a task to add filtering. Typically, a separate sheet or modal view works better — filter state handling, restoration, and persistence are more flexible with that approach. However, basic two-option sorting can be implemented inline as well.


What is a Menu in iOS?

A Menu is a structured collection of actions presented to the user in a compact, system-styled interface. Unlike alerts or action sheets, menus are:

  • Lightweight

  • Non-blocking

  • Context-aware

  • Pointer- and keyboard-friendly

Menus are backed by different core building blocks across UIKit (UIMenu and UIAction) and SwiftUI (the Menu view).


When Should You Use a Menu?

Menus are ideal when multiple related actions exist, but none of them is primary enough to deserve its own button.

✅ Good use cases

  • Sorting & filtering

  • Display mode selection

  • Secondary actions (Share, Duplicate, Rename)

  • Overflow actions (••• buttons)

  • Toolbar or navigation bar actions

Examples:

  • “Sort by Name / Date / Recommended”

  • “View as List / Grid”

  • “More” button in navigation bars

❌ Avoid menus when:

  • There is a single primary action

  • Actions are time-critical

  • User must discover the action immediately

  • The menu hides destructive actions too deeply

If the user has to guess where an action is — don’t put it in a menu.


Apple Human Interface Guidelines (HIG)

Such a UI-centric feature could not be missed by Apple’s designers, and we shouldn’t skip their recommendations either. Take your time to check them out: Human Interface Guidelines.

A stylized representation of a menu containing a selected item and displaying a submenu. The image is tinted red to subtly reflect the red in the original six-color Apple logo.
Courtesy of Apple Inc.

Key recommendations from Apple:

  • Keep menus short and scannable

  • Group related actions

  • Put destructive actions last

  • Use SF Symbols consistently

  • Avoid deeply nested menus

Menus should feel predictable, not clever. Time to create some!


Adding a Menu in UIKit

UIKit’s menu system is powerful and works almost everywhere: buttons, bar items, and even views.

Menu on a UIButton

enum SortOption {
    case name
    case date
}

var selectedSort: SortOption = .name

func makeSortMenu() -> UIMenu {
    let sortByName = UIAction(
        title: "Sort by Name",
        state: selectedSort == .name ? .on : .off
    ) { _ in
        selectedSort = .name
        // updateMenu() is assumed to rebuild the menu and reassign it
        // to the button, so the checkmark state refreshes
        updateMenu()
    }

    let sortByDate = UIAction(
        title: "Sort by Date",
        state: selectedSort == .date ? .on : .off
    ) { _ in
        selectedSort = .date
        updateMenu()
    }

    return UIMenu(title: "Sort", children: [sortByName, sortByDate])
}


let button = UIButton(type: .system)
button.setImage(UIImage(systemName: "arrow.up.arrow.down"), for: .normal)
button.menu = makeSortMenu()
//Don't forget this! Or menu will not show up instantly.
button.showsMenuAsPrimaryAction = true

Menu in Navigation Bar

navigationItem.rightBarButtonItem = UIBarButtonItem(
    title: nil,
    image: UIImage(systemName: "ellipsis.circle"),
    primaryAction: nil,
    menu: menu
)

This pattern replaces custom action sheets and is now Apple’s preferred solution.


Adding a Menu in SwiftUI

SwiftUI introduces a dedicated Menu view. It’s easy to add, like any other View. For example:

Basic Menu

enum SortOption {
    case name
    case date
}

@State private var selectedSort: SortOption = .name

var body: some View {
    Menu {
        Button {
            selectedSort = .name
        } label: {
            Label(
                "Sort by Name",
                systemImage: selectedSort == .name ? "checkmark" : ""
            )
        }

        Button {
            selectedSort = .date
        } label: {
            Label(
                "Sort by Date",
                systemImage: selectedSort == .date ? "checkmark" : ""
            )
        }
    } label: {
        Label("Sort", systemImage: "arrow.up.arrow.down")
    }
}

If you want a more scalable approach:

Menu {
    Picker("Sort", selection: $selectedSort) {
        Text("Sort by Name").tag(SortOption.name)
        Text("Sort by Date").tag(SortOption.date)
    }
} label: {
    Label("Sort", systemImage: "arrow.up.arrow.down")
}

Menu in Toolbars

.toolbar {
    Menu {
        Button("Refresh") {}
        Button("Settings") {}
    } label: {
        Image(systemName: "ellipsis.circle")
    }
}

Submenus

We can easily add a submenu. Just keep in mind:

  • Submenus should not differ too much in meaning from the other menu items

  • Don’t overuse the nesting. A submenu inside a submenu is too much. It’s not the West Coast Customs branch of “Pimp My Ride“ )

    Menu in Submenu…
  • Menu content hierarchy for submenus is reversed. If you want a submenu to appear below the other options → put it on top, like in the example.

enum SortOption {
    case name
    case date
}

enum SortDirection {
    case asc
    case desc
}

@State private var selectedSort: SortOption = .name
@State private var selectedSortDirection: SortDirection = .asc

Menu {
    Menu {
        Picker("Direction", selection: $selectedSortDirection) {
            Text("Ascending").tag(SortDirection.asc)
            Text("Descending").tag(SortDirection.desc)
        }
    } label: {
        Label("Direction", systemImage: "arrow.up.arrow.down")
    }
    
    Picker("Sort", selection: $selectedSort) {
        Text("Sort by Name").tag(SortOption.name)
        Text("Sort by Date").tag(SortOption.date)
    }
} label: {
    Label("Sort", systemImage: "arrow.up.arrow.down")
}

Final Thoughts

Menus are no longer a secondary UI element in iOS — they’re a core interaction pattern.

They:

  • Replace action sheets

  • Reduce UI clutter

  • Scale naturally from iPhone to iPad and Mac

If your app still relies on custom “More” sheets or overloaded toolbars, it’s probably time to switch to menus.

Thanks for reading Anton’s Substack! Subscribe for free to receive new posts and support my work.

]]>
<![CDATA[Swift Bits: Override Color Scheme]]>https://antongubarenko.substack.com/p/swift-bits-override-color-schemehttps://antongubarenko.substack.com/p/swift-bits-override-color-schemeTue, 30 Dec 2025 10:05:28 GMTLight and Dark Theme is one of the ancient holy-war topics in development. Our predecessors were sitting near a campfire (Dark Souls fans - I salute you), setting screensavers on CRT monitors…

In iOS, it hasn’t been around for that long, but many apps still lack proper support for it. This raises a big question: should an app adapt to the system preference and keep the UX consistent with the device? But for now, let’s just learn how to change it — or not.

When was Dark/Light Mode introduced in iOS?

Dark and Light Mode were officially introduced in iOS 13 (2019).

Before iOS 13, developers could only simulate “dark themes” manually by:

  • Using custom color palettes

  • Detecting the time of day (yeah… kind of a solution, to be honest)

With iOS 13, Apple introduced:

  • System-wide Light & Dark appearance

  • Dynamic colors

  • Asset Catalog appearance variants

Now we can both read and change the color scheme. The values are:

  • .light

  • .dark

  • .unspecified (system default)

They are self-explanatory, and we can both observe and change them, differently in each framework.


UIKit

Many apps are still written in UIKit, no doubts. I initially found a nice color scheme solution in a UIKit-based app.

Read

UIKit notifies views and view controllers when traits change:

If you need to access it:

UIView also holds the same trait collection but doesn’t allow overriding it.

@property (nonatomic, readonly) UIUserInterfaceStyle userInterfaceStyle API_AVAILABLE(tvos(10.0)) API_AVAILABLE(ios(12.0)) API_UNAVAILABLE(watchos);
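In Swift, reading the same trait might look like this. A minimal sketch with a hypothetical controller name; note that traitCollectionDidChange(_:) is deprecated starting with iOS 17 in favor of the registerForTraitChanges APIs:

```swift
import UIKit

final class AppearanceAwareViewController: UIViewController {
    override func traitCollectionDidChange(_ previousTraitCollection: UITraitCollection?) {
        super.traitCollectionDidChange(previousTraitCollection)
        // userInterfaceStyle is .light, .dark, or .unspecified
        if traitCollection.userInterfaceStyle != previousTraitCollection?.userInterfaceStyle {
            print("Now dark:", traitCollection.userInterfaceStyle == .dark)
        }
    }
}
```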

Set

For a UIViewController, we can override the appearance of the whole controller:
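A minimal sketch, assuming a plain UIViewController subclass; overrideUserInterfaceStyle forces the style for this controller’s whole view hierarchy:

```swift
import UIKit

final class AlwaysDarkViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Locks this controller's hierarchy to Dark Mode,
        // regardless of the system-wide setting
        overrideUserInterfaceStyle = .dark
    }
}
```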


SwiftUI

And this is a modern UI approach. We have an environment variable colorScheme. You can read more about it in Apple Docs.

Read

SwiftUI exposes appearance like this:
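A minimal sketch of reading it through the environment; the view name is hypothetical:

```swift
import SwiftUI

struct SchemeAwareView: View {
    // Resolved appearance for this view's environment
    @Environment(\.colorScheme) private var colorScheme

    var body: some View {
        Text(colorScheme == .dark ? "Dark side" : "Light side")
    }
}
```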

Set

You can override the user’s system appearance for a specific view hierarchy by applying the preferredColorScheme(_:) view modifier.
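A minimal sketch; keep in mind that preferredColorScheme affects the enclosing presentation (the whole window or sheet), not just the modified view:

```swift
import SwiftUI

struct ForcedDarkScreen: View {
    var body: some View {
        VStack {
            Text("This screen prefers Dark Mode")
        }
        // Requests Dark Mode for the presentation containing this hierarchy
        .preferredColorScheme(.dark)
    }
}
```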


Overriding Light / Dark Mode

By default, iOS determines an app’s appearance based on the system-wide setting chosen by the user. This setting propagates through the entire UI hierarchy automatically and consistently.

However, iOS also allows developers to override this behavior. An override lets your app (or a specific part of it) request a different appearance than the system preference.

This capability exists for intentional, contextual use cases, not for bypassing user choice without reason.

Overriding vs. Theming (Important Distinction)

Overriding appearance is not the same as building a theme system.

  • Appearance override → Light / Dark only

  • Theming → colors, typography, spacing, branding

You should never use Light/Dark overrides to implement:

  • Brand colors

  • Feature-based themes

  • Seasonal designs

Those belong in a design system, not in appearance traits.

If you still decide to proceed - there are two levels of override:

  1. App-wide (global)

  2. User-controlled (stored in UserDefaults, in example)

Global override (force appearance)

An app-wide override forces a single appearance for the entire application, regardless of the user’s system preference.

This override is applied at the highest possible level of the UI hierarchy — typically the UIWindow (UIKit) or the root view of the app (SwiftUI). From that point downward, every screen, component, and child view inherits the same appearance.

What does this mean in practice?

  • The entire app is locked to Light or Dark

  • No screen can opt out unless it explicitly overrides again

  • Dynamic system colors resolve consistently across all UI

  • The override behaves like a global environment rule

UIKit / SwiftUI

This snippet is already adapted to the latest scene-based window handling (it works for both frameworks):
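A minimal sketch of what such a scene-aware override can look like; applyGlobalOverride is a hypothetical helper name:

```swift
import UIKit

// Forces the appearance for every window of every connected scene.
// Works regardless of whether the content inside is UIKit or SwiftUI.
func applyGlobalOverride(_ style: UIUserInterfaceStyle) {
    UIApplication.shared.connectedScenes
        .compactMap { $0 as? UIWindowScene }
        .flatMap { $0.windows }
        .forEach { $0.overrideUserInterfaceStyle = style }
}
```

Calling applyGlobalOverride(.unspecified) hands control back to the system.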

or hold an Observable in the root of your app and change preferredColorScheme.


User-controlled

This approach is intended for targeted UI tweaks. You might want specific screens in Dark or Light Mode only, or to skip the override for certain controls and input components. At first glance, this might seem like a bad pattern: who would want to reimplement something the system already handles natively?

This strategy respects both:

  • The system default

  • The user’s explicit intent

It shifts control away from the developer and back to the user — which aligns with Apple’s Human Interface Guidelines.

Rather than overriding because you can, you override because the user asked you to.

Actually, the global override goes hand in hand with this approach. Allowing the user to pick which mode to use is handy and transparent from a UX perspective.

Here is a basic implementation with AppStorage:
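A minimal sketch, assuming the choice is persisted with @AppStorage under a hypothetical "appearance" key:

```swift
import SwiftUI

enum AppearanceSetting: String, CaseIterable {
    case system, light, dark

    var colorScheme: ColorScheme? {
        switch self {
        case .system: return nil      // nil means "follow the system"
        case .light:  return .light
        case .dark:   return .dark
        }
    }
}

struct RootView: View {
    // Persisted across launches; "appearance" is a hypothetical key
    @AppStorage("appearance") private var appearance: AppearanceSetting = .system

    var body: some View {
        Picker("Appearance", selection: $appearance) {
            ForEach(AppearanceSetting.allCases, id: \.self) { setting in
                Text(setting.rawValue.capitalized).tag(setting)
            }
        }
        .pickerStyle(.segmented)
        // nil removes the override, so the system setting applies again
        .preferredColorScheme(appearance.colorScheme)
    }
}
```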

Thanks for reading Anton’s Substack! Subscribe for free to receive new posts and support my work.

]]>
<![CDATA[Swift Bits: Closure Capture List]]>https://antongubarenko.substack.com/p/swift-bits-closure-capture-listhttps://antongubarenko.substack.com/p/swift-bits-closure-capture-listTue, 23 Dec 2025 08:20:40 GMTClosures are one of Swift’s most powerful features and also a common source of subtle bugs.

To understand why capture lists exist and why strong capture is the default, we need to look at how closures interact with value types, reference types, and memory ownership.


What Is a Closure?

A closure is a self-contained block of code that can:

  • Be stored in a variable

  • Be passed around

  • Capture values from its surrounding context

In Swift, we should care about two main points regarding closures:

  • They are reference types

  • They have capture lists that control how surrounding values and objects are captured

Our simple closure might look like this:
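A minimal sketch of a closure stored in a constant, taking a parameter, and capturing a variable from its context (the names are made up):

```swift
var greeting = "Hello"

// Stored in a constant, takes one parameter, captures `greeting`
let greet: (String) -> String = { name in
    "\(greeting), \(name)!"
}

print(greet("Swift"))   // Hello, Swift!
greeting = "Hi"
print(greet("Swift"))   // Hi, Swift! (implicit capture shares the storage)
```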


What Does “Capturing” Mean?

While the reference nature of a closure is not so mysterious, the second aspect is worth a deeper dive. When a closure references variables defined outside its body, Swift must decide:

  • Should the closure copy the value?

  • Or should it reference the original storage?

The answer depends on:

  • Whether the variable is a value type or reference type

  • Whether capture is implicit or explicit


Implicit Capture (Default Behavior)

When you don’t specify a capture list, Swift captures variables implicitly.
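For instance, a minimal sketch:

```swift
var value = 0

// No capture list: the closure references the same storage as `value`
let increment = {
    value += 1
}

increment()
increment()
print(value)   // 2: the closure mutated the original variable
```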

What’s happening?

  • value is a variable, not a constant

  • Swift places value into a shared heap box

  • Both the closure and outer scope reference the same storage

This makes closures feel “live” and stateful.


Explicit Capture List = Snapshot

Now add a capture list:
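A sketch of the same idea with an explicit capture list:

```swift
var value = 0

// [value] copies the current value (0) into the closure as a constant
let snapshot: () -> Int = { [value] in value }

value = 42
print(snapshot())   // 0: the closure kept the value from creation time
print(value)        // 42
```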

Key differences:

  • [value] captures the value at closure creation

  • The captured value is a constant

  • Later mutations are invisible

This is not a memory-management feature—it’s a semantic change.


Why Explicitly Captured Values Are Immutable

This fails to compile:
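A sketch of the kind of code that gets rejected; the mutating line is commented out so the rest of the snippet compiles:

```swift
var value = 0

let attempt: () -> Int = { [value] in
    // The next line does not compile:
    // value += 1
    // error: left side of mutating operator isn't mutable: 'value' is an immutable capture
    value   // reading the captured copy is fine
}

print(attempt())   // 0
```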

Because:

  • Explicit capture means copy

  • Copies are immutable by design

  • Mutating them would break determinism

Swift forces you to be honest about intent.


The Tricky Example: Same Name, Different Reality

Some might call it shadowing.
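A sketch of the pattern being described; `counter` is a hypothetical name:

```swift
var value = 10

let counter: () -> Int = { [value] in
    // A new local `var`, initialized from the captured constant (shadowing)
    var value = value
    value += 1
    return value
}

print(counter())   // 11
print(counter())   // 11: every call restarts from the same snapshot
print(value)       // 10: outer state is untouched
```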

Each call:

  • Starts from the same captured snapshot

  • Mutates only a local copy

  • Never touches outer state

This looks stateful—but isn’t.


Reference Types

Now let’s switch from value types to reference types.
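A sketch with a reference type; `Counter` is a hypothetical class:

```swift
final class Counter {
    var count = 0
}

let counter = Counter()

// The closure captures the *reference*; both scopes see one instance
let bump = {
    counter.count += 1
}

bump()
bump()
print(counter.count)   // 2: mutations are shared
```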

Here:

  • The closure captures a reference

  • Both scopes point to the same instance

  • Mutations are shared

Even if you write:
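For example, an explicit capture of a reference type (a sketch; `Box` is a hypothetical class):

```swift
final class Box {
    var number = 0
}

let box = Box()

// [box] snapshots the reference, not the object it points to
let mutate = { [box] in
    box.number += 1
}

mutate()
print(box.number)   // 1: shared state was still mutated
```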

You are still mutating shared state, because the reference itself was copied, not the object.


Why strong Capture Is the Default

Swift closures capture strongly by default, especially for reference types, for several reasons. Some of them are architectural; others stem from how the types are implemented.

1. Predictability

If closures captured weakly by default, objects could disappear unexpectedly:
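A sketch of that hypothetical weak-by-default world, written out explicitly with [weak service]; the names are made up:

```swift
final class Service {
    func run() -> String { "ran" }
}

var strongRef: Service? = Service()

let work: () -> String
do {
    let service = strongRef!   // a strong local, just for the capture
    work = { [weak service] in
        service?.run() ?? "service is gone"
    }
}

let before = work()   // "ran": strongRef still keeps the object alive
strongRef = nil       // the last strong reference disappears
let after = work()    // "service is gone": the dependency silently vanished
print(before, after)
```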

Strong capture guarantees:

“If this closure runs, everything it needs still exists.”

2. Safety Over Convenience

Unexpected deallocation is far worse than a retain cycle.

Swift chooses:

  • Explicit memory leaks over

  • Implicit crashes

This is why [weak self] is opt-in.

3. Closures Are Ownership Boundaries

A closure is a unit of work.

If it references an object, it owns it unless told otherwise.

That aligns with Swift’s philosophy:

Ownership must be explicit when it changes.


Capture Lists Are About Semantics, Not Just Memory

Many developers think capture lists exist only to avoid retain cycles. That’s incomplete.

Capture lists control:

  • When values are captured

  • Whether state is shared

  • How mutations behave

  • What the closure truly depends on

Memory management is just one side effect.


Final Takeaway

Closures don’t just run code. They share, or own, parts of your program.

Understanding capture lists means understanding:

  • Value vs reference semantics

  • Ownership

  • Time (when state is captured)

And once you see that, Swift’s rules stop being surprising—and start being precise.

Thanks for reading Anton’s Substack! Subscribe for free to receive new posts and support my work.

]]>
<![CDATA[Swift Bits: Autoreleasepool Usage]]>https://antongubarenko.substack.com/p/swift-bits-autoreleasepool-usagehttps://antongubarenko.substack.com/p/swift-bits-autoreleasepool-usageTue, 16 Dec 2025 08:20:15 GMTiOS development might not have the longest history compared to other platforms, but it has still gone through several epochs. From MRC (Manual Reference Counting) to ARC (Automatic Reference Counting), and the Objective-C to Swift transition (which is not even totally done inside Apple). One of the concepts (or mechanisms, I should say) is NSAutoreleasePool, or just autoreleasepool in Swift.

And yes, that question might come up in a tech interview as well.

Thanks for reading Anton’s Substack! Subscribe for free to receive new posts and support my work.


What is autoreleasepool?

An autorelease pool is a container that temporarily holds objects sent an autorelease message.

When the pool is drained, all objects inside it receive a release. This takes us back to the days when we had to pair retain/release calls manually, and autorelease was used to defer an object’s release until a specific scope finished.

Syntax is pretty simple:

autoreleasepool {
    // Autoreleased objects
}

Even under ARC, autorelease pools still exist because:

  • Many system APIs are written in Objective-C

  • Bridging between Swift and Objective-C often produces autoreleased objects

  • Some frameworks intentionally return autoreleased values for performance


When is autoreleasepool used by default?

You usually don’t see it — but it’s there.

Run Loop Boundaries (Main Thread)

According to the documentation, every iteration of the main run loop creates an autorelease pool automatically.

Run loop iteration
 ├─ create autorelease pool
 ├─ handle events
 ├─ drain autorelease pool

This means:

  • UI events

  • Touch handling

  • Timers

  • Display updates

…all benefit from automatic memory cleanup at the end of each loop.

App Entry Point (main)

UIKit and Swift runtime wrap your app’s execution in an autorelease pool.

Objective-C (conceptually):

int main() {
    @autoreleasepool {
        UIApplicationMain(...)
    }
}

Swift does the same internally.

Objective-C APIs Returning Autoreleased Objects

For example:

NSString *s = [NSString stringWithFormat:@"%@", value];
NSArray *a = [NSArray arrayWithObjects:...];

Even when used from Swift, these APIs may generate autoreleased objects under the hood.


Where can (and should) you use autoreleasepool manually?

This is the main question when it comes to reaching for our old friend. The cases are pretty rare, since ARC does a lot for us now. Still, for peak memory optimization it’s useful to know that we have this tool.

Autorelease pools are especially helpful when you need finer control over memory usage. They shine in resource-heavy scenarios such as processing large data sets, parsing XML or JSON, and repeatedly loading or releasing views.

Tight Loops Creating Many Objects

This is the most important use case.

for i in 0..<10_000 {
    autoreleasepool {
        let image = UIImage(contentsOfFile: path)
        process(image)
    }
}

Without a local pool:

  • Objects live until the outer run loop pool drains

  • Memory spikes dramatically

With a pool:

  • Memory is released per iteration

Background Threads & GCD Queues

Unlike the main thread, background queues do not automatically create autorelease pools.

DispatchQueue.global().async {
    autoreleasepool {
        // Objective-C objects
    }
}

This is critical when:

  • Processing images

  • Parsing large files

  • Using Core Graphics, Core Image, or Foundation APIs

Command-Line Tools & Scripts

I’ve warned you about rare cases. In Swift CLI tools:

autoreleasepool {
    runTool()
}

Without it:

  • Autoreleased objects may never be drained until process exit

  • Memory usage grows unnecessarily

Interacting with Core Foundation / C APIs

Did I mention rare cases? Some APIs create bridged Objective-C objects that rely on autorelease pools:

  • Core Image

  • AVFoundation

  • PDFKit

  • Metal tools that bridge to Foundation


Summary

  • autoreleasepool is not obsolete, even under ARC

  • It controls when temporary Objective-C objects are released

  • Automatically created:

    • Per main run loop iteration

    • Around app entry points

  • You should create one manually:

    • In tight loops

    • On background threads

    • In CLI tools

  • Swift developers still need it when working with Objective-C frameworks


References

Thanks for reading Anton’s Substack! Subscribe for free to receive new posts and support my work.

]]>
<![CDATA[Swift Bits: @dynamic vs @objc]]>https://antongubarenko.substack.com/p/swift-bits-dynamic-vs-objchttps://antongubarenko.substack.com/p/swift-bits-dynamic-vs-objcTue, 09 Dec 2025 08:39:12 GMTYou have probably, at some point, become aware of dispatch methods:

  • Static

  • Dynamic

  • Message dispatch

Explaining each of them has been done many times and continues to bring new insights. Here are some valuable links to help you get the full picture:

What’s the Difference?

Both of them come from the Objective-C world, where we had only message dispatch via objc_msgSend.

@objc exposes things to the Objective-C runtime, but by itself does not always force Objective-C message dispatch for Swift callers.

Swift adds @objc implicitly when:

  • You subclass NSObject

  • You override an Objective-C method

  • You conform to an @objc protocol

In other cases, behavior varies depending on where it’s added.


For a property:

class Person: NSObject {
    @objc var name: String = ""
}

What this does:

  1. The getter and setter are exposed as Objective-C methods:

    • - (NSString *)name

    • - (void)setName:(NSString *)name

  2. The property becomes visible to:

    • Objective-C code (normal property access)

    • KVC (value(forKey: "name"), setValue(_:forKey:))

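To see the KVC side in action, here is a minimal sketch. The Person class mirrors the snippet above; the playground-style usage is just for illustration:

```swift
import Foundation

class Person: NSObject {
    @objc var name: String = ""
}

let person = Person()

// Because the getter/setter are exposed to the Objective-C runtime,
// Key-Value Coding can reach the property by its string key.
person.setValue("Anton", forKey: "name")
print(person.value(forKey: "name") as? String ?? "")  // prints "Anton"
```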

For a class:

@objc class MyController: NSObject {
    @objc func doStuff() { }
}

What this does:

  1. Registers the class with the Objective-C runtime (as long as it’s Obj-C compatible, typically via NSObject).

  2. The class is now visible in Objective-C:

    • MyController *c = [[MyController alloc] init];

    • Can be used in Objective-C APIs, storyboards, etc.

Important points:

  • Marking a class with @objc does not automatically make all members @objc. You must either:

    • Mark members individually with @objc, or

    • Use @objcMembers on the class:

      @objcMembers
      class MyController: NSObject {
          func foo() { }  // implicitly @objc
      }

dynamic (usually used as @objc dynamic) is what forces Objective-C–style message dispatch even from Swift.

In Swift, this keyword forces dynamic dispatch, which can be confusing for Objective-C old-timers.


Tricky Story

If you remember @dynamic from Objective-C, it does NOT generate a getter/setter; it says: “You will find out the implementation later, at runtime”.

In Swift, @NSManaged is used to postpone getter/setter generation:

class Person: NSManagedObject {
    @NSManaged var name: String
}

Summary

  • @objc:

    • Makes the symbol participate in the Objective-C runtime (gives it a selector, etc.).

    • Ensures Objective-C callers use message dispatch.

    • Does not forbid the Swift compiler from using static/vtable dispatch for Swift call sites.

  • dynamic (which implies @objc):

    • Does force Objective-C message dispatch (via objc_msgSend) for all calls, including from Swift.

    • This is what you need for KVO, method swizzling, and other runtime features.

If you want a simple rule of thumb:

  • Need Obj-C interop only? → @objc

  • Need KVO / swizzling? → @objc dynamic

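As a quick illustration of that rule of thumb, here is a minimal KVO sketch. The Download class and its progress property are made up for the example; without dynamic, key-path observation on the property would not fire reliably:

```swift
import Foundation

final class Download: NSObject {
    // `dynamic` forces objc_msgSend dispatch, which is what lets KVO
    // swizzle the setter and emit change notifications.
    @objc dynamic var progress: Double = 0
}

let download = Download()
let observation = download.observe(\.progress, options: [.new]) { _, change in
    print("Progress:", change.newValue ?? 0)
}
download.progress = 0.5  // triggers the observer
observation.invalidate()
```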


]]>
<![CDATA[SwiftUI: Charts Interactivity - Part 2]]>https://antongubarenko.substack.com/p/swiftui-charts-interactivity-part-1edhttps://antongubarenko.substack.com/p/swiftui-charts-interactivity-part-1edMon, 08 Dec 2025 08:46:34 GMTIn the previous part, we built a humidity chart using the native Swift Charts framework. It has a nice X-axis with visible dates and a Y-axis with dynamic calculations and marks for noticeable humidity spikes.

Let me show you once again:

Chart with visible dates and humidity

The chart also supports basic selection with date highlight:

Selection with date info

It’s time to improve it!


Custom Selection Handling

Our selection relies on:

.chartXSelection(value: $selectedX)

With it, we are binding to values on the X-axis: dates, in our case. However, our tweak will require the older implementation using chartOverlay, which provides a ChartProxy. It helps convert the tap point into an actual value on the axis.

.chartOverlay { proxy in
    GeometryReader { geometry in
        Rectangle()
            .fill(.clear)
            .contentShape(Rectangle())
            .gesture(
                DragGesture()
                    .onChanged { value in
                        // Convert the gesture location to the plot area's coordinate space.
                        guard let plotFrame = proxy.plotFrame else { return }
                        let origin = geometry[plotFrame].origin
                        let location = CGPoint(
                            x: value.location.x - origin.x,
                            y: value.location.y - origin.y
                        )
                        // Get the x (date) and y (humidity) value from the location.
                        let (date, humidity) = proxy.value(at: location, as: (Date, Int).self) ?? (Date(), 0)
                        
                        // Clamp to real data range.
                        if
                            let firstDate = data.first?.date,
                            let lastDate = data.last?.date,
                            date >= firstDate,
                            date <= lastDate
                        {
                            selectedX = date
                        }
                    }
                    .onEnded { _ in
                        selectedX = nil
                    }
            )
    }
}

In this modifier, we:

  • Use a GeometryReader with a Rectangle that takes the whole width and height of the view

  • Assign a DragGesture to get the tap location

  • Adjust the location based on the chart origin

  • Convert the adjusted tap location to values from the data set

  • Limit selectedX to actual dates only

This code does the same as chartXSelection, but with full control of the range.


Jump to Closest Values

During selection, we can drag to intermediate values between grid steps, as in the picture above. That's not always informative, so snapping the selection to actual data points is a good option.

First, we need to calculate the date closest to the selected one. A small extension will help with that:

extension Date {
    func closestDate(in dates: [Date]) -> Self? {
        guard !dates.isEmpty else { return nil }
        return dates.min(by: { abs($0.timeIntervalSince(self)) < abs($1.timeIntervalSince(self)) })
    }
}
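A quick sanity check of the extension (the sample dates below are made up): on a 1-minute grid, a point 2 minutes 20 seconds in snaps to the 2-minute mark:

```swift
import Foundation

extension Date {
    // Returns the element of `dates` with the smallest absolute distance to self.
    func closestDate(in dates: [Date]) -> Self? {
        guard !dates.isEmpty else { return nil }
        return dates.min(by: { abs($0.timeIntervalSince(self)) < abs($1.timeIntervalSince(self)) })
    }
}

let base = Date(timeIntervalSince1970: 0)
let grid = (0..<5).map { base.addingTimeInterval(Double($0) * 60) }  // 0...4 minutes

// 2 min 20 s is closer to the 2-minute mark than to the 3-minute one.
let picked = base.addingTimeInterval(140).closestDate(in: grid)
print(picked == grid[2])  // true
```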

Then, on handling the drag:

.chartOverlay { proxy in
    GeometryReader { geometry in
        Rectangle()
            .fill(.clear)
            .contentShape(Rectangle())
            .gesture(
                DragGesture()
                    .onChanged { value in
                        guard let plotFrame = proxy.plotFrame else { return }
                        
                        // Convert the gesture location to the plot area’s coordinate space.
                        let origin = geometry[plotFrame].origin
                        let location = CGPoint(
                            x: value.location.x - origin.x,
                            y: value.location.y - origin.y
                        )
                        
                        // Get the x (date) and y (humidity) value from the location.
                        let (date, humidity) = proxy.value(
                            at: location,
                            as: (Date, Int).self
                        ) ?? (Date(), 0)
                        
                        // Clamp to valid data range.
                        if
                            let firstDate = data.first?.date,
                            let lastDate = data.last?.date,
                            date >= firstDate,
                            date <= lastDate
                        {
                            // Calculate closest date.
                            let dates = data.map(\.date)
                            let closestDate = date.closestDate(in: dates)
                            selectedX = closestDate
                        }
                    }
                    .onEnded { _ in
                        selectedX = nil
                    }
            )
    }
}

X-axis Interpolation

Years ago, I had a task not just to show a vertical line at the current selection, but also to highlight the current X-value on the line curve. With a limited data set, we either have to:

  • Add extra values to make steps smaller

  • Calculate the values using the same formula that plots the chart line

Adding values is a controversial step, and it breaks the Source-of-Truth paradigm. The best solution is to go with pure math!

We have .catmullRom interpolation, which is well described on Wikipedia.

Basically (or not so basically), to plot a spline between 2 points we need:

Courtesy of Wikipedia
  • 4 points

  • 1 normalized parameter t

Direct function in Swift:

func catmullRom(_ p0: Double, _ p1: Double, _ p2: Double, _ p3: Double, t: Double) -> Double {
    let t2 = t * t
    let t3 = t2 * t
    
    return 0.5 * (
        (2 * p1) +
        (-p0 + p2) * t +
        (2*p0 - 5*p1 + 4*p2 - p3) * t2 +
        (-p0 + 3*p1 - 3*p2 + p3) * t3
    )
}
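One useful property to verify: Catmull–Rom is an interpolating spline, so at t = 0 it returns p1 and at t = 1 it returns p2 exactly. A quick check with arbitrary sample values:

```swift
// Catmull–Rom spline between p1 and p2, with p0/p3 as outer control points.
func catmullRom(_ p0: Double, _ p1: Double, _ p2: Double, _ p3: Double, t: Double) -> Double {
    let t2 = t * t
    let t3 = t2 * t
    return 0.5 * (
        (2 * p1) +
        (-p0 + p2) * t +
        (2*p0 - 5*p1 + 4*p2 - p3) * t2 +
        (-p0 + 3*p1 - 3*p2 + p3) * t3
    )
}

// The spline passes through its middle control points.
print(catmullRom(10, 20, 40, 50, t: 0))  // 20.0
print(catmullRom(10, 20, 40, 50, t: 1))  // 40.0
```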

Now we just need to pass parameters to it. And the code is a little bit bigger:

/// Returns an interpolated humidity value at a specific date using Catmull–Rom spline interpolation.
/// - Parameters:
///   - date: The target date for interpolation.
///   - data: Array of humidity points sorted by date.
/// - Returns: Interpolated humidity as `Double`, or `nil` if interpolation cannot be performed.
func interpolatedHumidity(at date: Date, data: [HumidityRate]) -> Double? {
    // Need at least 4 control points for Catmull–Rom interpolation.
    guard data.count >= 4 else { return nil }
    
    // Find the first index where the data point's date is >= target date.
    // This identifies the segment the target date falls into.
    guard let idx = data.firstIndex(where: { $0.date >= date }), idx > 0 else {
        return nil
    }

    // Select four surrounding control points (with safe bounds)
    let i0 = max(0, idx - 2)
    let i1 = max(0, idx - 1)
    let i2 = idx
    let i3 = min(data.count - 1, idx + 1)

    // Extract humidity values (converted to Double for interpolation)
    let p0 = Double(data[i0].humidity)
    let p1 = Double(data[i1].humidity)
    let p2 = Double(data[i2].humidity)
    let p3 = Double(data[i3].humidity)

    // Compute normalized parameter t in [0, 1] along the i1–i2 segment
    let x1 = data[i1].date.timeIntervalSince1970
    let x2 = data[i2].date.timeIntervalSince1970
    let t  = (date.timeIntervalSince1970 - x1) / (x2 - x1)

    // Final Catmull–Rom interpolation
    return catmullRom(p0, p1, p2, p3, t: t)
}

To use this beautiful piece of code, small changes are required in the value calculation on drag:

// Set the selected date.
selectedX = date

// Calculate interpolated humidity (Y-value) for the selected date.
if let humidity = interpolatedHumidity(at: date, data: data) {
    selectedY = Int(humidity)
}

And what’s left:

  • Add var selectedY to track selection changes

  • Add new point mark when both selectedX and selectedY exist

struct HumidityChartViewDemo: View {
    
    @State private var selectedX: Date? = nil
    @State private var selectedY: Int? = nil
    ...

    if let selectedX {
        RuleMark(x: .value("Selected", selectedX))
            .annotation(position: .top) {
                VStack(spacing: 2) {
                    Text(selectedX, style: .time)
                }
            }
        if let selectedY {
            PointMark(
                x: .value("X", selectedX),
                y: .value("Y", selectedY)
            )
        }
    }
}

Amazing! Life could be a dream with just a few lines of code.

Code for this part

Thanks for reading and stay tuned for the next discoveries!


]]>
<![CDATA[Swift Bits: SwiftUI - Animate Binding]]>https://antongubarenko.substack.com/p/swift-bits-swiftui-animate-bindinghttps://antongubarenko.substack.com/p/swift-bits-swiftui-animate-bindingWed, 03 Dec 2025 13:04:47 GMTSmall Intro

After months of publishing articles, you eventually reach a barrier. On one side, there’s an interesting topic or challenge — and on the other, there’s the question of how much information you can disclose to make it a full post. Substack, after all, is geared toward longer pieces with animations, code snippets, and videos.

Post after post, you build trust with your audience and set an expectation level. No, I don’t mean suddenly switching to baking or gardening 😄. With limited tag options (and they’re barely used here), the only thing left is the title and subtitle. So why not create a category and set the expected content type in advance?

In these Swift Bits posts, I want to share small but useful tips discovered during development or while preparing articles. If you’re a beatboxer or any kind of musician in your non-coding hours (hmm, spending free time not coding? What nonsense!) — you’ll recognize “Bits” as those small sound pieces that make a whole track shine.


To animate transitions and modifier property changes, we use the .animation modifier. However, in some situations it's hard to trigger the animation, since we can't just write:

withAnimation {
   variable.toggle()
}

Let’s check a sample:

struct SwiftBits: View {
    
    //Some mode
    enum PickerMode {
        case left, right
    }
    
    @State private var mode: PickerMode = .left
    
    var body: some View {
        
        VStack {
            
            //Button to toggle the mode
            Button {
                withAnimation {
                    mode = (mode != .left) ? .left : .right
                }
            } label: {
                Text("Change mode")
                    .font(.title)
            }
            
            if mode == .left {
                Text("Left")
                    .transition(.opacity.combined(with: .scale))
            } else {
                Text("Right")
                    .transition(.opacity.combined(with: .scale))
            }
        }
    }
}

Everything looks good. We have an appearance transition combining opacity and scale.

But what if we want to toggle it using a Picker?


struct SwiftBits: View {
    
    //Some mode
    enum PickerMode {
        case left, right
    }
    
    @State private var mode: PickerMode = .left
    
    var body: some View {
        
        VStack {
            //Button to toggle the mode
            Button {
                withAnimation {
                    mode = (mode != .left) ? .left : .right
                }
            } label: {
                Text("Change mode")
                    .font(.title)
            }

            //Picker to toggle the mode
            Picker("Picker", selection: $mode) {
                Text("Left").tag(PickerMode.left)
                Text("Right").tag(PickerMode.right)
            }
            .pickerStyle(SegmentedPickerStyle())
            .padding()
            
            if mode == .left {
                Text("Left")
                    .transition(.opacity.combined(with: .scale))
            } else {
                Text("Right")
                    .transition(.opacity.combined(with: .scale))
            }
        }
    }
}

Unfortunately, this doesn’t work as expected. One option is to write a custom Binding with get/set:

var body: some View {
    let animatedModeBinding = Binding<PickerMode>(
        get: { mode },
        set: { newValue in
            withAnimation {
                mode = newValue
            }
        }
    )
    // ... pass animatedModeBinding to the Picker's selection

This looks like too much for such simple logic, and there’s a more convenient solution!

Binding has a lot of extensions, and one of them will help us:

@available(iOS 13.0, macOS 10.15, tvOS 13.0, watchOS 6.0, *)
extension Binding {

    /// Specifies a transaction for the binding.
    ///
    /// - Parameter transaction  : An instance of a ``Transaction``.
    ///
    /// - Returns: A new binding.
    public func transaction(_ transaction: Transaction) -> Binding<Value>

    /// Specifies an animation to perform when the binding value changes.
    ///
    /// - Parameter animation: An animation sequence performed when the binding
    ///   value changes.
    ///
    /// - Returns: A new binding.
    public func animation(_ animation: Animation? = .default) -> Binding<Value>
}

.animation returns a new Binding with the animation attached.


struct SwiftBits: View {
    
    //Some mode
    enum PickerMode {
        case left, right
    }
    
    @State private var mode: PickerMode = .left
    
    var body: some View {

        VStack {
            //Button to toggle the mode
            Button {
                withAnimation {
                    mode = (mode != .left) ? .left : .right
                }
            } label: {
                Text("Change mode")
                    .font(.title)
            }

            //Picker to toggle the mode
            Picker("Picker", selection: $mode.animation()) {
                Text("Left").tag(PickerMode.left)
                Text("Right").tag(PickerMode.right)
            }
            .pickerStyle(SegmentedPickerStyle())
            .padding()
            
            if mode == .left {
                Text("Left")
                    .transition(.opacity.combined(with: .scale))
            } else {
                Text("Right")
                    .transition(.opacity.combined(with: .scale))
            }
        }
    }
}

Working like a charm! Gist is here.


]]>
<![CDATA[SwiftUI: Charts Interactivity - Part 1]]>https://antongubarenko.substack.com/p/swiftui-charts-interactivity-parthttps://antongubarenko.substack.com/p/swiftui-charts-interactivity-partMon, 01 Dec 2025 09:28:18 GMTIn the previous post, we made a discardable slider to prevent extra data writes and logic triggering. The project is growing, and now we need to represent the stored data in an understandable and appealing way: charts (if “appealing” can be applied to charts at all). This has become more convenient with the Swift Charts framework.

Before iOS 16, we had to use third-party charts or implement them on our own. Some of the popular solutions were:

  • MPAndroidChart port, which is now called DGCharts because of a name conflict with the native framework

  • SwiftUICharts for easy implementation

  • AAChartKit-Swift, which is still very popular and, as you might notice, is a port of the Android AAChartKit

  • SwiftCharts, which has been unmaintained for months now

As you can see, Android chart libraries (even though they’re custom components) inspired a lot of these solutions. Many popular apps used the mentioned ports. What can I say: some very popular and famous companies have gained their users by implementing a chart component and selling it. It’s no secret that TradingView is a leader in charting controls and has its own performant implementation.

The ice began to melt at WWDC22, when Apple introduced Swift Charts. Here is the session to get you started with a brief overview. For those who want the official docs:

That should be enough basic knowledge about Charts for us to start.


Plain Chart

Let’s create a data array and scatter chart for it. We will show a daily chart of humidity.

struct HumidityRate: Identifiable {
    let humidity: Int
    let date: Date

    var id: Date { date }
}

For convenient testing, we can add an initializer with date offset and extension:

struct HumidityRate: Identifiable {
    let humidity: Int
    let date: Date

    var id: Date { date }

    init(minutesOffset: Double, humidity: Int) {
        self.date = Date().addingTimeInterval(minutesOffset * 60)
        self.humidity = humidity
    }
}

extension HumidityRate {
    static var samples: [HumidityRate] {
        [
            .init(minutesOffset: -3, humidity: 10),
            .init(minutesOffset: -2, humidity: 10),
            .init(minutesOffset: -1, humidity: 10),
            .init(minutesOffset: 0, humidity: 20),
            .init(minutesOffset: 1, humidity: 30),
            .init(minutesOffset: 2, humidity: 40),
            .init(minutesOffset: 3, humidity: 50),
            .init(minutesOffset: 4, humidity: 40),
            .init(minutesOffset: 5, humidity: 30),
            .init(minutesOffset: 6, humidity: 20),
            .init(minutesOffset: 7, humidity: 10)
        ]
    }
}

Now it’s time for the chart. Construction is pretty simple. At least for now ) We will pass data and use LineMark to draw a line and PointMark to place dots.

struct HumidityChartViewDemo: View {
    
    let data: [HumidityRate]
    
    var body: some View {
        Chart(data, id: \.date) { rate in
            LineMark(
                x: .value("", rate.date),
                y: .value("", rate.humidity)
            )
            
            PointMark(
                x: .value("", rate.date),
                y: .value("", rate.humidity)
            )
        }
        .frame(height: 400)
        .padding()
    }
}

#Preview {
    let data: [HumidityRate] = HumidityRate.samples
    HumidityChartViewDemo(data: data)
}
Plain chart with line and dots

Isn’t it simple? Yes! Informative? Not really ) We can start with colors, which are very handy for highlighting the values. Values before the current time will get a smaller font weight and opacity. We’ll also add smooth interpolation to polish the edges, which will lead to an interesting turnaround later.

struct HumidityChartView: View {
    let data: [HumidityRate]
    
    var body: some View {
        Chart(data, id: \.date) { rate in
            LineMark(
                x: .value("", rate.date),
                y: .value("", rate.humidity)
            )
            .interpolationMethod(.catmullRom)
            .foregroundStyle(LinearGradient(colors: data.compactMap({ Color.colorForIndex($0.humidity) }), startPoint: .leading, endPoint: .trailing))
            
            PointMark(
                x: .value("", rate.date),
                y: .value("", rate.humidity)
            )
            .foregroundStyle(Color.colorForIndex(rate.humidity))
            .annotation(position: .top) {
                Text("\(rate.humidity)")
                    .font(.headline)
                    .fontWeight(rate.date < Date() ? .regular : .bold)
                    .opacity(rate.date < Date() ? 0.5 : 1.0)
            }
        }
        .frame(height: 400)
        .padding()
    }
}

extension Color {
    static func colorForIndex(_ humidity: Int) -> Color {
        switch humidity {
        case 0..<20: return .green
        case 20..<50: return .yellow
        case 50..<70: return .orange
        case 70...80: return .red
        default: return .purple
        }
    }
}
Chart with gradient line and colored dots

LinearGradient should have the same number of color stops as there are data samples!

All looks great. Perhaps filling the gap below the chart with AreaMark will bring more visibility.

struct HumidityChartView: View {
    
    let data: [HumidityRate]
    
    var body: some View {
        Chart(data, id: \.date) { rate in
            // Area highlight
            AreaMark(x: .value("", rate.date),
                     y: .value("", rate.humidity))
            .foregroundStyle(LinearGradient(colors:
                                                data.compactMap({
                Color.colorForIndex($0.humidity)
            }),
                                            startPoint: .leading,
                                            endPoint: .trailing))
            .interpolationMethod(.catmullRom)
            
            LineMark(
                x: .value("", rate.date),
                y: .value("", rate.humidity)
            )
            .interpolationMethod(.catmullRom)
            .foregroundStyle(LinearGradient(colors: data.compactMap({ Color.colorForIndex($0.humidity) }), startPoint: .leading, endPoint: .trailing))
            
            PointMark(
                x: .value("", rate.date),
                y: .value("", rate.humidity)
            )
            .foregroundStyle(Color.colorForIndex(rate.humidity))
            .annotation(position: .top) {
                Text("\(rate.humidity)")
                    .font(.headline)
                    .fontWeight(rate.date < Date() ? .regular : .bold)
                    .opacity(rate.date < Date() ? 0.5 : 1.0)
            }
        }
        .frame(height: 400)
        .padding()
    }
} 
Chart with interpolated line, points and area mark

Unfortunately, that merged the points with AreaMark. This can be fixed by adding extra annotation to the PointMark:

.annotation(position: .automatic, alignment: .center, spacing: -9.0) {
    Circle()
         .stroke(Color.black.opacity(0.5), lineWidth: 1)
}
Now points are visible

Selection

Starting from iOS 17, framework contains selection handling for each of the axis.

nonisolated public func chartXSelection<P>(value: Binding<P?>) -> some View where P : Plottable

Before that, we had to place an overlay and track which values were available at the exact location. For a small, uncomplicated selection it might work. By adding a RuleMark, a vertical line can be drawn showing the time for the selected X-value:

struct HumidityChartViewDemo: View {
    
    @State private var selectedX: Date? = nil
    
    let data: [HumidityRate]
    
    var body: some View {
        Chart(data, id: \.date) { rate in
            AreaMark(x: .value("", rate.date),
                     y: .value("", rate.humidity))
            .foregroundStyle(LinearGradient(colors:
                                                data.compactMap({
                Color.colorForIndex($0.humidity)
            }),
                                            startPoint: .leading,
                                            endPoint: .trailing))
            .interpolationMethod(.catmullRom)
            
            LineMark(
                x: .value("", rate.date),
                y: .value("", rate.humidity)
            )
            .interpolationMethod(.catmullRom)
            .foregroundStyle(LinearGradient(colors: data.compactMap({ Color.colorForIndex($0.humidity) }), startPoint: .leading, endPoint: .trailing))
            
            PointMark(
                x: .value("", rate.date),
                y: .value("", rate.humidity)
            )
            .foregroundStyle(Color.colorForIndex(rate.humidity))
            .annotation(position: .top) {
                Text("\(rate.humidity)")
                    .font(.headline)
                    .fontWeight(rate.date < Date() ? .regular : .bold)
                    .opacity(rate.date < Date() ? 0.5 : 1.0)
            }
            .annotation(position: .automatic, alignment: .center, spacing: -9.0) {
                Circle()
                    .stroke(Color.black.opacity(0.5), lineWidth: 1)
            }
            
            if let selectedX {
                RuleMark(x: .value("Selected", selectedX))
                    .annotation(position: .automatic) {
                        VStack(spacing: 0) {
                            Text(selectedX, style: .time)
                        }
                    }
            }
        }
        .chartXSelection(value: $selectedX)
        .frame(height: 400)
        .padding()
    }
}
Selection with RuleMark

Also, this recalculates the Y-axis to show the selection more clearly. To remove that, we can restrict the domain (by calculating max + 10 on the Y-axis):

.chartYScale(domain: 0...(min(100, data.map(\.humidity).max() ?? 0) + 10))

We are moving along pretty well! Fixing the axis labels would be a good milestone, don’t you think? Right now we can’t even tell what time it is.


Axis Labels

First, we need to show the date values. Times earlier than the current one should have lower opacity. For this, we need to use chartXAxis. According to the docs, it is used for configuring the X-axis with ChartAxisContent.

AxisMarks(
    position: .bottom, values: data.compactMap(\.date)
){ value in
    if let date = value.as(Date.self) {
        AxisGridLine(stroke: .init(lineWidth: 1))
        
        AxisValueLabel {
            VStack(alignment: .center) {
                Text(date, format: .dateTime.hour().minute())
                    .font(.footnote)
                    .opacity( date < Date() ? 0.5 : 1.0)
            }
        }
        
    }
}
X-axis labels added

Well, that is not ideal. Sometimes that much info is not needed. Let’s show only even-indexed dates and skip the last one. A small extension will help us.

extension Array {
    func evenIndexed() -> Self {
        self.enumerated()
            .compactMap { index, element in
                index.isMultiple(of: 2) ? element : nil
            }
    }
}
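A quick usage check of the helper (the sample values are arbitrary):

```swift
extension Array {
    // Keeps only the elements at even indices (0, 2, 4, ...).
    func evenIndexed() -> Self {
        self.enumerated()
            .compactMap { index, element in
                index.isMultiple(of: 2) ? element : nil
            }
    }
}

print([10, 20, 30, 40, 50].evenIndexed())  // prints [10, 30, 50]
```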

.chartXAxis(content: {
        AxisMarks(
            position: .bottom, values: data.compactMap(\.date)
        ){ value in
            if let date = value.as(Date.self) {
                AxisGridLine(stroke: .init(lineWidth: 1))
                if data.compactMap(\.date).evenIndexed().contains(date) && data.compactMap(\.date).last != date {
                    AxisValueLabel {
                        VStack(alignment: .center) {
                            Text(date, format: .dateTime.hour().minute())
                                .font(.footnote)
                                .opacity( date < Date() ? 0.5 : 1.0)
                        }
                    }
               }
            }
        }
    })
Correct X-axis labels

Now we can address the humidity axis. As with the X-axis, nothing will work without iterating over the values. The humidity steps can be calculated with this formula:

Array(stride(from: 0, to: min(100, (data.map(\.humidity).max() ?? 0) + 10), by: 10))

.chartYAxis {
    AxisMarks(
        position: .trailing,
        values: Array(
            stride(
                from: 0,
                to: min(100, (data.map(\.humidity).max() ?? 0) + 10),
                by: 10
            )
        )
    ) { value in
        if let number = value.as(Int.self) {
            // 20 and 50 are border values to highlight
            if [20, 50].contains(number) {
                AxisGridLine(stroke: .init(lineWidth: 2))
                    .foregroundStyle(Color.colorForIndex(number))

                AxisValueLabel {
                    VStack(alignment: .leading) {
                        Text("\(number)")
                            .fontWeight(.bold)
                    }
                }
                .foregroundStyle(Color.colorForIndex(number))

            } else {
                AxisGridLine(stroke: .init(lineWidth: 1))

                AxisValueLabel {
                    VStack(alignment: .leading) {
                        Text("\(number)")
                    }
                }
            }
        }
    }
}
New Y-axis with border values highlight

This is a great foundation for our upcoming styling and research. The next post will cover selection tweaks. You will find out why we used interpolation after all.

Code for this part

Thanks for reading and stay tuned for the next part!

Thanks for reading Anton’s Substack! Subscribe for free to receive new posts and support my work.

]]>
<![CDATA[Start building with Swift and SwiftUI - Code-Along Notes and Q&A]]>https://antongubarenko.substack.com/p/start-building-with-swift-and-swiftuihttps://antongubarenko.substack.com/p/start-building-with-swift-and-swiftuiTue, 18 Nov 2025 11:42:28 GMTApple’s educational sessions continue with a new “Code-along: Start building with Swift and SwiftUI | Meet with Apple“ video, aimed mainly at beginner engineers who are just starting out with Swift and SwiftUI. Even so, it offers a good peek at Apple’s vision of the development process.

The code-along walks through building a basic app from scratch and using different frameworks (Photos, SwiftData, etc.). A pretty wide range, you might say? Yes! What normally takes months is condensed into 1h 30m, with the video presented alongside the editor view.

Usually, I don’t cover what’s happening on screen during these sessions. However, because the main audience here is beginner Swift developers, asking specific questions in such an “all-in-one” session is challenging. This leads to a smaller number of concrete questions — and even fewer detailed answers. Still, I’ll include a Q&A summary in the last chapter, because some of them are pretty interesting and direct. For now, here are the main takeaways…


Don’t Skip Tutorials

Courtesy of Apple Inc.

Five years ago, we could only dream of having official tutorials from Apple. The whole community grew up on self-taught articles and videos. Not bad, but not ideal — especially considering Apple clearly has the resources and people to create proper learning materials.

Develop in Swift Tutorials are interactive, detailed, and include step-by-step guidance with source code for each section. They even cover design, distribution, and other parts of app development. You can move at your own pace, follow your own schedule, and return to any part whenever you need.


Meet with Apple Sessions

Courtesy of Apple Inc.

If you’ve been following me for a while, you might have noticed that this is a new format — something that stands out from what Apple has done before. The official YouTube channel only appeared a few years ago, letting us finally browse both new and old WWDC videos there. Previously, the Apple Developer app was the only option.

Take your time to register there: Available Sessions


Join the Swift Student Challenge

The Swift Student Challenge is an annual competition hosted by Apple that encourages students 13 years or older to create an interactive and creative project using Swift Playgrounds. The goal is to create an experience that can be enjoyed in about three minutes. It’s not about building a massive, complex application, but rather showcasing creativity, innovation, and originality.

How does the Swift Student Challenge help beginner developers?

A Great Learning Opportunity

  • Hands-on Experience: The challenge provides a practical way to apply coding skills to a real project. This is a fantastic way to move from theoretical knowledge to practical application.

  • Mastering Swift: Even if you’re new to Swift, the challenge is a great way to learn the language. Apple provides resources like the Swift Playgrounds app, which is designed for beginners to learn coding.

  • Focus on Creativity: The challenge emphasizes creativity and problem-solving over pure technical complexity. This allows beginners to focus on a great idea and execute it well, without needing to be an expert in every aspect of Swift development.

Building a Portfolio and Gaining Recognition

  • Showcase Your Skills: Participating in the challenge, and especially winning, is a great addition to a developer’s portfolio. It demonstrates initiative, technical skills, and creative problem-solving abilities that are highly valued by potential employers and college admissions committees.

  • Industry Recognition: Winners receive recognition from Apple, which can be a significant boost to a young developer’s career. This can lead to internships and job opportunities at major tech companies.

Networking and Community

  • Connect with Peers: The challenge connects you with a global community of student developers. You can share ideas, get feedback, and build friendships with like-minded individuals.

  • Access to Apple’s Developer Network: Winners gain access to Apple’s global developer network, providing opportunities to connect with industry professionals and mentors.

Submissions are open February 6–28, 2026.


Q&A

Finally, sharing the sessions summary.

What is the current structure of the tutorials? I see SwiftUI tutorials, app development tutorials, and the new Develop in Swift tutorials.

You can find all the tutorials at: link. The tutorials we are following today are under App Development.

I’m a visual thinker—are there learning resources that help bridge the gap between art tools and code?

The Designing for visionOS section in the Human Interface Guidelines is excellent.

Can accent colors change based on the system’s light or dark appearance?

You can change the accent color in your Asset Catalog. By doing this, you can set a color for light appearance and another for dark appearance if you wish.

How can we find our code snippets in the new Xcode UI?

You can use Command + Shift + L to open the library. You will find your code snippets there.

Does SwiftData replace Core Data?

SwiftData doesn’t completely replace Core Data today. SwiftData’s default store uses Core Data, and Core Data provides more flexibility in some cases.

How does @Observable differ from @ObservableObject in terms of performance and boilerplate?

Starting with iOS 17, we recommend using the Observable macro. Adopting Observation provides these benefits:

  • Tracking optionals and collections of objects, which isn’t possible with ObservableObject.

  • Updating views only when properties read by the view change, improving performance.

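The difference above can be sketched in a few lines. This is a minimal, illustrative comparison — the type and property names are made up for the example:

```swift
import SwiftUI
import Observation

// Pre-iOS 17 style: any @Published change invalidates every observing view.
final class LegacyCounter: ObservableObject {
    @Published var count = 0
    @Published var label = "Counter"   // changing this also re-renders observers
}

// iOS 17+ style: no property wrappers needed, and views re-render
// only when a property they actually read changes.
@Observable
final class ModernCounter {
    var count = 0
    var label = "Counter"
}

struct CounterView: View {
    var model = ModernCounter()   // plain property is enough for reads

    var body: some View {
        // This body reads only `count`, so mutating `label`
        // elsewhere won't trigger a re-render of this view.
        Button("Count: \(model.count)") {
            model.count += 1
        }
    }
}
```

Note how the `@Observable` version also drops the `@Published` boilerplate entirely.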

I see the benefit of copying and pasting from the Swift tutorial to set up the project, but is there guidance about using coding assistants to set up and continue a project?

A coding assistant is a great tool. Also, join the Meet with Apple session on February 5th, where you can follow along with an Apple expert and experiment with the latest Coding Intelligence tools in Xcode 26.

What is UIImage?

A UIImage object manages image data. You use image objects to represent image data of all kinds, and the class handles all formats supported by UIKit. Apple Docs.

Why do some modifiers, like .scrollDismissesKeyboard(), go outside the ScrollView, while modifiers like .navigationTitle() go inside the NavigationStack? What’s the difference?

Excellent question! View modifiers (e.g., .scrollDismissesKeyboard()) modify the view itself and must be applied to the view. Environment modifiers (e.g., .navigationTitle()) modify views within a container’s context and therefore go inside the container.
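The placement rule can be sketched like this — a minimal, hypothetical screen, with the view names invented for illustration:

```swift
import SwiftUI

struct NotesView: View {
    let notes = ["Buy milk", "Call Bob"]

    var body: some View {
        NavigationStack {
            ScrollView {
                ForEach(notes, id: \.self) { note in
                    Text(note)
                }
            }
            // View modifier: configures the ScrollView itself,
            // so it is attached directly to the ScrollView.
            .scrollDismissesKeyboard(.interactively)
            // Environment-style modifier: describes this screen to the
            // enclosing NavigationStack, so it goes inside the stack,
            // on the stack's content.
            .navigationTitle("Notes")
        }
    }
}
```

A handy mental model: ask *who consumes* the modifier. `.scrollDismissesKeyboard()` changes the scroll view's own behavior; `.navigationTitle()` is information the surrounding navigation container reads from its content.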

I’m watching from Windows and normally code in React. I want to understand Swift conceptually so I can rebuild this app in my own stack. What patterns should I pay closest attention to?

Swift Pathway and the Getting Started guide are great resources.

Are visionOS apps accepted for the Swift Student Challenge?

For Swift Student Challenge 2026, app playgrounds are accepted. They can be built in Xcode or Swift Playgrounds, and must target iOS or iPadOS.

I want to participate in Swift Student Challenge 2026. Are we allowed to submit two ideas/apps?

Submit only one app. Visit the eligibility page.

Is it permitted for the Swift Student Challenge to include Apple’s Foundation Models framework in an app?

Yes — on-device Apple Intelligence frameworks and other Apple technologies may be used.

Why separate the contentStack into a variable instead of writing it inline in the body? Is there an advantage?

Extracting views in SwiftUI keeps your code organized, readable, and maintainable. In this project, it will grow in later sections, which is why it was extracted.

How can you pass a binding through a .navigationDestination, such as from a list view to a detail view?

Use navigationDestination(item:destination:). When the item binding is non-nil, SwiftUI passes the value to the destination. Learn more.
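A minimal sketch of that pattern, with a hypothetical `Note` model invented for the example:

```swift
import SwiftUI

// Hypothetical model for illustration.
struct Note: Identifiable, Hashable {
    let id = UUID()
    var text: String
}

struct NotesListView: View {
    @State private var notes = [Note(text: "Buy milk"), Note(text: "Call Bob")]
    @State private var selectedNote: Note?

    var body: some View {
        NavigationStack {
            List(notes) { note in
                Button(note.text) { selectedNote = note }
            }
            // When `selectedNote` becomes non-nil, SwiftUI pushes the
            // destination and hands it the unwrapped value.
            .navigationDestination(item: $selectedNote) { note in
                NoteDetailView(note: note)
            }
        }
    }
}

struct NoteDetailView: View {
    let note: Note
    var body: some View { Text(note.text) }
}
```

Setting the binding back to `nil` (or tapping back) pops the detail view automatically.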


Acknowledgments 🏆

A big thank-you to everyone who joined and contributed thoughtful, insightful, and engaging questions throughout the session — your curiosity and participation made the discussion truly rich and collaborative.

Special thanks to:

Jay Zheng, Philippos Sidiroglou, Sasha Jarohevskii, Jonathan Judelson, Okba Khenissi, Jobie, Ayush J., Christopher State, Erik Jimenez, Derek Haugen, David Ramón Chica, Gina Mahaz, Nick, Roman Indermühle, Tatiana Brimm, Evan S., Joe Heck, Ash, Steve Talkowski, Mansi Bansal, Kevin Johnson.

Finally, a heartfelt thank-you to the Apple team and moderators for leading the session, sharing expert guidance, and providing such clear explanations of app optimization techniques. Your contributions made this an exceptional learning experience for everyone involved.


One more thing…

Ever tried to explain “Yak Shaving,” “Spaghetti Code,” or “Imposter Syndrome”? Now you don’t have to — just send a sticker.

TecTalk turns everyday developer slang into fun, relatable stickers for your chats. Whether you’re venting about bugs or celebrating a successful deploy, there’s a sticker for every tech mood.

Created by me 🧑‍💻 — made by a dev, for devs — and available now at a very affordable price.

Express your inner techie. Stop typing. Start sticking.

]]>