Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

All subtopics
Posts under Media Technologies topic

Post | Replies | Boosts | Views | Activity
tvOS: Background audio + local caching works on Simulator but stops on real Apple TV device
Description: I’m developing a tvOS app using SwiftUI where we play background audio (music) on the Welcome screen, with support for offline playback via local caching.

🔹 Feature Overview
- App fetches audio metadata from an API
- Starts streaming audio (HLS .m3u8) immediately
- In parallel, downloads the raw audio file (.mp3)
- Once the download completes, switches playback from streaming to the local file
- On next launch (offline mode), the app plays audio from local storage

🔹 Issue
This flow works perfectly on the Simulator, but on a real Apple TV device:
- Audio plays for a few seconds (2–5 sec) and then stops, especially after switching from streaming to the local file
- No explicit AVPlayer error is logged
- Playback sometimes stops after UI updates or a periodic API refresh

🔹 Implementation Details
- Using AVPlayer with AVPlayerItem
- Background audio is controlled via a shared manager (singleton)
- Files are stored locally using FileManager (currently in .cachesDirectory)
- Switching playback using:
      player.replaceCurrentItem(with: AVPlayerItem(url: localURL))
      player.play()

🔹 Observations
- Works reliably on the Simulator
- On device, playback stops silently; seems related to lifecycle, buffering, or file access
- No issues when continuously streaming (without switching to local)

🔹 Questions
- Is there any limitation or known issue with AVPlayer when switching from streaming (HLS) to local file playback on tvOS?
- Are there specific requirements for playing locally cached media files on tvOS (e.g., file location, permissions, or sandbox behavior)?
- What is the recommended storage location and size limit for cached media files on tvOS? We understand tvOS has limited persistent storage. Is .cachesDirectory the correct approach for this use case?
- Are there known differences in AVPlayer behavior between the Simulator and real Apple TV devices (especially regarding buffering or lifecycle)?
- What is the recommended approach for implementing offline background audio in tvOS apps?

🔹 Goal
We want to implement a reliable system where:
- Audio streams initially
- Playback seamlessly switches to the local file after download
- Playback continues without interruption
- Offline playback is supported on subsequent launches

Any guidance or best practices would be greatly appreciated. Thank you!
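Not from the post, but a first diagnostic step for a silent stop like this is to observe the player and item state directly rather than waiting for an error that never surfaces. A minimal sketch, assuming a class name (PlaybackDiagnostics) invented here for illustration:

```swift
import AVFoundation

// Hypothetical diagnostic helper; attach it to the shared player manager's
// AVPlayer to see whether the system pauses the player or the item fails.
final class PlaybackDiagnostics {
    private var observations: [NSKeyValueObservation] = []

    func attach(to player: AVPlayer) {
        // .paused means something paused the player; .waitingToPlayAtSpecifiedRate
        // means it stalled, and reasonForWaitingToPlay says why.
        observations.append(player.observe(\.timeControlStatus, options: [.new]) { player, _ in
            print("timeControlStatus:", player.timeControlStatus.rawValue,
                  "waiting:", player.reasonForWaitingToPlay?.rawValue ?? "none")
        })
        if let item = player.currentItem {
            // An item can fail without the player logging anything visible.
            observations.append(item.observe(\.status, options: [.new]) { item, _ in
                if item.status == .failed {
                    print("item failed:", item.error?.localizedDescription ?? "unknown")
                }
            })
        }
    }
}
```

Re-attaching after replaceCurrentItem(with:) matters, since the item-level observation otherwise still points at the old streaming item.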
1
0
51
4h
Ventura Hack for FireWire Core Audio Support on Supported MacBook Pro and others...
Hi all,

Apple dropping ongoing development for FireWire devices that were supported under the Core Audio driver standard is a catastrophe for a lot of struggling musicians who need to both keep up to date with the security updates that come with new OS releases and continue to utilise their hard-earned investments in very expensive, still-pristine audio devices that have been reduced to e-waste by Apple's seemingly tone-deaf ignorance of the cries for ongoing support.

I have one of said audio devices, and I'd like to keep using it while keeping my 2019 Intel MacBook Pro up to date with the latest security updates and OS features.

This is probably not the first time you gurus have had someone make the logical leap leading to a request like this, but I was wondering whether it might somehow be possible to shoehorn the code from previous versions of macOS that allowed the Mac to speak with the audio features of such devices into the Ventura version of the OS.

Would it be possible? Would it involve a lot of work? I don't think I'd be the only person willing to pay for a third-party application or utility that restored this functionality. There have to be hundreds of thousands of people who would be happy to spare some cash to stop their multi-thousand-dollar investment in gear from being so thoughtlessly consigned to the scrap heap.

Any comments or layman-friendly explanations as to why this couldn't happen would be gratefully received!

Thanks,
em
64
10
36k
5h
Setting up video and image capture pipeline creates internal errors in AVFoundation.
I have created code for iOS that allows me to start and stop video acquisition from a proprietary USB camera using AVFoundation's AVCaptureSession and AVCaptureDevice APIs. There are start and stop methods. The start method takes an argument specifying one of the two formats I use for my custom camera application. I can start the session and switch between formats all day without any errors. However, if I start and then stop the camera three times in a row, then on the third invocation of start I get errors in the console output and the CMSampleBuffers stop flowing to my callback. Additionally, once I get AVFoundation into this state, stopping the camera doesn't help; I have to kill the app and start over. Here are the errors, and below them, the code. I'm hoping someone who has experience with these errors, or an engineer from Apple who knows the AVFoundation image capture pipeline code, can respond and tell me what I'm doing wrong. Thanks.

<<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:558) - (err=-16453)
<<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:253) - (err=-16453)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:269) - (err=-16453)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:511) - (err=-16453)
Capture session error: The operation could not be completed
Capture session error: The operation could not be completed

    func start(for deviceFormat: String) async throws -> AnyPublisher<CMSampleBuffer, Swift.Error> {
        func configureCaptureDevice(with deviceFormat: String) throws {
            guard let format = formatDict[deviceFormat] else { throw Error.captureFormatNotFound }
            captureSession.beginConfiguration()
            defer { captureSession.commitConfiguration() }
            try captureDevice.lockForConfiguration()
            captureDeviceFormat = deviceFormat
            captureDevice.activeFormat = format
            captureDevice.unlockForConfiguration()
        }
        return try await withCheckedThrowingContinuation { continuation in
            sessionQueue.async { [unowned self] in
                logger.debug("Start capture session for \(deviceFormat): \(String(describing: captureSession))")
                // If we were already streaming camera images from a different mode, terminate that stream.
                bufferPublisher?.send(completion: .finished)
                bufferPublisher = nil
                captureDeviceFormat = ""
                do {
                    // Re-configure with the new format; should be harmless if called with the currently configured format.
                    try configureCaptureDevice(with: deviceFormat)
                    // Return a new stream publisher for this invocation.
                    bufferPublisher = PassthroughSubject<CMSampleBuffer, Swift.Error>()
                    // If we are not currently running, start the image capture pipeline.
                    if captureSession.isRunning == false {
                        captureSession.startRunning()
                    }
                    continuation.resume(returning: bufferPublisher!.eraseToAnyPublisher())
                } catch {
                    logger.fault("Failed to start camera: \(error.localizedDescription)")
                    continuation.resume(throwing: error)
                }
            }
        }
    }

    func stop() async throws {
        try await withCheckedThrowingContinuation { continuation in
            sessionQueue.async { [unowned self] in
                logger.debug("Stop capture session: \(String(describing: captureSession))")
                // The following invocation is synchronous and takes time to execute;
                // looks like a stall but you can ignore it as the MainActor is not blocked.
                captureSession.stopRunning()
                // Terminate the stream and reset our state.
                bufferPublisher?.send(completion: .finished)
                bufferPublisher = nil
                captureDeviceFormat = ""
                // Signal the caller that we are done here.
                continuation.resume()
            }
        }
    }
0
0
78
14h
How to Validate Now Playing Events on Apple Devices (iOS/tvOS)?
Hi Support Team, I need some guidance regarding Now Playing metadata integration on Apple platforms (iOS/tvOS). We are currently implementing Now Playing events in our application and would like to understand:
- How can we enable or configure logging for Now Playing metadata updates?
- Is there a recommended way or tool to verify that Now Playing events are correctly sent and received by the system (e.g., Control Center or external devices)?
- Are there any debugging techniques or best practices for validating metadata updates during development?
Our app is currently in the development phase, and we are working towards meeting Video Partner Program (VPP) requirements. Any documentation, tools, or suggestions would be greatly appreciated. Thanks in advance for your support.
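For reference, the usual publishing path is MPNowPlayingInfoCenter, and the quickest validation during development is watching Control Center (or an AirPlay receiver) after each update. A minimal sketch; all metadata values here are placeholders:

```swift
import MediaPlayer

// Publish Now Playing metadata so the system UI (Control Center,
// lock screen, external devices) can display and verify it.
func publishNowPlaying(title: String, duration: TimeInterval,
                       position: TimeInterval, rate: Float) {
    let info: [String: Any] = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyPlaybackDuration: duration,
        MPNowPlayingInfoPropertyElapsedPlaybackTime: position,
        MPNowPlayingInfoPropertyPlaybackRate: rate, // 0.0 = paused, 1.0 = playing
    ]
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}
```

Logging each dictionary you set, alongside what Control Center actually shows, is a simple way to spot updates the system dropped.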
0
0
54
17h
iOS 26.4 regression: The `.pauses` audiovisual background playback policy does not pause video playback anymore when backgrounding the app
Starting with iOS 26.4 and the iOS 26.4 SDK, the .pauses audiovisual background playback policy is no longer applied correctly to an AVPlayer that has an attached video layer displayed on screen. As a result, when backgrounding a video-playing app (without Picture in Picture support) or locking the device, the system no longer pauses playback automatically. This issue affects the Apple TV app as well. We have filed FB22488151 with more information.
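For context, a minimal sketch of how the policy in question is normally configured (the URL is a placeholder); on earlier SDKs this is what causes the system to pause playback on backgrounding:

```swift
import AVFoundation

// Configure an item so the system pauses it when the app is backgrounded
// (available since iOS 15).
let item = AVPlayerItem(url: URL(string: "https://example.com/stream.m3u8")!)
item.audiovisualBackgroundPlaybackPolicy = .pauses
let player = AVPlayer(playerItem: item)
```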
0
0
77
1d
iPad Pro M4 giving wrong value for layerPointConverted for ultra wide angle
I am using an iPad Pro M4 device to apply an exposure point to the camera. When converting the point from the 0–1 range to a device-size point with layerPointConverted, it returns the wrong value for the ultra-wide camera. The same code gives the proper value on other iPads, such as the Gen 2. In both cases the video gravity used is resizeAspectFill. I tried using the TrueDepth camera on the M4 device, but it does not work either.
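For comparison, the conversion usually goes the other way for tap-to-expose: a tap in the preview layer's coordinates is converted into the 0–1 device space with captureDevicePointConverted(fromLayerPoint:), which accounts for the video gravity. A sketch of that standard flow:

```swift
import AVFoundation

// Convert a tap in the preview layer into device coordinates (0-1 range)
// and apply it as the exposure point of interest.
func applyExposure(at layerPoint: CGPoint,
                   previewLayer: AVCaptureVideoPreviewLayer,
                   device: AVCaptureDevice) throws {
    let devicePoint = previewLayer.captureDevicePointConverted(fromLayerPoint: layerPoint)
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    if device.isExposurePointOfInterestSupported {
        device.exposurePointOfInterest = devicePoint
        device.exposureMode = .continuousAutoExposure
    }
}
```

If the discrepancy persists with this conversion on the M4 ultra-wide, that points at a device-specific bug worth a Feedback report.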
0
0
53
1d
ScreenCaptureKit stops capturing after ~10–15 minutes unexpectedly
When using the built-in macOS screen recording feature, the recording stops automatically after approximately 10–15 minutes without any warning or error message. No manual stop action is performed. The recording simply ends silently. The same issue also occurs when using ScreenCaptureKit in a custom application, which suggests this may be a system-level issue related to screen capture rather than an app-specific problem. This issue is reproducible and happens consistently after running for a period of time.
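One way to capture the reason for the silent stop in the custom-app case is the SCStreamDelegate stop callback, which receives the error the system ends the stream with. A minimal sketch:

```swift
import ScreenCaptureKit

// Delegate that logs why a ScreenCaptureKit stream ended, instead of
// letting the recording stop silently.
final class CaptureDelegate: NSObject, SCStreamDelegate {
    func stream(_ stream: SCStream, didStopWithError error: Error) {
        // The NSError domain/code here often identifies the system-level cause.
        print("SCStream stopped:", (error as NSError).domain,
              (error as NSError).code, error.localizedDescription)
    }
}
```

Pass an instance of this delegate when creating the SCStream; the logged domain and code are useful details to include in a bug report.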
0
0
110
2d
ARFaceTrackingConfiguration and AVCaptureMultiCamSession cannot run simultaneously?
This issue affects camera session behavior and UI integration. I would like to request improved support or clarification regarding the simultaneous use of ARFaceTrackingConfiguration and AVCaptureMultiCamSession. Currently, when attempting to use both:
- Front camera (TrueDepth) for gaze tracking using ARFaceTrackingConfiguration
- Rear camera for live preview using AVCaptureMultiCamSession
ARKit face tracking stops updating, or the application becomes unstable (e.g., the camera preview turns white or the app crashes).

Steps to Reproduce:
1. Start an ARSession using ARFaceTrackingConfiguration (front camera)
2. Start an AVCaptureMultiCamSession using the rear camera
3. Overlay both outputs in a single UI
4. Observe that ARKit tracking stops or the camera preview becomes invalid

Expected Result: ARKit face tracking continues updating while the rear camera is active.
Actual Result: ARKit tracking stops updating, and camera output may become unstable or crash.

Use Case: This functionality is important for accessibility and educational applications. For example, users can control the UI via gaze input (front camera) while observing real-world objects using the rear camera.

Request:
- Support simultaneous use of ARFaceTrackingConfiguration and AVCaptureMultiCamSession, or
- Improve resource sharing between the TrueDepth and rear cameras, or
- Provide clear documentation about the current limitations
This feature would significantly enhance accessibility applications on iPad.

Attachment: A photo is attached showing the issue on a real iPad device. In the image, the camera preview becomes white while the application is running, indicating unstable behavior when both ARKit face tracking and rear camera capture are active simultaneously.
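As a side note, ARKit can itself combine face tracking with rear-camera world tracking on supported hardware, which may sidestep the session conflict entirely. A capability-check sketch:

```swift
import ARKit
import AVFoundation

// Check what this device supports before choosing a camera architecture.
func checkCapabilities() {
    print("Face tracking supported:", ARFaceTrackingConfiguration.isSupported)
    print("Multi-cam supported:", AVCaptureMultiCamSession.isMultiCamSupported)
    // ARKit can drive front (face) + rear (world) cameras in one session:
    print("World tracking + user face tracking:",
          ARWorldTrackingConfiguration.supportsUserFaceTracking)
}
```

If supportsUserFaceTracking is true, an ARWorldTrackingConfiguration with userFaceTrackingEnabled set may be a more stable alternative to running a separate AVCaptureMultiCamSession alongside the ARSession.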
0
0
63
3d
AVAudioSession : Audio issues when recording the screen in an app that changes IOBufferDuration on iOS 26.
Among Japanese end users, audio issues during screen recording, primarily in game applications, have become a topic of discussion. We have confirmed that this issue is very likely triggered by changes to the IOBufferDuration. When setPreferredIOBufferDuration is used to set the IOBufferDuration to a value smaller than the default, audio problems occur in the recorded screen-capture video. Audio playback is performed using AudioUnit (RemoteIO). https://developer.apple.com/documentation/avfaudio/avaudiosession/setpreferrediobufferduration(_:)?language=objc This issue was not observed on iOS 18, and it appears to have started occurring after upgrading to iOS 26. We provide an audio middleware solution, and we had incorporated changes to the IOBufferDuration into our product to achieve low-latency audio playback. As a result, developers using our product, as well as their end users, are affected by this issue. We kindly request that this issue be investigated and addressed in a future update.
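For readers reproducing this, a minimal sketch of the configuration being described: requesting a smaller IO buffer than the default. The 5 ms value is an illustrative choice, and the session may grant a different duration than requested:

```swift
import AVFAudio

// Request a low-latency IO buffer; compare the granted value with the
// request, since the system treats it only as a preference.
func configureLowLatencyAudio() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback)
    try session.setPreferredIOBufferDuration(0.005) // ~5 ms requested
    try session.setActive(true)
    print("granted IOBufferDuration:", session.ioBufferDuration)
}
```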
3
0
623
4d
AVContentKeySession does not call delegate for repeated processContentKeyRequest with same identifier
I’m working with FairPlay offline licenses using AVContentKeySession and ran into behavior that I cannot find documented. I am explicitly calling:

    contentKeySession.processContentKeyRequest(
        withIdentifier: identifier as NSString,
        initializationData: nil,
        options: nil
    )

Expected behavior: I expect each call to processContentKeyRequest to eventually result in the delegate callback contentKeySession(_:didProvide:).

Observed behavior: If I call processContentKeyRequest with a new identifier, everything works as expected: didProvide is called and I complete the request successfully. However, if I call processContentKeyRequest again with the same identifier that was already processed earlier:
- No delegate callbacks are triggered at all
- The session does not appear to be blocked or stuck
- If I issue another request with a different identifier, it is processed normally
So the session appears to silently ignore repeated requests for the same content key identifier.

Important context:
- This is not a concurrency issue; the session continues processing other requests
- This is consistently reproducible
- I am not using renewExpiringResponseData(for:) because I do not have a live AVContentKeyRequest at the time of retry

Use case: My use case is offline playback with periodically refreshed licenses. The app can stay alive for a long time (days/weeks), and I need to proactively refresh licenses before expiration. In this scenario:
- I only have the contentKeyIdentifier
- I do not have a current AVContentKeyRequest
- Calling processContentKeyRequest again for the same identifier does not trigger any delegate callbacks

Questions:
1. Is this behavior expected, i.e., does AVContentKeySession ignore repeated processContentKeyRequest calls for the same identifier?
2. What is the recommended way to re-fetch or refresh a license when I only have the identifier, I do not have a current AVContentKeyRequest, and I need to refresh proactively (not just in response to playback)? What is the intended approach in this case: creating a new AVContentKeySession to force a new request cycle, or something else?
3. Is there any way to guarantee that a call to processContentKeyRequest will result in a delegate callback, or is it expected that it may be ignored in some cases?

Any clarification on the intended lifecycle of AVContentKeySession and how repeated requests should be handled would be greatly appreciated.
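One of the workarounds the post speculates about can be sketched as follows, under the assumption (not confirmed by documentation) that a fresh session does not share the previous session's resolved keys. The key identifier string here is a placeholder:

```swift
import AVFoundation

// Sketch: force a new key-request cycle for an already-resolved identifier
// by creating a fresh AVContentKeySession for the refresh.
func makeRefreshSession(delegate: AVContentKeySessionDelegate,
                        queue: DispatchQueue,
                        keyIdentifier: String) -> AVContentKeySession {
    let session = AVContentKeySession(keySystem: .fairPlayStreaming)
    session.setDelegate(delegate, queue: queue)
    // With a new session, this request should reach the delegate again.
    session.processContentKeyRequest(withIdentifier: keyIdentifier as NSString,
                                     initializationData: nil,
                                     options: nil)
    return session
}
```

Whether this is the intended pattern, versus distinct key identifiers per license type, is exactly what the questions above ask Apple to clarify.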
1
0
167
4d
PHBackgroundResourceUploadExtension is never scheduled when iCloud Photos is enabled
Feedback: PHBackgroundResourceUploadExtension is never scheduled when iCloud Photos is enabled

Summary
PHBackgroundResourceUploadExtension's init() and process() methods are never called by the system when iCloud Photos is enabled on the device, even though setUploadJobExtensionEnabled(true) succeeds and uploadJobExtensionEnabled returns true.

Environment
- iOS 26.4 (both devices)
- Xcode 26.x
- Tested on iPhone 17 Pro (primary device, 10,000+ photos) and an older iPhone (development device, 200+ photos)
- Same build deployed to both devices
- Full photo library access (.authorized) on both devices

Steps to Reproduce
1. Create an app with a PHBackgroundResourceUploadExtension (ExtensionKit, extension point com.apple.photos.background-upload)
2. Enable iCloud Photos on the device (Settings > Photos > iCloud Photos)
3. In the host app, request .readWrite photo library authorization and receive .authorized
4. Call PHPhotoLibrary.shared().setUploadJobExtensionEnabled(true) (succeeds without error)
5. Verify PHPhotoLibrary.shared().uploadJobExtensionEnabled == true
6. Wait for the system to schedule the extension (tested overnight, with the device on a charger and WiFi)

Expected Behavior
The system should call the extension's init() and process() methods to allow the app to create upload jobs, as documented in "Uploading asset resources in the background."

Actual Behavior
The extension's init() and process() are never called. The extension process is never launched.

Investigation via Console.app
Using Console.app to monitor system logs on both devices revealed the root cause.

When iCloud Photos is DISABLED (extension works correctly):
    assetsd  Checked all submitted library bundles. Result: YES
    dasd     SUBMITTING: com.apple.assetsd.backgroundjobservice.assetresourceuploadextensionrunner:35E7B5
    dasd     Submitted: ...assetresourceuploadextensionrunner at priority 5
    dasd     submitTaskRequestWithIdentifier: Submitted BGSystemTask com.apple.assetsd.backgroundjobservice.assetresourceuploadextensionrunner
The extension is subsequently launched and init() / process() are called as expected.

When iCloud Photos is ENABLED (extension never works):
    assetsd  Checked all submitted library bundles. Result: NO
No assetresourceuploadextensionrunner background task is ever submitted to dasd. The extension is never launched.

Reproducibility
- 100% reproducible across two different devices
- Toggling iCloud Photos off (and waiting for sync to complete) immediately resolves the issue
- Toggling iCloud Photos on immediately causes the issue to reappear
- Reinstalling the app does not help
- The same build works on the same device when iCloud Photos is disabled

Impact
This effectively makes PHBackgroundResourceUploadExtension unusable for the vast majority of users, as most iPhone users have iCloud Photos enabled. Third-party photo backup apps (Google Photos, Dropbox, OneDrive, etc.) would all be affected by this limitation. The documentation for "Uploading asset resources in the background" does not mention any incompatibility with iCloud Photos being enabled.

Requested Resolution
Please either:
- Allow PHBackgroundResourceUploadExtension to be scheduled regardless of iCloud Photos status, or
- Document this limitation clearly in the API documentation if it is intentional behavior
0
0
143
4d
High bitrate video streaming in avplayer sometimes audio disappears
Hello, I used AVPlayer in my project to play a network movie. Most movies play normally, but I found that the sound sometimes disappears when I play a specific 4K network video stream. The video continues playing, but the audio stops after the video has played for a while. If I pause the player and then resume, the sound comes back but disappears again after several seconds.

Checking the AVPlayerItem status:
- isPlaybackLikelyToKeepUp == true
- isPlaybackBufferEmpty == false
- player.volume > 0
According to the values above, the problem does not seem to be caused by an empty playback buffer or a volume issue. I am very confused by this situation.

Movie information:
Video
- Format: AVC (Advanced Video Codec)
- Format profile: High L5.1
- Codec ID: avc1
- Bit rate mode: Variable
- Bit rate: 100.0 Mb/s
- Width: 3,840 pixels
- Height: 2,160 pixels
- Display aspect ratio: 16:9
- Frame rate mode: Constant
- Frame rate: 29.970 (30000/1001) FPS
Audio
- Format: AAC LC (Advanced Audio Codec Low Complexity)
- Codec ID: mp4a-40-2
- Duration: 5 min 19 s
- Bit rate mode: Constant
- Bit rate: 192 kb/s
- Nominal bit rate: 48.0 kb/s
- Channels: 2 (L R)
- Sampling rate: 48.0 kHz
- Frame rate: 46.875 FPS (1024 SPF)

Does anyone know if AVPlayer has limitations when playing high-bitrate movie streams, and are there any solutions?
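Beyond the three flags the post checks, playback problems that never surface as a player error are often recorded in the item's error log. A short diagnostic sketch:

```swift
import AVFoundation

// Dump the item's accumulated error log; useful when audio drops out
// without any visible AVPlayer error.
func dumpErrorLog(for item: AVPlayerItem) {
    guard let log = item.errorLog() else { return }
    for event in log.events {
        print("status \(event.errorStatusCode):",
              event.errorComment ?? "-",
              "uri:", event.uri ?? "-")
    }
}
```

Calling this at the moment the audio disappears (and comparing with accessLog() entries) can show whether a track-level decode or network error is being swallowed.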
2
1
725
4d
AVContentKeySession: Cannot re-fetch content key once obtained — expected behavior?
We are developing a video streaming app that uses AVContentKeySession with FairPlay Streaming. Our implementation supports both online playback (non-persistable keys) and offline playback (persistable keys). We have observed the following behavior:
- Once a content key has been obtained for a given Content Key ID, AVContentKeySession does not trigger contentKeySession(_:didProvide:) again for that same Key ID.
- We also attempted to explicitly call processContentKeyRequest(withIdentifier:initializationData:options:) on the session to force a new key request for the same identifier, but this did not result in the delegate callback being fired again. The session appears to consider the key already resolved and silently ignores the request.
This means that if a user first plays content online (receiving a non-persistable key) and later wants to download the same content for offline use (requiring a persistable key), the delegate callback is not fired again, and we have no opportunity to request a persistable key.

Questions:
1. Is this the expected behavior? Specifically, is it by design that AVContentKeySession caches the key for a given Key ID and does not re-request it, even when processContentKeyRequest(withIdentifier:) is explicitly called?
2. Should we use distinct Content Key IDs for persistable vs. non-persistable keys? For example, if the same piece of content can be played both online and offline, is the recommended approach to have the server provide different EXT-X-KEY URIs (and thus different key identifiers) for the streaming and download variants?
3. Is there a supported way to force a fresh key request for a Key ID that has already been resolved, for example, to upgrade from a non-persistable to a persistable key?

Environment: iOS 18+, AVContentKeySession(keySystem: .fairPlayStreaming)

Any guidance on the recommended approach for supporting both streaming and offline playback of the same content would be greatly appreciated.
0
0
48
5d
Issues with monitoring and changing WebRTC audio output device in WKWebView
I am developing a VoIP app that uses WebRTC inside a WKWebView.

Question 1: How can I monitor which audio output device WebRTC is currently using? I want to display this information in the UI for the user.

Question 2: How can I change the current audio output device for WebRTC? I am using a JS bridge to Objective-C code, attempting to change the audio device with the following:

    void set_speaker(int n) {
        AVAudioSession *session = [AVAudioSession sharedInstance];
        NSError *err = nil;
        if (n == 1) {
            [session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&err];
        } else {
            [session overrideOutputAudioPort:AVAudioSessionPortOverrideNone error:&err];
        }
    }

However, this approach does not work. I am testing on an iPhone with iOS 16.7. Is a higher iOS version required?
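For question 1, the current output is visible on the shared audio session's route, and route changes can be observed via notification regardless of whether WebRTC runs inside a WKWebView. A minimal sketch:

```swift
import AVFAudio

// Describe the currently active output port(s), e.g. "Speaker" or
// "BluetoothA2DPOutput: AirPods".
func currentOutputDescription() -> String {
    let outputs = AVAudioSession.sharedInstance().currentRoute.outputs
    return outputs.map { "\($0.portType.rawValue): \($0.portName)" }
        .joined(separator: ", ")
}

// Update the UI whenever the route changes (headset plugged in,
// Bluetooth connected, override applied, and so on).
let observer = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil, queue: .main) { _ in
    print("route changed:", currentOutputDescription())
}
```

Comparing the route before and after calling overrideOutputAudioPort also shows whether the override in question 2 took effect or was reverted by the session category in use.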
0
0
69
5d
SpeechAnalyzer error "asset not found after attempted download" for certain languages
I am trying to use the new SpeechAnalyzer framework in my Mac app, and am running into an issue for some languages. When I call AssetInstallationRequest.downloadAndInstall() for some languages, it throws an error: Error Domain=SFSpeechErrorDomain Code=1 "transcription.ar asset not found after attempted download." The ".ar" appears to be the language code, which in this case was Arabic. When I call AssetInventory.status(forModules:) before attempting the download, it is giving me a status of "downloading" (perhaps from an earlier attempt?). If this language were completely unsupported, I would expect it to return a status of "unsupported", so I'm not sure what's going on here. For other languages (Polish, for example) SpeechTranscriber.supportedLocale(equivalentTo:) is returning nil, so those seem clearly unsupported. But I can't tell whether the languages I'm trying, like Arabic, are supported and something is going wrong, or whether this error represents something I can work around. Here's the relevant section of code. The error is thrown from downloadAndInstall(), so I never even get as far as setting up the SpeechAnalyzer itself.

    private func setUpAnalyzer() async throws {
        guard let sourceLanguage else { throw Error.languageNotSpecified }
        guard let locale = await SpeechTranscriber.supportedLocale(equivalentTo: Locale(identifier: sourceLanguage.rawValue)) else {
            throw Error.unsupportedLanguage
        }
        let transcriber = SpeechTranscriber(locale: locale, preset: .progressiveTranscription)
        self.transcriber = transcriber
        let reservedLocales = await AssetInventory.reservedLocales
        if !reservedLocales.contains(locale) && reservedLocales.count == AssetInventory.maximumReservedLocales {
            if let oldest = reservedLocales.last {
                await AssetInventory.release(reservedLocale: oldest)
            }
        }
        do {
            let status = await AssetInventory.status(forModules: [transcriber])
            print("status: \(status)")
            if let installationRequest = try await AssetInventory.assetInstallationRequest(supporting: [transcriber]) {
                try await installationRequest.downloadAndInstall()
            }
        }
        ...
9
0
1.1k
6d
SpeechAnalyzer > AnalysisContext lack of documentation
I'm using the new SpeechAnalyzer framework to detect certain commands and want to improve accuracy by giving context. AnalysisContext seems like the solution for this, but I couldn't find any usage example, so I want to make sure I'm doing it right:

    let context = AnalysisContext()
    context.contextualStrings = [
        AnalysisContext.ContextualStringsTag("commands"): [
            "set speed level",
            "set jump level",
            "increase speed",
            "decrease speed",
            ...
        ],
        AnalysisContext.ContextualStringsTag("vocabulary"): [
            "speed",
            "jump",
            ...
        ]
    ]
    try await analyzer.setContext(context)

With this implementation, it still gives outputs like "Set some speed level", "It's speed level", etc. Also, is it possible to make it expect a number after those commands, in order to eliminate results like "set some speed level to" (instead of "two")?
2
0
576
6d
tvOS: Background audio + local caching works on Simulator but stops on real Apple TV device
Description: I’m developing a tvOS app using SwiftUI where we play background audio (music) in the Welcome screen, with support for offline playback via local caching. 🔹 Feature Overview App fetches audio metadata from API Starts streaming audio (HLS .m3u8) immediately In parallel, downloads the raw audio file (.mp3) Once download completes: Switches playback from streaming → local file On next launch (offline mode), app plays audio from local storage 🔹 Issue This flow works perfectly on the Simulator, but on a real Apple TV device: Audio plays for a few seconds (2–5 sec) and then stops Especially after switching from streaming → local file No explicit AVPlayer error is logged Playback sometimes stops after UI updates or periodic API refresh 🔹 Implementation Details Using AVPlayer with AVPlayerItem Background audio controlled via a shared manager (singleton) Files stored locally using FileManager (currently using .cachesDirectory) Switching playback using: player.replaceCurrentItem(with: AVPlayerItem(url: localURL)) player.play() 🔹 Observations Works reliably on Simulator On device: Playback stops silently Seems related to lifecycle, buffering, or file access No issues when continuously streaming (without switching to local) 🔹 Questions Is there any limitation or known issue with AVPlayer when switching from streaming (HLS) to local file playback on tvOS? Are there specific requirements for playing locally cached media files on tvOS (e.g., file location, permissions, or sandbox behavior)? What is the recommended storage location and size limit for cached media files on tvOS? We understand tvOS has limited persistent storage Is .cachesDirectory the correct approach for this use case? Are there known differences in AVPlayer behavior between Simulator and real Apple TV devices (especially regarding buffering or lifecycle)? What is the recommended approach for implementing offline background audio on tvOS apps? 
🔹 Goal
We want to implement a reliable system where:
- Audio streams initially
- Playback seamlessly switches to the local file after download
- Playback continues without interruption
- Offline playback works on subsequent launches

Any guidance or best practices would be greatly appreciated. Thank you!
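One detail worth checking while the root cause is unclear is whether the swap itself drops state. Below is a minimal sketch (assuming the player and downloaded-file URL from the post) that carries the stream's playback position over to the local item rather than restarting from zero. Note also that files in Library/Caches can be purged by the system when space is low, so a purged file mid-playback is one possible failure mode to rule out.

```swift
import AVFoundation

// Hypothetical helper: swap from the streaming item to the downloaded file
// while preserving the playback position, so the transition is seamless.
func switchToLocalFile(player: AVPlayer, localURL: URL) {
    let position = player.currentTime()
    let localItem = AVPlayerItem(url: localURL)
    player.replaceCurrentItem(with: localItem)
    // Seek the new item to where the stream had reached before resuming.
    localItem.seek(to: position) { _ in
        player.play()
    }
}
```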
Replies: 1 · Boosts: 0 · Views: 51 · Activity: 4h
Ventura Hack for FireWire Core Audio Support on Supported MacBook Pro and others...
Hi all, Apple dropping ongoing development for FireWire devices that were supported with the Core Audio driver standard is a catastrophe for a lot of struggling musicians who need to both keep up to date on the security updates that come with new OS releases, and continue to utilise their hard-earned investments in very expensive and still-pristine audio devices that have been reduced to e-waste by Apple's seemingly tone-deaf ignorance of the cries for ongoing support. I have one of said audio devices, and I'd like to keep using it while keeping my 2019 Intel MacBook Pro up to date with the latest security updates and OS features. Probably not the first time you gurus have had someone make the logical leap leading to a request like this, but I was wondering whether it might somehow be possible to shoehorn the code used in previous versions of macOS (which allowed the Mac to speak to the audio features of such devices) into the Ventura version of the OS. Would it be possible? Would it involve a lot of work? I don't think I'd be the only person willing to pay for a third-party application or utility that restored this functionality. There have to be hundreds of thousands of people who would be happy to spare some cash to stop their multi-thousand-dollar investment in gear from being so thoughtlessly consigned to the scrap heap. Any comments or layman-friendly explanations as to why this couldn't happen would be gratefully received! Thanks, em
Replies: 64 · Boosts: 10 · Views: 36k · Activity: 5h
Setting up video and image capture pipeline creates internal errors in AVFoundation.
I have created code for iOS that allows me to start and stop video acquisition from a proprietary USB camera using AVFoundation's AVCaptureSession and AVCaptureDevice APIs. There is a start and a stop method. The start method takes an argument to specify one of two formats that I use for my custom camera application. I can start the session and switch between formats all day without any errors. However, if I start and then stop the camera three times in a row, on the third invocation of start I get errors in the console output and the CMSampleBuffers stop flowing to my callback. Additionally, once I get AVFoundation into this state, stopping the camera doesn't help; I have to kill the app and start over. Here are the errors, and below them, the code. I'm hoping someone who has experience with these errors, or an engineer from Apple who knows the AVFoundation image capture pipeline code, can respond and tell me what I'm doing wrong. Thanks.

<<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:558) - (err=-16453)
<<<< FigCaptureSourceRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSourceRemote.m:235) - (err=0)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:253) - (err=-16453)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:269) - (err=-16453)
<<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:511) - (err=-16453)
Capture session error: The operation could not be completed
Capture session error: The operation could not be completed

func start(for deviceFormat: String) async throws -> AnyPublisher<CMSampleBuffer, Swift.Error> {
    func configureCaptureDevice(with deviceFormat: String) throws {
        guard let format = formatDict[deviceFormat] else { throw Error.captureFormatNotFound }
        captureSession.beginConfiguration()
        defer { captureSession.commitConfiguration() }
        try captureDevice.lockForConfiguration()
        captureDeviceFormat = deviceFormat
        captureDevice.activeFormat = format
        captureDevice.unlockForConfiguration()
    }

    return try await withCheckedThrowingContinuation { continuation in
        sessionQueue.async { [unowned self] in
            logger.debug("Start capture session for \(deviceFormat): \(String(describing: captureSession))")
            // If we were already streaming camera images from a different mode, terminate that stream.
            bufferPublisher?.send(completion: .finished)
            bufferPublisher = nil
            captureDeviceFormat = ""
            do {
                // Re-configure with the new format; should be harmless if called with the currently configured format.
                try configureCaptureDevice(with: deviceFormat)
                // Return a new stream publisher for this invocation.
                bufferPublisher = PassthroughSubject<CMSampleBuffer, Swift.Error>()
                // If we are not currently running, start the image capture pipeline.
                if captureSession.isRunning == false { captureSession.startRunning() }
                continuation.resume(returning: bufferPublisher!.eraseToAnyPublisher())
            } catch {
                logger.fault("Failed to start camera: \(error.localizedDescription)")
                continuation.resume(throwing: error)
            }
        }
    }
}

func stop() async throws {
    try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<Void, Swift.Error>) in
        sessionQueue.async { [unowned self] in
            logger.debug("Stop capture session: \(String(describing: captureSession))")
            // The following invocation is synchronous and takes time to execute;
            // it looks like a stall, but you can ignore it as the MainActor is not blocked.
            captureSession.stopRunning()
            // Terminate the stream and reset our state.
            bufferPublisher?.send(completion: .finished)
            bufferPublisher = nil
            captureDeviceFormat = ""
            // Signal the caller that we are done here.
            continuation.resume()
        }
    }
}
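Not an answer to the Fig asserts themselves, but one way to get more signal when the buffers silently stop is to listen for the session's runtime-error notification; a short sketch (reusing the captureSession from the code above):

```swift
import AVFoundation

// A sketch: observe AVCaptureSession's runtime-error notification so the
// underlying error surfaces instead of buffers just silently stopping.
let observer = NotificationCenter.default.addObserver(
    forName: .AVCaptureSessionRuntimeError,
    object: captureSession,
    queue: nil
) { notification in
    if let error = notification.userInfo?[AVCaptureSessionErrorKey] as? AVError {
        print("Capture session runtime error: \(error)")
    }
}
```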
Replies: 0 · Boosts: 0 · Views: 78 · Activity: 14h
How to Validate Now Playing Events on Apple Devices (iOS/tvOS)?
Hi Support Team, I need some guidance regarding Now Playing metadata integration on Apple platforms (iOS/tvOS). We are currently implementing Now Playing events in our application and would like to understand:
1. How can we enable or configure logging for Now Playing metadata updates?
2. Is there any recommended way or tool to verify that Now Playing events are correctly sent and received by the system (e.g., Control Center / external devices)?
3. Are there any debugging techniques or best practices to validate metadata updates during development?
Our app is currently in the development phase, and we are working towards meeting Video Partner Program (VPP) requirements. Any documentation, tools, or suggestions would be greatly appreciated. Thanks in advance for your support.
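While waiting for official guidance, a common way to validate during development is simply to publish metadata through MPNowPlayingInfoCenter and watch it appear in Control Center or on an AirPlay receiver. A minimal sketch (the property keys are from MediaPlayer; the function name is our own):

```swift
import MediaPlayer

// A sketch: publish Now Playing metadata, then verify it visually in
// Control Center (or the tvOS Now Playing screen) during playback.
func publishNowPlaying(title: String, duration: TimeInterval,
                       elapsed: TimeInterval, rate: Float) {
    var info: [String: Any] = [:]
    info[MPMediaItemPropertyTitle] = title
    info[MPMediaItemPropertyPlaybackDuration] = duration
    info[MPNowPlayingInfoPropertyElapsedPlaybackTime] = elapsed
    info[MPNowPlayingInfoPropertyPlaybackRate] = rate
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}
```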
Replies: 0 · Boosts: 0 · Views: 54 · Activity: 17h
iOS 26.4 regression: The `.pauses` audiovisual background playback policy does not pause video playback anymore when backgrounding the app
Starting with iOS 26.4 and the iOS 26.4 SDK, the .pauses audiovisual background playback policy is not correctly applied anymore to an AVPlayer having an attached video layer displayed on screen. This means that, when backgrounding a video-playing app (without Picture in Picture support) or locking the device, playback is not paused automatically by the system anymore. This issue affects the Apple TV application as well. We have filed FB22488151 with more information.
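Until the regression is fixed, one possible stopgap (a sketch assuming a SwiftUI lifecycle; not a confirmed workaround) is to pause explicitly when the scene leaves the foreground instead of relying on the .pauses policy:

```swift
import SwiftUI
import AVKit

// A sketch: pause the player manually on backgrounding, mirroring what the
// .pauses audiovisual background playback policy is supposed to do.
struct PlayerContainer: View {
    @Environment(\.scenePhase) private var scenePhase
    let player: AVPlayer

    var body: some View {
        VideoPlayer(player: player)
            .onChange(of: scenePhase) { _, phase in
                if phase != .active { player.pause() }
            }
    }
}
```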
Replies: 0 · Boosts: 0 · Views: 77 · Activity: 1d
iPad Pro M4 giving wrong value for layerPointConverted for ultra wide angle
I am using an iPad Pro M4 to apply an exposure point to the camera. When converting the point from layerPointConverted from the normalized 0–1 range to a device-size point, it gives the wrong value for the ultra-wide-angle camera. The same code gives the proper value on other iPads, such as a Gen 2 model. In both cases the video gravity used is resizeAspectFill. I also tried the TrueDepth camera on the M4 device, but it does not work either.
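For what it's worth, AVCaptureVideoPreviewLayer can perform the layer-to-device conversion itself, which may behave better across camera geometries than scaling the normalized point manually. A sketch (assuming a preview layer and device from your existing pipeline):

```swift
import AVFoundation

// A sketch: let the preview layer map the tapped layer point into the
// device's coordinate space; it accounts for the active format's geometry
// and the resizeAspectFill gravity.
func setExposurePoint(for tapPoint: CGPoint,
                      previewLayer: AVCaptureVideoPreviewLayer,
                      device: AVCaptureDevice) throws {
    let devicePoint = previewLayer.captureDevicePointConverted(fromLayerPoint: tapPoint)
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    if device.isExposurePointOfInterestSupported {
        device.exposurePointOfInterest = devicePoint
        device.exposureMode = .continuousAutoExposure
    }
}
```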
Replies: 0 · Boosts: 0 · Views: 53 · Activity: 1d
Radio stations unable to play on Android with MusicKit SDK
Radio stations are currently not supported by the MusicKit SDK for Android. The SDK has not been updated in years, and it lacks major Apple Music features.
Replies: 0 · Boosts: 0 · Views: 127 · Activity: 2d
ScreenCaptureKit stops capturing after ~10–15 minutes unexpectedly
When using the built-in macOS screen recording feature, the recording stops automatically after approximately 10–15 minutes without any warning or error message. No manual stop action is performed. The recording simply ends silently. The same issue also occurs when using ScreenCaptureKit in a custom application, which suggests this may be a system-level issue related to screen capture rather than an app-specific problem. This issue is reproducible and happens consistently after running for a period of time.
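If not done already, implementing the stream delegate's stop callback should at least surface the error behind the silent stop. A minimal sketch:

```swift
import ScreenCaptureKit

// A sketch: SCStreamDelegate reports why a stream ended, which may show
// whether the system or the app terminated the capture after ~10-15 min.
final class CaptureDelegate: NSObject, SCStreamDelegate {
    func stream(_ stream: SCStream, didStopWithError error: Error) {
        print("Stream stopped: \(error)")
        // A restart could be attempted here once the cause is known.
    }
}
```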
Replies: 0 · Boosts: 0 · Views: 110 · Activity: 2d
ARFaceTrackingConfiguration and AVCaptureMultiCamSession cannot run simultaneously?
This issue affects camera session behavior and UI integration. I would like to request improved support or clarification regarding the simultaneous use of ARFaceTrackingConfiguration and AVCaptureMultiCamSession. Currently, when attempting to use both:
- Front camera (TrueDepth) for gaze tracking using ARFaceTrackingConfiguration
- Rear camera for live preview using AVCaptureMultiCamSession
the ARKit face tracking stops updating, or the application becomes unstable (e.g., the camera preview turns white or the app crashes).

Steps to Reproduce:
1. Start an ARSession using ARFaceTrackingConfiguration (front camera)
2. Start an AVCaptureMultiCamSession using the rear camera
3. Overlay both outputs in a single UI
4. Observe that ARKit tracking stops or the camera preview becomes invalid

Expected Result: ARKit face tracking continues updating while the rear camera is active.
Actual Result: ARKit tracking stops updating, and the camera output may become unstable or crash.

Use Case: This functionality is important for accessibility and educational applications. For example, users can control the UI via gaze input (front camera) while observing real-world objects using the rear camera.

Request:
- Support simultaneous use of ARFaceTrackingConfiguration and AVCaptureMultiCamSession, or
- Improve resource sharing between the TrueDepth and rear cameras, or
- Provide clear documentation about current limitations

This feature would significantly enhance accessibility applications on iPad.

Attachment: A photo is attached showing the issue on a real iPad device. In the image, the camera preview becomes white while the application is running, indicating unstable behavior when both ARKit face tracking and rear camera capture are active simultaneously.
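One avenue that may be worth testing (a sketch, not a confirmed fix): ARKit itself can combine rear-camera world tracking with front-camera face tracking in a single session on supported devices, which might sidestep the contention between the ARSession and AVCaptureMultiCamSession over the TrueDepth camera:

```swift
import ARKit

// A sketch: run one ARSession that does world tracking on the rear camera
// while also delivering ARFaceAnchor updates from the front camera.
// userFaceTrackingEnabled requires a device that supports it.
func runCombinedSession(on session: ARSession) {
    guard ARWorldTrackingConfiguration.supportsUserFaceTracking else {
        print("Simultaneous world + face tracking not supported on this device")
        return
    }
    let config = ARWorldTrackingConfiguration()
    config.userFaceTrackingEnabled = true
    session.run(config)
    // Face anchors now arrive in the delegate alongside world-tracking data,
    // and the rear-camera frames are available via ARFrame.capturedImage.
}
```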
Replies: 0 · Boosts: 0 · Views: 63 · Activity: 3d
AVAudioSession : Audio issues when recording the screen in an app that changes IOBufferDuration on iOS 26.
Among Japanese end users, audio issues during screen recording, primarily in game applications, have become a topic of discussion. We have confirmed that the trigger for this issue is very likely related to changes to IOBufferDuration: when setPreferredIOBufferDuration is used to set IOBufferDuration to a value smaller than the default, audio problems occur in the recorded screen-capture video. Audio playback is performed using AudioUnit (RemoteIO). https://developer.apple.com/documentation/avfaudio/avaudiosession/setpreferrediobufferduration(_:)?language=objc This issue was not observed on iOS 18 and appears to have started after upgrading to iOS 26. We provide an audio middleware solution and had incorporated changes to IOBufferDuration into our product to achieve low-latency audio playback. As a result, developers using our product, as well as their end users, are affected by this issue. We kindly request that it be investigated and addressed in a future update.
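For anyone reproducing this, here is a minimal setup that requests a small IO buffer and logs what the system actually granted; setPreferredIOBufferDuration is only a request, and reading back ioBufferDuration may help correlate the recording glitches with a specific granted buffer size (the 5 ms value is just an example):

```swift
import AVFAudio

// A sketch: request a low-latency IO buffer and print the granted value.
func configureLowLatencyAudio() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback)
    try session.setPreferredIOBufferDuration(0.005) // 5 ms request, not a guarantee
    try session.setActive(true)
    print("Granted IO buffer duration: \(session.ioBufferDuration)")
}
```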
Replies: 3 · Boosts: 0 · Views: 623 · Activity: 4d
AVContentKeySession does not call delegate for repeated processContentKeyRequest with same identifier
I’m working with FairPlay offline licenses using AVContentKeySession and ran into behavior that I cannot find documented. I am explicitly calling:

contentKeySession.processContentKeyRequest(
    withIdentifier: identifier as NSString,
    initializationData: nil,
    options: nil
)

Expected behavior: each call to processContentKeyRequest eventually results in the delegate callback contentKeySession(_:didProvide:).

Observed behavior: if I call processContentKeyRequest with a new identifier, everything works as expected: didProvide is called and I complete the request successfully. However, if I call processContentKeyRequest again with the same identifier that was already processed earlier:
- No delegate callbacks are triggered at all
- The session does not appear to be blocked or stuck
- If I issue another request with a different identifier, it is processed normally
So the session appears to silently ignore repeated requests for the same content key identifier.

Important context:
- This is not a concurrency issue; the session continues processing other requests
- This is reproducible consistently
- I am not using renewExpiringResponseData(for:) because I do not have a live AVContentKeyRequest at the time of retry

Use case: offline playback with periodically refreshed licenses. The app can stay alive for a long time (days/weeks), and I need to proactively refresh licenses before expiration. In this scenario:
- I only have the contentKeyIdentifier
- I do not have a current AVContentKeyRequest
- Calling processContentKeyRequest again for the same identifier does not trigger any delegate callbacks

Questions:
1. Is this behavior expected, i.e., does AVContentKeySession ignore repeated processContentKeyRequest calls for the same identifier?
2. What is the recommended way to re-fetch or refresh a license when I only have the identifier, I do not have a current AVContentKeyRequest, and I need to refresh proactively (not just in response to playback)? What is the intended approach: create a new AVContentKeySession to force a new request cycle, or something else?
3. Is there any way to guarantee that a call to processContentKeyRequest will result in a delegate callback, or is it expected that it may be ignored in some cases?

Any clarification on the intended lifecycle of AVContentKeySession and how repeated requests should be handled would be greatly appreciated.
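One workaround worth probing (an untested sketch, not documented behavior) is to run the proactive refresh through a fresh session, so the identifier is not already marked resolved in the long-lived one:

```swift
import AVFoundation

// A sketch: a new AVContentKeySession has no memory of previously resolved
// identifiers, so the same key ID should trigger didProvide again.
func refreshLicense(for identifier: String,
                    delegate: AVContentKeySessionDelegate,
                    queue: DispatchQueue) -> AVContentKeySession {
    let session = AVContentKeySession(keySystem: .fairPlayStreaming)
    session.setDelegate(delegate, queue: queue)
    session.processContentKeyRequest(withIdentifier: identifier,
                                     initializationData: nil,
                                     options: nil)
    return session // keep a strong reference until the delegate completes
}
```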
Replies: 1 · Boosts: 0 · Views: 167 · Activity: 4d
PHBackgroundResourceUploadExtension is never scheduled when iCloud Photos is enabled
Feedback: PHBackgroundResourceUploadExtension is never scheduled when iCloud Photos is enabled

Summary: PHBackgroundResourceUploadExtension's init() and process() methods are never called by the system when iCloud Photos is enabled on the device, even though setUploadJobExtensionEnabled(true) succeeds and uploadJobExtensionEnabled returns true.

Environment:
- iOS 26.4 (both devices)
- Xcode 26.x
- Tested on iPhone 17 Pro (primary device, 10,000+ photos) and an older iPhone (development device, 200+ photos)
- Same build deployed to both devices
- Full photo library access (.authorized) on both devices

Steps to Reproduce:
1. Create an app with a PHBackgroundResourceUploadExtension (ExtensionKit, extension point com.apple.photos.background-upload)
2. Enable iCloud Photos on the device (Settings > Photos > iCloud Photos)
3. In the host app, request .readWrite photo library authorization and receive .authorized
4. Call PHPhotoLibrary.shared().setUploadJobExtensionEnabled(true) (succeeds without error)
5. Verify PHPhotoLibrary.shared().uploadJobExtensionEnabled == true
6. Wait for the system to schedule the extension (tested overnight, with the device on charger + WiFi)

Expected Behavior: The system should call the extension's init() and process() methods to allow the app to create upload jobs, as documented in "Uploading asset resources in the background."

Actual Behavior: The extension's init() and process() are never called. The extension process is never launched.

Investigation via Console.app: Using Console.app to monitor system logs on both devices revealed the root cause.

When iCloud Photos is DISABLED (extension works correctly):
assetsd: Checked all submitted library bundles. Result: YES
dasd: SUBMITTING: com.apple.assetsd.backgroundjobservice.assetresourceuploadextensionrunner:35E7B5
dasd: Submitted: ...assetresourceuploadextensionrunner at priority 5
dasd: submitTaskRequestWithIdentifier: Submitted BGSystemTask com.apple.assetsd.backgroundjobservice.assetresourceuploadextensionrunner
The extension is subsequently launched and init() / process() are called as expected.

When iCloud Photos is ENABLED (extension never works):
assetsd: Checked all submitted library bundles. Result: NO
No assetresourceuploadextensionrunner background task is ever submitted to dasd. The extension is never launched.

Reproducibility:
- 100% reproducible across two different devices
- Toggling iCloud Photos off (and waiting for sync to complete) immediately resolves the issue
- Toggling iCloud Photos on immediately causes the issue to reappear
- Reinstalling the app does not help
- The same build works on the same device when iCloud Photos is disabled

Impact: This effectively makes PHBackgroundResourceUploadExtension unusable for the vast majority of users, as most iPhone users have iCloud Photos enabled. Third-party photo backup apps (Google Photos, Dropbox, OneDrive, etc.) would all be affected by this limitation. The documentation for "Uploading asset resources in the background" does not mention any incompatibility with iCloud Photos being enabled.

Requested Resolution: Please either:
- Allow PHBackgroundResourceUploadExtension to be scheduled regardless of iCloud Photos status, or
- Document this limitation clearly in the API documentation if it is intentional behavior
Replies: 0 · Boosts: 0 · Views: 143 · Activity: 4d
High bitrate video streaming in avplayer sometimes audio disappears
Hello, I use AVPlayer in my project to play network movies. Most movies play normally, but I found that the sound sometimes disappears when I play a particular 4K network video stream. The video continues playing but the audio stops after the video has played for a while. If I pause the player and then resume, the sound comes back, but it disappears again after several seconds.

Checking the AVPlayerItem status:
- isPlaybackLikelyToKeepUp == true
- isPlaybackBufferEmpty == false
- player.volume > 0

Based on the values above, it does not seem to be caused by an empty playback buffer or a volume issue. I am quite confused by this situation.

Movie information:
Video: Format AVC (Advanced Video Codec), profile High L5.1, codec ID avc1, variable bit rate 100.0 Mb/s, 3840 x 2160 pixels, display aspect ratio 16:9, constant frame rate 29.970 (30000/1001) FPS
Audio: Format AAC LC (Advanced Audio Codec Low Complexity), codec ID mp4a-40-2, duration 5 min 19 s, constant bit rate 192 kb/s (nominal 48.0 kb/s), 2 channels (L R), sampling rate 48.0 kHz, frame rate 46.875 FPS (1024 SPF)

Does anyone know whether AVPlayer has limitations when playing high-bitrate movie streams, and are there any solutions?
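One diagnostic step that may narrow this down: dump the item's error and access logs when the audio drops, since stalls and track-level errors do not always surface through the status flags checked above. A sketch:

```swift
import AVFoundation

// A sketch: AVPlayerItem keeps access and error logs; inspecting them when
// the audio disappears may reveal stalls or stream errors.
func dumpLogs(for item: AVPlayerItem) {
    if let errorLog = item.errorLog() {
        for event in errorLog.events {
            print("error \(event.errorStatusCode): \(event.errorComment ?? "-")")
        }
    }
    if let accessLog = item.accessLog() {
        for event in accessLog.events {
            print("stalls: \(event.numberOfStalls), observed bitrate: \(event.observedBitrate)")
        }
    }
}
```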
Replies: 2 · Boosts: 1 · Views: 725 · Activity: 4d
AVContentKeySession: Cannot re-fetch content key once obtained — expected behavior?
We are developing a video streaming app that uses AVContentKeySession with FairPlay Streaming. Our implementation supports both online playback (non-persistable keys) and offline playback (persistable keys). We have observed the following behavior:
- Once a content key has been obtained for a given Content Key ID, AVContentKeySession does not trigger contentKeySession(_:didProvide:) again for that same Key ID.
- We also attempted to explicitly call processContentKeyRequest(withIdentifier:initializationData:options:) on the session to force a new key request for the same identifier, but this did not result in the delegate callback being fired again. The session appears to consider the key already resolved and silently ignores the request.
This means that if a user first plays content online (receiving a non-persistable key), and later wants to download the same content for offline use (requiring a persistable key), the delegate callback is not fired again, and we have no opportunity to request a persistable key.

Questions:
1. Is this the expected behavior? Specifically, is it by design that AVContentKeySession caches the key for a given Key ID and does not re-request it, even when processContentKeyRequest(withIdentifier:) is explicitly called?
2. Should we use distinct Content Key IDs for persistable vs. non-persistable keys? For example, if the same piece of content can be played both online and offline, is the recommended approach to have the server provide different EXT-X-KEY URIs (and thus different key identifiers) for the streaming and download variants?
3. Is there a supported way to force a fresh key request for a Key ID that has already been resolved, for example, to upgrade from a non-persistable to a persistable key?

Environment: iOS 18+, AVContentKeySession(keySystem: .fairPlayStreaming)

Any guidance on the recommended approach for supporting both streaming and offline playback for the same content would be greatly appreciated.
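On question 3: when a live AVContentKeyRequest is in hand, the API does provide an upgrade path. A sketch of the delegate side (the wantsOffline flag is a placeholder for your own download logic):

```swift
import AVFoundation

// A sketch: redirect a streaming key request to a persistable one when the
// content is being downloaded, via the documented AVContentKeyRequest API.
func contentKeySession(_ session: AVContentKeySession,
                       didProvide keyRequest: AVContentKeyRequest) {
    let wantsOffline = true // placeholder: your download manager decides this
    if wantsOffline {
        do {
            try keyRequest.respondByRequestingPersistableContentKeyRequestAndReturnError()
            // contentKeySession(_:didProvide: AVPersistableContentKeyRequest) follows.
            return
        } catch {
            print("Persistable upgrade unavailable: \(error)")
        }
    }
    // ...handle as a regular streaming key request...
}
```

This does not address re-requesting an already-resolved Key ID, but it may help when the persistable need is known while the request is still live.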
Replies: 0 · Boosts: 0 · Views: 48 · Activity: 5d
After upgrade to iOS 26.4, averagePowerLevel and peakHoldLevel are stuck -120
We have an application that captures audio and video. The app captures PCM audio from the internal or external microphone and displays an audio level on screen. The app worked fine for many years, but after the iOS 26.4 upgrade, averagePowerLevel and peakHoldLevel are stuck at -120. Any suggestions?
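As a stopgap while the metering properties misbehave, the level can be computed directly from the captured PCM samples. A sketch using Accelerate (assuming float, non-interleaved buffers; the function name is our own):

```swift
import AVFAudio
import Accelerate

// A sketch: derive an average power in dBFS from the buffer's RMS, as a
// workaround for metering properties that report a constant -120.
func averagePowerDB(of buffer: AVAudioPCMBuffer) -> Float {
    guard let samples = buffer.floatChannelData?[0], buffer.frameLength > 0 else {
        return -120
    }
    var rms: Float = 0
    vDSP_rmsqv(samples, 1, &rms, vDSP_Length(buffer.frameLength))
    return rms > 0 ? 20 * log10(rms) : -120
}
```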
Replies: 6 · Boosts: 4 · Views: 536 · Activity: 5d
Issues with monitoring and changing WebRTC audio output device in WKWebView
I am developing a VoIP app that uses WebRTC inside a WKWebView.

Question 1: How can I monitor which audio output device WebRTC is currently using? I want to display this information in the UI for the user.

Question 2: How can I change the current audio output device for WebRTC? I am using a JS bridge to Objective-C code, attempting to change the audio device with the following:

void set_speaker(int n) {
    session = [AVAudioSession sharedInstance];
    NSError *err = nil;
    if (n == 1) {
        [session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&err];
    } else {
        [session overrideOutputAudioPort:AVAudioSessionPortOverrideNone error:&err];
    }
}

However, this approach does not work. I am testing on an iPhone with iOS 16.7. Is a higher iOS version required?
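On question 1, one possible approach (sketched in Swift; equivalent calls exist in Objective-C) is to read the shared audio session's current route and observe route changes, since WebRTC's audio unit ultimately plays through that session:

```swift
import AVFAudio

// A sketch: log the active output ports now and whenever the route changes,
// which reflects the output WebRTC's audio is actually using.
func startRouteMonitoring() {
    func logOutputs() {
        let outputs = AVAudioSession.sharedInstance().currentRoute.outputs
        print("Current outputs: \(outputs.map { "\($0.portType.rawValue): \($0.portName)" })")
    }
    logOutputs()
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.routeChangeNotification,
        object: nil, queue: .main
    ) { _ in logOutputs() }
}
```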
Replies: 0 · Boosts: 0 · Views: 69 · Activity: 5d
SpeechAnalyzer error "asset not found after attempted download" for certain languages
I am trying to use the new SpeechAnalyzer framework in my Mac app, and am running into an issue for some languages. When I call AssetInstallationRequest.downloadAndInstall() for some languages, it throws an error: Error Domain=SFSpeechErrorDomain Code=1 "transcription.ar asset not found after attempted download." The ".ar" appears to be the language code, which in this case was Arabic. When I call AssetInventory.status(forModules:) before attempting the download, it is giving me a status of "downloading" (perhaps from an earlier attempt?). If this language was completely unsupported, I would expect it to return a status of "unsupported", so I'm not sure what's going on here. For other languages (Polish, for example) SpeechTranscriber.supportedLocale(equivalentTo:) is returning nil, so that seems like a clearly unsupported language. But I can't tell if the languages I'm trying, like Arabic, are supported and something is going wrong, or if this error represents something I can work around. Here's the relevant section of code. The error is thrown from downloadAndInstall(), so I never even get as far as setting up the SpeechAnalyzer itself. 
private func setUpAnalyzer() async throws {
    guard let sourceLanguage else { throw Error.languageNotSpecified }
    guard let locale = await SpeechTranscriber.supportedLocale(equivalentTo: Locale(identifier: sourceLanguage.rawValue)) else {
        throw Error.unsupportedLanguage
    }

    let transcriber = SpeechTranscriber(locale: locale, preset: .progressiveTranscription)
    self.transcriber = transcriber

    let reservedLocales = await AssetInventory.reservedLocales
    if !reservedLocales.contains(locale) && reservedLocales.count == AssetInventory.maximumReservedLocales {
        if let oldest = reservedLocales.last {
            await AssetInventory.release(reservedLocale: oldest)
        }
    }

    do {
        let status = await AssetInventory.status(forModules: [transcriber])
        print("status: \(status)")
        if let installationRequest = try await AssetInventory.assetInstallationRequest(supporting: [transcriber]) {
            try await installationRequest.downloadAndInstall()
        }
    }
    ...
Replies: 9 · Boosts: 0 · Views: 1.1k · Activity: 6d
SpeechAnalyzer > AnalysisContext lack of documentation
I'm using the new SpeechAnalyzer framework to detect certain commands and want to improve accuracy by giving context. It seems like AnalysisContext is the solution for this, but I couldn't find any usage example, so I want to make sure whether I'm doing it right:

let context = AnalysisContext()
context.contextualStrings = [
    AnalysisContext.ContextualStringsTag("commands"): [
        "set speed level",
        "set jump level",
        "increase speed",
        "decrease speed",
        ...
    ],
    AnalysisContext.ContextualStringsTag("vocabulary"): [
        "speed",
        "jump",
        ...
    ]
]
try await analyzer.setContext(context)

With this implementation, it still gives outputs like "Set some speed level", "It's speed level", etc. Also, is it possible to make it expect a number after those commands, in order to eliminate results like "set some speed level to" (instead of "two")?
Replies: 2 · Boosts: 0 · Views: 576 · Activity: 6d