Show HN: NitNab, a Privacy-Centric AI Transcription App for macOS 26



Nifty Instant Transcription · Nifty AutoSummarize Buddy

A powerful, privacy-focused native macOS application for transcribing audio files using Apple's cutting-edge Speech framework and Apple Intelligence. Built for macOS 26+ with Swift 6.0 and optimized for Apple Silicon.

Features

  • 🎵 Multi-Format Support: M4A, WAV, MP3, AIFF, CAF, FLAC, and more
  • 🌍 Multi-Language: Supports all languages available in macOS Speech framework
  • Fast & Efficient: Leverages Apple's on-device SFSpeechRecognizer API
  • 🔒 Privacy-First: All processing happens locally on your Mac
  • 🔄 Batch Processing: Transcribe multiple files in sequence with automatic error handling
  • 📊 Progress Tracking: Real-time progress updates for each file
  • AI Summaries: Generate concise summaries using Apple Intelligence (FoundationModels)
  • 💬 Interactive Chat: Ask questions about transcripts, draft emails, extract action items
  • 🤖 Context-Aware: AI maintains conversation history for natural interactions
  • 💾 Auto-Save: Automatically saves audio files, transcripts, summaries, and chat history
  • ☁️ iCloud Sync: Built-in iCloud Drive support for seamless device sync
  • 📁 Custom Storage: Choose any folder for local-only storage
  • 🗂️ Organized Structure: Each transcription stored in its own timestamped folder
  • 🔄 Cross-Device Ready: Designed for future iOS/iPadOS app integration
  • 📤 Multiple Export Formats: Plain Text, SRT, WebVTT, JSON, Markdown
  • 📋 One-Click Copy: Copy transcripts, summaries, or chat responses instantly
  • 💾 Flexible Output: Export individual files or batch exports
  • 🎨 Beautiful UI: Modern SwiftUI interface with three-tab view (Transcript/Summary/Chat)
  • 🖱️ Drag & Drop: Add files by dragging or using file picker
  • 👆 Clickable Files: Select any file to view its transcript, summary, or errors
  • 🚫 No Popups: Errors display inline without blocking workflow
  • 🔵 Visual Selection: Blue border highlights selected file
Requirements

  • macOS 26.0 (Tahoe) or later - Required for Apple Intelligence features
  • Apple Silicon Mac - Required for FoundationModels API
  • Xcode 26.0 or later - For building from source
  • Speech Recognition permission - Granted on first launch
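
Before enabling the Summary and Chat tabs, the app can check at runtime whether the on-device model is actually usable. A minimal sketch (not NitNab's actual code), assuming the FoundationModels `SystemLanguageModel` availability API:

```swift
import FoundationModels

// Report whether Apple Intelligence is usable on this machine.
// Assumes macOS 26+ on Apple Silicon; sketch only.
func appleIntelligenceStatus() -> String {
    switch SystemLanguageModel.default.availability {
    case .available:
        return "Apple Intelligence is ready"
    case .unavailable(let reason):
        return "Apple Intelligence unavailable: \(reason)"
    }
}
```

When the model is unavailable (e.g. Apple Intelligence disabled in System Settings), the AI tabs can be greyed out instead of failing at request time.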

Option 1: Build from Source

  1. Clone the repository:
     git clone https://github.com/lanec/nitnab.git
     cd nitnab
  2. Open the project in Xcode:
     open NitNab/NitNab.xcodeproj
  3. Build and run (⌘R)

Option 2: Download Release

Download the latest release from the Releases page.

Quick Start

  1. Launch NitNab
  2. Grant Speech Recognition permission when prompted
  3. Click "Browse Files" to select audio files (recommended over drag-and-drop)
  4. Select your preferred language from the dropdown
  5. Click "Start Transcription"
  6. Click completed files to view transcripts
  7. Use Summary tab to generate AI summaries
  8. Use Chat tab to interact with the transcript
Adding Files

  • File Picker (Recommended): Click "Browse Files" to select files from Finder
    • Automatically grants file access permissions
    • Most reliable method
  • Drag & Drop: Drag audio files directly onto the app window
    • May require Full Disk Access in System Settings
  • Batch Import: Select multiple files at once for batch processing
Transcribing Files

  1. Add one or more audio files using "Browse Files"
  2. Select the language from the dropdown (defaults to English)
  3. Click "Start Transcription"
  4. Monitor progress in real-time (time-based estimation)
  5. Files process automatically, skipping empty audio files
  6. Click any completed file to view results
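
Under the hood, on-device file transcription with Apple's Speech framework follows a simple pattern. A hedged sketch, not the app's actual source (`transcribeFile` is an illustrative helper):

```swift
import Speech

// Transcribe an audio file entirely on-device.
// Illustrative only; NitNab's real service is actor-based.
func transcribeFile(at url: URL, locale: Locale = .current) async throws -> String {
    guard let recognizer = SFSpeechRecognizer(locale: locale) else {
        throw NSError(domain: "NitNab", code: 1,
                      userInfo: [NSLocalizedDescriptionKey: "Locale not supported"])
    }
    let request = SFSpeechURLRecognitionRequest(url: url)
    request.requiresOnDeviceRecognition = true  // privacy: audio never leaves the Mac

    return try await withCheckedThrowingContinuation { continuation in
        recognizer.recognitionTask(with: request) { result, error in
            if let error {
                continuation.resume(throwing: error)
            } else if let result, result.isFinal {
                continuation.resume(returning: result.bestTranscription.formattedString)
            }
        }
    }
}
```

Setting `requiresOnDeviceRecognition` is what guarantees the privacy-first behavior described above: if the locale has no on-device model installed, the request fails rather than falling back to a server.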

Transcript Tab

  • View full transcription text
  • Click "Copy" button to copy entire transcript
  • Text is selectable for partial copying

Summary Tab

  • Click "Generate Summary" to create AI-powered summary
  • Click "Copy" to copy summary text
  • Click "Regenerate" for a new summary
  • Powered by Apple Intelligence (FoundationModels)

Chat Tab

  • Ask questions about the transcript
  • Request email drafts or action items
  • Get context-aware AI responses
  • Conversation history maintained
  • Try suggestions: "Draft an email about this", "What action items were mentioned?"
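
Summary generation and chat can both be driven by a single FoundationModels session, which keeps prior turns in context so follow-up questions work naturally. A sketch under the assumption that the app uses `LanguageModelSession` (the instructions string is illustrative):

```swift
import FoundationModels

// One session per transcript: the model retains conversation
// history across respond(to:) calls. Illustrative sketch only.
func makeTranscriptSession(transcript: String) -> LanguageModelSession {
    LanguageModelSession(instructions: """
        You are a helpful assistant. Answer questions about this transcript:
        \(transcript)
        """)
}

func summarize(with session: LanguageModelSession) async throws -> String {
    let response = try await session.respond(to: "Summarize this transcript concisely.")
    return response.content
}
```

Because the session itself holds the history, a later prompt like "Draft an email about this" needs no re-sending of the transcript.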

Data Persistence & iCloud Sync

Automatic Saving (Enabled by Default)

When you transcribe files, NitNab automatically saves everything to a structured folder:

YourChosenFolder/NitNab/
├── nitnab.db                        # SQLite database tracking all transcriptions
└── 2025-10-09_15-30-45_MyRecording/
    ├── Audio/
    │   └── MyRecording.m4a          # Copy of original audio file
    ├── Transcript/
    │   ├── transcript.txt           # Full transcript text
    │   └── metadata.json            # Job details (duration, confidence, etc.)
    └── AI Summary/
        ├── summary.txt              # AI-generated summary (after generation)
        └── chat.json                # Chat conversation history
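
The timestamped folder name can be produced with a plain `DateFormatter`; a minimal sketch (the helper name is illustrative, not NitNab's actual implementation):

```swift
import Foundation

// Build a per-transcription folder like "2025-10-09_15-30-45_MyRecording".
func transcriptionFolderURL(base: URL, recordingName: String, date: Date = Date()) -> URL {
    let formatter = DateFormatter()
    formatter.dateFormat = "yyyy-MM-dd_HH-mm-ss"
    let folderName = "\(formatter.string(from: date))_\(recordingName)"
    return base.appendingPathComponent("NitNab")
               .appendingPathComponent(folderName)
}
```

Sorting folders lexicographically then matches chronological order, which keeps Finder listings tidy.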

Storage Options:

  1. iCloud Drive (Recommended): Automatically syncs across all your Macs and future iOS devices

    • Uses app-specific ubiquitous container: iCloud.com.lanec.nitnab
    • Location: iCloud~com~lanec~nitnab/Documents/NitNab/
    • Select "Use iCloud Drive" in Settings → Persistence
    • Configured automatically on first launch if iCloud is available
  2. Custom Local Folder: Store files anywhere on your Mac

    • Choose "Choose Folder..." in Settings → Persistence
    • Great for local-only storage or external drives

Managing Persistence:

  • Toggle auto-save on/off in Settings → Persistence
  • Files are saved after transcription, summary generation, and each chat message
  • All data syncs via iCloud if that option is selected
  • Future mobile app will access the same iCloud data
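
Locating the app's iCloud Drive container goes through FileManager's ubiquity API; a hedged sketch using the container identifier listed above:

```swift
import Foundation

// Resolve the app's iCloud Drive NitNab folder, or nil when
// iCloud is unavailable. Sketch only; error handling simplified.
func iCloudNitNabURL() -> URL? {
    FileManager.default
        .url(forUbiquityContainerIdentifier: "iCloud.com.lanec.nitnab")?
        .appendingPathComponent("Documents")
        .appendingPathComponent("NitNab")
}
```

Note that `url(forUbiquityContainerIdentifier:)` can block, so Apple recommends calling it off the main thread.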

Export Formats

Export transcripts in multiple formats:

  • Plain Text (.txt): Simple, clean transcript
  • SRT (.srt): Subtitle format with timestamps
  • WebVTT (.vtt): Web video text tracks
  • JSON (.json): Structured data with metadata
  • Markdown (.md): Formatted document
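
SRT entries pair a sequence number with `HH:MM:SS,mmm` timestamps separated by `-->`. A small sketch of that formatting (not taken from the app's ExportService):

```swift
import Foundation

// Format seconds as an SRT timestamp, e.g. 75.5 -> "00:01:15,500".
func srtTimestamp(_ seconds: Double) -> String {
    let ms = Int((seconds * 1000).rounded())
    return String(format: "%02d:%02d:%02d,%03d",
                  ms / 3_600_000, (ms / 60_000) % 60, (ms / 1000) % 60, ms % 1000)
}

// Render one subtitle cue in SRT form.
func srtCue(index: Int, start: Double, end: Double, text: String) -> String {
    "\(index)\n\(srtTimestamp(start)) --> \(srtTimestamp(end))\n\(text)\n"
}
```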

Access export options from the Export menu in the header or file context menu.

Keyboard Shortcuts

  • ⌘N: Add new files
  • ⌘R: Start transcription
  • ⌘.: Cancel transcription
  • ⌘C: Copy selected transcript
  • ⌘,: Open settings

NitNab is built with modern Swift and SwiftUI:

NitNab/
├── Models/                          # Data models
│   ├── TranscriptionJob.swift       # Job state and metadata
│   ├── AudioFile.swift              # Audio file information
│   ├── TranscriptionResult.swift    # Transcript data
│   └── PersistedJobData.swift       # Serializable job data
├── Services/                        # Business logic (Actor-based)
│   ├── AIService.swift              # Apple Intelligence integration
│   ├── AudioFileManager.swift       # Audio file operations
│   ├── TranscriptionService.swift   # Speech recognition
│   ├── ExportService.swift          # Multi-format export
│   ├── PersistenceService.swift     # File system persistence
│   └── DatabaseService.swift        # SQLite job tracking
├── ViewModels/                      # MVVM view models
│   └── TranscriptionViewModel.swift # Main app coordinator
└── Views/                           # SwiftUI views
    ├── ContentView.swift            # Main app container
    ├── HeaderView.swift             # Top bar with controls
    ├── DropZoneView.swift           # File drop target
    ├── FileListView.swift           # Job list sidebar
    ├── TranscriptView.swift         # Three-tab interface
    └── SettingsView.swift           # App configuration

  • SwiftUI: Modern declarative UI framework
  • Swift Concurrency: Async/await and actors for clean asynchronous code
  • Actors: Thread-safe service layer with isolated state
  • SFSpeechRecognizer: Apple's on-device speech-to-text API
  • FoundationModels: Apple Intelligence for summaries and chat (macOS 26+)
  • LanguageModelSession: On-device LLM integration
  • AVFoundation: Audio file processing and format conversion
  • SQLite: Database for job tracking and metadata
  • FileManager: iCloud and local file system integration
  • MVVM Architecture: Clean separation of concerns
  • Privacy-First: All processing happens on-device, no cloud services
  • Offline-Capable: Works without internet connection
  • Actor-Based Concurrency: Thread-safe by design
  • Persistent State: Jobs and transcripts survive app restarts
  • iCloud Sync Ready: Built for seamless cross-device sync
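
The actor-based, thread-safe design can be illustrated with a small sketch (the type names are hypothetical, not the app's real types):

```swift
import Foundation

// An actor serializes access to its state, so concurrent
// callers can never race on `jobs`.
actor JobStore {
    private var jobs: [UUID: String] = [:]

    func add(_ transcript: String) -> UUID {
        let id = UUID()
        jobs[id] = transcript
        return id
    }

    func transcript(for id: UUID) -> String? { jobs[id] }
}

// View models stay on the main actor and hop into the
// service with await, keeping UI updates on the main thread.
@MainActor
final class DemoViewModel {
    let store = JobStore()
    var latest: String?

    func load(_ id: UUID) async {
        latest = await store.transcript(for: id)
    }
}
```

The compiler enforces this isolation: touching `jobs` from outside the actor without `await` is a build error, which is what "thread-safe by design" means in practice.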

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add some AmazingFeature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

This project is licensed under the MIT License - see the LICENSE file for details.

  • Built with Apple's SpeechTranscriber API
  • Inspired by the need for privacy-focused, local transcription tools
  • Thanks to the Swift and macOS developer community

Lane Campbell - @lanec

Project Link: https://github.com/lanec/nitnab

Website: https://www.nitnab.com

# Clone the repository
git clone https://github.com/lanec/nitnab.git
cd nitnab

# Open in Xcode
open NitNab/NitNab.xcodeproj

# Build and run
# Press ⌘R or Product > Run

The project follows clean MVVM architecture with actor-based services:

  • Models: Immutable data structures
  • Services: Actor-isolated business logic
  • ViewModels: @MainActor view coordinators
  • Views: SwiftUI declarative UI

Run tests with:

xcodebuild test -project NitNab/NitNab.xcodeproj -scheme NitNab

Troubleshooting

Speech Recognition Not Working

  • Check Permissions: Go to System Settings → Privacy & Security → Speech Recognition
  • Enable for NitNab: Make sure NitNab is checked
  • Restart the App: Quit and relaunch NitNab

Saved Data Issues

  • Database Migration: The app automatically migrates old database schemas
  • Check Storage: Ensure sufficient disk space in your iCloud or local storage
  • View Logs: Check Console.app for any error messages from NitNab

Apple Intelligence Features Not Available

  • Requirements: macOS 26.0+ and Apple Silicon required
  • Enable Apple Intelligence: System Settings → Apple Intelligence
  • Language: Currently Apple Intelligence requires English (US)

Build Issues

  • Xcode Version: Ensure you're using Xcode 26.0 or later
  • macOS SDK: The project requires the macOS 26.0 SDK
  • Clean Build: Try Product → Clean Build Folder (⌘⇧K)

Current Features

  • Multi-language transcription
  • AI-powered summarization
  • Interactive AI chat
  • Batch processing
  • iCloud sync for transcripts
  • Database persistence
  • Multiple export formats

Planned Features

  • Live audio recording and transcription
  • Speaker diarization (multiple speakers)
  • Streaming transcription with real-time updates
  • Custom vocabulary support
  • iOS/iPadOS companion app
  • Shortcuts integration
  • Share extension for system-wide transcription
  • Timeline view with segment navigation

Screenshots coming soon


Made with ❤️ by Lane Campbell
