Decrypted generative model safety files for Apple Intelligence, containing the safety filter overrides.
- `decrypted_overrides/`: Contains decrypted overrides for various models.
  - `com.apple.*/`: Directory named using the Asset Specifier associated with the safety info
    - `Info.plist`: Contains metadata for the override
    - `AssetData/`: Contains the decrypted JSON files
- `get_key_lldb.py`: Script to get the encryption key (see usage info below)
- `decrypt_overrides.py`: Script to decrypt the overrides (see usage info below)
`cryptography` is the only dependency required to run the decryption script. You can install it using pip:
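```shell
pip install cryptography
```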
To retrieve the encryption key for the overrides (generated by `ModelCatalog.Obfuscation.readObfuscatedContents`), you must attach LLDB to GenerativeExperiencesSafetyInferenceProvider (`/System/Library/ExtensionKit/Extensions/GenerativeExperiencesSafetyInferenceProvider.appex/Contents/MacOS/GenerativeExperiencesSafetyInferenceProvider`). Note that this must be Xcode's LLDB, not the default macOS one or LLVM's lldb. The method I recommend for getting LLDB to attach:
- Run `sudo killall GenerativeExperiencesSafetyInferenceProvider; sudo xcrun lldb -w -n GenerativeExperiencesSafetyInferenceProvider /System/Library/ExtensionKit/Extensions/GenerativeExperiencesSafetyInferenceProvider.appex/Contents/MacOS/GenerativeExperiencesSafetyInferenceProvider`
- In the Shortcuts app, create a dummy shortcut that uses the Generative Model action ("Use Model") and select the On-Device option. Type whatever you want into the text field; the content doesn't matter. Then run the shortcut.
- You should see LLDB attach to the newly started instance of GenerativeExperiencesSafetyInferenceProvider.
- From this repository's root, run `command script import get_key_lldb.py` in LLDB.
- Then run `c` to continue the process. LLDB will print the encryption key to the console and save it to `./key.bin`.
To decrypt the overrides, run the following command in the root of this repository:
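A typical invocation, assuming `decrypt_overrides.py` reads the key from the `./key.bin` written in the previous step (check the script's argument handling if it differs):

```shell
python3 decrypt_overrides.py
```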
The `decrypted_overrides` directory will be created if it does not exist, and the decrypted overrides will be placed in it. This step is only necessary if the overrides have been updated; this repository already contains a decrypted version of the overrides that is up to date as of June 28, 2025.
The overrides are JSON files that contain safety filters for various generative models. Each override is associated with a specific model context (from what I can tell) and contains rules that determine how the model should behave in certain situations, such as filtering out harmful content or ensuring compliance with safety standards.
Here is an example of one of the overrides' `metadata.json` files, sourced from `dec_out_repo/decrypted_overrides/com.apple.gm.safety_deny.output.code_intelligence.base`. Note the `output` part of the specifier, which indicates that this is a safety override for model output rather than user input:
Here, the `reject` field contains exact phrases which will result in a guardrail violation. The `remove` field contains phrases that will be removed from the output, while the `replace` field contains phrases that will be replaced with other phrases. The `regexReject`, `regexRemove`, and `regexReplace` fields contain regular expressions that will be used to match and filter content in a similar manner.
