You just fired up an old Linux-based appliance. Once upon a time it was widely deployed, but it has since fallen by the wayside: Vendor support? Neglected. Software updates? Nonexistent.
Of course, you want to figure out how the underpinnings work, perform vulnerability research on it, or red-team it for a cybersecurity effort.
You have basic local access… but the root file system is largely stripped of any useful tools.
You’re thinking how nice it would be to have modern tools deployed on the system – GDB, Python, a better shell, and Frida – but you want them “on the side” and not affecting the running system.
I feel your pain, reader.
This is quite a common scenario. Commercial consumer devices and their industrial counterparts most often have an end-of-life (EOL) with respect to support, software upgrades, and security patches. Device manufacturers could be acquired or go out of business, and their products stop getting the TLC they deserve. There could be CVE bulletins and n-days from years back that are still applicable to such systems, and they could be ripe for exploitation.
However, it could be challenging to build said software for such targets; you’d inevitably face considerations such as:
- A compatible toolchain is not available; either the device manufacturer never published it or there are problems running it on your currently preferred host.
- Even if a working toolchain is available, it might be too old to build the current iterations of your favorite tools.
- Many tools, such as Frida and GDB, have a laundry list of dependencies in terms of libraries, which most often are not available.
If you’ve stumbled into a situation similar to the above, please read on, since this article will present a hands-on recipe with a potential solution. (Plus, an additional twist of hacking with Raspberry Pi.)
Don’t miss the prequel
This post assumes you’ve read my Part I: Porting Frida to an Unsupported Platform – be sure to start there if you haven’t yet!
The following write-up will augment Part I, providing you with a more comprehensive solution to the effort of running Frida on an unsupported target – while alleviating your EOL pains.
Let’s build a toolset for your hacking target
Over the course of this article we will produce the following artifacts:
- A custom cross-compiler, which generates code for your target architecture – and also is compatible with the runtime environment of the current system.
- A comprehensive set of library prerequisites for the latest versions of Python and Frida.
- A self-contained mini-rootfs, with libraries and tools, built for installation on the target at a suitable writable location.
I’ve used this scheme for a variety of targets, including iPhone/iOS, a proprietary network appliance, and more. Is it the most elegant or configurable framework? Maybe not, but it’s sufficient in the laid-out context.
Now, let’s turn our attention to RPi running OpenWrt.
OpenWrt, Raspberry Pi 1 + host
Continuing the journey started in Part I
In Part I, we used a virtualized 32-bit x86 OpenWrt target. I selected this platform for simplicity, as the main focus of the article was an exploration into porting Frida onto an unsupported target. Since it’s always more fun to run on dedicated hardware, my initial idea for Part II meant procuring a vintage DD-WRT (or similar) device from eBay.
Eventually, I decided to make use of yet another somewhat contrived target: the Raspberry Pi 1. It’s cheap, and very representative of the hardware you might find inside an old network appliance. Also, the RPi Broadcom chipset is supported by the same OpenWrt version used in Part I.
Just to be clear, for the purposes of this article, please view the RPi target as if it were some sort of obsolete appliance – abandoned in its current state, with no vendor updates in sight.
Homage to the venerable ARM1176JZF-S
Raspberry Pi 1 sports the Broadcom BCM2835 chip (same as BCM2708), which is an ARM1176JZF-S implementation – a real celebrity in ARM lore. While not the first MMU-equipped ARM core, it was the one that launched ARM into ubiquity, having been used in the original 2007 iPhone via the Samsung S5L8900 chipset.
OpenWrt and host
Accessing the OpenWrt download archives, you’ll find the same 17.01.2 version used in Part I available for RPi 1.
And for reference, I use Ubuntu 24.04 as the host platform throughout this exercise.
More #FridaGoals
Specifically, our goals are to provide:
- Custom GCC compiler with musl libc for our RPi 1 target running OpenWrt.
- Semi-complete, custom, mini-rootfs with the latest Frida 16.6.6 and Python 3.13 bindings.
- Since we know that /root is the home directory, we choose /root/pack as the target installation directory.
- All libraries and tools should have RPATH pointing to /root/pack/lib.
- All executables should use the native C runtime in /lib/libc.so, which makes things easier for Frida (see Part I).
- Build scripts for compiler, libraries, and tools available in a public GitHub repository. Everything should be built with one make invocation.
Firing up the RPi
If you make use of the available USB ports with ethernet adapters, the RPi could actually serve as a home router.
Normally, you might run Raspberry Pi OS, but naturally we want to run the identified old OpenWrt software. Let’s start by downloading the image here. Plug a microSD card into your host computer (use a USB adapter, if necessary) and pinpoint the device node:
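A quick way to spot it (device names here are examples; yours will differ):

```
lsblk -o NAME,SIZE,MODEL    # look for a disk matching your card's size
dmesg | tail                # or check the kernel log for the newly attached device
```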
In my case, the device is /dev/sdc (naturally, the device will differ on your host). Now, let’s write our downloaded image onto the raw device, overwriting the existing partition table (see OpenWrt instructions):
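Something along these lines should do it – the image filename and device node below are illustrative, so adjust both to your download and host:

```
sudo dd if=lede-17.01.2-brcm2708-bcm2708-rpi-ext4-sdcard.img of=/dev/sdc bs=2M conv=fsync status=progress
```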
The image is not even 300 MB, and thus is far smaller than our – and most – SD cards. Therefore, we want to expand both the partition and file system to utilize the full size.
To grow the second partition to the maximum size of the disk, use fdisk (or similar), and expand the filesystem with resize2fs. (See one of many tutorials on how to do this.)
For good measure, run fsck.ext4 and tune2fs -O^resize_inode on the second partition as well.
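For reference, a sketch of the whole resize dance (again assuming /dev/sdc):

```
sudo fdisk /dev/sdc               # delete partition 2, recreate it with the same
                                  # start sector and the maximum end sector, write
sudo resize2fs /dev/sdc2          # grow the ext4 filesystem to fill the partition
sudo fsck.ext4 -f /dev/sdc2
sudo tune2fs -O ^resize_inode /dev/sdc2
```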
At this point, it’s easiest to use the serial console for the initial network configuration, and we assume that the network you’re connecting to has DHCP service (again, see OpenWrt instructions):
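A minimal sketch of what that can look like with OpenWrt’s uci, assuming the default lan interface and a DHCP-serving network:

```
uci set network.lan.proto='dhcp'
uci commit network
/etc/init.d/network restart
```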
Now you have SSH access (as in Part I). You can follow the same procedure for putting your key on the target and creating a host openwrt SSH alias.
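Roughly, from memory (Part I has the authoritative steps; <target-ip> is a placeholder):

```
# Install your public key for dropbear on the target:
ssh root@<target-ip> 'cat >> /etc/dropbear/authorized_keys' < ~/.ssh/id_ed25519.pub

# Then add an alias on the host in ~/.ssh/config:
# Host openwrt
#     HostName <target-ip>
#     User root
```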
One-stop GitHub repo
Wishing you had everything you need to replicate the builds and the experiments laid out in this article? Well, consider your wish granted, as it’s all available in our GitHub repository:
Let’s clone the repo and take a look:
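(The URL below is a placeholder – substitute the actual repository location:)

```
git clone https://github.com/<org>/frida-musl.git
cd frida-musl && ls
```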
In essence, the folders contain makefiles to build the toolchain, libraries, and Frida tools. Two components I’d initially like to highlight:
- cross folder: the toolchain configuration and makefile.
- Top-level Makefile: recursive makefile to build all Frida components.
Not surprisingly, you have to build the toolchain ahead of Frida – and make sure the toolchain’s bin directory is on your PATH.
Each component is self-contained within one directory:
- The Makefile for each component will, when invoked, download the open-source package into the dnld folder.
- Each component saves the configuration and build logs into files for post-build inspection.
- Each component installs itself into tool-install/pack, which is the staging location for all components.
As we have elaborated on earlier, we have the following requirements for the toolchain:
- ARMv6. It should generate code for the ARMv6 architecture, or more specifically ARMv6kz, optionally tuned for ARM1176JZF-S.
- musl. The libc should be the version used by the target OpenWrt version, which is v1.1.16. It’s worth repeating that the 1.2.x and 1.1.x series are binary incompatible (see Part I).
- Latest GCC. We want to use a new GCC release, otherwise we might not be successful building the latest version of Frida and its dependencies. Let’s be bold and choose the bleeding-edge v14.2.
This toolchain is one of a kind, combining the latest GCC with quite an old musl libc, which targets a vintage 32-bit ARM architecture.
We’re leveraging this GitHub project, which facilitates satisfying all the above goals:
Our cross/makefile will clone this project once invoked, but before that, we’ll inspect the cross/config.mak, which is our custom toolchain configuration.
We observe the following:
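Here’s the gist of it, as a sketch – the variable names assume a musl-cross-make-style config.mak, and the authoritative values live in the repo:

```make
# Sketch of cross/config.mak (illustrative values; see the repo for the real file)
TARGET = arm-linux-musleabihf
OUTPUT = $(CURDIR)/install

GCC_VER  = 14.2.0
MUSL_VER = 1.1.16

# Make ARMv6kz / ARM1176JZF-S the default code-generation target
GCC_CONFIG += --with-arch=armv6kz --with-tune=arm1176jzf-s
GCC_CONFIG += --with-fpu=vfp --with-float=hard
```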
And we define the following:
- The desired GCC compiler triplet, which will be the prefix to all our GCC tools (i.e., arm-linux-musleabihf-gcc, etc.)
- The GCC configure-time fine-tuning for our RPi ARM architecture, which will become the default GCC code generation target
- Installation directory
- Desired versions of GCC and musl
Everything else stays default, as per the author’s recommendations. The only thing you might want to change is the OUTPUT installation directory. The build is complex due to multiple stages, but no need to worry about that. Depending on your host computer, the build could take anywhere between 5 to 30 minutes, so some patience is required.
Build as:
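Something along these lines:

```
cd cross
make 2>&1 | tee build.log
make install
```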
Congratulations, you have an oven-fresh custom toolchain installed under cross/install.
Finally, add the bin path to your environment profile, as per your distribution’s best practices. On Ubuntu 24.04, I usually add the line below into /etc/profile.d/toolchain-custom.sh, so it’s picked up when you begin your login session. For earlier Ubuntu, add the same line in ~/.profile.
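For example (the path assumes the repo was cloned to /opt/frida-musl – adjust to your checkout location):

```
export PATH="$PATH:/opt/frida-musl/cross/install/bin"
```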
For good measure, you can verify that your compiler functions correctly (as outlined in Part I).
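A quick smoke test might look like this:

```
echo 'int main(void) { return 0; }' > hello.c
arm-linux-musleabihf-gcc -o hello hello.c
file hello    # expect: ELF 32-bit LSB executable, ARM, EABI5 ...
```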
Frida, Python + other select tools
Remember, this repository is not meant to compete with Buildroot, OpenEmbedded, or Yocto, which are comprehensive solutions to build a complete embedded Linux system, including kernel, boot loaders, and rootfs. The intention of this repository is only to build a few select tools for deployment onto an existing target, and to overcome the identified challenges building them.
Host prerequisites
For time’s sake, I won’t list the exact set of required packages that should be available on your host machine; just follow the normal distribution recommendations for setting it up for development activities. Some packages might have unusual dependencies, such as texinfo for documentation. Therefore, if a package fails to build, just inspect the log file created and identify what went wrong. The solution, more often than not, is just to install another host package.
That said, I’ll point out a few special prerequisites, which do not fall into the simple “missing package” category:
Python 3.13
The default Python installation on my host is version 3.12, but we’re building a full Python interpreter for version 3.13. Experience has shown that life becomes oh-so-much simpler if the host also has the target version of Python installed. Fortunately, on Ubuntu, this is easily accomplished by installing the deadsnakes PPA, followed by the 3.13 main interpreter and virtual environment.
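In practice, that boils down to:

```
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.13 python3.13-venv
```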
TL;DR:
Type make.
The recursive top-level Makefile will build all the libraries and tools in the dependency order required. Once done, the make install command will create a tarball for target deployment.
Slightly longer TL;DR
The top-level Makefile will build the libraries first, then the tools that depend on them, and finally Frida itself. Some common libraries frequently used by popular tools include:
- ncurses: Console/terminal library
- zlib, xz, bz2, lz4, zstd: Compression libraries
- readline: Library facilitating implementing full-featured REPL/CLI tools (e.g., the Python interactive prompt)
- PCRE: Perl-compatible regular expression library
Subsequent packages built can now compile and link against these libraries.
The build
A few conventions that all components adhere to (distilled into the sketch after this list):
- You build each package with the following command: cd <component folder> && make
- Each folder contains the makefile, patches, etc., necessary to build the component.
- Log files are created during a package build for later inspection, if necessary.
- When invoking make for any package, the makefile will download or git clone the sources and archive the tarball into the dnld folder. Next time you build the component, the download is skipped, since the source package is already present in dnld.
- The components cloned from git repositories are pinned against a commit or tag, and archived as a tarball in dnld.
- All packages install into the staging area at tool-install/pack. Thus, the libraries and header files are installed into tool-install/pack/[include|lib]. In individual package Makefiles, see the usage of CMN_INSTALL in the repo.
- Many packages need to know the path to where they’re being installed. In this case, we can simply choose root’s home directory at /root/pack. See the usage of CMN_TGTINST.
- All packages assume that the CMN_CROSS variable is set for the cross-compilation host triplet – which, in our case, is arm-linux-musleabihf.
- For cross-compilation of GNU autotool packages, normally you invoke the configure script with --host=$(CMN_CROSS) and --prefix=$(CMN_TGTINST), as noted above.
- Since we’re cross-compiling, we’re typically overriding the CC, CFLAGS, and LDFLAGS environment variables during package configure script invocation. Normally it works fine. But if the package doesn’t follow the established GNU autotools conventions, occasionally it’s necessary to do something special.
- Since we want to install to our staging area, rather than the target-only prefix path /root/pack, most package Makefiles invoke the install target with prefix=$(CMN_INSTALL).
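Distilled into makefile form, the pattern for a typical autotools-based component looks roughly like this – CMN_CROSS, CMN_INSTALL, and CMN_TGTINST are the repo’s variables, while PKG and the exact rule layout are illustrative:

```make
# Hypothetical sketch of a component makefile, following the conventions above
build: dnld/$(PKG).tar.gz
	tar -xzf dnld/$(PKG).tar.gz
	cd $(PKG) && \
	CC=$(CMN_CROSS)-gcc \
	CFLAGS="-I$(CMN_INSTALL)/include" \
	LDFLAGS="-L$(CMN_INSTALL)/lib -Wl,-rpath,$(CMN_TGTINST)/lib" \
	./configure --host=$(CMN_CROSS) --prefix=$(CMN_TGTINST) \
		2>&1 | tee ../configure_$(PKG).log
	$(MAKE) -C $(PKG) 2>&1 | tee make-$(PKG).log
	$(MAKE) -C $(PKG) install prefix=$(CMN_INSTALL)
```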
And holistically, as summarized in the TL;DR:
- All packages are built in the correct order with the following command: cd frida-musl && make
- Once all packages are built and staged, you can invoke top-level Makefile again as make install, and everything under tool-install/pack is simply made into a compressed tarball.
Ever-increasing list of “good-to-haves”
During the course of putting this article together, I did add some tools not really necessary for the eventual goal of getting Frida running. However, they were useful for verifying functionality, and they include:
- htop: Nice color-enabled interactive process viewer for monitoring system performance
- nano: Terminal-based editor
- Bash: Full bash shell as an improvement over the system-resident ash
- This package is patched to add paths to bin directories for the target installation at /root/pack/
- binutils: Tools for inspecting binaries
- GDB: Standard Linux debugger
- Quite a comprehensive debugger build, but I could still add libbabeltrace, libsource-highlight, etc. It could also be nice to include a gef configuration.
I left them in the build, as they might be useful for people trying to replicate this.
Python
As usual with everything related to Python, there are numerous ways and tools for accomplishing things. This is very much true when it comes to cross-compiling for an architecture other than the host’s. The main Python package itself, with its many C-language modules, might not pose too much head-scratching, but adding any platform-specific third-party modules surely will.
First, we’ll jump ahead a bit and talk about the Frida python prerequisites. Not only do we need a full distribution of Python for Frida, but we also need some third-party packages added. If you’re running on a normal Linux host machine, you’d normally add these packages to your Python environment with python -m pip install <package name>, and the package manager pip downloads and installs the appropriate package from PyPI.
With our use case, we don’t want to be dependent on network access, as we might conduct our research in an air-gapped environment. As such, we do want to include a number of third-party packages at build time, which satisfy the Frida requirements:
- colorama: For colorful terminal text
- prompt_toolkit: For an interactive CLI experience
- Pygments: For syntax highlighting
- wcwidth: For Unicode string support
- websockets: For RFC 6455 & 7692 support
Therefore, the sources for our Python build will include the main Python tarball from https://www.python.org/ plus the above third-party libraries. We’ll build and install the main Python distribution to the staging area as usual, and install the third-party libraries into the pack/lib/python3.13/site-packages directory. That way, they’re ready to go without needing to pip install the wheels. However, the built wheels are also copied into pack/share for good measure.
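One way this staging can be scripted – a sketch, as the repo’s makefile logic may differ. These packages are pure Python or fall back to pure-Python code paths (websockets’ optional C speedups do so on import failure), so host-fetched wheels also work on the target:

```
python3.13 -m pip download -d dnld/wheels \
    colorama prompt_toolkit Pygments wcwidth websockets
python3.13 -m pip install --no-index --find-links=dnld/wheels \
    --target=tool-install/pack/lib/python3.13/site-packages \
    colorama prompt_toolkit Pygments wcwidth websockets
```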
Verifying the Python build
Building the main Python tarball is somewhat straightforward. However, please note that if prerequisite libraries are missing, the consequence is usually just that the corresponding standard-library modules are not built. And if a module fails to build, most often it fails silently. Therefore, it’s important to inspect the configuration and build log files and verify that (almost) all modules are built as well (some are teed up for deprecation, or irrelevant for your use case).
First, inspect the configuration log file configure_Python-3.13.2.log towards the end:
Only two modules will be excluded, and neither is critical: _dbm is obsolete, and we won’t do any Tk scripting.
Next, inspect the build log make-default-Python-3.13.2.log towards the end:
We’re all good; the log confirms that, as expected, only two modules were excluded from the build. (And be sure to check the final module count.)
Finally, we want to verify that the third-party modules were built successfully:
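One low-tech way is to list the staged site-packages directory:

```
ls tool-install/pack/lib/python3.13/site-packages
# expect colorama, prompt_toolkit, pygments, wcwidth, and websockets (plus
# their *.dist-info directories) to be present
```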
Frida
And we save the most complicated tool build for last!
Much of this ground was already covered in Part I, and the same troubleshooting steps still apply here. Basically, we’ll augment the Part I build with full Python support. And please recall the additional module prerequisites that we’ve already taken care of in the Python environment. Note: Some of the patches required in Part I are no longer necessary – the good Frida author accepted my merge requests, so they’re now in mainline.
The mission (a sketch of the resulting configure invocation follows the list):
- We need to do a recursive clone of the top-level Frida repository with all submodules included, then we branch the sources from the 16.6.6 tag and archive the sources as usual under dnld.
- As long as you don’t change the toolchain used, the Frida SDK build is archived, once built, under frida/archive. Subsequent recompilations of Frida will use the cached SDK as is. If you want to rebuild the SDK, just delete the archive folder.
- The few patches for the Frida submodules are located under frida/patches.
- As in Part I, the bootstrapper and loader are rebuilt with our custom toolchain.
- And we make sure to enable Python support with the configure-time flags --enable-frida-tools and --enable-frida-python. We also need to tell the Frida environment where the Python 3.13 headers are located with the meson flag -Dc_args="-I$(CMN_INSTALL)/include/$(CMN_PYEXE)".
- The --enable-frida-tools option will install Python scripts into $(CMN_INSTALL)/bin, but those scripts assume the standard /usr/bin/python3 interpreter path, which is incorrect for our target. We fix these paths with some crafty sed processing.
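Putting the pieces together, the configure step looks roughly like this – note that how the extra meson -Dc_args argument is forwarded is my shorthand here; see the repo’s frida makefile for the real incantation:

```
./configure --host=arm-linux-musleabihf \
            --prefix=$(CMN_TGTINST) \
            --enable-frida-tools \
            --enable-frida-python \
            -- -Dc_args="-I$(CMN_INSTALL)/include/$(CMN_PYEXE)"
```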
Verifying the Frida build
After the make command finishes, you can inspect the log file make-frida-16.6.6.log and look for the following key items:
We’re building everything:
All targets were successfully built!
Tarball
As shown in the TL;DR above, invoking the top-level make install command generates a compressed tarball of the tool-install/pack folder.
The pack tarball filename includes the architecture, git short commit, and git branch for uniqueness purposes.
At this point, I don’t seek to minimize the footprint in any meaningful way: binaries are not stripped, and all installed artifacts are included, even the static libraries. Suffice it to say, there’s plenty of room to shrink the tarball – but that’s a mission for another day.
Run on RPi
So, we have our compressed tarball in hand. Now we just need to transfer it to the target, log into the target, and uncompress the fresh tarball.
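For example, using the openwrt SSH alias from earlier (the tarball filename is illustrative, and I’m assuming the archive is rooted at pack/):

```
# Host side: copy the tarball over
scp pack-armv6-*.tar.gz openwrt:/tmp/

# Target side: unpack so everything lands under /root/pack
ssh openwrt 'tar -xzf /tmp/pack-armv6-*.tar.gz -C /root'
```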
As we see, we have installed everything in its correct target location under /root/pack.
We’ve gotten acquainted with frida-inject while exploring its functionality in Part I. Therefore, we won’t dwell on it here, leaving it as a reader exercise.
What if Frida could make it even more convenient for the user to trace function invocations, program JavaScript manipulations, etc.?
Well, welcome the frida-tools utilities, such as:
- frida: Python REPL start command
- frida-trace: Automatic generation of JavaScript function hooks (and more)
The above commands and friends (see https://frida.re/docs/home) are all implemented in terms of Python.
Let’s target the exact same daemon as in Part I – but this time, we make use of frida-trace:
This is very interesting. Unpacking the above:
- OpenWrt console prompt lines
- Either we add /root/pack/[s]bin to our $PATH, or we can start our custom shell to gain access to all our new tools. We choose the latter here.
- root@LEDE:~# frida-trace --decorate -i "read*" -i "send*" uhttpd
- With the -i flags, we can use a wildcard syntax to target a family of functions – in this case, all read and send functions in any library mapped into the address space of uhttpd.
- Auto-generated handler ad nauseam
- Trace handlers for all functions matching the wildcards are automatically generated. The default handlers merely print invocations of the functions to stdout. However, users can now edit and augment the tracing as they wish, using any of the available Frida JavaScript APIs. There’s ample awesome functionality to explore, and I’d refer to the Frida documentation.
- Web UI available at http://localhost:33153/
- There’s a Web UI started on the local network interface, but since we’re running on a terminal-only system, I believe it’s of no real use to us here.
- 1104089 ms read() [libc.so]
- When navigating our host web browser to our target IP address, we instantly see continuous traces from the regular read function as long as the browser is rendering pages (huzzah!).
Let’s inspect the generated handler for read():
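From memory, the auto-generated handler has roughly this shape (your generated file may differ in detail):

```js
/*
 * __handlers__/libc.so/read.js – the shape of a frida-trace auto-generated
 * handler (reconstructed; not copied verbatim from this build).
 */
defineHandler({
  onEnter(log, args, state) {
    // args[0..2] hold the raw fd/buf/count arguments of read(2)
    log('read()');
  },

  onLeave(log, retval, state) {
    // retval holds the raw return value; inspect or overwrite it here
  }
});
```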
Now you can experiment with adding more elaborate tracing or live data manipulation. Your imagination is the only limit.
Mission Accomplished: Frida runs on RPi with OpenWrt
We succeeded in getting the latest version of Frida, with Python bindings, to run on a 20-year-old ARM chip with musl-based system software. This project was not without challenges, package dependency chasing, and the usual tinkering, but most of the temporary roadblocks were omitted from this write-up due to article-length concerns.
An interested reader should have no major problems replicating this build, as all the necessary build scripts reside in our public GitHub repository.
I’d personally like to extend my gratitude to Ole, the creator of Frida, for making a very interesting and useful piece of software publicly available.
The Raspberry Pi 1, with its ARM11 core, is by no means the first Linux-capable chip. ARM9TDMI and ARM9E(J)S implementations were already running Linux in the early 2000s – at a blistering speed of around 200 MHz.
Over to you…
It would be interesting to see whether any readers rise to the occasion of porting Frida & friends to the earliest ARM possible.
If so, it would be quite a simple task to fork my repository and make the necessary changes. Off the top of my head, you’d need to change the toolchain build to target the ARMv4T or ARMv5TE architecture (for ARM9TDMI or ARM9E cores, respectively). Apart from that, the rest might just work. Since ARM9 chips have fewer resources, one potential problem could be RAM size, but again, there are many opportunities to minimize the storage and RAM footprint not explored in this article.
Let our team know how it goes if you venture down this route – we love hearing what readers learn when they go on an engineering adventure.
Illustration by Rebecca DeField.