Hi HN,
I'm Allen Townsend, an independent ML researcher. I've developed a novel gradient-free machine learning algorithm that enables faster training and inference than traditional gradient-based approaches, especially in scenarios where:
- Gradients are hard or impossible to compute
- Simulation or black-box models dominate
- Speed and stability are critical
This method avoids backpropagation entirely, yet achieves competitive results in optimization-heavy tasks. It's well-suited for reinforcement learning, simulation-based systems, and edge deployment use cases.
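For readers less familiar with gradient-free optimization in general, here's a minimal sketch of the broader family of techniques I'm talking about: a simple evolution-strategies-style random perturbation search that optimizes a black-box objective without any backpropagation. To be clear, this is a generic textbook baseline for illustration only, not my algorithm; the function names and parameters are placeholders I picked for the example.

```python
import numpy as np

def gradient_free_optimize(objective, x0, n_iters=500, pop_size=32,
                           sigma=0.1, lr=0.05, seed=0):
    """Minimize a black-box objective with an evolution-strategies-style
    perturbation search. The objective is only ever *evaluated*; no
    gradients of it are computed."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        # Sample Gaussian perturbation directions around the current point.
        eps = rng.standard_normal((pop_size, x.size))
        # Evaluate the objective at antithetic pairs x + sigma*e and x - sigma*e.
        scores_pos = np.array([objective(x + sigma * e) for e in eps])
        scores_neg = np.array([objective(x - sigma * e) for e in eps])
        # Estimate a descent direction from score differences
        # (finite differences along random directions), then step.
        grad_est = ((scores_pos - scores_neg)[:, None] * eps).mean(axis=0) / (2 * sigma)
        x -= lr * grad_est
    return x

if __name__ == "__main__":
    # Example: a noisy, non-differentiable black-box function.
    def black_box(x):
        return np.sum(np.abs(x - 3.0)) + 0.01 * np.random.randn()

    x_opt = gradient_free_optimize(black_box, x0=np.zeros(5))
    print("approximate minimizer:", np.round(x_opt, 2))
```

My approach differs from this baseline in how it searches and how it scales, but the sketch shows the basic appeal: it only needs objective evaluations, so it works even when the model is a simulator or other black box.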
I'm currently looking for contract work to help teams develop custom models using this approach. If you have a tough modeling or optimization problem and want to explore this new direction, I’d love to collaborate.
You can reach me at [email protected].
Happy to provide examples, benchmarks, or run a quick pilot project. Thanks for reading!
— Allen Townsend