Show HN: UHOP – Escaping Nvidia Lock-In with an Open Hardware Optimization Layer


I built UHOP (Universal Hardware Optimization Platform) because I was tired of the invisible cage that GPU developers live in — everything we write seems to orbit around NVIDIA. CUDA is amazing, but it’s also a moat. Porting the same code to ROCm or OpenCL usually means starting over.

UHOP is an open-source attempt to break that lock-in by introducing a cross-vendor optimization layer. It detects your hardware (CUDA, ROCm, OpenCL, etc.), generates or benchmarks kernels, and caches the best performer. You can wrap your ops with a decorator, let UHOP choose or generate the kernel, and it just runs — wherever.

Features so far:

Hardware detection + backend selection

AI-assisted kernel generation (CUDA / OpenCL / Triton)

Fused op demos (conv2d+ReLU, matmul, etc.)

Kernel benchmarking and caching

CLI + early browser dashboard
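The detection-plus-selection step in the list above can be sketched as a simple probe over available runtimes. This is an assumption about how such a layer might work, not UHOP's implementation: it only checks which GPU runtimes' Python bindings are importable and falls back to CPU.

```python
# Hedged sketch of hardware detection + backend selection.
import importlib.util

# Hypothetical preference order: backend name -> Python module whose
# presence suggests that runtime is available on this machine.
BACKEND_PROBES = [
    ("cuda", "cupy"),        # CUDA via CuPy bindings
    ("rocm", "hip"),         # ROCm HIP bindings
    ("opencl", "pyopencl"),  # OpenCL via PyOpenCL
]


def select_backend():
    """Return the first backend whose runtime bindings are importable."""
    for name, module in BACKEND_PROBES:
        if importlib.util.find_spec(module) is not None:
            return name
    return "cpu"  # always-available fallback


print(select_backend())
```

A real detection layer would also query device properties (compute capability, memory, driver version) rather than just import availability, but the fallback-chain shape is the same.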

There’s a long way to go — distributed tuning, compiler IR passes, better PyTorch/JAX hooks — but it’s open, hackable, and community-driven.

Repo: github.com/sevenloops/uhop

Demo: uhop.dev

Would love feedback from compiler engineers, GPU devs, or anyone who’s ever felt boxed in by vendor APIs.


Comments URL: https://news.ycombinator.com/item?id=45669995
