Run any Python function on 1000 computers in 1 second.
Burla is the simplest way to scale Python. It has one function: remote_parallel_map. It's open-source and works with GPUs, custom Docker containers, and up to 10,000 CPUs at once.
A data platform any team can learn in minutes:
Scale machine learning systems or other research efforts without weeks of onboarding or setup. Burla's open-source web platform makes it simple to monitor long-running pipelines or training runs.
How it works:
Burla only has one function:
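A minimal sketch of what a call looks like (the function and inputs here are placeholders; see the docs for the exact signature):

```python
from burla import remote_parallel_map

def my_function(x):
    # An ordinary Python function; each call runs on a machine in the cluster.
    print(f"processing input {x}")  # prints stream back to your local terminal
    return x * 2

# Inputs are processed in parallel, and the return values come back as a list.
results = remote_parallel_map(my_function, list(range(1000)))
```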
With Burla, running code in the cloud feels the same as coding locally:
Anything you print appears in your local terminal.
Exceptions thrown in your code are thrown on your local machine.
Responses are pretty quick: you can run a million function calls in a couple of seconds!
Features:
📦 Automatic Package Sync
Burla clusters automatically (and very quickly) install any missing Python packages into all containers in the cluster.
🐋 Custom Containers
Easily run code in any Linux-based Docker container, public or private. Just paste an image URI in the settings, then hit start!
📂 Network Filesystem
Need to get big data into/out of the cluster? Burla automatically mounts a cloud storage bucket to your working directory.
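A sketch of what that can look like in practice (the paths and file contents here are hypothetical, and mount details depend on how your cluster is configured):

```python
from pathlib import Path
from burla import remote_parallel_map

def summarize(doc_id):
    # Files written under the working directory land in the mounted storage
    # bucket, so results persist after the worker shuts down.
    out_path = Path("summaries") / f"{doc_id}.txt"
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(f"summary for {doc_id}")
    return str(out_path)

written = remote_parallel_map(summarize, ["doc_a", "doc_b", "doc_c"])
```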
⚙️ Variable Hardware Per-Function
The func_cpu and func_ram args make it possible to assign more hardware to some functions and less to others, unlocking new ways to simplify pipelines and architecture.
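For example (a sketch assuming func_cpu and func_ram are passed as keyword arguments to remote_parallel_map, with func_ram in GB), a heavy step can request more hardware than the lightweight steps around it:

```python
from burla import remote_parallel_map

def big_join(shard_id):
    # A memory-hungry step: request 32 CPUs and 128GB of RAM per call.
    return f"joined shard {shard_id}"

def light_cleanup(record_id):
    # A cheap step: the default hardware is plenty.
    return f"cleaned record {record_id}"

remote_parallel_map(big_join, list(range(8)), func_cpu=32, func_ram=128)
remote_parallel_map(light_cleanup, list(range(10_000)))
```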
Easily create pipelines without special syntax.
Nest remote_parallel_map calls to fan code in/out over thousands of machines. Example: Process every record of many files in parallel, then combine results on one big machine.
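A sketch of that fan-out/fan-in pattern (the file names, record format, and hardware numbers below are hypothetical):

```python
from burla import remote_parallel_map

def process_record(record):
    # Inner level: one call per record, spread across many machines.
    return len(record)

def process_file(filename):
    # Outer level: one call per file; each call fans out again over its records.
    records = open(filename).read().splitlines()
    return sum(remote_parallel_map(process_record, records))

# Process every record of every file in parallel...
per_file_totals = remote_parallel_map(process_file, ["a.txt", "b.txt", "c.txt"])

def combine(totals):
    # ...then combine the results in a single call on one big machine.
    return sum(totals)

grand_total = remote_parallel_map(combine, [per_file_totals], func_cpu=64, func_ram=256)[0]
```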
Watch our Demo:
Get started now:
Questions? Schedule a call, or email [email protected]. We're always happy to talk.
