Guide: Setting up Nvidia DGX Spark with ggml


In this guide we will configure the NVIDIA DGX™ Spark as a local and private AI assistant using the ggml software stack. The guide is geared towards developers and builders. We are going to set up the following AI capabilities:

  • AI chat
  • AI coding agent
  • Inline text completion service
  • Embeddings service
  • Vision service
  • Speech-to-text (STT) service

These features will run simultaneously, in your local network, allowing you to fully utilize the power of your device at home or in the office.


Software

We are going to use the following open-source software:

  • llama.cpp — LLM inference server (chat, completions, FIM, embeddings, vision)
  • whisper.cpp — speech-to-text inference
  • llama.vim — inline completions for Vim/Neovim
  • llama.vscode — completions and coding agent support for VSCode

Simply run the following command in a terminal on your NVIDIA DGX™ Spark:

bash <(curl -s https://ggml.ai/dgx-spark.sh)

Note

The dgx-spark.sh script above is quite basic and is merely one of the many possible ways you can configure your device for AI use cases. It is provided here mainly for convenience and as an example. Feel free to inspect it and adjust it for your needs.

The command downloads and builds the latest version of the ggml software stack and starts multiple HTTP REST services as shown in the following table:

| Port | Base URL              | Model          | Typical use case                                   |
|------|-----------------------|----------------|----------------------------------------------------|
| 8021 | http://localhost:8021 | Gemma          | Generate text embeddings                           |
| 8022 | http://localhost:8022 | Qwen           | Fill-in-the-middle (FIM) text generation           |
| 8023 | http://localhost:8023 | GPT-OSS        | General-purpose LLM completions, chat and tool use |
| 8024 | http://localhost:8024 | Gemma (Vision) | Vision tasks: image-to-text, multimodal inference  |
| 8025 | http://localhost:8025 | Whisper        | Speech-to-text transcription                       |
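As an example of calling one of these services directly, the embeddings service on port 8021 can be queried with plain curl. The endpoint path and payload shape below assume the OpenAI-compatible `/v1/embeddings` route that llama-server exposes; verify them against your server version:

```shell
# Request a text embedding from the Gemma service on port 8021.
PAYLOAD='{"input": "The DGX Spark runs ggml services locally."}'

# || true lets the example degrade gracefully if the service is not running yet
RESPONSE=$(curl -s http://localhost:8021/v1/embeddings \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || true)
echo "$RESPONSE"
```

On success, the response carries the embedding as an array of floats that you can feed into a vector store.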

The first time you run the command, it can take a few minutes to download the model weights. If everything goes well, you should see the following output:


At this point, the machine is fully configured and ready to use. An internet connection is no longer necessary.
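One quick way to confirm the services are up is to poll each port's `/health` route. llama-server exposes `/health`; whether the Whisper service answers on the same route is an assumption, so check its logs if it reports 000:

```shell
# Poll each service and print the HTTP status.
# 200 means the service is up; 000 means no connection.
for port in 8021 8022 8023 8024 8025; do
  status=$(curl -s -o /dev/null -w '%{http_code}' "http://localhost:$port/health" || true)
  echo "port $port: HTTP $status"
done
```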

Here's sample output of nvidia-smi while the ggml services are running:


Here is a small fraction of the AI use cases that are possible with this configuration.

Basic chat

Simply point your browser to the chat endpoint http://localhost:8023:

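The same endpoint also serves an OpenAI-compatible chat API, so any OpenAI client library or plain curl works. The payload shape below assumes llama-server's `/v1/chat/completions` route:

```shell
# Send a chat request to the GPT-OSS service on port 8023.
PAYLOAD='{
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is unified memory?"}
  ]
}'

# || true lets the example degrade gracefully if the service is not running
RESPONSE=$(curl -s http://localhost:8023/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || true)
echo "$RESPONSE"
```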

Inline code completions (FIM)

  • Install the llama.vim plugin in your Vim/Neovim editor and configure it to use the FIM endpoint http://localhost:8022:
  • In VSCode, install the llama.vscode extension and configure it in a similar way to use the FIM endpoint:
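Under the hood, both plugins talk to the server's in-fill route. You can exercise it directly with curl; the `/infill` path and field names below follow llama.cpp's in-fill API, but treat them as assumptions to verify against your server version:

```shell
# Ask the Qwen FIM service on port 8022 to fill in the middle of a snippet.
PAYLOAD='{
  "input_prefix": "def fib(n):\n    ",
  "input_suffix": "\n    return a",
  "n_predict": 32
}'

# || true lets the example degrade gracefully if the service is not running
RESPONSE=$(curl -s http://localhost:8022/infill \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || true)
echo "$RESPONSE"
```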

Coding agent

In VSCode, configure the llama.vscode extension to use the endpoints for completions, chat, embeddings and tools:


Document and image processing

Submit PDFs and image documents in the WebUI to analyze them with a multimodal LLM. For visuals, use the vision endpoint http://localhost:8024:

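Programmatically, you can send an image to the vision endpoint as a base64 data URI inside an OpenAI-style multimodal message. The filename photo.jpg is a placeholder, and the message shape is an assumption based on llama-server's multimodal chat API:

```shell
# Describe a local image using the Gemma (Vision) service on port 8024.
# photo.jpg is a placeholder - substitute your own image file.
IMG=$(base64 -w0 photo.jpg 2>/dev/null || true)

PAYLOAD=$(cat <<EOF
{
  "messages": [{
    "role": "user",
    "content": [
      {"type": "text", "text": "Describe this image."},
      {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,$IMG"}}
    ]
  }]
}
EOF
)

# || true lets the example degrade gracefully if the service is not running
RESPONSE=$(curl -s http://localhost:8024/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || true)
echo "$RESPONSE"
```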

Audio transcription

Use the speech-to-text endpoint at http://localhost:8025 to quickly transcribe audio files:

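From the command line, whisper.cpp's server accepts multipart uploads. The `/inference` route and form fields below follow whisper.cpp's server example; recording.wav is a placeholder filename:

```shell
# Transcribe a local audio file with the Whisper service on port 8025.
# recording.wav is a placeholder - substitute your own audio file.
# || true lets the example degrade gracefully if the service is not running
RESPONSE=$(curl -s http://localhost:8025/inference \
  -H "Content-Type: multipart/form-data" \
  -F file="@recording.wav" \
  -F response_format="json" || true)
echo "$RESPONSE"
```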

For performance numbers, see Performance of llama.cpp on NVIDIA DGX Spark.

The new NVIDIA DGX Spark is a great choice for serving the latest AI models locally and privately. With 128 GB of unified system memory, it has the capacity to host multiple AI services simultaneously. And the ggml software stack is the best way to do that.
