NATS 2.12 Atomic Batch Publishing


NATS 2.12 introduces a powerful new feature called Atomic Batch Publishing that brings all-or-nothing multi-message operations to JetStream. When you need to ensure consistency across multiple related messages—like updating several fields in a user profile or recording a multi-step transaction—atomic batches guarantee that either all messages succeed together or none are persisted at all.

In this post, we’ll explore how atomic batch publishing works, show you how to use it with Go and the orbit.go library, and explain the communication patterns, limitations, and best practices.

What is Atomic Batch Publishing?

Atomic batches provide all-or-nothing semantics: either all messages in the batch are successfully persisted, or none are. This is perfect for scenarios where you need consistency across multiple related messages.

For instance, when recording a transaction that spans several steps—such as order creation, inventory reservation, and payment processing—atomic batch publishing guarantees that all messages in the batch are either committed together or not at all. This prevents scenarios where only some updates persist, eliminating the risk of partial or inconsistent state. Please note that atomic batch publishing works only within a single stream, and cannot be used for messages spanning multiple streams.

The key characteristics are:

  • Maximum 1,000 messages per batch
  • 10-second timeout for batch completion
  • Either all messages persist or the entire batch is abandoned
  • Batches are scoped to a single stream (cannot span multiple streams)
  • Perfect for maintaining data consistency
  • Supports optimistic locking with sequence expectations

Enabling Atomic Batch Publishing

Before you can use atomic batch publishing, you need to enable it on your JetStream streams.

Using the NATS CLI

```shell
# Create a new stream with atomic batch publishing enabled
nats stream add ORDERS \
  --subjects "orders.>" \
  --defaults \
  --allow-batch
```

You can verify atomic batch publishing is enabled by inspecting the stream:

```shell
nats stream info ORDERS

# Look for this field in the output:
# Allow Atomic Publish: true
```

To enable atomic batch publishing on an existing stream, use the update command:

```shell
# Update existing stream to enable atomic batch publishing
nats stream update ORDERS --allow-batch
```

Programmatic Configuration

Of course, you can also configure this when creating a stream programmatically:

```go
cfg := jetstream.StreamConfig{
	Name:               "ORDERS",
	Subjects:           []string{"orders.>"},
	AllowAtomicPublish: true, // Enables atomic batch publishing
}

js.CreateStream(context.Background(), cfg)
```

Using Atomic Batch Publishing

Let’s look at how to use atomic batch publishing with the orbit.go library. We’ll create a practical example that records multiple related events atomically.

Example: Atomic Transaction Recording

Imagine you’re recording a multi-step transaction where each step must be persisted together. You want to ensure all events succeed as a unit:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
	"github.com/nats-io/nats.go/jetstream"
	"github.com/synadia-io/orbit.go/jetstreamext"
)

func main() {
	// Connect to demo NATS server
	nc, err := nats.Connect("nats://demo.nats.io")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Create JetStream context
	js, err := jetstream.New(nc)
	if err != nil {
		log.Fatal(err)
	}

	// Create stream with atomic batch publishing enabled
	streamName := fmt.Sprintf("TRANSACTIONS_%d", time.Now().Unix())
	_, err = js.CreateStream(context.Background(), jetstream.StreamConfig{
		Name:               streamName,
		Subjects:           []string{"transactions.>"},
		AllowAtomicPublish: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Create batch publisher
	batch, err := jetstreamext.NewBatchPublisher(js)
	if err != nil {
		log.Fatal(err)
	}

	// Add multiple related transaction events to the batch
	txnID := "txn-12345"
	err = batch.AddMsg(&nats.Msg{
		Subject: fmt.Sprintf("transactions.%s.order-created", txnID),
		Data:    []byte(`{"order_id":"ORD-789","amount":99.99}`),
	})
	if err != nil {
		log.Fatal(err)
	}

	err = batch.AddMsg(&nats.Msg{
		Subject: fmt.Sprintf("transactions.%s.inventory-updated", txnID),
		Data:    []byte(`{"sku":"WIDGET-001","quantity":-1}`),
	})
	if err != nil {
		log.Fatal(err)
	}

	err = batch.AddMsg(&nats.Msg{
		Subject: fmt.Sprintf("transactions.%s.payment-processed", txnID),
		Data:    []byte(`{"payment_id":"PAY-456","status":"success"}`),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Commit the batch with a final message
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	ack, err := batch.Commit(ctx, fmt.Sprintf("transactions.%s.completed", txnID), []byte(`{"timestamp":"2025-11-05T10:00:00Z"}`))
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("✓ Transaction batch committed successfully!\n")
	fmt.Printf("  Stream: %s\n", ack.Stream)
	fmt.Printf("  Batch ID: %s\n", ack.BatchID)
	fmt.Printf("  Batch Size: %d messages\n", ack.BatchSize)
	fmt.Printf("  Final Sequence: %d\n", ack.Sequence)
}
```

Understanding the Code

  1. Stream Configuration: We enable AllowAtomicPublish on the stream
  2. Batch Publisher: NewBatchPublisher() creates a batch context with a unique batch ID
  3. Adding Messages: Each AddMsg() call publishes the message immediately to the server with batch headers, but the server stages them without committing
  4. Commit: The Commit() call sends the final message with the commit header, triggering atomic storage of all staged messages
  5. All-or-Nothing: If any message fails validation (e.g., stream doesn’t exist), the entire batch is abandoned by the server

Using Expectations for Optimistic Locking

You can add expectations to ensure consistency and prevent race conditions:

```go
// Only commit if the transaction stream's last sequence is 100
batch.AddMsg(&nats.Msg{
	Subject: "transactions.txn-456.order-created",
	Data:    []byte(`{"order_id":"ORD-999"}`),
}, jetstreamext.WithBatchExpectLastSequence(100))

// Or ensure a specific subject hasn't been updated since sequence 42
batch.AddMsg(&nats.Msg{
	Subject: "transactions.txn-456.payment-processed",
	Data:    []byte(`{"payment_id":"PAY-789"}`),
}, jetstreamext.WithBatchExpectLastSequencePerSubject(42))
```

Flow Control for Large Batches

By default, the batch publisher waits for an acknowledgment on the first message (AckFirst: true) to ensure the server accepts the batch, but doesn’t wait for intermediate acks during batch building. For large batches with many messages, you can configure additional flow control to receive acknowledgments during the batch process:

```go
// Create a batch publisher with flow control
batch, err := jetstreamext.NewBatchPublisher(js, jetstreamext.WithBatchFlowControl(
	jetstreamext.BatchFlowControl{
		AckFirst:   true,            // Wait for ack on the first message (default)
		AckEvery:   100,             // Wait for ack every 100 messages
		AckTimeout: 5 * time.Second, // Timeout for waiting for acks
	},
))
if err != nil {
	log.Fatal(err)
}

// Add many messages to the batch
for i := 0; i < 500; i++ {
	err := batch.AddMsg(&nats.Msg{
		Subject: fmt.Sprintf("events.%d", i),
		Data:    []byte(fmt.Sprintf("Event %d", i)),
	})
	if err != nil {
		log.Fatal(err)
	}
}

// Commit the batch
ack, err := batch.Commit(ctx, "events.final", []byte("Batch complete"))
```

Flow Control Options:

  • AckFirst (default: true): Waits for acknowledgment on the first message in the batch, ensuring the server is accepting the batch before continuing. This is enabled by default.
  • AckEvery (default: 0): Waits for an acknowledgment every N messages. By default this is 0 (disabled), meaning no intermediate acks are requested during batch building. Set to a positive number (e.g., 100) to get feedback every N messages.
  • AckTimeout: Timeout for waiting for acknowledgments when flow control is enabled. Defaults to the timeout from your JetStream context.

When to Configure Flow Control: For large batches (e.g., hundreds of messages), configure AckEvery to detect issues early rather than waiting until the commit. It provides feedback during the batch process without sacrificing atomicity — if any message fails, the entire batch is still abandoned.

Understanding the Communication Pattern

Batch publishing uses special NATS headers to coordinate between client and server. Understanding these headers helps you troubleshoot and monitor batches.

When you send messages in an atomic batch, the orbit.go library automatically adds these headers:

  • Nats-Batch-Id: <uuid>: Unique identifier for the batch (max 64 characters)
  • Nats-Batch-Sequence: <n>: Incrementing sequence number within the batch (starts at 1)
  • Nats-Batch-Commit: 1: Marks the final message AND includes it in the batch
  • Nats-Batch-Commit: eob: Marks the final message but does NOT include it (end-of-batch marker only)

CLI Example: Creating a Batch Manually

You can also create atomic batches using the NATS CLI by manually adding the required headers. This is useful for testing and understanding how batches work:

```shell
# Generate a unique batch ID
BATCH_ID="txn-$(date +%s)"

# Create a stream with atomic batch publishing enabled
nats stream add ORDERS --subjects "orders.>" --allow-batch --defaults

# Publish messages with batch headers

# Message 1: Start the batch
nats pub orders.step1 "Order created" \
  -J \
  -H "Nats-Batch-Id:$BATCH_ID" \
  -H "Nats-Batch-Sequence:1"

# Message 2: Continue the batch
nats pub orders.step2 "Inventory reserved" \
  -J \
  -H "Nats-Batch-Id:$BATCH_ID" \
  -H "Nats-Batch-Sequence:2"

# Check stream - no messages yet! The batch is still staging
nats stream info ORDERS
# Output shows: Messages: 0

# Message 3: Commit the batch (final message)
nats pub orders.step3 "Payment processed" \
  -J \
  -H "Nats-Batch-Id:$BATCH_ID" \
  -H "Nats-Batch-Sequence:3" \
  -H "Nats-Batch-Commit:1"

# Now check again - all 3 messages appear
nats stream info ORDERS
# Output shows: Messages: 3

# View the messages with their batch headers
nats stream get ORDERS 1  # First message
nats stream get ORDERS 3  # Final message with commit header
```

After publishing messages 1 and 2, the stream shows zero messages. The batch is being staged on the server but not yet committed. Only when message 3 arrives with the Nats-Batch-Commit:1 header do all three messages appear atomically in the stream. This demonstrates the all-or-nothing semantics in action.

Note: The -J flag tells the NATS CLI to use JetStream publishing. Intermediate messages (sequence 1 and 2) may show JSON parsing errors in the CLI output because they receive zero-byte acknowledgments; these errors can be safely ignored.

Server Response Pattern

The server responds differently to intermediate vs. final messages:

  • Intermediate messages: Receive zero-byte acknowledgments (lightweight responses)
  • Final message: Receives a full pub ack with batch metadata including the batch ID and total size

Batch Lifecycle

  1. Active: Server receives first message with batch ID, creates batch context
  2. Accumulating: Subsequent messages are validated and added to the batch
  3. Committed: Final message triggers atomic storage of all messages
  4. Abandoned: Batch is discarded if:
    • 10 seconds pass without receiving the next sequence number
    • A message fails validation (wrong stream, sequence mismatch)
    • The batch exceeds 1,000 messages

Performance Considerations

The NATS 2.12 release focuses heavily on correctness and atomicity rather than raw performance. As stated in the release notes:

> This feature currently focuses heavily on correctness and atomicity, ensuring there can be no partially committed batches. Later iterations of this feature will put more focus on the performance aspect that batching can enable, so stay tuned!

The foundation is solid — let’s see what the future brings in terms of performance optimization.

Limitations and Gotchas

Understanding the limitations helps you design robust applications:

Message and Size Limits

  • Atomic batches: Maximum 1,000 messages per batch
  • Batch ID length: 64 characters maximum
  • Timeout: 10 seconds of inactivity causes abandonment (if no new message in the batch is received within 10 seconds, the batch is discarded)
  • Stream scope: Batches cannot span multiple streams—all messages in a batch must target the same stream

Concurrency Limits

  • Per stream: 50 concurrent batches in flight at any time
  • Per server: 1,000 concurrent batches in flight at any time

If you exceed the per-stream or per-server concurrency limits, new batch operations will be rejected until some in-flight batches complete.

Multi-Language Support

The Orbit libraries from Synadia provide extensions and utilities for NATS across multiple programming languages. Besides Go, atomic batch publishing is currently supported in orbit.java for Java and in the jetstream-extra crate of orbit.rs for Rust.

If you’re working with other languages like JavaScript/TypeScript or .NET, you can still use atomic batch publishing by manually constructing the batch headers (Nats-Batch-Id, Nats-Batch-Sequence, Nats-Batch-Commit) as demonstrated in the CLI examples above.

Conclusion

NATS 2.12’s atomic batch publishing brings powerful new capabilities to JetStream. While the current release focuses on correctness and atomicity, future NATS releases will optimize batch publishing for performance. The 2.12 release establishes a solid foundation with reliable, predictable behavior.
