Heap Profiling Support in platformatic/flame and Watt


We're excited to announce that @platformatic/flame and Watt now support heap profiling alongside CPU profiling! This powerful addition gives developers comprehensive insights into both how their applications spend CPU time and how they use memory, making it easier than ever to identify performance bottlenecks and memory leaks.

Why Heap Profiling Matters

While CPU profiling helps you understand where your application spends its execution time, heap profiling answers equally critical questions about memory usage:

  • Where is memory being allocated? Identify which functions and code paths are creating the most objects

  • Are there memory leaks? Discover objects that aren't being garbage collected as expected

  • What's causing high memory usage? Find the root causes of excessive memory consumption

  • How can I optimize memory footprint? Pinpoint opportunities to reduce allocations and improve efficiency

Memory issues can be just as detrimental to application performance as CPU bottlenecks. High memory usage can lead to frequent garbage collection pauses, increased infrastructure costs, and in severe cases, out-of-memory crashes. With heap profiling now built into the Platformatic ecosystem, you can tackle these issues head-on.

What's New in @platformatic/flame

The @platformatic/flame module is a comprehensive profiling and flamegraph visualization tool built on top of @platformatic/react-pprof. It now provides dual profiling capabilities, capturing both CPU and heap profiles concurrently for complete performance insights.

Key Features

  • Concurrent Profiling: Capture both CPU and heap profiles simultaneously

  • Auto-Start Mode: Profiling begins immediately when using flame run (default behavior)

  • Automatic Flamegraph Generation: Interactive HTML flamegraphs are created automatically for both profile types on exit

  • Zero Configuration: Works out of the box with sensible defaults

  • Standard Format: Outputs pprof-compatible profile files that work with the entire pprof ecosystem

Installation

Install flame globally to get started:

npm install -g @platformatic/flame

Using Heap Profiling with Flame

The flame CLI makes heap profiling incredibly simple.

By default, flame run starts both CPU and heap profiling immediately:

# Start your application with profiling enabled
flame run server.js

# Your application runs with both CPU and heap profiling active
# Exercise your application (make requests, trigger operations, etc.)

# When you stop the app (Ctrl-C or normal exit), you'll see:
# 🔥 CPU profile written to: cpu-profile-2025-10-09T12-00-00-000Z.pb
# 🔥 Heap profile written to: heap-profile-2025-10-09T12-00-00-000Z.pb
# 🔥 Generating CPU flamegraph...
# 🔥 CPU flamegraph generated: cpu-profile-2025-10-09T12-00-00-000Z.html
# 🔥 Generating heap flamegraph...
# 🔥 Heap flamegraph generated: heap-profile-2025-10-09T12-00-00-000Z.html
# 🔥 Open file:///path/to/cpu-profile-2025-10-09T12-00-00-000Z.html in your browser
# 🔥 Open file:///path/to/heap-profile-2025-10-09T12-00-00-000Z.html in your browser

Both profiles share the same timestamp, making it easy to correlate CPU and memory usage patterns during the same time window.

Manual Mode with Signal Control

For more granular control over when profiling occurs, use manual mode:

# Start in manual mode (profiling waits for signal)
flame run --manual server.js

# In another terminal, start profiling
kill -USR2 <PID>
# Or use the built-in toggle command
flame toggle

# Exercise your application...

# Stop profiling (saves profiles)
kill -USR2 <PID>
# or
flame toggle

# Generate flamegraphs from the saved profiles
flame generate heap-profile-2025-10-09T12-00-00-000Z.pb
flame generate cpu-profile-2025-10-09T12-00-00-000Z.pb

Practical Example: Profiling a Fastify App

Let's say you have a Fastify application that you suspect has memory issues:

const fastify = require('fastify')()

fastify.get('/', async (request, reply) => {
  return { message: 'Hello World' }
})

fastify.get('/users', async (request, reply) => {
  const users = Array.from({ length: 10000 }, (_, i) => ({
    id: i,
    name: `User ${i}`,
    email: `user${i}@example.com`
  }))
  return users
})

fastify.listen({ port: 3000 })

Profile it with flame:

# Start profiling (both CPU and heap)
flame run server.js

# In another terminal, generate load
curl http://localhost:3000/users
curl http://localhost:3000/users
curl http://localhost:3000/users

# Stop the server (Ctrl-C) to automatically save profiles and generate flamegraphs

Now you have interactive HTML flamegraphs for both CPU and heap usage. Open them in your browser to explore:

  • CPU flamegraph: Shows where execution time is spent (route handling, JSON serialization, response generation, etc.)

  • Heap flamegraph: Shows where memory is allocated (array creation, object allocation, etc.)

Understanding Heap Flamegraphs

When viewing a heap flamegraph:

  • Width represents memory usage: Wider sections indicate functions that allocated more memory

  • Height shows call depth: The call stack hierarchy from top-level functions down to specific allocations

  • Interactive navigation: Click sections to zoom in and examine specific code paths

  • Performance hotspots: Quickly identify memory-intensive operations

Look for:

  • Wide sections: Functions allocating significant memory

  • Repeated patterns: May indicate unnecessary allocations in loops (see the sketch after this list)

  • Large allocations: Functions creating large buffers or data structures

  • Growing patterns: Potential memory leaks if usage continuously grows
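As a concrete illustration of the "repeated patterns" case above, here is a hypothetical Fastify handler (a sketch written for this post, not taken from any real codebase) that rebuilds the same lookup table on every request. In a heap flamegraph the Map and array allocations appear as a wide frame under the route handler; hoisting the allocation to startup makes that frame shrink:

const fastify = require('fastify')()

// Before: the lookup table is rebuilt on every request, so its allocation
// shows up as a wide, repeated frame under this handler in the heap flamegraph.
fastify.get('/labels/slow', async () => {
  const labels = new Map(Array.from({ length: 10000 }, (_, i) => [i, `label-${i}`]))
  return { label: labels.get(42) }
})

// After: the table is built once at startup, moving the allocation out of
// the request path; the corresponding frame disappears from the hot handler.
const sharedLabels = new Map(Array.from({ length: 10000 }, (_, i) => [i, `label-${i}`]))
fastify.get('/labels/fast', async () => {
  return { label: sharedLabels.get(42) }
})

fastify.listen({ port: 3000 })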

Heap Profiling in Platformatic Watt

The Platformatic Watt runtime integrates the same heap profiling capabilities, allowing you to profile production applications running in the Watt environment.

Prerequisites

Ensure you have the profiling capture package installed:

# Install wattpm globally
npm install -g wattpm

# Install profiling capture in your application
npm install @platformatic/wattpm-pprof-capture

# Install flame for visualization
npm install -g @platformatic/flame

Starting Heap Profiling in Watt

Watt's pprof command makes it easy to profile applications:

# Start heap profiling for all services
wattpm pprof start --type=heap

# Start heap profiling for a specific service
wattpm pprof start --type=heap api-service

# Start heap profiling for a service in a specific application
wattpm pprof start my-app --type=heap api-service

# Using short option syntax
wattpm pprof start -t heap api-service

When profiling starts, you'll see confirmation:

HEAP profiling started for application api-service

Stopping Heap Profiling and Collecting Data

After running your application under load for 30-60 seconds (or longer for intermittent issues), stop profiling to save the data:

# Stop heap profiling for all services
wattpm pprof stop --type=heap

# Stop heap profiling for a specific service
wattpm pprof stop --type=heap api-service

# Stop heap profiling for a service in a specific application
wattpm pprof stop my-app --type=heap api-service

Profile files are saved with descriptive names:

pprof-heap-api-service-2025-10-09T15-30-45-123Z.pb

Generating Flamegraphs from Watt Profiles

Once you have profile files from Watt, use the flame tool to visualize them:

# Generate interactive heap flamegraph
flame generate pprof-heap-api-service-2025-10-09T15-30-45-123Z.pb

# This creates an HTML file you can open in your browser

Concurrent CPU and Heap Profiling in Watt

One of the most powerful features is the ability to profile both CPU and heap simultaneously:

# Start both CPU and heap profiling
wattpm pprof start --type=cpu api-service
wattpm pprof start --type=heap api-service

# Generate load on your application
# ... exercise your application for 30-60 seconds ...

# Stop both profiles
wattpm pprof stop --type=cpu api-service
wattpm pprof stop --type=heap api-service

# Generate flamegraphs for both
flame generate pprof-cpu-api-service-2025-10-09T15-30-45-123Z.pb
flame generate pprof-heap-api-service-2025-10-09T15-30-45-124Z.pb

This gives you a complete picture: you can see which operations are CPU-intensive and which are memory-intensive during the same time window, helping you make informed optimization decisions.

Real-World Use Cases

Finding Memory Leaks

Memory leaks are notoriously difficult to track down. Heap profiling makes them visible:

# Start heap profiling
flame run --manual server.js

# In another terminal, start profiling
flame toggle

# Exercise the suspected leak-prone code path repeatedly
for i in {1..1000}; do
  curl http://localhost:3000/api/users
done

# Stop profiling
flame toggle

# Examine the heap flamegraph - look for unexpected growth

In the flamegraph, memory leaks often appear as large, persistent allocations that don't get cleaned up. Look for functions that allocate memory in proportion to the number of requests (indicating objects aren't being freed).
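To make that concrete, here is a hypothetical sketch (not code from the article's example app) of the kind of handler that produces this signature: a module-level cache that is written on every request and never evicted, so its allocation frame grows in proportion to traffic:

const fastify = require('fastify')()

// Hypothetical leak: entries are added under a unique key per request and
// never removed, so every request permanently retains ~1,000 objects.
// In the heap flamegraph, the allocation inside this handler stays wide
// and keeps growing with the number of requests served.
const cache = new Map()

fastify.get('/api/users', async (request) => {
  const users = Array.from({ length: 1000 }, (_, i) => ({ id: i, name: `User ${i}` }))
  cache.set(`${request.id}`, users) // unique key per request: nothing is ever reused or freed
  return users
})

fastify.listen({ port: 3000 })

Bounding the cache (an LRU with a maximum size, or time-based eviction) or keying by something that actually repeats lets the garbage collector reclaim old entries, and the frame stops growing between profiles.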

Optimizing Memory-Intensive Operations

If your application handles large datasets, heap profiling helps identify opportunities for optimization:

# Profile a data processing application
flame run data-processor.js

The heap flamegraph will show you:

  • Which data transformations allocate the most memory

  • Whether you're creating unnecessary intermediate copies

  • Where you could use streaming or chunking instead of loading everything into memory
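As a sketch of that last point (the file path, newline-delimited JSON format, and amount field are assumptions made up for the example): reading a large file with readFileSync materializes every record at once, while a readline-based stream holds only the current line in memory:

const fs = require('node:fs')
const readline = require('node:readline')

// Buffered: the whole file plus every parsed record is alive at the same time,
// which shows up as one very wide allocation frame in the heap flamegraph.
function totalBuffered(path) {
  const records = fs.readFileSync(path, 'utf8').split('\n').filter(Boolean).map((line) => JSON.parse(line))
  return records.reduce((sum, rec) => sum + rec.amount, 0)
}

// Streaming: one line is parsed, summed, and released at a time, so peak
// memory stays roughly flat no matter how large the file is.
async function totalStreamed(path) {
  const rl = readline.createInterface({ input: fs.createReadStream(path) })
  let sum = 0
  for await (const line of rl) {
    if (line) sum += JSON.parse(line).amount
  }
  return sum
}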

Production Memory Analysis

With Watt, you can safely profile production applications:

# Start heap profiling during peak load
wattpm pprof start --type=heap api-service

# Let it run for 60 seconds to capture representative traffic
# ...

# Stop and collect the profile
wattpm pprof stop --type=heap api-service

# Download the profile and analyze locally
flame generate pprof-heap-api-service-*.pb

This is invaluable for diagnosing issues that only appear under real-world load patterns.

Best Practices

Profiling Duration

  • Minimum: 10-30 seconds for meaningful data

  • Typical: 30-60 seconds for most analyses

  • Extended: 2-5 minutes for catching intermittent issues

  • Avoid very short profiles: Less than 10 seconds often lack statistical significance

When to Use Heap Profiling

Use heap profiling when:

  • Your application has memory issues or leaks

  • You see increasing memory usage over time

  • You want to reduce memory footprint

  • You're investigating out-of-memory errors

  • You're processing large datasets and want to optimize allocations

Combining CPU and Heap Profiling

The most powerful approach is to use both:

  1. Start with CPU profiling to identify slow operations

  2. Add heap profiling to understand memory usage patterns

  3. Compare the two to see if CPU-intensive operations are also memory-intensive

  4. Optimize accordingly: Some optimizations improve both, others require trade-offs

Performance Impact

Profiling has minimal impact on application performance:

  • CPU profiling overhead: ~1-5% during profiling

  • Heap profiling overhead: ~5-10% during profiling (slightly higher than CPU)

  • Memory overhead: Small amount for storing samples

  • I/O impact: None during profiling, only when saving files

This makes it safe to use in production environments when needed.

Technical Implementation Details

Under the hood, the heap profiling implementation uses:

  • @datadog/pprof: The proven profiling library that supports both CPU and heap profiling

  • pprof-format: Standard protocol buffer format compatible with the entire pprof ecosystem

  • @platformatic/react-pprof: WebGL-based interactive flamegraph visualization

  • Sampling-based approach: Captures a statistical sample of allocations rather than every single allocation, keeping overhead low

Both @platformatic/flame and Watt runtime share the same profiling infrastructure, ensuring consistent behavior and output formats across both tools.
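For the curious, the capture that flame and Watt perform for you looks roughly like the following when driving the profiler directly. This is a sketch that assumes @datadog/pprof keeps the upstream pprof-nodejs heap API (heap.start, heap.profile, and encode); check the package's README before relying on the exact signatures:

const fs = require('node:fs')
const pprof = require('@datadog/pprof') // assumed to mirror the upstream pprof-nodejs API

// Sampling-based capture: record roughly one sample per 512 KiB of allocations,
// keeping call stacks up to 64 frames deep. Sampling is what keeps overhead low.
const intervalBytes = 512 * 1024
const stackDepth = 64
pprof.heap.start(intervalBytes, stackDepth)

// ... run the application under load ...

async function dumpHeapProfile() {
  const profile = pprof.heap.profile()     // snapshot of the sampled allocations
  const buf = await pprof.encode(profile)  // serialize to the standard pprof protobuf format
  fs.writeFileSync('heap-profile.pb', buf)
}

In practice you never write this yourself: flame run and wattpm pprof wrap the same capture and add the file naming and flamegraph generation shown above.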

What's Next

We're excited to see how the community uses heap profiling to optimize their applications. This release represents a significant step forward in making performance analysis accessible and actionable for all Node.js developers.

Future enhancements we're considering:

  • Differential profiling (compare profiles before and after changes)

  • Integration with CI/CD for automated performance regression detection

  • Enhanced visualization options for heap profiles

  • Support for additional profile types

Get Started Today

Try out heap profiling in your applications:

# Install the tools
npm install -g @platformatic/flame
npm install -g wattpm

# Profile a standalone application
flame run server.js

# Profile a Watt application
wattpm pprof start --type=heap
wattpm pprof stop --type=heap

# Generate flamegraphs
flame generate heap-profile-*.pb

We'd love to hear about your experiences and any issues you discover. Report bugs or request features on our GitHub repositories.

Happy profiling!
