Native Node.js bindings for FFmpeg with full TypeScript support. Provides direct access to FFmpeg's C APIs through N-API, and includes both raw FFmpeg bindings for full control and higher-level abstractions. It offers automatic resource management via the Disposable pattern, hardware acceleration support, and prebuilt binaries for Windows, Linux, and macOS.
Direct access to FFmpeg's C APIs with minimal abstractions. Perfect when you need full control over FFmpeg functionality.
import { AVERROR_EOF, AVMEDIA_TYPE_VIDEO } from 'node-av/constants';
import { Codec, CodecContext, FFmpegError, FormatContext, Frame, Packet, Rational } from 'node-av/lib';

// Open input file
await using ifmtCtx = new FormatContext();
let ret = await ifmtCtx.openInput('input.mp4');
FFmpegError.throwIfError(ret, 'Could not open input file');

ret = await ifmtCtx.findStreamInfo();
FFmpegError.throwIfError(ret, 'Could not find stream info');

// Find video stream
const videoStreamIndex = ifmtCtx.findBestStream(AVMEDIA_TYPE_VIDEO);
const videoStream = ifmtCtx.streams?.[videoStreamIndex];
if (!videoStream) {
  throw new Error('No video stream found');
}

// Create codec
const codec = Codec.findDecoder(videoStream.codecpar.codecId);
if (!codec) {
  throw new Error('Codec not found');
}

// Allocate codec context for the decoder
using decoderCtx = new CodecContext();
decoderCtx.allocContext3(codec);
ret = decoderCtx.parametersToContext(videoStream.codecpar);
FFmpegError.throwIfError(ret, 'Could not copy codec parameters to decoder context');

// Inform the decoder about the timebase for packet timestamps and the frame rate
decoderCtx.pktTimebase = videoStream.timeBase;
decoderCtx.framerate = videoStream.rFrameRate || videoStream.avgFrameRate || new Rational(25, 1);

// Open decoder context
ret = await decoderCtx.open2(codec, null);
FFmpegError.throwIfError(ret, 'Could not open codec');

// Process packets
using packet = new Packet();
packet.alloc();

using frame = new Frame();
frame.alloc();

while (true) {
  ret = await ifmtCtx.readFrame(packet);
  if (ret < 0) {
    break;
  }

  if (packet.streamIndex === videoStreamIndex) {
    // Send packet to decoder
    ret = await decoderCtx.sendPacket(packet);
    if (ret < 0 && ret !== AVERROR_EOF) {
      FFmpegError.throwIfError(ret, 'Error sending packet to decoder');
    }

    // Receive decoded frames
    while (true) {
      const recvRet = await decoderCtx.receiveFrame(frame);
      if (recvRet === AVERROR_EOF || recvRet < 0) {
        break;
      }
      console.log(`Decoded frame ${frame.pts}, size: ${frame.width}x${frame.height}`);
      // Process frame data...
    }
  }

  packet.unref();
}
Higher-level abstractions for common tasks like decoding, encoding, filtering, and transcoding. Easier to use while still providing access to low-level details when needed.
import { Decoder, Encoder, MediaInput, MediaOutput } from 'node-av/api';
import { FF_ENCODER_LIBX264 } from 'node-av/constants';

// Open media
await using input = await MediaInput.open('input.mp4');
await using output = await MediaOutput.open('output.mp4');

// Get video stream
const videoStream = input.video()!;

// Create decoder
using decoder = await Decoder.create(videoStream);

// Create encoder
using encoder = await Encoder.create(FF_ENCODER_LIBX264, {
  timeBase: videoStream.timeBase,
  frameRate: videoStream.avgFrameRate,
});

// Add stream to output
const outputIndex = output.addStream(encoder);

// Process packets
for await (using packet of input.packets(videoStream.index)) {
  using frame = await decoder.decode(packet);
  if (frame) {
    using encoded = await encoder.encode(frame);
    if (encoded) {
      await output.writePacket(encoded, outputIndex);
    }
  }
}

// Flush decoder
for await (using frame of decoder.flushFrames()) {
  using encoded = await encoder.encode(frame);
  if (encoded) {
    await output.writePacket(encoded, outputIndex);
  }
}

// Flush encoder
for await (using packet of encoder.flushPackets()) {
  await output.writePacket(packet, outputIndex);
}

// Done
A simple way to chain together multiple processing steps like decoding, filtering, encoding, and muxing.
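For example, here is a minimal sketch of such a chain, reusing the input, decoder, encoder, output, and outputIndex from the example above. The frames() and encodedPackets() generator helpers are illustrative glue code, not part of the library, and a filtering stage could be inserted between them in the same way.

import type { Decoder, Encoder, MediaInput } from 'node-av/api';
import type { Frame } from 'node-av/lib';

// Illustrative helper: demux one stream and decode it into an async stream of frames
async function* frames(input: MediaInput, decoder: Decoder, streamIndex: number) {
  for await (using packet of input.packets(streamIndex)) {
    const frame = await decoder.decode(packet);
    if (frame) {
      yield frame; // ownership of the frame passes to the consumer
    }
  }
  yield* decoder.flushFrames();
}

// Illustrative helper: encode an async stream of frames into packets
async function* encodedPackets(encoder: Encoder, source: AsyncIterable<Frame>) {
  for await (using frame of source) {
    const packet = await encoder.encode(frame);
    if (packet) {
      yield packet;
    }
  }
  yield* encoder.flushPackets();
}

// Chain the stages: demux/decode -> encode -> mux
for await (using packet of encodedPackets(encoder, frames(input, decoder, videoStream.index))) {
  await output.writePacket(packet, outputIndex);
}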
The library supports all hardware acceleration methods available in FFmpeg. The specific hardware types available depend on your FFmpeg build and system configuration.
import { Decoder, Encoder, HardwareContext } from 'node-av/api';
import { FF_ENCODER_LIBX264 } from 'node-av/constants';

// Automatically detect best available hardware
const hw = HardwareContext.auto();
console.log(`Using hardware: ${hw?.deviceTypeName ?? 'none'}`);

// Use with decoder (videoStream obtained from a MediaInput as shown above)
const decoder = await Decoder.create(videoStream, { hardware: hw });

// Use with encoder (use hardware-specific codec)
const encoderCodec = hw?.getEncoderCodec('h264') ?? FF_ENCODER_LIBX264;
const encoder = await Encoder.create(encoderCodec, {
  timeBase: videoStream.timeBase,
  frameRate: videoStream.avgFrameRate,
});
import { HardwareContext } from 'node-av/api';
import { AV_HWDEVICE_TYPE_CUDA, AV_HWDEVICE_TYPE_VAAPI } from 'node-av/constants';

// Use a specific hardware type
const cuda = HardwareContext.create(AV_HWDEVICE_TYPE_CUDA);
const vaapi = HardwareContext.create(AV_HWDEVICE_TYPE_VAAPI, '/dev/dri/renderD128');
The library provides multiple entry points for optimal tree shaking:
// High-level API only - recommended for most use cases
import { MediaInput, MediaOutput, Decoder, Encoder } from 'node-av/api';

// Low-level API only - direct FFmpeg bindings
import { FormatContext, CodecContext, Frame, Packet } from 'node-av/lib';

// Constants only - when you just need FFmpeg constants
import { AV_PIX_FMT_YUV420P, AV_CODEC_ID_H264 } from 'node-av/constants';

// Channel layouts only - for audio channel configurations
import { AV_CHANNEL_LAYOUT_STEREO, AV_CHANNEL_LAYOUT_5POINT1 } from 'node-av/layouts';

// Default export - includes everything
import * as ffmpeg from 'node-av';
Raw, headerless video and audio data can also be opened by describing its format explicitly:

import { MediaInput } from 'node-av/api';
import { AV_PIX_FMT_YUV420P, AV_SAMPLE_FMT_S16 } from 'node-av/constants';

// Raw video input
const rawVideo = await MediaInput.open({
  type: 'video',
  input: 'input.yuv',
  width: 1280,
  height: 720,
  pixelFormat: AV_PIX_FMT_YUV420P,
  frameRate: { num: 30, den: 1 },
});

// Raw audio input
const rawAudio = await MediaInput.open({
  type: 'audio',
  input: 'input.pcm',
  sampleRate: 48000,
  channels: 2,
  sampleFormat: AV_SAMPLE_FMT_S16,
}, {
  format: 's16le',
});
The library supports automatic resource cleanup using the Disposable pattern:
// Automatic cleanup with 'using'
{
  await using media = await MediaInput.open('input.mp4');
  using decoder = await Decoder.create(media.video());

  // Resources automatically cleaned up at end of scope
}

// Manual cleanup
const media = await MediaInput.open('input.mp4');
try {
  // Process media
} finally {
  await media.close();
}
Need direct access to the FFmpeg binary? The library also provides the FFmpeg executable itself, automatically downloading and managing platform-specific builds.
import { ffmpegPath, isFfmpegAvailable } from 'node-av/ffmpeg';
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

// Check if the FFmpeg binary is available
if (isFfmpegAvailable()) {
  console.log('FFmpeg binary found at:', ffmpegPath());

  // Use the FFmpeg binary directly
  const { stdout } = await execFileAsync(ffmpegPath(), ['-version']);
  console.log(stdout);
} else {
  console.log('FFmpeg binary not available - install may have failed');
}

// Direct usage example
async function convertVideo(input: string, output: string) {
  const args = ['-i', input, '-c:v', 'libx264', '-crf', '23', '-c:a', 'aac', output];
  await execFileAsync(ffmpegPath(), args);
}
The FFmpeg binary is automatically downloaded during installation from GitHub releases and matches the same build used by the native bindings.
NodeAV executes all media operations directly through FFmpeg's native C libraries. The Node.js bindings add minimal overhead - mostly just the JavaScript-to-C boundary crossings. During typical operations like transcoding or filtering, most processing time is spent in FFmpeg's optimized C code.
Every async method in NodeAV has a corresponding synchronous variant with the Sync suffix:
Async methods (default) - Non-blocking operations using N-API's AsyncWorker. Methods like decode(), encode(), read(), packets() return Promises or AsyncGenerators.
Sync methods - Direct FFmpeg calls without AsyncWorker overhead. Same methods with Sync suffix: decodeSync(), encodeSync(), readSync(), packetsSync().
The key difference: Async methods don't block the Node.js event loop, allowing other operations to run concurrently. Sync methods block until completion but avoid AsyncWorker overhead, making them faster for sequential processing.
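As an illustration (a sketch assuming the Sync variants take the same arguments as their async counterparts), the decode/encode step of the high-level example could be written either way, reusing decoder, encoder, packet, output, and outputIndex from above:

// Async variant: runs on an AsyncWorker and does not block the event loop
using frame = await decoder.decode(packet);
if (frame) {
  using encoded = await encoder.encode(frame);
  if (encoded) {
    await output.writePacket(encoded, outputIndex);
  }
}

// Sync variant: blocks until done but skips the AsyncWorker round-trip,
// which can be faster for tight, sequential processing loops
using frameSync = decoder.decodeSync(packet);
if (frameSync) {
  using encodedSync = encoder.encodeSync(frameSync);
  if (encodedSync) {
    output.writePacketSync(encodedSync, outputIndex);
  }
}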
Memory Safety Considerations
NodeAV provides direct bindings to FFmpeg's C APIs, which work with raw memory pointers. The high-level API adds safety abstractions and automatic resource management, but incorrect usage can still cause crashes. Common issues include mismatched video dimensions, incompatible pixel formats, or improper frame buffer handling. The library validates parameters where possible, but can't guarantee complete memory safety without limiting functionality. When using the low-level API, pay attention to parameter consistency, resource cleanup, and format compatibility. Following the documented patterns helps avoid memory-related issues.
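As one hedged illustration of following those patterns, reusing input, decoder, and videoStream from the high-level example: scope resources with 'using' and validate frame properties before handing buffers to a consumer that expects fixed dimensions. The expectedWidth/expectedHeight values below are illustrative, not library API.

const expectedWidth = 1920;   // illustrative: whatever the downstream consumer was configured for
const expectedHeight = 1080;

for await (using packet of input.packets(videoStream.index)) {
  // 'using' guarantees the packet and frame are released even if an error is thrown
  using frame = await decoder.decode(packet);
  if (!frame) {
    continue;
  }
  // Mismatched dimensions are a common source of crashes when raw buffers are
  // passed around, so check before processing rather than relying on FFmpeg
  if (frame.width !== expectedWidth || frame.height !== expectedHeight) {
    throw new Error(`Unexpected frame size ${frame.width}x${frame.height}`);
  }
  // ... process frame data
}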
For hardware-accelerated video processing with Intel GPUs on Linux, you need to install specific system packages. The FFmpeg binaries included with this library are built with libva 2.20, which requires Ubuntu 24.04+ or Debian 13+ as minimum OS versions.
Add the Kisak-Mesa PPA (recommended for newer Mesa versions with better hardware support):
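A typical setup looks like the following; the exact package list depends on your distribution and GPU, so treat this as a sketch:

# Add the Kisak-Mesa PPA and upgrade Mesa
sudo add-apt-repository ppa:kisak/kisak-mesa
sudo apt update && sudo apt upgrade -y

# Intel VAAPI driver plus the verification tools used below
sudo apt install -y intel-media-va-driver-non-free vainfo vulkan-tools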
After installation, verify hardware acceleration is working:
# Check VAAPI support
vainfo
# Check Vulkan support
vulkaninfo
# Should show available profiles and entrypoints for your Intel GPU
Note: If you're running an older Ubuntu version (< 24.04) or Debian version (< 13), you'll need to upgrade your OS to use hardware acceleration with this library.
This project is licensed under the MIT License. See the LICENSE file for details.
Important: FFmpeg itself is licensed under LGPL/GPL. Please ensure compliance with FFmpeg's license terms when using this library. The FFmpeg libraries themselves retain their original licenses, and this wrapper library does not change those terms. See FFmpeg License for details.
Contributions are welcome! Please read CONTRIBUTING.md for development setup, code standards, and contribution guidelines before submitting pull requests.
For issues and questions, please use the GitHub issue tracker.