GPU-accelerated Python implementation of Stanford OpenIE with comprehensive triplet extraction
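Quickstart, as a minimal sketch: the `triplet_extract` module path is assumed from the package name, and the output shown under "Output:" below is illustrative.

```python
from triplet_extract import extract  # module path assumed from the package name

# Extract (subject, relation, object) triplets from raw text
triplets = extract("The U.S. president Barack Obama was born in Hawaii.")
for triplet in triplets:
    print(triplet)
```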
Output:
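```
('The U.S. president Barack Obama', 'was born in', 'Hawaii')
('Barack Obama', 'was born in', 'Hawaii')
```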
Features:
- Comprehensive extraction using breadth-first search
- Natural formatting with proper contraction spacing
- Quantifiers preserved and normalized (percentages, scientific units)
- LaTeX math preserved for scientific literature
- Optional GPU acceleration for batch processing
 
This is a GPU-accelerated Python port of Stanford OpenIE that extends the original natural-logic pipeline with breadth-first search for comprehensive triplet extraction. The implementation follows the same three-stage pipeline and uses the trained models from the Stanford NLP Group's research.
To our knowledge, this is the first open-source system that GPU-accelerates the natural-logic forward-entailment search itself — via batched reparsing over dependency parses — rather than replacing the natural-logic OpenIE pipeline with a neural model trained on its outputs.
Prior neural OpenIE models typically train on triples produced by classical OpenIE systems, using GPUs for neural inference over those labels. In contrast, this system keeps the original natural-logic semantics and uses the GPU to accelerate the BFS exploration through batch processing, effectively GPU-accelerating the underlying OpenIE algorithm rather than approximating it with a neural model.
This port uses spaCy for dependency parsing instead of Stanford CoreNLP, providing a pure Python alternative that works without Java dependencies. I'm grateful to the Stanford NLP Group for their groundbreaking research and for making their models available.
Note: This implementation supports English text only. The trained models and natural logic rules are language-specific.
This implementation prioritizes preserving rich semantic context in extracted triples. Unlike some ports that simplify subjects and relations, this port retains qualifiers, quantifiers, and contextual information (e.g., "The U.S. president Barack Obama" rather than just "Barack Obama", or "25% of people" rather than just "people"). This makes the output particularly well-suited for knowledge graph construction, GraphRAG applications, and other systems that benefit from semantically rich representations.
Recommended: GPU-accelerated (more comprehensive extraction):
Requires: CUDA-capable GPU, CUDA 12.x, 8GB+ VRAM recommended
Benefit: ~1.9x more triplets with GPU-accelerated BFS (vs. the default Balanced mode)
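Install sketch (package name and `deepsearch` extra as referenced later in this README):

```bash
pip install "triplet-extract[deepsearch]"
```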
Base install (CPU-optimized):
Works on: any machine, serverless, edge devices
Performance: fast CPU-optimized DFS (13.60 texts/s)
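The base package, assuming the same package name:

```bash
pip install triplet-extract
```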
Local development with uv:
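A sketch assuming a clone of the repository with a standard uv project layout; the `deepsearch` extra name matches the install above:

```bash
uv sync                     # install base dependencies into a local virtualenv
uv sync --extra deepsearch  # include the GPU extras (assumed extras name)
```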
The OpenIEExtractor class provides more control over the extraction pipeline:
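A usage sketch; the module path and the `extract` method name are assumptions, and the constructor options are those documented below:

```python
from triplet_extract import OpenIEExtractor  # module path assumed

extractor = OpenIEExtractor(
    deep_search=False,        # DFS modes; set True for GPU-accelerated BFS
    speed_preset="balanced",  # "balanced", "fast", or "ultra"
    min_confidence=0.5,       # filter threshold between 0.0 and 1.0
)
triplets = extractor.extract("The U.S. president Barack Obama was born in Hawaii.")
```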
The extractor uses Balanced mode by default, which is CPU-optimized for production use:
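As a sketch, the defaults are equivalent to:

```python
extractor = OpenIEExtractor()  # Balanced: deep_search=False, speed_preset="balanced"
```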
Performance comparison (100 scientific abstracts):
| Mode | Triplets/text | Total triplets | Speed (texts/s) | Total time | Coverage† | Quality | Best for |
|---|---|---|---|---|---|---|---|
| Deep Search | 16.34 | 1634 | 8.86/s | 11.29s | 100% | 98%+ | Comprehensive extraction (GPU-accelerated) |
| Baseline (DFS) | 7.93 | 793 | 1.96/s | 51.09s | 48.5% | 100% | Reference quality | 
| Balanced (default) | 8.55 | 855 | 13.60/s | 7.35s | 52.3% | 98.2% | Default: CPU-optimized production | 
| Fast | 6.57 | 657 | 17.22/s | 5.81s | 40.2% | 98.4% | High-throughput APIs | 
| Ultra | 5.21 | 521 | 28.22/s | 3.54s | 31.9% | 99.5% | Maximum speed | 
| Stanford OpenIE | 13.45 | 1345 | 15.42/s | 6.48s | 82.3% | ~95% | Original Java | 
†Coverage: percentage of Deep Search triplets found (Deep Search finds the most triplets, so it serves as the 100% baseline)
Note: Stanford OpenIE benchmarks were executed via the `stanford-openie-python` package. Numbers vary slightly between runs.
Benchmark Hardware:
- GPU tests: NVIDIA RTX 5090 (32GB VRAM), CUDA 12.x
- CPU tests: AMD Ryzen 7 9800X3D (8 cores, 16 threads), 48GB RAM
- Dataset: 100 scientific abstracts (LaTeX-free)
 
Best for: workstations, GPU servers, and batch processing pipelines
BFS mode with CUDA acceleration yields ~1.9x more triplets than the default Balanced mode:
| Mode | Configuration | Hardware | Best for |
|---|---|---|---|
| Deep Search (GPU) | `deep_search=True` | GPU (CUDA) | Comprehensive extraction, knowledge graphs |
| Deep Search (CPU fallback) | `deep_search=True` | CPU only | Same quality, slower throughput* |
*Estimated based on BFS algorithm complexity. Actual performance varies by CPU.
GPU Requirements: CUDA 12.x, 8GB+ VRAM recommended
Example:
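A sketch, with the module path and `extract` method name assumed as in the examples above:

```python
from triplet_extract import OpenIEExtractor  # module path assumed

# Deep Search: GPU-accelerated BFS; falls back to CPU if no CUDA GPU is found
extractor = OpenIEExtractor(deep_search=True)
triplets = extractor.extract(
    "Approximately 25% of participants reported improved outcomes."
)
```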
Best for: AWS Lambda, Cloud Run, serverless functions, and edge devices
All DFS modes are optimized for CPU with LRU caching:
| Mode | Configuration | Best for |
|---|---|---|
| Balanced (recommended) | `deep_search=False, speed_preset="balanced"` | Production default |
| Fast | `deep_search=False, speed_preset="fast"` | High-throughput APIs |
| Ultra | `deep_search=False, speed_preset="ultra"` | Maximum speed priority |
| Baseline | `high_quality=True, fast=False, deep_search=False` | Reference/compatibility |
Example:
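A sketch, with the same assumed module path and method name:

```python
from triplet_extract import OpenIEExtractor  # module path assumed

# CPU-only DFS tuned for throughput
extractor = OpenIEExtractor(deep_search=False, speed_preset="fast")
triplets = extractor.extract("The enzyme catalyzes the reaction at low temperatures.")
```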
The pipeline stages and output filtering are controlled by constructor options, as sketched after this list:

- Stage 1, clause splitting (`enable_clause_split`): breaks complex sentences into simpler clauses using beam search. For example, "Obama, born in Hawaii, is president" becomes ["Obama is president", "Obama born in Hawaii"].
- Stage 2, forward entailment (`enable_entailment`): generates shorter entailed forms using natural logic. For example, "Blue cats play" produces ["Blue cats play", "cats play"]. This applies to all fragments, including those from clause splitting.
- Confidence threshold (`min_confidence`): filters triplets below the specified confidence score (0.0 to 1.0). Higher values give fewer but higher-quality results.
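A sketch combining these options (option names as documented above):

```python
from triplet_extract import OpenIEExtractor  # module path assumed

extractor = OpenIEExtractor(
    enable_clause_split=True,  # Stage 1: split complex sentences into clauses
    enable_entailment=True,    # Stage 2: generate shorter entailed fragments
    min_confidence=0.7,        # keep only triplets scoring at least 0.7
)
```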
For processing multiple texts efficiently:
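A batch sketch; `extract_batch` is a hypothetical method name used for illustration, so check the package API for the actual batch entry point:

```python
from triplet_extract import OpenIEExtractor  # module path assumed

texts = [
    "Barack Obama was born in Hawaii.",
    "Approximately 25% of participants reported improved outcomes.",
]
extractor = OpenIEExtractor()
# `extract_batch` is hypothetical; the real batch method may be named differently
results = extractor.extract_batch(texts)
```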
The system automatically uses GPU acceleration if `triplet-extract[deepsearch]` is installed and a CUDA GPU is available. Otherwise, it falls back to CPU with identical extraction quality.
Reuse extractor instances when processing multiple texts:
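For example (a sketch; construction is the expensive step, so do it once):

```python
from triplet_extract import OpenIEExtractor  # module path assumed

corpus = ["Obama is president.", "The enzyme catalyzes the reaction."]
extractor = OpenIEExtractor()  # construct once; model loading dominates startup cost
all_triplets = [extractor.extract(text) for text in corpus]
```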
Use batch processing for best performance (see the batch sketch above).
The library is silent by default. Enable logging to see internal operations:
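A sketch using the standard library `logging` module; the logger name `triplet_extract` is an assumption based on the package name:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
# Logger name assumed from the package name; adjust if the library differs
logging.getLogger("triplet_extract").setLevel(logging.DEBUG)
```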
The system implements the three-stage pipeline from the Stanford OpenIE paper:
Stage 1 (clause splitting) uses a pre-trained linear classifier to break complex sentences into independent clauses. The classifier was trained on the LSOIE dataset and considers dependency parse structure when making splitting decisions.
Stage 2 (forward entailment) applies natural logic deletion rules to generate shorter entailed forms, using prepositional phrase attachment affinities to determine which constituents can be safely deleted while preserving truth.
Stage 3 (pattern matching) extracts (subject, relation, object) triples from sentence fragments using dependency patterns, handling syntactic constructions such as copular sentences, prepositional phrases, and clausal complements.
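A compact walk-through of the three stages on one sentence (fragments and triples are illustrative, based on the examples above):

```python
text = "Obama, born in Hawaii, is president."
# Stage 1 (clause splitting):
#   ["Obama is president", "Obama born in Hawaii"]
# Stage 2 (forward entailment): shorter entailed fragments of each clause
# Stage 3 (pattern matching): dependency patterns yield triples such as
#   ("Obama", "is", "president") and ("Obama", "born in", "Hawaii")
```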
The trained models (clause splitting classifier and PP attachment affinities) are from the original Stanford implementation and are included in this package.
This implementation uses spaCy for dependency parsing instead of Stanford CoreNLP. While the algorithm and models are the same, the parsers may produce different dependency trees for the same sentence. Differences in tokenization, POS tagging, and dependency labels mean that extraction results won't be identical to the original Java implementation.
In practice, core extractions remain highly compatible with Stanford OpenIE, though edge cases may differ, particularly with unusual capitalization or complex grammatical constructions. If you require exact compatibility with Stanford OpenIE output, please use the original Java implementation.
spaCy's statistical parser may misparse bare plural sentences (plural nouns without articles). For example, extract("Dogs chase cats.") returns malformed results because spaCy incorrectly parses "chase" as a noun rather than a verb, treating the entire phrase as a compound noun. Adding articles fixes this: extract("The dog chases the cat.") works correctly. This is a fundamental limitation of spaCy's parser compared to Stanford CoreNLP's constituency parser, affecting all spaCy model sizes. This rarely impacts real-world usage since scientific and formal writing typically uses articles and determiners.
If you use this library in research, please cite both this implementation and the original Stanford OpenIE paper:
This implementation:
Original Stanford OpenIE paper:
Reference: Angeli, Gabor, Melvin Jose Johnson Premkumar, and Christopher D. Manning. "Leveraging Linguistic Structure for Open Domain Information Extraction." Association for Computational Linguistics (ACL), 2015. Paper | Stanford OpenIE | CoreNLP GitHub
Bug reports and feature requests are welcome. Please open an issue on GitHub if you encounter problems or have suggestions for improvements.
GPL-3.0-or-later
This is a derivative work of Stanford OpenIE, which is licensed under GPL-3.0. The trained models included in this package are from the original Stanford implementation and remain under their GPL-3.0 license.
See LICENSE for details.