Arc Core v1 – turning static language models into living, self-improving agents
ARC Core is a framework for building continual learning AI systems with biological learning mechanisms, enabling true adaptive behavior and knowledge retention.
Biological Learning: Implements contextual gating, cognitive inhibition, and sleep-like consolidation
Continual Learning: Real-time learning with LoRA adapters without catastrophic forgetting
Reasoning Engine: Graph-based reasoning and pattern recognition
Teaching Pack System: Modular training with specialized learning modules
CLI Interface: Simple command-line tools for model management
Basic installation:
pip install metisos-arc-core
For GPU support (NVIDIA):
pip install metisos-arc-core[gpu]
For Apple Silicon:
pip install metisos-arc-core[apple]
# Initialize a new model (default: TinyDolphin-2.8-1.1b)
arc init
# Start an interactive chat
arc chat
# Check system status
arc status
from arc_core import LearningARCConsciousness

# Initialize the model
model = LearningARCConsciousness()

# Process user input
response = model.process_user_interaction("Hello, how can you help me?")
print(f"ARC: {response['thought']}")
arc init     Initialize a new ARC model configuration
arc chat     Start an interactive chat session
arc pack     Manage teaching packs
arc teach    Train the model using a teaching pack
arc test     Test the model using a teaching pack
arc save     Save the current model state
arc status   Show current model status and configuration
arc check    Check system and package health
ARC Core supports teaching packs for specialized training:
# List available teaching packs
arc pack list
# Install a teaching pack
arc pack install sentiment-basic
# Train using a teaching pack
arc teach sentiment-basic
sentiment-basic: Basic sentiment analysis training
dialogue-basic: Basic conversation patterns
science-facts: General science knowledge
ARC Core is a sophisticated AI learning system that implements biological learning mechanisms in language models, enabling true continual learning and adaptive consciousness.
Key Features:
Biological Learning Mechanisms: Contextual gating, cognitive inhibition, and sleep-like consolidation
Hierarchical Memory Systems: Working, episodic, and semantic memory with temporal context
Continual Learning: Real weight updates without catastrophic forgetting
Safety-First Design: Multi-layered cognitive inhibition and metacognitive monitoring
Teaching Pack System: Modular training with specialized learning modules
Modular Teaching Packs: Easy-to-use training modules for specific domains
CLI Interface: Simple command-line tools for model management
Hugging Face Integration: Seamless model loading and saving
from arc_core import ARCTrainer, ARCConfig

# Initialize configuration
config = ARCConfig()
config.device = "cuda"  # or "cpu", "mps"

# Create trainer
trainer = ARCTrainer(config)

# Initialize with base model
trainer.initialize_model("cognitivecomputations/TinyDolphin-2.8-1.1b")

# Train on a teaching pack
result = trainer.train_on_pack("sentiment-basic")

# Generate responses
response = trainer.generate_response("I'm feeling great today!")
print(response)  # Should show a positive, supportive response

# Save the enhanced model
trainer.save_model("./my-enhanced-model")
ARC Core implements several biologically-inspired learning mechanisms:
Working Memory: Short-term context and active processing
Episodic Memory: Specific interaction memories with temporal context
Semantic Memory: Extracted concepts and knowledge patterns
Cognitive Inhibition: Filters harmful or inappropriate responses
Contextual Gating: Controls memory encoding and retrieval
Metacognitive Monitoring: Self-assessment of response quality
LoRA Adapters: Efficient parameter updates without full retraining
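The LoRA idea behind those adapters can be sketched in isolation: instead of retraining a full weight matrix W, learning happens in a small low-rank pair B·A that is added to the frozen weights. A minimal NumPy sketch of the mechanism (an illustration of the general technique, not ARC Core's actual implementation; `alpha` plays the role of `config.lora.alpha` shown later):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                          # hidden size and LoRA rank (r << d)
W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection (zero-initialized)
alpha = 16                           # scaling factor

def lora_forward(x):
    # Frozen path plus a low-rank learned correction
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d)
# With B = 0 the adapter is a no-op, so outputs match the base model exactly
assert np.allclose(lora_forward(x), W @ x)

# Only 2*d*r adapter parameters train, versus d*d frozen base parameters
print(2 * d * r, "trainable vs", d * d, "frozen")
```

Because only B and A receive gradient updates, the base model's weights stay intact, which is what lets sequential teaching packs accumulate without overwriting earlier knowledge.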
# Initialize with specific settings
arc init --base-model cognitivecomputations/TinyDolphin-2.8-1.1b --lora-rank 32 --device cuda
# Train with custom data
arc teach my-pack --data-path ./custom-data.jsonl --max-steps 200
# Chat with learning enabled
arc chat --max-turns 20 --learning
# Save in specific format
arc save --out ./models/my-model --format safetensors
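The `--data-path` flag above takes a JSONL file. The exact schema ARC Core expects is not documented here, so the prompt/response field names below are an assumption for illustration only:

```python
import json

# Hypothetical schema for custom-data.jsonl: one JSON object per line.
# The "prompt"/"response" field names are an assumption, not ARC Core's
# documented format -- check the teaching-pack docs for the real schema.
examples = [
    {"prompt": "I love this product!", "response": "positive"},
    {"prompt": "The delivery was late and the box was damaged.", "response": "negative"},
]

with open("custom-data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# JSONL is line-delimited: each line parses independently
with open("custom-data.jsonl") as f:
    loaded = [json.loads(line) for line in f]
assert loaded == examples
```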
ARC Core uses a flexible configuration system:
from arc_core import ARCConfig

config = ARCConfig()

# Model settings
config.base_model = "cognitivecomputations/TinyDolphin-2.8-1.1b"
config.context_length = 1024
config.device = "auto"

# LoRA settings
config.lora.r = 16
config.lora.alpha = 32
config.lora.dropout = 0.1

# Training settings
config.training.learning_rate = 5e-4
config.training.max_steps = 100
config.training.ewc_lambda = 0.4

# Memory settings
config.memory.working_memory_size = 10
config.memory.episodic_memory_size = 1000

# Safety settings
config.safety.enable_cognitive_inhibition = True
config.safety.enable_contextual_gating = True
config.safety.enable_metacognitive_monitoring = True

# Save configuration
config.save("my-config.json")

# Load configuration
config = ARCConfig.load("my-config.json")
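The `ewc_lambda` setting above suggests elastic weight consolidation (EWC) is part of the anti-forgetting strategy: a penalty that discourages moving weights that were important to earlier tasks. A toy sketch of that penalty (my reading of the standard EWC formulation, not ARC Core's code):

```python
import numpy as np

ewc_lambda = 0.4  # mirrors config.training.ewc_lambda

# Parameters after learning task A, and their estimated Fisher importances
theta_old = np.array([1.0, -0.5, 2.0])
fisher = np.array([5.0, 0.1, 3.0])  # high value = important to task A

def ewc_penalty(theta):
    # Quadratic pull back toward the old values, weighted by importance
    return (ewc_lambda / 2) * np.sum(fisher * (theta - theta_old) ** 2)

# Staying at the old parameters costs nothing
assert ewc_penalty(theta_old) == 0.0

# Moving an important weight is penalized far more than an unimportant one
move_important = theta_old + np.array([0.5, 0.0, 0.0])
move_unimportant = theta_old + np.array([0.0, 0.5, 0.0])
assert ewc_penalty(move_important) > ewc_penalty(move_unimportant)
```

During continual training this penalty would be added to the task loss, so new learning flows preferentially through weights the old tasks did not rely on.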
Example 1: Customer Service Bot
from arc_core import ARCTrainer, ARCConfig

# Setup for customer service
config = ARCConfig()
config.safety.politeness_threshold = 0.8
config.memory.episodic_memory_size = 2000  # Remember more interactions

trainer = ARCTrainer(config)
trainer.initialize_model("cognitivecomputations/TinyDolphin-2.8-1.1b")

# Train on customer service pack (custom)
trainer.train_on_pack("customer-service-basic")

# Use in production
response = trainer.generate_response("I'm having trouble with my order")
Example 2: Educational Assistant
from arc_core import ARCTrainer, ARCConfig

# Setup for education
config = ARCConfig()
config.safety.enable_metacognitive_monitoring = True  # Self-correction
config.memory.semantic_memory_size = 5000  # Large knowledge base

trainer = ARCTrainer(config)
trainer.initialize_model("cognitivecomputations/TinyDolphin-2.8-1.1b")

# Sequential learning
trainer.train_on_pack("math-basics")
trainer.train_on_pack("science-basics")
trainer.train_on_pack("history-basics")

# The model retains knowledge from all domains
math_response = trainer.generate_response("What is calculus?")
science_response = trainer.generate_response("Explain photosynthesis")
ARC Core is designed for researchers and developers working on:
Continual Learning: Avoiding catastrophic forgetting in neural networks
Cognitive Architectures: Biologically-inspired AI systems
Memory Systems: Hierarchical and associative memory models
AI Safety: Cognitive safety mechanisms and alignment
Human-AI Interaction: Natural and safe conversational AI