The Amaranth Foundation supports cutting‑edge neuroscience in the belief that insights from the brain can, and should, shape the emergence of artificial general intelligence (AGI) toward equitable, beneficial outcomes. We have published this short Request for Information (RFI) to invite the research community to share ideas on NeuroAI and AI safety. We hope to clarify (1) the broad relevance of neuroscience to AGI and its safety, and (2) strategic directions the field might take if frontier‑scale AGI arrives within ~3 years under today’s large‑model paradigm. We invite you to share this RFI broadly within your networks.
How can your area of neuroscience meaningfully inform, constrain, or inspire the design and governance of future AGI systems? We welcome perspectives spanning cellular and circuit mechanisms, cognitive and social neuroscience, computational modelling, neuro‑inspired architectures, brain‑data‑driven training regimes, and ethical or societal insights grounded in the study of natural intelligence. You may wish to reflect on themes surfaced in the “NeuroAI for AI Safety” roadmap (Mineault et al., 2024), which outlines eight ways brain science could address key AI‑safety challenges, from robustness and out‑of‑distribution generalisation to the cultivation of pro‑social agency.
Now, assume that frontier AI labs achieve highly capable, largely autonomous AI R&D systems by late 2027 (Kokotajlo et al. 2025; Aschenbrenner 2024), and that this rapidly leads to AGI. In that world:
What near‑term research and development opportunities exist for the neuroscience community—e.g., data generation, tool development, fundamental theory, interdisciplinary partnerships—to most effectively shape the safety, alignment, and societal integration of frontier AI systems?
Concrete proposals might touch on rapid‑cycle brain‑data benchmarks for agentic models, causal intervention toolkits inspired by modern neurotech, biologically grounded metrics of value formation, or multi‑agent governance frameworks informed by comparative cognition. Feel free to challenge these examples.
Response format: ≤ 2 pages (PDF or equivalent), free‑form prose
Deadline: 23:59 ET, 29 Aug 2025
Submission: [email protected]
Review: Internal; selected authors may be invited to a virtual salon or to submit full proposals. No material will be shared externally without permission.
For clarifications, email [email protected] with the subject “Neuro × AGI RFI Query”.
Thank you for helping map how neuroscience can guide the safe evolution of artificial general intelligence.