NuoNuo: Hippocampal memory module prototype
Hopfield + Hebbian hybrid memory system for LLMs. Two nights of experiments (16 iterations), validated on LongMemEval (ICLR 2025).

Architecture:
- Single-hop: Two-Stage Hopfield (NN top-20 → softmax settle)
- Multi-hop: Hebbian W matrix with WTA pattern separation
- 64% on LongMemEval (500 questions), retrieval-only, no LLM dependency
- 4ms latency @ 20K memories, ~1GB VRAM

Key findings:
- Hopfield attention solved noise tolerance (20% → 100% vs flat Hebbian)
- WTA pattern separation enables 20K+ capacity
- Multi-hop associative chains (6 hops, CosSim=1.0) — RAG can't do this
- MiniLM-L6 is optimal (discrimination gap > absolute similarity)
- Paraphrase cue augmentation: 55% → 100% on synthetic, 36% → 64% on benchmark
- SNN encoder viable (CosSim 0.99) but not needed for current architecture
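The "Two-Stage Hopfield (NN top-20 → softmax settle)" retrieval named in the commit message can be sketched roughly as below. This is a hypothetical reconstruction, not the repo's actual API: the function name, `beta`, and the step count are all assumptions; only the shape (coarse top-k pruning followed by an iterated modern-Hopfield softmax update) comes from the message.

```python
# Hypothetical sketch of two-stage Hopfield retrieval:
# stage 1 prunes to the top-k nearest memories by cosine similarity,
# stage 2 settles via a modern-Hopfield softmax update over that subset.
import numpy as np

def two_stage_hopfield(query, memories, k=20, beta=8.0, steps=3):
    """Retrieve from `memories` (N x d) given `query` (d,)."""
    q = query / np.linalg.norm(query)
    M = memories / np.linalg.norm(memories, axis=1, keepdims=True)
    # Stage 1: coarse nearest-neighbour pruning to k candidates.
    sims = M @ q
    top = np.argsort(sims)[-k:]
    X = M[top]                      # k x d candidate patterns
    # Stage 2: iterated softmax attention (modern Hopfield update).
    xi = q
    for _ in range(steps):
        p = np.exp(beta * (X @ xi))
        p /= p.sum()
        xi = p @ X
        xi /= np.linalg.norm(xi)
    return xi, top[np.argmax(p)]
```

The pruning step is what keeps latency flat as the store grows; the softmax settle is what recovers a clean pattern from a noisy cue.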
pyproject.toml (new file, 25 lines)
@@ -0,0 +1,25 @@
[project]
name = "nuonuo"
version = "0.1.0"
description = "SNN-based hippocampal memory module for LLMs"
requires-python = ">=3.12"
dependencies = [
    "torch>=2.10,<2.11",
    "snntorch>=0.9",
    "numpy",
    "matplotlib",
    "sentence-transformers>=3.0",
    "openai>=1.0",
    "requests>=2.33.1",
]

[tool.uv]
index-url = "https://pypi.org/simple"

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"
explicit = true

[tool.uv.sources]
torch = { index = "pytorch-cu128" }
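The multi-hop side of the commit message ("Hebbian W matrix with WTA pattern separation") can be sketched as a winner-take-all sparse code plus an outer-product association matrix. All identifiers and the sparsity level `k` below are assumptions for illustration, not names from this repo.

```python
# Hypothetical sketch: Hebbian outer-product associations over
# winner-take-all (WTA) sparse codes, enabling multi-hop recall a -> b -> c.
import numpy as np

def wta(x, k=8):
    """Keep the k largest entries (pattern separation), zero the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(x, -k)[-k:]
    out[idx] = x[idx]
    return out / (np.linalg.norm(out) + 1e-12)

class HebbianStore:
    def __init__(self, dim, k=8):
        self.W = np.zeros((dim, dim))
        self.k = k

    def associate(self, a, b):
        """Hebbian outer-product update linking the code of a to b."""
        sa, sb = wta(a, self.k), wta(b, self.k)
        self.W += np.outer(sb, sa)

    def recall(self, a):
        """One associative hop: retrieve the pattern linked from `a`."""
        out = self.W @ wta(a, self.k)
        return out / (np.linalg.norm(out) + 1e-12)
```

Because WTA codes of unrelated patterns rarely share active units, stored associations interfere little with each other, which is the capacity mechanism the commit message credits for 20K+ memories; chaining `recall` calls gives the multi-hop traversal that plain RAG lacks.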