Hopfield + Hebbian hybrid memory system for LLMs. Two nights of experiments (16 iterations), validated on LongMemEval (ICLR 2025).

Architecture:
- Single-hop: Two-Stage Hopfield (NN top-20 → softmax settle); a retrieval sketch follows below
- Multi-hop: Hebbian W matrix with WTA pattern separation; a chaining sketch follows below
- 64% on LongMemEval (500 questions), retrieval-only, no LLM dependency
- 4 ms latency @ 20K memories, ~1 GB VRAM

Key findings:
- Hopfield attention solved noise tolerance (20% → 100% vs. flat Hebbian)
- WTA pattern separation enables 20K+ capacity
- Multi-hop associative chains (6 hops, CosSim = 1.0); standard RAG has no equivalent
- MiniLM-L6 is optimal (the discrimination gap matters more than absolute similarity)
- Paraphrase cue augmentation: 55% → 100% on synthetic data, 36% → 64% on the benchmark
- SNN encoder is viable (CosSim 0.99) but not needed for the current architecture
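As a concrete reading of the single-hop path, here is a minimal NumPy sketch of the "NN top-20 → softmax settle" two-stage retrieval. Everything beyond the two stages themselves (the function name, the inverse temperature `beta`, the settle iteration count, the unit-normalisation) is an illustrative assumption, not the repo's actual code.

```python
import numpy as np

def two_stage_hopfield_retrieve(query, memory, beta=8.0, top_k=20, settle_iters=3):
    """Stage 1: coarse nearest-neighbour prefilter to a top-k shortlist.
    Stage 2: modern-Hopfield softmax settle over the shortlist only.

    query:  (d,) unit-normalised query embedding
    memory: (n, d) unit-normalised stored embeddings
    """
    # Stage 1: cosine similarity is a plain dot product on unit vectors.
    sims = memory @ query
    shortlist = np.argpartition(sims, -top_k)[-top_k:]
    X = memory[shortlist]                        # (top_k, d) candidate patterns

    # Stage 2: iterate the modern-Hopfield update xi <- X^T softmax(beta * X xi).
    xi = query.copy()
    for _ in range(settle_iters):
        logits = beta * (X @ xi)
        attn = np.exp(logits - logits.max())     # numerically stable softmax
        attn /= attn.sum()
        xi = attn @ X
        xi /= np.linalg.norm(xi) + 1e-12         # stay on the unit sphere
    return xi, int(shortlist[np.argmax(X @ xi)])  # settled state + best memory id
```

Confining the settle to a fixed 20-item shortlist plausibly explains the flat latency at 20K memories: Stage 1 is a single (n, d) matrix-vector product, and Stage 2's cost does not grow with n.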
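For the multi-hop path, the sketch below shows one plausible shape for "Hebbian W matrix with WTA pattern separation": directed pairs stored as an outer product of winner-take-all sparsified codes, then recalled hop by hop. The class name, `k=32`, and the decoding convention are assumptions for illustration; only Hebbian W + WTA + multi-hop chaining come from the summary above.

```python
import numpy as np

def wta(x, k=32):
    """Winner-take-all sparsification: keep the k largest-magnitude units."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

class HebbianChainMemory:
    """Hetero-associative Hebbian store: each directed pair (a -> b) is written
    as an outer product of WTA-sparsified codes, so recall can be chained."""

    def __init__(self, dim, k=32):
        self.W = np.zeros((dim, dim))
        self.k = k

    def store(self, a, b):
        # Hebbian update: co-activation of b's code with a's code strengthens W.
        self.W += np.outer(wta(b, self.k), wta(a, self.k))

    def hop(self, cue):
        # One associative hop: sparsify the cue, project through W, renormalise.
        out = self.W @ wta(cue, self.k)
        return out / (np.linalg.norm(out) + 1e-12)

    def chain(self, cue, hops):
        for _ in range(hops):
            cue = self.hop(cue)
        return cue

# Hypothetical 3-hop chain a -> b -> c -> d over random unit embeddings.
rng = np.random.default_rng(0)
a, b, c, d = (v / np.linalg.norm(v) for v in rng.standard_normal((4, 384)))
mem = HebbianChainMemory(384)
for src, dst in ((a, b), (b, c), (c, d)):
    mem.store(src, dst)
out = mem.chain(a, hops=3)
tgt = wta(d, mem.k)
print(float(out @ tgt / np.linalg.norm(tgt)))  # ~1.0: chain lands on d's code
```

The WTA step is what does the pattern separation here: random sparse codes barely overlap, so cross-talk in `W @ code` stays small as more pairs are stored, which is consistent with the 20K+ capacity and hop-6 CosSim = 1.0 claims above (the repo's exact mechanism may differ).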
Raw sweep results (noise robustness, partial-cue recall, capacity scaling):

```json
{
  "noise": {
    "0.0": {
      "mean_cos": 0.9999999350309372,
      "exact_rate": 1.0
    },
    "0.1": {
      "mean_cos": 0.16949998944997788,
      "exact_rate": 0.09
    },
    "0.2": {
      "mean_cos": 0.06849999487400055,
      "exact_rate": 0.03
    },
    "0.5": {
      "mean_cos": 0.024999997913837432,
      "exact_rate": 0.0
    },
    "1.0": {
      "mean_cos": 0.011999999135732652,
      "exact_rate": 0.0
    },
    "2.0": {
      "mean_cos": 0.002499999850988388,
      "exact_rate": 0.0
    },
    "5.0": {
      "mean_cos": 0.009499999433755875,
      "exact_rate": 0.0
    }
  },
  "partial": {
    "0.0": {
      "mean_cos": 0.9999999344348908,
      "exact_rate": 1.0
    },
    "0.1": {
      "mean_cos": 0.9999999344348908,
      "exact_rate": 1.0
    },
    "0.2": {
      "mean_cos": 0.9999999344348908,
      "exact_rate": 1.0
    },
    "0.3": {
      "mean_cos": 0.9999999344348908,
      "exact_rate": 1.0
    },
    "0.5": {
      "mean_cos": 0.9069999405741691,
      "exact_rate": 0.86
    },
    "0.7": {
      "mean_cos": 0.5879999609291553,
      "exact_rate": 0.45
    },
    "0.9": {
      "mean_cos": 0.1689999896287918,
      "exact_rate": 0.08
    }
  },
  "capacity": {
    "100": {
      "mean_cos": 0.999999930858612,
      "exact_rate": 1.0,
      "w_abs": 0.00014901161193847656
    },
    "500": {
      "mean_cos": 0.9999999320507049,
      "exact_rate": 1.0,
      "w_abs": 0.0007450580596923828
    },
    "1000": {
      "mean_cos": 0.9999999344348908,
      "exact_rate": 1.0,
      "w_abs": 0.0014901161193847656
    },
    "2000": {
      "mean_cos": 0.9999999338388443,
      "exact_rate": 1.0,
      "w_abs": 0.0029802322387695312
    },
    "5000": {
      "mean_cos": 0.9999999314546585,
      "exact_rate": 1.0,
      "w_abs": 0.007450580596923828
    },
    "10000": {
      "mean_cos": 0.9999999326467514,
      "exact_rate": 1.0,
      "w_abs": 0.014901161193847656
    },
    "20000": {
      "mean_cos": 0.9999999272823333,
      "exact_rate": 1.0,
      "w_abs": 0.029802322387695312
    }
  }
}
```
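For reference, the sweep shape above (per-level `mean_cos` and `exact_rate`) is cheap to reproduce. The harness below is a hypothetical sketch: the function name, the 0.999 exact-recall threshold, and the flat Hebbian recall used in the demo are assumptions; the real system's two-stage recall would be passed in as `recall` instead.

```python
import numpy as np

def noise_sweep(patterns, recall, sigmas=(0.0, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0), seed=0):
    """For each noise level, corrupt every stored pattern with Gaussian noise,
    recall it, and report mean cosine similarity plus the exact-recall rate
    (here: cosine > 0.999, an assumed threshold)."""
    rng = np.random.default_rng(seed)
    results = {}
    for sigma in sigmas:
        cos = []
        for p in patterns:
            cue = p + sigma * rng.standard_normal(p.shape)
            out = recall(cue)
            cos.append(p @ out / (np.linalg.norm(p) * np.linalg.norm(out) + 1e-12))
        cos = np.asarray(cos)
        results[str(sigma)] = {"mean_cos": float(cos.mean()),
                               "exact_rate": float((cos > 0.999).mean())}
    return results

# Demo with a flat Hebbian auto-associator as a stand-in recall function.
d, n = 384, 100
rng = np.random.default_rng(0)
P = rng.standard_normal((n, d))
P /= np.linalg.norm(P, axis=1, keepdims=True)
W = P.T @ P                                   # additive outer-product storage
print(noise_sweep(P, lambda cue: W @ cue))
```

The `partial` sweep would be the analogous loop with a fraction of the cue's components zeroed out instead of noised. One detail worth flagging in the capacity table: `w_abs` (apparently the mean absolute weight) doubles whenever the store doubles, which is consistent with this kind of additive outer-product accumulation.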