Hierarchical Atomic Nested External Reasoning & Memory Architecture
Perfect long-term memory. Atomic-verified reasoning. Intelligent routing. Calibrated confidence. The definitive open-source framework for reliable AI agents.
Built by Xerv
- Perfect: Correctly chose direct for simple questions, reasoning for math, memory for recall
- Perfect: Always injected context, even on unrelated questions
- Perfect: Applied full atomic reasoning even to trivial math
- Perfect: Combined memory + deep reasoning on every query
- Perfect: Bypassed all overhead for the fastest responses
- Perfect: Successfully forced modes on individual calls
- Perfect: Accurately recalled multiple facts across turns
- Perfect: JSON persistence worked; state restored correctly
- Perfect: Full transparency into every stage
HANERMA is purpose-built for core intelligence with zero overhead. LangChain excels at complex tool orchestration — HANERMA wins when you want reliability and simplicity out of the box.
HANERMA is intentionally minimal and fully overridable — perfect for experimentation and custom agents.
```python
class ToolAgent(HANERMA):
    def web_search(self, query):
        # Your search logic
        return result

    def ask(self, prompt):
        if "search" in prompt.lower():
            return self.web_search(prompt)
        return super().ask(prompt)
```
```python
class ReActAgent(HANERMA):
    def _reasoning(self, prompt):
        # Implement ReAct, Tree-of-Thought, etc.
        return custom_reasoning(prompt)
```
```python
import json

# Save memory: persist only the text, since raw embedding vectors
# may not be JSON-serializable; embeddings are recomputed on load
with open("memory.json", "w") as f:
    json.dump([{"text": m["text"]} for m in ai.memory_store], f)

# Load memory
with open("memory.json") as f:
    loaded = json.load(f)

for item in loaded:
    embedding = ai._get_embedding(item["text"])
    ai.memory_store.append({"text": item["text"], "embedding": embedding})
```
Subclass HANERMA and override any method. The core is rock-solid — your experiments are limitless.
Decomposes into smallest verifiable atoms with bottom-up synthesis and cross-consistency checks.
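HANERMA's internal decomposition logic isn't reproduced here, but the idea can be sketched in miniature. In the toy sketch below, every helper name is hypothetical and the arithmetic check stands in for real atomic verification:

```python
# Conceptual illustration only, not HANERMA's internals:
# split a compound claim into atoms, verify each one, then
# synthesize bottom-up: the whole holds only if every atom holds.

def decompose(claim):
    # Hypothetical atomizer: split on "and" into smallest checkable parts
    return [part.strip() for part in claim.split(" and ")]

def verify_atom(atom):
    # Hypothetical verifier: check simple arithmetic equalities like "2+2 = 4"
    left, right = atom.split("=")
    return eval(left) == eval(right)  # toy check; never eval untrusted input

def verify(claim):
    # Bottom-up synthesis: all atoms must pass for the claim to pass
    return all(verify_atom(a) for a in decompose(claim))

print(verify("2+2 = 4 and 3*3 = 9"))  # True
print(verify("2+2 = 4 and 3*3 = 8"))  # False
```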
Real sentence-transformers embeddings for unlimited conversation history recall.
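Embedding-based recall of this kind generally reduces to nearest-neighbor search by cosine similarity. A minimal, library-free illustration with toy vectors (not HANERMA's actual internals, where real embeddings come from sentence-transformers):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

store = [
    {"text": "User's name is Ada", "embedding": [0.9, 0.1, 0.0]},
    {"text": "User likes chess",   "embedding": [0.1, 0.9, 0.2]},
]
query_embedding = [0.85, 0.15, 0.05]  # toy vector for the query

# Recall the stored memory most similar to the query
best = max(store, key=lambda m: cosine(m["embedding"], query_embedding))
print(best["text"])  # User's name is Ada
```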
Zero-shot classifier automatically selects optimal path.
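The real router is a zero-shot classifier; as a rough illustration of the routing idea only, here is a keyword-based stand-in (the heuristics are invented for this sketch):

```python
# Toy stand-in for the zero-shot router; HANERMA does not use keywords.
def route(prompt):
    p = prompt.lower()
    if any(w in p for w in ("remember", "previous", "earlier")):
        return "memory"     # recall questions go through memory
    if any(ch.isdigit() for ch in p) or "solve" in p:
        return "reasoning"  # math-looking prompts get atomic reasoning
    return "direct"         # everything else answers directly

print(route("What is the capital of France?"))   # direct
print(route("Solve 37 * 91"))                    # reasoning
print(route("What was my previous question?"))   # memory
```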
Every response ends with accurate Confidence: X%.
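Because every response ends with a `Confidence: X%` line, downstream code can extract the number with a small regex helper (the helper name is ours, not part of the API):

```python
import re

def extract_confidence(response):
    # Pull the trailing "Confidence: X%" value out of a response string
    match = re.search(r"Confidence:\s*(\d+(?:\.\d+)?)%", response)
    return float(match.group(1)) if match else None

print(extract_confidence("Paris is the capital of France.\nConfidence: 97%"))  # 97.0
```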
```shell
pip install hanerma
```
Python ≥ 3.8 • Dependencies auto-installed
```python
from hanerma import HANERMA

ai = HANERMA(api_key="sk-or-your-key", model="any-openrouter-model")
print(ai.ask("Your question"))
```
```python
import os
os.environ["OPENROUTER_API_KEY"] = "sk-or-your-key"

from hanerma import HANERMA

ai = HANERMA(model="meta-llama/llama-3.1-70b")
print(ai.ask("Question"))
print(ai.ask("What was my previous question?"))
```
HANERMA is laser-focused on core intelligence (memory, reasoning, routing, calibration) with minimal code. LangChain is powerful for complex tool chains but requires heavy configuration.
Yes — subclass and override any method. Add tools, custom reasoning, persistent storage, multi-agent logic. The core is deliberately minimal for maximum experimentation.
In-memory during instance lifetime. Save/load memory_store to JSON or vector DB for persistence.
Yes — ai.logs provides full transparency into classifier, memory, and reasoning stages.
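The exact shape of the entries in `ai.logs` isn't documented here; assuming each entry is a dict with `stage` and `detail` keys (purely an assumption, so inspect your own `ai.logs` first), a tiny summarizer might look like:

```python
# Assumes log entries are dicts with "stage" and "detail" keys;
# this is an assumption for illustration, not HANERMA's documented format.
def summarize_logs(logs):
    return [f"[{entry['stage']}] {entry['detail']}" for entry in logs]

sample = [
    {"stage": "classifier", "detail": "route=reasoning"},
    {"stage": "memory", "detail": "0 items recalled"},
]
for line in summarize_logs(sample):
    print(line)
```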