LLMs are powerful, but without memory they can't maintain continuity, reason across long tasks, or behave like true agents. As organizations move beyond simple RAG systems into production-ready agentic applications, Memory Engineering is emerging as the next essential skill for AI developers to master. In this session, we'll introduce the core principles of Context and Memory Engineering: the systems and structures that allow AI agents to store information, recall it, adapt across interactions, and perform reliably in complex workflows.
You'll learn how Memory Engineering extends far beyond chat history to include short-term memory, long-term memory, summarization, compaction, entity tracking, workflow memory, semantic indexing, and more. We'll also show how Oracle AI Database can serve as a powerful Agent Memory Core, providing unified retrieval (vector, text, relational, JSON), scalable persistence, and strong foundations for building agents that learn and adapt over time. Whether you're building research agents, autonomous assistants, or multi-step workflow systems, this session gives you the vocabulary, mental models, and coding patterns to start building memory-aware agents today.
- The fundamentals of Agent Memory and why LLM applications need structured memory systems beyond the context window.
- How Memory and Context Engineering shape modern agent design, including summarization, compaction, and just-in-time retrieval.
- The full landscape of memory types (working, episodic, semantic, procedural, entity, workflow) and how agents store, compress, and retrieve information.
- How to build reliable memory pipelines (capture → encode → store → organize → retrieve) using programmatic and agentic operations, as sketched in the first example after this list.
- How Oracle AI Database serves as a high-performance memory core with unified retrieval (vector, text, relational, graph, JSON) for scalable, persistent agent memory, illustrated in the second example after this list.
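
To make the pipeline in the list above concrete, here is a minimal, illustrative sketch. Everything in it is an assumption made for this example: the `MemoryRecord` shape, the toy bag-of-words encoder, and the in-memory `MemoryStore` stand in for the embedding models and databases a real agent would use.

```python
"""Minimal sketch of a capture -> encode -> store -> organize -> retrieve pipeline.

Illustrative only: the record shape, toy encoder, and in-memory store are
assumptions for this example, not an API from the session.
"""
from dataclasses import dataclass, field
from collections import Counter
from math import sqrt
from time import time


@dataclass
class MemoryRecord:
    text: str            # raw captured content (user turn, tool output, ...)
    kind: str            # e.g. "episodic", "semantic", "entity", "workflow"
    embedding: Counter   # toy encoding; a real system would use a vector model
    created_at: float = field(default_factory=time)


def encode(text: str) -> Counter:
    """Encode text as a bag-of-words vector (stand-in for an embedding model)."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class MemoryStore:
    """In-memory store organized by memory kind; a database would replace this."""

    def __init__(self) -> None:
        self._records: list[MemoryRecord] = []

    def capture(self, text: str, kind: str = "episodic") -> MemoryRecord:
        # capture + encode + store in one step
        record = MemoryRecord(text=text, kind=kind, embedding=encode(text))
        self._records.append(record)
        return record

    def retrieve(self, query: str, kind: str | None = None, top_k: int = 3) -> list[MemoryRecord]:
        """Just-in-time retrieval: rank stored records against the current query."""
        q = encode(query)
        candidates = [r for r in self._records if kind is None or r.kind == kind]
        return sorted(candidates, key=lambda r: cosine(q, r.embedding), reverse=True)[:top_k]


if __name__ == "__main__":
    store = MemoryStore()
    store.capture("User prefers concise answers with code samples", kind="semantic")
    store.capture("Step 3 of the onboarding workflow failed with a timeout", kind="workflow")
    for record in store.retrieve("what does the user prefer?"):
        print(record.kind, "->", record.text)
```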
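
The second sketch shows the "unified retrieval" idea from the last bullet: one SQL round trip that mixes a structured JSON filter with vector similarity ranking. It assumes a hypothetical `agent_memory` table (with `text`, `metadata` JSON, and `embedding` VECTOR columns), the Oracle Database 23ai `VECTOR_DISTANCE` / `TO_VECTOR` / `JSON_VALUE` functions, and the python-oracledb driver; connection details and the table schema are placeholders, not part of the session material.

```python
"""Sketch of unified retrieval against a single database-backed memory core.

Assumptions: a hypothetical AGENT_MEMORY table with TEXT, METADATA (JSON) and
EMBEDDING (VECTOR) columns; Oracle Database 23ai vector/JSON SQL functions;
the python-oracledb driver. Credentials and DSN are placeholders.
"""
import json
import oracledb

HYBRID_QUERY = """
    SELECT text,
           JSON_VALUE(metadata, '$.memory_kind') AS memory_kind
      FROM agent_memory
     WHERE JSON_VALUE(metadata, '$.agent_id') = :agent_id              -- structured JSON filter
     ORDER BY VECTOR_DISTANCE(embedding, TO_VECTOR(:query_vec), COSINE) -- semantic ranking
     FETCH FIRST 5 ROWS ONLY
"""


def recall(connection: oracledb.Connection, agent_id: str, query_embedding: list[float]):
    """Combine a JSON metadata filter with vector similarity in one query."""
    with connection.cursor() as cursor:
        cursor.execute(
            HYBRID_QUERY,
            {"agent_id": agent_id, "query_vec": json.dumps(query_embedding)},
        )
        return cursor.fetchall()


if __name__ == "__main__":
    # Placeholder credentials/DSN; replace with a real connection.
    conn = oracledb.connect(user="agent_app", password="...", dsn="localhost/FREEPDB1")
    rows = recall(conn, agent_id="research-agent-1", query_embedding=[0.12, -0.03, 0.88])
    for text, memory_kind in rows:
        print(memory_kind, "->", text)
```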
Duration: 1 hour