Time:
Room:
Self-Improving AI Without Fine-Tuning: Patterns That Work
Close the GenAI "learning gap" using self-improving feedback loops and observability. Improve models continuously without costly fine-tuning.
The MIT State of AI in Business report surfaced a brutal truth: "Most GenAI systems do not retain feedback, adapt to context, or improve over time." Meanwhile, 95% of enterprise AI pilots remain stuck with no measurable P&L impact because our systems don't learn.
Instead of talking theory, we'll build and demonstrate learning-loop architectures that allow enterprise AI systems to get better every week, without expensive fine-tuning, custom-hosted models, or moving away from existing LLMs.
We'll explore:
- GEPA (Genetic-Pareto): A framework that evolves prompts using genetic optimization and textual feedback
- DSPy: Stanford's declarative self-improving framework for optimizing agents over time
- Arize/Observability-based learning: Detecting failure patterns and automatically routing corrections back into your system
- Trust & Auditability: Fitting learning into your existing governance structures rather than fighting them
If your enterprise GenAI initiative is stuck, this demonstration gives you the missing half: the learning loop.

Matt Vincent
Founder
Source Allies
Matt Vincent founded Source Allies, an Iowa-headquartered consultancy specializing in Data & AI, with multiple GenAI systems in production delivering measurable ROI. He works in its AI practice, moving generative AI from pilot to product.

Ben McHone
Staff Engineering Consultant
Source Allies
Ben McHone is a Staff Engineering Consultant at Source Allies, specializing in deploying agentic AI systems to production. He focuses on metric-driven development and real-world reliability, addressing the question: How do we know we can trust this technology? Ben is a DSPy contributor, LangChain Expert Program member, and Arize/Phoenix Ambassador.