There has been significant discussion around Nested Learning, Google's approach to continual learning that addresses the well-known "catastrophic forgetting" problem.
I'm frequently asked how Nested Learning relates to agentic systems, and why it matters for real architectures.
To clarify, I created a short visual explanation of how tasks, inner models, and outer learning loops interact, and how this nested structure gives models a more stable learning path over time.
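To make that loop structure concrete, here is a minimal, hypothetical sketch in plain NumPy. It is not the paper's actual algorithm: the task setup, the `make_task` and `inner_loop` helpers, and the Reptile-style meta-update are all illustrative stand-ins. A fast inner loop adapts a task-specific model from a shared starting point, while a slow outer loop updates that starting point across tasks, so each new task begins closer to a good solution instead of overwriting the last one.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(shift):
    """Hypothetical toy task: 1-D linear regression whose slope drifts over time."""
    X = rng.uniform(-1, 1, size=64)
    y = (2.0 + shift) * X + 0.1 * rng.normal(size=64)
    return X, y

def inner_loop(w_init, X, y, lr=0.1, steps=50):
    """Fast inner loop: adapt a task-specific weight from the shared init."""
    w = w_init
    for _ in range(steps):
        grad = 2 * np.mean((w * X - y) * X)  # gradient of mean squared error
        w -= lr * grad
    return w

# Slow outer loop: nudge the shared initialization toward each task's solution
# (a Reptile-style meta-update, standing in for the paper's nested optimization
# levels), so later tasks start from accumulated knowledge rather than scratch.
w_meta, meta_lr = 0.0, 0.3
for t in range(10):
    X, y = make_task(shift=0.2 * t)  # tasks drift gradually
    w_task = inner_loop(w_meta, X, y)
    w_meta += meta_lr * (w_task - w_meta)
    print(f"task {t}: adapted w={w_task:.3f}, shared init={w_meta:.3f}")
```

The point of the sketch is the separation of timescales: the inner loop changes quickly per task, while the outer loop changes slowly across tasks, which is what makes the overall learning path more stable.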
For those interested in a deeper dive, Google’s paper is also available:
[Link]
You can watch the video here:
[Link]
My aim with this content is to make complex ideas more accessible and to show how they could shape the design of agentic AI systems, their architectures, and their levels of autonomy. I hope you find it useful, and I welcome your thoughts.
If you’re building agentic systems and want to talk about QoD frameworks, feel free to connect on LinkedIn or send an email.
