Agentic AI represents a paradigm shift: from predicting text to reasoning, collaborating, and self-organizing. At a recent Tesonet AI-focused conference, Mario Peng Lee shared how the great minds at nexos.ai are applying psychology and cognitive science to guide the design of truly intelligent systems - ones that don’t just think, but act, reflect, and learn together. Read Mario's article and find out how human psychology is shaping the architecture of agentic AI 👇
Yay! I am a huge fan of Artificial Social Engineering - it really does help get AI agents to be more efficient and do what I want them to do. I get a lot of inspiration from reading the system prompts of SOTA agents and LLMs (there's a GitHub repo by Pliny), do you do it as well?
Interesting read. One question: how much of the psychology-inspired behavior (like chunking or limited working memory) is truly rooted in psychology, and how much is simply a consequence of model constraints (context limits, attention) and the need to structure tasks so models can process them efficiently? Curious how you distinguish genuine cognitive principles from practical engineering necessities.