Part 1: Review of Paper on AI Agents and Agentic AI
Paper reviewed: “AI Agents vs. Agentic AI: A Conceptual Taxonomy, Applications and Challenges” (Sapkota et al., 2025)
This is the first of what I hope will be many explorations into how we design, build, and make sense of intelligent systems. If you find it useful or enjoy following along, feel free to Subscribe below.
I recently read the paper “AI Agents vs. Agentic AI: A Conceptual Taxonomy, Applications and Challenges” by Sapkota et al. (2025). The authors are from Cornell University and the University of Peloponnese. The paper is grounded in one core insight: there is an increasing need for a shared taxonomy to make sense of the shift from modular AI Agents to collaborative Agentic AI, and it draws a sharp and much-needed line between the two.
I found this paper fascinating, deep, well-researched, and long. It provides much-needed clarity on the distinction between AI Agents and Agentic AI and, more importantly, offers a strong conceptual framework for thinking about where this space is headed. Hence, I decided to write a review and summarize sections of the paper, as I believe anyone involved in building, researching, or even thinking about intelligent systems should be familiar with the concepts and ideas laid out by Sapkota et al. (2025).
My goal in writing this review is simple: to make the paper by Sapkota et al. (2025) easier to digest. So I am breaking my review write-up into small parts. This Part 1: Review of Paper on AI Agents and Agentic AI provides an overview of the paper and sets the stage for upcoming reviews of the remaining parts.
Why does this paper stand out?
Structured taxonomy: The paper offers a multi-dimensional taxonomy comparing AI Agents and Agentic AI across architectural, operational, and cognitive dimensions.
Layered research: The authors have used a chronological, layered approach that mirrors how agent systems evolved over time.
Literature methodology: The paper’s references are sourced from 12 platforms, and the authors also leverage AI-powered search tools (ChatGPT, DeepSeek, Grok, Hugging Face Search, Perplexity).
Application oriented: Each theoretical concept has a corresponding real-world system (think CrewAI, LangChain, AutoGPT) to ground the discussion.
Solution focused: The authors propose 10 design patterns/solutions.
Visual aids: The paper includes easy-to-understand diagrams, mind maps, flowcharts, and tables.
Wide Systematic Study
The paper covers the core domains: foundational agent theory, Large Language Models (LLMs), Large Image Models (LIMs), generative AI, tool-augmented agents, multi-agent coordination, memory architectures, reasoning frameworks, causality and simulation planning, and ethics and governance.
Timeline studied
The authors review and explain the evolution of intelligent agents over time: pre-2022; November 2022, when ChatGPT was released and LLM adoption surged (the inflection point); early-to-mid 2023, when AI Agents started to surge; late 2023 to 2024, when Agentic AI began to emerge; and 2024 through May 2025, when real-world deployments increased across domains, along with the complexity of tasks being handled autonomously.
🧩 Paper Summary
This paper cuts through the jargon and provides a structured, well-argued framework to distinguish two fast-evolving AI paradigms: AI Agents and Agentic AI. It’s not just a matter of more autonomy or more tools—it’s about fundamentally different design philosophies. The paper’s foundation is built on research by Castelfranchi (1998) and Ferber & Weiss (1999), which established that “individual social actions and cognitive architectures are fundamental to modeling collective phenomena”. The authors, Sapkota et al. (2025), studied how early reactive systems (think: expert systems, Belief-Desire-Intention agents) evolved to the LLM-powered AI Agents that were being released post-ChatGPT (late 2022). That inflection point was a game changer! Suddenly, AI wasn’t just responding - it was executing tasks, invoking tools, and navigating workflows with increasing independence.
Pre-2022: AI agents operated in predefined boundaries and were incapable of adapting to real-world complexity.
Until late 2022: Theoretical foundations, related to social action and distributed systems, underpinned the early architectures of multi-agent systems and expert systems.
November 2022: ChatGPT is released. Inflection point!
Post-2022: Evolution path was Gen AI → Augmented AI Agents → Orchestrated AI Systems
Why does drawing a clear line between these paradigms matter?
Each phase builds on the previous, but introduces a leap in autonomy, interaction, and system coordination, leading to:
Better system design: You can’t solve a complex orchestration problem with a single-agent toolchain. Matching system architecture to task complexity is critical.
Smarter benchmarking: You don’t evaluate a collaborative Agentic AI the same way you test a generative model.
Avoiding design waste: Misunderstanding the paradigm often leads to over-engineering the simple—and under-engineering the hard.
Understanding where each paradigm starts and stops isn’t just semantics—it’s a prerequisite for building the next generation of intelligent systems with clarity and purpose. Through methodical comparisons (think evolution, architecture, autonomy, complexity), the paper drives the distinction that Agentic AI is not just a fancier agent—it’s a fundamentally different species in the AI ecosystem. It’s the difference between hiring a smart intern vs. running a company of coordinated specialists with a CEO orchestrating them all.
This paper provides a taxonomy roadmap for anyone building, researching, or deploying intelligent systems. The authors build a strong conceptual ladder from traditional rule-based agents → LLM-driven AI Agents → full-blown Agentic AI ecosystems. What follows is a head-to-head breakdown: the architecture, behavior, autonomy, planning capacity, and use cases across both paradigms. The paper also takes a hard look at their respective pitfalls—hallucinations, lack of reasoning depth, coordination complexity—and proposes solutions rooted in memory, orchestration, and causal modeling.
Key Highlights
🔄 Evolution of Agent Paradigms
Before ChatGPT: rule-based, reactive agents.
After ChatGPT: modular AI Agents that use tools, then scale into Agentic AI—a system of collaborating agents with shared memory and orchestrated autonomy.
🔍 Core Differences
While AI Agents are modular, tool-augmented, and task-specific (like AI-enhanced smart assistants that get things done), Agentic AI shifts the game entirely. We’re talking multi-agent systems that dynamically divide work, coordinate with each other, remember what happened last time, and act as decentralized decision-makers.
AI Agents = Single agents, focused on a specific task, tool-augmented, and reactive.
AI Agents = Tools + LLM + feedback loops
Agentic AI = Multi-agent ecosystems, each with a role, collaborating to achieve dynamic, high-level goals.
Agentic AI = Teams of agents + coordination + goal orchestration
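The two “equations” above can be sketched in code. This is a toy illustration only, not the paper’s implementation or any real framework’s API: `call_llm`, `Agent`, and `Orchestrator` are hypothetical stand-ins I made up to show the structural difference between a single tool-augmented loop and a coordinated team of role-specific agents.

```python
def call_llm(prompt: str) -> str:
    """Stub for an LLM call; a real system would hit a model API here."""
    return f"[LLM response to: {prompt}]"

# AI Agents = Tools + LLM + feedback loops: one agent iterating on one task.
def ai_agent(task: str, max_steps: int = 3) -> str:
    result = task
    for _ in range(max_steps):                    # feedback loop
        result = call_llm(f"Work on: {result}")   # LLM + (implicit) tool step
    return result

# Agentic AI = Teams of agents + coordination + goal orchestration.
class Agent:
    def __init__(self, role: str):
        self.role = role

    def run(self, subtask: str) -> str:
        return call_llm(f"As {self.role}, do: {subtask}")

class Orchestrator:
    """Meta-agent: decomposes a high-level goal and routes subtasks to roles."""
    def __init__(self, team: dict):
        self.team = team

    def run(self, goal: str) -> str:
        # A real orchestrator would plan dynamically; this plan is hardcoded.
        plan = [("retriever", f"gather sources for {goal}"),
                ("summarizer", f"summarize findings on {goal}")]
        outputs = [self.team[role].run(subtask) for role, subtask in plan]
        return " | ".join(outputs)

team = {"retriever": Agent("retriever"), "summarizer": Agent("summarizer")}
print(Orchestrator(team).run("agentic AI survey"))
```

The structural point: the single agent’s “intelligence” lives in one loop, while the agentic system’s lives in the division of roles plus the coordination layer on top.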
Key Traits
🧠 Architectural Leap
One of the biggest shifts the paper highlights is architectural. An AI Agent typically operates as a single modular system built around one LLM, with tool integrations and some logic for task execution. Useful? Absolutely. But it is limited when it comes to handling complexity, context, or collaboration.
Agentic AI, on the other hand, is a fundamental rethinking. It introduces a layered architecture that structures intelligence - coordination, memory, and reasoning - and executes it at scale across diverse systems. In practice, Agentic AI systems have:
Specialized Agents: Agentic AI decomposes the required work into roles: retrievers, planners, summarizers, evaluators, and more. Each agent has a specific function, and collectively the agents solve complex, multi-step tasks. This is not just modular; it is scalable.
Persistent Memory: Agentic AI systems can recall from memory. Through episodic and semantic memory systems, they can track context over time, retain the steps already executed, and adapt when new information becomes available. This makes them feel more like collaborators than assistants.
Advanced Planning & Reasoning: Agentic AI systems leverage frameworks like ReAct (Reasoning + Acting), CoT (Chain-of-Thought), and ToT (Tree-of-Thoughts), which allow them to split tasks, reflect on intermediate steps, and then re-plan if needed. This approach to planning isn’t hardcoded - it’s dynamic. It is closer to how humans think, which is a leap beyond chaining simple prompts.
Orchestration Mechanisms: The secret sauce of Agentic AI is coordination of the agent ecosystem. Meta-agents (think system-level managers or project managers) assign roles, monitor progress, resolve conflicts, and ensure alignment.
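The dynamic planning loop described above follows the ReAct pattern: reason about the goal, act via a tool, observe the result, and re-plan. Here is a minimal sketch of that loop. To be clear, `plan_step` and `run_tool` are hypothetical stubs of my own, not the paper’s or any library’s code; a real system would put an LLM behind the planner and real tools behind the executor.

```python
def plan_step(goal: str, history: list) -> tuple:
    """Stub planner: returns (thought, (action, argument)).
    A real system would ask an LLM, conditioning on the history so far."""
    if len(history) < 2:
        return ("need more information", ("search", goal))
    return ("enough context gathered", ("finish", None))

def run_tool(name: str, arg):
    """Stub tool executor; a real one would call search APIs, code, etc."""
    return f"result of {name}({arg})"

def react_loop(goal: str, max_steps: int = 5) -> list:
    history = []  # episodic trace of (thought, action, observation) tuples
    for _ in range(max_steps):
        thought, (action, arg) = plan_step(goal, history)  # reason
        if action == "finish":
            return history
        observation = run_tool(action, arg)                # act + observe
        history.append((thought, action, observation))
        # next iteration re-plans against the updated history
    return history

trace = react_loop("summarize the Sapkota et al. taxonomy")
```

The key design point is the `history` list: because each planning step sees prior thoughts and observations, the system can reflect and re-plan instead of blindly executing a fixed chain of prompts.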
Application Landscape
AI Agents thrive in: customer support, email triage, calendar automation, and dashboard queries.
Agentic AI powers: collaborative research assistants, drone swarms, clinical decision-support, and autonomous multi-step workflows.
What’s Holding Them Back?
AI Agents: Hallucinations, no causality, short-term planning, shallow reasoning.
Agentic AI: Coordination overhead, emergent behavior risks, explainability gaps, and governance complexity.
My Top 5 Takeaways
1. From LLM as chatbot to LLM as cognitive substrate. LLMs are now planners, not just responders, especially when augmented with tools and agents.
2. Memory isn’t optional anymore. Agents that remember, adapt, and coordinate will outperform stateless, prompt-bound models.
3. Modularity is table stakes; orchestration is the differentiator. Just like in a company, tools are helpful. But a team without a manager? Chaos.
4. It’s not about more power; it’s about smarter design. Agentic AI shifts design from single-task automation to system-level orchestration.
5. We’re architecting AI organizations, not just agents. Agentic AI is what happens when “digital employees” get structure, roles, goals, and memory.
🧰 Potential Solutions in Sight
Retrieval-Augmented Generation (RAG)
Tool-based reasoning pipelines
Feedback and planning loops (ReAct, CoT)
Memory architectures (episodic, semantic, vector)
Causal modeling and orchestration layers
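Of the solutions above, Retrieval-Augmented Generation is the easiest to show in miniature: retrieve relevant documents, then ground the model’s prompt in them so answers are anchored in sources rather than hallucinated. The sketch below is my own bare-bones illustration, not the paper’s proposal: retrieval is naive keyword overlap instead of a real vector store, and `generate` is a stub for an LLM call.

```python
# Tiny in-memory "corpus" for illustration.
DOCS = [
    "AI Agents are modular, tool-augmented, task-specific systems.",
    "Agentic AI coordinates multiple specialized agents with shared memory.",
    "ReAct interleaves reasoning steps with tool actions.",
]

def retrieve(query: str, docs=DOCS, k: int = 2) -> list:
    """Rank documents by naive word overlap with the query (a real RAG
    system would use embeddings and a vector index instead)."""
    query_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(query_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(prompt: str) -> str:
    """Stub LLM call; a real system would invoke a model here."""
    return f"[answer grounded in: {prompt}]"

def rag_answer(query: str) -> str:
    context = "\n".join(retrieve(query))  # ground the prompt in retrieved text
    return generate(f"Context:\n{context}\n\nQuestion: {query}")

print(rag_answer("What distinguishes Agentic AI from AI Agents?"))
```

The hallucination mitigation comes from that `Context:` block: the model is asked to answer from retrieved evidence, and the same retrieved passages can be surfaced to the user for verification.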
🛤️ Future Outlook
AI Agents will evolve toward deeper reasoning, proactive behavior, and trustworthy autonomy.
Agentic AI will scale up with more refined memory, simulation-based planning, role-driven orchestration, and domain-specific performance tuning.
Final Thought
This paper isn’t just a literature survey—it’s a design framework for anyone architecting intelligent systems. Whether you’re building AI assistants or full ecosystems, the message is clear: we’re moving from smart tools to collaborative intelligence. The ones who design for that shift—modularity, memory, and orchestration—will be the ones leading the next phase of autonomy.
Part 1: Review of Paper on AI Agents and Agentic AI is a wrap.
If you made it all the way here—thank you for reading, I truly appreciate it.
Feel free to drop your thoughts, reactions, or questions—always happy to dig deeper together. If this space resonates with you, I am open to connecting via LinkedIn, exchanging ideas, and exploring ways to build the future thoughtfully. Let’s keep learning!
Coming Soon …
In my next review (Part 2) on the same paper “AI Agents vs. Agentic AI …” by Sapkota et al. (2025), I will take a closer look at the foundation of AI Agents.
References
Castelfranchi, C. (1998). Modelling social action for AI agents. Artificial Intelligence, 103(1), 157–182. https://doi.org/10.1016/S0004-3702(98)00056-3
Ferber, J., & Weiss, G. (1999). Multi-agent systems: An introduction to distributed artificial intelligence (Vol. 1). Addison-Wesley.
Sapkota, R., Roumeliotis, I. K., and Karkee, M. (2025). AI Agents vs. Agentic AI: A Conceptual Taxonomy, Applications and Challenges. arXiv. https://arxiv.org/pdf/2505.10468