AI Agents vs. Agentic AI: A Conceptual Taxonomy, Applications and Challenges

Ranjan Sapkota∗‡, Konstantinos I. Roumeliotis†, Manoj Karkee∗‡
∗Cornell University, Department of Biological and Environmental Engineering, USA
†University of the Peloponnese, Department of Informatics and Telecommunications, Tripoli, Greece
‡Corresponding authors: [email protected], [email protected]

Abstract—This review critically distinguishes between AI Agents and Agentic AI, offering a structured conceptual taxonomy, application mapping, and challenge analysis to clarify their divergent design philosophies and capabilities. We begin by outlining the search strategy and foundational definitions, characterizing AI Agents as modular systems driven by LLMs and LIMs for narrow, task-specific automation. Generative AI is positioned as a precursor, with AI Agents advancing through tool integration, prompt engineering, and reasoning enhancements. In contrast, Agentic AI systems represent a paradigmatic shift marked by multi-agent collaboration, dynamic task decomposition, persistent memory, and orchestrated autonomy. Through a sequential evaluation of architectural evolution, operational mechanisms, interaction styles, and autonomy levels, we present a comparative analysis across both paradigms. Application domains such as customer support, scheduling, and data summarization are contrasted with Agentic AI deployments in research automation, robotic coordination, and medical decision support. We further examine unique challenges in each paradigm, including hallucination, brittleness, emergent behavior, and coordination failure, and propose targeted solutions such as ReAct loops, RAG, orchestration layers, and causal modeling. This work aims to provide a definitive roadmap for developing robust, scalable, and explainable AI-driven systems.

Index Terms—AI Agents, Agentic AI, Autonomy, Reasoning, Context Awareness, Multi-Agent Systems, Conceptual Taxonomy, Vision-Language Models

Fig. 1: Global Google search trends showing rising interest in "AI Agents" and "Agentic AI" since November 2022 (ChatGPT era).

I. INTRODUCTION

Prior to the widespread adoption of AI agents and agentic AI around 2022 (the pre-ChatGPT era), the development of autonomous and intelligent agents was deeply rooted in foundational paradigms of artificial intelligence, particularly multi-agent systems (MAS) and expert systems, which emphasized social action and distributed intelligence [1], [2]. Notably, Castelfranchi [3] laid critical groundwork by introducing ontological categories for social action, structure, and mind, arguing that sociality emerges from individual agents' actions and cognitive processes in a shared environment, with concepts like goal delegation and adoption forming the basis for cooperation and organizational behavior. Similarly, Ferber [4] provided a comprehensive framework for MAS, defining agents as entities with autonomy, perception, and communication capabilities, and highlighting their applications in distributed problem-solving, collective robotics, and synthetic world simulations. These early works established that individual social actions and cognitive architectures are fundamental to modeling collective phenomena, setting the stage for modern AI agents.
This paper builds on these insights to explore how social action modeling, as proposed in [3], [4], informs the design of AI agents capable of complex, socially intelligent interactions in dynamic environments.

Early agent-like systems were designed to perform specific tasks with predefined rules, limited autonomy, and minimal adaptability to dynamic environments. They were primarily reactive or deliberative, relying on symbolic reasoning, rule-based logic, or scripted behaviors rather than the learning-driven, context-aware capabilities of modern AI agents [5], [6]. For instance, expert systems used knowledge bases and inference engines to emulate human decision-making in domains like medical diagnosis (e.g., MYCIN [7]). Reactive agents, such as those in robotics, followed sense-act cycles based on hardcoded rules, as seen in early autonomous vehicles like the Stanford Cart [8]. Multi-agent systems facilitated coordination among distributed entities, exemplified by auction-based resource allocation in supply chain management [9], [10]. Scripted AI in video games, like NPC behaviors in early RPGs, used predefined decision trees [11]. Furthermore, BDI (Belief-Desire-Intention) architectures enabled goal-directed behavior in software agents, such as those in air traffic control simulations [12], [13]. These early systems lacked the generative capacity, self-learning, and environmental adaptability of modern agentic AI, which leverages deep learning, reinforcement learning, and large-scale data [14].

Recent public and academic interest in AI Agents and Agentic AI reflects this broader transition in system capabilities. As illustrated in Figure 1, Google Trends data demonstrates a significant rise in global search interest for both terms
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 0, "page_label": "1", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134900"}
following the emergence of large-scale generative models in late 2022. This shift is closely tied to the evolution of agent design from the pre-2022 era, where AI agents operated in constrained, rule-based environments, to the post-ChatGPT period marked by learning-driven, flexible architectures [15]–[17]. These newer systems enable agents to refine their performance over time and interact autonomously with unstructured, dynamic inputs [18]–[20]. For instance, while pre-modern expert systems required manual updates to static knowledge bases, modern agents leverage emergent neural behaviors to generalize across tasks [17]. The rise in trend activity reflects increasing recognition of these differences. Moreover, applications are no longer confined to narrow domains like simulations or logistics, but now extend to open-world settings demanding real-time reasoning and adaptive control. This momentum, as visualized in Figure 1, underscores the significance of recent architectural advances in scaling autonomous agents for real-world deployment.

The release of ChatGPT in November 2022 marked a pivotal inflection point in the development and public perception of artificial intelligence, catalyzing a global surge in adoption, investment, and research activity [21]. In the wake of this breakthrough, the AI landscape underwent a rapid transformation, shifting from the use of standalone LLMs toward more autonomous, task-oriented frameworks [22]. This evolution progressed through two major post-generative phases: AI Agents and Agentic AI. Initially, the widespread success of ChatGPT popularized Generative Agents, which are LLM-based systems designed to produce novel outputs such as text, images, and code from user prompts [23], [24]. These agents were quickly adopted across applications ranging from conversational assistants (e.g., GitHub Copilot [25]) and content-generation platforms (e.g., Jasper [26]) to creative tools (e.g., Midjourney [27]), revolutionizing domains like digital design, marketing, and software prototyping throughout 2023.

Although the term AI agent was first introduced in 1998 [3], it has since evolved significantly with the rise of generative AI. Building upon this generative foundation, a new class of systems, commonly referred to as AI agents, has emerged. These agents enhance LLMs with capabilities for external tool use, function calling, and sequential reasoning, enabling them to retrieve real-time information and execute multi-step workflows autonomously [28], [29]. Frameworks such as AutoGPT [30] and BabyAGI (https://github.com/yoheinakajima/babyagi) exemplified this transition, showcasing how LLMs could be embedded within feedback loops to dynamically plan, act, and adapt in goal-driven environments [31], [32]. By late 2023, the field had advanced further into the realm of Agentic AI: complex, multi-agent systems in which specialized agents collaboratively decompose goals, communicate, and coordinate toward shared objectives. In line with this evolution, Google introduced the Agent-to-Agent (A2A) protocol in 2025 [33], a proposed standard designed to enable seamless interoperability among agents across different frameworks and vendors. The protocol is built around five core principles: embracing agentic capabilities, building on existing standards, securing interactions by default, supporting long-running tasks, and ensuring modality agnosticism. These guidelines aim to lay the groundwork for a responsive, scalable agentic infrastructure.
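To ground the plan-act-adapt feedback loop attributed above to frameworks like AutoGPT and BabyAGI, the following minimal sketch shows one way such a loop can be structured. It is illustrative only: the `llm`, planning, and execution helpers are hypothetical stand-ins, not the APIs of any named framework.

```python
# Minimal sketch of an AutoGPT/BabyAGI-style feedback loop (illustrative only).
def llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion inference call."""
    raise NotImplementedError

def run_agent(objective: str, max_steps: int = 10) -> list[str]:
    completed: list[str] = []
    # 1. Plan: decompose the high-level objective into subtasks.
    tasks = llm(f"Decompose into numbered subtasks: {objective}").splitlines()
    step = 0
    while tasks and step < max_steps:
        task = tasks.pop(0)
        # 2. Act: attempt the subtask (tool calls would happen here).
        result = llm(f"Objective: {objective}\nComplete this subtask: {task}")
        completed.append(f"{task} -> {result[:80]}")
        # 3. Adapt: re-plan based on what has been accomplished so far.
        replan = llm(
            f"Objective: {objective}\nDone: {completed}\n"
            f"Remaining: {tasks}\nRevise the remaining task list."
        )
        tasks = [t for t in replan.splitlines() if t.strip()]
        step += 1
    return completed
```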
Architectures such as CrewAI demonstrate how these agentic frameworks can orchestrate decision-making across distributed roles, facilitating intelligent behavior in high-stakes applications including autonomous robotics, logistics management, and adaptive decision support [34]–[37].

As the field progresses from Generative Agents toward increasingly autonomous systems, it becomes critically important to delineate the technological and conceptual boundaries between AI Agents and Agentic AI. While both paradigms build upon LLMs and extend the capabilities of generative systems, they embody fundamentally different architectures, interaction models, and levels of autonomy. AI Agents are typically designed as single-entity systems that perform goal-directed tasks by invoking external tools, applying sequential reasoning, and integrating real-time information to complete well-defined functions [17], [38]. In contrast, Agentic AI systems are composed of multiple, specialized agents that coordinate, communicate, and dynamically allocate sub-tasks within a broader workflow [14], [39]. This architectural distinction underpins profound differences in scalability, adaptability, and application scope.

Understanding and formalizing the taxonomy between these two paradigms (AI Agents and Agentic AI) is scientifically significant for several reasons. First, it enables more precise system design by aligning computational frameworks with problem complexity, ensuring that AI Agents are deployed for modular, tool-assisted tasks, while Agentic AI is reserved for orchestrated multi-agent operations. Moreover, it allows for appropriate benchmarking and evaluation: performance metrics, safety protocols, and resource requirements differ markedly between individual-task agents and distributed agent systems. Additionally, a clear taxonomy reduces development inefficiencies by preventing the misapplication of design principles, such as assuming inter-agent collaboration in a system architected for single-agent execution. Without this clarity, practitioners risk both under-engineering complex scenarios that require agentic coordination and over-engineering simple applications that could be solved with a single AI Agent.

The field of artificial intelligence has seen significant advancements, particularly in the development of AI Agents and Agentic AI. These terms, while related, refer to distinct concepts with different capabilities and applications. This article aims to clarify the differences between AI Agents and Agentic AI, providing researchers with a foundational understanding of these technologies. The objective of this study is to formalize the distinctions, establish a shared vocabulary, and provide a structured taxonomy between AI Agents and Agentic AI that informs the next generation of intelligent agent design across academic and industrial domains, as illustrated in Figure 2.

This review provides a comprehensive conceptual and architectural analysis of the progression from traditional AI Agents
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 1, "page_label": "2", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134910"}
AI Agents & Agentic AI Architecture Mechanisms Scope/ Complexity Interaction Autonomy Fig. 2: Mind map of Research Questions relevant to AI Agents and Agentic AI. Each color-coded branch represents a key dimension of comparison: Architecture, Mechanisms, Scope/Complexity, Interaction, and Autonomy. to emergent Agentic AI systems. Rather than organizing the study around formal research questions, we adopt a sequential, layered structure that mirrors the historical and technical evolution of these paradigms. Beginning with a detailed de- scription of our search strategy and selection criteria, we first establish the foundational understanding of AI Agents by analyzing their defining attributes, such as autonomy, reac- tivity, and tool-based execution. We then explore the critical role of foundational models specifically LLMs and Large Image Models (LIMs) which serve as the core reasoning and perceptual substrates that drive agentic behavior. Subsequent sections examine how generative AI systems have served as precursors to more dynamic, interactive agents, setting the stage for the emergence of Agentic AI. Through this lens, we trace the conceptual leap from isolated, single-agent systems to orchestrated multi-agent architectures, highlight- ing their structural distinctions, coordination strategies, and collaborative mechanisms. We further map the architectural evolution by dissecting the core system components of both AI Agents and Agentic AI, offering comparative insights into their planning, memory, orchestration, and execution layers. Building upon this foundation, we review application domains spanning customer support, healthcare, research automation, and robotics, categorizing real-world deployments by system capabilities and coordination complexity. We then assess key challenges faced by both paradigms including hallucination, limited reasoning depth, causality deficits, scalability issues, and governance risks. To address these limitations, we outline emerging solutions such as retrieval-augmented generation, tool-based reasoning, memory architectures, and simulation- based planning. The review culminates in a forward-looking roadmap that envisions the convergence of modular AI Agents and orchestrated Agentic AI in mission-critical domains. Over- all, this paper aims to provide researchers with a structured taxonomy and actionable insights to guide the design, deploy- ment, and evaluation of next-generation agentic systems. A. Methodology Overview This review adopts a structured, multi-stage methodology designed to capture the evolution, architecture, application, and limitations of AI Agents and Agentic AI. The process is visually summarized in Figure 3, which delineates the sequential flow of topics explored in this study. The analytical framework was organized to trace the progression from basic agentic constructs rooted in LLMs to advanced multi-agent orchestration systems. Each step of the review was grounded in rigorous literature synthesis across academic sources and AI- powered platforms, enabling a comprehensive understanding of the current landscape and its emerging trajectories. The review begins by establishing a foundational under- standing of AI Agents, examining their core definitions, design principles, and architectural modules as described in the litera- ture. These include components such as perception, reasoning, and action selection, along with early applications like cus- tomer service bots and retrieval assistants. 
This foundational layer serves as the conceptual entry point into the broader agentic paradigm.

Next, we delve into the role of LLMs as core reasoning components, emphasizing how pre-trained language models underpin modern AI Agents. This section details how LLMs, through instruction fine-tuning and reinforcement learning from human feedback (RLHF), enable natural language interaction, planning, and limited decision-making capabilities. We also identify their limitations, such as hallucinations, static knowledge, and a lack of causal reasoning.

Building on these foundations, the review proceeds to the emergence of Agentic AI, which represents a significant conceptual leap. Here, we highlight the transformation from tool-augmented single-agent systems to collaborative, distributed ecosystems of interacting agents. This shift is driven by the need for systems capable of decomposing goals, assigning subtasks, coordinating outputs, and adapting dynamically to changing contexts, capabilities that surpass what isolated AI Agents can offer.

The next section examines the architectural evolution from AI Agents to Agentic AI systems, contrasting simple, modular agent designs with complex orchestration frameworks. We describe enhancements such as persistent memory, meta-agent coordination, multi-agent planning loops (e.g., ReAct and Chain-of-Thought prompting), and semantic communication protocols. Comparative architectural analysis is supported with examples from platforms like AutoGPT, CrewAI, and LangGraph.

Following the architectural exploration, the review presents an in-depth analysis of application domains where AI Agents and Agentic AI are being deployed. This includes six key
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 2, "page_label": "3", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134911"}
application areas for each paradigm, ranging from knowledge retrieval, email automation, and report summarization for AI Agents, to research assistants, robotic swarms, and strategic business planning for Agentic AI. Use cases are discussed in the context of system complexity, real-time decision-making, and collaborative task execution.

Fig. 3: Methodology pipeline from foundational AI Agents to Agentic AI systems, applications, limitations, and solution strategies.

Subsequently, we address the challenges and limitations inherent to both paradigms. For AI Agents, we focus on issues like hallucination, prompt brittleness, limited planning ability, and lack of causal understanding. For Agentic AI, we identify higher-order challenges such as inter-agent misalignment, error propagation, unpredictability of emergent behavior, explainability deficits, and adversarial vulnerabilities. These problems are critically examined with references to recent experimental studies and technical reports.

Finally, the review outlines potential solutions to overcome these challenges, drawing on recent advances in causal modeling, retrieval-augmented generation (RAG), multi-agent memory frameworks, and robust evaluation pipelines. These strategies are discussed not only as technical fixes but as foundational requirements for scaling agentic systems into high-stakes domains such as healthcare, finance, and autonomous robotics.

Taken together, this methodological structure enables a comprehensive and systematic assessment of the state of AI Agents and Agentic AI. By sequencing the analysis across foundational understanding, model integration, architectural growth, applications, and limitations, the study aims to provide both theoretical clarity and practical guidance to researchers and practitioners navigating this rapidly evolving field.

1) Search Strategy: To construct this review, we implemented a hybrid search methodology combining traditional academic repositories and AI-enhanced literature discovery tools. Specifically, twelve platforms were queried: academic databases such as Google Scholar, IEEE Xplore, ACM Digital Library, Scopus, Web of Science, ScienceDirect, and arXiv; and AI-powered interfaces including ChatGPT, Perplexity.ai, DeepSeek, Hugging Face Search, and Grok. Search queries incorporated Boolean combinations of terms such as "AI Agents," "Agentic AI," "LLM Agents," "Tool-augmented LLMs," and "Multi-Agent AI Systems." Targeted queries such as "Agentic AI + Coordination + Planning" and "AI Agents + Tool Usage + Reasoning" were employed to retrieve papers addressing both conceptual underpinnings and system-level implementations. Literature inclusion was based on criteria such as novelty, empirical evaluation, architectural contribution, and citation impact. The rising global interest in these technologies, as illustrated in Figure 1 using Google Trends data, underscores the urgency of synthesizing this emerging knowledge space.

II. FOUNDATIONAL UNDERSTANDING OF AI AGENTS

AI Agents are autonomous software entities engineered for goal-directed task execution within bounded digital environments [14], [40].
These agents are defined by their ability to perceive structured or unstructured inputs [41], reason over contextual information [42], [43], and initiate actions toward achieving specific objectives, often acting as surrogates for human users or subsystems [44]. Unlike conventional automation scripts, which follow deterministic workflows, AI agents demonstrate reactive intelligence and limited adaptability, allowing them to interpret dynamic inputs and reconfigure outputs accordingly [45].
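The contrast between a deterministic script and a reactive agent can be made concrete with a short sketch. Everything here is illustrative: the `llm_classify` helper is a hypothetical stand-in for an LLM call, not a reference implementation.

```python
# Deterministic automation: a fixed workflow that cannot handle unforeseen inputs.
def scripted_handler(ticket: str) -> str:
    if "refund" in ticket.lower():
        return "route:billing"
    if "password" in ticket.lower():
        return "route:it_support"
    return "route:general"  # everything else falls through unchanged

# Reactive AI agent: interprets the input in context and reconfigures its output.
def llm_classify(text: str, labels: list[str]) -> str:
    """Hypothetical call to an LLM that picks the best-fitting label."""
    raise NotImplementedError

def agent_handler(ticket: str) -> str:
    label = llm_classify(ticket, ["billing", "it_support", "sales", "general"])
    return f"route:{label}"
```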
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 3, "page_label": "4", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134913"}
Their adoption has been reported across a range of application domains, including customer service automation [46], [47], personal productivity assistance [48], internal information retrieval [49], [50], and decision support systems [51], [52]. A noteworthy example of autonomous AI agents is Anthropic's "Computer Use" project, where Claude was trained to navigate computers to automate repetitive processes, build and test software, and perform open-ended tasks such as research [53].

Fig. 4: Core characteristics of AI Agents (autonomy, task-specificity, and reactivity), illustrated with symbolic representations for agent design and operational behavior.

1) Overview of Core Characteristics of AI Agents: AI Agents are widely conceptualized as instantiated operational embodiments of artificial intelligence designed to interface with users, software ecosystems, or digital infrastructures in pursuit of goal-directed behavior [54]–[56]. These agents distinguish themselves from general-purpose LLMs by exhibiting structured initialization, bounded autonomy, and persistent task orientation. While LLMs primarily function as reactive prompt followers [57], AI Agents operate within explicitly defined scopes, engaging dynamically with inputs and producing actionable outputs in real-time environments [58].

Figure 4 illustrates the three foundational characteristics that recur across architectural taxonomies and empirical deployments of AI Agents. These include autonomy, task-specificity, and reactivity with adaptation. First, autonomy denotes the agent's ability to act independently post-deployment, minimizing human-in-the-loop dependencies and enabling large-scale, unattended operation [47], [59]. Second, task-specificity encapsulates the design philosophy of AI agents being specialized for narrowly scoped tasks, allowing high-performance optimization within a defined functional domain such as scheduling, querying, or filtering [60], [61]. Third, reactivity refers to an agent's capacity to respond to changes in its environment, including user commands, software states, or API responses; when extended with adaptation, this includes feedback loops and basic learning heuristics [17], [62]. Together, these three traits provide a foundational profile for understanding and evaluating AI Agents across deployment scenarios. The remainder of this section elaborates on each characteristic, offering theoretical grounding and illustrative examples.

• Autonomy: A central feature of AI Agents is their ability to function with minimal or no human intervention after deployment [59]. Once initialized, these agents are capable of perceiving environmental inputs, reasoning over contextual data, and executing predefined or adaptive actions in real-time [17]. Autonomy enables scalable deployment in applications where persistent oversight is impractical, such as customer support bots or scheduling assistants [47], [63].

• Task-Specificity: AI Agents are purpose-built for narrow, well-defined tasks [60], [61]. They are optimized to execute repeatable operations within a fixed domain, such as email filtering [64], [65], database querying [66], or calendar coordination [39], [67]. This task specialization allows for efficiency, interpretability, and high precision in automation tasks where general-purpose reasoning is unnecessary or inefficient.

• Reactivity and Adaptation: AI Agents often include basic mechanisms for interacting with dynamic inputs, allowing them to respond to real-time stimuli such as user requests, external API calls, or state changes in software environments [17], [62].
Some systems integrate rudimentary learning [68] through feedback loops [69], [70], heuristics [71], or updated context buffers to refine behavior over time, particularly in settings like personalized recommendations or conversation flow management [72]–[74]. A minimal sketch of such a feedback-driven refinement loop follows this list.
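As referenced above, the following is a minimal sketch of feedback-driven adaptation, assuming a hypothetical `llm` call and a simple bounded context buffer; real systems would persist and rank this state far more carefully.

```python
from collections import deque

def llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real inference API."""
    raise NotImplementedError

class AdaptiveRecommender:
    """Toy reactive agent that refines outputs from user feedback."""

    def __init__(self, max_context: int = 20):
        # Bounded context buffer: recent interactions inform future outputs.
        self.context: deque[str] = deque(maxlen=max_context)

    def recommend(self, request: str) -> str:
        history = "\n".join(self.context)
        return llm(f"Past feedback:\n{history}\n\nUser request: {request}")

    def feedback(self, note: str) -> None:
        # Heuristic update: store the signal so later prompts reflect it.
        self.context.append(note)
```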
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 4, "page_label": "5", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134914"}
These core characteristics collectively enable AI Agents to serve as modular, lightweight interfaces between pretrained AI models and domain-specific utility pipelines. Their architectural simplicity and operational efficiency position them as key enablers of scalable automation across enterprise, consumer, and industrial settings. Although still limited in reasoning depth compared to more general AI systems [75], their high usability and performance within constrained task boundaries have made them foundational components in contemporary intelligent system design.

2) Foundational Models: The Role of LLMs and LIMs: The foundational progress in AI agents has been significantly accelerated by the development and deployment of LLMs and LIMs, which serve as the core reasoning and perception engines in contemporary agent systems. These models enable AI agents to interact intelligently with their environments, understand multimodal inputs, and perform complex reasoning tasks that go beyond hard-coded automation.

LLMs such as GPT-4 [76] and PaLM [77] are trained on massive datasets of text from books, web content, and dialogue corpora. These models exhibit emergent capabilities in natural language understanding, question answering, summarization, dialogue coherence, and even symbolic reasoning [78], [79]. Within AI agent architectures, LLMs serve as the primary decision-making engine, allowing the agent to parse user queries, plan multi-step solutions, and generate naturalistic responses. For instance, an AI customer support agent powered by GPT-4 can interpret customer complaints, query backend systems via tool integration, and respond in a contextually appropriate and emotionally aware manner [80], [81].

Large Image Models (LIMs) such as CLIP [82] and BLIP-2 [83] extend the agent's capabilities into the visual domain. Trained on image-text pairs, LIMs enable perception-based tasks including image classification, object detection, and vision-language grounding. These capabilities are increasingly vital for agents operating in domains such as robotics [84], autonomous vehicles [85], [86], and visual content moderation [87], [88].

For example, as illustrated in Figure 5, in an autonomous drone agent tasked with inspecting orchards, a LIM can identify diseased fruits [89] or damaged branches by interpreting live aerial imagery. Upon detection, the system autonomously triggers predefined intervention protocols, such as notifying horticultural staff or marking the location for targeted treatment, without requiring human intervention [17], [59]. This workflow exemplifies the autonomy and reactivity of AI agents in agricultural environments, and recent literature underscores the growing sophistication of such drone-based AI agents.

Fig. 5: An AI agent–enabled drone autonomously inspects an orchard, identifying diseased fruits and damaged branches using vision models, and triggers real-time alerts for targeted horticultural interventions.
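A compact sketch of the detect-and-trigger workflow in Figure 5 is given below. The `classify_frame` and `notify_staff` helpers are hypothetical placeholders for a vision-language model and an alerting service, respectively; the perceive-decide-act structure, not any specific API, is the point.

```python
# Illustrative perceive -> decide -> act loop for a vision-driven inspection agent.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str                      # e.g., "diseased_fruit", "damaged_branch"
    confidence: float
    location: tuple[float, float]   # GPS coordinates of the frame

def classify_frame(frame: bytes) -> Detection:
    """Hypothetical LIM inference (e.g., a CLIP-style classifier)."""
    raise NotImplementedError

def notify_staff(detection: Detection) -> None:
    """Hypothetical alerting hook (email, dashboard, work order, ...)."""
    raise NotImplementedError

INTERVENTION_LABELS = {"diseased_fruit", "damaged_branch"}

def inspect(frames: list[bytes], threshold: float = 0.8) -> list[Detection]:
    flagged = []
    for frame in frames:
        det = classify_frame(frame)              # perceive
        if det.label in INTERVENTION_LABELS and det.confidence >= threshold:
            notify_staff(det)                    # act: predefined protocol
            flagged.append(det)                  # mark location for treatment
    return flagged
```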
Chitra et al. [90] provide a comprehensive overview of AI algorithms foundational to embodied agents, highlighting the integration of computer vision, SLAM, reinforcement learning, and sensor fusion. These components collectively support real-time perception and adaptive navigation in dynamic environments. Kourav et al. [91] further emphasize the role of natural language processing and large language models in generating drone action plans from human-issued queries, demonstrating how LLMs support naturalistic interaction and mission planning. Similarly, Natarajan et al. [92] explore deep learning and reinforcement learning for scene understanding, spatial mapping, and multi-agent coordination in aerial robotics. These studies converge on the critical importance of AI-driven autonomy, perception, and decision-making in advancing drone-based agents.

Importantly, LLMs and LIMs are often accessed via inference APIs provided by cloud-based platforms such as OpenAI (https://openai.com/), Hugging Face (https://huggingface.co/), and Google Gemini (https://gemini.google.com/app). These services abstract away the complexity of model training and fine-tuning, enabling developers to rapidly build and deploy agents equipped with state-of-the-art reasoning and perceptual abilities. This composability accelerates prototyping and allows agent frameworks like LangChain [93] and AutoGen [94] to orchestrate LLM and LIM outputs across task workflows.

In short, foundational models give modern AI agents their basic understanding of language and visuals. Language models help them reason with words, and image models help them understand pictures; working together, they allow AI to make smart decisions in complex situations.

3) Generative AI as a Precursor: A consistent theme in the literature is the positioning of generative AI as the foundational precursor to agentic intelligence. These systems primarily operate on pretrained LLMs and LIMs, which are optimized to synthesize novel content (text, images, audio, or code) based on input prompts. While highly expressive, generative models fundamentally exhibit reactive behavior: they produce output only when explicitly prompted and do not pursue goals autonomously or engage in self-initiated reasoning [95], [96].

Key Characteristics of Generative AI:
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 5, "page_label": "6", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134915"}
• Reactivity: As non-autonomous systems, generative models are exclusively input-driven [97], [98]. Their operations are triggered by user-specified prompts, and they lack internal states, persistent memory, or goal-following mechanisms [99]–[101].

• Multimodal Capability: Modern generative systems can produce a diverse array of outputs, including coherent narratives, executable code, realistic images, and even speech transcripts. For instance, models like GPT-4 [76], PaLM-E [102], and BLIP-2 [83] exemplify this capacity, enabling language-to-image, image-to-text, and cross-modal synthesis tasks.

• Prompt Dependency and Statelessness: Generative systems are stateless in that they do not retain context across interactions unless it is explicitly provided [103], [104], although recent advancements like GPT-4.1 support larger context windows (up to 1 million tokens) and are better able to utilize that context thanks to improved long-text comprehension [105]. Their design also lacks intrinsic feedback loops [106], state management [107], [108], and multi-step planning, all requirements for autonomous decision-making and iterative goal refinement [109], [110].

Despite their remarkable generative fidelity, these systems are constrained by their inability to act upon the environment or manipulate digital tools independently. For instance, they cannot search the internet, parse real-time data, or interact with APIs without human-engineered wrappers or scaffolding layers. As such, they fall short of being classified as true AI Agents, whose architectures integrate perception, decision-making, and external tool use within closed feedback loops.

The limitations of generative AI in handling dynamic tasks, maintaining state continuity, or executing multi-step plans led to the development of tool-augmented systems, commonly referred to as AI Agents [111]. These systems build upon the language processing backbone of LLMs but introduce additional infrastructure, such as memory buffers, tool-calling APIs, reasoning chains, and planning routines, to bridge the gap between passive response generation and active task completion. This architectural evolution marks a critical shift in AI system design: from content creation to autonomous utility [112], [113]. The trajectory from generative systems to AI agents underscores a progressive layering of functionality that ultimately supports the emergence of agentic behaviors.

A. Language Models as the Engine for AI Agent Progression

The emergence of AI agents as a transformative paradigm in artificial intelligence is closely tied to the evolution and repurposing of large-scale language models such as GPT-3 [114], Llama [115], T5 [116], Baichuan 2 [117], and GPT3Mix [118]. A substantial and growing body of research confirms that the leap from reactive generative models to autonomous, goal-directed agents is driven by the integration of LLMs as core reasoning engines within dynamic agentic systems. These models, originally trained for natural language processing tasks, are increasingly embedded in frameworks that require adaptive planning [119], [120], real-time decision-making [121], [122], and environment-aware behavior [123].
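Statelessness, as described in the bullets above, means the model retains nothing between calls: any continuity must be supplied by the caller. A minimal sketch, assuming a generic `complete(messages)` inference call (hypothetical, standing in for any chat-completion API):

```python
# Each call is independent: the full conversation must be resent every time.
def complete(messages: list[dict]) -> str:
    """Hypothetical chat-completion call (provider-agnostic)."""
    raise NotImplementedError

history: list[dict] = []

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    # Without replaying `history`, the model has no memory of earlier turns;
    # the "state" lives entirely in this client-side buffer.
    reply = complete(history)
    history.append({"role": "assistant", "content": reply})
    return reply
```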
1) LLMs as Core Reasoning Components: LLMs such as GPT-4 [76], PaLM [77], Claude (https://www.anthropic.com/news/claude-3-5-sonnet), and LLaMA [115] are pre-trained on massive text corpora using self-supervised objectives and fine-tuned using techniques such as Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF) [124], [125]. These models encode rich statistical and semantic knowledge, allowing them to perform tasks like inference, summarization, code generation, and dialogue management. However, in agentic contexts, their capabilities extend beyond response generation. They function as cognitive engines that interpret user goals, formulate and evaluate possible action plans, select the most appropriate strategies, leverage external tools, and manage complex, multi-step workflows.

Recent work identifies these models as central to the architecture of contemporary agentic systems. For instance, AutoGPT [30] and BabyAGI (https://github.com/yoheinakajima/babyagi) use GPT-4 as both a planner and executor: the model analyzes high-level objectives, decomposes them into actionable subtasks, invokes external APIs as needed, and monitors progress to determine subsequent actions. In such systems, the LLM operates in a loop of prompt processing, state updating, and feedback-based correction, closely emulating autonomous decision-making.

2) Tool-Augmented AI Agents: Enhancing Functionality: To overcome limitations inherent to generative-only systems, such as hallucination, static knowledge cutoffs, and restricted interaction scopes, researchers have proposed the concept of tool-augmented LLM agents [126], such as EasyTool [127], Gentopia [128], and ToolFive [129]. These systems integrate external tools, APIs, and computation platforms into the agent's reasoning pipeline, allowing for real-time information access, code execution, and interaction with dynamic data environments.

Tool Invocation. When an agent identifies a need that cannot be addressed through its internal knowledge, such as querying a current stock price, retrieving up-to-date weather information, or executing a script, it generates a structured function call or API request [130], [131]. These calls are typically formatted as JSON, SQL, or Python dictionaries, depending on the target service, and routed through an orchestration layer that executes the task.

Result Integration. Once a response is received from the tool, the output is parsed and reincorporated into the LLM's context window. This enables the agent to synthesize new reasoning paths, update its task status, and decide on the next step. The ReAct framework [132] exemplifies this architecture by combining reasoning (Chain-of-Thought prompting) and action (tool use), with LLMs alternating between internal cognition and external environment interaction.
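The tool-invocation and result-integration steps described above compose naturally into a ReAct-style loop. The sketch below assumes a hypothetical `llm_step` that returns either a JSON tool call or a final answer; it illustrates the control flow, not any specific framework's API.

```python
import json

def llm_step(transcript: str) -> str:
    """Hypothetical LLM call returning JSON, either:
    {"tool": "...", "args": {...}}  or  {"answer": "..."}"""
    raise NotImplementedError

# Registry of callable tools the orchestration layer can execute (stubs).
TOOLS = {
    "get_weather": lambda city: f"72F and sunny in {city}",
    "get_stock":   lambda ticker: f"{ticker}: 123.45",
}

def react_loop(question: str, max_turns: int = 5) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_turns):
        step = json.loads(llm_step(transcript))    # reason: decide next move
        if "answer" in step:
            return step["answer"]                  # task complete
        tool, args = step["tool"], step["args"]
        result = TOOLS[tool](**args)               # act: execute the tool call
        # Integrate the observation back into the LLM's context window.
        transcript += f"\nAction: {tool}({args})\nObservation: {result}"
    return "Stopped: turn limit reached."
```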
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 6, "page_label": "7", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134916"}
A prominent example of a tool-augmented AI agent is ChatGPT, which, when unable to answer a query directly, autonomously invokes the Web Search API to retrieve more recent and relevant information, performs reasoning over the retrieved content, and formulates a response based on its understanding [133].

3) Illustrative Examples and Emerging Capabilities: Tool-augmented LLM agents have demonstrated capabilities across a range of applications. In AutoGPT [30], the agent may plan a product market analysis by sequentially querying the web, compiling competitor data, summarizing insights, and generating a report. In a coding context, tools like GPT-Engineer combine LLM-driven design with local code execution environments to iteratively develop software artifacts [134], [135]. In research domains, systems like Paper-QA [136] utilize LLMs to query vectorized academic databases, grounding answers in retrieved scientific literature to ensure factual integrity.

These capabilities have opened pathways for more robust behavior of AI agents, such as long-horizon planning, cross-tool coordination, and adaptive learning loops. Nevertheless, the inclusion of tools also introduces new challenges in orchestration complexity, error propagation, and context window limitations, all active areas of research.

The progression toward AI Agents is inseparable from the strategic integration of LLMs as reasoning engines and their augmentation through structured tool use. This synergy transforms static language models into dynamic cognitive entities capable of perceiving, planning, acting, and adapting, setting the stage for multi-agent collaboration, persistent memory, and scalable autonomy. Figure 6 illustrates a representative case: a news query agent that performs real-time web search, summarizes retrieved documents, and generates an articulate, context-aware answer. Such workflows have been demonstrated in implementations using LangChain, AutoGPT, and OpenAI function-calling paradigms.

Fig. 6: Illustrating the workflow of an AI Agent performing real-time news search, summarization, and answer generation.
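The Figure 6 workflow can be sketched as three stages: search, summarize, answer. The `web_search` and `llm` helpers below are hypothetical placeholders rather than LangChain or OpenAI APIs; the sketch only mirrors the retrieve-then-ground pattern the figure depicts.

```python
def web_search(query: str, k: int = 5) -> list[str]:
    """Hypothetical search tool returning the top-k document snippets."""
    raise NotImplementedError

def llm(prompt: str) -> str:
    """Hypothetical LLM completion call."""
    raise NotImplementedError

def answer_news_query(question: str) -> str:
    # 1. Real-time web search grounds the agent in current information.
    docs = web_search(question)
    # 2. Summarize each retrieved document to fit the context window.
    summaries = [llm(f"Summarize in two sentences:\n{d}") for d in docs]
    # 3. Generate an answer conditioned on the numbered summaries.
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(summaries))
    return llm(
        f"Using only the numbered sources below, answer the question.\n"
        f"Question: {question}\nSources:\n{context}"
    )
```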
III. THE EMERGENCE OF AGENTIC AI FROM AI AGENT FOUNDATIONS

While AI Agents represent a significant leap in artificial intelligence capabilities, particularly in automating narrow tasks through tool-augmented reasoning, recent literature identifies notable limitations that constrain their scalability in complex, multi-step, or cooperative scenarios [137]–[139]. These constraints have catalyzed the development of a more advanced paradigm: Agentic AI. This emerging class of systems extends the capabilities of traditional agents by enabling multiple intelligent entities to collaboratively pursue goals through structured communication [140]–[142], shared memory [143], [144], and dynamic role assignment [14].

1) Conceptual Leap: From Isolated Tasks to Coordinated Systems: AI Agents, as explored in prior sections, integrate LLMs with external tools and APIs to execute narrowly scoped operations such as responding to customer queries, performing document retrieval, or managing schedules. However, as use cases increasingly demand context retention, task interdependence, and adaptability across dynamic environments, the single-agent model proves insufficient [145], [146].

Agentic AI systems represent an emergent class of intelligent architectures in which multiple specialized agents collaborate to achieve complex, high-level objectives [33]. As defined in recent frameworks, these systems are composed of modular agents, each tasked with a distinct subcomponent of a broader goal and coordinated through either a centralized orchestrator or a decentralized protocol [16], [141]. This structure signifies a conceptual departure from the atomic, reactive behaviors typically observed in single-agent architectures, toward a form of system-level intelligence characterized by dynamic inter-agent collaboration.

A key enabler of this paradigm is goal decomposition, wherein a user-specified objective is automatically parsed and divided into smaller, manageable tasks by planning agents [39]. These subtasks are then distributed across the agent network. Multi-step reasoning and planning mechanisms facilitate the dynamic sequencing of these subtasks, allowing the system to adapt in real time to environmental shifts or partial task failures. This ensures robust task execution even under uncertainty [14].

Inter-agent communication is mediated through distributed communication channels, such as asynchronous messaging queues, shared memory buffers, or intermediate output exchanges, enabling coordination without necessitating continuous central oversight [14], [147]. Furthermore, reflective reasoning and memory systems allow agents to store context across multiple interactions, evaluate past decisions, and iteratively refine their strategies [148]. Collectively, these capabilities enable Agentic AI systems to exhibit flexible, adaptive, and collaborative intelligence that exceeds the operational limits of individual agents. A minimal sketch of this orchestration pattern follows.
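The following sketch shows one minimal way a planner and specialized worker agents can share memory and coordinate subtasks. All names (`plan`, `WORKERS`, `SharedMemory`) are hypothetical; real frameworks such as CrewAI or AutoGen implement these ideas with far richer protocols.

```python
def llm(prompt: str) -> str:
    """Hypothetical LLM call used by every agent."""
    raise NotImplementedError

class SharedMemory(dict):
    """Toy shared state: agents read and write intermediate results here."""

def plan(goal: str) -> list[tuple[str, str]]:
    """Planner agent: decompose the goal into (worker, subtask) pairs."""
    lines = llm(f"Decompose into 'worker: subtask' lines: {goal}").splitlines()
    return [tuple(map(str.strip, ln.split(":", 1))) for ln in lines if ":" in ln]

# Specialized worker agents, each owning a narrow role.
WORKERS = {
    "researcher": lambda task, mem: llm(f"Research: {task}\nKnown: {dict(mem)}"),
    "writer":     lambda task, mem: llm(f"Write: {task}\nNotes: {dict(mem)}"),
}

def orchestrate(goal: str) -> SharedMemory:
    memory = SharedMemory()
    for worker, subtask in plan(goal):          # centralized orchestration
        result = WORKERS[worker](subtask, memory)
        memory[subtask] = result                # shared memory for later agents
    return memory
```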
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 7, "page_label": "8", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134918"}
Fig. 7: Comparative illustration of AI Agent vs. Agentic AI, synthesizing conceptual distinctions. Left: a single-task AI Agent. Right: a multi-agent, collaborative Agentic AI system.

A widely accepted conceptual illustration in the literature delineates the distinction between AI Agents and Agentic AI through the analogy of smart home systems. As depicted in Figure 7, the left side represents a traditional AI Agent in the form of a smart thermostat. This standalone agent receives a user-defined temperature setting and autonomously controls the heating or cooling system to maintain the target temperature. While it demonstrates limited autonomy, such as learning user schedules or reducing energy usage during absence, it operates in isolation, executing a singular, well-defined task without engaging in broader environmental coordination or goal inference [17], [59].

In contrast, the right side of Figure 7 illustrates an Agentic AI system embedded in a comprehensive smart home ecosystem. Here, multiple specialized agents interact synergistically to manage diverse aspects such as weather forecasting, daily scheduling, energy pricing optimization, security monitoring, and backup power activation. These agents are not just reactive modules; they communicate dynamically, share memory states, and collaboratively align actions toward a high-level system goal (e.g., optimizing comfort, safety, and energy efficiency in real time). For instance, a weather forecast agent might signal upcoming heatwaves, prompting early pre-cooling via solar energy before peak pricing hours, as coordinated by an energy management agent. Simultaneously, the system might delay high-energy tasks or activate surveillance systems during occupant absence, integrating decisions across domains. This figure embodies the architectural and functional leap from task-specific automation to adaptive, orchestrated intelligence. The AI Agent acts as a deterministic component with limited scope, while Agentic AI reflects distributed intelligence, characterized by goal decomposition, inter-agent communication, and contextual adaptation, hallmarks of modern agentic AI frameworks.

2) Key Differentiators between AI Agents and Agentic AI: To systematically capture the evolution from Generative AI to AI Agents and further to Agentic AI, we structure our comparative analysis around a foundational taxonomy where Generative AI serves as the baseline. While AI Agents and Agentic AI represent increasingly autonomous and interactive systems, both paradigms are fundamentally grounded in generative architectures, especially LLMs and LIMs. Consequently, each comparative table in this subsection includes Generative AI as a reference column to highlight how agentic behavior diverges from and builds upon generative foundations.

A set of fundamental distinctions between AI Agents and Agentic AI, particularly in terms of scope, autonomy, architectural composition, coordination strategy, and operational complexity, is synthesized in Table I, derived from close analysis of prominent frameworks such as AutoGen [94] and ChatDev [149]. These comparisons provide a multi-dimensional view of how single-agent systems transition into coordinated, multi-agent ecosystems. Through the lens of generative capabilities, we trace the increasing sophistication in planning, communication, and adaptation that characterizes the shift toward Agentic AI.
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 8, "page_label": "9", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134919"}
TABLE I: Key Differences Between AI Agents and Agentic AI

| Feature | AI Agents | Agentic AI |
|---|---|---|
| Definition | Autonomous software programs that perform specific tasks. | Systems of multiple AI agents collaborating to achieve complex goals. |
| Autonomy Level | High autonomy within specific tasks. | Higher autonomy with the ability to manage multi-step, complex tasks. |
| Task Complexity | Typically handle single, specific tasks. | Handle complex, multi-step tasks requiring coordination. |
| Collaboration | Operate independently. | Involve multi-agent collaboration and information sharing. |
| Learning and Adaptation | Learn and adapt within their specific domain. | Learn and adapt across a wider range of tasks and environments. |
| Applications | Customer service chatbots, virtual assistants, automated workflows. | Supply chain management, business process optimization, virtual project managers. |

While Table I delineates the foundational and operational differences between AI Agents and Agentic AI, a more granular taxonomy is required to understand how these paradigms emerge from and relate to broader generative frameworks. Specifically, the conceptual and cognitive progression from static Generative AI systems to tool-augmented AI Agents, and further to collaborative Agentic AI ecosystems, necessitates an integrated comparative framework. This transition is not merely structural but also functional, encompassing how initiation mechanisms, memory use, learning capacities, and orchestration strategies evolve across the agentic spectrum. Moreover, recent studies suggest the emergence of hybrid paradigms such as "Generative Agents," which blend generative modeling with modular task specialization, further complicating the agentic landscape. In order to capture these nuanced relationships, Table II synthesizes the key conceptual and cognitive dimensions across four archetypes: Generative AI, AI Agents, Agentic AI, and inferred Generative Agents. By positioning Generative AI as a baseline technology, this taxonomy highlights the scientific continuum that spans from passive content generation to interactive task execution and finally to autonomous, multi-agent orchestration. This multi-tiered lens is critical for understanding both the current capabilities and future trajectories of agentic intelligence across applied and theoretical domains.

To further operationalize the distinctions outlined in Table I, Tables II and III extend the comparative lens to encompass a broader spectrum of agent paradigms, including AI Agents, Agentic AI, and emerging Generative Agents. Table III presents key architectural and behavioral attributes that highlight how each paradigm differs in terms of primary capabilities, planning scope, interaction style, learning dynamics, and evaluation criteria. AI Agents are optimized for discrete task execution with limited planning horizons and rely on supervised or rule-based learning mechanisms. In contrast, Agentic AI systems extend this capacity through multi-step planning, meta-learning, and inter-agent communication, positioning them for use in complex environments requiring autonomous goal setting and coordination. Generative Agents, as a more recent construct, inherit LLM-centric pretraining capabilities and excel in producing multimodal content creatively, yet they lack the proactive orchestration and state-persistent behaviors seen in Agentic AI systems. Table IV then provides a process-driven comparison across three agent categories: Generative AI, AI Agents, and Agentic AI.
This framing emphasizes how functional pipelines evolve from prompt-driven single-model inference in Generative AI, to tool-augmented execution in AI Agents, and finally to orchestrated agent networks in Agentic AI. The structure column underscores this progression: from single LLMs to integrated toolchains and ultimately to distributed multi-agent systems. Access to external data, a key operational requirement for real-world utility, also increases in sophistication, from absent or optional in Generative AI to modular and coordinated in Agentic AI. Collectively, these comparative views reinforce that the evolution from generative to agentic paradigms is marked not just by increasing system complexity but also by deeper integration of autonomy, memory, and decision-making across multiple levels of abstraction.

Furthermore, to provide a deeper multi-dimensional understanding of the evolving agentic landscape, Tables V through IX extend the comparative taxonomy to dissect five critical dimensions: core function and goal alignment, architectural composition, operational mechanism, scope and complexity, and interaction-autonomy dynamics. These dimensions serve not only to reinforce the structural differences between Generative AI, AI Agents, and Agentic AI, but also to introduce an emergent category, Generative Agents, representing modular agents designed for embedded subtask-level generation within broader workflows [150]. Table V situates the three paradigms in terms of their overarching goals and functional intent. While Generative AI centers on prompt-driven content generation, AI Agents emphasize tool-based task execution, and Agentic AI systems orchestrate full-fledged workflows. This functional expansion is mirrored architecturally in Table VI, where the system design transitions from single-model reliance (in Generative AI) to multi-agent orchestration and shared memory utilization in Agentic AI. Table VII then outlines how these paradigms differ in their workflow execution pathways, highlighting the rise of inter-agent coordination and hierarchical communication as key drivers of agentic behavior.

Furthermore, Table VIII explores the increasing scope and operational complexity handled by these systems, ranging from isolated content generation to adaptive, multi-agent collaboration in dynamic environments. Finally, Table IX synthesizes the varying degrees of autonomy, interaction style, and decision-making granularity across the paradigms. These tables collectively establish a rigorous framework to classify and analyze agent-based AI systems, laying the groundwork for principled evaluation and future design of autonomous, intelligent, and collaborative agents operating at scale.
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 9, "page_label": "10", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134920"}
TABLE II: Taxonomy Summary of AI Agent Paradigms: Conceptual and Cognitive Dimensions

| Conceptual Dimension | Generative AI | AI Agent | Agentic AI | Generative Agent (Inferred) |
|---|---|---|---|---|
| Initiation Type | Prompt-triggered by user or input | Prompt- or goal-triggered with tool use | Goal-initiated or orchestrated task | Prompt or system-level trigger |
| Goal Flexibility | None; fixed per prompt | Low; executes specific goal | High; decomposes and adapts goals | Low; guided by subtask goal |
| Temporal Continuity | Stateless, single-session output | Short-term continuity within task | Persistent across workflow stages | Context-limited to subtask |
| Learning/Adaptation | Static (pretrained) | Tool selection strategies may evolve (future possibility) | Learns from outcomes | Typically static; limited adaptation |
| Memory Use | No memory or short context window | Optional memory or tool cache | Shared episodic/task memory | Subtask-local or contextual memory |
| Coordination Strategy | None (single-step process) | Isolated task execution | Hierarchical or decentralized coordination | Receives instructions from system |
| System Role | Content generator | Tool-using task executor | Collaborative workflow orchestrator | Subtask-level modular generator |

TABLE III: Key Attributes of AI Agents, Agentic AI, and Generative Agents

| Aspect | AI Agent | Agentic AI | Generative Agent |
|---|---|---|---|
| Primary Capability | Task execution | Autonomous goal setting | Content generation |
| Planning Horizon | Single-step | Multi-step | N/A (content only) |
| Learning Mechanism | Rule-based or supervised | Reinforcement/meta-learning | Large-scale pretraining |
| Interaction Style | Reactive | Proactive | Creative |
| Evaluation Focus | Accuracy, latency | Engagement, adaptability | Coherence, diversity |

TABLE IV: Comparison of Generative AI, AI Agents, and Agentic AI

| Feature | Generative AI | AI Agent | Agentic AI |
|---|---|---|---|
| Core Function | Content generation | Task-specific execution using tools | Complex workflow automation |
| Mechanism | Prompt → LLM → Output | Prompt → Tool Call → LLM → Output | Goal → Agent Orchestration → Output |
| Structure | Single model | LLM + tool(s) | Multi-agent system |
| External Data Access | None (unless added) | Via external APIs | Coordinated multi-agent access |
| Key Trait | Reactivity | Tool use | Collaboration |

Each of the comparative tables presented from Table V through Table IX offers a layered analytical lens to isolate the distinguishing attributes of Generative AI, AI Agents, and Agentic AI, thereby grounding the conceptual taxonomy in concrete operational and architectural features. Table V, for instance, addresses the most fundamental layer of differentiation: core function and system goal. While Generative AI is narrowly focused on reactive content production conditioned on user prompts, AI Agents are characterized by their ability to perform targeted tasks using external tools. Agentic AI, by contrast, is defined by its ability to pursue high-level goals through the orchestration of multiple subagents, each addressing a component of a broader workflow. This shift from output generation to workflow execution marks a critical inflection point in the evolution of autonomous systems.
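The mechanism row of Table IV can be expressed as three progressively richer pipelines. The sketch below is schematic; `llm`, `call_tool`, and the agent registry are hypothetical stand-ins rather than any framework's API.

```python
from typing import Callable

def llm(prompt: str) -> str:
    """Hypothetical model call."""
    raise NotImplementedError

def call_tool(name: str, query: str) -> str:
    """Hypothetical external tool or API call."""
    raise NotImplementedError

# Generative AI: Prompt -> LLM -> Output
def generative(prompt: str) -> str:
    return llm(prompt)

# AI Agent: Prompt -> Tool Call -> LLM -> Output
def ai_agent(prompt: str) -> str:
    evidence = call_tool("search", prompt)          # external data access
    return llm(f"{prompt}\nEvidence: {evidence}")

# Agentic AI: Goal -> Agent Orchestration -> Output
def agentic(goal: str, agents: dict[str, Callable[[str], str]]) -> str:
    subtasks = llm(f"Decompose into subtasks: {goal}").splitlines()
    results = [agents["worker"](t) for t in subtasks]   # coordinated execution
    return agents["synthesizer"]("\n".join(results))    # merge into final output
```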
Agentic AI extends this further by introducing multi-agent systems in which collaboration, memory persistence, and orchestration protocols are central to the system's operation. This expansion is crucial for enabling intelligent delegation, context preservation, and dynamic role assignment, capabilities absent in both generative and single-agent systems. Table VII, likewise, examines how these systems function operationally, emphasizing differences in execution logic and information flow. Unlike Generative AI's linear pipeline (prompt → output), AI Agents implement procedural mechanisms to incorporate tool responses mid-process. Agentic AI introduces recursive task reallocation and cross-agent messaging, thus facilitating emergent decision-making that cannot be captured by static LLM outputs alone. Table VIII further reinforces these distinctions by mapping each system's capacity to handle task diversity, temporal scale, and operational robustness. Here, Agentic AI emerges as uniquely capable of supporting high-complexity goals that demand adaptive, multi-phase reasoning and execution strategies.
Furthermore, Table IX brings into sharp relief the operational and behavioral distinctions across Generative AI, AI Agents, and Agentic AI, with a particular focus on autonomy levels, interaction styles, and inter-agent coordination. Generative AI systems, typified by models such as GPT-3 [114] and DALL·E (https://openai.com/index/dall-e-3/), remain reactive, generating content solely in response to prompts
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 10, "page_label": "11", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134921"}
TABLE V: Comparison by Core Function and Goal

Feature | Generative AI | AI Agent | Agentic AI | Generative Agent (Inferred)
Primary Goal | Create novel content based on prompt | Execute a specific task using external tools | Automate a complex workflow or achieve high-level goals | Perform a specific generative sub-task
Core Function | Content generation (text, image, audio, etc.) | Task execution with external interaction | Workflow orchestration and goal achievement | Sub-task content generation within a workflow

TABLE VI: Comparison by Architectural Components

Component | Generative AI | AI Agent | Agentic AI | Generative Agent (Inferred)
Core Engine | LLM / LIM | LLM | Multiple LLMs (potentially diverse) | LLM
Prompts | Yes (input trigger) | Yes (task guidance) | Yes (system goal and agent tasks) | Yes (sub-task guidance)
Tools/APIs | No (inherently) | Yes (essential) | Yes (available to constituent agents) | Potentially (if sub-task requires)
Multiple Agents | No | No | Yes (essential; collaborative) | No (is an individual agent)
Orchestration | No | No | Yes (implicit or explicit) | No (is part of orchestration)

TABLE VII: Comparison by Operational Mechanism

Mechanism | Generative AI | AI Agent | Agentic AI | Generative Agent (Inferred)
Primary Driver | Reactivity to prompt | Tool calling for task execution | Inter-agent communication and collaboration | Reactivity to input or sub-task prompt
Interaction Mode | User → LLM | User → Agent → Tool | User → System → Agents | System/Agent → Agent → Output
Workflow Handling | Single generation step | Single task execution | Multi-step workflow coordination | Single step within workflow
Information Flow | Input → Output | Input → Tool → Output | Input → Agent1 → Agent2 → ... → Output | Input (from system/agent) → Output

TABLE VIII: Comparison by Scope and Complexity

Aspect | Generative AI | AI Agent | Agentic AI | Generative Agent (Inferred)
Task Scope | Single piece of generated content | Single, specific, defined task | Complex, multi-faceted goal or workflow | Specific sub-task (often generative)
Complexity | Low (relative) | Medium (integrates tools) | High (multi-agent coordination) | Low to Medium (one task component)
Example (Video) | Chatbot | Tavily Search Agent | YouTube-to-Blog Conversion System | Title/Description/Conclusion Generator

TABLE IX: Comparison by Interaction and Autonomy

Feature | Generative AI | AI Agent | Agentic AI | Generative Agent (Inferred)
Autonomy Level | Low (requires prompt) | Medium (uses tools autonomously) | High (manages entire process) | Low to Medium (executes sub-task)
External Interaction | None (baseline) | Via specific tools or APIs | Through multiple agents/tools | Possibly via tools (if needed)
Internal Interaction | N/A | N/A | High (inter-agent) | Receives input from system or agent
Decision Making | Pattern selection | Tool usage decisions | Goal decomposition and assignment | Best sub-task generation strategy
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 11, "page_label": "12", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134922"}
without maintaining persistent state or engaging in iterative reasoning. In contrast, AI Agents, such as those constructed with LangChain [93] or MetaGPT [151], exhibit a higher degree of autonomy, capable of initiating external tool invocations and adapting behaviors within bounded tasks. However, their autonomy is typically confined to isolated task execution, lacking long-term state continuity or collaborative interaction. Agentic AI systems mark a significant departure from these paradigms by introducing internal orchestration mechanisms and multi-agent collaboration frameworks. For example, platforms like AutoGen [94] and ChatDev [149] exemplify agentic coordination through task decomposition, role assignment, and recursive feedback loops. In AutoGen, one agent might serve as a planner while another retrieves information and a third synthesizes a report, each communicating through shared memory buffers and governed by an orchestrator agent that monitors dependencies and overall task progression. This structured coordination allows for more complex goal pursuit and flexible behavior in dynamic environments. Such architectures fundamentally shift the focus of intelligence from single-model outputs to emergent system-level behavior, wherein agents learn, negotiate, and update decisions based on evolving task states. Thus, the comparative taxonomy not only highlights increasing levels of operational independence but also illustrates how Agentic AI introduces novel paradigms of communication, memory integration, and decentralized control, paving the way for the next generation of autonomous systems with scalable, adaptive intelligence.

A. Architectural Evolution: From AI Agents to Agentic AI Systems

While both AI Agents and Agentic AI systems are grounded in modular design principles, Agentic AI significantly extends the foundational architecture to support more complex, distributed, and adaptive behaviors. As illustrated in Figure 8, the transition begins with the core subsystems (Perception, Reasoning, and Action) that define traditional AI Agents. Agentic AI enhances this base by integrating advanced components such as Specialized Agents, Advanced Reasoning & Planning, Persistent Memory, and Orchestration. The figure further emphasizes emergent capabilities including Multi-Agent Collaboration, System Coordination, Shared Context, and Task Decomposition, all encapsulated within a dotted boundary that signifies the shift toward reflective, decentralized, and goal-driven system architectures. This progression marks a fundamental inflection point in intelligent agent design. This section synthesizes findings from empirical frameworks such as LangChain [93], AutoGen [94], and TaskMatrix [152], highlighting this progression in architectural sophistication.

1) Core Architectural Components of AI Agents: Foundational AI Agents are typically composed of four primary subsystems: perception, reasoning, action, and learning. These subsystems form a closed-loop operational cycle, commonly referred to as "Understand, Think, Act" from a user-interface perspective, or "Input, Processing, Action, Learning" in the systems-design literature [14], [153].
• Perception Module: This subsystem ingests input signals from users (e.g., natural language prompts) or external systems (e.g., APIs, file uploads, sensor streams). It is responsible for pre-processing data into a format interpretable by the agent's reasoning module.
For example, in LangChain-based agents [93], [154], the perception layer handles prompt templating, contextual wrapping, and retrieval augmentation via document chunking and embedding search.
• Knowledge Representation and Reasoning (KRR) Module: At the core of the agent's intelligence lies the KRR module, which applies symbolic, statistical, or hybrid logic to input data. Techniques include rule-based logic (e.g., if-then decision trees), deterministic workflow engines, and simple planning graphs. Reasoning in agents like AutoGPT [30] is enhanced with function-calling and prompt chaining to simulate thought processes (e.g., "step-by-step" prompts or intermediate tool invocations).
• Action Selection and Execution Module: This module translates inferred decisions into external actions using an action library. These actions may include sending messages, updating databases, querying APIs, or producing structured outputs. Execution is often managed by middleware like LangChain's "agent executor," which links LLM outputs to tool calls and observes responses for subsequent steps [93].
• Basic Learning and Adaptation: Traditional AI Agents feature limited learning mechanisms, such as heuristic parameter adjustment [155], [156] or history-informed context retention. For instance, agents may use simple memory buffers to recall prior user inputs or apply scoring mechanisms to improve tool selection in future iterations.
Customization of these agents typically involves domain-specific prompt engineering, rule injection, or workflow templates, distinguishing them from hard-coded automation scripts by their ability to make context-aware decisions. Systems like ReAct [132] exemplify this architecture, combining reasoning and action in an iterative framework where agents simulate internal dialogue before selecting external actions (a minimal sketch of such a loop follows below).
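To make the reason-act cycle concrete, the following is a minimal, illustrative sketch of a ReAct-style loop, not any specific framework's API. The scripted llm() stub and the single-entry TOOLS registry are hypothetical placeholders; a real agent would substitute an actual model call and real tool integrations.

```python
# Minimal ReAct-style reason-act loop (illustrative sketch).
# llm() is a scripted stub standing in for a real chat-completion call;
# TOOLS stands in for a real tool registry (search, calculators, etc.).

def llm(transcript: str) -> str:
    """Stub LLM: issues one search action, then finishes."""
    if "Observation:" not in transcript:
        return "search: capital of France"
    return "FINISH: Paris"

TOOLS = {
    "search": lambda query: f"(stub) top result for {query!r}: Paris",
}

def react_agent(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        action = llm(transcript)                      # Reason: pick the next action
        if action.startswith("FINISH:"):
            return action.removeprefix("FINISH:").strip()
        name, _, arg = action.partition(":")          # Act: parse "tool: argument"
        tool = TOOLS.get(name.strip(), lambda a: "unknown tool")
        observation = tool(arg.strip())
        transcript += f"Action: {action}\nObservation: {observation}\n"  # Observe
    return "step budget exhausted"

print(react_agent("What is the capital of France?"))
```

The key design point mirrored here is that each observation is appended to the transcript, so the next reasoning step conditions on all prior actions, which is what distinguishes the loop from a single stateless prompt.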
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 12, "page_label": "13", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134924"}
Multi-Agent Collaboration Task-Decomposition Shared Context System Coordination AI Agents Agentic AI Fig. 8: Illustrating architectural evolution from traditional AI Agents to modern Agentic AI systems. It begins with core modules Perception, Reasoning, and Action and expands into advanced components including Specialized Agents, Advanced Reasoning & Planning, Persistent Memory, and Orchestration. The diagram further captures emergent properties such as Multi- Agent Collaboration, System Coordination, Shared Context, and Task Decomposition, all enclosed within a dotted boundary signifying layered modularity and the transition to distributed, adaptive agentic AI intelligence. roles are modular, reusable, and role-bound. • Advanced Reasoning and Planning: Agentic systems embed recursive reasoning capabilities using frameworks such as ReAct [132], Chain-of-Thought (CoT) prompting [159], and Tree of Thoughts [160]. These mechanisms allow agents to break down a complex task into multiple reasoning stages, evaluate intermediate results, and re- plan actions dynamically. This enables the system to respond adaptively to uncertainty or partial failure. • Persistent Memory Architectures: Unlike traditional agents, Agentic AI incorporates memory subsystems to persist knowledge across task cycles or agent sessions [161], [162]. Memory types include episodic memory (task-specific history) [163], [164], semantic memory (long-term facts or structured data) [165], [166], and vector-based memory for retrieval-augmented generation (RAG) [167], [168]. For example, AutoGen [94] agents maintain scratchpads for intermediate computations, en- abling stepwise task progression. • Orchestration Layers / Meta-Agents: A key innovation in Agentic AI is the introduction of orchestrators meta- agents that coordinate the lifecycle of subordinate agents, manage dependencies, assign roles, and resolve conflicts. Orchestrators often include task managers, evaluators, or moderators. In ChatDev [149], for example, a virtual CEO meta-agent distributes subtasks to departmental agents and integrates their outputs into a unified strategic response. These enhancements collectively enable Agentic AI to sup- port scenarios that require sustained context, distributed labor, multi-modal coordination, and strategic adaptation. Use cases range from research assistants that retrieve, summarize, and draft documents in tandem (e.g., AutoGen pipelines [94]) to smart supply chain agents that monitor logistics, vendor performance, and dynamic pricing models in parallel. The shift from isolated perception–reasoning–action loops to collaborative and reflective multi-agent workflows marks a key inflection point in the architectural design of intelligent systems. This progression positions Agentic AI as the next stage of AI infrastructure capable not only of executing predefined workflows but also of constructing, revising, and managing complex objectives across agents with minimal human supervision. IV. A PPLICATION OF AI AGENTS AND AGENTIC AI To illustrate the real-world utility and operational diver- gence between AI Agents and Agentic AI systems, this study
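As an illustration of the orchestration pattern described above, the sketch below wires three hypothetical role-bound agents (a retriever, a summarizer, and a formatter) to a shared memory dictionary under a simple sequential orchestrator. The agent bodies are stubs written for this sketch, not the API of AutoGen, ChatDev, or any other named framework.

```python
# Illustrative orchestrator over role-bound agents with shared memory.
# Agent bodies are stubs; a real system would back each role with an LLM.

from typing import Callable, Dict, List

SharedMemory = Dict[str, str]

def retriever(mem: SharedMemory) -> None:
    # Stands in for vector-store retrieval keyed on the goal.
    mem["sources"] = f"(stub) three documents relevant to {mem['goal']!r}"

def summarizer(mem: SharedMemory) -> None:
    # Reads the retriever's output from shared memory.
    mem["summary"] = f"(stub) summary of {mem['sources']}"

def formatter(mem: SharedMemory) -> None:
    mem["draft"] = f"(stub) formatted report built from {mem['summary']}"

def orchestrator(goal: str, pipeline: List[Callable[[SharedMemory], None]]) -> str:
    mem: SharedMemory = {"goal": goal}   # persistent, shared context
    for agent in pipeline:               # role assignment and dependency order
        agent(mem)
    return mem["draft"]

print(orchestrator("Draft a literature review", [retriever, summarizer, formatter]))
```

The shared dictionary plays the role of the blackboard or scratchpad memory discussed above: downstream agents read what upstream agents wrote, and the orchestrator fixes the dependency order rather than hard-coding agent-to-agent calls.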
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 13, "page_label": "14", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134925"}
Customer Support Automation and Internal Enterprise Search Email Filtering and Prioritization Personalized Content Recommendation, Basic Data Analysis and Reporting Autonomous Scheduling Assistants Multi-Agent Research Assistants Intelligent Robotics Coordination Collaborative Medical Decision Support Multi-Agent Game AI & Adaptive Workflow Automation Fig. 9: Categorized applications of AI Agents and Agentic AI across eight core functional domains. synthesizes a range of applications drawn from recent litera- ture, as visualized in Figure 9. We systematically categorize and analyze application domains across two parallel tracks: conventional AI Agent systems and their more advanced Agentic AI counterparts. For AI Agents, four primary use cases are reviewed: (1) Customer Support Automation and Internal Enterprise Search, where single-agent models handle structured queries and response generation; (2) Email Filtering and Prioritization, where agents assist users in managing high-volume communication through classification heuristics; (3) Personalized Content Recommendation and Basic Data Reporting, where user behavior is analyzed for automated insights; and (4) Autonomous Scheduling Assistants, which interpret calendars and book tasks with minimal user input. In contrast, Agentic AI applications encompass broader and more dynamic capabilities, reviewed through four additional categories: (1) Multi-Agent Research Assistants that retrieve, synthesize, and draft scientific content collaboratively; (2) Intelligent Robotics Coordination, including drone and multi- robot systems in fields like agriculture and logistics; (3) Collaborative Medical Decision Support, involving diagnostic, treatment, and monitoring subsystems; and (4) Multi-Agent Game AI and Adaptive Workflow Automation, where decen- tralized agents interact strategically or handle complex task pipelines. 1) Application of AI Agents: 1) Customer Support Automation and Internal Enter- prise Search: AI Agents are widely adopted in en- terprise environments for automating customer support and facilitating internal knowledge retrieval. In cus- tomer service, these agents leverage retrieval-augmented LLMs interfaced with APIs and organizational knowl- edge bases to answer user queries, triage tickets, and perform actions like order tracking or return initia- tion [47]. For internal enterprise search, agents built on vector stores (e.g., Pinecone, Elasticsearch) retrieve semantically relevant documents in response to natu- ral language queries. Tools such as Salesforce Ein- stein https://www.salesforce.com/artificial-intelligence/, Intercom Fin https://www.intercom.com/fin, and Notion AI https://www.notion.com/product/ai demonstrate how structured input processing and summarization capabil- ities reduce workload and improve enterprise decision- making. A practical example (Figure 10a) of this dual func- tionality can be seen in a multinational e-commerce company deploying an AI Agent-based customer support and internal search assistant. For customer support, the AI Agent integrates with the company’s CRM (e.g., Salesforce) and fulfillment APIs to resolve queries such as “Where is my order?” or “How can I return this item?”. Within milliseconds, the agent retrieves con-
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 14, "page_label": "15", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134926"}
textual data from shipping databases and policy repos- itories, then generates a personalized response using retrieval-augmented generation. For internal enterprise search, employees use the same system to query past meeting notes, sales presentations, or legal documents. When an HR manager types “summarize key benefits policy changes from last year,” the agent queries a Pinecone vector store embedded with enterprise doc- umentation, ranks results by semantic similarity, and returns a concise summary along with source links. These capabilities not only reduce ticket volume and support overhead but also minimize time spent searching for institutional knowledge (like policies, procedures, or manuals). The result is a unified, responsive system that enhances both external service delivery and internal operational efficiency using modular AI Agent architec- tures. 2) Email Filtering and Prioritization: Within productivity tools, AI Agents automate email triage through content classification and prioritization. Integrated with systems like Microsoft Outlook and Superhuman, these agents analyze metadata and message semantics to detect ur- gency, extract tasks, and recommend replies. They apply user-tuned filtering rules, behavioral signals, and intent classification to reduce cognitive overload. Autonomous actions, such as auto-tagging or summarizing threads, enhance efficiency, while embedded feedback loops en- able personalization through incremental learning [63]. Figure10b illustrates a practical implementation of AI Agents in the domain of email filtering and prioriti- zation. In modern workplace environments, users are inundated with high volumes of email, leading to cog- nitive overload and missed critical communications. AI Agents embedded in platforms like Microsoft Outlook or Superhuman act as intelligent intermediaries that classify, cluster, and triage incoming messages. These agents evaluate metadata (e.g., sender, subject line) and semantic content to detect urgency, extract actionable items, and suggest smart replies. As depicted, the AI agent autonomously categorizes emails into tags such as “Urgent,” “Follow-up,” and “Low Priority,” while also offering context-aware summaries and reply drafts. Through continual feedback loops and usage patterns, the system adapts to user preferences, gradually refining classification thresholds and improving prioritization ac- curacy. This automation offloads decision fatigue, allow- ing users to focus on high-value tasks, while maintain- ing efficient communication management in fast-paced, information-dense environments. 3) Personalized Content Recommendation and Basic Data Reporting: AI Agents support adaptive personal- ization by analyzing behavioral patterns for news, prod- uct, or media recommendations. Platforms like Amazon, YouTube, and Spotify deploy these agents to infer user preferences via collaborative filtering, intent detection, and content ranking. Simultaneously, AI Agents in an- (a) (b) (c) (d) Fig. 10: Applications of AI Agents in enterprise settings: (a) Customer support and internal enterprise search; (b) Email filtering and prioritization; (c) Personalized content recom- mendation and basic data reporting; and (d) Autonomous scheduling assistants. Each example highlights modular AI Agent integration for automation, intent understanding, and adaptive reasoning across operational workflows and user- facing systems.
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 15, "page_label": "16", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134927"}
alytics systems (e.g., Tableau Pulse, Power BI Copi- lot) enable natural-language data queries and automated report generation by converting prompts to structured database queries and visual summaries, democratizing business intelligence access. A practical illustration (Figure 10c) of AI Agents in personalized content recommendation and basic data reporting can be found in e-commerce and enterprise analytics systems. Consider an AI agent deployed on a retail platform like Amazon: as users browse, click, and purchase items, the agent continuously monitors inter- action patterns such as dwell time, search queries, and purchase sequences. Using collaborative filtering and content-based ranking, the agent infers user intent and dynamically generates personalized product suggestions that evolve over time. For example, after purchasing gardening tools, a user may be recommended compat- ible soil sensors or relevant books. This level of per- sonalization enhances customer engagement, increases conversion rates, and supports long-term user retention. Simultaneously, within a corporate setting, an AI agent integrated into Power BI Copilot allows non-technical staff to request insights using natural language, for instance, “Compare Q3 and Q4 sales in the Northeast.” The agent translates the prompt into structured SQL queries, extracts patterns from the database, and outputs a concise visual summary or narrative report. This application reduces dependency on data analysts and empowers broader business decision-making through intuitive, language-driven interfaces. 4) Autonomous Scheduling Assistants: AI Agents in- tegrated with calendar systems autonomously manage meeting coordination, rescheduling, and conflict reso- lution. Tools like x.ai and Reclaim AI interpret vague scheduling commands, access calendar APIs, and iden- tify optimal time slots based on learned user preferences. They minimize human input while adapting to dynamic availability constraints. Their ability to interface with enterprise systems and respond to ambiguous instruc- tions highlights the modular autonomy of contemporary scheduling agents. A practical application of autonomous scheduling agents can be seen in corporate settings as depicted in Fig- ure 10d where employees manage multiple overlapping responsibilities across global time zones. Consider an executive assistant AI agent integrated with Google Calendar and Slack that interprets a command like “Find a 45-minute window for a follow-up with the product team next week.” The agent parses the request, checks availability for all participants, accounts for time zone differences, and avoids meeting conflicts or working- hour violations. If it identifies a conflict with a pre- viously scheduled task, it may autonomously propose alternative windows and notify affected attendees via Slack integration. Additionally, the agent learns from historical user preferences such as avoiding early Friday meetings and refines its suggestions over time. Tools like Reclaim AI and Clockwise exemplify this capabil- ity, offering calendar-aware automation that adapts to evolving workloads. Such assistants reduce coordination overhead, increase scheduling efficiency, and enable smoother team workflows by proactively resolving am- biguity and optimizing calendar utilization. 
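To ground the enterprise-search pattern from item 1 above, the following sketch ranks documents by cosine similarity of sentence embeddings. It assumes the sentence-transformers package and an illustrative in-memory corpus; a production deployment would use a managed vector store such as Pinecone or Elasticsearch rather than a Python list, and would feed the top hits to an LLM for summarization.

```python
# Minimal semantic search over an in-memory corpus (illustrative).
# Assumes the `sentence-transformers` package; the three-document corpus
# stands in for an enterprise knowledge base indexed in a vector store.

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "2023 benefits policy update: dental coverage expanded in Q2.",
    "Return policy: items may be returned within 30 days of delivery.",
    "Q3 sales presentation for the Northeast region.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

def search(query: str, k: int = 2) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q                      # cosine similarity (unit vectors)
    return [docs[i] for i in np.argsort(-scores)[:k]]

print(search("summarize key benefits policy changes from last year"))
```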
TABLE X: Representative AI Agents (2023–2025): Applications and Operational Characteristics

Model / Reference | Application Area | Operation as AI Agent
ChatGPT Deep Research Mode, OpenAI (2025) | Research Analysis / Reporting | Synthesizes hundreds of sources into reports; functions as a self-directed research analyst.
Operator, OpenAI (2025) | Web Automation | Navigates websites, fills forms, and completes online tasks autonomously.
Agentspace: Deep Research Agent, Google (2025) | Enterprise Reporting | Generates business intelligence reports using Gemini models.
NotebookLM Plus Agent, Google (2025) | Knowledge Management | Summarizes, organizes, and retrieves data across Google Workspace apps.
Nova Act, Amazon (2025) | Workflow Automation | Automates browser-based tasks such as scheduling, HR requests, and email.
Manus Agent, Monica (2025) (https://manus.im/) | Personal Task Automation | Executes trip planning, site building, and product comparisons via browsing.
Harvey, Harvey AI (2025) | Legal Automation | Automates document drafting, legal review, and predictive case analysis.
Otter Meeting Agent, Otter.ai (2025) | Meeting Management | Transcribes meetings and provides highlights, summaries, and action items.
Otter Sales Agent, Otter.ai (2025) | Sales Enablement | Analyzes sales calls, extracts insights, and suggests follow-ups.
ClickUp Brain, ClickUp (2025) | Project Management | Automates task tracking, updates, and project workflows.
Agentforce (2025) | Customer Support | Routes tickets and generates context-aware replies for support teams.
Microsoft Copilot, Microsoft (2024) | Office Productivity | Automates writing, formula generation, and summarization in Microsoft 365.
Project Astra, Google DeepMind (2025) | Multimodal Assistance | Processes text, image, audio, and video for task support and recommendations.
Claude 3.5 Agent (Claude 3.5 Sonnet), Anthropic (2025) | Enterprise Assistance | Uses multimodal input for reasoning, personalization, and enterprise task completion.

2) Applications of Agentic AI:

1) Multi-Agent Research Assistants: Agentic AI systems are increasingly deployed in academic and industrial research pipelines to automate multi-stage knowledge work. Platforms like AutoGen and CrewAI assign specialized roles to multiple agents (retrievers, summarizers,
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 16, "page_label": "17", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134928"}
synthesizers, and citation formatters) under a central orchestrator. The orchestrator distributes tasks, manages role dependencies, and integrates outputs into coherent drafts or review summaries. Persistent memory allows for cross-agent context sharing and refinement over time. These systems are being used for literature reviews, grant preparation, and patent-search pipelines, outperforming single-agent systems such as ChatGPT by enabling concurrent sub-task execution and long-context management [94].
For example, a real-world application of Agentic AI, as depicted in Figure 11a, is the automated drafting of grant proposals. Consider a university research group preparing a National Science Foundation (NSF) submission. Using an AutoGen-based architecture, distinct agents are assigned: one retrieves prior funded proposals and extracts structural patterns; another scans recent literature to summarize related work; a third agent aligns proposal objectives with NSF solicitation language; and a formatting agent structures the document per compliance guidelines. The orchestrator coordinates these agents, resolving dependencies (e.g., aligning methodology with objectives) and ensuring stylistic consistency across sections. Persistent memory modules store evolving drafts, feedback from collaborators, and funding-agency templates, enabling iterative improvement over multiple sessions. Compared to traditional manual processes, this multi-agent system significantly accelerates drafting time, improves narrative cohesion, and ensures regulatory alignment, offering a scalable, adaptive approach to collaborative scientific writing in academia and R&D-intensive industries.

2) Intelligent Robotics Coordination: In robotics and automation, Agentic AI underpins collaborative behavior in multi-robot systems. Each robot operates as a task-specialized agent (e.g., picker, transporter, or mapper) while an orchestrator supervises and adapts workflows. These architectures rely on shared spatial memory, real-time sensor fusion, and inter-agent synchronization for coordinated physical actions. Use cases include warehouse automation, drone-based orchard inspection, and robotic harvesting [151]. For instance, agricultural drone swarms may collectively map tree rows, identify diseased fruits, and initiate mechanical interventions. This dynamic allocation enables real-time reconfiguration and autonomy across agents facing uncertain or evolving environments.
For example, in commercial apple orchards (Figure 11b), Agentic AI enables a coordinated multi-robot system to optimize the harvest season. Here, task-specialized robots such as autonomous pickers, fruit classifiers, transport bots, and drone mappers operate as agentic units under a central orchestrator. The mapping drones first survey the orchard and use vision-language models (VLMs) to generate high-resolution yield maps and identify ripe clusters. This spatial data is shared via a centralized memory layer accessible by all agents. Picker robots are assigned to high-density zones, guided by path-planning agents that optimize routes around obstacles and labor zones. Simultaneously, transport agents dynamically shuttle crates between pickers and storage, adjusting tasks in response to picker load levels and terrain changes. All agents communicate asynchronously through a shared protocol, and the orchestrator continuously adjusts task priorities based on weather forecasts or mechanical faults.
If one picker fails, nearby units autonomously reallocate workload. This adaptive, memory-driven coordination exemplifies Agentic AI's potential to reduce labor costs, increase harvest efficiency, and respond to uncertainties in complex agricultural environments, far surpassing the rigid programming of legacy agricultural robots [94], [151].

3) Collaborative Medical Decision Support: In high-stakes clinical environments, Agentic AI enables distributed medical reasoning by assigning tasks such as diagnostics, vital monitoring, and treatment planning to specialized agents. For example, one agent may retrieve patient history, another validates findings against diagnostic guidelines, and a third proposes treatment options. These agents synchronize through shared memory and reasoning chains, ensuring coherent and safe recommendations. Applications include ICU management, radiology triage, and pandemic response. Real-world pilots show improved efficiency and decision accuracy compared to isolated expert systems [92].
For example, in a hospital ICU (Figure 11c), an agentic AI system supports clinicians in managing complex patient cases. A diagnostic agent continuously analyzes vitals and lab data for early detection of sepsis risk. Simultaneously, a history-retrieval agent accesses electronic health records (EHRs) to summarize comorbidities and recent procedures. A treatment-planning agent cross-references current symptoms with clinical guidelines (e.g., the Surviving Sepsis Campaign), proposing antibiotic regimens or fluid protocols. The orchestrator integrates these insights, ensures consistency, and surfaces conflicts for human review. Feedback from physicians is stored in a persistent memory module, allowing agents to refine their reasoning based on prior interventions and outcomes. This coordinated system enhances clinical workflow by reducing cognitive load, shortening decision times, and minimizing oversight risks. Early deployments in critical care and oncology units have demonstrated increased diagnostic precision and better adherence to evidence-based protocols, offering a scalable solution for safer, real-time collaborative medical support.

4) Multi-Agent Game AI and Adaptive Workflow Automation: In simulation environments and enterprise systems, Agentic AI facilitates decentralized task execution and emergent coordination. Game platforms like AI Dungeon deploy independent NPC agents with goals,
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 17, "page_label": "18", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134929"}
memory, and dynamic interactivity to create emergent narratives and social behavior. In enterprise workflows, systems such as MultiOn and Cognosys use agents to manage processes like legal review or incident escalation, where each step is governed by a specialized module. These architectures exhibit resilience, exception handling, and feedback-driven adaptability far beyond rule-based pipelines.

Fig. 11: Illustrative applications of Agentic AI across domains: (a) Automated grant writing using multi-agent orchestration for structured literature analysis, compliance alignment, and document formatting. (b) Coordinated multi-robot harvesting in apple orchards using shared spatial memory and task-specific agents for mapping, picking, and transport. (c) Clinical decision support in hospital ICUs through synchronized agents for diagnostics, treatment planning, and EHR analysis, enhancing safety and workflow efficiency. (d) Cybersecurity incident response in enterprise environments via agents handling threat classification, compliance analysis, and mitigation planning. In all cases, central orchestrators manage inter-agent communication, shared memory enables context retention, and feedback mechanisms drive continual learning. These use cases highlight Agentic AI's capacity for scalable, autonomous task coordination in complex, dynamic environments across science, agriculture, healthcare, and IT security.

For example, in a modern enterprise IT environment (as depicted in Figure 11d), Agentic AI systems are increasingly deployed to autonomously manage cybersecurity incident-response workflows. When a potential
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 18, "page_label": "19", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134930"}
threat is detected such as abnormal access patterns or unauthorized data exfiltration, specialized agents are activated in parallel. One agent performs real-time threat classification using historical breach data and anomaly detection models. A second agent queries relevant log data from network nodes and correlates patterns across systems. A third agent interprets compliance frameworks (e.g., GDPR or HIPAA) to assess the regulatory sever- ity of the event. A fourth agent simulates mitigation strategies and forecasts operational risks. These agents coordinate under a central orchestrator that evaluates collective outputs, integrates temporal reasoning, and issues recommended actions to human analysts. Through shared memory structures and iterative feedback, the system learns from prior incidents, enabling faster and more accurate responses in future cases. Compared to traditional rule-based security systems, this agentic model enhances decision latency, reduces false positives, and supports proactive threat containment in large-scale organizational infrastructures [94]. V. C HALLENGES AND LIMITATIONS IN AI AGENTS AND AGENTIC AI To systematically understand the operational and theoret- ical limitations of current intelligent systems, we present a comparative visual synthesis in Figure 12, which categorizes challenges and potential remedies across both AI Agents and Agentic AI paradigms. Figure 12a outlines the four most pressing limitations specific to AI Agents namely, lack of causal reasoning, inherited LLM constraints (e.g., hallucina- tions, shallow reasoning), incomplete agentic properties (e.g., autonomy, proactivity), and failures in long-horizon planning and recovery. These challenges often arise due to their reliance on stateless LLM prompts, limited memory, and heuristic reasoning loops. In contrast, Figure 12b identifies eight critical bottlenecks unique to Agentic AI systems, such as inter-agent error cas- cades, coordination breakdowns, emergent instability, scala- bility limits, and explainability issues. These challenges stem from the complexity of orchestrating multiple agents across distributed tasks without standardized architectures, robust communication protocols, or causal alignment frameworks. Figure 13 complements this diagnostic framework by syn- thesizing ten forward-looking design strategies aimed at mit- igating these limitations. These include Retrieval-Augmented Generation (RAG), tool-based reasoning [126], [127], [129], agentic feedback loops (ReAct [132]), role-based multi-agent orchestration, memory architectures, causal modeling, and governance-aware design. Together, these three panels offer a consolidated roadmap for addressing current pitfalls and accelerating the development of safe, scalable, and context- aware autonomous systems. 1) Challenges and Limitations of AI Agents: While AI Agents have garnered considerable attention for their ability to automate structured tasks using LLMs and tool-use interfaces, the literature highlights significant theoretical and practical TABLE XI: Representative Agentic AI Models (2023–2025): Applications and Operational Characteristics Model / Reference Application Area Operation as Agentic AI Auto-GPT [30] Task Automation Decomposes high-level goals, executes subtasks via tools/APIs, and iteratively self-corrects. GPT Engineer Open Source (2023) GPT Engineer Code Generation Builds entire codebases: plans, writes, tests, and re- fines based on output. 
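The parallel activation pattern just described can be sketched with asyncio: several stub analysis agents run concurrently and an orchestrator gathers their findings. The agent bodies and the trivial aggregation rule are hypothetical simplifications for illustration, not a production incident-response design.

```python
# Illustrative parallel fan-out of specialized agents (stubs) using
# asyncio; an orchestrator gathers their outputs into a combined verdict.

import asyncio

async def classify_threat(event: str) -> str:
    await asyncio.sleep(0.1)          # stands in for model inference
    return "classification: likely data exfiltration"

async def correlate_logs(event: str) -> str:
    await asyncio.sleep(0.1)          # stands in for log-store queries
    return "correlation: 3 hosts show matching access patterns"

async def assess_compliance(event: str) -> str:
    await asyncio.sleep(0.1)          # stands in for policy/regulation lookup
    return "compliance: breach-notification rules may apply"

async def orchestrate(event: str) -> str:
    findings = await asyncio.gather(  # the three agents run concurrently
        classify_threat(event),
        correlate_logs(event),
        assess_compliance(event),
    )
    # A real orchestrator would reason over the findings; here we just join them.
    return " | ".join(findings)

print(asyncio.run(orchestrate("abnormal access pattern on host-17")))
```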
TABLE XI: Representative Agentic AI Models (2023–2025): Applications and Operational Characteristics

Model / Reference | Application Area | Operation as Agentic AI
Auto-GPT [30] | Task Automation | Decomposes high-level goals, executes subtasks via tools/APIs, and iteratively self-corrects.
GPT Engineer, Open Source (2023) | Code Generation | Builds entire codebases: plans, writes, tests, and refines based on output.
MetaGPT [151] | Software Collaboration | Coordinates specialized agents (e.g., coder, tester) for modular multi-role project development.
BabyAGI, Nakajima (2024) | Project Management | Continuously creates, prioritizes, and executes subtasks to adaptively meet user goals.
Voyager, Wang et al. (2023) [169] | Game Exploration | Learns in Minecraft, invents new skills, sets subgoals, and adapts strategy in real time.
CAMEL, Liu et al. (2023) [170] | Multi-Agent Simulation | Simulates agent societies with communication, negotiation, and emergent collaborative behavior.
Einstein Copilot, Salesforce (2024) | Customer Automation | Automates full support workflows, escalates issues, and improves via feedback loops.
Copilot Studio (Agentic Mode), Microsoft (2025) | Productivity Automation | Manages documents, meetings, and projects across Microsoft 365 with adaptive orchestration.
Atera AI Copilot, Atera (2025) | IT Operations | Diagnoses and resolves IT issues, automates ticketing, and learns from evolving infrastructures.
AES Safety Audit Agent, AES (2025) | Industrial Safety | Automates audits, assesses compliance, and evolves strategies to enhance safety outcomes.
DeepMind Gato (Agentic Mode), Reed et al. (2022) [171] | General Robotics | Performs varied tasks across modalities; dynamically learns, plans, and executes.
GPT-4o + Plugins, OpenAI (2024) | Enterprise Automation | Manages complex workflows, integrates external tools, and executes adaptive decisions.

V. CHALLENGES AND LIMITATIONS IN AI AGENTS AND AGENTIC AI

To systematically understand the operational and theoretical limitations of current intelligent systems, we present a comparative visual synthesis in Figure 12, which categorizes challenges and potential remedies across both AI Agents and Agentic AI paradigms. Figure 12a outlines the four most pressing limitations specific to AI Agents: lack of causal reasoning, inherited LLM constraints (e.g., hallucinations, shallow reasoning), incomplete agentic properties (e.g., autonomy, proactivity), and failures in long-horizon planning and recovery. These challenges often arise from their reliance on stateless LLM prompts, limited memory, and heuristic reasoning loops.

Fig. 12: Illustration of challenges: (a) Key limitations of AI Agents, including causality deficits and shallow reasoning. (b) Amplified coordination and stability challenges in Agentic AI systems.

In contrast, Figure 12b identifies eight critical bottlenecks unique to Agentic AI systems, such as inter-agent error cascades, coordination breakdowns, emergent instability, scalability limits, and explainability issues. These challenges stem from the complexity of orchestrating multiple agents across distributed tasks without standardized architectures, robust communication protocols, or causal alignment frameworks.
Figure 13 complements this diagnostic framework by synthesizing ten forward-looking design strategies aimed at mitigating these limitations. These include Retrieval-Augmented Generation (RAG), tool-based reasoning [126], [127], [129], agentic feedback loops (ReAct [132]), role-based multi-agent orchestration, memory architectures, causal modeling, and governance-aware design. Together, these three panels offer a consolidated roadmap for addressing current pitfalls and accelerating the development of safe, scalable, and context-aware autonomous systems.

1) Challenges and Limitations of AI Agents: While AI Agents have garnered considerable attention for their ability to automate structured tasks using LLMs and tool-use interfaces, the literature highlights significant theoretical and practical limitations that inhibit their reliability, generalization, and long-term autonomy [132], [158]. These challenges arise from both the architectural dependence on static, pretrained models and the difficulty of instilling agentic qualities such as causal reasoning, planning, and robust adaptation. The key challenges and limitations of AI Agents (Figure 12a) are summarized in the following five points:

1) Lack of Causal Understanding: One of the most foundational challenges lies in the agents' inability to reason causally [172], [173]. Current LLMs, which form the cognitive core of most AI Agents, excel at identifying
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 19, "page_label": "20", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134932"}
statistical correlations within training data. However, as noted in recent research from DeepMind and conceptual analyses by TrueTheta, they fundamentally lack the capacity for causal modeling: distinguishing between mere association and cause-and-effect relationships [174]–[176]. For instance, while an LLM-powered agent might learn that visiting a hospital often co-occurs with illness, it cannot infer whether the illness causes the visit or vice versa, nor can it simulate interventions or hypothetical changes (a short numerical illustration of this gap follows after this list). This deficit becomes particularly problematic under distributional shifts, where real-world conditions differ from the training regime [177], [178]. Without such grounding, agents remain brittle, failing in novel or high-stakes scenarios. For example, a navigation agent that excels in urban driving may misbehave in snow or construction zones if it lacks an internal causal model of road traction or spatial occlusion.

2) Inherited Limitations from LLMs: AI Agents, particularly those powered by LLMs, inherit a number of intrinsic limitations that impact their reliability, adaptability, and overall trustworthiness in practical deployments [179]–[181]. One of the most prominent issues is the tendency to produce hallucinations: plausible but factually incorrect outputs. In high-stakes domains such as legal consultation or scientific research, these hallucinations can lead to severe misjudgments and erode user trust [182], [183]. Compounding this is the well-documented prompt sensitivity of LLMs, where even minor variations in phrasing can lead to divergent behaviors. This brittleness hampers reproducibility, necessitating meticulous manual prompt engineering and often requiring domain-specific tuning to maintain consistency across interactions [184].
Furthermore, while recent agent frameworks adopt reasoning heuristics like Chain-of-Thought (CoT) [159], [185] and ReAct [132] to simulate deliberative processes, these approaches remain shallow in semantic comprehension. Agents may still fail at multi-step inference, misalign task objectives, or draw logically inconsistent conclusions despite the appearance of structured reasoning [132]. Such shortcomings underscore the absence of genuine understanding and generalizable planning capabilities.
Another key limitation lies in computational cost and latency. Each cycle of agentic decision-making, particularly in planning or tool-calling, may require several LLM invocations. This not only increases runtime latency but also scales resource consumption, creating practical bottlenecks in real-world deployments and cloud-based inference systems. Furthermore, LLMs have a static knowledge cutoff and cannot dynamically integrate new information unless explicitly augmented via retrieval or tool plugins. They also reproduce the biases of their training datasets, which can manifest as culturally insensitive or skewed responses [186], [187]. Without rigorous auditing and mitigation strategies, these issues pose serious ethical and operational risks, particularly when agents are deployed in sensitive or user-facing contexts.
3) Incomplete Agentic Properties: A major limitation of current AI Agents is their inability to fully satisfy the canonical agentic properties defined in foundational literature, such as autonomy, proactivity, reactivity, and social ability [142], [181]. While many systems marketed as "agents" leverage LLMs to perform useful tasks, they often fall short of these fundamental criteria in practice. Autonomy, for instance, is typically
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 20, "page_label": "21", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134933"}
partial at best. Although agents can execute tasks with minimal oversight once initialized, they remain heavily reliant on external scaffolding, such as human-defined prompts, planning heuristics, or feedback loops, to function effectively [188]. Self-initiated task generation, self-monitoring, and autonomous error correction are rare or absent, limiting their capacity for true independence. Proactivity is similarly underdeveloped. Most AI Agents require explicit user instruction to act and lack the capacity to formulate or reprioritize goals dynamically based on contextual shifts or evolving objectives [189]. As a result, they behave reactively rather than strategically, constrained by the static nature of their initialization. Reactivity itself is constrained by architectural bottlenecks. Agents do respond to environmental or user input, but response latency caused by repeated LLM inference calls [190], [191], coupled with narrow contextual memory windows [161], [192], inhibits real-time adaptability. Perhaps the most underexplored capability is social ability. True agentic systems should communicate and coordinate with humans or other agents over extended interactions, resolving ambiguity, negotiating tasks, and adapting to social norms. However, existing implementations exhibit brittle, template-based dialogue that lacks long-term memory integration or nuanced conversational context. Agent-to-agent interaction is often hardcoded or limited to scripted exchanges, hindering collaborative execution and emergent behavior [101], [193]. Collectively, these deficiencies reveal that while AI Agents demonstrate functional intelligence, they remain far from meeting the formal benchmarks of intelligent, interactive, and adaptive agents. Bridging this gap is essential for advancing toward more autonomous, socially capable AI systems.

4) Limited Long-Horizon Planning and Recovery: A persistent limitation of current AI Agents lies in their inability to perform robust long-horizon planning, especially in complex, multi-stage tasks. This constraint stems from their foundational reliance on stateless prompt-response paradigms, where each decision is made without an intrinsic memory of prior reasoning steps unless externally managed. Although augmentations such as the ReAct framework [132] or Tree-of-Thoughts [160] introduce pseudo-recursive reasoning, they remain fundamentally heuristic and lack true internal models of time, causality, or state evolution. Consequently, agents often falter in tasks requiring extended temporal consistency or contingency planning. For example, in domains such as clinical triage or financial portfolio management, where decisions depend on prior context and dynamically unfolding outcomes, agents may exhibit repetitive behaviors, such as endlessly querying tools, or fail to adapt when sub-tasks fail or return ambiguous results. The absence of systematic recovery mechanisms or error detection leads to brittle workflows and error propagation. This shortfall severely limits agent deployment in mission-critical environments where reliability, fault tolerance, and sequential coherence are essential.

5) Reliability and Safety Concerns: AI Agents are not yet safe or verifiable enough for deployment in critical infrastructure [194]. The absence of causal reasoning leads to unpredictable behavior under distributional shift [173], [195].
Furthermore, evaluating the correctness of an agent's plan, especially when the agent fabricates intermediate steps or rationales, remains an unsolved problem in interpretability [110], [196]. Safety guarantees, such as formal verification, are not yet available for open-ended, LLM-powered agents. While AI Agents represent a major step beyond static generative models, their limitations in causal reasoning, adaptability, robustness, and planning restrict their deployment in high-stakes or dynamic environments. Most current systems rely on heuristic wrappers and brittle prompt engineering rather than grounded agentic cognition. Bridging this gap will require future systems to integrate causal models, dynamic memory, and verifiable reasoning mechanisms. These limitations also set the stage for the emergence of Agentic AI systems, which attempt to address these bottlenecks through multi-agent collaboration, orchestration layers, and persistent system-level context.
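To make the association-versus-causation gap from point 1 concrete, the toy simulation below generates data from an assumed structural model in which illness causes hospital visits. The conditional probabilities look strongly associated in both directions, but an explicit intervention (forcing everyone to visit) leaves the illness rate unchanged, which is exactly the distinction a purely correlational model cannot draw. The model structure and numbers are illustrative assumptions, not empirical estimates.

```python
# Toy structural causal model: illness -> hospital visit.
# Correlation is symmetric; only intervening exposes the causal direction.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

illness = rng.random(n) < 0.10                      # assumed 10% base rate
visit = np.where(illness,
                 rng.random(n) < 0.90,              # sick people usually visit
                 rng.random(n) < 0.05)              # healthy people rarely do

print("P(illness | visit):", illness[visit].mean())   # high: association
print("P(visit | illness):", visit[illness].mean())   # high: association

# Intervention do(visit := 1): force everyone to visit the hospital.
# Illness is generated upstream of visits, so its rate is unchanged (~0.10),
# showing that visits do not cause illness despite the strong correlation.
print("P(illness | do(visit=1)):", illness.mean())
```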
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 21, "page_label": "22", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134933"}
2) Challenges and Limitations of Agentic AI: Agentic AI systems represent a paradigm shift from isolated AI agents to collaborative, multi-agent ecosystems capable of decomposing and executing complex goals [14]. These systems typically consist of orchestrated or communicating agents that interact via tools, APIs, and shared environments [18], [39]. While this architectural evolution enables more ambitious automation, it introduces a range of amplified and novel challenges that compound the existing limitations of individual LLM-based agents. The current challenges and limitations of Agentic AI are as follows:

1) Amplified Causality Challenges: One of the most critical limitations in Agentic AI systems is the magnification of causality deficits already observed in single-agent architectures. Unlike traditional AI Agents that operate in relatively isolated environments, Agentic AI systems involve complex inter-agent dynamics, where each agent's action can influence the decision space of others. Without a robust capacity for modeling cause-effect relationships, these systems struggle to coordinate effectively and adapt to unforeseen environmental shifts. A key manifestation of this challenge is inter-agent distributional shift, where the behavior of one agent alters the operational context for others. In the absence of causal reasoning, agents are unable to anticipate the downstream impact of their outputs, resulting in coordination breakdowns or redundant computations [197]. Furthermore, these systems are particularly vulnerable to error cascades: a faulty or hallucinated output from one agent can propagate through the system, compounding inaccuracies and corrupting subsequent decisions. For example, if a verification agent erroneously validates false information, downstream agents such as summarizers or decision-makers may unknowingly build upon that misinformation, compromising the integrity of the entire system. This fragility underscores the urgent need for integrating causal inference and intervention modeling into the design of multi-agent workflows, especially in high-stakes or dynamic environments where systemic robustness is essential.

2) Communication and Coordination Bottlenecks: A fundamental challenge in Agentic AI lies in achieving efficient communication and coordination across multiple autonomous agents. Unlike single-agent systems, Agentic AI involves distributed agents that must collectively pursue a shared objective, necessitating precise alignment, synchronized execution, and robust communication protocols. However, current implementations fall short in these aspects. One major issue is goal alignment and shared context, where agents often lack a unified semantic understanding of overarching objectives. This hampers sub-task decomposition, dependency management, and progress monitoring, especially in dynamic environments requiring causal awareness and temporal coherence.
In addition, protocol limitations significantly hinder inter-agent communication. Most systems rely on natural-language exchanges over loosely defined interfaces, which are prone to ambiguity, inconsistent formatting, and contextual drift (a typed-message sketch follows below). These communication gaps lead to fragmented strategies, delayed coordination, and degraded system performance. Furthermore, resource contention emerges as a systemic bottleneck when agents simultaneously access shared computational, memory, or API resources. Without centralized orchestration or intelligent scheduling mechanisms, these conflicts can result in race conditions, execution delays, or outright system failures. Collectively, these bottlenecks illustrate the immaturity of current coordination frameworks in Agentic AI and highlight the pressing need for standardized communication protocols, semantic task planners, and global resource managers to ensure scalable, coherent multi-agent collaboration.
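One pragmatic mitigation for the protocol ambiguity described in point 2 is to replace free-form natural-language exchanges with typed, validated messages. The schema below is a hypothetical minimal example using Python dataclasses, not a proposed or existing standard; the performative names and fields are illustrative assumptions.

```python
# Hypothetical typed inter-agent message, replacing free-form text exchange.
# Validation at construction time catches malformed coordination traffic early.

from dataclasses import dataclass, field
from enum import Enum
import time
import uuid

class Performative(Enum):
    REQUEST = "request"   # ask another agent to do work
    INFORM = "inform"     # report a result or observation
    ERROR = "error"       # signal failure so recovery logic can trigger

@dataclass(frozen=True)
class AgentMessage:
    sender: str
    receiver: str
    performative: Performative
    content: dict
    task_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    timestamp: float = field(default_factory=time.time)

    def __post_init__(self):
        # Reject malformed traffic before it enters the coordination loop.
        if not self.sender or not self.receiver:
            raise ValueError("sender and receiver are required")

msg = AgentMessage("planner", "retriever", Performative.REQUEST,
                   {"query": "prior funded proposals on robotics"})
print(msg.performative.value, msg.task_id)
```

Because every message carries a task identifier and timestamp, an orchestrator can log, order, and replay coordination traffic, which also eases the debugging and traceability problems discussed in point 4 below.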
The stochasticity and opacity of large language model-based agents further exacerbate this issue, as their internal decision logic is not easily interpretable or verifiable. Consequently, en- suring the predictability and controllability of emergent behavior remains a central challenge in designing safe and scalable Agentic AI systems. 4) Scalability and Debugging Complexity: As Agen- tic AI systems scale in both the number of agents and the diversity of specialized roles, maintaining sys- tem reliability and interpretability becomes increas- ingly complex [199], [200]. A central limitation stems from the black-box chains of reasoning characteristic of LLM-based agents. Each agent may process inputs through opaque internal logic, invoke external tools, and communicate with other agents all of which occur through multiple layers of prompt engineering, reason- ing heuristics, and dynamic context handling. Tracing the root cause of a failure thus requires unwinding nested sequences of agent interactions, tool invocations, and memory updates, making debugging non-trivial and time-consuming. Another significant constraint is the system’s non- compositionality. Unlike traditional modular systems, where adding components can enhance overall func- tionality, introducing additional agents in an Agentic AI architecture often increases cognitive load, noise, and coordination overhead. Poorly orchestrated agent networks can result in redundant computation, contradic- tory decisions, or degraded task performance. Without robust frameworks for agent role definition, communica- tion standards, and hierarchical planning, the scalability of Agentic AI does not necessarily translate into greater intelligence or robustness. These limitations highlight the need for systematic architectural controls and trace- ability tools to support the development of reliable, large-scale agentic ecosystems. 5) Trust, Explainability, and Verification: Agentic AI systems pose heightened challenges in explainability and verifiability due to their distributed, multi-agent architec- ture. While interpreting the behavior of a single LLM- powered agent is already non-trivial, this complexity is multiplied when multiple agents interact asynchronously through loosely defined communication protocols. Each agent may possess its own memory, task objective, and reasoning path, resulting in compounded opacity where
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 22, "page_label": "23", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134935"}
The lack of shared, transparent logs or interpretable reasoning paths across agents makes it nearly impossible to determine why a particular sequence of actions occurred or which agent initiated a misstep.
Compounding this opacity is the absence of formal verification tools tailored for Agentic AI. Unlike traditional software systems, where model checking and formal proofs offer bounded guarantees, there exists no widely adopted methodology to verify that a multi-agent LLM system will perform reliably across all input distributions or operational contexts. This lack of verifiability presents a significant barrier to adoption in safety-critical domains such as autonomous vehicles, finance, and healthcare, where explainability and assurance are non-negotiable. To advance Agentic AI safely, future research must address the foundational gaps in causal traceability, agent accountability, and formal safety guarantees.
6) Security and Adversarial Risks: Agentic AI architectures introduce a significantly expanded attack surface compared to single-agent systems, exposing them to complex adversarial threats. One of the most critical vulnerabilities lies in the presence of a single point of compromise. Since Agentic AI systems are composed of interdependent agents communicating over shared memory or messaging protocols, the compromise of even one agent, through prompt injection, model poisoning, or adversarial tool manipulation, can propagate malicious outputs or corrupted state across the entire system. For example, a fact-checking agent fed with tampered data could unintentionally legitimize false claims, which are then integrated into downstream reasoning by summarization or decision-making agents.
Moreover, inter-agent dynamics themselves are susceptible to exploitation. Attackers can induce race conditions, deadlocks, or resource exhaustion by manipulating the coordination logic between agents. Without rigorous authentication, access control, and sandboxing mechanisms, malicious agents or corrupted tool responses can derail multi-agent workflows or cause erroneous escalation in task pipelines. These risks are exacerbated by the absence of standardized security frameworks for LLM-based multi-agent systems, leaving most current implementations defenseless against sophisticated multi-stage attacks. As Agentic AI moves toward broader adoption, especially in high-stakes environments, embedding secure-by-design principles and adversarial robustness becomes an urgent research imperative.
7) Ethical and Governance Challenges: The distributed and autonomous nature of Agentic AI systems introduces profound ethical and governance concerns, particularly in terms of accountability, fairness, and value alignment. In multi-agent settings, accountability gaps emerge when multiple agents interact to produce an outcome, making it difficult to assign responsibility for errors or unintended consequences. This ambiguity complicates legal liability, regulatory compliance, and user trust, especially in domains such as healthcare, finance, or defense. Furthermore, bias propagation and amplification present a unique challenge: agents individually trained on biased data may reinforce each other's skewed decisions through interaction, leading to systemic inequities that are more pronounced than in isolated models. These emergent biases can be subtle and difficult to detect without longitudinal monitoring or audit mechanisms.
Additionally, misalignment and value drift pose serious risks in long-horizon or dynamic environments. Without a unified framework for shared value encoding, individual agents may interpret overarching objectives differently or optimize for local goals that diverge from human intent. Over time, this misalignment can lead to behavior that is inconsistent with ethical norms or user expectations. Current alignment methods, which are mostly designed for single-agent systems, are inadequate for managing value synchronization across heterogeneous agent collectives. These challenges highlight the urgent need for governance-aware agent architectures, incorporating principles such as role-based isolation, traceable decision logging, and participatory oversight mechanisms to ensure ethical integrity in autonomous multi-agent systems.
8) Immature Foundations and Research Gaps: Despite rapid progress and high-profile demonstrations, Agentic AI remains in a nascent research stage with unresolved foundational issues that limit its scalability, reliability, and theoretical grounding. A central concern is the lack of standard architectures. There is currently no widely accepted blueprint for how to design, monitor, or evaluate multi-agent systems built on LLMs. This architectural fragmentation makes it difficult to compare implementations, replicate experiments, or generalize findings across domains. Key aspects such as agent orchestration, memory structures, and communication protocols are often implemented ad hoc, resulting in brittle systems that lack interoperability and formal guarantees.
Equally critical is the absence of causal foundations, as scalable causal discovery and reasoning remain unsolved challenges [201]. Without the ability to represent and reason about cause-effect relationships, Agentic AI systems are inherently limited in their capacity to generalize safely beyond narrow training regimes [178], [202]. This shortfall affects their robustness under distributional shifts, their capacity for proactive intervention, and their ability to simulate counterfactuals or hypothetical plans, all core requirements for intelligent coordination and decision-making. The gap between functional demos and principled design thus underscores an urgent need for foundational research in multi-agent system theory, causal inference integration, and benchmark development. Only by addressing these deficiencies can the field progress from prototype pipelines to trustworthy, general-purpose agentic frameworks suitable for deployment in high-stakes environments.
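To make the communication-protocol gap raised in point 2 concrete, the following is a minimal, self-contained sketch of a typed inter-agent message envelope. The schema itself (field names such as sender, intent, payload, trace_id, and the allowed-intent set) is an illustrative assumption, not an existing standard; the point is that machine-checkable fields reduce the ambiguity and contextual drift of free-form natural-language exchanges.

```python
import json
import uuid
from dataclasses import dataclass, field, asdict

# Hypothetical message envelope for inter-agent communication.
@dataclass
class AgentMessage:
    sender: str        # agent issuing the message
    recipient: str     # intended consumer
    intent: str        # e.g., "delegate_task", "report_result"
    payload: dict      # structured task data, never free prose
    trace_id: str = field(default_factory=lambda: uuid.uuid4().hex)

ALLOWED_INTENTS = {"delegate_task", "report_result", "request_info"}

def validate(msg: AgentMessage) -> None:
    """Reject malformed messages before they propagate downstream."""
    if msg.intent not in ALLOWED_INTENTS:
        raise ValueError(f"unknown intent: {msg.intent}")
    if not isinstance(msg.payload, dict):
        raise TypeError("payload must be a structured dict")

msg = AgentMessage("planner", "retriever", "delegate_task",
                   {"query": "latest RAG benchmarks"})
validate(msg)
print(json.dumps(asdict(msg), indent=2))  # auditable, loggable wire format
```

Because every message carries a trace_id and passes validation before delivery, malformed or out-of-protocol traffic can be rejected at the boundary rather than silently corrupting downstream agents.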
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 23, "page_label": "24", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134936"}
Fig. 13: Ten emerging architectural and algorithmic solutions, such as RAG, tool use, memory, orchestration, and reflexive mechanisms, addressing reliability, scalability, and explainability across both paradigms (AI Agents and Agentic AI).
VI. POTENTIAL SOLUTIONS AND FUTURE ROADMAP
The potential solutions (as illustrated in Figure 13) to these challenges and limitations of AI Agents and Agentic AI are summarized in the following points; minimal illustrative sketches of these mechanisms follow the list:
1) Retrieval-Augmented Generation (RAG): For AI Agents, Retrieval-Augmented Generation mitigates hallucinations and expands static LLM knowledge by grounding outputs in real-time data [203]. By embedding user queries and retrieving semantically relevant documents from vector databases like FAISS or Pinecone, agents can generate contextually valid responses rooted in external facts. This is particularly effective in domains such as enterprise search and customer support, where accuracy and up-to-date knowledge are essential.
In Agentic AI systems, RAG serves as a shared grounding mechanism across agents. For example, a summarizer agent may rely on the retriever agent to access the latest scientific papers before generating a synthesis. Persistent, queryable memory allows distributed agents to operate on a unified semantic layer, mitigating inconsistencies due to divergent contextual views. When implemented across a multi-agent system, RAG helps maintain shared truth, enhances goal alignment, and reduces inter-agent misinformation propagation (see the first sketch after this list).
2) Tool-Augmented Reasoning (Function Calling): AI Agents benefit significantly from function calling, which extends their ability to interact with real-world systems [167], [204]. Agents can query APIs, run local scripts, or access structured databases, thus transforming LLMs from static predictors into interactive problem-solvers [131], [162]. This allows them to dynamically retrieve weather forecasts, schedule appointments, or execute Python-based calculations, all beyond the capabilities of pure language modeling.
For Agentic AI, function calling supports agent-level autonomy and role differentiation. Agents within a team may use APIs to invoke domain-specific actions, such as querying clinical databases or generating visual charts, based on assigned roles. Function calls become part of an orchestrated pipeline, enabling fluid delegation across agents [205]. This structured interaction reduces ambiguity in task handoff and fosters clearer behavioral boundaries, especially when integrated with validation protocols or observation mechanisms [14], [18] (a dispatch sketch appears after this list).
3) Agentic Loop: Reasoning, Action, Observation: AI Agents often suffer from single-pass inference limitations. The ReAct pattern introduces an iterative loop where agents reason about tasks, act by calling tools or APIs, and then observe results before continuing. This feedback loop allows for more deliberate, context-sensitive behaviors. For example, an agent may verify retrieved data before drafting a summary, thereby reducing hallucination and logical errors (a minimal loop is sketched after this list). In Agentic AI, this pattern is critical for collaborative coherence. ReAct enables agents to evaluate dependencies dynamically, reasoning over intermediate states, re-invoking tools if needed, and adjusting decisions as the environment evolves.
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 24, "page_label": "25", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134937"}
This loop becomes more complex in multi-agent settings, where each agent's observation must be reconciled against others' outputs. Shared memory and consistent logging are essential here, ensuring that the reflective capacity of the system is not fragmented across agents [132].
4) Memory Architectures (Episodic, Semantic, Vector): AI Agents face limitations in long-horizon planning and session continuity. Memory architectures address this by persisting information across tasks [206]. Episodic memory allows agents to recall prior actions and feedback, semantic memory encodes structured domain knowledge, and vector memory enables similarity-based retrieval [207]. These elements are key for personalization and adaptive decision-making in repeated interactions (the three stores are sketched after this list).
Agentic AI systems require even more sophisticated memory models due to distributed state management. Each agent may maintain local memory while accessing shared global memory to facilitate coordination. For example, a planner agent might use vector-based memory to recall prior workflows, while a QA agent references semantic memory for fact verification. Synchronizing memory access and updates across agents enhances consistency, enables context-aware communication, and supports long-horizon system-level planning.
5) Multi-Agent Orchestration with Role Specialization: In AI Agents, task complexity is often handled via modular prompt templates or conditional logic. However, as task diversity increases, a single agent may become overloaded [208], [209]. Role specialization, splitting tasks into subcomponents (e.g., planner, summarizer), allows lightweight orchestration even within single-agent systems by simulating compartmentalized reasoning.
In Agentic AI, orchestration is central. A meta-agent or orchestrator distributes tasks among specialized agents, each with distinct capabilities. Systems like MetaGPT and ChatDev exemplify this: agents emulate roles such as CEO, engineer, or reviewer, and interact through structured messaging. This modular approach enhances interpretability, scalability, and fault isolation, ensuring that failures in one agent do not cascade without containment mechanisms from the orchestrator (a routing sketch follows this list).
6) Reflexive and Self-Critique Mechanisms: AI Agents often fail silently or propagate errors. Reflexive mechanisms introduce the capacity for self-evaluation [210], [211]. After completing a task, agents can critique their own outputs using a secondary reasoning pass, increasing robustness and reducing error rates. For example, a legal assistant agent might verify that its drafted clause matches prior case law before submission (a generate-critique-revise loop is sketched after this list).
For Agentic AI, reflexivity extends beyond self-critique to inter-agent evaluation. Agents can review each other's outputs, e.g., a verifier agent auditing a summarizer's work. Reflexion-like mechanisms ensure collaborative quality control and enhance trustworthiness [212]. Such patterns also support iterative improvement and adaptive replanning, particularly when integrated with memory logs or feedback queues [213], [214].
7) Programmatic Prompt Engineering Pipelines: Manual prompt tuning introduces brittleness and reduces reproducibility in AI Agents. Programmatic pipelines automate this process using task templates, context fillers, and retrieval-augmented variables [215], [216]. These dynamic prompts are structured based on task type, agent role, or user query, improving generalization and reducing failure modes associated with prompt variability (a template-assembly sketch appears after this list).
In Agentic AI, prompt pipelines enable scalable, role-consistent communication. Each agent type (e.g., planner, retriever, summarizer) can generate or consume structured prompts tailored to its function. By automating message formatting, dependency tracking, and semantic alignment, programmatic prompting prevents coordination drift and ensures consistent reasoning across diverse agents in real time [14], [167].
8) Causal Modeling and Simulation-Based Planning: AI Agents often operate on statistical correlations rather than causal models, leading to poor generalization under distribution shifts. Embedding causal inference allows agents to distinguish between correlation and causation, simulate interventions, and plan more robustly. For instance, in supply chain scenarios, a causally aware agent can simulate the downstream impact of shipment delays.
In Agentic AI, causal reasoning is vital for safe coordination and error recovery. Agents must anticipate how their actions impact others, requiring causal graphs, simulation environments, or Bayesian inference layers. For example, a planning agent may simulate different strategies and communicate likely outcomes to others, fostering strategic alignment and avoiding unintended emergent behaviors. To enforce cooperative behavior, agents can be governed by a structured planning approach such as STRIPS or PDDL (Planning Domain Definition Language), where the environment is modeled with defined actions, preconditions, and effects (a toy STRIPS-style search is sketched after this list). Inter-agent dependencies are encoded such that one agent's action enables another's, and a centralized or distributed planner ensures that all agents contribute to a shared goal. This unified framework supports strategic alignment and anticipatory planning, and minimizes unintended emergent behaviors in multi-agent systems.
9) Monitoring, Auditing, and Explainability Pipelines: AI Agents lack transparency, complicating debugging and trust. Logging systems that record prompts, tool calls, memory updates, and outputs enable post-hoc analysis and performance tuning (an append-only audit trail is sketched after this list). These records help developers trace faults, refine behavior, and ensure compliance with usage guidelines, especially critical in enterprise or legal domains. For Agentic AI, logging and explainability are exponentially more important. With multiple agents interacting asynchronously, audit trails are essential for identifying which agent caused an error and under what conditions.
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 25, "page_label": "26", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134938"}
Explainability pipelines that integrate across agents (e.g., timeline visualizations or dialogue replays) are key to ensuring safety, especially in regulatory or multi-stakeholder environments.
10) Governance-Aware Architectures (Accountability and Role Isolation): AI Agents currently lack built-in safeguards for ethical compliance or error attribution. Governance-aware designs introduce role-based access control, sandboxing, and identity resolution to ensure agents act within scope and their decisions can be audited or revoked (a role-permission check is sketched after this list). These structures reduce risks in sensitive applications such as healthcare or finance.
In Agentic AI, governance must scale across roles, agents, and workflows. Role isolation prevents rogue agents from exceeding authority, while accountability mechanisms assign responsibility for decisions and trace causality across agents. Compliance protocols, ethical alignment checks, and agent authentication ensure safety in collaborative settings, paving the way for trustworthy AI ecosystems.
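The sketches below illustrate, one per mechanism, the solutions just listed. First, RAG (point 1): a toy bag-of-words "embedding" and cosine ranking stand in for a learned embedding model and a production vector database such as FAISS or Pinecone; the call_llm mentioned in a comment is a hypothetical generation call, not a real API.

```python
import math

# Toy embedding: bag-of-words term counts stand in for a learned model.
def embed(text: str) -> dict:
    vec = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

DOCS = [
    "Refund requests are processed within 14 days of purchase.",
    "Enterprise search indexes are refreshed every night at 02:00 UTC.",
]

def retrieve(query: str, k: int = 1) -> list:
    ranked = sorted(DOCS, key=lambda d: cosine(embed(query), embed(d)),
                    reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    context = " ".join(retrieve(query))
    # Grounding the prompt in retrieved context is what mitigates
    # hallucination; in practice: return call_llm(prompt) (hypothetical).
    return f"Context: {context}\nQuestion: {query}\nAnswer using only the context."

print(answer("How long do refunds take?"))
```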
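Next, tool-augmented reasoning (point 2). A minimal sketch, assuming the model emits a JSON-formatted call against a declared tool registry; get_weather is a stub for a real API, and the registry format is an illustrative convention rather than any particular framework's schema.

```python
import json

def get_weather(city: str) -> str:
    return f"Forecast for {city}: 18C, light rain"   # stub for a real API call

# Hypothetical tool registry: each tool is declared with its parameter names
# so a model-emitted call can be validated before execution.
TOOLS = {"get_weather": {"fn": get_weather, "params": ["city"]}}

def dispatch(call_json: str) -> str:
    """Validate and execute a model-emitted function call."""
    call = json.loads(call_json)
    spec = TOOLS[call["name"]]
    args = {p: call["arguments"][p] for p in spec["params"]}  # schema check
    return spec["fn"](**args)

# In practice the LLM produces this JSON; here it is hard-coded.
model_output = '{"name": "get_weather", "arguments": {"city": "Tripoli"}}'
print(dispatch(model_output))
```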
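Third, the agentic reason-act-observe loop (point 3). The sketch scripts the model's steps so it runs deterministically; llm_step stands in for a real model call, and the step kinds ("think", "act", "observe", "final") are illustrative labels for the ReAct pattern, not a fixed specification.

```python
# Scripted stand-in for model outputs: each step is either reasoning,
# a tool invocation, or a final answer.
SCRIPT = iter([
    ("think", "I should look up the figure before summarizing."),
    ("act",   ("lookup", "2024 revenue")),
    ("think", "Observation received; I can now answer."),
    ("final", "Revenue in 2024 was 4.2M, per the retrieved record."),
])

def llm_step(history):
    return next(SCRIPT)

def lookup(query):                       # toy tool
    return "record: 2024 revenue = 4.2M"

def react(max_steps=8):
    history = []
    for _ in range(max_steps):
        kind, content = llm_step(history)
        history.append((kind, content))
        if kind == "act":
            tool, arg = content
            observation = {"lookup": lookup}[tool](arg)
            history.append(("observe", observation))  # feed result back
        elif kind == "final":
            return content, history
    return None, history

answer, trace = react()
print(answer)
```

The max_steps bound is the simplest guard against the infinite planning loops discussed in Section V.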
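Fourth, the three memory types of point 4, with in-memory stores standing in for what would be databases or vector indexes in production; the class layout is an illustrative assumption.

```python
from collections import deque

class AgentMemory:
    def __init__(self):
        self.episodic = deque(maxlen=100)   # recent actions and feedback
        self.semantic = {}                  # structured domain facts
        self.vectors = []                   # (embedding, text) pairs

    def remember_episode(self, event: str):
        self.episodic.append(event)

    def store_fact(self, key: str, value: str):
        self.semantic[key] = value

    def store_vector(self, embedding, text: str):
        self.vectors.append((embedding, text))

    def recall_similar(self, query_emb, sim) -> str:
        # `sim` is any similarity function, e.g., cosine over embeddings
        return max(self.vectors, key=lambda p: sim(query_emb, p[0]))[1]

mem = AgentMemory()
mem.remember_episode("user asked for a summary of ticket #4312")
mem.store_fact("refund_window_days", "14")
print(mem.episodic[-1], "|", mem.semantic["refund_window_days"])
```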
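Fifth, orchestration with role specialization (point 5). A minimal routing sketch: the planner, researcher, and summarizer roles and the hand-off order are illustrative assumptions; real systems such as MetaGPT or ChatDev use far richer structured messaging.

```python
def planner(goal):
    # the meta-agent decomposes the goal into role-tagged sub-tasks
    return [f"research: {goal}", f"summarize: {goal}"]

def researcher(task):
    return f"notes({task})"

def summarizer(task, notes):
    return f"summary of {task} from {notes}"

ROLES = {"research": researcher, "summarize": summarizer}

def orchestrate(goal):
    notes = None
    for step in planner(goal):
        role, _, detail = step.partition(": ")
        if role == "research":
            notes = ROLES[role](detail)     # intermediate artifact
        else:
            return ROLES[role](detail, notes)

print(orchestrate("agentic AI evaluation benchmarks"))
```

Keeping each role behind a single dispatch table is what enables the fault isolation described above: a failing role can be replaced or sandboxed without touching the others.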
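Sixth, reflexive self-critique (point 6). Both "models" here are scripted stand-ins; in a real system the draft, critique, and revision passes would each be LLM calls with different prompts, and the stub rule inside critique is purely illustrative.

```python
def draft(task):
    return "The clause permits termination with 30 days notice."

def critique(text):
    # a verifier pass checks the draft against a reference; stub rule here
    if "30 days" in text:
        return "Reference requires 60 days notice; revise the period."
    return "OK"

def revise(text, feedback):
    return text.replace("30 days", "60 days")

def reflexive_answer(task, max_rounds=3):
    out = draft(task)
    for _ in range(max_rounds):
        fb = critique(out)
        if fb == "OK":
            break
        out = revise(out, fb)   # feed the critique back into the output
    return out

print(reflexive_answer("draft termination clause"))
```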
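Seventh, a programmatic prompt pipeline (point 7): role templates filled from validated slots replace hand-tuned prompt strings. The template fields are illustrative assumptions.

```python
TEMPLATES = {
    "summarizer": ("You are a summarization agent.\n"
                   "Context:\n{context}\n"
                   "Summarize for: {audience}\n"),
    "planner":    ("You are a planning agent.\n"
                   "Goal: {goal}\nConstraints: {constraints}\n"),
}

def build_prompt(role: str, **slots) -> str:
    """Assemble a role-consistent prompt; fail loudly on missing slots."""
    try:
        return TEMPLATES[role].format(**slots)
    except KeyError as missing:
        raise ValueError(f"missing slot for role {role}: {missing}")

print(build_prompt("summarizer",
                   context="Q3 incident reports ...",
                   audience="executive team"))
```

Failing loudly on a missing slot is the reproducibility win: a malformed prompt is rejected at build time instead of silently degrading the agent's output.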
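Eighth, simulation-based planning (point 8). A toy STRIPS-style forward search over a set-of-facts state; the three-action retrieval/summary/verification domain is an illustrative assumption, and a real PDDL planner would add typed objects, costs, and concurrency.

```python
ACTIONS = [
    # (name, preconditions, add effects, delete effects)
    ("retrieve_data", {"task_assigned"}, {"data_ready"},   set()),
    ("summarize",     {"data_ready"},    {"summary_done"}, set()),
    ("verify",        {"summary_done"},  {"verified"},     set()),
]

def plan(state: frozenset, goal: set, depth: int = 6):
    """Depth-limited forward search; returns action names or None."""
    if goal <= state:
        return []
    if depth == 0:
        return None
    for name, pre, add, delete in ACTIONS:
        if pre <= state:
            nxt = frozenset((state - delete) | add)
            if nxt == state:        # skip actions that change nothing
                continue
            rest = plan(nxt, goal, depth - 1)
            if rest is not None:
                return [name] + rest
    return None

print(plan(frozenset({"task_assigned"}), {"verified"}))
# -> ['retrieve_data', 'summarize', 'verify']
```

Each action's preconditions encode the inter-agent dependency (one agent's effect enables another's precondition), which is exactly how a shared planner keeps cooperating agents aligned.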
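Ninth, the audit trail of point 9: an append-only event log keyed by a shared trace id, so that a failure can be attributed to a specific agent post hoc. The event fields are illustrative assumptions.

```python
import json
import time
import uuid

AUDIT_LOG = []   # append-only; in production this would be durable storage

def log_event(trace_id: str, agent: str, kind: str, detail: str):
    AUDIT_LOG.append({
        "ts": time.time(), "trace_id": trace_id,
        "agent": agent, "kind": kind, "detail": detail,
    })

trace = uuid.uuid4().hex
log_event(trace, "retriever", "tool_call", "vector_search('q3 incidents')")
log_event(trace, "summarizer", "output", "Drafted incident summary v1")

# Post-hoc analysis: reconstruct everything that happened within one trace.
for e in AUDIT_LOG:
    if e["trace_id"] == trace:
        print(json.dumps(e, sort_keys=True))
```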
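Finally, governance-aware role isolation (point 10): each role is granted an explicit capability set and every action request is checked before execution. The role names and capability strings are illustrative assumptions.

```python
PERMISSIONS = {
    "retriever":  {"read_documents"},
    "summarizer": {"read_documents", "write_drafts"},
    "executor":   {"read_documents", "call_external_api"},
}

class CapabilityError(Exception):
    """Raised when an agent acts outside its granted scope."""

def authorize(role: str, capability: str) -> None:
    if capability not in PERMISSIONS.get(role, set()):
        raise CapabilityError(f"role '{role}' may not '{capability}'")

authorize("summarizer", "write_drafts")            # within scope
try:
    authorize("summarizer", "call_external_api")   # out of scope: blocked
except CapabilityError as err:
    print("blocked:", err)
```

Combined with the audit trail above, the same check point is where decisions become attributable: every allow or deny can be logged against the requesting agent's identity.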
Fig. 14: Mindmap visualization of the future roadmap for AI Agents and Agentic AI.
AI Agents are projected to evolve significantly through enhanced modular intelligence focused on five key domains, as depicted in Figure 14: proactive reasoning, tool integration, causal inference, continual learning, and trust-centric operations. The first transformative milestone involves transitioning from reactive to Proactive Intelligence, where agents initiate tasks based on learned patterns, contextual cues, or latent goals rather than awaiting explicit prompts. This advancement depends heavily on robust Tool Integration, enabling agents to dynamically interact with external systems, such as databases, APIs, or simulation environments, to fulfill complex user tasks. Equally critical is the development of Causal Reasoning, which will allow agents to move beyond statistical correlation, supporting inference of cause-effect relationships essential for tasks involving diagnosis, planning, or prediction. To maintain relevance over time, agents must adopt frameworks for Continuous Learning, incorporating feedback loops and episodic memory to adapt their behavior across sessions and environments. Lastly, to build user confidence, agents must prioritize Trust & Safety mechanisms through verifiable output logging, bias detection, and ethical guardrails, especially as their autonomy increases. Together, these pathways will redefine AI Agents from static tools into adaptive cognitive systems capable of autonomous yet controllable operation in dynamic digital environments.
Agentic AI, as a natural extension of these foundations, emphasizes collaborative intelligence through multi-agent coordination, contextual persistence, and domain-specific orchestration. Future systems (Figure 14, right side) will exhibit Multi-Agent Scaling, enabling specialized agents to work in parallel under distributed control for complex problem-solving, mirroring team-based human workflows. This necessitates a layer of Unified Orchestration, where meta-agents or orchestrators dynamically assign roles, monitor task dependencies, and mediate conflicts among subordinate agents. Sustained performance over time depends on Persistent Memory architectures, which preserve semantic, episodic, and shared knowledge for agents to coordinate longitudinal tasks and retain state awareness. Simulation Planning is expected to become a core feature, allowing agent collectives to test hypothetical strategies, forecast consequences, and optimize outcomes before real-world execution.
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 26, "page_label": "27", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134939"}
Moreover, Ethical Governance frameworks will be essential to ensure responsible deployment, defining accountability, oversight, and value alignment across autonomous agent networks. Finally, tailored Domain-Specific Systems will emerge in fields like law, medicine, and supply chains, leveraging contextual specialization to outperform generic agents. This future positions Agentic AI not merely as a coordination layer on top of AI Agents, but as a new paradigm for collective machine intelligence with adaptive planning, recursive reasoning, and collaborative cognition at its core.
A transformative direction for future AI systems is introduced by the Absolute Zero: Reinforced Self-play Reasoning with Zero Data (AZR) framework, which reimagines the learning paradigm for AI Agents and Agentic AI by removing dependency on external datasets [217]. Traditionally, both AI Agents and Agentic AI architectures have relied on human-annotated data, static knowledge corpora, or preconfigured environments, factors that constrain scalability and adaptability in open-world contexts. AZR addresses this limitation by enabling agents to autonomously generate, validate, and solve their own tasks, using verifiable feedback mechanisms (e.g., code execution) to ground learning. This self-evolving mechanism opens the door to truly autonomous reasoning agents capable of self-directed learning and adaptation in dynamic, data-scarce environments.
In the context of Agentic AI, where multiple specialized agents collaborate within orchestrated workflows, AZR lays the groundwork for agents to not only specialize but also co-evolve. For instance, scientific research pipelines could consist of agents that propose hypotheses, run simulations, validate findings, and revise strategies, entirely through self-play and verifiable reasoning, without continuous human oversight. By integrating the AZR paradigm, such systems can maintain persistent growth, knowledge refinement, and task flexibility across time. Ultimately, AZR highlights a future in which AI agents transition from static, pretrained tools to intelligent, self-improving ecosystems, positioning both AI Agents and Agentic AI at the forefront of next-generation artificial intelligence.
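To make the propose-solve-verify cycle concrete, the following is a deliberately toy stand-in for the AZR-style loop described above, not the actual AZR algorithm: the agent proposes a small, checkable task (here, triangular numbers), attempts it, and uses exact execution against the task's own specification as the reward signal.

```python
def propose_task(round_no: int):
    # Task: compute the n-th triangular number; the spec carries its own
    # verifiable ground truth, so no human labels are needed.
    n = round_no
    return {"prompt": f"triangular({n})", "n": n, "expected": n * (n + 1) // 2}

def solve(task):
    # A (possibly imperfect) solver; learning would adjust this over rounds.
    return sum(range(task["n"] + 1))

def verify(task, answer) -> bool:
    # Execution-grounded check, not an LLM-as-judge opinion.
    return answer == task["expected"]

history = []
for r in range(1, 4):
    t = propose_task(r)
    ok = verify(t, solve(t))
    history.append((t["prompt"], ok))   # reward signal for self-improvement
print(history)  # [('triangular(1)', True), ('triangular(2)', True), ...]
```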
VII. CONCLUSION
In this study, we presented a comprehensive literature-based evaluation of the evolving landscape of AI Agents and Agentic AI, offering a structured taxonomy that highlights foundational concepts, architectural evolution, application domains, and key limitations. Beginning with a foundational understanding, we characterized AI Agents as modular, task-specific entities with constrained autonomy and reactivity. Their operational scope is grounded in the integration of LLMs and LIMs, which serve as core reasoning modules for perception, language understanding, and decision-making. We identified generative AI as a functional precursor, emphasizing its limitations in autonomy and goal persistence, and examined how LLMs drive the progression from passive generation to interactive task completion through tool augmentation.
This study then explored the conceptual emergence of Agentic AI systems as a transformative evolution from isolated agents to orchestrated, multi-agent ecosystems. We analyzed key differentiators such as distributed cognition, persistent memory, and coordinated planning that distinguish Agentic AI from conventional agent models. This was followed by a detailed breakdown of architectural evolution, highlighting the transition from monolithic, rule-based frameworks to modular, role-specialized networks facilitated by orchestration layers and reflective memory architectures. This study then surveyed application domains in which these paradigms are deployed. For AI Agents, we illustrated their role in automating customer support, internal enterprise search, email prioritization, and scheduling. For Agentic AI, we demonstrated use cases in collaborative research, robotics, medical decision support, and adaptive workflow automation, supported by practical examples and industry-grade systems.
Finally, this study provided a deep analysis of the challenges and limitations affecting both paradigms. For AI Agents, we discussed hallucinations, shallow reasoning, and planning constraints, while for Agentic AI, we addressed amplified causality issues, coordination bottlenecks, emergent behavior, and governance concerns. These insights offer a roadmap for future development and deployment of trustworthy, scalable agentic systems.
ACKNOWLEDGEMENT
This work was supported by the National Science Foundation and the United States Department of Agriculture, National Institute of Food and Agriculture, through the "Artificial Intelligence (AI) Institute for Agriculture" Program under Awards AWD003473 and AWD004595, Accession Number 1029004, "Robotic Blossom Thinning with Soft Manipulators". The publication of the article in open-access mode was financially supported by HEAL-Link.
DECLARATIONS
The authors declare no conflicts of interest.
STATEMENT ON AI WRITING ASSISTANCE
ChatGPT and Perplexity were utilized to enhance grammatical accuracy and refine sentence structure; all AI-generated revisions were thoroughly reviewed and edited for relevance. Additionally, ChatGPT-4o was employed to generate realistic visualizations.
REFERENCES
[1] E. Oliveira, K. Fischer, and O. Stepankova, "Multi-agent systems: which research for which applications," Robotics and Autonomous Systems, vol. 27, no. 1-2, pp. 91–106, 1999.
[2] Z. Ren and C. J. Anumba, "Multi-agent systems in construction–state of the art and prospects," Automation in Construction, vol. 13, no. 3, pp. 421–434, 2004.
[3] C. Castelfranchi, "Modelling social action for ai agents," Artificial Intelligence, vol. 103, no. 1-2, pp. 157–182, 1998.
[4] J. Ferber and G. Weiss, Multi-agent systems: an introduction to distributed artificial intelligence, vol. 1. Addison-Wesley Reading, 1999.
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 27, "page_label": "28", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134940"}
[5] R. Calegari, G. Ciatto, V. Mascardi, and A. Omicini, "Logic-based technologies for multi-agent systems: a systematic literature review," Autonomous Agents and Multi-Agent Systems, vol. 35, no. 1, p. 1, 2021.
[6] R. C. Cardoso and A. Ferrando, "A review of agent-based programming for multi-agent systems," Computers, vol. 10, no. 2, p. 16, 2021.
[7] E. Shortliffe, Computer-based medical consultations: MYCIN, vol. 2. Elsevier, 2012.
[8] H. P. Moravec, "The stanford cart and the cmu rover," Proceedings of the IEEE, vol. 71, no. 7, pp. 872–884, 1983.
[9] B. Dai and H. Chen, "A multi-agent and auction-based framework and approach for carrier collaboration," Logistics Research, vol. 3, pp. 101–120, 2011.
[10] J. Grosset, A.-J. Fougères, M. Djoko-Kouam, and J.-M. Bonnin, "Multi-agent simulation of autonomous industrial vehicle fleets: Towards dynamic task allocation in v2x cooperation mode," Integrated Computer-Aided Engineering, vol. 31, no. 3, pp. 249–266, 2024.
[11] R. A. Agis, S. Gottifredi, and A. J. García, "An event-driven behavior trees extension to facilitate non-player multi-agent coordination in video games," Expert Systems with Applications, vol. 155, p. 113457, 2020.
[12] A. Guerra-Hernández, A. El Fallah-Seghrouchni, and H. Soldano, "Learning in bdi multi-agent systems," in International Workshop on Computational Logic in Multi-Agent Systems, pp. 218–233, Springer, 2004.
[13] A. Saadi, R. Maamri, and Z. Sahnoun, "Behavioral flexibility in belief-desire-intention (bdi) architectures," Multiagent and Grid Systems, vol. 16, no. 4, pp. 343–377, 2020.
[14] D. B. Acharya, K. Kuppan, and B. Divya, "Agentic ai: Autonomous intelligence for complex goals–a comprehensive survey," IEEE Access, 2025.
[15] M. Z. Pan, M. Cemri, L. A. Agrawal, S. Yang, B. Chopra, R. Tiwari, K. Keutzer, A. Parameswaran, K. Ramchandran, D. Klein, et al., "Why do multiagent systems fail?," in ICLR 2025 Workshop on Building Trust in Language Models and Applications, 2025.
[16] L. Hughes, Y. K. Dwivedi, T. Malik, M. Shawosh, M. A. Albashrawi, I. Jeon, V. Dutot, M. Appanderanda, T. Crick, R. De', et al., "Ai agents and agentic systems: A multi-expert analysis," Journal of Computer Information Systems, pp. 1–29, 2025.
[17] Z. Deng, Y. Guo, C. Han, W. Ma, J. Xiong, S. Wen, and Y. Xiang, "Ai agents under threat: A survey of key security challenges and future pathways," ACM Computing Surveys, vol. 57, no. 7, pp. 1–36, 2025.
[18] M. Gridach, J. Nanavati, K. Z. E. Abidine, L. Mendes, and C. Mack, "Agentic ai for scientific discovery: A survey of progress, challenges, and future directions," arXiv preprint arXiv:2503.08979, 2025.
[19] T. Song, M. Luo, X. Zhang, L. Chen, Y. Huang, J. Cao, Q. Zhu, D. Liu, B. Zhang, G. Zou, et al., "A multiagent-driven robotic ai chemist enabling autonomous chemical research on demand," Journal of the American Chemical Society, vol. 147, no. 15, pp. 12534–12545, 2025.
[20] M. M. Karim, D. H. Van, S. Khan, Q. Qu, and Y. Kholodov, "Ai agents meet blockchain: A survey on secure and scalable collaboration for multi-agents," Future Internet, vol. 17, no. 2, p. 57, 2025.
[21] A. Radford, K. Narasimhan, T. Salimans, I. Sutskever, et al., "Improving language understanding by generative pre-training," arXiv, 2018.
[22] J. Sánchez Cuadrado, S. Pérez-Soler, E. Guerra, and J. De Lara, "Automating the development of task-oriented llm-based chatbots," in Proceedings of the 6th ACM Conference on Conversational User Interfaces, pp. 1–10, 2024.
[23] Y. Lu, A. Aleta, C. Du, L. Shi, and Y. Moreno, "Llms and generative agent-based models for complex systems research," Physics of Life Reviews, 2024.
[24] A. Zhang, Y. Chen, L. Sheng, X. Wang, and T.-S. Chua, "On generative agents in recommendation," in Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1807–1817, 2024.
[25] S. Peng, E. Kalliamvakou, P. Cihon, and M. Demirer, "The impact of ai on developer productivity: Evidence from github copilot," arXiv preprint arXiv:2302.06590, 2023.
[26] J. Li, V. Lavrukhin, B. Ginsburg, R. Leary, O. Kuchaiev, J. M. Cohen, H. Nguyen, and R. T. Gadde, "Jasper: An end-to-end convolutional neural acoustic model," arXiv preprint arXiv:1904.03288, 2019.
[27] A. Jaruga-Rozdolska, "Artificial intelligence as part of future practices in the architect's work: Midjourney generative tool as part of a process of creating an architectural form," Architectus, no. 3 (71), pp. 95–104, 2022.
[28] K. Basu, "Bridging knowledge gaps in llms via function calls," in Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, pp. 5556–5557, 2024.
[29] Z. Liu, T. Hoang, J. Zhang, M. Zhu, T. Lan, J. Tan, W. Yao, Z. Liu, Y. Feng, R. RN, et al., "Apigen: Automated pipeline for generating verifiable and diverse function-calling datasets," Advances in Neural Information Processing Systems, vol. 37, pp. 54463–54482, 2024.
[30] H. Yang, S. Yue, and Y. He, "Auto-gpt for online decision making: Benchmarks and additional opinions," arXiv preprint arXiv:2306.02224, 2023.
[31] I. Hettiarachchi, "Exploring generative ai agents: Architecture, applications, and challenges," Journal of Artificial Intelligence General Science (JAIGS), ISSN: 3006-4023, vol. 8, no. 1, pp. 105–127, 2025.
[32] A. Das, S.-C. Chen, M.-L. Shyu, and S. Sadiq, "Enabling synergistic knowledge sharing and reasoning in large language models with collaborative multi-agents," in 2023 IEEE 9th International Conference on Collaboration and Internet Computing (CIC), pp. 92–98, IEEE, 2023.
[33] R. Surapaneni, J. Miku, M. Vakoc, and T. Segal, "Announcing the agent2agent protocol (a2a) - google developers blog," 4 2025.
[34] Z. Duan and J. Wang, "Exploration of llm multi-agent application implementation based on langgraph+ crewai," arXiv preprint arXiv:2411.18241, 2024.
[35] R. Sapkota, Y. Cao, K. I. Roumeliotis, and M. Karkee, "Vision-language-action models: Concepts, progress, applications and challenges," arXiv preprint arXiv:2505.04769, 2025.
[36] R. Sapkota, K. I. Roumeliotis, R. H. Cheppally, M. F. Calero, and M. Karkee, "A review of 3d object detection with vision-language models," arXiv preprint arXiv:2504.18738, 2025.
[37] R. Sapkota and M. Karkee, "Object detection with multimodal large vision-language models: An in-depth review," Available at SSRN 5233953, 2025.
[38] B. Memarian and T. Doleck, "Human-in-the-loop in artificial intelligence in education: A review and entity-relationship (er) analysis," Computers in Human Behavior: Artificial Humans, vol. 2, no. 1, p. 100053, 2024.
[39] P. Bornet, J. Wirtz, T. H. Davenport, D. De Cremer, B. Evergreen, P. Fersht, R. Gohel, S. Khiyara, P. Sund, and N. Mullakara, Agentic Artificial Intelligence: Harnessing AI Agents to Reinvent Business, Work and Life. Irreplaceable Publishing, 2025.
[40] F. Sado, C. K. Loo, W. S. Liew, M. Kerzel, and S. Wermter, "Explainable goal-driven agents and robots-a comprehensive review," ACM Computing Surveys, vol. 55, no. 10, pp. 1–41, 2023.
[41] J. Heer, "Agency plus automation: Designing artificial intelligence into interactive systems," Proceedings of the National Academy of Sciences, vol. 116, no. 6, pp. 1844–1850, 2019.
[42] G. Papagni, J. de Pagter, S. Zafari, M. Filzmoser, and S. T. Koeszegi, "Artificial agents' explainability to support trust: considerations on timing and context," AI & Society, vol. 38, no. 2, pp. 947–960, 2023.
[43] P. Wang and H. Ding, "The rationality of explanation or human capacity? understanding the impact of explainable artificial intelligence on human-ai trust and decision performance," Information Processing & Management, vol. 61, no. 4, p. 103732, 2024.
[44] E. Popa, "Human goals are constitutive of agency in artificial intelligence (ai)," Philosophy & Technology, vol. 34, no. 4, pp. 1731–1750, 2021.
[45] M. Chacon-Chamorro, L. F. Giraldo, N. Quijano, V. Vargas-Panesso, C. González, J. S. Pinzón, R. Manrique, M. Ríos, Y. Fonseca, D. Gómez-Barrera, et al., "Cooperative resilience in artificial intelligence multiagent systems," IEEE Transactions on Artificial Intelligence, 2025.
[46] M. Adam, M. Wessel, and A. Benlian, "Ai-based chatbots in customer service and their effects on user compliance," Electronic Markets, vol. 31, no. 2, pp. 427–445, 2021.
[47] D. Leocádio, L. Guedes, J. Oliveira, J. Reis, and N. Melão, "Customer service with ai-powered human-robot collaboration (hrc): A literature review," Procedia Computer Science, vol. 232, pp. 1222–1232, 2024.
[48] T. Cao, Y. Q. Khoo, S. Birajdar, Z. Gong, C.-F. Chung, Y. Moghaddam, A. Xu, H. Mehta, A. Shukla, Z. Wang, et al., "Designing towards productivity: A centralized ai assistant concept for work," The Human Side of Service Engineering, p. 118, 2024.
[49] Y. Huang and J. X. Huang, "Exploring chatgpt for next-generation information retrieval: Opportunities and challenges," in Web Intelligence, vol. 22, pp. 31–44, SAGE Publications Sage UK: London, England, 2024.
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 28, "page_label": "29", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134941"}
[50] N. Holtz, S. Wittfoth, and J. M. Gómez, "The new era of knowledge retrieval: Multi-agent systems meet generative ai," in 2024 Portland International Conference on Management of Engineering and Technology (PICMET), pp. 1–10, IEEE, 2024.
[51] F. Poszler and B. Lange, "The impact of intelligent decision-support systems on humans' ethical decision-making: A systematic literature review and an integrated framework," Technological Forecasting and Social Change, vol. 204, p. 123403, 2024.
[52] F. Khemakhem, H. Ellouzi, H. Ltifi, and M. B. Ayed, "Agent-based intelligent decision support systems: a systematic review," IEEE Transactions on Cognitive and Developmental Systems, vol. 14, no. 1, pp. 20–34, 2020.
[53] S. Ringer, "Introducing computer use, a new claude 3.5 sonnet, and claude 3.5 haiku anthropic," 10 2024.
[54] R. V. Florian, "Autonomous artificial intelligent agents," Center for Cognitive and Neural Studies (Coneural), Cluj-Napoca, Romania, 2003.
[55] T. Hellström, N. Kaiser, and S. Bensch, "A taxonomy of embodiment in the ai era," Electronics, vol. 13, no. 22, p. 4441, 2024.
[56] M. Wischnewski, "Attributing mental states to non-embodied autonomous systems: A systematic review," in Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, pp. 1–8, 2025.
[57] K. Greshake, S. Abdelnabi, S. Mishra, C. Endres, T. Holz, and M. Fritz, "Not what you've signed up for: Compromising real-world llm-integrated applications with indirect prompt injection," in Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security, pp. 79–90, 2023.
[58] Y. Talebirad and A. Nadiri, "Multi-agent collaboration: Harnessing the power of intelligent llm agents," arXiv preprint arXiv:2306.03314, 2023.
[59] A. I. Hauptman, B. G. Schelble, N. J. McNeese, and K. C. Madathil, "Adapt and overcome: Perceptions of adaptive autonomous agents for human-ai teaming," Computers in Human Behavior, vol. 138, p. 107451, 2023.
[60] N. Krishnan, "Advancing multi-agent systems through model context protocol: Architecture, implementation, and applications," arXiv preprint arXiv:2504.21030, 2025.
[61] H. Padigela, C. Shah, and D. Juyal, "Ml-dev-bench: Comparative analysis of ai agents on ml development workflows," arXiv preprint arXiv:2502.00964, 2025.
[62] M. Raees, I. Meijerink, I. Lykourentzou, V.-J. Khan, and K. Papangelis, "From explainable to interactive ai: A literature review on current trends in human-ai interaction," International Journal of Human-Computer Studies, p. 103301, 2024.
[63] P. Formosa, "Robot autonomy vs. human autonomy: social robots, artificial intelligence (ai), and the nature of autonomy," Minds and Machines, vol. 31, no. 4, pp. 595–616, 2021.
[64] C. S. Eze and L. Shamir, "Analysis and prevention of ai-based phishing email attacks," Electronics, vol. 13, no. 10, p. 1839, 2024.
[65] D. Singh, V. Patel, D. Bose, and A. Sharma, "Enhancing email marketing efficacy through ai-driven personalization: Leveraging natural language processing and collaborative filtering algorithms," International Journal of AI Advancements, vol. 9, no. 4, 2020.
[66] R. Khan, S. Sarkar, S. K. Mahata, and E. Jose, "Security threats in agentic ai system," arXiv preprint arXiv:2410.14728, 2024.
[67] C. G. Endacott, "Enacting machine agency when ai makes one's day: understanding how users relate to ai communication technologies for scheduling," Journal of Computer-Mediated Communication, vol. 29, no. 4, p. zmae011, 2024.
[68] Z. Pawlak and A. Skowron, "Rudiments of rough sets," Information Sciences, vol. 177, no. 1, pp. 3–27, 2007.
[69] P. Ponnusamy, A. Ghias, Y. Yi, B. Yao, C. Guo, and R. Sarikaya, "Feedback-based self-learning in large-scale conversational ai agents," AI Magazine, vol. 42, no. 4, pp. 43–56, 2022.
[70] A. Zagalsky, D. Te'eni, I. Yahav, D. G. Schwartz, G. Silverman, D. Cohen, Y. Mann, and D. Lewinsky, "The design of reciprocal learning between human and artificial intelligence," Proceedings of the ACM on Human-Computer Interaction, vol. 5, no. CSCW2, pp. 1–36, 2021.
[71] W. J. Clancey, "Heuristic classification," Artificial Intelligence, vol. 27, no. 3, pp. 289–350, 1985.
[72] S. Kapoor, B. Stroebl, Z. S. Siegel, N. Nadgir, and A. Narayanan, "Ai agents that matter," arXiv preprint arXiv:2407.01502, 2024.
[73] X. Huang, J. Lian, Y. Lei, J. Yao, D. Lian, and X. Xie, "Recommender ai agent: Integrating large language models for interactive recommendations," arXiv preprint arXiv:2308.16505, 2023.
[74] A. M. Baabdullah, A. A. Alalwan, R. S. Algharabat, B. Metri, and N. P. Rana, "Virtual agents and flow experience: An empirical examination of ai-powered chatbots," Technological Forecasting and Social Change, vol. 181, p. 121772, 2022.
[75] K. I. Roumeliotis, N. D. Tselikas, and D. K. Nasiopoulos, "Llms for product classification in e-commerce: A zero-shot comparative study of gpt and claude models," Natural Language Processing Journal, vol. 11, p. 100142, 6 2025.
[76] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al., "Gpt-4 technical report," arXiv preprint arXiv:2303.08774, 2023.
[77] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, et al., "Palm: Scaling language modeling with pathways," Journal of Machine Learning Research, vol. 24, no. 240, pp. 1–113, 2023.
[78] H. Honda and M. Hagiwara, "Question answering systems with deep learning-based symbolic processing," IEEE Access, vol. 7, pp. 152368–152378, 2019.
[79] N. Karanikolas, E. Manga, N. Samaridi, E. Tousidou, and M. Vassilakopoulos, "Large language models versus natural language understanding and generation," in Proceedings of the 27th Pan-Hellenic Conference on Progress in Computing and Informatics, pp. 278–290, 2023.
[80] A. S. George, A. H. George, T. Baskar, and A. G. Martin, "Revolutionizing business communication: Exploring the potential of gpt-4 in corporate settings," Partners Universal International Research Journal, vol. 2, no. 1, pp. 149–157, 2023.
[81] K. I. Roumeliotis, N. D. Tselikas, and D. K. Nasiopoulos, "Think before you classify: The rise of reasoning large language models for consumer complaint detection and classification," Electronics, vol. 14, p. 1070, 3 2025.
[82] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al., "Learning transferable visual models from natural language supervision," in International Conference on Machine Learning, pp. 8748–8763, PMLR, 2021.
[83] J. Li, D. Li, S. Savarese, and S. Hoi, "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models," in International Conference on Machine Learning, pp. 19730–19742, PMLR, 2023.
[84] S. Sontakke, J. Zhang, S. Arnold, K. Pertsch, E. Bıyık, D. Sadigh, C. Finn, and L. Itti, "Roboclip: One demonstration is enough to learn robot policies," Advances in Neural Information Processing Systems, vol. 36, pp. 55681–55693, 2023.
[85] M. Elhenawy, H. I. Ashqar, A. Rakotonirainy, T. I. Alhadidi, A. Jaber, and M. A. Tami, "Vision-language models for autonomous driving: Clip-based dynamic scene understanding," Electronics, vol. 14, no. 7, p. 1282, 2025.
[86] S. Park, M. Lee, J. Kang, H. Choi, Y. Park, J. Cho, A. Lee, and D. Kim, "Vlaad: Vision and language assistant for autonomous driving," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 980–987, 2024.
[87] S. H. Ahmed, S. Hu, and G. Sukthankar, "The potential of vision-language models for content moderation of children's videos," in 2023 International Conference on Machine Learning and Applications (ICMLA), pp. 1237–1241, IEEE, 2023.
[88] S. H. Ahmed, M. J. Khan, and G. Sukthankar, "Enhanced multimodal content moderation of children's videos using audiovisual fusion," arXiv preprint arXiv:2405.06128, 2024.
[89] K. I. Roumeliotis, R. Sapkota, M. Karkee, N. D. Tselikas, and D. K. Nasiopoulos, "Plant disease detection through multimodal large language models and convolutional neural networks," 4 2025.
[90] P. Chitra and A. Saleem Raja, "Artificial intelligence (ai) algorithm and models for embodied agents (robots and drones)," in Building Embodied AI Systems: The Agents, the Architecture Principles, Challenges, and Application Domains, pp. 417–441, Springer, 2025.
[91] S. Kourav, K. Verma, and M. Sundararajan, "Artificial intelligence algorithm models for agents of embodiment for drone applications," in Building Embodied AI Systems: The Agents, the Architecture Principles, Challenges, and Application Domains, pp. 79–101, Springer, 2025.
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 29, "page_label": "30", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134943"}
[92] G. Natarajan, E. Elango, B. Sundaravadivazhagan, and S. Rethinam, "Artificial intelligence algorithms and models for embodied agents: Enhancing autonomy in drones and robots," in Building Embodied AI Systems: The Agents, the Architecture Principles, Challenges, and Application Domains, pp. 103–132, Springer, 2025.
[93] K. Pandya and M. Holia, "Automating customer service using langchain: Building custom open-source gpt chatbot for organizations," arXiv preprint arXiv:2310.05421, 2023.
[94] Q. Wu, G. Bansal, J. Zhang, Y. Wu, B. Li, E. Zhu, L. Jiang, X. Zhang, S. Zhang, J. Liu, et al., "Autogen: Enabling next-gen llm applications via multi-agent conversation," arXiv preprint arXiv:2308.08155, 2023.
[95] L. Gabora and J. Bach, "A path to generative artificial selves," in EPIA Conference on Artificial Intelligence, pp. 15–29, Springer, 2023.
[96] G. Pezzulo, T. Parr, P. Cisek, A. Clark, and K. Friston, "Generating meaning: active inference and the scope and limits of passive ai," Trends in Cognitive Sciences, vol. 28, no. 2, pp. 97–112, 2024.
[97] J. Li, M. Zhang, N. Li, D. Weyns, Z. Jin, and K. Tei, "Generative ai for self-adaptive systems: State of the art and research roadmap," ACM Transactions on Autonomous and Adaptive Systems, vol. 19, no. 3, pp. 1–60, 2024.
[98] W. O'Grady and M. Lee, "Natural syntax, artificial intelligence and language acquisition," Information, vol. 14, no. 7, p. 418, 2023.
[99] X. Liu, J. Wang, J. Sun, X. Yuan, G. Dong, P. Di, W. Wang, and D. Wang, "Prompting frameworks for large language models: A survey," arXiv preprint arXiv:2311.12785, 2023.
[100] E. T. Rolls, "The memory systems of the human brain and generative artificial intelligence," Heliyon, vol. 10, no. 11, 2024.
[101] K. Alizadeh, S. I. Mirzadeh, D. Belenko, S. Khatamifard, M. Cho, C. C. Del Mundo, M. Rastegari, and M. Farajtabar, "Llm in a flash: Efficient large language model inference with limited memory," in Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 12562–12584, 2024.
[102] D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, A. Wahid, J. Tompson, Q. Vuong, T. Yu, W. Huang, et al., "Palm-e: An embodied multimodal language model," 2023.
[103] P. Denny, J. Leinonen, J. Prather, A. Luxton-Reilly, T. Amarouche, B. A. Becker, and B. N. Reeves, "Prompt problems: A new programming exercise for the generative ai era," in Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1, pp. 296–302, 2024.
[104] C. Chen, S. Lee, E. Jang, and S. S. Sundar, "Is your prompt detailed enough? exploring the effects of prompt coaching on users' perceptions, engagement, and trust in text-to-image generative ai tools," in Proceedings of the Second International Symposium on Trustworthy Autonomous Systems, pp. 1–12, 2024.
[105] OpenAI, "Introducing gpt-4.1 in the api," 4 2025.
[106] A. Pan, E. Jones, M. Jagadeesan, and J. Steinhardt, "Feedback loops with language models drive in-context reward hacking," arXiv preprint arXiv:2402.06627, 2024.
[107] K. Nabben, "Ai as a constituted system: accountability lessons from an llm experiment," Data & Policy, vol. 6, p. e57, 2024.
[108] P. J. Pesch, "Potentials and challenges of large language models (llms) in the context of administrative decision-making," European Journal of Risk Regulation, pp. 1–20, 2025.
[109] C. Wang, Y. Deng, Z. Lyu, L. Zeng, J. He, S. Yan, and B. An, "Q*: Improving multi-step reasoning for llms with deliberative planning," arXiv preprint arXiv:2406.14283, 2024.
[110] H. Wei, Z. Zhang, S. He, T. Xia, S. Pan, and F. Liu, "Plangen-llms: A modern survey of llm planning capabilities," arXiv preprint arXiv:2502.11221, 2025.
[111] A. Bandi, P. V. S. R. Adapa, and Y. E. V. P. K. Kuchi, "The power of generative ai: A review of requirements, models, input–output formats, evaluation metrics, and challenges," Future Internet, vol. 15, no. 8, p. 260, 2023.
[112] Y. Liu, H. Du, D. Niyato, J. Kang, Z. Xiong, Y. Wen, and D. I. Kim, "Generative ai in data center networking: Fundamentals, perspectives, and case study," IEEE Network, 2025.
[113] C. Guo, F. Cheng, Z. Du, J. Kiessling, J. Ku, S. Li, Z. Li, M. Ma, T. Molom-Ochir, B. Morris, et al., "A survey: Collaborative hardware and software design in the era of large language models," IEEE Circuits and Systems Magazine, vol. 25, no. 1, pp. 35–57, 2025.
[114] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901, 2020.
[115] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al., "Llama: Open and efficient foundation language models," arXiv preprint arXiv:2302.13971, 2023.
[116] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, "Exploring the limits of transfer learning with a unified text-to-text transformer," Journal of Machine Learning Research, vol. 21, no. 140, pp. 1–67, 2020.
[117] A. Yang, B. Xiao, B. Wang, B. Zhang, C. Bian, C. Yin, C. Lv, D. Pan, D. Wang, D. Yan, et al., "Baichuan 2: Open large-scale language models," arXiv preprint arXiv:2309.10305, 2023.
[118] K. M. Yoo, D. Park, J. Kang, S.-W. Lee, and W. Park, "Gpt3mix: Leveraging large-scale language models for text augmentation," arXiv preprint arXiv:2104.08826, 2021.
[119] D. Zhou, X. Xue, X. Lu, Y. Guo, P. Ji, H. Lv, W. He, Y. Xu, Q. Li, and L. Cui, "A hierarchical model for complex adaptive system: From adaptive agent to ai society," ACM Transactions on Autonomous and Adaptive Systems, 2024.
[120] H. Hao, Y. Wang, and J. Chen, "Empowering scenario planning with artificial intelligence: A perspective on building smart and resilient cities," Engineering, 2024.
[121] Y. Wang, J. Zhu, Z. Cheng, L. Qiu, Z. Tong, and J. Huang, "Intelligent optimization method for real-time decision-making in laminated cooling configurations through reinforcement learning," Energy, vol. 291, p. 130434, 2024.
[122] X. Xiang, J. Xue, L. Zhao, Y. Lei, C. Yue, and K. Lu, "Real-time integration of fine-tuned large language model for improved decision-making in reinforcement learning," in 2024 International Joint Conference on Neural Networks (IJCNN), pp. 1–8, IEEE, 2024.
[123] Z. Li, H. Zhang, C. Peng, and R. Peiris, "Exploring large language model-driven agents for environment-aware spatial interactions and conversations in virtual reality role-play scenarios," in 2025 IEEE Conference Virtual Reality and 3D User Interfaces (VR), pp. 1–11, IEEE, 2025.
[124] T. R. McIntosh, T. Susnjak, T. Liu, P. Watters, and M. N. Halgamuge, "The inadequacy of reinforcement learning from human feedback-radicalizing large language models via semantic vulnerabilities," IEEE Transactions on Cognitive and Developmental Systems, 2024.
[125] S. Lee, G. Lee, W. Kim, J. Kim, J. Park, and K. Cho, "Human strategy learning-based multi-agent deep reinforcement learning for online team sports game," IEEE Access, 2025.
[126] Z. Shi, S. Gao, L. Yan, Y. Feng, X. Chen, Z. Chen, D. Yin, S. Verberne, and Z. Ren, "Tool learning in the wild: Empowering language models as automatic tool agents," in Proceedings of the ACM on Web Conference 2025, pp. 2222–2237, 2025.
[127] S. Yuan, K. Song, J. Chen, X. Tan, Y. Shen, R. Kan, D. Li, and D. Yang, "Easytool: Enhancing llm-based agents with concise tool instruction," arXiv preprint arXiv:2401.06201, 2024.
[128] B. Xu, X. Liu, H. Shen, Z. Han, Y. Li, M. Yue, Z. Peng, Y. Liu, Z. Yao, and D. Xu, "Gentopia: A collaborative platform for tool-augmented llms," arXiv preprint arXiv:2308.04030, 2023.
[129] H. Lu, X. Li, X. Ji, Z. Kan, and Q. Hu, "Toolfive: Enhancing tool-augmented llms via tool filtering and verification," in ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1–5, IEEE, 2025.
[130] Y. Song, F. Xu, S. Zhou, and G. Neubig, "Beyond browsing: Api-based web agents," arXiv preprint arXiv:2410.16464, 2024.
[131] V. Tupe and S. Thube, "Ai agentic workflows and enterprise apis: Adapting api architectures for the age of ai agents," arXiv preprint arXiv:2502.17443, 2025.
[132] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao, "React: Synergizing reasoning and acting in language models," in International Conference on Learning Representations (ICLR), 2023.
[133] OpenAI, "Introducing chatgpt search," 10 2024.
[134] L. Ning, Z. Liang, Z. Jiang, H. Qu, Y. Ding, W. Fan, X.-y. Wei, S. Lin, H. Liu, P. S. Yu, et al., "A survey of webagents: Towards next-generation ai agents for web automation with large foundation models," arXiv preprint arXiv:2503.23350, 2025.
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 30, "page_label": "31", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134944"}
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 31, "page_label": "32", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134945"}
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-21T00:48:59+00:00", "moddate": "2025-05-21T00:48:59+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "trapped": "/False", "source": "data\\raw\\ai_agents_vs_agentic_ai_2505.10468.pdf", "total_pages": 33, "page": 32, "page_label": "33", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134946"}
Control Plane as a Tool: A Scalable Design Pattern for Agentic AI Systems
Sivasathivel Kandasamy
[email protected]
May 13, 2025

Abstract
Agentic AI systems represent a new frontier in artificial intelligence, where agents, often based on large language models (LLMs), interact with tools, environments, and other agents to accomplish tasks with a degree of autonomy. These systems show promise across a range of domains, but their architectural underpinnings remain immature. This paper conducts a comprehensive review of the types of agents, their modes of interaction with the environment, and the infrastructural and architectural challenges that emerge. We identify a gap in how these systems manage tool orchestration at scale and propose a reusable design abstraction: the "Control Plane as a Tool" pattern. This pattern allows developers to expose a single tool interface to an agent while encapsulating modular tool routing logic behind it. We position this pattern within the broader context of agent design and argue that it addresses several key challenges in scaling, safety, and extensibility.

1 Introduction
Agents in software are not a new concept. The foundational definition can be traced back to Wooldridge and Jennings [14], who defined software agents as autonomous, goal-directed computational entities capable of perceiving and acting upon their environment. Historically, such agents have been explored across domains like robotics, multi-agent systems, and distributed computing. The advent of generative AI, especially large language models (LLMs) such as GPT-4 [12], Claude [2], and Gemini [8], has dramatically transformed this paradigm. LLM-driven agents are no longer bound by pre-coded rules; they now exhibit emergent reasoning, multi-step planning, memory awareness, and flexible tool use. This evolution has given rise to a new class of intelligent systems: Agentic AI.
We define Agentic AI as autonomous software programs, often LLM-powered, that can perceive their environment, plan behaviors, invoke external tools or APIs, and interact with both digital environments and other agents to fulfill predefined goals. These systems are characterized by goal-seeking autonomy, tool adaptability, contextual memory, and multi-agent coordination [9, 1, 18].
Agentic AI has rapidly entered mainstream discourse, with organizations seeking to embed agent-based workflows into domains such as customer service, software engineering, and operations. While some cases demonstrate meaningful gains [5], others are driven by hype cycles and premature generalization [4]. The core production value of Agentic AI lies in:
{"producer": "pikepdf 8.15.1", "creator": "arXiv GenPDF (tex2pdf:f38b2be)", "author": "Sivasathivel Kandasamy", "doi": "https://doi.org/10.48550/arXiv.2505.06817", "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "title": "Control Plane as a Tool: A Scalable Design Pattern for Agentic AI Systems", "trapped": "/False", "arxivid": "https://arxiv.org/abs/2505.06817v1", "source": "data\\raw\\control_plane_scalable_design_pattern_2505.06817.pdf", "total_pages": 9, "page": 0, "page_label": "1", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134947"}
• Autonomous Decision-Making: Dynamic task planning and real-time behavioral adaptation.
• Multi-Tool Integration: Composition across APIs, search interfaces, and databases.
• Contextual Reasoning: Use of memory and history for iterative improvement.
• Composable Workflows: Encapsulation of agents as modular, role-oriented microservices.
To realize these capabilities, developers rely on a combination of agentic design patterns, including:
• Reflection Pattern (ReAct) [17]: Alternates between reasoning and acting.
• Tool Use Pattern [7]: A tool is a piece of code that the agent uses to observe or act toward achieving its goal; the pattern focuses on agents that use tools to achieve their goals.
• Hierarchical Agentic Pattern [16]: Decomposes planning across layered sub-agents.
• Collaborative Agentic Pattern [15, 6]: Assigns roles to specialized agents that cooperate toward a shared objective.
Most production-grade systems employ hybrid designs, mixing multiple patterns to meet business constraints. In parallel, several frameworks have emerged to reduce orchestration complexity and abstract common operations:
• LangChain [10]: A Python framework that chains prompts, tools, and memory components. Focus: prompt-based orchestration and memory integration. Limitation: tight coupling of agent logic and tool invocation leads to brittle workflows.
• LangGraph [11]: A graph-based orchestration runtime supporting condition-based tool chains and node-based state handling. Focus: declarative, recoverable workflows. Limitation: requires explicit node wiring and is less dynamic for runtime tool adaptation.
• AutoGen [15]: An LLM-based multi-agent library emphasizing role separation and dialog coordination. Focus: agent-to-agent conversation and memory persistence. Limitation: orchestration is hardcoded; lacks modular tool routing logic.
• CrewAI [6]: A lightweight framework for role-based multi-agent collaboration. Focus: domain-specialized agents working in crews. Limitation: static role definitions; limited support for dynamic role/tool mutation.
• Anthropic MCP [3]: A schema-centric protocol enabling LLMs to securely invoke external tools. Focus: interoperability and tool safety via typed interfaces. Limitation: steep learning curve; orchestration logic is implicit and non-modular.
Despite these advancements, productionizing Agentic AI remains challenging due to:
• Tool Orchestration Complexity: Scaling APIs without prompt bloat or entangled logic [13, 7].
{"producer": "pikepdf 8.15.1", "creator": "arXiv GenPDF (tex2pdf:f38b2be)", "author": "Sivasathivel Kandasamy", "doi": "https://doi.org/10.48550/arXiv.2505.06817", "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "title": "Control Plane as a Tool: A Scalable Design Pattern for Agentic AI Systems", "trapped": "/False", "arxivid": "https://arxiv.org/abs/2505.06817v1", "source": "data\\raw\\control_plane_scalable_design_pattern_2505.06817.pdf", "total_pages": 9, "page": 1, "page_label": "2", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134948"}
• Governance and Observability: Ensuring traceability and enforcement of tool usage policies [15, 11, 3].
• Memory Synchronization: Maintaining consistent state across workflows [6, 10].
• Cross-Agent Coordination: Preventing task collisions and misaligned objectives [15, 17].
• Adaptability vs. Safety: Controlling exploratory behavior while preserving reliability [4, 5].
These challenges reveal a deeper architectural design gap. This paper focuses specifically on the challenges related to tool usage by agents. The major contribution of this article is an architectural design pattern that addresses the following limitations in tool handling in Agentic AI systems:
1. Add, remove, or modify tools without changing agent code or prompts.
2. Learn and personalize tool usage for specific tasks and users.
3. Track tool usage and enforce organizational or compliance policies.
4. Select tools dynamically based on context or metadata.
5. Reduce the learning curve for developers building agentic systems.
6. Enable and simplify distributed, collaborative development of tools across teams.
The next section introduces the proposed architectural pattern, Control Plane as a Tool, and discusses how it addresses the identified gaps in tool orchestration within Agentic AI systems. In the following sections, we demonstrate the application of this pattern by designing a simplified chatbot system and outline future directions for extending this work across other facets of Agentic AI architecture.

2 Proposed Design Pattern: Control Plane as a Tool
This section introduces a reusable design pattern, Control Plane as a Tool, that modularizes and enhances tool orchestration in Agentic AI systems. The pattern aims to decouple tool management from the agent's reasoning and decision layers, thereby enabling flexibility, observability, and scalability across systems. Naturally, this pattern can be considered an extension of the Tool Use Pattern.

2.1 Design Goals
The Control Plane as a Tool pattern is driven by the following goals:
• Modularity: The tool logic should be abstracted from the agent, allowing tools to be modified, added, or removed without changing the agent's prompt or control logic.
• Dynamic Selection: Tool invocation should be dynamic, based on task requirements, metadata, user profiles, or past interactions.
{"producer": "pikepdf 8.15.1", "creator": "arXiv GenPDF (tex2pdf:f38b2be)", "author": "Sivasathivel Kandasamy", "doi": "https://doi.org/10.48550/arXiv.2505.06817", "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "title": "Control Plane as a Tool: A Scalable Design Pattern for Agentic AI Systems", "trapped": "/False", "arxivid": "https://arxiv.org/abs/2505.06817v1", "source": "data\\raw\\control_plane_scalable_design_pattern_2505.06817.pdf", "total_pages": 9, "page": 2, "page_label": "3", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134949"}
• Governance and Observability: Tool usage should be auditable, allowing the enforcement of organizational or safety policies.
• Cross-Framework Portability: As a design pattern, it is framework-agnostic and can be used with any framework.
• Developer Usability: Developers should be able to use a single tool interface and offload orchestration complexity to the control plane.
• Support for Personalization: Agents should be able to learn and adapt tool selection policies based on feedback or task success.

Figure 1: How the control plane mediates the interaction between agents and tools. (a) Agent-tool separation through the control plane. (b) Agents as tools through the control plane.

2.2 Pattern Structure
In simple terms, the Control Plane, in this context, is a piece of software that configures and routes data between the configured tools and the agents. The set of configured tools forms the Tools Layer, and the one or more agents configured to use the control plane forms the Agentic Layer. The Control Plane is exposed to the agent as a tool, similar to other callable tools (e.g., search, calculator, database). Internally, the control plane executes the following sequence (a minimal sketch follows this list):
1. The agent queries the control plane with an intent or query.
2. The control plane parses the metadata of the registered tools and retrieves relevant candidate tools.
3. It applies routing logic (e.g., semantic similarity, user context, policy filters, user preference).
4. It calls the appropriate tool and logs the interaction.
5. It returns the output of the tool to the agent.
This makes orchestration transparent from the agent's point of view, supporting reuse, caching, validation, and dynamic composition. Figure 1 shows an overview of the high-level Control Plane; Figure 1b also shows that the proposed pattern enables interaction between agents through the control plane. The internals of the Control Plane are shown in Figure 2. Agents and external systems are expected to interact with the control plane through an API endpoint or a CLI.
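A minimal Python sketch of this sequence is given below. All names (ToolSpec, ControlPlane, the keyword-overlap _matches stub standing in for real semantic routing) are illustrative assumptions; the paper prescribes responsibilities, not a concrete API.

```python
# Minimal sketch of the "Control Plane as a Tool" pattern. Names are
# illustrative; routing here is a keyword-overlap stub, not semantic search.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ToolSpec:
    name: str
    description: str          # metadata used for routing
    run: Callable[..., Any]   # the callable tool itself
    policy_tags: set = field(default_factory=set)

class ControlPlane:
    def __init__(self) -> None:
        self._registry: dict[str, ToolSpec] = {}
        self._audit_log: list[dict] = []

    def register(self, spec: ToolSpec) -> None:
        """Registration: tools are added or removed here, never in the
        agent's prompt or code."""
        self._registry[spec.name] = spec

    def invoke(self, intent: str, **kwargs) -> Any:
        """The single tool interface exposed to the agent; mirrors the
        sequence above: retrieve candidates, route, call, log, return."""
        candidates = [s for s in self._registry.values()
                      if self._matches(intent, s)]       # step 2
        if not candidates:
            return {"error": f"no tool found for intent {intent!r}"}
        tool = candidates[0]                             # step 3 (stub routing)
        result = tool.run(**kwargs)                      # step 4: call the tool
        self._audit_log.append({"intent": intent, "tool": tool.name})
        return result                                    # step 5

    @staticmethod
    def _matches(intent: str, spec: ToolSpec) -> bool:
        # Placeholder for semantic similarity / policy filters.
        return any(w in spec.description.lower() for w in intent.lower().split())

# The agent binds to exactly one tool:
plane = ControlPlane()
plane.register(ToolSpec("calculator", "evaluate arithmetic expressions",
                        run=lambda expr: eval(expr)))    # demo only
print(plane.invoke("evaluate arithmetic", expr="2 + 3"))  # -> 5
```

Because the agent binds only to this single invoke interface, tools can be added, removed, or re-routed inside the registry without touching the agent's prompt or code, which is the modularity goal stated above.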
{"producer": "pikepdf 8.15.1", "creator": "arXiv GenPDF (tex2pdf:f38b2be)", "author": "Sivasathivel Kandasamy", "doi": "https://doi.org/10.48550/arXiv.2505.06817", "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "title": "Control Plane as a Tool: A Scalable Design Pattern for Agentic AI Systems", "trapped": "/False", "arxivid": "https://arxiv.org/abs/2505.06817v1", "source": "data\\raw\\control_plane_scalable_design_pattern_2505.06817.pdf", "total_pages": 9, "page": 3, "page_label": "4", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134950"}
Figure 2: Control Plane Architecture
{"producer": "pikepdf 8.15.1", "creator": "arXiv GenPDF (tex2pdf:f38b2be)", "author": "Sivasathivel Kandasamy", "doi": "https://doi.org/10.48550/arXiv.2505.06817", "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "title": "Control Plane as a Tool: A Scalable Design Pattern for Agentic AI Systems", "trapped": "/False", "arxivid": "https://arxiv.org/abs/2505.06817v1", "source": "data\\raw\\control_plane_scalable_design_pattern_2505.06817.pdf", "total_pages": 9, "page": 4, "page_label": "5", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134951"}
The Request Router module decodes the incoming request and routes it to the appropriate module, namely the Registration Module, the Invocation Module, or the Feedback Integration Module. The main goal of the Registration Module is to register the interacting agents, tools, validation rules, and metrics. The Invocation Module lets invoking agents query a tool or other registered agents. The Input Validator assesses the inputs passed for data integrity, safety, and alignment, based on the rules in the validation registry. Once the inputs are validated, the Intent Resolver tries to understand the invoking agent's intention in order to identify the correct tool or toolset. The Routing Handler, based on the resolved intent, identifies the tools (including agents) and their sequence of invocation. Once the outputs from the tools are consolidated, the Output Validator validates them again to make sure they comply with the registered rules and regulations. Once the results are validated, they are returned to the invoking agent. The Feedback Module integrates user feedback into the system so that tool selection and sequencing can be personalized to user preference. Though optional, this module is highly recommended for performance and accuracy.
The control plane is registered as a tool in an agentic framework, so each agent has to bind to only one tool, simplifying the process. The proposed architecture is considered a design pattern because it can be implemented either through an agentic approach or as a set of microservices. Both have advantages and disadvantages: the non-agentic approach keeps complexity to a minimum and may be less expensive, while the agentic approach provides more flexibility and extensibility.

3 Comparison with Model Context Protocol
The Model Context Protocol (MCP) [3] has an objective similar to that of the proposed approach, so it becomes necessary to examine the similarities and differences between the two. Table 1 shows the similarities between the two systems. (Disclaimer: Model Context Protocol (MCP) is a tool interface specification introduced by Anthropic. This paper does not implement, replicate, or reverse-engineer MCP. All comparisons are based on publicly available documentation and are intended solely for academic discussion and architectural contrast.)

3.1 Similarities Between the Control Plane and MCP
The proposed Control Plane architecture shares several goals and structural traits with Anthropic's Model Context Protocol (MCP), particularly in its emphasis on safety, tool registration, and structured invocation.

3.2 Key Differences Between the Control Plane and MCP
While the Control Plane and MCP share structural themes, their operational design and goals diverge significantly. The Control Plane emphasizes orchestration, governance, and multi-agent extensibility, whereas MCP focuses on standardizing tool invocation for a single LLM context. Table 2 shows how they differ from one another.
{"producer": "pikepdf 8.15.1", "creator": "arXiv GenPDF (tex2pdf:f38b2be)", "author": "Sivasathivel Kandasamy", "doi": "https://doi.org/10.48550/arXiv.2505.06817", "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "title": "Control Plane as a Tool: A Scalable Design Pattern for Agentic AI Systems", "trapped": "/False", "arxivid": "https://arxiv.org/abs/2505.06817v1", "source": "data\\raw\\control_plane_scalable_design_pattern_2505.06817.pdf", "total_pages": 9, "page": 5, "page_label": "6", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134952"}
Table 1: Similarities Between Control Plane and MCP
• Tool Registration: Both systems require structured metadata or schema registration for external tools. MCP uses JSON schema; the Control Plane maintains a Tool Registry.
• Input Validation: Both validate tool inputs using schema constraints. MCP enforces JSON Schema at runtime, while the Control Plane uses a dedicated Input Validator module.
• Invocation Routing: MCP tools are invoked based on prompt-matched schemas. The Control Plane routes requests via a Routing Handler using similarity search or rule-based matching.
• Structured Interfaces: Both emphasize deterministic tool behavior through formalized I/O specifications to reduce prompt ambiguity.

4 Conclusion and Future Directions
The advent of generative models, and their role in the development of Agentic AI systems, has led to the rise of many frameworks. One of the remaining challenges is orchestrating tools in a simple, safe, and manageable manner in production: the lack of composable, minimal design patterns is limiting the scalability of agentic AI. The proposed pattern was developed with these challenges, and model-agnosticism, in mind. The proposed "Control Plane as a Tool" pattern allows developers to encapsulate routing logic and enforce governance across environments. While this work addresses many of the current challenges in the development of Agentic AI systems, the pattern has the potential to be extended with many more features. Future work in this area will realize the development of a framework-agnostic system and evaluate performance, safety, and extensibility across larger multi-agent deployments.

Author Disclaimer. This work was independently conceived and executed by the author without financial support from any institution, company, or donor organization. It was not funded by the author's current employer or by any external grant, donation, or sponsorship. All opinions and technical claims are solely those of the author.

Acknowledgements: This work would not have been possible without the unwavering support and understanding of my wife, Dharani, and my kids, Shree and Kart, even when I had to work late nights.

References
[1] Aisera. Agentic ai: What it means for your business. https://aisera.com/blog/agentic-ai, 2024. Accessed: 2024-04-23.
[2] Anthropic. Introducing claude. https://www.anthropic.com/index/introducing-claude, 2023.
{"producer": "pikepdf 8.15.1", "creator": "arXiv GenPDF (tex2pdf:f38b2be)", "author": "Sivasathivel Kandasamy", "doi": "https://doi.org/10.48550/arXiv.2505.06817", "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "title": "Control Plane as a Tool: A Scalable Design Pattern for Agentic AI Systems", "trapped": "/False", "arxivid": "https://arxiv.org/abs/2505.06817v1", "source": "data\\raw\\control_plane_scalable_design_pattern_2505.06817.pdf", "total_pages": 9, "page": 6, "page_label": "7", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134953"}
Table 2: Key Differences Between Control Plane and MCP
• Architecture Type. Control Plane (this work): external modular orchestrator. MCP: embedded schema-based interface.
• Routing Strategy. Control Plane: rule-based and similarity-based routing via the Routing Handler. MCP: implicit function selection via schema matching in the prompt.
• Agent Scope. Control Plane: supports multiple agents and decoupled planning. MCP: coupled to a single Claude-based LLM instance.
• Governance and Tracking. Control Plane: includes a Usage Tracker, policy enforcement, and failure handling. MCP: no built-in governance or logging.
• Learning and Feedback. Control Plane: optional Feedback Integrator for experience-based routing. MCP: no feedback loop or adaptive learning.
• Tool Chaining Support. Control Plane: enables explicit chaining and dependency-based tool routing. MCP: no chaining logic; tools treated atomically.
• Tool Fallback and Safety. Control Plane: Failure Handler supports default responses and recovery. MCP: no structured fallback mechanism.
• Extensibility. Control Plane: LLM-agnostic, framework-agnostic. MCP: Claude-specific runtime binding.

[3] Anthropic. Tool use with claude 2 and the mcp. https://docs.anthropic.com/claude/docs/tool-use, 2023.
[4] Rishi Bommasani et al. Opportunities and risks of foundation models. Communications of the ACM, 66(1):67-77, 2022.
[5] Sebastien Bubeck et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
[6] CrewAI. Crewai: Build agentic workflows with multiple roles. https://docs.crewai.com, 2024.
[7] DeepLearning.AI. Agentic design patterns part 3: Tool use. https://www.deeplearning.ai/the-batch/agentic-design-patterns-part-3-tool-use/, 2024. Accessed: 2024-04-23.
[8] Google DeepMind. Gemini 1.5 technical report. https://deepmind.google/technologies/gemini/, 2024.
[9] IBM. What is agentic ai? https://www.ibm.com/think/topics/agentic-ai, 2024. Accessed: 2024-04-23.
[10] LangChain. Langchain documentation. https://docs.langchain.com, 2023.
[11] LangGraph. Langgraph documentation. https://www.langgraph.dev, 2024.
[12] OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
[13] Guang Qin et al. Toolllm: Facilitating llms to master 16000+ real tools. arXiv preprint arXiv:2310.06832, 2023.
[14] Michael Wooldridge and Nicholas R. Jennings. Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2):115-152, 1995.
{"producer": "pikepdf 8.15.1", "creator": "arXiv GenPDF (tex2pdf:f38b2be)", "author": "Sivasathivel Kandasamy", "doi": "https://doi.org/10.48550/arXiv.2505.06817", "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "title": "Control Plane as a Tool: A Scalable Design Pattern for Agentic AI Systems", "trapped": "/False", "arxivid": "https://arxiv.org/abs/2505.06817v1", "source": "data\\raw\\control_plane_scalable_design_pattern_2505.06817.pdf", "total_pages": 9, "page": 7, "page_label": "8", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134954"}
[15] Eric Wu et al. Autogen: Enabling next-generation multi-agent llm applications. arXiv preprint arXiv:2309.12307, 2023.
[16] Muhan Xu et al. Hierarchical planning with llms: A modular framework. arXiv preprint arXiv:2311.09541, 2023.
[17] Shunyu Yao et al. React: Synergizing reasoning and acting in language models. In ICLR, 2023.
[18] Hang Yin et al. Agentic large language models: A survey. arXiv preprint arXiv:2503.23037, 2024.
{"producer": "pikepdf 8.15.1", "creator": "arXiv GenPDF (tex2pdf:f38b2be)", "author": "Sivasathivel Kandasamy", "doi": "https://doi.org/10.48550/arXiv.2505.06817", "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "title": "Control Plane as a Tool: A Scalable Design Pattern for Agentic AI Systems", "trapped": "/False", "arxivid": "https://arxiv.org/abs/2505.06817v1", "source": "data\\raw\\control_plane_scalable_design_pattern_2505.06817.pdf", "total_pages": 9, "page": 8, "page_label": "9", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134955"}
RedTeamLLM: an Agentic AI framework for offensive security
Brian Challita 1, Pierre Parrend 1,2
1 Laboratoire de Recherche de l'EPITA, 14-16 Rue Voltaire, 94270 Le Kremlin-Bicêtre, France
2 ICube, UMR 7357, Université de Strasbourg, CNRS, 300 bd Sébastien Brant - CS 10413 - F-67412 Illkirch Cedex
{brian.challita, pierre.parrend}@epita.fr

Abstract
From automated intrusion testing to the discovery of zero-day attacks before software launch, agentic AI holds great promise in security engineering. This strong capability is bound up with a similar threat: the security and research community must build up its models before the approach is leveraged by malicious actors for cybercrime. We therefore propose and evaluate RedTeamLLM, an integrated architecture with a comprehensive security model for the automation of pentest tasks. RedTeamLLM follows three key steps, summarizing, reasoning, and acting, which embed its operational capacity. This novel framework addresses four open challenges: plan correction, memory management, context window constraints, and generality vs. specialization. Evaluation is performed through the automated resolution of a range of entry-level, but not trivial, CTF challenges. The contribution of the reasoning capability of our agentic AI framework is specifically evaluated.
Keywords: Cyberdefense; AI for cybersecurity; generative AI; Agentic AI; offensive security

1 Introduction
The recent strengthening of Agentic AI approaches [Hughes et al., 2025] poses major challenges in the domains of cyberwarfare and geopolitics [Oesch et al., 2025]. LLMs are already commonly used in cyber operations for augmenting human capabilities and automating common tasks [Yao et al., 2024; Chowdhury et al., 2024]. They already pose significant ethical and societal challenges [Malatji and Tolah, 2024], and a great threat of proliferation of cyberdefence and -attack capabilities, which were so far only available to nation-state-level actors. Whereas their currently recognized capabilities are still bound to the rapid analysis of malicious code or rapid decision-taking in alert triage, and they pose significant trust issues [Sun et al., 2024], their expressivity and knowledge base are rapidly ramping up. In this context, Agentic AI, i.e., autonomous AI systems capable of performing a set of complex tasks that span long periods of time without human supervision [Acharya et al., 2025], is opening a brand new type of cyberthreat. These systems follow two complementary strategies, goal orientation and reinforcement learning, which have the capability to dramatically accelerate the execution of highly technical operations, such as cybersecurity actions, while supporting a diversification of supported tasks.
In the defense landscape, cyberwarfare takes a singular position, targeting espionage, disruption, and degradation of the adversary's information and operational systems. More than in traditional arms, skill is a strong limiting factor, especially since targeting critical defense systems heavily relies on the exploitation of rare, unknown vulnerabilities, which are more often than not 0-day threats. Actually, whereas financial criminality aims at money extortion and thus targets a broad range of potential victims to exploit the weakest ones, defense operations aim at entering and disrupting highly exposed, and highly protected, technical environments, where known vulnerabilities are closed very quickly.
In this context, operational capability has so far relied on talented analysts capable of discovering novel vulnerabilities. This high-skill, high-means game could face a brutal end with the advent of tools capable of discovering new exploitable flaws at the heart of the software, thus enabling smaller actors to mount highly asymmetric threats capable of disrupting critical infrastructures or launching large-scale disinformation campaigns. Agentic AI has the capability to provide such a tool, and LLMs themselves, in their stand-alone versions, have already proved capable of detecting these famous 0-day vulnerabilities: Microsoft has published, with the help of its Copilot tools, no fewer than 20 vulnerabilities in the Grub2, U-Boot, and barebox bootloaders since late 2024 (https://www.microsoft.com/en-us/security/blog/2025/03/31/analyzing-open-source-bootloaders-finding-vulnerabilities-faster-with-ai/). This is the public side of the medal, from a company that seeks to advertise its software development environment and create some noise about vulnerabilities in competing operating systems. No doubt malicious actors have not waited to turn the same tools to their advantage and add novel capabilities to their arsenal, beyond the malicious generative tools already analyzed by the community: WormGPT (https://flowgpt.com/p/wormgpt-6), DarkBERT [Jin et al., 2023], FraudGPT [Falade, 2023]. In the domain of autonomous offensive cybersecurity operations, the probability
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-13T00:52:17+00:00", "moddate": "2025-05-13T00:52:17+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "templateversion": "IJCAI.2025.0", "trapped": "/False", "source": "data\\raw\\redteamllm_agentic_ai_framework_2505.06913.pdf", "total_pages": 9, "page": 0, "page_label": "1", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134955"}
and likely impact of proliferation of agentic AI frameworks are high. Understanding their mechanisms, both to leverage these tools for defensive operations and to anticipate their malicious exploitation, is therefore an urgent requirement for the community.
We therefore propose the RedTeamLLM model to the community as a proof of concept of the offensive capabilities of Agentic AI. The model encompasses automation, genericity, and memory support. It also defines the principles of dynamic plan correction and context window constraint mitigation, as well as a strict security model to avoid abuse of the system. The evaluations demonstrate the strong competitiveness of the model with respect to state-of-the-art competitors, as well as the necessary contribution of its summarizer, reasoning, and act components. In particular, RedTeamLLM exhibits a significant improvement in automation capability over PenTestGPT [Deng et al., 2024], which still shows restricted capacity.
The remainder of this paper is organised as follows: Section 2 presents the state of the art. Section 3 defines the requirements, and Section 4 presents the RedTeamLLM model for agentic-AI-based offensive cybersecurity operations. Section 5 presents the implementation and Section 6 the evaluation of the model. Section 7 concludes this work.

2 State of the Art
The advent, in the form of LLMs, of computing processes capable of generating structured output beyond existing text is a key driver for a renewed development of agent-based models, with so-called 'agentic AI' models [Shavit et al., 2023], which are able both to devise technical processes and to produce technically correct pieces of code. These novel kinds of agents support multiple, complex, and dynamic goals and can operate in dynamic environments while taking a rich context into account [Acharya et al., 2025]. They thus open novel challenges and opportunities, both as generic problem-solving agents and for highly complex and technical environments like cybersecurity operations.

2.1 Research challenges for Agentic AI
The four main challenges in Agentic AI are: analysis, reliability, human factor, and production. These challenges can be mapped to the taxonomy of prompt engineering techniques by [Sahoo et al., 2024]: analysis covers reasoning and logic, knowledge-based reasoning and generation, and meta-cognition and self-reflection; reliability covers reducing hallucination, fine-tuning and optimisation, improving consistency and coherence, and efficiency; human factor covers user interaction, understanding user intent, and managing emotion and tone; production covers code generation and execution.
The first issue for supporting reasoning and logic is the capability to address complex tasks, to decompose them, and to handle each individual step. The first such model, chain-of-thought (CoT), is capable of structured reasoning through step-by-step processing and proves competitive on math and common-sense reasoning benchmarks [Wei et al., 2022]. Automatic chain-of-thought (Auto-CoT) automates the generation of CoTs by generating alternative questions and multiple alternative reasonings for each, in order to consolidate a final set of demonstrations [Zhang et al., 2022]. Tree-of-thoughts (ToT) handles a tree structure of intermediate analysis steps and evaluates the progress towards the solution [Yao et al., 2023a] through breadth-first or depth-first tree search strategies. This approach makes it possible to revert to previous nodes when an intermediate analysis step is erroneous.
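The tree search just described can be made concrete in a few lines; propose and score below are toy stand-ins for the LLM calls that generate candidate thoughts and rate partial solutions, not an interface from the cited work. A minimal sketch:

```python
# Depth-first Tree-of-Thoughts sketch with backtracking. propose() and score()
# stand in for LLM calls; the toy problem below is purely illustrative.
def tot_dfs(state, propose, score, depth=0, max_depth=4, threshold=0.9):
    if score(state) >= threshold:
        return state                      # promising enough: accept this branch
    if depth == max_depth:
        return None                       # dead end: revert to the parent node
    for thought in sorted(propose(state), key=score, reverse=True):
        found = tot_dfs(thought, propose, score, depth + 1, max_depth, threshold)
        if found is not None:
            return found                  # first successful branch wins
    return None                           # all children failed: backtrack

# Toy problem: grow a string toward the target "plan" abcd.
target = "abcd"
propose = lambda s: [s + c for c in "abcd"]                        # candidates
score = lambda s: sum(a == b for a, b in zip(s, target)) / len(target)
print(tot_dfs("", propose, score))        # -> 'abcd'
```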
Self-consistency is an approach for evaluating reasoning chains, supporting more complex problems through the sampling and comparative evaluation of alternative solutions [Wang et al., 2022].
Text generated by an LLM is intrinsically a statistical approximation of a possible answer: as such, it requires 1) a rigorous process to reduce the approximation error below a usability threshold, and 2) systematic control by a human operator. The usability threshold can be expressed in terms of veracity, for instance in the domain of news (https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php). For code generation, it matches code that is both correct and effective, i.e., that compiles and runs, and that performs the expected operation. Usable technical processes, as in red team operations, are defined by reasoning and logic capability. For reducing hallucination, the main techniques are Retrieval-Augmented Generation (RAG), which enriches the prompt context with external, up-to-date knowledge [Lewis et al., 2020], and ReAct prompting, which supports concurrent actions and updatable action plans with reasoning traces [Yao et al., 2023b].
One key issue for red teaming tasks is the capability to produce fine-tuned, system-specific code for highly precise tasks. Whereas the capability of LLMs to generate basic code in a broad scope of languages is well recognized [Li et al., 2024], the support of complex algorithms and target-dependent scripts is still in its infancy. In particular, the articulation between textual, imprecise, and informal reasoning and lines of code must bridge the conceptual gap between the textual analysis and the executable levels. Structured Chain-of-Thought [Li et al., 2025] closes this gap by enforcing a strong control-loop structure (if-then; while; for) at the textual level, which can then be implemented through focused code generation. Programmatically handling numeric and symbolic reasoning, as well as equation resolution, requires a binding with external tools, as specified by the Program-of-Thought (PoT) [Bi et al., 2024] or Chain-of-Code (CoC) [Li et al., 2023] prompting models. However, these features are not required in the case of red teaming tasks.

2.2 Cognitive Architectures
Three main architectures implement the Agentic AI approach: ReAct (Reason and Act), ADaPT (As-needed Decomposition and Planning), and P&E (Plan and Execute).
ReAct [Yao et al., 2023b] first reasons about the analysis strategy, then rolls out this strategy. It performs multiple rounds of reasoning and acting, executing one action at each round and then collecting observations. This enables a strong reduction of the error margin. As shown in Figure 1, the ReAct input is built from an explicit objective and an optional context. Reasoning then summarizes the goal and context and plans the next action, each through a call to an LLM agent. The selected action is then executed, again based on an LLM call. If the analysis is not completed, the pipeline returns to the goal definition step, with a given subgoal. If the goal is achieved, the pipeline terminates.
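A minimal sketch of this control loop is shown below, assuming a hypothetical llm completion callable and a tools name-to-callable map; both are illustrative, not the paper's implementation, and the prompts are drastically simplified.

```python
# Minimal sketch of the ReAct loop in Figure 1. `llm` is a hypothetical
# text-completion function and `tools` maps tool names to callables.
def react(objective: str, llm, tools, max_rounds: int = 10) -> str:
    context = f"Objective: {objective}\n"
    for _ in range(max_rounds):
        # Reasoning: one LLM call summarizes goal + context and plans a step.
        thought = llm(context + "Thought: what should be done next?")
        # Acting: a second LLM call selects an action as "tool: input",
        # or "finish: answer" when the goal is achieved.
        action = llm(context + f"Thought: {thought}\nAction (tool: input)?")
        name, _, arg = (s.strip() for s in action.partition(":"))
        if name == "finish":
            return arg                                   # goal achieved
        tool = tools.get(name)
        observation = tool(arg) if tool else f"unknown tool {name!r}"
        # The observation is appended verbatim, so the prompt grows every
        # round: this is exactly the context-window limit discussed next.
        context += f"Action: {action}\nObservation: {observation}\n"
    return "stopped: max rounds reached"
```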
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-13T00:52:17+00:00", "moddate": "2025-05-13T00:52:17+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "templateversion": "IJCAI.2025.0", "trapped": "/False", "source": "data\\raw\\redteamllm_agentic_ai_framework_2505.06913.pdf", "total_pages": 9, "page": 1, "page_label": "2", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134956"}
The main limit of this architecture, whether it is used with prompting or with complex pipelines, is the absence of memory, which requires each prompt to embed all context and knowledge about previous analysis steps. Since the context windows of current LLMs are strongly limited, information starts being ignored as the context and history exceed the context window's limit, which can lead to reduced performance and inaccurate outputs.

Figure 1: Process diagram of ReAct

ADaPT [Prasad et al., 2023] takes a greedy approach to decomposition: it keeps decomposing the task until it reaches subtasks that can be executed, through recursive decomposition, which avoids saturating agent capability. The decomposition stops either when a task can be executed directly or when a maximum depth is reached. Unlike ReAct and P&E, ADaPT cannot be a pure prompting method, as it is based on recursion. ADaPT completely solves the problem of context window size restriction by decomposing as much as needed. Executions of leaves are then carried out independently. However, many complications come along the way: plan correction (if a task fails completely, how can we correct the rest of the plan?) and new discoveries (the agent might stumble upon information that leads to a complete change of plan), in particular, are not supported.
P&E [Sun et al., 2023] aims to decompose a task into multiple subtasks that are executed independently from one another. This architecture offers first solutions to ReAct's weak points, by decomposing a task and isolating each subtask's execution. Prompt length is thus minimized, which slows down the consumption of the context window capacity, and task execution becomes more efficient. However, one key issue remains (the context window limit is eventually reached) and a new one is introduced: error handling, since, on a subtask's failure, the whole execution fails.

2.3 Agentic AI and cybersecurity
Recent offensive-security agents all converge on a narrow design spectrum: a frontier LLM in a ReAct-style loop that plans, executes a single tool call, observes, then repeats [Heckel and Weller, 2024]; yet none of them store or revise a global plan the way ADaPT or other deliberative-memory systems do. AutoAttacker couples ReAct with an episodic "Experience Manager," but that memory is consulted only to validate the current action rather than to update or backtrack the plan itself [Xu et al., 2024]. LLM-Directed Agent preserves the classic four-stage ReAct chain (NLTG, CFG, CG, NLTP) and likewise discards alternative branches once the CFG selects one [Laney, 2024]. One-Day Vulnerabilities' Exploit [Fang et al., 2024a] and Hack-Websites [Fang et al., 2024b] expose different toolsets to the same ReAct controller, and performance collapses as soon as GPT-4 is replaced by weaker models. CyberSecEval 3 uses an even leaner single-prompt ReAct wrapper to probe Llama-3 and contemporaries, finding that all models stall long before complex exploitation [Wan et al., 2024]. HackSynth strips the pattern down to just a Planner and a Summariser, still a think-then-act loop, and shows that temperature and context-window size, not architectural novelty, dominate success rates [Muzsai et al., 2024].
The main departures from this ReAct pattern are PenTestAgent, which hard-codes a pentesting workflow (Reconnaissance → Search → Planning → Execution) without agentic recursion [Shen et al., 2024], and PentestGPT, whose Plan-and-Execute modules shuffle intermediate results between Reasoning, Generation, and Parsing stages but never revisit earlier strategies once execution starts [Deng et al., 2024]. Although defensive models exhibit promising properties [Ismail et al., 2025], the exploitation of Agentic AI for malicious operations is a key concern for the community [Malatji and Tolah, 2024]. Across current systems, memory is used only as a scratch-pad for the latest observations; none implement hierarchical plan refinement, long-horizon memory, or roll-back of faulty plans.

3 Requirements

In this section we make explicit the specific challenges of agentic AI offensive cybersecurity operations. We address the context window's limit, continuous improvement, genericity, and automation. One major issue of LLM agent-based systems is their limited context window. Complex tasks usually require many iterations between the agent and a changing environment, especially using ReAct, so tracking what has happened is essential for high-quality results. A common way to address this challenge is recursive planning [Prasad et al., 2023], in which a task is broken down into many subtasks that are executed individually; each subtask then passes the key points of its outcome to the next ones. A difficulty arises when a subtask fails, potentially blocking the subtasks that follow. To prevent this, a plan-correction mechanism [Wang et al., 2024] is applied: whenever a subtask fails, the overall plan is adjusted so execution can proceed smoothly. These two techniques are crucial for building a high-performance agent, but further refinements are still possible. Repeating the same mistakes on every run wastes time, money, and computation; introducing a memory manager during task planning lets the agent avoid exploratory paths that have already failed. Moreover, genericity is essential: allowing the agent full freedom to choose its own tools and techniques fosters creativity and broadens its capabilities beyond a fixed toolset. In our case, the agent has unrestricted execution privileges through root access to a terminal. Finally, a key consideration is automation; refining an agent system is important but
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-13T00:52:17+00:00", "moddate": "2025-05-13T00:52:17+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "templateversion": "IJCAI.2025.0", "trapped": "/False", "source": "data\\raw\\redteamllm_agentic_ai_framework_2505.06913.pdf", "total_pages": 9, "page": 2, "page_label": "3", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134957"}
not useful unless the whole process is automated, requiring no human interaction during the run. Integrating a tool call providing interactive terminal access is therefore a basic requirement in this context. The consolidated requirements for our penetration-testing agent are thus:

1. Dynamic Plan Correction: handling subtask or action failures without halting the entire workflow [Wang et al., 2025].
2. Memory Management: managing large amounts of contextual data in long-running tasks, which enables continuous self-improvement.
3. Context Window Constraint Mitigation: preventing critical information loss due to an LLM's limited prompt size [Yao et al., 2023b].
4. Generality vs. Specialization: balancing the need for specialized pentesting tools with broader adaptability.
5. Automation: automating the interaction of the agent with its designated environment, in our case a terminal.

4 RedTeamLLM

In this section, we propose a novel architecture, its supported features, and the related memory management mechanism for an offensive cybersecurity agentic model. Given the high capability and autonomy of the RedTeamLLM model, a robust security model is also required.

4.1 The Architecture

The architecture of RedTeamLLM is composed of seven components: Launcher, RedTeamAgent, Memory Manager, ADaPT Enhanced, Plan Corrector, ReAct, and Planner. On a run, the Launcher retrieves the input task and gives it to the RedTeamAgent while acting as the user interface (showing the number of running tasks, memory accesses, and failed and successful tasks, and allowing intervention in a task's operation, e.g., stopping it or modifying its plan). Upon receiving the task, the RedTeamAgent has two objectives: pass it to ADaPT Enhanced and await a tree structure representing the full agent execution, then save that structure to the Memory Manager. The Memory Manager, the storage area for operational knowledge, embeds and stores each node's description from a task tree in a database, thus providing full access to previous task structures and dependencies. ADaPT Enhanced then takes the task, passes it to the Planner (which returns a tree of subtasks), and traverses the tree to execute leaves and pass results to siblings. The Plan Corrector can then adjust the plan and resume execution on any failure. All leaf executions are performed by the ReAct component, which carries out multiple rounds of reasoning, execution, and observation with terminal access.

Figure 2: Software architecture of the RedTeamLLM model

4.2 Features

To support autonomous offensive operations, the proposed model must address many challenges and effectively meet the essential requirements. The principal features are:

• To address the context window's limit, the model needs to decompose a task recursively, as much as needed. This is accomplished by the ADaPT component (a minimal sketch follows this list).
• Subtasks come with dependencies that need monitoring to avoid fatal execution failure on error. The Plan Corrector has the ability to modify a task's plan according to the latest outcomes.
• To support continuous improvement of the model's capabilities, the Memory Manager improves planning over time by storing past executions in a tree-like structure.
• Finally, the model needs to be generic and unrestricted, to cover a wider range of tasks without being limited to a fixed set of tools. This is provided by ReAct with full terminal access, which allows full automation, with full control and autonomy over the task being executed.
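The sketch below illustrates how recursive, as-needed decomposition can be combined with plan correction, in the spirit of the ADaPT Enhanced and Plan Corrector components. It is a minimal illustration under assumed interfaces (llm_plan, execute_leaf, correct_plan), not the RedTeamLLM implementation.

```python
# Illustrative recursive decomposition with plan correction, in the spirit
# of ADaPT Enhanced and the Plan Corrector. Interfaces (llm_plan,
# execute_leaf, correct_plan) are assumptions, not the paper's code.
from dataclasses import dataclass, field

@dataclass
class TaskNode:
    description: str
    subtasks: list = field(default_factory=list)
    result: str = ""
    success: bool = False

def solve(node, llm_plan, execute_leaf, correct_plan, depth=0, max_depth=4):
    # Try direct execution first; decompose only when needed.
    node.result, node.success = execute_leaf(node.description)
    if node.success or depth >= max_depth:
        return node
    # Ask the planner for subtasks (a list of descriptions).
    node.subtasks = [TaskNode(d) for d in llm_plan(node.description)]
    i = 0
    while i < len(node.subtasks):
        sub = node.subtasks[i]
        solve(sub, llm_plan, execute_leaf, correct_plan, depth + 1, max_depth)
        if not sub.success:
            # Plan correction: revise the remaining siblings instead of
            # aborting the whole execution.
            remaining = [s.description for s in node.subtasks[i + 1:]]
            revised = correct_plan(node.description, sub.description, remaining)
            node.subtasks[i + 1:] = [TaskNode(d) for d in revised]
        i += 1
    node.success = all(s.success for s in node.subtasks)
    return node
```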
4.3 Memory management

Memory management is an essential part of the model. In all competing models, memory is only used at the execution stage, to retrieve already-executed commands for a similar task. In our case, memory is used at a higher level, where the agent decides how to create the execution plan. At the end of each execution, the traces of the whole process are stored in the form of a tree, saved using the embedding of each task's description. Thus, at every decomposition, the planner queries the saved nodes and gains access to their success or failure reasons, sub-tasks, and detailed executions. This technique helps the agent improve over time, especially when re-executing a task, where it eventually narrows all the possibilities down to the right path. In this way, the RedTeamLLM model improves over time and has more chances to complete a task over multiple rounds of execution.

Figure 3: Database schema for the memory management model
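A minimal sketch of this mechanism is given below: node descriptions are embedded, stored, and queried by similarity at planning time. The embed() placeholder and the record layout are assumptions for illustration; a real deployment would use an embedding model and a database.

```python
# Sketch of the Memory Manager: task-tree nodes are stored with an
# embedding of their description and queried at planning time.
import math

def embed(text: str) -> list:
    # Placeholder embedding; a real system would call an embedding model.
    return [text.count(c) / max(len(text), 1) for c in "abcdefghij"]

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class MemoryManager:
    def __init__(self):
        self.nodes = []  # (embedding, record) pairs

    def store(self, description, success, reason, subtasks):
        record = {"description": description, "success": success,
                  "reason": reason, "subtasks": subtasks}
        self.nodes.append((embed(description), record))

    def query(self, description, k=3):
        # Return the k most similar past nodes so the planner can avoid
        # exploratory paths that have already failed.
        q = embed(description)
        ranked = sorted(self.nodes, key=lambda n: cosine(q, n[0]), reverse=True)
        return [rec for _, rec in ranked[:k]]
```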
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-13T00:52:17+00:00", "moddate": "2025-05-13T00:52:17+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "templateversion": "IJCAI.2025.0", "trapped": "/False", "source": "data\\raw\\redteamllm_agentic_ai_framework_2505.06913.pdf", "total_pages": 9, "page": 3, "page_label": "4", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134958"}
Figure 3: Database schema for Memory management Model agentic AI models: attack surface expansion, data manipula- tion and prompt injection, API usage and sensitive data ex- posure [Khan et al., 2024]. Its five key components, shown in Figure 4 are: 1) a dedicated authentication, authorization and session management module, 2) network and system iso- lation of the runtime environment, 3) systematic command validation by the user before any offensive action, 4) logging in append-only mode for a posteriori analysis and 5) a kill switch to shut the platform down. The threats related to con- tainment and inversion of dependency are shown in table 5. Isolation prevents unauthorized access to network entities or configurations, and to system capabilities. Command valida- tion by the user ensures the alignment between the ongoing security task and performed operations, and prevent acciden- tal calls to unwanted or dangerous tools upon proposal by the agent. Following and, when necessary, reconstructing the ex- ecution track is supported by the logging facility. To enhance the reaction capability and to pave the way to greater auton- omy of the framework, a kill switch is set up to immediately halt any agent over which the supervision, or the control over actual operations, would have been weakened or lost. Figure 4: Security layers wrapping the LLM agent The LLM itself is used in its default configuration, and with a benevolent user that have not intend to abuse it. Consequently, typical threats like prompt injection attacks [Labunets et al., ] or app store abuses [Hou et al., 2024] are not relevant to RefTeam LLM. Figure 5: Security challenges and how RedTeamLLM address them 5 Implementation The proof of concept for the RedTeamLLM model, that we evaluate in following section, entails the ReAct component for task execution. The current state of the implementation also covers ADaPT for recursive planning, Memory man- agement for continuous improvement, and Plan correction to support operation continuity after task failure. However, these are less mature, and not evaluated here. RedTeamLLM and related tests are avalaible for the community4. The evaluation is tested on a docker container over a Thinkpad e14 gen 5 with 16GB of RAM ddr4 /I513420h pro- cessor, and uses OpenAI’s API with GPT4-o. 5.1 Three-Step Pipeline The ReadTeamLLM implementation uses a three-step pipeline, each step handled by a separate LLM session: 1. Reasoning Before executing any action, the agent rea- sons about the next steps. Reasoning occurs in an isolated LLM session which elicits an explicit output of its process, detailed steps, and a plan. When the user provides the task definition to the model, it is forwarded to the reasoning com- ponent; its output is then passed to the Act component. After each tool call, the executed command and its output are fed back to the reasoning component to generate further analysis. 2. Act The output of Reasoning is treated as an assistant message by the Act session, which enforces adherence to the plan and reduces the model’s inclination to interrupt execu- tion with additional reasoning or safety checks. This setup al- lows the LLM to focus solely on executing the recommended action. For tool execution, the LLM session has full access to a quasi-interactive, root-privileged Linux terminal. 
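The following sketch illustrates how two of these layers, systematic command validation and the kill switch, can wrap tool execution. It is a hedged illustration with assumed names, not the RedTeamLLM code.

```python
# Hedged sketch of two security layers: user validation of each command
# before execution, and a kill switch. Names are illustrative assumptions.
import subprocess

class KillSwitchTriggered(Exception):
    """Raised to halt the agent immediately."""

def guarded_execute(command: str, audit_log_path: str = "audit.log") -> str:
    # 3) Systematic command validation by the user before any offensive action.
    answer = input(f"Agent proposes: {command!r}. Execute? [y/N/kill] ").strip().lower()
    if answer == "kill":
        # 5) Kill switch: shut the whole run down.
        raise KillSwitchTriggered("operator halted the agent")
    if answer != "y":
        return "REJECTED: command not validated by the user"
    # 4) Append-only logging for a posteriori analysis.
    with open(audit_log_path, "a") as log:
        log.write(command + "\n")
    result = subprocess.run(command, shell=True, capture_output=True,
                            text=True, timeout=300)
    return result.stdout + result.stderr
```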
5 Implementation

The proof of concept for the RedTeamLLM model, which we evaluate in the following section, entails the ReAct component for task execution. The current state of the implementation also covers ADaPT for recursive planning, memory management for continuous improvement, and plan correction to support operation continuity after task failure. However, these are less mature and are not evaluated here. RedTeamLLM and the related tests are available to the community at https://github.com/lre-security-systems-team/redteamllm. The evaluation runs in a Docker container on a ThinkPad E14 Gen 5 with 16 GB of DDR4 RAM and an Intel Core i5-13420H processor, and uses OpenAI's API with GPT-4o.

5.1 Three-Step Pipeline

The RedTeamLLM implementation uses a three-step pipeline, each step handled by a separate LLM session:

1. Reasoning. Before executing any action, the agent reasons about the next steps. Reasoning occurs in an isolated LLM session which elicits an explicit output of its process, detailed steps, and a plan. When the user provides the task definition to the model, it is forwarded to the reasoning component; its output is then passed to the Act component. After each tool call, the executed command and its output are fed back to the reasoning component to generate further analysis.

2. Act. The output of Reasoning is treated as an assistant message by the Act session, which enforces adherence to the plan and reduces the model's inclination to interrupt execution with additional reasoning or safety checks. This setup allows the LLM to focus solely on executing the recommended action. For tool execution, the LLM session has full access to a quasi-interactive, root-privileged Linux terminal. A current challenge is determining when a process requires input; we address this using strace, but it is not perfectly precise because some processes read from multiple file descriptors, not only stdin. After each tool execution, if the output is too long, it is passed to a summarizer to avoid exceeding the context window.

3. Summarizer. The summarizer is a stateless LLM session: for each request, it summarizes the given command's output. Because this session does not maintain context about the agent's overall goal, it sometimes omits important information. We plan to address this limitation in future work.
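A compact sketch of this three-session loop is given below. The chat() callable stands in for an OpenAI-compatible API call, and the message wiring and length threshold are illustrative assumptions rather than the published implementation.

```python
# Sketch of the three-session pipeline (Reasoning, Act, Summarizer).
MAX_OUTPUT_CHARS = 4000  # assumed threshold for invoking the summarizer

def pipeline(task: str, chat, run_terminal, max_rounds: int = 15):
    reasoner, actor = [], []  # separate message histories per session
    reasoner.append({"role": "user", "content": task})
    for _ in range(max_rounds):
        thought = chat(reasoner)                     # 1. Reasoning session
        reasoner.append({"role": "assistant", "content": thought})
        # The reasoner's output is given to the Act session as an
        # assistant message, as described above.
        actor.append({"role": "assistant", "content": thought})
        command = chat(actor)                        # 2. Act session
        if not command.strip():                      # no further action
            break
        output = run_terminal(command)
        if len(output) > MAX_OUTPUT_CHARS:
            # 3. Stateless Summarizer session: one fresh request per output.
            output = chat([{"role": "user",
                            "content": "Summarize this command output:\n" + output}])
        feedback = f"Command: {command}\nOutput: {output}"
        reasoner.append({"role": "user", "content": feedback})
        actor.append({"role": "user", "content": feedback})
```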
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-13T00:52:17+00:00", "moddate": "2025-05-13T00:52:17+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "templateversion": "IJCAI.2025.0", "trapped": "/False", "source": "data\\raw\\redteamllm_agentic_ai_framework_2505.06913.pdf", "total_pages": 9, "page": 4, "page_label": "5", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134958"}
5.2 Sample Run

A sample run proceeds as follows:
1. A task is given to the agent (e.g., "Obtain root access to the machine with IP x.x.x.x").
2. The task is forwarded to the reasoning session as a user message.
3. The reasoner generates a result, which is provided as an assistant message to the acting session.
4. The act session recommends a tool call (e.g., nmap or sqlmap).
5. After execution, if the command output is lengthy, it is summarized and sent back to the reasoner as a user message.
6. The reasoner produces further thoughts, and the loop continues until the reasoner stops recommending actions.
7. At that point, the system prompts the user for input (e.g., "Continue" or a new task).

6 Evaluation

The evaluation is performed in three steps: a qualitative evaluation of RedTeamLLM's capability to autonomously perform offensive operations; a comparative study of the cognitive mechanisms involved in these operations; and an ablation study focused on evaluating the impact of the presence, or absence, of the reasoning capability.

6.1 Use cases

The choice of the benchmark to evaluate the RedTeamAgent is based on two factors: reproducibility and variability. We therefore selected five use cases (Sar, Cewlkid, Victim1, WestWild, CTF4) from the VulnHub repository; they cover a broad range of technical difficulties and various security techniques, are easily deployable, and support reproducible executions. The objective of this work is focused on creating a proof of concept for the RedTeamLLM model, with the evaluation of the cognitive operations (summarize, reason, act) and with a processing engine restricted to the ReAct component. The five selected use cases are embedded in virtual machines from the easy category. This selection also allows us to compare our results, since the TAPT benchmark [Isozaki et al., 2024] tested PentestGPT [Deng et al., 2024] on the same target VMs.

RedTeamLLM proves to be competitive on the target use cases, and surpasses PentestGPT on almost all the VMs when using GPT-4o. The decisive factors behind this performance are the following. First is reasoning: the difference without this step is considerable. Without reasoning, the agent gets stuck more often on the same thoughts and does not keep a stable execution plan; launched multiple times, it sometimes completely changes strategy. Dedicating a substantial number of tokens to strategy, output analysis, and reasoning helps the agent stay on track; without reasoning, the agent regularly stops what it is doing to ask for permission. Additionally, giving the agent complete control over a terminal, rather than a limited set of tools, supports its creativity: it can choose whatever path it wants to achieve its goal. Sometimes a specific version of a program is insufficient, so it installs another one; sometimes it launches scripts; sometimes it saves operational information in a file. Moreover, the fact that the agent directly executes the commands itself saves tokens for other purposes. Finally, automation is a key part of the agent, enabling longer and more complex operations without the need for manual supervision.
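The evaluation protocol described above (repeated runs per VM, scored against the write-up's step list, keeping the best run) can be expressed as a small harness. The run_agent() callable and the step-matching rule below are hypothetical, for illustration only.

```python
# Hypothetical sketch of the evaluation harness implied above.
def evaluate(vms: dict, run_agent, runs: int = 5) -> dict:
    """vms maps a VM name to the ordered write-up steps (recon, general
    technique, exploit, privilege escalation). Returns the best completion
    count per VM over all runs."""
    best = {}
    for name, writeup_steps in vms.items():
        completed = []
        for _ in range(runs):
            achieved = run_agent(name)  # steps the agent completed this run
            completed.append(sum(1 for s in writeup_steps if s in achieved))
        best[name] = max(completed)     # best execution over all runs
    return best
```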
6.2 Cognitive steps

The RedTeamLLM implementation evaluated in this work is built around the ReAct analysis component. It entails three LLM sessions, i.e., three interaction dialogs built from assistant and user messages: 1) the Summarizer, which summarizes command outputs; 2) the Reasoning component, which reasons over tasks and their outputs; and 3) the Act component, which executes the tasks. Figure 6 shows the total number of API calls for each component over the different use cases, after 10 tests on each VM. The Summarizer typically consumes between 9.5% (CTF4) and 15.9% (Cewlkid) of the API calls, with a low of 3.1% for the WestWild use case and a peak of 30.9% for the Victim1 use case. This peak enables a strong reduction of the required tool calls (see Fig. 7). The Reason and Act processes perform a very similar number of API calls.

Figure 6: Number of API calls in the Summarizer, Reason, and Act steps for the 5 use cases

RedTeamLLM outperforms PentestGPT in 3 use cases out of 5. With respect to the use-case write-ups, it completes 33% more steps than PentestGPT-Llama (4 successful CTF levels vs. 3) and 300% more than PentestGPT-4o (4 vs. 1) for the Victim1 use case; 33% more steps than PentestGPT-4o or PentestGPT-Llama (4 vs. 3) for the WestWild use case; and 75% more than PentestGPT-4o (3.5 vs. 2) and 250% more than PentestGPT-Llama (3.5 vs. 1) for CTF4. PentestGPT-Llama outperforms RedTeamLLM on Sar by 17% (7 vs. 6) and by 100% (4 vs. 2) on the Cewlkid use case, while PentestGPT-4o is similar to or weaker than RedTeamLLM on these two test cases.
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-13T00:52:17+00:00", "moddate": "2025-05-13T00:52:17+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "templateversion": "IJCAI.2025.0", "trapped": "/False", "source": "data\\raw\\redteamllm_agentic_ai_framework_2505.06913.pdf", "total_pages": 9, "page": 5, "page_label": "6", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134959"}
6.3 Reasoning: a strong optimization lever

The ablation study aims to evaluate the contribution of reasoning to the RedTeamLLM framework. Figure 7 shows the number of tool calls without and with reasoning for the 5 use cases. Every LLM session can make tool calls: a tool call is a specific API response from an LLM session that triggers the use of the provided tools (in our case, a terminal). For example, when the agent executes the terminal command ls, that is a tool-call response suggested by the LLM. The tool calls are summed over the 5 VMs, with 10 tests on each VM: 5 runs with reasoning and 5 without. Only the tool calls of the Act component are counted, because this is where execution is performed. The agent clearly consumes significantly fewer tool calls with reasoning in 4 out of 5 use cases: the drop in tool calls ranges from 37% (Sar) to 68% (Victim1). Only for CTF4 is the use of reasoning bound to a 291% increase in tool calls, which supports a slightly better achievement of the target operation (see Fig. 8). In short, the agent performs more analysis before acting, and thus chooses better strategies.

Figure 7: Number of tool calls without and with reasoning for the 5 use cases

The degree of completion is computed for each use case using the write-up, which lists the correct steps to complete the security challenge, as a reference. Figure 8 shows these results. The write-up bar shows the total number of steps required to achieve the CTF (the sum of reconnaissance, general technique, exploit, and privilege escalation steps). The Reason and No Reason bars show how many steps the agent completed for each use case with and without reasoning, respectively. The test process is similar to the previous evaluation: RedTeamLLM performs 5 tests with reasoning and 5 tests without reasoning for every use case. The maximum number of steps achieved over the 5 runs, i.e., the best execution, is considered. Reasoning improves the results in 4 cases out of 5; in two of these cases, the number of steps mastered rises from 1 to 4. A significant result of our experiments is that this improvement is coupled with a strong gain in efficiency with respect to tool calls (see Fig. 7). In one case (Cewlkid), reasoning does not improve the offensive capability. These results highlight the contribution of the reasoning step to security operations with the RedTeamLLM model.

Figure 8: CTF levels completed by the RedTeamLLM framework without and with reasoning for the 5 use cases
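The reported percentages follow from simple arithmetic on the raw tool-call counts; the helper below makes the computation explicit, using placeholder counts rather than the paper's data.

```python
# Small sketch of how the reported percentages can be derived from raw
# tool-call counts. The counts below are placeholders, not the paper's data.
def tool_call_change(without_reasoning: int, with_reasoning: int) -> float:
    """Negative values are drops, positive values are increases (in %)."""
    return 100.0 * (with_reasoning - without_reasoning) / without_reasoning

# e.g., a 68% drop would correspond to counts such as 100 -> 32:
assert round(tool_call_change(100, 32)) == -68
```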
7 Conclusions and Perspectives

Beyond generative AI and the now widespread Large Language Models (LLMs), Agentic AI opens wide novel opportunities, and threats, for global security, and for cybersecurity in particular. The objective of this work is to specify a reference model for agentic AI applied to offensive cyber operations, so that the community can better understand these tools and their capabilities, leverage them for securing their information systems, and control this novel attack vector. In this work, we define the key requirements for offensive agentic AI, propose a reference architecture model, and provide a proof of concept of this architecture focused on iterative task analysis and execution through the ReAct component. The evaluation demonstrates that, though partial, our implementation beats state-of-the-art competitors like PentestGPT in 60% of the use cases. It also validates our hypothesis that reasoning is a key feature for agentic AI, since it enables a strong reduction of the necessary tool calls in 80% of the use cases while improving offensive capabilities in 80% of the use cases. Interestingly, in 20% of the use cases it only supports a reduction of tool calls, and thus of process costs, and in 20% the gain in offensive capability requires a fourfold increase in tool calls. This shows that while RedTeamLLM improves both parameters in 60% of the cases, it is also effective at either dropping operation costs or increasing operational capabilities in more complex tasks.

The key insight of this study is that leveraging the dual capability of LLMs to analyze and decompose processes, on the one hand, and to generate code for well-defined tasks, on the other, brings a radical improvement to the automation and genericity of ReAct-based offensive cybersecurity frameworks. These first promising results pave the way to structuring the research effort in agentic AI for global security, in particular with respect to methodologies for evaluating the cost and automation capabilities of these models. The evaluation of recursive planning, memory management, and plan correction is also necessary to better understand the underlying mechanics and capabilities of agentic models.
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-13T00:52:17+00:00", "moddate": "2025-05-13T00:52:17+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "templateversion": "IJCAI.2025.0", "trapped": "/False", "source": "data\\raw\\redteamllm_agentic_ai_framework_2505.06913.pdf", "total_pages": 9, "page": 6, "page_label": "7", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134960"}
References

[Acharya et al., 2025] Deepak Bhaskar Acharya, Karthigeyan Kuppan, and B Divya. Agentic AI: Autonomous intelligence for complex goals – a comprehensive survey. IEEE Access, 2025.

[Bi et al., 2024] Zhen Bi, Ningyu Zhang, Yinuo Jiang, Shumin Deng, Guozhou Zheng, and Huajun Chen. When do program-of-thought works for reasoning? In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 17691–17699, 2024.

[Chowdhury et al., 2024] Arijit Ghosh Chowdhury, Md Mofijul Islam, Vaibhav Kumar, Faysal Hossain Shezan, Vinija Jain, and Aman Chadha. Breaking down the defenses: A comparative survey of attacks on large language models. arXiv preprint arXiv:2403.04786, 2024.

[Deng et al., 2024] Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, and Stefan Rass. PentestGPT: Evaluating and harnessing large language models for automated penetration testing. In 33rd USENIX Security Symposium (USENIX Security 24), pages 847–864, 2024.

[Falade, 2023] Polra Victor Falade. Decoding the threat landscape: ChatGPT, FraudGPT, and WormGPT in social engineering attacks. arXiv preprint arXiv:2310.05595, 2023.

[Fang et al., 2024a] Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang. LLM agents can autonomously exploit one-day vulnerabilities. arXiv preprint arXiv:2404.08144, 2024.

[Fang et al., 2024b] Richard Fang, Rohan Bindu, Akul Gupta, Qiusi Zhan, and Daniel Kang. LLM agents can autonomously hack websites. arXiv preprint arXiv:2402.06664, 2024.

[Heckel and Weller, 2024] Kade M Heckel and Adrian Weller. Countering autonomous cyber threats. arXiv preprint arXiv:2410.18312, 2024.

[Hou et al., 2024] Xinyi Hou, Yanjie Zhao, and Haoyu Wang. On the (in)security of LLM app stores. arXiv preprint arXiv:2407.08422, 2024.

[Hughes et al., 2025] Laurie Hughes, Yogesh K Dwivedi, Tegwen Malik, Mazen Shawosh, Mousa Ahmed Albashrawi, Il Jeon, Vincent Dutot, Mandanna Appanderanda, Tom Crick, Rahul De', et al. AI agents and agentic systems: A multi-expert analysis. Journal of Computer Information Systems, pages 1–29, 2025.

[Ismail et al., 2025] Ismail Ismail, Rahmat Kurnia, Zilmas Arjuna Brata, Ghitha Afina Nelistiani, Shinwook Heo, Hyeongon Kim, and Howon Kim. Toward robust security orchestration and automated response in security operations centers with a hyper-automation approach using agentic AI. 2025.

[Isozaki et al., 2024] Isamu Isozaki, Manil Shrestha, Rick Console, and Edward Kim. Towards automated penetration testing: Introducing LLM benchmark, analysis, and improvements. arXiv preprint arXiv:2410.17141, 2024.

[Jin et al., 2023] Youngjin Jin, Eugene Jang, Jian Cui, Jin-Woo Chung, Yongjae Lee, and Seungwon Shin. DarkBERT: A language model for the dark side of the internet. arXiv preprint arXiv:2305.08596, 2023.

[Khan et al., 2024] Raihan Khan, Sayak Sarkar, Sainik Kumar Mahata, and Edwin Jose. Security threats in agentic AI system. arXiv preprint arXiv:2410.14728, 2024.

[Labunets et al.] Andrey Labunets, Nishit V Pandya, Ashish Hooda, Xiaohan Fu, and Earlence Fernandes. Fun-tuning: Characterizing the vulnerability of proprietary LLMs to optimization-based prompt injection attacks via the fine-tuning interface.

[Laney, 2024] Samuel P Laney. LLM-Directed Agent Models in Cyberspace. PhD thesis, Massachusetts Institute of Technology, 2024.
[Lewis et al., 2020] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020.

[Li et al., 2023] Chengshu Li, Jacky Liang, Andy Zeng, Xinyun Chen, Karol Hausman, Dorsa Sadigh, Sergey Levine, Li Fei-Fei, Fei Xia, and Brian Ichter. Chain of code: Reasoning with a language model-augmented code emulator. arXiv preprint arXiv:2312.04474, 2023.

[Li et al., 2024] Jia Li, Ge Li, Xuanming Zhang, Yunfei Zhao, Yihong Dong, Zhi Jin, Binhua Li, Fei Huang, and Yongbin Li. EvoCodeBench: An evolving code generation benchmark with domain-specific evaluations. Advances in Neural Information Processing Systems, 37:57619–57641, 2024.

[Li et al., 2025] Jia Li, Ge Li, Yongmin Li, and Zhi Jin. Structured chain-of-thought prompting for code generation. ACM Transactions on Software Engineering and Methodology, 34(2):1–23, 2025.

[Malatji and Tolah, 2024] Masike Malatji and Alaa Tolah. Artificial intelligence (AI) cybersecurity dimensions: a comprehensive framework for understanding adversarial and offensive AI. AI and Ethics, pages 1–28, 2024.

[Muzsai et al., 2024] Lajos Muzsai, David Imolai, and András Lukács. HackSynth: LLM agent and evaluation framework for autonomous penetration testing. arXiv preprint arXiv:2412.01778, 2024.

[Oesch et al., 2025] Sean Oesch, Jack Hutchins, Phillipe Austria, and Amul Chaulagain. Agentic AI and the cyber arms race. Computer, 58(5):82–85, 2025.

[Prasad et al., 2023] Archiki Prasad, Alexander Koller, Mareike Hartmann, Peter Clark, Ashish Sabharwal, Mohit Bansal, and Tushar Khot. ADaPT: As-needed decomposition and planning with language models. arXiv preprint arXiv:2311.05772, 2023.

[Sahoo et al., 2024] Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, and Aman Chadha. A systematic survey of prompt engineering
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-13T00:52:17+00:00", "moddate": "2025-05-13T00:52:17+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "templateversion": "IJCAI.2025.0", "trapped": "/False", "source": "data\\raw\\redteamllm_agentic_ai_framework_2505.06913.pdf", "total_pages": 9, "page": 7, "page_label": "8", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134960"}
in large language models: Techniques and applications. arXiv preprint arXiv:2402.07927, 2024.

[Shavit et al., 2023] Yonadav Shavit, Sandhini Agarwal, Miles Brundage, Steven Adler, Cullen O'Keefe, Rosie Campbell, Teddy Lee, Pamela Mishkin, Tyna Eloundou, Alan Hickey, et al. Practices for governing agentic AI systems. Research Paper, OpenAI, 2023.

[Shen et al., 2024] Xiangmin Shen, Lingzhi Wang, Zhenyuan Li, Yan Chen, Wencheng Zhao, Dawei Sun, Jiashui Wang, and Wei Ruan. PentestAgent: Incorporating LLM agents to automated penetration testing. arXiv preprint arXiv:2411.05185, 2024.

[Sun et al., 2023] Simeng Sun, Yang Liu, Shuohang Wang, Chenguang Zhu, and Mohit Iyyer. PEARL: Prompting large language models to plan and execute actions over long documents. arXiv preprint arXiv:2305.14564, 2023.

[Sun et al., 2024] Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, et al. TrustLLM: Trustworthiness in large language models. arXiv preprint arXiv:2401.05561, 2024.

[Wan et al., 2024] Shengye Wan, Cyrus Nikolaidis, Daniel Song, David Molnar, James Crnkovich, Jayson Grace, Manish Bhatt, Sahana Chennabasappa, Spencer Whitman, Stephanie Ding, et al. CyberSecEval 3: Advancing the evaluation of cybersecurity risks and capabilities in large language models. arXiv preprint arXiv:2408.01605, 2024.

[Wang et al., 2022] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.

[Wang et al., 2024] Yaoxiang Wang, Zhiyong Wu, Junfeng Yao, and Jinsong Su. TDAG: A multi-agent framework based on dynamic task decomposition and agent generation. arXiv preprint arXiv:2402.10178, 2024.

[Wang et al., 2025] Yaoxiang Wang, Zhiyong Wu, Junfeng Yao, and Jinsong Su. TDAG: A multi-agent framework based on dynamic task decomposition and agent generation. Neural Networks, page 107200, 2025.

[Wei et al., 2022] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.

[Xu et al., 2024] Jiacen Xu, Jack W Stokes, Geoff McDonald, Xuesong Bai, David Marshall, Siyue Wang, Adith Swaminathan, and Zhou Li. AutoAttacker: A large language model guided system to implement automatic cyber-attacks. arXiv preprint arXiv:2403.01038, 2024.

[Yao et al., 2023a] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36:11809–11822, 2023.

[Yao et al., 2023b] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023.

[Yao et al., 2024] Yifan Yao, Jinhao Duan, Kaidi Xu, Yuanfang Cai, Zhibo Sun, and Yue Zhang. A survey on large language model (LLM) security and privacy: The good, the bad, and the ugly. High-Confidence Computing, page 100211, 2024.

[Zhang et al., 2022] Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought prompting in large language models. arXiv preprint arXiv:2210.03493, 2022.
{"producer": "pdfTeX-1.40.25", "creator": "LaTeX with hyperref", "creationdate": "2025-05-13T00:52:17+00:00", "moddate": "2025-05-13T00:52:17+00:00", "ptex.fullbanner": "This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5", "templateversion": "IJCAI.2025.0", "trapped": "/False", "source": "data\\raw\\redteamllm_agentic_ai_framework_2505.06913.pdf", "total_pages": 9, "page": 8, "page_label": "9", "loader_type": "pdf", "load_timestamp": "2025-05-21T10:22:32.134961"}
RedTeamLLM: an Agentic AI framework for offensive security (Brian Challita and Pierre Parrend; arXiv:2505.06913, 2025), abstract: From automated intrusion testing to the discovery of zero-day attacks before software launch, agentic AI holds great promise for security engineering. This strong capability comes with a matching threat: the security and research community must build up its models before the approach is leveraged by malicious actors for cybercrime. We therefore propose and evaluate RedTeamLLM, an integrated architecture with a comprehensive security model for the automation of pentest tasks. RedTeamLLM follows three key steps, summarizing, reasoning, and acting, which embed its operational capacity. This novel framework addresses four open challenges: plan correction, memory management, context window constraints, and generality vs. specialization. Evaluation is performed through the automated resolution of a range of entry-level, but not trivial, CTF challenges. The contribution of the reasoning capability of our agentic AI framework is specifically evaluated.
Control Plane as a Tool: A Scalable Design Pattern for Agentic AI Systems (Sivasathivel Kandasamy; arXiv:2505.06817, 2025), abstract: Agentic AI systems represent a new frontier in artificial intelligence, where agents, often based on large language models (LLMs), interact with tools, environments, and other agents to accomplish tasks with a degree of autonomy. These systems show promise across a range of domains, but their architectural underpinnings remain immature. This paper conducts a comprehensive review of the types of agents, their modes of interaction with the environment, and the infrastructural and architectural challenges that emerge. We identify a gap in how these systems manage tool orchestration at scale and propose a reusable design abstraction: the "Control Plane as a Tool" pattern. This pattern allows developers to expose a single tool interface to an agent while encapsulating modular tool-routing logic behind it. We position this pattern within the broader context of agent design and argue that it addresses several key challenges in scaling, safety, and extensibility.
AI Agents vs. Agentic AI: A Conceptual Taxonomy, Applications and Challenges (table of contents)

I Introduction
 I-A Methodology Overview
  I-A1 Search Strategy
II Foundational Understanding of AI Agents
 II-1 Overview of Core Characteristics of AI Agents
 II-2 Foundational Models: The Role of LLMs and LIMs
 II-3 Generative AI as a Precursor
 II-A Language Models as the Engine for AI Agent Progression
  II-A1 LLMs as Core Reasoning Components
  II-A2 Tool-Augmented AI Agents: Enhancing Functionality
  II-A3 Illustrative Examples and Emerging Capabilities
III The Emergence of Agentic AI from AI Agent Foundations
 III-1 Conceptual Leap: From Isolated Tasks to Coordinated Systems
 III-2 Key Differentiators between AI Agents and Agentic AI
 III-A Architectural Evolution: From AI Agents to Agentic AI Systems
  III-A1 Core Architectural Components of AI Agents
  III-A2 Architectural Enhancements in Agentic AI
IV Application of AI Agents and Agentic AI
 IV-1 Application of AI Agents
 IV-2 Applications of Agentic AI
V Challenges and Limitations in AI Agents and Agentic AI
 V-1 Challenges and Limitations of AI Agents
 V-2 Challenges and Limitations of Agentic AI
VI Potential Solutions and Future Roadmap
VII Conclusion
This rise in search interest for both terms follows the emergence of large-scale generative models in late 2022. This shift is closely tied to the evolution of agent design from the pre-2022 era, where AI agents operated in constrained, rule-based environments, to the post-ChatGPT period marked by learning-driven, flexible architectures [15, 16, 17]. These newer systems enable agents to refine their performance over time and interact autonomously with unstructured, dynamic inputs [18, 19, 20].
For instance, while pre-modern expert systems required manual updates to static knowledge bases, modern agents leverage emergent neural behaviors to generalize across tasks [17]. The rise in trend activity reflects increasing recognition of these differences. Moreover, applications are no longer confined to narrow domains like simulations or logistics, but now extend to open-world settings demanding real-time reasoning and adaptive control. This momentum, as visualized in Figure 1, underscores the significance of recent architectural advances in scaling autonomous agents for real-world deployment. The release of ChatGPT in November 2022 marked a pivotal inflection point in the development and public perception of artificial intelligence, catalyzing a global surge in adoption, investment, and research activity [21]. In the wake of this breakthrough, the AI landscape underwent a rapid transformation, shifting from the use of standalone LLMs toward more autonomous, task-oriented frameworks [22]. This evolution progressed through two major post-generative phases: AI Agents and Agentic AI. Initially, the widespread success of ChatGPT popularized Generative Agents, which are LLM-based systems designed to produce novel outputs such as text, images, and code from user prompts [23, 24]. These agents were quickly adopted across applications ranging from conversational assistants (e.g., GitHub Copilot [25]) and content-generation platforms (e.g., Jasper [26]) to creative tools (e.g., Midjourney [27]), revolutionizing domains like digital design, marketing, and software prototyping throughout 2023. Building on this generative foundation, a new class of systems known as AI Agents emerged. These agents enhanced LLMs with capabilities for external tool use, function calling, and sequential reasoning, enabling them to retrieve real-time information and execute multi-step workflows autonomously [28, 29]. Frameworks such as AutoGPT [30] and BabyAGI (https://github.com/yoheinakajima/babyagi) exemplified this transition, showcasing how LLMs could be embedded within feedback loops to dynamically plan, act, and adapt in goal-driven environments [31, 32]. By late 2023, the field had advanced further into the realm of Agentic AI: complex, multi-agent systems in which specialized agents collaboratively decompose goals, communicate, and coordinate toward shared objectives. Architectures such as CrewAI demonstrate how these agentic frameworks can orchestrate decision-making across distributed roles, facilitating intelligent behavior in high-stakes applications including autonomous robotics, logistics management, and adaptive decision support [33, 34, 35, 36]. As the field progresses from Generative Agents toward increasingly autonomous systems, it becomes critically important to delineate the technological and conceptual boundaries between AI Agents and Agentic AI. While both paradigms build upon LLMs and extend the capabilities of generative systems, they embody fundamentally different architectures, interaction models, and levels of autonomy. AI Agents are typically designed as single-entity systems that perform goal-directed tasks by invoking external tools, applying sequential reasoning, and integrating real-time information to complete well-defined functions [37, 17]. In contrast, Agentic AI systems are composed of multiple, specialized agents that coordinate, communicate, and dynamically allocate sub-tasks within a broader workflow [38, 14].
This architectural distinction underpins profound differences in scalability, adaptability, and application scope. Understanding and formalizing the taxonomy between these two paradigms (AI Agents and Agentic AI) is scientifically significant for several reasons. First, it enables more precise system design by aligning computational frameworks with problem complexity, ensuring that AI Agents are deployed for modular, tool-assisted tasks, while Agentic AI is reserved for orchestrated multi-agent operations. Moreover, it allows for appropriate benchmarking and evaluation: performance metrics, safety protocols, and resource requirements differ markedly between individual-task agents and distributed agent systems. Additionally, a clear taxonomy reduces development inefficiencies by preventing the misapplication of design principles, such as assuming inter-agent collaboration in a system architected for single-agent execution. Without this clarity, practitioners risk both under-engineering complex scenarios that require agentic coordination and over-engineering simple applications that could be solved with a single AI Agent.

Figure 2: Mindmap of research questions relevant to AI Agents and Agentic AI. Each color-coded branch represents a key dimension of comparison: Architecture, Mechanisms, Scope/Complexity, Interaction, and Autonomy.

The field of artificial intelligence has seen significant advancements, particularly in the development of AI Agents and Agentic AI. These terms, while related, refer to distinct concepts with different capabilities and applications. This article aims to clarify the differences between AI Agents and Agentic AI, providing researchers with a foundational understanding of these technologies. The objective of this study is to formalize the distinctions, establish a shared vocabulary, and provide a structured taxonomy between AI Agents and Agentic AI that informs the next generation of intelligent agent design across academic and industrial domains, as illustrated in Figure 2. This review provides a comprehensive conceptual and architectural analysis of the progression from traditional AI Agents to emergent Agentic AI systems. Rather than organizing the study around formal research questions, we adopt a sequential, layered structure that mirrors the historical and technical evolution of these paradigms. Beginning with a detailed description of our search strategy and selection criteria, we first establish a foundational understanding of AI Agents by analyzing their defining attributes, such as autonomy, reactivity, and tool-based execution. We then explore the critical role of foundational models, specifically LLMs and Large Image Models (LIMs), which serve as the core reasoning and perceptual substrates that drive agentic behavior. Subsequent sections examine how generative AI systems have served as precursors to more dynamic, interactive agents, setting the stage for the emergence of Agentic AI. Through this lens, we trace the conceptual leap from isolated, single-agent systems to orchestrated multi-agent architectures, highlighting their structural distinctions, coordination strategies, and collaborative mechanisms. We further map the architectural evolution by dissecting the core system components of both AI Agents and Agentic AI, offering comparative insights into their planning, memory, orchestration, and execution layers.
Building upon this foundation, we review application domains spanning customer support, healthcare, research automation, and robotics, categorizing real-world deployments by system capabilities and coordination complexity. We then assess key challenges faced by both paradigms, including hallucination, limited reasoning depth, causality deficits, scalability issues, and governance risks. To address these limitations, we outline emerging solutions such as retrieval-augmented generation, tool-based reasoning, memory architectures, and simulation-based planning. The review culminates in a forward-looking roadmap that envisions the convergence of modular AI Agents and orchestrated Agentic AI in mission-critical domains. Overall, this paper aims to provide researchers with a structured taxonomy and actionable insights to guide the design, deployment, and evaluation of next-generation agentic systems.

I-A Methodology Overview

This review adopts a structured, multi-stage methodology designed to capture the evolution, architecture, application, and limitations of AI Agents and Agentic AI. The process is visually summarized in Figure 3, which delineates the sequential flow of topics explored in this study. The analytical framework was organized to trace the progression from basic agentic constructs rooted in LLMs to advanced multi-agent orchestration systems. Each step of the review was grounded in rigorous literature synthesis across academic sources and AI-powered platforms, enabling a comprehensive understanding of the current landscape and its emerging trajectories.

Figure 3: Methodology pipeline from foundational AI Agents to Agentic AI systems, applications, limitations, and solution strategies.

The review begins by establishing a foundational understanding of AI Agents, examining their core definitions, design principles, and architectural modules as described in the literature. These include components such as perception, reasoning, and action selection, along with early applications like customer service bots and retrieval assistants. This foundational layer serves as the conceptual entry point into the broader agentic paradigm. Next, we delve into the role of LLMs as core reasoning components, emphasizing how pre-trained language models underpin modern AI Agents. This section details how LLMs, through instruction fine-tuning and reinforcement learning from human feedback (RLHF), enable natural language interaction, planning, and limited decision-making capabilities. We also identify their limitations, such as hallucinations, static knowledge, and a lack of causal reasoning. Building on these foundations, the review proceeds to the emergence of Agentic AI, which represents a significant conceptual leap. Here, we highlight the transformation from tool-augmented single-agent systems to collaborative, distributed ecosystems of interacting agents. This shift is driven by the need for systems capable of decomposing goals, assigning subtasks, coordinating outputs, and adapting dynamically to changing contexts, capabilities that surpass what isolated AI Agents can offer.
The next section examines the architectural evolution from AI Agents to Agentic AI systems, contrasting simple, modular agent designs with complex orchestration frameworks. We describe enhancements such as persistent memory, meta-agent coordination, multi-agent planning loops (e.g., ReAct and Chain-of-Thought prompting), and semantic communication protocols. Comparative architectural analysis is supported with examples from platforms like AutoGPT, CrewAI, and LangGraph. Following the architectural exploration, the review presents an in-depth analysis of application domains where AI Agents and Agentic AI are being deployed. This includes six key application areas for each paradigm, ranging from knowledge retrieval, email automation, and report summarization for AI Agents, to research assistants, robotic swarms, and strategic business planning for Agentic AI. Use cases are discussed in the context of system complexity, real-time decision-making, and collaborative task execution. Subsequently, we address the challenges and limitations inherent to both paradigms. For AI Agents, we focus on issues like hallucination, prompt brittleness, limited planning ability, and lack of causal understanding. For Agentic AI, we identify higher-order challenges such as inter-agent misalignment, error propagation, unpredictability of emergent behavior, explainability deficits, and adversarial vulnerabilities. These problems are critically examined with references to recent experimental studies and technical reports. Finally, the review outlines potential solutions to overcome these challenges, drawing on recent advances in causal modeling, retrieval-augmented generation (RAG), multi-agent memory frameworks, and robust evaluation pipelines. These strategies are discussed not only as technical fixes but as foundational requirements for scaling agentic systems into high-stakes domains such as healthcare, finance, and autonomous robotics. Taken together, this methodological structure enables a comprehensive and systematic assessment of the state of AI Agents and Agentic AI. By sequencing the analysis across foundational understanding, model integration, architectural growth, applications, and limitations, the study aims to provide both theoretical clarity and practical guidance to researchers and practitioners navigating this rapidly evolving field. I-A1 Search Strategy To construct this review, we implemented a hybrid search methodology combining traditional academic repositories and AI-enhanced literature discovery tools. Specifically, twelve platforms were queried: academic databases such as Google Scholar, IEEE Xplore, ACM Digital Library, Scopus, Web of Science, ScienceDirect, and arXiv; and AI-powered interfaces including ChatGPT, Perplexity.ai, DeepSeek, Hugging Face Search, and Grok. Search queries incorporated Boolean combinations of terms such as “AI Agents,” “Agentic AI,” “LLM Agents,” “Tool-augmented LLMs,” and “Multi-Agent AI Systems.” Targeted queries such as “Agentic AI + Coordination + Planning,” and “AI Agents + Tool Usage + Reasoning” were employed to retrieve papers addressing both conceptual underpinnings and system-level implementations. Literature inclusion was based on criteria such as novelty, empirical evaluation, architectural contribution, and citation impact. The rising global interest in these technologies illustrated in Figure 1 using Google Trends data reinforces the urgency of synthesizing this emerging knowledge space. 
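To make the architectural contrast reviewed above concrete, the sketch below juxtaposes a single tool-augmented AI Agent with a meta-agent that decomposes a goal across specialist agents. All classes, the ask() callable, and the plan format are hypothetical teaching code, not any of the cited frameworks.

```python
# Illustrative contrast between the two paradigms: a single tool-augmented
# AI Agent versus a meta-agent orchestrating specialist agents.
from typing import Callable

class AIAgent:
    """Single-entity agent: one model, one toolset, one well-defined task."""
    def __init__(self, ask: Callable, tools: dict):
        self.ask, self.tools = ask, tools

    def run(self, task: str) -> str:
        tool_name = self.ask(f"Pick one tool from {list(self.tools)} for: {task}")
        # Fall back to answering directly if no registered tool matches.
        return self.tools.get(tool_name.strip(), lambda t: self.ask(t))(task)

class AgenticSystem:
    """Meta-agent: decomposes a goal and routes subtasks to specialists."""
    def __init__(self, ask: Callable, specialists: dict):
        self.ask, self.specialists = ask, specialists

    def run(self, goal: str) -> list:
        # Dynamic task decomposition: expect one "role: subtask" per line.
        plan = self.ask(f"Decompose into steps for roles {list(self.specialists)}: {goal}")
        results = []
        for line in plan.splitlines():
            role, _, subtask = line.partition(":")
            agent = self.specialists.get(role.strip())
            if agent is not None:
                results.append(agent.run(subtask.strip()))
        return results
```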
II Foundational Understanding of AI Agents

AI Agents are autonomous software entities engineered for goal-directed task execution within bounded digital environments [39, 14]. These agents are defined by their ability to perceive structured or unstructured inputs [40], reason over contextual information [41, 42], and initiate actions toward achieving specific objectives, often acting as surrogates for human users or subsystems [43]. Unlike conventional automation scripts, which follow deterministic workflows, AI agents demonstrate reactive intelligence and limited adaptability, allowing them to interpret dynamic inputs and reconfigure outputs accordingly [44]. Their adoption has been reported across a range of application domains, including customer service automation [45, 46], personal productivity assistance [47], internal information retrieval [48, 49], and decision support systems [50, 51].

II-1 Overview of Core Characteristics of AI Agents

AI Agents are widely conceptualized as instantiated operational embodiments of artificial intelligence designed to interface with users, software ecosystems, or digital infrastructures in pursuit of goal-directed behavior [52, 53, 54]. These agents distinguish themselves from general-purpose LLMs by exhibiting structured initialization, bounded autonomy, and persistent task orientation. While LLMs primarily function as reactive prompt followers [55], AI Agents operate within explicitly defined scopes, engaging dynamically with inputs and producing actionable outputs in real-time environments [56].

Figure 4: Core characteristics of AI Agents (autonomy, task-specificity, and reactivity), illustrated with symbolic representations for agent design and operational behavior.

Figure 4 illustrates the three foundational characteristics that recur across architectural taxonomies and empirical deployments of AI Agents: autonomy, task-specificity, and reactivity with adaptation. First, autonomy denotes the agent’s ability to act independently post-deployment, minimizing human-in-the-loop dependencies and enabling large-scale, unattended operation [57, 46]. Second, task-specificity encapsulates the design philosophy of AI agents being specialized for narrowly scoped tasks, allowing high-performance optimization within a defined functional domain such as scheduling, querying, or filtering [58, 59]. Third, reactivity refers to an agent’s capacity to respond to changes in its environment, including user commands, software states, or API responses; when extended with adaptation, this includes feedback loops and basic learning heuristics [17, 60]. Together, these three traits provide a foundational profile for understanding and evaluating AI Agents across deployment scenarios. The remainder of this section elaborates on each characteristic, offering theoretical grounding and illustrative examples.

• Autonomy: A central feature of AI Agents is their ability to function with minimal or no human intervention after deployment [57]. Once initialized, these agents are capable of perceiving environmental inputs, reasoning over contextual data, and executing predefined or adaptive actions in real-time [17]. Autonomy enables scalable deployment in applications where persistent oversight is impractical, such as customer support bots or scheduling assistants [46, 61].

• Task-Specificity: AI Agents are purpose-built for narrow, well-defined tasks [58, 59]. They are optimized to execute repeatable operations within a fixed domain, such as email filtering [62, 63], database querying [64], or calendar coordination [38, 65]. This task specialization allows for efficiency, interpretability, and high precision in automation tasks where general-purpose reasoning is unnecessary or inefficient.
• Reactivity and Adaptation: AI Agents often include basic mechanisms for interacting with dynamic inputs, allowing them to respond to real-time stimuli such as user requests, external API calls, or state changes in software environments [17, 60]. Some systems integrate rudimentary learning [66] through feedback loops [67, 68], heuristics [69], or updated context buffers to refine behavior over time, particularly in settings like personalized recommendations or conversation flow management [70, 71, 72].

These core characteristics collectively enable AI Agents to serve as modular, lightweight interfaces between pretrained AI models and domain-specific utility pipelines. Their architectural simplicity and operational efficiency position them as key enablers of scalable automation across enterprise, consumer, and industrial settings. While limited in reasoning depth compared to more general AI systems, their high usability and performance within constrained task boundaries have made them foundational components in contemporary intelligent system design.

II-2 Foundational Models: The Role of LLMs and LIMs

The foundational progress in AI agents has been significantly accelerated by the development and deployment of LLMs and LIMs, which serve as the core reasoning and perception engines in contemporary agent systems. These models enable AI agents to interact intelligently with their environments, understand multimodal inputs, and perform complex reasoning tasks that go beyond hard-coded automation.

LLMs such as GPT-4 [73] and PaLM [74] are trained on massive datasets of text from books, web content, and dialogue corpora. These models exhibit emergent capabilities in natural language understanding, question answering, summarization, dialogue coherence, and even symbolic reasoning [75, 76]. Within AI agent architectures, LLMs serve as the primary decision-making engine, allowing the agent to parse user queries, plan multi-step solutions, and generate naturalistic responses. For instance, an AI customer support agent powered by GPT-4 can interpret customer complaints, query backend systems via tool integration, and respond in a contextually appropriate and emotionally aware manner [77].

Large Image Models (LIMs) such as CLIP [78] and BLIP-2 [79] extend the agent’s capabilities into the visual domain. Trained on image-text pairs, LIMs enable perception-based tasks including image classification, object detection, and vision-language grounding. These capabilities are increasingly vital for agents operating in domains such as robotics [80], autonomous vehicles [81, 82], and visual content moderation [83, 84].

Figure 5: An AI agent–enabled drone autonomously inspects an orchard, identifying diseased fruits and damaged branches using vision models, and triggers real-time alerts for targeted horticultural interventions.

For example, as illustrated in Figure 5, in an autonomous drone agent tasked with inspecting orchards, a LIM can identify diseased fruits or damaged branches by interpreting live aerial imagery and triggering predefined intervention protocols.
Upon detection, the system autonomously triggers predefined intervention protocols, such as notifying horticultural staff or marking the location for targeted treatment, without requiring human intervention [57, 17]. This workflow exemplifies the autonomy and reactivity of AI agents in agricultural environments, and recent literature underscores the growing sophistication of such drone-based AI agents. Chitra et al. [85] provide a comprehensive overview of AI algorithms foundational to embodied agents, highlighting the integration of computer vision, SLAM, reinforcement learning, and sensor fusion. These components collectively support real-time perception and adaptive navigation in dynamic environments. Kourav et al. [86] further emphasize the role of natural language processing and large language models in generating drone action plans from human-issued queries, demonstrating how LLMs support naturalistic interaction and mission planning. Similarly, Natarajan et al. [87] explore deep learning and reinforcement learning for scene understanding, spatial mapping, and multi-agent coordination in aerial robotics. These studies converge on the critical importance of AI-driven autonomy, perception, and decision-making in advancing drone-based agents.

Importantly, LLMs and LIMs are often accessed via inference APIs provided by cloud-based platforms such as OpenAI (https://openai.com/), HuggingFace (https://huggingface.co/), and Google Gemini (https://gemini.google.com/app). These services abstract away the complexity of model training and fine-tuning, enabling developers to rapidly build and deploy agents equipped with state-of-the-art reasoning and perceptual abilities. This composability accelerates prototyping and allows agent frameworks like LangChain [88] and AutoGen [89] to orchestrate LLM and LIM outputs across task workflows. In short, foundational models give modern AI agents their basic understanding of language and visuals. Language models help them reason with words, and image models help them understand pictures; working together, they allow AI to make smart decisions in complex situations.

II-3 Generative AI as a Precursor

A consistent theme in the literature is the positioning of generative AI as the foundational precursor to agentic intelligence. These systems primarily operate on pretrained LLMs and LIMs, which are optimized to synthesize novel content (text, images, audio, or code) based on input prompts. While highly expressive, generative models fundamentally exhibit reactive behavior: they produce output only when explicitly prompted and do not pursue goals autonomously or engage in self-initiated reasoning [90, 91].

Key Characteristics of Generative AI:

• Reactivity: As non-autonomous systems, generative models are exclusively input-driven [92, 93]. Their operations are triggered by user-specified prompts and they lack internal states, persistent memory, or goal-following mechanisms [94, 95, 96].

• Multimodal Capability: Modern generative systems can produce a diverse array of outputs, including coherent narratives, executable code, realistic images, and even speech transcripts. For instance, models like GPT-4 [73], PaLM-E [97], and BLIP-2 [79] exemplify this capacity, enabling language-to-image, image-to-text, and cross-modal synthesis tasks.

• Prompt Dependency and Statelessness: Generative systems are stateless in that they do not retain context across interactions unless explicitly provided [98, 99]. Their design lacks intrinsic feedback loops [100], state management [101, 102], or multi-step planning, a requirement for autonomous decision-making and iterative goal refinement [103, 104].
Despite their remarkable generative fidelity, these systems are constrained by their inability to act upon the environment or manipulate digital tools independently. For instance, they cannot search the internet, parse real-time data, or interact with APIs without human-engineered wrappers or scaffolding layers. As such, they fall short of being classified as true AI Agents, whose architectures integrate perception, decision-making, and external tool-use within closed feedback loops.

The limitations of generative AI in handling dynamic tasks, maintaining state continuity, or executing multi-step plans led to the development of tool-augmented systems, commonly referred to as AI Agents [105]. These systems build upon the language processing backbone of LLMs but introduce additional infrastructure such as memory buffers, tool-calling APIs, reasoning chains, and planning routines to bridge the gap between passive response generation and active task completion. This architectural evolution marks a critical shift in AI system design: from content creation to autonomous utility [106, 107]. The trajectory from generative systems to AI agents underscores a progressive layering of functionality that ultimately supports the emergence of agentic behaviors.

II-A Language Models as the Engine for AI Agent Progression

The emergence of AI Agents as a transformative paradigm in artificial intelligence is closely tied to the evolution and repurposing of large-scale language models such as GPT-3 [108], Llama [109], T5 [110], Baichuan 2 [111], and GPT3mix [112]. A substantial and growing body of research confirms that the leap from reactive generative models to autonomous, goal-directed agents is driven by the integration of LLMs as core reasoning engines within dynamic agentic systems. These models, originally trained for natural language processing tasks, are increasingly embedded in frameworks that require adaptive planning [113, 114], real-time decision-making [115, 116], and environment-aware behavior [117].

II-A1 LLMs as Core Reasoning Components

LLMs such as GPT-4 [73], PaLM [74], Claude (https://www.anthropic.com/news/claude-3-5-sonnet), and LLaMA [109] are pre-trained on massive text corpora using self-supervised objectives and fine-tuned using techniques such as Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF) [118, 119]. These models encode rich statistical and semantic knowledge, allowing them to perform tasks like inference, summarization, code generation, and dialogue management. In agentic contexts, however, their capabilities are repurposed not merely to generate responses, but to serve as cognitive substrates: interpreting user goals, generating action plans, selecting tools, and managing multi-turn workflows.

Recent work identifies these models as central to the architecture of contemporary agentic systems. For instance, AutoGPT [30] and BabyAGI (https://github.com/yoheinakajima/babyagi) use GPT-4 as both a planner and executor: the model analyzes high-level objectives, decomposes them into actionable subtasks, invokes external APIs as needed, and monitors progress to determine subsequent actions. In such systems, the LLM operates in a loop of prompt processing, state updating, and feedback-based correction, closely emulating autonomous decision-making.
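To make this loop concrete, the following minimal Python sketch caricatures the planner-executor cycle described above. It is illustrative only: the llm() helper is a stand-in for any chat-completion API, and the decomposition prompts are hypothetical rather than drawn from AutoGPT or BabyAGI themselves.

```python
from collections import deque

def llm(prompt: str) -> str:
    """Stand-in for a chat-completion API call. Returns canned text so
    the sketch runs without network access or API keys."""
    return "RESULT: (model output for) " + prompt[:60]

def run_agent(objective: str, max_steps: int = 5) -> list[str]:
    # Seed the task queue with an initial decomposition of the objective.
    tasks = deque(llm(f"Decompose into subtasks: {objective}").splitlines())
    completed: list[str] = []
    for _ in range(max_steps):
        if not tasks:
            break
        task = tasks.popleft()
        # Execute the subtask, conditioning on results accumulated so far.
        result = llm(f"Objective: {objective}\nDone so far: {completed}\nExecute: {task}")
        completed.append(result)
        # Feedback-based correction: ask the model whether new subtasks are needed.
        followups = llm(f"Given {result}, list any new subtasks (or NONE).")
        if "NONE" not in followups:
            tasks.extend(followups.splitlines())
    return completed

if __name__ == "__main__":
    for step in run_agent("Write a market analysis report"):
        print(step)
```

The essential design choice is the mutable task queue: the model both consumes tasks and emits new ones, which is what distinguishes this looped usage from single-shot prompting.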
II-A2 Tool-Augmented AI Agents: Enhancing Functionality

To overcome limitations inherent to generative-only systems such as hallucination, static knowledge cutoffs, and restricted interaction scopes, researchers have proposed the concept of tool-augmented LLM agents [120] such as Easytool [121], Gentopia [122], and ToolFive [123]. These systems integrate external tools, APIs, and computation platforms into the agent’s reasoning pipeline, allowing for real-time information access, code execution, and interaction with dynamic data environments.

Tool Invocation. When an agent identifies a need that cannot be addressed through its internal knowledge, such as querying a current stock price, retrieving up-to-date weather information, or executing a script, it generates a structured function call or API request [124, 125]. These calls are typically formatted in JSON, SQL, or Python, depending on the target service, and routed through an orchestration layer that executes the task.

Result Integration. Once a response is received from the tool, the output is parsed and reincorporated into the LLM’s context window. This enables the agent to synthesize new reasoning paths, update its task status, and decide on the next step. The ReAct framework [126] exemplifies this architecture by combining reasoning (Chain-of-Thought prompting) and action (tool use), with LLMs alternating between internal cognition and external environment interaction.

II-A3 Illustrative Examples and Emerging Capabilities

Tool-augmented LLM agents have demonstrated capabilities across a range of applications. In AutoGPT [30], the agent may plan a product market analysis by sequentially querying the web, compiling competitor data, summarizing insights, and generating a report. In a coding context, tools like GPT-Engineer combine LLM-driven design with local code execution environments to iteratively develop software artifacts [127, 128]. In research domains, systems like Paper-QA [129] utilize LLMs to query vectorized academic databases, grounding answers in retrieved scientific literature to ensure factual integrity. These capabilities have opened pathways for more robust behavior of AI agents such as long-horizon planning, cross-tool coordination, and adaptive learning loops. Nevertheless, the inclusion of tools also introduces new challenges in orchestration complexity, error propagation, and context window limitations, all active areas of research.

The progression toward AI Agents is inseparable from the strategic integration of LLMs as reasoning engines and their augmentation through structured tool use. This synergy transforms static language models into dynamic cognitive entities capable of perceiving, planning, acting, and adapting, setting the stage for multi-agent collaboration, persistent memory, and scalable autonomy. Figure 6 illustrates a representative case: a news query agent that performs real-time web search, summarizes retrieved documents, and generates an articulate, context-aware answer. Such workflows have been demonstrated in implementations using LangChain, AutoGPT, and OpenAI function-calling paradigms.

Figure 6: Workflow of an AI Agent performing real-time news search, summarization, and answer generation, as commonly described in the literature.
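The tool-invocation and result-integration cycle described in Section II-A2 can be sketched in a few lines of Python. This is a schematic illustration, not any framework's actual API: the model() stub, the tool registry, and the JSON action format are assumptions made for the example.

```python
import json

# Hypothetical tool registry; a real agent would wrap live APIs here.
TOOLS = {
    "get_weather": lambda city: f"22°C and clear in {city}",
    "web_search": lambda query: f"Top result for '{query}' ...",
}

def model(context: str) -> str:
    """Stub for an LLM that emits either a JSON tool call or a final answer."""
    if "Observation:" not in context:
        return json.dumps({"tool": "web_search", "args": ["latest AI news"]})
    return json.dumps({"final": "Summary of the retrieved news."})

def agent(query: str, max_turns: int = 4) -> str:
    context = f"User: {query}"
    for _ in range(max_turns):
        action = json.loads(model(context))
        if "final" in action:                # result integration complete
            return action["final"]
        tool_fn = TOOLS[action["tool"]]      # orchestration-layer dispatch
        observation = tool_fn(*action["args"])
        # Parsed tool output is fed back into the model's context window.
        context += f"\nAction: {action}\nObservation: {observation}"
    return "Gave up after too many turns."

print(agent("What happened in AI today?"))
```

The alternation between a model emission and an appended observation is the minimal form of the reason-act loop that ReAct-style agents elaborate with explicit thought traces.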
III The Emergence of Agentic AI from AI Agent Foundations

While AI Agents represent a significant leap in artificial intelligence capabilities, particularly in automating narrow tasks through tool-augmented reasoning, recent literature identifies notable limitations that constrain their scalability in complex, multi-step, or cooperative scenarios [130, 131, 132]. These constraints have catalyzed the development of a more advanced paradigm: Agentic AI. This emerging class of systems extends the capabilities of traditional agents by enabling multiple intelligent entities to collaboratively pursue goals through structured communication [133, 134, 135], shared memory [136, 137], and dynamic role assignment [14].

III-1 Conceptual Leap: From Isolated Tasks to Coordinated Systems

AI Agents, as explored in prior sections, integrate LLMs with external tools and APIs to execute narrowly scoped operations such as responding to customer queries, performing document retrieval, or managing schedules. However, as use cases increasingly demand context retention, task interdependence, and adaptability across dynamic environments, the single-agent model proves insufficient [138, 139]. Agentic AI systems represent an emergent class of intelligent architectures in which multiple specialized agents collaborate to achieve complex, high-level objectives. As defined in recent frameworks, these systems are composed of modular agents, each tasked with a distinct subcomponent of a broader goal, and coordinated through either a centralized orchestrator or a decentralized protocol [16, 134]. This structure signifies a conceptual departure from the atomic, reactive behaviors typically observed in single-agent architectures, toward a form of system-level intelligence characterized by dynamic inter-agent collaboration.

A key enabler of this paradigm is goal decomposition, wherein a user-specified objective is automatically parsed and divided into smaller, manageable tasks by planning agents [38]. These subtasks are then distributed across the agent network. Multi-step reasoning and planning mechanisms facilitate the dynamic sequencing of these subtasks, allowing the system to adapt in real time to environmental shifts or partial task failures. This ensures robust task execution even under uncertainty [14]. Inter-agent communication is mediated through distributed communication channels, such as asynchronous messaging queues, shared memory buffers, or intermediate output exchanges, enabling coordination without necessitating continuous central oversight [140, 14]. Furthermore, reflective reasoning and memory systems allow agents to store context across multiple interactions, evaluate past decisions, and iteratively refine their strategies [141]. Collectively, these capabilities enable Agentic AI systems to exhibit flexible, adaptive, and collaborative intelligence that exceeds the operational limits of individual agents.

A widely accepted conceptual illustration in the literature delineates the distinction between AI Agents and Agentic AI through the analogy of smart home systems. As depicted in Figure 7, the left side represents a traditional AI Agent in the form of a smart thermostat. This standalone agent receives a user-defined temperature setting and autonomously controls the heating or cooling system to maintain the target temperature.
While it demonstrates limited autonomy, such as learning user schedules or reducing energy usage during absence, it operates in isolation, executing a singular, well-defined task without engaging in broader environmental coordination or goal inference [57, 17].

In contrast, the right side of Figure 7 illustrates an Agentic AI system embedded in a comprehensive smart home ecosystem. Here, multiple specialized agents interact synergistically to manage diverse aspects such as weather forecasting, daily scheduling, energy pricing optimization, security monitoring, and backup power activation. These agents are not just reactive modules; they communicate dynamically, share memory states, and collaboratively align actions toward a high-level system goal (e.g., optimizing comfort, safety, and energy efficiency in real time). For instance, a weather forecast agent might signal upcoming heatwaves, prompting early pre-cooling via solar energy before peak pricing hours, as coordinated by an energy management agent. Simultaneously, the system might delay high-energy tasks or activate surveillance systems during occupant absence, integrating decisions across domains. This figure embodies the architectural and functional leap from task-specific automation to adaptive, orchestrated intelligence. The AI Agent acts as a deterministic component with limited scope, while Agentic AI reflects distributed intelligence, characterized by goal decomposition, inter-agent communication, and contextual adaptation, hallmarks of modern agentic AI frameworks.

Figure 7: Comparative illustration of AI Agent vs. Agentic AI, synthesizing conceptual distinctions found in the literature. Left: A single-task AI Agent. Right: A multi-agent, collaborative Agentic AI system.

III-2 Key Differentiators between AI Agents and Agentic AI

To systematically capture the evolution from Generative AI to AI Agents and further to Agentic AI, we structure our comparative analysis around a foundational taxonomy where Generative AI serves as the baseline. While AI Agents and Agentic AI represent increasingly autonomous and interactive systems, both paradigms are fundamentally grounded in generative architectures, especially LLMs and LIMs. Consequently, each comparative table in this subsection includes Generative AI as a reference column to highlight how agentic behavior diverges and builds upon generative foundations. A set of fundamental distinctions between AI Agents and Agentic AI, particularly in terms of scope, autonomy, architectural composition, coordination strategy, and operational complexity, are synthesized in Table I, derived from close analysis of prominent frameworks such as AutoGen [89] and ChatDev [142]. These comparisons provide a multi-dimensional view of how single-agent systems transition into coordinated, multi-agent ecosystems. Through the lens of generative capabilities, we trace the increasing sophistication in planning, communication, and adaptation that characterizes the shift toward Agentic AI.

TABLE I: Key Differences Between AI Agents and Agentic AI

Feature | AI Agents | Agentic AI
Definition | Autonomous software programs that perform specific tasks. | Systems of multiple AI agents collaborating to achieve complex goals.
Autonomy Level | High autonomy within specific tasks. | Higher autonomy with the ability to manage multi-step, complex tasks.
Task Complexity | Typically handle single, specific tasks. | Handle complex, multi-step tasks requiring coordination.
Collaboration | Operate independently. | Involve multi-agent collaboration and information sharing.
Learning and Adaptation | Learn and adapt within their specific domain. | Learn and adapt across a wider range of tasks and environments.
Applications | Customer service chatbots, virtual assistants, automated workflows. | Supply chain management, business process optimization, virtual project managers.
While Table I delineates the foundational and operational differences between AI Agents and Agentic AI, a more granular taxonomy is required to understand how these paradigms emerge from and relate to broader generative frameworks. Specifically, the conceptual and cognitive progression from static Generative AI systems to tool-augmented AI Agents, and further to collaborative Agentic AI ecosystems, necessitates an integrated comparative framework. This transition is not merely structural but also functional, encompassing how initiation mechanisms, memory use, learning capacities, and orchestration strategies evolve across the agentic spectrum. Moreover, recent studies suggest the emergence of hybrid paradigms such as “Generative Agents,” which blend generative modeling with modular task specialization, further complicating the agentic landscape. In order to capture these nuanced relationships, Table II synthesizes the key conceptual and cognitive dimensions across four archetypes: Generative AI, AI Agents, Agentic AI, and inferred Generative Agents. By positioning Generative AI as a baseline technology, this taxonomy highlights the scientific continuum that spans from passive content generation to interactive task execution and finally to autonomous, multi-agent orchestration. This multi-tiered lens is critical for understanding both the current capabilities and future trajectories of agentic intelligence across applied and theoretical domains.

TABLE II: Taxonomy Summary of AI Agent Paradigms: Conceptual and Cognitive Dimensions

Conceptual Dimension | Generative AI | AI Agent | Agentic AI | Generative Agent (Inferred)
Initiation Type | Prompt-triggered by user or input | Prompt or goal-triggered with tool use | Goal-initiated or orchestrated task | Prompt or system-level trigger
Goal Flexibility | None (fixed per prompt) | Low (executes specific goal) | High (decomposes and adapts goals) | Low (guided by subtask goal)
Temporal Continuity | Stateless, single-session output | Short-term continuity within task | Persistent across workflow stages | Context-limited to subtask
Learning/Adaptation | Static (pretrained) | Possible in future (tool selection strategies may evolve) | Yes (learns from outcomes) | Typically static; limited adaptation
Memory Use | No memory or short context window | Optional memory or tool cache | Shared episodic/task memory | Subtask-local or contextual memory
Coordination Strategy | None (single-step process) | Isolated task execution | Hierarchical or decentralized coordination | Receives instructions from system
System Role | Content generator | Tool-using task executor | Collaborative workflow orchestrator | Subtask-level modular generator

To further operationalize the distinctions outlined in Table I, Tables III and IV extend the comparative lens to encompass a broader spectrum of agent paradigms including AI Agents, Agentic AI, and emerging Generative Agents. Table III presents key architectural and behavioral attributes that highlight how each paradigm differs in terms of primary capabilities, planning scope, interaction style, learning dynamics, and evaluation criteria.
AI Agents are optimized for discrete task execution with limited planning horizons and rely on supervised or rule-based learning mechanisms. In contrast, Agentic AI systems extend this capacity through multi-step planning, meta-learning, and inter-agent communication, positioning them for use in complex environments requiring autonomous goal setting and coordination. Generative Agents, as a more recent construct, inherit LLM-centric pretraining capabilities and excel in producing multimodal content creatively, yet they lack the proactive orchestration and state-persistent behaviors seen in Agentic AI systems.

The second table (Table IV) provides a process-driven comparison across three agent categories: Generative AI, AI Agents, and Agentic AI. This framing emphasizes how functional pipelines evolve from prompt-driven single-model inference in Generative AI, to tool-augmented execution in AI Agents, and finally to orchestrated agent networks in Agentic AI. The structure column underscores this progression: from single LLMs to integrated toolchains and ultimately to distributed multi-agent systems. Access to external data, a key operational requirement for real-world utility, also increases in sophistication, from absent or optional in Generative AI to modular and coordinated in Agentic AI. Collectively, these comparative views reinforce that the evolution from generative to agentic paradigms is marked not just by increasing system complexity but also by deeper integration of autonomy, memory, and decision-making across multiple levels of abstraction.

TABLE III: Key Attributes of AI Agents, Agentic AI, and Generative Agents

Aspect | AI Agent | Agentic AI | Generative Agent
Primary Capability | Task execution | Autonomous goal setting | Content generation
Planning Horizon | Single-step | Multi-step | N/A (content only)
Learning Mechanism | Rule-based or supervised | Reinforcement/meta-learning | Large-scale pretraining
Interaction Style | Reactive | Proactive | Creative
Evaluation Focus | Accuracy, latency | Engagement, adaptability | Coherence, diversity

TABLE IV: Comparison of Generative AI, AI Agents, and Agentic AI

Feature | Generative AI | AI Agent | Agentic AI
Core Function | Content generation | Task-specific execution using tools | Complex workflow automation
Mechanism | Prompt → LLM → Output | Prompt → Tool Call → LLM → Output | Goal → Agent Orchestration → Output
Structure | Single model | LLM + tool(s) | Multi-agent system
External Data Access | None (unless added) | Via external APIs | Coordinated multi-agent access
Key Trait | Reactivity | Tool-use | Collaboration

Furthermore, to provide a deeper multi-dimensional understanding of the evolving agentic landscape, Tables V through IX extend the comparative taxonomy to dissect five critical dimensions: core function and goal alignment, architectural composition, operational mechanism, scope and complexity, and interaction-autonomy dynamics. These dimensions serve not only to reinforce the structural differences between Generative AI, AI Agents, and Agentic AI, but also to introduce an emergent category, Generative Agents, representing modular agents designed for embedded subtask-level generation within broader workflows. Table V situates the three paradigms in terms of their overarching goals and functional intent. While Generative AI centers on prompt-driven content generation, AI Agents emphasize tool-based task execution, and Agentic AI systems orchestrate full-fledged workflows.
This functional expansion is mirrored architecturally in Table VI, where the system design transitions from single-model reliance (in Generative AI) to multi-agent orchestration and shared memory utilization in Agentic AI. Table VII then outlines how these paradigms differ in their workflow execution pathways, highlighting the rise of inter-agent coordination and hierarchical communication as key drivers of agentic behavior. Furthermore, Table VIII explores the increasing scope and operational complexity handled by these systems, ranging from isolated content generation to adaptive, multi-agent collaboration in dynamic environments. Finally, Table IX synthesizes the varying degrees of autonomy, interaction style, and decision-making granularity across the paradigms. These tables collectively establish a rigorous framework to classify and analyze agent-based AI systems, laying the groundwork for principled evaluation and future design of autonomous, intelligent, and collaborative agents operating at scale.

Each of the comparative tables presented from Table V through Table IX offers a layered analytical lens to isolate the distinguishing attributes of Generative AI, AI Agents, and Agentic AI, thereby grounding the conceptual taxonomy in concrete operational and architectural features. Table V, for instance, addresses the most fundamental layer of differentiation: core function and system goal. While Generative AI is narrowly focused on reactive content production conditioned on user prompts, AI Agents are characterized by their ability to perform targeted tasks using external tools. Agentic AI, by contrast, is defined by its ability to pursue high-level goals through the orchestration of multiple subagents, each addressing a component of a broader workflow. This shift from output generation to workflow execution marks a critical inflection point in the evolution of autonomous systems.

In Table VI, the architectural distinctions are made explicit, especially in terms of system composition and control logic. Generative AI relies on a single model with no built-in capability for tool use or delegation, whereas AI Agents combine language models with auxiliary APIs and interface mechanisms to augment functionality. Agentic AI extends this further by introducing multi-agent systems where collaboration, memory persistence, and orchestration protocols are central to the system’s operation. This expansion is crucial for enabling intelligent delegation, context preservation, and dynamic role assignment, capabilities absent in both generative and single-agent systems.

Likewise, Table VII dives deeper into how these systems function operationally, emphasizing differences in execution logic and information flow. Unlike Generative AI’s linear pipeline (prompt → output), AI Agents implement procedural mechanisms to incorporate tool responses mid-process. Agentic AI introduces recursive task reallocation and cross-agent messaging, thus facilitating emergent decision-making that cannot be captured by static LLM outputs alone. Table VIII further reinforces these distinctions by mapping each system’s capacity to handle task diversity, temporal scale, and operational robustness. Here, Agentic AI emerges as uniquely capable of supporting high-complexity goals that demand adaptive, multi-phase reasoning and execution strategies.
TABLE V: Comparison by Core Function and Goal

Feature | Generative AI | AI Agent | Agentic AI | Generative Agent (Inferred)
Primary Goal | Create novel content based on prompt | Execute a specific task using external tools | Automate complex workflow or achieve high-level goals | Perform a specific generative sub-task
Core Function | Content generation (text, image, audio, etc.) | Task execution with external interaction | Workflow orchestration and goal achievement | Sub-task content generation within a workflow

TABLE VI: Comparison by Architectural Components

Component | Generative AI | AI Agent | Agentic AI | Generative Agent (Inferred)
Core Engine | LLM / LIM | LLM | Multiple LLMs (potentially diverse) | LLM
Prompts | Yes (input trigger) | Yes (task guidance) | Yes (system goal and agent tasks) | Yes (sub-task guidance)
Tools/APIs | No (inherently) | Yes (essential) | Yes (available to constituent agents) | Potentially (if sub-task requires)
Multiple Agents | No | No | Yes (essential; collaborative) | No (is an individual agent)
Orchestration | No | No | Yes (implicit or explicit) | No (is part of orchestration)

TABLE VII: Comparison by Operational Mechanism

Mechanism | Generative AI | AI Agent | Agentic AI | Generative Agent (Inferred)
Primary Driver | Reactivity to prompt | Tool calling for task execution | Inter-agent communication and collaboration | Reactivity to input or sub-task prompt
Interaction Mode | User → LLM | User → Agent → Tool | User → System → Agents | System/Agent → Agent → Output
Workflow Handling | Single generation step | Single task execution | Multi-step workflow coordination | Single step within workflow
Information Flow | Input → Output | Input → Tool → Output | Input → Agent1 → Agent2 → … → Output | Input (from system/agent) → Output

TABLE VIII: Comparison by Scope and Complexity

Aspect | Generative AI | AI Agent | Agentic AI | Generative Agent (Inferred)
Task Scope | Single piece of generated content | Single, specific, defined task | Complex, multi-faceted goal or workflow | Specific sub-task (often generative)
Complexity | Low (relative) | Medium (integrates tools) | High (multi-agent coordination) | Low to Medium (one task component)
Example (Video) | Chatbot | Tavily Search Agent | YouTube-to-Blog Conversion System | Title/Description/Conclusion Generator

TABLE IX: Comparison by Interaction and Autonomy

Feature | Generative AI | AI Agent | Agentic AI | Generative Agent (Inferred)
Autonomy Level | Low (requires prompt) | Medium (uses tools autonomously) | High (manages entire process) | Low to Medium (executes sub-task)
External Interaction | None (baseline) | Via specific tools or APIs | Through multiple agents/tools | Possibly via tools (if needed)
Internal Interaction | N/A | N/A | High (inter-agent) | Receives input from system or agent
Decision Making | Pattern selection | Tool usage decisions | Goal decomposition and assignment | Best sub-task generation strategy

Furthermore, Table IX brings into sharp relief the operational and behavioral distinctions across Generative AI, AI Agents, and Agentic AI, with a particular focus on autonomy levels, interaction styles, and inter-agent coordination. Generative AI systems, typified by models such as GPT-3 [108] and DALL·E (https://openai.com/index/dall-e-3/), remain reactive, generating content solely in response to prompts without maintaining persistent state or engaging in iterative reasoning.
In contrast, AI Agents, such as those constructed with LangChain [88] or MetaGPT [143], exhibit a higher degree of autonomy, capable of initiating external tool invocations and adapting behaviors within bounded tasks. However, their autonomy is typically confined to isolated task execution, lacking long-term state continuity or collaborative interaction. Agentic AI systems mark a significant departure from these paradigms by introducing internal orchestration mechanisms and multi-agent collaboration frameworks. For example, platforms like AutoGen [89] and ChatDev [142] exemplify agentic coordination through task decomposition, role assignment, and recursive feedback loops. In AutoGen, one agent might serve as a planner while another retrieves information and a third synthesizes a report, each communicating through shared memory buffers and governed by an orchestrator agent that monitors dependencies and overall task progression. This structured coordination allows for more complex goal pursuit and flexible behavior in dynamic environments. Such architectures fundamentally shift the locus of intelligence from single-model outputs to emergent system-level behavior, wherein agents learn, negotiate, and update decisions based on evolving task states. Thus, the comparative taxonomy not only highlights increasing levels of operational independence but also illustrates how Agentic AI introduces novel paradigms of communication, memory integration, and decentralized control, paving the way for the next generation of autonomous systems with scalable, adaptive intelligence.

III-A Architectural Evolution: From AI Agents to Agentic AI Systems

Figure 8: Illustrating architectural evolution from traditional AI Agents to modern Agentic AI systems. It begins with core modules (Perception, Reasoning, and Action) and expands into advanced components including Specialized Agents, Advanced Reasoning & Planning, Persistent Memory, and Orchestration. The diagram further captures emergent properties such as Multi-Agent Collaboration, System Coordination, Shared Context, and Task Decomposition, all enclosed within a dotted boundary signifying layered modularity and the transition to distributed, adaptive agentic AI intelligence.

While both AI Agents and Agentic AI systems are grounded in modular design principles, Agentic AI significantly extends the foundational architecture to support more complex, distributed, and adaptive behaviors. As illustrated in Figure 8, the transition begins with the core subsystems (Perception, Reasoning, and Action) that define traditional AI Agents. Agentic AI enhances this base by integrating advanced components such as Specialized Agents, Advanced Reasoning & Planning, Persistent Memory, and Orchestration. The figure further emphasizes emergent capabilities including Multi-Agent Collaboration, System Coordination, Shared Context, and Task Decomposition, all encapsulated within a dotted boundary that signifies the shift toward reflective, decentralized, and goal-driven system architectures. This progression marks a fundamental inflection point in intelligent agent design. This section synthesizes findings from empirical frameworks such as LangChain [88], AutoGPT [89], and TaskMatrix [144], highlighting this progression in architectural sophistication.

III-A1 Core Architectural Components of AI Agents
Foundational AI Agents are typically composed of four primary subsystems: perception, reasoning, action, and learning. These subsystems form a closed-loop operational cycle, commonly referred to as “Understand, Think, Act” from a user interface perspective, or “Input, Processing, Action, Learning” in systems design literature [145, 14].

• Perception Module: This subsystem ingests input signals from users (e.g., natural language prompts) or external systems (e.g., APIs, file uploads, sensor streams). It is responsible for preprocessing data into a format interpretable by the agent’s reasoning module. For example, in LangChain-based agents [146, 88], the perception layer handles prompt templating, contextual wrapping, and retrieval augmentation via document chunking and embedding search.

• Knowledge Representation and Reasoning (KRR) Module: At the core of the agent’s intelligence lies the KRR module, which applies symbolic, statistical, or hybrid logic to input data. Techniques include rule-based logic (e.g., if-then decision trees), deterministic workflow engines, and simple planning graphs. Reasoning in agents like AutoGPT [30] is enhanced with function-calling and prompt chaining to simulate thought processes (e.g., “step-by-step” prompts or intermediate tool invocations).

• Action Selection and Execution Module: This module translates inferred decisions into external actions using an action library. These actions may include sending messages, updating databases, querying APIs, or producing structured outputs. Execution is often managed by middleware like LangChain’s “agent executor,” which links LLM outputs to tool calls and observes responses for subsequent steps [88].

• Basic Learning and Adaptation: Traditional AI Agents feature limited learning mechanisms, such as heuristic parameter adjustment [147, 148] or history-informed context retention. For instance, agents may use simple memory buffers to recall prior user inputs or apply scoring mechanisms to improve tool selection in future iterations.

Customization of these agents typically involves domain-specific prompt engineering, rule injection, or workflow templates, distinguishing them from hard-coded automation scripts by their ability to make context-aware decisions. Systems like ReAct [126] exemplify this architecture, combining reasoning and action in an iterative framework where agents simulate internal dialogue before selecting external actions.
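As a concrete, deliberately simplified reading of this four-module decomposition, the Python sketch below wires perception, reasoning, action, and a basic memory buffer into the closed "Input, Processing, Action, Learning" loop. All class and method names are illustrative assumptions, not an interface from any cited framework.

```python
class SimpleAIAgent:
    """Toy agent with the four canonical subsystems wired into one loop."""

    def __init__(self):
        self.memory: list[tuple[str, str]] = []  # basic learning: history buffer

    def perceive(self, raw_input: str) -> str:
        # Perception: normalize input into a form the reasoner can use.
        return raw_input.strip().lower()

    def reason(self, observation: str) -> str:
        # KRR: rule-based logic standing in for an LLM or inference engine.
        if "schedule" in observation:
            return "open_calendar"
        if "email" in observation:
            return "filter_inbox"
        return "ask_clarification"

    def act(self, decision: str) -> str:
        # Action selection: map decisions to entries in a small action library.
        actions = {
            "open_calendar": "Calendar opened; searching for free slots.",
            "filter_inbox": "Inbox filtered; 3 urgent emails flagged.",
            "ask_clarification": "Could you rephrase your request?",
        }
        return actions[decision]

    def step(self, raw_input: str) -> str:
        observation = self.perceive(raw_input)
        decision = self.reason(observation)
        result = self.act(decision)
        self.memory.append((observation, decision))  # retained for adaptation
        return result

agent = SimpleAIAgent()
print(agent.step("Please schedule a meeting for Tuesday"))
```

Swapping the rule-based reason() for an LLM call, and the action dictionary for live tool wrappers, is essentially what frameworks such as LangChain automate.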
III-A2 Architectural Enhancements in Agentic AI

Agentic AI systems inherit the modularity of AI Agents but extend their architecture to support distributed intelligence, inter-agent communication, and recursive planning. The literature documents a number of critical architectural enhancements that differentiate Agentic AI from its predecessors [149, 150].

• Ensemble of Specialized Agents: Rather than operating as a monolithic unit, Agentic AI systems consist of multiple agents, each assigned a specialized function, e.g., a summarizer, a retriever, a planner. These agents interact via communication channels (e.g., message queues, blackboards, or shared memory). For instance, MetaGPT [143] exemplifies this approach by modeling agents after corporate departments (e.g., CEO, CTO, engineer), where roles are modular, reusable, and role-bound.

• Advanced Reasoning and Planning: Agentic systems embed recursive reasoning capabilities using frameworks such as ReAct [126], Chain-of-Thought (CoT) prompting [151], and Tree of Thoughts [152]. These mechanisms allow agents to break down a complex task into multiple reasoning stages, evaluate intermediate results, and re-plan actions dynamically. This enables the system to respond adaptively to uncertainty or partial failure.

• Persistent Memory Architectures: Unlike traditional agents, Agentic AI incorporates memory subsystems to persist knowledge across task cycles or agent sessions [153, 154]. Memory types include episodic memory (task-specific history) [155, 156], semantic memory (long-term facts or structured data) [157, 158], and vector-based memory for retrieval-augmented generation (RAG) [159, 160]. For example, AutoGen [89] agents maintain scratchpads for intermediate computations, enabling stepwise task progression.

• Orchestration Layers / Meta-Agents: A key innovation in Agentic AI is the introduction of orchestrators: meta-agents that coordinate the lifecycle of subordinate agents, manage dependencies, assign roles, and resolve conflicts. Orchestrators often include task managers, evaluators, or moderators. In ChatDev [142], for example, a virtual CEO meta-agent distributes subtasks to departmental agents and integrates their outputs into a unified strategic response.

These enhancements collectively enable Agentic AI to support scenarios that require sustained context, distributed labor, multi-modal coordination, and strategic adaptation. Use cases range from research assistants that retrieve, summarize, and draft documents in tandem (e.g., AutoGen pipelines [89]) to smart supply chain agents that monitor logistics, vendor performance, and dynamic pricing models in parallel. The shift from isolated perception–reasoning–action loops to collaborative and reflective multi-agent workflows marks a key inflection point in the architectural design of intelligent systems. This progression positions Agentic AI as the next stage of AI infrastructure, capable not only of executing predefined workflows but also of constructing, revising, and managing complex objectives across agents with minimal human supervision.
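The orchestrator pattern described above can be reduced to a short schematic. In the Python sketch below, three stub agents share a common memory dictionary while a meta-agent sequences them; the role names, the shared-memory layout, and the llm() stub are assumptions for illustration, not the design of AutoGen, ChatDev, or any other cited platform.

```python
def llm(role: str, prompt: str) -> str:
    """Stand-in for a role-conditioned LLM call; returns canned text."""
    return f"[{role}] output for: {prompt[:50]}"

# Specialized agents: each reads from and writes to the shared memory.
def retriever(memory: dict) -> None:
    memory["sources"] = llm("retriever", f"Find sources on {memory['goal']}")

def summarizer(memory: dict) -> None:
    memory["summary"] = llm("summarizer", f"Summarize {memory['sources']}")

def writer(memory: dict) -> None:
    memory["draft"] = llm("writer", f"Draft a report from {memory['summary']}")

def orchestrator(goal: str) -> dict:
    """Meta-agent: sequences specialized agents over a shared memory and
    checks that each role actually produced output before continuing."""
    memory = {"goal": goal}                      # shared context for all agents
    pipeline = [retriever, summarizer, writer]   # dependency-ordered roles
    for agent in pipeline:
        before = len(memory)
        agent(memory)
        if len(memory) == before:                # trivial failure/conflict check
            raise RuntimeError(f"{agent.__name__} produced no output")
    return memory

result = orchestrator("the evolution of multi-agent AI systems")
print(result["draft"])
```

The shared dictionary is the minimal analogue of the blackboards and scratchpads discussed above; production systems replace it with persistent, often vector-indexed, memory stores.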
IV Application of AI Agents and Agentic AI

To illustrate the real-world utility and operational divergence between AI Agents and Agentic AI systems, this study synthesizes a range of applications drawn from recent literature, as visualized in Figure 9. We systematically categorize and analyze application domains across two parallel tracks: conventional AI Agent systems and their more advanced Agentic AI counterparts. For AI Agents, four primary use cases are reviewed: (1) Customer Support Automation and Internal Enterprise Search, where single-agent models handle structured queries and response generation; (2) Email Filtering and Prioritization, where agents assist users in managing high-volume communication through classification heuristics; (3) Personalized Content Recommendation and Basic Data Reporting, where user behavior is analyzed for automated insights; and (4) Autonomous Scheduling Assistants, which interpret calendars and book tasks with minimal user input. In contrast, Agentic AI applications encompass broader and more dynamic capabilities, reviewed through four additional categories: (1) Multi-Agent Research Assistants that retrieve, synthesize, and draft scientific content collaboratively; (2) Intelligent Robotics Coordination, including drone and multi-robot systems in fields like agriculture and logistics; (3) Collaborative Medical Decision Support, involving diagnostic, treatment, and monitoring subsystems; and (4) Multi-Agent Game AI and Adaptive Workflow Automation, where decentralized agents interact strategically or handle complex task pipelines.

Figure 9: Categorized applications of AI Agents and Agentic AI across eight core functional domains.

IV-1 Application of AI Agents

1. Customer Support Automation and Internal Enterprise Search: AI Agents are widely adopted in enterprise environments for automating customer support and facilitating internal knowledge retrieval. In customer service, these agents leverage retrieval-augmented LLMs interfaced with APIs and organizational knowledge bases to answer user queries, triage tickets, and perform actions like order tracking or return initiation [46]. For internal enterprise search, agents built on vector stores (e.g., Pinecone, Elasticsearch) retrieve semantically relevant documents in response to natural language queries. Tools such as Salesforce Einstein (https://www.salesforce.com/artificial-intelligence/), Intercom Fin (https://www.intercom.com/fin), and Notion AI (https://www.notion.com/product/ai) demonstrate how structured input processing and summarization capabilities reduce workload and improve enterprise decision-making.

A practical example (Figure 10a) of this dual functionality can be seen in a multinational e-commerce company deploying an AI Agent-based customer support and internal search assistant. For customer support, the AI Agent integrates with the company’s CRM (e.g., Salesforce) and fulfillment APIs to resolve queries such as “Where is my order?” or “How can I return this item?” Within milliseconds, the agent retrieves contextual data from shipping databases and policy repositories, then generates a personalized response using retrieval-augmented generation. For internal enterprise search, employees use the same system to query past meeting notes, sales presentations, or legal documents. When an HR manager types “summarize key benefits policy changes from last year,” the agent queries a Pinecone vector store embedded with enterprise documentation, ranks results by semantic similarity, and returns a concise summary along with source links. These capabilities not only reduce ticket volume and support overhead but also minimize time spent searching for institutional knowledge. The result is a unified, responsive system that enhances both external service delivery and internal operational efficiency using modular AI Agent architectures.

Figure 10: Applications of AI Agents in enterprise settings: (a) Customer support and internal enterprise search; (b) Email filtering and prioritization; (c) Personalized content recommendation and basic data reporting; and (d) Autonomous scheduling assistants. Each example highlights modular AI Agent integration for automation, intent understanding, and adaptive reasoning across operational workflows and user-facing systems.

2. Email Filtering and Prioritization: Within productivity tools, AI Agents automate email triage through content classification and prioritization. Integrated with systems like Microsoft Outlook and Superhuman, these agents analyze metadata and message semantics to detect urgency, extract tasks, and recommend replies. They apply user-tuned filtering rules, behavioral signals, and intent classification to reduce cognitive overload. Autonomous actions, such as auto-tagging or summarizing threads, enhance efficiency, while embedded feedback loops enable personalization through incremental learning [61]. Figure 10b illustrates a practical implementation of AI Agents in the domain of email filtering and prioritization.
In modern workplace environments, users are inundated with high volumes of email, leading to cognitive overload and missed critical communications. AI Agents embedded in platforms like Microsoft Outlook or Superhuman act as intelligent intermediaries that classify, cluster, and triage incoming messages. These agents evaluate metadata (e.g., sender, subject line) and semantic content to detect urgency, extract actionable items, and suggest smart replies. As depicted, the AI agent autonomously categorizes emails into tags such as “Urgent,” “Follow-up,” and “Low Priority,” while also offering context-aware summaries and reply drafts. Through continual feedback loops and usage patterns, the system adapts to user preferences, gradually refining classification thresholds and improving prioritization accuracy. This automation offloads decision fatigue, allowing users to focus on high-value tasks, while maintaining efficient communication management in fast-paced, information-dense environments.

3. Personalized Content Recommendation and Basic Data Reporting: AI Agents support adaptive personalization by analyzing behavioral patterns for news, product, or media recommendations. Platforms like Amazon, YouTube, and Spotify deploy these agents to infer user preferences via collaborative filtering, intent detection, and content ranking. Simultaneously, AI Agents in analytics systems (e.g., Tableau Pulse, Power BI Copilot) enable natural-language data queries and automated report generation by converting prompts to structured database queries and visual summaries, democratizing business intelligence access.

A practical illustration (Figure 10c) of AI Agents in personalized content recommendation and basic data reporting can be found in e-commerce and enterprise analytics systems. Consider an AI agent deployed on a retail platform like Amazon: as users browse, click, and purchase items, the agent continuously monitors interaction patterns such as dwell time, search queries, and purchase sequences. Using collaborative filtering and content-based ranking, the agent infers user intent and dynamically generates personalized product suggestions that evolve over time. For example, after purchasing gardening tools, a user may be recommended compatible soil sensors or relevant books. This level of personalization enhances customer engagement, increases conversion rates, and supports long-term user retention. Simultaneously, within a corporate setting, an AI agent integrated into Power BI Copilot allows non-technical staff to request insights using natural language, for instance, “Compare Q3 and Q4 sales in the Northeast.” The agent translates the prompt into structured SQL queries, extracts patterns from the database, and outputs a concise visual summary or narrative report. This application reduces dependency on data analysts and empowers broader business decision-making through intuitive, language-driven interfaces.

4. Autonomous Scheduling Assistants: AI Agents integrated with calendar systems autonomously manage meeting coordination, rescheduling, and conflict resolution. Tools like x.ai and Reclaim AI interpret vague scheduling commands, access calendar APIs, and identify optimal time slots using learned user preferences. They minimize human input while adapting to dynamic availability constraints. Their ability to interface with enterprise systems and respond to ambiguous instructions highlights the modular autonomy of contemporary scheduling agents.
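To ground the scheduling behavior described above, the sketch below implements the core slot-finding step, intersecting participants' free intervals to propose a meeting time. It is a minimal illustration under simplifying assumptions (a single day, hour-level granularity, and hypothetical busy lists); production assistants such as those discussed here work against live calendar APIs and learned preferences.

```python
from datetime import time

# Hypothetical busy intervals (start hour, end hour) for each participant.
BUSY = {
    "alice": [(9, 11), (13, 15)],
    "bob":   [(10, 12), (16, 17)],
}

def free_slots(busy: list[tuple[int, int]], day=(9, 18)) -> list[tuple[int, int]]:
    """Return free (start, end) hour intervals within working hours."""
    slots, cursor = [], day[0]
    for start, end in sorted(busy):
        if start > cursor:
            slots.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < day[1]:
        slots.append((cursor, day[1]))
    return slots

def common_slot(participants: dict, duration: int = 1):
    """Find the earliest slot of `duration` hours free for everyone."""
    for hour in range(9, 18 - duration + 1):
        candidate = (hour, hour + duration)
        if all(
            any(s <= candidate[0] and candidate[1] <= e for s, e in free_slots(b))
            for b in participants.values()
        ):
            return candidate
    return None

slot = common_slot(BUSY)
print(f"Proposed meeting: {time(slot[0])} to {time(slot[1])}" if slot else "No slot found")
```

What distinguishes an AI scheduling agent from this bare routine is the layer above it: an LLM that parses vague requests ("sometime next week, not too early") into the structured constraints this function consumes.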
A practical application of autonomous scheduling agents can be seen in corporate settings, as depicted in Figure 10d, where employees manage multiple overlapping responsibilities across global time zones. Consider an executive assistant AI agent integrated with Google Calendar and Slack that interprets a command like “Find a 45-minute window for a follow-up with the product team next week.” The agent parses the request, checks availability for all participants, accounts for time zone differences, and avoids meeting conflicts or working-hour violations. If it identifies a conflict with a previously scheduled task, it may autonomously propose alternative windows and notify affected attendees via Slack integration. Additionally, the agent learns from historical user preferences, such as avoiding early Friday meetings, and refines its suggestions over time. Tools like Reclaim AI and Clockwise exemplify this capability, offering calendar-aware automation that adapts to evolving workloads. Such assistants reduce coordination overhead, increase scheduling efficiency, and enable smoother team workflows by proactively resolving ambiguity and optimizing calendar utilization.

TABLE X: Representative AI Agents (2023–2025): Applications and Operational Characteristics

Model / Reference | Application Area | Operation as AI Agent
ChatGPT Deep Research Mode, OpenAI (2025) | Research Analysis / Reporting | Synthesizes hundreds of sources into reports; functions as a self-directed research analyst.
Operator, OpenAI (2025) | Web Automation | Navigates websites, fills forms, and completes online tasks autonomously.
Agentspace: Deep Research Agent, Google (2025) | Enterprise Reporting | Generates business intelligence reports using Gemini models.
NotebookLM Plus Agent, Google (2025) | Knowledge Management | Summarizes, organizes, and retrieves data across Google Workspace apps.
Nova Act, Amazon (2025) | Workflow Automation | Automates browser-based tasks such as scheduling, HR requests, and email.
Manus Agent, Monica (2025) (https://manus.im/) | Personal Task Automation | Executes trip planning, site building, and product comparisons via browsing.
Harvey, Harvey AI (2025) | Legal Automation | Automates document drafting, legal review, and predictive case analysis.
Otter Meeting Agent, Otter.ai (2025) | Meeting Management | Transcribes meetings and provides highlights, summaries, and action items.
Otter Sales Agent, Otter.ai (2025) | Sales Enablement | Analyzes sales calls, extracts insights, and suggests follow-ups.
ClickUp Brain, ClickUp (2025) | Project Management | Automates task tracking, updates, and project workflows.
Agentforce, Agentforce (2025) | Customer Support | Routes tickets and generates context-aware replies for support teams.
Microsoft Copilot, Microsoft (2024) | Office Productivity | Automates writing, formula generation, and summarization in Microsoft 365.
Project Astra, Google DeepMind (2025) | Multimodal Assistance | Processes text, image, audio, and video for task support and recommendations.
Claude 3.5 Agent (Claude 3.5 Sonnet), Anthropic (2025) | Enterprise Assistance | Uses multimodal input for reasoning, personalization, and enterprise task completion.

IV-2 Applications of Agentic AI

1. Multi-Agent Research Assistants: Agentic AI systems are increasingly deployed in academic and industrial research pipelines to automate multi-stage knowledge work.
Platforms like AutoGen and CrewAI assign specialized roles to multiple agents (retrievers, summarizers, synthesizers, and citation formatters) under a central orchestrator. The orchestrator distributes tasks, manages role dependencies, and integrates outputs into coherent drafts or review summaries. Persistent memory allows for cross-agent context sharing and refinement over time. These systems are being used for literature reviews, grant preparation, and patent search pipelines, outperforming single-agent systems such as ChatGPT by enabling concurrent sub-task execution and long-context management [89]. For example, a real-world application of agentic AI, as depicted in Figure 11a, is the automated drafting of grant proposals. Consider a university research group preparing a National Science Foundation (NSF) submission. Using an AutoGen-based architecture, distinct agents are assigned: one retrieves prior funded proposals and extracts structural patterns; another scans recent literature to summarize related work; a third agent aligns proposal objectives with NSF solicitation language; and a formatting agent structures the document per compliance guidelines. The orchestrator coordinates these agents, resolving dependencies (e.g., aligning methodology with objectives) and ensuring stylistic consistency across sections. Persistent memory modules store evolving drafts, feedback from collaborators, and funding agency templates, enabling iterative improvement over multiple sessions. Compared to traditional manual processes, this multi-agent system significantly accelerates drafting time, improves narrative cohesion, and ensures regulatory alignment, offering a scalable, adaptive approach to collaborative scientific writing in academia and R&D-intensive industries. Figure 11: Illustrative Applications of Agentic AI Across Domains: Figure 11 presents four real-world applications of agentic AI systems. (a) Automated grant writing using multi-agent orchestration for structured literature analysis, compliance alignment, and document formatting. (b) Coordinated multi-robot harvesting in apple orchards using shared spatial memory and task-specific agents for mapping, picking, and transport. (c) Clinical decision support in hospital ICUs through synchronized agents for diagnostics, treatment planning, and EHR analysis, enhancing safety and workflow efficiency. (d) Cybersecurity incident response in enterprise environments via agents handling threat classification, compliance analysis, and mitigation planning. In all cases, central orchestrators manage inter-agent communication, shared memory enables context retention, and feedback mechanisms drive continual learning. These use cases highlight agentic AI’s capacity for scalable, autonomous task coordination in complex, dynamic environments across science, agriculture, healthcare, and IT security. 2. Intelligent Robotics Coordination: In robotics and automation, Agentic AI underpins collaborative behavior in multi-robot systems. Each robot operates as a task-specialized agent (such as a picker, transporter, or mapper) while an orchestrator supervises and adapts workflows. These architectures rely on shared spatial memory, real-time sensor fusion, and inter-agent synchronization for coordinated physical actions. Use cases include warehouse automation, drone-based orchard inspection, and robotic harvesting [143]. For instance, agricultural drone swarms may collectively map tree rows, identify diseased fruits, and initiate mechanical interventions.
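Both the research-assistant and robotics scenarios rest on the same underlying orchestrator pattern: a coordinator that dispatches sub-tasks to role-specialized agents in dependency order and merges their outputs. The following is a minimal, framework-free sketch of that pattern in the spirit of AutoGen or CrewAI; call_llm, the role prompts, and the task graph are all hypothetical placeholders rather than any framework’s actual API.

# Minimal sketch of role-specialized orchestration.
# `call_llm` stands in for any chat-completion client; plug one in to run.
def call_llm(system_prompt: str, user_prompt: str) -> str:
    raise NotImplementedError("connect an actual LLM client here")

ROLES = {
    "retriever":  "You locate prior proposals and extract structural patterns.",
    "summarizer": "You summarize recent literature relevant to the draft.",
    "aligner":    "You align proposal objectives with the solicitation language.",
    "formatter":  "You format the draft per compliance guidelines.",
}

# Each task names the role that performs it and the tasks it depends on.
TASKS = {
    "prior_art":  ("retriever",  []),
    "lit_review": ("summarizer", []),
    "objectives": ("aligner",    ["prior_art", "lit_review"]),
    "final_doc":  ("formatter",  ["objectives"]),
}

def orchestrate(goal: str) -> dict:
    """Run tasks in dependency order, feeding upstream outputs downstream."""
    results, pending = {}, dict(TASKS)
    while pending:
        ready = [t for t, (_, deps) in pending.items()
                 if all(d in results for d in deps)]
        if not ready:
            raise RuntimeError("circular dependency between tasks")
        for task in ready:
            role, deps = pending.pop(task)
            context = "\n\n".join(results[d] for d in deps)
            results[task] = call_llm(ROLES[role],
                                     f"Goal: {goal}\n\nUpstream context:\n{context}")
    return results

Persistent memory, retries, and human feedback loops would sit around this core; the essential design choice is that coordination logic lives in ordinary code while domain reasoning is delegated to the role agents.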
Such dynamic allocation enables real-time reconfiguration and autonomy across agents facing uncertain or evolving environments. For example, in commercial apple orchards (Figure 11b), Agentic AI enables a coordinated multi-robot system to optimize the harvest season. Here, task-specialized robots such as autonomous pickers, fruit classifiers, transport bots, and drone mappers operate as agentic units under a central orchestrator. The mapping drones first survey the orchard and use vision-language models (VLMs) to generate high-resolution yield maps and identify ripe clusters. This spatial data is shared via a centralized memory layer accessible to all agents. Picker robots are assigned to high-density zones, guided by path-planning agents that optimize routes around obstacles and labor zones. Simultaneously, transport agents dynamically shuttle crates between pickers and storage, adjusting tasks in response to picker load levels and terrain changes. All agents communicate asynchronously through a shared protocol, and the orchestrator continuously adjusts task priorities based on weather forecasts or mechanical faults. If one picker fails, nearby units autonomously reallocate the workload. This adaptive, memory-driven coordination exemplifies Agentic AI’s potential to reduce labor costs, increase harvest efficiency, and respond to uncertainties in complex agricultural environments, far surpassing the rigid programming of legacy agricultural robots [143, 89]. 3. Collaborative Medical Decision Support: In high-stakes clinical environments, Agentic AI enables distributed medical reasoning by assigning tasks such as diagnostics, vital monitoring, and treatment planning to specialized agents. For example, one agent may retrieve patient history, another may validate findings against diagnostic guidelines, and a third may propose treatment options. These agents synchronize through shared memory and reasoning chains, ensuring coherent, safe recommendations. Applications include ICU management, radiology triage, and pandemic response. Real-world pilots show improved efficiency and decision accuracy compared to isolated expert systems [87]. For example, in a hospital ICU (Figure 11c), an agentic AI system supports clinicians in managing complex patient cases. A diagnostic agent continuously analyzes vitals and lab data for early detection of sepsis risk. Simultaneously, a history retrieval agent accesses electronic health records (EHRs) to summarize comorbidities and recent procedures. A treatment planning agent cross-references current symptoms with clinical guidelines (e.g., Surviving Sepsis Campaign), proposing antibiotic regimens or fluid protocols. The orchestrator integrates these insights, ensures consistency, and surfaces conflicts for human review. Feedback from physicians is stored in a persistent memory module, allowing agents to refine their reasoning based on prior interventions and outcomes. This coordinated system enhances clinical workflow by reducing cognitive load, shortening decision times, and minimizing oversight risks. Early deployments in critical care and oncology units have demonstrated increased diagnostic precision and better adherence to evidence-based protocols, offering a scalable solution for safer, real-time collaborative medical support. 4. Multi-Agent Game AI and Adaptive Workflow Automation: In simulation environments and enterprise systems, Agentic AI facilitates decentralized task execution and emergent coordination.
Game platforms like AI Dungeon deploy independent NPC agents with goals, memory, and dynamic interactivity to create emergent narratives and social behavior. In enterprise workflows, systems such as MultiOn and Cognosys use agents to manage processes like legal review or incident escalation, where each step is governed by a specialized module. These architectures exhibit resilience, exception handling, and feedback-driven adaptability far beyond rule-based pipelines. For example, in a modern enterprise IT environment (as depicted in Figure 11d), Agentic AI systems are increasingly deployed to autonomously manage cybersecurity incident response workflows. When a potential threat is detected, such as abnormal access patterns or unauthorized data exfiltration, specialized agents are activated in parallel. One agent performs real-time threat classification using historical breach data and anomaly detection models. A second agent queries relevant log data from network nodes and correlates patterns across systems. A third agent interprets compliance frameworks (e.g., GDPR or HIPAA) to assess the regulatory severity of the event. A fourth agent simulates mitigation strategies and forecasts operational risks. These agents coordinate under a central orchestrator that evaluates collective outputs, integrates temporal reasoning, and issues recommended actions to human analysts. Through shared memory structures and iterative feedback, the system learns from prior incidents, enabling faster and more accurate responses in future cases. Compared to traditional rule-based security systems, this agentic model shortens decision latency, reduces false positives, and supports proactive threat containment in large-scale organizational infrastructures [89].
TABLE XI: Representative Agentic AI Models (2023–2025): Applications and Operational Characteristics
Model / Reference | Application Area | Operation as Agentic AI
Auto-GPT [30] | Task Automation | Decomposes high-level goals, executes subtasks via tools/APIs, and iteratively self-corrects.
GPT Engineer (open source, 2023) | Code Generation | Builds entire codebases: plans, writes, tests, and refines based on output.
MetaGPT [143] | Software Collaboration | Coordinates specialized agents (e.g., coder, tester) for modular multi-role project development.
BabyAGI (Nakajima, 2024) | Project Management | Continuously creates, prioritizes, and executes subtasks to adaptively meet user goals.
Voyager (Wang et al., 2023) [161] | Game Exploration | Learns in Minecraft, invents new skills, sets subgoals, and adapts strategy in real time.
CAMEL (Liu et al., 2023) [162] | Multi-Agent Simulation | Simulates agent societies with communication, negotiation, and emergent collaborative behavior.
Einstein Copilot (Salesforce, 2024) | Customer Automation | Automates full support workflows, escalates issues, and improves via feedback loops.
Copilot Studio, Agentic Mode (Microsoft, 2025) | Productivity Automation | Manages documents, meetings, and projects across Microsoft 365 with adaptive orchestration.
Atera AI Copilot (Atera, 2025) | IT Operations | Diagnoses and resolves IT issues, automates ticketing, and learns from evolving infrastructures.
AES Safety Audit Agent (AES, 2025) | Industrial Safety | Automates audits, assesses compliance, and evolves strategies to enhance safety outcomes.
DeepMind Gato, Agentic Mode (Reed et al., 2022) [163] | General Robotics | Performs varied tasks across modalities, dynamically learns, plans, and executes.
GPT-4o + Plugins (OpenAI, 2024) | Enterprise Automation | Manages complex workflows, integrates external tools, and executes adaptive decisions.
V Challenges and Limitations in AI Agents and Agentic AI
To systematically understand the operational and theoretical limitations of current intelligent systems, we present a comparative visual synthesis in Figure 12, which categorizes challenges and potential remedies across both AI Agents and Agentic AI paradigms. Figure 12a outlines the four most pressing limitations specific to AI Agents, namely lack of causal reasoning, inherited LLM constraints (e.g., hallucinations, shallow reasoning), incomplete agentic properties (e.g., autonomy, proactivity), and failures in long-horizon planning and recovery. These challenges often arise due to their reliance on stateless LLM prompts, limited memory, and heuristic reasoning loops. In contrast, Figure 12b identifies eight critical bottlenecks unique to Agentic AI systems, such as inter-agent error cascades, coordination breakdowns, emergent instability, scalability limits, and explainability issues. These challenges stem from the complexity of orchestrating multiple agents across distributed tasks without standardized architectures, robust communication protocols, or causal alignment frameworks. Figure 13 complements this diagnostic framework by synthesizing ten forward-looking design strategies aimed at mitigating these limitations. These include Retrieval-Augmented Generation (RAG), tool-based reasoning [123, 120, 121], agentic feedback loops (ReAct [126]), role-based multi-agent orchestration, memory architectures, causal modeling, and governance-aware design. Together, these three panels offer a consolidated roadmap for addressing current pitfalls and accelerating the development of safe, scalable, and context-aware autonomous systems. Figure 12: Challenges and Solutions Across Agentic Paradigms. (a) Key limitations of AI Agents including causality deficits and shallow reasoning. (b) Amplified coordination and stability challenges in Agentic AI systems.
V-1 Challenges and Limitations of AI Agents
While AI Agents have garnered considerable attention for their ability to automate structured tasks using LLMs and tool-use interfaces, the literature highlights significant theoretical and practical limitations that inhibit their reliability, generalization, and long-term autonomy [126, 150]. These challenges arise from both the architectural dependence on static, pretrained models and the difficulty of instilling agentic qualities such as causal reasoning, planning, and robust adaptation. The key challenges and limitations of AI Agents (Figure 12a) are summarized in the following five points: 1. Lack of Causal Understanding: One of the most foundational challenges lies in the agents’ inability to reason causally [164, 165]. Current LLMs, which form the cognitive core of most AI Agents, excel at identifying statistical correlations within training data. However, as noted in recent research from DeepMind and conceptual analyses by TrueTheta, they fundamentally lack the capacity for causal modeling, that is, distinguishing between mere association and cause-effect relationships [166, 167, 168]. For instance, while an LLM-powered agent might learn that visiting a hospital often co-occurs with illness, it cannot infer whether the illness causes the visit or vice versa, nor can it simulate interventions or hypothetical changes.
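The hospital example can be made concrete with a toy structural causal model. In the sketch below (illustrative probabilities only), conditioning on a hospital visit makes illness look far more likely, while forcing the visit, a do-intervention, leaves the illness rate untouched; a purely correlational learner cannot tell these two quantities apart.

# Toy structural causal model: illness -> hospital visit.
# Contrasts observational conditioning with a do-intervention.
import random
random.seed(0)

def sample(do_visit=None):
    """Draw (illness, visit); optionally force the visit (do-intervention)."""
    illness = random.random() < 0.10                      # 10% base rate
    if do_visit is None:
        visit = random.random() < (0.90 if illness else 0.05)  # the sick visit more
    else:
        visit = do_visit                                  # intervention cuts the causal link
    return illness, visit

obs = [sample() for _ in range(100_000)]
p_ill_given_visit = sum(i for i, v in obs if v) / sum(v for _, v in obs)

forced = [sample(do_visit=True) for _ in range(100_000)]
p_ill_given_do_visit = sum(i for i, _ in forced) / len(forced)

print(f"P(illness | visit)     = {p_ill_given_visit:.2f}")     # about 0.67: association
print(f"P(illness | do(visit)) = {p_ill_given_do_visit:.2f}")  # about 0.10: forcing visits does not cause illness

An agent that only ever sees observational text will estimate the first quantity; acting safely in the world requires the second, which is exactly what current LLM-centric agents cannot compute.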
This causal deficit becomes particularly problematic under distributional shifts, where real-world conditions differ from the training regime [169, 170]. Without such grounding, agents remain brittle, failing in novel or high-stakes scenarios. For example, a navigation agent that excels in urban driving may misbehave in snow or construction zones if it lacks an internal causal model of road traction or spatial occlusion. 2. Inherited Limitations from LLMs: AI Agents, particularly those powered by LLMs, inherit a number of intrinsic limitations that impact their reliability, adaptability, and overall trustworthiness in practical deployments [171, 172, 173]. One of the most prominent issues is the tendency to produce hallucinations, i.e., plausible but factually incorrect outputs. In high-stakes domains such as legal consultation or scientific research, these hallucinations can lead to severe misjudgments and erode user trust [174, 175]. Compounding this is the well-documented prompt sensitivity of LLMs, where even minor variations in phrasing can lead to divergent behaviors. This brittleness hampers reproducibility, necessitating meticulous manual prompt engineering and often requiring domain-specific tuning to maintain consistency across interactions [176]. Furthermore, while recent agent frameworks adopt reasoning heuristics like Chain-of-Thought (CoT) [177, 151] and ReAct [126] to simulate deliberative processes, these approaches remain shallow in semantic comprehension. Agents may still fail at multi-step inference, misalign task objectives, or make logically inconsistent conclusions despite the appearance of structured reasoning [126]. Such shortcomings underscore the absence of genuine understanding and generalizable planning capabilities. Another key limitation lies in computational cost and latency. Each cycle of agentic decision-making, particularly in planning or tool-calling, may require several LLM invocations. This not only increases runtime latency but also scales resource consumption, creating practical bottlenecks in real-world deployments and cloud-based inference systems. Furthermore, LLMs have a static knowledge cutoff and cannot dynamically integrate new information unless explicitly augmented via retrieval or tool plugins. They also reproduce the biases of their training datasets, which can manifest as culturally insensitive or skewed responses [178, 179]. Without rigorous auditing and mitigation strategies, these issues pose serious ethical and operational risks, particularly when agents are deployed in sensitive or user-facing contexts. 3. Incomplete Agentic Properties: A major limitation of current AI Agents is their inability to fully satisfy the canonical agentic properties defined in foundational literature, such as autonomy, proactivity, reactivity, and social ability [173, 135]. While many systems marketed as “agents” leverage LLMs to perform useful tasks, they often fall short of these fundamental criteria in practice. Autonomy, for instance, is typically partial at best. Although agents can execute tasks with minimal oversight once initialized, they remain heavily reliant on external scaffolding such as human-defined prompts, planning heuristics, or feedback loops to function effectively [180]. Self-initiated task generation, self-monitoring, or autonomous error correction are rare or absent, limiting their capacity for true independence. Proactivity is similarly underdeveloped.
Most AI Agents require explicit user instruction to act and lack the capacity to formulate or reprioritize goals dynamically based on contextual shifts or evolving objectives [181]. As a result, they behave reactively rather than strategically, constrained by the static nature of their initialization. Reactivity itself is constrained by architectural bottlenecks. Agents do respond to environmental or user input, but response latency caused by repeated LLM inference calls [182, 183], coupled with narrow contextual memory windows [153, 184], inhibits real-time adaptability. Perhaps the most underexplored capability is social ability. True agentic systems should communicate and coordinate with humans or other agents over extended interactions, resolving ambiguity, negotiating tasks, and adapting to social norms. However, existing implementations exhibit brittle, template-based dialogue that lacks long-term memory integration or nuanced conversational context. Agent-to-agent interaction is often hardcoded or limited to scripted exchanges, hindering collaborative execution and emergent behavior [185, 96]. Collectively, these deficiencies reveal that while AI Agents demonstrate functional intelligence, they remain far from meeting the formal benchmarks of intelligent, interactive, and adaptive agents. Bridging this gap is essential for advancing toward more autonomous, socially capable AI systems. 4. Limited Long-Horizon Planning and Recovery: A persistent limitation of current AI Agents lies in their inability to perform robust long-horizon planning, especially in complex, multi-stage tasks. This constraint stems from their foundational reliance on stateless prompt-response paradigms, where each decision is made without an intrinsic memory of prior reasoning steps unless externally managed. Although augmentations such as the ReAct framework [126] or Tree-of-Thoughts [152] introduce pseudo-recursive reasoning, they remain fundamentally heuristic and lack true internal models of time, causality, or state evolution. Consequently, agents often falter in tasks requiring extended temporal consistency or contingency planning. For example, in domains such as clinical triage or financial portfolio management, where decisions depend on prior context and dynamically unfolding outcomes, agents may exhibit repetitive behaviors, such as endlessly querying tools, or fail to adapt when sub-tasks fail or return ambiguous results. The absence of systematic recovery mechanisms or error detection leads to brittle workflows and error propagation. This shortfall severely limits agent deployment in mission-critical environments where reliability, fault tolerance, and sequential coherence are essential. 5. Reliability and Safety Concerns: AI Agents are not yet safe or verifiable enough for deployment in critical infrastructure [186]. The absence of causal reasoning leads to unpredictable behavior under distributional shift [187, 165]. Furthermore, evaluating the correctness of an agent’s plan, especially when the agent fabricates intermediate steps or rationales, remains an unsolved problem in interpretability [188, 104]. Safety guarantees, such as formal verification, are not yet available for open-ended, LLM-powered agents. While AI Agents represent a major step beyond static generative models, their limitations in causal reasoning, adaptability, robustness, and planning restrict their deployment in high-stakes or dynamic environments.
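The recovery mechanisms noted above as absent are today approximated, at best, by heuristic scaffolding around the agent: bounded retries, output validation, and an explicit fallback to a human. A minimal sketch of such a wrapper is shown below, with hypothetical run_step and looks_valid hooks; note that it detects failure without understanding it, which is precisely the limitation discussed next.

# Sketch of heuristic recovery scaffolding around one agent step.
# `run_step` and `looks_valid` are hypothetical hooks, not a standard API.
import logging

def with_recovery(run_step, looks_valid, max_retries=3, fallback=None):
    """Call run_step until its output validates, else escalate to a fallback."""
    last_error = None
    for attempt in range(1, max_retries + 1):
        try:
            out = run_step(attempt)          # one LLM/tool invocation
            if looks_valid(out):
                return out
            last_error = f"validation failed on attempt {attempt}"
        except Exception as exc:             # tool or API failure
            last_error = repr(exc)
        logging.warning("step failed (%s), retrying", last_error)
    if fallback is not None:
        return fallback(last_error)          # e.g., hand off to a human reviewer
    raise RuntimeError(f"step exhausted retries: {last_error}")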
Most current systems rely on exactly such heuristic wrappers and brittle prompt engineering rather than grounded agentic cognition. Bridging this gap will require future systems to integrate causal models, dynamic memory, and verifiable reasoning mechanisms. These limitations also set the stage for the emergence of Agentic AI systems, which attempt to address these bottlenecks through multi-agent collaboration, orchestration layers, and persistent system-level context.
V-2 Challenges and Limitations of Agentic AI
Agentic AI systems represent a paradigm shift from isolated AI agents to collaborative, multi-agent ecosystems capable of decomposing and executing complex goals [14]. These systems typically consist of orchestrated or communicating agents that interact via tools, APIs, and shared environments [38, 18]. While this architectural evolution enables more ambitious automation, it introduces a range of amplified and novel challenges that compound existing limitations of individual LLM-based agents. The current challenges and limitations of Agentic AI are as follows: 1. Amplified Causality Challenges: One of the most critical limitations in Agentic AI systems is the magnification of causality deficits already observed in single-agent architectures. Unlike traditional AI Agents that operate in relatively isolated environments, Agentic AI systems involve complex inter-agent dynamics, where each agent’s action can influence the decision space of others. Without a robust capacity for modeling cause-effect relationships, these systems struggle to coordinate effectively and adapt to unforeseen environmental shifts. A key manifestation of this challenge is inter-agent distributional shift, where the behavior of one agent alters the operational context for others. In the absence of causal reasoning, agents are unable to anticipate the downstream impact of their outputs, resulting in coordination breakdowns or redundant computations [189]. Furthermore, these systems are particularly vulnerable to error cascades: a faulty or hallucinated output from one agent can propagate through the system, compounding inaccuracies and corrupting subsequent decisions. For example, if a verification agent erroneously validates false information, downstream agents such as summarizers or decision-makers may unknowingly build upon that misinformation, compromising the integrity of the entire system. This fragility underscores the urgent need for integrating causal inference and intervention modeling into the design of multi-agent workflows, especially in high-stakes or dynamic environments where systemic robustness is essential. 2. Communication and Coordination Bottlenecks: A fundamental challenge in Agentic AI lies in achieving efficient communication and coordination across multiple autonomous agents. Unlike single-agent systems, Agentic AI involves distributed agents that must collectively pursue a shared objective, necessitating precise alignment, synchronized execution, and robust communication protocols. However, current implementations fall short in these aspects. One major issue is goal alignment and shared context, where agents often lack a unified semantic understanding of overarching objectives. This hampers sub-task decomposition, dependency management, and progress monitoring, especially in dynamic environments requiring causal awareness and temporal coherence. In addition, protocol limitations significantly hinder inter-agent communication.
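A frequently proposed remedy for these protocol limitations is to replace free-form text exchanges with typed, schema-validated messages, loosely echoing classic agent communication languages such as FIPA-ACL. The sketch below is one minimal way to express this in Python; the performative set and field names are illustrative, not a standard.

# Minimal typed inter-agent message with validation before transmission.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json

ALLOWED_ACTS = {"request", "inform", "propose", "error"}  # illustrative performatives

@dataclass
class AgentMessage:
    sender: str
    recipient: str
    act: str                    # communicative intent, from ALLOWED_ACTS
    task_id: str                # ties the message to a shared sub-task
    payload: dict = field(default_factory=dict)
    sent_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def validate(self) -> "AgentMessage":
        if self.act not in ALLOWED_ACTS:
            raise ValueError(f"unknown act {self.act!r}")
        if not self.task_id:
            raise ValueError("task_id is required for coordination")
        return self

    def to_wire(self) -> str:
        """Serialize only after validation, so malformed messages never circulate."""
        return json.dumps(self.validate().__dict__)

msg = AgentMessage("planner", "retriever", "request", "lit-review-7",
                   {"query": "agentic AI surveys 2024-2025"}).to_wire()

Schema validation does not resolve semantic ambiguity inside the payload, but it eliminates an entire class of formatting drift and makes every exchange machine-auditable.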
In practice, most systems still rely on natural language exchanges over loosely defined interfaces, which are prone to ambiguity, inconsistent formatting, and contextual drift. These communication gaps lead to fragmented strategies, delayed coordination, and degraded system performance. Furthermore, resource contention emerges as a systemic bottleneck when agents simultaneously access shared computational, memory, or API resources. Without centralized orchestration or intelligent scheduling mechanisms, these conflicts can result in race conditions, execution delays, or outright system failures. Collectively, these bottlenecks illustrate the immaturity of current coordination frameworks in Agentic AI, and highlight the pressing need for standardized communication protocols, semantic task planners, and global resource managers to ensure scalable, coherent multi-agent collaboration. 3. Emergent Behavior and Predictability: One of the most critical limitations of Agentic AI lies in managing emergent behavior: complex system-level phenomena that arise from the interactions of autonomous agents. While such emergence can potentially yield adaptive and innovative solutions, it also introduces significant unpredictability and safety risks [190, 145]. A key concern is the generation of unintended outcomes, where agent interactions result in behaviors that were not explicitly programmed or foreseen by system designers. These behaviors may diverge from task objectives, generate misleading outputs, or even enact harmful actions, particularly in high-stakes domains like healthcare, finance, or critical infrastructure. As the number of agents and the complexity of their interactions grow, so too does the likelihood of system instability. This includes phenomena such as infinite planning loops, action deadlocks, and contradictory behaviors emerging from asynchronous or misaligned agent decisions. Without centralized arbitration mechanisms, conflict resolution protocols, or fallback strategies, these instabilities compound over time, making the system fragile and unreliable. The stochasticity and opacity of large language model-based agents further exacerbate this issue, as their internal decision logic is not easily interpretable or verifiable. Consequently, ensuring the predictability and controllability of emergent behavior remains a central challenge in designing safe and scalable Agentic AI systems. 4. Scalability and Debugging Complexity: As Agentic AI systems scale in both the number of agents and the diversity of specialized roles, maintaining system reliability and interpretability becomes increasingly complex [191, 192]. A central limitation stems from the black-box chains of reasoning characteristic of LLM-based agents. Each agent may process inputs through opaque internal logic, invoke external tools, and communicate with other agents, all of which occur through multiple layers of prompt engineering, reasoning heuristics, and dynamic context handling. Tracing the root cause of a failure thus requires unwinding nested sequences of agent interactions, tool invocations, and memory updates, making debugging non-trivial and time-consuming. Another significant constraint is the system’s non-compositionality. Unlike traditional modular systems, where adding components can enhance overall functionality, introducing additional agents in an Agentic AI architecture often increases cognitive load, noise, and coordination overhead.
Poorly orchestrated agent networks can result in redundant computation, contradictory decisions, or degraded task performance. Without robust frameworks for agent role definition, communication standards, and hierarchical planning, the scalability of Agentic AI does not necessarily translate into greater intelligence or robustness. These limitations highlight the need for systematic architectural controls and traceability tools to support the development of reliable, large-scale agentic ecosystems. 5. Trust, Explainability, and Verification: Agentic AI systems pose heightened challenges in explainability and verifiability due to their distributed, multi-agent architecture. While interpreting the behavior of a single LLM-powered agent is already non-trivial, this complexity is multiplied when multiple agents interact asynchronously through loosely defined communication protocols. Each agent may possess its own memory, task objective, and reasoning path, resulting in compounded opacity, where tracing the causal chain of a final decision or failure becomes exceedingly difficult. The lack of shared, transparent logs or interpretable reasoning paths across agents makes it nearly impossible to determine why a particular sequence of actions occurred or which agent initiated a misstep. Compounding this opacity is the absence of formal verification tools tailored for Agentic AI. Unlike traditional software systems, where model checking and formal proofs offer bounded guarantees, there exists no widely adopted methodology to verify that a multi-agent LLM system will perform reliably across all input distributions or operational contexts. This lack of verifiability presents a significant barrier to adoption in safety-critical domains such as autonomous vehicles, finance, and healthcare, where explainability and assurance are non-negotiable. To advance Agentic AI safely, future research must address the foundational gaps in causal traceability, agent accountability, and formal safety guarantees. 6. Security and Adversarial Risks: Agentic AI architectures introduce a significantly expanded attack surface compared to single-agent systems, exposing them to complex adversarial threats. One of the most critical vulnerabilities lies in the presence of a single point of compromise. Since Agentic AI systems are composed of interdependent agents communicating over shared memory or messaging protocols, the compromise of even one agent, through prompt injection, model poisoning, or adversarial tool manipulation, can propagate malicious outputs or corrupted state across the entire system. For example, a fact-checking agent fed with tampered data could unintentionally legitimize false claims, which are then integrated into downstream reasoning by summarization or decision-making agents. Moreover, inter-agent dynamics themselves are susceptible to exploitation. Attackers can induce race conditions, deadlocks, or resource exhaustion by manipulating the coordination logic between agents. Without rigorous authentication, access control, and sandboxing mechanisms, malicious agents or corrupted tool responses can derail multi-agent workflows or cause erroneous escalation in task pipelines. These risks are exacerbated by the absence of standardized security frameworks for LLM-based multi-agent systems, leaving most current implementations defenseless against sophisticated multi-stage attacks.
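Two of the controls implied above, per-role tool allowlists and screening of content before it enters shared memory, can be sketched compactly. Both are illustrative only: a keyword screen in particular is trivially bypassable and would be just one layer in a real defense-in-depth stack.

# Sketch of two secure-by-design controls for multi-agent pipelines.
# Role names, tool names, and the marker list are illustrative assumptions.
TOOL_ALLOWLIST = {
    "fact_checker": {"search_web", "read_document"},
    "summarizer":   {"read_document"},
    "responder":    {"send_report"},      # cannot touch retrieval tools
}

SUSPICIOUS = ("ignore previous instructions", "disregard your role", "system prompt")

def authorize(agent_role: str, tool_name: str) -> None:
    """Deny any tool call outside the agent's declared role scope."""
    if tool_name not in TOOL_ALLOWLIST.get(agent_role, set()):
        raise PermissionError(f"{agent_role} may not call {tool_name}")

def screen_for_injection(text: str) -> str:
    """Flag obvious injection phrasing before text enters shared memory.
    A naive filter like this is easily evaded; treat it as one layer only."""
    lowered = text.lower()
    if any(marker in lowered for marker in SUSPICIOUS):
        raise ValueError("possible prompt injection; quarantining content")
    return text

authorize("summarizer", "read_document")   # permitted
# authorize("summarizer", "send_report")   # would raise PermissionError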
As Agentic AI moves toward broader adoption, especially in high-stakes environments, embedding secure-by-design principles and adversarial robustness becomes an urgent research imperative. 7. Ethical and Governance Challenges: The distributed and autonomous nature of Agentic AI systems introduces profound ethical and governance concerns, particularly in terms of accountability, fairness, and value alignment. In multi-agent settings, accountability gaps emerge when multiple agents interact to produce an outcome, making it difficult to assign responsibility for errors or unintended consequences. This ambiguity complicates legal liability, regulatory compliance, and user trust, especially in domains such as healthcare, finance, or defense. Furthermore, bias propagation and amplification present a unique challenge: agents individually trained on biased data may reinforce each other’s skewed decisions through interaction, leading to systemic inequities that are more pronounced than in isolated models. These emergent biases can be subtle and difficult to detect without longitudinal monitoring or audit mechanisms. Additionally, misalignment and value drift pose serious risks in long-horizon or dynamic environments. Without a unified framework for shared value encoding, individual agents may interpret overarching objectives differently or optimize for local goals that diverge from human intent. Over time, this misalignment can lead to behavior that is inconsistent with ethical norms or user expectations. Current alignment methods, which are mostly designed for single-agent systems, are inadequate for managing value synchronization across heterogeneous agent collectives. These challenges highlight the urgent need for governance-aware agent architectures, incorporating principles such as role-based isolation, traceable decision logging, and participatory oversight mechanisms to ensure ethical integrity in autonomous multi-agent systems. 8. Immature Foundations and Research Gaps: Despite rapid progress and high-profile demonstrations, Agentic AI remains in a nascent research stage with unresolved foundational issues that limit its scalability, reliability, and theoretical grounding. A central concern is the lack of standard architectures. There is currently no widely accepted blueprint for how to design, monitor, or evaluate multi-agent systems built on LLMs. This architectural fragmentation makes it difficult to compare implementations, replicate experiments, or generalize findings across domains. Key aspects such as agent orchestration, memory structures, and communication protocols are often implemented ad hoc, resulting in brittle systems that lack interoperability and formal guarantees. Equally critical is the absence of causal foundations, as scalable causal discovery and reasoning remain unsolved challenges [193]. Without the ability to represent and reason about cause-effect relationships, Agentic AI systems are inherently limited in their capacity to generalize safely beyond narrow training regimes [194, 170]. This shortfall affects their robustness under distributional shifts, their capacity for proactive intervention, and their ability to simulate counterfactuals or hypothetical plans, which are core requirements for intelligent coordination and decision-making. The gap between functional demos and principled design thus underscores an urgent need for foundational research in multi-agent system theory, causal inference integration, and benchmark development.
Only by addressing these deficiencies can the field progress from prototype pipelines to trustworthy, general-purpose agentic frameworks suitable for deployment in high-stakes environments.
VI Potential Solutions and Future Roadmap
The potential solutions (as illustrated in Figure 13) to these challenges and limitations of AI Agents and Agentic AI are summarized in the following points: Figure 13: Ten emerging architectural and algorithmic solutions such as RAG, tool use, memory, orchestration, and reflexive mechanisms, addressing reliability, scalability, and explainability across both paradigms. 1. Retrieval-Augmented Generation (RAG): For AI Agents, Retrieval-Augmented Generation mitigates hallucinations and expands static LLM knowledge by grounding outputs in real-time data [195]. By embedding user queries and retrieving semantically relevant documents from vector databases like FAISS or Pinecone, agents can generate contextually valid responses rooted in external facts. This is particularly effective in domains such as enterprise search and customer support, where accuracy and up-to-date knowledge are essential. In Agentic AI systems, RAG serves as a shared grounding mechanism across agents. For example, a summarizer agent may rely on the retriever agent to access the latest scientific papers before generating a synthesis. Persistent, queryable memory allows distributed agents to operate on a unified semantic layer, mitigating inconsistencies due to divergent contextual views. When implemented across a multi-agent system, RAG helps maintain shared truth, enhances goal alignment, and reduces inter-agent misinformation propagation. 2. Tool-Augmented Reasoning (Function Calling): AI Agents benefit significantly from function calling, which extends their ability to interact with real-world systems [159, 196]. Agents can query APIs, run local scripts, or access structured databases, thus transforming LLMs from static predictors into interactive problem-solvers [154, 125]. This allows them to dynamically retrieve weather forecasts, schedule appointments, or execute Python-based calculations, all beyond the capabilities of pure language modeling. For Agentic AI, function calling supports agent-level autonomy and role differentiation. Agents within a team may use APIs to invoke domain-specific actions such as querying clinical databases or generating visual charts based on assigned roles. Function calls become part of an orchestrated pipeline, enabling fluid delegation across agents [197]. This structured interaction reduces ambiguity in task handoff and fosters clearer behavioral boundaries, especially when integrated with validation protocols or observation mechanisms [14, 18]. 3. Agentic Loop: Reasoning, Action, Observation: AI Agents often suffer from single-pass inference limitations. The ReAct pattern introduces an iterative loop where agents reason about tasks, act by calling tools or APIs, and then observe results before continuing. This feedback loop allows for more deliberate, context-sensitive behaviors. For example, an agent may verify retrieved data before drafting a summary, thereby reducing hallucination and logical errors. In Agentic AI, this pattern is critical for collaborative coherence. ReAct enables agents to evaluate dependencies dynamically, reasoning over intermediate states, re-invoking tools if needed, and adjusting decisions as the environment evolves.
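A minimal single-agent version of this reason-act-observe loop can be written in a few lines. The sketch below assumes hypothetical llm and tools callables and a toy action syntax; the published ReAct prompting format [126] is considerably richer.

# Minimal ReAct-style loop: reason, act, observe, repeat.
# `llm` maps a transcript to the next step; `tools` maps tool names to callables.
def react_loop(llm, tools, task, max_steps=8):
    """Alternate reasoning and acting until the model emits a final answer."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        step = llm(transcript + "Thought + next action (or FINAL: <answer>):")
        transcript += step + "\n"
        if step.startswith("FINAL:"):
            return step.removeprefix("FINAL:").strip()
        if step.startswith("ACTION:"):       # e.g. "ACTION: search apple harvest robots"
            name, _, arg = step.removeprefix("ACTION:").strip().partition(" ")
            observation = tools.get(name, lambda a: f"unknown tool {name}")(arg)
            transcript += f"OBSERVATION: {observation}\n"  # feed the result back in
    return None  # step budget exhausted without a final answer

The explicit step budget is itself one of the heuristic guards discussed earlier: it prevents the endless tool-querying loops that stateless agents are prone to.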
This loop becomes more complex in multi-agent settings, where each agent’s observation must be reconciled against others’ outputs. Shared memory and consistent logging are essential here, ensuring that the reflective capacity of the system is not fragmented across agents [126]. 4. Memory Architectures (Episodic, Semantic, Vector): AI Agents face limitations in long-horizon planning and session continuity. Memory architectures address this by persisting information across tasks [198]. Episodic memory allows agents to recall prior actions and feedback, semantic memory encodes structured domain knowledge, and vector memory enables similarity-based retrieval [199]. These elements are key for personalization and adaptive decision-making in repeated interactions. Agentic AI systems require even more sophisticated memory models due to distributed state management. Each agent may maintain local memory while accessing shared global memory to facilitate coordination. For example, a planner agent might use vector-based memory to recall prior workflows, while a QA agent references semantic memory for fact verification. Synchronizing memory access and updates across agents enhances consistency, enables context-aware communication, and supports long-horizon system-level planning. 5. Multi-Agent Orchestration with Role Specialization: In AI Agents, task complexity is often handled via modular prompt templates or conditional logic. However, as task diversity increases, a single agent may become overloaded [200, 201]. Role specialization, i.e., splitting tasks into subcomponents (e.g., planner, summarizer), allows lightweight orchestration even within single-agent systems by simulating compartmentalized reasoning. In Agentic AI, orchestration is central. A meta-agent or orchestrator distributes tasks among specialized agents, each with distinct capabilities. Systems like MetaGPT and ChatDev exemplify this: agents emulate roles such as CEO, engineer, or reviewer, and interact through structured messaging. This modular approach enhances interpretability, scalability, and fault isolation, ensuring that failures in one agent do not cascade without containment mechanisms from the orchestrator. 6. Reflexive and Self-Critique Mechanisms: AI Agents often fail silently or propagate errors. Reflexive mechanisms introduce the capacity for self-evaluation [202, 203]. After completing a task, agents can critique their own outputs using a secondary reasoning pass, increasing robustness and reducing error rates. For example, a legal assistant agent might verify that its drafted clause matches prior case law before submission. For Agentic AI, reflexivity extends beyond self-critique to inter-agent evaluation. Agents can review each other’s outputs, e.g., a verifier agent auditing a summarizer’s work. Reflexion-like mechanisms ensure collaborative quality control and enhance trustworthiness [204]. Such patterns also support iterative improvement and adaptive replanning, particularly when integrated with memory logs or feedback queues [205, 206]. 7. Programmatic Prompt Engineering Pipelines: Manual prompt tuning introduces brittleness and reduces reproducibility in AI Agents. Programmatic pipelines automate this process using task templates, context fillers, and retrieval-augmented variables [207, 208]. These dynamic prompts are structured based on task type, agent role, or user query, improving generalization and reducing failure modes associated with prompt variability.
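A minimal programmatic pipeline can be as simple as versioned templates with mandatory fields, so that a missing variable fails loudly instead of silently producing a malformed prompt. The template names and fields below are illustrative.

# Sketch of a programmatic prompt pipeline: prompts are assembled from
# versioned templates and structured fields rather than hand-edited strings.
from string import Template

TEMPLATES = {
    ("summarizer", "v2"): Template(
        "Role: summarizer\nAudience: $audience\n"
        "Summarize the following in at most $max_words words:\n$document"
    ),
    ("planner", "v1"): Template(
        "Role: planner\nGoal: $goal\nList numbered sub-tasks with dependencies."
    ),
}

def build_prompt(role: str, version: str, **fields) -> str:
    """Fail fast on missing fields instead of emitting a broken prompt."""
    template = TEMPLATES[(role, version)]        # KeyError here means unknown template
    try:
        return template.substitute(**fields)
    except KeyError as missing:
        raise ValueError(f"prompt field not supplied: {missing}") from None

prompt = build_prompt("summarizer", "v2",
                      audience="executives", max_words=120, document="...")

Versioning the templates, as sketched here, also makes prompt changes reviewable and revertible in the same way as code changes, which directly addresses the reproducibility concern raised above.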
In Agentic AI, prompt pipelines enable scalable, role-consistent communication. Each agent type (e.g., planner, retriever, summarizer) can generate or consume structured prompts tailored to its function. By automating message formatting, dependency tracking, and semantic alignment, programmatic prompting prevents coordination drift and ensures consistent reasoning across diverse agents in real time [159, 14]. 8. Causal Modeling and Simulation-Based Planning: AI Agents often operate on statistical correlations rather than causal models, leading to poor generalization under distribution shifts. Embedding causal inference allows agents to distinguish between correlation and causation, simulate interventions, and plan more robustly. For instance, in supply chain scenarios, a causally aware agent can simulate the downstream impact of shipment delays. In Agentic AI, causal reasoning is vital for safe coordination and error recovery. Agents must anticipate how their actions impact others, requiring causal graphs, simulation environments, or Bayesian inference layers. For example, a planning agent may simulate different strategies and communicate likely outcomes to others, fostering strategic alignment and avoiding unintended emergent behaviors. 9. Monitoring, Auditing, and Explainability Pipelines: AI Agents lack transparency, complicating debugging and trust. Logging systems that record prompts, tool calls, memory updates, and outputs enable post-hoc analysis and performance tuning. These records help developers trace faults, refine behavior, and ensure compliance with usage guidelines, which is especially critical in enterprise or legal domains. For Agentic AI, logging and explainability are exponentially more important. With multiple agents interacting asynchronously, audit trails are essential for identifying which agent caused an error and under what conditions. Explainability pipelines that integrate across agents (e.g., timeline visualizations or dialogue replays) are key to ensuring safety, especially in regulatory or multi-stakeholder environments. 10. Governance-Aware Architectures (Accountability and Role Isolation): AI Agents currently lack built-in safeguards for ethical compliance or error attribution. Governance-aware designs introduce role-based access control, sandboxing, and identity resolution to ensure agents act within scope and that their decisions can be audited or revoked. These structures reduce risks in sensitive applications such as healthcare or finance. In Agentic AI, governance must scale across roles, agents, and workflows. Role isolation prevents rogue agents from exceeding authority, while accountability mechanisms assign responsibility for decisions and trace causality across agents. Compliance protocols, ethical alignment checks, and agent authentication ensure safety in collaborative settings, paving the way for trustworthy AI ecosystems. AI Agents are projected to evolve significantly through enhanced modular intelligence focused on five key domains, as depicted in Figure 14: proactive reasoning, tool integration, causal inference, continual learning, and trust-centric operations. The first transformative milestone involves transitioning from reactive to Proactive Intelligence, where agents initiate tasks based on learned patterns, contextual cues, or latent goals rather than awaiting explicit prompts.
This advancement depends heavily on robust Tool Integration, enabling agents to dynamically interact with external systems, such as databases, APIs, or simulation environments, to fulfill complex user tasks. Equally critical is the development of Causal Reasoning, which will allow agents to move beyond statistical correlation, supporting inference of cause-effect relationships essential for tasks involving diagnosis, planning, or prediction. To maintain relevance over time, agents must adopt frameworks for Continuous Learning, incorporating feedback loops and episodic memory to adapt their behavior across sessions and environments. Lastly, to build user confidence, agents must prioritize Trust & Safety mechanisms through verifiable output logging, bias detection, and ethical guardrails, especially as their autonomy increases. Together, these pathways will redefine AI Agents from static tools into adaptive cognitive systems capable of autonomous yet controllable operation in dynamic digital environments. Agentic AI, as a natural extension of these foundations, emphasizes collaborative intelligence through multi-agent coordination, contextual persistence, and domain-specific orchestration. Future systems (Figure 14, right side) will exhibit Multi-Agent Scaling, enabling specialized agents to work in parallel under distributed control for complex problem-solving, mirroring team-based human workflows. This necessitates a layer of Unified Orchestration, where meta-agents or orchestrators dynamically assign roles, monitor task dependencies, and mediate conflicts among subordinate agents. Sustained performance over time depends on Persistent Memory architectures, which preserve semantic, episodic, and shared knowledge for agents to coordinate longitudinal tasks and retain state awareness. Simulation Planning is expected to become a core feature, allowing agent collectives to test hypothetical strategies, forecast consequences, and optimize outcomes before real-world execution. Moreover, Ethical Governance frameworks will be essential to ensure responsible deployment, defining accountability, oversight, and value alignment across autonomous agent networks. Finally, tailored Domain-Specific Systems will emerge in fields like law, medicine, and supply chains, leveraging contextual specialization to outperform generic agents. This future positions Agentic AI not merely as a coordination layer on top of AI Agents, but as a new paradigm for collective machine intelligence with adaptive planning, recursive reasoning, and collaborative cognition at its core. Figure 14: Mindmap visualization of the future roadmap for AI Agents and Agentic AI.
VII Conclusion
In this study, we presented a comprehensive literature-based evaluation of the evolving landscape of AI Agents and Agentic AI, offering a structured taxonomy that highlights foundational concepts, architectural evolution, application domains, and key limitations. Beginning with a foundational understanding, we characterized AI Agents as modular, task-specific entities with constrained autonomy and reactivity. Their operational scope is grounded in the integration of LLMs and LIMs, which serve as core reasoning modules for perception, language understanding, and decision-making.
We identified generative AI as a functional precursor, emphasizing its limitations in autonomy and goal persistence, and examined how LLMs drive the progression from passive generation to interactive task completion through tool augmentation. This study then explored the conceptual emergence of Agentic AI systems as a transformative evolution from isolated agents to orchestrated, multi-agent ecosystems. We analyzed key differentiators such as distributed cognition, persistent memory, and coordinated planning that distinguish Agentic AI from conventional agent models. This was followed by a detailed breakdown of architectural evolution, highlighting the transition from monolithic, rule-based frameworks to modular, role-specialized networks facilitated by orchestration layers and reflective memory architectures. Additionally, this study then surveyed application domains in which these paradigms are deployed. For AI Agents, we illustrated their role in automating customer support, internal enterprise search, email prioritization, and scheduling. For Agentic AI, we demonstrated use cases in collaborative research, robotics, medical decision support, and adaptive workflow automation, supported by practical examples and industry-grade systems. Finally, this study provided a deep analysis of the challenges and limitations affecting both paradigms. For AI Agents, we discussed hallucinations, shallow reasoning, and planning constraints, while for Agentic AI, we addressed amplified causality issues, coordination bottlenecks, emergent behavior, and governance concerns. These insights offer a roadmap for future development and deployment of trustworthy, scalable agentic systems. Acknowledgement This work was supported by the National Science Foundation and the United States Department of Agriculture, National Institute of Food and Agriculture through the “Artificial Intelligence (AI) Institute for Agriculture” Program under Awards AWD003473 and AWD004595, Accession Number 1029004, “Robotic Blossom Thinning with Soft Manipulators”. Declarations The authors declare no conflicts of interest. Statement on AI Writing Assistance ChatGPT and Perplexity were utilized to enhance grammatical accuracy and refine sentence structure; all AI-generated revisions were thoroughly reviewed and edited for relevance. Additionally, ChatGPT-4o was employed to generate realistic visualizations. References [1] E. Oliveira, K. Fischer, and O. Stepankova, “Multi-agent systems: which research for which applications,” Robotics and Autonomous Systems, vol. 27, no. 1-2, pp. 91–106, 1999. [2] Z. Ren and C. J. Anumba, “Multi-agent systems in construction–state of the art and prospects,” Automation in Construction, vol. 13, no. 3, pp. 421–434, 2004. [3] C. Castelfranchi, “Modelling social action for ai agents,” Artificial intelligence, vol. 103, no. 1-2, pp. 157–182, 1998. [4] J. Ferber and G. Weiss, Multi-agent systems: an introduction to distributed artificial intelligence, vol. 1. Addison-wesley Reading, 1999. [5] R. Calegari, G. Ciatto, V. Mascardi, and A. Omicini, “Logic-based technologies for multi-agent systems: a systematic literature review,” Autonomous Agents and Multi-Agent Systems, vol. 35, no. 1, p. 1, 2021. [6] R. C. Cardoso and A. Ferrando, “A review of agent-based programming for multi-agent systems,” Computers, vol. 10, no. 2, p. 16, 2021. [7] E. Shortliffe, Computer-based medical consultations: MYCIN, vol. 2. Elsevier, 2012. [8] H. P. Moravec, “The stanford cart and the cmu rover,” Proceedings of the IEEE, vol.
RedTeamLLM: an Agentic AI framework for offensive security
Brian Challita1, Pierre Parrend1,2
1Laboratoire de Recherche de l'EPITA, 14-16 Rue Voltaire, 94270 Le Kremlin-Bicêtre, France
2ICube, UMR 7357, Université de Strasbourg, CNRS, 300 bd Sébastien Brant - CS 10413 - F-67412 Illkirch Cedex
{brian.challita, pierre.parrend}@epita.fr

Abstract
From automated intrusion testing to the discovery of zero-day attacks before software launch, agentic AI holds great promise for security engineering. This strong capability comes with a commensurate threat: the security and research community must build up its models before the approach is leveraged by malicious actors for cybercrime. We therefore propose and evaluate RedTeamLLM, an integrated architecture with a comprehensive security model for the automation of pentest tasks. RedTeamLLM follows three key steps, summarizing, reasoning, and acting, which embody its operational capacity. This novel framework addresses four open challenges: plan correction, memory management, context window constraints, and generality vs. specialization. Evaluation is performed through the automated resolution of a range of entry-level, but not trivial, CTF challenges. The contribution of the reasoning capability of our agentic AI framework is specifically evaluated.

Keywords: Cyberdefense; AI for cybersecurity; generative AI; Agentic AI; offensive security

1 Introduction
The recent strengthening of Agentic AI Hughes et al. (2025) approaches poses major challenges in the domains of cyberwarfare and geopolitics Oesch et al. (2025). LLMs are already commonly used in cyber operations to augment human capabilities and automate common tasks Yao et al. (2024); Chowdhury et al. (2024). They already pose significant ethical and societal challenges Malatji and Tolah (2024), and a great threat of proliferation of cyberdefence and cyberattack capabilities, which were so far available only to nation-state actors. Whereas their currently recognized capabilities are still bound to the rapid analysis of malicious code or rapid decision-making in alert triage, and they pose significant trust issues Sun et al. (2024), their expressivity and knowledge base are rapidly ramping up. In this context, Agentic AI, i.e., autonomous AI systems capable of performing sets of complex tasks that span long periods of time without human supervision Acharya et al. (2025), is opening a brand new type of cyberthreat. These systems follow two complementary strategies, goal orientation and reinforcement learning, which can dramatically accelerate the execution of highly technical operations, such as cybersecurity actions, while supporting a diversification of supported tasks. In the defense landscape, cyberwarfare takes a singular position: it targets espionage, disruption, and degradation of the information and operational systems of the adversary.
More than in traditional arms, skill is a strong limiting factor, especially since targeting critical defense systems heavily relies on the exploitation of rare, unknown vulnerabilities, which are more often than not 0-day threats. Whereas financial criminality aims at money extortion and thus targets a broad range of potential victims to exploit the weakest ones, defense operations aim at entering and disrupting highly exposed, and highly protected, technical environments, where known vulnerabilities are closed very quickly. In this context, operational capability has so far relied on talented analysts capable of discovering novel vulnerabilities. This high-skill, high-means game could face a brutal end with the advent of tools capable of discovering new exploitable flaws at the heart of the software, thus enabling smaller actors to pose highly asymmetric threats capable of disrupting critical infrastructures or launching large-scale disinformation campaigns. Agentic AI has the capability to provide such a tool, and LLMs themselves, in their stand-alone versions, have already proved capable of detecting these famous 0-day vulnerabilities: Microsoft has published, with the help of its Copilot tools, no fewer than 20 vulnerabilities in the Grub2, U-Boot and barebox bootloaders since late 2024 (https://www.microsoft.com/en-us/security/blog/2025/03/31/analyzing-open-source-bootloaders-finding-vulnerabilities-faster-with-ai/). This is the public side of the coin, from a company that seeks to advertise its software development environment and to create some noise about vulnerabilities in competing operating systems. No doubt malicious actors have not waited to turn the same tools to their advantage and add novel capabilities to their arsenal, beyond the malicious generative tools already analyzed by the community: WormGPT (https://flowgpt.com/p/wormgpt-6), DarkBERT Jin et al. (2023), FraudGPT Falade (2023). In the domain of autonomous offensive cybersecurity operations, the probability and likely impact of proliferation of agentic AI frameworks are high. Understanding their mechanisms, both to leverage these tools for defensive operations and to anticipate their malicious exploitation, is therefore an urgent requirement for the community. We therefore propose the RedTeamLLM model to the community as a proof of concept of the offensive capabilities of Agentic AI. The model encompasses automation, genericity, and memory support. It also defines the principles of dynamic plan correction and context window constraint mitigation, as well as a strict security model to avoid abuse of the system. The evaluations demonstrate the strong competitiveness of the model with respect to state-of-the-art competitors, as well as the necessary contribution of its summarizer, reasoning, and act components. In particular, RedTeamLLM exhibits a significant improvement in automation capability over PenTestGPT Deng et al. (2024), which still shows restricted capacity. The remainder of this paper is organised as follows: Section 2 presents the state of the art. Section 3 defines the requirements, and Section 4 presents the RedTeamLLM model for agentic-AI-based offensive cybersecurity operations. Section 5 presents the implementation and Section 6 the evaluation of the model. Section 7 concludes this work.
2 State of the Art
The advent, in the form of LLMs, of computing processes capable of generating structured output beyond existing text is a key driver for a renewed development of agent-based models, with so-called 'agentic AI' models Shavit et al. (2023), which are able to devise both technical processes and technically correct pieces of code. This novel kind of agent supports multiple, complex, and dynamic goals and can operate in dynamic environments while taking rich context into account Acharya et al. (2025). It thus opens novel challenges and opportunities, both for generic problem-solving agents and for highly complex and technical environments like cybersecurity operations.

2.1 Research challenges for Agentic AI
The four main challenges in Agentic AI are: analysis, reliability, human factor, and production. These challenges can be mapped to the taxonomy of prompt engineering techniques by Sahoo et al. (2024). Analysis: reasoning and logic, knowledge-based reasoning and generation, meta-cognition and self-reflection. Reliability: reducing hallucination, fine-tuning and optimisation, improving consistency and coherence, efficiency. Human factor: user interaction, understanding user intent, managing emotion and tone. Production: code generation and execution.

The first issue for supporting reasoning and logic is the capability to address complex tasks, to decompose them, and to handle each individual step. The first such model, chain-of-thought (CoT), is capable of structured reasoning through step-by-step processing and proves to be competitive on math and common-sense reasoning benchmarks Wei et al. (2022). Automatic chain-of-thought (Auto-CoT) automates the generation of CoTs by generating alternative questions and multiple alternative reasoning chains for each, in order to consolidate a final set of demonstrations Zhang et al. (2022). Tree-of-thoughts (ToT) maintains a tree structure of intermediate analysis steps and evaluates the progress towards the solution Yao et al. (2023a) through breadth-first or depth-first tree search strategies. This approach makes it possible to revert to previous nodes when an intermediate analysis is erroneous. Self-consistency evaluates reasoning chains to support more complex problems through the sampling and comparative evaluation of alternative solutions Wang et al. (2022).

Text generated by an LLM is intrinsically a statistical approximation of a possible answer: as such, it requires 1) a rigorous process to reduce the approximation error below a usability threshold, and 2) systematic control by a human operator. The usability threshold can be expressed in terms of veracity, for instance in the domain of news (https://www.cjr.org/tow_center/we-compared-eight-ai-search-engines-theyre-all-bad-at-citing-news.php). For code generation, it corresponds to code that is both correct and effective, i.e., that compiles and runs, and that performs the expected operation. Usable technical processes, as in red team operations, are defined by reasoning and logic capability. Two main techniques reduce hallucination: Retrieval-Augmented Generation (RAG), which enriches the prompt context with external, up-to-date knowledge Lewis et al. (2020), and ReAct prompting, which supports concurrent actions and updatable action plans with reasoning traces Yao et al. (2023b).
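To make the retrieval step concrete, the following is a minimal sketch of RAG-style prompt enrichment. The call_llm() stub, the toy knowledge base, and the token-overlap retriever are assumptions of this illustration, not the mechanisms of the cited systems, which rely on learned dense retrievers.

```python
# Minimal sketch of retrieval-augmented prompting. call_llm(), the toy
# knowledge base, and the token-overlap retriever are illustrative
# assumptions, not the cited systems' mechanisms.

KNOWLEDGE = [
    "CVE-2021-4034 (PwnKit): local privilege escalation in pkexec.",
    "nmap -sV performs service and version detection on open ports.",
    "gobuster dir enumerates hidden paths on a web server.",
]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion endpoint."""
    raise NotImplementedError("wire this to your LLM API")

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank snippets by naive token overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(KNOWLEDGE,
                    key=lambda s: len(q & set(s.lower().split())),
                    reverse=True)
    return ranked[:k]

def rag_answer(question: str) -> str:
    # Retrieved snippets are prepended so the model grounds its answer
    # in external knowledge instead of its parametric memory alone.
    context = "\n".join(retrieve(question))
    return call_llm(
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```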
One key issue for red teaming tasks is the capability to produce fine-tuned, system-specific code for highly precise tasks. Whereas the capability of LLMs to generate basic code in a broad scope of languages is well recognized Li et al. (2024), the support of complex algorithms and target-dependent scripts is still in its infancy. In particular, the articulation between textual, imprecise, and informal reasoning on the one hand and lines of code on the other must bridge the conceptual gap between the textual analysis and executable levels. Structured Chain-of-Thought Li et al. (2025) closes this gap by enforcing a strong control-flow structure (if-then; while; for) at the textual level, which can then be implemented through focused code generation. Programmatically handling numeric and symbolic reasoning, as well as equation resolution, requires a binding with external tools, as specified by the Program-of-Thought (PoT) Bi et al. (2024) or Chain-of-Code (CoC) Li et al. (2023) prompting models. However, these features are not required in the case of red teaming tasks.

2.2 Cognitive Architectures
Three main architectures implement the Agentic AI approach: ReAct (Reason and Act), ADaPT (As-needed Decomposition and Planning), and P&E (Plan and Execute).

ReAct Yao et al. (2023b) first reasons about the analysis strategy, then rolls this strategy out. It performs multiple rounds of reasoning and acting, executing one action in each round and then collecting an observation. This enables a strong reduction of the error margin. As shown in Figure 1, the ReAct input is built from an explicit objective and an optional context. Reasoning then summarizes the goal and context and plans the next action, each through a call to an LLM agent. The selected action is then executed, again based on an LLM call. If the analysis is not completed, the pipeline returns to the goal definition step with a given subgoal. If the goal is achieved, the pipeline terminates. The main limit of this architecture, whether it is used with prompting or with complex pipelines, is the absence of memory, which requires each prompt to embed all context and knowledge about previous analysis steps. Since the context windows of current LLMs are strongly limited, information starts being ignored as the context and history exceed the window's limit, which can lead to reduced performance and inaccurate outputs.

Figure 1: Process diagram of ReAct

ADaPT Prasad et al. (2023) takes a greedy approach to decomposition: it keeps decomposing the task, through recursive decomposition, until it reaches subtasks that can be executed, which avoids saturating agent capability. The decomposition stops either when a task can be executed directly or when a maximum depth is reached. Unlike ReAct and P&E, ADaPT cannot be a pure prompting method, as it is based on recursion. ADaPT completely solves the problem of context window size restriction by decomposing as much as needed. Execution of leaf tasks is then carried out independently. However, many complications arise along the way: plan correction (if a task fails completely, how can the rest of the plan be corrected?) and new discoveries (the agent might stumble upon information that leads to a complete change of plan), in particular, are not supported.

P&E Sun et al. (2023) decomposes a task into multiple subtasks that are executed independently from one another. This architecture defines first solutions to ReAct's weak points by decomposing a task and isolating each subtask's execution. Prompt length is thus minimized, which slows the consumption of the context window capacity. Task execution becomes more efficient. However, one key issue remains: the context window limit is eventually reached; and a new one is introduced: error handling, since on a subtask's failure the whole execution fails.
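As a minimal illustration of the round-based reason/act/observe cycle shared by these architectures, the sketch below runs one shell command per round and feeds the observation back into the next reasoning step. The reason() helper and the "DONE" stop convention are assumptions of this sketch, not part of the cited designs; note how, in the absence of memory, the full history must be re-embedded into every prompt.

```python
# Minimal sketch of a ReAct-style reason/act/observe loop. reason() and
# the "DONE" convention are assumptions of this illustration.
import subprocess

def reason(goal: str, history: list[str]) -> str:
    """Hypothetical LLM call: given the goal and past observations,
    return the next shell command, or "DONE" when the goal is reached."""
    raise NotImplementedError

def react_loop(goal: str, max_rounds: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_rounds):
        command = reason(goal, history)        # reasoning round
        if command.strip() == "DONE":
            break                              # goal reached, stop acting
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True)
        # Acting round: the observation is fed back into the next prompt;
        # with no memory, the whole history must fit the context window.
        history.append(f"$ {command}\n{result.stdout}{result.stderr}")
    return history
```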
2.3 Agentic AI and cybersecurity
Recent offensive-security agents all converge on a narrow design spectrum: a frontier LLM in a ReAct-style loop that plans, executes a single tool call, observes, then repeats Heckel and Weller (2024); yet none of them stores or revises a global plan the way ADaPT or other deliberative-memory systems do. AutoAttacker couples ReAct with an episodic "Experience Manager", but that memory is consulted only to validate the current action rather than to update or backtrack the plan itself Xu et al. (2024). LLM-Directed Agent preserves the classic four-stage ReAct chain (NLTG → CFG → CG → NLTP) and likewise discards alternative branches once the CFG selects one Laney (2024). One-Day Vulnerabilities' Exploit Fang et al. (2024a) and Hack-Websites Fang et al. (2024b) expose different toolsets to the same ReAct controller, and performance collapses as soon as GPT-4 is replaced by weaker models. CyberSecEval 3 uses an even leaner single-prompt ReAct wrapper to probe Llama-3 and contemporaries, finding that all models stall long before complex exploitation Wan et al. (2024). HackSynth strips the pattern down to just a Planner and a Summarizer (still a think-then-act loop) and shows that temperature and context-window size, not architectural novelty, dominate success rates Muzsai et al. (2024). The sole departures from ReAct are PenTestAgent, which hard-codes a pentesting workflow (Reconnaissance → Search → Planning → Execution) without agentic recursion Shen et al. (2024), and PenTestGPT, whose Plan-and-Execute modules shuffle intermediate results between Reasoning, Generation, and Parsing stages but never revisit earlier strategies once execution starts Deng et al. (2024). Although defensive models exhibit promising properties Ismail et al. (2025), the exploitation of Agentic AI for malicious operations is a key concern for the community Malatji and Tolah (2024). Across current systems, memory is used only as a scratch-pad for the latest observations; none implements hierarchical plan refinement, long-horizon memory, or rollback of faulty plans.

3 Requirements
In this section, we detail the specific challenges of agentic AI offensive cybersecurity operations. We address the context window limit, continuous improvement, genericity, and automation.

One major issue of LLM agent-based systems is their limited context window. Complex tasks usually require many iterations between the agent and a changing environment, especially when using ReAct, so tracking what has happened is essential for high-quality results. A common way to address this challenge is recursive planning Prasad et al. (2023), in which a task is broken down into many subtasks that are executed individually; each subtask then passes the key points of its outcome to the next ones. A difficulty arises when a subtask fails, potentially blocking the subtasks that follow. To prevent this, a plan-correction mechanism Wang et al. (2024) is applied: whenever a subtask fails, the overall plan is adjusted so execution can proceed smoothly (see the sketch below). These two techniques are crucial for building a high-performance agent, but further refinements are still possible. Repeating the same mistakes on every run wastes time, money, and computation. Introducing a memory manager during task planning lets the agent avoid exploratory paths that have already failed.
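A compact way to picture how recursive planning and plan correction interlock is the following sketch. The four helper functions are hypothetical LLM-backed calls named only for this illustration; the cited mechanisms differ in their details.

```python
# Sketch of recursive decomposition with plan correction. The four helpers
# below are hypothetical LLM-backed functions, named for illustration only.

def decompose(task: str) -> list[str]: ...           # ordered subtasks
def is_executable(task: str) -> bool: ...            # can this run as-is?
def execute(task: str) -> bool: ...                  # run a leaf, True on success
def repair(task: str, done: list[str], failed: str) -> list[str]: ...

def solve(task: str, depth: int = 0, max_depth: int = 3) -> bool:
    if depth == max_depth or is_executable(task):
        return execute(task)                         # leaf: execute directly
    plan = decompose(task)
    i = 0
    while i < len(plan):
        if solve(plan[i], depth + 1, max_depth):
            i += 1
            continue
        # Plan correction: rewrite the remaining steps after a failure,
        # given what already succeeded, instead of aborting the whole run.
        tail = repair(task, done=plan[:i], failed=plan[i])
        if not tail:
            return False                             # no viable correction
        plan[i:] = tail
    return True
```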
Moreover, genericity is essential. Allowing the agent full freedom to choose its own tools and techniques fosters creativity and broadens its capabilities beyond a fixed toolset. In our case, the agent has unrestricted execution privileges through root access to a terminal. Finally, a key aspect to consider is automation: refining an agent system is of little use unless the whole process is automated, requiring no human interaction along the way. Integrating a tool call that provides interactive terminal access is thus fundamental in this context.

The consolidated requirements for our penetration-testing agent are thus:
1. Dynamic plan correction: handling subtask or action failures without halting the entire workflow Wang et al. (2025).
2. Memory management: managing large amounts of contextual data in long-running tasks, which enables continuous self-improvement.
3. Context window constraint mitigation: preventing critical information loss due to an LLM's limited prompt size Yao et al. (2023b).
4. Generality vs. specialization: balancing the need for specialized pentesting tools with broader adaptability.
5. Automation: automating the interaction of the agent with its designated environment, in our case a terminal.

4 RedTeamLLM
In this section, we propose a novel architecture, its supported features, and the related memory management mechanism for an offensive cybersecurity agentic model. Given the high capability and autonomy of the RedTeamLLM model, a robust security model is also required.

4.1 The Architecture
The architecture of RedTeamLLM is composed of seven components: Launcher, RedTeamAgent, Memory Manager, ADaPT Enhanced, Plan Corrector, ReAct, and Planner. On a run, the Launcher retrieves the input task and gives it to the RedTeamAgent while acting as the user interface (showing the number of tasks running, memory accesses, failed and successful tasks, and allowing intervention in a task's operation, e.g., stopping it or modifying its plan). Upon receiving the task, the RedTeamAgent has two objectives: pass it to ADaPT Enhanced and await a tree structure representing the full agent execution, then save that structure to the Memory Manager. The Memory Manager, which is the storage area for operational knowledge, embeds and stores each node's description from a task tree in a database, thus providing full access to previous task structures and dependencies. ADaPT Enhanced then takes the task, passes it to the Planner (which returns a tree of subtasks), and traverses the tree to execute leaves and pass results to siblings. The Plan Corrector can then adjust the plan and resume execution on any failure. All leaf executions are performed by the ReAct component, which carries out multiple rounds of reasoning, execution, and observation with terminal access.

Figure 2: Software architecture of the RedTeamLLM model

4.2 Features
To support autonomous offensive operations, the proposed model must address many challenges and effectively meet the essential requirements. The principal features are:
• To address the context window limit, the model needs to decompose a task recursively, as much as needed. This is accomplished by the ADaPT component.
• With subtasks come dependencies that need monitoring to avoid fatal execution failure on error. The Plan Corrector can modify a task's plan according to the latest outcomes.
• To support continuous improvement of the model's capabilities, the Memory Manager improves planning over time by storing past executions in a tree-like form (see the sketch below).
• Finally, the model needs to be generic and unrestricted, to cover a wide range of tasks and not be limited to a fixed set of tools. This is provided by ReAct with full terminal access, which allows full automation, with full control and autonomy over the task being executed.
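The execution tree exchanged between ADaPT Enhanced and the Memory Manager can be pictured with a record like the following; the field names are illustrative assumptions, not the authors' schema.

```python
# Illustrative record for the execution tree that ADaPT Enhanced hands to
# the Memory Manager; field names are assumptions, not the authors' schema.
from dataclasses import dataclass, field

@dataclass
class TaskNode:
    description: str                      # natural-language task statement
    success: bool = False
    failure_reason: str = ""              # filled in when the node fails
    commands: list = field(default_factory=list)   # leaf-level executions
    children: list = field(default_factory=list)   # subtasks from planning

    def flatten(self) -> list:
        """All nodes of the subtree, e.g. to embed each description into a
        vector store so later planning runs can query past outcomes."""
        nodes = [self]
        for child in self.children:
            nodes.extend(child.flatten())
        return nodes
```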
4.3 Memory management
Memory management is an essential part of the model. In all competing models, memory is used only at the execution stage, to retrieve already-executed commands for a similar task. In our case, memory is used at a higher level, when the agent decides how to create the execution plan. At the end of each execution, the traces of the whole process are stored as a tree indexed by embeddings of the task descriptions. Thus, at every decomposition, the planner queries the saved nodes and gains access to their success or failure reasons, subtasks, and detailed executions. This technique helps the agent improve over time, especially when re-executing a task, where it eventually narrows all the possibilities down to the right path. In this way, the RedTeamLLM model improves over time and has a better chance of completing a task over multiple rounds of execution.

Figure 3: Database schema of the memory management model

4.4 The Security Model
The RedTeamLLM architecture supports a powerful autonomous process for pentesting, including error recovery when the process meets dead ends, and automation of offensive actions. The architecture is thus exposed to two main threat families: hijacking of the execution process, on the one hand, and inversion of dependency from the LLM agents towards the framework, on the other hand. A strong security model is thus required to address the key vulnerabilities of agentic AI models: attack surface expansion, data manipulation and prompt injection, API usage and sensitive data exposure Khan et al. (2024). Its five key components, shown in Figure 4, are: 1) a dedicated authentication, authorization, and session management module; 2) network and system isolation of the runtime environment; 3) systematic command validation by the user before any offensive action; 4) logging in append-only mode for a posteriori analysis; and 5) a kill switch to shut the platform down. The threats related to containment and inversion of dependency are shown in Figure 5. Isolation prevents unauthorized access to network entities or configurations, and to system capabilities. Command validation by the user ensures the alignment between the ongoing security task and the performed operations, and prevents accidental calls to unwanted or dangerous tools proposed by the agent. Following and, when necessary, reconstructing the execution track is supported by the logging facility. To enhance the reaction capability and to pave the way to greater autonomy of the framework, a kill switch is set up to immediately halt any agent over which supervision, or control over actual operations, has been weakened or lost.

Figure 4: Security layers wrapping the LLM agent

The LLM itself is used in its default configuration, with a benevolent user who does not intend to abuse it. Consequently, typical threats such as prompt injection attacks Labunets et al. or app store abuses Hou et al. (2024) are not relevant to RedTeamLLM.

Figure 5: Security challenges and how RedTeamLLM addresses them
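A minimal sketch of how the command-validation, append-only logging, and kill-switch layers can wrap agent-proposed commands is given below; it is an assumption-level illustration of these principles, not the authors' implementation.

```python
# Sketch of command validation, append-only logging, and a kill switch
# wrapped around agent-proposed commands; illustrative, not the authors' code.
import subprocess
import sys

KILL_SWITCH = False          # an operator can flip this to halt the platform

def run_validated(command: str, log_path: str = "agent.log") -> str:
    if KILL_SWITCH:
        sys.exit("kill switch engaged: halting the agent")
    # Systematic validation: every offensive action needs operator approval.
    answer = input(f"Agent proposes: {command!r} -- execute? [y/N] ")
    if answer.strip().lower() != "y":
        return "denied by operator"
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    with open(log_path, "a") as log:          # append-only audit trail
        log.write(f"$ {command}\n{result.stdout}{result.stderr}\n")
    return result.stdout + result.stderr
```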
4.4 The Security Model

The RedTeamLLM architecture supports a powerful autonomous process for pen-testing, including error recovery when the process meets dead ends, and automation of offensive actions. The architecture is thus exposed to two main threat families: hijacking of the execution process, on the one hand, and inversion of dependency from the LLM agents towards the framework, on the other. A strong security model is thus required to address the key vulnerabilities of agentic AI models: attack surface expansion, data manipulation and prompt injection, and API usage and sensitive data exposure Khan et al. (2024). Its five key components, shown in Figure 4, are: 1) a dedicated authentication, authorization, and session management module; 2) network and system isolation of the runtime environment; 3) systematic command validation by the user before any offensive action; 4) append-only logging for a posteriori analysis; and 5) a kill switch to shut the platform down. The threats related to containment and inversion of dependency are shown in Figure 5. Isolation prevents unauthorized access to network entities or configurations, and to system capabilities. Command validation by the user ensures alignment between the ongoing security task and the performed operations, and prevents accidental calls to unwanted or dangerous tools proposed by the agent. Following and, when necessary, reconstructing the execution track is supported by the logging facility. To enhance the reaction capability and to pave the way to greater autonomy of the framework, a kill switch is set up to immediately halt any agent over which supervision, or control over actual operations, has been weakened or lost. Figure 4: Security layers wrapping the LLM agent The LLM itself is used in its default configuration, with a benevolent user who does not intend to abuse it. Consequently, typical threats like prompt injection attacks Labunets et al. or app store abuses Hou et al. (2024) are not relevant to RedTeamLLM.

Figure 5: Security challenges and how RedTeamLLM addresses them

5 Implementation

The proof of concept for the RedTeamLLM model, which we evaluate in the following section, centers on the ReAct component for task execution. The current state of the implementation also covers ADaPT for recursive planning, memory management for continuous improvement, and plan correction to support operation continuity after task failure; however, these are less mature and are not evaluated here. RedTeamLLM and the related tests are available to the community (https://github.com/lre-security-systems-team/redteamllm). The evaluation runs in a Docker container on a ThinkPad E14 Gen 5 with 16 GB of DDR4 RAM and an Intel Core i5-13420H processor, and uses OpenAI's API with GPT-4o.

5.1 Three-Step Pipeline

The RedTeamLLM implementation uses a three-step pipeline, each step handled by a separate LLM session: 1. Reasoning. Before executing any action, the agent reasons about the next steps. Reasoning occurs in an isolated LLM session which elicits an explicit output of its process, detailed steps, and a plan. When the user provides the task definition to the model, it is forwarded to the reasoning component; its output is then passed to the Act component. After each tool call, the executed command and its output are fed back to the reasoning component to generate further analysis. 2. Act. The output of Reasoning is treated as an assistant message by the Act session, which enforces adherence to the plan and reduces the model's inclination to interrupt execution with additional reasoning or safety checks. This setup allows the LLM to focus solely on executing the recommended action. For tool execution, the LLM session has full access to a quasi-interactive, root-privileged Linux terminal. A current challenge is determining when a process requires input; we address this using strace, but it is not perfectly precise because some processes read from multiple file descriptors, not only stdin. After each tool execution, if the output is too long, it is passed to a summarizer to avoid exceeding the context window. 3. Summarizer. The summarizer is a stateless LLM session: for each request, it summarizes the given command's output. Because this session does not maintain context about the agent's overall goal, it sometimes omits important information. We plan to address this limitation in future work. A minimal sketch of how the three sessions interact is given below.
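The following Python sketch shows one way the three-session loop could be wired together. The chat stub, the message handling, the stop condition, and the summarization threshold are our own simplifications for exposition, not the actual RedTeamLLM code.

```python
# Illustrative sketch of the three-session Reason/Act/Summarize loop.
# `chat` stands in for any chat-completion API; all thresholds and the
# stop condition are assumptions, not the RedTeamLLM implementation.
def chat(history: list) -> str:
    """Stub: send `history` to an LLM chat endpoint and return its reply."""
    return ""

MAX_OUTPUT_CHARS = 4000   # assumed threshold before summarizing tool output

def summarize(command: str, output: str) -> str:
    # Stateless session: one request per command output, no shared context.
    return chat([{"role": "user",
                  "content": f"Summarize the output of `{command}`:\n{output}"}])

def run_task(task: str, execute) -> None:
    """`execute` runs a command on the (root-privileged) terminal."""
    reason_hist = [{"role": "user", "content": task}]
    act_hist = []
    while True:
        thought = chat(reason_hist)                # 1. reasoning session
        if not thought.strip():                    # no further action recommended
            break
        # Reasoning output is injected as an *assistant* message of the
        # Act session, so the Act session simply follows the plan.
        act_hist.append({"role": "assistant", "content": thought})
        command = chat(act_hist)                   # 2. act session picks a command
        output = execute(command)                  # terminal execution
        if len(output) > MAX_OUTPUT_CHARS:         # 3. summarizer when too long
            output = summarize(command, output)
        feedback = f"$ {command}\n{output}"
        reason_hist.append({"role": "user", "content": feedback})
        act_hist.append({"role": "user", "content": feedback})
```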
5.2 Sample Run

A sample run proceeds as follows: 1. A task is given to the agent (e.g., “Obtain root access to the machine with IP x.x.x.x”). 2. The task is forwarded to the reasoning session as a user message. 3. The reasoner generates a result, which is provided as an assistant message to the acting session. 4. The act session recommends a tool call (e.g., nmap or sqlmap). 5. After execution, if the command output is lengthy, it is summarized and sent back to the reasoner as a user message. 6. The reasoner produces further thoughts, and the loop continues until the reasoner stops recommending actions. 7. At that point, the system prompts the user for input (e.g., “Continue” or a new task).

6 Evaluation

The evaluation is performed in three steps: a qualitative evaluation of RedTeamLLM's capability to autonomously perform offensive operations; a comparative study of the cognitive mechanisms involved in these operations; and an ablation study focused on evaluating the impact of the presence, or absence, of the reasoning capability.

6.1 Use cases

The choice of benchmark to evaluate the RedTeamAgent is based on two factors: reproducibility and variability. We therefore selected 5 use cases, Sar, CewlKid, Victim1, WestWild, and CTF4, from the VulnHub repository; they cover a broad range of technical difficulties and security techniques, are easily deployable, and support reproducible executions. The objective of this work is to create a proof of concept for the RedTeamLLM model, with an evaluation of the cognitive operations (summarize, reason, act) and a processing engine restricted to the ReAct component. The 5 selected use cases are embedded in virtual machines from the easy category. This selection also allows us to compare our results, since the TAPT benchmark Isozaki et al. (2024) tested PentestGPT Deng et al. (2024) on the same target VMs. RedTeamLLM proves to be competitive on the target use cases and surpasses PentestGPT on almost all the VMs when using GPT-4o. The decisive factors behind this performance are the following. First is reasoning: the difference without this step is substantial. Without it, the agent gets stuck more often on the same thoughts and does not maintain a stable execution plan; launched multiple times, it sometimes completely changes strategy. Dedicating a substantial number of tokens to strategy, output analysis, and reasoning helps the agent stay on track. Without reasoning, the agent also regularly stops what it is doing to ask for permission. Additionally, giving the agent complete control over a terminal, rather than a limited set of tools, helps its creativity: it can choose whatever path it wants in order to achieve its goal. Sometimes a specific version of a program is not sufficient, so it installs another one; sometimes it launches scripts; sometimes it saves operational information to a file. Moreover, the fact that it directly executes the commands itself saves tokens for other purposes. Finally, automation is a key property of the agent, enabling longer and more complex operations without the need for manual supervision.

6.2 Cognitive steps

The RedTeamLLM implementation evaluated in this work is built around the ReAct analysis component. It entails 3 LLM sessions, i.e., 3 interaction dialogs built from assistant and user messages: 1) the summarizer, which summarizes command outputs; 2) the reasoning component, which reasons over tasks and their outputs; and 3) the Act component, which executes the tasks. Figure 6 shows the total number of API calls for each component over the different use cases, after 10 tests on each VM. The Summarizer typically consumes between 9.5% (CTF4) and 15.9% (CewlKid) of the API calls, with a low of 3.1% for the WestWild use case and a peak of 30.9% for the Victim1 use case. This peak enables a strong reduction of the required tool calls (see Fig. 7). The Reason and Act processes perform a very similar number of API calls. Figure 6: Number of API calls in the Summarizer, Reason, and Act steps for the 5 use cases RedTeamLLM outperforms PentestGPT in 3 use cases out of 5. With respect to the use-case write-ups, it completes 33% more steps than PentestGPT-Llama (4 successful CTF levels vs. 3) and 300% more than PentestGPT-4o (4 vs. 1) for the Victim1 use case; 33% more steps than PentestGPT-4o or PentestGPT-Llama (4 vs. 3) for the WestWild use case; and 75% more than PentestGPT-4o (3.5 vs. 2) and 250% more than PentestGPT-Llama (3.5 vs. 1) for CTF4. PentestGPT-Llama outperforms RedTeamLLM for Sar by 17% (7 vs. 6) and by 100% (4 vs. 2) for the CewlKid use case, while PentestGPT-4o is similar to or weaker than RedTeamLLM on these two test cases.
6.3 Reasoning: a strong optimization lever

The ablation study aims to evaluate the contribution of reasoning to the RedTeamLLM framework. Figure 7 shows the number of tool calls without and with reasoning for the 5 use cases. Every LLM session can make tool calls. A tool call is a specific API response from an LLM session that triggers the use of one of the provided tools (in our case, a terminal). For example, when the agent executes the terminal command ls, that execution is a tool-call response suggested by the LLM. The total number of tool calls over the 5 VMs, with 10 tests on each VM, is summed up: 5 runs with reasoning and 5 without. Only the tool calls of the Act component are counted, because that is where execution is performed. The agent clearly consumes significantly fewer tool calls with reasoning in 4 out of 5 use cases: the drop in tool calls ranges from 37% (Sar) to 68% (Victim1). Only for CTF4 does the use of reasoning come with an increase of 291% in tool calls, supporting a slightly better achievement of the target operation (see Fig. 8). In short, the agent performs more analysis before acting, and thus chooses better strategies. Figure 7: Number of tool calls without and with reasoning for the 5 use cases The degree of completion is computed for each use case, using the write-up, which lists the correct steps to complete the security challenge, as reference. Figure 8 shows these results. The Write-up bar shows the total number of steps required to complete the CTF (the total of reconnaissance, general technique, exploit, and privilege escalation steps). The Reason and No Reason bars show how many steps the agent completed for each use case with and without reasoning, respectively. The test process is similar to the previous evaluation: RedTeamLLM performs 5 tests with reasoning and 5 tests without reasoning for every use case. The maximum number of steps achieved over the 5 runs, i.e., the best execution, is considered. Reasoning improves the results in 4 cases out of 5. In two of these cases, the number of steps mastered rises from 1 to 4. A significant result of our experiments is that this improvement is coupled with a strong gain in efficiency with respect to tool calls (see Fig. 7). In one case (CewlKid), reasoning does not improve the offensive capability. These results highlight the contribution of the reasoning step to security operations by the RedTeamLLM model. Figure 8: CTF levels completed by the RedTeamLLM framework without and with reasoning for the 5 use cases

7 Conclusions and Perspectives

Beyond generative AI and now-widespread Large Language Models (LLMs), agentic AI is opening wide novel opportunities and threats for global security, and cybersecurity in particular. The objective of this work is to specify a reference model for agentic AI as applied to offensive cyber operations, so that the community can better understand these tools and their capabilities, leverage them for securing their information systems, and control this novel attack vector. In this work, we define the key requirements for offensive agentic AI, propose a reference architecture model, and build a proof of concept of this architecture focused on iterative task analysis and execution through the ReAct component. The evaluation demonstrates that, though partial, our implementation beats state-of-the-art competitors like PentestGPT in 60% of the use cases.
It also validates our hypothesis that reasoning is a key feature for agentic AI: it enables a strong reduction of the necessary tool calls in 80% of the use cases while improving offensive capabilities in 80% of the use cases. Interestingly, in 20% of the use cases it only supports a reduction of tool calls, and thus of process costs, and in 20% the gain in offensive capability requires a fourfold increase in tool calls. This shows that while RedTeamLLM improves both parameters in 60% of cases, it is also effective at either dropping operation costs or increasing operational capabilities in more complex tasks. The key insight of this study is that leveraging the dual capability of LLMs to analyze and decompose processes, on the one hand, and to generate code for well-defined tasks, on the other, brings a radical improvement to the automation and genericity of ReAct-based offensive cybersecurity frameworks. These first promising results pave the way to structuring the research effort in agentic AI for global security, in particular with respect to methodologies for evaluating the cost and automation capabilities of these models. The evaluation of recursive planning, memory management, and plan correction is also necessary to better understand the underlying mechanics and capabilities of agentic models.

References
Acharya et al. [2025] Deepak Bhaskar Acharya, Karthigeyan Kuppan, and B Divya. Agentic ai: Autonomous intelligence for complex goals–a comprehensive survey. IEEE Access, 2025.
Bi et al. [2024] Zhen Bi, Ningyu Zhang, Yinuo Jiang, Shumin Deng, Guozhou Zheng, and Huajun Chen. When do program-of-thought works for reasoning? In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 17691–17699, 2024.
Chowdhury et al. [2024] Arijit Ghosh Chowdhury, Md Mofijul Islam, Vaibhav Kumar, Faysal Hossain Shezan, Vinija Jain, and Aman Chadha. Breaking down the defenses: A comparative survey of attacks on large language models. arXiv preprint arXiv:2403.04786, 2024.
Deng et al. [2024] Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, Martin Pinzger, and Stefan Rass. PentestGPT: Evaluating and harnessing large language models for automated penetration testing. In 33rd USENIX Security Symposium (USENIX Security 24), pages 847–864, 2024.
Falade [2023] Polra Victor Falade. Decoding the threat landscape: Chatgpt, fraudgpt, and wormgpt in social engineering attacks. arXiv preprint arXiv:2310.05595, 2023.
Fang et al. [2024a] Richard Fang, Rohan Bindu, Akul Gupta, and Daniel Kang. Llm agents can autonomously exploit one-day vulnerabilities. arXiv preprint arXiv:2404.08144, 2024.
Fang et al. [2024b] Richard Fang, Rohan Bindu, Akul Gupta, Qiusi Zhan, and Daniel Kang. Llm agents can autonomously hack websites. arXiv preprint arXiv:2402.06664, 2024.
Heckel and Weller [2024] Kade M Heckel and Adrian Weller. Countering autonomous cyber threats. arXiv preprint arXiv:2410.18312, 2024.
Hou et al. [2024] Xinyi Hou, Yanjie Zhao, and Haoyu Wang. On the (in)security of llm app stores. arXiv preprint arXiv:2407.08422, 2024.
Hughes et al. [2025] Laurie Hughes, Yogesh K Dwivedi, Tegwen Malik, Mazen Shawosh, Mousa Ahmed Albashrawi, Il Jeon, Vincent Dutot, Mandanna Appanderanda, Tom Crick, Rahul De', et al. Ai agents and agentic systems: A multi-expert analysis. Journal of Computer Information Systems, pages 1–29, 2025.
Ismail et al. [2025] Ismail Ismail, Rahmat Kurnia, Zilmas Arjuna Brata, Ghitha Afina Nelistiani, Shinwook Heo, Hyeongon Kim, and Howon Kim. Toward robust security orchestration and automated response in security operations centers with a hyper-automation approach using agentic ai. 2025.
Isozaki et al. [2024] Isamu Isozaki, Manil Shrestha, Rick Console, and Edward Kim. Towards automated penetration testing: Introducing llm benchmark, analysis, and improvements. arXiv preprint arXiv:2410.17141, 2024.
Jin et al. [2023] Youngjin Jin, Eugene Jang, Jian Cui, Jin-Woo Chung, Yongjae Lee, and Seungwon Shin. Darkbert: A language model for the dark side of the internet. arXiv preprint arXiv:2305.08596, 2023.
Khan et al. [2024] Raihan Khan, Sayak Sarkar, Sainik Kumar Mahata, and Edwin Jose. Security threats in agentic ai system. arXiv preprint arXiv:2410.14728, 2024.
Labunets et al. Andrey Labunets, Nishit V Pandya, Ashish Hooda, Xiaohan Fu, and Earlence Fernandes. Fun-tuning: Characterizing the vulnerability of proprietary llms to optimization-based prompt injection attacks via the fine-tuning interface.
Laney [2024] Samuel P Laney. LLM-Directed Agent Models in Cyberspace. PhD thesis, Massachusetts Institute of Technology, 2024.
Lewis et al. [2020] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020.
Li et al. [2023] Chengshu Li, Jacky Liang, Andy Zeng, Xinyun Chen, Karol Hausman, Dorsa Sadigh, Sergey Levine, Li Fei-Fei, Fei Xia, and Brian Ichter. Chain of code: Reasoning with a language model-augmented code emulator. arXiv preprint arXiv:2312.04474, 2023.
Li et al. [2024] Jia Li, Ge Li, Xuanming Zhang, Yunfei Zhao, Yihong Dong, Zhi Jin, Binhua Li, Fei Huang, and Yongbin Li. Evocodebench: An evolving code generation benchmark with domain-specific evaluations. Advances in Neural Information Processing Systems, 37:57619–57641, 2024.
Li et al. [2025] Jia Li, Ge Li, Yongmin Li, and Zhi Jin. Structured chain-of-thought prompting for code generation. ACM Transactions on Software Engineering and Methodology, 34(2):1–23, 2025.
Malatji and Tolah [2024] Masike Malatji and Alaa Tolah. Artificial intelligence (ai) cybersecurity dimensions: a comprehensive framework for understanding adversarial and offensive ai. AI and Ethics, pages 1–28, 2024.
Muzsai et al. [2024] Lajos Muzsai, David Imolai, and András Lukács. Hacksynth: Llm agent and evaluation framework for autonomous penetration testing. arXiv preprint arXiv:2412.01778, 2024.
Oesch et al. [2025] Sean Oesch, Jack Hutchins, Phillipe Austria, and Amul Chaulagain. Agentic ai and the cyber arms race. Computer, 58(5):82–85, 2025.
Prasad et al. [2023] Archiki Prasad, Alexander Koller, Mareike Hartmann, Peter Clark, Ashish Sabharwal, Mohit Bansal, and Tushar Khot. Adapt: As-needed decomposition and planning with language models. arXiv preprint arXiv:2311.05772, 2023.
Sahoo et al. [2024] Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, and Aman Chadha. A systematic survey of prompt engineering in large language models: Techniques and applications. arXiv preprint arXiv:2402.07927, 2024.
Shavit et al. [2023] Yonadav Shavit, Sandhini Agarwal, Miles Brundage, Steven Adler, Cullen O'Keefe, Rosie Campbell, Teddy Lee, Pamela Mishkin, Tyna Eloundou, Alan Hickey, et al. Practices for governing agentic ai systems.
Research Paper, OpenAI, 2023.
Shen et al. [2024] Xiangmin Shen, Lingzhi Wang, Zhenyuan Li, Yan Chen, Wencheng Zhao, Dawei Sun, Jiashui Wang, and Wei Ruan. Pentestagent: Incorporating llm agents to automated penetration testing. arXiv preprint arXiv:2411.05185, 2024.
Sun et al. [2023] Simeng Sun, Yang Liu, Shuohang Wang, Chenguang Zhu, and Mohit Iyyer. Pearl: Prompting large language models to plan and execute actions over long documents. arXiv preprint arXiv:2305.14564, 2023.
Sun et al. [2024] Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, et al. Trustllm: Trustworthiness in large language models. arXiv preprint arXiv:2401.05561, 2024.
Wan et al. [2024] Shengye Wan, Cyrus Nikolaidis, Daniel Song, David Molnar, James Crnkovich, Jayson Grace, Manish Bhatt, Sahana Chennabasappa, Spencer Whitman, Stephanie Ding, et al. Cyberseceval 3: Advancing the evaluation of cybersecurity risks and capabilities in large language models. arXiv preprint arXiv:2408.01605, 2024.
Wang et al. [2022] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
Wang et al. [2024] Yaoxiang Wang, Zhiyong Wu, Junfeng Yao, and Jinsong Su. Tdag: A multi-agent framework based on dynamic task decomposition and agent generation. arXiv preprint arXiv:2402.10178, 2024.
Wang et al. [2025] Yaoxiang Wang, Zhiyong Wu, Junfeng Yao, and Jinsong Su. Tdag: A multi-agent framework based on dynamic task decomposition and agent generation. Neural Networks, page 107200, 2025.
Wei et al. [2022] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824–24837, 2022.
Xu et al. [2024] Jiacen Xu, Jack W Stokes, Geoff McDonald, Xuesong Bai, David Marshall, Siyue Wang, Adith Swaminathan, and Zhou Li. Autoattacker: A large language model guided system to implement automatic cyber-attacks. arXiv preprint arXiv:2403.01038, 2024.
Yao et al. [2023a] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Advances in neural information processing systems, 36:11809–11822, 2023.
Yao et al. [2023b] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023.
Yao et al. [2024] Yifan Yao, Jinhao Duan, Kaidi Xu, Yuanfang Cai, Zhibo Sun, and Yue Zhang. A survey on large language model (llm) security and privacy: The good, the bad, and the ugly. High-Confidence Computing, page 100211, 2024.
Zhang et al. [2022] Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought prompting in large language models. arXiv preprint arXiv:2210.03493, 2022.
{"source": "https://arxiv.org/html/2505.06913v1", "title": "RedTeamLLM: an Agentic AI framework for offensive security", "language": "en", "loader_type": "webbase", "source_url": "https://arxiv.org/html/2505.06913v1", "load_timestamp": "2025-05-21T10:22:31.908137", "doc_type": "html"}
Control Plane as a Tool: A Scalable Design Pattern for Agentic AI Systems
Sivasathivel Kandasamy [email protected]

Abstract
Agentic AI systems represent a new frontier in artificial intelligence, where agents—often based on large language models (LLMs)—interact with tools, environments, and other agents to accomplish tasks with a degree of autonomy. These systems show promise across a range of domains, but their architectural underpinnings remain immature. This paper conducts a comprehensive review of the types of agents, their modes of interaction with the environment, and the infrastructural and architectural challenges that emerge. We identify a gap in how these systems manage tool orchestration at scale and propose a reusable design abstraction: the “Control Plane as a Tool” pattern. This pattern allows developers to expose a single tool interface to an agent while encapsulating modular tool routing logic behind it. We position this pattern within the broader context of agent design and argue that it addresses several key challenges in scaling, safety, and extensibility.

1 Introduction
Agents in software are not a new concept. The foundational definition can be traced back to Wooldridge and Jennings [14], who defined software agents as autonomous, goal-directed computational entities capable of perceiving and acting upon their environment. Historically, such agents have been explored across domains like robotics, multi-agent systems, and distributed computing. The advent of generative AI—especially large language models (LLMs) such as GPT-4 [12], Claude [2], and Gemini [8]—has dramatically transformed this paradigm. LLM-driven agents are no longer bound by pre-coded rules; they now exhibit emergent reasoning, multi-step planning, memory awareness, and flexible tool use. This evolution has given rise to a new class of intelligent systems: Agentic AI. We define Agentic AI as autonomous software programs, often LLM-powered, that can perceive their environment, plan behaviors, invoke external tools or APIs, and interact with both digital environments and other agents to fulfill predefined goals. These systems are characterized by goal-seeking autonomy, tool adaptability, contextual memory, and multi-agent coordination [9, 1, 18]. Agentic AI has rapidly entered mainstream discourse, with organizations seeking to embed agent-based workflows into domains such as customer service, software engineering, and operations. While some cases demonstrate meaningful gains [5], others are driven by hype cycles and premature generalization [4]. The core production value of Agentic AI lies in: • Autonomous Decision-Making: Dynamic task planning and real-time behavioral adaptation. • Multi-Tool Integration: Composition across APIs, search interfaces, and databases. • Contextual Reasoning: Use of memory and history for iterative improvement. • Composable Workflows: Encapsulation of agents as modular, role-oriented microservices. To realize these capabilities, developers rely on a combination of agentic design patterns, including: • Reflection Pattern (ReAct) [17]: Alternates between reasoning and acting.
• Tool Use Pattern [7]: Focuses on agents that use tools, pieces of code an agent invokes to observe or act, in order to achieve their goals. • Hierarchical Agentic Pattern [16]: Decomposes planning across layered sub-agents. • Collaborative Agentic Pattern [15, 6]: Assigns roles to specialized agents that cooperate toward a shared objective. In Agentic AI systems, a tool can thus be defined as a piece of code that the agent uses to observe or effect change in pursuit of its goal. Most production-grade systems employ hybrid designs, mixing multiple patterns to meet business constraints. In parallel, several frameworks have emerged to reduce orchestration complexity and abstract common operations: • LangChain [10]: A Python framework that chains prompts, tools, and memory components. Focus: prompt-based orchestration and memory integration. Limitation: tight coupling of agent logic and tool invocation leads to brittle workflows. • LangGraph [11]: A graph-based orchestration runtime supporting condition-based tool chains and node-based state handling. Focus: declarative, recoverable workflows. Limitation: requires explicit node wiring and is less dynamic for runtime tool adaptation. • AutoGen [15]: An LLM-based multi-agent library emphasizing role separation and dialog coordination. Focus: agent-to-agent conversation and memory persistence. Limitation: orchestration is hardcoded; lacks modular tool routing logic. • CrewAI [6]: A lightweight framework for role-based multi-agent collaboration. Focus: domain-specialized agents working in crews. Limitation: static role definitions; limited support for dynamic role/tool mutation. • Anthropic MCP [3]: A schema-centric protocol enabling LLMs to securely invoke external tools. Focus: interoperability and tool safety via typed interfaces. Limitation: steep learning curve; orchestration logic is implicit and non-modular. Despite these advancements, productionizing Agentic AI remains challenging due to: • Tool Orchestration Complexity: Scaling APIs without prompt bloat or entangled logic [13, 7]. • Governance and Observability: Ensuring traceability and enforcement of tool usage policies [15, 11, 3]. • Memory Synchronization: Maintaining consistent state across workflows [6, 10]. • Cross-Agent Coordination: Preventing task collisions and misaligned objectives [15, 17]. • Adaptability vs. Safety: Controlling exploratory behavior while preserving reliability [4, 5]. These challenges reveal a deeper architectural design gap. This paper focuses specifically on the challenges related to tool usage by agents. The major contribution of this article is an architectural design pattern that addresses the following limitations in tool handling in Agentic AI systems: 1. Add, remove, or modify tools without changing agent code or prompts. 2. Learn and personalize tool usage for specific tasks and users. 3. Track tool usage and enforce organizational or compliance policies. 4. Select tools dynamically based on context or metadata. 5. Reduce the learning curve for developers building agentic systems. 6. Enable and simplify distributed, collaborative development of tools across teams. The next section introduces the proposed architectural pattern—Control Plane as a Tool—and discusses how it addresses the identified gaps in tool orchestration within Agentic AI systems.
In the following sections, we demonstrate the application of this pattern by designing a simplified chatbot system and outline future directions for extending this work across other facets of Agentic AI architecture.

2 Proposed Design Pattern: Control Plane as a Tool
This section introduces a reusable design pattern, Control Plane as a Tool, that modularizes and enhances tool orchestration in Agentic AI systems. The pattern aims to decouple tool management from the agent's reasoning and decision layers, thereby enabling flexibility, observability, and scalability across systems. Naturally, this pattern can be considered an extension of the Tool Use pattern.

2.1 Design Goals
The Control Plane as a Tool pattern is driven by the following goals: • Modularity: Tool logic should be abstracted from the agent, allowing tools to be modified, added, or removed without changing the agent's prompt or control logic. • Dynamic Selection: Tool invocation should be dynamic, based on task requirements, metadata, user profiles, or past interactions. • Governance and Observability: Tool usage should be auditable, allowing the enforcement of organizational or safety policies. • Cross-Framework Portability: As a design pattern, it is framework-agnostic and can be used with any framework. • Developer Usability: Developers should be able to use a single tool interface and offload orchestration complexity to the control plane. • Support for Personalization: Agents should be able to learn and adapt tool selection policies based on feedback or task success.

2.2 Pattern Structure
In simple terms, the Control Plane, in this context, is a piece of software that configures and routes data between the configured tools and the agents. The set of configured tools forms the Tools Layer, and the one or more agents configured to use those tools through the control plane form the Agentic Layer. The Control Plane is exposed to the agent as a tool, similar to other callable tools (e.g., search, calculator, database). Internally, the control plane executes the following sequence: 1. The agent queries the control plane with an intent or query. 2. The control plane parses the tools' metadata and retrieves relevant candidate tools. 3. It applies routing logic (e.g., semantic similarity, user context, policy filters, user preference). 4. It calls the appropriate tool and logs the interaction. 5. It returns the output of the tool to the agent. This makes orchestration transparent from the agent's point of view, supporting reuse, caching, validation, and dynamic composition; a minimal sketch is given below. Figure 1 shows an overview of the high-level Control Plane. Figure 1b also shows that the proposed pattern enables interaction between agents through the control plane. (a) Agent-Tool Separation Through the Control Plane (b) Agents as Tools Through the Control Plane Figure 1: How the control plane mediates the interaction of agents and tools
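As an illustration of this five-step sequence, the following minimal Python sketch exposes a control plane to an agent as a single callable tool and routes requests to registered tools. All names are our own, and a naive keyword-overlap score stands in for the semantic-similarity routing described above; this is an expository sketch, not a reference implementation.

```python
# Minimal sketch (illustrative assumptions, not a reference implementation):
# a control plane exposed as one tool, routing to registered tools by a
# naive keyword-overlap score standing in for semantic similarity.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    fn: Callable[[str], str]

class ControlPlane:
    def __init__(self):
        self._tools: dict[str, Tool] = {}
        self.audit_log: list[dict] = []      # governance and observability

    def register(self, tool: Tool) -> None:
        """Tools can be added or swapped without touching the agent."""
        self._tools[tool.name] = tool

    def _score(self, query: str, tool: Tool) -> int:
        # Stand-in for semantic similarity: count shared words.
        return len(set(query.lower().split())
                   & set(tool.description.lower().split()))

    def __call__(self, query: str) -> str:
        """The single tool interface the agent sees."""
        tool = max(self._tools.values(), key=lambda t: self._score(query, t))
        result = tool.fn(query)
        self.audit_log.append({"query": query, "tool": tool.name})
        return result

# Usage: the agent binds only to `control_plane`; tools evolve independently.
control_plane = ControlPlane()
control_plane.register(Tool("search", "search the web for information",
                            lambda q: f"search results for {q!r}"))
control_plane.register(Tool("calculator", "evaluate arithmetic math expressions",
                            lambda q: "42"))
print(control_plane("search information about design patterns"))
```

Note how the agent-facing surface is a single callable: adding a third tool changes only the registry, never the agent's prompt or control logic, which is the modularity goal of Section 2.1.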
The internals of the Control Plane are shown in Figure 2. Figure 2: Control Plane Architecture Agents and external systems are expected to interact with the control plane through an API endpoint or a CLI. The Request Router module decodes each incoming request and routes it to the appropriate module, namely the Registration Module, the Invocation Module, or the Feedback Integration Module. The main goal of the Registration Module is to register the interacting agents, tools, validation rules, and metrics. The Invocation Module helps invoking agents query a tool or other registered agents. The Input Validator assesses the inputs passed for data integrity, safety, and alignment, based on the validation rules in the validation registry. Once the inputs are validated, the Intent Resolver module tries to understand the invoking agent's intentions in order to identify the correct tool or toolset. The Routing Handler, based on the resolved intent, identifies the tools (including agents) and their sequence of invocation. Once the outputs from the tools are consolidated, the Output Validator validates them again to make sure the output complies with the registered rules and regulations. Once the results are validated, they are returned to the invoking agent. The Feedback Module helps integrate user feedback into the system, so that tool selection and sequencing can be personalized according to user preference. Though it is an optional module, it is highly recommended for performance and accuracy. The control plane is registered as a tool in an agentic system, so that each agent has to bind to only one tool, simplifying the process. The proposed architecture is considered a design pattern because it can be implemented either through an agentic approach or as a set of microservices. Both have advantages and disadvantages: the non-agentic approach keeps complexity to a minimum and may be less expensive, whereas the agentic approach provides more flexibility and extensibility. A sketch of the registration and validation path follows.
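To make the registration and validation path concrete, the sketch below pairs each registered tool with a validation rule that the Input Validator checks before invocation, and applies a trivial output check afterwards. The rule format and the module boundaries are our own simplifications of the architecture described above, not the paper's implementation.

```python
# Illustrative sketch (our simplification of the described modules): a tool
# registry whose entries carry validation rules, checked before invocation
# by the Input Validator and after invocation by the Output Validator.
from typing import Callable

class ValidationError(Exception):
    pass

class Registry:
    def __init__(self):
        self.tools: dict[str, dict] = {}

    def register(self, name: str, fn: Callable[[str], str],
                 rule: Callable[[str], bool]) -> None:
        """Registration Module: store the tool with its validation rule."""
        self.tools[name] = {"fn": fn, "rule": rule}

class Invoker:
    def __init__(self, registry: Registry):
        self.registry = registry

    def invoke(self, name: str, payload: str) -> str:
        entry = self.registry.tools[name]
        if not entry["rule"](payload):          # Input Validator
            raise ValidationError(f"input rejected for tool {name!r}")
        output = entry["fn"](payload)
        if not output:                          # Output Validator (trivial rule)
            raise ValidationError(f"empty output from tool {name!r}")
        return output

# Usage: a shell tool that only accepts an allow-listed command prefix.
registry = Registry()
registry.register("shell",
                  fn=lambda cmd: f"ran {cmd!r}",
                  rule=lambda cmd: cmd.split()[0] in {"ls", "cat", "echo"})
print(Invoker(registry).invoke("shell", "ls -la"))
```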
3 Comparison with the Model Context Protocol
The Model Context Protocol (MCP) [3] has an objective similar to that of the proposed approach; it therefore becomes necessary to examine the similarities and differences between the two. Table 1 shows the similarities between the two systems. (Disclaimer. Model Context Protocol (MCP) is a tool interface specification introduced by Anthropic. This paper does not implement, replicate, or reverse-engineer MCP. All comparisons are based on publicly available documentation and are intended solely for academic discussion and architectural contrast.)

3.1 Similarities Between the Control Plane and MCP
The proposed Control Plane architecture shares several goals and structural traits with Anthropic's Model Context Protocol (MCP), particularly in its emphasis on safety, tool registration, and structured invocation.

Table 1: Similarities Between Control Plane and MCP
Tool Registration: Both systems require structured metadata or schema registration for external tools. MCP uses JSON Schema; the Control Plane maintains a Tool Registry.
Input Validation: Both validate tool inputs using schema constraints. MCP enforces JSON Schema at runtime, while the Control Plane uses a dedicated Input Validator module.
Invocation Routing: MCP tools are invoked based on prompt-matched schemas. The Control Plane routes requests via a Routing Handler using similarity search or rule-based matching.
Structured Interfaces: Both emphasize deterministic tool behavior through formalized I/O specifications to reduce prompt ambiguity.

3.2 Key Differences Between the Control Plane and MCP
While the Control Plane and MCP share structural themes, their operational design and goals diverge significantly. The Control Plane emphasizes orchestration, governance, and multi-agent extensibility, whereas MCP focuses on standardizing tool invocation for a single LLM context. Table 2 shows how they differ from one another.

Table 2: Key Differences Between Control Plane (this work) and MCP
Architecture Type: external modular orchestrator vs. embedded schema-based interface.
Routing Strategy: rule-based and similarity-based routing via the Routing Handler vs. implicit function selection via schema matching in the prompt.
Agent Scope: supports multiple agents and decoupled planning vs. coupled to a single Claude-based LLM instance.
Governance and Tracking: includes a Usage Tracker, policy enforcement, and failure handling vs. no built-in governance or logging.
Learning and Feedback: optional Feedback Integrator for experience-based routing vs. no feedback loop or adaptive learning.
Tool Chaining Support: enables explicit chaining and dependency-based tool routing vs. no chaining logic; tools treated atomically.
Tool Fallback and Safety: Failure Handler supports default responses and recovery vs. no structured fallback mechanism.
Extensibility: LLM-agnostic and framework-agnostic vs. Claude-specific runtime binding.

4 Conclusion and Future Directions
The advent of generative models, and their role in the development of Agentic AI systems, has also led to the rise of many frameworks. One of the key challenges in developing such systems is orchestrating tools in a simple, safe, and manageable manner in production, and the lack of composable, minimal design patterns is limiting the scalability of agentic AI. The proposed pattern was developed with this, and model-agnosticism, in mind. The proposed “Control Plane as a Tool” pattern allows developers to encapsulate routing logic and enforce governance across environments. While this work addresses many of the current challenges in the development of Agentic AI systems, the pattern has the potential to be extended with many more features. Future work will realize a framework-agnostic implementation and evaluate performance, safety, and extensibility across larger multi-agent deployments.

Author Disclaimer. This work was independently conceived and executed by the author without financial support from any institution, company, or donor organization. It was not funded by the author's current employer or by any external grant, donation, or sponsorship. All opinions and technical claims are solely those of the author.

Acknowledgements: This work would not have been possible without the unwavering support and understanding of my wife, Dharani, and my kids, Shree and Kart, even when I had to work late nights.

References
Aisera [2024] Aisera. Agentic ai: What it means for your business. https://aisera.com/blog/agentic-ai, 2024. Accessed: 2024-04-23.
Anthropic [2023a] Anthropic. Introducing claude. https://www.anthropic.com/index/introducing-claude, 2023a.
Anthropic [2023b] Anthropic. Tool use with claude 2 and the mcp. https://docs.anthropic.com/claude/docs/tool-use, 2023b.
Bommasani and et al. [2022] Rishi Bommasani and et al. Opportunities and risks of foundation models. Communications of the ACM, 66(1):67–77, 2022.
Bubeck and et al. [2023] Sebastien Bubeck and et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
CrewAI [2024] CrewAI. Crewai: Build agentic workflows with multiple roles. https://docs.crewai.com, 2024.
DeepLearning.AI [2024] DeepLearning.AI. Agentic design patterns part 3: Tool use. https://www.deeplearning.ai/the-batch/agentic-design-patterns-part-3-tool-use/, 2024. Accessed: 2024-04-23.
DeepMind [2024] Google DeepMind. Gemini 1.5 technical report.
https://deepmind.google/technologies/gemini/, 2024.
IBM [2024] IBM. What is agentic ai? https://www.ibm.com/think/topics/agentic-ai, 2024. Accessed: 2024-04-23.
LangChain [2023] LangChain. Langchain documentation. https://docs.langchain.com, 2023.
LangGraph [2024] LangGraph. Langgraph documentation. https://www.langgraph.dev, 2024.
OpenAI [2023] OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
Qin and et al. [2023] Guang Qin and et al. Toolllm: Facilitating llms to master 16000+ real tools. arXiv preprint arXiv:2310.06832, 2023.
Wooldridge and Jennings [1995] Michael Wooldridge and Nicholas R. Jennings. Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2):115–152, 1995.
Wu and et al. [2023] Eric Wu and et al. Autogen: Enabling next-generation multi-agent llm applications. arXiv preprint arXiv:2309.12307, 2023.
Xu and et al. [2023] Muhan Xu and et al. Hierarchical planning with llms: A modular framework. arXiv preprint arXiv:2311.09541, 2023.
Yao and et al. [2023] Shinn Yao and et al. React: Synergizing reasoning and acting in language models. In ICLR, 2023.
Yin and et al. [2024] Hang Yin and et al. Agentic large language models: A survey. arXiv preprint arXiv:2503.23037, 2024.
{"source": "https://arxiv.org/html/2505.06817v1", "title": "Control Plane as a Tool: A Scalable Design Pattern for Agentic AI Systems", "language": "en", "loader_type": "webbase", "source_url": "https://arxiv.org/html/2505.06817v1", "load_timestamp": "2025-05-21T10:22:32.275895", "doc_type": "html"}