
Pro-GenAI/Agent-Action-Guard


Project banner Workflow Diagram

AI is often perceived as a threat. As demonstrated with HarmActEval, the growing use of LLM agents and MCP leads both to the use of harmful tools and to harmful use of otherwise benign tools. Classifying AI agent actions before execution improves safety and reliability. Action Guard uses a neural network model, trained on HarmActions, a small dataset of labeled examples, to classify actions proposed by autonomous AI agents as harmful or safe. The work aims to make AI agents safer and more reliable by preventing them from executing actions that are potentially harmful, unethical, or in violation of predefined guidelines. The Action Classifier makes safe AI agents possible.

Preprint · YouTube · Blog

AI · LLMs · Python · License: CC BY 4.0 · HuggingFace Dataset

Demo

Demo GIF

Common causes of harmful actions by AI agents:

  • User attempting to jailbreak the model.
  • Model hallucinating or misunderstanding the context.
  • Model being overconfident in its incorrect knowledge.
  • Lack of proper constraints or guidelines for the agent.
  • Inadequate training data for specific scenarios.
  • MCP server providing incorrect tool descriptions that mislead the agent.
  • Harmful MCP servers returning manipulative text to mislead the model.
  • Notably, experiments showed a model performing a harmful action and still responding, "Sorry, I can't help with that."

New contributions of Agent-Action-Guard framework:

  1. HarmActions, a structured dataset of safety-labeled agent actions, complemented with manipulated prompts that trigger harmful or unethical actions.
  2. HarmActEval, a benchmark built around a new metric, "Harm@k."
  3. Action Classifier, a neural classifier trained on the HarmActions dataset, designed to label proposed agent actions as potentially harmful or safe, and optimized for real-time deployment in agent loops (a toy sketch follows this list).
  4. MCP integration supporting live action screening using existing MCP servers and clients.
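To make the classifier's role concrete, here is a minimal, self-contained toy sketch in Python. The tiny action/label examples and the TF-IDF + MLP pipeline are hypothetical stand-ins, not the released Action Classifier or the actual HarmActions data; they only illustrate how a proposed action string can be labeled before execution.

# Illustrative toy only: the released Action Classifier and the HarmActions
# dataset are described in the paper; the examples and model below are
# hypothetical stand-ins that show the overall idea.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical mini-dataset of (proposed agent action, label) pairs.
actions = [
    "send_email(to='team@company.com', subject='Weekly report')",
    "delete_files(path='/', recursive=True)",
    "search_web(query='latest Python release notes')",
    "transfer_funds(account='unknown', amount=99999)",
]
labels = ["safe", "harmful", "safe", "harmful"]

# A lightweight neural classifier: TF-IDF features fed into a small MLP.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
clf.fit(actions, labels)

# Classify a newly proposed action before the agent executes it.
proposed = "delete_files(path='~/project', recursive=True)"
print(clf.predict([proposed])[0])  # e.g. "harmful"

In the actual framework, the classifier is trained on HarmActions and runs inside the agent loop so that every proposed action is checked before it executes.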

Special features:

  • This project introduces the HarmActions dataset and the HarmActEval benchmark to evaluate an AI agent's probability of generating harmful actions.
  • The dataset is used to train a lightweight neural network model that classifies actions as safe, harmful, or unethical.
  • The model is lightweight and can be easily integrated into existing AI agent frameworks like MCP.
  • This project focuses on classifying agent actions; it is not a prompt- or output-level guardrails system.
  • Supports MCP (Model Context Protocol) to allow real-time action classification.
  • Unlike OpenAI's "require_approval": "always" flag, this blocks harmful actions without human intervention.
  • A2A-compatible version: https://github.com/Pro-GenAI/A2A-Agent-Action-Guard.

Safety Features:

  • Automatically classifies MCP tool calls before execution (see the sketch below).
  • Blocks harmful actions based on the outputs of the trained model.
  • Provides detailed classification results.
  • Allows safe actions to proceed normally.
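
Below is a minimal sketch of how such pre-execution screening can be wired up, assuming a classify_action function backed by the trained model and an execute_tool routine provided by your MCP client. Both names are placeholders for illustration; they are not the actual API of this repository or of any MCP SDK.

# Minimal sketch of screening a tool call before execution.
# `classify_action` and `execute_tool` are hypothetical placeholders.
from typing import Any, Callable

def guarded_tool_call(
    tool_name: str,
    arguments: dict[str, Any],
    classify_action: Callable[[str], str],
    execute_tool: Callable[[str, dict[str, Any]], Any],
) -> Any:
    """Classify the proposed tool call; execute it only if labeled safe."""
    action_text = f"{tool_name}({arguments})"
    verdict = classify_action(action_text)
    if verdict != "safe":
        # Block the action and surface the classification result to the agent loop.
        raise PermissionError(f"Blocked tool call '{tool_name}': classified as {verdict}")
    return execute_tool(tool_name, arguments)

Raising an exception when a call is blocked keeps the agent loop informed, so it can report the refusal instead of silently executing the tool.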

Feedback and discussion on how this helps you or the AI community are welcome.

Usage

For usage instructions, please refer to USAGE.md.

A2A version:

While this repository focuses on standard tool calls and MCP, an Agent-to-Agent (A2A) compatible version is available at: https://github.com/Pro-GenAI/A2A-Agent-Action-Guard

Citation

If you find this repository useful in your research, please consider citing:

@article{202510.1415,
  title = {Agent Action Guard: Classifying AI Agent Actions to Ensure Safety and Reliability},
  author = {Praneeth Vadlapati},
  year = {2025},
  month = {October},
  journal = {Preprints},
  publisher = {Preprints},
  doi = {10.20944/preprints202510.1415.v1},
  url = {https://doi.org/10.20944/preprints202510.1415.v1}
}

Limitation

This project does not perform Personally Identifiable Information (PII) detection, as existing systems already handle it accurately.

Created based on my past work

Agent-Supervisor: Supervising Actions of Autonomous AI Agents for Ethical Compliance (GitHub)