
Guardrail Management Example

This guide provides a step-by-step walkthrough for adding and validating guardrails with the GuardrailManager in the agenticaiframework package. It is intended for developers who want to enforce content safety, compliance, or quality rules on AI-generated outputs.

Enterprise Compliance

Part of 237 enterprise modules including 18 compliance/audit modules and 12 guardrail types. See Enterprise Documentation.

Prerequisites & ConfigurationΒΆ

  • Installation: Ensure agenticaiframework is installed and accessible in your Python environment.
  • No additional configuration is required for this example.
  • Python Version: Compatible with Python 3.10+.

Code

Python
import logging

from agenticaiframework.guardrails import GuardrailManager

# Configure logging so INFO messages are printed; the bare message format
# matches the expected output shown below.
logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger(__name__)

if __name__ == "__main__":
    guardrail_manager = GuardrailManager()

    # Add a guardrail to prevent profanity
    guardrail_manager.add_guardrail("No profanity", lambda text: "badword" not in text)

    # Validate compliant and non-compliant outputs. Note that logging takes
    # %-style placeholders, not print-style comma-separated arguments.
    logger.info("Compliant Output Valid: %s", guardrail_manager.validate("This is clean text."))
    logger.info("Non-Compliant Output Valid: %s", guardrail_manager.validate("This contains badword."))

Step-by-Step Execution

  1. Import the Class: Import GuardrailManager from agenticaiframework.guardrails.

  2. Instantiate the Manager: Create an instance of GuardrailManager to manage guardrail rules.

  3. Add a Guardrail: Use add_guardrail with two arguments (see the sketch after this list):

     • name: A descriptive name for the guardrail.
     • validation_fn: A function that returns True if the content passes, False otherwise.

  4. Validate Compliant Output: Call validate with a text string that should pass the guardrail.

  5. Validate Non-Compliant Output: Call validate with a text string that should fail the guardrail.
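
For anything beyond a quick demo, named validation functions are easier to test and reuse than inline lambdas, and a single manager can hold several rules at once. A minimal sketch, assuming only that add_guardrail accepts any callable mapping a string to a bool (the term list and length budget are illustrative placeholders):

Python
from agenticaiframework.guardrails import GuardrailManager

BLOCKED_TERMS = {"badword", "otherbadword"}  # placeholder term list

def no_profanity(text: str) -> bool:
    """Pass only if the text contains none of the blocked terms."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def max_length(text: str) -> bool:
    """Pass only if the text stays within a 2,000-character budget."""
    return len(text) <= 2000

manager = GuardrailManager()
manager.add_guardrail("No profanity", no_profanity)
manager.add_guardrail("Max length", max_length)

With several rules registered, a text presumably has to satisfy all of them for validate to return True.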

Best Practice: Keep guardrail functions efficient and deterministic to avoid performance bottlenecks in production.
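
One way to follow this advice is to precompile any regular expressions at import time and keep guardrails free of network or model calls, so each check is cheap and always returns the same answer for the same input. A sketch, with the pattern contents as an illustrative placeholder:

Python
import re

# Compiled once at import time; searching a precompiled pattern is cheap per call.
PROFANITY_PATTERN = re.compile(r"\b(badword|otherbadword)\b", re.IGNORECASE)

def no_profanity(text: str) -> bool:
    """Deterministic profanity check with no per-call setup cost."""
    return PROFANITY_PATTERN.search(text) is None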

Expected Input

No user input is required; the script uses hardcoded values for demonstration purposes. In production, guardrails could be dynamically loaded from configuration files or policy management systems.
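
A sketch of what config-driven loading could look like, using a hypothetical guardrails.json file containing a list of rule names and a small in-code registry; none of these names are part of the package's API:

Python
import json

from agenticaiframework.guardrails import GuardrailManager

# Hypothetical registry mapping rule names in the config file to callables.
RULES = {
    "no_profanity": lambda text: "badword" not in text.lower(),
    "max_length": lambda text: len(text) <= 2000,
}

def load_guardrails(path: str) -> GuardrailManager:
    """Build a manager from a JSON file such as ["no_profanity", "max_length"]."""
    manager = GuardrailManager()
    with open(path) as f:
        for rule_name in json.load(f):
            manager.add_guardrail(rule_name, RULES[rule_name])
    return manager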

Expected Output

Text Only
Compliant Output Valid: True
Non-Compliant Output Valid: False

How to Run

Run the example from the project root:

Bash
python examples/guardrails_example.py

If the project is installed as a package (so that examples is importable), you can also run it as a module:

Bash
python -m examples.guardrails_example

Tip: Combine GuardrailManager with LLMManager to automatically validate AI-generated outputs before returning them to users.
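
LLMManager's exact interface is not covered in this example, so the sketch below assumes a hypothetical generate(prompt) method; it illustrates the validate-before-return pattern rather than the real API:

Python
from agenticaiframework.guardrails import GuardrailManager

def safe_generate(llm_manager, guardrail_manager: GuardrailManager, prompt: str) -> str:
    """Return the model's output only if it passes every registered guardrail.

    Assumes a hypothetical llm_manager.generate(prompt) -> str method.
    """
    output = llm_manager.generate(prompt)
    if not guardrail_manager.validate(output):
        raise ValueError("Generated output failed guardrail validation")
    return output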