Guardrail Management Example
This guide provides a professional, step-by-step walkthrough for adding and validating guardrails using the GuardrailManager in the agenticaiframework package. It is intended for developers who want to enforce content safety, compliance, or quality rules in AI-generated outputs.
Enterprise Compliance
Part of 237 enterprise modules including 18 compliance/audit modules and 12 guardrail types. See Enterprise Documentation.
Prerequisites & Configuration
- Installation: Ensure `agenticaiframework` is installed and accessible in your Python environment. No additional configuration is required for this example.
- Python Version: Compatible with Python 3.10+.
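Assuming the package is published under the name used throughout this guide (verify against your distribution channel), installation might look like:

```bash
# Package name assumed from this guide; adjust if your index uses a different name.
pip install agenticaiframework
```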
Code

Step-by-Step Execution
1. Import the Class: Import `GuardrailManager` from `agenticaiframework.guardrails`.
2. Instantiate the Manager: Create an instance of `GuardrailManager` to manage guardrail rules.
3. Add a Guardrail: Use `add_guardrail` with:
    - `name`: A descriptive name for the guardrail.
    - `validation_fn`: A function that returns `True` if the content passes, `False` otherwise.
4. Validate Compliant Output: Call `validate` with a text string that should pass the guardrail.
5. Validate Non-Compliant Output: Call `validate` with a text string that should fail the guardrail.
Best Practice: Keep guardrail functions efficient and deterministic to avoid performance bottlenecks in production.
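The steps above can be sketched end to end. The stand-in class below is not the real implementation; it only mirrors the interface this guide describes (`add_guardrail(name, validation_fn)` and `validate(text)`) so the flow is runnable without the package installed. In your project, import `GuardrailManager` from `agenticaiframework.guardrails` instead.

```python
class GuardrailManager:
    """Stand-in mirroring the interface described in this guide."""

    def __init__(self):
        self._guardrails = {}

    def add_guardrail(self, name, validation_fn):
        # Register a named rule; validation_fn returns True when content passes.
        self._guardrails[name] = validation_fn

    def validate(self, text):
        # Content passes only if every registered guardrail approves it.
        return all(fn(text) for fn in self._guardrails.values())


# Step 2: instantiate the manager.
manager = GuardrailManager()

# Step 3: add a guardrail that rejects a banned word.
manager.add_guardrail(
    name="no_banned_word",
    validation_fn=lambda text: "forbidden" not in text.lower(),
)

# Step 4: compliant output passes.
print(manager.validate("This response is polite and safe."))   # True

# Step 5: non-compliant output fails.
print(manager.validate("This contains a FORBIDDEN phrase."))   # False
```

Keeping each `validation_fn` a pure function of its input, as here, is what makes the Best Practice note above achievable: deterministic rules are cheap to run and easy to test in isolation.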
Expected Input
No user input is required; the script uses hardcoded values for demonstration purposes. In production, guardrails could be dynamically loaded from configuration files or policy management systems.
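One way to make the production scenario concrete: derive `(name, validation_fn)` pairs from a policy config. The config layout, rule names, and `build_guardrails` helper below are illustrative assumptions, not part of the package's documented API.

```python
import json

# Hypothetical policy config; in production this could come from a file
# or a policy management system rather than an inline string.
CONFIG = json.loads("""
{
  "max_length": 280,
  "banned_words": ["forbidden", "secret"]
}
""")


def build_guardrails(config):
    """Turn a policy config into (name, validation_fn) pairs."""
    rules = []
    max_len = config.get("max_length")
    if max_len is not None:
        rules.append(("max_length", lambda text: len(text) <= max_len))
    banned = [w.lower() for w in config.get("banned_words", [])]
    if banned:
        rules.append((
            "banned_words",
            lambda text: not any(w in text.lower() for w in banned),
        ))
    return rules


for name, fn in build_guardrails(CONFIG):
    print(name, fn("a short, clean reply"))
```

Each pair can then be registered via `add_guardrail`, so policy changes require a config edit rather than a code change.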
Expected Output
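As an illustration only (the exact messages depend on the library version and its logging configuration), validating one compliant and one non-compliant string would be expected to yield results along these lines:

```
Validation result (compliant text): True
Validation result (non-compliant text): False
```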
How to Run
Run the example from the project root:
If installed as a package, you can also run it from anywhere:
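The script path and module name below are assumptions for illustration; substitute the actual location of the example in your checkout or installed distribution:

```bash
# Hypothetical path — adjust to where the example lives in your project.
python examples/guardrail_management.py

# Hypothetical module name — adjust if the package ships the example elsewhere.
python -m agenticaiframework.examples.guardrail_management
```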
Tip: Combine `GuardrailManager` with `LLMManager` to automatically validate AI-generated outputs before returning them to users.
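A minimal sketch of that pairing, under stated assumptions: both classes below are stand-ins (the real `GuardrailManager` and `LLMManager` live in `agenticaiframework`), and `safe_generate` is a hypothetical helper illustrating the gating pattern, not a documented API.

```python
class GuardrailManager:
    """Stand-in mirroring the interface described in this guide."""

    def __init__(self):
        self._guardrails = {}

    def add_guardrail(self, name, validation_fn):
        self._guardrails[name] = validation_fn

    def validate(self, text):
        return all(fn(text) for fn in self._guardrails.values())


class FakeLLM:
    """Stand-in for an LLM client; returns a canned response."""

    def generate(self, prompt):
        return f"Echo: {prompt}"


def safe_generate(llm, guardrails, prompt, fallback="[response withheld]"):
    """Return the model output only if every guardrail passes."""
    output = llm.generate(prompt)
    return output if guardrails.validate(output) else fallback


guards = GuardrailManager()
guards.add_guardrail("no_banned_word", lambda t: "forbidden" not in t.lower())

print(safe_generate(FakeLLM(), guards, "hello"))        # Echo: hello
print(safe_generate(FakeLLM(), guards, "forbidden!"))   # [response withheld]
```

The design point is the single choke point: every generated string passes through `validate` before leaving the function, so adding a new policy never requires touching the generation code.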