The Feedback Tool is a specialized StackOneTool implementation that collects explicit, user-consented qualitative feedback about the StackOne tools experience. Unlike the implicit feedback system (5.1), which automatically emits behavioral data to LangSmith, this tool requires explicit user permission before submitting feedback to the StackOne API.
This document covers the tool's architecture, validation logic, execution model, and integration patterns. For automated behavioral tracking, see 5.1 Implicit Feedback System. For general tool execution concepts, see 3.2 Tool Execution Model.
Sources: README.md:283-308, stackone_ai/feedback/tool.py:1-227
The Feedback Tool extends the base StackOneTool class with enhanced validation and multi-account submission capabilities. It consists of two primary components: the FeedbackInput Pydantic model for strict input validation, and the FeedbackTool class that overrides the execute method to support batch submissions.
Sources: stackone_ai/feedback/tool.py:20-64, stackone_ai/feedback/tool.py:66-146, stackone_ai/feedback/tool.py:148-227
| Component | File Path | Responsibility |
|---|---|---|
| FeedbackInput | stackone_ai/feedback/tool.py:20-64 | Pydantic model for input validation with custom validators |
| FeedbackTool | stackone_ai/feedback/tool.py:66-146 | Extended StackOneTool with multi-account submission logic |
| create_feedback_tool() | stackone_ai/feedback/tool.py:148-227 | Factory function that constructs FeedbackTool instances |
Sources: stackone_ai/feedback/tool.py:1-227
The FeedbackInput Pydantic model enforces strict validation rules to ensure data quality before submission. All string fields are trimmed and checked for non-empty content after whitespace removal.
Sources: stackone_ai/feedback/tool.py:27-63
The validators at stackone_ai/feedback/tool.py:27-63 implement the following logic:
| Field | Type | Validation | Error Message |
|---|---|---|---|
| feedback | str | Must be non-empty after .strip() | "Feedback must be a non-empty string" |
| account_id | str \| list[str] | Single: non-empty after strip; list: at least one non-empty element | "Account ID must be a non-empty string", "At least one account ID is required", "At least one valid account ID is required" |
| tool_names | list[str] | At least one non-empty element after strip | "At least one tool name is required" |
Sources: stackone_ai/feedback/tool.py:27-63, tests/test_feedback.py:48-156
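The trim-and-check rules in the table above can be sketched as plain functions. This is a stdlib-only stand-in for the Pydantic validators in FeedbackInput, not the SDK's actual code; the error strings follow the table:

```python
def validate_feedback(value: str) -> str:
    # Reject empty or whitespace-only feedback, mirroring the table above.
    if not isinstance(value, str) or not value.strip():
        raise ValueError("Feedback must be a non-empty string")
    return value.strip()

def validate_account_id(value):
    # Accept either a single string or a list of strings.
    if isinstance(value, str):
        if not value.strip():
            raise ValueError("Account ID must be a non-empty string")
        return value.strip()
    if not value:
        raise ValueError("At least one account ID is required")
    cleaned = [v.strip() for v in value if isinstance(v, str) and v.strip()]
    if not cleaned:
        raise ValueError("At least one valid account ID is required")
    return cleaned

def validate_tool_names(value):
    # At least one tool name must survive whitespace trimming.
    cleaned = [v.strip() for v in value if isinstance(v, str) and v.strip()]
    if not cleaned:
        raise ValueError("At least one tool name is required")
    return cleaned
```

Note that Python's str.strip() removes Unicode whitespace (such as \u00a0) as well as ASCII whitespace, which is why whitespace-only strings in any script fail validation.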
The validation logic is tested extensively with Hypothesis property-based tests at tests/test_feedback.py:20-165. Key test strategies include:
- Unicode whitespace characters (\u00a0, \u2003, \u2009) across all fields
- Invalid inputs that are expected to raise StackOneError
- Single-string and list forms of account_id, including mixed valid and invalid entries

Sources: tests/test_feedback.py:20-165
The FeedbackTool.execute() method at stackone_ai/feedback/tool.py:69-145 implements a dual-path execution model: single-account submissions use the parent class's execute() method directly, while multi-account submissions iterate through each account ID and aggregate the results.
Sources: stackone_ai/feedback/tool.py:102-109, tests/test_feedback.py:187-214
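The dual-path dispatch can be sketched as follows. This is a simplified stand-in, not the SDK's code: submit_single represents the parent class's execute() and is stubbed here so the sketch is self-contained:

```python
def submit_single(feedback: str, account_id: str, tool_names: list[str]) -> dict:
    # Stand-in for StackOneTool.execute(); the real method performs the HTTP call.
    return {"message": "Feedback successfully stored", "trace_id": "trace-" + account_id}

def execute_feedback(feedback: str, account_id, tool_names: list[str]) -> dict:
    # Single account: delegate straight to the parent-class path.
    if isinstance(account_id, str):
        return submit_single(feedback, account_id, tool_names)
    # Multiple accounts: submit to each in turn and aggregate, continuing on failure.
    results = []
    successful = 0
    for acct in account_id:
        try:
            results.append({"account_id": acct, "status": "success",
                            "result": submit_single(feedback, acct, tool_names)})
            successful += 1
        except Exception as exc:
            results.append({"account_id": acct, "status": "error", "error": str(exc)})
    return {
        "message": f"Feedback sent to {len(account_id)} account(s)",
        "total_accounts": len(account_id),
        "successful": successful,
        "failed": len(account_id) - successful,
        "results": results,
    }
```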
When multiple account IDs are provided, the tool sends the same feedback to each account individually and aggregates the results:
Sources: stackone_ai/feedback/tool.py:111-136, tests/test_feedback.py:258-353
| Scenario | Response Structure | Example |
|---|---|---|
| Single account success | API response passed through | {"message": "Feedback successfully stored", "trace_id": "test-trace-id"} |
| Multiple accounts | Aggregated summary | {"message": "Feedback sent to 3 account(s)", "total_accounts": 3, "successful": 2, "failed": 1, "results": [...]} |
Each result in the results array contains:
- account_id: the account ID that was processed
- status: either "success" or "error"
- result: the API response (only present for successful calls)
- error: the error message (only present for failed calls)

Sources: stackone_ai/feedback/tool.py:130-136, tests/test_feedback.py:276-352
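A caller can separate partial failures from successes by filtering on the status field. The response literal below is an illustrative example in the shape described above, not captured API output:

```python
response = {  # example aggregated response in the documented shape
    "message": "Feedback sent to 3 account(s)",
    "total_accounts": 3, "successful": 2, "failed": 1,
    "results": [
        {"account_id": "acc-1", "status": "success",
         "result": {"message": "Feedback successfully stored"}},
        {"account_id": "acc-2", "status": "success",
         "result": {"message": "Feedback successfully stored"}},
        {"account_id": "acc-3", "status": "error",
         "error": "HTTP 403: account not found"},  # hypothetical error text
    ],
}

# Partition the per-account results by status.
failed = [r["account_id"] for r in response["results"] if r["status"] == "error"]
succeeded = [r["account_id"] for r in response["results"] if r["status"] == "success"]
```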
The Feedback Tool makes POST requests to the /ai/tool-feedback endpoint with a JSON body containing the validated parameters.
The ExecuteConfig object at stackone_ai/feedback/tool.py:203-213 defines the HTTP request structure:
Sources: stackone_ai/feedback/tool.py:203-213
The request body includes three required fields, all mapped to ParameterLocation.BODY:
Sources: stackone_ai/feedback/tool.py:208-212, tests/test_feedback.py:210-213
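The request wiring can be pictured with a small stand-in. The dataclass below is a hypothetical, simplified rendering of ExecuteConfig (field names are illustrative, and the production base URL is an assumption):

```python
from dataclasses import dataclass, field

@dataclass
class ExecuteConfigSketch:
    # Minimal stand-in for the SDK's ExecuteConfig; not the real class.
    method: str
    url: str
    body_params: list[str] = field(default_factory=list)

feedback_config = ExecuteConfigSketch(
    method="POST",
    url="https://api.stackone.com/ai/tool-feedback",  # assumed production base URL
    body_params=["feedback", "account_id", "tool_names"],  # all ParameterLocation.BODY
)
```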
The tool inherits authentication from StackOneTool. The API key is provided during tool creation at stackone_ai/feedback/tool.py:148-152 and automatically injected into the request headers by the parent class's execution logic.
Sources: stackone_ai/feedback/tool.py:221-223
The create_feedback_tool() factory function at stackone_ai/feedback/tool.py:148-227 constructs a FeedbackTool instance with preconfigured parameters and execution settings.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| api_key | str | Yes | N/A | StackOne API key for authentication |
| account_id | str \| None | No | None | Optional default account ID (not used for feedback) |
| base_url | str | No | DEFAULT_BASE_URL | API base URL (defaults to production) |
Sources: stackone_ai/feedback/tool.py:148-163
The factory function configures the tool with specific metadata that guides AI agents on proper usage:
Key Design Decision: The description explicitly instructs AI agents to request user permission before invoking the tool, ensuring user consent is obtained before any feedback submission.
Sources: stackone_ai/feedback/tool.py:164-170
The ToolParameters object at stackone_ai/feedback/tool.py:172-201 defines the JSON Schema for the tool's parameters:
The account_id parameter uses a oneOf schema to support both single string and array of strings, enabling flexible submission patterns.
Sources: stackone_ai/feedback/tool.py:172-201
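The account_id branch of that schema can be written out as follows. This is an illustrative JSON Schema fragment consistent with the oneOf description above (descriptions are paraphrased, and the tiny checker stands in for a real JSON Schema validator):

```python
account_id_schema = {
    "oneOf": [
        {"type": "string", "description": "A single account ID"},
        {"type": "array", "items": {"type": "string"},
         "description": "Multiple account IDs for batch submission"},
    ]
}

def matches(value) -> bool:
    # Hand-rolled check standing in for a JSON Schema validator.
    if isinstance(value, str):
        return True
    return isinstance(value, list) and all(isinstance(v, str) for v in value)
```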
The Feedback Tool inherits all integration capabilities from StackOneTool, supporting OpenAI, LangChain, LangGraph, and CrewAI frameworks.
The tool converts to OpenAI function calling format via the inherited to_openai_function() method:
Sources: tests/test_feedback.py:362-369
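The converted payload is typically shaped like this. This is an assumed rendering of the OpenAI function-calling format; the exact name and description come from the tool's metadata, so both are hypothetical here:

```python
openai_function = {
    "type": "function",
    "function": {
        "name": "tool_feedback",  # assumed registered name
        "description": "Collects user-consented feedback about StackOne tools",  # paraphrased
        "parameters": {
            "type": "object",
            "properties": {
                "feedback": {"type": "string"},
                "account_id": {"oneOf": [
                    {"type": "string"},
                    {"type": "array", "items": {"type": "string"}},
                ]},
                "tool_names": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["feedback", "account_id", "tool_names"],
        },
    },
}
```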
The tool converts to LangChain BaseTool format through the standard to_langchain() method inherited from StackOneTool. The conversion creates a dynamic Pydantic model for input validation and wraps the execute() method as the _run() implementation.
Sources: README.md:283-308
The feedback tool is automatically included when using fetch_tools() with patterns that match "tool_*":
Sources: README.md:287-302
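The "tool_*" pattern is ordinary glob matching. A quick illustration of which names it selects, with fnmatch standing in for the SDK's internal filtering (the tool names listed are assumed examples):

```python
from fnmatch import fnmatch

# Hypothetical catalogue of tool names returned by the API.
available = ["hris_list_employees", "tool_feedback", "ats_list_jobs"]

# Names matching the glob pattern "tool_*" are included in the fetched set.
selected = [name for name in available if fnmatch(name, "tool_*")]
```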
The Feedback Tool implements comprehensive error handling for validation failures, API errors, and multi-account batch processing.
Sources: stackone_ai/feedback/tool.py:87-145
| Error Type | Trigger | Exception | Message Pattern |
|---|---|---|---|
| JSON parsing | Invalid JSON string input | StackOneError | "Invalid JSON in arguments: ..." |
| Validation | Empty/whitespace-only fields | StackOneError | "Validation error: ..." |
| API error | HTTP error response | StackOneError | Propagated from parent class |
| Multi-account partial failure | Some accounts fail | None (captured in results) | Included in results array with status: "error" |
Sources: stackone_ai/feedback/tool.py:138-145, tests/test_feedback.py:237-255, tests/test_feedback.py:304-352
When submitting to multiple accounts, the tool continues processing remaining accounts even if some fail. Errors are captured and included in the aggregated response:
Sources: stackone_ai/feedback/tool.py:112-136, tests/test_feedback.py:304-352
The Feedback Tool employs a multi-layered testing approach combining unit tests, property-based tests, and integration tests.
| Test Suite | File | Lines | Focus |
|---|---|---|---|
| Validation Tests | tests/test_feedback.py:48-182 | 135 lines | Input validation, edge cases, property-based testing |
| Execution Tests | tests/test_feedback.py:184-369 | 186 lines | Single/multi-account execution, API mocking |
| Integration Tests | tests/test_feedback.py:371-398 | 28 lines | Live API submission (skipped by default) |
Sources: tests/test_feedback.py:1-398
The test suite at tests/test_feedback.py:20-165 uses Hypothesis strategies to generate edge cases:
These strategies power tests that verify validation behavior across thousands of generated inputs, ensuring robustness against edge cases like Unicode whitespace and malformed JSON.
Sources: tests/test_feedback.py:20-46, tests/test_feedback.py:113-165
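A stdlib sketch of the same idea (the real suite uses Hypothesis strategies to generate inputs; here a fixed sweep of Unicode whitespace characters stands in for generated values, and the validator is a local mimic):

```python
# Unicode whitespace characters of the kind exercised by the property-based tests.
WHITESPACE = ["\u00a0", "\u2003", "\u2009", " ", "\t", "\n"]

def is_valid_feedback(value: str) -> bool:
    # Mirrors the validator's trim-then-check rule.
    return isinstance(value, str) and bool(value.strip())

# Every whitespace-only combination must be rejected.
rejected = all(not is_valid_feedback(a + b) for a in WHITESPACE for b in WHITESPACE)
```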
The execution tests use respx to mock HTTP requests without hitting the actual API:
Sources: tests/test_feedback.py:187-214, tests/test_feedback.py:258-303
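The mocking pattern can be approximated with the standard library. The real tests use respx to intercept httpx requests; unittest.mock stands in here, and the post_feedback transport function is hypothetical:

```python
from unittest.mock import MagicMock

# Hypothetical transport hook; in the SDK the HTTP call happens inside execute().
post_feedback = MagicMock(return_value={"message": "Feedback successfully stored",
                                        "trace_id": "test-trace-id"})

# Exercise the "tool" without any network traffic.
result = post_feedback(json={"feedback": "Great tools!",
                             "account_id": "acc-1",
                             "tool_names": ["hris_list_employees"]})

post_feedback.assert_called_once()
```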
The tool's description explicitly instructs AI agents to obtain user consent before submission. This is enforced at the metadata level rather than the code level, relying on the AI agent to follow the instructions:
"First ask the user, 'Are you ok with sending feedback to StackOne?' and mention that the LLM will take care of sending it. Call this tool only when the user explicitly answers yes."
Rationale: Since the tool collects qualitative feedback that may contain personal opinions or sensitive information, explicit user consent is essential before submitting data to external APIs.
Sources: stackone_ai/feedback/tool.py:165-169, README.md:304-307
When collecting feedback for multiple accounts (e.g., in multi-tenant scenarios), provide all account IDs in a single call rather than making separate tool invocations:
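Concretely, prefer the first argument shape below over the second (the argument names follow the tool's parameter schema; the account IDs are example values):

```python
# Preferred: one call with a list of account IDs; failures are captured per account.
batch_args = {
    "feedback": "The HRIS tools worked well for payroll sync.",
    "account_id": ["acct-us", "acct-eu", "acct-apac"],
    "tool_names": ["hris_list_employees"],
}

# Avoid: one invocation per account; no unified summary and more round trips.
separate_args = [dict(batch_args, account_id=a) for a in batch_args["account_id"]]
```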
The single-call approach provides better error handling (partial failures are captured) and returns a unified summary of all submissions.
Sources: stackone_ai/feedback/tool.py:111-136, tests/test_feedback.py:258-303
When integrating the feedback tool in agent workflows, handle validation errors gracefully:
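A defensive invocation might look like this. StackOneError is the SDK's exception type; a local stand-in class and a stubbed tool are used so the sketch is self-contained and the error message mirrors the patterns in the table above:

```python
class StackOneError(Exception):
    """Local stand-in for stackone_ai's StackOneError."""

def run_feedback_tool(arguments: dict) -> dict:
    # Stub standing in for FeedbackTool.execute(); raises like the real tool.
    if not arguments.get("feedback", "").strip():
        raise StackOneError("Validation error: Feedback must be a non-empty string")
    return {"message": "Feedback successfully stored"}

def submit_with_fallback(arguments: dict) -> dict:
    try:
        return run_feedback_tool(arguments)
    except StackOneError as exc:
        # Surface a structured error to the agent instead of crashing the run.
        return {"status": "error", "error": str(exc)}

ok = submit_with_fallback({"feedback": "Nice!", "account_id": "a", "tool_names": ["t"]})
bad = submit_with_fallback({"feedback": "   ", "account_id": "a", "tool_names": ["t"]})
```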
Sources: stackone_ai/feedback/tool.py:87-145