# 11. AI-Enhanced Workflow Integration
- Date: 2025-09-17
- Status: Proposed
## Context
To enhance end-user productivity and streamline interaction with our system, we are introducing an AI-powered assistant. Currently, users must manually navigate the user interface to perform operations and find information. This can be inefficient for complex tasks and presents a learning curve for new users. The goal is to allow users to accomplish these tasks more directly using natural language commands.
## Decision
We will integrate a generative AI-powered chat assistant into the frontend application. This will provide an "AI-Enhanced Workflow" (AIEW) that allows authenticated users to interact with the system via a natural language interface, including slash commands and information queries against the project's /docs repository. The business and functional requirements for this feature are detailed in the AI-Enhanced Workflow feature document.
The architecture will be composed of the following new components:
- Frontend Chat Interface: A chat widget will be integrated into the React application shell.
- AI Service (`ai_service`): An internal microservice to orchestrate the AI workflow. It receives queries, uses an LLM (Gemini) with LangGraph to build an execution plan, presents the plan for confirmation, and synthesizes the final response. It uses Redis for session state and MongoDB for chat history.
- Model Context Protocol Service (`mcp_service`): An internal microservice acting as a secure tool adapter. It executes the confirmed plan by translating tool calls from the `ai_service` into authenticated requests to existing internal APIs.
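As an illustrative sketch only (all class and tool names here are hypothetical; the real `ai_service` builds plans with LangGraph and the real `mcp_service` makes authenticated HTTP calls), the plan handed from the orchestrator to the tool adapter can be modeled as a list of tool calls that the adapter dispatches to registered internal APIs:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ToolCall:
    """One step of an execution plan produced by the ai_service."""
    tool: str                       # name of a tool registered with the mcp_service
    args: dict[str, Any] = field(default_factory=dict)

@dataclass
class ExecutionPlan:
    """Human-readable plan presented to the user before execution."""
    summary: str
    steps: list[ToolCall]
    modifies_data: bool             # drives the mandatory confirmation step

class McpService:
    """Secure tool adapter: maps tool names to internal API calls."""
    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., Any]] = {}

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self._tools[name] = fn

    def execute(self, plan: ExecutionPlan) -> list[Any]:
        # Each ToolCall is translated into an internal API call; here the
        # "API" is simply a registered Python callable standing in for it.
        return [self._tools[step.tool](**step.args) for step in plan.steps]

# Hypothetical read-only tool standing in for an internal docs-search endpoint.
mcp = McpService()
mcp.register("search_docs", lambda query: f"results for {query!r}")

plan = ExecutionPlan(
    summary="Search the /docs repository",
    steps=[ToolCall("search_docs", {"query": "deployment"})],
    modifies_data=False,
)
print(mcp.execute(plan))  # prints ["results for 'deployment'"]
```

Keeping the plan as explicit data (rather than letting the LLM call tools directly) is what makes the confirmation step and audit trail described below possible: the plan can be rendered for the user before anything runs.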
The workflow is visualized in the following sequence diagram:
```mermaid
sequenceDiagram
    participant User
    participant Frontend
    participant API Gateway
    participant AI Service
    participant MCP Service
    participant Internal APIs
    User->>Frontend: 1. Sends query/command
    Frontend->>API Gateway: 2. Forwards query to /api/ai
    API Gateway->>AI Service: 3. Forwards authenticated request
    AI Service->>AI Service: 4. Constructs execution plan (via LangGraph)
    alt Data-Modifying Operation
        AI Service-->>API Gateway: 5a. Sends plan for approval
        API Gateway-->>Frontend: 5b. Forwards plan
        Frontend-->>User: 5c. Displays plan and asks for confirmation
        User->>Frontend: 6a. Clicks "Confirm"
        Frontend->>API Gateway: 6b. Sends confirmation
        API Gateway->>AI Service: 6c. Forwards confirmation
        AI Service->>MCP Service: 7. Executes tool via secure adapter
        MCP Service->>Internal APIs: 8. Calls relevant internal API
        Internal APIs-->>MCP Service: 9. Returns result
        MCP Service-->>AI Service: 10. Returns result to orchestrator
    else Read-Only Query
        AI Service->>MCP Service: 7 (alt). Executes read-only tool
        MCP Service->>Internal APIs: 8 (alt). Calls internal API
        Internal APIs-->>MCP Service: 9 (alt). Returns data
        MCP Service-->>AI Service: 10 (alt). Returns data
    end
    AI Service->>AI Service: 11. Synthesizes final response (via LLM)
    AI Service-->>API Gateway: 12a. Sends final response
    API Gateway-->>Frontend: 12b. Forwards response
    Frontend-->>User: 12c. Displays final response in chat
```
A critical, non-negotiable aspect of this design is the mandatory confirmation step. For any operation that creates, modifies, or deletes data, the `ai_service` must first present a clear, human-readable execution plan to the user. The operation will only proceed after receiving explicit user confirmation.
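A minimal sketch of that gate (hypothetical names; the real flow runs through the chat UI and API gateway rather than a function argument): any plan flagged as data-modifying is held until the user explicitly confirms it, while read-only queries proceed immediately.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    summary: str            # human-readable description shown to the user
    modifies_data: bool

class ConfirmationRequired(Exception):
    """Raised when a data-modifying plan is executed without user approval."""

def run_plan(plan: Plan, *, confirmed: bool = False) -> str:
    # Read-only queries proceed immediately; anything that creates,
    # modifies, or deletes data must carry explicit user confirmation.
    if plan.modifies_data and not confirmed:
        raise ConfirmationRequired(f"Please confirm: {plan.summary}")
    return f"executed: {plan.summary}"

print(run_plan(Plan("list open tickets", modifies_data=False)))
# prints executed: list open tickets
# A modifying plan without confirmed=True raises ConfirmationRequired instead.
```

Making the gate a hard failure (an exception) rather than a soft default ensures no code path can silently skip the approval step.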
All actions initiated via the AIEW will be subject to the existing Role-Based Access Control (RBAC) model and will be recorded in the security audit trail.
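To illustrate how that might look (a sketch under assumptions: the role names, permission sets, and log structure below are hypothetical, not the system's actual RBAC model or audit schema), each AI-initiated action is checked against the user's role and appended to the audit trail whether or not it was allowed:

```python
import json
import time

AUDIT_LOG: list[str] = []
# Hypothetical role-to-permission mapping standing in for the real RBAC model.
ROLE_PERMISSIONS = {"viewer": {"read"}, "editor": {"read", "write"}}

def authorize_and_audit(user: str, role: str, action: str, tool: str) -> bool:
    """Apply the RBAC model to an AI-initiated action and record it."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(), "user": user, "role": role,
        "tool": tool, "action": action, "allowed": allowed,
        "source": "AIEW",  # flags that the action came via the assistant
    }))
    return allowed

assert authorize_and_audit("alice", "editor", "write", "update_ticket")
assert not authorize_and_audit("bob", "viewer", "write", "update_ticket")
```

Note that denied attempts are logged too: recording the `allowed: false` entries is what keeps the audit trail complete rather than only capturing successful operations.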
## Consequences
### Positive
- Increased User Productivity: Users can perform complex actions and queries much faster by issuing commands in natural language instead of navigating the traditional UI.
- Improved System Accessibility: Reduces the learning curve for new users, making the system more intuitive and easier to operate.
- Enhanced Safety: The mandatory confirmation step for data-modifying actions provides a critical safety check against unintended operations initiated by the user.
- Centralized Auditing: All AI-initiated actions are logged, maintaining a clear and secure audit trail of user activity.
- Consistency: The `mcp_service` ensures that all operations, whether initiated by a human via the UI or by the AI assistant, use the same underlying business logic and APIs.
### Negative / Risks
- Increased Complexity: Introduces two new microservices and two new data stores, increasing the system's architectural and operational complexity. This also introduces more potential points of failure, requiring robust resilience and error handling strategies.
- Intent Misinterpretation: There is a risk that the LLM could misunderstand a user's intent. While the confirmation step prevents incorrect execution, a poor plan proposal can still lead to user frustration and a degraded experience.
- New Dependencies: Creates a dependency on the LLM (Gemini) and the LangGraph framework.
- Prompt Engineering Overhead: Requires careful design and maintenance of the prompts and tools used by the `ai_service` to ensure reliable and secure operation.
- New Testing Demands: This feature requires specialized testing strategies, including performance testing for response times, usability testing for the conversational interface, and resilience testing for the distributed components.
### Neutral
- This decision introduces a new modality of user interaction with the application, which will require user education.
- The development team will need to acquire and maintain skills related to managing AI services, prompt engineering, and graph-based orchestration to support this new feature.