Test Plan for AI-Enhanced Workflow

1. Functional Testing

  • Verify that every defined slash command (/group, /knowledge, etc.) produces its documented behavior.
  • Verify the user confirmation step for all data-modifying operations. Test both the 'Confirm' path (action is executed) and the 'Cancel' path (action is aborted).
  • Verify that read-only queries do not trigger the confirmation step.
  • Test various natural language phrasings for the same intent to ensure robust interpretation.
  • Test for correct and clear success messages upon completion of a command.
  • Test for graceful error handling when a command is malformed or an entity (e.g., user) is not found.
  • Test chat history retrieval. For the "@" command, verify that typing "@" triggers an auto-complete list of past sessions, that the list is sorted in descending order, and that typing further characters narrows the list. For both the UI button and the "@" command, verify that selecting a session loads the correct history and that the conversation can be continued from that point.
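
The confirm/cancel checks above can be sketched as a small pytest-style suite. The client here is a self-contained stub standing in for the real chat workflow; all names (`StubChatClient`, `send`, `confirm`, `cancel`) are illustrative assumptions, not the actual API.

```python
class StubChatClient:
    """Stand-in for the chat workflow: data-modifying commands return a
    plan that must be confirmed before anything executes; read-only
    queries return a result immediately."""
    def __init__(self):
        self.executed = []       # actions that actually ran
        self._pending = None     # plan awaiting confirmation

    def send(self, command):
        if command.startswith("/group"):   # data-modifying: needs confirmation
            self._pending = command
            return {"type": "plan", "plan": f"Will run {command}"}
        return {"type": "result", "data": "read-only answer"}

    def confirm(self):
        self.executed.append(self._pending)
        self._pending = None
        return {"type": "result", "status": "executed"}

    def cancel(self):
        self._pending = None
        return {"type": "result", "status": "canceled"}


def test_confirm_path():
    client = StubChatClient()
    assert client.send("/group add alice")["type"] == "plan"
    assert client.confirm()["status"] == "executed"
    assert client.executed == ["/group add alice"]

def test_cancel_path():
    client = StubChatClient()
    client.send("/group remove bob")
    assert client.cancel()["status"] == "canceled"
    assert client.executed == []   # canceled action never ran

def test_read_only_skips_confirmation():
    client = StubChatClient()
    assert client.send("who is on my team?")["type"] == "result"
```

In a real run, the stub would be replaced by a client that talks to the api_gateway; the assertions stay the same.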

2. Security and Access Control Testing

  • Critical: Confirm that a user without specific permissions cannot execute a restricted command. This check should happen before a plan is even presented.
  • For query commands, verify that the results are filtered according to the user's data access rights (e.g., a manager can see their team, but not other teams).
  • Attempt to inject malicious commands or scripts through the chat input to test for vulnerabilities.
  • Verify that unauthenticated users cannot access or trigger the chat workflow.
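
The critical check above (authorization before plan presentation) can be sketched as follows. The permission model and `handle_command` are hypothetical stand-ins; the point is that the denial happens before any plan object is built.

```python
# Illustrative permission table: alice may modify groups, bob may not.
PERMISSIONS = {"alice": {"group.modify"}, "bob": set()}

def handle_command(user, command):
    """Reject restricted commands before generating a plan."""
    if command.startswith("/group"):
        if "group.modify" not in PERMISSIONS.get(user, set()):
            return {"type": "error", "message": "permission denied"}  # no plan built
        return {"type": "plan", "plan": f"Will run {command}"}
    return {"type": "result", "data": "read-only answer"}

# A user without the permission gets an error, not a plan.
assert handle_command("bob", "/group add eve")["type"] == "error"
# An authorized user sees the plan and proceeds to confirmation.
assert handle_command("alice", "/group add eve")["type"] == "plan"
```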

3. Integration and Workflow Testing

  • Conduct end-to-end tests from the UI to the api_gateway, ai_service, mcp_service, and back, covering both confirmed and canceled flows.
  • Verify that the ai_service correctly uses Redis for short-term memory and MongoDB for long-term logging.
  • Test the integration with the MCP server, ensuring it correctly calls the underlying platform APIs only after user confirmation.
  • Verify that chat history is being correctly persisted in MongoDB and can be retrieved accurately.
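
The Redis/MongoDB routing check can be exercised against in-memory fakes before testing the real integration. `MemoryRouter` and its stores are assumptions about the ai_service's storage layer, not its actual classes; the fakes model Redis as a bounded recent-turn window and MongoDB as an append-only log.

```python
from collections import deque

class MemoryRouter:
    """Recent turns go to a bounded short-term store (Redis stand-in);
    every turn is appended to a long-term log (MongoDB stand-in)."""
    def __init__(self, short_term_size=3):
        self.short_term = deque(maxlen=short_term_size)  # fake Redis
        self.long_term = []                              # fake MongoDB collection

    def record(self, session_id, turn):
        self.short_term.append(turn)
        self.long_term.append({"session": session_id, "turn": turn})

router = MemoryRouter()
for i in range(5):
    router.record("s1", f"turn-{i}")

assert list(router.short_term) == ["turn-2", "turn-3", "turn-4"]  # bounded window
assert len(router.long_term) == 5                                 # full history kept
```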

4. Audit Trail Testing

  • For every command tested (successful, failed, and canceled), verify that a corresponding, accurate entry is created in the security audit trail.
  • Ensure the audit log correctly identifies the user, the action they attempted, its proposed plan, and the final outcome (executed, canceled, or failed).
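
A minimal sketch of the audit-completeness check: every outcome (executed, canceled, failed) must yield an entry containing all four required fields. The entry schema mirrors the fields listed above; the writer itself is a hypothetical stand-in for the real audit service.

```python
audit_log = []

def record_audit(user, action, plan, outcome):
    """Append a complete audit entry; refuse partial ones."""
    entry = {"user": user, "action": action, "plan": plan, "outcome": outcome}
    assert all(entry.values()), "audit entry must be complete"
    audit_log.append(entry)

# Every tested outcome should leave a corresponding entry.
for outcome in ("executed", "canceled", "failed"):
    record_audit("alice", "/group add eve", "Will add eve to the group", outcome)

assert [e["outcome"] for e in audit_log] == ["executed", "canceled", "failed"]
assert all(e["user"] == "alice" for e in audit_log)
```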

5. Performance and Load Testing

  • Verify that simple, read-only queries consistently return a response in under 3 seconds as per NFR-2.
  • Test the system's response time under a simulated load of multiple concurrent users, and verify that latency does not degrade significantly beyond the single-user baseline.
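
The NFR-2 latency check can be wrapped in a small assertion harness. `run_query` is a placeholder; in a real test it would issue the query through the api_gateway and the 3-second budget would apply to the full round trip.

```python
import time

def run_query(query):
    time.sleep(0.01)          # placeholder for the real round trip
    return "answer"

def assert_latency(query, budget_s=3.0):
    """Fail if the query exceeds its latency budget (NFR-2)."""
    start = time.perf_counter()
    result = run_query(query)
    elapsed = time.perf_counter() - start
    assert elapsed < budget_s, f"{query!r} took {elapsed:.2f}s, budget {budget_s}s"
    return result

assert assert_latency("who is on my team?") == "answer"
```

For the concurrent-load case, the same assertion can be run from a thread pool and the per-request results aggregated into percentile figures.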

6. Usability Testing

  • Test the clarity and user-friendliness of the AI-generated responses and confirmation plans. Non-technical users should be able to easily understand the proposed actions.
  • Verify that the slash command structure is intuitive and that error messages for malformed commands are helpful.
  • Test the user experience of the chat history retrieval, including the @ command auto-complete feature.

7. Resilience Testing

  • Test the system's behavior when downstream services are unavailable. For example, if the mcp_service cannot connect to the auth_service, the chat should return a graceful error message.
  • Simulate a failure in the ai_service or its database connections (Redis, MongoDB) to ensure the main application remains stable and the chat component shows an appropriate disabled/error state.
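
The downstream-failure scenario above can be sketched like this: a failed hop must surface as a graceful chat error, never as an unhandled exception. `call_mcp` and the `auth_up` flag are hypothetical stand-ins for the mcp_service-to-auth_service call and the injected fault.

```python
class DownstreamUnavailable(Exception):
    """Raised when a dependent service cannot be reached."""

def call_mcp(command, *, auth_up=True):
    if not auth_up:   # fault injection: auth_service is down
        raise DownstreamUnavailable("auth_service unreachable")
    return {"type": "result", "data": "ok"}

def handle(command, *, auth_up=True):
    """Translate downstream failures into a user-facing error message."""
    try:
        return call_mcp(command, auth_up=auth_up)
    except DownstreamUnavailable:
        return {"type": "error",
                "message": "Service temporarily unavailable. Please try again later."}

assert handle("/group list")["type"] == "result"
assert handle("/group list", auth_up=False)["type"] == "error"   # graceful, no crash
```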