Documentation Index

Fetch the complete documentation index at: https://ito.ai/docs/llms.txt

Use this file to discover all available pages before exploring further.

When Ito finds a failing test case, the AI agent assigns a severity level during analysis. Severity reflects the real-world impact of the bug on users and the application.

The four levels

  • Critical — Severe issues that block core functionality or break key user flows. The application is unusable or data integrity is at risk. Examples: a checkout flow that throws an unhandled error, a login screen that rejects valid credentials, a payment form that fails to submit. Critical bugs should be fixed before the PR merges.
  • High — Major issues that impact important features or significantly degrade the user experience, but do not completely block the application. Examples: a search feature that returns no results for valid queries, a settings page that reports changes as saved without applying them, a modal that cannot be dismissed. High bugs typically warrant a fix before shipping.
  • Medium — Noticeable issues that affect usability but do not block core functionality. Users can still accomplish their goals, but the experience is degraded. Examples: incorrect copy on a success message, a form field that accepts invalid input without an error message, a page element that renders misaligned on a specific viewport. Medium bugs can often be fixed in a follow-up PR.
  • Low — Minor issues or edge cases with limited impact on users. Often cosmetic or only triggered in rare conditions. Examples: a truncated label in a dropdown, an inconsistent hover state, a tooltip with outdated text. Low bugs are tracked for awareness but are typically low priority.
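
The four levels and their merge guidance can be sketched in code. This is an illustrative model only, not Ito's API; the `Severity` enum, its string values, and the `blocks_merge` helper are all hypothetical names chosen for this sketch.

```python
from enum import Enum

class Severity(Enum):
    """Hypothetical enum mirroring Ito's four-level severity scale."""
    CRITICAL = "critical"  # blocks core functionality; fix before the PR merges
    HIGH = "high"          # degrades important features; fix before shipping
    MEDIUM = "medium"      # usability issue; can often wait for a follow-up PR
    LOW = "low"            # cosmetic or rare edge case; tracked for awareness

def blocks_merge(severity: Severity) -> bool:
    """Illustrative gating rule: only Critical findings block a merge."""
    return severity is Severity.CRITICAL

print(blocks_merge(Severity.CRITICAL))  # True
print(blocks_merge(Severity.MEDIUM))    # False
```

A team could of course gate on High as well; the point is that the severity level, not the raw failure, drives the decision.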

Severity in the dashboard

Severity levels appear throughout the Ito dashboard:
  • Statistics panel — The dashboard shows total bug counts broken down by severity level (Critical, High, Medium, Low) with percentage distribution across the selected time period (7 days, 30 days, 90 days, or all time). A donut chart visualizes the relative proportion of each level.
  • PR list — Each PR row shows the pass and fail counts from its latest test run.
  • Test case detail — Individual test cases display their severity level alongside the bug description and code analysis.
The statistics panel also shows a bug detection rate: the percentage of tested PRs where at least one bug was found.
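
The detection-rate figure above is a simple ratio, sketched here with made-up data. The PR identifiers and bug counts are hypothetical; only the formula (PRs with at least one bug, divided by all tested PRs) comes from the text.

```python
# Hypothetical per-PR bug counts from a batch of test runs.
pr_bug_counts = {"pr-101": 0, "pr-102": 3, "pr-103": 1, "pr-104": 0}

tested = len(pr_bug_counts)
with_bugs = sum(1 for count in pr_bug_counts.values() if count > 0)
detection_rate = 100 * with_bugs / tested

print(f"Bug detection rate: {detection_rate:.0f}%")  # Bug detection rate: 50%
```
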

Related pages

  • Test Cases — Where severity is assigned and displayed
  • Test Runs — The run that produces the test cases