Reports
The Reports page provides a centralized view of all test executions in a BlinqIO project. Each execution represents a complete test run that may include multiple scenarios and example rows.
Types of Reports
BlinqIO supports three types of reports based on how tests are executed or handled.
1. Run Reports
Run Reports are generated for tests executed manually or through the AI Recorder.
Labeled as Execution Type: Local Execution.
Note
To learn how Run Reports are generated during test execution, see Run Tests.
2. AI Recovery Reports
Reports for test runs where AI detected and fixed failures, including recovered scenarios and retraining details.
Accessible by expanding a failed test report.
What the report contains:
Error Message - The original error that caused the test to fail.
Root Cause - A short explanation of why the failure occurred.
Retraining Reason - Describes why retraining was triggered (e.g., locator mismatch, timeout).
Note
To learn how AI Recovery Reports are generated during test execution, see Run Tests.
3. AI Generation Reports
Reports generated from test runs where scenarios were created or modified by AI, showing details of AI-generated steps and their execution results.
Labeled as Execution Type: Local AI Generation.
Note
To learn how AI Generation Reports are generated during test execution, see Generate Code for Gherkin Steps.
Execution Summary
At the top of the Reports page, you can view a quick summary of all test executions in the project. This section helps users get an immediate sense of test activity and overall project health.
The summary displays:
Metric | Description |
---|---|
Total Executions | The total number of test executions run in the project. |
Successful | Number of executions where all scenarios passed. |
Failed | Number of executions where one or more scenarios failed. |
These numbers automatically update as new tests are executed.
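The relationship between the three metrics can be sketched as follows. This is a minimal illustration, not BlinqIO's implementation; the `Execution` class and `summarize` function are hypothetical names introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass
class Execution:
    # True only when every scenario in the run passed
    all_scenarios_passed: bool

def summarize(executions):
    """Compute the three metrics shown in the Execution Summary."""
    total = len(executions)
    successful = sum(1 for e in executions if e.all_scenarios_passed)
    # An execution counts as Failed when one or more scenarios failed
    failed = total - successful
    return {"Total Executions": total, "Successful": successful, "Failed": failed}
```

Note that Successful and Failed always add up to Total Executions, since an execution with any failing scenario is counted as Failed.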
Below the summary, each test execution appears as a row in a table. This table provides detailed information about every test run, including its status and environment.
Column | Description |
---|---|
Execution Info | A unique identifier or summary of the run. |
Execution Date | Timestamp when the execution started. |
Execution Type | Indicates how the run was triggered (e.g., Local Execution, Local AI Generation). |
Status | The execution result: Passed, Failed, or Error Fixed by AI. |
Pass Ratio | Number of scenarios that passed versus total executed. |
User | The person who initiated the run. |
Environment | Target environment or environment URL where the test was executed. |
You can also filter the executions by time range using quick filters like Last 24 hours, Last week, Last 2 weeks, or Show all. These help you focus on recent or specific runs efficiently.
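The quick filters above amount to comparing each execution's start time against a time window. The sketch below shows one way such filtering could work; the filter labels mirror the UI, but the function and data shapes are assumptions for illustration only.

```python
from datetime import datetime, timedelta

# Windows corresponding to the quick-filter labels in the UI.
# None means no cutoff (Show all).
QUICK_FILTERS = {
    "Last 24 hours": timedelta(hours=24),
    "Last week": timedelta(weeks=1),
    "Last 2 weeks": timedelta(weeks=2),
    "Show all": None,
}

def filter_by_range(executions, label, now=None):
    """Keep executions whose Execution Date falls inside the selected window."""
    now = now or datetime.now()
    window = QUICK_FILTERS[label]
    if window is None:
        return list(executions)
    cutoff = now - window
    return [e for e in executions if e["execution_date"] >= cutoff]
```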
Opening a Report
Click on any execution row to open a detailed report for that test run. Inside the execution, you’ll find all the test scenarios that were part of that run.
Each scenario provides the following details:
- Start Time – The time the scenario execution began.
- Status – Whether the scenario Passed, Failed, or was Recovered using AI.
- Error Message – Specific error message if the scenario failed.
- Root Cause – AI-generated explanation of why the failure occurred.
- Retraining Reason – Reason for AI retraining, if applicable.
- Duration – Total time taken for the scenario to run.
- Past Executions – Color-coded history of the last 5 runs for the same scenario.
- User Comment – Optional comment area to document findings or decisions.
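The scenario details listed above can be thought of as one record per scenario. The following dataclass is a hypothetical model of that record, with field names and types assumed here; it is not BlinqIO's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ScenarioResult:
    """One scenario inside an execution report (fields mirror the list above)."""
    name: str
    start_time: str                          # e.g. an ISO-8601 timestamp
    status: str                              # "Passed", "Failed", or "Recovered"
    duration_seconds: float
    error_message: Optional[str] = None      # present only when the scenario failed
    root_cause: Optional[str] = None         # AI-generated failure explanation
    retraining_reason: Optional[str] = None  # set only if AI retraining ran
    past_executions: List[str] = field(default_factory=list)  # last 5 statuses
    user_comment: Optional[str] = None       # optional notes from the team
```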
Click the dropdown arrow next to the scenario name to expand a list of the most recent executions for that specific scenario. You can also expand each scenario to view a detailed step-by-step breakdown.
Step-by-Step Scenario Details
Within each scenario, BlinqIO captures every step and substep in detail to help with debugging and analysis.
For every step, you can view:
- Screenshots – Captured at key points during the step for visual reference.
- Console Logs – Any relevant console output from the browser or application.
- Network Logs – Records of network requests and responses triggered during the step.
- Errors – Detailed error messages and stack traces, if a step fails.
- Substeps – Some steps include substeps when the action involves multiple interactions or validations.
This full breakdown allows users to trace exactly what happened during execution and where issues may have occurred.
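Because steps can contain substeps, a step breakdown is naturally a tree. The sketch below models that tree and shows how the first failing step could be located in it; the `Step` class and `first_failure` helper are hypothetical names introduced for this illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Step:
    """A step in a scenario, with the artifacts listed above."""
    name: str
    screenshots: List[str] = field(default_factory=list)   # paths or URLs
    console_logs: List[str] = field(default_factory=list)
    network_logs: List[str] = field(default_factory=list)
    error: Optional[str] = None          # error message / stack trace on failure
    substeps: List["Step"] = field(default_factory=list)

def first_failure(step):
    """Walk the step tree depth-first and return the first step with an error."""
    if step.error:
        return step
    for sub in step.substeps:
        found = first_failure(sub)
        if found:
            return found
    return None
```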