
Overview

TestDriver automatically writes a JSON result file for each test case after it finishes. These files contain comprehensive metadata about the test run, including SDK and runner versions, infrastructure details, interaction statistics, and links to recordings. Result files are written to:
.testdriver/results/<testFile>/<testName>.json
For example, a test file tests/login.test.mjs with a test named "should log in" produces:
.testdriver/results/tests/login.test.mjs/should_log_in.json
Test names are sanitized for filesystem use — special characters are replaced with underscores and names are truncated to 200 characters.
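The sanitization rule can be sketched as follows. This is an illustration of the behavior described above, not TestDriver's actual implementation; the exact set of characters treated as "special" is an assumption.

```javascript
// Illustrative sketch (not the actual TestDriver code): replace
// filesystem-unsafe characters with underscores and cap at 200 chars.
// The precise character whitelist is an assumption.
function sanitizeTestName(name) {
  return name.replace(/[^a-zA-Z0-9._-]/g, "_").slice(0, 200);
}

console.log(sanitizeTestName("should log in")); // "should_log_in"
```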

Enabling

No configuration is required. The JSON files are written automatically by the TestDriver Vitest reporter plugin whenever tests run.

JSON Schema

Each result file is organized into logical groups:

versions

| Field | Type | Description |
| --- | --- | --- |
| `versions.sdk` | string \| null | TestDriver SDK version (e.g. `"7.8.0"`) |
| `versions.vitest` | string \| null | Vitest version used to run the test |
| `versions.api` | string \| null | TestDriver API server version |
| `versions.runnerBefore` | string \| null | Runner version at sandbox start |
| `versions.runnerAfter` | string \| null | Runner version after auto-update |
| `versions.runnerWasUpdated` | boolean | Whether the runner was auto-updated during provisioning |

test

| Field | Type | Description |
| --- | --- | --- |
| `test.file` | string \| null | Relative path to the test file |
| `test.name` | string \| null | Name of the test case |
| `test.suite` | string \| null | Name of the parent describe block |
| `test.passed` | boolean | Whether the test passed |
| `test.caseId` | string \| null | Database ID for this test case |
| `test.runId` | string \| null | Database ID for the overall test run |
| `test.error` | string \| null | Error message if the test failed |
| `test.errorStack` | string \| null | Error stack trace if the test failed |

urls

| Field | Type | Description |
| --- | --- | --- |
| `urls.api` | string \| null | API root URL used for this test |
| `urls.console` | string \| null | TestDriver console base URL |
| `urls.vnc` | string \| null | VNC URL for the sandbox |
| `urls.testRun` | string \| null | Direct link to this test case in the console |

replay

The replay object contains the recording replay URL and derived embed links. The gifUrl and embedUrl are generated automatically from the replay URL.
| Field | Type | Description |
| --- | --- | --- |
| `replay.url` | string \| null | Recording replay URL |
| `replay.gifUrl` | string \| null | Animated GIF thumbnail of the recording |
| `replay.embedUrl` | string \| null | Embeddable replay URL (appends `&embed=true`) |
| `replay.markdown` | string \| null | Ready-to-use Markdown embed with GIF linking to the replay |
The replay.markdown field produces a clickable GIF badge you can paste directly into PR comments, README files, or issue descriptions:
[![Test Recording](https://api.testdriver.ai/replay/abc123/gif?shareKey=xyz)](https://console.testdriver.ai/replay/abc123?share=xyz)
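How `replay.markdown` is composed from the GIF thumbnail and the replay link can be sketched as below; the helper name is hypothetical, and the real derivation lives inside the reporter.

```javascript
// Hypothetical helper showing how replay.markdown combines the GIF
// thumbnail URL and the replay link, matching the pattern above.
function replayMarkdown(gifUrl, replayUrl) {
  return `[![Test Recording](${gifUrl})](${replayUrl})`;
}

console.log(replayMarkdown(
  "https://api.testdriver.ai/replay/abc123/gif?shareKey=xyz",
  "https://console.testdriver.ai/replay/abc123?share=xyz"
));
```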

date

| Field | Type | Description |
| --- | --- | --- |
| `date` | string | ISO 8601 timestamp when the test finished |

team

| Field | Type | Description |
| --- | --- | --- |
| `team.id` | string \| null | Team ID from the sandbox |
| `team.sessionId` | string \| null | SDK session ID |

infrastructure

| Field | Type | Description |
| --- | --- | --- |
| `infrastructure.sandboxId` | string \| null | Sandbox instance ID |
| `infrastructure.instanceId` | string \| null | Instance ID |
| `infrastructure.os` | string \| null | Operating system of the sandbox (`"linux"` or `"windows"`) |
| `infrastructure.amiId` | string \| null | AWS AMI ID used for provisioning |
| `infrastructure.e2bTemplateId` | string \| null | E2B template ID used for provisioning |
| `infrastructure.imageVersion` | string \| null | Sandbox image version |

realtime

| Field | Type | Description |
| --- | --- | --- |
| `realtime.channel` | string \| null | Ably channel name used for communication |
| `realtime.messageCount` | number | Number of messages published to the realtime channel |

interactions

| Field | Type | Description |
| --- | --- | --- |
| `interactions.total` | number | Total number of interactions recorded |
| `interactions.cached` | number | Number of interactions served from cache |
| `interactions.byType` | object | Breakdown of interactions by type (e.g. `find`, `click`, `assert`) |
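The interactions group lends itself to quick derived stats such as a cache hit rate. A sample object is inlined below in place of a parsed result file:

```javascript
// Sample interactions group, standing in for the parsed result file's
// `interactions` field (values match the example output below).
const interactions = {
  total: 15,
  cached: 3,
  byType: { find: 8, click: 5, assert: 2 },
};

// Derive a cache hit rate and print the per-type breakdown.
const hitRate = ((interactions.cached / interactions.total) * 100).toFixed(1);
console.log(`cache hit rate: ${hitRate}%`); // "cache hit rate: 20.0%"
for (const [type, count] of Object.entries(interactions.byType)) {
  console.log(`${type}: ${count}`);
}
```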

Example Output

{
  "versions": {
    "sdk": "7.8.0",
    "vitest": "4.0.0",
    "api": "1.45.0",
    "runnerBefore": "2.1.0",
    "runnerAfter": "2.1.1",
    "runnerWasUpdated": true
  },
  "test": {
    "file": "tests/login.test.mjs",
    "name": "should log in",
    "suite": null,
    "passed": true,
    "caseId": "def456",
    "runId": "abc123",
    "error": null,
    "errorStack": null
  },
  "urls": {
    "api": "https://api.testdriver.ai",
    "console": "https://console.testdriver.ai",
    "vnc": "wss://sandbox-123.testdriver.ai/vnc",
    "testRun": "https://console.testdriver.ai/runs/abc123/def456"
  },
  "replay": {
    "url": "https://app.dashcam.io/replay/abc123",
    "gifUrl": "https://app.dashcam.io/replay/abc123/gif",
    "embedUrl": "https://app.dashcam.io/replay/abc123?embed=true",
    "markdown": "[![Test Recording](https://app.dashcam.io/replay/abc123/gif)](https://app.dashcam.io/replay/abc123)"
  },
  "date": "2025-01-15T14:30:00.000Z",
  "team": {
    "id": "team_abc123",
    "sessionId": "sess_xyz789"
  },
  "infrastructure": {
    "sandboxId": "sandbox-123",
    "instanceId": "i-abc123",
    "os": "linux",
    "amiId": "ami-0abc123",
    "e2bTemplateId": null,
    "imageVersion": "v2.1.0"
  },
  "realtime": {
    "channel": "sandbox:sandbox-123",
    "messageCount": 42
  },
  "interactions": {
    "total": 15,
    "cached": 3,
    "byType": {
      "find": 8,
      "click": 5,
      "assert": 2
    }
  }
}

Using Result Files in CI

Result files are useful for extracting test metadata in CI pipelines without parsing log output.

GitHub Actions Example

Use fromJSON to parse the result JSON from a step output into an object whose fields you can reference in subsequent steps:
- name: Run tests
  run: npx vitest run tests/login.test.mjs

- name: Parse result
  id: result
  run: |
    # Read the first JSON result file; compact it with jq because
    # values written to $GITHUB_OUTPUT must fit on a single line
    FILE=$(find .testdriver/results -name '*.json' | head -n 1)
    echo "json=$(jq -c . "$FILE")" >> "$GITHUB_OUTPUT"

- name: Comment on PR
  if: fromJSON(steps.result.outputs.json).test.passed == false
  uses: actions/github-script@v7
  with:
    script: |
      const result = ${{ steps.result.outputs.json }};
      await github.rest.issues.createComment({
        owner: context.repo.owner,
        repo: context.repo.repo,
        issue_number: context.issue.number,
        body: [
          `❌ **${result.test.name}** failed`,
          ``,
          `Error: ${result.test.error}`,
          ``,
          result.replay.markdown,
          ``,
          `[View full recording](${result.urls.testRun})`
        ].join('\n')
      });
You can also load all results into a matrix or iterate over them:
- name: Run tests
  run: npx vitest run tests/*.test.mjs

- name: Collect results
  id: results
  run: |
    # Merge all result files into a compact single-line JSON array
    # (jq -c keeps the $GITHUB_OUTPUT value on one line)
    echo "json=$(find .testdriver/results -name '*.json' -exec cat {} + | jq -sc '.')" >> "$GITHUB_OUTPUT"

- name: Summary
  run: |
    # A Markdown table needs a header and separator row to render
    {
      echo '## Test Results'
      echo '| Test | Status | Link |'
      echo '| --- | --- | --- |'
    } >> "$GITHUB_STEP_SUMMARY"
    RESULTS='${{ steps.results.outputs.json }}'
    echo "$RESULTS" | jq -r '.[] | "| \(.test.name) | \(if .test.passed then "✅" else "❌" end) | \(.urls.testRun) |"' >> "$GITHUB_STEP_SUMMARY"

Reading Results Programmatically

import fs from "fs";
import path from "path";

const resultsDir = ".testdriver/results";

// Recursively collect every result JSON file under the results directory.
// fs.readdirSync's `recursive` option requires Node 18.17 or later.
function readResults(dir) {
  const results = [];
  for (const entry of fs.readdirSync(dir, { recursive: true })) {
    const fullPath = path.join(dir, entry);
    if (fullPath.endsWith(".json") && fs.statSync(fullPath).isFile()) {
      results.push(JSON.parse(fs.readFileSync(fullPath, "utf-8")));
    }
  }
  return results;
}

const results = readResults(resultsDir);
const passed = results.filter(r => r.test.passed);
const failed = results.filter(r => !r.test.passed);

console.log(`${passed.length} passed, ${failed.length} failed`);
for (const r of failed) {
  console.log(`  FAIL: ${r.test.name}: ${r.test.error}`);
  console.log(`  Recording: ${r.urls.testRun}`);
  console.log(`  Embed: ${r.replay.markdown}`);
}
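The same failure data can be turned into a standalone Markdown report using the ready-made replay.markdown field. The sample objects below stand in for the output of readResults(); all values are illustrative.

```javascript
// Build a Markdown failure report from result objects. The sample
// array stands in for failed results returned by readResults();
// every value here is illustrative placeholder data.
const failed = [{
  test: { name: "should log in", error: "assertion failed" },
  urls: { testRun: "https://console.testdriver.ai/runs/abc123/def456" },
  replay: { markdown: "[![Test Recording](https://app.dashcam.io/replay/abc123/gif)](https://app.dashcam.io/replay/abc123)" },
}];

const report = failed.map((r) => [
  `### ❌ ${r.test.name}`,
  `Error: ${r.test.error}`,
  r.replay.markdown,           // clickable GIF embed
  `[View run](${r.urls.testRun})`,
].join("\n\n")).join("\n\n---\n\n");

console.log(report);
```

A report like this can be written to `$GITHUB_STEP_SUMMARY`, posted as a PR comment, or saved as a build artifact.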