This feature is coming soon! Spark AI Evidence Testing will initially be available for the Controls Compliance Application only. For use cases outside Controls Compliance, please contact your account team or submit a feature request.
Overview
Spark AI Evidence Testing enables continuous control monitoring by extending automation from evidence collection to evidence review. Spark AI evaluates an evidence attachment against the testing procedure defined by your team and returns a recommended testing result, along with supporting rationale and a remediation recommendation.
Instead of relying on point-in-time audits, Spark AI Automated Evidence Testing allows evidence to be validated continuously as it is submitted. When paired with Risk Cloud’s Automated Evidence Collection, evidence can be automatically collected and reviewed—shifting compliance programs from manual, periodic testing to always-on assurance while keeping reviewers in the loop.
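Conceptually, each evidence test maps a small set of inputs to a set of outputs. The sketch below is a hypothetical illustration of that shape only (it is not the Risk Cloud API); the names mirror the input and output fields you will select when configuring the job later in this article.

```python
# Hypothetical illustration of what an evidence test consumes and produces.
# This is not the Risk Cloud API; names mirror the job's input/output fields.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EvidenceTestInput:
    evidence_attachment: bytes                   # required: the evidence file to evaluate
    testing_procedures: str                      # required: the pass/fail criteria your team defines
    control_description: Optional[str] = None    # optional: context about the control being tested
    evidence_description: Optional[str] = None   # optional: context about the evidence itself

@dataclass
class EvidenceTestOutput:
    testing_results: str                         # recommended outcome, e.g. "Pass", "Fail", "Insufficient"
    testing_rationale: str                       # supporting rationale for the recommendation
    remediation_action: Optional[str] = None     # suggested remediation when the evidence falls short
```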
How to Enable Automated Evidence Testing (Prerequisites)
1. Log in to Risk Cloud as an admin user.
2. Go to Admin → Integrations → Spark AI Integration Card.
3. Click Configure, then enable Automated Evidence Testing.
- Enabling Automated Evidence Testing gives you access to a new Job operation card called "Run Evidence Test" when you set up Job automations within the Controls Compliance Application.
Tip: Spark AI features can be enabled or disabled at any time. The integration card makes it easy for admins to manage which AI features to opt in to or out of.
4. Make sure your organization has access to the Controls Compliance Application. You can verify this under Application Build → App Settings → Application Use Case, where Controls Compliance should be selected.
Creating an Automation for Evidence Testing
Spark AI Automated Evidence Testing is configured in Risk Cloud Jobs. Follow the steps below to create a job that automates evidence tests when evidence records are created or moved. To learn more about Risk Cloud Jobs, refer to the Help Center Article.
Note: The example below is based on the Controls Compliance Application template. You can tailor the configuration to align with your organization's workflows setup.
1. Navigate to Build → Jobs → + New Job.
2. Select a Trigger: Choose either Record Created or Record Moved on the workflow where you want the Spark AI Evidence Test to run (e.g., the Controls Compliance Application's Evidence Task Workflow).
Note: The Run Evidence Test job operation is currently available only for these two trigger types.
Common Scenario - With Automated Evidence Collection (AEC): If AEC is enabled, it creates new Evidence Task records on a scheduled cadence, with an evidence file attached at each collection. Use the Record Created trigger on the Evidence Task workflow at the Evidence Review step.
Common Scenario - Without AEC: If you are not using AEC, you can still run evidence tests. Use the Record Moved trigger on the Evidence Task workflow, moving records from Evidence Collection to Evidence Review.
3. Add the Run Evidence Test operation. When the trigger type and prerequisites are met, the Run Evidence Test operation card will be available. (A configuration sketch covering steps 2 through 6 follows this list.)
4. Configure Run Evidence Test - Select input fields:
Evidence Attachment (required): Choose an attachment field that contains the evidence file to be evaluated. Note: Only one attachment field can be selected; if multiple files are present, the latest version of the uploaded file will be used.
Testing Procedures (required): Choose a text, text area, or text concatenation field that defines the testing procedure or criteria.
Control Description (optional): Select a text, text area, or text concatenation field that contains additional context about the control being tested.
Evidence Description (optional): Select a text, text area, or text concatenation field that provides additional context about the evidence.
5. Configure Run Evidence Test - Select output fields:
Testing Results (required): Select a single‑select field to store the AI‑generated outcome (e.g., Pass, Fail, Insufficient).
Testing Rationale (required): Select a text or text area field to store the AI-generated supporting rationale.
Remediation Action (optional): Select a text or text area field to store the AI-generated suggested remediations.
6. (Optional) Add Conditions or Additional Operations. Jobs that include the Run Evidence Test operation support stacking conditions and additional operations.
For example, if you don't want all submitted evidence to go through Spark AI Automated Evidence Testing, you can configure a condition to run the evidence test only when the Test Type field equals Automated. You can also combine this with other operations, such as notifying the Evidence Owner when a test result fails.
7. Publish and Test the Job. Publish the job to enable it. Once published, the job will run automatically when the configured trigger and conditions are met. To test the job, create or move a record that triggers the job. You can verify execution details in the Job History tab.
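To tie steps 2 through 6 together, the following sketch represents one possible job configuration as plain data. It is illustrative only: Risk Cloud Jobs are configured in the UI, and the structure and field names below (for example, "Evidence File" and "Test Type") are assumptions, not an actual export format or API.

```python
# Hypothetical sketch of the job described in steps 2-6; not an actual
# Risk Cloud export format or API. Field names on the right-hand side
# are example workflow fields and will differ in your application.
evidence_test_job = {
    "trigger": {
        "type": "Record Created",          # or "Record Moved"
        "workflow": "Evidence Task",
        "step": "Evidence Review",
    },
    "conditions": [
        # Optional: only test evidence explicitly flagged for automated testing.
        {"field": "Test Type", "equals": "Automated"},
    ],
    "operations": [
        {
            "type": "Run Evidence Test",
            "inputs": {
                "Evidence Attachment": "Evidence File",          # required, attachment field
                "Testing Procedures": "Testing Procedure",       # required, text / text area / concatenation
                "Control Description": "Control Description",    # optional
                "Evidence Description": "Evidence Description",  # optional
            },
            "outputs": {
                "Testing Results": "Test Result",                # required, single-select
                "Testing Rationale": "Testing Rationale",        # required, text / text area
                "Remediation Action": "Remediation Action",      # optional, text / text area
            },
        },
        # Additional stacked operations could go here, e.g. notify the
        # Evidence Owner when the Test Result field equals "Fail".
    ],
}
```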
Running Automated Evidence Testing
Triggered by Record Moved
Create a record and upload evidence. Create a record in your workflow (e.g., an Evidence Task record in the app template) and upload an evidence file. Ensure the required fields for the Run Evidence Test operation — Attachment and Testing Procedure — are populated.
Move the record to trigger testing. Move the record to the workflow step configured to trigger the Run Evidence Test job operation (e.g., from Evidence Collection to Evidence Review step).
Review test outputs. After the record is moved, the output fields — Test Result, Rationale, and Remediation Action — will be populated once the job completes.
Note: Depending on current job volume, results may appear immediately or take up to 1–2 minutes. You can track execution details in the Job History tab.
Output fields populated by Spark AI are labeled “Tested by Spark AI.” If a user edits a value, the label is automatically removed.
Triggered by Record Created (compatible with AEC)
Configure AEC on the parent record (e.g., an Evidence Requirement record in the app template). Set up AEC to automatically create a new child record with the evidence file attached (e.g., an Evidence Task record created in the Evidence Review step).
Run AEC immediately or on its scheduled cadence. When AEC creates a new record in the workflow step configured with a Record Created trigger, the Run Evidence Test job operation is automatically triggered.
Review test outputs. Once the job completes, the output fields—Test Result, Rationale, and Remediation Action—will be populated on the newly created record.
Labels and Field History
Labels: Fields populated by Automated Evidence Testing are labeled “Tested by Spark AI.” If a user manually edits a value, the AI label is automatically removed.
Interaction with Autofill: Fields populated by Spark AI Autofill can be overwritten by Automated Evidence Testing. In this case, the label updates from “Autofilled with Spark AI” to “Tested by Spark AI.” However, fields populated by Automated Evidence Testing are not overwritten by Spark AI Autofill, as Autofill only generates content for blank fields.
Audit Trail: Every field populated by Automated Evidence Testing is recorded in the Field History side panel and is exportable as a CSV. Logged details include the field name, updated value, record ID, job name, executor (Spark AI – Testing), record assignee, and timestamp. This provides full traceability for all AI-generated field updates.
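The interaction between the two labels can be summarized in a short sketch. The functions below are a hypothetical model of the behavior described above, assuming a field is represented as a simple value/label pair; they are not LogicGate's implementation.

```python
# Hypothetical model of Spark AI labeling behavior; not LogicGate's implementation.
TESTED = "Tested by Spark AI"
AUTOFILLED = "Autofilled with Spark AI"

def run_evidence_test(field: dict, new_value: str) -> None:
    """Evidence Testing writes its result and labels the field, replacing
    an "Autofilled with Spark AI" label if one is present."""
    field["value"] = new_value
    field["label"] = TESTED

def run_autofill(field: dict, new_value: str) -> None:
    """Autofill only generates content for blank fields, so it never
    overwrites a value written by Evidence Testing."""
    if not field.get("value"):
        field["value"] = new_value
        field["label"] = AUTOFILLED

def manual_edit(field: dict, new_value: str) -> None:
    """A manual edit keeps the user's value and removes the AI label."""
    field["value"] = new_value
    field["label"] = None
```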
Best Practices
Provide Clear Testing Procedures. The quality of AI-generated results depends heavily on the clarity of your testing procedures. Clearly specify what constitutes a pass or fail to ensure accurate and consistent outcomes. For example, "Confirm the attached access review is dated within the current quarter and includes sign-off from the system owner" is more testable than "Review the access report."
Provide Additional Context. Use the optional input fields to improve result quality: Control Description to describe the control's objectives alongside the testing procedure, and Evidence Description to explain the context of the attached evidence.
Review AI Outputs. Spark AI can make mistakes, so human oversight remains essential. Always review AI-generated test results, supporting rationale, and remediation recommendations before finalizing decisions.
Combine with AEC. Automated Evidence Testing delivers the most value when paired with Automated Evidence Collection. Together, they enable continuous evidence collection and testing, helping shift programs away from manual uploads and periodic audits toward ongoing control monitoring.
Troubleshooting
If you encounter errors in Job configuration, Job History, or if Spark AI Automated Evidence Testing does not trigger as expected, review the checks below.
Supported Trigger Types: The Run Evidence Test job operation is available only for Record Created and Record Moved. If a different trigger type is selected, the operation will not be available.
Supported Application Use Case: The Run Evidence Test operation is available only when the Application Use Case is set to Controls Compliance at the time the job is created.
If the Application Use Case is later changed to another use case, existing jobs that include Run Evidence Test will continue to execute as configured.
Required Input/Output Fields Deleted: If any required input or output field used by the Run Evidence Test operation is deleted, the operation will be disabled. The Job Configuration will display an error prompting you to update the invalid operation. To resolve this, reselect the required input and output fields, save the operation, and publish the job.
Required Input Fields Missing Values: If any required input field is left blank, the Run Evidence Test operation will execute but appear as No Operation in the Job History. In this case, no output fields will be populated.
Spark AI Integration & Feature Toggles: Disabling the Spark AI integration or the Automated Evidence Testing feature toggle disables all existing Run Evidence Test operations. The Job Configuration will display an error indicating the operation is invalid. To resolve this:
Re-enable the Spark AI integration and/or Automated Evidence Testing feature toggle.
Reselect any input or output field in the Run Evidence Test operation to save the configuration.
Republish the job. This clears the error messages and reactivates the job.
Evidence File Size Limits: Evidence files are limited to approximately 30,000 characters to avoid exceeding token limits. If a larger file is uploaded, Spark AI Automated Evidence Testing evaluates only the first 30,000 characters.
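If you want to flag oversized files before they are uploaded, a simple pre-check like the sketch below can help. It assumes plain-text evidence and uses a hypothetical file name; the ~30,000-character limit is approximate, as noted above.

```python
# Hypothetical pre-check for the approximate 30,000-character evidence limit.
# Assumes plain-text evidence; the file name below is an example only.
from pathlib import Path

CHAR_LIMIT = 30_000

def check_evidence_size(path: str) -> None:
    text = Path(path).read_text(encoding="utf-8", errors="ignore")
    if len(text) > CHAR_LIMIT:
        print(f"{path}: only the first {CHAR_LIMIT:,} of {len(text):,} characters will be evaluated")
    else:
        print(f"{path}: within the limit ({len(text):,} characters)")

check_evidence_size("q3_access_review.txt")  # example file name
```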
Multiple Evidence Files: When multiple attachments are present, Spark AI Automated Evidence Testing evaluates only the most recent version of the latest uploaded attachment.