CT Agency Suite for audit response

When the state audit asks, the data is there.

CT Agency Suite holds the evidence the audit asks for (ISPs, MT visits, MH/SCS checklists, signed authorizations, EVV logs) because that data is what the SC and supervisor work with every day. Audit response is a benefit, not the primary purpose.

Day-to-day
Built for SC work, not audit prep
Per consumer
Evidence on one record, on demand
One screen
No stitching across systems
What state audits typically request
The evidence is on the consumer record

When a state audit pulls a sample of consumers, the data the audit needs is the same data the SC works with every day. Nothing extra to assemble.

  • Current ISP (the source of truth)
  • MT visit log + MH/SCS Checklist completion
  • Signed authorizations and consents
  • Staff credentials and training hours
  • Incident reports and follow-up
Day-to-day data
Same data audits ask for
Per-consumer export
Evidence on one record
Multi-state
Per-state configuration
Audit ≠ Survey
Two different things
Why this matters

The data is there because it's the day-to-day work.

When a state audit happens, the platform's value isn't 'audit-readiness' as a marketing posture — it's that the data the audit asks for is the same data the SC, supervisor, and biller already work with every day. Current ISPs, MT visits with upload-vs-visit documentation, MH/SCS Checklist completion, signed authorizations, EVV logs — these aren't assembled for the audit, they're already there.

When the audit selects a sample of consumers, you filter the consumer view to those, and each consumer's record IS the evidence package. Export it cleanly. Run the interviews from that data. The audit response that used to require stitching evidence from five systems happens on one screen because the work that produces the evidence is the day-to-day work the platform was built for.

How CT Agency Suite supports an audit
  • Per-consumer export — one click pulls everything on a consumer's record
  • Same data as the day-to-day — no audit-only data sets to assemble
  • MT visit log with upload-vs-visit color coding — visible documentation latency
  • MH/SCS Checklist completion — annual checklists tracked per consumer-year
  • Current ISP available — no per-edit revision history, but the current plan is always clear
What the suite delivers for audit response

The platform's audit-response capabilities, honestly framed.

Per-consumer evidence packages

When the audit selects a sample of consumers, filter to those consumers and export each one's complete record: current ISP, MT history, MH/SCS Checklist completion, signed authorizations, EVV visits, billing linkage. The export is fast because the data is already structured around the consumer.

Day-to-day data is the audit data

There is no separate 'audit-prep' data set. The MT visits the SC documented, the checklists they completed, the plans they renewed — that's what the audit reviews. The platform doesn't manufacture audit evidence; it captures the day's work.

Per-state configuration

Audit processes vary by state. Per-state configuration tailors evidence-export templates and visible fields to what each state's auditors actually ask for, so multi-state agencies get per-state views without confusion.

Current ISP visibility

The current plan is the source of truth for each consumer. The platform doesn't track per-edit revision history, but the current plan is always identifiable and exportable. When the audit asks 'what plan is in effect,' the answer is unambiguous.

Incident tracking with follow-up

Incident reports are first-class records with follow-up workflow. Closure requires documentation. The full incident lifecycle is preserved with attribution. Audit questions about incident response have answers.

Audit-trail data where the audit cares

Specific compliance-relevant actions (document upload, signature capture, plan finalization) carry attribution and timestamps. Where the audit needs to know who-did-what-when, the data is there. We don't claim a universal cross-module audit trail; we claim the data the audit actually asks about.

What it looks like in practice

A few ways teams use this.

State audit selects 20 consumers

A DDD audit selects 20 consumers at random from your caseload. QA filters the consumer view to those 20. Each consumer's record contains the current ISP, MT history with upload-vs-visit color coding, MH/SCS Checklist completion, EVV visits, and signed authorizations. Export each one's evidence package. The data was already there; you didn't assemble it for the audit.

Audit asks a follow-up question

An auditor reviewing a specific consumer asks 'when was this MT visit documented?' QA opens the consumer record, scrolls to the MT history, and sees the visit date and upload date color-coded against each other. The latency is visible at a glance. The answer takes 30 seconds, and the underlying data is there for the auditor to verify independently.

Internal review between audits

QA does an internal record review (not for an audit — just normal QA work). They sample consumers, drill into MT history and checklist completion, follow up with SCs whose uploads are missing. The same data and the same workflow that supports the day-to-day SC work supports internal QA review. Audit response, when it comes, is the same workflow at scale.

Frequently asked

Common questions about audits and audit response.

Does the platform handle state-specific audit requirements?

Per-state configuration tailors evidence-export templates and visible fields to what each state's auditors actually ask for. Sensible defaults ship for common programs (NJ DDD, typical HCBS waiver programs); state-specific configuration is part of onboarding. Multi-state agencies get per-state dashboards and exports.

Does the platform track per-edit revision history of plans?

No, and intentionally so. The current plan is the source of truth. When a plan is renewed, the new plan replaces it and the prior plan is archived. We don't track per-edit revision history because in practice it adds complexity without serving the SC's day-to-day work. When an audit asks 'what plan is in effect,' the answer is unambiguous.

What if the audit finds an issue the platform didn't help us catch?

The platform is built to support the day-to-day work of running the agency well. If the day-to-day work is being done correctly, the audit finds clean evidence. If there are gaps in the day-to-day work, the audit will find them — that's what audits do. The platform isn't a substitute for doing the work; it's a tool that makes the work easier and the evidence available.

How does the platform handle incident reports?

Incidents are first-class records with a required follow-up workflow. When an incident is logged, follow-up actions are tracked, and closure requires documentation. The full incident lifecycle — from initial report to resolution — is preserved with attribution.

Does this work for non-HCBS audits (Medicaid program audits, etc.)?

Yes. The principle is the same: collect data as part of the day-to-day work, expose it cleanly when the audit needs it. Non-HCBS audits (Medicaid claim audits, financial audits, etc.) use the same evidence-export tooling, with audit-specific configuration as needed.

Audit response, without the assembly.

Apply for the CT Agency Suite early-access program. We'll walk through your specific state's audit process and how the platform supports the response.