Aligned with ISO/IEC 42001 and the NIST AI RMF

AI risk assessment (advisory)

Get a clear view of how AI is being used in your organization (and by your vendors), where risk is introduced, and what governance and controls are proportionate to your size and your regulatory and audit expectations.

What we assess

Focused on practical risk, evidence, and decision-ready findings, without turning the assessment into an implementation project.

Data exposure & handling
What data is shared with AI tools (PII/NPI/PHI), retention and deletion practices, how prompts are stored and logged, and where sensitive data may leak.
Third-party AI risk
Vendor due diligence, sub-processors, contractual expectations, and ongoing oversight of AI-enabled services.
Governance & controls
Policy alignment, roles/responsibilities, approval workflows, monitoring, and right-sized control expectations.

Deliverables

Clear outputs your team can use immediately.

Findings report
Risk-ranked findings written in plain language, with context for leadership and audit/regulatory scrutiny.
Recommendations + evidence expectations
Recommended approaches and what “good evidence” looks like: policies, records, approvals, and oversight artifacts.
Out of scope
  • Hands-on remediation or operating AI/security tooling
  • Model building, training, or prompt engineering as a service
  • Penetration testing