Responsible AI

AI Governance at Layer8

How we build, deploy, and govern artificial intelligence — responsibly.

Layer8 Tech Group uses AI to power assessment products that inform major financial decisions — business valuations, acquisition risk, and franchise investments. We take that responsibility seriously. This page summarizes our AI governance framework and links to our full policy documentation.

Aligned with the NIST AI Risk Management Framework

Our governance framework is aligned with the NIST AI Risk Management Framework (AI RMF 1.0) and the NIST Generative AI Profile (NIST AI 600-1) — the U.S. government's voluntary framework for responsible AI development and deployment.

GOVERN: Partial
Written AI governance policy in place. Roles and responsibilities defined. Policy review schedule established.

MAP: Partial
AI system inventory and stakeholder mapping in progress. Data flows documented per product.

MEASURE: In Progress
146-check automated compliance test suite running on every commit. Bias assessment documentation underway.

MANAGE: In Progress
Incident response protocol defined. Third-party AI risk assessments scheduled. Anthropic BAA pursuit initiated.
Our Commitments
Permitted Models Only

We maintain an approved model allowlist. Only claude-sonnet-4-6, claude-haiku-4-5, claude-opus-4-6, and self-hosted Ollama are approved for use in Layer8 products. Any new model requires written approval before deployment.
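An allowlist gate like this is typically enforced in code before any request leaves the application. The sketch below is illustrative only — the model IDs come from the policy above, but the function and constant names are hypothetical, not Layer8's actual implementation:

```python
# Hypothetical allowlist gate; model IDs are from the policy above,
# but the names and structure here are illustrative.
APPROVED_MODELS = {
    "claude-sonnet-4-6",
    "claude-haiku-4-5",
    "claude-opus-4-6",
    "ollama-local",  # self-hosted Ollama deployment
}

def assert_model_approved(model_id: str) -> None:
    """Raise before any request is sent to a non-approved model."""
    if model_id not in APPROVED_MODELS:
        raise PermissionError(
            f"Model '{model_id}' is not on the approved allowlist; "
            "written approval is required before deployment."
        )
```

Centralizing the check in one function means a new model cannot be used accidentally — it must first be added to the allowlist, which is the code-level analogue of the written-approval step.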

🔒 No PHI in External AI

Protected Health Information (PHI) is never sent to external AI model providers. Our HIPAA engagement policy requires local inference via isolated infrastructure for any healthcare client data until a BAA is in place with our model provider.
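A rule like this can be expressed as a simple routing decision: PHI workloads are pinned to the isolated local backend, and only non-PHI traffic may reach an external provider. This is a minimal sketch under that assumption — the backend identifiers and function name are illustrative:

```python
# Illustrative routing rule: PHI never leaves the isolated local
# infrastructure; non-PHI traffic may use the external provider.
def select_backend(contains_phi: bool) -> str:
    if contains_phi:
        return "ollama-local"       # isolated, self-hosted inference
    return "claude-sonnet-4-6"      # external provider, non-PHI only
```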

👤 Human Review on Every Report

No AI-generated score is delivered to a client without human review by a Layer8 advisor. Our AI surfaces findings — our advisors validate them.

Bias Acknowledgment

Our scoring rubrics may reflect assumptions that disadvantage certain business types or verticals. We document these bias vectors explicitly and validate rubrics against known outcomes as real client data becomes available.

Automated Compliance Testing

146 automated checks run on every code commit covering secrets management, token limits, observability tracing, permitted models, DNC suppression, and data access controls.
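To make the idea concrete, here is one illustrative check in the spirit of that suite — a commit-time scan for hard-coded credentials. The pattern and function are examples for this page, not Layer8's actual test code:

```python
import re

# Example secrets-management check: flag lines that hard-code an API
# key or secret so the commit-time suite can fail loudly.
SECRET_PATTERN = re.compile(
    r"(api[_-]?key|secret)\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE
)

def find_secret_leaks(source: str) -> list[str]:
    """Return offending lines from a source file's text."""
    return [
        line for line in source.splitlines()
        if SECRET_PATTERN.search(line)
    ]
```

Each of the 146 checks follows the same shape — a small, deterministic predicate over the codebase or its configuration, run on every commit.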

📋 Transparent Incident Response

We maintain a documented 5-step incident response protocol. If a client receives a materially incorrect AI-generated report, we contain, assess, notify, remediate, and document within 48 hours.

What AI We Use and Why
System            | Provider    | Approved Use                          | Data Sensitivity
claude-sonnet-4-6 | Anthropic   | Primary scoring and report generation | Non-PHI only
claude-haiku-4-5  | Anthropic   | Classification and triage             | Non-PHI only
claude-opus-4-6   | Anthropic   | Complex analysis                      | Non-PHI only
Ollama (local)    | Self-hosted | Local inference, healthcare use cases | Any (isolated)
Qdrant            | Self-hosted | Vector storage and retrieval          | Non-PHI only
LangFuse          | Self-hosted | Observability and tracing             | Non-PHI only
Read the Full Policy
AI Governance Policy v1.0

Our complete 9-section governance policy covering permitted models, data handling rules, governance roles, bias acknowledgment, incident response, technical compliance controls, third-party supply chain risk, and review schedule.

View on GitHub ↗
NIST AI RMF Compliance Roadmap

Our 12-month roadmap to full NIST AI RMF alignment — 20 initiatives across 4 phases covering governance, mapping, measurement, and management.

View on GitHub ↗
AI Governance Questions

If you have questions about our AI governance practices or data handling, or would like to request our full policy documentation for vendor review, contact us directly.

Layer8 Tech Group AI Governance Policy v1.0 effective April 2026. Aligned with NIST AI RMF 1.0 and NIST AI 600-1 Generative AI Profile. This page is updated when material changes are made to our governance framework.