AI Compass
AI Management System · AIMS
Powered by Microsoft Power Platform

Navigate AI Governance
with Confidence
From Day One.

AI Compass is a purpose-built AI Management System on Microsoft Power Platform, giving public-sector agencies a structured, audit-ready path from AI policy to evidence-backed governance readiness — without building from scratch.

AI Compass is designed for state agencies, counties, cities, courts, and public safety organizations that need to operationalize AI inventory, risk management, evidence collection, oversight documentation, and framework readiness across ISO/IEC 42001, NIST AI RMF, and Texas TRAIGA.

ISO/IEC 42001 · NIST AI RMF · Texas TRAIGA · EU AI Act · TXShare Contract · Microsoft-Native Architecture
AI Compass · Executive Risk Dashboard · Illustrative
47
AI Systems
12
High Risk
89%
Controls Met
6
Open Findings
Risk Score by Department
IT Services
82
Finance
65
HR
58
Public Safety
44
Legal
22
Framework Coverage
ISO · NIST · TRAIGA readiness
AI System Registry · Recent Activity
System · Dept · Risk · Review · Status
Copilot M365 · IT · High · Required · Assessment
Predictive Patrol · Public Safety · High · Required · Finding
Benefits Screener · Social Svcs · Medium · Screened · Approved
Invoice AI · Finance · Low · N/A · Monitored
Illustrative UI · Data populated from Dataverse via Power BI
18+
Years Gov't Tech
State & local government expertise
100%
Microsoft Native
No separate AI GRC platform or middleware
6
Lifecycle Stages
Registration through retirement
Core Capabilities

Everything Your Agency Needs to
Show AI Is Governed

AI Compass delivers a single, structured workspace where every AI system, risk decision, control, evidence record, and audit trail lives — built natively on Microsoft Power Platform and Dataverse. Configuration-led deployment, no custom middleware, and no separate infrastructure required for the core governance workflows.

⚡ TXShare Cooperative Contract
Texas state and local agencies may procure AI Compass directly through AG365's active TXShare cooperative contract — no full RFP required, reducing procurement timelines and administrative burden significantly.
AI Use Case Registry
Centralized inventory of every AI system with ownership, classification, data lineage, and purpose — your single source of truth.
Risk Assessment Engine
Configurable scoring aligned to NIST AI RMF (Govern, Map, Measure, Manage) with automated routing by risk tier.
Compliance Dashboard
Power BI reporting that surfaces governance posture, evidence status, risk trends, and open actions based on the configured refresh model.
Continuous Monitoring
Ongoing drift detection, performance tracking, and automated escalation to keep every deployed AI system in check.
Built on Microsoft Cloud

100% Microsoft Native —
No Separate AI GRC Platform. No Middleware.

Your governance data lives inside your Microsoft environment, using Dataverse, Power Platform, and Microsoft security controls already familiar to many public-sector IT teams. Depending on your current licensing and implementation scope, AI Compass can reduce the need for a separate AI governance platform, custom middleware, or additional infrastructure.

Power Platform
AIMS Engine & Workflows
Model-driven apps, business process flows, approvals, forms, role-based navigation, and the complete AI governance user experience — minimal custom code required for core governance workflows.
Power Apps · Power Automate · BPF
Dataverse
Secure Governance Data Store
50+ configurable Dataverse tables for AI systems, risks, controls, vendors, evidence, reviews, and remediation, with role-based security, ownership, and audit history configured according to the customer environment.
50+ Tables · Security Roles · Audit Log
Power BI
Compliance & Audit Reporting
Executive and operational dashboards for risk posture, evidence status, open actions, framework readiness, vendor exposure, and TRAIGA screening status — updated through the configured Power BI refresh model.
Dashboards · KPIs · Audit Reports
Microsoft Purview
Data Governance & Lineage
Sensitivity labels, data protection, policy evidence, and AI-related data handling governance. Integration patterns connect Purview signals directly to AI Compass records and evidence tracking.
Labels · DLP · Evidence
Copilot Studio
AI Agent Governance
AI agents and custom copilots built in Copilot Studio can be registered, classified, and governed through AI Compass — closing the loop on internal AI development and deployment lifecycles.
Agent Registry · Governance
Microsoft Sentinel
Threat Detection & SIEM
Sentinel playbooks configured to trigger AI governance review workflows where integration is licensed and in scope. Connects security alerts to Dataverse-based governance records.
Alerts · Playbooks · SOAR
Microsoft licensing, tenant configuration, and implementation scope determine which integrations are available. AI Compass works within your existing Microsoft environment — reducing the need for a separate governance platform, custom middleware, or additional infrastructure.
AI Use Case Lifecycle

Six-Stage Governance
From Registration to Retirement

Select any stage to see what happens, what records are created, and what outputs are produced at each step of the AI governance lifecycle.

01
Registration
Every AI system, tool, model, or AI-assisted workflow begins here. Departments submit a structured intake record capturing the system's purpose, data inputs, autonomy level, vendor or developer, impacted populations, and regulatory exposure. This creates the system's permanent governance identity in Dataverse — the single source of truth for everything that follows.
AI System record created in Dataverse with unique identifier
System owner, department, and data steward assigned
Data classification and vendor/model information captured
TRAIGA applicability screening initiated automatically
Power Automate routes record to Risk Assessment queue
AI Use Case Intake Form · Dataverse
Predictive Case Routing — Social Services
Health & Human Services
Decision-Support · ML Model
Sensitive — PII & Protected Records
Internal Build · Azure ML
Submitted → Awaiting Risk Assessment
47
Registered
8
This Month
100%
In Dataverse
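The intake fields described above can be pictured as a single structured record. This is an illustrative Python sketch only — not the actual Dataverse schema; every field name here is an assumption drawn from the mockup text.

```python
from dataclasses import dataclass, field
from uuid import uuid4


@dataclass
class AIIntakeRecord:
    """Illustrative intake record. Field names are assumptions,
    not the real AI Compass Dataverse table definition."""
    name: str
    department: str
    owner: str
    data_steward: str
    purpose: str
    data_classification: str     # e.g. "Sensitive — PII & Protected Records"
    autonomy_level: str          # e.g. "Decision-Support · ML Model"
    vendor_or_builder: str       # e.g. "Internal Build · Azure ML"
    record_id: str = field(default_factory=lambda: uuid4().hex)
    status: str = "Submitted — Awaiting Risk Assessment"
    traiga_screening_initiated: bool = True  # screening starts automatically on registration


# Populate the record shown in the sample intake form
record = AIIntakeRecord(
    name="Predictive Case Routing",
    department="Health & Human Services",
    owner="D. Carter",
    data_steward="HHS Data Office",
    purpose="Route incoming social-services cases to caseworkers",
    data_classification="Sensitive — PII & Protected Records",
    autonomy_level="Decision-Support · ML Model",
    vendor_or_builder="Internal Build · Azure ML",
)
```

In the product itself this identity record lives in Dataverse; the sketch only shows the shape of the information the intake captures.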
02
Risk Assessment
The registered system undergoes a configurable risk assessment aligned to NIST AI RMF (Govern, Map, Measure, Manage) and ISO/IEC 42001. Scores are calculated across likelihood, impact, data sensitivity, human oversight adequacy, vendor exposure, and regulatory relevance. High-risk systems are automatically routed to mandatory senior review. Low-risk systems may proceed with standard documentation.
Risk score calculated across 6 dimensions (1–25 scale)
Risk tier assigned: Low / Medium / High / Critical
NIST AI RMF function mapping recorded (Govern, Map, Measure, Manage)
Required controls and evidence list generated automatically
Power Automate escalates High/Critical to senior reviewer queue
Risk Score Card · NIST AI RMF Aligned
Likelihood of Harm · 4 / 5
Impact Severity · 3 / 5
Data Sensitivity · 5 / 5
Human Oversight · 2 / 5
Vendor Exposure · 1 / 5
Risk Tier HIGH — Senior Review Required
03
Reviewer Approval
Designated reviewers — CIO, CISO, Legal, or a governance committee — receive structured review tasks in their Power Apps queue. They examine the risk assessment, evaluate proposed controls, request additional evidence, and approve or reject the system for production. Every decision, comment, and timestamp is retained in Dataverse audit history according to the configured retention policy. Rejections trigger a structured remediation workflow.
Reviewer task assigned with risk summary and evidence package
Approval, conditional approval, or rejection recorded with rationale
All reviewer decisions timestamped and attributed in audit log
Required conditions or control requirements attached to record
Rejected systems enter remediation workflow automatically
Reviewer Queue · Power Apps
Legal Review — Contract AI Clauses · Approved
CISO — Data Security Assessment · Approved
CIO — Architecture Sign-off · In Review
Governance Committee Vote · Pending
Reviewer Note — CIO
"Conditional approval pending confirmation that human override mechanism is documented and tested. Require evidence upload before production clearance."
2
Approved
1
In Review
1
Pending
04
Production
Following full approval and evidence collection, the AI system is cleared for live deployment. The Dataverse record transitions to Active status, governance ownership is confirmed, monitoring parameters are set, and the system enters the operational oversight cycle. A production deployment package — including all approvals, controls, and conditions — is archived as the baseline governance record.
System status updated to Active in AI Use Case Registry
Production deployment date and version recorded
Monitoring schedule and review cadence configured
Governance owner notified; accountability formally assigned
Power BI dashboard updated; system appears in live inventory
Production Record · AI Use Case Registry
● Active — Production
January 14, 2026
v2.1.0 · Azure ML Studio
D. Carter, Deputy CIO
July 14, 2026 — Quarterly
14 of 14 · All Conditions Met
05
Monitor & Review
Active AI systems remain under continuous governance oversight. Drift alerts, performance review cycles, evidence renewal, and periodic re-assessments keep every deployed system accountable. Scheduled reviews — quarterly, semi-annual, or annual — generate structured review tasks in Power Apps. Any open finding triggers a remediation workflow automatically tracked through to closure.
Automated drift and performance alerts from connected systems
Scheduled review tasks generated in Power Automate
Evidence renewal tracked with age, owner, and renewal date
Open findings logged with owner, due date, and priority
Power BI dashboard updated through the configured refresh model for executive visibility
Sample Monitoring Dashboard
Quarterly Review — Q1 2026 · Overdue
Bias Drift Alert — Output skew +4% · Open
Evidence Renewal — Data lineage doc · Overdue
Annual Re-Assessment · Scheduled
Performance Health Score
72/100
2 open findings reducing score — remediation in progress
06
Retirement
When an AI system is decommissioned, replaced, or discontinued, it is formally retired through a structured close-out workflow. The retirement record captures the reason for decommission, data disposition decisions, stakeholder notifications, and final audit package. The complete governance record is preserved — not deleted — ensuring the full lifecycle history remains available for future audits, procurement reviews, and institutional accountability.
Retirement reason and authority documented and approved
Data retention and disposition decisions captured
Stakeholder notifications sent via Power Automate
System status set to Retired — removed from active dashboard
Complete lifecycle record retained in Dataverse according to the agency's records-retention policy
Retirement & Archive Record
◼ Retired — Archived
March 3, 2026
Replaced by upgraded model v3.0
Retained 7 years · Records Management Policy
CIO · Legal · Records Manager
14 months in Production · Full record preserved
14mo
Active Life
6
Reviews Done
100%
Documented
Risk Management

AI Risk — Visible, Scored,
Routed, and Resolved

Select a step to explore how AI Compass turns risk from a spreadsheet into a structured governance workflow across your entire AI portfolio.

Step 01 · Identify

Find Every
AI System in Use

You can't govern what you can't see. AI Compass starts by building a complete, structured inventory of every AI system, tool, model, and AI-assisted workflow across your agency — including shadow AI, vendor-supplied tools, and internal builds. Each system gets a permanent Dataverse record with ownership, classification, and exposure profile.

  • Structured intake captures purpose, data type, autonomy level, and vendor
  • Department and system owner assigned — no orphaned AI systems
  • TRAIGA applicability flag set automatically on registration
  • Vendor, model, and deployment environment documented
  • Impacted populations and affected groups identified up front
AI System Inventory · Portfolio View
47 SYSTEMS
High Risk 12
Medium Risk 13
Low Risk 15
Cleared 7
System · Dept · Risk
Predictive Routing · HHS · ● HIGH
Copilot M365 · IT · ● MED
Invoice AI · Finance · ● LOW
+ 44 more systems…
Step 02 · Assess

Score Risk
Across Six Dimensions

The AI Compass risk engine scores every system across six configurable dimensions — producing a composite risk tier that drives automated routing, reviewer assignment, and required evidence collection. Aligned to NIST AI RMF and ISO/IEC 42001, scores are repeatable, auditable, and defensible.

  • Six-dimension scoring: Likelihood, Impact, Data Sensitivity, Oversight, Vendor, Regulatory
  • Composite score maps to Low / Medium / High / Critical risk tier
  • Aligned to NIST AI RMF Govern + Map functions
  • High and Critical tiers trigger mandatory senior review workflow
  • All scores timestamped and attributed in Dataverse audit log
Risk Scorecard · NIST AI RMF Aligned
Risk matrix: Likelihood × Impact · zones Low / Moderate / High / Critical
Predictive Routing · Score Breakdown
Likelihood
4
Impact
3
Data Sensitivity
5
Oversight
2
Vendor Risk
1
Regulatory
4
Composite Score · 19 / 25 · HIGH
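The scoring-and-routing flow described above can be sketched in a few lines. In this sketch the composite is a simple sum of the six 1–5 dimension scores (which reproduces the illustrative 19), and the tier thresholds are assumptions — the text says only that scoring is configurable and that High and Critical tiers trigger mandatory senior review.

```python
def composite_score(scores: dict[str, int]) -> int:
    # Assumption: composite = sum of the six 1-5 dimension scores.
    # The actual AI Compass formula is configurable per agency.
    return sum(scores.values())


def risk_tier(composite: int) -> str:
    """Map a composite score to a tier. These cut-offs are illustrative
    assumptions, not the product's configured thresholds."""
    if composite >= 20:
        return "Critical"
    if composite >= 15:
        return "High"
    if composite >= 9:
        return "Medium"
    return "Low"


def requires_senior_review(tier: str) -> bool:
    # From the text: High and Critical tiers route to mandatory senior review
    return tier in {"High", "Critical"}


# The illustrative "Predictive Routing" breakdown
scores = {"Likelihood": 4, "Impact": 3, "Data Sensitivity": 5,
          "Oversight": 2, "Vendor Risk": 1, "Regulatory": 4}
tier = risk_tier(composite_score(scores))
```

With the sample scores this yields a composite of 19 and a High tier, so the record would be escalated to the senior-reviewer queue.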
Step 03 · Control

Assign Controls,
Collect Evidence

Once risk is scored, AI Compass maps required controls, mitigation actions, approval conditions, and evidence requirements to the system record. Controls are linked to NIST AI RMF and ISO/IEC 42001 clauses. Evidence records are tracked with age, owner, renewal dates, and review cadence — no more relying on email chains or shared drives.

  • Controls mapped to NIST AI RMF and ISO/IEC 42001 framework clauses
  • Evidence requirements generated based on risk tier automatically
  • Each control assigned an owner, due date, and completion status
  • Conditional approvals attach specific requirements to the record
  • All control decisions permanently logged in Dataverse audit trail
Control Register · Predictive Routing System
✓ Human Override
Override mechanism documented, tested, and approved by CIO
✓ Data Lineage
Full lineage map from source to output stored in Purview
⚠ Bias Audit
Quarterly bias assessment — renewal status tracked automatically
⚠ User Notice
Public-facing AI disclosure — draft under legal review
Impact Assessment
Annual AIIA scheduled — first due Jan 2027
Vendor Attestation
Azure ML terms reviewed — attestation on file
Evidence Status · All 47 Systems
Current
35
Due Soon
9
Overdue
3
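The evidence-aging states above (Current, Due Soon, Overdue) suggest a simple classification by renewal date. A minimal sketch, assuming a 30-day "due soon" window — the actual renewal cadence is configured per control:

```python
from datetime import date, timedelta


def evidence_status(renewal_due: date, today: date,
                    due_soon_window_days: int = 30) -> str:
    """Classify an evidence record by its renewal date.
    The 30-day window is an illustrative assumption."""
    if today > renewal_due:
        return "Overdue"
    if today >= renewal_due - timedelta(days=due_soon_window_days):
        return "Due Soon"
    return "Current"
```

Tallying this status across every evidence record is what produces the Current / Due Soon / Overdue counts shown on the register.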
Step 04 · Monitor

Track Findings,
Close the Loop

Governance doesn't end at approval. AI Compass continuously monitors active systems through scheduled review cycles, drift alerts, overdue evidence flags, and open finding trackers. Every finding is logged, owned, prioritized, and tracked to closure — producing a dashboard based on the configured refresh and integration model that executives and auditors can access without submitting a report request.

  • Open findings logged with severity, owner, due date, and status
  • Automated alerts for overdue reviews and expiring evidence
  • Drift detection flags trigger governance review tasks automatically
  • Power BI dashboard reflects the configured refresh and integration model
  • Closed findings preserved in audit trail with resolution details
Sample Risk Monitor · Open Findings
Bias Drift Detected — Predictive Routing
Severity: Critical · Owner: J. Rivera, Deputy CIO · Due: Apr 15, 2026 · Overdue
Quarterly Review Overdue — Copilot M365
Severity: High · Owner: IT Governance · Due: Mar 31, 2026 · Overdue
Evidence Renewal — Benefits Screener Lineage Doc
Severity: Medium · Owner: Data Steward · Overdue
✓ Vendor Attestation — Invoice AI · Closed
Closed Mar 28, 2026 · Resolution on file
3
Critical
6
Open
89%
Controls Met
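A drift flag like the "Output skew +4%" finding above amounts to comparing a monitored metric against its baseline and opening a finding when the gap crosses a threshold. A minimal sketch — the 3% threshold, the severity rule, and the finding fields are all illustrative assumptions:

```python
from typing import Optional


def drift_finding(metric: str, baseline: float, current: float,
                  threshold: float = 0.03) -> Optional[dict]:
    """Open a finding when a monitored output metric drifts past a threshold.
    Threshold, severity rule, and field names are illustrative assumptions."""
    skew = current - baseline
    if abs(skew) <= threshold:
        return None  # within tolerance: no finding
    return {
        "finding": f"Bias Drift Alert — {metric} {skew:+.0%}",
        "severity": "Critical" if abs(skew) > 2 * threshold else "High",
        "status": "Open",
    }


# A +4% skew against a 3% threshold opens a finding for triage
finding = drift_finding("Output skew", baseline=0.50, current=0.54)
```

In AI Compass terms, a flag like this would create the Dataverse finding record that then carries an owner, due date, and priority through to closure.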
Dashboards & Reporting

Executive and Operational
Governance Views

Four purpose-built dashboard views surface different aspects of your AI governance posture — from a one-page executive summary to operational evidence tracking and TRAIGA readiness. Select a view to explore the report layout and what each delivers.

Executive Summary · AI Governance Posture
Designed for CIOs, agency directors, and board presentations. One-page view of your entire AI governance posture — inventory, risk, evidence, and open actions.
Sample · Illustrative Data
47
AI Systems Registered
↑ 8 this quarter
12
High-Risk Systems
↑ 2 since last review
89%
Evidence Current
↑ 4% vs last quarter
6
Open Findings
↓ 3 closed this month
Risk Score by Department
IT Services
84
Health & Human Svcs
76
Finance
62
Public Safety
48
Legal
24
Procurement
18
Open Findings · Priority Order
System · Finding · Severity · Owner
Predictive Routing · Bias drift alert · Critical · Dep. CIO
Copilot M365 · Quarterly review · High · IT Gov
Benefits Screener · Evidence renewal · Medium · Data Steward
HR Screener · Oversight doc gap · Medium · CHRO
Portfolio Risk Distribution
47 SYSTEMS
Critical / High Risk · 12
Medium Risk · 13
Low Risk · 15
Cleared / Approved · 7
Governance Readiness Status
AI inventory established (47 systems)
System owners assigned — all records
Risk tiers configured and active
Reviewer workflows operational
! Evidence gaps open — 6 records
! Review cycle overdue — 3 systems
! Bias drift finding unresolved
Risk & Findings Report · Operational View
For CISOs, risk managers, and compliance officers. Full risk posture by system, department, vendor, and finding status — with drill-down to individual records and audit trail.
Sample · Illustrative Data
3
Critical Findings
↑ 1 new this week
6
Open Findings
↓ 3 vs last month
19
Avg Risk Score
↑ 2pts vs last quarter
14
Remediated (90d)
↑ 5 vs prior 90d
All Systems · Risk Tier & Status
System · Dept · Score · Tier · Status
Predictive Routing · HHS · 19/25 · High · Finding Open
Patrol Analytics · Public Safety · 21/25 · Critical · In Review
Copilot M365 · IT · 15/25 · High · Review Overdue
Benefits Screener · Social Svcs · 11/25 · Medium · Approved
Invoice AI · Finance · 6/25 · Low · Monitored
+ 42 more systems
Finding Volume · Rolling 6 Months · Oct–Mar
Legend: High/Critical · Medium · Low / Closed
Risk by Category
Data Privacy
22
Bias / Fairness
18
Vendor Risk
13
Oversight Gap
11
Regulatory
8
Remediation Velocity
14
Findings closed (90 days)
Avg time to close: 11 days
74% of findings remediated within SLA target (14 days)
Evidence & Controls Report · Audit Readiness View
For internal audit, legal, and compliance teams. Full evidence register with age, owner, renewal dates, policy linkage, and control satisfaction — ready for auditor review at any time.
Sample · Illustrative Data
35
Evidence Current
9
Renewal Due Soon
3
Overdue / Stale
127
Controls Mapped
Evidence Register · All Systems
Evidence Item · System · Owner · Renewed · Status
Human Override Test · Pred. Routing · CIO · Jan 2026 · Current
Data Lineage Map · Pred. Routing · Data Steward · Oct 2025 · Overdue
Bias Audit Report · Benefits Screener · Analyst · Feb 2026 · Current
Vendor Attestation · Copilot M365 · Procurement · Mar 2026 · Current
Impact Assessment · Patrol Analytics · Legal · Missing
User Disclosure Notice · HR Screener · Legal · Draft
Control Satisfaction · Framework Mapping
ISO 42001 — Planning
92%
ISO 42001 — Operation
78%
NIST — Govern
85%
NIST — Measure
68%
TRAIGA — HB 149
74%
Evidence Health
74% CURRENT
Current (35) · 74%
Due Soon (9) · 19%
Overdue (3) · 6%
Audit Readiness Score
B+
Governance Readiness Grade
3 overdue evidence records are the primary gap
TRAIGA Readiness Report · HB 149 · Texas
Purpose-built for Texas public-sector agencies. Tracks applicability screening, risk review, oversight documentation, vendor records, and reporting readiness — organized around HB 149 requirements.
Sample · Illustrative Data
47
Systems Screened
18
TRAIGA Applicable
14
Risk Reviews Complete
4
Reviews Pending
HB 149 Requirement Coverage · All 18 Applicable Systems
AI Inventory
18/18
Risk Review
14/18
Human Oversight
13/18
Vendor Tracking
15/18
Impact Mitigation
11/18
Reporting Ready
10/18
Applicable Systems · Review Status
System · Category · Risk Review · Oversight Doc
Predictive Routing · Decision-support · Complete · On File
Patrol Analytics · High-risk · In Review · Pending
Benefits Screener · Eligibility · Complete · On File
HR Screener · Employment · Overdue · Missing
+ 14 more systems
TRAIGA Screening Breakdown
47 SCREENED
TRAIGA Applicable (18) · 38%
Not Applicable (26) · 55%
Under Review (3) · 6%
TXShare Note
Texas agencies may procure AI Compass through AG365's active TXShare cooperative contract — no full RFP required. Procurement depends on eligibility, contract scope, and applicable purchasing authority.
Sample dashboards · Illustrative data only · Actual configuration, data, and reporting capabilities depend on implementation scope and Power BI licensing
Regulatory & Framework Alignment

AI Governance, Privacy & Security Alignment
In One Operational System

AI Compass is designed around major AI governance, risk, privacy, and security frameworks relevant to public-sector organizations. It does not guarantee certification or legal compliance — it provides structured governance workflows, evidence management, and readiness documentation.

Governance Capability · ISO/IEC 42001 · NIST AI RMF · Texas TRAIGA
AI inventory and ownership · AIMS context clauses · Govern / Map · System registry
Risk assessment and treatment · Planning / Operation · Measure / Manage · Risk review workflow
Evidence and control tracking · Support / Evaluation · Profiles / Evidence · Documentation records
Reporting and remediation · Improvement cycle · Manage function · Reporting readiness
ISO/IEC
42001
AI Management System
Supports an ISO/IEC 42001-aligned AI Management System by organizing policies, risks, evidence, controls, management-review inputs, and continual-improvement records within a single structured workspace.
Planning · Operation · Evaluation · Improvement
NIST
AI RMF
Risk Management Framework
Maps governance records and workflows directly to the NIST AI RMF functions: Govern, Map, Measure, and Manage. Risk scoring is configurable to align with agency-specific profile selections.
Govern · Map · Measure · Manage
Texas
TRAIGA
HB 149 Readiness
Supports applicability screening, risk review workflows, oversight documentation, vendor tracking, remediation, and reporting readiness for Texas public-sector AI governance under HB 149 (effective January 1, 2026).
Screening · Oversight · Vendor Tracking · Reporting
EU
AI Act
European AI Regulation
Risk classification, conformity assessment support, and documentation workflows aligned to EU AI Act risk tiers — relevant for agencies with international partnerships or grant-funded programs.
Risk Tiers · Conformity · Documentation
HIPAA
Health Data Privacy
For agencies operating AI in health-adjacent contexts, AI Compass tracks data classification, access controls, vendor agreements, and evidence records relevant to HIPAA compliance-readiness documentation.
Data Class. · Access Control · Vendor Mgmt
FedRAMP
Aligned
Federal Cloud Security
When deployed in eligible Microsoft cloud environments, AI Compass can support documentation and control alignment for agencies with federal funding requirements, state-federal program obligations, or cloud security review needs.
Cloud Security · Controls · ATO Support
Texas TRAIGA · HB 149

Practical AI Governance
for Texas Public Sector

HB 149 (effective January 1, 2026) creates restrictions and compliance considerations for AI systems in Texas. AI Compass helps agencies establish a practical governance workspace for AI inventory, applicability screening, risk review, oversight documentation, vendor tracking, remediation, and reporting readiness.

  • Guided applicability screening workflow for all AI systems and AI-assisted workflows
  • Risk review and assessment records for sensitive or regulated AI use cases
  • Governance reporting templates populated automatically from Dataverse records
  • Human oversight documentation and accountability tracking with full approval history
  • Vendor and AI provider tracking with contract, evidence, and risk linkage
  • Remediation action tracking for identified governance gaps and open findings
  • Texas public-sector reference workflows available upon request, subject to customer authorization
TRAIGA-related workflows should be reviewed against your agency counsel's interpretation of HB 149. AI Compass provides documentation and workflow support — not legal advice.
TXShare Cooperative Contract
Texas agencies may procure AI Compass directly — no full RFP required. AG365 holds an active TXShare cooperative contract award, reducing procurement timelines and administrative burden significantly.
TRAIGA Readiness Support
AI System Inventory · Supported
AI Use Case Intake · Supported
Applicability Screening · Supported
Risk Assessment Workflow · Supported
Risk Mitigation Tracking · Supported
Human Oversight Documentation · Supported
Vendor & AI Provider Tracking · Supported
Governance Reporting Support · Supported
Complaint / Review Workflow · Planned

Procurement availability depends on customer eligibility, contract scope, participating agency rules, and applicable purchasing authority.

Implementation Packages

A Typical AI Compass
Implementation

Every agency starts from a different place. AI Compass is delivered through structured implementation packages — scoped, configured, and handed off with training, documentation, and a governance playbook so your team can operate the core workflows after go-live.

4–6 wks
Foundation
Best for agencies starting AI governance — limited prior infrastructure
  • Working AI inventory and intake workflow
  • Configurable risk scoring model
  • Reviewer approval routing
  • Evidence register setup
  • Basic executive dashboards in Power BI
  • Governance operating playbook
Get Started
12–16 wks
Enterprise
Best for larger agencies or multi-department rollouts with advanced requirements
  • Everything in Operational
  • Advanced security roles and data model
  • Microsoft Purview integration
  • Multi-framework control mapping (ISO 42001, NIST, TRAIGA)
  • Full Power BI reporting suite
  • Governance operating model documentation
  • Staff training and knowledge transfer
  • Optional: Sentinel / Defender integration
Contact Us
Typical Implementation Phases · Operational Package (8–10 weeks)
Discovery & Scope
Kickoff · Stakeholders · Data model
Week 1
AI governance scope definition, stakeholder mapping, Dataverse data model confirmation, licensing validation, and implementation plan. Existing AI inventory reviewed where available.
Core App Build
Dataverse · Power Apps · Security
Weeks 2–3
Dataverse tables configured, model-driven apps built, security roles and access control defined, business process flows created, initial intake and risk forms functional.
Workflow Automation
Power Automate · Approvals · Alerts
Weeks 4–5
Power Automate flows for routing, approval notifications, evidence renewal alerts, reviewer task assignment, and remediation tracking. Conditional logic by risk tier configured.
Dashboards & Evidence
Power BI · Evidence register · Controls
Weeks 6–7
Power BI dashboards built and configured (Executive, Risk, Evidence, TRAIGA views), evidence register structure established, control mapping framework loaded, reporting validated.
Pilot & Validation
User acceptance · Pilot systems
Week 8
Pilot with 3–5 real AI systems: full intake, risk assessment, approval, and evidence collection run end-to-end. Feedback incorporated, UAT sign-off completed.
Handoff & Go-Live
Training · Playbook · Hypercare
Weeks 9–10
Admin training, end-user guidance, governance operating playbook delivered. Go-live with full AI inventory population. 30-day hypercare support can be included based on implementation scope.
Managed Service Tiers

Three Tiers to Match
Your Agency's Readiness

Whether your agency needs a self-directed starting point or a fully managed AI governance operation, AI Compass scales to meet you where you are today — and where you need to be tomorrow.

Entry · Self-Directed
Foundation
  • AI use case registry & setup
  • Risk assessment workflows
  • Basic compliance dashboard
  • Quarterly governance review
  • Standard support SLA
Get Started
Enterprise · Full Service
Command
  • Everything in Operations
  • Microsoft Sentinel & Defender
  • Custom risk frameworks
  • Executive briefing cadence
  • Custom SLA & onboarding
Contact Us
Who It Supports

Cross-Functional
AI Governance

AI governance is not owned by one department. AI Compass gives each stakeholder a defined role in the governance lifecycle — with purpose-built views, workflows, and evidence records tailored to their responsibilities.

CIO / IT
Maintain AI inventory, platform ownership, technical review, and Microsoft architecture alignment. Own the system of record for all AI tools in the environment.
CISO
Connect AI governance with data protection, security monitoring, access control, and incident review. Link Purview, Defender, and Sentinel signals to governance records.
Legal / Compliance
Track obligations, policy decisions, evidence, approvals, and readiness artifacts. Generate structured documentation packages for counsel and regulatory response.
Internal Audit
Review evidence, findings, risk treatment, control status, and management-review records. Access timestamped audit trails without relying on email or spreadsheets.
Procurement
Track vendors, contract clauses, questionnaires, obligations, and provider evidence. Ensure every third-party AI tool is registered, assessed, and periodically reviewed.
Departments
Submit AI use cases through structured intake, document purpose, identify data sensitivity, and complete review tasks without needing technical expertise.
Get In Touch

Request a Demo or
Introductory Governance Assessment

See AI Compass live in a 45-minute session tailored to your agency's AI portfolio and compliance needs. Texas agencies: ask about TXShare cooperative procurement options.

United States
Flower Mound, Texas, USA
Canada
Montréal, Québec, Canada
TXShare Contract
Active cooperative contract — Texas agencies may procure without a full RFP.
Framework Expertise
NIST AI RMF · ISO/IEC 42001 · Texas TRAIGA · EU AI Act · HIPAA · FedRAMP
Send Us a Message
By submitting this form, you agree that Avant-Garde 365 may use your information to respond to your inquiry; it is handled according to Avant-Garde 365's privacy practices.

Prepare Your Agency for
Practical AI Governance

Request a working demonstration of AI Compass and see how Microsoft Power Platform supports AI inventory, risk assessment, policy evidence, oversight, remediation, and TRAIGA readiness in your environment.

Texas agencies: ask about TXShare cooperative procurement — no full RFP required. Procurement availability depends on eligibility, contract scope, and applicable purchasing authority.