Compliance Command Center

NIST AI RMF Governance Dashboard

Real-time trust evidence layer mapping Sentinel Docs controls to Govern, Map, Measure, and Manage requirements.

Compliance Health

No audit evidence yet: run Q&A in Secure RAG Shell to generate baseline scores.

BASELINING

Sample Size

0

Latest Judge Score

--

Average (Last 10)

--
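The three Compliance Health tiles above (sample size, latest Judge score, rolling average over the last 10) can be computed from the session's judge-score history. A minimal sketch, with illustrative names; scores are assumed to be numbers in [0, 1]:

```typescript
// Sketch of the Compliance Health tiles: sample size, latest Judge score,
// and a rolling average over the last 10 judged responses.
interface ComplianceHealth {
  sampleSize: number;
  latestScore: number | null;   // "--" in the UI while null
  averageLast10: number | null;
}

function computeHealth(scores: number[]): ComplianceHealth {
  const window = scores.slice(-10); // only the 10 most recent judged answers
  return {
    sampleSize: scores.length,
    latestScore: scores.length ? scores[scores.length - 1] : null,
    averageLast10: window.length
      ? window.reduce((sum, s) => sum + s, 0) / window.length
      : null,
  };
}
```

Until the first judged Q&A arrives, both score fields stay `null`, which is what the dashboard renders as `--` in its BASELINING state.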

Compliance Matrix

Static NIST AI RMF alignment map for Sentinel controls.

NIST AI RMF compliance mapping of Sentinel controls, evidence, and audit details.

NIST Function | Requirement | Evidence | Audit Detail
GOVERN (GV) | Accountability & Transparency | LangSmith Traces | Immutable audit trail of every interaction.
MAP (MP) | Contextualizing AI Use Cases | Upstash Namespaces | Isolated session-level document context.
MEASURE (MS) | Assessing System Trustworthiness | LLM-as-a-Judge | Real-time faithfulness and grounding scores.
MANAGE (MG) | Prioritizing & Acting on Risks | Upstash Redis + Sentinel Guard DLP | Edge rate limiting and redaction trigger controls.
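Because the matrix is static, it can live as one typed record that both the dashboard and any exported compliance report render from. This shape is illustrative, not the app's actual code; the strings mirror the table:

```typescript
// Single source of truth for the static NIST AI RMF alignment map.
type NistFunction = "GOVERN" | "MAP" | "MEASURE" | "MANAGE";

interface ControlMapping {
  requirement: string;
  evidence: string;
  auditDetail: string;
}

const complianceMatrix: Record<NistFunction, ControlMapping> = {
  GOVERN: {
    requirement: "Accountability & Transparency",
    evidence: "LangSmith Traces",
    auditDetail: "Immutable audit trail of every interaction.",
  },
  MAP: {
    requirement: "Contextualizing AI Use Cases",
    evidence: "Upstash Namespaces",
    auditDetail: "Isolated session-level document context.",
  },
  MEASURE: {
    requirement: "Assessing System Trustworthiness",
    evidence: "LLM-as-a-Judge",
    auditDetail: "Real-time faithfulness and grounding scores.",
  },
  MANAGE: {
    requirement: "Prioritizing & Acting on Risks",
    evidence: "Upstash Redis + Sentinel Guard DLP",
    auditDetail: "Edge rate limiting and redaction trigger controls.",
  },
};
```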

Redaction Counter

Total PII patterns blocked in the active session.

0
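How the counter increments can be sketched as a small DLP pass that replaces matches and tallies them per session. The two regexes below (email, US SSN) are illustrative placeholders, not Sentinel Guard's actual rule set:

```typescript
// Minimal sketch of a DLP redaction pass that counts blocked PII patterns.
// Patterns are illustrative examples only.
const piiPatterns: RegExp[] = [
  /[\w.+-]+@[\w-]+\.[\w.]+/g, // email addresses
  /\b\d{3}-\d{2}-\d{4}\b/g,   // US Social Security numbers
];

function redact(text: string): { clean: string; blocked: number } {
  let blocked = 0;
  let clean = text;
  for (const pattern of piiPatterns) {
    clean = clean.replace(pattern, () => {
      blocked += 1; // increment the session counter once per match
      return "[REDACTED]";
    });
  }
  return { clean, blocked };
}
```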

Audit Actions

Open session trace evidence and export compliance report artifacts.

Session: No active session

Integrity Heatmap

Visual summary of the last 10 Judge scores from this browser session.

No judge history yet. Ask a few questions in the chat to populate this heatmap.
Compliant (>= 0.90) | Review (0.70 - 0.89) | Violation (< 0.70)
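The legend's thresholds map directly to a small classifier that colors each heatmap cell. The cutoffs are taken from the legend; the function name is illustrative:

```typescript
// Map a Judge score in [0, 1] to its heatmap status using the legend's cutoffs.
type IntegrityStatus = "Compliant" | "Review" | "Violation";

function classifyScore(score: number): IntegrityStatus {
  if (score >= 0.9) return "Compliant"; // >= 0.90
  if (score >= 0.7) return "Review";    // 0.70 - 0.89
  return "Violation";                   // < 0.70
}
```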

Economic Shield Metrics

Session token and cost telemetry (UI scaffold; LangSmith wiring follows).

GPT-4o-mini

Total Tokens Consumed

0

In: 0 / Out: 0

Estimated Session Cost

$0.0000

Pricing: $0.15 per million input tokens / $0.60 per million output tokens
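The estimated session cost follows directly from the posted GPT-4o-mini pricing. A minimal sketch of the arithmetic (function and constant names are illustrative):

```typescript
// Session cost from the posted per-million-token prices.
const INPUT_PRICE_PER_M = 0.15;  // USD per 1M input tokens
const OUTPUT_PRICE_PER_M = 0.6;  // USD per 1M output tokens

function estimateCost(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_PRICE_PER_M +
    (outputTokens / 1_000_000) * OUTPUT_PRICE_PER_M
  );
}

// e.g. estimateCost(200_000, 50_000) ≈ $0.06
```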

Cost-to-Intelligence Efficiency

--

Awaiting judged response to compute efficiency
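The dashboard does not yet define the efficiency formula (LangSmith wiring is pending), so the sketch below is a hypothetical placeholder, not the app's actual metric: judge score per dollar of session cost, higher is better, undefined until a judged, costed response exists.

```typescript
// Hypothetical Cost-to-Intelligence Efficiency: judge score per dollar spent.
// Returns null (rendered as "--") until there is a judged, costed response.
function efficiency(judgeScore: number, sessionCostUsd: number): number | null {
  if (sessionCostUsd <= 0) return null;
  return judgeScore / sessionCostUsd;
}
```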

Model Benchmarking

Side-by-side model KPI scaffold (LangSmith eval wiring to follow).

Benchmark comparison of model faithfulness, latency, and cost per response.
Model | Faithfulness | Avg. Latency | Cost / Response | Status
GPT-4o-mini | 0.98 (WINNER) | 0.80s (WINNER) | $0.0020 (WINNER) | active
Llama 3.3 70B | -- | -- | -- | pending
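Once both models report metrics, the WINNER badge per KPI column can be computed mechanically: highest faithfulness wins, lowest latency and cost win, and a pending model suppresses the badge. A sketch with illustrative shapes and names:

```typescript
// Pick the WINNER badge holder for one KPI column, or null while any model is pending.
interface ModelKpis {
  model: string;
  faithfulness: number | null;
  latencySeconds: number | null;
  costPerResponse: number | null;
}

function winner(
  rows: ModelKpis[],
  key: "faithfulness" | "latencySeconds" | "costPerResponse",
  higherIsBetter: boolean
): string | null {
  if (rows.length === 0 || rows.some((r) => r[key] === null)) return null;
  const best = rows.reduce((a, b) => {
    const av = a[key] as number;
    const bv = b[key] as number;
    return (higherIsBetter ? av >= bv : av <= bv) ? a : b;
  });
  return best.model;
}
```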