Explainability

Understanding how Blankstate derives operational insights.

Making Sense of Insights

Explainability in Blankstate refers to users' ability to understand *why* a given operational insight, Metric value, or OEI score was generated. It is about providing transparency into the analytical process, even though the underlying AI models can be complex.

Blankstate achieves explainability by grounding its analysis in user-defined frameworks and providing mechanisms to trace insights back to their source.

Pillars of Explainability

  • The Blueprint: The Blueprint itself is the primary source of explainability. It's a human-readable, user-defined framework that explicitly states *what* aspects of operations are being measured and *how* (via Protocols and Metrics); the first sketch after this list illustrates this structure.
  • Protocols and Nuances: Protocols define the specific criteria and nuances (labels with associated scores) that the AI models use for analysis. By reviewing a Protocol's definition, users can understand the specific conditions or patterns the system is looking for.
  • Markers and Rationale (Replay): In Replay, the "State of Entities" visualization highlights "Markers" – segments in documents that triggered a Protocol score. Optional AI-generated "Rationale" can further explain *why* a particular Marker received a certain score based on the Protocol definition. This provides a direct link between the insight and the source data; the second sketch after this list shows how such a link might be represented.
  • Protocol Quality Indicators: The automated Quality Indicators (QIs) calculated for Protocols (like Specificity and Completeness) provide insight into how well-defined and unambiguous a Protocol is, contributing to transparency about its expected analytical behavior.
  • Metric Definitions: Metric definitions clearly state *how* Protocol scores are aggregated into quantitative measurements, providing transparency into how the OEI and other performance indicators are derived; the final sketch after this list illustrates one possible aggregation.
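
To make these relationships concrete, here is a minimal Python sketch of a Blueprint containing a Protocol with scored nuances. All class and field names (Blueprint, Protocol, Nuance, and so on) are illustrative assumptions rather than the actual Blankstate data model; the point is that the framework is readable enough to explain, on its own, what will be measured and how.

```python
from dataclasses import dataclass, field

# NOTE: every class and field name below is an illustrative assumption;
# none of it reflects the actual Blankstate data model or API.

@dataclass
class Nuance:
    """A labelled condition a Protocol looks for, with an associated score."""
    label: str
    score: float

@dataclass
class Protocol:
    """User-defined criteria that the AI models apply during analysis."""
    name: str
    description: str
    nuances: list[Nuance] = field(default_factory=list)

@dataclass
class Blueprint:
    """The human-readable framework stating what is measured and how."""
    name: str
    protocols: list[Protocol] = field(default_factory=list)

# A reader can tell from the Protocol definition alone which patterns
# the system scores and how heavily each one counts.
blueprint = Blueprint(
    name="Incident Operations",
    protocols=[
        Protocol(
            name="Incident Response Quality",
            description="How well incident reports document cause and remediation.",
            nuances=[
                Nuance(label="root cause identified", score=1.0),
                Nuance(label="remediation steps listed", score=0.8),
                Nuance(label="no follow-up actions", score=-0.5),
            ],
        )
    ],
)
```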
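
Likewise, a Marker can be pictured as a small record tying a Protocol score to the exact source segment that triggered it, optionally carrying an AI-generated rationale. This is a hypothetical representation; the field names here are assumptions, not the real Replay schema.

```python
from __future__ import annotations
from dataclasses import dataclass

# Illustrative only: these field names are assumptions, not the
# actual Replay schema.

@dataclass
class Marker:
    """Ties a Protocol score back to the exact segment that triggered it."""
    document_id: str
    span: tuple[int, int]         # character offsets of the triggering segment
    protocol_name: str
    matched_nuance: str           # which nuance label fired
    score: float
    rationale: str | None = None  # optional AI-generated explanation

marker = Marker(
    document_id="incident-2024-017",
    span=(120, 214),
    protocol_name="Incident Response Quality",
    matched_nuance="root cause identified",
    score=1.0,
    rationale="The segment names the failing component and its failure mode.",
)
```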
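
Finally, continuing the Marker sketch above, a Metric can be pictured as a transparent aggregation over Marker scores. The simple average below is only one possible rule; actual Blankstate Metric definitions may aggregate differently (weighted sums, thresholds, time windows, and so on).

```python
# A deliberately simple Metric: average all Marker scores produced by
# one Protocol. The aggregation rule is an assumption for illustration.

def mean_protocol_score(markers: list[Marker], protocol_name: str) -> float:
    """Aggregate all Marker scores for one Protocol into a single value."""
    scores = [m.score for m in markers if m.protocol_name == protocol_name]
    return sum(scores) / len(scores) if scores else 0.0

print(mean_protocol_score([marker], "Incident Response Quality"))  # 1.0
```

Because the aggregation rule is stated explicitly in the Metric definition, a user can trace any Metric value, and by extension the OEI, back through the Markers and Protocols that produced it.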
