Unified Interaction Representation

Protocols are modality-agnostic: every input modality is converted to a Unified Interaction Representation (UIR) before protocol analysis. One protocol. Any modality. Same measurement.

Abstraction Principle

Regardless of source modality, the protocol engine receives a normalized representation containing:

Semantic Content: what was communicated
Temporal Structure: when it happened
Actant Involvement: who was involved
Reconstruction Chain: glass-box provenance
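The four fields above can be sketched as a single record type. This is a minimal illustration, not the actual UIR schema; the class and field names are assumptions chosen to mirror the list above.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class UIREvent:
    """One normalized interaction event (illustrative field names)."""
    content: str            # semantic content: what was communicated
    timestamp: datetime     # temporal structure: when it happened
    actants: list[str]      # actant involvement: who was involved
    provenance: list[str]   # reconstruction chain: glass-box trace from
                            # the raw source artifact to this event

event = UIREvent(
    content="hello",
    timestamp=datetime(2024, 1, 1, tzinfo=timezone.utc),
    actants=["alice", "assistant"],
    provenance=["raw.wav", "transcript#1"],
)
```

The point of the shape is that everything downstream of normalization operates on these four fields only, never on the source modality.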

Interaction Types

Human ↔ Human: conversations, meetings, negotiations, interviews, written exchanges
Human ↔ System: voice assistants, chatbots, UI interactions, forms, navigation
System ↔ System: API calls, event streams, log sequences, data pipelines
Human ↔ Environment: sensor data, biometrics, IoT interactions, physical-digital bridging
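To make the claim concrete: two events from very different interaction types normalize into the same shape. The field names and values here are illustrative assumptions, not the actual UIR schema.

```python
from datetime import datetime, timezone

# A human ↔ human meeting turn, normalized (illustrative fields):
meeting_turn = {
    "content": "Let's move the launch to Q3.",
    "timestamp": datetime(2024, 5, 2, 14, 3, tzinfo=timezone.utc),
    "actants": ["alice", "bob"],
    "provenance": ["meeting.mp3", "transcript#42"],
}

# A system ↔ system API call, normalized into the same shape:
api_call = {
    "content": "POST /orders -> 201 Created",
    "timestamp": datetime(2024, 5, 2, 14, 4, tzinfo=timezone.utc),
    "actants": ["checkout-service", "orders-api"],
    "provenance": ["gateway.log"],
}

# Same keys, so protocol analysis never branches on the source modality:
assert meeting_turn.keys() == api_call.keys()
```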

Protocol Modalities

Modality            Status       Examples
Text Documents      ● Available  .pdf, .docx, .pptx, .rtf, .txt
Structured Data     ● Available  .xlsx, .csv, JSON, XML
Scanned Documents   ● Available  Image-based PDFs, photos
Audio               ● Available  Calls, meetings, voice notes
Images              ● Available  Screenshots, diagrams, embedded images
Video               ◐ Partial    Screen recordings, video meetings
System Logs         ◐ Partial    Application logs, error traces
Event Streams       ○ Roadmap    API request/response, webhooks
Behavioral Signals  ○ Roadmap    Click streams, navigation paths
Sensor Data         ○ Roadmap    IoT telemetry, biometrics
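In practice, a modality table like this typically backs a dispatch step that routes each input to a converter before normalization. A minimal sketch; the registry contents and converter names are placeholders, not a real API.

```python
from pathlib import Path

# Hypothetical extension -> converter registry mirroring part of the
# modality table (names are placeholders, not product identifiers):
CONVERTERS = {
    ".pdf": "text_document", ".docx": "text_document", ".txt": "text_document",
    ".xlsx": "structured_data", ".csv": "structured_data", ".json": "structured_data",
    ".mp3": "audio", ".wav": "audio",
    ".png": "image", ".jpg": "image",
}

def converter_for(path: str) -> str:
    """Pick the converter for a file, or fail loudly for unsupported modalities."""
    ext = Path(path).suffix.lower()
    try:
        return CONVERTERS[ext]
    except KeyError:
        raise ValueError(f"unsupported modality: {ext!r}")
```

Routing on the input artifact, rather than on the analysis logic, is what keeps the protocol engine itself modality-agnostic.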
