Unified Interaction Representation
Protocols are modality-agnostic. All input modalities convert to a Unified Interaction Representation (UIR) before protocol analysis. One protocol. Any modality. Same measurement.
Regardless of source modality, the protocol engine receives a normalized representation containing the following components, sketched in the example below the list:
Semantic Content: what was communicated
Temporal Structure: when it happened
Actant Involvement: who was involved
Reconstruction Chain: glass-box provenance
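A minimal sketch of what a normalized UIR record might look like, written here in TypeScript for concreteness. All field and type names below are illustrative assumptions, not the engine's actual schema.

```typescript
// Illustrative sketch of a Unified Interaction Representation (UIR) record.
// Field names and shapes are assumptions for clarity, not the actual schema.

interface Actant {
  id: string;                     // stable identifier for a participant (person, bot, service)
  role: string;                   // e.g. "speaker", "agent", "system"
}

interface InteractionUnit {
  actantId: string;               // who produced this unit (Actant Involvement)
  content: string;                // what was communicated, normalized to text (Semantic Content)
  startTime: string;              // ISO 8601 timestamp (Temporal Structure)
  endTime?: string;               // optional end of the span (audio, video, meetings)
}

interface ReconstructionStep {
  stage: string;                  // e.g. "ocr", "transcription", "diarization"
  source: string;                 // reference to the raw artifact or the prior step
  confidence?: number;            // optional confidence attached to this step
}

interface UIR {
  sourceModality: string;         // e.g. "audio", "text_document", "event_stream"
  actants: Actant[];              // Actant Involvement: who was involved
  units: InteractionUnit[];       // Semantic Content + Temporal Structure
  reconstructionChain: ReconstructionStep[];  // Reconstruction Chain: glass-box provenance
}
```

Because every modality reduces to this one shape, the same protocol analysis can run unchanged over a meeting transcript, a chat log, or an event stream.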
Interaction Types
Conversations, meetings, negotiations, interviews, written exchanges
Voice assistants, chatbots, UI interactions, forms, navigation
API calls, event streams, log sequences, data pipelines
Sensor data, biometrics, IoT interactions, physical-digital bridging
Protocol Modalities
Modality              Status        Examples
Text Documents        ● Available   .pdf, .docx, .pptx, .rtf, .txt
Structured Data       ● Available   .xlsx, .csv, JSON, XML
Scanned Documents     ● Available   Image-based PDFs, photos
Audio                 ● Available   Calls, meetings, voice notes
Images                ● Available   Screenshots, diagrams, embedded images
Video                 ◐ Partial     Screen recordings, video meetings
System Logs           ◐ Partial     Application logs, error traces
Event Streams         ○ Roadmap     API request/response, webhooks
Behavioral Signals    ○ Roadmap     Click streams, navigation paths
Sensor Data           ○ Roadmap     IoT telemetry, biometrics
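The availability tiers above imply a per-modality conversion layer sitting in front of a single engine. The sketch below shows one way such a registry could be wired, reusing the UIR interface from the earlier sketch; the adapter names, the stub converters, and the normalize signature are assumptions, not the product's actual API.

```typescript
// Hypothetical adapter registry (illustrative names): each supported modality
// maps to a converter that emits the same UIR shape sketched earlier, so the
// protocol engine never has to know where the input came from.

type ModalityStatus = "available" | "partial" | "roadmap";

interface ModalityAdapter {
  status: ModalityStatus;
  normalize: (raw: unknown) => UIR;          // UIR interface as sketched above
}

// Placeholder converters; real ones would run OCR, transcription, parsing, etc.
const stub = (modality: string) => (raw: unknown): UIR => ({
  sourceModality: modality,
  actants: [],
  units: [],
  reconstructionChain: [{ stage: "stub", source: String(raw) }],
});

const adapters: Record<string, ModalityAdapter> = {
  text_document: { status: "available", normalize: stub("text_document") },
  audio:         { status: "available", normalize: stub("audio") },
  system_logs:   { status: "partial",   normalize: stub("system_logs") },
  // Roadmap modalities (event streams, behavioral signals, sensor data)
  // would register here once their converters exist.
};

function toUIR(modality: string, raw: unknown): UIR {
  const adapter = adapters[modality];
  if (!adapter) throw new Error(`No converter registered for modality: ${modality}`);
  return adapter.normalize(raw);             // one protocol, any modality, same measurement
}
```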