6 Event Architecture
This section is normative.
The MIND Event Architecture defines how semantic and low-level events are represented, timestamped, typed, related, and linked to multimodal sensor data. It is designed for high‑precision human data capture, XR systems, biosensing, CV pipelines, robotics control, and embodied AI learning.
6.1 Event Purpose
Events represent temporal occurrences of any kind, including:
- Low-level markers (e.g., “frame dropped”, “tracking lost”)
- High-level interaction semantics (e.g., “grasp started”)
- Task and behavior semantics (e.g., “manipulation completed”)
- System or device state changes (e.g., “EEG artifact detected”)
Events MAY encode:
- cognitive or physiological interpretations,
- task or episode structure,
- agent output actions,
- user actions inferred from multimodal data.
All event types MUST inherit from one of the base Event structures defined in this section.
6.2 Event Type Hierarchy
Events MUST follow a hierarchical type system:
```
Event
├── EventPoint
├── IntervalEvent
└── TypedEvent
    ├── InteractionEvent
    ├── ManipulationEvent
    ├── CognitiveEvent
    ├── DeviceEvent
    └── AnnotationEvent
```
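The hierarchy above can be sketched as a minimal set of Python classes. This is an illustrative rendering only, not a binding: the spec defines structures, not an object model, and the field names here (`event_id`, `t_monotonic`, `event_type`) are taken from the base-structure requirements later in this section.

```python
from dataclasses import dataclass


@dataclass
class Event:
    """Base event: a unique ID plus at least one canonical timestamp."""
    event_id: str
    t_monotonic: float  # seconds on the monotonic clock


@dataclass
class EventPoint(Event):
    """An instantaneous occurrence."""


@dataclass
class TypedEvent(Event):
    """Carries a versioned event_type identifier."""
    event_type: str = ""


@dataclass
class IntervalEvent(Event):
    """An occurrence with duration, bounded by two EventPoints."""
    start: EventPoint = None
    end: EventPoint = None


# Typed-event families, mirroring the tree above.
class InteractionEvent(TypedEvent): pass
class ManipulationEvent(TypedEvent): pass
class CognitiveEvent(TypedEvent): pass
class DeviceEvent(TypedEvent): pass
class AnnotationEvent(TypedEvent): pass
```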
Each typed event MUST declare:
- namespace,
- family,
- name,
- version.
Example:
MIND.interaction/GraspEvent@1.0.0
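The four declared parts map directly onto the identifier. As a sketch, the pattern inferred from this one example is `namespace.family/Name@version`; the normative grammar for the Modality-ID pattern is defined elsewhere in the spec, so the regex below is an assumption:

```python
import re

# namespace "." family "/" name "@" semver -- inferred from the example
# identifier; consult the normative Modality-ID grammar for edge cases.
EVENT_TYPE_RE = re.compile(
    r"^(?P<namespace>[A-Za-z][\w-]*)\.(?P<family>[a-z][\w-]*)"
    r"/(?P<name>[A-Za-z][\w-]*)@(?P<version>\d+\.\d+\.\d+)$"
)


def parse_event_type(identifier: str) -> dict:
    """Split a typed-event identifier into namespace, family, name, version."""
    m = EVENT_TYPE_RE.match(identifier)
    if m is None:
        raise ValueError(f"malformed event_type: {identifier!r}")
    return m.groupdict()
```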
Typed events MAY define:
- actors,
- targets,
- tools,
- parameters,
- metadata references.
6.3 Base Event Structure
All events MUST include:
- A unique event_id
- At least one timestamp (Section 4)
- An event_type identifier using the Modality-ID pattern
- Optional labels or tags
- Optional structured metadata
Unknown fields MUST be ignored unless marked required.
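A validator for these MUST-level rules might look like the sketch below. The required-field names follow the list above; note that unknown fields are passed through untouched rather than rejected, per the ignore rule:

```python
REQUIRED = ("event_id", "event_type")
CANONICAL_TIMESTAMPS = ("t_monotonic", "t_system")  # see Section 4


def validate_event(record: dict) -> dict:
    """Check the MUST-level base fields of an event record.

    Returns the record unchanged so unknown fields survive: the spec says
    they MUST be ignored, not stripped or rejected.
    """
    missing = [k for k in REQUIRED if k not in record]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    if not any(k in record for k in CANONICAL_TIMESTAMPS):
        raise ValueError("event needs at least one canonical timestamp")
    return record
```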
6.4 Event Timestamps
Events MAY carry multiple timestamp forms, including:
- t_monotonic
- t_system
- frame
- sample_index
- vendor-defined timestamps
Events MUST include at least one canonical timestamp (t_monotonic or t_system).
Events MAY include domain-specific timestamps for:
- EEG windows,
- video frame alignment,
- CV pipeline stages,
- robotic command execution cycles.
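A consumer that needs a single reference time can select one canonical timestamp per event. The monotonic-first precedence below is an assumption of this sketch; the spec only requires that at least one of the two be present:

```python
def canonical_time(event: dict) -> float:
    """Return one canonical timestamp for an event.

    Prefers the monotonic clock over system time (an assumption of this
    sketch, not a normative rule); domain-specific timestamps such as
    frame or sample_index are ignored here.
    """
    for key in ("t_monotonic", "t_system"):
        if key in event:
            return event[key]
    raise ValueError("no canonical timestamp on event")
```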
6.5 EventPoint
An EventPoint MUST include:
- one timestamp (minimum)
- an event type
- optional references (streams, samples, metadata)
It represents an instantaneous event.
ASCII:
```
------•-------------------------
      EventPoint(t=302.1 ms)
```
6.6 IntervalEvent
An IntervalEvent MUST include:
- start (EventPoint)
- end (EventPoint)
The end timestamp MUST be greater than or equal to the start timestamp.
Example:
GraspEvent(start=210.0 ms, end=388.2 ms)
IntervalEvents represent:
- reach phases
- sustained grasps
- ongoing tasks
- EEG artifacts with duration
- CV tracking stable windows
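A constructor for interval events can enforce the end >= start rule at build time. This is a sketch; the flat dictionary shape and the derived `duration` field are illustrative conveniences, not spec fields:

```python
def make_interval(start_t: float, end_t: float, event_type: str) -> dict:
    """Build an interval event, enforcing end >= start.

    Timestamps are in seconds; duration is derived for convenience and
    is not a field the spec requires.
    """
    if end_t < start_t:
        raise ValueError(f"interval ends before it starts: {end_t} < {start_t}")
    return {
        "event_type": event_type,
        "start": {"t_monotonic": start_t},
        "end": {"t_monotonic": end_t},
        "duration": end_t - start_t,
    }
```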
6.7 Event Referencing Rules
Events MAY reference:
- Streams (by ID)
- Samples (by index or timestamp)
- Metadata objects (e.g., SkeletonProfile, CameraProfile)
All references MUST resolve strictly.
Examples:
- A GraspEvent referencing hand pose streams
- A CognitiveEvent referencing EEG windows
- A ManipulationEvent referencing object metadata
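Strict resolution means a dangling reference is an error, never silently skipped. A resolver might look like the sketch below; the reference field names (`stream_refs`, `metadata_refs`) are assumptions for illustration, not names the spec defines:

```python
def resolve_refs(event: dict, streams: dict, metadata: dict) -> None:
    """Fail loudly on any dangling reference, per the strict-resolution rule.

    `streams` and `metadata` are registries keyed by ID; the reference
    field names on the event are hypothetical.
    """
    for sid in event.get("stream_refs", []):
        if sid not in streams:
            raise KeyError(f"unresolved stream reference: {sid!r}")
    for mid in event.get("metadata_refs", []):
        if mid not in metadata:
            raise KeyError(f"unresolved metadata reference: {mid!r}")
```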
6.8 Relationships in Typed Events
The base Event MUST NOT define specific relationships.
Typed events MAY define:
- actor (human joint, robot effector, AI agent)
- target (object, body part, device, location)
- tool (held object, controller)
- context (task information)
Example:
```json
{
  "event_type": "MIND.interaction/GraspEvent@1.0.0",
  "actor": "left_hand",
  "target": "object_42"
}
```
6.9 Compound Event Structure (Hierarchy)
Events MAY reference parent events.
Example structure:
```
TaskEvent("PickAndPlace")
├── ReachEvent
├── GraspEvent
└── PlaceEvent
```
Parent-child relationships MUST form a Directed Acyclic Graph (DAG).
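The DAG requirement can be checked with a standard depth-first cycle search over the parent references. This is a generic sketch (a textbook white/gray/black DFS), not a spec-mandated algorithm, and the `parents` mapping shape is an assumption:

```python
def assert_acyclic(parents: dict) -> None:
    """Verify that parent references form a DAG.

    `parents` maps event_id -> list of parent event_ids. Revisiting a
    node while it is still on the current DFS path means a cycle, which
    the spec forbids.
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def visit(node):
        state = color.get(node, WHITE)
        if state == GRAY:
            raise ValueError(f"cycle through event {node!r}")
        if state == BLACK:
            return
        color[node] = GRAY
        for parent in parents.get(node, []):
            visit(parent)
        color[node] = BLACK

    for node in parents:
        visit(node)
```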
6.10 Event Families
Events MUST belong to a family:
- interaction/
- manipulation/
- cognitive/
- device/
- annotation/
- task/
- agent/
Families MAY define required metadata (e.g., EEG montage for cognitive events).
6.11 Summary
- Events represent semantic and low-level temporal occurrences.
- Event hierarchy enables expressive and extensible semantics.
- EventPoint and IntervalEvent form the temporal backbone.
- Events may carry multiple timestamps.
- Typed events define actors, targets, etc.
- Compound events create task-level semantics.
- All references MUST resolve strictly.