
Event Architecture

This section explains the event system in simple language.


What Are Events?

Events are “things that happened.”
They attach semantic meaning to moments and spans of time in your dataset.

Examples:

  • “The user pressed button A”
  • “A grasp started”
  • “A robot arm reached waypoint 3”
  • “EEG channel spiked”
  • “The agent completed a task phase”

Events turn raw pose, biosignal, and computer-vision data into semantic structure.


Event Hierarchy Explained

Events come in three main forms:

EventPoint    → a moment in time
IntervalEvent → a span of time
TypedEvent    → a domain-specific event

Think of it like inheritance in object-oriented programming.

As an ASCII tree:

Event
├── EventPoint
├── IntervalEvent
└── TypedEvent
    ├── GraspEvent
    ├── AttentionShiftEvent
    ├── TrackingLostEvent
    └── ObjectPlacedEvent
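
A minimal sketch of this hierarchy in Python dataclasses. The class and field names here are illustrative assumptions, not a real API; they only mirror the tree above:

from dataclasses import dataclass

@dataclass
class Event:
    """Base: every event carries a label and a primary timestamp (seconds)."""
    label: str
    t: float

@dataclass
class EventPoint(Event):
    """A single moment in time; nothing beyond the base fields."""
    pass

@dataclass
class IntervalEvent(Event):
    """A span of time: `t` marks the start, `t_end` the end."""
    t_end: float

@dataclass
class TypedEvent(Event):
    """Base for domain-specific events that add structured payloads."""
    pass

@dataclass
class GraspEvent(TypedEvent):
    """Example subtype: who grasped what (see the relationships section below)."""
    actor: str
    target: str

grasp = GraspEvent(label="grasp", t=12.4, actor="left_hand", target="object_42")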

Why Multiple Timestamps?

Different sensors use different timebases.

Examples:

  • EEG uses sample indices
  • Video uses frame numbers
  • XR uses monotonic clocks
  • Mocap uses system clocks

An event may carry any combination of these timestamps.

This is what allows cross-device alignment.
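
One way to represent this, as a hedged sketch: keep a dictionary of timestamps keyed by timebase, so the same event can be located in each device's clock. The key names below are illustrative assumptions:

from dataclasses import dataclass, field
from typing import Dict, Union

Timestamp = Union[int, float]  # sample index, frame number, or clock time

@dataclass
class MultiClockEvent:
    label: str
    # One entry per timebase the event was observed in.
    timestamps: Dict[str, Timestamp] = field(default_factory=dict)

ev = MultiClockEvent(
    label="button_a_pressed",
    timestamps={
        "eeg_sample": 51200,       # index into the EEG stream
        "video_frame": 384,        # frame number in the recording
        "xr_monotonic_s": 12.801,  # XR runtime monotonic clock (seconds)
    },
)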


Event Relationships (actor, target)

Typed events can describe who did what to what.

Example:
A GraspEvent:

  • actor = left hand
  • target = object_42

A CognitiveEvent:

  • actor = “EEG montage”
  • context = “working memory trial 7”

An AgentEvent:

  • actor = robot arm
  • target = null
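
All three examples fit one generic shape. A minimal sketch, assuming hypothetical field names (actor, target, context are taken from the examples above; everything else is illustrative):

from dataclasses import dataclass
from typing import Optional

@dataclass
class RelationalEvent:
    """Generic 'who did what to what' event."""
    label: str
    t: float
    actor: Optional[str] = None    # who or what produced the event
    target: Optional[str] = None   # who or what it acted on, if anything
    context: Optional[str] = None  # free-form experimental context

grasp = RelationalEvent("grasp", 12.4, actor="left_hand", target="object_42")
cognitive = RelationalEvent("wm_load", 30.0, actor="EEG montage",
                            context="working memory trial 7")
agent = RelationalEvent("waypoint_reached", 45.2, actor="robot_arm")  # target stays None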

Compound Events

Complex tasks break down into smaller events.

Example:

PickAndPlace
├── Reach
├── Grasp
├── Transport
└── Place
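
A compound event can be modeled as an event that holds child events. This is a minimal sketch under that assumption; the names are illustrative:

from dataclasses import dataclass, field
from typing import List

@dataclass
class CompoundEvent:
    label: str
    children: List["CompoundEvent"] = field(default_factory=list)

pick_and_place = CompoundEvent("PickAndPlace", children=[
    CompoundEvent("Reach"),
    CompoundEvent("Grasp"),
    CompoundEvent("Transport"),
    CompoundEvent("Place"),
])

def leaves(ev: CompoundEvent) -> List[str]:
    """Collect leaf labels, e.g. to emit sub-goals for hierarchical RL."""
    if not ev.children:
        return [ev.label]
    return [name for child in ev.children for name in leaves(child)]

print(leaves(pick_and_place))  # ['Reach', 'Grasp', 'Transport', 'Place']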

This structure is essential for:

  • imitation learning
  • hierarchical RL
  • multimodal analysis
  • robotics task decomposition

Referencing Streams and Samples

Events may link to:

  • specific pose samples,
  • specific EEG windows,
  • specific frames,
  • specific objects or metadata elements.

This allows learning systems to reconstruct the exact context of an event.
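
As a sketch of how such links might look (stream IDs and fields are hypothetical, chosen only to illustrate the idea of index-range references):

from dataclasses import dataclass
from typing import List

@dataclass
class StreamRef:
    """Points at a slice of one stream."""
    stream_id: str  # e.g. "pose_left_hand", "eeg_0", "rgb_cam_front"
    start: int      # first sample/frame index covered by the event
    end: int        # last sample/frame index (inclusive)

@dataclass
class AnchoredEvent:
    label: str
    refs: List[StreamRef]

ev = AnchoredEvent(
    label="grasp",
    refs=[
        StreamRef("pose_left_hand", start=1200, end=1320),
        StreamRef("eeg_0", start=48000, end=52800),
        StreamRef("rgb_cam_front", start=300, end=330),
    ],
)

# A learning pipeline can slice each stream by these indices to
# rebuild the full multimodal context around the event.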


Summary

Events are:

  • semantic,
  • hierarchical,
  • multimodal,
  • timestamp-aware,
  • cross-device friendly,
  • essential for training embodied AI.