Fidelity Data Factory – Egocentric State–Action Transitions (v0)
This repository contains an initial release of structured state–action–state′ transitions extracted from real-world egocentric video.
The goal of this dataset is to provide early infrastructure for learning dynamics and representations from large-scale human activity data.
Overview
Each data point is a short temporal transition of the form:
(s_t, a_t, s_{t+1})
Transitions are derived from monocular egocentric footage recorded in real factory environments.
This release does not include robot-specific signals such as torques or joint states, and is intended for research and exploration rather than deployment.
Data Contents
- ~200k transitions
- Egocentric (head / chest-mounted) viewpoint
- Real industrial environments
Transitions are stored in JSONL format.
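Because the data is JSONL, each line is an independent JSON record and can be streamed without loading the whole file. A minimal loading sketch (the file path and top-level keys s / a / s_prime follow the schema described below; the helper name is ours):

```python
import json

def load_transitions(path):
    """Yield (s, a, s_prime) tuples from a JSONL file, one record per line."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:  # tolerate trailing blank lines
                continue
            record = json.loads(line)
            yield record["s"], record["a"], record["s_prime"]
```

Streaming this way keeps memory flat even at ~200k records.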
Schema (Simplified)
Each record contains:
- s: ego_pose, ego_velocity, hand_state, entities (objects with image-space location), meta (video id, timestamp)
- a: ego_delta, hand_delta, interaction_delta
- s_prime: same structure as s, representing the next timestep
See schema.json for full details.
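As a concrete illustration, a record might look like the sketch below. All field values here are invented for illustration only; the authoritative field shapes and types are defined in schema.json:

```python
# Hypothetical record matching the simplified schema above.
# Values are illustrative, not taken from the dataset.
example_record = {
    "s": {
        "ego_pose": [0.0, 0.0, 1.6, 0.0, 0.0, 0.0],      # hypothetical 6-DoF pose
        "ego_velocity": [0.1, 0.0, 0.0],
        "hand_state": {"left": "free", "right": "grasping"},
        "entities": [{"label": "box", "bbox": [120, 80, 200, 160]}],
        "meta": {"video_id": "vid_00042", "timestamp": 12.4},
    },
    "a": {
        "ego_delta": [0.05, 0.0, 0.0],
        "hand_delta": {"right": [0.01, -0.02, 0.0]},
        "interaction_delta": {"right": "grasp"},
    },
    "s_prime": {
        # Same structure as "s", one timestep later.
        "ego_pose": [0.05, 0.0, 1.6, 0.0, 0.0, 0.0],
        "ego_velocity": [0.1, 0.0, 0.0],
        "hand_state": {"left": "free", "right": "grasping"},
        "entities": [{"label": "box", "bbox": [118, 79, 198, 159]}],
        "meta": {"video_id": "vid_00042", "timestamp": 12.5},
    },
}
```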
Intended Use
This dataset may be useful for:
- World model research
- Offline RL
- Vision–language–action pretraining
- Learning dynamics from human activity
- Representation learning from egocentric video
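For dynamics or world-model research, a common first step is to flatten each (s, a, s') into input/target vectors for one-step prediction. A minimal sketch, assuming a featurization over ego_pose and ego_velocity only (the helper names and feature choice are ours, not part of the dataset):

```python
def featurize(s):
    """Flatten a state dict into a feature vector.
    Hypothetical choice: ego pose + ego velocity; real models would
    also encode hand_state and entities."""
    return list(s["ego_pose"]) + list(s["ego_velocity"])

def make_dynamics_batch(transitions):
    """Turn (s, a, s_prime) transitions into (input, target) pairs
    for supervised one-step dynamics learning."""
    inputs, targets = [], []
    for s, a, s_prime in transitions:
        inputs.append(featurize(s) + list(a["ego_delta"]))
        targets.append(featurize(s_prime))
    return inputs, targets
```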
Limitations
- Monocular video only
- No force / torque signals
- No task labels
- Contains estimation noise
Credits
Original video data provided by BuildAI.
Enrichment and processing by Fidelity Dynamics.