OpenGitHub
What is it?
This dataset contains every public event on GitHub: every push, pull request, issue, star, fork, code review, release, and discussion across all public repositories. GitHub is the world's largest software development platform, home to over 200 million repositories and the daily work of tens of millions of developers, from individual open-source contributors to the engineering teams behind the most widely used software on earth.
The archive currently spans from 2011-02-12 to 2015-01-03 (1,026 days with data), totaling 170,256,903 events across 16 fully structured Parquet tables. New events are fetched directly from the GitHub Events API every few seconds and committed as 5-minute Parquet blocks through an automated live pipeline, so the dataset stays current with GitHub itself.
We believe this is the most complete and regularly updated structured mirror of public GitHub activity available on Hugging Face. The original 49.0 GB of raw GH Archive NDJSON has been parsed, flattened, and compressed into 15.9 GB of Zstd-compressed Parquet. Every nested JSON field is expanded into typed columns — no JSON parsing needed downstream. The data is partitioned as data/TABLE/YYYY/MM/DD.parquet, making it straightforward to query with DuckDB, load with the datasets library, or process with any tool that reads Parquet.
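Because the partition layout is deterministic, file lists for any date range can be built with a few lines of stdlib Python. A minimal sketch, assuming the data/TABLE/YYYY/MM/DD.parquet layout described above (the helper name is ours):

```python
from datetime import date, timedelta

def partition_paths(table: str, start: date, end: date) -> list[str]:
    """Build repo-relative Parquet paths for one table over an inclusive date range."""
    paths = []
    d = start
    while d <= end:
        # Zero-padded components match the data/TABLE/YYYY/MM/DD.parquet layout
        paths.append(f"data/{table}/{d.year:04d}/{d.month:02d}/{d.day:02d}.parquet")
        d += timedelta(days=1)
    return paths

paths = partition_paths("stars", date(2013, 12, 30), date(2014, 1, 2))
```

The resulting paths can be passed to `read_parquet` in DuckDB or to `data_files` in the datasets library.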
The underlying data comes from GH Archive, created by Ilya Grigorik, which has been recording every public GitHub event via the Events API since 2011. The dataset is released under the Open Data Commons Attribution License (ODC-By) v1.0.
Live data (today)
Events from today are captured in near-real-time from the GitHub Events API and stored as 5-minute blocks in today/raw/YYYY/MM/DD/HHMM.parquet. Each block contains a generic event record with the full JSON payload preserved for later processing. Live blocks are committed to this dataset within minutes of the events occurring.
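The 5-minute bucketing maps each event's timestamp to a block file. A sketch of that mapping, assuming blocks are floored to the nearest 5-minute boundary as described (the function name is ours):

```python
from datetime import datetime, timezone

def block_path(ts: datetime) -> str:
    """Map an event timestamp to its 5-minute live block file."""
    minute = ts.minute - ts.minute % 5  # floor to the 5-minute boundary
    return f"today/raw/{ts:%Y/%m/%d}/{ts.hour:02d}{minute:02d}.parquet"

p = block_path(datetime(2026, 3, 28, 9, 47, 12, tzinfo=timezone.utc))
```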
2026-03-28 — 2,096,569 events in 3820 blocks
00:00 █████████████████████████░░░░░ 235.4K
01:00 █████████████████████████████░ 272.8K
02:00 █████████████████████████████░ 271.7K
03:00 █████████████████████████████░ 273.7K
04:00 ██████████████████████████████ 279.7K
05:00 ██████████████████░░░░░░░░░░░░ 169.7K
06:00 ██░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 21.1K
07:00 █████████████████░░░░░░░░░░░░░ 165.7K
08:00 ████████████████████████████░░ 262.4K
09:00 ███████████████░░░░░░░░░░░░░░░ 144.4K
10:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0
11:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0
12:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0
13:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0
14:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0
15:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0
16:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0
17:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0
18:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0
19:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0
20:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0
21:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0
22:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0
23:00 ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 0
Live event schema
| Column | Type | Description |
|---|---|---|
| event_id | string | Unique GitHub event ID |
| event_type | string | Event type (PushEvent, IssuesEvent, etc.) |
| created_at | timestamp | When the event occurred |
| actor_id | int64 | User ID |
| actor_login | string | Username |
| repo_id | int64 | Repository ID |
| repo_name | string | Full repository name (owner/repo) |
| org_id | int64 | Organization ID (0 if personal) |
| org_login | string | Organization login |
| action | string | Event action (opened, closed, started, etc.) |
| number | int32 | Issue/PR number |
| payload_json | string | Full event payload as JSON |
```python
# Query today's live events with DuckDB
import duckdb

duckdb.sql("""
    SELECT event_type, COUNT(*) AS n
    FROM read_parquet('hf://datasets/open-index/open-github/today/raw/**/*.parquet')
    GROUP BY event_type ORDER BY n DESC
""").show()
```
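Since payload_json preserves the full raw payload, fields that are not promoted to typed columns remain recoverable with a single `json.loads`. A sketch on a hypothetical live row (the field values are illustrative, not real data):

```python
import json

# A hypothetical live row; payload_json carries everything the
# typed columns don't (here, the pushed ref and commit count).
row = {
    "event_type": "PushEvent",
    "repo_name": "golang/go",
    "payload_json": '{"ref": "refs/heads/master", "size": 3}',
}

payload = json.loads(row["payload_json"])
ref = payload["ref"]
```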
Events per year
2011 █████░░░░░░░░░░░░░░░░░░░░░░░░░ 14.1M
2012 █████████████░░░░░░░░░░░░░░░░░ 34.3M
2013 ██████████████████████████████ 74.5M
2014 ██████████████████░░░░░░░░░░░░ 46.9M
2015 █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 511.7K
| Year | Days | Events | Avg/Day | Raw Input | Parquet Output | Download | Process | Upload |
|---|---|---|---|---|---|---|---|---|
| 2011 | 243 | 14,096,144 | 58,008 | 2.7 GB | 1.4 GB | 1h06m | 50m30s | 1h55m |
| 2012 | 291 | 34,256,841 | 117,721 | 9.2 GB | 3.2 GB | 2h14m | 3h16m | 2h50m |
| 2013 | 344 | 74,483,412 | 216,521 | 22.7 GB | 7.0 GB | 3h27m | 10h53m | 4h29m |
| 2014 | 146 | 46,908,757 | 321,292 | 14.2 GB | 4.3 GB | 1h34m | 8h09m | 2h14m |
| 2015 | 2 | 511,749 | 255,874 | 166.6 MB | 85.1 MB | 20s | 2m59s | 48s |
Pushes per year
Pushes are the most common event type, representing roughly half of all GitHub activity. Each push can contain multiple commits. Bots (Dependabot, Renovate, CI pipelines) account for a significant share.
2011 █████░░░░░░░░░░░░░░░░░░░░░░░░░ 6.7M
2012 ████████████░░░░░░░░░░░░░░░░░░ 16.5M
2013 ██████████████████████████████ 38.1M
2014 ██████████████████░░░░░░░░░░░░ 23.9M
2015 █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 274.6K
```sql
-- Top 20 repos by push volume this year
SELECT repo_name, COUNT(*) AS pushes, SUM(size) AS commits
FROM read_parquet('hf://datasets/open-index/open-github/data/pushes/2025/**/*.parquet')
GROUP BY repo_name ORDER BY pushes DESC LIMIT 20;
```
Issues per year
Issue events track the full lifecycle: opened, closed, reopened, labeled, assigned, and more. Use the action column to filter by lifecycle stage.
2011 █████░░░░░░░░░░░░░░░░░░░░░░░░░ 737.1K
2012 █████████████░░░░░░░░░░░░░░░░░ 1.9M
2013 ██████████████████████████████ 4.3M
2014 █████████████████░░░░░░░░░░░░░ 2.5M
2015 █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 24.9K
```sql
-- Repos with the most issues opened this year
SELECT repo_name,
       COUNT(*) FILTER (WHERE action = 'opened') AS opened,
       COUNT(*) FILTER (WHERE action = 'closed') AS closed
FROM read_parquet('hf://datasets/open-index/open-github/data/issues/2025/**/*.parquet')
GROUP BY repo_name ORDER BY opened DESC LIMIT 20;
```
Pull requests per year
Pull request events cover the full review cycle: opened, merged, closed, review requested, and synchronized (new commits pushed). The merged field indicates whether a PR was merged when closed.
2011 ███░░░░░░░░░░░░░░░░░░░░░░░░░░░ 370.9K
2012 ███████████████░░░░░░░░░░░░░░░ 1.5M
2013 ██████████████████████████████ 2.9M
2014 █████████████████████░░░░░░░░░ 2.0M
2015 █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 20.7K
```sql
-- Top repos by merged PRs this year
SELECT repo_name, COUNT(*) AS merged_prs
FROM read_parquet('hf://datasets/open-index/open-github/data/pull_requests/2025/**/*.parquet')
WHERE action = 'closed' AND merged = true
GROUP BY repo_name ORDER BY merged_prs DESC LIMIT 20;
```
Stars per year
Stars (WatchEvent in the GitHub API) reflect community interest and discovery. Starring patterns often correlate with Hacker News, Reddit, or Twitter posts.
2011 ██████░░░░░░░░░░░░░░░░░░░░░░░░ 1.4M
2012 ██████████████░░░░░░░░░░░░░░░░ 3.3M
2013 ██████████████████████████████ 7.0M
2014 ██████████████████░░░░░░░░░░░░ 4.3M
2015 █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 50.3K
```sql
-- Most starred repos this year
SELECT repo_name, COUNT(*) AS stars
FROM read_parquet('hf://datasets/open-index/open-github/data/stars/2025/**/*.parquet')
GROUP BY repo_name ORDER BY stars DESC LIMIT 20;
```
Quick start
Python (datasets)
```python
from datasets import load_dataset

# Stream all stars
ds = load_dataset("open-index/open-github", "stars", streaming=True)
for row in ds["train"]:
    print(row["repo_name"], row["actor_login"], row["created_at"])

# Load a specific month of issues
ds = load_dataset("open-index/open-github", "issues",
                  data_files="data/issues/2024/06/*.parquet")

# Load all pull requests into memory
ds = load_dataset("open-index/open-github", "pull_requests")

# Query today's live events
ds = load_dataset("open-index/open-github", "live", streaming=True)
for row in ds["train"]:
    print(row["event_type"], row["repo_name"], row["created_at"])
```
DuckDB
```sql
-- Top 20 most-starred repos this year
SELECT repo_name, COUNT(*) AS stars
FROM read_parquet('hf://datasets/open-index/open-github/data/stars/2025/**/*.parquet')
GROUP BY repo_name ORDER BY stars DESC LIMIT 20;

-- Most active PR reviewers (approved only)
SELECT actor_login, COUNT(*) AS reviews
FROM read_parquet('hf://datasets/open-index/open-github/data/pr_reviews/2025/**/*.parquet')
WHERE review_state = 'approved'
GROUP BY actor_login ORDER BY reviews DESC LIMIT 20;

-- Issue open/close rates by repo
SELECT repo_name,
       COUNT(*) FILTER (WHERE action = 'opened') AS opened,
       COUNT(*) FILTER (WHERE action = 'closed') AS closed,
       ROUND(COUNT(*) FILTER (WHERE action = 'closed') * 100.0 /
             NULLIF(COUNT(*) FILTER (WHERE action = 'opened'), 0), 1) AS close_pct
FROM read_parquet('hf://datasets/open-index/open-github/data/issues/2025/**/*.parquet')
GROUP BY repo_name HAVING opened >= 10
ORDER BY opened DESC LIMIT 20;

-- Full activity timeline for a repo
SELECT event_type, created_at, actor_login
FROM read_parquet('hf://datasets/open-index/open-github/data/*/2025/03/*.parquet')
WHERE repo_name = 'golang/go'
ORDER BY created_at DESC LIMIT 100;
```
Bulk download (huggingface_hub)
```python
from huggingface_hub import snapshot_download

# Download only stars data
folder = snapshot_download(
    "open-index/open-github",
    repo_type="dataset",
    local_dir="./open-github/",
    allow_patterns="data/stars/**/*.parquet",
)
```
For faster downloads, install the hf_transfer extra (`pip install "huggingface_hub[hf_transfer]"`) and set `HF_HUB_ENABLE_HF_TRANSFER=1`.
Schema
Event envelope (shared across all 16 tables)
Every row includes these columns:
| Column | Type | Description |
|---|---|---|
| event_id | string | Unique GitHub event ID |
| event_type | string | GitHub event type (e.g. PushEvent, IssuesEvent) |
| created_at | timestamp | Event timestamp (UTC) |
| actor_id | int64 | User ID of the actor |
| actor_login | string | Username of the actor |
| repo_id | int64 | Repository ID |
| repo_name | string | Full repository name (owner/repo) |
| org_id | int64 | Organization ID (0 if personal repo) |
| org_login | string | Organization login |
Per-table payload fields
pushes.PushEvent
Git push events, typically the highest volume table (~50% of all events). Each push includes the full list of commits with SHA, message, and author.
Processing: Each PushEvent produces one row. The commits field is a Parquet LIST of structs with fields sha, message, author_name, author_email, distinct, url. All other fields are flattened directly from payload.*.
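The flattening step can be sketched in plain Python. A minimal illustration on a trimmed, hypothetical PushEvent payload (field selection is ours; column names follow the table below):

```python
# Trimmed, hypothetical PushEvent payload as the Events API delivers it.
payload = {
    "push_id": 1001, "ref": "refs/heads/main",
    "head": "abc123", "before": "def456",
    "size": 1, "distinct_size": 1,
    "commits": [
        {"sha": "abc123", "message": "fix bug",
         "author": {"name": "Ada", "email": "ada@example.com"},
         "distinct": True, "url": "https://example.com/c/abc123"},
    ],
}

row = {
    "push_id": payload["push_id"],
    "ref": payload["ref"],
    "head": payload["head"],
    "before": payload["before"],
    "size": payload["size"],
    "distinct_size": payload["distinct_size"],
    # commits become a LIST of structs; nested author.* is flattened
    "commits": [
        {"sha": c["sha"], "message": c["message"],
         "author_name": c["author"]["name"],
         "author_email": c["author"]["email"],
         "distinct": c["distinct"], "url": c["url"]}
        for c in payload["commits"]
    ],
}
```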
| Column | Type | Description |
|---|---|---|
| push_id | int64 | Unique push identifier |
| ref | string | Git ref (e.g. refs/heads/main) |
| head | string | SHA after push |
| before | string | SHA before push |
| size | int32 | Total commits in push |
| distinct_size | int32 | Distinct (new) commits |
| commits | list<struct> | Commit list: [{sha, message, author_name, author_email, distinct, url}] |
issues.IssuesEvent
Issue lifecycle events: opened, closed, reopened, edited, labeled, assigned, milestoned, and more. Contains the full issue snapshot at event time.
Processing: Flattened from payload.issue.*. Nested objects like issue.user become user_login, issue.milestone becomes milestone_id/milestone_title. Labels and assignees are Parquet LIST columns.
| Column | Type | Description |
|---|---|---|
| action | string | opened, closed, reopened, labeled, etc. |
| issue_id | int64 | Issue ID |
| issue_number | int32 | Issue number |
| title | string | Issue title |
| body | string | Issue body (markdown) |
| state | string | open or closed |
| locked | bool | Whether comments are locked |
| comments_count | int32 | Comment count |
| user_login | string | Author username |
| user_id | int64 | Author user ID |
| assignee_login | string | Primary assignee |
| milestone_title | string | Milestone name |
| labels | list<string> | Label names |
| assignees | list<string> | Assignee logins |
| reactions_total | int32 | Total reactions |
| issue_created_at | timestamp | When the issue was created |
| issue_closed_at | timestamp | When closed (null if open) |
issue_comments.IssueCommentEvent
Comments on issues and pull requests. Each event contains both the comment and a summary of the parent issue.
Processing: Flattened from payload.comment.* and payload.issue.*. Comment reactions are flattened from comment.reactions.*. The parent issue fields are prefixed with issue_ for context.
| Column | Type | Description |
|---|---|---|
| action | string | created, edited, or deleted |
| comment_id | int64 | Comment ID |
| comment_body | string | Comment text (markdown) |
| comment_user_login | string | Comment author |
| comment_created_at | timestamp | Comment timestamp |
| issue_number | int32 | Parent issue/PR number |
| issue_title | string | Parent issue/PR title |
| issue_state | string | Parent state (open/closed) |
| reactions_total | int32 | Total reactions on comment |
pull_requests.PullRequestEvent
Pull request lifecycle: opened, closed, merged, labeled, review_requested, synchronize, and more. The richest table, containing diff stats, merge status, head/base refs, and full PR metadata.
Processing: Deeply flattened from payload.pull_request.*. Branch refs like head.ref, head.sha, base.ref become head_ref, head_sha, base_ref. Repository info from head.repo and base.repo become head_repo_full_name, base_repo_full_name. Labels and reviewers are Parquet LIST columns.
| Column | Type | Description |
|---|---|---|
| action | string | opened, closed, merged, synchronize, etc. |
| pr_id | int64 | PR ID |
| pr_number | int32 | PR number |
| title | string | PR title |
| body | string | PR body (markdown) |
| state | string | open or closed |
| merged | bool | Whether merged |
| draft | bool | Whether a draft PR |
| commits_count | int32 | Commit count |
| additions | int32 | Lines added |
| deletions | int32 | Lines deleted |
| changed_files | int32 | Files changed |
| user_login | string | Author username |
| head_ref | string | Source branch |
| head_sha | string | Source commit SHA |
| base_ref | string | Target branch |
| head_repo_full_name | string | Source repo |
| base_repo_full_name | string | Target repo |
| merged_by_login | string | Who merged |
| pr_created_at | timestamp | When the PR was opened |
| pr_merged_at | timestamp | When merged (null if not merged) |
| labels | list<string> | Label names |
| requested_reviewers | list<string> | Requested reviewer logins |
| reactions_total | int32 | Total reactions |
pr_reviews.PullRequestReviewEvent
Code review submissions: approved, changes_requested, commented, or dismissed. Each review is one row.
Processing: Flattened from payload.review.* and payload.pull_request.*. The review state (approved/changes_requested/commented/dismissed) is the most useful field for analyzing review patterns.
| Column | Type | Description |
|---|---|---|
| action | string | submitted, dismissed |
| review_id | int64 | Review ID |
| review_state | string | approved, changes_requested, commented, dismissed |
| review_body | string | Review body text |
| review_submitted_at | timestamp | Review timestamp |
| review_user_login | string | Reviewer username |
| review_commit_id | string | Commit SHA reviewed |
| pr_id | int64 | PR ID |
| pr_number | int32 | PR number |
| pr_title | string | PR title |
pr_review_comments.PullRequestReviewCommentEvent
Line-level comments on pull request diffs. Includes the diff hunk for context and threading via in_reply_to_id.
Processing: Flattened from payload.comment.* and payload.pull_request.*. The diff_hunk field contains the surrounding diff context. Thread replies reference the parent comment via in_reply_to_id.
| Column | Type | Description |
|---|---|---|
| action | string | created |
| comment_id | int64 | Comment ID |
| comment_body | string | Comment text |
| diff_hunk | string | Diff context |
| path | string | File path |
| line | int32 | Line number |
| side | string | LEFT or RIGHT |
| in_reply_to_id | int64 | Parent comment (threads) |
| comment_user_login | string | Author |
| comment_created_at | timestamp | Timestamp |
| pr_number | int32 | PR number |
| reactions_total | int32 | Total reactions |
stars.WatchEvent
Repository star events. Simple, high-signal data: who starred which repo, and when. The action field is always "started" (GitHub API naming quirk: WatchEvent means starring, not watching).
Processing: Minimal flattening: only the action field from payload. The event envelope (actor, repo, timestamp) carries all the useful information.
| Column | Type | Description |
|---|---|---|
| action | string | Always started |
forks.ForkEvent
Repository fork events. Contains metadata about the newly created fork, including its language, license, and star count at fork time.
Processing: Flattened from payload.forkee.*. The forkee is the newly created repository. Owner info from forkee.owner becomes forkee_owner_login. License from forkee.license becomes forkee_license_key. Topics are a Parquet LIST column.
| Column | Type | Description |
|---|---|---|
| forkee_id | int64 | Forked repo ID |
| forkee_full_name | string | Fork full name (owner/repo) |
| forkee_language | string | Primary language |
| forkee_stars_count | int32 | Stars at fork time |
| forkee_forks_count | int32 | Forks at fork time |
| forkee_owner_login | string | Fork owner |
| forkee_description | string | Fork description |
| forkee_license_key | string | License SPDX key |
| forkee_topics | list<string> | Repository topics |
| forkee_created_at | timestamp | Fork creation time |
creates.CreateEvent
Branch, tag, or repository creation. The ref_type field distinguishes between them.
Processing: Direct mapping from payload.* fields. When ref_type is "repository", the ref field is null and description contains the repo description.
| Column | Type | Description |
|---|---|---|
| ref | string | Ref name (branch/tag name, null for repos) |
| ref_type | string | branch, tag, or repository |
| master_branch | string | Default branch name |
| description | string | Repo description (repo creates only) |
| pusher_type | string | User type |
deletes.DeleteEvent
Branch or tag deletion. Repositories cannot be deleted via the Events API.
Processing: Direct mapping from payload.* fields.
| Column | Type | Description |
|---|---|---|
| ref | string | Deleted ref name |
| ref_type | string | branch or tag |
| pusher_type | string | User type |
releases.ReleaseEvent
Release publication events. Contains the full release metadata including tag, release notes, and assets.
Processing: Flattened from payload.release.*. Author info from release.author becomes release_author_login. Assets are a Parquet LIST of structs. Reactions flattened from release.reactions.*.
| Column | Type | Description |
|---|---|---|
| action | string | published, edited, etc. |
| release_id | int64 | Release ID |
| tag_name | string | Git tag |
| name | string | Release title |
| body | string | Release notes (markdown) |
| draft | bool | Draft release |
| prerelease | bool | Pre-release |
| release_created_at | timestamp | Creation time |
| release_published_at | timestamp | Publication time |
| release_author_login | string | Author |
| assets_count | int32 | Number of assets |
| assets | list<struct> | Assets: [{name, label, content_type, state, size, download_count}] |
| reactions_total | int32 | Total reactions |
commit_comments.CommitCommentEvent
Comments on specific commits. Can be on a specific file and line, or on the commit as a whole.
Processing: Flattened from payload.comment.*. When the comment is on a specific file, path and line are populated. Reactions flattened from comment.reactions.*.
| Column | Type | Description |
|---|---|---|
| comment_id | int64 | Comment ID |
| commit_id | string | Commit SHA |
| comment_body | string | Comment text |
| path | string | File path (line comments) |
| line | int32 | Line number |
| position | int32 | Diff position |
| comment_user_login | string | Author |
| comment_created_at | timestamp | Timestamp |
| reactions_total | int32 | Total reactions |
wiki_pages.GollumEvent
Wiki page creates and edits. A single GollumEvent can contain multiple page changes, so we emit one row per page (not per event).
Processing: The payload.pages array is unpacked: each page in the array produces a separate row, all sharing the same event envelope. This means one GitHub event can generate multiple rows.
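The fan-out can be sketched in a few lines. A minimal illustration with a hypothetical envelope trimmed to two fields for brevity:

```python
# One GollumEvent fans out to one row per page, each copying the
# event envelope (hypothetical values, trimmed for brevity).
envelope = {"event_id": "42", "repo_name": "octocat/wiki-demo"}
payload = {
    "pages": [
        {"page_name": "Home", "title": "Home", "action": "edited",
         "sha": "aaa", "summary": None},
        {"page_name": "FAQ", "title": "FAQ", "action": "created",
         "sha": "bbb", "summary": None},
    ]
}

# Each page merges with a copy of the shared envelope
rows = [{**envelope, **page} for page in payload["pages"]]
```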
| Column | Type | Description |
|---|---|---|
| page_name | string | Page slug |
| title | string | Page title |
| action | string | created or edited |
| sha | string | Page revision SHA |
| summary | string | Edit summary |
members.MemberEvent
Collaborator additions to repositories.
Processing: Flattened from payload.member.*. The actor is who added the member; the member fields describe who was added.
| Column | Type | Description |
|---|---|---|
| action | string | added |
| member_id | int64 | Added user's ID |
| member_login | string | Added user's username |
| member_type | string | User type |
public_events.PublicEvent
Repository visibility changes from private to public. The simplest table, containing only the event envelope (who, which repo, when) with no additional payload columns.
Processing: No payload fields are extracted. The event envelope alone captures the relevant information.
discussions.DiscussionEvent
GitHub Discussions lifecycle: created, answered, category_changed, labeled, and more. Includes category, answer status, and full discussion metadata.
Processing: Flattened from payload.discussion.*. Category info from discussion.category becomes category_name/category_slug/category_emoji. Answer info becomes answer_html_url/answer_chosen_at. Labels are a Parquet LIST column. Reactions flattened from discussion.reactions.*.
| Column | Type | Description |
|---|---|---|
| action | string | created, answered, category_changed, etc. |
| discussion_number | int32 | Discussion number |
| title | string | Discussion title |
| body | string | Discussion body (markdown) |
| state | string | Discussion state |
| comments_count | int32 | Comment count |
| user_login | string | Author |
| category_name | string | Category name |
| category_slug | string | Category slug |
| discussion_created_at | timestamp | When created |
| answer_chosen_at | timestamp | When answer was accepted (null if none) |
| labels | list<string> | Label names |
| reactions_total | int32 | Total reactions |
Per-table breakdown
| Table | GitHub Event | Events | % | Description |
|---|---|---|---|---|
| pushes | PushEvent | 85,477,582 | 50.2% | Git pushes with commits |
| issues | IssuesEvent | 9,504,384 | 5.6% | Issue lifecycle events |
| issue_comments | IssueCommentEvent | 14,848,057 | 8.7% | Comments on issues/PRs |
| pull_requests | PullRequestEvent | 6,875,412 | 4.0% | PR lifecycle events |
| pr_review_comments | PullRequestReviewCommentEvent | 1,536,339 | 0.9% | Line-level PR comments |
| stars | WatchEvent | 15,970,341 | 9.4% | Repository stars |
| forks | ForkEvent | 6,116,619 | 3.6% | Repository forks |
| creates | CreateEvent | 21,460,713 | 12.6% | Branch/tag/repo creation |
| deletes | DeleteEvent | 2,079,003 | 1.2% | Branch/tag deletion |
| releases | ReleaseEvent | 186,729 | 0.1% | Release publications |
| commit_comments | CommitCommentEvent | 1,596,617 | 0.9% | Comments on commits |
| wiki_pages | GollumEvent | 2,875,837 | 1.7% | Wiki page edits |
| members | MemberEvent | 104,094 | 0.1% | Collaborator additions |
| public_events | PublicEvent | 157,568 | 0.1% | Repo made public |
How it's built
The pipeline has two modes that work together:
Archive mode processes historical GH Archive hourly dumps in a single pass per file: download the .json.gz, decompress and parse each JSON line, route by event type to one of 16 handlers, flatten nested JSON into typed columns, write to Parquet with Zstd compression, and publish daily to HuggingFace.
Live mode captures events directly from the GitHub Events API in near-real-time. Multiple API tokens poll concurrently with adaptive pagination (up to 300 events per cycle). Events are deduplicated by ID, bucketed into 5-minute blocks by their created_at timestamp, and written as Parquet files. Each block is pushed to HuggingFace immediately after writing. On each hour boundary, the corresponding GH Archive file is downloaded and merged into the typed daily tables for complete coverage.
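The dedup-by-ID step above can be sketched simply: polls return overlapping pages, so events whose IDs have already been seen are dropped. A minimal sketch under that assumption (the function name and in-memory set are ours; a production pipeline would bound the set's size):

```python
# Events already committed in earlier poll cycles
seen: set[str] = set()

def dedupe(batch: list[dict]) -> list[dict]:
    """Keep only events whose IDs have not been seen before."""
    fresh = []
    for event in batch:
        if event["id"] not in seen:
            seen.add(event["id"])
            fresh.append(event)
    return fresh

first = dedupe([{"id": "1"}, {"id": "2"}])
second = dedupe([{"id": "2"}, {"id": "3"}])  # "2" is a repeat from the overlap
```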
All scalar fields are fully flattened into typed columns. Variable-length arrays (commits, labels, assets, topics, assignees) are stored as native Parquet LIST columns — no JSON strings. All *_at timestamp fields use the Parquet TIMESTAMP type (UTC microsecond precision), so DuckDB, pandas, Spark, and the HuggingFace viewer all read them as native datetimes.
No events are filtered. Every public event captured by GH Archive appears in the corresponding table. Events with parse errors are logged and skipped (typically less than 0.01%).
Known limitations
- Full coverage starts 2015-01-01. Events from 2011-02-12 to 2014-12-31 are included but parsed from the deprecated Timeline API format, which has less detail for some event types.
- Bot activity. A significant fraction of events (especially pushes and issues) are generated by bots such as Dependabot, Renovate, and CI systems. No bot filtering is applied.
- Event lag. GH Archive captures events with a small delay (roughly minutes). Events during GitHub outages may be missing.
- Pre-2015 limitations. IssuesEvent and IssueCommentEvent from 2012-2014 contain only integer IDs (no title, body, or state) because the old API did not include full objects in event payloads.
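Since no bot filtering is applied, consumers who want human-only activity typically filter on actor_login themselves. A rough heuristic, assuming the common GitHub conventions that GitHub Apps carry a "[bot]" suffix and many older bot accounts end in "-bot" (not exhaustive, and it will miss bots that use plain usernames):

```python
def looks_like_bot(login: str) -> bool:
    """Rough bot heuristic based on common login conventions."""
    lowered = login.lower()
    return lowered.endswith("[bot]") or lowered.endswith("-bot")

logins = ["dependabot[bot]", "renovate-bot", "torvalds"]
humans = [l for l in logins if not looks_like_bot(l)]
```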
Personal information
All data was already public on GitHub. Usernames, user IDs, and repository information are included as they appear in the GitHub Events API. Email addresses may appear in commit metadata within PushEvent payloads (from public git commit objects). No private repository data is present.
License
Released under the Open Data Commons Attribution License (ODC-By) v1.0. The underlying data is sourced from the public GitHub Events API via GH Archive. GitHub's Terms of Service apply to the original data.
Credits
- GH Archive by Ilya Grigorik, the foundational project that has recorded every public GitHub event since 2011
- GitHub Events API, the source data stream
- Built with Apache Parquet (Go), published via HuggingFace Hub
Contact
Questions, feedback, or issues? Open a discussion on the Community tab.