The Density Index
My first Ethereum conference was 20,000 people, sprawled across Amsterdam.
I had a game plan. Find the venues where someone was putting in effort for a great experience — not big-name speakers or swag, but comforts like good food, and communal spaces away from the megaphones. Because when an event works, it's because the people in it are at their best. Learning, conversation, connections. All of those depend on the state of the people in the room.
That led me to 0xPARC's Dark Forest conference.
After the talks, we've all seen small crowds form around speakers. People waiting their turn to get answers on a topic they care about. The speaker's coming down from nerves, just wants a bottle of water, and the whole conference environment isn't built for this moment to be good. But it's where you can find the meaningful questions, thoughtful expert answers, and deep conversations.
And if it's your topic too, it's where you find your people.
I hit it off with a bunch of game developers.
The Amsterdam tourist district was a drunken, crypto merch explosion. I wondered how many were setting themselves up to get pickpocketed, or worse.
What struck me about the whole Blockchain Week experience was the exclusive events. Tickets sold out on the event pages. Bouncers at the door. Then you'd get inside and find a mostly empty room. Like clubs that keep a line outside to look full.
A group of us figured out a better way. Check which events your friends were at. Show up. They'd come to the door, maybe grab the organiser, who'd be happy to let in people vetted by their community. No tickets needed.
I had to be missing something. Crypto was supposed to be about economic design, and nobody had solved this economic design problem they were all living through every summer?
The design problem
Blockchain Weeks have a no-show problem. RSVPs are free options, organisers compete on hype, and the events that could really move people forward get lost in the noise. Years later, I'm building zKal, and the Proof Projector architecture it uses gives you a privacy-preserving way to publish aggregate signals from private attendance proofs.
It's a cryptographic pattern. It doesn't tell you what to aggregate. And that turns out to be the harder question.
But the constraint helps by bringing the design challenge into clearer focus.
When you can't see raw attendance history — because privacy — you have to ask: what's the thing across all this data that tells you something real?
It breaks down into two specific functions:
- Projector Function: What single, anonymous number comes out of an individual's private history?
- Aggregation Function: How do you combine those so they're useful to others in the community?
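The split can be sketched in a few lines. This is an illustrative shape only, with a deliberately naive projector (a capped count of prior events); the weight function zKal actually projects is developed in the rest of the post.

```python
from statistics import mean

def projector(private_history: list) -> float:
    """Collapse an individual's private attendance history into one
    anonymous number. Naive placeholder: count prior events, capped at 10."""
    return min(len(private_history), 10) / 10

def aggregate(weights: list) -> float:
    """Combine anonymous weights into a public, per-event signal."""
    return mean(weights) if weights else 0.0

# Only the outputs of projector() ever leave the private context.
histories = [[{"event": "a"}], [], [{"event": "b"}, {"event": "c"}]]
score = aggregate([projector(h) for h in histories])
```

The point of the interface is the boundary: `aggregate` never sees a history, only the anonymous numbers `projector` emits.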
What the weight needs to capture
If you've run meetups or conferences for some time, you've seen a general progression.
First, someone's checking out the scene. Is this a place for me? They're scanning the topics, the vibe, the people. Big-name speakers help here — trustworthy authorities who give a high-level view of the current issues, a fast track to understanding what matters.
Then they start applying what they've learned. Their needs shift. They want practitioners who are various distances ahead of them on their specific path. Too many big names actually get in the way at this point. They become noise to a signal that's about practicalities — projects working and learning in public, people who've been building for six months or a few years and have hard-won things to share.
The crypto ecosystem is full of these people, but the conference structure very rarely surfaces them. I'd argue this is one of the big reasons we have a chasm between the many with deep interest and the few projects that succeed. (The other is that over-funding kills projects, but that's another thing altogether.)
On the other side, you get the committed communities. Specialised workshops, cohort programs, or just small rooms of people around hands-on practical topics — seemingly niche but major unlocks for anyone in that space.
A good weight function should reflect where someone is in this progression, and match them to events where they'll get the most out of it.
The formula
zKal is a self-hosted ticketing platform, but you can host for an entire community of events. The tickets are private ZK proofs, and the fun part is how tickets get released in phases. A phase can be for people who've attended before (they need an attendance proof) or for people who've been to related events, and then there's open access. Event hosts choose the number of tickets and the timeline for each phase.
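The phase structure might look something like this. The field names and values are illustrative guesses, not zKal's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Phase:
    name: str             # e.g. "returning", "related", "open"
    requires_proof: bool  # is an attendance proof needed to claim a ticket?
    tickets: int          # allocation chosen by the host
    opens_at: str         # release date chosen by the host

phases = [
    Phase("returning", True, 40, "2025-03-01"),    # prior attendees of this event
    Phase("related",   True, 40, "2025-03-08"),    # attendees of host-listed related events
    Phase("open",      False, 120, "2025-03-15"),  # general access
]
total_tickets = sum(p.tickets for p in phases)
```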
When someone proves eligibility for a ticket, the proof carries properties derived from private inputs, their prior event attendance: recency, event type, event size. Enough to generate a single weight. Not unique, not traceable.
Only the weight enters the aggregate layer. The person doesn't.
participant_weight = recency × trust × size_normalization
Recency uses exponential decay — weight halves every 12 months since attendance. Recent engagement matters more, but older participation still contributes. Nobody gets permanently locked in or locked out.
Trust reflects how often a prior event is cited as an access credential by other events in the ecosystem. It's bounded between 0.33 and 0.66 on an S-curve — starts increasing after 2 independent citations, saturates after 6. This prevents any single anchor event from monopolising reputation while rewarding events the ecosystem organically values.
Size normalization is the inverse of the prior event's RSVP count. Smaller, more focused gatherings carry stronger signal.
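The three factors can be written down directly from the constants in the text (12-month half-life, trust bounded between 0.33 and 0.66, inverse RSVP count). The logistic steepness and its midpoint at 4 citations are my own guesses at the S-curve's shape; only the bounds and the 2-to-6 citation range come from the post.

```python
import math

def recency(months_since: float) -> float:
    # exponential decay: weight halves every 12 months since attendance
    return 0.5 ** (months_since / 12)

def trust(citations: int) -> float:
    # logistic S-curve bounded in [0.33, 0.66]: near the floor at 2
    # independent citations, near saturation by 6 (midpoint/steepness assumed)
    return 0.33 + 0.33 / (1 + math.exp(-2.0 * (citations - 4)))

def size_norm(rsvp_count: int) -> float:
    # inverse of the prior event's RSVP count: smaller rooms, stronger signal
    return 1 / rsvp_count

def participant_weight(months_since: float, citations: int, rsvp_count: int) -> float:
    return recency(months_since) * trust(citations) * size_norm(rsvp_count)
```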
The event's public score is:
Density Index = mean(participant_weights)
The Density Index is a single public number per event, visible in realtime on the calendar. It tells you how likely this event is to have a high density of people who actually care about its topic.
That's the model I started with. Grounded in years of running meetups and conferences, encoded into math. The question was whether the math would agree with the experience — and where it would break.
Breaking it
I vibe-coded a scenario simulator and threw two years of blockchain week event listing data at it. No historical attendance records existed, only event listings — so the exercise was about sweeping assumptions across scenarios and looking for where the model fell apart. Each run would push a different set of permutations: event hosts playing the Density Index game, and participants reacting to it.
I kept running it, scanning for outcomes that looked wrong or diverged from what the incentives were supposed to produce.
Running it again and again and again.
Under-curation
The first thing that broke was the obvious way to game the system. Event hosts can select relevant events for theirs, which grants those attendees early access and raises your event's Density Index.
Hosts that want to attract big crowds can simply list a large number of other events for early access to theirs — or cherry-pick attendees from the biggest or most exclusive events. The simulations showed an optimal range of 3–4 relevant intake events, with an inflection point around 5. Go beyond that and you're not curating, you're harvesting.
So the Density Index gets a curation penalty — an S-curve centered at 5 listed qualifying events. The farther from that range, the steeper the cost.
Density Index = mean(participant_weights) × curation_penalty
The limit forces choosiness: picking the right people for your event rather than the mainstream or in-group crowd.
The modeling was about finding the right shape for the penalty — where to set the inflection point, how steep to make the curve.
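One possible shape for that penalty is a falling logistic with its inflection at 5 listed qualifying events. The text fixes only the center; the steepness here is a placeholder standing in for what the modeling tuned.

```python
import math

def curation_penalty(n_listed: int, center: float = 5.0, steepness: float = 1.5) -> float:
    # ~1.0 in the 3-4 event sweet spot, 0.5 at the inflection point of 5,
    # dropping steeply for hosts who list many events to harvest attendees
    return 1 / (1 + math.exp(steepness * (n_listed - center)))
```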
Collusion
The curation fix surfaced a deeper problem.
A related event adds weight to yours through the trust factor — but that relationship can be faked. Two organisers list each other as qualifying access events, fake a bunch of attendances, and points go up for both.
Mutual cross-listing for free score. Vouching rings. Circular amplification.
But responding to this with typical collusion detection is tricky.
In the crypto events ecosystem, there are natural cliques. Polkadot events have their specific crowd. The ZK chains are kind of the same people, competing with each other but sharing a core community. DAOs and the pro-social funding crowd are another cluster. They form organically around shared interests. Some of that is unhealthy: siloed knowledge, exclusive in-groups that atrophy. But a lot of that is healthy. Tight relationships, internal jargon and mental models. That's how communities cohere.
Separate from these cliques, there's a small network of event marketing people who work across the better-funded projects. They co-promote constantly, and it produces a sameness across events — similar lineups, similar sponsors, similar vibes. This creates a form of de facto collusion, horizontal integration for conference visibility. Not great, but it's how the conference industry works. This shouldn't necessarily be penalised unless it can be empirically demonstrated that those events have weaker communities.
So the model needed to reward two things at once: a returning core community (which shows your event is good and your community is strong enough that attendees will find great conversations easily) and genuine openness — pulling in diverse groups based on topic-interest rather than who-you-know access. True inclusiveness, not in-grouping.
Then it needed to penalise the gaming layer without punishing the organic collaboration underneath.
The answer was a steep anti-collusion penalty when two events that cite each other share more than 40% mutual attendee overlap.
Density Index = mean(participant_weights) × curation_penalty × anti_collusion_penalty
As an event host, you can collaborate broadly. But the Index rewards you for having a distinct community core and for drawing in people from genuinely different communities. Not for trading favours with three friends to look exclusive.
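A sketch of the penalty's shape: flat below the 40% mutual-overlap threshold, falling steeply above it. The 40% trigger comes from the text; the overlap definition, the falloff rate, and the floor are assumptions standing in for the tuned model.

```python
def mutual_overlap(attendees_a: set, attendees_b: set) -> float:
    """Fraction of the smaller event's attendees shared with the other."""
    if not attendees_a or not attendees_b:
        return 0.0
    shared = len(attendees_a & attendees_b)
    return shared / min(len(attendees_a), len(attendees_b))

def anti_collusion_penalty(overlap: float, threshold: float = 0.40) -> float:
    # no penalty for organic collaboration below the threshold
    if overlap <= threshold:
        return 1.0
    # steep linear falloff above it, floored at 0.1 (placeholder shape)
    return max(0.1, 1.0 - 4 * (overlap - threshold))
```

Applied only between pairs of events that list each other as qualifying access, this leaves one-directional or loose collaboration untouched while making vouching rings expensive.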
Scarcity and sorting
The penalties had a side effect I noticed when I looked at the scenarios qualitatively.
When early access is limited — for both hosts granting it and participants earning it — surfacing the right events increased demand for them, and that demand made their tickets scarce. Anchor events got more exclusive too. And participants started self-selecting into topics that actually mattered to them.
This maps to the progression I described earlier. The newcomer checking out the scene gravitates toward big-name events — broad, authoritative, good for orientation. But someone who's been applying skills for a while, who needs practitioners and practical work, starts gravitating toward the focused rooms where those people are. The Density Index was matching the-events-you-want-now with the-events-you-can-get-into-first.
The model was shifting high scores away from large general events toward focused rooms with strong community stickiness — with the exception of anchor events that the whole ecosystem genuinely draws from. The design was encoding the sorting that conferences have never managed to produce: people finding the rooms that match where they actually are, not where the hype tells them to be.
Whether this plays out in practice is the experiment. But the simulations showed the mechanism — expose quality, distribute scarcity, and people sort themselves toward depth over time.
Small samples
One practical problem: early-stage events with fewer than about 10 signups were wildly volatile. A few strong attendees could spike a score. A few early signups with no history could crater it.
The fix is Bayesian shrinkage toward the ecosystem mean for events under 30 RSVPs — a weighted blend of the event's raw Density and the network baseline, stabilising scores until there's enough data to trust them.
Density Index = mean(participant_weights) × curation_penalty × anti_collusion_penalty × bayesian_shrinkage_factor
Without it, your newest events would be the noisiest, with the least trustworthy scores. And we want the opposite: to help people find the small but good events that are right for them.
Those are where the strongest connections and relationships form.
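The shrinkage is the textbook blend of an event's raw score with the network baseline, weighted by how much data the event has. The 30-RSVP cutoff comes from the text; the pseudo-count k is a placeholder for the tuned value.

```python
def shrunk_density(raw_density: float, n_rsvps: int,
                   baseline: float, k: int = 15) -> float:
    if n_rsvps >= 30:
        return raw_density               # enough data to trust the raw score
    w = n_rsvps / (n_rsvps + k)          # weight on the event's own signal
    return w * raw_density + (1 - w) * baseline
```

A brand-new event with zero signups simply shows the ecosystem mean, and its score earns independence as RSVPs accumulate.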
The new game this creates
Right now, event organisers play one game: maximise RSVPs, compete on hype. The currency is attention. Free drinks, big-name speakers, aggressive cross-sponsorship. It degrades into out-spending, not out-performing.
The Density Index creates a second game. Your event has a public score — a signal of whether your attendees actually show up, how strong your community core is. That score gets you featured on the calendar. That score tells someone with five choices to pick you.
Returning members become your most valuable asset. You start winning by focusing on building your own community snowball effect, which means serving them better over time.
It shifts the intake goals towards which communities you invite, not just how many people you get through the door.
The cynical version of this game still exists. A competitive event will notice that pulling their strongest competitors' community members spikes their weight. Two friendly organisers will notice that mutual cross-listing earns free points.
But I'm hoping we can still tip them towards a better one.
When your score compounds with genuine retention — people who came last time, brought a friend, came back again — you stop optimising for footfall and start optimising for your community members' success. You want the kind of attendee who'll be building on what they learned six months later. Who'll introduce two people who end up collaborating. Who'll come back because last time mattered. Because when you have a room full of those people, your newcomers meet the people they need to succeed for themselves.
That's a different conference than the one everyone's been building. It's a different conference week than anyone's ever seen.
Whether the score makes enough organisers play the second game to shift the ecosystem — or whether the edge cases eat it first — is the question worth finding out.