Running towards “the suck”
The most subtle kind of failure is the one that hides behind competence. You’re doing well enough to be seen as good, but not enough to feel the stretch. You’re failing to fail.
Real growth begins where feedback starts to hurt. Progress demands discomfort, but our reptile brain is wired to avoid it. We often choose fluency over friction, familiarity over feedback, comfort over clarity; and in doing so, we stop evolving.
I was reminded of this viscerally through tennis. When I first took up the sport three years ago, my rate of improvement was steep. Every week, a new shot landed, a new motion clicked. But then the curve flattened abruptly. I could rally cleanly in training but collapsed in matches. All the training, straight out the window; it just would not translate into match play. Instead, it translated into a few broken rackets.
That phase lasted years. Painful, repetitive, unrewarding, and often downright revolting. Similarly with windsurfing, I have thought of quitting many a time; “why do I always pick sports that produce more frustration than they do joy?” I would type into GPT.
The only thing that kept me going, besides a genuine love for the act of playing, was a quiet belief that something underneath was rewiring; that if I only endured “the suck” a little longer, I would eventually climb to the next level of the learning curve. And so instead of playing tentatively in matches to hold my own and scrape out a W, I decided to suck some more.
And one random Tuesday afternoon, it happened. Somewhere along the line, the patterns had inverted. Instead of scrambling to think of what to do next, I started flowing. Time dilated, my form held, and the points started racking up. The system finally integrated.
That’s when I think I started climbing the slope of enlightenment.
The same mechanics apply to work, to relationships, to self-development. The only sustainable path to getting good enough, then getting good, then becoming great, is learning to get over failure faster than it accumulates; standing your ground as it tries to beat you into submission.
The moment you stop failing, you’ve stopped learning. And that’s the real failure.
The second derivative of conflict resolution
I wrote this after noticing a pattern in how good teams and good relationships evolve. It’s not that they avoid conflict, but rather that they metabolize it faster every time. The model that emerged was a mathematical one: relationships as learning systems, their health measurable by the slope of repair.
One of the beliefs I hold most firmly is that the best predictor of success in any relationship, whether romantic, friendship, or team, is the second derivative of conflict resolution.
By conflict, I don’t mean shouting or drama. I mean any point where expectations diverge and two internal models of reality collide.
A great relationship is not one without friction; it’s one where friction resolves faster and cleaner over time. The first time you face conflict, it takes a day to recover. The second, six hours. The third, ninety minutes. The fourth, twenty. After that, the curve asymptotes toward zero.
That curve, the rate at which repair accelerates, is what I call the second derivative of conflict resolution (SDCR). It measures not harmony, but learning. Every disagreement, once resolved, adds a building block to shared understanding, which means you don’t have to fight the same fight twice.
This reframes relationship quality from being about harmony to being about adaptive efficiency. The first derivative of conflict resolution shows how quickly a single conflict resolves (i.e. the velocity of recovery). The second derivative shows how that velocity improves over time (i.e. whether the system learns). In simpler terms, what matters isn’t how fast you repair once, but how fast you get better at repairing.
If, over successive conflicts, recovery velocity keeps rising, meaning repair happens faster each time, then the second derivative is positive: the system is learning. Conversely, when it flattens or turns negative (i.e. when conflicts take just as long, or longer, to resolve), it’s a sign of structural incompatibility. The system isn’t learning. What looks like “communication problems” is really the absence of adaptation.
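To make the arithmetic concrete, here is a minimal sketch in Python using the illustrative recovery times from above (a day, six hours, ninety minutes, twenty minutes); the variable names are mine:

```python
# Recovery times (in hours) after each successive conflict,
# using the illustrative numbers above: 24h, 6h, 1.5h, ~20min.
recovery = [24.0, 6.0, 1.5, 1 / 3]

# First difference: how much faster each repair was than the last.
# Negative values mean recovery is speeding up.
first_diff = [b - a for a, b in zip(recovery, recovery[1:])]
print(first_diff)  # ≈ [-18.0, -4.5, -1.17]

# Because the curve asymptotes toward zero, the absolute drops must
# shrink; the ratio between successive recovery times is the cleaner
# learning signal: a steady ratio well below 1 means the system keeps
# compounding its repairs.
ratios = [b / a for a, b in zip(recovery, recovery[1:])]
print(ratios)  # ≈ [0.25, 0.25, 0.22] -> each repair ~4x faster than the last
```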
Most people assess relationships based on emotional tenor: how good they feel or how frequently they argue. But the SDCR model suggests something different: conflict isn’t a sign of failure; it is signal. Each disagreement surfaces new data about boundaries, needs, and blind spots.
In that sense, the counterintuitive truth is that the path to relational strength runs straight through conflict.
Every repair is a form of learning; every argument, a test of how well two people can turn friction into shared understanding. What ultimately defines longevity is how efficiently that learning compounds, and how each conflict leaves the system slightly more aligned than before.
What we often call being “well-matched” is really just phase alignment under low stress. A relationship that truly compounds is one where both people elevate each other through conflict.
Common sense suggests compatible people should recover faster, but the inverse is also true; people who recover faster become more compatible. The variable you can actually control is the learning rate; the slope of repair.
It’s worth highlighting that awareness itself changes the shape of the curve. Most relationships operate unconsciously along their derivative, unaware of whether repair is accelerating or stalling. But once you can see the curve, you can influence it. Awareness reins in entropy, and replaces drift with structure.
That awareness can have two outcomes, both good. It can either help a relationship move to a higher level of coherence, or reveal that the system has reached its limit, that its slope will never meaningfully improve, and thus allow it to end cleanly. Both outcomes are infinitely better than unconscious decay.
This lens changes how you think about relational “success.” It’s not about avoiding arguments or achieving constant peace. It’s about whether repair gets faster and deeper each time. Whether the feedback loop between conflict and understanding tightens. Whether the relationship compounds.
It also applies beyond the personal. Teams, partnerships, and organizations all have an SDCR. The best companies aren’t those without disagreement but those whose disagreement-resolution curve steepens with time, as they learn to metabolize tension into clarity.
A team’s greatness isn’t its lack of internal debate, but how fast it integrates disagreement into improved operating norms. Cultures that avoid conflict decay, while cultures that metabolize it evolve.
If you believe this, then conflict stops being something to fear. It becomes diagnostic. You run toward it, because every repair is a data point on the curve. A chance to move the derivative in the right direction.
That, to me, is what separates fragile from enduring systems, whether personal or collective. It’s not how they avoid stress, but how quickly and gracefully they repair after it. The rest is just noise.
Coase in the age of code
I recently came across an essay I wrote in university in 2011, about Ronald Coase and the theory of the firm. It was dry, academic, and deeply curious about a question that still feels pertinent today: why do firms exist at all?
Back then, the answer felt settled. Coase had explained that firms emerge because organizing through the market is costly; contracts take time, negotiations add friction, and information is imperfect. When it becomes cheaper to manage people internally than to transact externally, a firm is born. The invisible boundary of the firm, he said, lies at the point where these two costs meet.
Reading that old paper now, after a decade spent around blockchains, DAOs, and “trustless” financial systems, I’m struck by how cyclical the question feels. Crypto’s grand promise was to eliminate the very frictions that gave birth to the firm; to replace bureaucracy with code, management with incentives, contracts with consensus.
If Coase’s firm existed to minimize transaction costs, and those costs could be automated away, then perhaps the firm itself could be dissolved.
That was the dream. But it didn’t happen.
When you replace contracts with smart contracts, you still need judgment: what counts as a valid state, what to upgrade, when to fork. When you remove hierarchy, you rediscover governance, only now it’s slower, noisier, and happening in public.
Bounded rationality didn’t vanish with blockchains. It simply migrated to Discord. The same cognitive limits that once defined the borders of the firm now define the borders of the network.
Agency problems persist too. Token holders delegate to committees, multisigs, or core teams. Power concentrates. Decision-making slows. Coordination becomes its own cost center. Every “decentralized” organization ends up rebuilding a managerial layer; sometimes reluctantly, sometimes accidentally.
The irony is that Coase’s logic still applies: a network expands until the cost of coordinating one more decision exceeds the cost of spinning up a new one.
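In symbols (a sketch in my own notation, not Coase’s): if $C_{\mathrm{coord}}(n)$ is the cost of coordinating the network’s $n$-th decision internally, and $C_{\mathrm{new}}$ is the cost of spinning up a fresh unit, the boundary sits at the first $n$ where

$$C_{\mathrm{coord}}(n+1) - C_{\mathrm{coord}}(n) \;>\; C_{\mathrm{new}},$$

the point at which it pays to stop expanding and start a new network instead.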
Coase described the firm as an economic structure. But over time it became something deeper: a social technology for minimizing collective error. Firms exist not just to cut transaction costs, but to give a group of humans a shared model of the world, a rhythm, a sense of who decides what.
Even if code can settle value instantly, humans still need slower systems for context, accountability, and trust. The firm endures because it optimizes for judgment, not just execution.
This is why most DAOs still look suspiciously like companies. They may route capital through tokens, but their structure—small cores, delegated authority, decision bottlenecks—echoes the same patterns Coase was describing in 1937. It turns out coordination is a harder problem than trust.
What has changed is where the boundary lies.
The minimum viable firm used to require offices, payroll, and legal scaffolding. Now it can exist as a wallet, a few passkeys, and a group chat. The cost of coordination has collapsed — not to zero, but low enough that the firm can shrink to its essence: a system for allocating capital and attention toward a shared goal.
That collapse in cost doesn’t end the firm; it atomizes it. The future looks less like one monolithic organization and more like a mesh of smaller, temporary ones.
In that sense, blockchains didn’t abolish Coase’s world; they made his boundary dynamic.
Coase saw transaction costs as economic. What he couldn’t see from 1930s London was that information processing itself would one day be the scarce resource. The true constraint on coordination is no longer contract enforcement, but rather comprehension.
A modern firm isn’t just a bundle of contracts; it’s a bundle of cognition. Its size is limited not by the cost of managing people, but by the bandwidth of shared understanding among them.
Technology keeps lowering the cost of transaction, but it doesn’t raise the ceiling of comprehension. We can move money instantly, but aligning meaning still takes time. That’s the paradox of the digital firm: infinite speed, finite sensemaking.
When I wrote that early essay, I thought of the firm as an object, a structure bounded by cost. Now I think of it as a living organism bounded by cognition. The question is no longer why firms exist, but how fluid they can become before they stop being coherent.
Coase explained why we built firms. Crypto reminded us why we still need them.
When smart people fail together
In every financial crisis, many billions are lost not by crooks but by smart people doing honest work, often as part of a committee. Jason Zweig wrote that in 2009. It still holds.
Committees can amplify wisdom or destroy it. Their output depends on how they’re tuned—by “tuning” I mean the group’s shared mental models, processes, and balance of skills and domain expertise.
So: how can a group improve the filters that govern collective decision-making?
First, understand bias. The human brain is an unreliable instrument under uncertainty, and naming its flaws helps tame them. If it has a name, it has a face.
Second, anchor on process. In investing, results emerge from repeatable systems. Good venture funds operate from a thesis—a directional map. Quant hedge funds live at the opposite pole: they shorten the feedback loop between input and output to near real-time. But both depend on alignment around the basic truths of their field. Without that, collaboration becomes noise.
Noise is the enemy. Groups add it easily, especially through groupthink. Diversity of perspective is the antidote, though not to be confused with diversity of principle. A good team contains different pairs of eyes on the same truth, not different truths.
Third, invest in a shared epistemology. How much evidence is “enough”? What logic governs the tie-break: empiricism, first principles, or precedent? What is the agreed method for evaluating new ideas? In any domain of repeated judgment under high uncertainty, eliminating systematic error is the only path to durable success.
And today, that path runs through data. Ten years ago, heuristic judgment could suffice; the infrastructure wasn’t ready. Now, ignoring data is like entering a boxing match one-armed. If you’re Tyson, maybe you’ll win. Most aren’t.
Consensus should exist only at the level of foundational truth. Beyond that, it breeds mediocrity. In fact, committees could use a “scrum master” of sorts—a behavioral-science referee tasked with exposing flaws in individual and group logic.
Below are a few cognitive biases worth keeping on the radar. Think of them as recurring bugs in the operating system of judgment:
False equivalence – mistaking resemblance for parity
Cherry picking – privileging confirming data
Representativeness heuristic – assuming similar appearances imply similar odds
Anchoring – clinging to the first data point
Scarcity bias – equating rarity with value
Social proof – mistaking consensus for truth
Sunk cost fallacy – persisting because of prior investment
Clustering illusion – seeing patterns in noise
Endowment effect – overvaluing what we own
Procrastination and inertia – deferring hard calls under uncertainty
The list is long because bias is persistent. But awareness converts it from fatal flaw to manageable friction. The goal isn’t perfect rationality—it’s consistent calibration.
In the end, good group decisions come from shared truth, diverse perception, and tight feedback loops. Everything else is variance.
The Honey Badger and the mirror
Why planned fair distributions don't work for broad use cases - adapted from an internal memo drafted in May 2019.
Bitcoin’s volatility profile is - by now - the stuff of legends. Monumental, euphoric rallies give way to abrupt, violent crashes and proclamations of Bitcoin’s demise (380 and counting). Thus far, the cycle has repeated without fail, earning Bitcoin the “Honey Badger” moniker in the process.
As the current cycle unfolds, behind the BTC/USD pair’s most recent gyrations, new types of participants are entering the market: traditional macro money managers (e.g. PTJ) and nation-states (e.g. Iran) are becoming increasingly open about dipping their toes in the cryptoasset ecosystem.
With every new type of player that jumps on board, the likelihood of Bitcoin becoming a widely accepted store of value, and of the Bitcoin blockchain becoming a globally accepted settlement layer, increases. Why Bitcoin makes for a good settlement layer and store of value has been covered extensively. How we get there, however, remains somewhat elusive. In this post, I will attempt to unpack that.
Fair != Equal
Let's imagine for a moment what an optimal state of the Bitcoin network at maturity looks like: Bitcoin is a widely accepted global store of value and/or settlement layer; global institutions (e.g. central banks) are on board, co-existing with crypto-native actors (e.g. miners); market manipulation is too expensive to attempt, as are direct attacks on the network; and BTC is distributed widely among holders, such that participants extract maximum value by being able to settle with any party they wish, no party has disproportionate “bargaining power” over network outcomes, and everyone remains continuously incentivized to stay a participant.
From the above, a “fair” allocation of BTC among holders seems to be a key underlying requirement for this future to come to bear. Note that “fair” is not the same as “equal”.
Fairness, in this case, implies that every participant’s utility function is maximized, subject to their unique constraints. Under that condition, “equal” is “unfair” and therefore, unsustainable.
If we take the “fairness” condition as requisite, then, while not necessarily an easy pill to swallow, the rollercoaster ride might be the *only* path available to get us there. To illustrate the point, deduction reveals why the competing approaches cannot work:
A centrally planned diffusion mechanism: this construct fails because the planner holds all the bargaining power, such that no other party would willingly opt in. To be executed effectively, it would have to be orchestrated and delivered by a benevolent dictator (a party with perfect information and perfectly benign incentives), and all participants in the network would have to trust that the allocator is indeed benevolent. In practice, impossible.
A diffusion mechanism planned by a “political” coalition: this can’t be orchestrated in a multi-party explicit negotiation format, because there are too many conflicting interests at play to implement top-down consensus. If it is sufficiently hard to achieve within structures that have some cultural cohesion (e.g. the EU and the Eurozone), it should be near impossible at a global scale.
So if we agree that neither of the two is viable, the only option left is a free market mechanism: a continuous game, played by individualistic agents with hidden preferences, over near-infinite (and infinitesimal) rounds, that allows each participant to opt in at the valuation that perfectly satisfies their objective function (what they strive to maximize under given constraints), thereby covering the full utility spectrum of the population of agents.
Hidden preferences become revealed ex post, so competing agents cannot devise a strategy ex ante that creates a surplus for themselves while leaving others at a deficit, resulting in an ultimately fair distribution. And in the process of revealing preferences in a continuous game with infinite rounds, bubbles are created. Competing agents with similar objective functions are forced to respond to the first mover among their counterparts and jump on the bandwagon. Under scarcity, the price rallies until the reservation price of agents who opted in earlier is met. At that point the distribution phase begins, as earlier participants divest and are rewarded for stewarding the network thus far, locking in a margin. As painful as the process might be, it ultimately yields a fairer allocation.
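To see the mechanism, here is a toy agent-based sketch in Python. Everything in it is an illustrative assumption (the distributions, the price-impact rule, the parameter values), not a calibrated model of Bitcoin:

```python
import random

# Agents carry hidden preferences: an entry valuation (the price below
# which they opt in) and a take-profit multiple (the gain at which they
# distribute). Price responds to net demand under scarcity.
random.seed(42)

N = 1_000
agents = [
    {
        "entry_ceiling": random.lognormvariate(2.0, 1.5),  # hidden valuation
        "profit_multiple": random.uniform(2.0, 8.0),       # hidden exit rule
        "entry_price": None,
    }
    for _ in range(N)
]

price, path = 1.0, []
for _ in range(500):
    awake = random.sample(agents, 25)  # only a few agents act per round
    net = 0
    for a in awake:
        if a["entry_price"] is None and price <= a["entry_ceiling"]:
            a["entry_price"] = price   # opt in at one's own valuation
            a["entry_ceiling"] = 0.0   # one entry per agent in this toy
            net += 1
        elif a["entry_price"] and price >= a["entry_price"] * a["profit_multiple"]:
            a["entry_price"] = None    # early steward locks in a margin
            net -= 1
    price *= 1 + 0.05 * net / len(awake)  # excess demand rallies the price
    path.append(price)

# While latecomers keep opting in, net demand stays positive and the price
# rallies; once early entrants' reservation multiples are hit, distribution
# begins and the rally cools. `path` holds the full trajectory of the game.
```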
As the "rounds" of the tacit negotiation, turned free-market bonanza, unfold, the very nature of the platform evolves, opening up to a wider possible utility spectrum. With time, the network becomes more secure as wider margins become available to miners (either through higher prices or through advances in operational efficiency) and more resources are committed to Bitcoin’s Proof of Work. It follows that as the network’s security improves, it opens up to new types of agents striving to maximize value-preservation potential, subject to the liability they have to their constituents (measured as risk). The more types of agents there are on the network, the better a settlement layer it becomes, and so on.
Therefore, there is sense to the idea that progressively larger agents would opt in at higher prices, as they are effectively buying into a fundamentally different, and arguably better, network for value storage and transfer. And with every new type of agent unlocked, the bandwagon effect re-emerges.
Bitcoin might not have been a secure or wide enough network for Square (an agent to its shareholders) to consider as its future payment rails and settlement layer in 2015. It is in 2020. Similarly, while Bitcoin might not be a secure or wide enough network for a sovereign to opt-in in 2020, it might be in 2025.
So, not only should we not be surprised by the new type of participant that is emerging in the early innings of this cycle, but we should expect more of this as the network’s value increases and its security profile improves.
And in the process, we should learn to accept the nature of the game.
2019: A year of quiet infrastructure deployment
This essay draws from a letter I wrote to investors at the end of 2019, when I was managing a crypto fund through the industry’s first true post-bubble cycle. Reading it now, it feels like a time capsule from an era when the noise was still louder than the signal.
2019 began with confidence and ended with contradiction. Equity markets rallied, the yield curve inverted, and talk of recession filled the air. In that environment, crypto rose from the ashes of its 2018 bear market, then promptly reminded everyone how volatile belief can be.
Bitcoin led a 230% rally from April to July before losing half its value in the second half. Yet even after the deflation, the asset class finished the year up roughly 90%—beating the NASDAQ, the S&P, and REITs. Still, sentiment was dark. Investors anchored on the summer’s highs, unable to see through recency bias and loss aversion.
That dissonance—the gap between absolute performance and perceived failure—captures much of what defines crypto’s psychology. When narratives swing faster than fundamentals can evolve, volatility becomes emotional before it becomes financial.
2019 wasn’t a year of explosive returns. It was a year when infrastructure quietly solidified—both top-down, through institutional adoption, and bottom-up, through open-source experimentation.
Top-down—Custody, derivatives, and data pipelines matured. Fidelity, ICE, CME, and Coinbase expanded the financial plumbing that future strategies will rely on. ETFs were still premature (markets too fragmented, prices too contested), but institutional curiosity deepened. Millennials treated Grayscale’s Bitcoin Trust like a tech stock. Regulators, meanwhile, wavered between curiosity and caution. Libra’s short-lived ascent into Washington hearings clarified one thing: no one quite knows who should hold the pen on crypto policy.
China did. Xi Jinping’s October endorsement of blockchain as national strategy triggered a chain reaction. Within months, German banks gained approval to custody crypto, and the ECB began testing digital currency concepts. The U.S., as usual, legislated by committee.
Bottom-up—While policymakers debated, builders built. Ethereum consolidated its position as the laboratory of decentralized finance. In 2019 alone, protocols like Maker, Compound, Uniswap, and Synthetix demonstrated that capital coordination could happen entirely on-chain—without banks, brokers, or trusted intermediaries.
Each protocol was small by traditional standards, but collectively they proved a point: money is programmable, and finance is now open to iteration. Ethereum grew more expressive; developers could compose financial logic as software. New frameworks, tools, and primitives made experimentation cheaper, faster, and, crucially, cumulative.
Meanwhile, domain-specific chains began to appear. Different architectures optimized for different tasks: Ethereum for high-value settlement, EOS and WAX for high-frequency, low-value interaction. Layer 2 solutions promised scalability, but specialization hinted at something deeper: an ecosystem evolving toward purpose-built diversity, not one-chain monopoly.
Every market cycle hides a deeper project beneath it. 2019’s was construction. The industry moved from speculation to capability, from narrative to infrastructure. Institutions built the rails; developers built the applications. The foundation was being laid for systems that could carry real economic weight.
Progress, in crypto as in any frontier, is uneven and recursive. Hype retreats, builders stay, and what remains becomes the new baseline for the next wave.
2019 was that baseline year.
How memory becomes a moat
A thought that’s stayed with me over the past few weeks: tokens that govern state will accrue value, while tokens that govern schema won’t.
The distinction seems subtle but it isn’t. It touches the core of how value concentrates in open systems, and why some protocols accumulate gravity while others remain just infrastructure.
Let’s start by unpacking what “state” actually means.
Stateless and stateful systems
In software, state is memory—the record of everything that has happened from time t₀ to tₙ. A stateful program remembers; a stateless one forgets after each interaction.
Stateless systems - like HTTP or IP - process requests independently. They’re fast, scalable, and composable because no memory carries over. Each interaction is atomic.
Stateful systems do the opposite: they retain context. They build continuity. Each new input depends on the last. Databases, operating systems, and blockchains are all examples. They maintain a living record of past events and use it to determine what comes next.
Statelessness gives performance. Statefulness gives meaning.
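A minimal sketch of the difference in Python (the names and behavior are mine, purely illustrative):

```python
def stateless_greet(name: str) -> str:
    # Stateless: no memory carries over, so the same input
    # always produces the same output. Each call is atomic.
    return f"Hello, {name}"

class StatefulGreeter:
    """Stateful: retains context, so each call depends on what came before."""

    def __init__(self) -> None:
        self.history: list[str] = []  # the state: everything from t0 to tn

    def greet(self, name: str) -> str:
        self.history.append(name)
        return f"Hello, {name} (visit #{len(self.history)})"

g = StatefulGreeter()
g.greet("Ada")  # "Hello, Ada (visit #1)"
g.greet("Ada")  # "Hello, Ada (visit #2)" -> the system remembers
```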
From code to memory
Blockchains are, by definition, stateful machines. Every block is a crystallized memory of all prior actions. As time passes, their value shifts from the elegance of the code to the richness of the accumulated state.
Think of it this way: a protocol begins as code, but over time it becomes a vessel filled with data, users, and relationships. Code bootstraps; state compounds.
A service like Google Search illustrates the same dynamic. Every query not only serves the user but also refines the model, improving results for everyone who follows. The global state becomes more valuable with every interaction.
So while software may begin as an artifact of logic, it matures into an artifact of memory. Over time, value migrates from how it works to what it remembers.
The locus of value
If value migrates from code to state, then the power to govern state becomes the most valuable position in a system. Tokens that govern state decide how memory evolves—who gets liquidated, what parameters shift, what data is canonical. They shape the trajectory of value.
Tokens that govern schema, by contrast, decide on structure: how data is organized, what parameters exist, what rules might one day apply. Schema defines the scaffolding. State defines the living organism within it.
Schema tokens matter when a system is new. State tokens matter once it works.
Over time, the asymmetry compounds. Governance that influences future state captures value. Governance over schema becomes utility.
The market prices the difference instinctively. Control over memory is control over power.
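To make the split concrete, here is a hypothetical sketch of a lending-style protocol in Python; the names are invented for illustration and belong to no real protocol:

```python
from dataclasses import dataclass, field

@dataclass
class Schema:
    # Structure: which parameters exist and what rules could one day apply.
    collateral_assets: tuple[str, ...] = ("ETH",)
    liquidations_enabled: bool = True

@dataclass
class State:
    # Memory: the live values the system has accumulated over time.
    collateral_ratio: float = 1.5  # shifting this parameter moves real value
    balances: dict[str, float] = field(default_factory=dict)
    liquidation_queue: list[str] = field(default_factory=list)

# Schema governance edits the scaffolding above; state governance decides
# how the living record evolves: who gets liquidated, which parameters
# shift, what data counts as canonical.
```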
The paradox of growth
There’s a cost, of course. The more memory a protocol accumulates, the heavier it becomes. Statefulness slows computation, just as institutional memory slows organizations. But that friction is inseparable from persistence — it’s what gives the system continuity, identity, and resilience against amnesia.
Blockchains are not valuable because they compute efficiently; they are valuable because they remember precisely.
Memory as moat
Every software system starts as schema and ends as state. Every network starts as coordination and ends as history. That history becomes the moat. It’s what new entrants can’t copy, and what old participants can’t easily leave behind.
Tokens that govern state are therefore not just governance tokens. They are claims on the protocol’s memory, on the inertia of everything that came before.
In a world where information is cheap and code is open, memory is the last scarce resource.
A stablecoin paradox
The more I study emerging stablecoin propositions, the more convinced I become that this is an all-or-nothing market. Unless you have absolute conviction that one design will win, it makes little sense to back any single stablecoin at all.
This isn’t like betting on competing tech platforms where several can coexist. Stablecoins fight for the same narrow space: trust, liquidity, and exchange integration. Every new entrant dilutes the rest, pushing the market toward a single dominant winner.
Stablecoins are also an exceptionally risky business. Beyond the operational fragility of many designs, the value-capture mechanisms that link a stablecoin to its governance token are often murky. Add a hyper-competitive market with almost no barriers to entry, and you have a setup where each new launch erodes the future value of all others. Since early May, when I first started tracking the space, the number of live or planned stablecoins has more than doubled (see stablecoinindex.com for a running list).
This will almost certainly be a winner-takes-most market.
Take the current leader, Tether. Many projects have tried to unseat it, yet none have come close. What’s remarkable is how little effort Tether has made to polish its image—no full audit, constant skepticism—and still, it dominates. Why? First-mover advantage and attention.
Attention converts to revenue. Claiming attention real estate early can make or break a company. Tether might be one implosion away from collapse, but time compounds its advantage: it gets smarter, more entrenched, better resourced.
Now imagine two plausible dethroning scenarios:
Collapse: A revelation shows only 10% of Tether’s supply is actually backed by dollars.
Institutional flood: Tens of billions in institutional capital enter crypto, flowing into the most transparent, regulated alternative.
In either case, the market would reshuffle briefly before a new stablecoin king emerges. The logic mirrors why the world runs on the USD: attention, liquidity, and interoperability. We don’t deal with USD #1 through USD #99. We deal with the USD.
If I had to bet, I’d back the institutionally backed, fully regulated alternative. As the market matures, institutional participants will favor systems that resemble those they already trust. Their adoption would cascade, signaling legitimacy and pulling the rest of the market along.
The arrival of Gemini Dollar and Paxos Standard marks exactly that kind of shift—a sign that crypto is inching toward maturity.