Friends,
This week I’ll share 2 posts that grabbed my attention.
🔗For The Person Who Has Everything (Tom Morgan)
This is a solid article, reminiscent of much of Tom’s writing, which I read regularly. But I want to point to a specific bit:
Many of science’s most transformational insights are closely associated with visionary states. Felix Hoffmann's synthesis of aspirin was influenced by a dream of white willow bark. James Watson's dream of a spiral staircase played a role in the discovery of the double helix structure of DNA. Dmitri Mendeleev's arrangement of elements in the periodic table came to him in a dream. Most people think their best ideas come from their own brains, but true visionaries know they come from somewhere else.
This quote recalls observations from folks who know a thing or two about creativity.
In The Third Eye, I note how much emphasis Rory Sutherland and Rick Rubin put on the idea that “artistic breakthroughs have no sense of proportion.”
In The Virus With No Vaccine, I explain how the late Cormac McCarthy in his article The Kekulé Problem makes the controversial claim that language is more of a bug than something we were destined for. The clue to this was the origin of all great breakthroughs in mathematics — they weren’t at the end of some chain of logic but arrived as visions. The metaphor of constructed logic getting in the way of our felt insights runs through all of these pieces.
[In the Virus link, I relate that idea to how psychedelics are used in therapy, at least according to a Swedish psychologist I met last summer.]
[I have more to say on the topic of creativity inception in a longer piece I’m noodling on. As always, don’t hold your breath on timing.]
🔗A Map Is Not A Blueprint: Why Fixing Nature Fails (Nat Eliason)
Ozempic, Fertilizer, Lobotomies, and the dangers of hubris
You’ve heard the expression “the map is not the territory” as a warning about the dangers of extrapolating from compressed representations of reality (i.e., models). This article is an admonition in the same vein, but it’s also fresh and worth a read because it uses the fractal quality of nature’s shapes to distinguish between building something from a blueprint and attempting to mimic a portion of a complex system without respecting the whole. A coastline is not a simple geometric shape like a square or circle. It cannot be precisely measured. It is a fractal, an infinitely complex shape; you would have to drill down to the atomic level to measure it precisely, and maybe not even then.
I’ll get you started with Nat’s metaphor, but go to the main piece, where the argument is concisely and effectively distilled into a model for quickly classifying new innovations as either “fertilizer” or “airplanes”.
Drawing a map then becomes a question of how precisely you want to represent it, and how much space you have on the map to do that representation. If the island needs to fit into a 1-inch diagram, you will have to sacrifice considerably more precision than you would if it were fitting into a 10-inch or 10-foot drawing.
A map of a coastline can approach accuracy; it can get infinitely close to accurately representing the coastline, but it can never fully represent it. There is an infinite amount of subtle detail that the map will have to leave out.
This doesn’t matter for normal navigational purposes. It will still help you find the beach, even if it’s only 90% precise. But imagine you had to rebuild the coastline from the map. Now the precision matters quite a lot! The more precise you measure the coastline, the more accurate your reconstruction will be. But here’s the important part: you can never successfully map the coastline well enough to accurately rebuild it.
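Nat’s fractal point can be made concrete with a toy example. This is a minimal sketch of my own (not from the article) that “measures” a Koch curve, a standard mathematical stand-in for a fractal coastline, with ever-smaller rulers. The measured length grows without bound as the ruler shrinks, which is exactly why no map can fully capture the coastline:

```python
# Coastline paradox sketch (illustrative, not from Nat's article).
# Measure a Koch-curve "coastline" with ever-smaller rulers: at refinement
# depth n the ruler length is (1/3)**n and the measured length is (4/3)**n,
# so the total length diverges -- there is no "true" length to converge to.

def koch_measured_length(depth: int) -> tuple[float, float]:
    """Return (ruler_length, measured_length) for a unit-base Koch curve."""
    ruler = (1 / 3) ** depth       # each refinement cuts segments into thirds...
    segments = 4 ** depth          # ...and replaces each segment with 4 smaller ones
    return ruler, segments * ruler # measured length = (4/3)**depth

if __name__ == "__main__":
    for depth in range(6):
        ruler, length = koch_measured_length(depth)
        print(f"ruler={ruler:.5f}  measured length={length:.3f}")
```

Each halving of the ruler doesn’t refine the answer toward a limit the way it would for a circle; it keeps revealing new wiggles, which is the whole point of the map/blueprint distinction.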
Money Angle
We started talking about the Kelly criterion a couple weeks ago. As you play with the ideas yourself, I’ll point out 2 subtleties: one here and another below in the Masochism section.
Edge/Odds
I posted a couple ways to express the Kelly formula. Because it’s easy to remember, I prefer the simple expression edge/odds.
If you use this version too, let me offer some user notes.
It only works when there’s a possibility of loss
This is a technicality but consider the following bet:
A stock is $100 and you believe it is 90% to worth $100 and 10% to be worth $300.
The expected arithmetic return is therefore 20% (.90 x $100 + .10 x $300 = $120, minus your $100 investment = $20)
The odds or percent return when you win is 200%
f* = Edge/odds = 20/200 = 10%
That answer can’t be right, though: this proposition has no possible loss. The fuller version of the Kelly formula, the one that divides by the size of the loss, flags the situation with a divide-by-zero error. Which is nature’s way of saying, “Bruh, you can’t lose on this proposition. You should bet 100%. Why you asking a calculator?”
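To make the user note concrete, here’s a minimal sketch of my own (not from the original post) that runs the stock example through both the edge/odds shortcut and the general two-outcome Kelly solution:

```python
# Kelly sizing on the stock example: a $100 stock that's 90% to stay
# worth $100 and 10% to be worth $300. (Illustrative sketch.)
p, q = 0.10, 0.90   # probability the stock triples / stays flat
b, a = 2.00, 0.00   # +200% return when it triples, 0% loss otherwise

edge = p * b - q * a        # 0.20, the 20% expected arithmetic return
odds = b                    # 2.00, the +200% payoff when you win
f_shortcut = edge / odds    # 0.10 -> "bet 10% of bankroll"

# The general two-outcome Kelly solution, f* = p/a - q/b, divides by the
# size of the loss a. With no possible loss (a = 0) it throws the
# divide-by-zero that tells you to just bet everything.
try:
    f_full = p / a - q / b
except ZeroDivisionError:
    f_full = 1.0            # no downside: cap at betting the whole bankroll

print(f_shortcut, f_full)
```

The shortcut quietly hands back 10%, a sensible-looking but wrong number; the fuller formula at least crashes loudly when the “possibility of loss” precondition is violated.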
The second user note for using edge/odds is a counterintuitive idea:
For a given level of edge, the optimal Kelly fraction to bet decreases as you get better odds (i.e., as the denominator increases).
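A quick sketch of that idea, holding a hypothetical 20% edge fixed while the odds improve:

```python
# Fixed 20% edge, improving odds: the edge/odds prescription shrinks.
# (Illustrative numbers, not from a specific bet in the post.)
edge = 0.20
fractions = {odds: edge / odds for odds in (0.5, 1.0, 2.0, 5.0)}
for odds, f in fractions.items():
    print(f"odds {odds:.1f}x -> bet {f:.0%} of bankroll")
```

Same edge, but the prescribed fraction falls from 40% at 0.5x odds to 4% at 5x odds.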
Kelly has a preference for high win rates, an attribute that always arrives with negative skew.
We’ll address this in the next section.
Money Angle For Masochists
Bias towards negatively skewed bets
Consider 2 bets:
A 10% chance of getting paid 10-to-1, a 90% chance of losing your bet
The expectancy is straightforward. If you start with $10 and play 10x betting $1 on each trial you will lose $9, and your last dollar will get you paid $10 leaving you with $11 total. A 10% total return or 10% arithmetic expectancy.
Using the spreadsheet: the prescribed Kelly fraction is to bet 1% of your capital on this proposition.
This is a positively skewed bet. You lose most of the time, but win a large amount occasionally.
Let’s look at a negatively skewed bet with the same 10% expectancy:
A 90% chance of getting paid 22.22%, a 10% chance of losing your bet
Again, we start with $10 and bet $1 each time. You will earn $.22 nine times, or $2, and lose a dollar on the 10th trial. Once again your net profit is $1, a 10% expected return.
But look what the calculator spits out: the expectancy is the same, but now Kelly wants you to bet nearly half your bankroll.
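The spreadsheet’s two answers can be reproduced with the standard two-outcome Kelly formula f* = (bp - q)/b, where you win fraction b of your stake with probability p and lose the whole stake with probability q. A minimal sketch using the two bets above:

```python
# Two bets with identical 10% expectancy but opposite skew, sized with
# the standard two-outcome Kelly formula f* = (b*p - q) / b.

def kelly_fraction(p: float, b: float) -> float:
    """Kelly fraction for winning b-to-1 with prob p, losing the stake with prob 1-p."""
    q = 1 - p
    return (b * p - q) / b

pos_skew = kelly_fraction(p=0.10, b=10.0)    # 10% chance of getting paid 10-to-1
neg_skew = kelly_fraction(p=0.90, b=0.2222)  # 90% chance of getting paid 22.22%

print(f"positively skewed bet: {pos_skew:.1%}")  # ~1% of bankroll
print(f"negatively skewed bet: {neg_skew:.1%}")  # ~45% of bankroll
```

Same expectancy, wildly different sizing: Kelly rewards the high win rate that comes packaged with negative skew.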
My intuition is that Kelly’s conclusions load on volatility as opposed to the higher-order moments of a distribution. I’ve discussed this many times; to find the links, I asked MoontowerGPT:
The first link of the responses is the most relevant (it’s embedded in the second link as well):
Kelly’s bias towards negatively skewed bets is already understood:
And here you have Euan’s adjustment:
🔗The Kelly Criterion and Option Trading
[Euan needs no boost from me but I’ll add that his book Positional Option Trading was terrific. My notes here]
In real-life, almost nobody is aggressive enough to bet full Kelly (at least amongst those who would consider using Kelly in the first place). Half or quarter Kelly is more common and Euan’s adjustment will lower the prescribed full Kelly amount even further in the presence of strong negative skew.
This bit from Fortune’s Formula is instructive:
A Kelly bettor’s wealth is more volatile than the Dow or S&P 500 have historically been. In an infinite series of serial Kelly bets, the chance of your bankroll ever dipping down to half its original size is 50%.
A similar rule holds for any fraction 1/n. The chance of ever dipping to 1/3 of your original bankroll is 1/3. The chance of being reduced to 1% of your bankroll is 1%.
Any way you slice it the Kelly bettor spends a lot of time being less wealthy than he was.
A Kelly bettor has a 1/3 chance of halving the bankroll before doubling it.
The half Kelly bettor has only a 1/9 chance of halving before doubling.
The half Kelly bettor halves risk but cuts the expected return by 1/4.
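The numbers in the excerpt fall out of the standard continuous-time (Brownian) approximation of Kelly betting. A sketch under that approximation (my own illustration, not from Fortune’s Formula):

```python
# Brownian approximation of fractional Kelly betting. Betting a fraction c
# of full Kelly, log-wealth has drift proportional to (c - c**2/2) and
# variance proportional to c**2, so the classic drift/variance exponent is
# theta = 2/c - 1. Standard barrier-hitting results for Brownian motion
# then reproduce the probabilities quoted in Fortune's Formula.

def p_halve_before_double(c: float) -> float:
    """Probability of halving the bankroll before doubling it at Kelly fraction c."""
    theta = 2.0 / c - 1.0
    return (1.0 - 2.0 ** -theta) / (2.0 ** theta - 2.0 ** -theta)

def p_ever_dip_to(x: float, c: float = 1.0) -> float:
    """Probability of ever dipping to a fraction x of the starting bankroll."""
    theta = 2.0 / c - 1.0
    return x ** theta

print(p_halve_before_double(1.0))  # 1/3 for the full Kelly bettor
print(p_halve_before_double(0.5))  # 1/9 for the half Kelly bettor
print(p_ever_dip_to(0.5))          # 50% chance of ever halving (full Kelly)
print(p_ever_dip_to(0.01))         # 1% chance of ever dipping to 1% of bankroll
```

Note that for full Kelly (c = 1) the exponent theta is exactly 1, which is why the “chance of ever dipping to 1/n of your bankroll is 1/n” rule comes out so clean.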
If you have gotten this far, you’ll probably enjoy these poll questions which strike at a lack of strict risk ordering and transitivity in comparing propositions.
I’m done writing about Kelly for now. My current take: when faced with a bet whose properties lend themselves to the formula, I’d like to see what it prescribes as a ballpark upper bound on how much to bet. The ultimate sizing choice would incorporate my instincts about the shape of the payoff and my personal comfort.
I’ve shared my summary of the Haghani bet sizing study, and the overwhelming conclusion is that people’s instincts on bet sizing, including those of economists and grad students, are quite poor. Just acquiring the knowledge that Kelly exists would help a reader recruit their “System 2 thinking” even if the details are foggy. This was a widely read post:
🔗Bet Sizing Is Not Intuitive
This week the beta for moontower.ai opens to a portion of the waitlist.
Tomorrow, we will also start dripping the second and final unit of the Primer, which is an implementation manual that accompanies the conceptual framework.
Here’s a short bridge between the 2 units:
A quote to start your week
☮️
Stay Groovy
Need help analyzing a business, investment or career decision?
Book a call with me.
It's $500 for 60 minutes. Let's work through your problem together. If you're not satisfied, you get a refund.
Let me know what you want to discuss and I’ll give you a straight answer on whether I can be helpful before we chat.
I started doing these in early 2022 by accident via inbound inquiries from readers. So I hung out a shingle through the Substack Meetings beta. You can see how I’ve helped others:
Moontower On The Web
📡All Moontower Meta Blog Posts
👤About Me
Specific Moontower Projects
🧀MoontowerMoney
👽MoontowerQuant
🌟Affirmations and North Stars
🧠Moontower Brain-Plug In
Curations
✒️Moontower’s Favorite Posts By Others
🔖Guides To Reading I Enjoyed
🛋️Investment Blogs I Read
📚Book Ideas for Kids
Fun
🎙️Moontower Music
🍸Moontower Cocktails
🎲Moontower Boardgaming