Friends,
The picture will really tie the post together.
On San Francisco
The noise about SF’s decay has been so loud the past few years that the contrarian antennae have started poking out of my scalp. The “city by the bay” has a seductive and timeless allure masked by self-inflicted but, I believe, impermanent harm.
A few recent bits that make me think the tide will turn (and that SF’s descent is not a divergent process but one that will revert…like a fun-but-crazy-rich aunt getting discharged from Betty Ford):
One of my friends and neighbors is a CRE broker specializing in SF. While he confirmed the bloodbath (and very wide bid/ask) on stuff that does print, he mentioned there was a bit of wait-and-see about some behind-the-scenes negotiations at the city level that would affect visibility into “stabilization”.
Despite the utterly cringe optics of SF taking a shower for a visit by an authoritarian dignitary, it does show that where there’s a will, there’s a way. We just need to focus on the wills of, like, the actual people who live here.
This is Paul Graham in an August interview with Tyler Cowen:
COWEN: Are you an optimist about the city of San Francisco? Not the area, the city.
GRAHAM: Yes, I am.
COWEN: Tell us why.
GRAHAM: I can’t tell you because there are all sorts of things happening behind the scenes to fix the problem.
COWEN: In politics, you mean, or in tech start-ups?
GRAHAM: No, no, no, politics. The problems with San Francisco are entirely due to a small number of terrible politicians. It’s all because Ed Lee died. The mayor, Ed Lee, was a reasonable person. Up till the point where Ed Lee died, San Francisco seemed like a utopia. It was like when Gates left Microsoft, and things rapidly reverted to the mean. Although in San Francisco’s case, way below the mean, and so it didn’t take that much to ruin San Francisco. It’s really, if you just replaced about five supervisors, San Francisco would be instantly a fabulously better city.
COWEN: Isn’t it the voters you need to replace? Those people got elected, reelected.
GRAHAM: Well, the reason San Francisco fundamentally is so broken is that the supervisors have so much power, and supervisor elections, you can win by a couple hundred votes. All you need to do is have this hard core of crazy left-wing supporters who will absolutely support you, no matter what, and turn out to vote. Everybody else is like, “Oh, local election doesn’t matter. I’m not going to bother.” [laughs] It’s a uniquely weird situation that wasn’t really visible. It was always there, but it wasn’t visible until Ed Lee died. Now, we’ve reverted to what that situation produces, which is a disaster.
How’s the algebra-in-middle-school thing going?
I like this tweet from code.org co-founder Ali Partovi. We should listen to his 6-year-old daughter’s public comments!
I noticed there’s a new substack devoted strictly to SF politics and culture. Feels like good timing.
On LLMs/AI
I’ve been using GPT-4, Claude 2, and Notion’s AI to help with editing notes and cleaning up/summarizing docs, and as I mentioned, most of the code behind Short Where She Lands, Long Where She Ain’t was written by GPT. A Moontower GPT is coming soon (as well as some other stuff still in stealth).
Yesterday, Notion released its own chatbot that can respond to you based on the entirety of your own Notion second brain. I’ve been a heavy Notion user for 5 years — my second brain dwarfs my wet one. I’ve been waiting for this for a long time. If you want a quick video primer on what Notion is calling Q&A, Thomas Frank has it covered. (His YT channel is incredibly useful in general, btw.)
Here’s some brain food about LLMs in the abstract, from The Diff:
LLMs and Communicating in More Dimensions (The Diff)
Some documents are hard to read solo: an academic paper in a field you don't know especially well, a prospectus for a hard tech company where you're unfamiliar with the underlying technology, or just an email from a busy professional in a different field who knows more acronyms than you and isn't afraid to use them.
There used to be two ways to do this: struggle a lot on your own, or ask someone for help. LLM chats are a new kind of "someone," with infinite knowledge, near-infinite patience, and, yes, a tendency to tell you what they think you want to hear.
One way to think of them is that they're an implementation of the old "enhance" trope, where a blurry image from a security camera can be zoomed in to reveal a license plate number, a name on a business card, a distinguishing facial feature, etc. "Enhance" doesn't work that way—you can't break one gray pixel into four or sixteen black-and-white pixels—but given enough data, you can map the vague images to precise ones. This is more or less a description of facial recognition: take an image of a face, identify the distinguishing metrics that describe it, search a database of existing photos where those metrics have been identified, and reproduce the best match. So it's not enhancing one image, but turning it into a search query for other existing images.[1]
This works in both directions; a newly-standard use case for LLMs is, more or less, "Explain it like I'm not five," i.e. stripping as much extraneous verbiage as possible from a text to get at the fundamental point. Some of this can be done programmatically; there's a ritual on quarterly conference calls where the call starts with a sort of opening benediction, in which the company informs investors that it will contain forward-looking statements that may or may not come to pass, and sometimes enumerates the magical verbs (like "can," "should," "may," "plans to,") that should be understood to immunize the executives from accusations of securities fraud. The information content of this section of the call is zero, and it's easy to strip out of a transcript. But there are other parts of quarterly calls that also carry minimal information, but they're more situational. Summarizing an earnings call, or uploading a bunch of them to a vector database in order to interrogate them, is a way to convert reading and memorization into a search problem.
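(A quick aside from me: here’s a minimal sketch of the “reading as a search problem” idea above. A real setup would use an embedding model and a vector database; this toy version just scores transcript chunks against a question with bag-of-words cosine similarity, and the snippets and question are made up for illustration.)

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def top_chunks(chunks, question, k=1):
    # Score every chunk against the question and return the best match(es).
    q = Counter(question.lower().split())
    scored = sorted(((cosine(Counter(c.lower().split()), q), c) for c in chunks), reverse=True)
    return [c for score, c in scored[:k] if score > 0]

# Made-up snippets standing in for chunks of earnings-call transcripts.
chunks = [
    "This call contains forward-looking statements that may or may not come to pass.",
    "Gross margin expanded on lower input costs and better pricing.",
    "Capital expenditures will step up next year as the new plant comes online.",
]

print(top_chunks(chunks, "what happened to gross margin this quarter"))
```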
AI is also improving our resolution of history. This was something The Diff discussed a bit a few weeks ago, but it seems to be accelerating. For example, The Beatles broke up in 1970, but released their latest album last Thursday. We're still early; you can read all of Goethe's works in German or Chinese, but not English. But over time, the idea of an untranslated work will be an anachronism; if it's been digitized in any language, it will be available in every language.[2]
What this ultimately means is that more of your asynchronous information consumption will happen at your preferred resolution. In-person, things will still be tricky; you can't live-compress a five-minute monologue into a fifteen-second summary (or a two hour long lecture into a 5 minute podcast episode) without waiting for it to end.[3] I've personally seen this with The Diff—there are readers who run long pieces through ChatGPT to summarize them (hence our decision to start offering summaries ourselves), and there are other readers who read articles and ask ChatGPT to expand on excessively terse bits.
You can imagine other instances of this. For example, if an essay seems like it's making an interesting point, but the author has wildly different premises than you—they're a Marxist, an Evangelical Christian, whatever—translating the main essay into something disconnected from these points can be a useful exercise. We already do this implicitly when we read anything sufficiently old; the worldviews of people even eighty years ago are sometimes alien. Of course, this is a tricky exercise, especially when the LLM has been optimized to be inoffensive in the context of the modern US (or China ($, WSJ) or Abu Dhabi).
This creates a more profound change, too: it’s the slow death of incomprehensibility. Sometimes, a conversation will involve jargon, or the application of some domain-specific concept in a new area. And sometimes, this completely flummoxes people who are less familiar with the jargon. So understanding it is binary; if someone talking about politics says "this is just like 1994" or someone talking about macroeconomics says "it's 1997 all over again," the listener would get it or they wouldn’t. But LLM chatbots mean that instead of a boolean, the datatype in question is time: jargon speeds up communication for people who share it, but now, for people who don’t, it just means asking ChatGPT what this could possibly mean.
The net result of this is that more obscure interests are accessible to more people. It's safer to pick up an obscure book on an esoteric topic. It's less risky to read an intimidating paper. And it's less of a waste of time to email the paper's author with follow-up questions—for both of you. LLMs expand the surface area of all human knowledge, and, conveniently, represent a map of that same surface. We've barely started exploring.
[1] Facial recognition is a naturally controversial topic, but it's unclear how much of this controversy is from a combination of users not thinking statistically and journalists doing the same thing. Models can be set with different accuracy rates, and if you are a police department in a city of 100,000 people and your model identifies a murder suspect with 99% accuracy, you have simultaneously reduced your potential suspect count by two orders of magnitude and ensured that if you actually arrest the person who matches, your odds of getting the wrong person are 99.9%. In general, the UI for a statistical process that isn't being used by professional statisticians treats these probabilities as binary; Gmail does not have a p(spam) tag next to every email, just an inbox for p(spam) below a certain threshold and a spam folder for p(spam) above it. The way you know they've gotten the interface right for a given accuracy level is that you have false positives and false negatives. Of course, it's better to have a higher accuracy level, but for whatever reason this is harder than it seems like it should be. ↩︎
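(Another aside: the base-rate arithmetic in footnote [1] checks out. A minimal sketch using the footnote’s own hypothetical numbers, i.e. a city of 100,000 and “99% accuracy” read as a 1% false-positive rate:)

```python
# Base-rate check for footnote [1]; the figures are the footnote's hypothetical,
# not real-world data.
population = 100_000
false_positive_rate = 0.01   # "99% accuracy" read as 1% false matches
true_suspects = 1            # one actual perpetrator in the pool

false_matches = population * false_positive_rate   # ~1,000 innocent matches
total_matches = false_matches + true_suspects      # ~1,001 people flagged

print(f"Suspect pool shrinks from {population:,} to ~{int(total_matches):,}")
print(f"P(a given match is the wrong person) = {false_matches / total_matches:.1%}")
```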
[2] This will have significant social effects, but with a long lag, because the changes will be downstream from the behavior of people who like to nerd out. And that nerding out takes time. A more prosaic near-term effect will be a smaller valuation gap between companies that don't publish financial statements and investor communications in English and the ones that do. And one can imagine other kinds of AI-based machine translation, like creating truly comparable valuations for companies that use GAAP and IFRS, or looking at how insurers would be valued under different accounting regimes. ↩︎
[3] One of the downstream effects of this may be that heavy AI users will be more rude in person. You can see a bit of this when interacting with other people whose job performance hinges on the transmission of as many timely bytes as possible in as few syllables as necessary. ↩︎
Math Fun
I found this courtesy of a Cal professor’s substack:
Street-Fighting Mathematics: The Art of Educated Guessing and Opportunistic Problem Solving (free textbook download)
This book is the main text for Sanjoy Mahajan’s MIT seminar, Street-Fighting Mathematics.
Description
This course teaches the art of guessing results and solving problems without doing a proof or an exact calculation. Techniques include extreme-cases reasoning, dimensional analysis, successive approximation, discretization, generalization, and pictorial analysis. Applications include mental calculation, solid geometry, musical intervals, logarithms, integration, infinite series, solitaire, and differential equations. (No epsilons or deltas are harmed by taking this course.)
This course is designed to teach you a flexible attitude toward problem-solving. I’ve divided the attitude into six skills or tools. There are others, and more detail on each, but life is short and these six make a decent toolkit.
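To give a flavor of the toolkit, here is my own quick illustration (not an example pulled from the book) of “successive approximation” in code: estimating a square root by refining a rough mental-math guess.

```python
# "Successive approximation," one of the tools named above: refine a rough
# guess for sqrt(n) by averaging it with n/guess (each round roughly doubles
# the number of correct digits). My own illustration, not taken from the book.

def street_sqrt(n: float, guess: float, rounds: int = 3) -> float:
    for _ in range(rounds):
        guess = (guess + n / guess) / 2  # average the over- and under-estimate
    return guess

# sqrt(10): anchor on the easy extreme case sqrt(9) = 3, then refine.
print(street_sqrt(10, guess=3.0))  # ~3.16227766
print(10 ** 0.5)                   # compare: 3.1622776601683795
```

One round of averaging already gets you to slide-rule precision, which is the street-fighting point: good enough, fast.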
Stay groovy ☮️
Substack Meetings
I was invited to be a part of the Substack Meetings beta. You can book a time to chat. I’m more expensive than a 900 number from 1988 and have a less sexy voice.
Book a meeting with Kris Abdelmessih
Moontower On The Web
📡All Moontower Meta Blog Posts
Specific Moontower Projects
🧀MoontowerMoney
👽MoontowerQuant
🌟Affirmations and North Stars
🧠Moontower Brain-Plug In
Curations
✒️Moontower’s Favorite Posts By Others
🔖Guides To Reading I Enjoyed
🛋️Investment Blogs I Read
📚Book Ideas for Kids
Fun
🎙️Moontower Music
🍸Moontower Cocktails
Becoming a patron
The Moontower letter is and will always be free. My writing is a search “for the others”. The “others” are people like you who are unlearning the mental frames that artificially narrow our choices.
If you are here you already understand that inspiration is a tradable good. It’s not as tangible as a cup of coffee, but it packs 10x the adrenaline with an infinitely longer half-life than caffeine.
If you feel inspired, you can upgrade to becoming a patron.