AI Existential Crisis
How should I live in the brave new world?

Hey! I wrote something new for the first time in years. It’s more personal than my usual stuff, and nonfiction, but maybe you’ll find it interesting nonetheless.
When ChatGPT launched, it kicked off an existential crisis for me that lasted the better part of 2 years. I had already been ASI-pilled[1] for about 5 years by then, but my timeline (like most people’s) was many decades long. I believed in it rationally but not intuitively, and I also wasn’t tapped into the field. So when ChatGPT dropped, it caused a sudden and severe update to my model of the future. It felt like superintelligence was now imminent, which obviously changed everything.
The shift in my thinking wasn’t nearly as sudden as the shift in my future model. It took a long time for things to settle into something stable again (they’re still not fully settled, but now they’re unsettled in a more normal way). The biggest reason it took so long, I think, is that it involved fundamental changes to my identity. My sense of self was constructed around my future model, and that model had dissolved into something extremely murky and unpredictable.
When I finally came out the other end, my brain was significantly rearranged. I roughly sort the changes into 3 buckets.
1. I stopped planning beyond a few years
I used to think I knew, in some loose way, what the world would look like during my lifetime. Based on that, I tended to have a rough plan for the next 10 years, and rougher aspirations for further on.
Now I have no idea what the world will look like in even 10 years. I think I have a decent idea of what it will look like in 1 year, but each year after that becomes more unpredictable at an insane rate. So now I find myself optimizing for the next 2 years instead of the next 10, and accepting that everything beyond that is heavy static.
This naturally made me care a lot more about actually enjoying the next 2 years. The biggest change that provoked was in my work. In the 6 years prior to ChatGPT, my main goal was to start a huge company. The thing is (I realized after a lot of reflection), I didn’t really want the life that running a huge company requires. I didn’t want to be accountable to VCs, or spend my days in meetings, or work 80 hours a week. What I really wanted was to have a large positive impact on humanity, and to get rich (i.e. free). I thought I could/should do all the things I didn’t want to do in order to achieve those goals, but it was a massive exercise in delayed gratification.
Now I’m still trying to have a large positive impact and get rich, but I also care a lot about enjoying my day-to-day work. Adding that objective to the optimization changed what seemed best to work on. Instead of trying to start a VC-style startup, I decided to bootstrap an app. I spend all my time designing and coding. I allow myself to obsess over tiny details—even though they slow me down a lot—simply because I find it satisfying.
2. I reoriented around subjective experience
Like many, I’m worried about ASI going horribly wrong. But after ChatGPT launched, I found myself more concerned by the prospect of it going perfectly right, and rendering me useless. I thought about a “solved” world, where disease, war, and poverty are extinct. In that world, what would my purpose be?
I felt like I had to figure out from scratch what was worth doing. This forced me to think through my most basic assumptions about what is good and why. Eventually I burrowed down to “good subjective experience” as the atomic good. In my view, something is good if it increases the amount of good experience in the world. For example, saving a life is good because it allows the saved life to continue having good experience and creating good experience for others. This might seem obvious or pedantic, but it’s very clarifying for me.
“Increasing good experience” is a purpose that survives the singularity. The best ways to help will be different, but the goal will remain unchanged. Instead of trying to create good experience by changing the material world (which the AIs will have covered), I will try to create it more directly. As humans, we’ll be able to provide each other at least one thing the machines can’t: relation to others like you, others who share your makeup and therefore your experience. I think most of our core desires—like belonging, self-expression, physical intimacy—will prove to be much better fulfilled with fellow humans.
This focus on subjective experience has also changed my approach pre-ASI. I think a lot about things that improve subjective experience without changing physical reality. Things like mindfulness and gratitude and focus. I now think of these as being just as important as interventions in the material world.
3. I skip to the end
Considering a solved world brought up another big question. Beyond purpose, if I didn’t have to do anything, what would I do for fun?
I’d make music, I’d write stories, I’d build things, I’d experience the things other people created, I’d experience the things nature created, I’d go for walks, I’d learn about science and philosophy and history, I’d spend time with my loved ones.
When I list these out, it becomes obvious that I can do them all right now. I need not wait for the world to be solved. Unfortunately, I’m such a nerd that it took the prospect of an AI utopia for me to fully realize that. Now that I have, I spend more time on them and enjoy them more, with less productivity guilt. I think, “This is the point, this is what all the work is for,” and while I’m not able to spend all of my time there, I can spend a lot of it.
My thinking on all this is far from settled. I don’t claim to have solved philosophy, or be fully at ease in the thrust of acceleration, or even to be acting totally rationally according to my own worldview. But strangely, despite the dramatic increase in uncertainty, I feel more at peace on the other side of my crisis than I did before it. There’s something comforting about relaxing my grip on the future, accepting that it’s a dense fog in which I can only see a few feet ahead of each step. Fuck it, we’ll do it live!
[1] If you’re not tapped into AI discourse, “ASI” stands for “artificial superintelligence”. My preferred definition of ASI is an AI that’s smarter than the smartest humans in every domain. Because it would be digital, it could use its intelligence to continually upgrade itself, there could be billions (trillions?) of copies of it, and the copies could think/collaborate orders of magnitude faster than humans can.
For the record, if it were up to me we would not create ASI. I’m in the camp that thinks the chances are far too high that we would fail to keep control of it (it being much smarter than us), become totally disempowered, and inherit a future much worse than the one we would have created for ourselves.
