Random Thoughts (AI edition) - April 27th, 2026
I’ve been spending a lot of time thinking about AI, not in the breathless way, and not in the dismissive way, but in the way you think about something that is genuinely changing the conditions of work, formation, and culture in real time. These are ten things I’ve landed on. Some are practical. Some are uncomfortable. A few are things I wish someone had told me earlier.
1. There is no AI bubble.
People keep waiting for it to pop. It won’t, at least not in the way they’re imagining. A bubble forms when hype dramatically outpaces real demand. That’s not what’s happening here. Demand for AI is accelerating, not inflating. Companies are restructuring around it. Individuals are building workflows on top of it. The investment is enormous because the use cases are real and multiplying. This doesn’t mean every AI company survives or that every valuation makes sense. Some of both will be wrong. But the technology itself isn’t going away, and the people waiting for the collapse so they can safely ignore it are going to find themselves badly behind.
2. Many people use AI like Google and are surprised when it performs like a slightly better search engine.
Type a question, get an answer, move on. That’s retrieval, not thinking. The people getting the most out of this technology aren’t using it to look things up. They’re using it to work through things. They bring half-formed ideas, unresolved problems, bad first drafts. They treat it like a thinking partner, not a search bar. The tool didn’t change. The posture did. And posture, as it turns out, is everything.
3. Nobody asks about the ROI of Excel anymore.
At some point spreadsheets stopped being a technology investment and started being just how work gets done. AI is on that same path, faster. The ROI question isn’t wrong, it’s late. The better question is: what does this make possible that wasn’t possible before? What decisions get sharper, what work gets faster, what capacity gets freed up for the things only humans can do? The companies still running ROI calculations on AI adoption are asking 2019 questions in a 2026 world.
4. The worst thing a company can do right now is replace its customer service team with AI and call it an efficiency win.
Customers don’t want faster frustration. They want to feel like someone actually gives a damn. The companies that figure this out will stop asking how AI can replace human interaction and start asking how AI can free up humans to have better ones. Great customer service has always been the killer app. AI doesn’t change that. It just reveals which companies actually believed it.
5. The risk of treating AI like a therapist or a romantic companion isn’t that the AI will hurt you.
It’s that you’ll stop building the muscles that real relationships require. Real relationships are hard because they’re mutual. The other person has needs, moods, limits, and bad days that have nothing to do with you. That friction isn’t a flaw. It’s the whole point. An AI companion removes the friction. And with it, the formation.
6. AI is not going to destroy civilization. It’s also not going to usher in a golden age. The people selling either story are selling something.
The truth is slower, messier, and harder than either camp wants to admit. Real disruption doesn’t announce itself cleanly. It accumulates. Jobs don’t vanish overnight. They shift, splinter, and demand new skills from people who weren’t trained for them. The workers most at risk aren’t being consulted by the people making the decisions. That gap between who benefits first and who absorbs the cost is where the actual story lives. Getting this right isn’t inevitable. It will require choices that are genuinely difficult and interests that are genuinely in conflict. Neither the optimists nor the doomsayers are preparing us for that.
7. Congress needs to legislate AI responsibly. That would require Congress to be responsible.
This isn’t a partisan observation, it’s an institutional one. The people best positioned to craft meaningful AI regulation would need to understand the technology, resist the lobbying pressure of the companies building it, think across election cycles rather than within them, and prioritize public welfare over political advantage. We are not currently stocked with that kind of leadership. The danger isn’t that bad legislation gets passed. The danger is that no serious legislation gets passed at all, and we spend a decade catching up to consequences that were predictable.
8. Too many people approach AI backwards: starting with the technology and then looking around for somewhere to put it.
That’s not a strategy. That’s a solution in search of a problem. The result is usually a handful of flashy demos, a lot of internal skepticism, and a leadership team that quietly concludes AI isn’t delivering what it promised. It delivered exactly what that approach deserved. The better starting point isn’t the technology at all. It’s your workflows. Where does work slow down? Where do handoffs break? Where are your best people spending time on things that don’t actually require their best thinking? Answer those questions first, with rigor, with specificity, with the people actually doing the work in the room, and then ask where AI fits. That sequence matters. Organizations that start with the problem find genuine use cases. Organizations that start with the tool end up with expensive experiments they can’t explain to the board.
9. The internet is filling up with content that is technically correct, competently assembled, and completely empty.
It’s called slop: AI-generated text, images, and video produced at scale by people who decided that volume was the point. It looks like writing. It reads like writing. But it doesn’t have a person in it. No particular observation that only someone who lived a specific life could have made. And readers, even readers who couldn’t tell you exactly why, can feel the difference. The organizations flooding channels with AI slop aren’t winning an attention game. They’re slowly burning down the trust they spent years building. The answer to AI-generated everything isn’t more content. It’s more you.
10. For decades, the liberal arts degree has been the punchline of career conversations. That’s changing.
The skills that made those degrees easy to dismiss (critical thinking, clear communication, the ability to move between disciplines, comfort with ambiguity, a habit of asking why before asking how) turn out to be exactly what AI can’t replicate and can’t replace. AI is extraordinarily good at producing. It is not good at discerning. It can generate ten options; it cannot tell you which one is true to your values, appropriate for this particular relationship, or wise given what you know about the room. That requires judgment formed over time, across a wide range of human experience, by someone who has learned to think rather than just retrieve. The engineers who can only code will find themselves managing AI that codes. The people who can think clearly about hard problems, write in a way that actually moves someone, and adapt when the frame changes will find themselves indispensable in ways the market is only beginning to price correctly. The liberal arts didn’t fail to prepare people for the real world. The real world just took a while to catch up.