"Planet Earth: You Are a Crew"

24 hours after seeing Earth as a tiny speck in an endless black abyss, the Artemis II crew gave their first interview. And I was riveted. Their message was not subtle: “It’s a special thing to be a human, and it’s a special thing to be on planet Earth.”

Each member reiterated how rare and special our planet is, and while constantly hugging each other, they sought to remind us that from space, there are no boundaries, countries, or races.

You could tell their lives had been irrevocably changed, and that they were struggling to find the words to convey to us the importance of what they’d just witnessed.

The Overview Effect

Astronauts have been saying the same thing since the space program began: When you see our pale blue dot from a distance, you fully comprehend how rare it is. The magic isn’t out there somewhere. And it’s not on Mars. It’s right here, on this planet that we barely understand. At this moment, 99%+ of our deep oceans are completely unexplored. We have no clue what’s going on even just a few miles below our feet. Our ignorance of our own planet is boundless. And yet we seek to conquer others…

The Overview Effect is the name for the spiritual sense of one-ness that astronauts feel when they see Earth from a distance. And astronauts have been practically screaming at us to get the message: TAKE CARE OF THIS PLANET! STOP FIGHTING LIKE YOU AREN’T ALL THE SAME. WE ARE ALL IN THIS TOGETHER!

And like any profound message on YouTube, the comments tell the story. The trip was fake. It’s the Democrats’ fault. If only Israel had done… Yuck.

In the face of one of humanity’s most powerful messages, the hate machine of social media can instantly trivialize it, making way for divisive, tribal bickering.

That’s why it’s our job—it’s our job to always remain focused on the big picture, and on the truth. Artemis II crew, I heard you. And I love you for it.

“For now, while we breathe and are among our fellow humans, let's cherish the qualities that make us human.”

— Seneca, Anger, Mercy, Revenge

The message has been the same for thousands of years: We need to live here and now on this planet. This is our lifeboat. This is our ship. We're the only crew we'll ever have.

Leadership advice has been the same for 2,500 years

I just finished former Navy SEAL Jocko Willink's book Extreme Ownership.

It’s a great book about taking radical responsibility for our actions. Like many leadership principles, it only works if the leader follows the program too. A leader who expects extreme ownership from their subordinates without truly following the wisdom themselves is a tyrant.

When reading, I couldn't stop thinking about how Extreme Ownership is essentially the same book as How to Win Friends and Influence People but with wildly different packaging.

It’s also the same book as Meditations by Marcus Aurelius. And the Tao Te Ching. And Ben Franklin’s Poor Richard's Almanack. And Seneca's letters to Lucilius two thousand years ago.

The core message is always the same: Take radical responsibility for your own actions. Listen more than you speak. Meet people where they are, not where you wish they were.

Jocko says "there are no bad teams, only bad leaders." Carnegie said "any fool can criticize, condemn, and complain — and most fools do." Seneca wrote "If you live according to nature, you will never be poor; if you live according to opinion, you will never be rich." Lao Tzu taught that "the wise leader does not push; he lets things happen." Marcus Aurelius reminded himself daily: "you have power over your mind, not outside events."

It’s always the same story, isn’t it?

Jocko's version is for people who respond to discipline and direct orders from someone whose voice is hoarse from decades of screaming. Carnegie's is for people who respond to warmth and social grace. The Stoics wrote for people who respond to philosophical reflection. Lao Tzu wrote for people who resonate with paradox and gentle stillness.

The wisdom hasn't changed in 2,500 years. The packaging changes because the audience changes.

And this is exactly what I see happening in AI right now.

On my podcast, dozens of tech founders have echoed the same core idea about implementing AI: automate what's repeatable so humans can focus on what requires judgment.

I, for example, have learned that my time is best spent judging which of several objects is, in fact, cake.

But one version of this timeless story is wrapped in bro hype and rocket emojis. Another is dripping with fear and "your job is disappearing" mortal terror.

Same medicine. Different bottles.

The question isn't whether you have the right AI strategy. It's whether you can make the people who need to execute it actually understand what you're asking them to do and why they should do it. And you need to be intellectually honest enough to know and admit why you’re really doing what you’re doing.

The chief problem of our time isn’t one of technology: it’s one of translation, accountability, and communication.

And it's been the same problem for 2,500 years.

No TV month

Every year, for one full month, we turn off every screen in our house.

No Netflix. No YouTube. No Disney+. No more bingeing the Kardashians.

My dad started “No TV Month” when I was a kid. I thought he was cruel. Now I do it with my own daughter, and I completely get it.

Here’s what happens: In the first days, I can watch her go through the symptoms of withdrawal. Her feeling of boredom fills the house like a stench that can’t be escaped. Slowly but surely, she begins to fill her days with other activities. She even read a 215-page book in a single day; she couldn’t put it down.

Witnessing screen withdrawal is scary. But what fills the void is better.

Matt Stone — co-creator of South Park, one of the most successful TV shows in history (28 seasons and counting) — said: “I don’t watch any television. I got kids, I got work. I’m not a TV person. I never have been.”

The billionaire TV mogul who makes TV doesn’t watch TV. Let that sink in.

Steve Jobs didn’t let his kids use the iPad he invented. “We limit how much technology our kids use at home,” he told a stunned New York Times reporter. His biographer Walter Isaacson described dinners at the Jobs house: discussing books and history around the kitchen table. No one ever pulled out an iPad. The kids didn’t seem addicted to devices at all.

The guy who built the most addictive screen on earth kept his own kids away from it.

There’s a pattern here that most people miss:

The people who create the things we consume understand something fundamental: consumption is the default. Creation is the choice.

And the ratio matters, especially now that the news cycle seems desperate to hook every second of our finite attention and keep us in a perpetual state of terror.

I code, I write, I teach. And I shout random jabberings into a void on LinkedIn like a maniac in Central Park. And I can tell you from experience: the weeks I consume the most content are the weeks I create the least.

No TV Month isn’t about being anti-technology. It’s about remembering that screens are tools for making things, not just watching things.

When my daughter picks up a paint brush instead of a remote, she’s not “missing out.” She’s doing what the creators of the stuff she’d be watching are actually doing with their time.

Pick your month. Turn it off. See what happens.

You might be surprised what you build when you stop consuming.

We need to stop "resulting"

Buy this book.

The biggest problem in company AI roll-outs right now isn't hallucinations.

It's resulting.

Annie Duke (champion poker player turned author) has a name for the mistake most leaders are making with AI.

Resulting: judging a decision by its outcome instead of the process that made it.

So far I've watched resulting in AI play out in two ways:

One: An AI chatbot disappoints a customer. See, I knew we were wrong to embrace AI!

Two: A promising demo app becomes a new religion. OMG stop the presses: I’m replacing every employee with AI right now!

Neither reflects the right way to think about the situation.

Duke's point: Life isn’t chess, it’s poker. In chess, there is a right answer. In poker and business, hidden information and luck mean a brilliant decision can blow up, or a terrible one can pay off.

In Never Split the Difference, former FBI hostage negotiator Chris Voss calls hidden information Black Swans: pieces of information that, once uncovered, completely reframe everything. Every AI deployment is full of them: edge cases the demo never hit, user behaviors no one modeled, and exciting possibilities that don’t reveal themselves for months.

Your AI strategy is one Black Swan away from being either a crisis or a breakthrough. Certainty, in today’s environment, isn't strength. It's legerdemain: sleight of hand that fools you as much as your audience.

Duke's suggestion is simple: separate the quality of the decision from the quality of the outcome. Before you do (or don't do) anything, write the bet. What do you believe will happen? How confident are you, as an actual percentage? What would change your mind? Then run the pre-mortem: assume it failed. Why?
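Since I live in code, here's how I think about that journal entry. A minimal sketch — the field names and the three-way verdict are my own framing, not Duke's:

```python
from dataclasses import dataclass

@dataclass
class Bet:
    """One decision-journal entry, written BEFORE acting."""
    decision: str           # what we're doing (or deciding not to do)
    belief: str             # what we expect to happen
    confidence: float       # 0.0-1.0: an actual number, not "pretty sure"
    would_change_mind: str  # evidence that would falsify the belief
    premortem: str          # assume it failed: why?

def resulting_check(bet: Bet, succeeded: bool) -> str:
    """Judge the decision by its process, not its outcome."""
    if succeeded and bet.confidence < 0.5:
        return "Got lucky: revisit the process before repeating it."
    if not succeeded and bet.confidence >= 0.5:
        return "Bad outcome, maybe a good bet: reread the pre-mortem."
    return "Outcome matched expectation: update your calibration."
```

The point of writing the number down first is that it can't be quietly revised after the outcome arrives — which is exactly the move resulting depends on.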

Charlie Munger would call this inversion. Voss would call it hunting for Black Swans. Duke would call it calibration.

Most companies seem to be skipping this process entirely. They're asking "did it work?" when they should be asking "were we right to believe it would?"

Nobody knows which AI bets will pay off. Anyone who says otherwise is selling something.

Are you aware of the bets in AI you’re making right now? And are you aware that not embracing this technology is, itself, a bet?

177,000 lines of code

Depending on who you ask, that's years of work.

That's how big my agency management software platform is now. I was able to combine 6-7 paid tools into one that, unlike the others, is perfectly suited to our exact workflow (with features no commercial software has).

That a single person with dedication can build an app this full-featured in such a short period of time is mind-boggling.

It's taken me three months of non-stop work to build. But without AI?

3 years to never!

What nagging problems have you accepted over the years that you could actually solve now?