Leadership advice has been the same for 2,500 years

I just finished former Navy SEAL Jocko Willink's book Extreme Ownership.

It’s a great book about taking radical responsibility for our actions. Like many leadership principles, it only works if the leader follows the program too. A leader who expects extreme ownership from their subordinates without truly following the wisdom themselves is a tyrant.

When reading, I couldn't stop thinking about how Extreme Ownership is essentially the same book as How to Win Friends and Influence People but with wildly different packaging.

It’s also the same book as Meditations by Marcus Aurelius. And the Tao Te Ching. And Ben Franklin’s Poor Richard's Almanack. And Seneca's letters to Lucilius two thousand years ago.

The core message is always the same: Take radical responsibility for your own actions. Listen more than you speak. Meet people where they are, not where you wish they were.

Jocko says "there are no bad teams, only bad leaders." Carnegie said "any fool can criticize, condemn, and complain — and most fools do." Seneca wrote "If you live according to nature, you will never be poor; if you live according to opinion, you will never be rich." Lao Tzu taught that "the wise leader does not push; he lets things happen." Marcus Aurelius reminded himself daily: "you have power over your mind, not outside events."

It’s always the same story, isn’t it?

Jocko's version is for people who respond to discipline and direct orders from someone whose voice is hoarse from decades of screaming. Carnegie's is for people who respond to warmth and social grace. The Stoics wrote for people who respond to philosophical reflection. Lao Tzu wrote for people who resonate with paradox and gentle stillness.

The wisdom hasn't changed in 2,500 years. The packaging changes because the audience changes.

And this is exactly what I see happening in AI right now.

On my podcast, dozens of tech founders echoed the same core ideas about implementing AI: automate what's repeatable so humans can focus on what requires judgment.

I, for example, have learned that my time is best spent judging which of several objects is, in fact, cake.

But one version of this timeless story is wrapped in bro hype and rocket emojis. Another is dripping with fear and "your job is disappearing" mortal terror.

Same medicine. Different bottles.

The question isn't whether you have the right AI strategy. It's whether you can make the people who need to execute it actually understand what you're asking them to do and why they should do it. And you need to be intellectually honest enough to know and admit why you’re really doing what you’re doing.

The chief problem of our time isn’t one of technology: it’s one of translation, accountability, and communication.

And it's been the same problem for 2,500 years.

No TV Month

Every year, for one full month, we turn off every screen in our house.

No Netflix. No YouTube. No Disney+. No more bingeing the Kardashians.

My dad started “No TV Month” when I was a kid. I thought he was cruel. Now I do it with my own daughter, and I completely get it.

Here’s what happens: in the first few days, I watch her go through the symptoms of withdrawal. Her boredom fills the house like a stench that can’t be escaped. Slowly but surely, she begins to fill her days with other activities. She even read a 215-page book in a single day; she couldn’t put it down.

Witnessing screen withdrawal is scary. But what fills the void is better.

Matt Stone — co-creator of South Park, one of the most successful TV shows in history (28 seasons and counting) — said: “I don’t watch any television. I got kids, I got work. I’m not a TV person. I never have been.”

The billionaire TV mogul who makes TV doesn’t watch TV. Let that sink in.

Steve Jobs didn’t let his kids use the iPad he invented. “We limit how much technology our kids use at home,” he told a stunned New York Times reporter. His biographer Walter Isaacson described dinners at the Jobs house: discussing books and history around the kitchen table. No one ever pulled out an iPad. The kids didn’t seem addicted to devices at all.

The guy who built the most addictive screen on earth kept his own kids away from it.

There’s a pattern here that most people miss:

The people who create the things we consume understand something fundamental: consumption is the default. Creation is the choice.

And the ratio matters, especially now that the news cycle seems desperate to hook every second of our finite attention and keep us in a perpetual state of terror.

I code, I write, I teach. And I shout random jabberings into a void on LinkedIn like a maniac in Central Park. And I can tell you from experience: the weeks I consume the most content are the weeks I create the least.

No TV Month isn’t about being anti-technology. It’s about remembering that screens are tools for making things, not just watching things.

When my daughter picks up a paint brush instead of a remote, she’s not “missing out.” She’s doing what the creators of the stuff she’d be watching are actually doing with their time.

Pick your month. Turn it off. See what happens.

You might be surprised what you build when you stop consuming.

We need to stop "resulting"

Buy this book.

The biggest problem in company AI roll-outs right now isn't hallucinations.

It's resulting.

Annie Duke (champion poker player turned author) has a name for the mistake most leaders are making with AI.

Resulting: judging a decision by its outcome instead of the process that made it.

So far I've watched resulting in AI play out in two ways:

One: An AI chatbot disappoints a customer. See? I knew we were wrong to embrace AI!

Two: A promising demo app becomes a new religion. OMG stop the presses: I’m replacing every employee with AI right now!

Neither reflects the right way to think about the situation.

Duke's point: Life isn’t chess, it’s poker. In chess, there is a right answer. In poker and business, hidden information and luck mean a brilliant decision can blow up, or a terrible one can pay off.

In Never Split the Difference, former FBI hostage negotiator Chris Voss calls hidden information Black Swans: pieces of information that, once uncovered, completely reframe everything. Every AI deployment is full of them: edge cases the demo never hit, user behaviors no one modeled, and exciting possibilities that don’t reveal themselves for months.

Your AI strategy is one Black Swan away from being either a crisis or a breakthrough. Certainty, in today’s environment, isn't strength. It's legerdemain: sleight of hand that fools you as much as your audience.

Duke's suggestion is simple: separate the quality of the decision from the quality of the outcome. Before you act (or deliberately decide not to), write the bet. What do you believe will happen? How confident are you, as an actual percentage? What would change your mind? Then run the pre-mortem: assume it fails. Why?
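If you think in code like I do, here's a minimal sketch of what writing the bet down could look like: a tiny decision-journal entry in Python. The field names and the example scenario are mine, not Duke's.

```python
from dataclasses import dataclass

# A minimal decision-journal entry, sketched in Python.
# Field names are illustrative, not Duke's terminology.
@dataclass
class Bet:
    decision: str              # what we're doing (or deliberately not doing)
    prediction: str            # what we believe will happen
    confidence: float          # an actual percentage, e.g. 0.65
    kill_criteria: list[str]   # evidence that would change our mind
    premortem: list[str]       # "it failed; here's why" reasons, written up front

chatbot_rollout = Bet(
    decision="Roll out an AI support chatbot to 10% of tickets",
    prediction="Resolution time drops without hurting customer satisfaction",
    confidence=0.65,
    kill_criteria=["CSAT falls more than 5 points", "Escalation rate doubles"],
    premortem=["Edge cases the demo never hit", "Users ask things nobody modeled"],
)
```

When the outcome arrives, you grade the entry, not just the result: were the prediction, the confidence, and the kill criteria reasonable given what you knew at the time?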

Charlie Munger would call this inversion. Voss would call it hunting for Black Swans. Duke would call it calibration.

Most companies seem to be skipping this process entirely. They're asking "did it work?" when they should be asking "were we right to believe it would?"

Nobody knows which AI bets will pay off. Anyone who says otherwise is selling something.

Are you aware of the bets in AI you’re making right now? And are you aware that not embracing this technology is, itself, a bet?

177,000 lines of code

Depending on who you ask, that's years of work.

That's how big my agency management software platform is now. I was able to combine 6-7 paid tools into one that, unlike the others, is perfectly suited to our exact workflow (with features no commercial software has).

That a single person with dedication can build an app this full-featured in such a short period of time is mind-boggling.

It's taken me three months of non-stop work to build. But without AI?

3 years to never!

What nagging problems have you accepted over the years that you could actually solve now?

Better bulls*** detectors

Great founders are getting harder to spot. And it's not entirely their fault. A few years ago, a 200-page business plan meant something. Not because length equals quality, but because creating something that comprehensive required either real mental capability or the nerve to steal someone else's work.

Now? ChatGPT can generate it in an afternoon.

This isn't a complaint about AI. It's an observation about signal degradation.

We've spent all of human history developing intuition for gauging talent through artifacts. Through pieces that we create. A sharp deck. An intuitive website. A polished pitch. But now, those artifacts actually tell us less about the person who submitted them.

And this is why oral exams are making a resurgence in academia at a time when ChatGPT can cheat anyone’s way through any test. It’s why in-person interviews still matter. It’s why investors still insist on meeting founders face-to-face. You can learn more about someone's actual thinking in a 10-minute conversation than in a 50-page document you can't verify they wrote.

After over 200 founder interviews on my podcast, I've watched this shift happen in real time. The digital materials keep getting better. The variance in actual human mental capability stays the same.

If we're not careful, we’re entering an era where we build houses of cards on top of houses of cards: investing in people based on artifacts that may not reflect their real capacity.

The fix isn't banning AI. It's recalibrating how we evaluate human work and investment potential.

Digital output is now table stakes. Conversation and humanity are the new signals.

And as digital creators, we must go out of our way to show our work in a way that can’t possibly be faked.