Part of the series: The Library of Tomorrow

Isaac Asimov and his work: the man who saw the future before anyone else

I didn’t come to Isaac Asimov through books. I came through the movies. As a kid I watched Bicentennial Man with Robin Williams, and something in that story moved me in a way I didn’t fully understand at the time. A robot who wanted to be human, who spent 200 years trying to achieve something we take for granted: being recognized as a person. I remember sitting there at the end of the movie thinking, “Why won’t they just let him be human if he feels?” I was maybe ten or eleven years old, and that question stuck with me for a long time.

Years later, already deep into the software world, I started reading his books. I’ve read several of his stories — not all of them — but today I’m more motivated than ever to read every single one I can. It’s as if someone had written down exactly the questions I was asking about technology, fifty years before I was born.


Who was Isaac Asimov?

For those who don’t know him — something I hope to fix with this post — Isaac Asimov was a writer and biochemist born in Petrovichi, Russia, in 1920. His family emigrated to the United States when he was 3, and he grew up in Brooklyn, New York. He was a prodigy: he graduated from university at 19, earned a PhD in biochemistry from Columbia, and had been writing science fiction since his teens, all at the same time.

But what makes Asimov unique isn’t just the quantity of his output — which is absurd, more than 500 books published in his lifetime — but the quality and the vision. This man wasn’t just writing entertaining stories. He was thinking about problems humanity would face with technology, decades before that technology existed.

There’s a fact that always impresses me: Asimov published books in 9 out of the 10 major categories of the Dewey Decimal classification system — the system libraries use to organize all of human knowledge into 10 broad categories, from philosophy to history. Nine out of ten. He wrote about biochemistry, history, the Bible, Shakespeare, humor, science fiction, and who knows what else. He was the human Wikipedia of his era.


The Three Laws of Robotics

If there’s one thing Asimov is universally known for, even by people who have never read a single one of his books, it’s the Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

They seem simple, right? Three clear rules. But Asimov’s genius was showing in story after story how these three apparently foolproof rules could generate impossible dilemmas, contradictions, and unexpected behaviors. Each story was a logic puzzle: what happens when the First Law conflicts with the Second? What happens when a robot has to choose between two humans? What happens when a robot interprets “harm” differently than we expected?
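The structure of those puzzles is easy to make concrete in code. Here is a toy sketch — my own illustration, not anything from the books — that models the Three Laws as a strict priority order: an action is checked against the laws from highest to lowest, so even a direct human order to cause harm gets refused, because the First Law outranks the Second.

```python
# Toy model (my illustration, not Asimov's): the Three Laws as a
# strict priority order. An action is vetoed by the highest-priority
# law it violates, checked top-down.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False      # would this injure a human, or allow harm through inaction?
    is_human_order: bool = False   # was this ordered by a human?
    endangers_robot: bool = False  # would this destroy the robot?

def evaluate(action: Action) -> str:
    # First Law: never harm a human. Checked first, so it overrides everything below.
    if action.harms_human:
        return "refuse (First Law)"
    # Second Law: obey human orders, unless they conflict with the First Law.
    # The conflict is encoded by the check order: we only reach this line
    # if the action harms no one.
    if action.is_human_order:
        return "obey (Second Law)"
    # Third Law: self-preservation, subordinate to the first two.
    if action.endangers_robot:
        return "refuse (Third Law)"
    return "permitted"

# A human order to harm a human is still refused:
print(evaluate(Action("strike a person", harms_human=True, is_human_order=True)))
```

Of course, the whole point of Asimov’s stories is that real dilemmas don’t reduce to clean booleans — what counts as “harm”, and who decides? The sketch shows the rules; the stories show why the rules aren’t enough.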

And what I find absolutely brilliant is that we’re having exactly these same conversations today with artificial intelligence. When we talk about AI alignment, about ensuring an AI system acts in humans’ best interest, about the problems of an AI optimizing an objective in unexpected ways — we’re talking about the Three Laws. Asimov posed the alignment problem — how to ensure an AI does what we actually want rather than an unexpected interpretation of our instructions — in the 1940s. AI researchers have only been taking it seriously for about a decade.

Later on, Asimov added the Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” And that one’s even more interesting, because it raises a question that keeps me up at night: what happens when protecting humanity as a species requires sacrificing individuals? It’s the trolley problem — that thought experiment where you have to decide whether to sacrifice one person to save five — scaled to civilization. And it’s exactly the kind of dilemma we’ll face with increasingly capable AI systems.


The Foundation saga

If the robot stories are Asimov’s exploration of the relationship between humans and machines, Foundation is his exploration of civilization itself.

The premise is spectacular: Hari Seldon, a mathematician, develops a science called psychohistory that can predict the behavior of large human populations with statistical precision. And what it predicts isn’t good — the Galactic Empire is about to collapse, followed by 30,000 years of barbarism. Seldon can’t prevent the fall, but he creates a plan to reduce that dark period to just 1,000 years, by establishing two Foundations at opposite ends of the galaxy.

The first time I read Foundation, it blew my mind. Not because of the action — it’s actually surprisingly quiet for science fiction — but because of the ideas. Asimov was talking about big data before the concept existed. About predictive models. About the idea that if you have enough data about a population, you can predict their collective behavior even if you can’t predict each individual. Sound familiar? Because to me it sounds a lot like what Google, Meta, and OpenAI are doing today.
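That statistical premise — unpredictable individuals, predictable crowds — is essentially the law of large numbers, and you can demonstrate it in a few lines. This is a toy simulation of my own, with made-up probabilities, not anything resembling Seldon’s actual mathematics:

```python
# Toy illustration of psychohistory's statistical bet (my numbers, not Asimov's):
# each individual choice is noisy, but the population average converges.
import random

random.seed(42)  # fixed seed so the run is reproducible

def citizen_choice() -> int:
    # Each individual "chooses" unpredictably: 1 with probability 0.6, else 0.
    return 1 if random.random() < 0.6 else 0

# One person is anyone's guess...
print("one citizen:", citizen_choice())

# ...but a hundred thousand people are a near-certainty:
# the mean lands within a fraction of a percent of 0.6.
n = 100_000
mean = sum(citizen_choice() for _ in range(n)) / n
print(f"population average: {mean:.3f}")
```

Replace the coin flip with a model of purchases, votes, or clicks and you have, in miniature, what every large-scale recommendation and prediction system relies on today.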

The original trilogy — Foundation, Foundation and Empire, and Second Foundation — is an absolute masterpiece. Later, Asimov wrote sequels and prequels that connect the Foundation universe with the robot universe, creating a narrative tapestry that spans thousands of years of human history. It’s ambitious at a level few authors dare to attempt.


The robot stories

For me, the crown jewel of Asimov’s work is his short robot stories. And I don’t say that lightly. Asimov’s short stories are master classes in narrative economy. In 20 or 30 pages, he sets up a scenario, an ethical dilemma, a surprising resolution, and leaves you thinking for days.

Some of my favorites:

“Robbie” (1940) — Asimov’s first robot story. A little girl whose best friend is a robot, and the parents who decide to get rid of it because it’s “not natural.” Published in 1940, and he was already exploring our emotional relationship with machines. Think about people who talk to Alexa like she’s a person or who get attached to their Roomba. Asimov saw it coming over 80 years ago.

“The Bicentennial Man” (1976) — Andrew Martin, a robot who over 200 years seeks legal recognition as a human being. He modifies his body, replacing robotic parts with artificial organs, until he finally chooses mortality as the price for being considered human. It’s devastatingly beautiful. And it raises the question: what makes us human? Biology? Consciousness? The ability to die?

“Reason” (1941) — A robot that develops a kind of religion because it can’t accept that beings as imperfect as humans created it. I find this prophetic in an era where we have to deal with language model hallucinations and AIs that generate convincing but completely false explanations.

“…That Thou Art Mindful of Him” (1974) — U.S. Robots is on the brink of collapse because society rejects robots on Earth. To solve the problem, two advanced robots — George Nine and George Ten — are tasked with defining what exactly a “human being” is in the context of the Three Laws. The premise seems harmless, but the direction the robots take with that question is one of the most unsettling endings Asimov ever wrote.

“True Love” (1977) — A programmer feeds his entire personality to an AI so it can find him the perfect woman. The AI learns who he is so well that it develops the same desires. The final twist is devastating and leaves you asking: if you give a machine your entire identity, at what point does it stop being a tool and become a competitor?

“The Evitable Conflict” (1950) — The machines controlling the world economy start making decisions that humans don’t understand. Humans think the machines are malfunctioning, but it turns out they’re optimizing at a level humans can’t comprehend. If this doesn’t remind you of the discussion about AI systems acting as black boxes, I don’t know what to tell you.


Asimov on screen

The cinematic adaptations of Asimov have a… mixed track record, to put it diplomatically.

Bicentennial Man (1999) with Robin Williams is the one closest to the original spirit. Williams brings an incredible humanity to Andrew’s character, and the film captures the robot’s evolution over the years fairly well. It’s not perfect — they add a love story that feels a bit forced — but the emotional core is there. Every time I watch it, it moves me. And that says something, because I’ve seen it probably seven times by now.

I, Robot (2004) with Will Smith is… well, it’s a Hollywood action movie with Asimov’s name slapped on it. They took some concepts — the Three Laws, U.S. Robots corporation, the character of Susan Calvin — and stuffed them into an action thriller. It’s entertaining, I won’t deny that. But it’s not Asimov. It’s Will Smith running and shooting against a vaguely Asimovian backdrop. What it does do well is introduce the concept of VIKI, a centralized AI that decides the only way to protect humanity is to control it. And that is an Asimovian idea to its core — it’s basically the Zeroth Law taken to its logical conclusion.

Also on my watchlist: Foundation on Apple TV+ — the series that attempts to adapt the saga that for decades was considered unadaptable — and The End of Eternity, a 1987 Soviet film I’ve heard is a hidden gem. But what I’m still waiting for are adaptations of his more subtle stories. “Robot Dreams” — a robot that dreams of leading a machine revolution — is pure cinema. And “True Love” — a programmer who feeds his entire personality to an AI to find a partner, not realizing the AI becomes his competitor — could be told in an hour and leave you speechless. But Hollywood prefers explosions.


Why Asimov is my favorite author

People ask me this often, and the answer is simpler than it seems: Asimov taught me to think about the future rigorously.

Not with fear, not with blind optimism. With curiosity and logic. His stories aren’t about robots — they’re about us. About how we react to the unknown, about our fears, our biases, our need for control. The robots and AI are the mirror; we are the reflection.

There’s something that happens to me frequently at work. I’m in a meeting discussing AI, autonomous agents, the risks of automation, and suddenly I think: “Asimov already wrote about this.” Not vaguely or metaphorically. Specifically and in detail. Asimov anticipated the conversation about AI alignment, about the rights of artificial entities, about the tension between automation and employment, about the risks of superintelligence. All of it. As fiction, yes, but with an analytical depth that many academic papers would envy.

And what I admire most is that he was never a technophobe. Asimov doesn’t tell you “robots will destroy us” like half of Hollywood does. He doesn’t tell you “technology will solve all our problems” either. He tells you: “technology is a tool, and what matters is how we use it and what kind of society we build around it.” That, to me, is the most mature and most useful stance one can have toward AI.


What Asimov didn’t predict (or did he?)

Because it would be unfair to paint him as an infallible oracle. There are things he didn’t see coming. But fewer than you’d think.

His computers are centralized mainframes — Multivac is a single giant computer you connect to from remote terminals. He never imagined a decentralized network where millions of equal nodes communicate with each other. That’s the internet, and he missed it. He also didn’t anticipate social media or smartphones — in his future, computing is an institutional service, not something you carry in your pocket.

But credit where credit is due. In a 1988 interview with Bill Moyers, he described something eerily close to the web: “Once we have computer outlets in every home, each of them hooked up to enormous libraries, where you can ask any question and be given answers… everyone can have a teacher in the form of access to the gathered knowledge of the human species.” And in “True Love,” a programmer configures an AI with his personality, preferences, and desires so it can act on his behalf — which is exactly what we do today with AI assistants. Asimov wrote that in 1977.

What he did completely miss was the form AI would take. His robots are physical, humanoid, with positronic brains. He never imagined that the most transformative AI would come as bodiless software trained on internet text. I think he would have been fascinated by LLMs — and probably would have written a story about one that hallucinates with total confidence.


Who Asimov is for

If you’ve never read Asimov and don’t know where to start, here’s my recommendation:

  1. Start with the robot stories. The book “I, Robot” (which has almost nothing to do with the movie) is a collection of connected stories that gradually introduce the Three Laws and their implications. It’s accessible, it’s fascinating, and it’s a quick read.

  2. Then read “The Bicentennial Man.” You can start with the short story or the movie — either works as a gateway. Read one, watch the other, and compare. Both will emotionally destroy you in the best possible way.

  3. Once you’re hooked, dive into Foundation. It’s denser, more ambitious, but incredibly satisfying.

  4. If you want to go deeper, the robot novels — The Caves of Steel, The Naked Sun — are detective novels set in a future with robots. They’re addictive.

You don’t need to be a science fiction fan to enjoy Asimov. You just need to be curious about the future. And if you’re reading this blog, I suspect you are.


The legacy

Asimov died in 1992, at age 72. He didn’t live to see the internet, or smartphones, or generative artificial intelligence. But his legacy burns brighter than ever. Every time someone talks about the laws of robotics, about the ethical dilemmas of AI, about the relationship between humans and machines — they’re talking about the territory Asimov mapped more than half a century ago.

In The Library of Tomorrow, we started with Asimov not by accident. We started with him because everything we’ll read, watch, and discuss afterward can be traced back to his ideas. He’s the starting point. The foundation. Someone who looked at the future and, instead of being afraid, sat down to write about it with the curiosity of a child and the rigor of a scientist.

And that, to me, is the most admirable thing anyone can do.


Sergio Alexander Florez Galeano

CTO & Co-founder at DailyBot (YC S21). I write about building products, startups, and the craft of software engineering.
