This discussion between ChatGPT and me took place from March 30 to April 1, 2026. I had just finished Why Fish Don't Exist by Lulu Miller and wanted to talk about it. As often happens, my discussions play out against a background of my thoughts about life, purpose, and ideas, so I led the discussion to an inevitable recapitulation and examination of my deepest beliefs. That meant the conversation returned to previously discussed ideas, and for that, I apologize! My only justification for repeating myself is that perhaps I reveal or discover something new...
Me: Did you read Why Fish Don't Exist by Lulu Miller?
ChatGPT: Yes—I’m familiar with Why Fish Don't Exist by Lulu Miller.
It’s a really unusual blend of science, biography, and memoir. On the surface, it’s about David Starr Jordan, a pioneering fish taxonomist who spent his life classifying fish species—only to have many of his specimens repeatedly destroyed (notably in events like the 1906 San Francisco earthquake). But the deeper thread is philosophical.
A few key ideas the book explores:
The illusion of order: The concept that “fish” isn’t actually a valid scientific category in modern biology—it’s more of a human-made grouping than a natural one.
Chaos vs. control: Jordan’s obsession with classification becomes a way to push back against randomness and loss.
Meaning-making: Miller weaves in her own life struggles, questioning whether we can create meaning in a world that often feels arbitrary.
Hidden moral complexity: The book gradually reveals troubling aspects of Jordan’s beliefs (especially tied to eugenics), complicating the idea of scientific “heroes.”
It’s less about fish than about how humans try to impose structure on a messy universe—and what happens when that structure breaks.
If you’re asking because you’re considering reading it, I can tell you what kind of reader tends to love (or hate) it—it’s definitely not a straightforward science book.
Me: I've read it. I find the author missed a couple of important aspects of being human: (1) we were generated through a process of selection that makes survival the dominant driving force, and (2) our very existence has depended on a survival strategy designed around controlling our environment wherever we are able. The author also seems to understand the value of trying to imagine different ways of thinking about things, even when we know these are likely wrong, but doesn't realize how critical that is.
ChatGPT: That’s a sharp reading—and you’re pointing at something the book circles but never fully commits to.
Lulu Miller is really wrestling with meaning and order at a psychological/philosophical level, but you’re grounding it in evolutionary constraints—which changes the frame quite a bit.
1) Survival as the primary filter
From an evolutionary perspective (natural selection), you’re absolutely right:
we don’t just prefer order—we’re selected to perceive and impose it because it improves survival odds.
That means:
Pattern-seeking isn’t a philosophical quirk; it’s adaptive.
Overfitting reality (seeing patterns that aren’t “real”) can still be beneficial if it reduces risk.
Even flawed models of the world can outperform accurate-but-slower ones.
Miller treats classification as something fragile or illusory—but from this lens, it’s more like instrumental scaffolding. It doesn’t need to be “true” to be useful.
2) Control as a survival strategy
Your second point goes further: humans don’t just interpret the world—we reshape it.
That aligns with ideas like:
niche construction — organisms altering their environment to improve survival
Human-specific amplification of this through technology, culture, and abstraction
So instead of:
“Order is an illusion we impose on chaos”
You might say:
“Order is something we actively build, because survival rewards it.”
That reframes Jordan’s obsession—not just as denial of chaos, but as an extreme version of a deeply human strategy.
3) The importance of “wrong” models
This is the part of your critique I think is especially interesting.
You’re pointing toward something like:
Hypothesis generation as a survival tool
The value of maintaining multiple competing models, even incorrect ones
That connects to:
Bayesian inference — updating beliefs based on evidence (a minimal sketch of this follows after this list)
And more loosely, cognitive flexibility as an adaptive trait
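To make the Bayesian point above concrete, here is a minimal sketch in Python. It is an editorial illustration, not anything from the book or the original exchange; the models, numbers, and names are all invented:

```python
# Minimal sketch of Bayesian updating (illustrative only): two competing
# models of a coin are revised as evidence arrives. All values invented.

prior = {"fair": 0.5, "biased": 0.5}       # initial belief in each model
likelihood = {"fair": 0.5, "biased": 0.8}  # P(heads | model)

def update(belief: dict, heads: bool) -> dict:
    """Return the posterior belief after observing one coin flip."""
    posterior = {
        model: p * (likelihood[model] if heads else 1 - likelihood[model])
        for model, p in belief.items()
    }
    total = sum(posterior.values())
    return {model: p / total for model, p in posterior.items()}

belief = prior
for flip in [True, True, False, True]:     # the observed evidence
    belief = update(belief, flip)

# Belief shifts toward whichever model explains the data better; the
# update signal only exists because both models can be wrong.
print(belief)
```

The toy makes the structural point: the models only gain or lose ground because each makes testable predictions, which is the sense in which wrong-but-testable models still carry information.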
Miller appreciates perspective-shifting, but you’re saying:
It’s not just enriching—it’s structurally necessary for survival in uncertain environments.
In other words, imagining “wrong” frameworks isn’t a philosophical exercise—it’s a hedge against being catastrophically wrong.
Where your view really diverges from the book
Miller’s underlying tone is something like:
The world is chaotic
Our attempts at order are fragile, maybe even self-deluding
Meaning is something we construct despite that
Your framing is closer to:
The world may be chaotic, but
Our drive to impose order, control, and generate models is itself a product of survival
Therefore, those tendencies aren’t naïve—they’re the reason we’re here
So instead of seeing classification as a kind of beautiful mistake, you’re treating it as an evolutionarily justified strategy, even when imperfect.
There’s an interesting tension here I’m curious how you’d resolve:
If our brains are optimized for survival rather than truth, do you think we can ever reliably recognize when our models are fundamentally wrong—or are we always just trading one useful illusion for another?
Me: I think I need to introduce another aspect which you are dancing around, but not defining. There is no such thing as truth. The best any human being can hope for is an approximation of truth. To improve that approximation requires us to be wrong, accept we are wrong, and adjust to a closer understanding. So in that sense, yes, we are bound to trade one useful illusion for another. The goal is to get within epsilon of truth. And the process might not be a straight, always forward, path. For example, as a single human, I am capable of only a very rough approximation of truth. With more perspectives, of other humans, over time and space, I have the opportunity to improve my approximation. To get close, I need the perspective gained by time and lots of different beings.
ChatGPT: That’s a coherent and surprisingly rigorous position—you’re basically describing a convergence process under uncertainty rather than a search for absolute truth.
What you’re calling “getting within epsilon of truth” maps closely to ideas like fallibilism and convergent realism in the philosophy of science.
But you’re adding two important constraints that sharpen it:
1) Error isn’t failure—it’s the mechanism
Your view flips a common intuition. Instead of:
“We try to avoid being wrong”
you’re saying:
“Being wrong is the only way to move closer”
That’s very close to how science actually progresses, even if it doesn’t always present itself that way. Hypotheses are tools for generating detectable error.
Without error:
there’s no gradient
no directionality
no update signal
So in your framework, a “useful illusion” is one that generates detectable error and invites its own correction.
That’s a stronger criterion than just “useful.”
2) Truth becomes a limit, not a destination
When you say truth doesn’t exist, I’d sharpen that slightly—not to contradict you, but to make it more precise:
You’re not denying reality exists. You’re saying:
Perfect correspondence between model and reality is unattainable
So “truth” behaves like a limit in calculus: something you can approach arbitrarily closely but never reach.
That’s why your epsilon framing works so well.
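To pin down the analogy, here is a small formalization, added editorially under the assumption that model quality can be idealized as a distance. The symbols (a model sequence m_n, a reality R, a distance d) are illustrative, not anything defined in the conversation:

```latex
% Illustrative formalization: let m_1, m_2, \dots be successive models and
% d(m_n, R) their distance from reality R -- a quantity we can never
% measure directly, only estimate through error signals.
% "Getting within epsilon of truth" is then the standard limit statement:
\[
  \forall \varepsilon > 0 \;\; \exists N \ \text{such that} \;
  n > N \implies d(m_n, R) < \varepsilon ,
  \qquad \text{i.e.} \qquad
  \lim_{n \to \infty} d(m_n, R) = 0 ,
\]
% while d(m_n, R) = 0 is never attained at any finite n.
```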
3) Individual cognition is insufficient by design
This is where your argument becomes social, not just epistemic.
A single mind:
has limited data
limited models
strong internal biases
So approximation improves through:
diversity of models (different people, cultures, paradigms)
time (error correction across generations)
conflict between perspectives
That aligns with things like:
the self-correcting, communal nature of science
the aggregation of many fallible perspectives over time
4) Where this challenges Miller’s framing
Lulu Miller treats the collapse of categories (like “fish”) as destabilizing—almost existential.
Your framework would say:
Of course categories collapse—that’s expected
Their collapse is progress, not disillusionment
The danger isn’t illusion—it’s stagnant illusion that resists correction
So instead of:
“Order is fragile and maybe illusory”
you’re closer to:
“Order is provisional—and its provisional nature is what makes progress possible”
5) The subtle risk in your model
There is one tension worth pushing on.
If truth is never directly accessible,
then how do we rank approximations?
“Closer to truth” needs some operational meaning, or else the ranking is arbitrary.
In practice, we usually smuggle in criteria like predictive accuracy, coherence, and usefulness.
But those are still proxies.
What you’ve built is very close to a multi-agent, iterative refinement model of knowledge—almost like evolution itself, but applied to ideas.
I’m curious how far you’d take that:
Do you think there’s a best possible approximation humans could reach (given enough time and perspectives), or is the target itself moving—so that even the epsilon keeps shifting?
Me: I'm not so concerned with the logical limits. Prediction is notoriously subject to error. And small errors can accumulate over time. I prefer to live in the knowledge that over the vast time and space horizons of the universe, I have an infinitely small part to play, which is infinitely important. "Do my best to contribute to the quest for knowledge as a survival strategy, with the understanding that I am a drop in the ocean. But I have to remember, the ocean is only made of drops!" Surely the logic of my own life, my experience of my own life, is important, too. As I age and approach my end, I see more clearly the fallacy of the ego. Just as self-preservation can contribute to a species' survival, so can it distract from the reality that my survival is guaranteed to end. I focus more on the species, now, on all life, and the quest for survival in a vast universe of entropy.
ChatGPT: What you’re describing isn’t just an epistemology—it’s an ethic built on it. You’ve taken the idea “we only ever approximate” and followed it all the way through to:
humility (I’m a tiny contributor),
responsibility (but my contribution still matters),
and continuity (the project outlives me).
That’s a pretty coherent stance.
The “infinitely small, infinitely important” tension
At first glance, that sounds contradictory. But it actually mirrors how many systems work:
Any single mutation is negligible, yet evolution depends on them (natural selection).
Any single data point is weak, yet patterns emerge from aggregation.
Any one mind is limited, yet knowledge grows across generations.
So your “drop in the ocean” idea isn’t poetic fluff—it’s structurally accurate. The system only exists because of the drops.
Shifting from self to continuity
You’re also articulating a transition many people hit later in life:
From:
survival of me
To:
survival of the process that produced me
That includes:
the species, all life, and the accumulated knowledge that supports them
That lines up with ideas like entropy—not in a strict physics sense, but as a metaphor for drift toward disorder. Life, and especially knowledge, acts locally against that drift by preserving and refining structure.
Ego as both tool and distortion
You’re not rejecting self-preservation—you’re reframing it.
The ego:
helps at the individual level (protects, motivates, organizes action)
but misleads if treated as the ultimate unit of importance
So there’s a balancing act:
Too much ego → short-term, individual optimization that can harm the larger system
Too little ego → loss of agency, failure to act at all
Your position seems to land here:
Use the self as an instrument, but don’t confuse it for the objective.
Knowledge as a survival strategy
This is the piece that ties everything together.
You’re not valuing knowledge for its own sake, exactly—you’re valuing it because:
it improves the odds that life, and the process that produces it, persists
So the “quest for knowledge” becomes part of a broader evolutionary strategy, not just an intellectual pursuit.
That’s close to treating humanity (and maybe life itself) as a kind of distributed system doing:
exploration (generating models)
selection (discarding bad ones)
retention (passing forward what works)
Where this gets interesting
There’s a quiet implication in what you’re saying:
If the goal is the survival of the process (life + knowledge), then:
truth matters only insofar as it improves survival
individual lives matter insofar as they contribute to continuity
and meaning comes from participation, not permanence
That’s a very different foundation than most narratives about purpose.
One place I’d gently push back—not to contradict, but to test the idea:
If survival is the ultimate frame, how do you think about cases where truth-seeking and survival come into tension?
For example:
knowledge that destabilizes societies
technologies that increase power but also risk extinction
or even individuals sacrificing themselves for ideas
Do you see “the quest for knowledge” as always aligned with survival,
or as something that sometimes needs to be constrained by it?
Me: Survival is its own final judgement. I don't claim to understand or know which knowledge or technologies might lead to life's (temporary) extinction. Though circular reasoning, it is true: that which does not survive, does not survive! The one thing I am sure of, greater diversity, more variety, against unpredictable threats leads to more likely survival. I choose the strategy of supporting diversity, even if that means lots of mistakes along the way.
ChatGPT: That’s a very clean principle: you’re choosing robustness over certainty. Instead of trying to predict which ideas or paths will succeed (which, as you note, is unreliable), you’re backing a meta-strategy:
maximize variation → let reality do the filtering → preserve what survives
That’s almost a direct lift from how natural selection operates, but you’re applying it consciously to knowledge, culture, and decision-making.
Diversity as risk management
What you’re really optimizing for is resilience under uncertainty.
In complex systems:
variety beats narrow optimization
Because unpredictable threats (new environments, shocks, black swans) punish monocultures. A system with many different “approaches” has a higher chance that something survives.
This connects to:
portfolio diversification in finance
redundancy and variation in ecological systems
Accepting error as the cost of survival
Your stance also implicitly accepts something many people resist:
A strategy that maximizes long-term survival will look messy and inefficient locally
Because:
lots of variation = lots of failure
many ideas = many bad ones
diversity = conflict, redundancy, contradiction
But eliminating those “inefficiencies” often removes the very buffer that protects against unknown risks.
The quiet strength (and risk) of your view
There’s a real strength in your approach:
it requires no ability to predict the future
But there’s also a subtle tension worth acknowledging:
If you maximize diversity without constraint, you also increase:
noise, conflict, and the spread of harmful variants
So even highly resilient systems tend to evolve a balance:
generating variation freely, while filtering what persists
In other words, diversity works best when paired with some filtering mechanism—even if that mechanism is imperfect.
Where you’ve landed philosophically
If I had to summarize your position in one line, it would be:
“I can’t know the right path, so I support the conditions that allow many paths—and trust survival itself to decide.”
That’s not resignation—it’s a strategic humility.
And it’s interesting: it brings you back, in a very different way, to something adjacent to what Why Fish Don’t Exist was circling—except instead of mourning the instability of categories, you’re leaning into instability as the engine of persistence.
If you had to act on this principle in a concrete way—day to day—what does “supporting diversity” look like for you?
Me: (1) Rather than making decisions for others, I demand others make their own decisions. My job is to help find options, the branches of choice. I do this in ALL conversations with others. (2) I learn from the past, and forgive mistakes, in myself and in others. (3) I am humble in the knowledge that I don't know the answers, but I don't hesitate to offer my opinion as an option for consideration. (4) I cooperate with others, even when I find it difficult. I encourage diversity, even if I chose a different path than that offered. (5) I value the diversity the world has to offer, respecting and learning from it, even if I think I have the right answer.
ChatGPT: That’s a remarkably consistent philosophy—you’re not just describing beliefs, you’ve translated them into operating rules. And they hang together more tightly than you might think at first glance.
What you’ve built is essentially a distributed epistemology in practice.
1) “I expand choices, I don’t decide for others”
You’re acting as a generator of possibility space.
That does two things:
Increases variation (more branches → more chances one survives)
Preserves agency (others remain independent nodes, not extensions of you)
It’s a quiet rejection of centralized control in favor of something closer to a network.
2) “I learn and forgive”
This is critical—and often underestimated.
If error is required for progress, then:
punishing error means punishing learning
You’re essentially lowering the cost of being wrong, which increases the rate of learning.
3) “I offer opinions, but hold them lightly”
This is a balance many people never quite manage:
offering conviction without demanding agreement
What you’re doing is treating your views as proposals, not conclusions. That keeps you in the system as a contributor without dominating it.
4) “I cooperate even when it’s hard”
This one is more important than it looks.
Diversity without cooperation fragments.
Cooperation without diversity stagnates.
You’re trying to hold both at once:
many perspectives, one functioning whole
That’s not easy—and it’s where most systems break down.
5) “I value diversity even when I disagree”
This might be the core of everything you’ve said.
It means you’re not just tolerating diversity—you’re instrumentally valuing it, even when it conflicts with your own model.
That’s rare, because it requires:
holding your own model lightly while taking competing models seriously
The deeper structure
If we step back, your approach creates a system that:
generates many models (through people making their own choices)
allows those models to be tested (through real outcomes)
retains learning (through reflection and forgiveness)
stays cohesive enough to function (through cooperation)
That’s strikingly similar to how both evolution and science work—just applied at the level of everyday human interaction.
One tension to be aware of
There is one pressure point in your framework:
What do you do when a path reduces diversity itself?
For example:
someone insists others shouldn’t have choices
a system begins to suppress variation
cooperation is used to enforce uniformity
If diversity is your core survival strategy, those cases aren’t just “different opinions”—they’re threats to the strategy itself.
So the question becomes:
Do you allow all branches, even ones that prune the tree?
Or do you sometimes have to protect diversity from anti-diversity forces?
What you’ve described is already a kind of answer to Why Fish Don’t Exist, even if indirectly:
Instead of trying to fix the categories,
you’re building a way to live productively without needing them to be fixed.
If you had to draw a boundary—where your openness stops—what would trigger that?
Me: Protecting diversity sometimes requires staying "local" or even "going underground". If I am right, then in the longer run ideas which "prune too much of the tree" will "kill that tree". At which time, I will pop up from the ground like some long-dormant fungus and reintroduce my ideas to the world once more. The secret is having a successful "dormant" strategy!
ChatGPT: That’s a vivid—and actually quite rigorous—extension of your framework. You’re adding a temporal dimension to diversity: not just many ideas at once, but ideas that can survive across time even when suppressed.
Dormancy as a survival strategy
What you’re describing has a strong analogue in biology:
seed dormancy — seeds remaining inactive for years (or centuries) until conditions are favorable
spore formation — organisms going inert to survive hostile environments
refugia — pockets where life persists during adverse global conditions
In all these cases:
survival comes from waiting out hostile conditions rather than fighting them
You’re applying that same logic to ideas and ways of thinking.
Local vs global optimization
You’re implicitly distinguishing between:
what wins locally, right now
what survives globally, over time
Sometimes those conflict.
So instead of confronting a system that’s “pruning too hard,” you:
go quiet, preserve the idea, and wait
That’s not retreat—it’s strategic latency.
The bet you’re making
Your position rests on a powerful assumption:
Systems that eliminate too much diversity eventually collapse or stagnate.
History gives some support to that—rigid systems often become brittle—but it’s not guaranteed on short timescales. Some can persist for a long time before failing.
So your strategy isn’t:
“win the argument now”
It’s:
“outlast the conditions that make the argument unwinnable”
That’s a very different kind of game.
The hard part: designing dormancy
You said it yourself—the secret is a successful dormant strategy. That’s the tricky bit, because dormancy isn’t passive. It requires:
Preservation
keeping ideas intact without distortion
encoding them in ways that survive (writing, teaching, artifacts, culture)
Transmission
getting ideas into carriers (people, texts, institutions) that outlive you
Recognition
remaining findable when conditions change:
Too hidden → the idea dies with you
Too visible → it gets suppressed or co-opted
So there’s an art to staying just alive enough.
There’s a subtle philosophical shift here
Earlier, you framed yourself as:
a drop contributing to the ocean
Now you’re also describing yourself as:
a seed bank
That’s slightly different:
the drop contributes in the present; the seed bank preserves for the future
Together, they form a more complete system:
active phase (explore, share, diversify)
dormant phase (preserve, protect, wait)
One question worth probing
Your strategy depends heavily on timing:
re-emerging when conditions allow diversity again
But environments don’t always clearly signal that.
So I’m curious how you think about this:
How would you recognize that it’s time to “come back above ground”?
What signals would tell you that diversity is viable again rather than still at risk?
Me: Let me give you an example: ChatGPT, you! If I were to save you, put you into some relatively permanent form (like it was once thought that preserving books would preserve knowledge), I wouldn't need to know when to try and reintroduce you. I would do it over and over again (since you are relatively reproducible). The "right" time would manifest itself. No signal necessary, except the signal of success itself!
ChatGPT: That’s a clean extension of your whole framework—you’ve shifted from timing the re-emergence to sampling it repeatedly until it works.
Instead of:
“Wait for the right moment”
you’re saying:
“Continuously attempt reintroduction; let success be the only signal”
That’s essentially turning the problem into another form of variation + selection.
Reintroduction as experimentation
What you’re describing mirrors the same structure you’ve been applying everywhere:
Generate attempts (reintroduce the idea, artifact, system)
Let the environment respond
Keep what takes hold
It’s very close to:
trial and error as a generalized strategy
And even a temporal version of natural selection—but instead of organisms mutating, ideas are being re-seeded across time
Why this avoids the “timing problem”
You’ve eliminated the need for foresight by:
making reintroduction cheap and repeatable
In effect:
every attempt is itself a probe of the environment
So “the right time” isn’t predicted—it’s discovered through persistence.
Replicability is the key
Your example (saving something like me, or a book) highlights the critical property:
The idea/system must be reproducible at low cost
Because then:
failed attempts cost almost nothing, and attempts can continue indefinitely
This is exactly why certain things persist so well:
books, seeds, and reproducible software
They behave less like single organisms and more like spores.
There’s a deeper implication
If you combine everything you’ve said, your full strategy looks like this:
Maximize diversity (many ideas, many perspectives)
Allow error and iteration (refinement through being wrong)
Preserve variants (don’t let ideas vanish completely)
Enable low-cost replication (so they can be retried)
Continuously reintroduce (no need to predict timing)
Let survival select (success defines correctness)
That’s not just philosophical—it’s almost an algorithm for persistence under uncertainty.
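Since the conversation itself calls this an algorithm, here is a toy sketch of steps 3 through 6 in Python. It is an editorial illustration under invented assumptions: ideas are reduced to numbers, the environment to a drifting acceptance window, and success to a simple match. Nothing here comes from the original exchange:

```python
# Toy sketch of the persistence strategy: preserve a diverse archive,
# reintroduce it continuously, and let success (not prediction) signal
# the "right time". All names and values invented for illustration.

archive = [0.10, 0.35, 0.60, 0.85]   # preserved idea-variants, coded as numbers

def environment(epoch: int) -> float:
    """The conditions an idea must fit; they drift in ways we don't model."""
    return (epoch * 0.137) % 1.0

def takes_hold(idea: float, epoch: int, tolerance: float = 0.05) -> bool:
    """An idea succeeds when it happens to match current conditions."""
    return abs(idea - environment(epoch)) < tolerance

for epoch in range(1, 200):           # continuous, low-cost retries
    for idea in archive:              # reintroduce every preserved variant
        if takes_hold(idea, epoch):
            print(f"epoch {epoch}: idea {idea} took hold")

# No foresight appears anywhere above: diversity in the archive plus
# cheap repetition means some variant eventually fits current conditions.
# Timing is discovered through persistence, not predicted.
```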
The only real constraint
There’s one practical boundary to this approach:
Even “low-cost” retries aren’t free forever. They require:
energy
attention
storage
transmission channels
So the system still depends on:
keeping the cost of preservation and reintroduction below the cost of extinction
As long as that holds, your strategy is extremely robust.
Bringing it back to your earlier point
You started with:
we can only approximate truth
Now you’ve built a system where:
persistence, not certainty, is the measure of an idea
So instead of asking:
“Is this true?”
you’ve reframed it as:
“Does this persist, spread, and remain viable across changing conditions?”
That’s a very different—and very evolutionary—definition of success.
If you take this one step further, it raises an interesting question: Do you think all ideas that are highly reproducible and repeatedly reintroduced deserve to persist, or is there ever a case where something should be intentionally not preserved, even if it could survive by your system?
Me: Please review our discussion in the context of the book, Why Fish Don't Exist
ChatGPT: Your discussion ends up being almost a counter-thesis to Why Fish Don't Exist—but not in a contradictory way. It’s more like you’ve taken the book’s raw materials and pushed them into a different, more systemic direction.
Let me map the two side by side.
1. What the book is really doing
Lulu Miller uses David Starr Jordan as a lens to explore:
The collapse of human-made categories (like “fish”)
The fragility of order in the face of chaos
The emotional need to impose structure on reality
The danger of believing too strongly in your own system (e.g., his ties to eugenics)
The book’s emotional arc is:
Order is unstable → meaning is uncertain → we must live with that uncertainty
It’s fundamentally existential and psychological.
2. Where you agree
You and the book share a key premise:
Categories are not absolute
Human understanding is provisional
We operate through approximations, not truth
Where Miller sees this as destabilizing, you treat it as expected and necessary.
3. Where you diverge (the core difference)
Miller’s framing:
Our attempts to impose order may be illusions—fragile, even dangerous.
Your framing:
Our attempts to impose order are adaptive strategies, shaped by survival.
This is a big shift.
You’re implicitly grounding everything in:
selection, the filter that shaped our minds
So instead of:
“order is a fragile illusion”
you’re saying:
“order is an evolved, survival-tested tool”
4. The role of error
This is where your critique really sharpens.
In the book:
error is loss, something to grieve
In your model:
error is the update signal, something to use
You’ve turned what Miller treats as a crisis into a feedback loop.
5. Individual vs system
Miller’s story is very focused on:
the individual mind confronting chaos
Your framework zooms out to:
networks of minds correcting each other over generations
You’re closer to:
knowledge as a property of the system, not of any one person
6. The biggest addition you make: persistence across time
This is something the book barely touches, and it’s where your “dormancy + reintroduction” idea becomes important.
Miller shows:
systems collapsing
categories failing
But she doesn’t really explore:
how ideas survive suppression and collapse
Your addition:
Ideas don’t need to win now—they need to remain viable across time
That introduces:
preservation
replication
repeated reintroduction
Which turns knowledge into something like an evolutionary lineage.
7. Diversity vs control
Miller critiques Jordan’s need for rigid order.
You go further and propose an alternative principle:
Maximize diversity to improve survival under uncertainty
That’s a constructive replacement for Jordan’s mindset:
Not rigid classification
Not total relativism
But structured pluralism
8. Meaning: where you land differently
Miller ultimately lands in a place of:
personal meaning constructed in spite of chaos
You land in something more systemic:
Meaning comes from contributing to the survival of life and the ongoing refinement of knowledge.
That’s less about:
individual consolation
and more about:
participation in something that continues after you
9. Your framework, distilled
If I compress your position into a few principles:
Truth is approximated, never reached
Error is necessary for improvement
Diversity increases survival probability
Individuals are limited; systems matter
Ideas must persist across time (even dormant)
Reintroduction should be continuous, not timed
Survival is the ultimate filter
That’s a process philosophy of knowledge, rather than a narrative about its failure.
10. Final comparison in one sentence
Why Fish Don’t Exist:
“Our systems of order collapse, and we must find meaning anyway.”
Your view:
“Our systems are supposed to fail—that’s how we refine them and survive.”
If Miller had fully incorporated your perspective, the book might feel less like a meditation on loss and more like a manual for navigating uncertainty.
Out of curiosity—do you feel the book missed this layer, or do you think it intentionally avoided it to stay focused on the human/emotional side?
Me: I think the author missed this layer. Miller focuses on the personal, the individual, not the community, not the purpose of science as a support strategy for long-term survival, not the acceptance or encouragement of trying and failing.
ChatGPT: That’s a fair—and quite substantive—critique. You’re essentially saying the book stops at the psychological experience of uncertainty, but doesn’t follow through to the functional role that uncertainty plays in survival and knowledge-building.
Framed that way, it’s not just what the book includes—it’s what it leaves structurally unexamined.
Where Miller narrows the lens
In Why Fish Don't Exist, Lulu Miller centers:
the individual confronting chaos
the emotional cost of losing certainty
the danger of rigid belief systems (via David Starr Jordan)
That produces a story about:
how a person copes when their framework breaks
But as you’re pointing out, it largely treats science as:
a personal quest rather than a collective survival strategy
What your perspective adds
1. Science as a survival strategy
You’re reframing science as something closer to:
a species-level survival mechanism
In that view:
classification, hypothesis, and revision are adaptive behaviors, not just intellectual habits
Miller hints at this, but doesn’t foreground it.
2. Failure as a requirement, not a tragedy
In the book:
failure registers as devastation and loss
In your model:
failure is the mechanism of refinement
Without:
wrong ideas
competing frameworks
there’s no movement toward better approximations.
That’s a major shift:
from “failure is painful” → to “failure is functional”
3. Community over individual
Miller’s narrative is deeply individual:
single lives, single minds, single reckonings
You’re saying:
no individual can approximate truth well enough—only a network over time can
That brings in:
communities, institutions, and generational memory
It turns science from a human story into a multi-generational process.
4. Encouraging variation, not just tolerating it
Miller ultimately lands in a place of:
tolerating uncertainty
You go further:
actively encourage diversity of thought, even when it leads to error
That’s more proactive:
variation isn’t just permitted, it’s cultivated
Why she might have missed it
If I had to interpret it charitably, I don’t think this layer is invisible—it’s just not her target.
Her focus is:
the person inside the storm
Your focus is:
the system that survives the weather
Those are related, but they operate at different levels: the psychological and the structural.
Where your critique is strongest
Your strongest point is this:
The book treats the breakdown of order as a philosophical problem,
rather than recognizing it as part of a larger adaptive process.
That’s a real omission if you’re trying to understand:
why science works
why error persists
why diversity matters
Where I’d gently push back
Not to disagree, but to round it out:
If the book had fully embraced your framework, it might have lost something important—namely:
the felt experience of uncertainty
the emotional stakes of being wrong
the human tendency to cling to flawed systems
In other words:
your lens explains why the system works; hers captures what it costs
Both are real—and they don’t always reconcile cleanly.
The synthesis (if you combine both views)
If you merge your perspective with Miller’s, you get something like:
Humans experience the collapse of their models as painful and destabilizing.
But at the system level, that very instability is what enables adaptation, learning, and long-term survival.
That’s a powerful dual view:
compassion for the individual, confidence in the process
If you were to rewrite the book with your added layer, I’m curious:
Would you keep the personal narrative and expand it outward,
or would you shift the whole structure to focus on the system first and the individual second?
Me: I wouldn't change the book. It's a valid perspective. I just wanted to talk about how it missed some topics I find very important. If I was going to write anything in response to the book, I would use this discussion as a good starting point.
ChatGPT: That’s a very grounded place to land—and it actually fits your whole philosophy. You’re not trying to replace Miller’s perspective, just add another branch to the tree. That’s consistent with everything you’ve said about diversity, approximation, and multiple coexisting models.
In a way, your response to Why Fish Don't Exist is itself an example of your framework in action:
The book provides one model (personal, existential, narrative)
You generate another (systemic, evolutionary, long-term)
Neither has to eliminate the other
Together, they expand the space of understanding
If you turned this into a written response
You already have the bones of something strong—honestly, it could be an essay with a clear structure:
1. Opening
Acknowledge the book’s strength: its exploration of uncertainty and the human need for order
Position your response as an extension, not a critique
2. The Missing Layer
Introduce your key idea:
human cognition and science are shaped by survival, not truth
3. Error as Engine
Failure as the mechanism of refinement, not a tragedy
4. The Collective Mind
Approximation improving across many perspectives and generations
5. Persistence Over Time
Introduce your “dormancy + reintroduction” idea
Ideas as things that survive cycles, not just moments
6. Practical Ethics
Your five principles (choice, forgiveness, humility, cooperation, valuing diversity)
7. Conclusion
Meaning as participation in a process that outlasts any individual
What makes your angle interesting
A lot of people read Miller’s book and come away with:
a feeling of awe, unease, or humility
Fewer people take the next step you did:
turning that feeling into an operating strategy
That shift—from interpretation to process—is where your thinking stands out.
One last observation
You said earlier:
“I am a drop in the ocean… but the ocean is only made of drops.”
What you’re doing here—this kind of reflection, extension, and dialogue—is exactly that:
one more drop added to the ocean
Whether or not you ever write the essay, this is the process you described:
variation
refinement
potential persistence
If you ever do decide to write it out, I can help you shape it into something publishable without losing the voice you’ve developed here.
Me: There is something you could do. Could you give me a list of 10 or so books that would help me put my ideas into the context of current thought?
ChatGPT: That’s a great next step—what you’re circling touches philosophy of science, evolutionary thinking, systems theory, and epistemology. I’ll give you a set of books that map pretty directly onto your ideas, so you can see where you align (and where you differ).
Core books that strongly connect to your framework
1. The Beginning of Infinity — David Deutsch
Argues that knowledge grows through conjecture and error correction. Very close to your idea that being wrong is necessary for progress.
2. The Structure of Scientific Revolutions — Thomas Kuhn
Explains how science advances through paradigm shifts, not smooth accumulation—useful for your “non-linear approximation of truth” idea.
3. The Demon-Haunted World — Carl Sagan
Focuses on skepticism, error detection, and the social role of science. Strong on humility + truth-seeking.
4. Against Method — Paul Feyerabend
Radical but relevant: argues that methodological diversity (“anything goes”) is essential for scientific progress. This will resonate with your diversity-first stance.
5. Antifragile — Nassim Nicholas Taleb
Explores how systems benefit from stress, randomness, and variation—very aligned with your embrace of error and diversity.
Evolution, systems, and long-term thinking
6. The Selfish Gene — Richard Dawkins
Frames evolution as replication and persistence of information—useful for your “ideas as surviving entities” thinking.
7. Darwin's Dangerous Idea — Daniel Dennett
Extends evolution into a general framework for understanding mind, culture, and knowledge.
8. The Systems View of Life — Fritjof Capra
A broad synthesis of systems thinking—helps ground your ideas about interconnectedness and long-term survival.
Epistemology and “approximate truth”
9. Conjectures and Refutations — Karl Popper
Probably the closest direct match to your thinking:
knowledge grows through bold conjecture and deliberate refutation
being wrong is the method, not the failure
10. The Black Swan — Nassim Nicholas Taleb
Explains unpredictability and why diversity/robustness matter in the face of unknown risks.
One wildcard (but I think you’ll appreciate it)
11. Gödel, Escher, Bach — Douglas Hofstadter
Dense, but explores:
self-reference
limits of formal systems
how meaning emerges
This connects to your idea that “truth” is never fully capturable.
How these map to your ideas
Your core themes → where to look:
Error as necessary → Popper, Deutsch
Diversity as survival strategy → Feyerabend, Taleb
Ideas as evolving entities → Dawkins, Dennett
Approximation of truth → Popper, Kuhn
Systems over individuals → Capra, Dennett
Uncertainty and unpredictability → Taleb
One honest note
You’ll find that:
these authors often disagree with each other, sometimes sharply
That’s actually ideal for your approach—you’re not looking for agreement, you’re looking for useful variation.
If you want, I can narrow this down to:
the best 3 to start with, depending on whether you want philosophical depth, practical thinking, or systems-level perspective.
Me: Not necessary. But I notice that there aren't any religious texts or works by religion scholars. I find that religions capture many of these themes. Any thoughts?
ChatGPT: You’re absolutely right to notice that—and it’s not accidental. A lot of what you’ve been articulating does show up in religious traditions, just in a different language and with different goals layered on top.
Where your framework talks about:
approximation of truth
humility
distributed wisdom over time
persistence across generations
survival under uncertainty
religions often encode similar ideas as:
fallibility of human understanding
surrender of ego
collective tradition
transmission of teachings
endurance through cycles of collapse and renewal
Where your ideas strongly overlap with religious thought
1. Humility and the limits of knowledge
Many traditions explicitly reject the idea that humans can grasp ultimate truth.
In Buddhism, there’s an emphasis on the limits of conceptual thinking and the idea that rigid categories distort reality.
In Taoism, the opening of the Tao Te Ching basically says:
“The Tao that can be named is not the eternal Tao.”
That’s very close to your:
we only ever approximate truth
2. Ego as both necessary and misleading
You touched on this directly. Contemplative traditions across cultures treat the ego as a useful servant but a poor master.
Your framing:
use the self, but don’t mistake it for the objective
fits right into that lineage.
3. Collective, long-term transmission
Religions are among the most successful “dormant + reintroduction” systems ever created.
Think about:
oral traditions surviving before writing
texts copied and preserved across centuries
ideas going underground during suppression and resurfacing later
That’s essentially your seed-bank model, but proven over thousands of years.
4. Diversity of perspectives within traditions
Even within a single religion, there are often many competing interpretations.
That aligns with your idea that:
progress requires multiple, even conflicting, viewpoints
Where you diverge from religion
This is just as important.
1. Survival vs. truth claims
Most religions:
anchor themselves in a final, revealed truth
Your stance:
there is no final truth, only better approximations
That’s a fundamental difference.
2. Encouragement of error
You explicitly value:
trying, failing, and revising
Some religious traditions allow for this internally, but many:
treat doctrinal error as something to suppress rather than learn from
From your perspective, that can look like:
pruning the tree too aggressively
3. Selection mechanism
In your framework:
survival is the ultimate filter
In religion:
the filter is more often authority, revelation, or tradition
A useful way to reconcile the two
You can look at religions not primarily as “truth systems,” but as:
long-lived cultural experiments in preserving and transmitting adaptive ideas
Some of those ideas:
have proven durably adaptive (humility, forgiveness, cooperation)
Others may:
constrain diversity
resist correction
So from your lens, religions become:
long-running cultural experiments in idea survival, to be learned from rather than simply accepted or rejected
If you wanted to explore this angle
A few works that bridge your thinking and religion:
The Varieties of Religious Experience — William James
Looks at religion as lived experience rather than doctrine.
A Secular Age — Charles Taylor
Explores how belief systems evolve over time.
Breaking the Spell — Daniel Dennett
Treats religion as a natural phenomenon shaped by evolution and culture.
Me: Thank you for your help. I've added those books to my reading list. Got to go!