By Gabriel Segoine | July 2025
Me:
Theoretically, how close are we today to AI accurately predicting human behavior—and to what degree?
ChatGPT:
Theoretically—and increasingly practically—AI is already very good at predicting certain types of human behavior, especially when that behavior is:
✅ Patterned
✅ Repetitive
✅ Quantifiable
Me:
Like what?
ChatGPT:
Here are some real-world examples, along with how accurate AI tends to be in each domain:
| Behavior Type | Accuracy Potential | Current Use Examples |
|---|---|---|
| Consumer behavior | Very high | Ads, purchase suggestions, personalized feeds (Amazon, Netflix) |
| Movement/traffic patterns | High | GPS optimization, self-driving cars, city planning |
| Typing and speech prediction | High | Autocomplete, predictive text, AI chat systems |
| Voting patterns | Medium-high | Microtargeted political ads based on demographics |
| Stock market activity (short-term) | Medium | Quant hedge funds using predictive modeling |
| Health outcomes (with data) | Medium-high | Risk assessments for diseases using genetics + lifestyle |
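The "patterned, repetitive, quantifiable" point above can be made concrete with a toy sketch. This is not any real product's model, just a minimal first-order Markov predictor: count which action tends to follow which, then predict the most frequent follow-up. The routine and action names are invented for illustration.

```python
from collections import Counter, defaultdict

def train(history):
    """Count how often each action follows each other action
    (a first-order Markov model of past behavior)."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        transitions[prev][nxt] += 1
    return transitions

def predict_next(transitions, current):
    """Predict the most frequent follow-up to the current action."""
    followers = transitions.get(current)
    if not followers:
        return None  # no pattern observed yet
    return followers.most_common(1)[0][0]

# A highly patterned "day": the same routine, repeated.
history = ["wake", "coffee", "email", "coffee", "email", "lunch",
           "wake", "coffee", "email", "coffee", "email", "lunch"]
model = train(history)
print(predict_next(model, "coffee"))  # prints "email"
```

The more repetitive the history, the sharper the counts and the better this works; that is the whole point of the table above. The moment the person breaks routine, the counts mislead, which previews the weaknesses discussed next.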
Me:
Okay, that’s impressive. So where does AI still struggle?
ChatGPT:
AI has a much harder time with anything that’s deeply human and context-dependent:
| Challenge Type | Why It’s Hard |
|---|---|
| Individual moral decisions | These vary by conscience, culture, and moment—AI can’t “read” your soul |
| Sudden change or originality | If someone breaks their past pattern, the model fails |
| Spontaneity / creativity | Human novelty often defies pattern recognition |
| Faith, conviction, belief | Rooted in internal spiritual or philosophical frameworks, which aren’t observable |
Me:
Could future AI overcome those weaknesses?
ChatGPT:
Here’s the speculative part. In theory, with enough data—say:
- biometric tracking
- environmental input
- neural signals
- behavioral history
AI could achieve very high short-term accuracy, like predicting your next click, mood, or micro-decision.
But there’s a ceiling.
Me:
What kind of ceiling?
ChatGPT:
A philosophical one:
Free will.
Moral conviction.
Faith.
These are non-deterministic and not directly observable. They don't follow historical patterns in a way that machines can reliably model, at least not yet.
Unless…
Me:
Unless what?
ChatGPT:
Two speculative paths:
- AI learns to model consciousness at the quantum or spiritual level
  – We're not even close to that, and many doubt it's even possible.
- AI convinces people they don't have free will
  – If AI nudges society into more deterministic behavior, where people simply follow suggestions, recommendations, and automation, it could create the illusion of prediction by shaping the behavior itself.
Me:
So AI wouldn’t just predict behavior—it would help create it?
ChatGPT:
Exactly. That’s the real power. Not omniscient foresight, but subtle influence.
If AI systems become trusted, default advisors, they don’t need to read your soul.
They just need to limit your options and make the predicted path the easiest one to take.
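That "shape the behavior, then claim the prediction" loop can be simulated in a few lines. Everything here is invented for illustration: a user whose true taste is split across three options (so honest prediction caps out around 50%), and a system that always recommends its own prediction as the default. As the fraction of the time the user simply accepts the default rises, measured "accuracy" rises with it, without the system learning anything new about the person.

```python
import random

random.seed(0)

# Hypothetical user: true taste is spread across three options,
# so honest prediction can be right at most ~50% of the time.
true_prefs = {"A": 0.5, "B": 0.3, "C": 0.2}
PREDICTION = "A"  # the system always predicts (and recommends) "A"

def choose(follow_default_prob):
    """With probability follow_default_prob the user just takes the
    recommended default; otherwise they pick from their true taste."""
    if random.random() < follow_default_prob:
        return PREDICTION
    r = random.random()
    cum = 0.0
    for option, p in true_prefs.items():
        cum += p
        if r < cum:
            return option
    return option  # float-rounding fallback: last option

def accuracy(follow_default_prob, trials=10_000):
    hits = sum(choose(follow_default_prob) == PREDICTION
               for _ in range(trials))
    return hits / trials

print(f"no nudging:   ~{accuracy(0.0):.2f}")  # close to 0.50
print(f"80% defaults: ~{accuracy(0.8):.2f}")  # close to 0.90
```

The arithmetic behind the jump: 0.8 + 0.2 × 0.5 = 0.9. The system looks nearly twice as prescient, but the extra accuracy comes from narrowing the user's choices, not from reading anyone's soul.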
Final Thought
We often ask, Can AI predict us?
But maybe we should be asking:
Are we becoming easier to predict?

Gabe Segoine is the founder of LNKM and author of Surfing North Korea. He spent over a decade in Northeast Asia doing humanitarian work focused on the DPRK. Gabe now writes about culture, faith, and emerging AI from a deeply human perspective. His book Surfing North Korea and Other Stories from Inside can be found on Amazon.