Can AI Predict Human Behavior? A Q&A with ChatGPT

By Gabriel Segoine | July 2025

Me:
Theoretically, how close are we today to AI accurately predicting human behavior—and to what degree?

ChatGPT:
Theoretically—and increasingly practically—AI is already very good at predicting certain types of human behavior, especially when that behavior is:

  • Patterned
  • Repetitive
  • Quantifiable

Me:
Like what?

ChatGPT:
Here are some real-world examples, along with how accurate AI tends to be in each domain:

| Behavior Type | Accuracy Potential | Current Use Examples |
| --- | --- | --- |
| Consumer behavior | Very high | Ads, purchase suggestions, personalized feeds (Amazon, Netflix) |
| Movement/traffic patterns | High | GPS optimization, self-driving cars, city planning |
| Typing and speech prediction | High | Autocomplete, predictive text, AI chat systems |
| Voting patterns | Medium-high | Microtargeted political ads based on demographics |
| Stock market activity (short-term) | Medium | Quant hedge funds using predictive modeling |
| Health outcomes (with data) | Medium-high | Risk assessments for diseases using genetics + lifestyle |
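The "typing and speech prediction" row is the easiest of these to make concrete. Real systems use large neural language models, but the underlying idea — patterned, repetitive behavior is predictable — can be sketched with a toy bigram model that simply counts which word most often follows another. The training text and function names below are illustrative, not from any real product:

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# A hypothetical typing history: the more repetitive the behavior,
# the more confident the prediction.
history = "i am going to the store i am going to the gym i am going to the store"
model = train_bigram_model(history)
print(predict_next(model, "going"))  # "to" -- it always follows "going" here
print(predict_next(model, "the"))    # "store" -- seen after "the" more often than "gym"
```

The same limitation the article describes shows up immediately: a word the model has never seen after the input (or any break from past habit) yields no useful prediction.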

Me:
Okay, that’s impressive. So where does AI still struggle?

ChatGPT:
AI has a much harder time with anything that’s deeply human and context-dependent:

| Challenge Type | Why It’s Hard |
| --- | --- |
| Individual moral decisions | These vary by conscience, culture, and moment—AI can’t “read” your soul |
| Sudden change or originality | If someone breaks their past pattern, the model fails |
| Spontaneity / creativity | Human novelty often defies pattern recognition |
| Faith, conviction, belief | Rooted in internal spiritual or philosophical frameworks, which aren’t observable |

Me:
Could future AI overcome those weaknesses?

ChatGPT:
Here’s the speculative part. In theory, with enough data—say:

  • biometric tracking
  • environmental input
  • neural signals
  • behavioral history

AI could achieve very high short-term accuracy, like predicting your next click, mood, or micro-decision.

But there’s a ceiling.

Me:
What kind of ceiling?

ChatGPT:
A philosophical one:
Free will.
Moral conviction.
Faith.

These are non-deterministic and non-observable. They don’t follow historical patterns in a way that machines can reliably model—at least not yet.

Unless…

Me:
Unless what?

ChatGPT:
Two speculative paths:

  1. AI learns to model consciousness at the quantum or spiritual level
    – We’re not even close to that, and many doubt it’s even possible.
  2. AI convinces people they don’t have free will
    – If AI nudges society into more deterministic behavior—where people simply follow suggestions, recommendations, and automation—it could create the illusion of prediction by shaping the behavior itself.

Me:
So AI wouldn’t just predict behavior—it would help create it?

ChatGPT:
Exactly. That’s the real power. Not omniscient foresight, but subtle influence.
If AI systems become trusted, default advisors, they don’t need to read your soul.
They just need to limit your options and make the predicted path the easiest one to take.

Final Thought

We often ask, Can AI predict us?
But maybe we should be asking:

Are we becoming easier to predict?