
If you’ve ever ordered dessert while swearing you were “just looking,” you’ve already met the sneaky side of your brain. That moment when your fork is halfway to your mouth and you think, Wait…I didn’t decide this? B. F. Skinner would smile knowingly.
Skinner, the unapologetic champion of behaviorism, argued that we don’t truly choose our actions at all; instead, we’re shaped, nudged, and trained by the rewards and punishments life throws at us. To him, free will was less a noble truth and more a bedtime story we tell ourselves to feel in control (Skinner, 1971).
And here’s the unsettling part: Neuroscience has been quietly backing him up. In the 1980s, Benjamin Libet hooked people up to EEG monitors and asked them to flex a finger whenever they felt like it. Simple enough. But the results weren’t simple at all. Brain activity started revving up before people said they’d decided to move. Your neurons, it seems, are a step ahead of your “self.”
Then in 2008, Chun Siong Soon and his team upped the ante with fMRI scans. Instead of fractions of a second, they could predict your choice up to seven seconds in advance. Seven seconds. That’s enough time to decide on coffee, regret it, and still have the scanner call your bluff. It’s like your brain is holding a meeting about your decision, and your conscious mind is the intern who gets the memo after it’s already signed.
Big Data Meets the Brain
Psychologists have known for decades that we’re not purely rational operators. Amos Tversky and Daniel Kahneman (1974) mapped out the mental shortcuts, biases, and heuristics that guide our choices. They’re fast, efficient, and often invisible to us, but also predictable.
Now add modern technology. Your smartwatch knows your resting heart rate and when it spikes. Your phone knows you searched “puppy adoption” at 2 a.m. and that you walk past three bakeries on your commute. Your streaming service knows the exact kind of comfort movie you reach for on bad days.
And that’s before we get to genetic testing or brain-computer interfaces. Picture all of it—your biology, habits, and moods streaming into one massive algorithm. Suddenly, predicting your next purchase, vote, or romantic decision doesn’t feel so sci-fi. An app could, in theory, know you’re about to quit your job before you even feel the first twinge of dissatisfaction.
But here’s the catch: The human brain is a moving target. Synapses strengthen or weaken depending on what we learn. Hormones shift with sleep, diet, or stress. A random joke from a friend can tilt your mood and change your plans. Predictive models thrive in stable conditions. Life is anything but.
Why Perfect Prediction Slips Away
Even if we could capture every spark of neural activity, we’d run into a different problem: People don’t like being predictable. Tell someone you know exactly what they’re going to do, and there’s a good chance they’ll change course just to prove you wrong. Prediction changes the predicted, like announcing the score of a game that hasn’t finished yet.
And then there’s the ethical earthquake. If machines could forecast your every move, what happens to the idea of responsibility? How do courts punish wrongdoing if the person “couldn’t have chosen otherwise”? Skinner imagined a society run by behavioral science, but most of us instinctively resist handing over the steering wheel, even if it turns out we were never really driving.
Libet himself, the man whose research shook free will’s foundations, wasn’t ready to throw it out completely. He suggested we might not have total control over starting an action, but we could still slam the brakes, a concept he called “free won’t.” And research on self-control, like Walter Mischel’s marshmallow experiments, shows we can learn to pause, reflect, and redirect our impulses (Mischel, 2014).
My Prediction
Neuroscience, AI, and big data are going to get scary good at reading us. Companies will spot when you’re most vulnerable to an ad. Doctors will see warning signs for illness before symptoms appear. Governments might use predictive models to guide policy and prevent crises.
But the same tools that could help us could also be used for harm. In the wrong hands, predictive systems could become weapons of manipulation, nudging public opinion in subtle ways, eroding privacy until personal autonomy feels like a relic, or enabling authoritarian regimes to suppress dissent before it even surfaces. The more precisely our intentions can be read, the easier it becomes to shape them without our awareness. It’s one thing to sell you a product you might like, but it’s another to quietly rewrite what you think you want.
Even with these powers, perfect prediction, the idea that every move we make could be forecast like next week’s weather, probably isn’t happening. Our brains are too complex, too adaptive, and too responsive to the fact that they’re being watched. Whether free will is a deep truth or the best illusion we’ve ever believed, it’s not disappearing anytime soon. We’ll keep surprising the algorithms, if only because somewhere inside us, we like to.

