Large language models like ChatGPT are extremely impressive. I’ve used ChatGPT to help me code playable computer games in C# in a matter of minutes.
But this has led some people to get a little carried away and begin making crazy proclamations. I’ve heard people claim that AI just needs “more data” before it can operate at a human level of intelligence and potentially even develop sentience.
Others are excited at the prospect of “uploading” human consciousness. All you need to do is model the connectome, right? We’ve already mapped the brains of basic worms; it’s only a matter of time.
I’m not denying that AI poses an existential threat, or that AGI is possible. But the notion that today’s AI is *anywhere* near as sophisticated as human intelligence is laughable.
Sure, large language models have HUGE amounts of training data. They use this to generate what is deemed to be the most *probable* response to any given prompt. This is what can result in some surprisingly human-like conversation and emergent behaviours.
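At its core, “pick the most probable response” is just a probability distribution over possible next tokens. Here’s a minimal sketch of that idea, using made-up scores and a three-word vocabulary in place of a real model’s billions of learned parameters:

```python
import math

# Toy "language model": hypothetical fixed scores over a tiny vocabulary.
# A real LLM computes these logits from its learned weights and your prompt.
vocab = ["cat", "sat", "mat"]
logits = [2.0, 1.0, 0.1]  # the model's raw scores for each candidate next token

# Softmax turns raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Greedy decoding: pick the most probable next token.
next_token = vocab[probs.index(max(probs))]
print(next_token)  # -> cat
```

Real systems usually *sample* from that distribution (with a temperature) rather than always taking the maximum, which is part of why responses vary between runs, but the underlying machinery is still “score every token, pick from the distribution.”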
The human brain, meanwhile, has roughly 86 billion neurons and over 100 trillion connections (called synapses). Sounds like a lot, but that’s not the most impressive bit at all.
What’s really impressive is that each of those neurons is capable of basic processing on its own, thanks to something called “dendritic computation.” This is not modelled by large language models or neural networks.
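To see the gap: an artificial “neuron” is just one weighted sum pushed through one nonlinearity. Here’s a sketch contrasting that with a crude caricature of dendritic computation, where each branch applies its own nonlinearity before the cell body combines them. (Real dendrites are far richer than this; the two-branch model is purely illustrative.)

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# A standard artificial "neuron": one weighted sum, one nonlinearity.
def point_neuron(inputs, weights):
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)))

# A crude caricature of dendritic computation: each dendritic branch
# applies its OWN nonlinearity to its inputs, and only then does the
# soma combine the branch outputs. The unit is no longer a single
# linear filter: computation happens inside the cell.
def two_branch_neuron(branch_a, branch_b, weights_a, weights_b):
    a = sigmoid(sum(i * w for i, w in zip(branch_a, weights_a)))
    b = sigmoid(sum(i * w for i, w in zip(branch_b, weights_b)))
    return sigmoid(a + b)

inputs = [1.0, 0.5, -0.5, 1.0]
weights = [0.8, -0.2, 0.5, 0.3]
print(point_neuron(inputs, weights))
print(two_branch_neuron(inputs[:2], inputs[2:], weights[:2], weights[2:]))
```

One consequence researchers have pointed out: a single biological neuron with nonlinear dendrites can compute functions (like XOR-style patterns) that a single point neuron mathematically cannot, so “86 billion neurons” undersells the brain considerably.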
Consider, too, that human brains are not static. Were you to upload a snapshot of the human connectome to a computer, it would be obsolete within seconds. Thanks to brain plasticity, the human brain is CONSTANTLY remodelling itself.
Again, we don’t have an algorithm to accurately emulate this process.
Consider short-term plasticity, which means some pathways are more or less likely to fire based on recent activity.
Consider white matter and glial cells, which maintain neurons and their connections and even facilitate communication.
Consider neurotransmitters that excite and inhibit neurons on a global and local level. There are neurotransmitters we’ve yet to even discover.
How about embodied cognition? Gut bacteria. Inflammation.
None of this is represented in our primitive neural networks. All of it, undoubtedly, plays a HUGE role in the way we experience the world. And I’m just scratching the surface. You’d need to emulate real-world physics to be in with a chance.
It’s hubris to think AIs are even CLOSE to human-like thought.
And when Elon Musk says he’s going to “replace the need for spoken communication” with his Neuralink by interpreting the signals of the brain… To that I say: good luck, ya noodle!