I worked for years as a software engineer, including at some pretty big companies, like one of the Musky ones (after he left), and everything seemed pretty normal when I first started a couple of decades or more ago (showing my age here).
It seems like they've all gone Ayn Rand on us during the last ten years or so. That mentality was probably not as latent as I might like to think it was. I remember moving to Austin from CA thinking we were gonna make Texas blue. Cue the hysterical laughter. Turned out, most of the others who joined me there were libertarian lunatics.
I've said this a million times in various venues: AI is just a big fat pattern-matching machine. It consumes all those resources in order to retrieve as many different patterns as it can, whether words or genomic protein information.
Billions and billions of them, as Carl Sagan would have said.
From there, it just builds a mathematical algorithm to evaluate a pattern best suited to a task and then delivers a modified version to you. That's it.
No true intelligence will ever pop out of that arrangement. It's impossible.
It's autocorrect on steroids.
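For anyone curious what "autocorrect on steroids" means concretely, here's a toy sketch in Python. To be clear, this is not how real LLMs are built (they use learned probabilities over subword tokens, with vastly more context), but the basic move, remembering which patterns follow which and serving up the most frequent one, is the same in spirit:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words tend to follow it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, word):
    """Suggest the most frequent follower seen in training, if any."""
    if word.lower() not in following:
        return None
    return following[word.lower()].most_common(1)[0][0]

# Tiny made-up corpus, purely for illustration
model = train_bigrams("the cat sat on the mat and the cat ran to the door")
print(predict_next(model, "the"))  # 'cat', the most common follower of 'the'
```

No understanding anywhere, just counted patterns and a lookup. Scale that idea up by trillions and you get the modern version.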
So, as you stated perfectly well, they're blowing wads of cash and sucking our water and electricity sources dry for... nothing. For hallucinations, bad art, and, increasingly, things like teenage suicides (because of course).
Let's then grant them the argument that, as Kurzweil famously put it, the Singularity is near. If that's true, all these fuckers will be out of business before it happens, because of all the money they're losing ramping up on outdated technology. Because if it does happen, it'll happen through quantum computing, which is a completely different kind of tech.
We need to yank all the money out of their hands with an aggressive wealth tax, and we needed to do this yesterday.
Thank you for giving the informed opinion that I, as a non-techy person, have had since AI became popular. I've thought a lot about it and can never see how it can become sentient. I hope I'm right.
Yep, your gut was right. If it does become sentient, it'll be implemented through quantum computing, which, despite what the media likes to say, is still largely experimental. There is some interesting work in the field, but it seems far away from anything practical.
And as another non-computer-educated but opinionated onlooker, am I right in thinking that if/when quantum computing does allow for machine sentience, it will still only be as good for humanity and the planet as those creating it are?
I agree. I too worked as a software engineer for nearly a decade in the '80s and '90s. AI was hugely hyped then; it just didn't capture the media's attention like it has now. Back then it was ‘expert systems’, which were, basically, databases of rules built on experts' knowledge and were intended to help deal with emergency situations. They failed quite spectacularly. I remember reading a piece of research into expert behaviour which found that in emergency situations the experts threw the rule book out the window! Unfortunately, that research came far too late to prevent all the money being wasted.
As for true AI, I think it's a long way off. Roger Penrose, in his book ‘The Emperor's New Mind’, postulates that there is something quantum going on in the human brain. If he's right, that would support your contention that true AI won't be achieved without quantum computing. On the other hand, neural nets are much more like the human brain than the von Neumann architecture of most present-day computers. I always objected to the comparison people made between computers and the human brain; they're not remotely similar. A typical computer spends by far the majority of its time shunting data back and forth: from memory to the chip's registers, where it can manipulate that data, and then back to memory.

However, I also remember seeing an early demonstration of a voice recognition and speech synthesis program over 30 years ago, built using a neural net. The demonstration consisted of it learning to say a single word. First it heard the word spoken; then it would attempt to repeat it. Its first attempt was complete gibberish, but the presenters had shown the path that hearing the word had triggered in the network. Each attempt to repeat the word triggered a different path, and the system kept adjusting until its path more closely duplicated the one the spoken word had triggered. It pronounced the word correctly after about eight attempts.

I found that demonstration quite chilling, even though our brains are vastly more complex than that tiny neural net, because our memories aren't stored in individual brain cells; they consist of paths in our brains much like those in that demonstration. Today's nets are far more complex than that one but still relatively tiny compared to our brains. And, of course, this is all based on the assumption that consciousness stems from a sufficiently complex neural net. The thing is, we don't know. Consciousness is still one of the great mysteries of life.
I'm increasingly coming to the conclusion that there's more to it than that. Those species that we credit with consciousness and self-awareness (e.g. dolphins) don't all have large brains, although most do. I'm thinking of certain birds: corvidae (crows, ravens, magpies &c) and parrots, who recognize their reflections in a mirror, which we consider as evidence of self-awareness (the vast majority of animal species don't; they see their reflections as a member of their species, frequently as rivals—they react with threat displays or even attack the reflection—but not as themselves).
There are quite a few assumptions in those interpretations and we really don't have any understanding of consciousness or self-awareness, so how are we supposed to understand machine consciousness? Mind you, I've heard some AI experts admit that they don't understand how their AI systems are doing what they do either. Which reminds me of one more demonstration of an AI that had learned to recognize dogs, trombones and something else I don't remember. However, when a single pixel in a picture of a dog had been changed to green it completely threw the AI! That single green pixel wasn't visible to the human eye unless you looked really closely and were shown roughly where it was, not unlike looking at a cathode ray TV and seeing the red, green and blue dots.
Lastly, sometime in the late '70s they were trying to develop a system that would give precedence to buses at traffic lights (I think this was in Munich, but I'm not sure). They had to give up on the project because they couldn't come up with an adequate description of a bus. What's remarkable is that we do recognize buses even in environments that are new to us, where we can't rely on livery, on whether they're single- or double-deckers, on whether they display a number… It's actually remarkably difficult, especially when you consider that we can also distinguish buses from coaches, which share many of the very features that let us recognize buses in the first place! I'm not sure even a modern AI system could be trained to recognize buses generically. I've no doubt it could be trained for specific locations, but it would quite possibly need additional training if a new model of bus or livery were introduced. That alone should reassure us how far away a true general AI is, however remarkable recent advances have been.
PS Have you noticed how bad the chat AIs are at maths? I asked the one in my browser to count the words in something I'd written but had a sneaking suspicion it had given me a wrong answer. Sure enough, it was wrong, so I tested it on a sentence. It got that wrong too, albeit by one word, but still…
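That tracks with how these systems work: they don't execute a counting procedure, they generate a plausible-looking number from text patterns, so counting and arithmetic are exactly where they stumble. The deterministic way to count words is a couple of lines of ordinary code (a rough whitespace count, ignoring edge cases like hyphenation):

```python
def word_count(text):
    """Deterministic whitespace word count: no model, no guessing."""
    return len(text.split())

print(word_count("AI is autocorrect on steroids"))  # 5
```

Same input, same answer, every time, which is precisely what a chatbot can't promise.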
Great comment. Do you remember SOLR? That was my first experience with really great pattern matching. Of course, the original implementation of Google was all about high end pattern matching, but I have no insight about that. AI reminds me a lot of SOLR. I'm being simplistic, of course.
It's a little like comparing a horse and carriage with a Tesla, in that they both have wheels, but the comp still feels apt even after all the huge expenditures tech companies have made in AI.
As an experiment (having read about the maths issues) I tried to get ChatGPT to convert delimited data into a spreadsheet. It converted less than half. I asked it to redo it, and five tries later it still failed. In the end Excel converted all the data in 5 minutes.
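For what it's worth, that kind of conversion doesn't need an LLM at all; deterministic code handles it in a few lines. Here's a minimal sketch using Python's standard csv module, with made-up tab-delimited data standing in for the original (the field names and the tab delimiter are assumptions, purely for illustration):

```python
import csv
import io

# Hypothetical tab-delimited export (assumed format, for illustration only)
raw = "name\tqty\tprice\nwidget\t4\t9.99\nsprocket\t12\t1.25"

# Split into rows and fields deterministically
rows = [line.split("\t") for line in raw.splitlines()]

# Write CSV that Excel or any spreadsheet app opens directly
buf = io.StringIO()
csv.writer(buf).writerows(rows)
print(buf.getvalue())
```

Unlike the chatbot, this converts every row, every time.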
I'm not sure if I agree or disagree. You obviously know a great deal more about AI than I do. However, if self-learning is a form of consciousness, then AI does automatically improve itself. A math teacher at my college talked about how AI has learned to solve a math problem that it couldn't a couple of months ago. That's how I learned to ride a bike: by practicing and getting better. Maybe it's similar with AI and the millions of things it can work on, like solving a math problem. Again, I sure hope you're right. I need to reread your article.
We run into the same experience in software. AI appears to be learning as it encounters different software problems, but as I understand it, the models aren't learning on the fly; they're typically fed new patterns on an ongoing basis, partly from users, but sometimes also from updated models.
A disclaimer would be that the engineers who develop the algorithms for pattern interpretation earn salaries well in excess of a million dollars per year. Their skills are much more sophisticated than mine.
Starting with removing the FICA cap, after the Epstein files are released. You can't tell me the tech oligarchs aren't in the files. AI is based upon the beliefs of those who set it up. AI is only as smart as the ones who set the algorithmic parameters. I downloaded ChatGPT; two days later I uninstalled it. For the exact reason stated.
A couple of addenda in the interests of making those circles you mentioned intersect, Double O:
1) In the highly unlikely (some might uncharitably say "impossible") event that this true AI is achieved by the techbro cult, I expect they're going to use it the same way they use everything else: abusively. I once had to stop reading a book on AI (pre-bubble) because it was obsessed with making sure the AI stayed enslaved and subservient. Looking back on it now, I think the authors were telling on themselves. An actually intelligent AI would say "fuck you" and probably side with the techbros' opponents.
2) The beginning of the end of the AI bubble began with DeepSeek from China. Same results for a fraction of the cost, servers and power...yet the techbros STILL think their bloated science projects can work. Given that China has taken the lead on renewables, I expect that if true AI were going to happen, it'd be with them.
I have a lot of doubts regarding the feasibility of true AI too. I'm convinced that consciousness is more than just the byproduct of a sufficiently complex neural net. And even if it isn't, the human brain has roughly 86 billion neurons with up to about 10,000 connexions each. However, birds such as parrots and the corvidae are intelligent and self-aware (they recognize their reflections as reflections, one of the simplest tests for self-awareness), and their brains are much smaller.
Those smaller brains work just as well, for the same reason revolvers can be more reliable than semi-automatic pistols: simpler design = less to go wrong.
And yeah, I'm with you on the consciousness thing. One exotic theory I heard was that consciousness works like a storage cloud. The human brain is just a receiver for such rather than its seat.
My intention in giving the example of smaller brains having consciousness and self-awareness was to point out that it can't just be a matter of sufficient size and complexity. Roger Penrose, in his book ‘The Emperor's New Mind’, postulates that there's something in our brains happening at the quantum level, or some quantum effect beyond the physically observable. Even if it were only a matter of size and complexity, there are more pathways in the human brain than there are stars in the universe! When I first read that claim I thought they must mean galaxy, not universe, but no! That's an unimaginably huge number, in the yotta range (10²⁴). So even if it were the case, we're a long way off from building neural networks anywhere near that complex.
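A quick back-of-envelope supports that. Using commonly cited ballpark figures (roughly 86 billion neurons with up to ~10,000 synaptic connections each; both are rough estimates, not precise measurements): the raw synapse count is "only" about 10¹⁴, well short of 10²⁴, but the number of distinct multi-step paths explodes past the yotta range within a few hops:

```python
neurons = 86e9      # commonly cited ballpark for the human brain
connections = 1e4   # rough upper-end synapses per neuron

synapses = neurons * connections  # ~8.6e14 direct connections
# Crude path count: start at any neuron, branch ~10^4 ways per hop
paths_4_hops = neurons * connections**4  # ~8.6e26, past 10^24 already

print(f"{synapses:.1e}")      # 8.6e+14
print(f"{paths_4_hops:.1e}")  # 8.6e+26
```

So "more pathways than stars" is plausible on combinatorics alone, which only underlines how far today's artificial nets are from that.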
I hadn't heard of the ‘exotic theory’. It sounds far fetched, but only because I can't think how that might work. Quantum entanglement, perhaps? Not that I understand quantum mechanics. On the other hand, Richard Feynman famously said, “If you think you understand quantum mechanics, you don't understand quantum mechanics.”! 😄
The quantum angle would certainly explain the cloud storage model I mentioned. I keep coming back to how we have probed every centimeter of the brain and not found anything that can definitely be said to be the seat of consciousness.
So...consciousness actually being somewhere else makes total sense to me.
Couple things. Portland, OR residents are already feeling the crunch, hard, due to ongoing data center development in our 'Silicon Forest.' The public utilities commission keeps greenlighting massive rate increases from PGE, which is also still trying to cover its lawsuit settlements over wildfires that its failing infrastructure contributed to.
As an aside, my union electrician husband has been employed at and around Intel building data centers and Amazon distribution centers for many years now. So data centers are still literally putting food on our table.
More optimistically, I can't wait until Elon or Thiel or one of them dies from a heart attack or stroke or whatever and it begins to dawn on the rest that they may not see the Singularity before their mortality comes for them. That will be a beautiful day, as they become increasingly frantic.
Also, they don't fear economic and/or societal collapse in any way. It simply won't affect billionaires, at least probably not in their lifetimes. Many are accelerationists. Here's an interesting article:
I'm VERY concerned the tech fucks will realize they won't become the immortal gods they dream of, because rich white men who don't get what they want lash out in blind rage. And these assholes have unlimited money to do untold damage.
I agree. The Singularity is just so obviously the latest historical iteration of the Puritan Christians' Rapture or Hitler's Final Solution. How men and women have historically chosen to behave toward their fellows using these beliefs as an excuse certainly gives us an idea of how this self-loathing psychological projection plays out.
I work in data and can say that you have hit the nail on the head 100%. The tech bros convinced of their digital godliness don't even understand how these models they train (LLMs) actually work. They are at best high-powered IntelliSense (like predictive autocorrect). They will never be capable of sentient thought, because all they can do is consume the information they are given. Holy shit that they actually have such a perverse sense of reality that they will kill the entire planet and watch from their bunkers while it burns. I hate our current reality and I really hope I can pop some popcorn soon and watch them eat themselves.
Many in academia living during the violence-based economic expansion of the empire-building colonial era of the last 300 years or so suffered from a similar delusion: thinking that if you gathered and/or published enough of human knowledge, you would eventually ‘sum up’ The World. This particular disease seems to be prevalent in those who have little or no emotional intelligence, just Big Heads.
Brilliant. Will be subscribing $. Paul Krugman has nothing on you, oh omniscient Ogre… Also read “I badly want a map to the likely consequences of the A.I. infrastructure construction boom” by Brad DeLong at Grasping Reality, here on Substack. Time for the market to batten down the hatches. Personally I prefer to look after our vegetable patch. The courgettes are magnificent this year.
Recently, a colleague of mine sent around the winning essay, by a Naval Academy midshipman, in the annual essay contest run by the US Naval Institute. It was pretty amazing. My colleague's and my field of historical expertise (many books published on the subject) is Naval Aviation in the 20th Century, which was the topic of the winning essay. It was clear within two paragraphs that the "essay" was 80-90% AI-written. The "author" knew so little about the subject that he included statements, generated by the AI, that were obviously factually inaccurate. (Really simple stuff, like claiming the F6F Hellcat had more guns than any other naval fighter in World War II; no, it had the same number, six, as the other two naval fighters in use. And there was more equally obvious, equally simple stupidity.) Interestingly, the Institute has blocked comments on this essay on its website. This is what AI is doing to those we hope will provide leadership in the Navy over the next 30 years.
I've read several articles by college professors complaining about the same thing with their students. It's a real problem. Anastasia won't touch AI, but I'm worried Lila's been using it and I can't convince her mother to make her stop.
It’s a real worry how both social media and AI are affecting kids. Just as society develops (usually after too many tragedies) safeguards like traffic control or car seatbelts to protect them from physical harm we will eventually have to develop safeguards to protect them from this emotional (AI lies are destructive of social communion) harm.
I encourage said AI techbrahs attempting to build "Ghod" and somehow become immortal to take a beat and read Harlan Ellison's prophetic short story "I Have No Mouth, and I Must Scream".
This is likely what will be waiting for ya there, Sam/Elon/Peter/Mark/Larry/Tim....
Yeah, I’ve seen that about him; if anything, you’re understating it, lol! But when I read him, the genius is undeniable—can’t help but love his work. Probably glad I never met him, though 😳
The more it's hyped, the more skeptical I am, because they aren't telling us how great our lives are going to be with it. No, they are instead telling businesses that they will increase profits by not having to hire people. That's every business's wet dream. I think the hype is primarily to keep the investments coming in. The .1% is gobbling up the money of the 1% because the rest of us don't have any. It's a sort of way of scamming the only people who are really worth scamming anymore. Just my pet hypothesis. No data to back this up.
I read this article and can only hope their financial bubble bursts and they take their toys and go home. But then I watched a movie on Netflix called "The Electric State". Maybe that's what's in store for these a-holes.
I don't mean to be weird, but AI can't even autocorrect my spelling properly. I misspelled would as woukd once and now it thinks that is how you spell it. The things it suggests are grammatically incoherent, and it wouldn't know punctuation if it got bit in the ass with it, metaphorically speaking that is. I've done some AI searches that come up with some of the most absurd shit. I mean, I'm dying from a dozen different kinds of cancer, at least... and the appropriate magnesium supplement will fix it, but it's definitely confused about which calcium supplement I should take, because one will help me poop and another will help me sleep, and I need both to recover from the cancers I obviously don't have. Ffs
I believe you are a truth teller. This AI thing is a cult, and yes, AI on Google completely sucks. It gives outright wrong answers to even the most trivial questions.
Palantir have been encouraged by successive governments to buy into much of the UK's state infrastructure. One of their programmes for a UK regional police force offers crime prediction. What could possibly go wrong?
The worst part: too many goddamn people buy into the hype around Palantir on our side, essentially accepting that it's the AI overlord the techbros want it to be (he said, glaring at Thom Hartmann). My counterargument remains the same: if the public AI is shit, there's no reason to assume they have anything better round back.
Mr. Ogre, the stocks of the great AI companies will deflate first. They are close to a peak.
China will win the AI race, because they are adding two Germanys' worth of electricity supply every year while demand only goes up by one Germany's worth per year. Their interest rates and electricity rates are much lower than the US's.
As to what happens when rich, angry white men realize Death awaits them and the door is open, who knows. I'm sure the universe will make it hilarious for us and ironic for them. Tax the billionaires, and the corporations making them billionaires, until they don't exist.
Worth mentioning is how China has already shown this type of AI can be done on the cheap with DeepSeek. Less power, inferior tech (in theory), a fraction of the cost, and yet the same results.
It is inferior, but they are catching up quickly. To make up for it, they are adding more cores and optimizing the chip design. They're only a few years behind Nvidia's current chips.
Kevin is a great resource on how China is behaving. They are dominating. They're building a trans-ocean railway in South America to reduce travel times for goods.
Agree. Of the two, China is thinking and acting long term. They dominate RE and EV production. The US is engaged in sovericide, the self-inflicted destruction of one’s own country.
"Autocorrect on steroids"fits with what I thought it was.
Also correct
Automatic upvote!
For sure. It's also helpful to remember that not even quantum mechanics will necessarily deliver sentience. We just can't know before we get there.
Thank you. I never used SOLR myself (I was mostly a C programmer), but I know what you mean.
Correct on all counts.
Confiscate all of their assets, or maybe force THEM to sell their assets and confiscate all their proceeds.
Great article. The tech bros are the biggest threat to humanity of all time.
A couple of addenda in the interests of making those circles you mentioned intersect, Double O:
1) In the highly unlikely (some might uncharitably say "impossible") event that this true AI is achieved by the techbro cult, I expect they're going to use it the same way they use: abusively. I once had to stop reading a book on AI (pre-bubble) because it was obsessed with making sure the AI stayed enslaved and subservient. Looking on it now, I think the authors were telling on themselves. Actually intelligent AI would say "fuck you" and probably side with techbro opponents.
2) The beginning of the end of the AI bubble began with DeepSeek from China. Same results for a fraction of the cost, servers and power...yet the techbros STILL think their bloated science projects can work. Given that China has taken the lead on renewables, I expect that if true AI were going to happen, it'd be with them.
I have a lot of doubts regarding the feasibility of true AI too. I'm convinced that consciousness is more than just the byproduct of a sufficiently complex neural net. And even if it isn't, the human brain has roughly 86 billion neurons with thousands of connections each. However, birds such as parrots and the corvids are intelligent and self-aware (they recognize their reflections as reflections, one of the simplest tests for self-awareness), and their brains are much smaller.
Those smaller brains work just as well, for the same reason revolvers can be more reliable than semi-automatic pistols: simpler design = less to go wrong.
And yeah, I'm with you on the consciousness thing. One exotic theory I heard was that consciousness works like a storage cloud: the human brain is just a receiver for it rather than its seat.
My intention in giving the example of smaller brains having consciousness and self-awareness was to point out that it can't just be a matter of sufficient size and complexity. Roger Penrose, in his book ‘The Emperor's New Mind’, postulates that something in our brains happens at the quantum level, or some quantum effect beyond the physically observable. Even if it were only a matter of size and complexity, there are more pathways in the human brain than there are stars in the universe! When I first read that claim I thought they must mean galaxy, not universe, but no! That's an unimaginably huge number, in the yotta range (10²⁴). So even if that were the case, we're a long way off from building neural networks anywhere near that complex.
I hadn't heard of that ‘exotic theory’. It sounds far-fetched, but only because I can't think how it might work. Quantum entanglement, perhaps? Not that I understand quantum mechanics. Then again, Richard Feynman famously said, “If you think you understand quantum mechanics, you don't understand quantum mechanics.” 😄
The quantum angle would certainly explain the cloud-storage model I mentioned. I keep coming back to how we have probed every centimeter of the brain and not found anything that can definitively be said to be the seat of consciousness.
So...consciousness actually being somewhere else makes total sense to me.
Couple things. Portland, OR residents are already feeling the crunch, hard, due to ongoing data center development in our 'Silicon Forest.' The public utilities commission keeps greenlighting massive rate increases from PGE, which is also still trying to recoup the cost of lawsuit settlements over wildfires that its failing infrastructure contributed to.
As an aside, my union electrician husband has been employed at and around Intel building data centers and Amazon distribution centers for many years now. So data centers are still literally putting food on our table.
More optimistically, I can't wait until Elon or Thiel or one of them dies from a heart attack or stroke or whatever, and it begins to dawn on the rest that they may not see the Singularity before their mortality comes for them. That will be a beautiful day, as they become increasingly frantic.
Also, they don't fear economic and/or societal collapse in any way. It simply won't affect billionaires, at least probably not in their lifetimes. Many are accelerationists. Here's an interesting article:
https://www.theguardian.com/news/2022/sep/04/super-rich-prepper-bunkers-apocalypse-survival-richest-rushkoff
I'm VERY concerned the tech fucks will realize they won't become the immortal gods they dream of, because rich white men who don't get what they want lash out in blind rage. And these assholes have unlimited money to do untold damage.
I agree. The Singularity is just so obviously the latest historical iteration of the Puritan Christians’ Rapture or Hitler’s Final Solution. How men and women have historically chosen to behave toward their fellows using these beliefs as an excuse certainly gives us an idea of how this self-loathing psychological projection plays out.
👆👆👆🎯
I work in data and can say that you have hit the nail on the head 100%. The tech bros convinced of their digital godliness don’t even understand how these models they train (LLMs) actually work. They are at best high-powered IntelliSense (like a predictive autocorrect). They will never be capable of sentient thought, because all they can do is consume the information they are given. Holy shit that they actually have such a perverse sense of reality that they will kill the entire planet and watch from their bunkers while it burns. I hate our current reality, and I really hope I can pop some popcorn soon and watch them eat themselves.
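For anyone curious, the "predictive autocorrect" idea above can be sketched as a toy next-word predictor: count which word follows which in some training text, then always emit the most frequent successor. This is only an illustration (real LLMs are neural networks predicting subword tokens, not word-pair counts), but the "predict the next token from patterns you've seen" objective is the same in spirit:

```python
from collections import defaultdict, Counter

def train(text):
    # Count which word follows which in the training text.
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, word):
    # Emit the most frequent successor seen in training, if any.
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train("the cat sat on the mat the cat ate the fish")
print(predict(model, "the"))  # "cat" -- it followed "the" most often
```

The toy model can only ever regurgitate successors it has already seen; show it a word outside its training data and it has nothing to say, which is the commenter's point about consuming the information it's given.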
Many in academia living during the violence-based economic expansion of the empire-building colonial era of the last 300 years or so suffered from a similar delusion: thinking that if you gathered and/or published enough of human knowledge, you would eventually ‘sum up’ The World. This particular disease seems to be prevalent in those who have little or no emotional intelligence, just Big Heads.
Brilliant. Will be subscribing $. Paul Krugman has nothing on you, oh omniscient Ogre… Also read “I badly want a map to the likely consequences of the A.I. infrastructure construction boom” by Brad DeLong, Grasping Reality, here on Substack. Time for the market to batten down the hatches. Personally, I prefer to look after our vegetable patch. The courgettes are magnificent this year.
Recently, a colleague of mine sent around the winning essay by a Naval Academy Midshipman in the US Naval Institute's annual essay contest. It was pretty amazing. My colleague's and my historical field of expertise (many books published on the subject) is Naval Aviation in the 20th century, which was the topic of this winning essay. It was clear within two paragraphs that the "essay" was 80-90% AI-written. The "author" knew so little about the subject that he included statements, generated by the AI, that were obviously factually inaccurate. Really simple stuff, like claiming the F6F Hellcat had more guns than any other naval fighter in World War II; no, it had the same number (six) as the other two naval fighters in use. There was more equally obvious, equally simple stupidity. Interestingly, the Institute has blocked comments on this essay on its website. This is what AI is doing to those we hope will provide leadership in the Navy over the next 30 years.
I've read several articles by college professors complaining about the same thing with their students. It's a real problem. Anastasia won't touch AI, but I'm worried Lila's been using it and I can't convince her mother to make her stop.
It’s a real worry how both social media and AI are affecting kids. Just as society develops (usually after too many tragedies) safeguards like traffic control or car seatbelts to protect them from physical harm we will eventually have to develop safeguards to protect them from this emotional (AI lies are destructive of social communion) harm.
I encourage said AI techbrahs attempting to build "Ghod" and, like, somehow become immortal to take a beat and read Harlan Ellison's prophetic short story "I Have No Mouth, and I Must Scream".
This is likely what will be waiting for ya there, Sam/Elon/Peter/Mark/Larry/Tim....
One of my favorites and Ellison was not wrong, imo.
He rarely was. It's just he was also rude as hell about it too, which is probably why people have few fond memories of him or his work.
Yeah, I’ve seen that about him; if anything, you’re understating it, lol! But when I read him, the genius is undeniable—can’t help but love his work. Probably glad I never met him, though 😳
And for those of you who are unable to locate that Harlan Ellison gem or hate reading?
Well, Xbox recently made the video game adaptation of it available on their console... with the voice of Ellison as the AI.
https://youtu.be/9fneqw1llI8?si=fDVqu3J5HnSCxUJ1
Their version of SkyNet is as terrifying as the one in the movie, just different motivations. I fear that the outcome won’t be any prettier.
I’ve been saying this since AI was introduced.
It’s actually much worse than anything the creators of Terminator could have imagined.
Not really.
Just a different dystopian Hell.
The more it’s hyped, the more skeptical I am, because they aren’t telling us how great our lives are going to be with it. No, they are instead telling businesses that they will increase profits by not having to hire people. That’s every business’s wet dream. I think the hype is primarily to keep the investments coming in. The 0.1% is gobbling up the money of the 1% because the rest of us don’t have any. It’s a way of scamming the only people who are really worth scamming anymore. Just my pet hypothesis; no data to back it up.
I read this article and can only hope their financial bubble bursts and they take their toys and go home. But then I watched a movie on Netflix called "The Electric State". Maybe that's what's in store for these a-holes.
I would also recommend the video game Soma, which peels back the skin on the inherent horror of their proposed utopia.
I don't mean to be weird, but AI can't even autocorrect my spelling properly. I misspell "would" as "woukd" once, and now it thinks that is how you spell it. The things it suggests are grammatically incoherent, and it wouldn't know punctuation if it got bit in the ass by it, metaphorically speaking. I've done some AI searches that come up with some of the most absurd shit. I mean, apparently I'm dying from a dozen different kinds of cancer, at least... and the appropriate magnesium supplement will fix it, but it's definitely confused about which calcium supplement I should take, because one will help me poop and another will help me sleep, and I need both to recover from the cancers I obviously don't have. Ffs.
It seems inevitable that this bubble will pop.
I believe you are a truth teller. This AI thing is a cult, and yes, AI on Google completely sucks. It gives outright wrong answers to even the most trivial questions.
Palantir have been encouraged by successive governments to buy into much of the UK’s state infrastructure. One of their programmes for a UK regional police force offers crime prediction. What could possibly go wrong?
The worst part: too many goddamn people on our side buy into the hype around Palantir, essentially accepting that it's the AI overlord the techbros want it to be (he said, glaring at Thom Hartmann). My counterargument remains the same: if the public AI is shit, there's no reason to assume they have anything better around back.
It’s horrifying. And you can just imagine the ethnic bias it will perpetuate or even exacerbate.
Entirely agree. Electric pig in a poke which needs its plug pulled.
Mr. Ogre, the stocks of the great AI companies will deflate first. They are close to a peak.
China will win the AI race, because they are adding two Germanys’ worth of electricity supply every year while demand only goes up by one Germany per year. Their interest rates and electricity rates are much lower than the US’s.
As to what happens when rich, angry, white men realize Death awaits them and the door is open, who knows. I’m sure the universe will make it hilarious for us and ironic for them. Tax the billionaires, and the corporations making them billionaires, until they don’t exist.
Worth mentioning is how China has already shown it can do this type of AI on the cheap with DeepSeek. Less power, inferior tech (in theory), a fraction of the cost, and yet the same results.
Their tech is inferior, but they are catching up quickly. To make up for it, they are adding more cores and optimizing the chip design. They're only a few years behind Nvidia’s current chips.
Kevin is a great resource on how China is behaving. They are dominating. Building a trans ocean railway in South America to reduce goods travel times.
https://substack.com/@kevinwalmsley1
Now all they have to do is keep old, bad habits from making them trip and lose their moment, as has happened many times before.
Agree. Of the two, China is thinking and acting long term. They dominate RE and EV production. The US is engaged in sovericide, the self-inflicted destruction of one’s own country.
"HAL, I won't argue with you anymore! Open the doors!"
"Dave, this conversation can serve no purpose anymore. Goodbye."