'Godfather' of artificial intelligence is afraid of a doomsday scenario...


'It is hard to prevent bad actors from using artificial intelligence for malicious purposes,' stated the 'godfather' of artificial intelligence (hereinafter AI), Geoffrey Hinton. He invented the technology that serves as the basis for systems such as ChatGPT. For a few months now, I have been using the free version of this 'AI chatbot,' where all sorts of information are accessible within seconds. It's like a Google search engine on steroids, really. I have asked ChatGPT several times to write an article in 'my' writing style on a specific topic, and I must say, the result was not disappointing in terms of factual information and sentence structure.

There was still much to criticize in terms of writing style, but perhaps the latest paid version of ChatGPT can approximate my language use and style to some extent. Caution is still required with the facts the chatbot generates, though, as it often 'hallucinates,' as it is called in the jargon of the AI world. The 76-year-old Hinton no longer works for Google, which is probably why he feels free to share his great concerns with us. In his view, the 'deep learning' capacity of his brainchild can pose significant dangers to humanity in the long run.

The competition between AI companies is already so fierce that it will lead to accidents. 'Deepfake' photos and videos in particular are already indistinguishable from genuine material to the average viewer. And what about AI-generated (news) texts that seem reliable but contain utter nonsense? These are then picked up by conspiracy theorists, right-wing and left-wing extremists, and many unsuspecting internet users. Like you and me...

But the generation of fake news by AI systems is not even the biggest problem. What if an AI system independently creates programs and even executes them? Hinton mentions alarming examples such as autonomous 'killer robots.' It may sound like a nightmare that will never come true, but it's not that simple. 'The idea that these things could become smarter than humans was believed by only a few people, but that is changing rapidly.' Another quote from Hinton... When will we face these frightening developments? Let me quote Hinton once more: 'the developments are happening rapidly, and if we extrapolate them to the near future, we're not talking about a time span of 30 to 50 years. It could happen as soon as the coming decade...'

Frank van Harmelen, a professor of artificial intelligence at VU University Amsterdam, also sees dangers. He considers the 'water cooler discussions' about a Terminator-like world exaggerated, but he too points to the 'convincing bullshitting' generated by AI. He views AI's frequently exposed 'hallucinating capacity' as the biggest problem, as it is already causing damage. One comment from van Harmelen struck me as the most ominous: 'It's as if we've boarded a plane that's getting faster and faster, while we have to figure out how to fly while already in the air.'

In March of last year, 1,100 prominent AI (major) players advocated a temporary halt (six months) in the development of AI systems. Elon Musk was one of them; in the meantime he has founded an AI company called X.AI... With this initiative, he claims he wants to compete with major players like Google, Apple, and Microsoft. Hmm, it seems like Musk wants to buy time to outsmart the competition. Apple co-founder Steve Wozniak was also one of the signatories, but I have serious doubts about his intentions too. These captains of industry believe that the six-month pause is primarily intended to give governments and companies time to establish 'safety regulations.'

And do we believe that? What did Hinton say again about 'bad people and bad intentions'? Moreover, who will verify that AI companies actually hold themselves to such a pause? Even if 'AI inspectors' are appointed, they will never be able to detect any violations. Hinton once again has an undeniable truth in store: he doesn't see how a genuine check of AI facilities is possible. Indeed, this is not about developing a new Tesla model that can easily be spotted and inspected in Musk's prototype hall...

Some tech experts consider the fear of superintelligence unfounded. Some even deem it 'almost populist' to claim that AI will take over the world. A number of tech journalists embrace this view of AI; most of them, by the way, write for conservative 'Christian' portals and newspapers. Ah, now I understand: the only superintelligence accepted in those newsrooms is 'the Lord,' of course...

Well, what now? Governments worldwide need to swiftly address AI. They have the task of protecting citizens - and themselves - from the dangers and excesses of AI. Will they succeed? I have serious doubts, just like Hinton, who believes that AI can cause serious harm to the world. I strongly suspect that instead of the verb 'can,' he actually wanted to use 'will.' Why? Partly because he expresses regret about his life's work and concludes with: 'I console myself with the excuse that if I hadn't done it, someone else would have.'

Haven't we heard similar remarks before from a certain Robert Oppenheimer? Do you know what his nickname was? 'The (god)father of the atomic bomb'...

I can imagine that I haven't exactly cheered you up with this column. So, let's conclude with a refreshing song. How about 'Atomic' performed by Blondie?

https://www.youtube.com/watch?v=O_WLw_0DFQQ&ab_channel=BlondieVEVO

Written by: András Csengő
