Tech’s Big Wig: Put the Brakes on the AI Rollout

Over 1,100 tech luminaries, leaders, and scientists have issued a warning to labs conducting large-scale experiments using artificial intelligence (AI) more powerful than ChatGPT, saying that this technology could endanger humanity. said to pose a serious threat to
and open letter issued by Future of Life Institute, Steve Wozniak, co-founder of Apple, Elon Musk, CEO of SpaceX and Tesla, MIT, a non-profit organization dedicated to reducing global catastrophic and existential risks to humanity Max Tegmark, president of the Future of Life Institute, joins other signatories saying AI “poses serious risks.” To society and to humanity, as shown by extensive research and acknowledged by top AI labs. ”
The signatories have asked for a six-month moratorium on deploying training for more powerful AI systems. GPT-4which is a Large Language Model (LLM) and the popular Chat GPT Natural language processing chatbot. Part of this letter depicted a dystopian future reminiscent of those created by artificial neural networks in his science fiction films such as: terminator and matrixThe letter sharply questions whether advanced AI could lead to a “loss of control over our civilization”.
The letter also warns of political disruption “especially to democracies” from AI. Chatbots behaving like humans can flood social media and other networks with propaganda and lies. It also warned that AI could “automate all jobs, including fulfilling jobs.”
The group called on civic leaders, rather than the tech community, to take charge of decisions about broader AI deployments.
Policy makers should work with the AI community to dramatically accelerate the development of robust AI governance systems. At a minimum, this includes new AI regulators, surveillance, tracking of high-performance AI systems, and a large pool of computing power. The letter also proposed the use of provenance and watermarking systems to distinguish between genuine and synthetic content and to track model leaks, along with robust auditing and certification ecosystems.
“Modern AI systems are now beginning to compete with humans on common tasks,” the letter states. “Should we develop non-human minds that may eventually outnumber, become wiser, obsolete, and replace us? Should it? Such decisions should not be delegated to unelected technical leaders.”
(British Government Today we published a white paper It outlines plans to regulate general-purpose AI, saying it will “avoid hard-line laws that could stifle innovation” and instead rely on existing laws. )
The Future of Life Institute says the AI Lab is an uncontrollable race to develop and deploy “an ever-more powerful digital mind that no one (not even its creators) can comprehend, predict, or reliably control.” claimed to be involved in
The signatories included scientists deepmind technologies, a UK AI lab and a subsidiary of Alphabet, Google’s parent company. Google recently announced Bard, an AI-based conversational chatbot developed using his LaMDA family of LLMs.
LLM is a deep learning algorithm (computer program for natural language processing). Can generate human-like responses to queriesGenerative AI technology can also generate computer code, images, video, and sound.
invested in Microsoft Over $10 billion in ChatGPT and GPT-4 creator OpenAIdid not respond to requests for comment on today’s letter.OpenAI and Google also did not immediately respond to requests for comment.
Andrzej Arendt, CEO of IT consultancy Cyber Geeks, said that while generative AI tools are not yet capable of independently delivering the highest quality software as an end product, they “have no control over code, system configuration, or unit testing. Assistance in the generation of , could speed things up significantly.” Up the job of the programmer.
“Does it make developers redundant? Not necessarily — in part because the results provided by such tools cannot be used without question. We need programmer validation,” Arendt said. continued. “In fact, there has been a change in the way programmers have worked since the beginning of the profession. The work of developers only shifts to some extent to interacting with AI systems.”
According to Arendt, the biggest change will come with the introduction of full-fledged AI systems. This can be compared to the Industrial Revolution of the 1800s replacing an economy based on craft, agriculture and manufacturing.
“The technological leap in AI could be just as big, if not bigger. At this point, we cannot predict all the outcomes,” he said.
Vlad Tushkanov, lead data scientist at Moscow-based cybersecurity firm Kaspersky, said the integration of LLM algorithms into more services could introduce new threats. In fact, LLM engineers are already investigating attacks such as prompt his injection that can be used against LLM and the services it provides.
“Because the situation changes rapidly, it is difficult to predict what will happen next and whether these LLM traits will prove to be a side effect of immaturity or an inherent vulnerability. “However, companies may want to include them in their threat model as they plan to integrate LLMs into their consumer applications.”
That said, LLM and AI technologies are already automating a huge amount of “tedious work” that is useful, needed, but not fun or interesting for people to do. For example, chatbots can sift through millions of alerts, emails, suspected phishing web pages, and potentially malicious executables on a daily basis.
“This amount of work is not possible without automation,” he said. Estimates show that the industry needs millions more professionals, and in this highly creative field, manpower cannot be wasted on monotonous and repetitive tasks.
Generative AI and machine learning won’t replace all IT jobs, including those fighting cybersecurity threats, said Tushkanov. Solutions to these threats are developed in a hostile environment where cybercriminals work against organizations to evade detection.
“Cybercriminals adapt to every new tool and approach, making automation very difficult,” said Tushkanov. “Also, accuracy and quality are very important in cybersecurity, and large language models, for example, are now prone to hallucinations (as our tests show, cybersecurity his task is no exception. is not).”
In its letter, the Future of Life Institute says that with guardrails, humanity can enjoy a prosperous future with AI.
“Design these systems for the obvious benefit to all and give them the opportunity to adapt to society,” the letter said. “Society is pausing the use of other technologies that can have devastating effects on society. Let’s enjoy the summer of AI.
Copyright © 2023 IDG Communications, Inc.
https://www.computerworld.com/article/3691639/tech-big-wigs-hit-the-brakes-on-ai-rollouts.html Tech’s Big Wig: Put the Brakes on the AI Rollout