Microsoft is pushing for government regulation of AI. Should we trust it?

By now, virtually everyone agrees that generative AI, given its powerful capabilities, needs to be regulated to guard against a variety of potential hazards: helping authoritarian regimes through its ability to generate misinformation, allowing Big Tech companies to establish monopolies, eliminating millions of jobs, taking over critical infrastructure, and, in the worst case, posing a threat to human existence.
Regulation-phobic governments around the world, including the U.S., will need to finalize rules about how generative AI can be used if they want to show they take the risks seriously.
Such rules are all the more urgent because if we don't put enough guardrails in place soon, it may be too late to rein in AI.
That's why Microsoft is pushing for government action: the kind of government action it wants. Microsoft knows it's in the midst of an AI gold rush, and the company that stakes its claims first will ultimately win. Microsoft has already staked its claim, and now it wants to keep the federal government out of the way.
Unsurprisingly, Microsoft President Brad Smith and OpenAI CEO Sam Altman have become the federal government's go-to tech executives for advice on how to regulate generative AI. In other words, Microsoft, which has invested $13 billion in OpenAI, is in the driver's seat; not surprisingly, Altman's recommendations line up with Smith's.
What specifically are they advising the government to do? And will their recommendations really curb AI's dangers, or are they just window dressing?
Altman becomes the face of AI
More than anyone else, Altman is the face of generative AI on Capitol Hill, and elected officials call on him to learn more about the technology and to ask his advice on regulation.
The reason is simple. OpenAI created ChatGPT, the chatbot that revolutionized AI when it was released in late 2022. Equally important, Altman's measured approach before Congress casts him as a reasonable executive who wants only what's good for the world, not a tech zealot. Never mind how he, his company, and Microsoft stand to gain billions of dollars by making sure AI regulations reflect what they want.
Altman launched a charm offensive in mid-May that included a dinner with 60 lawmakers from both parties. He then testified before many of those same members at a hearing of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, where he was praised in terms normally reserved for visiting dignitaries.
At the hearing, subcommittee chairman Sen. Richard Blumenthal (D-CT), who is usually a critic of Big Tech, said: "Sam Altman is night and day compared to other CEOs, not just in the rhetoric and the words, but in actual actions and his willingness to participate and commit to specific actions."
Altman focused mainly on doomsday scenarios leading to the extinction of humanity, and he called on Congress to focus regulation on those kinds of threats.
It was a bait and switch.
By focusing legislation on AI's dramatic but distant apocalyptic risks (which some believe are, for now, mostly theoretical rather than real), Altman hopes Congress will pass rules that sound important but are toothless. Such rules would largely ignore the real dangers the technology poses: intellectual property theft, misinformation spreading in all directions, massive job destruction, an ever-growing tech monopoly, and the loss of privacy.
If Congress goes along, Altman, Microsoft, and other Big Tech companies will reap billions of dollars, the public will be left largely unprotected, and elected leaders will be able to brag about how they are reining in AI and cracking down on the tech industry.
At the same hearing where Altman was praised, Gary Marcus, professor emeritus at New York University, lashed out at AI, Altman, and Microsoft. He told Congress that we face a "perfect storm of corporate irresponsibility, widespread deployment, lack of regulation and inherent unreliability." He accused OpenAI of being "indebted" to Microsoft and said Congress should not follow Altman's recommendations.
Marcus warned that the companies rushing to adopt generative AI care only about profit, summarizing his testimony succinctly with the warning that "humanity is on the back burner."
Brad Smith also speaks out
A week and a half after Altman appeared before Congress, it was Smith's turn to call for AI regulation. Smith, an attorney, joined Microsoft in 1993 and was responsible for resolving the antitrust lawsuit filed against the company by the U.S. Department of Justice. In 2015, he became the company's president and chief legal officer.
He knows the ways of the federal government so well that Altman met with him for help on how to craft and present his proposals to Congress. The Washington Post, which recently published an admiring article about Smith's work on AI regulation, claims that "his policy wisdom helps guide others in the industry."
On May 25, Smith announced Microsoft's official recommendations for regulating AI. Unsurprisingly, they dovetail nicely with Altman's views, emphasizing the apocalyptic rather than the here and now. The only concrete recommendation was one that no one would dispute: requiring effective safety brakes on AI systems that control critical infrastructure.
Of course, that's a given, and the lowest-hanging fruit. Beyond that, his recommendations are the kind only a lawyer could love: lofty-sounding legal generalities, such as "develop a broad legal and regulatory framework based on the technology architecture of AI," that boil down to "do nothing, but make it sound as if it matters."
To be fair, Smith did acknowledge some important issues that need to be addressed: deepfakes, the fake AI-generated videos designed for disinformation; the use of AI in "foreign cyber influence operations, the kind of activity already being conducted by the Russian government, China and Iran"; and "the alteration of legitimate content with the intent of deceiving or defrauding people through the use of AI."
However, he offered no proposals for regulating any of them. And he ignored the myriad other dangers AI poses.
Will the cavalry come from Europe?
The U.S. is generally anti-regulation, especially when it comes to technology, and lobbying by the likes of Microsoft, OpenAI, and Google will likely keep it that way. Europe, which has shown far more appetite for taking on Big Tech than U.S. leaders, may therefore be the one to confront the dangers of AI.
It's already happening. The European Union recently passed a draft bill regulating AI. It's just a starting point, and the final law likely won't be finalized until later this year. But the early draft has sharp teeth. It would require AI developers to publish summaries of the copyrighted material used to train their AI; require AI developers to build in safeguards that prevent AI from generating illegal content; sharply restrict the use of facial recognition; and ban companies from scraping biometric data from social media to build AI databases.
The legislation goes even further, according to The New York Times: "The European legislation takes a 'risk-based' approach to regulating AI, focusing on applications with the greatest potential for harm to humans. This includes where AI systems are used to operate critical infrastructure such as water and energy, within the legal system, and in determining access to public services and government benefits. Makers of the technology would have to conduct risk assessments before putting it into everyday use, akin to the drug approval process."
It would be difficult, if not impossible, for an AI company to deploy one system in Europe and another in the United States. So European law could effectively force AI developers to follow its guidelines everywhere in the world.
Altman has been busy meeting with European leaders, including European Commission President Ursula von der Leyen, to try to water down such restrictions, but so far his efforts haven't paid off.
Microsoft and other tech companies may think they have U.S. political leaders under control, but when it comes to AI, Europe may be where they meet their match.
Copyright © 2023 IDG Communications Inc.
https://www.computerworld.com/article/3700969/microsoft-pushes-for-government-regulation-of-ai-should-we-trust-it.html