PARIS, France— This week, U.S. technology titans touted the benefits of artificial intelligence for humanity at one of Europe’s largest business events, as policymakers around the world fight to limit the technology’s negative impacts.
On Wednesday, at the Viva Tech conference in Paris, Amazon Chief Technology Officer Werner Vogels and Google Senior Vice President for Technology and Society James Manyika discussed the enormous potential that AI is unlocking for economies and communities.
Their comments came just as the EU’s AI Act, the world’s first major law governing AI, received final approval. Regulators are looking to limit the harms and abuses of the technology, such as misinformation and copyright infringement.
Meanwhile, European Commissioner Thierry Breton, a key architect of Big Tech regulations, is scheduled to appear later in the week.
Vogels, Amazon’s head of technological innovation, believes AI has the potential to tackle complex global challenges.
He stated that while AI has the potential to help businesses of all sizes succeed, “at the same time, we need to use some of this technology responsibly to solve some of the world’s hardest problems.”
Vogels stated that it was critical to discuss “AI for now” — that is, how the technology can currently benefit communities all over the world.
He gave examples of how AI is being used in Jakarta, Indonesia, to connect small rice farmers with banking services. AI can also be used to build a more efficient supply chain for rice, which he described as “the most important staple of food,” with half of the world’s population relying on it as their primary source of nutrition.
Manyika, who manages Google and Alphabet’s responsible innovation efforts, believes AI has significant health and biotechnology benefits.
He stated that a recently released version of Google’s Gemini AI model is designed for medical applications and can understand medical context.
Google DeepMind, the company’s primary AI unit, has also launched AlphaFold 3, a new version of its AlphaFold model that can handle “all of life’s molecules, not just proteins,” and has made the technology available to researchers.
Manyika also cited innovations unveiled at Google’s recent I/O event in Mountain View, California, including new “watermarking” technology for identifying AI-generated text, adding to tools previously launched for images and audio.
Manyika stated that Google has open-sourced its watermarking technology so that any developer can “build on it, improve on it.”
“I think it’s going to take all of us, these are some of the things, especially in a year like this, a billion people around the world have voted, so concerns around misinformation are important,” Manyika stated. “These are some of the things we should be focused on.”
Manyika also stated that a significant portion of Google’s innovation has come from engineers at its French base, and that the company is committed to sourcing much of its innovation from within the European Union.
He stated that Gemma, Google’s recently announced lightweight, open-source AI model, was developed largely at the company’s tech hub in France.
EU Authorities Set Global Rules for AI
Manyika’s remarks came just one day after the EU gave final approval to the AI Act, a landmark piece of legislation that establishes sweeping rules for artificial intelligence.
The AI Act takes a risk-based approach to artificial intelligence, meaning different applications of the technology are treated differently depending on the risks they pose.
“I worry sometimes when all our narratives are just focused on the risks,” Manyika stated. “Those are very important, but we should also be thinking about, why are we building this technology?”
“All of the developers in the room are thinking about, how do we improve society, how do we build businesses, how do we do imaginative, innovative things that solve some of the world’s problems.”
He stated that Google is committed to balancing innovation with “being responsible,” as well as “being thoughtful, about will this harm people in any way, will this benefit people in any way, and how we keep researching these things.”
Major U.S. tech giants have been trying to curry favor with regulators as they face criticism that their sprawling businesses harm smaller companies in industries ranging from advertising to retail to media production.
With the rise of AI, critics of Big Tech are particularly concerned that advanced generative AI systems could undercut jobs, exploit copyrighted material as training data, and produce misinformation and harmful content.
Friends in High Places
Big Tech has been trying to gain favor with French officials.
Last week, at the “Choose France” foreign investment event, Microsoft and Amazon pledged a combined 5.2 billion euros ($5.6 billion) for cloud and AI infrastructure and jobs in France.
This week, French President Emmanuel Macron met with Eric Schmidt, former CEO of Google, Yann LeCun, chief AI scientist of Meta, and Google’s Manyika, among other tech luminaries, at the Elysee Palace to discuss how to turn Paris into a global AI hotspot.
In a message posted by the Elysee and translated into English via Google Translate, Macron welcomed leaders from various tech organizations to France, thanking them for their “commitment to France to be there at Viva Tech.”
Macron said it was a “pride of mine to have you here as talents” in the global AI field.
According to Matt Calkins, CEO of Appian, a U.S.-based enterprise software company, huge tech companies “have a disproportionate influence on the development and deployment of AI technologies.”
“I am concerned that there is potential for monopolies to emerge around Big Tech and AI,” he stated. “They can train their algorithms using privately owned data, as long as it is anonymized. This is not enough.”
“We need more privacy than this if we use individual and business data,” said Calkins.