
Apple signs the White House’s commitment to AI safety

Apple signed the White House’s voluntary commitment to developing safe, secure and trustworthy AI, according to a press release on Friday. The company will soon integrate its generative AI offering, Apple Intelligence, into its core products, putting generative AI in front of Apple’s 2 billion users.

Apple joins 15 other technology companies — including Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — that committed to the White House’s ground rules for developing generative AI in July 2023. At the time, Apple had not revealed how deeply it planned to ingrain AI into iOS. But we heard Apple loud and clear at WWDC in June: It’s going all in on generative AI, starting with a partnership that embeds ChatGPT in the iPhone. As a frequent target of federal regulators, Apple wants to signal early that it’s willing to play by the White House’s rules on AI — a possible attempt to curry favor before any future regulatory battles on AI break out.

But how much bite do Apple’s voluntary commitments to the White House have? Not much, but they’re a start. The White House calls this the “first step” toward Apple and 15 other AI companies developing AI that is safe, secure and trustworthy. The second step was President Biden’s AI executive order in October, and several bills aimed at better regulating AI models are currently moving through federal and state legislatures.

Under the commitment, AI companies promise to red-team AI models before a public release and share that information with the public. (Red-teaming means acting as an adversarial hacker to stress-test an organization’s safety measures.) The White House’s voluntary commitment also asks AI companies to treat unreleased AI model weights confidentially: Apple and the other signatories agree to work on model weights in secure environments and to limit access to as few employees as possible. Lastly, the AI companies agree to develop content labeling systems, such as watermarking, to help users distinguish what is and isn’t AI-generated.
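To make the content-labeling idea concrete, here is a minimal sketch of tagging a generated image with provenance metadata. This is purely illustrative and not any company’s actual scheme: it assumes the Pillow library, and the "ai-generated" and "generator" metadata keys are hypothetical names chosen for the example. Real labeling systems rely on far more robust techniques, such as imperceptible watermarks baked into the pixels or cryptographically signed provenance records.

```
# Toy sketch of content labeling via image metadata (assumes Pillow is installed).
# The metadata keys below are hypothetical, not part of any official standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(in_path: str, out_path: str, model_name: str) -> None:
    """Re-save a PNG with a provenance label embedded in its text metadata."""
    image = Image.open(in_path)
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")   # hypothetical label key
    metadata.add_text("generator", model_name)  # which model produced the image
    image.save(out_path, pnginfo=metadata)

def is_labeled_ai_generated(path: str) -> bool:
    """Check whether the hypothetical provenance label is present."""
    return Image.open(path).text.get("ai-generated") == "true"
```

Metadata like this is trivially stripped when an image is re-encoded or screenshotted, which is one reason the commitment points toward watermarking rather than labels alone.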

Separately, the Department of Commerce says it will soon release a report on the potential benefits, risks and implications of open-source foundation models. Open-source AI is increasingly becoming a politically charged regulatory battlefield. Some camps want to limit how accessible the weights of powerful AI models should be in the name of safety. However, doing so could significantly constrain the AI startup and research ecosystem. The White House’s stance here could have a significant impact on the broader AI industry.

The White House also noted that federal agencies have made significant progress on the tasks set out in the October executive order. Federal agencies have made more than 200 AI-related hires to date, granted more than 80 research teams access to computational resources, and released several frameworks for developing AI (the government loves frameworks).


