
Tony Fadell takes a shot at Sam Altman in TechCrunch Disrupt interview



iPod creator, Nest Labs founder, and investor Tony Fadell took a shot at OpenAI CEO Sam Altman on Tuesday during a spirited interview at TechCrunch Disrupt 2024 in San Francisco. Speaking about his long history with AI development before the LLM craze, and about the serious issues with LLM hallucinations, he said, “I’ve been doing AI for 15 years, people, I’m not just spouting sh** — I’m not Sam Altman, okay?”

The comment drew a surprised murmur of “oohs” from the crowd, along with a smattering of applause.

Fadell had been on a roll during his interview, touching on a number of topics ranging from what kind of “a**holes” can produce great products to what’s wrong with today’s LLMs.

While admitting that LLMs are “great for certain things,” he explained that there were still serious concerns to be addressed.

“LLMs are trying to be this ‘general’ thing because we’re trying to make science fiction happen,” he said. “[LLMs are] a know-it-all…I hate know-it-alls.”

Instead, Fadell suggested that he would prefer to use AI agents that are trained on specific things and are more transparent about their errors and hallucinations. That way, people would be able to know everything about an AI before “hiring” it for the specific job at hand.

“I’m hiring them to…educate me, I’m hiring them to be a co-pilot with me, or I’m hiring them to replace me,” he explained. “I want to know what this thing is,” he added, saying that governments should get involved to force such transparency.

Otherwise, he said, companies using AI would be putting their reputations on the line for “some bullshit technology.”

“Right now we’re all adopting this thing and we don’t know what problems it causes,” Fadell pointed out. He also noted a recent report saying that patient reports created by doctors using ChatGPT contained hallucinations in 90% of cases. “Those could kill people,” he continued. “We are using this stuff and we don’t even know how it works.”

(Fadell seemed to be referring to a recent report in which University of Michigan researchers studying AI transcriptions found an excessive number of hallucinations, which could be dangerous in medical contexts.)

The comment about Altman came as Fadell shared with the crowd that he has been working with AI technologies for years. Nest, for instance, used AI in its thermostat back in 2011.

“We couldn’t talk about AI, we couldn’t talk about machine learning,” Fadell noted, “because people would get scared as sh** — ‘I don’t want AI in my house’ — now everybody wants AI everywhere.”


