Elon Musk, who previously stated that he would rather “eat tasty food and live a shorter life,” has kept his word, saying he enjoys a breakfast donut daily.
In response to a tweet from Peter Diamandis, a physician and the CEO of the non-profit organization XPRIZE, the Twitter CEO revealed his sweet tooth.
On Tuesday, March 28, Diamandis tweeted, “Sugar is poison.” Musk replied: “I eat a donut every morning. Still alive.”
At press time, Musk’s tweet had been viewed more than 11.4 million times.
Musk’s daily donut diet revelation is unsurprising, given his previous remarks about his eating habits.
In 2020, Musk told podcaster Joe Rogan, “I’d rather eat tasty food and live a shorter life.” Musk said that while he works out, he “wouldn’t exercise at all” if he could.
According to CNBC, it’s unclear whether Musk’s diet was influenced by his mother, Maye Musk, a model who worked as a dietitian for 45 years.
Musk is not the only celebrity with unusual eating habits.
Rep. Nancy Pelosi, the former House Speaker, survives — and thrives — on a diet of breakfast ice cream, hot dogs, pasta, and chocolate.
Former President Donald Trump has a well-documented fondness for fast food, telling a McDonald’s employee in February that he knows the menu “better than anyone” who works there.
Amazon founder Jeff Bezos enjoys octopus for breakfast, and Meta CEO Mark Zuckerberg prefers to eat meat from animals he has slaughtered himself.
Representatives for Musk did not immediately respond to Insider’s request for comment, which was sent outside regular business hours.
Elon Musk calls for a pause on AI work
Meanwhile, four artificial intelligence experts have expressed concern after their work was cited in an open letter, co-signed by Elon Musk, calling for a pause on AI research.
The letter, dated March 22 and with over 1,800 signatures as of Friday, demanded a six-month moratorium on developing systems “more powerful” than Microsoft-backed (MSFT.O) OpenAI’s new GPT-4, which can hold human-like conversations, compose songs, and summarize lengthy documents.
Since the release of GPT-4’s predecessor, ChatGPT, last year, competitors have rushed to release similar products.
According to the open letter, AI systems with “human-competitive intelligence” pose grave risks to humanity, citing 12 pieces of research from experts such as university academics and current and former employees of OpenAI, Google (GOOGL.O), and its subsidiary DeepMind.
Since then, civil society groups in the United States and the European Union have urged lawmakers to limit OpenAI’s research. OpenAI did not immediately return requests for comment.
Critics have accused the Future of Life Institute (FLI), the organization behind the letter, which is primarily funded by the Musk Foundation, of prioritizing imagined apocalyptic scenarios over more immediate concerns about AI, such as racist or sexist biases being programmed into the machines.
Among the research cited was “On the Dangers of Stochastic Parrots,” a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google.
Mitchell, now chief ethics scientist at Hugging Face, criticized the letter, telling Reuters that it was unclear what counted as “more powerful than GPT-4.”
“By taking a lot of dubious ideas for granted, the letter asserts a set of priorities and a narrative on AI that benefits FLI supporters,” she explained. “Ignoring current harms is a privilege some of us do not have.”
On Twitter, her co-authors Timnit Gebru and Emily M. Bender slammed the letter, calling some of its claims “unhinged.”
FLI president Max Tegmark told Reuters that the campaign was not an attempt to undermine OpenAI’s competitive advantage.
“It’s quite amusing; I’ve heard people say, ‘Elon Musk is trying to slow down the competition,’” he said, adding that Musk had no involvement in the letter’s creation. “This isn’t about a single company.”
RISKS RIGHT NOW
Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, took issue with the letter mentioning her work. She co-authored a research paper last year arguing that the widespread use of AI already posed serious risks.
Her research claimed that the current use of AI systems could influence decision-making in the face of climate change, nuclear war, and other existential threats.
“AI does not need to reach human-level intelligence to exacerbate those risks,” she told Reuters.
“There are existing risks that are extremely important but don’t get the same level of Hollywood attention.”
When asked about the criticism, FLI’s Tegmark stated that AI’s short-term and long-term risks should be taken seriously.
“If we cite someone, it just means we claim they’re endorsing that sentence, not the letter or everything they think,” he told Reuters.
Dan Hendrycks, director of the California-based Center for AI Safety, also cited in the letter, defended its contents, telling Reuters that it was prudent to consider black swan events – those that appear unlikely but have catastrophic consequences.
According to the open letter, generative AI tools could be used to flood the internet with “propaganda and untruth.”
Dori-Hacohen called Musk’s signature “pretty rich,” citing a reported increase in misinformation on Twitter following his acquisition of the platform, as documented by the civil society group Common Cause and others.
Twitter will soon introduce a new fee structure for access to its data, which could hinder future research.
“That has had a direct impact on my lab’s work, as well as the work of others studying misinformation and disinformation,” Dori-Hacohen said. “We’re doing our work with one hand tied behind our back.”
Musk and Twitter did not respond immediately to requests for comment.