Man hospitalized after ChatGPT dietary advice leads to toxic poisoning

A man who used ChatGPT for diet advice ended up poisoning himself and landing in the hospital.
The 60-year-old man, who was seeking to eliminate table salt from his diet for health reasons, used the large language model (LLM) to get suggestions on what to replace it with, according to a case study published this week in the Annals of Internal Medicine.
When ChatGPT suggested swapping sodium chloride (table salt) for sodium bromide, the man made the substitution for a three-month period, although the journal article noted that the recommendation likely referred to other uses, such as cleaning.
Sodium bromide is a chemical compound that resembles salt but is toxic for human consumption.
It was once used as an anticonvulsant and sedative, but today it is mainly used for cleaning, manufacturing and agricultural purposes, according to the National Institutes of Health.

A man who used ChatGPT for diet advice ended up poisoning himself and landing in the hospital. (Kurt "CyberGuy" Knutsson)
When the man arrived at the hospital, he reported suffering from fatigue, insomnia, poor coordination, facial acne, cherry angiomas (red bumps on the skin) and excessive thirst, all symptoms of bromism, a condition caused by long-term exposure to sodium bromide.
The man also showed signs of paranoia, the case study noted, as he claimed that his neighbor was trying to poison him.
He was also found to have auditory and visual hallucinations, and was ultimately placed on a psychiatric hold after attempting to escape.
The man was treated with intravenous fluids and electrolytes and was also put on antipsychotic medication. He was released from the hospital after three weeks of monitoring.
"This case also highlights how the use of artificial intelligence (AI) can potentially contribute to the development of preventable adverse health outcomes," the researchers wrote in the case study.
"Unfortunately, we do not have access to his ChatGPT conversation log, and we will never know with certainty what output he received, since individual responses are unique and built from previous inputs," they added.
It is "highly unlikely" that a human doctor would have mentioned sodium bromide when speaking with a patient looking for a substitute for sodium chloride, the researchers noted.
"It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results and, ultimately, fuel the spread of misinformation," the researchers concluded.
Dr. Jacob Glanville, CEO of Centivax, a San Francisco biotechnology company, stressed that people should not use ChatGPT as a substitute for a doctor.

When ChatGPT suggested swapping sodium chloride (table salt) for sodium bromide, the man, not pictured, made the substitution for a three-month period. (iStock)
"These are language prediction tools; they lack common sense and will produce terrible results if the human user does not apply their own common sense when deciding what to ask these systems and whether to heed their recommendations," Glanville, who was not involved in the case study, told Fox News Digital.
"This is a classic example of the problem: the system essentially went, 'You want a salt alternative? Sodium bromide is often listed as a replacement for sodium chloride in chemistry reactions, so it's the highest-scoring replacement here,'" he said.
Dr. Harvey Castro, a board-certified emergency medicine physician and national speaker on artificial intelligence based in Dallas, affirmed that AI is a tool, not a doctor.

It is "highly unlikely" that a human doctor would have mentioned sodium bromide when speaking with a patient looking for a substitute for sodium chloride, the researchers said. (iStock)
"Large language models generate text by predicting the most statistically likely sequence of words, not by fact-checking," he told Fox News Digital.
"ChatGPT's bromide error shows why context is king in health advice," Castro continued. "AI is not a replacement for professional medical judgment, in line with OpenAI's own warnings."
Castro also cautioned that there is a "regulatory gap" when it comes to using LLMs for medical information.
"FDA bans on bromide do not extend to AI advice; global oversight of health AI remains undefined," he said.
There is also the risk that LLMs can carry data bias and lack verification, leading to hallucinated information.
"If training data includes outdated, rare or chemistry-focused references, the model can surface them in inappropriate contexts, such as bromide as a salt substitute," Castro noted.
"In addition, current LLMs do not have built-in cross-checking against up-to-date medical databases unless such tools are explicitly integrated."

An expert warned that there is a "regulatory gap" when it comes to using large language models for medical information. (Jakub Porzycki/NurPhoto)
To prevent cases like this one, Castro called for more safeguards for LLMs, such as integrated medical knowledge bases, automated risk flags, contextual prompting and a combination of human oversight and AI.
The expert added, "With targeted safeguards, LLMs can evolve from risky generalists into safer, specialized tools; however, without regulation and oversight, rare cases like this will likely recur."
OpenAI, the San Francisco-based maker of ChatGPT, provided the following statement to Fox News Digital.
"Our terms say that ChatGPT is not intended for use in the treatment of any health condition and is not a substitute for professional advice. We have safety teams working on reducing risks and have trained our AI systems to encourage people to seek professional guidance."