A criminal investigation has been launched into whether an artificial intelligence (AI) chatbot played a role in a deadly mass shooting at Florida State University.

Florida Attorney General James Uthmeier said his office is probing whether ChatGPT provided guidance to the alleged gunman, Phoenix Ikner, before the attack on April 17, 2025. The attack left two people dead and seven injured.

“ChatGPT offered significant advice to the shooter before he committed such heinous crimes,” Uthmeier said at a press conference in Tampa.

According to the attorney general, the chatbot gave detailed answers about weapons and planning. He said it suggested what kind of gun and ammunition to use, what works best at close range, and even pointed out crowded areas on campus.

“My prosecutors have looked at this, and they’ve told me if it was a person on the other end of the screen, we would be charging them with murder,” Uthmeier said.

The shooting took place outside the student union at the university’s Tallahassee campus. Ikner, a student, used his stepmother’s service pistol to open fire, killing two people and injuring six others before police shot him. The victims were identified as Robert Morales, 57, and Tiru Chabba, 45, both working as vendors on campus.

Ikner was critically injured but survived. Authorities say the motive remains unclear, and there is no indication he knew his victims. He now faces charges including first-degree murder and attempted murder.

As part of the probe, the attorney general’s office has issued subpoenas to OpenAI, the company behind ChatGPT. Officials want to know how the system works, how it is trained, and how it deals with users who may want to harm others. They have also asked for details about the company’s staff and any public statements related to the incident.

Legal experts say the case could be difficult to pursue. Neama Rahmani, a former prosecutor, said it would be complex to prove responsibility when an AI system is involved.
“It is unusual, and [Uthmeier] is venturing into uncharted legal waters,” Rahmani said.

Rahmani said that even if wrongdoing is proven, any punishment would likely be financial, not criminal. “At the end of the day, you can’t put a corporation in jail anyway, so you’re talking about a fine,” he said.

In a statement, OpenAI said it is cooperating with investigators and rejected claims of wrongdoing. A spokesperson said the chatbot’s replies were “factual responses to questions with information that could be found broadly across public sources on the internet. It did not encourage or promote illegal or harmful activity.”

The company added that the shooting was a tragedy but insisted the chatbot was not responsible. It also said it had identified an account linked to the suspect and shared the information with law enforcement, while continuing to improve safeguards to detect harmful intent.
