A Florida woman says she was jailed after her ex-partner presented what she describes as AI-generated fake text messages in court, and she insists no one in the system stopped to check whether the digital evidence was real. Her account highlights how quickly synthetic messages can move from a private dispute into a criminal case, with life-altering consequences for the person on the receiving end. It also exposes a justice system that is only beginning to grapple with the reality that convincing deepfakes can now be created by anyone with a smartphone and an internet connection.
According to her, the texts that helped put her behind bars never existed on her phone, and she believes they were fabricated using consumer AI tools that can mimic writing styles and generate entire conversations. The case has become a cautionary tale for judges, lawyers and police officers who still tend to treat screenshots and message logs as inherently trustworthy, even as experts warn that digital footprints can now be manufactured in minutes.

How a breakup spiraled into a criminal case
The Florida dispute began as a familiar story, with a relationship ending badly and communication between former partners turning tense. The woman at the center of the case, identified in multiple reports as Melissa Sims, says she believed the worst of the breakup was behind her until she learned that her ex had gone to authorities with a stack of text messages that appeared to show her threatening and harassing him. She maintains that she never sent those messages and that the language in them did not match how she communicates, but by the time she saw the screenshots they were already being treated as hard proof of wrongdoing.
Investigators and a judge accepted the messages as authentic, according to Sims, and the dispute quickly escalated from a private conflict into a criminal matter. A separate legal analysis of the incident is titled “Woman Jailed Over Unverified AI-Generated Deepfake Text Evidence, USA,” underscoring that the digital records were never independently checked before being used to justify her arrest and detention. That analysis notes that the investigation relied on what it calls fabricated digital footprints, a phrase that captures how easily synthetic messages can now be woven into a narrative that looks, at first glance, like a genuine chat history.
The night in a Florida jail and a sweeping no-contact order
Once the texts were accepted as real, the consequences for Sims were immediate and severe. She was arrested and taken to what one account describes as a “hellhole” Florida jail, where she spent the night in custody while the case was processed. According to that reporting, the arrest rested entirely on messages that she insists were never on her phone and that she believes were generated to incriminate her for domestic abuse.
After her release, a judge imposed strict bond conditions that treated the disputed texts as evidence of a serious threat. Sims says she was ordered to stay away from her accuser entirely, including a ban on calling or texting him, and she was warned that any violation could send her back to jail. The restrictions effectively rewrote the rules of her daily life, limiting where she could go and who she could contact, even as she continued to argue that the digital trail used against her was a work of fiction.
“No one verified the evidence,” she says
At the heart of Sims’s complaint is a simple allegation: the people who arrested, charged and restricted her never checked whether the messages were genuine. She has said that “no one verified the evidence,” describing a process in which screenshots were accepted at face value without any forensic review of the devices or accounts involved. Her account is echoed in a detailed investigation that refers to her simply as “Woman” and notes that she was placed under bond conditions based on texts that were never subjected to technical scrutiny.
As part of that bond, Sims was ordered not to contact her ex, and she says she was left to navigate the fallout of a case built on digital evidence that no one tried to authenticate. The same reporting cites an expert named D’Ovidio, who teaches AI forensics, as a reminder that specialists exist who can test whether messages or media have been synthetically generated. Yet in Sims’s case, those tools were never brought to bear before the court treated the texts as fact.
Inside the alleged AI-generated deepfake texts
Sims contends that the incriminating messages were not just misinterpreted, but entirely fabricated using artificial intelligence. She believes her ex-partner used consumer tools to create what are often called deepfake texts, synthetic conversations that mimic a person’s tone and style while inserting words they never actually wrote. In her telling, the messages were crafted to look like a pattern of harassment and threats, the kind of language that would alarm police and judges and fit neatly into a domestic abuse narrative.
Investigators later described the case as involving AI-generated texts that can be difficult to spot, and experts warned that AI is simply getting too good for untrained eyes to distinguish real from fake. The same analysis notes that these tools can now produce convincing screenshots and chat logs in seconds, complete with timestamps and contact names, which means a motivated accuser can assemble a dossier of “proof” that never passed through a real messaging app.
How the court treated screenshots as hard proof
The Florida case exposes how deeply courts still trust screenshots and message logs, even as the technology to fake them becomes widely available. In Sims’s account, the judge and prosecutors treated the texts as straightforward evidence, focusing on the content of the alleged threats rather than the possibility that the entire conversation had been manufactured. There was no requirement that the accuser hand over his phone for a forensic download, no demand for carrier records, and no independent check of whether the messages ever existed outside the images he provided.
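To make the missing step concrete, here is a minimal sketch of the kind of first-pass check a forensic examiner could run against a device extraction, asking whether the quoted messages exist in the phone’s own message store at all. This is an illustration of the technique, not a reconstruction of this case: the file path and search phrase are hypothetical, and the table names and Apple-epoch timestamp math reflect common iOS sms.db layouts that would need to be verified against the actual extraction.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# iOS message timestamps count from the Apple epoch (2001-01-01 UTC).
APPLE_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def find_messages(db_path, phrase):
    """Return (sender, timestamp, text) for stored messages containing `phrase`."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        """
        SELECT handle.id, message.date, message.text
        FROM message JOIN handle ON message.handle_id = handle.ROWID
        WHERE message.text LIKE ?
        """,
        (f"%{phrase}%",),
    ).fetchall()
    conn.close()
    hits = []
    for sender, raw_date, text in rows:
        # Newer iOS builds store nanoseconds since the Apple epoch; older ones used seconds.
        seconds = raw_date / 1_000_000_000 if raw_date > 10**12 else raw_date
        hits.append((sender, APPLE_EPOCH + timedelta(seconds=seconds), text))
    return hits

# "extraction/sms.db" and the phrase are placeholders for illustration.
for hit in find_messages("extraction/sms.db", "alleged threat wording"):
    print(*hit)
```

A screenshot phrase with no counterpart in the device database would not prove fabrication on its own, but it is exactly the kind of red flag that justifies deeper expert review before an arrest.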
Legal analysts who reviewed the incident describe it as a textbook example of unverified digital evidence being allowed to drive a criminal case. One summary of the matter, labeled “Woman Jailed Over Unverified AI-Generated Deepfake Text Evidence, USA,” stresses that the investigation was built on fabricated digital footprints that no one in the system challenged. That phrase captures a broader problem: courts are accustomed to treating digital trails as inherently reliable, but AI now allows those trails to be planted as easily as they can be followed.
A growing threat inside courtrooms
Sims’s ordeal is not an isolated glitch, but part of what experts describe as a growing threat inside courtrooms. Synthetic media is moving beyond celebrity face swaps and political hoaxes into the far more mundane, and potentially more dangerous, realm of everyday disputes. In one widely shared video, the narrator warns that “a growing threat is showing up inside courtrooms: AI-generated deepfakes,” and uses the experience of Melissa Sims as a case study in how quickly synthetic texts can land someone in jail.
The same explainer notes that when experts tested synthetic media, the results were troubling, with AI-generated content often slipping past the basic checks that judges and lawyers might rely on. The warning is clear: as tools to create deepfake texts, images and videos become more accessible, the justice system will face more cases where the most persuasive evidence is also the most suspect. Without new protocols and training, courts risk turning into amplifiers for fabricated digital stories rather than filters that separate truth from fiction.
Experts on why AI fakes are so hard to spot
Specialists in AI forensics say the difficulty is not just that synthetic media looks real, but that it can be tailored to match the expectations of whoever is reviewing it. In the Florida case, the messages attributed to Sims reportedly fit a pattern of escalating hostility that would naturally alarm police and judges, which may have made them less likely to question whether the texts were genuine. Experts like D’Ovidio, who teaches AI forensics, emphasize that even trained investigators can struggle to distinguish real messages from synthetic ones without specialized tools and full access to the underlying devices.
Another analysis compares the rise of AI-generated content to the shift from film photography to digital, but with far higher stakes. It notes that the new tools are “fun, fast, and free,” making it trivial for anyone to create convincing fakes that can be shared as if they were authentic. In a separate case, an AI-generated image landed a woman in jail over a fabricated sexual assault claim, underscoring that the problem is not limited to text messages. The common thread is that the barrier to entry has collapsed, while the systems meant to evaluate evidence are still operating on assumptions from an earlier digital era.
Bond conditions, life disruptions and lingering stigma
For Sims, the impact of the alleged deepfake texts did not end when she walked out of jail. The bond conditions that followed reshaped her daily life, limiting her movements and communications and placing her under ongoing court supervision. According to one detailed account, she was told she could not speak to or text her accuser, and she has described the experience as being punished for words she never wrote. Even if the case is eventually resolved in her favor, the arrest record, the night in jail and the court orders are likely to follow her for years.
There is also the less visible damage to reputation and relationships. Being accused of domestic abuse, especially when backed by what appears to be a detailed text trail, can alter how friends, employers and even family members see a person. Sims has said she feels as if she was “thrown into hell” based on a digital story someone else wrote for her, and that the system treated that story as more real than her own account. The lingering stigma illustrates how AI-generated evidence can inflict harm even if it is later discredited, because the initial shock of the accusation is hard to fully undo.
Calls for new rules on digital evidence
The Florida case has intensified calls for courts to update how they handle digital evidence, particularly in an era when AI can fabricate messages, images and videos with little effort. Legal commentators argue that judges should no longer accept screenshots as sufficient proof on their own, especially in cases that hinge on alleged threats or harassment. Instead, they say, courts should require corroborating data such as phone records, server logs or full device extractions, and should be prepared to bring in experts when there is any suggestion that AI tools may have been used.
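As a hedged illustration of what that corroboration could look like in practice, the sketch below compares timestamps transcribed from disputed screenshots against a carrier’s message-event log. Everything here is an assumption for illustration: the CSV file name and column names are invented, and real carrier exports differ by provider.

```python
import csv
from datetime import datetime

def load_carrier_events(path):
    """Collect (minute-resolution timestamp, peer number) pairs from a carrier export."""
    events = set()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # "timestamp" and "peer_number" are invented column names; real
            # carrier exports use their own formats and field labels.
            ts = datetime.fromisoformat(row["timestamp"]).strftime("%Y-%m-%d %H:%M")
            events.add((ts, row["peer_number"]))
    return events

# Timestamps and numbers transcribed from the disputed screenshots (hypothetical values).
claimed = [
    ("2024-03-02 21:14", "+15550001111"),
    ("2024-03-02 21:17", "+15550001111"),
]

carrier = load_carrier_events("carrier_export.csv")
for ts, number in claimed:
    status = "corroborated" if (ts, number) in carrier else "NO MATCHING CARRIER RECORD"
    print(ts, number, status)
```

A missing carrier record is suggestive rather than conclusive, since carriers log SMS events rather than content and services like iMessage travel over data instead of SMS, but mismatches would at least flag the screenshots for expert review rather than letting them stand as hard proof.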
Some analysts point to the “Woman Jailed Over Unverified AI-Generated Deepfake Text Evidence, USA” summary as a warning label for the justice system, a reminder that unverified digital trails can send innocent people to jail. Others highlight the experience of Melissa Sims and the separate case involving an AI image as evidence that the problem spans different types of media and different kinds of accusations. Together, these incidents suggest that without clear standards for authenticating digital evidence, courts risk becoming vulnerable to anyone willing to weaponize AI against an ex-partner, a rival or a stranger.