
Florida woman sent to jail after ex allegedly used AI-generated fake texts in court: "No one verified the evidence"

Image Credit: 6abc Philadelphia


6abc Philadelphia investigative reporter Chad Pradelli says courts are now facing a growing threat that feels like science fiction but is already landing people in jail: AI-generated deepfakes. 

In his “Action News Special Investigation,” Pradelli frames it as a new kind of evidence problem, where something that looks real can be “totally synthetic” and still get treated like fact.

Pradelli sits down with Melissa Sims, who says her ex-boyfriend used AI-generated, deepfake text messages to convince the system she violated court orders. Sims told Pradelli she spent “two days of hell” in a Florida jail, and her main complaint is simple and chilling: “No one verified the evidence.”

That line sticks because it points to the weak spot AI exploits. The tool is scary, sure, but the bigger danger is the human habit of assuming a screenshot is trustworthy unless someone proves it isn’t.

Melissa Sims Says A Volatile Relationship Turned Into A Legal Nightmare

Chad Pradelli reports that Sims describes her relationship as volatile, and she says that volatility set the stage for everything that followed. Sims and her boyfriend had recently moved to Florida from Delaware County, Pennsylvania, and Pradelli says the trouble started in November 2024.

Image Credit: 6abc Philadelphia

Sims told Pradelli she called police after an argument, claiming her boyfriend had ransacked her home. Then, she says, the situation took a bizarre turn right in front of her eyes.

“Next thing I know, I’m looking at him and he’s slapping himself in the face,” Sims told Pradelli. She also alleges he scratched himself, which matters because when police arrived, Pradelli reports they arrested Sims for battery.

As part of Sims’ bond conditions, Pradelli says a judge ordered her to stay away from her boyfriend and not speak to him. That’s a common restriction, but it also creates a trap door: if “messages” appear later, the court may assume the order was violated even if the messages are fake.

Sims told Pradelli her jail time felt surreal and brutal. "It was horrific," she said, comparing it to the show "Orange Is the New Black" and saying she was placed into what she described as essentially general population.

This is where the human side gets loud, because two days can feel like forever when you don’t belong there, you’re scared, and you’re trying to understand how your life got flipped upside down by something you say you didn’t even do.

The Fake Texts And The Arrest That Followed

Chad Pradelli says that months after the original arrest, Sims claims her ex-boyfriend created an AI-generated text thread that made it look like she contacted him and insulted him. 

Pradelli describes those texts as calling him names and making disparaging comments, the exact kind of “proof” that would make a judge think the bond order was ignored.

Image Credit: 6abc Philadelphia

“I end up getting arrested for violating my bond,” Sims told Pradelli. Then she adds the line that drives the entire investigation: “No one verified the evidence.”

Pradelli doesn’t present this as a one-off freak story, either. He treats it as a warning sign, because fake evidence doesn’t have to be perfect; it just has to be believable enough to slip through a system moving fast.

And when the system moves fast, it often leans on shortcuts. A printed screenshot or a phone capture can look official, especially to people who aren’t trained to question how easy it is to fabricate digital conversations now.

It’s hard not to feel angry reading this, because “verify first” sounds like the most basic rule on earth, yet Sims’ story suggests the opposite happened. 

When someone loses freedom even briefly based on unverified digital material, it raises a nasty question: how many other cases are walking around with shaky evidence, just because nobody stopped to ask, “How do we know this is real?”

Judges And Experts Say Deepfake Evidence Is Spreading

Chad Pradelli says Sims’ ordeal is part of an increasing pattern, not an isolated fluke, and he brings in Judge Herbert Dixon to explain why. Dixon is a senior judge on the Superior Court of the District of Columbia, and Pradelli reports Dixon says deepfakes have evolved quickly.

“Several years ago, it started out with just fake audio recordings,” Judge Dixon told Pradelli. “And now it’s gotten to a point of fake videos and also fake images.”

Pradelli notes Dixon is also a member of the Council on Criminal Justice, alongside former Philadelphia Police Commissioner Charles Ramsey. Dixon says the Council’s mission is to advance policy in the criminal justice system, and part of that work is figuring out how to handle AI responsibly.

Image Credit: 6abc Philadelphia

“One of the things that we’re trying to do is to develop a framework for the responsible use of artificial intelligence,” Dixon told Pradelli. Pradelli presses him with the obvious question of the moment: do prosecutors and police need to do more due diligence before bringing charges in the age of AI?

“Some due diligence has to be involved,” Dixon answers in Pradelli’s report. It’s a short sentence, but it carries weight, because it’s basically a judge saying, “We can’t keep acting like the old rules still cover this new problem.”

Pradelli also interviews Drexel University professor Rob D’Ovidio, who teaches AI forensics. D’Ovidio says the situation is getting worse in a way people don’t want to admit out loud.

“This is scary to say, but we’re no longer going to be able to trust what we see in front of us,” D’Ovidio told Pradelli. He explains that AI-generated video, texts, and other forms of “evidence” can be hard to spot because the technology is getting too good.

“The challenge is the detection tools are not keeping up with those capabilities,” D’Ovidio says in Pradelli’s report. And if detection tools can’t keep up, then courts are left relying on human judgment, which is exactly where deepfakes thrive.

Pradelli includes a demonstration from D’Ovidio that shows just how messy this is. D’Ovidio created an AI-generated photo and fed it into three well-known AI detection programs, and Pradelli reports all three returned different results, ranging from 1% to 62% probability that the image was synthetic.

That range is not a small disagreement; it’s a sign that the tools people hope will “solve” the deepfake problem aren’t giving clean answers yet. In a courtroom, “maybe” isn’t good enough, because liberty is not supposed to hang on a probability roulette wheel.

D’Ovidio’s solution, as Pradelli reports it, is a mindset flip. “The standard nowadays is we trust unless proven otherwise,” D’Ovidio says, adding, “I think we’re going to have to flip the script and distrust until we verify.”

That’s a major cultural shift, and it won’t be easy, because courts run on schedules, volume, and habits built over decades. But if they don’t shift, the system becomes a perfect machine for anyone willing to fabricate believable lies.

A Case Ends, But The Risk Doesn’t

Chad Pradelli reports that Sims' story eventually turned in her favor, but only after months of stress and legal fighting: following eight months of legal wrangling by her attorney, prosecutors dropped the bond violation charge against her last month.

Pradelli also reports Sims went to trial on the battery charge and was acquitted. That matters because it suggests the original conflict that started the chain of events did not end with a conviction, yet Sims still spent time in jail and still had to fight off serious accusations.

Image Credit: 6abc Philadelphia

Sims is now advocating for a new law that would create AI evidence standards and penalties, Pradelli says. Her warning, delivered straight to camera through Pradelli’s reporting, is blunt: “If this can happen to me, it can happen to anyone.”

And she’s right in the uncomfortable way that makes people nervous, because deepfake tools are not limited to criminals with special skills anymore. If someone can type a prompt, generate content, and present it with confidence, the barrier to manufacturing “proof” has dropped low enough to threaten regular people in ordinary disputes.

Pradelli ends with a policy note that shows lawmakers are already reacting. He reports that in July, Pennsylvania Governor Josh Shapiro signed a new digital forgery law making it a felony to create AI deepfakes that injure, exploit, or scam Pennsylvanians.

Pradelli also notes that 6abc reached out to Sims' ex-boyfriend, whom the station is not naming, but did not hear back. That detail hangs there, because silence doesn't prove guilt, but it also doesn't answer the public question of how, exactly, evidence like this made it far enough to put someone behind bars.

My takeaway is that Sims’ case reads like an early warning flare, not a rare accident. If courts don’t build a clear process for verifying digital evidence—especially texts, images, and video—then the system will keep getting ambushed by “proof” that only exists because a computer made it.

And the cruel part is that the damage happens first, while the truth comes later. People can get arrested, booked, jailed, and publicly labeled before anyone even asks the one question that should come first in 2026: “How do we know this isn’t fake?”
