At our Cybersecurity and Privacy Law Summit, we had the privilege of sitting down with Dr. Mary Anne Franks for a special edition of A Little Privacy, Please! Dr. Franks is a law professor at the George Washington University Law School, author of Fearless Speech, and President and Legislative and Tech Policy Director of the Cyber Civil Rights Initiative (CCRI). She's also a returning guest to A Little Privacy, Please!, having previously joined us to discuss privacy and the First Amendment when her book was released.
The CCRI is an outstanding nonprofit organization with an incredibly noble mission: helping victims of non-consensual distribution of intimate imagery, otherwise known as revenge porn. When we first learned of the CCRI’s work, we were so impressed that our Cybersecurity and Privacy team felt compelled to join forces with the organization. We’ve been providing pro bono legal assistance to victims of these acts, helping them reclaim control over their lives and personal safety.
The following conversation has been condensed and edited for clarity. We encourage you to watch the full interview for additional insights from Dr. Franks on this important topic.
Can you tell us how the Cyber Civil Rights Initiative began?
It was back in 2012. I had just begun writing about online harassment and abuse, especially as it relates to women, and published a law review article on these issues. I got an email from a woman named Holly Jacobs, a graduate student in Florida. She said, “I read one of your pieces where you referenced this term revenge porn, which I had not really ever heard before, but I’m a victim of it, and I would really like to meet with you and tell you my story.”
She came to my office with this three-ring binder. She had been in a trusting relationship with someone and had shared intimate photos with that person. After they broke up and she moved on to a different relationship, that person suddenly started posting these photos on websites and porn sites and sending them to her family members and to the students she was teaching in her graduate course, just ruining her life.
The binder tracked all the times someone had sent her a threatening or sexually explicit message, all the times she was trying to do her work and continue her dissertation, all the times she was confronted with these photos over and over again.
She said, “I’ve tried to talk to lawyers, policy makers, the cops. Everyone tells me the same thing—that this isn’t against the law.” She looked at me and said, “If that’s true, and this is not against the law, then it should be, and I want you to help me make it so.”
That was the beginning of the Cyber Civil Rights Initiative.
I connected Holly with Professor Danielle Citron, and the next year we formed the CCRI. The name was taken from one of Professor Citron’s law review essays called “Cyber Civil Rights,” where she sketched out this idea that just as the Civil Rights Movement needed a theory to talk about why we needed equality in the workplace and everywhere, we needed that online too.
Our vision was a three-pronged approach: a legal approach, a social approach, and a tech approach to try to change laws, change people’s minds, and change tech company policies.
What makes non-consensual intimate image abuse so devastating for victims?
For a long time, part of the real struggle for us at the CCRI was to get people to understand that this was life-destroying. The way that Holly described it: she's trying to write her dissertation, and every morning the first thing she does is search her name and see that her image is on, let's say, 500 sites. So she sends takedown notices to every one of those sites, exhausts herself doing it, and thinks she's got it now, it's all gone. And the next day, 500 more sites. A thousand more sites.
It really means the world to these victims that there are law firms willing to contribute their resources to this. A victim on her own has to navigate the system and deal with the emotional fallout, and sometimes the physical fallout; she may lose her job, she may have to go into hiding. And on top of all of that, she has to navigate a legal system where everything's a minefield and there are people who want to blame her at every stage. It is really life-changing for them to get legal support.
What should people know about the Take It Down Act and the new federal protections?
The Take It Down Act does two major things. First is the criminalization, which our organization has been advocating for since 2013. When we started, only three states had criminal penalties against this kind of abuse. As of last year, we now have all 50 states. We’re really excited about that.
In 2024, Senator Cruz and Senator Klobuchar worked together on a bipartisan bill that criminalized both non-consensual disclosure of intimate images and deepfakes, meaning digitally manipulated images, not just real ones. It's now a federal crime to disclose sexually explicit photos or videos of someone without their consent. When it comes to digitally manipulated images, if they're realistic enough to fool a reasonable person, the same penalties apply.
The second major component is the takedown mechanism. It tells certain covered platforms that if they get a report of a non-consensual visual depiction, they have 48 hours to take it down or face FTC penalties. However, there's a major flaw: an exception providing that if the person who distributed the images is also depicted in them, the conduct doesn't qualify. While the intention was to avoid criminalizing teenagers sharing selfies, it creates dangerous ambiguity. You shouldn't have a blank check to distribute images just because you're in the photo or photoshopped yourself into it.
Beyond that exception, the real problems are the takedown provisions. The system lacks the safeguards that the DMCA has for copyright. There's no requirement to show evidence that the person making the complaint is actually the person in the photo, and no safeguards against bad-faith complaints. The 48-hour deadline is nearly impossible for platforms to meet, which means they'll probably end up with overly restrictive policies or apply the requirement arbitrarily.
The definition of "covered platform" is also problematic: it only applies to sites that host user-generated content. That means revenge porn sites curated by perpetrators and deepfake sites aren't even covered, while innocuous platforms like Wikipedia could be crippled by bad-faith attacks. Good intentions, but not the best execution.
How can legal professionals join the fight against digital abuse?
Please contact CCRI and let us know. You can contact me directly or through the CCRI website. We have a specific place on our website for people interested in donating services or helping us with our mission. We could use help with these kinds of cases. The more lawyers we can get on these things, the better.
We're also indebted to law firms that help us with amicus briefs and help us develop tools that track laws in various jurisdictions and explain them to people in plain language. We're always eager for people to spread the word about what we do and about the kind of effort it takes to make this work. And if people know of ways for us to stay funded and keep doing what we're doing, we would always appreciate that kind of support.


