
In a decisive move against the growing threat of AI-generated sexual exploitation, the House of Representatives has overwhelmingly approved the Take It Down Act by a vote of 409–2. The bill targets the publication of nonconsensual intimate imagery, including deepfake pornography: explicit images or videos generated by artificial intelligence depicting a person without their consent.
Under the law, knowingly publishing such content without the depicted person's consent becomes a federal crime. Online platforms will be required to remove flagged material within 48 hours of a valid request from a victim, with the Federal Trade Commission empowered to enforce compliance against platforms that ignore takedown requests. Lawmakers argue that the measure is long overdue, given the speed at which AI image-manipulation technology is advancing.
Support for the bill spans the political spectrum, with President Trump and a rare bipartisan coalition endorsing it. Advocates emphasize that deepfake pornography disproportionately harms children, women, and public figures, inflicting severe emotional, social, and psychological damage. Sponsors underscored the urgency of action, arguing that nobody should wake up to find their face on a fake pornographic video circulating online without their consent, and that the legislation draws a clear line.
While only two lawmakers voted against the measure, citing concerns over free speech and potential government overreach, supporters insist the legislation carefully balances privacy rights with platform accountability.
Having already cleared the Senate by unanimous consent, the Take It Down Act now heads to President Trump's desk, where it is expected to be signed given his public backing. Once signed into law, it will mark a major shift in how the United States addresses digital exploitation and the misuse of artificial intelligence, setting a new standard for protecting privacy and dignity in the digital age.