The "Artificial Intelligence Synthetic Content Accountability Act" establishes civil and criminal enforcement mechanisms to address the improper use of synthetic content generated by artificial intelligence. It defines key terms, including "covered synthetic content," which encompasses all synthetic content except for text, and outlines the responsibilities of content providers regarding labeling and watermarking. The act introduces civil liability for the nonconsensual dissemination of covered synthetic content, allowing individuals to sue for damages if their likeness is used without consent in a harmful manner. It also specifies defenses against liability and ensures the protection of plaintiffs' privacy in civil actions. Criminally, the act classifies improper dissemination of covered synthetic content as a fourth-degree felony and mandates that providers embed imperceptible watermarks to ensure accountability.

Additionally, the bill requires large online platforms to implement reasonable identity verification methods for users posting synthetic content, particularly when the content purports to depict reality. Verification must occur each time a user attempts to post such content or when more than sixty minutes have elapsed since the user's last verification. User information obtained during the verification process may be disclosed only pursuant to a court order, protecting user privacy. The bill also includes provisions governing the disclosure of such information in civil and criminal cases, as well as a severability clause so that the remainder of the act remains enforceable even if parts of it are invalidated.
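
Purely as an illustrative sketch, and not language drawn from the bill itself, a platform might enforce the sixty-minute re-verification window with a check run at each posting attempt; the function and parameter names below are hypothetical.

    from datetime import datetime, timedelta, timezone

    # Sixty-minute re-verification window described in the bill summary (assumption:
    # the window is measured from the user's most recent successful verification).
    REVERIFICATION_WINDOW = timedelta(minutes=60)

    def needs_identity_verification(last_verified_at, now=None):
        """Return True if a user must (re)verify their identity before posting
        covered synthetic content that purports to depict reality.

        Verification is required when the user has never been verified or when
        more than sixty minutes have elapsed since the last verification.
        """
        if now is None:
            now = datetime.now(timezone.utc)
        if last_verified_at is None:
            return True
        return now - last_verified_at > REVERIFICATION_WINDOW

    # Example: a user verified 75 minutes ago would be prompted to verify again.
    # needs_identity_verification(datetime.now(timezone.utc) - timedelta(minutes=75))  # True

In this reading, the check runs on every posting attempt, and the user is sent back through verification only when the sixty-minute window has lapsed or no prior verification exists; how the verification itself is performed is left to the platform's "reasonable" methods under the bill.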