In the near future, I plan to introduce legislation aimed at ensuring transparency in the use of generative artificial intelligence (AI) and requiring the disclosure of synthetic content.  
 
In a world increasingly filled with synthetic media, deepfakes, and AI-generated outputs, digital provenance has become essential for building digital trust. Digital provenance refers to the ability to trace the origin, creation, and modification of AI-generated data or content. Think of it as the “chain of custody” for digital artifacts, ensuring that we know who created what, when, and how.  
 
Pennsylvanians have a right to know when they are viewing or interacting with synthetic content that has been generated or manipulated by AI. It is crucial to provide clear, consistent, and reliable mechanisms for identifying the provenance of digital content. These mechanisms will empower individuals to make informed judgments about the information they consume.  
 
The proposed legislation will establish a standardized framework for labeling any image, video, or audio content produced by generative AI systems. This content label will be designed to be permanent and will include critical information such as the entity responsible for creating the content, the name and version number of the generative AI system involved, and the time and date of creation or alteration. Responsibility for applying this provenance label, or watermark, will fall on the provider of the generative AI system or any integrated service that enables the AI generation, such as a social media platform or device manufacturer. Failure to provide the required provenance watermark will constitute a civil violation subject to financial penalties. The legislation will also establish a civil regulatory enforcement mechanism overseen by the Pennsylvania Attorney General's Office.  
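For illustration only, here is a minimal sketch of what a machine-readable provenance label carrying the fields described above might look like. The field names and structure are hypothetical, not drawn from the bill text; a content hash is included to show one common way such a label can be bound to a specific artifact.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_label(content: bytes, creator: str,
                          system_name: str, system_version: str) -> dict:
    """Build a hypothetical provenance label with the fields the memo
    describes: responsible entity, generative AI system name and version,
    and a creation timestamp. The SHA-256 hash of the content ties the
    label to the specific image, video, or audio file it describes."""
    return {
        "creator": creator,
        "ai_system": {"name": system_name, "version": system_version},
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

# Hypothetical example: label a generated image's raw bytes.
label = make_provenance_label(b"<image bytes>", "Example Studio",
                              "ExampleImageGen", "2.1")
print(json.dumps(label, indent=2))
```

In practice, industry efforts such as the C2PA (Coalition for Content Provenance and Authenticity) specification define richer, cryptographically signed manifests along these lines; the sketch above only illustrates the categories of information the memo enumerates.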
 
I invite you to join me in advocating for clear transparency and accountability from providers of generative AI systems as we work together to enhance public trust in this vital and emerging technology.