Safety

AI audio built to unlock possibilities and positive impact, guided by responsibility and safeguards that protect people from misuse.

Our Safety Mission

At ElevenLabs, we believe deeply in the immense benefits of AI audio. Our technology is used by millions of individuals and thousands of businesses to make content and information accessible to audiences for whom it was previously out of reach, to create engaging educational tools, to power immersive entertainment experiences, to bring voices back for people who have lost the ability to speak due to accident or illness, and much more.

As with all transformational technologies, we also recognize that when technology is misused, it can cause harm. That’s why we are committed to protecting against the misuse of our models and products, especially efforts to deceive or exploit others. Our safety principles guide our everyday work and are reflected in concrete, multi-layered safeguards designed to prevent and address abuse.

“AI safety is inseparable from innovation at ElevenLabs. Ensuring our systems are developed, deployed, and used safely remains at the core of our strategy.”

Mati Staniszewski

Co-founder at ElevenLabs

“The volume of AI-generated content will keep growing. We want to provide the needed transparency, helping verify the origins of digital content.”

Piotr Dąbkowski

Co-founder at ElevenLabs

Our Safety Principles

Our safety program is guided by the following principles:

[Illustration: our safety principles]

Our Safeguards

We strive to maximize friction for bad actors attempting to misuse our tools, while maintaining a seamless experience for legitimate users. We recognize that no safety system is perfect: on occasion, safeguards may mistakenly block good actors or fail to catch malicious ones.

We deploy a comprehensive set of safeguards in a multi-layered defense system. If one layer is bypassed, additional layers beyond it are in place to catch the misuse. Our safety mechanisms evolve continuously to keep pace with advances in our models, products, and adversarial tactics.

Inform

We incorporate third-party standards such as C2PA and support external efforts to enhance deepfake detection tools. We have publicly released an industry-leading AI Speech Classifier to help others determine whether a piece of content was generated using ElevenLabs.

Enforce

Customers who violate our Prohibited Usage Policy are subject to enforcement actions, including bans for persistent or serious violators. We refer criminal and other illegal activity to law enforcement.

Detect

We actively monitor our platform for violations of our Prohibited Usage Policy, leveraging AI classifiers, human reviewers, and internal investigations. We partner with external organizations to obtain insights about potential misuse and have established a mechanism through which the public can report abuse.

Prevent

We red-team our models prior to release and vet our customers at sign-up. We also embed product features to deter bad or irresponsible actors, including blocking the cloning of celebrity and other high-risk voices and requiring technological verification for access to our Professional Voice Cloning tool.

Safety Partnership Program

We support leading organizations in developing technical solutions to detect deepfakes in real time.

Report Content

If you find content that raises concerns and you believe it was created with our tools, please report it here.

Prohibited content & uses policy

Learn about the types of content and activities that are not allowed when using our tools.

ElevenLabs AI Speech Classifier

Our AI Speech Classifier lets you detect whether an audio clip was created using ElevenLabs.

Coalition for Content Provenance and Authenticity

An open technical standard providing the ability to trace the origin of media.

Content Authenticity Initiative

Promoting the adoption of an open industry standard for content authenticity and provenance.

Frequently asked questions

If you come across content that violates our Prohibited Content and Uses Policy, and you believe it was created on our platform, please report it here. EU users can notify us of content they believe may constitute illegal content (pursuant to Article 16 of the EU Digital Services Act (DSA)) here. We have also designated a single point of contact for EU users (pursuant to Article 12 of the DSA), who can contact us about other concerns here.

As part of our commitment to responsible AI, ElevenLabs has established policies concerning cooperation with governmental authorities, including law enforcement agencies. In appropriate cases, this may include reporting or disclosing information about prohibited content, as well as responding to lawful inquiries from law enforcement and other governmental entities. Law enforcement authorities can submit legal inquiries by contacting our legal team here. Pursuant to Article 11 of the EU Digital Services Act, law enforcement authorities in the EU may direct non-emergency legal process requests to ElevenLabs Sp. z o.o., which has been designated as the single point of contact for direct communications with the European Commission, Member States’ authorities, and the European Board for Digital Services, by submitting their DSA request via a form here. Authorities may communicate with ElevenLabs in English and Polish. Where required by applicable law, international legal process may require submission through a Mutual Legal Assistance Treaty.

If you are an EU user, you have six months to appeal an action ElevenLabs has taken against your content or account. If your account or content has been restricted, you can submit your appeal by responding to the notification you received. If you would like to appeal the outcome of your DSA illegal content report, please use the form here.

EU users can also contact certified out-of-court settlement bodies to help resolve their disputes relating to content or account restrictions, as well as related appeals. Decisions by out-of-court dispute settlement bodies are not binding.
