The Ownership Dilemma: What U.S. Law Says About AI-Generated Works

By Ramyar Daneshgar
Security Engineer & Legal Policy Researcher at CybersecurityAttorney.com

Disclaimer: This article is for educational purposes only and does not constitute legal advice. For legal guidance tailored to your situation, consult a licensed attorney experienced in cybersecurity and data protection law.

Executive Summary


As generative AI becomes more deeply embedded in content creation, questions around intellectual property ownership grow more urgent. Under U.S. law, copyright protection is generally limited to human-authored works. This article unpacks the legal doctrine underpinning authorship, analyzes key case law, and outlines best practices for ensuring legal compliance when using AI-generated content commercially.

Introduction


Artificial Intelligence has ushered in a new era of content creation, where machines can generate text, images, code, music, and more with minimal human input. As generative AI tools like ChatGPT, DALL·E, and Midjourney become more integrated into content workflows, a key legal question continues to surface: Who owns the output?

This question implicates a complex intersection of intellectual property law, machine learning technology, and authorship doctrine. In this article, we explore how U.S. copyright law applies to AI-generated content, what use cases may trigger legal uncertainty, and how creators and businesses can mitigate risk.


The Human Authorship Requirement
Under the U.S. Copyright Act (17 U.S.C. § 102), copyright protection is granted to "original works of authorship fixed in any tangible medium of expression." Courts have historically interpreted "authorship" to require human involvement. This principle has been reaffirmed in several landmark decisions.

Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018): A crested macaque took a series of photos using a wildlife photographer's unattended camera, and animal rights advocates sued on the monkey's behalf, claiming the animal owned the copyright. The Ninth Circuit held that animals lack statutory standing to sue under the Copyright Act, reinforcing the long-standing principle that copyright can vest only in humans. The decision set the stage for later debates over non-human creators, including AI systems.

Thaler v. Perlmutter (D.D.C. 2023): Stephen Thaler sought to register an image generated autonomously by his AI system, the "Creativity Machine," listing the machine as the author. The U.S. Copyright Office refused registration, affirming that only works with sufficient human authorship are eligible for copyright. In August 2023, the U.S. District Court for the District of Columbia upheld the refusal, stating that "[h]uman authorship is a bedrock requirement of copyright."

U.S. Copyright Office’s March 2023 Guidance (88 Fed. Reg. 16,190): The Office reaffirmed that material generated by AI without sufficient human creative control is not eligible for protection. Applicants must disclose AI-generated content in registration applications and may claim copyright only in the portions authored by a human.


Human-Machine Collaboration: Is There Protectable Input?

The legal outlook changes when a human meaningfully contributes to the AI-generated work. In such scenarios, the human can claim copyright only in their contributions—not in the AI-generated portions.

Key factors that strengthen a copyright claim include:

  • Creative prompts that influence style or structure
  • Selection among AI outputs
  • Editing or rearranging content
  • Adding original elements to the output

This aligns with the principle that expression, not mere generation, determines authorship. Courts and the Copyright Office will examine the level of intellectual effort and creative judgment involved in transforming the AI’s output.


Commercial Use Without Copyright Protection
Even if a piece of AI-generated content cannot be copyrighted, it may still be used in commerce. However, the absence of copyright brings the following risks:

  • Lack of exclusivity: Anyone can reuse or redistribute AI-generated content, as it may fall into the public domain.
  • No basis for enforcement: Without a valid copyright, there is no infringement claim to assert against copying or unauthorized commercial use.
  • Risk of derivative work claims: If the output is too similar to a protected work used in model training, legal exposure arises.

To mitigate these risks, organizations should treat generative AI as a production tool, not as a source of automatic legal ownership in its output.


Training Data and Infringement: A Rising Threat

Getty Images (US), Inc. v. Stability AI, Inc. (2023): This ongoing case challenges Stability AI’s use of copyrighted images—many from Getty’s database—for training the Stable Diffusion model. Getty argues that copying these images into training data without permission constitutes direct and contributory copyright infringement, as well as trademark dilution.

The implications are broad. If courts determine that training AI models on copyrighted materials without a license is infringing, companies using generative tools could be held liable—especially if the output resembles training data.

Additional cases involving OpenAI, Microsoft, and GitHub Copilot raise questions about fair use, transformative works, and derivative content. Courts are being asked to evaluate whether statistical transformation during model training is legally distinguishable from copying.


Best Practices for Legal Compliance
To remain compliant in an evolving legal landscape, creators and companies should implement the following safeguards:

  1. Use AI as an assistive tool, not the sole creator
    Ensure that human input meaningfully contributes to the creative process—through editing, guidance, and original structure.
  2. Maintain authorship documentation
    Record the prompt, AI output, human modifications, and the final version to demonstrate authorship claims.
  3. Vet commercial licenses of AI tools
    Choose AI platforms that explicitly provide rights for commercial use and offer indemnification clauses against copyright infringement claims.
  4. Avoid high-risk domains
    Do not use AI content in legal, financial, or medical applications without professional review.
  5. Consult legal counsel for ambiguous cases
    If the use case involves user-generated content, regulated industries, or sensitive materials, involve legal teams early in the product cycle.
  6. Update policies and disclaimers
    Ensure your terms of service, privacy policies, and user agreements are updated to reflect the use of AI-generated content, attribution policies, and licensing terms.
  7. Monitor legal developments
    Stay informed about AI-related litigation, Copyright Office policies, and proposed legislation (e.g., the proposed “NO FAKES Act” or EU’s AI Act).
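
Practice #2 above (maintain authorship documentation) can be operationalized with a simple provenance record kept alongside each AI-assisted work. The sketch below is a hypothetical illustration, not a legal standard: the record structure, field names, and `record_provenance` helper are assumptions chosen for clarity. Hashing each stage makes later tampering detectable, and the `human_modified` flag captures whether the final text diverges from the raw AI output — the kind of human contribution the Copyright Office looks for.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_provenance(prompt, ai_output, human_edits, final_version):
    """Build an authorship-documentation record for one AI-assisted work.

    SHA-256 digests are stored alongside each stage so that later
    substitution or alteration of any stage can be detected.
    """
    def digest(text):
        return hashlib.sha256(text.encode("utf-8")).hexdigest()

    return {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "stages": {
            "prompt":        {"text": prompt,        "sha256": digest(prompt)},
            "ai_output":     {"text": ai_output,     "sha256": digest(ai_output)},
            "human_edits":   {"text": human_edits,   "sha256": digest(human_edits)},
            "final_version": {"text": final_version, "sha256": digest(final_version)},
        },
        # True when the published text differs from the raw AI output --
        # evidence that a human transformed the machine's contribution.
        "human_modified": digest(final_version) != digest(ai_output),
    }

record = record_provenance(
    prompt="Write a two-line poem about the sea.",
    ai_output="Waves rise and fall.\nThe tide forgets our names.",
    human_edits="Reworked line two; added an original closing image.",
    final_version="Waves rise and fall.\nSalt keeps what the shore lets go.",
)
print(json.dumps(record, indent=2))
```

A record like this, archived per work, gives counsel concrete evidence of human selection and editing if a registration or dispute later turns on the degree of human authorship.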

Conclusion
Generative AI is revolutionizing the creation of text, images, music, and code—but it has also created unprecedented legal ambiguity. Under U.S. law, machines cannot be authors. To assert ownership and exclusivity, human involvement must rise to the level of original expression.

Until courts or lawmakers explicitly expand the scope of copyrightable subject matter to include autonomous AI outputs, businesses should treat AI as a powerful tool, not a legal author. Ownership will continue to follow the human hand behind the machine.



Need professionally vetted legal documents to support your cybersecurity program? LawDepot offers customizable legal templates—NDAs, breach notification letters, data processing agreements, and more—perfect for businesses navigating compliance with GDPR, HIPAA, and CCPA.

👉 Generate compliance-aligned legal docs in minutes

Affiliate Disclosure: CybersecurityAttorney.com may earn a commission — at no additional cost to you. We only recommend platforms that support secure, compliant operations.