Copyright, Contracts, and AI-Generated Material

Image Header: Neon image of a human brain embedded in a computer motherboard. Credit: vchal at Shutterstock.com https://www.shutterstock.com/g/vchal

On March 16, 2023, the United States Copyright Office issued new guidance, “Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence.” The full text can be found here.

The Copyright Office’s Guidance does not have the force of law and will change as the situation evolves, especially as legal precedents develop under US law; as of the time of this post, however, it is effectively the policy in force in the United States.

The main takeaway from the Guidance can be summarized as follows: the only parts of a work that are copyrightable are the human-contributed ones, and a work is not copyrightable if an AI technology determines its expressive elements and the creativity is not the product of human authorship. In cases where there are both AI-generated and human-authored elements, copyright will only protect the human-authored aspects of the work, which are “independent of” and do “not affect” the copyright status of the AI-generated material.

“Expressive elements” are defined as “the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.).” There’s a very subjective line being drawn here, and there have not, to my knowledge, been any cases decided by the Copyright Office in which the AI-created elements are not distinct and separate from the human-created ones. The only thing that can be said for certain is that texts or images created wholly or primarily by a generative AI system are not copyrightable at this time, and that in any work containing both human- and AI-produced elements, a copyright registration must identify the AI-generated ones.

So, what are the practical effects of the Guidance for writers who want to use generative AI as a tool to enhance the expressive elements of work they intend to license to publishers?

It’s clear that the Copyright Office will not register works or portions of works that it deems uncopyrightable, but what does this mean in practice? Despite the Copyright Office’s focus on copyright registration, few authors register their copyrights for short fiction nowadays. Large publishers often register the copyrights of the books they publish on the authors’ behalf, but it’s common for small book publishers to leave registering copyright to the writer. Of course, self-publishers are responsible for registering their own works. In general, when registration is left up to authors, it doesn’t happen.

Since any work a human author produces that is “fixed in a tangible medium of expression” is automatically copyrighted, registration only gives the author additional benefits, such as the ability to take an infringer to court (it’s important to note that the USA is the only Berne-signatory country that requires registration as a prerequisite to making a claim in court). But when a work created by or with AI assistance isn’t registered, it’s unclear whether all or parts of it are copyrightable. It may be uncopyrightable, and thus, for all intents and purposes, in the public domain. That creates profound problems for a human who submits the work to a publisher and intends to license the rights to publish it. Almost by definition, if a work isn’t copyrightable, there are no rights (which enable the publisher to create copies of the work in various formats) to license.

A purveyor of an AI-generated work can’t legitimately sign such a contract because the essence of it involves letting the publisher use those rights, which don’t exist. Doing so may even be contract fraud.

Many contracts include a clause warranting that the person signing the contract is the sole author, and this will also be a bar to selling a hybrid human-AI work if the purveyor is honest. I expect these kinds of warranty clauses to become more widespread and more explicit about AI.

For self-publishers, there are also problems, although they’re not as likely to involve legal issues. Amazon, for example, requires publishers to check a box if the work they are publishing is in the public domain, and they have stated that they may take steps to delete AI-generated works if they view them as a problem. Not to mention that any uncopyrightable work on Amazon or any other ebook market can be copied and republished by anyone.

Finally, though, the future of AI-generated books and stories is unknowable, mainly because, at the moment, there is no definitive way to determine whether a text contains AI-generated material or even whether it is entirely the work of an AI. We’ve already seen one situation in which an artist denied that the artwork they sold was created with an AI despite evidence that it was. Especially in the short fiction and small book publisher markets, it’s unlikely that even the most egregious examples of contract fraud would go to court. Meanwhile, the best advice for anyone using AI-generated texts is to be honest and to keep scrupulous records of the process that produced them, including the original AI-generated material and every step that produced the final text. When the Copyright Office better defines what is copyrightable, the determination will depend on the process as well as the final result.
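
What might such record keeping look like in practice? The sketch below, in Python, shows one possible way to log each step of an AI-assisted draft; the log format, file names, and field names are illustrative assumptions on my part, not an official or standard requirement.

```python
# Minimal provenance-logging sketch (illustrative only; the log format,
# file names, and fields are assumptions, not any official requirement).
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("provenance_log.jsonl")  # hypothetical log location

def log_step(step: str, tool: str, prompt: str | None, draft_path: str) -> None:
    """Append one record describing how the current draft was produced."""
    draft_bytes = Path(draft_path).read_bytes()
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,              # e.g. "initial AI draft", "human revision pass 2"
        "tool": tool,              # e.g. "ChatGPT", or "none (manual edit)"
        "prompt": prompt,          # the prompt used, if any
        "draft_file": draft_path,  # keep this dated draft file alongside the log
        "draft_sha256": hashlib.sha256(draft_bytes).hexdigest(),
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage:
# log_step("initial AI draft", "ChatGPT", "Write a 500-word opening scene about ...", "draft_v1.txt")
# log_step("human revision pass 1", "none (manual edit)", None, "draft_v2.txt")
```

Keeping every dated draft alongside a log like this would make it much easier, later on, to show which parts of the final text were AI-generated and which were the product of human revision.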

Until that happens, and until there are enough examples of how the Copyright Office decides borderline cases, the only definitive way for the producer of a hybrid work to find out whether it is copyrightable is to register it as an unpublished work before submitting it to publishers. While Writer Beware strongly recommends against authors registering copyright before a publisher buys rights, until there is more clarity on copyrightability this may be the best way to find out whether a story or novel can legitimately be licensed. If the work is copyrightable, in whole or in part, registering it will get the Copyright Office’s seal of approval. Note, however, that there are some technical downsides to doing so, and if the work is changed during the editorial process, the registration may have to be amended or redone.

We hope that the Copyright Office will soon improve its definition and make this step unnecessary.

4 Comments

  1. A few inaccuracies there, Michael.

    “In cases where there are both AI-generated and human-authored elements, copyright will only protect the human-authored aspects of the work, which are “independent of” and do “not affect” the copyright status of the AI-generated material.”

    This is partly true. However, the USCO also left room for transformative effort on the part of the artist, as is the case with ANY public domain work. For example, taking a person from one AI image and inserting them into a people-free background taken from another AI image is inherently transformative in nature, and will generate a US copyright in nearly all cases.

    Likewise, the use of AI elements does not invalidate the copyright of an overall work; for example, the “Zarya of the Dawn” graphic novel ended up having its copyright *restored*, although the individual images themselves, apart from the added text and their ordering and placement, are not protected.

    Had the original images had transformative work done on them by a human artist, they’d be protected as well.

    “There’s a very subjective line being drawn here, and there have not, to my knowledge, been any cases decided by the Copyright Office in which the AI-created elements are not distinct and separate from the human-created ones.”

    The good news is, there have been literally thousands of applicable cases. Since AI work is not protected by copyright, it is public domain. The bar for what is copyrightable when using public domain material in some manner has been debated in courts for over a hundred and fifty years. We’ve got plenty of precedent. Not specifically for AI, no; but that’s irrelevant, since AI works are being treated the same as any other public domain work.

    “A purveyor of an AI-generated work can’t legitimately sign such a contract because the essence of it involves letting the publisher use those rights, which don’t exist. Doing so may even be contract fraud.”

    This will depend wildly on the wording of the contract, of course. Remember: all AI creations ARE protected by copyright in the UK and Japan. The EU is considering following the same path. This means every AI-generated work DOES have copyright, DOES have rights which can be sold.

    By extension, this also means all AI works have some mild level of international protection. If a Canadian author uses AI to make a book cover, and someone from the US uses his image to make t-shirts? Those images ARE public domain in the US, but if the t-shirts end up on Amazon.co.uk, there’s a viable lawsuit. The Canadian author can and probably should sue for damages in the UK courts.

    “For self-publishers, there are also problems, although they’re not as likely to involve legal issues. Amazon, for example, requires publishers to check a box if the work they are publishing is in the public domain, and they have stated that they may take steps to delete AI-generated works if they view them as a problem. Not to mention that any uncopyrightable work on Amazon or any other ebook market can be copied and republished by anyone.”

    Several different bits here. First off, AI work isn’t globally public domain, so there’s some uncertainty about whether there’s any requirement to check that box even for wholly AI-generated work. To date, Amazon has not given any guidance, so basically assume nobody is checking it. They have stated they’ll take down works which are obviously creating a poor customer experience, as they have been doing; they’ve made no statement about AI specifically.

    And lastly, you’re right that an uncopyrightable work on Amazon can be copied and republished by anyone. That’s the nature of public domain.

    Two problems, though. First off, you have to know which books are actually AI-created. This is a problem, because even the best detection apps have a 25%+ false positive rate for GPT-4. Second off, you’ll have to PROVE it in court when they sue you. At present, there is no way to prove something was written by AI, so this is problematic at best.

    In short, the odds of someone taking a random book off Amazon, saying “I think this is AI!” and then publishing it themselves are lower than the odds of being struck by lightning. It’ll happen to a few people every year, but it’s not going to be common.

    Overall, this is a really good summary, the few nit-picky details aside. 🙂 Your thoughts about keeping records and being impeccably honest are good wisdom, IMHO.

    I do suspect the ultimate result will be the EU and US going the same way as the UK did. The rationale is pretty obvious: since there is no way to prove something was or was not AI, it makes it incredibly difficult to enforce a rule making AI work public domain. Worse yet, the detection will only grow more difficult over time, as AI work grows closer to that of humans. It’ll never be easier to detect AI text than it is today; and it’s functionally impossible today.
