It is a truth universally acknowledged, that where there is an issue of concern for writers, someone will find a way to monetize it.
And with AI suddenly omnipresent in our lives (or at least in the media), creators are confronted with a bewildering multiplicity of issues of concern, from unauthorized use of creative works for machine learning, to whether AI-created work is covered by copyright, to crappy AI-created books inundating Amazon and in some cases impersonating real writers, to the replacement of (expensive) creators with (cheap) generative AI tools like ChatGPT and MidJourney, to the looming prospect of machine-created art or novels or journalism becoming indistinguishable from the work of humans.
In this fraught environment, it was probably inevitable that enterprising people would come up with the idea of a service to certify or authenticate human authorship, and invite creators to buy into it. This post takes a look at two such services.
The Authenticity Initiative
The originator of The Authenticity Initiative is Eliza Rae, who also offers social media, brand management, and PR services for authors. The Authenticity Initiative provides a seal to authors who pledge not to use AI-generated content in their work, along with a number of additional perks, including a newsletter and promotional opportunities. The cost: $50 per year.
Of course, as illustrated by the Bob the Wizard kerfuffle (in which a cover artist who swore their art was not AI-assisted turned out to be fibbing), as well as by a general knowledge of human nature, the question is to what degree a voluntary promise actually amounts to certification. I reached out to Eliza for comment, and you can see her response to that question in the Q&A below.
WRITER BEWARE: The Authenticity Initiative seems to rely on authors to self-certify that their work contains no AI-generated content. Do you have any concern that some authors may not be honest?
ELIZA RAE: Yes, that’s exactly correct. While technology and laws that govern AI are limited, we decided that a trust-based platform for authors and readers to come together was the best way to serve this aspect of the community until more legislation and/or publishing platforms have caught up to technology issues and the pitfalls of what is and is not considered legal to scrape or use to train generative AI software.
Secondly, social initiatives such as this are driving forces for change. We hope to build the platform to serve as an additional influence for publisher platforms to hear the collective author and reader voices and their concerns about generative AI. We seek to add to the conversation in the best way we can to help the community.
Lastly, we are considering and vetting programs that identify generative AI content, but this may take some time to prove whether it is a viable resource and does not violate our core mission. My chief concerns are to keep the initiative social in nature, a community of trust, and a place for original voices to be heard. That said, I cannot control if people lie, but we do offer a reporting function on our website for anyone who believes an author has used generative AI; at that point, we would ask for proof of copyright and investigate further.
WB: Do you have any procedures in place to verify authors’ claims of no AI-generated content? For example, do you audit authors’ works? If so, how?
ER: Our audit extends to their presence on social media (more than just seeing if they have profiles, we’re looking for genuine presence, and not a bot-like persona) and we verify their books are listed on selling platforms and review sites such as Goodreads and BookBub. We will not be running every book every author has published through a program to detect generative AI use. As I mentioned above, we want this to remain a social platform/community of trust, and keep the associated costs of membership to a nominal yearly expense for authors. We will handle issues as they arise.
WB: Why should readers trust The Authenticity Initiative’s seal?
ER: Honestly, I don’t think trust is that black and white. Trust isn’t just in a program that says “yes or no.” There are multiple factors and reasons why readers would join. Most importantly, the authors that share the initiative with their readers have trust that’s already built from them, and those that have joined our newsletter from my sharing, have trust in my business that I’ve built over nearly a decade of working with many authors. And that’s the community we’re trying to build, trusted authors and businesses that have come together to take a stand against something we don’t believe in and impacts the market and reader trust in a real way. The reality of the world, in my opinion, is anyone can manipulate or lie if they want to. I can’t imagine there is a program or initiative out there that people haven’t taken advantage of in some form or fashion, but I won’t let the possibility of someone with nefarious intent sneaking in stop what we are trying to accomplish. I will however do what I can to keep the integrity of our mission as safeguarded as I can. I will implement tools and resources as they become available after proper evaluation, but again, a social initiative and community trust is at the epicenter of what we are creating.
WB: What action, if any, would you take if an author’s pledge were shown to be false?
ER: After an investigation, if an author is found to have used generative-AI knowingly, they will immediately be removed from our list, be required to remove our seal from all works, and cease having access to all our services. A permanent ban of that pen name will occur. If the situation requires consideration for services such as a cover artist using generative AI unbeknownst to the author, those types of things will be handled on a case by case basis. We’re not looking to penalize people that don’t know something has occurred, but we will seek to educate and provide information to avoid such situations from happening. We are currently building an information base for such things.
WB: What makes membership worth $50 every year? For example, your website mentions promotional opportunities–can you comment on what those will be?
ER: Once-a-year membership fees will allow TAI to maintain the cost of our website, services and programs used, and paid advertising each year. As we grow, it will also allow for additional promotional opportunities such as newsletter slots in our reader newsletter, builder campaigns, and featured books on our website. We have a lot planned and the potential to spread the word for authors is huge, but the platform must continue building right now to get to a place where there are more books to offer our readers and get our message out there.
Authortegrity
Authortegrity is a project of Damon Freeman of Damonza, the popular cover art and book formatting service (which recently announced that it will be incorporating generative AI into its book cover designs unless authors opt out).
Not yet launched but inviting writers to sign up for early access, Authortegrity will undertake an initial AI-powered text analysis, followed by human verification. The result, for those who pass the tests: certification of human authorship, “serving as a beacon of trust for readers and literary communities”.
As you’ll see from the Q&A that follows, it’s a complicated process that applies some rigor to a difficult and slippery task–but it also includes what could be considered compromises, notably the definition of “human-authored” as work with 50% (or more) human-created content. I’m told that testing will begin in September, with the rollout planned for October.
AUTHORTEGRITY: Firstly, I just want to mention the main objective of this system is to provide transparency to readers as well as protect real human authors. Ideally, the system should be testing for fully AI-authored work, and labeling those books as AI-generated, but as it’s a voluntary system, that would rely on those AI-book generators submitting their books in order to be AI-verified, which is unlikely. Therefore, the alternative is that authors submit human-authored books to be verified as such. For the moment, we’re classifying “human-authored” books as work where 50% or more of the book was originally created or written by a human. We recognize AI has great potential to help authors with many aspects of the writing process, including character development, research, editing, etc., but for the service to be of any value, “human-authored” needs some sort of definition.
Secondly, it’s important to note that this system is still in early testing. It’s not a straightforward AI-content check, and relies on many processes working together. So far, the model works, but there is still some way to go before it can be released for larger testing groups. Those authors who have registered for early access will be invited for the later stages of testing. The results of each testing phase determine the changes that need to be made, or even if the system is worthwhile. For now, so far, so good.
WRITER BEWARE: Your website mentions AI-powered text analysis as a first step in verifying human authorship. Can you tell me more about this? Is it proprietary to you?
A: The AI detection tool is not proprietary to us. We are actively exploring various options for the AI detection part of our verification process. Currently, we are testing Originality.ai, which claims a high accuracy rate, particularly for longer works. However, we acknowledge that no AI detection tool is perfect, and as a result, we are not committing to a single solution just yet. In fact, we may end up using a combination of AI-content detection services.
Very importantly, the system goes beyond relying solely on AI detection. We are dedicated to creating a robust system that combines multiple means of identifying human authorship. While AI detection will certainly play a role in the evaluation process, it only contributes a percentage to the final score. The service incorporates other criteria that carry more weight in ensuring the accuracy and integrity of our certification. This approach will result in a more comprehensive and reliable certification process. The system builds a picture of the author or publisher as a whole, not just the book in isolation. For example: real identity checks, ISBN matching, how many other books this author has published and when, and whether they have a history of writing books that clearly ARE NOT AI-generated (e.g. published pre-2022). There are other verifications involved that are not focused on the actual content of the book.
WB: Your website also promises “expert human verification”. Who will these humans be and what qualifications will they have?
A: With the sheer volume of reading required, it’s practically impossible for humans to verify each work entirely. However, they will play a crucial role in verifying other criteria that authors are able to provide for the work, like early drafts, evidence of collaboration, etc. Importantly, it won’t be an automated email or artificial intelligence telling an author they didn’t write a book and that’s that. There will be real people reviewing applications that do not pass the checks, and following up with authors with real feedback. We have not yet started hiring for this as we’ve not reached that part of the process yet. For now, the testing phase is assuming those elements are true.
WB: OpenAI recently deactivated its AI classifier due to its “low rate of accuracy”. According to research from Stanford University, human readers were only able to distinguish between human and AI text with around 50% success. Some researchers believe it may never be possible to reliably say if a text is written by a human or AI. How will you ensure that your service offers the high accuracy rate necessary to provide a meaningful certification?
A: The accuracy of the system works on a scoring model, based on a combination of events and processes, that determine the probability that a book was written by a human. It does this by first building a profile of the author, looking at things like their history and behavior, and then puts a few barriers in place that require a small amount of effort to bypass. That’s followed by an AI-content check, and then depending on all the previous outcomes, further checks of varying degrees of “hassle.” The process is designed to be too onerous for an AI-generated book creator to bother with but easy enough for a regular author to accomplish. The certification is “worth it” for the real author but “not worth it” for an AI-book generator. Some of the additional checks in place include real identification, peer vouching checks, provision of early drafts/source material (if necessary), and the ability of readers to confirm or dispute verifications.
Ultimately, the process is built to discourage AI-book generators from using the service at all, due to the effort involved – it will take more effort to cheat the service than any benefit received from the service. Again, it’s important to note that this will still be tested extensively, and based on our results, we would tweak it so that the highest level of accuracy is achieved, for the least amount of effort from real authors. There is a higher level of detail that outlines the entire system, but it is a work in progress and is changing daily.
WB: How much of the book will be analyzed, both via algorithm and by humans?
A: The entire manuscript will be checked by AI detection tools, while all other checks will be done by humans and logic-built software. The logic that we’re building into the platform is proprietary.
WB: What will the cost of the service be?
A: We are yet to determine the fee for each verification, if any. A fee would discourage many AI-generated book creators from using the service, which is a good reason to include it as part of the process, but any fee also needs to match the value being offered to real authors. Before any fee system is established, we still need to ensure all other processes are reliable and effective. We are also looking at a potential model where the verification is free, although this might be difficult to achieve because of the costs associated with verifying each book, and it takes away one of the verification steps.
This is an important service that we’re not taking lightly. Testing is rigorous. For now, it’s a work-in-progress. I’d be happy to let you know when we open up the larger-scale testing.
Important to Authors…But to Readers?
As far as I know, The Authenticity Initiative and Authortegrity are the only two human authorship certification services of their kind–for now. I’m sure there will be more. It’s a major issue of concern for creators, and as such, an obvious opportunity for entrepreneurship.
Is it worth buying into one of these services, though? Beyond the issue of whether a pledge can be trusted, or whether 50% of human-written content is sufficient to define a work as human-created, the larger question that overshadows both these initiatives is how much readers actually care.
My news and social media feeds are stuffed with articles and discussions of AI and its implications for creators, but its impact on readers, viewers, and listeners? Not so much. Maybe it’s just too soon to be able to gauge how consumers will adjust to generative AI’s chaotic and incredibly swift arrival in the creative disciplines. But it’s already pretty clear that audiobook listeners aren’t shying away from machine-narrated books, and it doesn’t seem like anyone is avoiding websites or news outlets that run ChatGPT-created articles (the few that admit it). If the average moviegoer doesn’t mind that many popular movies are 80% CGI, why would they balk at AI-generated actors? How many readers will avoid an exciting-sounding book if they know the author relied on ChatGPT or the cover designer used AI art?
Both The Authenticity Initiative and Authortegrity hold out additional visibility–with, presumably, the added sales that could bring–as an incentive to sign up for their services. The chimerical promise of exposure is always a draw for writers, perpetually struggling to find a way to stand out. But will the average book buyer be motivated to go out of their way to find books that are certified to be human-authored? Will displaying a seal make it more likely that they’ll purchase them? I’m not so sure.
It’s early days in the AI revolution. We’ll be waiting a while to find out.