Can E-E-A-T and AI-generated content co-exist?
On Feb. 8, Google released their official search guidance on AI-generated content, and they didn't say much. They reiterated guidance they've given consistently for years: any spammy content intended to game the SERPs violates their policies, but no matter how content is created, it should be original, high quality, and designed with people in mind.
The question is, how can AI-generated content demonstrate Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T)?
How does E-E-A-T work?
For a person writing a piece of content, experience is based on firsthand knowledge of the topic – the practical application of its concepts. Expertise and authority come from the writer's learned knowledge and the reputation of both the writer and the website itself. All of this feeds into trust.
Does the writer have adequate knowledge to speak on the subject? In other words, can I trust this source? Quality raters, Google's algorithm, and everyday users make these judgment calls based on the wealth of information within the content, including the author and their background.
Are E-E-A-T and AI-generated content mutually exclusive?
This entire conversation about E-E-A-T changes when we talk about AI-generated content. ChatGPT and Bard have no firsthand experience, and no training in the human sense through which to gain expertise. The basis of E-E-A-T for ChatGPT, Bard, or any other advanced chatbot must be its training materials. But since we as users don't have access to those training materials, how can we judge E-E-A-T for ourselves?
For example, a medical doctor writing about the best treatment options for a broken leg in a medical journal or on a well-known hospital's website has the highest degree of E-E-A-T (years of training and personal experience treating broken legs). A more robustly trained AI (one trained on a large body of highly reliable sources like medical journals and textbooks) has higher E-E-A-T than a moderately trained AI (one trained on less reliable resources like popular science articles and random blog posts).
However, since the average user isn't familiar with the training materials behind any advanced chatbot, they are forced to assume little to no experience, expertise, authoritativeness, or trustworthiness for content generated by AI alone. They have no way to tell whether it was written by a well-trained AI drawing on sources they can trust; the markers they would normally look for to gauge the accuracy of the information simply aren't there.
(Image source: https://www.bankrate.com/loans/auto-loans/what-is-a-bad-credit-auto-loan/)
What does this all mean for SEOs and users?
We may see an addition to E-E-A-T that accounts for this: oversight by a human writer or editor. (E-E-A-T-O doesn't exactly have a great ring to it; maybe Bard can suggest a catchier name.) The writer's firsthand experience can stand in for the AI's obvious lack of it. And depending on their background, the writer's expertise and authority in a subject area can lend the content enough credibility for users to trust what they're reading. (Perhaps some websites have already decided this for themselves, as we've noticed Bankrate including both "edited by" and "verified by" in some of the bylines on their AI-generated articles, giving evidence of oversight by two people.)
For specific queries, Bing's new AI-generated results include citations for the facts presented in the output.
Or perhaps future iterations of ChatGPT and Bard will include citations, though that is entirely unknown at this point. We're already seeing citations in Microsoft's newest ChatGPT-assisted Bing, which is a big step in the right direction. Citations would let users see how the AI constructs its content and judge the trustworthiness of the sources themselves. But given how difficult it can be to determine the accuracy and trustworthiness of content across the internet, AI generation isn't likely to make it easier, and Google doesn't seem too keen to police it.