AI Video Has Arrived. Here’s What That Means for Every Content Creator.
I recently made a synthetic video. It showed a professional presenter, confident on camera, delivering a clear and compelling message. It was generated entirely with AI tools. It looked like a real production, and it cost me nothing.
Both of those things matter. And they point in different directions.
The Numbers Are Difficult to Ignore
AI video is not an emerging trend. It has already arrived. Search volume for AI video generation grew 840% between January 2024 and January 2026. Production costs have fallen by 91%, from around $4,500 per finished minute to under $400. A 60-second corporate video that once required 13 days of production now takes 27 minutes.
Eighty-six percent of US marketing buyers are already using AI video for advertising or actively planning to. Expect that figure to look much the same in New Zealand within 18 months.
The capability curve is steep. The ethical conversation has not kept pace.
What the Coca-Cola Case Study Actually Shows
In late 2024, Coca-Cola released an AI-generated Christmas advertisement that became one of the most widely discussed pieces of synthetic content in marketing history. Not because it was bad, but because of what happened when audiences realised it was artificial.
Positive sentiment dropped from 23.8% before the release to 10.2% afterwards. The ad was technically impressive. The backlash was swift. Consumers described it as “soulless,” “lazy,” and “a creepy dystopian nightmare.”
What this tells us is not that AI video is dangerous. It tells us that undisclosed AI video carries specific and measurable commercial risk. Audiences do not object to AI content. They object to the feeling of being deceived.
Seventy-five percent of consumers say they prefer brands to disclose when content is AI-generated. The organisations that understand this are already building disclosure into their workflows. The ones that do not are banking on audiences staying uninformed, which is not a sustainable position.
Two Obligations Every Creator Now Has
The accessibility of AI video creates two distinct obligations. They are not complicated, but they do require intention.
The first obligation is disclosure
If your content uses synthetic presenters, AI-generated voices, or fabricated footage in contexts where the audience would reasonably assume a real person is speaking or real events are depicted, say so. This is not about regulatory compliance. Most jurisdictions do not yet require it. It is about preserving trust, which is harder to rebuild than it is to maintain.
The context matters. An AI-generated background in a product demonstration is different from an AI-generated testimonial from a synthetic customer. AI-assisted video editing is different from a synthetic executive delivering a statement about your company values. The line is not always obvious, but the principle is: if your audience would feel deceived when they find out, disclose it before they do.
Does disclosing AI origin hurt content performance?
No. Research consistently shows that disclosed AI content performs comparably to undisclosed content. The Coca-Cola case demonstrates the opposite risk: undisclosed AI content, once identified, causes measurable trust damage and sentiment decline.
What types of AI video always require disclosure?
Synthetic testimonials, AI-generated spokespeople presented as real employees, fabricated news-style footage, and any content where a real individual’s likeness has been artificially created or altered without consent.
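To make the principle concrete, here is a minimal sketch of the kind of checklist a team could encode, written in Python. Every field name here (synthetic_presenter, likeness_consent, and so on) is an illustrative assumption about how you might tag content, not an established standard.

```python
# Illustrative disclosure checklist. Field names are hypothetical;
# adapt them to however your team actually tags content.
from dataclasses import dataclass

@dataclass
class VideoContent:
    synthetic_presenter: bool = False      # AI avatar presented as a person
    synthetic_testimonial: bool = False    # fabricated customer endorsement
    news_style_fabrication: bool = False   # fake footage of real events
    altered_real_likeness: bool = False    # a real person's likeness, created or altered
    likeness_consent: bool = False         # consent obtained for that likeness
    audience_assumes_real: bool = False    # viewers would reasonably assume it is real

def requires_disclosure(video: VideoContent) -> bool:
    # The always-disclose categories from the answer above.
    if video.synthetic_testimonial or video.news_style_fabrication:
        return True
    if video.altered_real_likeness and not video.likeness_consent:
        return True
    # The general principle: if the audience would assume a real person
    # is speaking, say so before they find out otherwise.
    return video.synthetic_presenter and video.audience_assumes_real
```

The value is not the code itself. It is that the hard judgment calls get written down once, deliberately, instead of being improvised per video.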
The second obligation is media literacy
If you are producing AI video, you are also consuming it. So is your team. So are your customers.
The European Parliamentary Research Service projected eight million deepfakes circulating in 2025. Real-world detection accuracy, even with purpose-built AI tools, drops to between 45% and 55%. Human detection is worse. The implication is that your team will regularly encounter synthetic video, some of it designed to mislead, and the skills to evaluate it critically are not yet widespread.
Media literacy is no longer a nice-to-have for communications and marketing teams. It is a core competency. Understanding how synthetic media is made, what its tells and limitations are, and how to verify what you are seeing is now part of professional due diligence.
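Verification is a habit rather than a single tool, but inspecting file metadata is one cheap first step. Here is a minimal sketch, assuming the exiftool command-line tool is installed; the marker strings are examples chosen for illustration, and a clean result proves nothing, because metadata is trivially stripped or forged.

```python
# One cheap, weak verification step: scan file metadata for provenance
# signals. Requires the exiftool CLI to be installed. Treat hits as
# leads to investigate, and a clean result as no information at all.
import json
import subprocess
import sys

# Example markers only; generator tools change their metadata tags often.
MARKERS = ("sora", "runway", "synthesia", "c2pa", "jumbf", "generated")

def metadata_signals(path: str) -> list[str]:
    result = subprocess.run(
        ["exiftool", "-json", path],
        capture_output=True, text=True, check=True,
    )
    tags = json.loads(result.stdout)[0]  # exiftool emits one object per file
    return [
        f"{key}: {value}"
        for key, value in tags.items()
        if any(m in f"{key}={value}".lower() for m in MARKERS)
    ]

if __name__ == "__main__":
    hits = metadata_signals(sys.argv[1])
    print("\n".join(hits) if hits else "No provenance markers found (proves nothing).")
```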
Building AI capability in your communications or marketing team?
AI Innovisory runs hands-on workshops covering AI video tools, disclosure frameworks, and media literacy. Practical sessions designed for teams that are already using AI and want to do it responsibly.
Explore AI workshops for your team
What This Means Practically for Your Organisation
If you are in marketing, communications, or any content-producing function, AI video tools are already in your competitive landscape. Your competitors are using them or evaluating them. The cost and time advantages are real.
The question is not whether to engage with these tools. The question is how to build the internal frameworks that let you use them effectively without creating the kind of trust damage that undoes the efficiency gains.
That means three things in practice:
- A clear internal policy on when AI video requires disclosure in your specific context, reviewed by legal or governance before it is needed under pressure (see the sketch after this list)
- Hands-on familiarity with the tools, so your team can evaluate AI-generated content critically rather than accepting it at face value
- A workflow that treats disclosure as a feature, not a disclaimer. Audiences who know your brand uses AI responsibly are more forgiving of imperfection than audiences who feel misled
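To show what "disclosure as a feature" can look like in a workflow, here is a minimal sketch of a publish step that attaches a standard label automatically. It assumes a policy check like the requires_disclosure sketch earlier in this piece, and the label wording is an example, not legal or regulatory language.

```python
# Sketch of a publish step that treats disclosure as a standard feature.
# The label text is illustrative, not legal or regulatory language.
DISCLOSURE_LABEL = "This video was created with AI tools."

def prepare_for_publish(metadata: dict, needs_disclosure: bool) -> dict:
    """Attach a disclosure label before anything ships.

    needs_disclosure would come from a policy check such as the
    requires_disclosure sketch earlier in this piece.
    """
    if not needs_disclosure:
        return metadata
    published = dict(metadata)  # avoid mutating the caller's dict
    published["disclosure"] = DISCLOSURE_LABEL
    caption = published.get("caption", "")
    published["caption"] = f"{caption} {DISCLOSURE_LABEL}".strip()
    return published
```

Because the workflow applies the label rather than relying on individuals to remember it, disclosure stops being a per-video negotiation and becomes a property of the brand.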
The organisations that figure this out early will not just avoid the Coca-Cola outcome. They will build the kind of audience trust that becomes a durable competitive advantage.
I made a synthetic video at zero cost. What I am asking every content creator to consider is: what do you owe your audience now that anyone can do the same?