Japan’s Entertainment Giants Demand AI Accountability as Creatives Pay the Price


By BTB Editorial
OpenAI co-founder Sam Altman in his Studio Ghibli-style avatar generated via ChatGPT.

Quick, sharp and bite-sized, Radar distills the trending business-meets-culture stories from APAC to the Middle East to global markets into what you need to know. If it’s on our radar, it should be on yours.

Japan’s Content Overseas Distribution Association (CODA) has demanded that OpenAI cease training its AI models on copyrighted Japanese content without permission. The association, which represents Studio Ghibli, Square Enix, Bandai Namco, FromSoftware parent Kadokawa Corporation, and other major publishers, issued the demand last week in a letter citing widespread unauthorised use of Japanese intellectual property in AI-generated content.

CODA’s investigation concluded that a significant portion of content produced by OpenAI’s Sora video generator closely resembles Japanese properties, indicating the models were trained on copyrighted material without authorisation. Since Sora 2’s September 2025 launch, AI-generated videos featuring characters from Nintendo, Pokémon, One Piece, Demon Slayer, and Studio Ghibli have proliferated across social media platforms.

Studio Ghibli has been particularly affected. ChatGPT’s GPT-4o update in March 2025 enabled “Ghiblification”, allowing users to transform photos into the studio’s distinctive aesthetic within seconds. The feature went viral, with millions of users generating Ghibli-style content, including OpenAI CEO Sam Altman, who adopted a Ghibli-style profile picture.

Japan’s Copyright Act Article 30-4 permits AI training on copyrighted material for non-expressive uses—but only when outputs don’t replicate the expressive elements of original works. Japan’s Agency for Cultural Affairs clarified that when AI training intentionally reproduces works to generate materials containing common creative expressions, the exemption no longer applies. CODA argues the replication during training itself constitutes infringement under Japanese law, which operates on an opt-in principle requiring prior permission. OpenAI’s opt-out system, which requires copyright holders to proactively flag content to prevent its use, fundamentally conflicts with this framework.

BTB So What?

What’s at stake here isn’t just alleged “theft” of creative work. It’s the slow collapse of the value chain that has historically made genuine creative partnerships commercially meaningful.

For years, the industry has tacitly accepted that AI models are trained on copyrighted material, even as legislation struggles to keep pace. What changes now is not the principle, but the impact. When a luxury brand pays a studio such as Studio Ghibli a six-figure sum for a collaboration, it is not simply paying for beautiful images. It is paying for exclusivity, cultural legitimacy, and the right to tell consumers: this is the real thing, authorised by the artists themselves. Generative AI disrupts that logic by doing something more intrusive than copying. It democratises. If anyone can generate “Ghibli-style” content instantly and at no cost, the exclusivity that justified premium partnerships starts to erode. Brands are left with an uncomfortable question: why pay for authenticity when, in a crowded feed, most consumers cannot reliably distinguish a sanctioned collaboration from an AI-generated approximation?

For artists and studios, the stakes are higher still. A significant share of their commercial viability rests on licensing deals and brand partnerships. If brands conclude they can achieve a “close enough” aesthetic without permission or payment, that revenue stream weakens. The irony is that brands still need these creative ecosystems to exist for future collaborations, for cultural credibility, and for the depth of storytelling that only original creators can provide. Yet if enough marketers choose the AI shortcut, they collectively undermine the very ecosystem they rely on.

The unresolved legal landscape only sharpens the risk. In the absence of clear frameworks governing AI-generated content that mimics distinctive artistic styles, brands expose themselves to potential infringement claims, particularly in markets with stricter copyright interpretations. Unlike individual users posting Ghibli-style memes for fun, brands operate with deep pockets and public reputations. For them, legal ambiguity is not a technicality; it is a business liability.