
AI In Entertainment Media: How Innovative Technologies Are Changing The Game

SUMMARY

The entertainment media universe that consumers experience spans films, television, OTTs, broadcasters, live sports, traditional publishers, and digital user-generated platforms

AI will impact this more overtly, but the sea change of innovation will happen in areas you don’t necessarily see

For the first part of this AI in Media series we will cover one consumer-facing use case and one enterprise application


AI has already begun to have a large impact on the entertainment industry, from quickly becoming the dominant theme in the Hollywood strikes to more gimmicky stunts such as AI-generated news anchors.

There is no doubt that AI has begun to revolutionise the way content is created and consumed. But before diving into the future, we must first better define the often misunderstood entertainment media universe.

The entertainment media universe that consumers experience spans films, television, OTTs, broadcasters, live sports, traditional publishers, and digital user-generated platforms. 

AI will impact this more overtly, but the sea change of innovation will happen in areas you don’t necessarily see: from ancillary industries such as advertising to the nuts and bolts of the industry, broadly defined as media workflow and infrastructure.

This is a vast subject to cover, so for the first part of this AI in Media series we will look at one consumer-facing use case and one enterprise application:

Consumer Media Application

Consider gaming, the most interactive and participative form of media out there. Game developers spend billions of dollars creating worlds, and the characters in them, to extend the playability of a franchise and the time spent in those worlds. Three forces are coming together to change the AAA gaming ecosystem forever.

Firstly, AI is enabling game developers to significantly reduce the cost and time spent building additional playable worlds, while also improving the fidelity of the experience within them. Secondly, NPCs (non-playable characters) are fast becoming more lifelike, thanks to the ever-improving AI powering their range of actions and vocabulary.

Companies such as Nvidia are leading the way with both hardware and cloud-based solutions for real-time text-to-speech to make this a reality. Thirdly, there is the metaverse.

This has taken a backseat for now, given the lack of users in the ecosystem. Yet we are not far from a time when the metaverse merges with the gaming world, and the underlying need for community engagement is met by VR/AR experiences that blur the boundaries of reality.

This could very well be the successor to existing self-expression platforms such as Meta’s Instagram and ByteDance’s TikTok.

Enterprise Media Application

Consider what Video Verse does (full disclosure: it is an Audacity portfolio company). It uses AI to generate key moments, or near-instant highlights, from live sporting content. This gives OTTs, broadcasters, and sporting federations a new way to monetise their IP while saving on the cost of editors.

This demonstrates the power of meta-tagging. When machine learning models for vision, OCR, NLP, speech and, of course, generative AI are integrated, the impact on video production, editing, and distribution is going to be massive.
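To make the idea concrete, here is a minimal sketch of how timestamped meta-tags could be merged into highlight clips. The detector outputs are hard-coded stand-ins, and every name and threshold is illustrative; this is an assumption about how such a step might be wired together, not Video Verse’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class EventTag:
    """A timestamped tag produced by a detector (vision, OCR, speech, NLP)."""
    start: float   # seconds into the broadcast
    end: float
    label: str     # e.g. "goal", "crowd_roar", "scoreboard_change"
    score: float   # detector confidence, 0..1

def merge_into_highlights(tags, min_score=0.6, gap=10.0, pad=5.0):
    """Merge high-confidence, temporally close tags into highlight windows."""
    strong = sorted((t for t in tags if t.score >= min_score), key=lambda t: t.start)
    highlights = []
    for tag in strong:
        if highlights and tag.start - highlights[-1]["end"] <= gap:
            # Extend the current highlight to cover this tag.
            highlights[-1]["end"] = max(highlights[-1]["end"], tag.end)
            highlights[-1]["labels"].append(tag.label)
        else:
            highlights.append({"start": tag.start, "end": tag.end, "labels": [tag.label]})
    # Pad each clip so the cut does not feel abrupt.
    return [
        {"start": max(0.0, h["start"] - pad), "end": h["end"] + pad, "labels": h["labels"]}
        for h in highlights
    ]

if __name__ == "__main__":
    # Stand-in tags; a real system would emit these from ML detectors on the live feed.
    tags = [
        EventTag(120.0, 123.0, "crowd_roar", 0.9),
        EventTag(121.0, 125.0, "goal", 0.95),
        EventTag(300.0, 302.0, "scoreboard_change", 0.4),  # dropped: low confidence
        EventTag(610.0, 615.0, "commentator_excitement", 0.8),
    ]
    for clip in merge_into_highlights(tags):
        print(clip)
```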

Take a simple example of what existing AI technology can easily manage. Say a businessman is set to run for president. Using AI, a TV news channel should be able to pull every clip of that person from the past four decades and, using sentiment analysis, bucket his views across different subjects and trace how they evolved over that period.

It could then overlay emotion analysis to surface cues about the accuracy of his statements, allowing viewers to better judge what he would do in office if elected.
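As a rough sketch of the bucketing step, assume the archive clips have already been transcribed and scored by topic and sentiment models; the topics and scores below are hypothetical stand-ins, and only the grouping logic is shown.

```python
from collections import defaultdict

# Each clip: (year, topic, sentiment score in [-1, 1]); in practice these would
# come from speech-to-text plus topic and sentiment models run over the archive.
clips = [
    (1988, "trade", -0.6),
    (1995, "trade", -0.4),
    (2004, "trade",  0.1),
    (2016, "trade", -0.7),
    (1990, "healthcare", 0.2),
    (2012, "healthcare", -0.3),
]

def bucket_by_topic_and_decade(clips):
    """Average sentiment per (topic, decade) to show how views evolved over time."""
    buckets = defaultdict(list)
    for year, topic, score in clips:
        decade = (year // 10) * 10
        buckets[(topic, decade)].append(score)
    return {key: sum(vals) / len(vals) for key, vals in sorted(buckets.items())}

if __name__ == "__main__":
    for (topic, decade), avg in bucket_by_topic_and_decade(clips).items():
        print(f"{topic:12s} {decade}s: average sentiment {avg:+.2f}")
```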

Of course, the candidate would keep up as well. Using a digital twin, he could create highly believable localised content across multiple languages, relying on text-to-speech and video generation to have the twin deliver tailored messages highlighting issues relevant to different communities.

All of this is possible today. However, the exponential growth of computing power, combined with the evolution of LLMs, will ensure that all the world’s information is better organised and available in the blink of an eye. In a few years, this technology will be consumer-facing.

By overlaying big tech content with AR, such situations could easily come true. These overlays would have far broader applications: better-informed business negotiations, real-time fact-checking of public figures, and eventually even law enforcement.

These seemingly far-fetched applications will soon become mundane. However, the value of human creativity and ingenuity will only increase as the world turns to more synthetic forms of content creation and consumption.  

