Press Release

KOBA 2024: Inside the Scene with Clone Voice, Motion Capture, AI Video Search, and Editing Technologies

2024-05-22
Under the theme 'Spark Your Creativity', the 32nd International Broadcast, Media, Audio, and Lighting Exhibition (KOBA 2024) is currently underway.
Global broadcast, media, audio, and lighting companies are taking part, with Korean broadcasters and media companies drawing attention for their technological prowess.
TwigFarm introduced AI-based clone voice dubbing at KOBA 2024, while 'YanusSTUDIO', which debuted in San Francisco, and AI video search technology also drew attention.

Now in its 32nd edition, the 'International Broadcast, Media, Audio, and Lighting Exhibition (KOBA 2024)' is running for four days, through the 24th.

Certified as an international exhibition by the Ministry of Trade, Industry and Energy, 'KOBA 2024' serves as a platform where the latest technologies and trends of the rapidly changing global broadcast and media industry are showcased, featuring top-tier broadcasting, media, sound, and lighting equipment from around the world.

The exhibition kicked off on the 21st, bustling with industry professionals from domestic and international broadcasting and media sectors, alongside general attendees. This year, notable participation from Korean broadcasters like KBS drew significant attention, alongside global giants such as Sony, Panasonic, and Canon.

Domestic technology firms, renowned for their cutting-edge capabilities, also garnered substantial interest. Startups like 'TwigFarm', creator of the 'LETR Works' service for AI-based multilingual subtitling and dubbing, introduced a new AI-based clone voice dubbing feature at their booth during the event.

Moreover, Motion Technology drew attention with four experience zones built on its proprietary motion capture technology, as well as 'YanusSTUDIO', which debuted at 'GDC 2024' in San Francisco this past March, appealing to both domestic and international gaming and broadcasting sectors.

AI video analysis specialist 'C-Lab' showcased 'VidiGo Search Engine', an AI video search technology within its flagship service 'VidiGo', highlighting its capabilities for efficient video scene search and management in large-scale video storage environments.

The exhibition continues to serve as a pivotal event for industry professionals and enthusiasts alike, providing invaluable insights into the future of global media and broadcast technologies.

Introducing AI technology that creates dubbed content replicating my own voice

At this exhibition, TwigFarm introduced a dubbing feature based on clone voice technology for its multilingual subtitle and dubbing service 'LETR Works'.

The highlight of this feature is its ability to replicate a user's voice to generate dubbed content. Notably, it supports English dubbing using the user's voice, achievable with just a few clicks.

With this addition, TwigFarm's 'LETR Works', which already offered high-quality AI-based multilingual subtitle generation, has been upgraded into a more complete service.

On site that day, Baek Cheol-ho, a director at TwigFarm, explained, "Voice data of around 20 to 30 sentences is enough for minimal modeling. Simply by reading about 30 sentences, we can vectorize that voice, create a unique audio model, and provide clone voice services." (Photo: Tech42)

"Our clone voice feature debuted at this exhibition for the first time and is scheduled for commercial release next month. Our 'LETR rworks' service can be particularly useful for companies with extensive video content, allowing them to reprocess existing content to create new value. Especially in the case of dubbing content, there is significant interest from educational institutions. In countries like the United States and Japan, where dubbing culture rather than subtitling is prevalent, our service can dramatically reduce costs associated with traditional voice actors and shorten the time required for international expansion."

"Additionally, TwigFarm is preparing services that generate new content reflecting cultural, linguistic, and length constraints."

People and virtual characters act together in the same space.

At the exhibition, Motion Technology's booth attracted a crowd with its unique setups. The company showcased specialized motion capture equipment and services such as OptiTrack, Xsens, EZtrack, YanusSTUDIO, Manus, and StretchSense, and visitors could observe and try the key devices on-site, capturing the interest of broadcasting industry professionals.

Motion Technology operated four experience zones at its booth during the exhibition: the "Virtual Production Zone", featuring real-time dance demonstrations using OptiTrack and EZtrack; the "Digital Human Zone", for real-time monitoring of digital human movements with Xsens equipment; the "Hand Motion Capture Zone", offering precision finger tracking with Manus and StretchSense sensors; and the "Facial Motion Capture Zone", specialized in facial animation using Motion Technology's own YanusSTUDIO software.

The "Facial Motion Capture Zone" particularly highlighted Motion Technology's YanusSTUDIO software, known for simplifying facial animation production processes and enhancing quality. This software debuted at GDC 2024 in San Francisco earlier this year, gaining attention for its outstanding performance and cost-effectiveness.

Yang Ki-hyuk, CEO of Motion Technology, explained, “In essence, 'Virtual Production' consists of technology and equipment that can be used to construct virtual spaces or connect virtual characters to real spaces. Our motion capture technology enables a system where real people and virtual characters can act together in the same space using markers attached to cameras.”

He added, “There are very few places in Korea developing motion capture technology besides us. Most either import and sell or focus on content creation. In contrast, we offer comprehensive solutions including technology development and content production. Our main customer base is in countries with developed film industries such as Korea, the US, Japan, and China, primarily applying it to the CG field.”
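
The camera-tracking idea Yang describes, with markers on the camera letting real and virtual action share one space, comes down to knowing the camera's pose and rendering the virtual character from that same viewpoint. Below is a toy sketch of that geometry with made-up numbers; it bears no relation to OptiTrack, EZtrack, or YanusSTUDIO internals.

```python
# Toy sketch of the camera-tracking idea: once markers give us the live
# camera's pose (rotation R, position t), a virtual character's position
# can be projected into the real camera's image so both share one space.
# Made-up numbers; unrelated to any vendor's actual implementation.
import numpy as np

def project(point_world, R, t, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0):
    """Pinhole projection of a world-space point into pixel coordinates."""
    p_cam = R @ (np.asarray(point_world) - t)  # world -> camera space
    return (fx * p_cam[0] / p_cam[2] + cx,
            fy * p_cam[1] / p_cam[2] + cy)

# Hypothetical tracked pose: camera 3 m back from the origin, no rotation
R = np.eye(3)
t = np.array([0.0, 0.0, -3.0])
# Virtual character's head at 1.7 m above the world origin
print(project([0.0, 1.7, 0.0], R, t))  # ~(960.0, 1106.7) in a 1920x1080 frame
```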

We digitize characters, objects, and dialogue from videos and search them using AI.

On this day, the AI video analysis specialist company 'C-Lab' unveiled 'VidiGo Highlight,' an AI-powered tool that summarizes videos and automatically creates short-form content, and 'VidiGo Search Engine,' equipped with AI video search technology within its flagship service 'VidiGo.'

Founded in 2010, C-Lab has specialized in processing and utilizing large-scale video data, achieving a listing on the KOSDAQ in February 2021. Its main businesses include real-time large-scale video analysis using AI (VidiGo, X-AIVA), synthetic data generation for AI training (X-Labeller, X-GEN), and GPU optimization (Uyuni).

At this exhibition in particular, C-Lab showcased its AI-based video search solution, VidiGo Search Engine, which excels in video scene search and archive management. Its key feature is the ability to digitize characters, objects, dialogue, and more, enabling efficient retrieval of the scenes a user needs. This significantly reduces the time previously spent manually hunting for specific video scenes through scripts or other means.

Ham Sae-sik, a manager at C-Lab whom we met on-site, emphasized regarding VidiGo Highlight, "By simply inputting the original video or the URL of a YouTube video, it can create short-form content based on specified lengths and templates, while also providing text summaries. Context-based automatic production and sharing are also possible."

Regarding VidiGo Search Engine, he added, "This solution allows searching for specific scenes within videos through AI video analysis, making it applicable to industries with large-scale video storage such as broadcasting stations, entertainment, and OTT."

"For instance, searching for a scene where the protagonist eats in the popular program 'I Live Alone' can quickly locate and present the specific segment. It extracts data based on voice included in the video or specific image vectors to facilitate searches."

Furthermore, the 'KOBA 2024 Media Conference,' organized by the Korea Broadcasting Engineers & Technicians Association and Korea E&EX, was held alongside 'KOBA 2024.' Throughout the event, approximately 30 lectures across 16 sessions covered key broadcasting technologies and policies, including media trends, media cloud, AI content production, UHD broadcasting production, radio trends, immersive sound, production trends, XR production, broadcasting standards, lighting trends, and other significant issues in the broadcasting industry both at home and abroad.
