Video localization technology first came about decades ago with the introduction of affordable desktop computers. These allowed a single person to carry out the entire subtitling process.
These first subtitle editors were DOS-based, linked to a TV monitor and video cassette player with a jog shuttle.
Desktop computers and company servers became the mainstream equipment for subtitle production in the 1990s, and things changed profoundly with digitization. Subtitling software, freeware included, mushroomed in the noughties.
World-class subtitle editors were developed with features such as frame-accurate timing, shot-change detection, audio waveform display, reading-speed indicators, customizable hotkeys, automated backups, sophisticated quality assurance and assisted translation tools.
They also provided the ability to use templates and presets, communicate between team members and convert between any of the myriad file formats used in the industry. For live subtitling workflows, speech recognition software was also integrated into the user interface to allow for dictation-based workflows.
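The reading-speed indicators mentioned above typically reduce to a characters-per-second calculation over a cue's on-screen duration. As a minimal sketch (not any particular editor's implementation), using standard SRT-style timestamps:

```python
from datetime import timedelta

def parse_srt_time(ts: str) -> timedelta:
    """Parse an SRT timestamp like '00:01:02,500' into a timedelta."""
    hms, ms = ts.split(",")
    h, m, s = (int(x) for x in hms.split(":"))
    return timedelta(hours=h, minutes=m, seconds=s, milliseconds=int(ms))

def reading_speed_cps(start: str, end: str, text: str) -> float:
    """Characters-per-second reading speed for one subtitle cue.
    Line breaks are not counted as characters; style guides vary on
    whether spaces and punctuation count, so this is one convention."""
    duration = (parse_srt_time(end) - parse_srt_time(start)).total_seconds()
    chars = len(text.replace("\n", ""))
    return chars / duration

# A cue of 30 characters shown for 2 seconds reads at 15 cps,
# around the ceiling many subtitling style guides recommend.
print(reading_speed_cps("00:00:01,000", "00:00:03,000",
                        "This subtitle is 30 chars long"))  # → 15.0
```

Editors surface this figure live as the subtitler types, flagging cues that exceed the client's chosen threshold.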
In short: software tools could support practically anything a subtitler could ask for.
Translation management systems also made their appearance in the noughties, as content volumes skyrocketed and production was centralized with the arrival of DVD.
With cloud infrastructures increasingly adopted, it was inevitable that subtitling toolkits would move to the cloud as well.
This took place the following decade, as the streaming era caused another large increase in the volume of content that needed to be localized. The primary factors in businesses' selection of cloud infrastructure have always been ease of deployment and data security.
The latter has long been a prime concern for the media sector: multi-factor authentication, video watermarking, cybersecurity certifications, continuous pen testing and 24/7/365 technical support are now the norm for platforms used by language service providers wishing to offer video localization services to their end clients.
Online subtitle editors are now used by most of the top media localization providers, typically integrated into a translation management system.
The better ones lack none of the prime features of the best desktop software of the previous decade, such as automatic shot-change detection and audio scrubbing, a sine qua non for frame-accurate subtitling.
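Frame accuracy means cue boundaries must land on whole frames rather than arbitrary millisecond offsets. A hedged sketch of the underlying conversion (non-drop-frame timecode only; the function name and defaults are illustrative assumptions, not any vendor's API):

```python
def ms_to_timecode(ms: int, fps: float = 24.0) -> str:
    """Convert a millisecond offset to an HH:MM:SS:FF timecode,
    snapping to the nearest whole frame (non-drop-frame)."""
    total_frames = round(ms * fps / 1000)
    frames_per_sec = round(fps)
    ff = total_frames % frames_per_sec          # frame within the second
    total_seconds = total_frames // frames_per_sec
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# 1500 ms at 24 fps is frame 36: one second and 12 frames.
print(ms_to_timecode(1500, fps=24.0))  # → 00:00:01:12
```

Drop-frame rates such as 29.97 fps need extra correction logic, which is one reason subtitlers rely on their editor rather than hand-computing timecodes.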
Integration with a translation management system allows the automatic handling of client orders, automated or bulk assignment of work to resources, live dashboards, file management and user metrics, as well as integration with finance tools for a complete end-to-end solution.
Work allocation and completion are thus managed and controlled more effectively and transparently with in-built communication tools that facilitate remote and collaborative work.
This cuts down duplication of effort, turnaround times and the potential for error. It also offers a seamless experience to users. Production can be scaled up easily as content volumes fluctuate and requirements change.
The adoption of online editors was accelerated by the COVID-19 pandemic, which also created a surge in the development of professional online tools for revoicing after the global closure of dubbing studios during the lockdowns.
Dubbing had long been a local and fragmented industry, with many family-owned businesses in the market, which allowed manual practices to persist.
The forced closures of studios all over the world provided the necessary push to reprioritize software development agendas.
In the past few years, we have seen most top media localizers adopt their own custom-made platforms to enable audio localization work in the cloud.
The benefits of fully integrated cloud systems for subtitling shone through the pandemic and provided inspiration to streamline all other media localization production in the cloud as well.

Script editors are very much like subtitle editors in terms of functionality, with different settings relating to timing rules, line length, character limits and so on.
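Those settings are typically bundled into presets that the editor checks cues against as the user works. A minimal sketch of the idea, with hypothetical field names and limits chosen for illustration (not OOONA's actual configuration schema):

```python
from dataclasses import dataclass

@dataclass
class SubtitlePreset:
    """Illustrative preset: the settings a subtitle or script editor
    might expose per client or per language."""
    max_lines: int = 2
    max_chars_per_line: int = 42
    min_duration_ms: int = 1000
    max_duration_ms: int = 7000

def check_cue(text: str, duration_ms: int, preset: SubtitlePreset) -> list:
    """Return a list of rule violations for one cue (empty = passes)."""
    issues = []
    lines = text.split("\n")
    if len(lines) > preset.max_lines:
        issues.append(f"too many lines: {len(lines)} > {preset.max_lines}")
    for i, line in enumerate(lines, 1):
        if len(line) > preset.max_chars_per_line:
            issues.append(f"line {i} too long: {len(line)} chars")
    if duration_ms < preset.min_duration_ms:
        issues.append("cue on screen too briefly")
    if duration_ms > preset.max_duration_ms:
        issues.append("cue on screen too long")
    return issues

print(check_cue("Hello there!", 1500, SubtitlePreset()))  # → []
```

Swapping presets is what lets the same editor serve subtitling, scripting or closed-captioning work with different client rule sets.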
The industry saw an increasing push to repurpose content and access file metadata as early in the process as possible, to inform technologies such as machine translation that are used downstream.
It made sense for scripting production to move to the cloud as well.
“We have been working hard on developing our scripting tool further to best accommodate our clients’ needs,” said Wayne Garb, OOONA co-founder and CEO. “Functionality such as ‘multilayers’, the ability to display multiple tracks simultaneously, a must in Japanese subtitle production, has been available in our scripting tool for a while too,” he added. “We remain customer-responsive in our development roadmap.
A recent study of requirements from our client base indicates a strong demand in scripting and audio localization work, so it is our priority to develop such features to best support this market trend.”
The ability to record remotely, combined with the increasing quality and customizability of synthetic voices, has made tasks such as audio description, which pair a complex scripting process with a straightforward recording one, prime candidates for fully online workflows.
“This is the reason behind our partnership with Veritone whose 100-plus synthetic voices are now available through the OOONA Integrated platform and already used in production by end clients,” Garb said.
At OOONA we make sure to listen to all our users’ needs.
“We ran a contest earlier this year,” said Shlomi Harari, OOONA global account manager. “We wanted to collect ideas from our users on functionalities they think we need to focus on.” The results of the #OOONA2022 contest included many of the features translator associations have been vocal about, such as concordance and termbase searches, predictive typing, and dictation support.
More automation is certainly on the roadmap for OOONA Tools, made possible by solid API connections to third-party tools and software that can further facilitate the localization workflow. A selection of speech recognition and machine translation engines has already been integrated, so OOONA’s clients have the option of selecting the right engine for each language they work in.
A deeper integration of these tools is envisaged, with support for customized solutions and toggles for the use of metadata collected upstream to inform the system output.
This will provide solutions tailored to the workflow, be it a subtitling or revoicing one.
* By Alex Yoffe, Product Manager, OOONA *