A leader in post-production services, Deluxe is responsible for delivering prompt, precise solutions for feature films at their last stop before worldwide release. With this challenge in mind, the company recently hosted its first hackathon, inviting its employees to develop projects to help studios and distributors prepare, localize, and deliver their productions in a more time-efficient way without sacrificing any of Deluxe's quality benchmarks. Boxoffice spoke with Morgan Fiumi, Deluxe's chief innovation officer, about the hackathon results.
What were some of the highlights that came out of Deluxe’s first hackathon?
There were several: two for subtitling and one for audio description that we found very exciting.
Starting with subtitling: we usually receive multiple versions of the same film, and we have the challenge of creating subtitle files for all of them so that when we repurpose a translation, it won't overlap any text already burned into the picture. Right now that is a fairly manual process, and one that can vary slightly across territories, so it isn't consistent across all worldwide releases. The application that came out of our hackathon uses image-recognition tools to identify where the onscreen text is and determine how the subtitle file can avoid any overlap. You can also use that technology to identify censorship flags, such as scenes depicting smoking; some territories can be very restrictive when it comes to certain images. The system we have now gives us the ability to flag images in a film so we can go back and verify. Think of it more as an assistive tool that helps us flag specific content.
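Deluxe hasn't published how its tool works internally, but the core check it describes, comparing subtitle positions against regions of burned-in text found by an image-recognition pass, can be sketched in a few lines. Everything below (function names, the `(x, y, width, height)` box format, the cue structure) is an illustrative assumption, not Deluxe's actual code:

```python
# Hypothetical sketch: flag subtitle cues whose placement would overlap
# burned-in on-screen text detected by an image-recognition pass.
# Boxes are (x, y, width, height) rectangles in pixels.

def boxes_overlap(a, b):
    """Return True if two (x, y, w, h) rectangles intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def flag_overlapping_cues(cues, onscreen_text):
    """cues: {cue_id: ((start_frame, end_frame), box)}.
    onscreen_text: {frame: [detected text boxes]}.
    Returns the ids of cues that collide with detected burned-in text,
    so an operator can go back and verify those spots by hand."""
    flagged = []
    for cue_id, ((start, end), box) in cues.items():
        for frame in range(start, end + 1):
            if any(boxes_overlap(box, t) for t in onscreen_text.get(frame, [])):
                flagged.append(cue_id)
                break  # one collision is enough to flag the cue
    return flagged
```

As in the interview, this is an assistive filter: it narrows the manual review down to the cues that actually need attention rather than making the final call itself.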
The other tool in the subtitling space addresses a different issue. You might have many different edits of a film by territory and by distribution channel; a digital cinema version will be slightly different from a pay-per-view or airline version, for example. The same subtitle file needs to be conformed across all those windows. The new tool uses voice recognition to match the audio against our subtitle file and ensure that the different versions line up. That way we can find where the inconsistencies are across the different versions of a film.
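Once voice recognition has produced a cue sequence for each edit, finding the inconsistencies reduces to a sequence comparison. A minimal sketch of that second step, using Python's standard-library `difflib` (the function name and cue representation are assumptions for illustration, not Deluxe's implementation):

```python
import difflib

def find_mismatched_cues(reference_cues, conformed_cues):
    """Compare subtitle cue text across two versions of a film and report
    cues that were inserted, deleted, or changed between them.
    Each argument is a list of cue text strings in playback order."""
    sm = difflib.SequenceMatcher(None, reference_cues, conformed_cues)
    issues = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag != "equal":
            # (kind of difference, affected reference cues, affected conformed cues)
            issues.append((tag, reference_cues[i1:i2], conformed_cues[j1:j2]))
    return issues
```

A scene trimmed from an airline edit, for example, would surface here as a `delete` entry, pointing an operator directly at the cues that need reconforming.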
What sort of instructions did you give employees when they set out on this hackathon?
The point of the hackathon was to leverage existing technologies within our application layer to create automated conformance-verification tools. That helps us manage a landscape with such a large variation of versions across all the different distribution formats and geographic markets.
As far as instructions go, we left it pretty open. We wanted to tap into the broader creativity and technical skills of our employees around the world. We set up five different categories: Impact (does it solve a real problem directly related to the services we offer?); Innovation and Originality (even if it solves a problem, is it creative enough?); and then categories on Technical Difficulty, Design, and Presentation. We weighted applications that were more strategically aligned with our services, but we also left it open so we had projects that would be new for us. Subtitling just happens to be a big challenge right now because of the geographical distribution of content and the speed at which content is released. Versioning is the last stop before a movie goes into worldwide release, so challenges related to localization have increased in recent years. That's why two of our applications wound up addressing that topic.
You mentioned that another highlight from the hackathon was an audio description tool; can you tell us more about that?
That project addresses an important challenge in audio description: how can we start to gather information on what's happening in a scene when there's no dialogue? We're still at the early stages of this project, but it will help the visually impaired by generating a script for an audio description track. The technology isolates the elements in a scene and uses neural networks to put sentences together: "There is a dog chasing a car," for example. It's not a final product yet, but the more content we feed it to build a sample base, the more that descriptive capability will improve. Ideally, all content going out would have audio description tracks for the visually impaired.
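The interview doesn't detail the model, so as a toy stand-in for the neural captioner, here is the final sentence-assembly step only, rendered as a simple template over whatever subject, action, and object the recognition stage produces. The function and its signature are hypothetical:

```python
def describe_scene(subject, action=None, obj=None):
    """Toy template stand-in for the sentence-assembly step of an
    audio description pipeline: turn recognized scene elements into
    a short descriptive sentence for the narration script."""
    if action and obj:
        return f"There is a {subject} {action} a {obj}."
    if action:
        return f"There is a {subject} {action}."
    return f"There is a {subject}."
```

In the real system this stage would be a learned model improving with every film fed into the sample base, not a fixed template; the sketch only shows where its output slots into the description script.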