The FrameFormer configuration GUI in AmberFin, showing the actual parameters used to control the FrameFormer software.
On the first day of the NAB 2019 exhibit hall, Dalet and InSync Technology announced the integration of InSync’s FrameFormer frame rate conversion into Dalet’s AmberFin processing platform.
AmberFin is Dalet’s media processing platform for file conversion and transcoding, and it can be used in any situation where frame rate conversion is needed, as Steve Higgins, general manager of the AmberFin product at Dalet, explained during our one-on-one interview.
“Frame rate conversion is something that AmberFin has specialized in for some time now,” he told me, “and that is where the adoption of InSync’s FrameFormer technology into our system will be so useful.”
Of course, AmberFin has been able to deal with frame rate conversion for some time, but the new partnership with InSync is going to help it keep pace with the ever-changing requirements of today’s productions.
“Standards are developing all the time,” Higgins told me, “and foremost among them are the move toward 4K and the adoption of HDR (High Dynamic Range), which are some of the strengths of the FrameFormer motion-compensated frame rate conversion technology.”
The InSync implementation will be complementary to other approaches already included in AmberFin.
The higher-level Workflow Engine allows you to create an end-to-end workflow, as in this illustration, which has a Watch Folder waiting for an input file.
“We already have Cinnafilm’s Tachyon technology, which runs on a GPU and gives customers conversion services as quickly and inexpensively as possible,” Higgins explained. “For them we have to set up what is in effect a hardware-accelerated appliance based on a server with GPUs to process their conversions as efficiently as we can.”

Other customers, who already have their own render farms, need a software-based approach that leverages the speed of CPUs.
“This is where we can reap the benefits of InSync’s FrameFormer software-only application for greater flexibility in case they want to push content up to the AWS cloud,” he said. “It’s easy to license for the kind of servers we need to deploy it on.”
For example, you could build a workflow that has a Watch Folder waiting for an input file. If the file is 59i, it gets transcoded via FrameFormer to 25i. AmberFin then FTPs the result to a remote site and emails someone that the conversion is complete.
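The Watch Folder pattern behind such a workflow is straightforward to sketch. The following Python is a minimal, hypothetical illustration of my own, not AmberFin's actual API: it polls a folder, hands each newly arrived file to a handler once, and stubs out the transcode/FTP/email steps the article describes.

```python
import time
from pathlib import Path

def watch_folder(folder, handler, poll_seconds=5, run_forever=True):
    """Poll `folder` and pass each newly arrived file to `handler`
    exactly once. A stand-in for a Watch Folder trigger, not
    AmberFin's implementation."""
    seen = set()
    while True:
        for f in sorted(Path(folder).iterdir()):
            if f.is_file() and f not in seen:
                seen.add(f)
                handler(f)
        if not run_forever:  # single pass, useful for testing
            return seen
        time.sleep(poll_seconds)

def handle_input(path):
    # Placeholder for the downstream steps the article describes:
    #   1. transcode via FrameFormer (e.g. 59i -> 25i)
    #   2. FTP the result to the remote site
    #   3. email a completion notice
    print(f"would convert, deliver and notify for {path.name}")
```

In a production system the polling loop would be replaced by the platform's own event trigger, but the structure — detect, transcode, deliver, notify — is the same.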
One of the greatest challenges for this kind of technology is cell phone video, with its non-standard frame rates.
“That is certainly important to us, but our service is also geared toward best-quality conversion for high-end content, usually for international program exchange,” Higgins told me. “So we are dealing much more with motion estimation, plotting where objects might be in between frames.”
I speculated that this sounded like creating the “P” frames in an MPEG GOP (Group of Pictures).
“Yes, the underlying algorithms are very similar,” Higgins said. “But in standards conversion the algorithm has to work out what has moved, and also what is revealed in the background behind the object that has moved. That is what we were demonstrating at NAB 2019.”
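To make the idea of motion estimation concrete, here is a toy block-matching sketch in Python/NumPy. It is my own illustration of the general principle Higgins describes, not InSync's algorithm: for each block of the current frame it searches the previous frame for the best-matching block (lowest sum of absolute differences), then uses the resulting motion vectors to synthesize an intermediate frame halfway between the two.

```python
import numpy as np

def block_motion_estimate(prev, curr, block=8, search=4):
    """Exhaustive block matching: for each block of `curr`, find the
    (dy, dx) offset into `prev` with the lowest sum of absolute
    differences (SAD)."""
    h, w = prev.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = curr[y:y + block, x:x + block].astype(int)
            # Start from the zero vector so flat regions stay put.
            best = np.abs(ref - prev[y:y + block, x:x + block].astype(int)).sum()
            best_v = (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    sy, sx = y + dy, x + dx
                    if sy < 0 or sx < 0 or sy + block > h or sx + block > w:
                        continue
                    cand = prev[sy:sy + block, sx:sx + block].astype(int)
                    sad = np.abs(ref - cand).sum()
                    if sad < best:
                        best, best_v = sad, (dy, dx)
            vectors[by, bx] = best_v
    return vectors

def interpolate_midframe(prev, vectors, block=8):
    """Synthesize a frame halfway between the two inputs by sampling
    each block from `prev` half-way along its motion vector."""
    h, w = prev.shape
    mid = np.empty_like(prev)
    for by in range(vectors.shape[0]):
        for bx in range(vectors.shape[1]):
            dy, dx = vectors[by, bx]
            y, x = by * block, bx * block
            sy = min(max(y + dy // 2, 0), h - block)
            sx = min(max(x + dx // 2, 0), w - block)
            mid[y:y + block, x:x + block] = prev[sy:sy + block, sx:sx + block]
    return mid
```

A real standards converter goes well beyond this sketch: as Higgins notes, it must also work out what is revealed in the background behind a moving object, which simple block matching cannot recover.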