A team of researchers from NEC Laboratories, Palo Alto Research Center (PARC), Amazon, and Stanford University is working together to solve the problem of realistically altering scene text in videos. The main application driving this research is the creation of personalized content for marketing and promotional purposes, for example, replacing a word on a store sign with a personalized name or message, as shown in the picture below.

Technically, several attempts have been made to automate text replacement in still images based on principles of deep style transfer. The research group builds on this progress to tackle text replacement in videos. Video text replacement is not an easy task: it must meet the challenges faced in still images while also accounting for temporal effects such as lighting changes and blur caused by camera motion or object movement.

One approach to video text replacement could be to train an image-based text style transfer module on individual frames while incorporating temporal consistency constraints in the network loss. With this approach, however, the network performing text style transfer is additionally burdened with handling the geometric and motion-induced effects encountered in video.
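To make the idea of a temporal consistency constraint concrete, here is a minimal sketch of such a loss term. This is an illustrative assumption, not the paper's actual objective: it simply penalizes frame-to-frame differences in the stylized output, and the `weight` hyperparameter is hypothetical. A full version would first warp frame t toward frame t+1 using optical flow before comparing.

```python
import numpy as np

def temporal_consistency_loss(frames):
    # frames: array of shape (T, H, W, C) holding the stylized outputs
    # for T consecutive video frames.
    # Penalizes change between adjacent frames; a realistic version
    # would motion-compensate with optical flow before differencing.
    diffs = frames[1:] - frames[:-1]
    return float(np.mean(diffs ** 2))

def total_loss(per_frame_style_loss, frames, weight=0.1):
    # Hypothetical combined objective: the per-frame style-transfer
    # loss plus the temporal term scaled by `weight`.
    return per_frame_style_loss + weight * temporal_consistency_loss(frames)
```

With a loss of this form, the style-transfer network is pushed to produce outputs that vary smoothly over time, but it must still absorb all geometric and motion-induced variation itself, which is the burden noted above.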

Quick Read: https://www.marktechpost.com/2021/09/11/ai-researchers-from-amazon-nec-stanford-unveil-the-first-deep-videos-text-replacement-method-strive/

Paper: https://arxiv.org/pdf/2109.02762.pdf

Github: https://striveiccv2021.github.io/STRIVE-ICCV2021/

Dataset: https://github.com/striveiccv2021/STRIVE-ICCV2021
