Adobe Inc. has begun to procure videos to build its artificial intelligence text-to-video generator, trying to catch up to competitors after OpenAI demonstrated a similar technology.
The software company is offering its network of photographers and artists $120 to submit videos of people engaged in everyday actions such as walking or expressing emotions including joy and anger, according to documents seen by Bloomberg. The goal is to source assets for artificial intelligence training, the company wrote.
Over the past year, Adobe has focused on adding generative AI features to its portfolio of software for creative professionals, including Photoshop and Illustrator. The company has released tools that use text to produce images and illustrations that have been used billions of times so far.
Still, OpenAI’s demonstration of its video-generation model Sora reignited fears among investors that the longtime creative software leader could be disrupted by the new technology. Adobe has said it’s working on video-generation technology, with plans to discuss more about it later this year.
Adobe is requesting more than 100 short clips of people engaged in actions and showing emotions as well as simple anatomy shots of feet, hands or eyes. The company also wants video of people “interacting with objects” such as smartphones or fitness equipment. It cautions against providing copyrighted material, nudity or other “offensive content.”
Pay for the submissions works out, on average, to about $2.62 per minute of submitted video, though it could reach roughly $7.25 per minute.
Asked for comment, an Adobe spokesperson pointed to prior statements from executives that the company is developing video-generating features.
The listing highlights the massive amount of data needed to build AI models underlying popular content creation products such as ChatGPT. There has been much debate and controversy over the source of that data. OpenAI Chief Technology Officer Mira Murati said in a viral interview clip with the Wall Street Journal last month that she wasn’t sure whether Sora was trained on user-generated videos from Google’s YouTube as well as Meta Platforms Inc.’s Facebook and Instagram.
Adobe has sought to differentiate its models by training them primarily on its vast library of stock media for marketers and creative agencies. Where its stock library falls short, it has procured images directly from contributors. It has also paid contributors to submit large batches of photos for AI training, such as images of bananas or flags. Those jobs have paid in the range of 6 cents to 16 cents per image, according to listings seen by Bloomberg.
©2024 Bloomberg L.P.