Luma AI made waves with the launch of its Dream Machine AI video creation platform last summer.
Of course, even though that was only seven months ago, the AI video space has progressed rapidly since, with the release of many AI video generation models from competing startups in the U.S. and China, including Runway, Kling, Pika 2.0, OpenAI's Sora, Google's Veo 2, MiniMax's Hailuo, and open alternatives such as Hotshot and Genmo's Mochi 1, to name a few. Luma itself has also recently updated its Dream Machine platform, adding new image generation features and boards and launching an iOS app.
But the updates continue: Today, San Francisco-based Luma released Ray2, its new AI video generation model, available now through its Dream Machine website and mobile apps for paying subscribers (to start).
The model offers "fast, natural motion and physics," according to Luma AI founder and CEO Amit Jain on his X account, and was trained with 10 times more compute than Luma's previous video model, Ray1.
"This raises the bar on what generative video can do and makes video creation accessible to a much wider audience," Jain added.
Luma's Dream Machine web platform offers a free tier with 720p resolution and varying monthly generation allowances. Paid plans start at $6.99 per month with "Lite," which offers 1080p generations, and scale up through Premium ($20.99/month), Unlimited ($66.49/month) and Enterprise ($1,672.92/year).
Currently, Luma's Ray2 is limited to text-to-video, allowing users to type written descriptions that are turned into 5- or 10-second video clips.
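For developers, Luma also offers a Dream Machine API with an official `lumaai` Python SDK, which the article does not cover. Below is a minimal sketch of what a text-to-video request could look like; note that Ray2's availability in the API, and the `model="ray-2"` and `duration="5s"` values, are assumptions that should be checked against Luma's API documentation.

```python
import os
import time

from lumaai import LumaAI  # official Dream Machine SDK: pip install lumaai

# Reads the API key from the environment; create one in your Luma account settings.
client = LumaAI(auth_token=os.environ["LUMAAI_API_KEY"])

# Submit a text prompt to be rendered as a short video clip.
# model="ray-2" and duration="5s" are assumed values, not confirmed by this article.
generation = client.generations.create(
    prompt="Fencers crossing swords in a space station orbiting Jupiter",
    model="ray-2",
    duration="5s",
)

# Generation runs asynchronously, so poll until it finishes.
while generation.state not in ("completed", "failed"):
    time.sleep(5)
    generation = client.generations.get(id=generation.id)

if generation.state == "completed":
    print("Video ready:", generation.assets.video)  # URL of the rendered clip
else:
    print("Generation failed:", generation.failure_reason)
```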
The model is capable of generating new videos in a matter of seconds, although right now generations may take a few minutes due to overwhelming demand from new users.
Examples from Luma and early testers in its Creators program show off the model's versatility, including a man walking through an Antarctic snowstorm surrounded by an explosion, and a ballerina dancing on ice in the Arctic.
Interestingly, all the movements in the sample videos look lifelike and fluid, and in many cases the subjects move faster and more naturally than in videos from rival AI generators, which often appear to play out in slow motion.
The model is also able to give a realistic feel to surreal concepts, such as a giraffe surfing, as X user @JeffSynthesized showed. "Ray2 is the real deal," he wrote on X.
Other AI videographers who have tested the new model seem to agree. Jerrod Lew wrote on X: "Advanced animation, lighting and realism are here, and they're great."
“…that’s great!” AI videographer Heather Cooper chimed in.
My own experiments were a mixed bag, with some complex prompts producing unnatural (if occasionally beautiful) results. But when it generated clips that matched what I had envisioned in my prompts, such as fencers crossing swords in a space station orbiting Jupiter, it was impressive without a doubt.
Jain said Luma will also add image-to-video, video-to-video and editing features to Ray2 in the future, further expanding the model's capabilities.
To celebrate the launch of Ray2, Luma Labs is hosting the Ray2 Awards, giving creators the chance to win up to $7,000 in prizes.
Winners of both awards will be announced on January 27. Entries can be submitted via forms provided by Luma Labs, and creators are encouraged to use the hashtags #Ray2 and #DreamMachine when sharing their work.
In addition, Luma Labs has launched an affiliate program, allowing participants to earn commissions by promoting its products.