New AI model generates 45-minute lip-synced video from one photo and runs in real time
Matthias Bastian, The Decoder
AI Summary
LPM 1.0 is a new AI model that generates 45-minute lip-synced videos in real time from a single photograph, with realistic facial expressions and emotional reactions. The model is currently a research project, but it represents a significant advance in video synthesis and facial animation.
This article was originally published on The Decoder. Read the full story at the source.