The Second Workshop on Efficient and On-Device Generation (EDGE) at CVPR 2025 will focus on the latest advancements in generative AI in the computer vision domain, with an emphasis on efficiency across multiple dimensions. We encourage techniques that enable generative models to be trained more efficiently and to run on resource-constrained devices, such as mobile phones and edge devices. Through these efforts, we envision a future where generative AI capabilities become significantly more accessible, scaling broadly while keeping their carbon footprint in check.
Topics of interest include, but are not limited to:
Format: Submissions must use the CVPR 2025 Author Kit (LaTeX/Word zip file) and follow the CVPR 2025 author instructions and submission policies. Submissions must be anonymized. A paper accepted to the CVPR 2025 main conference may be resubmitted to our Extended Abstract track. A paper accepted by another venue may be resubmitted to our Extended Abstract track if that venue allows it. A submission to another CVPR 2025 workshop cannot be resubmitted to EDGE 2025. The workshop considers two submission tracks:
Only long papers will be included in the CVPR 2025 proceedings.
===
Submission Site: https://openreview.net/group?id=thecvf.com/CVPR/2025/Workshop/EDGE
Submission Deadline: March 21, 2025 (AOE)
Workshop Date: June 12, 2025, 13:00-17:00
Workshop Location: Room 208 A
Poster Session: June 12, 2025, 12:00-13:00 (before the workshop starts)
Poster Session Location: ExHall D, Board #202-214
Stanford
OpenAI
Nanyang Technological University
GenAI at Meta
Snap Inc.
Luma AI
NVIDIA
Carnegie Mellon University
ByteDance
Time | Speaker / Activity | Title |
---|---|---|
13:00 - 13:10 | Opening Remarks and Award Announcement | |
13:10 - 13:35 | Ziwei Liu, Nanyang Technological University | "From Multimodal Generative Models to Dynamic World Modeling" |
13:35 - 14:00 | Stefano Ermon, Stanford University | "Accelerating Inference in Diffusion Models" |
14:00 - 14:25 | Ishan Misra, GenAI at Meta | "Scale Efficient Video Generation and Tokenization" |
14:25 - 14:50 | Sergey Tulyakov, Snap Inc. | "Sharpening the Edge: High Quality Image and Video Synthesis on Mobiles" |
14:50 - 14:55 | Break | |
14:55 - 15:20 | Jiaming Song, Luma AI | "Breaking the Algorithmic Ceiling in Pre-Training with an Inference-first Perspective" |
15:20 - 15:45 | Jun-Yan Zhu, Carnegie Mellon University | "Distilling Diffusion Models into Conditional GANs" |
15:45 - 16:10 | Lu Jiang, ByteDance | "Cost-Effective Training of Video Generation Foundation Model" |
16:10 - 16:35 | Enze Xie, NVIDIA | "Building Image Generation Models from Scratch and Acceleration" |
16:35 - 17:00 | Lu Liu, OpenAI | "A Brief Introduction of 4o Image Generation" |
We are pleased to announce the accepted papers for the Second Workshop on Efficient and On-Device Generation (EDGE) at CVPR 2025. Congratulations to all authors!
We are pleased to announce two paper awards at the CVPR 2025 EDGE workshop, sponsored by PixVerse. Congratulations to the award winners!
GenAI, Meta
Google DeepMind
Google DeepMind
GenAI, Meta
Google DeepMind
GenAI, Meta
Google DeepMind
GenAI, Meta
GenAI, Meta
UT Austin
CUHK
Google DeepMind
GenAI, Meta
GenAI, Meta
GenAI, Meta
GenAI, Meta
PixVerse is one of the world's largest GenAI platforms, with over 70 million users worldwide, driven by in-house video generation models that deliver superior quality and efficiency. PixVerse aims to democratize video creation by enabling the billions of viewers who have never made a video to produce their first share-worthy video with AI.