
Please Add Runway Act II, Runway Aleph, Omnihuman, Qwen Image Edit

Hi, I have four feature requests that would make a major difference in completing AI video projects more efficiently. These models address key creative gaps in performance capture, text-based video editing, dynamic lip-sync, and image detail refinement:

1. Runway Act II – Performance-Driven Animation: Act II lets users upload a video of themselves performing (expressions, speech, gestures) and map that performance onto a still image or another video subject. This transfers realistic facial and emotional expression to characters, creating expressive animation from a simple acting clip. Adding Act II would let creators turn portraits or concept art into animated performances by using their own movements as the driver — perfect for storytelling, dubbing, and character-driven videos.

2. Runway Aleph – Text-Based Video Editing: Aleph allows users to edit existing AI-generated videos using text prompts. You can modify scenes, subjects, or visual styles directly through text commands without re-rendering from scratch. Integrating this would make it possible to quickly adjust visuals, refine compositions, or apply consistent thematic changes across clips — saving time and maintaining coherence throughout the video creation process.

3. Omnihuman – Lip Sync for Moving Characters: The current lip-sync features only work with still images. Omnihuman supports accurate lip-syncing on moving characters within pre-existing videos. This would allow users to upload live-action or animated footage and have the AI match lip motion and dialogue seamlessly, preserving the subject’s head and body movement. Platforms like Dzine already support this functionality — adding Omnihuman would greatly improve dynamic character realism.

4. Qwen ImageEdit – Superior Photorealism and Text Handling: Qwen ImageEdit delivers outstanding high-resolution detail, lighting control, and texture realism. It’s also superior at rendering and editing text within images — performing better than Seedream, Nano Banana, and Flux Kontext in clarity and accuracy. This model would enhance both image generation and post-editing workflows, allowing creators to refine visuals, integrate typography naturally, and achieve professional-quality compositions directly within your platform.

Together, these models would unlock performance-driven animation, intelligent text-based video editing, accurate dynamic lip-syncing, and photorealistic image refinement — enabling creators to complete advanced AI projects from start to finish in one place.

Comments


Spidey Senses, 3 hours ago
Yes, please add these. I agree, these are all great suggestions.
Voters: +49
Status: Backlog
Board: Feature Request
Submitted: 10 hours ago by California Dreamin