Luisa Crawford
Apr 03, 2026 21:53
Alibaba’s Wan 2.7 AI video model arrives on Together AI, with text-to-video now live and image-to-video and editing tools coming soon at competitive pricing.

Together AI has rolled out Alibaba’s Wan 2.7 video generation model on its cloud platform, pricing the text-to-video capability at $0.10 per second of generated footage. The deployment marks the first major cloud availability for the four-model suite that Alibaba released in late March.
The text-to-video model, accessible via the endpoint Wan-AI/wan2.7-t2v, supports 720p and 1080p resolution with outputs ranging from 2 to 15 seconds. Audio input can drive generation, and multi-shot narrative control works directly through prompt language, a significant upgrade over basic prompt-to-video systems that force creators into fragmented workflows.
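Under those constraints, a request might be assembled as in the sketch below. Only the model ID (Wan-AI/wan2.7-t2v), the 720p/1080p options, and the 2–15 second range come from the announcement; the REST route, payload field names, and the TOGETHER_API_KEY environment variable are illustrative assumptions, not confirmed API details.

```python
import json
import os
import urllib.request

# Hypothetical route -- consult Together AI's docs for the real endpoint path.
API_URL = "https://api.together.xyz/v1/videos/generations"

def build_t2v_payload(prompt, resolution="720p", duration_s=5):
    """Build an illustrative request body for the Wan 2.7 text-to-video model.

    The model ID and the 720p/1080p and 2-15 second limits come from the
    announcement; the field names are assumptions for illustration.
    """
    if resolution not in ("720p", "1080p"):
        raise ValueError("Wan 2.7 t2v supports 720p or 1080p output")
    if not 2 <= duration_s <= 15:
        raise ValueError("output length must be between 2 and 15 seconds")
    return {
        "model": "Wan-AI/wan2.7-t2v",
        "prompt": prompt,
        "resolution": resolution,
        "duration": duration_s,
    }

def submit_job(prompt, **kwargs):
    """Submit a generation job; the response is assumed to carry a job ID."""
    payload = build_t2v_payload(prompt, **kwargs)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())
```

Validating resolution and duration client-side, as above, avoids paying for a round trip just to get a parameter error back.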
What’s Actually Shipping
Right now, only text-to-video is live. Together AI says image-to-video and reference-to-video capabilities are “coming soon,” with video editing tools to follow.
The image-to-video model will support first-frame, first-and-last-frame, and continuation generation, useful for storyboarding workflows. A 3×3 grid-to-video feature targets teams building structured content from static assets.
Reference-to-video gets more interesting for production work. It will accept both reference images and reference videos as inputs, handling multi-character interactions and complex scene composition at up to 1080p for 10-second clips.
The Editing Play
Video Edit, the fourth model in the suite, addresses what is arguably the biggest pain point in AI video: the inability to revise without starting from scratch. Together AI’s implementation will support instruction-based editing via text, reference-image-based modifications, style transfer, and temporal feature cloning: motion, camera work, and effects lifted from source media.
For creative teams, keeping these capabilities within one API surface eliminates the handoff chaos that currently plagues AI video production. Most workflows today involve generating in one tool, editing in another, and manually patching the results.
Competitive Positioning
The $0.10 per second pricing puts Together AI within striking distance of rivals, though direct comparisons depend heavily on resolution and duration parameters. Wan 2.7 itself has drawn attention since its March launch; reviews have called it possibly the strongest AI video model of 2026, though some skepticism about the hype remains.
Alibaba built Wan 2.7 within its Qwen ecosystem, and earlier versions (2.1 and 2.2) were open-sourced. Whether 2.7 follows that path has not been confirmed, but the model is now accessible through multiple cloud providers, including Atlas Cloud and WaveSpeedAI alongside Together AI.
Integration Details
For developers already on Together AI’s platform, adding video generation requires no new authentication or billing setup. The same SDKs work across text, image, and video inference. The company offers serverless endpoints for development, with volume pricing available for production workloads.
Teams evaluating the technology can test directly in Together AI’s playground before committing to API integration. Full documentation covers parameters including audio inputs, resolution control, and the polling loop required for asynchronous video generation jobs.
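The asynchronous flow, submitting a job and then polling until it finishes, can be sketched generically. Only the need for a polling loop comes from the documentation described above; the status strings and the shape of the status-fetching callable here are assumptions for illustration.

```python
import time

def poll_until_done(fetch_status, interval_s=5.0, timeout_s=600.0):
    """Poll a video-generation job until it reaches a terminal state.

    `fetch_status` is any callable returning the job's current status
    string. The "completed"/"failed" values are illustrative, not
    Together AI's documented states.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("completed", "failed"):
            return status
        time.sleep(interval_s)  # back off between status checks
    raise TimeoutError("video generation job did not finish in time")
```

Taking the status fetcher as a callable keeps the loop independent of any particular SDK, so the same helper works whether the status comes from an HTTP request or a client library.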
Image source: Shutterstock
