generative-video-2026 - Skill Dossier
Pick, prompt, and pipeline frontier video generation models — Sora 2, Veo 3.1, Kling 3, Runway Gen-4 + Aleph + Act-Two, Seedance 2.0, Hailuo, Luma, Pika, PixVerse — plus open weights (Wan 2.2, HunyuanVideo-1.5, LTX-2.3, Mochi, CogVideoX), lip-sync (Hedra, Sync.so, LatentSync), and video-to-video editing. Activate on: AI video, text to video, image to video, video to video, Sora API, Veo API, Kling, Runway Aleph, Wan 2.2, HunyuanVideo, LTX video, lip-sync video, talking head, character consistency video. NOT for: video editing without AI gen (use an NLE), live streaming, finished post-production color grading, or generating audio/music (use generative-music-audio).
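Hosted video-generation APIs such as the ones named above generally run as asynchronous jobs: you submit a prompt, receive a job handle, and poll until the render finishes. The sketch below shows that generic submit-and-poll pattern only; the status names (`queued`, `processing`, `succeeded`, `failed`) are hypothetical placeholders, not any specific provider's API, and the status-fetching callable is injected so the same loop works against any backend.

```python
import time

def poll_until_done(fetch_status, interval=5.0, timeout=600.0):
    """Poll an async video-generation job until it reaches a terminal state.

    fetch_status: zero-argument callable returning a dict like
    {"status": "..."}; the status vocabulary here is a hypothetical
    stand-in for whatever the chosen provider actually returns.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_status()
        if job["status"] in ("succeeded", "failed"):
            return job  # terminal state: caller inspects result or error
        time.sleep(interval)  # back off between polls to respect rate limits
    raise TimeoutError("video generation job did not finish in time")
```

In practice you would wrap a provider SDK or a `curl`-style HTTP call in `fetch_status`, and on success download the result URL (e.g. with `curl` or `ffmpeg`, both in this skill's allowed tools).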


Allowed Tools

Read, Write, Edit, Bash(python:*, uv:*, pip:*, curl:*, ffmpeg:*), WebFetch


Skills use the open SKILL.md standard — the same file works across all platforms.

Install all 551 skills as a plugin
claude plugin marketplace add curiositech/windags-skills
claude plugin install windags-skills

Claude activates generative-video-2026 automatically when your task matches its description.

"Use generative-video-2026 to help me build a feature system"
"I need expert help with pick, prompt, and pipeline frontier video generati..."