Choose Your AI Video Track
Start with the track that matches your current bottleneck. The resource set below was updated March 13, 2026.
AI Video Fundamentals
Learn the current AI video model landscape, the core prompt grammar behind usable generations, and the fastest path from first prompt to first finished short clip.
Cinematography & Visual Language for AI Video
Learn the visual grammar behind stronger AI video prompts: shot size, camera angle, camera movement, composition, and lighting choices that change the emotional read of a scene.
Advanced AI Video Workflows & Consistency
Move from single generations to pipeline thinking: image references, scene continuity, video-to-video refinement, and multi-tool workflows that preserve identity and motion across shots.
Commercial AI Video Production & Legal Readiness
Turn AI video from an interesting creative toy into a client-ready production workflow by learning rights, platform terms, labeling duties, and the operational discipline needed for commercial delivery.
How to use this hub with 5tldr
The point is not to hoard links. The point is to turn good sources into reusable knowledge.
1. Open the source
Read the original article first so you understand the author’s framing.
2. Summarize it
Paste the article into 5tldr and generate key points or a learning pack.
3. Save the useful ones
Keep the summaries that matter in Library so you can revisit them later.
4. Review and compare
Return to your saved notes, compare sources, and turn them into your own understanding.
AI Video Fundamentals
Learn the current AI video model landscape, the core prompt grammar behind usable generations, and the fastest path from first prompt to first finished short clip.
AI video is moving too fast for vague tool hype to be useful. You need a grounded starting point: what each model is good at, how prompt structure changes output quality, and how to iterate without wasting credits.
What you should get from this track
2-week plan
Week 1: Benchmark the model landscape
Read the Sora, Runway, Veo, and Flow materials, then run the same 3 prompts across at least 3 tools. Revisit your Day 1 notes on Day 3 and Day 7 to compare how each model handles motion, prompt adherence, and editability.
Week 2: Build your first repeatable short-form workflow
Use the prompting guides to create 5 short clips with one fixed structure: subject, action, environment, style, camera, and audio intent. Reopen your best and worst prompts on Day 10 and Day 14 to rewrite them from memory before refining again.
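A minimal sketch of that fixed structure as a reusable template, assuming nothing vendor-specific: the six fields mirror the exercise above, while the example values and the comma-joined assembly order are illustrative, not any tool's required format.

```python
# A minimal sketch of the six-field prompt structure from the Week 2 exercise.
# Field names follow the plan above; values and assembly order are hypothetical.

PROMPT_FIELDS = ("subject", "action", "environment", "style", "camera", "audio")

def build_prompt(spec: dict) -> str:
    """Assemble one text-to-video prompt from the six fixed fields."""
    missing = [f for f in PROMPT_FIELDS if f not in spec]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return ", ".join(spec[f] for f in PROMPT_FIELDS)

clip = build_prompt({
    "subject": "a street vendor in a yellow raincoat",
    "action": "flips a pancake on a cast-iron griddle",
    "environment": "night market, light rain, neon signage",
    "style": "handheld documentary, shallow depth of field",
    "camera": "slow push-in from a medium shot",
    "audio": "sizzling griddle, distant crowd murmur",
})
print(clip)
```

Keeping the structure fixed across all 5 clips is what makes the Day 10 and Day 14 rewrites comparable.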
5tldr study workflow
For each source, read the original first, summarize it in 5tldr, save the useful result to Library, and write one short note from memory before checking the summary again.
Creating videos with Sora
OpenAI's practical Sora guide covers the real workflow: generate, iterate, edit, publish, and storyboard. The prompting section is especially useful because it emphasizes specificity, cadence, realism anchors, and limiting moving parts.
Why start here: This is one of the clearest current references for how a modern consumer AI video product wants prompts to be structured in practice, not just in theory.
Gen-4 Video Prompting Guide
Runway explains how to build motion-first prompts for Gen-4: start simple, add one variable at a time, and separate subject motion, camera motion, scene motion, and style descriptors. It also explicitly warns against negative prompting and overloading a 5-10 second clip.
Why start here: This is one of the most concrete prompt frameworks available from a production AI video vendor, and it translates directly into better iteration discipline.
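To make that iteration discipline concrete, here is a small sketch of the "one variable at a time" loop, assuming a plain comma-separated prompt string; the four motion/style categories come from the Runway guide, while the base prompt and the descriptor values are hypothetical.

```python
# A sketch of the "one variable at a time" discipline from the Gen-4 guide.
# The four categories come from the guide; the prompt text is hypothetical.

base = "a lighthouse keeper climbs a spiral staircase"

variants = [
    ("subject motion", "climbing slowly, lantern swinging in hand"),
    ("camera motion",  "camera cranes upward through the stairwell"),
    ("scene motion",   "dust drifting in shafts of morning light"),
    ("style",          "16mm film grain, muted teal palette"),
]

# Add a single category per generation, so any quality change can be
# attributed to exactly one variable instead of an overloaded prompt.
prompt = base
for category, descriptor in variants:
    prompt = f"{prompt}, {descriptor}"
    print(f"[{category}] {prompt}")
```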
Veo 3.1 Ingredients to Video: More consistency, creativity and control
Google's January 13, 2026 update focuses on character identity consistency, background and object consistency, native 9:16 outputs, and 1080p/4K upscaling. It frames Veo 3.1 as both a mobile-first and production-ready workflow tool.
Why start here: This is a current baseline for what a serious 2026 model should offer beyond pure novelty: consistency, vertical output, and production fidelity.
Introducing Flow: Google’s AI filmmaking tool designed for Veo
Google positions Flow as a filmmaking surface rather than a single-generation tool. The post highlights camera controls, Scenebuilder, asset management, and consistent reuse of characters and scenes across clips.
Why start here: Useful for understanding how the market is shifting from isolated generations toward end-to-end filmmaking workspaces.
Cinematography & Visual Language for AI Video
Learn the visual grammar behind stronger AI video prompts: shot size, camera angle, camera movement, composition, and lighting choices that change the emotional read of a scene.
Most weak AI video prompts fail before the model generates anything. The user never decided what the shot should feel like, how the camera behaves, or what lighting logic supports the mood.
What you should get from this track
2-week plan
Week 1: Learn the shot grammar
Study shot types, angles, and camera motion, then practice 8 framing patterns and 6 movement prompts. Revisit your Day 1 clips on Day 3 and Day 7 to label what visual choice actually changed the emotional read.
Week 2: Re-light and re-compose the same scene
Take 2-3 scenes from Week 1 and regenerate them with different framing, rule-of-thirds positioning, and white-balance / lighting intent. End the week by mixing techniques instead of isolating them one by one.
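A short sketch of that drill: hold one scene constant and vary a single visual choice per generation. The shot and lighting vocabulary follows the sources in this track; the scene and the specific values are hypothetical.

```python
# A sketch of the Week 2 re-light/re-compose drill: one fixed scene,
# one visual variable changed per generation. Values are illustrative.

scene = "a cellist rehearses alone in an empty concert hall"

variations = {
    "shot size":    ["wide shot", "medium shot", "extreme close-up"],
    "camera angle": ["low angle", "high angle", "over-the-shoulder"],
    "composition":  ["subject on left third", "subject dead center"],
    "lighting":     ["tungsten practicals, warm 3200K",
                     "daylight through tall windows, cool 5600K"],
}

for choice, options in variations.items():
    for option in options:
        # Each line is one generation; only `choice` varies from the base scene.
        print(f"[{choice}] {scene}, {option}")
```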
5tldr study workflow
For each source, read the original first, summarize it in 5tldr, save the useful result to Library, and write one short note from memory before checking the summary again.
Different types of shots and camera angles in film
Adobe walks through the core shot vocabulary that filmmakers reuse constantly: wide, medium, close-up, extreme close-up, low angle, high angle, over-the-shoulder, POV, tracking, and Dutch angle. It clearly distinguishes shot size from camera angle.
Why start here: AI video gets better immediately when your prompt uses film language precisely instead of asking for something 'cinematic' in the abstract.
How to use, and break, the rule of thirds
Adobe explains the rule of thirds as a compositional guideline rather than a rigid law and emphasizes repetition and deliberate practice. It is a useful refresher for making framing choices visible and intentional.
Why start here: Prompt-level composition control improves when you can describe where the subject sits in frame and why.
Understand white balance in filmmaking
This guide explains white balance, color temperature, and the practical difference between daylight, tungsten, and mixed lighting setups. It also connects on-set choices with post-production correction and stylization.
Why start here: A lot of 'cinematic' look work is really color temperature discipline. This helps you prompt and review lighting choices more intentionally.
Camera Motion Concepts – Composable AI Camera Control
Luma introduces a system for learning and composing camera moves from minimal examples, with a focus on reliable, reusable motion control. The write-up is especially interesting because it contrasts concepts with heavier LoRA or fine-tuning workflows.
Why start here: This is a strong bridge between classic cinematography language and AI-native motion control, which is where a lot of current craft is heading.
Advanced AI Video Workflows & Consistency
Move from single generations to pipeline thinking: image references, scene continuity, video-to-video refinement, and multi-tool workflows that preserve identity and motion across shots.
The hard part of AI video is rarely the first good shot. It is getting the second, third, and fourth shots to belong to the same world without rebuilding everything from scratch.
What you should get from this track
2-week plan
Week 1: Build a 3-shot continuity test
Create one character or product subject and carry it across 3 shots using image references or ingredient inputs. Review your continuity failures on Day 3 and Day 7, then re-run only the weakest shot instead of rebuilding the sequence.
Week 2: Add a post-production refinement layer
Take the continuity test into a second-stage workflow: modify video, relight, restyle, or extend a shot without losing performance. Mix tools deliberately so each one has a narrow job.
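One way to keep the test honest is a simple continuity log, sketched below: one fixed subject reference carried across every shot, plus a second-stage refinement note per shot. The field names and structure are assumptions for illustration, not any vendor's schema.

```python
# A sketch of a continuity log for the 3-shot test. Field names, paths,
# and values are hypothetical, not a vendor schema.

from dataclasses import dataclass, field

@dataclass
class Shot:
    prompt: str
    reference_images: list[str]           # same subject reference in every shot
    refinement: str = "none"              # second-stage pass: relight, restyle, extend
    continuity_notes: list[str] = field(default_factory=list)

subject_ref = "refs/product_hero.png"     # hypothetical reference image

sequence = [
    Shot("product on a workbench, morning light", [subject_ref]),
    Shot("same product, rotating turntable shot", [subject_ref],
         refinement="relight to match shot 1"),
    Shot("same product, macro detail of the logo", [subject_ref],
         continuity_notes=["logo color drifted; re-run this shot only"]),
]

for i, shot in enumerate(sequence, 1):
    print(f"shot {i}: {shot.prompt} | refs={shot.reference_images} | fix={shot.refinement}")
```

Logging failures per shot is what lets you re-run only the weakest shot instead of rebuilding the sequence.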
5tldr study workflow
For each source, read the original first, summarize it in 5tldr, save the useful result to Library, and write one short note from memory before checking the summary again.
Creating with Gen-4 Image References
Runway explains how to use one or more reference images to preserve characters, scenes, styles, and objects across generations. The guide includes both simple single-reference prompting and more advanced multi-reference workflows.
Why start here: This is one of the clearest current references for character and scene consistency without building a custom model.
Controlling Aleph edits with a Reference Image
This workflow combines first-frame editing with reference-guided video generation for tighter object placement and subject continuity. It is a practical example of chaining image and video tooling instead of treating them as separate worlds.
Why start here: Useful if you want a repeatable image-first-to-video pipeline with more control over where edits land in-frame.
Modify Video: Shoot Once. Shape Infinitely
Luma's Modify Video workflow focuses on preserving the original motion and performance while restyling worlds, props, lighting, and environments. It is a strong example of video-to-video as a post-production layer rather than a novelty effect.
Why start here: This is directly relevant for creators who want to separate performance capture from visual treatment and keep that distinction clean.
Veo 3.1 Ingredients to Video: More consistency, creativity and control
Google's Veo 3.1 update explicitly emphasizes identity consistency, object consistency, and reusable ingredient-based generation. It is a good complement to Runway's reference system because it frames consistency as a production feature, not just a model benchmark.
Why start here: Useful for comparing how different vendors now think about continuity, vertical deliverables, and higher-fidelity finishing.
Commercial AI Video Production & Legal Readiness
Turn AI video from an interesting creative toy into a client-ready production workflow by learning rights, platform terms, labeling duties, and the operational discipline needed for commercial delivery.
The creative side is only half the job. Once money, clients, or public distribution enter the picture, output ownership, labeling requirements, and platform-specific terms become part of the production workflow.
What you should get from this track
2-week plan
Week 1: Build a commercial delivery checklist
Read the platform terms and legal materials, then map out how vendor usage rights, copyrightability, and labeling duties differ. Recheck your notes on Day 3 and Day 7 by applying them to a mock client brief instead of rereading passively.
Week 2: Run a mock client production
Produce a short campaign concept with a full paper trail: prompt log, source asset notes, platform used, rights assumption, disclosure plan, and delivery package. End with a postmortem on cost, risk, and review steps.
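A sketch of what that paper trail could look like as one structured record; the fields mirror the checklist above, but the structure and values are illustrative, not a legal template.

```python
# A sketch of the mock-production paper trail. Field names mirror the
# Week 2 checklist above; paths and values are hypothetical examples.

delivery_record = {
    "prompt_log": "logs/campaign_prompts.jsonl",
    "source_assets": [
        {"file": "refs/logo.png", "origin": "client-supplied", "cleared": True},
    ],
    "platform": "Runway Gen-4",                 # hypothetical tool choice
    "rights_assumption": "vendor terms grant commercial use of outputs",
    "disclosure_plan": "AI-generated label in description plus embedded metadata",
    "delivery_package": ["final_9x16.mp4", "final_16x9.mp4", "prompt log"],
    "review_steps": ["rights check", "label check", "client sign-off"],
}

for key, value in delivery_record.items():
    print(f"{key}: {value}")
```

A record like this also gives the end-of-week postmortem something concrete to audit for cost, risk, and review gaps.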
5tldr study workflow
For each source, read the original first, summarize it in 5tldr, save the useful result to Library, and write one short note from memory before checking the summary again.
Terms of Use
OpenAI's consumer terms assign output rights to the user, subject to the terms and applicable law. This is the baseline you need before treating Sora output as something you can deliver commercially.
Why start here: Commercial use starts with the contract layer. You need to know what rights the vendor says you have before you worry about copyright doctrine.
Usage rights
Runway states that users retain ownership of the content they upload and generate, and that Runway does not restrict those generations to non-commercial use. It also explicitly mentions advertising, film festivals, and monetized uploads.
Why start here: This is one of the clearest current statements of commercial usage rights from a major AI video vendor.
Copyright and Artificial Intelligence, Part 2: Copyrightability
The U.S. Copyright Office's January 2025 report says generative AI outputs can be protected only where a human author contributed sufficient expressive authorship. Prompting alone is not enough.
Why start here: This is the clearest current U.S. baseline for separating vendor output rights from actual copyrightability.
Code of Practice on marking and labelling of AI-generated content
The European Commission's 2026 code-of-practice work supports the Article 50 AI Act transparency obligations on marking and labeling AI-generated or manipulated content. It focuses on machine-readable marking and clearer public labeling for professional uses.
Why start here: If you are doing commercial distribution in Europe, transparency and labeling are not optional clean-up tasks; they are part of delivery readiness.
Notice on the Issuance of the Measures for Labeling AI-Generated and Synthetic Content (《人工智能生成合成内容标识办法》)
China's March 14, 2025 measures formalize labeling requirements for AI-generated and synthesized content, including explicit labels, metadata treatment, and platform handling of suspected generated content.
Why start here: For teams operating in China or distributing into Chinese platforms, labeling and provenance handling need to be designed into the workflow upfront.
How these learning plans are curated
The learning hub exists to help users start, sequence, summarize, and revisit high-quality material. We curate it from external sources, product workflows, and repeated learning needs.
Hands-on workflow coverage
These pages are built from real 5tldr workflows, support questions, and product behavior instead of thin keyword templates.
Updated when the product changes
We refresh guidance when plan rules, supported sources, failure states, and learning workflows change.
Content and ad boundaries stay separate
Educational pages may carry ads. Product workflows, pricing, library, checkout, and paid-user journeys should remain ad-free.