AI video knowledge hub

AI Video Generation Learning Plans

Use a structured 8-week stack to learn model selection, cinematic prompting, consistency workflows, and commercial delivery. Every track is built around verified 2025-2026 sources and concrete output, not passive reading.

4 focused tracks
Official 2025-2026 sources
Prompt + workflow drills
Commercial/legal readiness

Choose Your AI Video Track

Start with the track that matches your current bottleneck. The resource set below was updated March 13, 2026.

How to use this hub with 5tldr

The point is not to hoard links. The point is to turn good sources into reusable knowledge.

1. Open the source

Read the original article first so you understand the author’s framing.

2. Summarize it

Paste the article into 5tldr and generate key points or a learning pack.

3. Save the useful ones

Keep the summaries that matter in Library so you can revisit them later.

4. Review and compare

Return to your saved notes, compare sources, and turn them into your own understanding.

AI Video Fundamentals

Learn the current AI video model landscape, the core prompt grammar behind usable generations, and the fastest path from first prompt to first finished short clip.

Owner: AI Video Pod·Audience: Zero-to-one creators, content marketers, product storytellers·Duration: 2 weeks

AI video is moving too fast for vague tool hype to be useful. You need a grounded starting point: what each model is good at, how prompt structure changes output quality, and how to iterate without wasting credits.

What you should get from this track

Compare the current strengths of Sora, Runway, Veo, Luma, and adjacent tools
Write cleaner text-to-video and image-to-video prompts
Ship a first short clip with a documented prompt workflow

2-week plan

Week 1: Benchmark the model landscape

Read the Sora, Runway, Veo, and Flow materials, then run the same 3 prompts across at least 3 tools. Revisit your Day 1 notes on Day 3 and Day 7 to compare how each model handles motion, prompt adherence, and editability.

Deliverable: A comparison sheet with prompt, output notes, and a default-tool recommendation
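
If you want the comparison sheet in a machine-readable form, here is a minimal Python sketch of one possible record shape. The field names are illustrative, not tied to any tool's API; adapt them to whatever you actually track.

```python
import csv
from dataclasses import dataclass, asdict

# Hypothetical record shape for the Week 1 comparison sheet.
@dataclass
class GenerationNote:
    tool: str          # e.g. "Sora", "Runway Gen-4", "Veo 3.1"
    prompt: str        # the exact prompt, reused verbatim across tools
    motion: str        # how the model handled subject/camera motion
    adherence: str     # how closely the output matched the prompt
    editability: str   # how easy the result was to iterate or edit

def save_sheet(notes: list[GenerationNote], path: str = "comparison.csv") -> None:
    """Write all notes to one CSV so Day 3 / Day 7 reviews compare like with like."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["tool", "prompt", "motion", "adherence", "editability"]
        )
        writer.writeheader()
        writer.writerows(asdict(n) for n in notes)
```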

Week 2: Build your first repeatable short-form workflow

Use the prompting guides to create 5 short clips with one fixed structure: subject, action, environment, style, camera, and audio intent. Reopen your best and worst prompts on Day 10 and Day 14 to rewrite them from memory before refining again.

Deliverable: A mini prompt library plus one finished 15-30 second clip assembled from your best generations
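
One way to enforce the fixed six-part structure is a small template helper. This is a sketch, not a vendor prompt schema; the slot names and example values are hypothetical.

```python
# Hypothetical template for the fixed six-part prompt structure used in Week 2.
def build_prompt(subject: str, action: str, environment: str,
                 style: str, camera: str, audio: str) -> str:
    """Join the six slots into one prompt string; every clip fills every slot."""
    return ", ".join([subject, action, environment, style, camera, audio])

prompt = build_prompt(
    subject="a ceramic coffee mug",
    action="steam rising slowly",
    environment="sunlit kitchen counter, morning",
    style="warm documentary realism",
    camera="slow push-in, eye level",
    audio="quiet room tone, soft kettle hiss",
)
```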

5tldr study workflow

For each source, read the original first, summarize it in 5tldr, save the useful result to Library, and write one short note from memory before checking the summary again.

OpenAI Help Center·Beginners learning how to prompt, iterate, and use storyboard mode without guessing

Creating videos with Sora

OpenAI's practical Sora guide covers the real workflow: generate, iterate, edit, publish, and use storyboards. The prompting section is especially useful because it emphasizes specificity, cadence, realism anchors, and limiting moving parts.

Why start here: This is one of the clearest current references for how a modern consumer AI video product wants prompts to be structured in practice, not just in theory.

Runway Help Center·Creators who already have an image or frame and need cleaner motion control

Gen-4 Video Prompting Guide

Runway explains how to build motion-first prompts for Gen-4: start simple, add one variable at a time, and separate subject motion, camera motion, scene motion, and style descriptors. It also explicitly warns against negative prompting and overloading a 5-10 second clip.

Why start here: This is one of the most concrete prompt frameworks available from a production AI video vendor, and it translates directly into better iteration discipline.

Google·Teams benchmarking current model capabilities before choosing a default stack

Veo 3.1 Ingredients to Video: More consistency, creativity and control

Google's January 13, 2026 update focuses on character identity consistency, background and object consistency, native 9:16 outputs, and 1080p/4K upscaling. It frames Veo 3.1 as both a mobile-first and production-ready workflow tool.

Why start here: This is a current baseline for what a serious 2026 model should offer beyond pure novelty: consistency, vertical output, and production fidelity.

Google·Creators comparing standalone generators with more orchestrated production tools

Introducing Flow: Google’s AI filmmaking tool designed for Veo

Google positions Flow as a filmmaking surface rather than a single-generation tool. The post highlights camera controls, Scenebuilder, asset management, and consistent reuse of characters and scenes across clips.

Why start here: Useful for understanding how the market is shifting from isolated generations toward end-to-end filmmaking workspaces.

Cinematography & Visual Language for AI Video

Learn the visual grammar behind stronger AI video prompts: shot size, camera angle, camera movement, composition, and lighting choices that change the emotional read of a scene.

Owner: AI Video Pod·Audience: Creators who want their AI videos to look directed, not random·Duration: 2 weeks

Most weak AI video prompts fail before the model generates anything. The user never decided what the shot should feel like, how the camera behaves, or what lighting logic supports the mood.

What you should get from this track

Use shot size and angle as emotional control, not generic keywords
Prompt deliberate camera movement instead of random motion
Upgrade scenes through composition, white balance, and color decisions

2-week plan

Week 1: Learn the shot grammar

Study shot types, angles, and camera motion, then practice 8 framing patterns and 6 movement prompts. Revisit your Day 1 clips on Day 3 and Day 7 to label what visual choice actually changed the emotional read.

Deliverable: A personal prompt glossary with at least 14 reusable shot and motion patterns
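
A prompt glossary can be as simple as a lookup table from film terms to wording you have tested. The entries below are illustrative starters, not verified model-specific phrasing; replace them with patterns that actually worked in your clips.

```python
# Hypothetical starter entries for the Week 1 prompt glossary; the goal is
# 14+ reusable patterns, each mapping a film term to tested prompt wording.
SHOT_GLOSSARY = {
    "wide establishing": "wide shot, subject small in frame, full environment visible",
    "close-up reaction": "close-up on the subject's face, shallow depth of field",
    "low-angle power":   "low camera angle looking up at the subject",
    "slow dolly-in":     "camera slowly pushes in toward the subject, steady pace",
    "lateral track":     "camera tracks sideways, keeping the subject centered",
    "dutch unease":      "tilted Dutch angle, slight handheld sway",
}

def lookup(term: str) -> str:
    """Return tested prompt wording for a glossary term, or flag it for drafting."""
    return SHOT_GLOSSARY.get(term, f"TODO: draft and test wording for '{term}'")
```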

Week 2: Re-light and re-compose the same scene

Take 2-3 scenes from Week 1 and regenerate them with different framing, rule-of-thirds positioning, and white-balance / lighting intent. End the week by mixing techniques instead of isolating them one by one.

Deliverable: A before/after reel showing how framing and lighting changed quality and mood
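
To keep the one-variable-at-a-time discipline honest, you can enumerate the variation grid up front and generate from it. A minimal sketch, assuming a hypothetical base scene and framing/lighting options:

```python
import itertools

# Hypothetical Week 2 variation grid: regenerate the same base scene while
# changing the framing and white-balance axes systematically.
BASE_SCENE = "a cellist rehearsing alone in a rehearsal hall"
FRAMINGS = ["subject on the left third, facing right",
            "subject centered, symmetrical composition"]
LIGHTING = ["daylight white balance, soft window light",
            "tungsten white balance, warm practical lamps"]

for framing, light in itertools.product(FRAMINGS, LIGHTING):
    print(f"{BASE_SCENE}, {framing}, {light}")
```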

5tldr study workflow

For each source, read the original first, summarize it in 5tldr, save the useful result to Library, and write one short note from memory before checking the summary again.

Adobe·Beginners building a usable shot vocabulary for prompts and shot lists

Different types of shots and camera angles in film

Adobe walks through the core shot vocabulary that filmmakers reuse constantly: wide, medium, close-up, extreme close-up, low angle, high angle, over-the-shoulder, POV, tracking, and Dutch angle. It clearly distinguishes shot size from camera angle.

Why start here: AI video gets better immediately when your prompt uses film language precisely instead of asking for something 'cinematic' in the abstract.

Adobe·Creators who want stronger framing and more consistent scene balance

How to use, and break, the rule of thirds

Adobe explains the rule of thirds as a compositional guideline rather than a rigid law and emphasizes repetition and deliberate practice. It is a useful refresher for making framing choices visible and intentional.

Why start here: Prompt-level composition control improves when you can describe where the subject sits in frame and why.

Adobe·Creators trying to control mood and color realism across scenes

Understand white balance in filmmaking

This guide explains white balance, color temperature, and the practical difference between daylight, tungsten, and mixed lighting setups. It also connects on-set choices with post-production correction and stylization.

Why start here: A lot of 'cinematic' look work is really color temperature discipline. This helps you prompt and review lighting choices more intentionally.

Luma·Intermediate users exploring more controlled motion beyond generic pan/zoom prompts

Camera Motion Concepts – Composable AI Camera Control

Luma introduces a system for learning and composing camera moves from minimal examples, with a focus on reliable, reusable motion control. The write-up is especially interesting because it contrasts concepts with heavier LoRA or fine-tuning workflows.

Why start here: This is a strong bridge between classic cinematography language and AI-native motion control, which is where a lot of current craft is heading.

Advanced AI Video Workflows & Consistency

Move from single generations to pipeline thinking: image references, scene continuity, video-to-video refinement, and multi-tool workflows that preserve identity and motion across shots.

Owner: AI Video Pod·Audience: Freelancers, video operators, and creators moving from single clips to repeatable pipelines·Duration: 2 weeks

The hard part of AI video is rarely the first good shot. It is getting the second, third, and fourth shot to belong to the same world without rebuilding everything from zero.

What you should get from this track

Use reference images to preserve subject and scene identity
Chain image, video, and edit tools instead of relying on one model
Add post-style refinement without breaking motion continuity

2-week plan

Week 1: Build a 3-shot continuity test

Create one character or product subject and carry it across 3 shots using image references or ingredient inputs. Review your continuity failures on Day 3 and Day 7, then re-run only the weakest shot instead of rebuilding the sequence.

Deliverable: A 3-shot clip with documented reference assets, prompts, and continuity notes
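
Continuity notes are easier to act on when each shot logs its inputs and failures in one place. A minimal sketch with hypothetical field names:

```python
from dataclasses import dataclass

# Hypothetical continuity log for the Week 1 three-shot test: every shot
# records which reference assets and prompt produced it, plus what drifted.
@dataclass
class Shot:
    shot_id: int
    reference_images: list[str]   # paths to the identity/scene references used
    prompt: str
    continuity_notes: str = ""    # e.g. "jacket color drifted in shot 3"

def needs_rerun(shots: list[Shot]) -> list[Shot]:
    """Return only shots with logged continuity failures, so you re-run the
    weakest shot instead of rebuilding the whole sequence."""
    return [s for s in shots if s.continuity_notes]
```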

Week 2: Add a post-production refinement layer

Take the continuity test into a second-stage workflow: modify the video, relight, restyle, or extend a shot without losing the underlying performance. Mix tools deliberately so each one has a narrow job.

Deliverable: A before/after pipeline map showing first-pass generation, refinement, and final export decisions
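
The pipeline map itself can be a short, explicit list of stages, one tool and one job per stage. The tool assignments below are illustrative examples drawn from the sources in this track, not recommendations:

```python
from dataclasses import dataclass

# Hypothetical Week 2 pipeline map: each stage owns one tool and one narrow
# job, so the handoffs between stages stay explicit.
@dataclass
class Stage:
    name: str   # "first-pass generation", "refinement", "final export"
    tool: str   # which tool owns this stage
    job: str    # the single thing this stage is allowed to change

PIPELINE = [
    Stage("first-pass generation", "Veo 3.1",           "motion and performance"),
    Stage("refinement",            "Luma Modify Video", "restyle lighting and environment"),
    Stage("final export",          "NLE of choice",     "trim, grade, and deliver 9:16"),
]

for stage in PIPELINE:
    print(f"{stage.name}: {stage.tool} -> {stage.job}")
```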

5tldr study workflow

For each source, read the original first, summarize it in 5tldr, save the useful result to Library, and write one short note from memory before checking the summary again.

Runway Help Center·Creators who need recurring characters, consistent b-roll, or stable environments

Creating with Gen-4 Image References

Runway explains how to use one or more reference images to preserve characters, scenes, styles, and objects across generations. The guide includes both simple single-reference prompting and more advanced multi-reference workflows.

Why start here: This is one of the clearest current references for character and scene consistency without building a custom model.

Runway Help Center·Operators building more reliable image-to-video or scene-edit workflows

Controlling Aleph edits with a Reference Image

This workflow combines first-frame editing with reference-guided video generation for tighter object placement and subject continuity. It is a practical example of chaining image and video tooling instead of treating them as separate worlds.

Why start here: Useful if you want a repeatable image-first-to-video pipeline with more control over where edits land in-frame.

Luma·Teams refining footage after the first generation or adapting one performance into multiple visual directions

Modify Video: Shoot Once. Shape Infinitely

Luma's Modify Video workflow focuses on preserving the original motion and performance while restyling worlds, props, lighting, and environments. It is a strong example of video-to-video as a post-production layer rather than a novelty effect.

Why start here: This is directly relevant for creators who want to separate performance capture from visual treatment and keep that distinction clean.

Google·People designing multi-shot workflows across tools, not just inside one UI

Veo 3.1 Ingredients to Video: More consistency, creativity and control

Google's Veo 3.1 update explicitly emphasizes identity consistency, object consistency, and reusable ingredient-based generation. It is a good complement to Runway's reference system because it frames consistency as a production feature, not just a model benchmark.

Why start here: Useful for comparing how different vendors now think about continuity, vertical deliverables, and higher-fidelity finishing.

Publisher standards

How these learning plans are curated

The learning hub exists to help users start, sequence, summarize, and revisit high-quality material. We curate it from external sources, product workflows, and repeated learning needs.

Hands-on workflow coverage

These pages are built from real 5tldr workflows, support questions, and product behavior instead of thin keyword templates.

Updated when the product changes

We refresh guidance when plan rules, supported sources, failure states, and learning workflows change.

Content and ad boundaries stay separate

Educational pages may carry ads. Product workflows, pricing, library, checkout, and paid-user journeys should remain ad-free.