Sora 2’s “Container vs Prompt” Rule, Applied to Veo3Gen: A Creator Checklist to Stop Ignored Duration/Resolution (as of 2026-05-13)
Stop ignored duration/resolution in AI video: apply Sora 2’s “container vs prompt” rule to Veo3Gen with templates, rewrites, and a QA checklist.
On this page
- Sora 2’s “Container vs Prompt” Rule, Applied to Veo3Gen: A Creator Checklist to Stop Ignored Duration/Resolution (as of 2026-05-13)
- What “container vs prompt” means (in plain creator terms)
- The 5 settings you should never leave in prose (and where to set them instead)
- 1) Duration → set in Duration
- 2) Resolution & aspect ratio → set in Resolution/Aspect Ratio
- 3) Model choice / quality tier → set in Model (if available)
- 4) Character / object identity → set in References/Inputs
- 5) Extending an existing clip → use Video extension / input video controls
- The 6 prompt elements that DO belong in text (and how specific to be)
- 1) Clear structure: what happens, how it looks, what we hear
- 2) Shot-by-shot specificity (like a micro storyboard)
- 3) One camera move + one subject action per shot
- 4) Filmmaking terminology for camera language
- 5) Audio requests that sync with visuals (when supported)
- 6) Constraints and “don’ts” that are truly creative
- Common failure modes: container problem or prompt problem?
- It’s probably a container problem if…
- It’s probably a prompt problem if…
- A copy‑paste “Container First” template (fill‑in brackets)
- Container (set in Veo3Gen controls)
- Prompt (paste as text)
- Rewrite table: move from prose requests → locked settings
- Mini examples (end-to-end) in the new structure
- Example 1: UGC-style ad (creator testimonial)
- Example 2: Product demo (tabletop + hero shot)
- Example 3: Cinematic shot (character continuity)
- Quick QA before you hit Generate (30-second checklist)
- FAQ
- Does Sora 2 really ignore “make it longer” style requests?
- What sizes and durations should I use?
- How do I keep a character consistent across multiple videos?
- Should I write camera directions in the prompt?
- CTA: ship fewer re-renders with container-first generation
- Try Veo3Gen (Affordable Veo 3.1 Access)
Sora 2’s “Container vs Prompt” Rule, Applied to Veo3Gen: A Creator Checklist to Stop Ignored Duration/Resolution (as of 2026-05-13)
If you’ve ever typed “make it 15 seconds, vertical, 4K” into an AI video prompt… and then watched the model ignore half of it, you’ve hit the core “container vs prompt” problem.
OpenAI’s official Sora 2 guidance draws a sharp line: some attributes are governed only by API/controls and can’t be reliably requested in prose. In other words, these are container settings, not prompt content. The guide explicitly notes that some attributes are controlled only by parameters (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide).
This post translates that rule into a Veo3Gen workflow you can use today (as of 2026-05-13): a mapping of what belongs in Duration / Resolution & Aspect Ratio / References & Inputs versus what belongs in your text prompt, plus a copy‑paste template, rewrite table, and a fast preflight checklist.
What “container vs prompt” means (in plain creator terms)
Think of your generation request as two layers:
- Container (settings): the “box” the video must fit into—duration, size/aspect, and which reference assets the model is allowed to use.
- Prompt (prose): what happens inside the box—story beats, composition, lighting, camera move, sound, and pacing.
OpenAI’s Sora 2 guide calls out that some attributes are governed only by parameters and cannot be requested in prose (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). That’s the heart of the rule: if you keep asking for container changes in the prompt text, you’re increasing the chance of wasted iterations.
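In code terms, the two layers look like this. A minimal sketch with illustrative field names (`duration_seconds`, `size`, and `references` are assumptions for the example, not Veo3Gen's documented schema):

```python
# Two-layer request: the container is structured data the generator must obey;
# the prompt is free-form prose the model interprets.
container = {
    "duration_seconds": 8,           # set here, never in prose
    "size": "1080x1920",             # vertical 9:16, locked as a parameter
    "references": ["mascot_v2.png"], # identity comes from attached assets
}

prompt = (
    "Shot 1: close-up, slow dolly-in. The mascot waves once. "
    "Soft window light, warm palette, shallow depth of field. "
    "Audio: quiet room tone, one soft chime."
)

request = {**container, "prompt": prompt}
```

Notice that the prompt string contains no duration, size, or model language at all: if a container value changes, you edit one field instead of re-wording a paragraph.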
The 5 settings you should never leave in prose (and where to set them instead)
Below is the Veo3Gen-friendly translation: set these in controls/settings, not in your prompt paragraph.
1) Duration → set in Duration
Sora 2 exposes duration as a parameter called seconds with supported values (and a default) (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). The broader lesson: duration is a container variable. If you write “make it longer” in prose, don’t be surprised when it doesn’t lock.
2) Resolution & aspect ratio → set in Resolution/Aspect Ratio
In Sora 2, video size is an API parameter called size formatted as {width}x{height} (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). The guide also lists supported export sizes, including higher-resolution exports such as 1920×1080 and 1080×1920 (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). Translation for Veo3Gen: use the Resolution/Aspect Ratio fields to lock vertical vs horizontal and your target export size.
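If you script your renders, it is safer to derive the {width}x{height} string from an aspect-ratio choice than to type it by hand each time. A small helper sketch (the supported-size table is illustrative, not an official Veo3Gen list):

```python
# Map (aspect ratio, height) pairs to export sizes in the {width}x{height}
# format the Sora 2 guide describes. Extend this table with the sizes
# your generator actually supports.
SIZES = {
    ("16:9", 1080): "1920x1080",
    ("9:16", 1080): "1080x1920",
}

def size_string(aspect: str, height: int = 1080) -> str:
    """Return the locked container size, or fail loudly on an unsupported combo."""
    try:
        return SIZES[(aspect, height)]
    except KeyError:
        raise ValueError(f"Unsupported combination: {aspect} @ {height}p")
```

Failing loudly here is the point: an unsupported size should stop the script before a render is spent, not degrade into a prose request the model may ignore.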
3) Model choice / quality tier → set in Model (if available)
Sora 2 includes a model parameter with values like sora-2 or sora-2-pro (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). Even if Veo3Gen abstracts this, the workflow principle holds: choose the tier in settings, not by writing “use the pro model” in your scene description.
4) Character / object identity → set in References/Inputs
Sora 2 supports character references—upload once and reuse for consistent appearance across videos (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). It also allows referencing up to two uploaded characters via an optional characters parameter (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). For Veo3Gen creators, the takeaway is simple: if you want “the same person/product/mascot,” attach it in References/Inputs rather than describing it from scratch every time.
5) Extending an existing clip → use Video extension / input video controls
Sora 2’s “Video extension” can extend an existing clip using the full initial clip as context, not only the last frame (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). If your goal is “continue this exact shot,” treat that as a container/input operation first; then use prose to specify how it continues.
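As a workflow sketch, an extension is an input-plus-prompt operation: the source clip rides in the container, and the prose describes only the continuation. Field names below are assumptions for illustration, not a documented schema:

```python
# Extension request sketch: the container carries the source clip;
# the prompt describes ONLY what happens next, never length or size.
extend_request = {
    "input_video": "hero_shot_v3.mp4",  # full clip as context, per the Sora 2 guide
    "duration_seconds": 4,              # how much NEW footage to add (assumed field)
    "prompt": (
        "The camera continues the slow dolly-in as the character "
        "turns toward the window."
    ),
}
```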
The 6 prompt elements that DO belong in text (and how specific to be)
Once your container is locked, your prompt becomes much easier to control—and easier to debug.
1) Clear structure: what happens, how it looks, what we hear
A practical prompt pattern is to split your text into sections: action, visual style, and audio. Wavespeed reports Sora 2 responds best to well-organized prompts with clear sections for what happens, how it looks, and what we hear (https://wavespeed.ai/blog/posts/sora-2-prompting-tips-better-videos-2026/).
2) Shot-by-shot specificity (like a micro storyboard)
For consistency, describe each shot with framing, depth of field, lighting, palette, and action—like a storyboard (https://higgsfield.ai/sora-2-prompt-guide). You don’t need a novel; you need unambiguous production notes.
3) One camera move + one subject action per shot
Higgsfield recommends defining one camera move and one subject action per shot for smoother, more predictable motion (https://higgsfield.ai/sora-2-prompt-guide). This reduces “busy” generations where the model tries to do everything at once.
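One way to enforce this discipline is to build the shot list from structured data, so each shot can only carry one move and one action by construction. A sketch (the `Shot` class and its fields are my own framing of the advice, not a tool API):

```python
from dataclasses import dataclass

@dataclass
class Shot:
    framing: str      # e.g. "medium close-up"
    camera_move: str  # exactly ONE camera move
    action: str       # exactly ONE subject action
    look: str         # lighting, palette, depth of field

    def to_prompt(self) -> str:
        return f"{self.framing}, {self.camera_move}. {self.action}. {self.look}."

shots = [
    Shot("Medium close-up", "slow dolly-in",
         "Subject lifts the mug and smiles", "Soft window light, warm palette"),
    Shot("Overhead wide", "locked-off (no move)",
         "A hand places the product on the mat", "Neutral softbox, minimal shadows"),
]

# One line per shot, numbered like a micro storyboard.
prompt = "\n".join(f"Shot {i + 1}: {s.to_prompt()}" for i, s in enumerate(shots))
```

If you feel the urge to cram a second move into `camera_move`, that is the signal to add another `Shot` instead.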
4) Filmmaking terminology for camera language
Wavespeed notes Sora 2 has strong cinematography literacy and suggests using filmmaking terminology to control how scenes unfold (https://wavespeed.ai/blog/posts/sora-2-prompting-tips-better-videos-2026/). In practice: “close-up,” “over-the-shoulder,” “slow dolly-in,” “rack focus,” “handheld.”
5) Audio requests that sync with visuals (when supported)
Wavespeed states Sora 2 generates audio natively and recommends requesting sound elements that match the visuals (https://wavespeed.ai/blog/posts/sora-2-prompting-tips-better-videos-2026/). If you’re using audio-capable pipelines, text is the right place to specify “room tone,” “footsteps,” “product click,” or “soft music bed”—without trying to use audio as a proxy for duration.
6) Constraints and “don’ts” that are truly creative
Use prose constraints for creative boundaries (e.g., “no text overlays,” “no brand logos,” “no jump cuts”) rather than technical settings.
Common failure modes: container problem or prompt problem?
Use this quick diagnosis before you re-render.
It’s probably a container problem if…
- Your output is the wrong length even though you wrote “make it 16 seconds.” (Duration belongs in settings; Sora 2 uses a seconds parameter with discrete values and a default (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide).)
- Your output is the wrong orientation/size even though you wrote “9:16 vertical” or “1080×1920.” (Size is a parameter formatted as {width}x{height} (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide).)
- Your “same character” looks different each render because you never attached a reference. (Character references are designed for consistent appearance across videos (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide).)
It’s probably a prompt problem if…
- The duration and aspect are correct, but the story beats are off.
- The model keeps changing framing because you didn’t specify a clear shot plan (storyboard-style detail improves consistency (https://higgsfield.ai/sora-2-prompt-guide)).
- Motion feels chaotic because you asked for multiple camera moves and multiple actions at once (one move + one action per shot helps predictability (https://higgsfield.ai/sora-2-prompt-guide)).
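You can automate this diagnosis with a small lint pass over your prompt text that flags container language hiding in prose. The patterns below are illustrative starters; extend them to match your own habits:

```python
import re

# Phrases that signal a container setting has snuck into the prompt text.
CONTAINER_PATTERNS = {
    "duration": re.compile(r"\b(\d+\s*(seconds|secs)\b|make it longer)", re.I),
    "size":     re.compile(
        r"\b(\d{3,4}\s*[x×]\s*\d{3,4}|\d+:\d+|vertical|horizontal|4k)\b", re.I),
    "model":    re.compile(r"\b(pro model|best model|quality tier)\b", re.I),
}

def lint_prompt(prompt: str) -> list[str]:
    """Return the container attributes that should move into settings."""
    return [name for name, pat in CONTAINER_PATTERNS.items() if pat.search(prompt)]
```

Running `lint_prompt("Make it 16 seconds, vertical 9:16, cinematic lighting.")` flags `duration` and `size`, while a clean creative prompt like "Close-up, slow dolly-in, soft light." passes untouched.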
A copy‑paste “Container First” template (fill‑in brackets)
Use this structure in Veo3Gen: set the container in fields first, then paste the prompt.
Container (set in Veo3Gen controls)
- Duration: [4/8/12/16/20 seconds or your available options]
- Resolution/Aspect Ratio: [{width}x{height}] (lock vertical/horizontal here) (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide)
- References/Inputs: [character/product reference image(s) or clip(s)] (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide)
Prompt (paste as text)
SHOT LIST (keep it simple):
- Shot 1 (framing + camera move): [e.g., close-up, slow dolly-in]
  - Action: [one subject action]
  - Look: [lighting, palette, DOF] (https://higgsfield.ai/sora-2-prompt-guide)
- Shot 2: [repeat]
STYLE NOTES: [genre, cinematography terms, lens vibe, texture] (https://wavespeed.ai/blog/posts/sora-2-prompting-tips-better-videos-2026/)
AUDIO (if applicable): [sound effects + ambience + music style] (https://wavespeed.ai/blog/posts/sora-2-prompting-tips-better-videos-2026/)
NEGATIVES / CONSTRAINTS: [no text overlays, no extra characters, no logos]
Rewrite table: move from prose requests → locked settings
| What creators often write in the prompt | Why it fails | Put it here in Veo3Gen instead |
|---|---|---|
| “Make it 20 seconds.” | Duration is a parameter, not a story detail (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). | Duration → set target seconds |
| “Make it longer.” | Prose isn’t a reliable way to change container attributes (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). | Duration → set target seconds |
| “Vertical 9:16.” | Orientation is governed by size/aspect (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). | Resolution/Aspect Ratio → set {width}x{height} |
| “1920x1080.” | Size is an API-style value, best locked as settings (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). | Resolution/Aspect Ratio → 1920×1080 |
| “1080x1920.” | Same as above; lock vertical explicitly (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). | Resolution/Aspect Ratio → 1080×1920 |
| “Use my character from last time.” | Consistency needs an attached reference (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). | References/Inputs → upload/select character |
| “Keep the same mascot.” | Character references are the mechanism for this (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). | References/Inputs → attach mascot ref |
| “Extend this clip by a few seconds.” | Use video extension / input video features (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). | References/Inputs → provide source video, use extend |
| “Make it pro quality / best model.” | Model choice is a parameter (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). | Model (if exposed) → choose tier |
| “Include sound: footsteps + city ambience.” | Audio is a creative request; it belongs in text (https://wavespeed.ai/blog/posts/sora-2-prompting-tips-better-videos-2026/). | Prompt → Audio section |
| “Slow push-in, then whip pan, then crane up.” | Stacking camera moves makes motion unpredictable (https://higgsfield.ai/sora-2-prompt-guide). | Prompt → split into shots; one move each |
| “Cinematic lighting, teal-orange grade, shallow DOF.” | Creative look belongs in prose and benefits from specificity (https://higgsfield.ai/sora-2-prompt-guide). | Prompt → Look/Style notes |
Mini examples (end-to-end) in the new structure
Example 1: UGC-style ad (creator testimonial)
Container (settings):
- Duration: [...]
- Resolution/Aspect Ratio: 1080x1920 (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide)
- References/Inputs: attach product image + (optional) character reference (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide)
Prompt (text):
- Shot 1: handheld medium close-up, slight natural sway. Subject holds the product and smiles. Soft window light, warm tones, shallow depth of field. (https://higgsfield.ai/sora-2-prompt-guide)
- Shot 2: cut to close-up of hands using the product; one smooth push-in. Clean highlights, realistic texture.
- Audio: quiet room tone + subtle fabric movement + a small “click” when the product is used. (https://wavespeed.ai/blog/posts/sora-2-prompting-tips-better-videos-2026/)
- Constraints: no on-screen text, no logos in background.
Example 2: Product demo (tabletop + hero shot)
Container (settings):
- Duration: [...]
- Resolution/Aspect Ratio: 1920x1080 (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide)
- References/Inputs: product reference image(s)
Prompt (text):
- Shot 1: locked-off overhead shot. A hand places the product on a clean desk mat; one clear action. Neutral softbox lighting, minimal shadows. (https://higgsfield.ai/sora-2-prompt-guide)
- Shot 2: 3/4 hero angle, slow dolly-in. Product rotates slightly; background falls into smooth bokeh.
- Audio: light “tap” on placement, subtle whoosh during hero move. (https://wavespeed.ai/blog/posts/sora-2-prompting-tips-better-videos-2026/)
Example 3: Cinematic shot (character continuity)
Container (settings):
- Duration: [...]
- Resolution/Aspect Ratio: [...]
- References/Inputs: attach character reference (consistent appearance) (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide)
Prompt (text):
- Shot 1: wide establishing shot, slow crane down. The referenced character stands under a streetlamp in light rain; one action: they look up as thunder rolls. Cool palette, high contrast, wet reflections. (https://higgsfield.ai/sora-2-prompt-guide)
- Audio: distant thunder + rain patter + subtle city hum, synced to the lightning moment. (https://wavespeed.ai/blog/posts/sora-2-prompting-tips-better-videos-2026/)
Quick QA before you hit Generate (30-second checklist)
- Duration is set in Duration (not requested in prose) (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide)
- Orientation and size are set in Resolution/Aspect Ratio as {width}x{height} (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide)
- Any “same person/product” requirement is attached in References/Inputs (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide)
- Prompt is structured (what happens / how it looks / what we hear) (https://wavespeed.ai/blog/posts/sora-2-prompting-tips-better-videos-2026/)
- Each shot has one camera move + one subject action (https://higgsfield.ai/sora-2-prompt-guide)
FAQ
Does Sora 2 really ignore “make it longer” style requests?
OpenAI’s Sora 2 guide says some attributes are controlled only by parameters and can’t be requested in prose (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). So the reliable fix is to set duration in the container controls.
What sizes and durations should I use?
Sora 2 exposes seconds with supported values and a default, and size as {width}x{height} (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). In Veo3Gen, pick from whatever options your generator exposes, but apply the same idea: lock them in settings.
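If your generator only accepts discrete durations (as Sora 2's seconds parameter does), snap a requested length to the nearest supported value in code rather than hoping prose rounds it for you. The supported list below is an assumption for illustration; substitute your generator's real options:

```python
# Illustrative supported durations; replace with your generator's real list.
SUPPORTED_SECONDS = [4, 8, 12]

def snap_duration(requested: float) -> int:
    """Pick the closest supported duration for the container setting."""
    return min(SUPPORTED_SECONDS, key=lambda s: abs(s - requested))

snap_duration(10)  # → 8 (closest supported value)
```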
How do I keep a character consistent across multiple videos?
Use character references: the Sora 2 guide describes uploading a character once and reusing it for consistent appearance (https://developers.openai.com/cookbook/examples/sora/sora2_prompting_guide). In Veo3Gen terms, attach your character in References/Inputs.
Should I write camera directions in the prompt?
Yes—creative camera language belongs in text. Wavespeed notes Sora 2 understands cinematography terminology (https://wavespeed.ai/blog/posts/sora-2-prompting-tips-better-videos-2026/), and Higgsfield recommends storyboard-like shot detail for consistency (https://higgsfield.ai/sora-2-prompt-guide).
CTA: ship fewer re-renders with container-first generation
If you’re building a workflow where duration, resolution/aspect, and references are set programmatically (instead of buried in prompt prose), Veo3Gen’s API route makes that pattern straightforward.
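As a sketch of that pattern, a programmatic request assembles the container fields and the prompt into one body before anything is sent. The endpoint path and field names below are hypothetical, not Veo3Gen's documented API:

```python
import json

# Hypothetical container-first request body; field names and the endpoint
# in the comment are illustrative, not a documented schema.
body = {
    "duration_seconds": 8,
    "size": "1080x1920",
    "references": ["character_ref.png"],
    "prompt": (
        "Shot 1: medium close-up, slow dolly-in. "
        "Subject waves once. Warm light."
    ),
}

payload = json.dumps(body)
# requests.post("https://example.invalid/api/generate", data=payload)  # hypothetical call
```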
- Explore the API: /api
- Review plans and costs: /pricing
Lock the container first, then iterate on creative—so each render is actually testing what you intended.
Try Veo3Gen (Affordable Veo 3.1 Access)
If you want to turn these tips into real clips today, try Veo3Gen:
Try Veo 3 & Veo 3 API for Free
Experience cinematic AI video generation at the industry's lowest price point. No credit card required to start.