
Runway Gen‑4 Prompting Rules You Can Copy in Veo3Gen (Motion‑First, One‑Change‑at‑a‑Time)

Copy Runway Gen‑4’s motion‑first prompting rules into a Veo3Gen workflow: separate motion layers, iterate one change at a time, and troubleshoot common failures

Why “motion-first” prompts fix the #1 beginner failure (stiff or chaotic clips)

If you’ve ever generated an image-to-video clip that feels stiff (nothing moves) or chaotic (everything moves), the root cause is often the same: the prompt is trying to describe everything at once—character identity, wardrobe, setting, story, style, camera, VFX—without clearly telling the model what should move.

Runway’s Gen‑4 guidance is refreshingly practical here: use a high-quality input image to establish the look, and then use the text prompt primarily to describe motion. (https://help.runwayml.com/hc/en-us/articles/39789879462419-Gen-4-Video-Prompting-Guide) That mindset ports cleanly into Veo3Gen workflows: treat your reference image as the “art direction,” and treat your prompt as the “shot direction.”

Two more rules from the same Gen‑4 guide matter for troubleshooting: iterate by adding one new element at a time so you can tell which change caused what, and phrase constraints positively, since negative phrasing isn't supported.

This post turns those ideas into a copyable Veo3Gen routine: motion-first + three motion layers + one-change iteration loop.

The 3 motion layers to separate: subject vs camera vs scene

Runway’s Gen‑4 guide explicitly calls out refining prompts by adding elements like subject motion, camera motion, and scene motion. (https://help.runwayml.com/hc/en-us/articles/39789879462419-Gen-4-Video-Prompting-Guide) In practice, you’ll get more control if you treat these as three separate “knobs” and adjust them one at a time.

1) Subject motion (what the main subject does)

Definition: Movement of the person/object you want the viewer to focus on.

Good use cases: product ads, portrait clips, app hero loops—anywhere you want the subject stable and readable.

Examples you can copy:

  • “The subject slowly turns their head toward camera and blinks once.”
  • “The subject’s hands gently rotate the product 15 degrees, then hold.”
  • “The subject takes one step forward and stops, fabric subtly swaying.”

Tip: Runway recommends referring to subjects in general terms like “the subject.” That can reduce confusion when you’re not trying to cast a named character. (https://help.runwayml.com/hc/en-us/articles/39789879462419-Gen-4-Video-Prompting-Guide)

2) Camera motion (how the viewpoint moves)

Definition: Movement of the camera itself—dolly, pan, tilt, handheld drift, push-in/pull-out.

Good use cases: adding energy to static scenes; guiding attention to a logo, feature, or reveal.

Examples you can copy:

  • “Slow dolly-in, steady and smooth, ending in a medium close-up.”
  • “Gentle handheld camera sway, subtle micro-movements.”
  • “Slow pan left to reveal the subject in frame.”

Watch-out: If you don’t explicitly ask for a stable camera, many models may invent movement. Motion-first prompting helps you decide whether that’s a feature or a bug.

3) Scene motion (what moves in the environment)

Definition: Movement in the background/setting: weather, crowds, screens, reflections, traffic, foliage.

Good use cases: making the world feel alive while keeping the subject readable.

Examples you can copy:

  • “Light rain falling diagonally, puddle ripples spreading outward.”
  • “Neon sign flickers softly; distant traffic bokeh drifts.”
  • “Fog rolls slowly across the street; background pedestrians pass by.”

Creative note: A helpful framing from Focal’s Gen‑4 tips is to describe visual moments rather than abstract concepts, and avoid overstuffing prompts—stick to one strong visual idea. (https://focalml.com/blog/exploring-runway-gen-4-tips-for-crafting-professional-grade-videos/)
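The three layers above are easier to keep separate if you treat them as distinct fields rather than one blob of text. A minimal Python sketch (illustrative structure only, not part of any Veo3Gen API):

```python
from dataclasses import dataclass

@dataclass
class MotionPrompt:
    """One prompt = up to three independent motion layers."""
    subject: str = ""   # what the main subject does
    camera: str = ""    # how the viewpoint moves
    scene: str = ""     # what moves in the environment

    def compose(self) -> str:
        # Join only the layers you have filled in, camera first so the
        # shot direction reads up front.
        layers = [self.camera, self.subject, self.scene]
        return " ".join(layer for layer in layers if layer)

p = MotionPrompt(
    camera="Locked-off camera.",
    subject="The subject slowly turns their head toward camera and blinks once.",
)
print(p.compose())
```

Keeping each layer in its own field makes the One-Change Loop below mechanical: you edit exactly one field per iteration.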

The One-Change Loop: iterate without breaking what already works

Runway’s Gen‑4 guide recommends starting simple and iterating, and explicitly notes that adding one new element at a time helps identify which change improved results or triggered unexpected behavior. (https://help.runwayml.com/hc/en-us/articles/39789879462419-Gen-4-Video-Prompting-Guide)

Here’s the Veo3Gen-friendly workflow:

Baseline → add one motion layer → lock it → add the next

  1. Baseline render (no motion words beyond the minimum).

  2. Add ONE motion layer (pick subject OR camera OR scene).

    • Example: only subject motion (“the subject smiles subtly”), keep camera “locked-off,” keep scene quiet.
  3. Lock what worked.

    • If the subject motion looks right, keep those words stable in the next run.
  4. Add the next motion layer.

    • Example: now add camera motion (“slow dolly-in”), keep subject motion unchanged.
  5. Only then add scene motion and style polish.
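The five steps above are simple enough to script. A sketch of the loop in Python, where `render` is a hypothetical placeholder for your actual Veo3Gen image-to-video call (an assumption, since no SDK is specified here):

```python
def render(prompt: str) -> str:
    # Placeholder standing in for a real image-to-video render call.
    return f"[rendered clip for: {prompt!r}]"

def one_change_loop(baseline: str, layers: list[str]) -> list[str]:
    """Render the baseline first, then re-render adding ONE layer per pass."""
    prompt = baseline
    results = [render(prompt)]          # step 1: baseline
    for layer in layers:                # steps 2-5: one new element per run
        prompt = f"{prompt} {layer}"    # lock what worked, change one thing
        results.append(render(prompt))
    return results

runs = one_change_loop(
    "Locked-off camera. The subject is still.",
    [
        "The subject smiles subtly.",           # subject motion only
        "Slow dolly-in, steady and smooth.",    # then camera motion
        "Light rain falls in the background.",  # then scene motion
    ],
)
print(len(runs))  # baseline + 3 single-change iterations = 4
```

Because every render differs from the previous one by exactly one layer, any regression is immediately attributable.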

The warning that saves hours

Don’t change two variables at once. If you rewrite subject motion and camera motion and add weather in the same iteration, you won’t know which change caused the “warpy” result.

Also: use positive phrasing. Runway notes negative phrasing is not supported and may lead to unpredictable results. (https://help.runwayml.com/hc/en-us/articles/39789879462419-Gen-4-Video-Prompting-Guide)
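Since negative phrasing is easy to slip into out of habit, a tiny lint pass before rendering can catch it. A sketch (the word list is my own illustrative selection, not from the Gen‑4 guide):

```python
import re

# Common negative phrasings that should be rewritten as positive constraints,
# e.g. "no shaking" -> "locked-off camera", "don't move" -> "remains still".
NEGATIVE_WORDS = ["no", "not", "don't", "dont", "without", "never", "avoid"]

def find_negative_phrasing(prompt: str) -> list[str]:
    """Return the negative words found in a prompt (empty list = clean)."""
    lowered = prompt.lower()
    return [w for w in NEGATIVE_WORDS
            if re.search(rf"\b{re.escape(w)}\b", lowered)]

print(find_negative_phrasing("No camera shake, don't move the subject"))
```

If the list comes back non-empty, rewrite those constraints positively before you spend a render on them.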

Troubleshooting matrix: symptom → likely cause → prompt edit

Each row reads symptom → likely cause (in motion terms) → what to change in your next prompt:

  • Jittery “nervous” movement → too much camera motion, or unspecified camera behavior → specify “locked-off camera” or “steady, smooth dolly,” and remove handheld language
  • Faces/edges look melty during movement → subject motion too large or fast for the shot → reduce amplitude: “subtle,” “slow,” “small movement,” “gentle turn”
  • Unwanted camera moves (random pans/zooms) → camera motion not constrained → add an explicit camera direction: “static camera” or “slow dolly-in only” (positive phrasing, not “no pan”)
  • Background crawling / distractingly alive → scene motion too strong or merely implied → reduce scene motion: “background remains still,” or specify only one environmental motion element
  • Speed mismatch (everything too fast or too slow) → motion intent isn’t quantified → add tempo words: “slow,” “gradual,” “gentle,” “brief,” “then hold”

Use this table with the One-Change Loop: pick one row, make one targeted edit, rerun.
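If you run batches programmatically, the matrix is handy as a lookup kept next to your render scripts. A sketch with the rows condensed (the structure and function name are my own, purely illustrative):

```python
# Symptom -> one targeted prompt edit, condensed from the matrix above.
FIXES = {
    "jittery": 'Specify "locked-off camera" or "steady, smooth dolly"; '
               'remove handheld language.',
    "melty edges": 'Reduce amplitude: "subtle", "slow", "small movement", '
                   '"gentle turn".',
    "unwanted camera moves": 'Constrain the camera: "static camera" or '
                             '"slow dolly-in only".',
    "background crawling": 'Calm the scene: "background remains still", '
                           'or allow only one motion element.',
    "speed mismatch": 'Add tempo words: "slow", "gradual", "gentle", '
                      '"then hold".',
}

def next_edit(symptom: str) -> str:
    """Pick ONE row, make ONE edit, rerun (the One-Change Loop)."""
    return FIXES.get(symptom, "Symptom not in matrix; simplify to baseline "
                              "and rebuild one layer at a time.")

print(next_edit("jittery"))
```

The fallback branch encodes the article's general advice: when a failure doesn't match a known row, strip back to baseline rather than stacking more edits.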

Copy‑paste prompt templates (Veo3Gen‑friendly) for 6 common shots

Each template is written to keep the input image as the look and make the prompt about motion, mirroring Runway’s Gen‑4 advice. (https://help.runwayml.com/hc/en-us/articles/39789879462419-Gen-4-Video-Prompting-Guide)

1) Locked‑off portrait with subtle action

Prompt:

Locked-off camera. The subject holds eye contact, breathes subtly, blinks once, then gives a small natural smile. Background remains still.

2) Dolly‑in product hero

Prompt:

Slow, smooth dolly-in toward the product, steady camera. The subject remains centered and stable. Soft depth of field increases slightly as the camera moves in.

(Focal’s Gen‑4 tips list “soft depth of field” among the texture cues that add richness. (https://focalml.com/blog/exploring-runway-gen-4-tips-for-crafting-professional-grade-videos/))

3) Handheld street scene (controlled)

Prompt:

Gentle handheld camera movement with subtle micro-shake, natural and steady. The subject walks forward slowly through the frame. Background pedestrians pass by smoothly.

4) Pan reveal

Prompt:

Slow pan right to reveal the subject. The camera movement is smooth and continuous. The subject stays still until fully revealed, then turns slightly toward camera.

5) Rack focus (focus change as the motion)

Prompt:

Static camera. Start focused on the foreground object, then rack focus slowly to the subject in the background. The subject remains still while focus transitions.

6) Environment motion (rain/fog/traffic) while subject stays stable

Prompt:

Locked-off camera. The subject remains still and sharp. Light rain falls steadily and puddles ripple; distant traffic bokeh drifts in the background.
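To reuse these templates across a batch, it can help to parameterize only the noun while keeping the motion wording locked. A sketch (the helper and template names are my own, purely illustrative; not part of any Veo3Gen SDK):

```python
# Two of the templates above with the subject left as a placeholder,
# so the motion language stays identical between runs.
TEMPLATES = {
    "locked_portrait": (
        "Locked-off camera. The {subject} holds eye contact, breathes subtly, "
        "blinks once, then gives a small natural smile. Background remains still."
    ),
    "dolly_product": (
        "Slow, smooth dolly-in toward the {subject}, steady camera. "
        "The {subject} remains centered and stable."
    ),
}

def fill(name: str, subject: str = "subject") -> str:
    """Fill a template with a concrete subject/product noun."""
    return TEMPLATES[name].format(subject=subject)

print(fill("dolly_product", "perfume bottle"))
```

Swapping only the noun between runs is the One-Change Loop applied to casting: if a clip degrades, you know the subject swap (not the motion wording) caused it.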

When to lean on the input image vs the text prompt (and why)

A reliable mental model—based on Runway’s Gen‑4 guidance—is: let the input image carry the look (identity, wardrobe, setting, style) and let the text prompt carry the motion.

If you find yourself re-describing how the subject looks (“blue jacket, same face, same product label…”) every iteration, improve the input image instead: changing lots of appearance descriptors can unintentionally change motion behavior (and vice versa).

Quick checklist before you render your next batch

  • The input image carries the look; the prompt describes motion only.
  • Exactly one motion layer changed since the last run.
  • Camera behavior is stated explicitly, even if it’s just “locked-off camera.”
  • Every constraint is phrased positively (no “don’t,” no “no …”).
  • Tempo words (“slow,” “gentle,” “then hold”) appear wherever speed matters.

FAQ

How long should my prompt be for motion-first video?

Short enough that motion is obvious at a glance. Runway’s Gen‑4 guide emphasizes simplicity and starting with a simple prompt, then iterating. (https://help.runwayml.com/hc/en-us/articles/39789879462419-Gen-4-Video-Prompting-Guide)

Is it okay to use negative prompts like “no shaking” or “don’t move”?

Runway notes negative phrasing isn’t supported and may lead to unpredictable results, so prefer positive constraints like “locked-off camera” or “steady, smooth motion.” (https://help.runwayml.com/hc/en-us/articles/39789879462419-Gen-4-Video-Prompting-Guide)

What’s the fastest way to stop randomness?

Remove extra motion layers and rebuild with the One-Change Loop—Runway explicitly recommends adding one new element at a time to troubleshoot unexpected results. (https://help.runwayml.com/hc/en-us/articles/39789879462419-Gen-4-Video-Prompting-Guide)

Should I describe style (film grain, lighting) in the motion prompt?

Add style after motion is working. Lighting cues can create mood, and texture cues like film grain/soft depth of field can add realism, but keep it secondary to motion. (https://focalml.com/blog/exploring-runway-gen-4-tips-for-crafting-professional-grade-videos/)

CTA: Put motion-first prompting on autopilot in Veo3Gen

If you’re building a repeatable pipeline for ads, socials, or landing-page loops, the easiest win is operational: standardize your prompt templates and iterate systematically.

  • Explore the developer workflow in the Veo3Gen API
  • Estimate costs and scale tests with pricing

As of 2026‑01‑31, the motion-first method above remains tool-agnostic: separate motion into layers, iterate one change at a time, and let the input image carry the look.

Sources

  • Runway Gen‑4 Video Prompting Guide: https://help.runwayml.com/hc/en-us/articles/39789879462419-Gen-4-Video-Prompting-Guide
  • Focal, Exploring Runway Gen‑4: Tips for Crafting Professional‑Grade Videos: https://focalml.com/blog/exploring-runway-gen-4-tips-for-crafting-professional-grade-videos/
