Scenes are where many bots become fragile. The problem is usually not the concept itself: it is state management and unclear transitions. A scene that works perfectly in isolation starts misbehaving when users skip steps, go back unexpectedly, or send a command mid-flow.
This guide covers the patterns that hold up in production: how to keep state lean, how to make transitions explicit, how to handle recovery, and how to test scene flows before deploying.
What a scene actually is
A scene is a scoped context for handling a multi-step conversation flow. When a user enters a scene, subsequent messages from that user are routed to the scene handlers instead of the global bot handlers. The scene has its own state object that persists across steps.
A typical scene moves through a small sequence of steps: ask for one input, store the minimum state needed for the next step, ask the next question, and then leave the scene once the flow is complete. The important part is not the exact API shape but the discipline of keeping each step explicit.
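The shape described above can be sketched in a few lines. This is a minimal, framework-agnostic model, not a specific bot library's API: the `Scene` class, `enter`, `leave`, and `handle` names are all illustrative, and a real implementation would hang off your bot framework's message router.

```javascript
// Minimal sketch of a scene: a scoped handler with per-user state
// that persists across steps. All names here are illustrative.
class Scene {
  constructor(id, steps) {
    this.id = id;
    this.steps = steps;        // array of step handlers
    this.sessions = new Map(); // userId -> scene state
  }
  enter(userId) {
    this.sessions.set(userId, { step: 0 });
  }
  leave(userId) {
    this.sessions.delete(userId); // clear state explicitly
  }
  handle(userId, text) {
    const state = this.sessions.get(userId);
    if (!state) return null; // user is not in this scene
    const reply = this.steps[state.step](state, text);
    if (state.step >= this.steps.length) this.leave(userId);
    return reply;
  }
}

// Two-step flow: ask for a name, then an email, then leave.
const signup = new Scene('signup', [
  (state, text) => { state.name = text; state.step = 1; return 'Your email?'; },
  (state, text) => { state.email = text; state.step = 2; return 'Done!'; },
]);

signup.enter('u1');
signup.handle('u1', 'Alice');      // -> 'Your email?'
signup.handle('u1', 'alice@x.io'); // -> 'Done!', state cleared
```

Note that each step handler returns the next prompt and advances `state.step` itself, which is exactly the explicitness the rest of this guide argues for.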
Keep scene state small
Store only the minimum information needed to move the flow forward. Large state blobs make recovery harder and increase the chance of stale assumptions after edits. A common mistake is storing full database records in scene state when only an ID is needed.
Scene state is ephemeral by design. If the bot process restarts or the user's session expires, that state is gone. Anything important should be persisted to the database as soon as it is collected, with the scene state acting only as a working scratch pad.
- Store IDs and primitive values, not full document records.
- Persist collected data to MongoDB at each step, not just at the end.
- Use a step field in state to know where the user is in the flow.
- Clear state explicitly when leaving a scene to avoid leaks.
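The points above fit together like this. The sketch is hedged: `db` is a stand-in for a real MongoDB collection (in practice an upsert per step), and the field names are invented for illustration. The scene state holds only a draft ID and a step number; everything collected goes to storage immediately.

```javascript
// Pretend collection: draftId -> collected fields. In production this
// would be a MongoDB upsert per step, not an in-memory Map.
const db = new Map();

function saveDraft(draftId, fields) {
  db.set(draftId, { ...(db.get(draftId) || {}), ...fields });
}

// Scene state: ids and primitives only, never full records.
const state = { step: 0, draftId: 'order-42' };

// Step 1: collect a quantity, persist it immediately, advance.
saveDraft(state.draftId, { quantity: 3 });
state.step = 1;

// Step 2: collect an address, persist, finish.
saveDraft(state.draftId, { address: '5 Main St' });
state.step = 2;

// If the process restarts here, scene state is lost but the draft
// survives: every collected field is already in the database.
```

Because each step persists as it goes, recovery after a restart only needs the draft ID, not a reconstruction of everything the user already typed.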
Prefer explicit transitions
A scene should make entry and exit conditions obvious. Avoid hidden branching that depends on unrelated handlers or implicit state mutations. If a step can route to two different next steps, that branching logic should live in the step handler itself.
Explicit branching is easier to reason about because the current step decides what happens next. If the user cancels, the scene exits. If the input is invalid, the same step repeats with guidance. If the input is valid, the scene moves forward. That clarity is what makes the flow testable.
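A step handler with all three outcomes in one place might look like the sketch below. The `outcome` strings and the step's field names are illustrative; the point is that cancel, repeat, and advance are all decided inside the same function.

```javascript
// One step, three explicit outcomes: leave, repeat, or advance.
function amountStep(state, text) {
  if (text === '/cancel') {
    return { outcome: 'leave', reply: 'Cancelled.' };
  }
  const amount = Number(text);
  if (!Number.isInteger(amount) || amount <= 0) {
    // Invalid input: repeat the same step with guidance.
    return { outcome: 'repeat', reply: 'Please send a whole number above 0.' };
  }
  // Valid input: store the minimum and move forward.
  state.amount = amount;
  state.step += 1;
  return { outcome: 'next', reply: 'Got it. Delivery address?' };
}

const state = { step: 1 };
amountStep(state, 'abc'); // -> outcome 'repeat', step unchanged
amountStep(state, '12');  // -> outcome 'next', state.amount === 12
```

Because the handler returns a plain outcome object, a test can call it directly with each kind of input and assert on the result without a running bot.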
Handling commands mid-scene
Users will send /start or /cancel mid-flow. Without handling this, the command text is treated as regular input to the current step. A safer approach is a scene-level command listener that exits cleanly and resets expectations for the user.
That listener usually handles cancellation and restart explicitly: acknowledge the interruption, leave the current scene, and only then re-enter the flow if the user is truly starting over. Small details like that reduce the chance of state confusion.
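A sketch of that listener, again with illustrative names rather than a specific library's API: commands are intercepted before they ever reach the current step handler, and /start leaves first, then re-enters with fresh state.

```javascript
// Scene-level command listener: check for commands before routing
// the message to the current step. Names are illustrative.
function handleSceneMessage(session, text) {
  if (text === '/cancel') {
    session.scene = null; // leave the scene
    session.state = {};   // clear its state to avoid leaks
    return 'Cancelled. Send /start to begin again.';
  }
  if (text === '/start') {
    session.scene = 'signup';    // re-enter from the top,
    session.state = { step: 0 }; // discarding partial answers
    return 'Starting over. What is your name?';
  }
  // Not a command: route to the current step as usual.
  return `step ${session.state.step} got: ${text}`;
}

const session = { scene: 'signup', state: { step: 1, name: 'Alice' } };
handleSceneMessage(session, '/start');
// session.state is now { step: 0 }: the old answers are gone.
```

Without this guard, the text "/cancel" would be stored as the answer to whatever question the current step asked, which is exactly the state confusion the guard prevents.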
Testing scene flows before shipping
The most reliable way to test scenes is to write a full interaction sequence and assert state at each step. NxCreator's hot-reload makes this fast: update the scene, trigger the entry command in your test bot, and walk through the flow. Keep a test Telegram account dedicated to this — do not use your own account or you will lose context constantly.
Test the happy path first, then test cancellation at each step, then test invalid input at each step. That three-pass approach covers the majority of production issues. The final thing to test is re-entry: what happens if a user enters the scene while already in it. Make sure the scene resets cleanly.
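The three-pass approach can be scripted as a plain walkthrough that asserts state after each message. The toy scene and driver below are illustrative, not a test framework; the pattern is simply "send input, assert state" repeated for each pass.

```javascript
// Toy two-step scene used as the system under test.
function makeScene() {
  const state = { step: 0 };
  return {
    state,
    send(text) {
      if (text === '/cancel') { state.step = -1; return 'cancelled'; }
      if (state.step === 0) {
        if (!text.trim()) return 'name required'; // invalid: repeat
        state.name = text; state.step = 1; return 'email?';
      }
      if (state.step === 1) {
        if (!text.includes('@')) return 'invalid email'; // repeat
        state.email = text; state.step = 2; return 'done';
      }
      return 'not in scene';
    },
  };
}

// Pass 1: happy path, asserting state after each step.
let s = makeScene();
s.send('Alice');
console.assert(s.state.step === 1);
s.send('alice@x.io');
console.assert(s.state.step === 2);

// Pass 2: cancellation works at every step.
s = makeScene();
console.assert(s.send('/cancel') === 'cancelled');

// Pass 3: invalid input repeats the step instead of advancing.
s = makeScene();
console.assert(s.send('') === 'name required' && s.state.step === 0);
s.send('Alice');
console.assert(s.send('no-at-sign') === 'invalid email' && s.state.step === 1);
```

The same script doubles as documentation of the flow: anyone reading it can see every prompt, every branch, and the state expected at each point.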
A scene that handles cancellation at every step and validates input at every step is production-ready. A scene that only handles the happy path isn't.
