ScenarioExpander
2025-10-09
📘 Overview
The ScenarioExpander operator generates new or alternative scenarios from original input scenarios. It uses a large language model (LLM) serving backend to rewrite or reimagine the original content, producing different versions or expansions of the initial situation.
init
```python
def __init__(self, llm_serving: LLMServingABC)
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| llm_serving | LLMServingABC | Required | LLM serving object implementing the LLMServingABC interface. |
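A minimal instantiation sketch. It reuses the APILLMServing_request backend from the Example Usage section below; the endpoint URL is a placeholder, and any backend implementing LLMServingABC can be passed instead.

```python
from dataflow.operators.conversations import ScenarioExpandGenerator
from dataflow.serving import APILLMServing_request

# Placeholder endpoint; substitute the API URL your serving backend expects.
llm_serving = APILLMServing_request(
    api_url="",
    model_name="gpt-4o",
    max_workers=30,
)

expander = ScenarioExpandGenerator(llm_serving=llm_serving)
```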
run
```python
def run(self, storage: DataFlowStorage, input_scenario_key: str, output_key: str = "modified_scenario")
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| storage | DataFlowStorage | Required | DataFlow storage instance, responsible for reading and writing data. |
| input_scenario_key | str | Required | Field name for the original scenario in the input data. |
| output_key | str | "modified_scenario" | Field name for the new scenario in the output data. |
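A sketch of a direct call to run, assuming the `expander` instance from the sketch above and a FileStorage configured as in the Example Usage section below:

```python
from dataflow.utils.storage import FileStorage

# Storage configured as in the Example Usage section below.
storage = FileStorage(
    first_entry_file_name="input.jsonl",
    cache_path="./cache_local",
    file_name_prefix="dataflow_cache_step",
    cache_type="jsonl",
)

# Reads the "original_scenario" field from each record and writes the
# LLM-generated variant into "modified_scenario".
expander.run(
    storage=storage.step(),
    input_scenario_key="original_scenario",
    output_key="modified_scenario",
)
```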
🧠 Example Usage
```python
from dataflow.operators.conversations import ScenarioExpandGenerator
from dataflow.utils.storage import FileStorage
from dataflow.serving import APILLMServing_request
from dataflow.core import LLMServingABC

class ScenarioExpandGeneratorExample:
    def __init__(self, llm_serving: LLMServingABC = None):
        # File-backed storage: reads input.jsonl and caches each step as JSONL.
        self.storage = FileStorage(
            first_entry_file_name="input.jsonl",
            cache_path="./cache_local",
            file_name_prefix="dataflow_cache_step",
            cache_type="jsonl",
        )
        # API-based LLM serving backend used to rewrite the scenarios.
        self.llm_serving = APILLMServing_request(
            api_url="",
            model_name="gpt-4o",
            max_workers=30,
        )
        self.generator = ScenarioExpandGenerator(
            llm_serving=self.llm_serving
        )

    def forward(self):
        # Generate a new scenario for each record and store it under output_key.
        self.generator.run(
            storage=self.storage.step(),
            input_scenario_key="original_scenario",
            output_key="modified_scenario",
        )

if __name__ == "__main__":
    pl = ScenarioExpandGeneratorExample()
    pl.forward()
```

🧾 Default Output Format
| Field | Type | Description |
|---|---|---|
| [input_scenario_key] | str | The original input scenario text. |
| modified_scenario | str | The new scenario generated by the LLM. |
Example Input:

```json
{
  "original_scenario": "A user is trying to log into their bank account but has forgotten their password."
}
```

Example Output:

```json
{
  "original_scenario": "A user is trying to log into their bank account but has forgotten their password.",
  "modified_scenario": "A traveling salesperson needs to access their corporate expense report system from a hotel with unreliable Wi-Fi, and their two-factor authentication token has just expired."
}
```
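For completeness, a small sketch of preparing the input.jsonl consumed by the example pipeline; the field name must match the value passed as input_scenario_key.

```python
import json

# One record per line; the key must match input_scenario_key ("original_scenario").
records = [
    {"original_scenario": "A user is trying to log into their bank account but has forgotten their password."},
]

with open("input.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```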
