# Examples

## Common Workflow Examples

Try some of these ready-to-use TUI configurations for common workflows like test runs, production inference, prompt files, and more.

### Minimal: Test Model

Use a small test model to verify REE is working.

{% hint style="info" %}
Note that test models have random weights and will produce nonsensical output. This is expected.
{% endhint %}

In the TUI, fill in the following parameters:

* **Model Name:** `hf-internal-testing/tiny-random-LlamaForCausalLM`
* **Prompt Text:** `Hello world`
* **Max New Tokens:** `24`
* Press `r` to run.

<figure><img src="https://content.gitbook.com/content/jHECdpSAZDuPfU2oZmM2/blobs/w6HvHXK2i83qu4ipUAYU/run_example.png" alt=""><figcaption></figcaption></figure>

### Production: Reproducible Inference with a Real Model

* **Model Name:** `Qwen/Qwen3-0.6B`
* **Prompt Text:** `Explain quantum entanglement in simple terms.`
* **Max New Tokens:** `256`
* **Extra Args:** `--operation-set reproducible --temperature 0.7 --top-p 0.9`
* Press `r` to run.

<figure><img src="https://content.gitbook.com/content/jHECdpSAZDuPfU2oZmM2/blobs/pBjce2jdh0zupJGDvxeo/production_inference_example.png" alt=""><figcaption></figcaption></figure>
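The `--temperature` and `--top-p` flags control how the next token is sampled. As background (independent of REE's internals), here is a minimal Python sketch of temperature-scaled nucleus (top-p) sampling:

```python
import math
import random

def sample_top_p(logits, temperature=0.7, top_p=0.9, rng=random.Random(0)):
    # Temperature-scaled softmax: lower temperature sharpens the distribution
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Keep the smallest set of tokens whose cumulative probability >= top_p
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break

    # Sample from the kept tokens, renormalized over their total mass
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With `--operation-set reproducible` and a fixed seed, repeated runs of the same sampling procedure produce identical outputs, which is what makes the resulting receipt verifiable.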

### Using a Prompt File

Save your prompt to a `.jsonl` file:

```json
{"prompt": "What is 2 + 2? Show your reasoning step by step."}
```
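One way to create this file is a short Python script (the filename and prompt below are just examples):

```python
import json

# Each line of a JSONL file is one standalone JSON object.
prompts = [
    {"prompt": "What is 2 + 2? Show your reasoning step by step."},
]

with open("prompt.jsonl", "w") as f:
    for record in prompts:
        f.write(json.dumps(record) + "\n")
```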

In the TUI:

* **Model Name:** `Qwen/Qwen3-0.6B`
* **Prompt Text:** *(leave blank)*
* **Prompt File:** `/path/to/your/prompt.jsonl`
* **Max New Tokens:** `128`
* **Extra Args:**

```bash
--operation-set reproducible
```

* Press `r` to run.

<figure><img src="https://content.gitbook.com/content/jHECdpSAZDuPfU2oZmM2/blobs/mmQFU2wZwidZrt5DeQYm/reproducible_with_prompt_file_example.png" alt=""><figcaption></figcaption></figure>

### Short-Circuiting (Reasoning Models)

Short-circuiting forces the model to exit a generation phase early by injecting a specific token at a given step. This is useful for reasoning models (e.g., Qwen3) that have thinking/end-thinking phases, where you want to limit the token budget spent on "thinking."

Both `--short-circuit-length` and `--short-circuit-token` must be provided together in **Extra Args**.
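Conceptually, the mechanism looks like this toy Python sketch (illustrative only, not REE's implementation; the `next_token` stand-in is an assumption):

```python
SHORT_CIRCUIT_LENGTH = 100    # step at which to inject (--short-circuit-length)
SHORT_CIRCUIT_TOKEN = 151668  # token id to inject (--short-circuit-token)

def next_token(step):
    """Stand-in for a real decode step; returns a placeholder token id."""
    return step

def generate(max_new_tokens):
    tokens = []
    for step in range(max_new_tokens):
        if step == SHORT_CIRCUIT_LENGTH:
            # Force the chosen token instead of the model's pick, pushing
            # generation out of the "thinking" phase at this step.
            tokens.append(SHORT_CIRCUIT_TOKEN)
        else:
            tokens.append(next_token(step))
    return tokens

tokens = generate(300)
```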

* **Model Name:** `Qwen/Qwen3-14B`
* **Prompt Text:** `Solve this math problem.`
* **Max New Tokens:** `300`
* **Extra Args:**

```bash
--operation-set reproducible --short-circuit-length 100 --short-circuit-token 151668
```

* Press `r` to run.

<figure><img src="https://content.gitbook.com/content/jHECdpSAZDuPfU2oZmM2/blobs/fXKLwUikBHXPDm8U6Mww/short_circuiting_example.png" alt=""><figcaption></figcaption></figure>

### Validating a Receipt

Validation checks that a receipt is internally consistent and that its hashes have not been tampered with or corrupted, without re-running any computation.

After a successful run, switch the TUI to validate mode:

* **Subcommand:** `validate`
* **Receipt Path:** Paste the path to your receipt JSON file (e.g., `~/.cache/gensyn/Qwen--Qwen3-0.6B/.../metadata/receipt_20260311_155048.json`)
* Press `r` to run.

<figure><img src="https://content.gitbook.com/content/jHECdpSAZDuPfU2oZmM2/blobs/8RNvDWhKSYakEeZyjHag/validate_demo.png" alt=""><figcaption></figcaption></figure>

### Verifying a Receipt

Verification re-runs the entire inference pipeline and compares the results with the receipt to ensure reproducibility.

In the TUI:

* **Subcommand:** `verify`
* **Receipt Path:** Paste the path to the receipt JSON file
* Press `r` to run.

REE will re-execute the computation and compare the output against the receipt. This is slower than `validate` since it runs the full pipeline, but it's the strongest proof that the result is reproducible.
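As a rough mental model (the `run_pipeline` stand-in and receipt fields here are hypothetical), verification recomputes the output and compares it to the record:

```python
def run_pipeline(prompt):
    # Stand-in for re-executing the full deterministic inference pipeline
    return f"completion for: {prompt}"

receipt = {"prompt": "Hello world", "output": "completion for: Hello world"}

def verify(receipt):
    # Strongest check: recompute the result and compare it to the receipt
    return run_pipeline(receipt["prompt"]) == receipt["output"]
```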

<figure><img src="https://content.gitbook.com/content/jHECdpSAZDuPfU2oZmM2/blobs/Uau3dNioxzif1aMvQqRT/verify_demo.png" alt=""><figcaption></figcaption></figure>

{% hint style="success" %}
Use `validate` for a quick integrity check or use `verify` when you need definitive proof.
{% endhint %}
