
Add Seed-OSS architecture support (SeedOssForCausalLM)#2116

Draft
PMeeske wants to merge 1 commit into microsoft:main from PMeeske:feat/seed-oss-architecture

Conversation

@PMeeske PMeeske commented May 3, 2026

Status: Draft / Suggestion. Opening as a draft for community review and to give the maintainers a chance to weigh in on direction before this is promoted to ready-for-review. The patch is functional end-to-end and was validated against a real production model, but the upstream-fit decisions (PR target branch, CI gates, test coverage expectations) are yours.

What

Adds support for ByteDance's Seed-OSS architecture (SeedOssForCausalLM) to onnxruntime_genai.models.builder. The motivating model is NousResearch/Hermes-4.3-36B (Hermes fine-tune of Seed-OSS-36B), which is currently rejected by the builder with NotImplementedError: model is not currently supported.

Why

Seed-OSS is structurally a Llama-family decoder with two extra per-layer normalisations injected into the residual branches. Because the C++ runtime (DecoderOnly_Model) is architecture-agnostic for decoder-only models — it loads and runs whatever ONNX graph the builder produces — the only blocker is the Python-side builder router and a one-line addition to the C++ LLM-type registry. No kernel changes, no KV cache changes, no graph executor changes.

Architecture spec

Verified against the public Seed-OSS-36B config + the Hermes-4.3-36B fine-tune. Key invariants:

Input → input_layernorm → Attention → attn_post_norm ─→ + ─→
                                                         │
                                                         └── (residual)

      → post_attention_layernorm → MLP → ffn_post_norm ─→ + ─→
                                                         │
                                                         └── (residual)
| Aspect | Value |
| --- | --- |
| attention_bias | True (QKV projection biases present) |
| attention_out_bias | False (o_proj is clean) |
| Attention | GQA — num_key_value_heads != num_attention_heads (8 KV / 80 Q heads on the 36B) |
| RoPE | rope_theta = 1e7 (higher than Llama-3's 5e5) |
| Extra norms (vs Llama) | attn_post_norm after attention, ffn_post_norm after MLP — both inside the residual path |
| Activation | silu (same as Llama) |
| Normalisation | RMSNorm (same as Llama) |

The two post-norms are the only structural delta from Llama. attention_bias is auto-detected by the existing base.py logic via hasattr(attn.q_proj, 'bias'), so no manual override is required at runtime — but SeedOssModel.__init__ sets the explicit flag for clarity in case the heuristic is bypassed.
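For readers who prefer code to diagrams, the per-layer topology reduces to the following sketch (illustrative only; the callable names mirror the spec table above, not any particular HF implementation):

```python
def seed_oss_layer(x, attn, mlp, input_ln, post_attn_ln, attn_post_norm, ffn_post_norm):
    # Attention branch: the extra attn_post_norm sits inside the residual path,
    # i.e. it is applied before the residual add, not after it.
    h = attn(input_ln(x))          # QKV projections carry biases; o_proj does not
    x = x + attn_post_norm(h)
    # MLP branch: same pattern with ffn_post_norm (SiLU-gated MLP, as in Llama)
    h = mlp(post_attn_ln(x))
    return x + ffn_post_norm(h)
```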

Patch shape

| File | Change | LoC |
| --- | --- | --- |
| src/python/py/models/builders/seed_oss.py | NEW. SeedOssModel(LlamaModel) with a make_layer override that injects attn_post_norm + ffn_post_norm between attention/MLP and the residual add. Auto-detects the post-norm tensor location (HF layouts vary: self_attn.post_attn_layernorm vs layer.attn_post_norm — both supported). | 132 |
| src/python/py/models/builder.py | +1 import in the from builders import (...) block, +1 elif in create_model routing SeedOssForCausalLM to SeedOssModel. | +3 |
| src/python/py/models/builders/__init__.py | +1 line: from .seed_oss import SeedOssModel. | +1 |
| src/models/model_type.h | Add "seed_oss" to the LLM string array, alphabetically (between "qwen3" and "smollm3"); bump std::array<..., 21> to std::array<..., 22>. | +1, -1 |

Total: ~135 lines net, no C++ runtime changes beyond the type-registry string.
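For orientation, the Python-side delta condenses to roughly the following (a sketch based on the table above; module paths, the constructor signature, and the base-class hooks in builder.py are assumptions and may differ across revisions):

```python
# builders/seed_oss.py (condensed sketch)
from .llama import LlamaModel

class SeedOssModel(LlamaModel):
    def __init__(self, config, io_dtype, onnx_dtype, ep, cache_dir, extra_options):
        super().__init__(config, io_dtype, onnx_dtype, ep, cache_dir, extra_options)
        # Defensive init (see maintainer question 2 below): some base
        # initialisation paths do not populate layer_attrs.
        if not hasattr(self, "layer_attrs") or self.layer_attrs is None:
            self.layer_attrs = {}

    def make_layer(self, layer_id, layer):
        # Wire attn_post_norm / ffn_post_norm inside the residual adds,
        # auto-detecting whether the checkpoint exposes them as
        # self_attn.post_attn_layernorm or layer.attn_post_norm.
        ...

# builder.py, inside create_model (one new elif in the existing chain):
# elif config.architectures[0] == "SeedOssForCausalLM":
#     onnx_model = SeedOssModel(config, io_dtype, onnx_dtype,
#                               execution_provider, cache_dir, extra_options)
```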

Validation

End-to-end build verified against NousResearch/Hermes-4.3-36B:

python -m onnxruntime_genai.models.builder \
  -i /path/to/Hermes-4.3-36B \
  -o ./hermes-4.3-36b-onnx-int4-dml \
  -p int4 -e dml \
  --extra_options int4_block_size=32 int4_accuracy_level=4
  • All 64 decoder layers + final norm + LM head quantised cleanly (~64 INT4 MatMul nodes per layer)
  • Output graph: model.onnx (704 KB) + model.onnx.data (21 GB at INT4 + biases at f16)
  • Loads through Microsoft.ML.OnnxRuntimeGenAI.DirectML 0.13.2 C# bindings on AMD DirectML hardware (RX 9060 XT, 16 GB VRAM)
  • Produces coherent generation on chat prompts — no DmlFusedNode errors, no shape mismatches
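"Coherent generation" was checked interactively; a minimal smoke test along these lines reproduces it (API names follow recent onnxruntime_genai Python releases and may need adjusting for your version):

```python
import onnxruntime_genai as og

model = og.Model("./hermes-4.3-36b-onnx-int4-dml")  # output dir from the build above
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=128)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("Explain grouped-query attention in one sentence."))
while not generator.is_done():
    generator.generate_next_token()
print(tokenizer.decode(generator.get_sequence(0)))
```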

The same patch produces a CUDA build cleanly (-e cuda), and the 2-layer stub smoke-tests in our internal harness pass.

A note on genai_config.json model.type

After the builder writes genai_config.json, the model.type field is set to seed_oss (matching config.json's model_type). The C++ runtime recognises this once "seed_oss" is in IsLLM's array, so no manual config-rewrite is needed at inference time.

(Operators on a release that doesn't yet include this PR can work around the C++ side by setting model.type = "llama" in genai_config.json after the build — Seed-OSS is sufficiently Llama-shaped that the runtime accepts it. With this PR merged, that hack is unnecessary.)
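For anyone applying that workaround, the edit is a one-liner (assuming the standard genai_config.json layout, where the type sits under the top-level model object):

```python
import json

cfg_path = "./hermes-4.3-36b-onnx-int4-dml/genai_config.json"
with open(cfg_path) as f:
    cfg = json.load(f)
cfg["model"]["type"] = "llama"  # pre-merge workaround only; drop once seed_oss is registered
with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=4)
```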

Things I'd appreciate maintainer guidance on

  1. CI test coverage. Happy to add a 2-layer stub builder test for Seed-OSS following the pattern of existing per-arch coverage if there is one — a pointer to the right file would help.
  2. Whether you want the layer_attrs defensive-init pattern. SeedOssModel.__init__ includes if not hasattr(self, 'layer_attrs') or self.layer_attrs is None: self.layer_attrs = {} — needed because LlamaModel base initialisation paths vary across recent commits. Could be removed once base.py guarantees the attribute. Open to either approach.
  3. Documentation placement. Happy to add a short note to the per-architecture support docs once you point me at the right location.

Out of scope (deliberately)

  • Adding new test fixtures pulling Seed-OSS weights — wanted to keep this PR's scope tight.
  • Changes to the C++ decoder pipeline — none required.
  • Vision/multimodal Seed-OSS variants — none exist publicly today; if/when they ship I'd open a separate VLM PR.

Spec material in this PR description was derived from internal architectural notes assembled while validating the patch against the 36B production model. Happy to expand any section, tighten the language, or restructure to match your contribution-guide template.

ByteDance's Seed-OSS architecture (`SeedOssForCausalLM`) is structurally
a Llama-family decoder with two extra per-layer normalisations injected
into the residual branches:

  Input → input_layernorm  → Attention → attn_post_norm ─→ + ─→ (residual)
       → post_attention_layernorm → MLP → ffn_post_norm  ─→ + ─→ (residual)

It also sets `attention_bias=True` (QKV projection biases present),
`attention_out_bias=False` (o_proj clean), uses GQA, and a high
`rope_theta=1e7`. NousResearch's Hermes-4.3-36B is the most prominent
Seed-OSS-based open-weight model and was the motivating use case here.

This patch adds:
  * `builders/seed_oss.py`: `SeedOssModel` inheriting from `LlamaModel`
    with a `make_layer` override that wires the two extra layernorms
    inside the residual path. Auto-detects post-norm tensor location
    (some HF weight layouts expose it as `self_attn.post_attn_layernorm`,
    others as `layer.attn_post_norm` — both supported). `layer_attrs`
    is initialised defensively since LlamaModel base does not always
    populate it depending on the version path.
  * `builders/__init__.py`: export `SeedOssModel`.
  * `builder.py`: route `SeedOssForCausalLM` to `SeedOssModel` in
    `create_model` and add the import.
  * `models/model_type.h`: register `seed_oss` in the LLM type array
    (size 21 → 22) so the C++ runtime selects the standard
    `DecoderOnly_Model` pipeline at load. No graph or KV-cache changes
    in C++ are required — Seed-OSS's structural similarity to Llama
    means the existing decoder-only runtime handles the resulting graph
    transparently.

Validated on a CPU build target and on `-e dml` against
`NousResearch/Hermes-4.3-36B` (36B, FP16/BF16 source, INT4 output via
the `int4_block_size=32` path); the resulting graph loads cleanly
through `Microsoft.ML.OnnxRuntimeGenAI.DirectML 0.13.2` and produces
coherent generation on AMD DirectML hardware (RX 9060 XT, 16 GB VRAM).
Total delta is ~135 lines of Python plus a single one-line C++ array
update, with no kernel or runtime changes.

PMeeske commented May 3, 2026

@microsoft-github-policy-service agree


PMeeske commented May 3, 2026

For transparency: the Seed-OSS architectural mapping in this PR — the make_layer topology, post-norm injection points, attention_bias / attention_out_bias detection, and the C++ runtime "architecture-agnostic decoder-only" rationale — was authored by me on 2026-04-27 in a private project, before shipping it upstream. I used AI assistance (Claude Code) to apply the patch to your current main, add a defensive layer_attrs init for cross-version compatibility, write the commit message and this PR body, and orchestrate the fork/branch/PR mechanics. The substantive engineering decisions are mine; the upstream-fit polish was AI-assisted. Happy to expand on any specific design choice.
