LLM-based nodes require a model configured in `models.yaml` and runtime parameters.
As of now, LLM inference is supported for TGI, vLLM, OpenAI, Azure, Azure OpenAI, Ollama, and Triton-compatible servers. Model deployment is external and configured in `models.yaml`.
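As a hedged illustration, a `models.yaml` entry pointing at an externally deployed vLLM endpoint might look like the sketch below. All field names here (`model_type`, `url`, `parameters`) are assumptions for illustration, not SyGra's authoritative schema; consult the repository's sample `models.yaml` for the real keys.

```yaml
# Hypothetical models.yaml entry; field names are illustrative assumptions,
# not the authoritative schema.
my_vllm_model:
  model_type: vllm            # one of the supported backends, e.g. tgi | vllm | openai | ollama | triton
  url: http://localhost:8000  # externally deployed inference endpoint
  parameters:
    temperature: 0.7
    max_tokens: 1024
```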
## SyGra as a Platform
SyGra can be used as a reusable platform to build different categories of tasks on top of the same graph execution engine, node types, processors, and metric infrastructure.
### Eval
Evaluation tasks live under `tasks/eval` and provide a standard pattern for:
- Computing **unit metrics** per record during graph execution
- Computing **aggregator metrics** after the run via graph post-processing, as sketched below
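As a rough illustration of this two-phase pattern (a generic sketch, not SyGra's actual API), a unit metric scores each record as it flows through the graph, and an aggregator metric reduces the collected unit scores after the run. The function names (`exact_match`, `aggregate_accuracy`) and record fields are hypothetical.

```python
from typing import Iterable


def exact_match(record: dict) -> float:
    """Hypothetical unit metric: 1.0 if the prediction equals the reference."""
    return 1.0 if record["prediction"].strip() == record["reference"].strip() else 0.0


def aggregate_accuracy(unit_scores: Iterable[float]) -> float:
    """Hypothetical aggregator metric: mean of the per-record unit scores."""
    scores = list(unit_scores)
    return sum(scores) / len(scores) if scores else 0.0


# During graph execution each record is scored individually; post-processing
# then reduces the unit scores to a single run-level number.
records = [
    {"prediction": "Paris", "reference": "Paris"},
    {"prediction": "Lyon", "reference": "Paris"},
]
unit_scores = [exact_match(r) for r in records]
print(aggregate_accuracy(unit_scores))  # 0.5
```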