Paragraph Count Match (par_ct_match)
Metric description
Paragraph count match counts the paragraphs in the output, checking the count against optional minimum and maximum bounds. Additional thresholds control what qualifies as a paragraph (a minimum number of sentences or words per block).
How to interpret the score
- 100: the count satisfies the configured rules.
- 0: fails the rules or could not be evaluated.
API usage
Prerequisites
After the environment variables are configured, the next step is to create a JSON payload for the custom-runs request. For a field-by-field description of the payload (top-level keys, evaluations, and each row in data), see Custom run request body.
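The example script below reads its credentials with `load_dotenv()`, so a `.env` file in the working directory is one way to satisfy this prerequisite. The values here are placeholders, not real credentials or a real endpoint:

```shell
# .env — read by load_dotenv(override=True) in the example script.
# Replace both values with your own; the URL below is illustrative only.
AEGIS_API_KEY=your-api-key
AEGIS_API_BASE_URL=https://api.example.com/v1
```

Exporting the same two variables in your shell works equally well.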
Shortname: par_ct_match
Default threshold: 100
Structural metrics run without an LLM (deterministic checks). Your run may still include model_slug where the API expects it; scoring does not depend on it for this category.
Inputs (each object in data)
- output (str, required): The text whose paragraphs are counted.
metric_args
- min_count (number, optional): Minimum number of paragraphs required.
- max_count (number, optional): Maximum number of paragraphs allowed.
- min_sentences_in_paragraph (number, optional): Minimum sentences for a block to count as a paragraph. Default: 1.
- min_words_in_paragraph (number, optional): Minimum words for a block to count as a paragraph. Default: 1.
Eval metadata
Structural metrics do not populate eval_metadata; the field is omitted or null on the result object.
Example
import json
import os
import requests
from dotenv import load_dotenv
load_dotenv(override=True)
_API_KEY = os.getenv("AEGIS_API_KEY")
_BASE_URL = os.getenv("AEGIS_API_BASE_URL")
_CUSTOM_RUN_URL = f"{_BASE_URL}/runs/custom"
def post_custom_run(payload: dict) -> requests.Response:
"""POST JSON payload to Aegis custom runs; returns the raw response."""
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {_API_KEY}",
}
return requests.post(
_CUSTOM_RUN_URL,
headers=headers,
data=json.dumps(payload),
)
if __name__ == "__main__":
    data = [
        {"output": "Para one.\n\nPara two."}
    ]
    payload = {
        "threshold": 100,
        "model_slug": "o4-mini",
        "is_blocking": True,
        "data_collection_id": None,
        "evaluations": [
            {
                "metrics": [
                    {
                        "metric": "par_ct_match",
                        "metric_args": {"min_count": 1, "max_count": 5},
                    },
                ],
                "threshold": 100,
                "model_slug": "o4-mini",
                "data": data,
            }
        ],
    }
    response = post_custom_run(payload)
    response.raise_for_status()
    print(json.dumps(response.json(), indent=2))