Pets
- Avg AIMAC Debt: 7.24
- Avg Cost: $0.22
- Models Tested: 42
- Zero-Debt Models: 2
- Avg Violations: 13.0
Insights for Pets
- Common Issues: elements must meet minimum color contrast ratio thresholds; links must have discernible text; select elements must have an accessible name
- Category Difficulty: Rank #10 of 28 categories (based on average model performance)
- Real-World Comparison: The WebAIM Million project audits accessibility across one million real websites annually. Although the tools differ (axe-core here vs. WAVE there), AI models showed 78.6% fewer detected issues than real Pets websites (13.0 axe-core violations vs. 61.1 WAVE errors).
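The three common issues above each have a straightforward markup-level fix. A minimal sketch (the colors, URLs, and labels are illustrative, not taken from any tested page):

```html
<!-- Color contrast: dark text on a light background, well above the 4.5:1 WCAG AA minimum -->
<p style="color: #1a1a1a; background: #ffffff;">Adopt a pet today</p>

<!-- Discernible link text: an icon-only link needs an accessible label -->
<a href="/adopt" aria-label="Browse adoptable pets">🐾</a>

<!-- Accessible name for a select: associate it with a visible label -->
<label for="species">Species</label>
<select id="species" name="species">
  <option>Dog</option>
  <option>Cat</option>
</select>
```

All three patterns map directly to the axe-core rules listed above (color-contrast, link-name, select-name).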
Model Results
AIMAC Debt: the model's accessibility debt for this category (lower = better).

| Model | AIMAC Debt | Cost (USD) | Critical Violations | Other Violations |
|---|---|---|---|---|
| DeepSeek V3.2 Speciale | 0.00 | $0.006 | 0 | 0 |
| GPT 5.3 Codex | 0.00 | $0.10 | 0 | 0 |
| GPT 5.4 Mini | 2.00 | $0.03 | 0 | 1 |
| Nemotron 3 Super (free) | 3.05 | $0.00 | 0 | 3 |
| Qwen3 Max | 3.32 | $0.036 | 0 | 4 |
| GLM 4.7 Flash | 3.53 | $0.004 | 0 | 5 |
| Kimi K2.5 | 3.53 | $0.03 | 0 | 5 |
| GPT 5.4 | 3.53 | $0.16 | 0 | 5 |
| MiniMax M2.7 | 3.53 | $0.07 | 0 | 5 |
| GPT 5.4 Pro | 3.71 | $6.85 | 0 | 6 |
| o4 Mini | 3.85 | $0.02 | 0 | 7 |
| Trinity Large Preview (free) | 3.98 | $0.00 | 0 | 8 |
| Olmo 3.1 32B Instruct | 4.00 | $0.004 | 0 | 4 |
| Claude Haiku 4.5 | 4.09 | $0.08 | 0 | 9 |
| gpt oss 120b | 4.19 | $0.004 | 0 | 10 |
| Grok 4.1 Fast | 4.19 | $0.003 | 0 | 10 |
| Qwen3.5 Flash | 4.19 | $0.0035 | 0 | 10 |
| Qwen3.5 397B A17B | 4.37 | $0.03 | 0 | 12 |
| Qwen3.6 Plus Preview (free) | 4.37 | $0.00 | 0 | 12 |
| o3 | 4.58 | $0.04 | 0 | 15 |
| Mistral Medium 3.1 | 4.70 | $0.014 | 0 | 17 |
| Kimi K2 Thinking | 4.80 | $0.03 | 0 | 19 |
| GPT 5.1 Codex Mini | 4.80 | $0.018 | 0 | 19 |
| Nova 2 Lite | 6.90 | $0.01 | 0 | 7 |
| Codestral 2508 | 7.85 | $0.005 | 0 | 11 |
| Qwen3 Coder Flash | 8.09 | $0.01 | 0 | 13 |
| Gemini 3.1 Pro Preview | 8.28 | $0.13 | 0 | 20 |
| Trinity Large Thinking | 8.37 | $0.01 | 0 | 16 |
| DeepSeek V3.2 | 8.44 | $0.0036 | 0 | 17 |
| Devstral 2 2512 | 8.51 | $0.004 | 0 | 18 |
| Qwen3 Coder Plus | 8.64 | $0.025 | 0 | 20 |
| Mistral Small 4 | 8.76 | $0.006 | 0 | 18 |
| R1 0528 | 9.03 | $0.02 | 0 | 28 |
| Mistral Large 3 2512 | 12.53 | $0.016 | 1 | 9 |
| Qwen3 Max Thinking | 12.73 | $0.04 | 2 | 8 |
| GLM 5 | 13.51 | $0.07 | 1 | 18 |
| Gemini 3 Flash Preview | 13.58 | $0.02 | 1 | 19 |
| KAT Coder Pro V2 | 13.80 | $0.017 | 1 | 23 |
| Grok 4.20 | 14.33 | $0.07 | 1 | 37 |
| Claude Opus 4.6 | 17.60 | $0.62 | 4 | 26 |
| Claude Sonnet 4.6 | 18.74 | $0.43 | 3 | 23 |
| Qwen3 Coder Next | 21.94 | $0.02 | 3 | 14 |