# Accurate Color Measurement for Any VLM
VLMs and multimodal vision encoders lose color precision during patching and downsampling. Coriro measures the actual pixels, extracts palettes in perceptually uniform OKLCH, and passes the results alongside the image — so the model receives measured color data instead of estimates.
Core runtime is pure Python. No GPU required. Zero lock-in.
## Usage
```python
from coriro import measure

m = measure("image.png")

# get results in the format your pipeline needs
m.to_json()    # raw measurement JSON (schema)
m.to_xml()     # XML context block
m.to_prompt()  # natural language for system prompts
```
## What it measures
Palettes include explicit measurement criteria when consolidation is enabled (the default). If a color isn’t listed, it’s below the threshold — omission is signal, not oversight.
- Dominant color and ranked palette in perceptually uniform OKLCH
- ICC profile conversion to sRGB before measurement (e.g. Display P3, Adobe RGB)
- Perceptual ΔE consolidation — near-identical colors collapsed, black and white families unified
- Chroma-aware supplementation for high-saturation colors missed by area-based extraction
- Spatial color distribution via fixed grid partitioning
- Text foreground colors via neural text detection
- Solid accent region detection for UI elements (CTAs, icons, badges)
- CNN-guided pixel stabilization for compression artifacts and anti-aliasing
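To make the ΔE consolidation step concrete, here is an illustrative sketch — not Coriro's actual implementation. It assumes colors are already in OKLab, where plain Euclidean distance approximates perceptual difference (ΔE-OK), and uses an arbitrary example threshold:

```python
# Illustrative sketch (not Coriro's implementation): greedy palette
# consolidation by perceptual distance. Colors are assumed to already
# be in OKLab, where Euclidean distance approximates Delta-E OK.
import math

def consolidate(palette, threshold=0.02):
    """palette: list of ((L, a, b), pixel_count), sorted by count desc.
    Any color within `threshold` of an already-kept color is folded
    into it, accumulating its pixel count."""
    kept = []  # list of [[L, a, b], count]
    for color, count in palette:
        for entry in kept:
            if math.dist(entry[0], color) < threshold:
                entry[1] += count  # merge into the nearby kept color
                break
        else:
            kept.append([list(color), count])
    return [(tuple(c), n) for c, n in kept]

palette = [
    ((0.62, 0.10, 0.05), 5000),  # dominant red-ish
    ((0.63, 0.11, 0.05), 1200),  # near-duplicate of the above
    ((0.95, 0.00, 0.00), 800),   # near-white
]
print(consolidate(palette))  # the two red-ish entries collapse into one
```

Because the palette arrives sorted by area, the greedy pass always merges smaller entries into the more dominant color, never the reverse.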
## How it fits
Coriro adds a color measurement step to your inference pipeline — parallel to the vision encoder, not inside it. The model receives measured color data alongside the image. No weights modified. No configuration changed. Remove it and the pipeline runs as before.
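A minimal wiring sketch of that shape, assuming a generic chat-completions style payload. `build_messages` and the `color_context` placeholder are hypothetical; in practice `color_context` would be the output of `m.to_xml()` from the Usage section:

```python
# Hypothetical wiring sketch: Coriro's output rides along as text in the
# same request as the image. `color_context` stands in for m.to_xml()
# output (its schema is not assumed here).
import base64

def build_messages(image_bytes: bytes, color_context: str):
    """Assemble a generic chat-style payload where the measured color
    data accompanies, rather than replaces, the image."""
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    return [
        {"role": "system",
         "content": "Measured color data is provided for the attached "
                    "image; prefer it over your own estimate."},
        {"role": "user", "content": [
            {"type": "text", "text": color_context},
            {"type": "image", "data": image_b64},
        ]},
    ]

msgs = build_messages(b"\x89PNG...", "<colors>...</colors>")
```

The key property: removing the text part leaves a plain image request, which is why the pipeline runs unchanged when Coriro is taken out.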
## Works with
Claude, GPT, Gemini, Llama, Qwen-VL, and any multimodal pipeline. Output integrates through tool calls, context blocks, system prompts, or structured features in custom architectures.
If your app runs on a JS frontend (Next.js, Vercel, etc.), host Coriro as a small Python sidecar API and forward its output alongside the image.
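A stdlib-only sketch of such a sidecar, kept dependency-free for illustration. `fake_measure`, the route, and the response shape are stand-ins; a real deployment would call `coriro.measure(...).to_json()` in its place:

```python
# Minimal sidecar sketch using only the stdlib. `fake_measure` is a stub
# standing in for coriro.measure(...).to_json(); the JS frontend would
# POST image bytes here and forward the JSON alongside the image.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def fake_measure(image_bytes: bytes) -> dict:
    # Stand-in for the real measurement; response shape is illustrative.
    return {"bytes_received": len(image_bytes)}

class ColorHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        payload = json.dumps(fake_measure(body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ColorHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Exercise the sidecar the way a frontend would: POST bytes, read JSON.
port = server.server_address[1]
req = urllib.request.Request(f"http://127.0.0.1:{port}/measure",
                             data=b"\x89PNG fake image bytes")
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
server.shutdown()
print(result)
```

Since the sidecar only sees raw bytes in and JSON out, the frontend framework on the other side is irrelevant.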