Dark mode is everywhere, but images are tricky. Some images, like diagrams and screenshots, look better inverted on dark backgrounds, while photos and complex graphics become unreadable when flipped.
Websites face an awkward choice: leave images as-is and they jar against dark backgrounds; invert everything and photos look terrible; manually tag each image and it doesn’t scale.
Heuristics don’t help much because edge cases are everywhere: a white background doesn’t guarantee safe inversion if there’s subtle color or gradient text.
The approach
invertornot.com uses a neural network to predict whether an image should be inverted. The idea came from Gwern’s idea list and seemed worth building.
I fine-tuned EfficientNet-B0 on labeled images. The model is small and fast, which matters for an API that processes hundreds of images per request: predictions take under 50ms on CPU.
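A minimal sketch of what that fine-tuning looks like, assuming a torchvision EfficientNet-B0 backbone with its ImageNet head swapped for a single invert/don’t-invert logit; the training loop and data handling here are illustrative, not the actual code:

```python
# Sketch: fine-tune EfficientNet-B0 as a binary invert/don't-invert classifier.
import torch
import torch.nn as nn
from torchvision import models

model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
# Replace the 1000-class ImageNet head with a single logit.
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 1)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One step: images are (N, 3, 224, 224), labels are 0/1 (1 = invert)."""
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```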
The API
Three endpoints let you check images by upload, URL, or hash. Responses include the prediction, the SHA-1 of the image, and any errors. Redis caches everything, so the model never processes the same image twice.
Results stay cached for URLs, since they typically point to stable images; this keeps latency low and reduces compute.
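As a rough illustration of the API’s shape, here is a hypothetical client call; the endpoint path and response fields follow the description above but are assumptions, not documented contracts:

```python
# Hypothetical client sketch; the /api/url path and the field names
# ("invert", "sha1", "error") are assumptions based on the prose above.
import requests

resp = requests.post(
    "https://invertornot.com/api/url",
    json=["https://example.com/diagram.png"],
)
resp.raise_for_status()
for result in resp.json():
    print(result.get("invert"), result.get("sha1"), result.get("error"))
```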
Learning from mistakes
Users can submit corrections when predictions are wrong. These feed back into a fine-tuning dataset through a Redis queue, so the model improves over time as it sees real edge cases.
The training data started with manual labeling; the correction system now extends it, making the model continuously better without needing manual review.
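A sketch of what such a correction queue could look like with the redis-py client; the key name and payload shape are hypothetical:

```python
# Hypothetical correction queue; the "corrections" key and payload
# fields are assumptions, not the real implementation.
import json
import redis

r = redis.Redis()

def submit_correction(sha1: str, invert: int) -> None:
    """Queue a user-submitted label (1 = should invert, 0 = should not)."""
    r.rpush("corrections", json.dumps({"sha1": sha1, "invert": invert}))

def drain_corrections(batch_size: int = 100) -> list[dict]:
    """Pop queued corrections to build the next fine-tuning batch."""
    items = []
    for _ in range(batch_size):
        raw = r.lpop("corrections")
        if raw is None:
            break
        items.append(json.loads(raw))
    return items
```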
Results and tradeoffs
Validation accuracy sits around 95% on the initial dataset. Common cases like code screenshots, flowcharts, and product photos work reliably.
Edge cases still trip it up: images with mixed content or unusual color schemes are harder. The feedback loop helps, but some images are genuinely ambiguous.
The model is conservative by default: when uncertain, it leans toward not inverting, since a broken photo is worse than a slightly mismatched background.
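One simple way to encode that bias is an asymmetric decision threshold; the 0.7 cutoff below is illustrative, not the deployed value:

```python
# Illustrative only: the threshold value is an assumption.
def should_invert(p_invert: float, threshold: float = 0.7) -> bool:
    """Invert only when the model is clearly confident.

    Requiring p_invert > threshold (rather than > 0.5) keeps uncertain
    images un-inverted: a slightly mismatched background is a cheaper
    mistake than a broken photo.
    """
    return p_invert > threshold
```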
Deployment
The API runs on a single server with FastAPI and Redis. ONNX conversion keeps inference fast enough that a GPU isn’t needed. CORS is open, since any website should be able to use it.
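Putting those pieces together, a minimal serving sketch with ONNX Runtime and open CORS; the model path, endpoint name, tensor names, and preprocessing details are all assumptions:

```python
# Serving sketch: FastAPI + CPU-only ONNX Runtime with CORS wide open.
# "model.onnx", "/api/file", the "input" tensor name, and the
# normalization are assumptions, not the actual deployment.
import io

import numpy as np
import onnxruntime as ort
from fastapi import FastAPI, UploadFile
from fastapi.middleware.cors import CORSMiddleware
from PIL import Image

app = FastAPI()
app.add_middleware(
    CORSMiddleware, allow_origins=["*"], allow_methods=["*"], allow_headers=["*"]
)

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

def preprocess(data: bytes) -> np.ndarray:
    """Decode to a 1x3x224x224 float32 batch."""
    img = Image.open(io.BytesIO(data)).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype=np.float32) / 255.0
    return x.transpose(2, 0, 1)[None]

@app.post("/api/file")
async def predict(file: UploadFile):
    (out,) = session.run(None, {"input": preprocess(await file.read())})
    return {"invert": int(out.item() > 0)}
```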
The frontend is simple: drag and drop an image, see the prediction instantly, and try example images to get a feel for what works.
Summary
invertornot solves a specific problem for dark-mode implementations. Rule-based heuristics don’t work well, so a neural network makes sense; EfficientNet-B0 is small enough to deploy easily while being accurate enough for real use.
The feedback system is key: users find edge cases faster than I ever could, and corrections flow back automatically.
It’s useful if you’re building dark mode and want better image handling than manual tagging or rigid rules.
Check it out at invertornot.com or on GitHub.