Learning from failed scans
January 20, 2026
Since version 1.0, released last September, FairScan has been updated 11 times with regular improvements. Many of those updates focus on FairScan's core, the automatic processing of captured images, with the goal of producing clean PDFs effortlessly. And yet, the result is not always what you expect: it is sometimes not as good as what you get from a commercial app, even if FairScan gets there in a much more respectful way.
Of course, what those commercial apps produce is the result of years of work. Some of them also store all user scans in the cloud, which gives them access to massive amounts of data they can use to improve their algorithms. FairScan doesn't collect any data from its users, and its detection model is trained on a public dataset.
With the latest release, however, there is now an easy way to share the images that FairScan doesn't handle well. And that can help make FairScan better.
Where the processing can fail
When you tap on the capture button in FairScan (the big round one), you get a preview of the scanned page after a fraction of a second. It's not the photo you just captured, but the result of the automatic image processing pipeline that the app runs behind the scenes. As the app is still far from perfect, each step of that pipeline can produce suboptimal results.
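To give a rough idea of what such a pipeline looks like, here is a minimal Kotlin sketch. The stage names and the Page structure are made up for illustration; this is not FairScan's actual code, only a way to show how each step feeds the next and how a weak result in one stage ends up in the preview you see.

```kotlin
// Illustrative sketch only, not FairScan's actual code. The point is the
// structure: each stage transforms the output of the previous one, so a
// weak result at any step ends up in the page shown in the preview.

data class Page(
    val pixels: IntArray,                                 // working image, ARGB
    val width: Int,
    val height: Int,
    val corners: List<Pair<Float, Float>> = emptyList(),  // detected document corners
    val isColor: Boolean = true                           // grayscale vs. color decision
)

fun interface Stage {
    fun apply(page: Page): Page
}

class ScanPipeline(private val stages: List<Stage>) {
    fun process(captured: Page): Page =
        stages.fold(captured) { page, stage -> stage.apply(page) }
}

// Placeholder stages standing in for the real steps described below.
val detectDocument     = Stage { it /* locate the document's corners */ }
val correctPerspective = Stage { it /* crop and straighten using the corners */ }
val decideColorMode    = Stage { it /* grayscale vs. color heuristic */ }
val adjustLevels       = Stage { it /* brightness and contrast adjustment */ }

val pipeline = ScanPipeline(
    listOf(detectDocument, correctPerspective, decideColorMode, adjustLevels)
)
```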

Here are some typical examples:
- Document detection is the most critical step. It may fail to detect edges accurately, especially in difficult conditions, for example when multiple documents appear in the frame.
- Color detection tries to determine automatically whether a document should be processed as grayscale or as color, but this is not always as straightforward as it sounds, particularly with low-saturation colors or under dim lighting.
- Brightness and contrast are adjusted automatically, which can sometimes result in documents that are too bright or too dark (a simplified sketch of these last two heuristics follows right after this list).
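To make those last two heuristics more concrete, here is a deliberately simplified sketch. The saturation threshold and the contrast-stretching formula are assumptions chosen for the example, not FairScan's actual values or methods.

```kotlin
// Simplified illustration of two heuristics; the threshold and the formula
// are assumptions for this example, not FairScan's actual values.

// Grayscale vs. color: look at how saturated the pixels are on average.
// Low-saturation colors or dim lighting push this average down, which is
// exactly where the decision becomes fragile.
fun looksLikeColor(argbPixels: IntArray, threshold: Double = 0.08): Boolean {
    if (argbPixels.isEmpty()) return false
    var saturationSum = 0.0
    for (p in argbPixels) {
        val r = (p shr 16) and 0xFF
        val g = (p shr 8) and 0xFF
        val b = p and 0xFF
        val maxC = maxOf(r, g, b)
        val minC = minOf(r, g, b)
        // HSV-style saturation: 0.0 for pure gray, 1.0 for fully saturated.
        saturationSum += if (maxC == 0) 0.0 else (maxC - minC).toDouble() / maxC
    }
    return saturationSum / argbPixels.size > threshold
}

// Brightness/contrast: stretch pixel values so the darkest and brightest
// pixels span the full 0..255 range. If glare or shadows skew the estimate
// of "darkest" or "brightest", the result comes out too bright or too dark.
fun stretchContrast(gray: IntArray): IntArray {
    var lo = 255
    var hi = 0
    for (v in gray) {
        if (v < lo) lo = v
        if (v > hi) hi = v
    }
    if (hi <= lo) return gray.copyOf()
    return IntArray(gray.size) { i -> (gray[i] - lo) * 255 / (hi - lo) }
}
```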
When the app produces a good-looking document (which is hopefully most of the time), that's great for users. But those cases don't help improve FairScan. The real learning comes from the situations where the result is disappointing. Those "bad" cases highlight what needs to be improved and help guide future work.
How you can help
When a page is processed badly, you can now report it easily:
- Open the menu at the top right and go to the About screen
- Tap on Report a problem with last captured image

Your system will then let you choose which app to use to send the image. The email address is prefilled with contact@fairscan.org. Nothing is sent directly by FairScan: the app only prepares the data and hands it over to the app you choose to actually send it. Your email app should let you see the image before sending it, and you can add a short description of the issue you noticed.
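For the curious, this kind of hand-off is what a share intent does on Android. The sketch below shows the general idea; the function name, file path, provider authority, and subject format are invented for illustration and are not FairScan's actual code.

```kotlin
import android.app.Activity
import android.content.Intent
import androidx.core.content.FileProvider
import java.io.File

// Illustrative sketch: prepare an email report and let the user pick the app
// that actually sends it. Names and paths are invented for this example.
fun reportLastCapturedImage(activity: Activity, appVersion: String) {
    // The raw captured image, i.e. the input of the processing pipeline.
    val imageFile = File(activity.cacheDir, "last_captured.jpg")
    val imageUri = FileProvider.getUriForFile(activity, "org.example.fileprovider", imageFile)

    val sendIntent = Intent(Intent.ACTION_SEND).apply {
        type = "image/jpeg"
        putExtra(Intent.EXTRA_EMAIL, arrayOf("contact@fairscan.org"))
        // The app version goes in the subject so the processing can be reproduced.
        putExtra(Intent.EXTRA_SUBJECT, "FairScan problem report ($appVersion)")
        putExtra(Intent.EXTRA_STREAM, imageUri)
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
    }

    // Nothing is sent here: the chooser hands the prepared data to the app
    // the user selects, and that app does the sending.
    activity.startActivity(Intent.createChooser(sendIntent, "Report a problem"))
}
```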
Please note that the image that is sent is not the result of FairScan's processing. It is the raw image captured by your device, which serves as the input to the processing pipeline. Also note that tapping End scan on the export screen deletes all temporary images, including the last captured one.
What I will do with what you send (and what I won't do)
I will not share or publish your images. This is probably what you would expect from an app that aims to be respectful, but it's worth stating explicitly. Please avoid sending sensitive documents, though: I would not feel comfortable receiving them. Also keep in mind that email is not encrypted.
When you send an image, I re-run locally the processing that happened on your device, which is why the FairScan version you used is automatically included in the email subject. This lets me observe the issue myself and investigate it properly.
Over time, these reports help reveal patterns: situations or document types that are not well represented in the dataset I built to train the document detection model. When that happens, I do not add the received images to the dataset. Instead, I create similar images myself. This is because copyright may apply both to the photo itself and to the documents that appear in the image.
The dataset currently contains over 600 images. That's already useful, but it's still a very small sample compared to the variety of documents and real-world conditions in which FairScan is used every day. Inevitably, it has biases that need to be identified and addressed. This is especially important since the same dataset is used not only for document detection, but also to evaluate other heuristics in the processing pipeline, such as color detection (see this blog post).
With FairScan, you own your data. Your scans are not automatically uploaded to a cloud to be analyzed or reused by large companies.
If you're willing to help improve FairScan, you can now easily share an image that the app didn't process correctly. This directly helps improve the automatic processing pipeline and make its results more reliable over time.
Every report contributes to better scans, for everyone.