
Smart trichoscopy application with a hardware case

Custom mobile app development aiding a global healthtech company in medical diagnostics

ABOUT
the project

Client:

Healthtech Solution Provider

Location:

Poland | Germany

Company Size:

20+ Employees

Industry:

Healthcare

Services:

Custom Mobile App Development

Technologies:

Flutter

Camera2/CameraX

Dart FFI

OpenCV and C++

SQLite 3

Android APIs

Firebase

Intercom Integration

BLoC Pattern

The trichoscopy application is designed to assist physicians in the comprehensive evaluation of scalp and hair conditions. The app transforms a standard smartphone into a digital trichoscope. Using a proprietary computer vision algorithm, the software automatically calculates key hair parameters, including measured area, hair count, density, average length, anagen-telogen ratio, and the number and density of vellus and terminal hairs. By analyzing photos quickly, the app significantly improves the efficiency and accuracy of doctors’ work.


A key challenge of this project was the fact that the specialized hardware case existed as a single prototype. As a result, our development process was guided solely by the client’s documentation, without direct access to the physical hardware. This limitation demanded precision and flexibility throughout development, and we met the project’s technical requirements with flying colors.

Maksym Marina


Flutter Software Engineer


Customer

Our customer is a global healthtech company specializing in innovative hair research and diagnostics solutions. They offer patented technology that delivers precise and objective trichoscopy measurements, helping to improve the diagnosis of hair-related diseases and treatment monitoring.

Business Challenge

Our customer wanted to develop a mobile iOS and Android application that would work with the company’s proprietary computer vision algorithm and analyze patients’ hair photos taken with a smartphone alone or paired with a special hardware case.

Why Leobit

Our customer was referred to Leobit through their collaboration with another healthtech company, for which we successfully developed an AI-based digital dermoscopy application. The customer was impressed by the quality of our work, our adherence to project timelines, and our ability to solve technical challenges, such as distortion issues when using additional photo lenses for image capture. Given that the trichoscopy application posed similar challenges, the customer was confident that Leobit was the right partner to bring their innovative solution to life.


Project
in detail

Leobit was tasked with developing a cross-platform mobile application in Flutter that would communicate seamlessly with the customer’s computer vision algorithms via an API.

The app allows doctors to take photos of patients’ scalps, choose zoom levels, and tag the specific part of the scalp being photographed. Doctors can assign these photos to existing patients or create new patient profiles. Once the examination session is complete, the photos are archived and uploaded to the server. We implemented background services so that the upload runs only when the conditions for a successful transfer, such as internet availability and a sufficient battery level, are met. This approach guarantees reliable data transfer without interruption. Additionally, when doctors open a photo, they can press an analysis button to receive the results immediately.
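The upload-gating idea described above can be sketched in a few lines: archived photos wait in a queue and are sent only while the device is online and has enough battery. This is a hypothetical, simplified illustration of the logic, not the app’s actual code; the function names and the 20% threshold are assumptions.

```python
MIN_BATTERY_PCT = 20  # assumed threshold, not from the app

def can_upload(is_online: bool, battery_pct: int) -> bool:
    """Check the conditions the background service waits for."""
    return is_online and battery_pct >= MIN_BATTERY_PCT

def drain_queue(queue, is_online, battery_pct, send):
    """Upload pending photos while conditions hold; stop otherwise.

    Photos left in `queue` are retried the next time the service runs.
    """
    uploaded = []
    while queue and can_upload(is_online, battery_pct):
        photo = queue.pop(0)
        send(photo)          # e.g. an HTTP upload in the real app
        uploaded.append(photo)
    return uploaded
```

In the real app this runs inside OS-managed background services, so a session interrupted by a dropped connection simply resumes later.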

By default, mobile cameras do not offer control over the lens type (standard, wide-angle, or telephoto). In this application, however, precise lens control was essential. To address this, we built on the camera’s native API to allow users to manually select the specific lens they want to use. This enhancement gave doctors full control over lens selection, significantly improving the app’s functionality.
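The selection logic can be illustrated with plain data: each physical lens reports a focal length, and the app picks the one the doctor requested instead of letting the OS decide. This is a language-agnostic sketch under the usual assumption that the longest focal length is the telephoto lens and the shortest is the wide-angle one; in the real app this goes through Android’s native camera API.

```python
def pick_lens(lenses, wanted: str):
    """Pick a lens id by role.

    lenses: list of (lens_id, focal_length_mm) reported by the device.
    wanted: "wide", "standard", or "telephoto".
    """
    ranked = sorted(lenses, key=lambda lens: lens[1])  # short -> long focal length
    index = {
        "wide": 0,                    # shortest focal length
        "standard": len(ranked) // 2, # middle of the range
        "telephoto": len(ranked) - 1, # longest focal length
    }
    return ranked[index[wanted]][0]
```

On Android, the equivalent information comes from the camera characteristics of each physical camera; the point here is only that lens choice becomes an explicit user decision.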

Given the wide variety of Android devices, our customer decided to focus on those with high-quality cameras to guarantee optimal functionality. These recommended devices ensure the app’s full performance, but the app can also work on other devices. We implemented a system that reads the phone’s hardware capabilities before the app is used, checking which lenses are available, their focal lengths, supported zoom levels, and resolution. The app then adapts its behavior based on this information. For instance, if a device lacks a telephoto lens, certain features are disabled or adjusted accordingly.
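A minimal sketch of that capability probe: read what the hardware offers once, then derive feature flags the rest of the app consults. The field names, feature names, and the 12 MP threshold are hypothetical, chosen only to illustrate the adaptation step.

```python
def feature_flags(capabilities: dict) -> dict:
    """Derive app feature flags from a one-time hardware probe.

    `capabilities` stands in for what the camera API reports on first launch.
    """
    lenses = set(capabilities.get("lenses", []))
    return {
        "high_zoom_capture": "telephoto" in lenses,       # needs a telephoto lens
        "wide_field_overview": "wide" in lenses,          # needs a wide-angle lens
        "hires_export": capabilities.get("max_resolution_mp", 0) >= 12,
    }
```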

To facilitate real-time user support, we integrated the Intercom SDK into the app. This integration allows users to chat with online support and receive prompt responses to their questions directly from within the app.

To calibrate the camera, we utilize ArUco boards and OpenCV. The process involves capturing an image of the ArUco grid, which is then analyzed to generate the coefficients needed for optimal camera performance. During the image analysis, the goal is to detect and identify every individual marker on the board. This detection is handled by OpenCV’s built-in marker detection algorithm, ensuring precise and accurate camera calibration.

To ensure smooth usability in offline mode, we integrated a local SQLite 3 database into the application. This allows caching of essential data such as examinations, patient profiles, and other critical information. Any data generated during offline usage is stored locally and automatically synchronized once an internet connection is reestablished. This is managed seamlessly through background services, ensuring no data is lost and workflow continuity is maintained.
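The offline-first pattern can be sketched with SQLite directly: rows created while offline carry a pending flag, and the background sync marks them once they reach the server. The schema, table, and function names below are illustrative, not the app’s real data model.

```python
import sqlite3

def open_cache(path=":memory:"):
    """Open the local cache and ensure the (illustrative) schema exists."""
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS examinations ("
        " id INTEGER PRIMARY KEY, patient TEXT, taken_at TEXT,"
        " synced INTEGER NOT NULL DEFAULT 0)"  # 0 = created offline, pending
    )
    return db

def record_examination(db, patient, taken_at):
    """Store an examination locally; works with or without connectivity."""
    db.execute(
        "INSERT INTO examinations (patient, taken_at) VALUES (?, ?)",
        (patient, taken_at),
    )
    db.commit()

def sync_pending(db, upload):
    """Push unsynced rows via `upload`, then flag them; returns the count."""
    rows = db.execute(
        "SELECT id, patient, taken_at FROM examinations WHERE synced = 0"
    ).fetchall()
    for row in rows:
        upload(row)  # server call in the real app
        db.execute("UPDATE examinations SET synced = 1 WHERE id = ?", (row[0],))
    db.commit()
    return len(rows)
```

Because the pending flag lives in the same transaction-safe store as the data itself, an interrupted sync simply resumes on the next background run.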

We implemented Firebase for application distribution, local testing, and error monitoring. Thanks to using Firebase Crashlytics, any errors encountered by users are immediately logged and uploaded (with the user’s permission). This real-time error reporting allows our team to analyze issues quickly and release fixes promptly, ensuring a stable and reliable application experience.


Architecture Development

To ensure fast and efficient app performance, Leobit developed a three-layer architecture based on the BLoC architecture pattern. This setup ensures that all interactions with the interface are sent to the BLoC layer, which handles requests to the infrastructure. Similarly, responses to requests are processed by the BLoC layer before being displayed in the UI. This design ensured smooth and organized communication between different components of the app.
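The three-layer flow can be shown language-agnostically: the UI dispatches events to a BLoC, which calls the infrastructure layer and emits states back toward the UI. The real app uses Flutter’s BLoC libraries in Dart; this Python sketch only mirrors the shape of the pattern, and all names are invented for illustration.

```python
class PatientsBloc:
    """Minimal analogue of a BLoC: events in, states out."""

    def __init__(self, repository, on_state):
        self.repository = repository  # infrastructure layer (API/DB access)
        self.on_state = on_state      # the UI layer listens to emitted states

    def add(self, event):
        """Handle a UI event; the UI never touches the infrastructure directly."""
        if event["type"] == "load_patients":
            self.on_state({"status": "loading"})
            patients = self.repository()  # request to the infrastructure
            self.on_state({"status": "loaded", "patients": patients})
```

The benefit mirrored here is the one described above: every interaction flows through one layer, so UI, business logic, and infrastructure stay independently testable.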


Image processing

One significant challenge was dealing with image distortion caused by the external hardware case and its additional lenses, which produced a “fish-eye” effect. Such distorted images are unsuitable for analysis, so we calibrate each image to correct the distortion, transforming it into a format compatible with the algorithm. Had this processing been handled in Flutter alone, the frame rate would have dropped from 30 fps to around 2 fps, making the application slow and unusable.

So, every picture is captured through Flutter, but the actual image processing occurs in C++ using the OpenCV framework. This process allows for fast and efficient image modification, calibration, and transfer back to Flutter.
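The correction itself comes down to well-known lens-distortion math. Below is a pure-Python sketch of the radial part of the Brown–Conrady model that libraries like OpenCV implement in optimized C++; `k1` and `k2` stand for the radial distortion coefficients produced by calibration. This illustrates the math only, not the app’s actual code, which also handles tangential terms and pixel mapping.

```python
def distort(x, y, k1, k2):
    """Map an ideal normalized image point to its distorted position."""
    r2 = x * x + y * y                      # squared distance from the optical axis
    scale = 1 + k1 * r2 + k2 * r2 * r2      # radial distortion factor
    return x * scale, y * scale

def undistort(xd, yd, k1, k2, iterations=20):
    """Invert the model by fixed-point iteration, as calibration code does."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        scale = 1 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale       # refine the ideal-point estimate
    return x, y
```

Running this per pixel in Dart is exactly the kind of work that collapsed the frame rate; in C++ the same loop vectorizes and runs at camera speed.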


Extending Flutter functionality with C++

Flutter was primarily used for the app’s interface, but it couldn’t meet all of the performance requirements. For more complex tasks, our developer supplemented Flutter with Kotlin for Android and Swift for iOS, using the native Android and iOS APIs. Since the image analysis algorithms were written in C++, we used Dart FFI to call these C++ functions directly from Flutter. As a result, we reduced image processing time from 2 minutes to less than 10 seconds.
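Dart FFI plays the same role for Flutter that `ctypes` plays for Python: bind a symbol exported by a native library and call it without routing data through the framework layer. As a stand-in for the app’s C++ analysis functions, this sketch binds `sqrt` from the system math library; the pattern (load library, declare the signature, call) is what matters, not the function.

```python
import ctypes
import ctypes.util

# Load the native library (the C math library here; libNative.so / the
# C++ analysis library in the Flutter app's case).
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C signature so arguments and results convert correctly,
# just as Dart FFI requires typedefs for each native function.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

def native_sqrt(x: float) -> float:
    """Call straight into native code; no marshalling through the framework."""
    return libm.sqrt(x)
```

Because the call crosses into compiled code with no serialization, heavy per-pixel work runs at native speed, which is where the 2-minutes-to-10-seconds gain came from.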


Setting up hardware

Our customer developed a one-of-a-kind hardware case designed to enhance the quality of the patient’s scalp photos. The first time the hardware case is connected, calibration is required to ensure accurate image recognition. The customer provides a special calibration board for this process.

We used C++, OpenCV, and Dart FFI to ensure the images were calibrated correctly for analysis. In addition to macro lenses, the case also includes lights, allowing users to illuminate the photography area with standard, polarized, or UV light. Using a serial protocol, the app detects the hardware case’s connection and provides control over the lighting—allowing doctors to choose the light type or switch it on or off.
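Serial control of the lights typically means framing small commands as bytes. The frame layout below (start byte, opcode, XOR checksum) and the opcode values are entirely invented for illustration; the case’s real protocol is proprietary and not described in this case study.

```python
# Hypothetical light modes; the real case offers standard, polarized, and UV light.
LIGHT_MODES = {"off": 0x00, "standard": 0x01, "polarized": 0x02, "uv": 0x03}

START_BYTE = 0xA5  # assumed frame marker

def encode_light_command(mode: str) -> bytes:
    """Frame a one-byte light-mode command for the serial link (illustrative)."""
    opcode = LIGHT_MODES[mode]
    checksum = (START_BYTE ^ opcode) & 0xFF  # simple XOR integrity check
    return bytes([START_BYTE, opcode, checksum])
```

On the receiving side, firmware would validate the checksum before switching the LEDs, so a corrupted frame is ignored rather than misapplied.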

The Journey
Behind Our Success

Team:
1 Flutter Developer
1 Project Manager
Proof of Concept (1 week)
Application Skeleton (2 weeks)
Data Management (1 month)
Camera Tuning (1 month)
iOS Support (in progress)

Integrating Image Analysis

Basic Examination Functionality

App Navigation

Authentication

Handling doctor’s information, patients, examinations, etc.

Calibration

Lens Locking

Hardware Case Adaptation

Implementing an iOS analog of the Android native functionality

Technology Solutions

  • A three-layer architecture based on the BLoC architecture pattern
  • Extended Flutter functionality with C++ through Dart FFI
  • IoT hardware case calibration using C++, OpenCV, and Dart FFI
  • Modification of the camera’s native API to allow users to manually select the specific lens
  • Fish-eye distortion elimination using Dart FFI and the OpenCV library

Value Delivered

  • Seamless communication between IoT hardware case and mobile application
  • Reduced image processing time from 2 minutes to less than 10 seconds by using Dart FFI to call C++ functions directly from Flutter
  • Enhanced camera capabilities