Color transfer is a long-standing problem that seeks to transfer the color style of a reference image onto a source image. By using different references, one can alter the color style without changing the original image content, emulating different illumination, weather conditions, scene materials, or even artistic color effects. Applications include color correction for rendered imagery, artistic style transfer, turning a day scene into a night scene, and image analogies.
The color transfer task defines the following inputs:
- A content image — the image we want to transfer the color palette to;
- A style image — the image we want to transfer the color palette from;
- A generated image — the image that contains the final result.

A lot of research in recent years has focused on this task. The best results have been obtained using deep learning techniques inspired by the idea of semantic-guided transfer, where the VGG-19 network architecture is used to extract content feature maps. However, the number of weight parameters is quite large, and the models are very heavy (over 533 MB). Due to its size, deploying VGG is difficult, especially if you are looking to run a model locally on mobile. Therefore, we need a fast statistical method that can easily be deployed on mobile platforms.
We present a new method of color transfer based on simple statistics. The main idea of the proposed method is to sort the content and style image pixels separately by brightness, and to gather statistics from the 5-10%, 24-26%, 49-51%, 74-76%, and 99-99.5% quantile bands of the brightness distribution. We then choose the transformation that matches the covariance and mean of these quantile distributions.
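As a rough illustration, this statistics-gathering step can be expressed in a few lines of NumPy. The `quantile_stats` helper, the simple brightness proxy, and the exact band handling below are illustrative assumptions, not the production implementation:

```python
import numpy as np

def quantile_stats(image, bands=((0.05, 0.10), (0.24, 0.26), (0.49, 0.51),
                                 (0.74, 0.76), (0.99, 0.995))):
    """Return per-band pixel statistics for each brightness quantile band."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    brightness = pixels.mean(axis=1)              # simple brightness proxy
    sorted_pixels = pixels[np.argsort(brightness)]  # sort pixels by brightness
    stats = []
    for lo, hi in bands:
        i, j = int(lo * len(sorted_pixels)), int(hi * len(sorted_pixels))
        band = sorted_pixels[i:j]
        stats.append((band.mean(axis=0), np.cov(band, rowvar=False)))
    return stats
```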
In particular, let x_C = (R, G, B)^T be a source image pixel. Each pixel is transformed as:
x_{S'} = Ax_C + b
where A is a 3×3 matrix and b is a three-dimensional vector. The new data points x_{S'} will have the desired mean and covariance:
µ_{S'} = \frac{1}{n} \sum_{i} x_{S'}^{(i)}
Σ_{S'} = \frac{1}{n} \sum_{i} \left(x_{S'}^{(i)} - µ_{S'}\right)\left(x_{S'}^{(i)} - µ_{S'}\right)^T
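In NumPy terms, these two statistics map directly onto built-in calls; the array name below is a placeholder for an (n, 3) array with one RGB row per pixel:

```python
import numpy as np

pixels = np.random.rand(1000, 3)        # placeholder (n, 3) RGB pixel array
mu = pixels.mean(axis=0)                # per-channel mean, shape (3,)
sigma = np.cov(pixels, rowvar=False)    # 3×3 channel covariance matrix
```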
We want to find A and b such that the statistics of the result image match those of the style image. Hence, the transform is:
x_{S'} = Σ_S^{\frac{1}{2}}Σ_C^{-\frac{1}{2}}(x_C - µ_C) + µ_S
To compute the square root of a matrix, we use the 3D color matching formulation. First, let the eigenvalue decomposition of a covariance matrix be A = UΛU^T. Then, we define the matrix square root as A^{\frac{1}{2}} = UΛ^{\frac{1}{2}}U^T. The transformation matrix is thus A = Σ_S^{\frac{1}{2}}Σ_C^{-\frac{1}{2}}.
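A minimal NumPy sketch of this transform, where `matrix_sqrt` and `color_transfer` are illustrative helper names and the square root follows the eigendecomposition definition above:

```python
import numpy as np

def matrix_sqrt(a):
    """Square root of a symmetric positive semi-definite matrix via A = U Λ U^T."""
    eigvals, eigvecs = np.linalg.eigh(a)
    eigvals = np.clip(eigvals, 0.0, None)   # guard against tiny negative eigenvalues
    return eigvecs @ np.diag(np.sqrt(eigvals)) @ eigvecs.T

def color_transfer(pixels, mu_c, sigma_c, mu_s, sigma_s):
    """Apply x_{S'} = Σ_S^{1/2} Σ_C^{-1/2} (x_C - µ_C) + µ_S to an (n, 3) pixel array."""
    A = matrix_sqrt(sigma_s) @ np.linalg.inv(matrix_sqrt(sigma_c))
    return (pixels - mu_c) @ A.T + mu_s
```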
Next, we apply histogram equalization to improve the contrast of the source image. Then, we use linear interpolation to find the covariance matrix and mean for each pixel and compute the new pixel values. In the last step, we clip any values that fall outside the range [0, 255] and merge the channels back together.
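A sketch of the equalization and clipping steps, assuming OpenCV (`cv2`) and 8-bit images; applying equalization per channel and the function names are illustrative choices, not a fixed part of the method:

```python
import cv2
import numpy as np

def equalize_contrast(image_uint8):
    """Histogram-equalize each channel of an 8-bit color image."""
    channels = cv2.split(image_uint8)
    return cv2.merge([cv2.equalizeHist(c) for c in channels])

def postprocess(result_float):
    """Clip transformed values to [0, 255] and convert back to 8-bit."""
    return np.clip(result_float, 0, 255).astype(np.uint8)
```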
However, our method still has limitations. It may mismatch regions that exist in the source but not in the reference, causing incorrect color transfer. Our color transfer is suitable for semantically similar images, so it may produce unnatural color effects for image pairs without a semantic relationship, such as the yellow water in the 3rd failure example.
This simple algorithm can be easily implemented using different rendering frameworks. You can speed up the transformation calculation using the GPU and achieve 30 fps for 600×400 px frames on the Apple A10 Fusion GPU.

About the author. Anna Khanko is a Machine Learning researcher at Digamma.ai. She received her MS in Computer Science from the National Technical University of Ukraine, where she did research on Natural Language Processing. These days, Anna continues to work in the NLP and Computer Vision areas; she has developed an information retrieval system, chatbots, and image processing applications. Anna is always keen to learn new things and broaden her professional horizons.