Generating color palettes

Colormind is a color palette generator; go to the homepage to try it out.

As a designer, one of the first things I do when starting a new project is get a sense of the color palette. Making color palettes is difficult because while most people can tell when a combination of colors is pleasing, it's hard to explain exactly why. That makes it doubly difficult to produce something that's both pleasing and fits certain prerequisites, like branding guidelines.

In fact, this is something I still have trouble with; more often than not I resort to a "guess and check" approach, sampling colors from online color generators and sometimes from photography. This is obviously pretty tedious, and I've long thought there should be a way to automate the process, somehow distilling the required intuition into a machine learning model.

One of my earlier stabs at this involved an LSTM (a recurrent neural net that generates sequences). The results looked OK, but the LSTM really didn't like bright colors. Because the model is afraid to be wrong, it likes to choose middling values - dull browns and greys.
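
To see why, consider what a squared-error loss does when similar inputs have very different correct answers: the loss-minimizing prediction is their average. A minimal sketch with made-up target colors:

```python
# Minimal sketch (not the original model): why a squared-error objective
# favors dull colors. The target colors here are hypothetical.
import numpy as np

# Suppose the "next color" in similar palettes was sometimes red,
# sometimes blue, sometimes yellow.
targets = np.array([
    [230, 30, 40],   # vivid red
    [30, 60, 220],   # vivid blue
    [240, 200, 20],  # vivid yellow
])

# The constant prediction that minimizes mean squared error is the mean,
# which lands in the muddy middle of RGB space.
best_guess = targets.mean(axis=0)
print(best_guess)  # ~[167, 97, 93] - a dull brownish tone
```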

In 2016 I learned about a new technique called the GAN (Generative Adversarial Network). Unlike traditional neural nets that optimize for per-pixel accuracy, a GAN learns to generate images that are visually plausible, which sidesteps the averaging problem of the earlier methods.
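
In rough terms, a discriminator learns to tell real palettes from generated ones while the generator learns to fool it, so the generator is rewarded for plausibility rather than for hedging toward the mean. A generic PyTorch-style sketch (not Colormind's actual training code; netG, netD, and the optimizers are assumed to exist):

```python
import torch
import torch.nn.functional as F

def gan_step(netG, netD, real, noise, opt_g, opt_d):
    # Discriminator update: score real palettes as 1, generated ones as 0.
    fake = netG(noise).detach()
    pred_real, pred_fake = netD(real), netD(fake)
    d_loss = (F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real))
              + F.binary_cross_entropy_with_logits(pred_fake, torch.zeros_like(pred_fake)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: reward outputs the discriminator now calls real.
    pred = netD(netG(noise))
    g_loss = F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```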

I downloaded the pix2pix code and modified it to perform color infill for partial palettes, training on data from Adobe Color. The GAN produces colors that often differ from the human choices, but the results still look deliberate and cohesive.
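
Roughly speaking, training pairs for an infill task like this can be built by treating each palette as a tiny 1x5 RGB image and masking out some swatches. A sketch (the shapes and mask convention are illustrative, not the exact code):

```python
import numpy as np

def make_training_pair(palette, n_hidden=2, rng=np.random):
    """palette: five (r, g, b) tuples in 0-255; returns (masked, target)."""
    target = np.array(palette, dtype=np.float32).reshape(1, 5, 3)  # 1x5 "image"
    masked = target.copy()
    hidden = rng.choice(5, size=n_hidden, replace=False)
    masked[0, hidden] = 0.0  # blanked swatches for the generator to fill in
    return masked, target

# Example: hide two colors from a 5-color palette.
palette = [(26, 188, 156), (46, 204, 113), (52, 152, 219),
           (155, 89, 182), (52, 73, 94)]
masked, target = make_training_pair(palette)
```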

I trained the network on an NVIDIA GPU, with each epoch taking slightly more than an hour. On my setup, training on the GPU is roughly 10x faster than on the CPU.

Training GANs is still a bit of a black art at this point, and I tweaked the knobs until things looked about right. The L1 term in the generator's loss has the biggest visual impact, affecting the spatial awareness of the model. With a high L1 weight the output is less random but also less colorful.
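
For reference, the pix2pix generator minimizes an adversarial term plus a weighted L1 reconstruction term, and the weight is the knob being tuned here. A PyTorch-style sketch (the conditional discriminator call and the default weight of 100 follow the pix2pix paper; everything else is schematic):

```python
import torch
import torch.nn.functional as F

def generator_loss(netD, fake, target, masked_input, lambda_l1=100.0):
    # Adversarial term: the generator wants the discriminator to call its
    # infilled palette real, conditioned on the masked input.
    pred_fake = netD(masked_input, fake)
    adv = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
    # L1 term: pulls the output toward the ground truth. Raising lambda_l1
    # makes results less random but also less colorful.
    l1 = F.l1_loss(fake, target)
    return adv + lambda_l1 * l1
```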

The other obvious question is whether the model overfits, simply repeating the training data back to us. Although GANs are supposed to be resistant to this problem, I passed the training data through the network to see the result.
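
A simple way to quantify such a check, assuming palettes flattened to vectors: compare each generated palette to its nearest neighbor in the training set. Near-zero distances across the board would suggest the model is just echoing its inputs. A sketch:

```python
import numpy as np

def nearest_train_distance(generated, train_set):
    """generated: (n, 15) flattened 5-color palettes; train_set: (m, 15)."""
    # Pairwise distances between every generated palette and every
    # training palette, then the closest match for each output.
    dists = np.linalg.norm(generated[:, None, :] - train_set[None, :, :], axis=2)
    return dists.min(axis=1)
```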

So this works pretty well for human-designed color palettes, but I also wanted to try extracting colors from other sources. In visual design, the best inspiration always comes from adjacent fields.

Paintings and other artwork are great for color extraction as they tend to use color in an intentional way.

The main issue with photography and similar sources is that most photos don't actually make for good color palettes, and common color quantization algorithms aren't designed to produce good palettes in the first place - they surface dominant colors rather than harmonious ones. I go over how I approached this problem in the next post.
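
For context, a common baseline is k-means color quantization, sketched below with scikit-learn and Pillow. It surfaces a photo's dominant colors, but those are usually backgrounds and noise rather than the accents that make a palette work:

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def kmeans_palette(path, k=5):
    # Cluster all pixels and return the k cluster centers as the "palette".
    pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    centers = KMeans(n_clusters=k, n_init=10).fit(pixels).cluster_centers_
    return centers.astype(int)  # dominant colors, not a curated palette
```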

The final generator uses a combination of Adobe Color data and a hand-picked selection of palettes from Dribbble. Under the hood it's an ensemble of trained models with a variety of hyperparameters, each producing slightly different effects. It's not perfect and does require some human discretion, but I think it's a pretty useful tool for color inspiration.
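
Structurally the ensemble idea is simple; here's a hypothetical sketch (the checkpoint names and helper functions are placeholders, not the production code):

```python
import random

# Hypothetical checkpoint names; the real ensemble and its loading code differ.
MODELS = ["g_l1_high.pt", "g_l1_low.pt", "g_dribbble.pt"]

def suggest_palette(masked_palette, load_model, run_model):
    # Pick one trained generator per request so suggestions vary in character.
    model = load_model(random.choice(MODELS))
    return run_model(model, masked_palette)
```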