Blendoku & Color Interpolation

Lately I’ve been playing Blendoku on my phone. This is a very pretty game, the object of which is to fill in a bunch of missing colors. It looks sort of like a crossword puzzle with several colored squares and even more black squares. The player’s job is to fill in all of the black squares with the colors that best blend the gaps between colorful squares. The screenshot below shows an almost completed game.

If we wanted to make our own random levels of Blendoku, how would we do it? I haven’t rebuilt the entire game (yet), but I have figured out how to make the underlying color gradients. Here’s how to do it in R.

Blendoku is played on an n × m grid that begins with several key cells already filled with colors. The objective of the game is to fill the remaining cells with the remaining provided colors. If we want to generate a random grid of colors that blend the way Blendoku grids do, we’ll need some matrix algebra. First, select random cells from your grid and assign these cells random colors. Then, calculate D, the matrix of Euclidean distances between every empty cell and every colored cell; each element of D is the distance from one empty cell to one of the starting colors. Next, calculate W=1/D, the inverse weights matrix, and row standardize it so that every row sums to one. Finally, the missing color values are computed by matrix multiplying the inverse weights matrix by the vector of colors we started with. The end result is a color value for every grid cell that is a spatially weighted average of the random colors we started with.
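The steps above can be sketched on a tiny example: a one-dimensional strip of five cells with two seed colors and a single color channel. (The positions and values here are made up purely for illustration.)

```r
# Two seed cells on a 1-D strip of 5 cells, one color channel (red)
seed.pos <- c(1, 5)      # positions of the known cells
seed.red <- c(0.0, 1.0)  # their red values
miss.pos <- c(2, 3, 4)   # positions to fill in

# D: distance from every missing cell to every seed cell
D <- abs(outer(miss.pos, seed.pos, "-"))

# W = 1/D, then row-standardize so each row sums to one
W <- 1 / D
W <- W / rowSums(W)

# Fill in the missing reds as a weighted average of the seed reds
miss.red <- as.vector(W %*% seed.red)
miss.red  # 0.25 0.50 0.75 -- a linear blend from one seed to the other
```

With only two seeds in one dimension, the result is exactly the straight-line gradient you would expect; in two dimensions and three color channels the same multiplication happens once per channel.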

In the code provided here, the process is a little more complex because we are working in three-dimensional color space, and I have made a few tweaks to the algorithm to minimize its (surprisingly heavy) memory and processor usage.

It is fun to experiment with the inverse weights matrix. In particular, if we exponentiate D (the distance matrix) before taking its inverse, we can control the “speed” of blending between colors. If we let W=1/D, the blending is just a linear function of distance. If W=1/D^0=1, then every cell is weighted equally (with a value of 1) regardless of distance, so all of the colors we fill in will be the same – just the average of the given random colors we started with. This can be seen in the center of the image below. If W=1/(D^2), then nearby cells are weighted much more heavily than faraway cells, which makes the transitions between colors more distinct and dramatic. In fact, something extremely cool happens as the exponent increases: the resulting grid approximates a Voronoi diagram. Finally, if the value of the exponent is negative, then the spatial weights become bigger the further you get from a color. This means that if there is a blue cell in the upper left corner, its influence will be felt most strongly in the lower right corner. Check out the top left square in the image below to see how this works. I’ve tried to illustrate many of these different characteristics below. The image appears smoother because I’ve chosen to use a 50 × 50 grid rather than the 10 × 10 from above.
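To see the effect of the exponent numerically, consider a single missing cell at distances 1 and 3 from two seed colors. The `weights` helper below is my own shorthand for the exponentiate-invert-standardize step:

```r
# Row-standardized inverse-distance weights for one missing cell
# at distances 1 and 3 from two seed cells.
d <- c(1, 3)
weights <- function(d, p) { w <- 1 / (d^p); w / sum(w) }

weights(d, 0)   # 0.50 0.50 -> every seed counts equally (flat average)
weights(d, 1)   # 0.75 0.25 -> linear falloff with distance
weights(d, 2)   # 0.90 0.10 -> nearer seed dominates (sharper transitions)
weights(d, -1)  # 0.25 0.75 -> farther seed dominates (inverted blending)
```

Each row of numbers is the share of the final color contributed by the near and far seed, so you can read the “speed” of blending straight off the weights.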

Note: To run the code below, you will need R, a free and excellent program for statistical computing, along with the fields and reshape2 packages.

```r
library(fields)    # rdist() for fast distance matrices
library(reshape2)  # acast() for reshaping long data into matrices

# Draw n random colors and place them in random cells of a row.n x col.n grid
random.colors <- function(row.n, col.n, n)
{
  random.r <- sample(0:255, n, replace=TRUE)/255
  random.g <- sample(0:255, n, replace=TRUE)/255
  random.b <- sample(0:255, n, replace=TRUE)/255
  random.row <- sample(1:row.n, n, replace=TRUE)
  random.col <- sample(1:col.n, n, replace=TRUE)

  # Drop any colors that landed in the same cell
  dedup <- !duplicated(paste(random.row, random.col, sep=" - "))

  starting.colors <- data.frame(row=random.row, col=random.col, red=random.r, green=random.g, blue=random.b)
  starting.colors <- starting.colors[dedup,]
  starting.colors$color <- rgb(red=starting.colors$red, green=starting.colors$green, blue=starting.colors$blue)

  return(starting.colors)
}

random.picture <- function(row.n=10, col.n=10, starting.colors, exponent=1, interpolate=FALSE, highlight=FALSE)
{
  # Matrix holding the starting colors in their grid positions (NA elsewhere)
  starting.matrix <- data.frame(row=rep(1:row.n, times=col.n), col=rep(1:col.n, each=row.n))
  starting.matrix <- merge(starting.matrix, starting.colors[,c("row","col","color")], by=c("row","col"), all.x=TRUE, all.y=FALSE)
  starting.matrix <- acast(starting.matrix, row~col, value.var="color")

  # All grid cells except the ones already holding a starting color
  image <- data.frame(row=rep(1:row.n, times=col.n), col=rep(1:col.n, each=row.n))
  image <- image[!paste(image$row, image$col, sep="-") %in% paste(starting.colors$row, starting.colors$col, sep="-"),]

  # Row-standardized inverse-distance weights from each empty cell to each color
  dist.matrix <- as.matrix(rdist(image[,c("col","row")], starting.colors[,c("col","row")]))
  dist.matrix <- 1/(dist.matrix^exponent)
  dist.matrix <- dist.matrix/rowSums(dist.matrix)

  # Each channel of a missing cell is a weighted average of the starting colors
  image$red.new <- (dist.matrix %*% starting.colors$red)
  image$green.new <- (dist.matrix %*% starting.colors$green)
  image$blue.new <- (dist.matrix %*% starting.colors$blue)

  image$color <- rgb(red=image$red.new, green=image$green.new, blue=image$blue.new, alpha=1, maxColorValue=1)
  image.matrix <- acast(image, row~col, value.var="color")

  # Plot the interpolated cells, then overlay the starting colors
  par(mar=c(0,0,0,0))
  plot(0, 0, type="n", xlim=c(0,col.n), ylim=c(0,row.n), xlab="", ylab="", xaxt="n", yaxt="n", frame=FALSE)
  rasterImage(image.matrix, xleft=0, ybottom=row.n, xright=col.n, ytop=0, interpolate=interpolate)
  rasterImage(starting.matrix, xleft=0, ybottom=row.n, xright=col.n, ytop=0, interpolate=FALSE)

  # Optionally outline the starting cells with a contrasting border
  if(highlight)
  {
    starting.colors$border <- ifelse((0.299*starting.colors$red + 0.587*starting.colors$green + 0.114*starting.colors$blue) > 0.5, "#000000", "#FFFFFF")
    rect(starting.colors$col-1, starting.colors$row-1, starting.colors$col, starting.colors$row, col=NA, border=starting.colors$border)
  }
}

row.n <- 10
col.n <- 10
n.col <- 4
my.colors <- random.colors(row.n, col.n, n.col)
random.picture(row.n=row.n, col.n=col.n, starting.colors=my.colors, exponent=2, interpolate=FALSE, highlight=TRUE)
```
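The Voronoi behavior mentioned above is easy to verify in isolation: as the exponent grows, the row-standardized weights for a cell collapse onto its single nearest seed, which is exactly a Voronoi assignment. A small self-contained check (the seed positions here are arbitrary):

```r
# Two seed cells at (1,1) and (9,9), and one missing cell at (3,4)
seeds <- matrix(c(1, 1,
                  9, 9), ncol=2, byrow=TRUE)
cell <- c(3, 4)

# Euclidean distance from the missing cell to each seed
d <- sqrt(colSums((t(seeds) - cell)^2))

# Inverse-distance weights with a very large exponent
w <- 1 / (d^50)
w <- w / sum(w)

round(w, 6)                    # 1 0 -- essentially all weight on the nearest seed
which.max(w) == which.min(d)   # TRUE
```

With exponent 50, the nearer seed receives virtually all of the weight, so every cell simply takes the color of its closest starting cell and the grid partitions into Voronoi regions.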

Benjamin Radford is a data scientist and political scientist. He received his Ph.D. in Political Science from Duke University where he studied security, peace, & conflict and political methodology. He specializes in data science, cybersecurity, political forecasting, and arms proliferation. He is currently a Principal Data Scientist with Sotera Defense Solutions.