@themanual4am What happens when an NCA sees a novel image? Here we see a big difference from the original paper. In that one, the system had to be trained from scratch to grow each target image, starting from a single seed cell. In this new system you could do the same thing, but much faster: the genetic language provides an optimized search space. As long as the new emoji is expressible in that language, a matching genome should be quick to find.
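To make that concrete, here's a minimal sketch of what "search in the genetic language" could look like. Everything here is an assumption for illustration: `decode` is a stand-in linear map (in the real system it would be the trained NCA grown to convergence), `GENOME_DIM` is an arbitrary size, and the search is a simple (1+1) hill climb rather than whatever optimizer the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)
GENOME_DIM = 16  # assumed size of the genetic code

# Hypothetical decoder: genome -> image. In the real system this is the
# trained NCA run to convergence; here a fixed random linear map.
W_dec = rng.normal(0, 0.1, size=(8 * 8, GENOME_DIM))

def decode(genome):
    return np.tanh(W_dec @ genome).reshape(8, 8)

# A target that is expressible in the genetic language by construction,
# mirroring the "as long as the emoji is expressible" condition.
target = decode(rng.normal(size=GENOME_DIM))

def loss(genome):
    return float(np.mean((decode(genome) - target) ** 2))

# (1+1) hill climb in genome space: mutate, keep the mutant if it's better.
g = np.zeros(GENOME_DIM)
best = loss(g)
for _ in range(2000):
    cand = g + rng.normal(0, 0.1, size=GENOME_DIM)
    cand_loss = loss(cand)
    if cand_loss < best:
        g, best = cand, cand_loss
```

The point of the sketch is that the search variable is the short genome vector, not the NCA's weights, which is why finding a new expressible image can be fast.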
However, the authors of the new paper went one step further: they added an extra neural network that predicts the genetic encoding of any image. That isn't strictly necessary, but it means you can take a novel target image and immediately produce a genetic representation that approximately reproduces it, with no additional training.
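The shape of that idea, amortized inference, can be sketched like this. All the specifics are placeholders: `encode` is a single linear layer standing in for the trained predictor network, `nca_step` is a toy genome-conditioned cellular update, and the dimensions are made up; none of this is the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
GENOME_DIM = 16   # assumed latent "genetic" code size
CHANNELS = 8      # assumed NCA state channels
H = W = 32

# Hypothetical encoder: maps a target image straight to a genome vector
# in one forward pass, instead of optimizing per image.
W_enc = rng.normal(0, 0.01, size=(GENOME_DIM, H * W * 3))

def encode(image):
    """Amortized inference: image -> approximate genome, no search loop."""
    return np.tanh(W_enc @ image.reshape(-1))

# Hypothetical genome-conditioned NCA update: each cell updates from its
# 3x3 neighborhood plus the genome vector broadcast to every cell.
W_upd = rng.normal(0, 0.01, size=(CHANNELS, CHANNELS * 9 + GENOME_DIM))

def nca_step(state, genome):
    h, w, _ = state.shape
    padded = np.pad(state, ((1, 1), (1, 1), (0, 0)))
    new = np.empty_like(state)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3, :].reshape(-1)
            new[i, j] = state[i, j] + W_upd @ np.concatenate([patch, genome])
    return new

# Usage: novel image -> genome immediately, then grow from a single seed cell.
novel_image = rng.random((H, W, 3))
genome = encode(novel_image)
state = np.zeros((H, W, CHANNELS))
state[H // 2, W // 2, :] = 1.0  # single seed cell
for _ in range(5):
    state = nca_step(state, genome)
```

The trade made here is the usual amortization trade: the encoder's one-shot genome is only approximate, but it skips the per-image search entirely (or gives that search a good starting point).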
I think if there’s any further work to be done here, it probably lies in exploring how this design change affects learnability and evolutionary dynamics. Perhaps I can think of a good way to do that.