Yakkity uses a non-ML algorithm that produces mediocre results. Improving the algorithm is the top priority. An easy win would be to learn the weights it uses to rank candidates; this would require only labeling a few hundred candidates and performing least squares regression. A harder win would be a generative neural network that produces novel mondegreens. I'm not even sure that would be better---the need for nearness to the pronunciation of the original sequence is a fairly tight constraint. But who knows---maybe that constraint is unnecessarily tight and is preventing more creative puzzles? The ability to coin new words would also be nice.
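The "easy win" could look something like this minimal sketch: fit linear ranking weights to human quality labels with least squares. The features (phonetic distance, word frequency, word count) and the toy data are invented for illustration---Yakkity's actual candidate features may differ.

```python
import numpy as np

# Each row is one candidate mondegreen, described by hand-crafted
# features (hypothetical: phonetic distance to the original, average
# word frequency, word count).
X = np.array([
    [0.10, 0.8, 3.0],
    [0.35, 0.5, 4.0],
    [0.05, 0.9, 2.0],
    [0.60, 0.2, 5.0],
])
# Human-assigned quality labels for the same candidates (0 = bad, 1 = great).
y = np.array([0.9, 0.4, 1.0, 0.1])

# Solve min_w ||Xw - y||^2 for the ranking weights.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Score and rank new (here: the same) candidates by the learned weights.
scores = X @ w
ranking = np.argsort(-scores)  # indices from best to worst
```

With a few hundred labeled candidates instead of four, the same two calls (`lstsq`, then a dot product at ranking time) would replace the current hand-tuned weights.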
Second priority: Yakkity visuals. This is something I would probably do only if the algorithm got up to snuff.