I remember (not long ago!) you couldn’t make images with coherent text or recognizable shapes! You’ve come so far! What do you credit your progress to?
I was reading this blog post where the author had a bunch of pictures with “text” made up of pixels. He used a speech synthesis engine to put “spoken” words into these pictures, and it made them much more intelligible than I expected.
At the time of the post, I had seen a bunch of pictures like this: http://nostalgebraist.info/images/text_generator_b.png
I think what I’m trying to say is that it makes me optimistic about neural networks generally, and text generation especially.
My problem with the text generator is that it’s not very consistent in its output. Even for the same input, it sometimes outputs one word, sometimes several, and for some things it outputs nothing.
That makes it hard to trust that it’s actually writing the text – if you want to trust it, you have to know what it has actually written. If you don’t know what it’s written, it’s hard to tell whether the problem is with the generator or with the human-like impression that it has generated the text.
This is probably more of a programming problem than a deep learning one, which is the domain I originally had in mind when writing that post.
If the problem is with my own machine, then my machine is really bad at generating text, because it cannot produce this output:
(That’s from the “one” in “One is not the An” text)
And this output:
(This is from the “one” in “Women are not one” text)
I know there are some things that will generate the text in the second picture. One in particular is the word “women.” But there’s no reason to expect that word at all. One is not the An!
frank, that bitch has three noses.
I think this is the case, yeah, but it’s just a coincidence. (The word “women” is a bad example of what I’m trying to do here, because “one” and “women” look very similar.)