Sunday, September 11, 2016

No, really, why does word2vec work?

There are a lot of notes out there about why word2vec works. Here are a few from the first page of Google:
How does word2vec work? on Quora
Why word2vec works on Andy's Blog
How exactly does word2vec work?
Making sense of word2vec by Radim Rehurek

I've read some of these pages, and they've all been helpful in their own way, but I feel like they don't really get at the heart of it. They all say that word2vec is learning the relationships between words, and then show the math that means it is maximizing a function that does that. Which is true in its way, but it doesn't satisfy my need for an explanation I really understand.
What I would like to do is start with some simple principles, show that they imply the analogy-finding capability of word2vec, and show an easy test for when it will break down.
You only need to accept one premise: that the process of assigning word vectors assigns similar words to similar vectors.
High dimensional vectors behave differently than you might expect. There are three million words embedded in Mikolov's Google News word2vec space, and three million is a very tiny number compared to the number of possible locations in 300 dimensions. Because of this, there is plenty of room for a vector to be close to several different clusters at the same time.
This certainly works for some kinds of words. For example, here are the nearest neighbors of the word 'royal':

'royal', 'royals', 'monarch', 'prince', 'princes', 'Prince Charles', 'monarchy', 'palace', 'Windsors', 'commoner', 'Mrs Parker Bowles', 'queen', 'commoners', 'Camilla', 'fiance Kate Middleton', 'Queen Elizabeth II', 'monarchs', 'royal palace', 'princess', 'Queen Consort'

Clearly it has mapped several words that have to do with royalty together. There are similar clusters of terms having to do with male, female, and person, though it isn't quite as easy to find them as just searching for words near to the word 'male' or 'female' or 'person'.
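For concreteness, here's roughly how a neighbor list like that can be pulled up. This is just a sketch: it assumes the embedding has already been loaded into a matrix vecs (one unit-length row per word) and a matching cell array words, which are my own variable names, not anything from the released word2vec tools.

% Nearest neighbors by cosine similarity.
% Assumes: vecs is an N x 300 matrix of unit-length row vectors,
%          words is an N x 1 cell array of the corresponding strings.
q = vecs(strcmp(words, 'royal'), :);   % look up the query vector
sims = vecs * q';                      % cosine similarity to every word at once
[~, order] = sort(sims, 'descend');
neighbors = words(order(2:21))         % top 20, skipping the word itself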

Because this is a vector space, the words near the average of two vectors a and b will be nearly the same as the intersection of the set of words near a and the set of words near b. If you picture a Venn diagram, the average of a and b will fall into the overlap of the circle surrounding a and the circle surrounding b.
This is all we need to show that word2vec works. Let's call royal r, female f, male m, and person p.

The word 'king' is in the intersection of 'royal' and 'male', so it is approximately r + m. (I should mention that I've normalized the vectors, so taking the sum is essentially the same as taking the average.) 'queen' is close to r + f, 'man' is close to m + p, and 'woman' is close to f + p.
Putting it like that, the analogical property just falls out:

king + woman - man = queen


( r + m ) + ( f + p ) - ( m + p ) = ( r + f )
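You can check this directly in the space. Same assumptions as the sketch above (unit-length rows in vecs, a matching cell array words):

% king - man + woman, then look for the nearest words to the result.
lookup = @(w) vecs(strcmp(words, w), :);
target = lookup('king') - lookup('man') + lookup('woman');
target = target / norm(target);    % renormalize the combination
[~, order] = sort(vecs * target', 'descend');
words(order(1:5))   % 'queen' should rank high; the input words usually rank
                    % high too, which is why analogy tools normally exclude them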

This also tells us an easy way to tell when this property will totally fail. For example, I think this is a pretty obvious analogy:

blueberry:blue jay::strawberry:cardinal

If we try this in word2vec, we get

'blue jays', 'red bellied woodpecker', 'grackle', 'ovenbird', 'downy woodpecker', 'indigo bunting', 'tufted titmouse', 'Carolina wren', 'chickadee', 'nuthatch', 'rose breasted grackle', 'bluejays', 'raccoon', 'spruce grouse', 'robin'

It seems to have picked up that we are looking for a bird, but it has missed the idea that we are looking for a red bird. So what went wrong?

Let's break it down like we did above. Call red r, blue b, fruit f, and songbird s. The equation is
( b + s ) + ( r + f ) - ( b + f ) = ( r + s )

We can easily find clusters of fruit and songbirds (terms near those words are close enough). But what about red things or blue things? Is there some cluster which contains firetrucks, strawberries, and cardinals? Probably not; at least, not a great cluster. Their color just isn't that relevant to the way newspapers talk about those things. (If we trained word2vec on books for toddlers, that might be different.) Because red things aren't clustered together in word2vec, the analogy fails.
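One way to make that check concrete is to take a handful of words that share only the property you care about and measure how close they actually are to each other. Here's a sketch, under the same assumptions as before; the particular word choices are just my guesses at what's in the vocabulary.

% Do "red things" hang together the way fruits do?
% Mean pairwise cosine similarity within each set (unit-length rows assumed).
lookup = @(w) vecs(strcmp(words, w), :);
sets = {{'strawberry', 'cardinal', 'firetruck'}, ...   % things that are red
        {'strawberry', 'blueberry', 'raspberry'}};     % fruit
for k = 1:numel(sets)
    M = cell2mat(cellfun(lookup, sets{k}', 'UniformOutput', false));
    C = M * M';                                        % pairwise cosines
    fprintf('%s: %.2f\n', strjoin(sets{k}, ', '), mean(C(~eye(size(C, 1)))));
end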

This doesn't mean this word2vec space is useless for this analogy, though. Suppose we used a dictionary to look at the different senses of words. One sense of cardinal would be a 'Catholic dignitary', while another would be 'red bird.' If we averaged those terms to get new vectors for 'cardinal: sense 1' and 'cardinal: sense 2,' and did the same for strawberries and blueberries and blue jays, we could engineer a version of word2vec that would be able to solve the analogy. It's adding information in by hand rather than learning it from scratch, which some would call cheating, but I just call it efficiently mining the corpus consisting of the dictionary.
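Here's the shape of what I mean, with the glosses typed in by hand (the definition words below are mine, not taken from any particular dictionary):

% Build a separate vector for each dictionary sense of 'cardinal' by
% averaging the vectors of the words in its gloss, then renormalizing.
lookup = @(w) vecs(strcmp(words, w), :);
gloss1 = {'Catholic'; 'dignitary'};   % sense 1
gloss2 = {'red'; 'bird'};             % sense 2
sense1 = mean(cell2mat(cellfun(lookup, gloss1, 'UniformOutput', false)), 1);
sense2 = mean(cell2mat(cellfun(lookup, gloss2, 'UniformOutput', false)), 1);
sense1 = sense1 / norm(sense1);       % keep everything unit length
sense2 = sense2 / norm(sense2);
% sense2 can now stand in for 'cardinal' whenever the analogy calls for the bird.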

Monday, August 1, 2016

No Man's Sky

There's been a lot of hype over a videogame about to be released called No Man's Sky. The game is set in "a procedurally generated deterministic open universe, which includes over 18 quintillion (1.8×10^19) planets."
This is what I refer to in Machinamenta as the kaleidoscope pattern. They have come up with a limited set of ways that planets can vary, and then counted all the variations as unique creations. But there is going to be little joy in discovering new combinations of those patterns, once the patterns themselves are recognized from a few examples. I understand the appeal! But ultimately I think people will be disappointed once they realize that the variation is limited, as it must be for this kind of universe.
I do like that they are using Lindenmayer systems for the plants! It's the difference between producing new sentences with a grammar versus producing them with mad-libs.
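If you haven't run into L-systems before: every symbol in a string is rewritten in parallel by a fixed rule each generation, so a tiny grammar unfolds into an elaborate branching structure. A toy sketch, with a rule chosen only for illustration:

% Toy L-system: every symbol is rewritten simultaneously each generation.
rules = containers.Map({'F'}, {'F[+F]F[-F]F'});   % a classic plant-like rule
s = 'F';
for generation = 1:3
    out = '';
    for c = s                                     % rewrite each symbol in parallel
        if isKey(rules, c)
            out = [out rules(c)];                 %#ok<AGROW>
        else
            out = [out c];                        %#ok<AGROW>
        end
    end
    s = out;
end
disp(s)   % the string a renderer would then turn into branches and leaves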

How could a future game do better? Here are a few suggestions:
All of which are too computationally intensive for a game right now.

Thursday, June 2, 2016

Generated Kanji

This is for another project I'm working on with my brother David. I don't want to go into details yet; this is just a teaser.

wordplay utilities

I wrote a couple of wordplay programs. One generates an acrostic for a particular word and a rough theme, and the other generates rhyming pairs of words on a given theme. They make use of distributional semantic vector spaces, of course.

Here are examples of the rhymes it comes up with. These are chosen by hand from a list of about 30 candidates:
cowboy: colorado desperado
llama: coat goat
Star_Wars: groovy movie, halloween onscreen, iconic hypersonic, cute reboot, droid overjoyed, etc...
friar: yeast priest, barbarian seminarian
spaceship: moon balloon
pillow: head bed
trampoline: elastic gymnastic
rocking chair: knitter sitter
weed: vegetation eradication
novel: fiction depiction
juice: dilute fruit
cookie: naughty biscotti
Oreo: black snack
orca: dalmation cetacean
Mad Max: futuristic sadistic
oven: boiler broiler, roaster toaster
clock: chimes times
stopwatch: clocks jocks, elapse laps
dragon: wizard lizard
Lost_Ark: horror explorer
brain: logical neurological
Lincoln: desegregation oration
Beatles: guitars superstars
honey: bees cheese
flower: bloom perfume, frilly lily
Clinton: she nominee
sun: skylight twilight
Bollywood: Hindi indie, Delhi telly
An interesting question to explore is whether different people prefer the same choices, and what exactly it is that separates the preferred choices from the rejected ones. Probably the most important thing is that both words contribute to a true description-- there were plenty of other "bee" rhymes it came up with for honey, but only the one for a food product produced by an animal seemed like a good fit. That's the minimum to be acceptable, but wittiness goes farther-- it requires some non-obvious insight.

I used the CMU pronunciation dictionary for the rhyming patterns. It has a few oddities (sci rhymes with flee?) but overall I was very pleased with it.
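The overall recipe is simple enough to sketch. This assumes the vecs/words variables from the word2vec post above, plus a containers.Map called rhymekey that maps each word to its final stressed vowel and everything after it (built from the CMU dictionary); the variable names and the scoring are stand-ins, not the exact program.

% Rhyming pairs on a theme:
% 1. take the words most similar to the theme vector,
% 2. keep pairs whose CMU-derived rhyme keys match,
% 3. rank pairs by how related both words are to the theme.
theme = 'spaceship';
sims = vecs * vecs(strcmp(words, theme), :)';
[~, order] = sort(sims, 'descend');
cand = words(order(2:501));                    % 500 nearest candidates
pairs = {}; scores = [];
for i = 1:numel(cand)
    for j = i+1:numel(cand)
        if isKey(rhymekey, cand{i}) && isKey(rhymekey, cand{j}) && ...
                strcmp(rhymekey(cand{i}), rhymekey(cand{j}))
            pairs{end+1}  = [cand{i} ' ' cand{j}];                 %#ok<AGROW>
            scores(end+1) = sims(order(i+1)) + sims(order(j+1));   %#ok<AGROW>
        end
    end
end
[~, best] = sort(scores, 'descend');
pairs(best(1:min(30, numel(best))))            % the shortlist to pick from by hand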

Wednesday, February 17, 2016

try out deep learning style transfer

Here's a webpage where you can submit your own image and style reference, and see what the deep learning algorithm comes up with.

This was a style I particularly liked:

I wonder what would happen if you applied the same principle to text. Could you rephrase a paragraph from your textbook in the style of Shakespeare or Hemingway?

Saturday, June 27, 2015

The Deck of Queens

In The Magicians by Lev Grossman, Quentin discovers he has the ability to perform real magic:

"Now the room was absolutely still. Dean Fogg sat as if he were frozen in place. All the hairs were standing up on Quentin’s arms, but he knew what he was doing. His fingers left almost imperceptible phosphorescent trails behind them in the air. He definitely felt high. He leaned forward and blew lightly on the card house, and it collapsed back down into a neatly stacked deck. He turned the deck over and fanned it out on the table like a blackjack dealer. Every card was a Queen—all the standard suits, plus other suits that didn’t exist, in different colors, green and yellow and blue. The Queen of Horns, the Queen of Clocks, the Queen of Bees, the Queen of Books."

Something about this passage struck me. I have a theory that these each correspond to particular women that Quentin meets through the book.  "Some of them had Julia’s face. Some of them had the lovely paramedic’s." So Julia, Jane Chatwin, Alice and Janet, maybe, for the four suits that are named? Jane would be queen of clocks, there was a horn given by the river nymph, and bees for Brakebills, of course... I'm not sure if there is an exact match. But of course when I first read it, it seemed only evocative and mysterious.
There were a lot of magic things in the book I wished I could see in real life, but this one, at least, I could make a decent replica of.
I needed to start with a card template. I found several large images of cards in Google Images, but I wanted even higher resolution so I found this vector file:

I removed the parts of the image I wanted to replace.

I had previously experimented with automatically downloading images from Google Images using Matlab and a search term. It wasn't hard to get a few hundred photographs including the faces of women this way. Matlab isn't the best tool for a lot of things, but I find I can do most things I want in it and it's just so much easier to not worry about compiling and the user interface and the command line. I really like always being able to inspect what variables are in memory. It has the ability to save and load data structures whole, which I've never found in another language. And it's easy to do image processing there. So anyway, I used it for this project.
I had one face detection method already installed in Matlab:
 It was pretty slow-- about a minute per face-- but it worked really well at localizing the face. I cropped the face and resized it to fit onto the space on the card. You could probably make it faster if you played with the parameters a little.
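For reference, the crop-and-resize step looks something like this with the Computer Vision Toolbox's stock cascade detector standing in for the detector I actually used (so this is a sketch of the idea, not the code I ran):

% Detect a face, crop it, and resize it to fit the card's portrait area.
detector = vision.CascadeObjectDetector();   % stock Viola-Jones face detector
I = imread('photo.jpg');                     % hypothetical input photograph
bbox = step(detector, I);                    % one row [x y w h] per detected face
if ~isempty(bbox)
    face = imcrop(I, bbox(1, :));            % keep the first detection
    face = imresize(face, [180 NaN]);        % height chosen to fit the card
end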
I also had a few techniques available in Matlab for edge and contour detection in images. Since the faces on cards are line drawings, I would need to convert the photographs to line drawings. The results I liked best came from a technique I had come up with myself that involves subtracting a blurred version of the image from itself (to leave just the high-frequency information) and thresholding. If you've ever tried the old pencil-sketch action on Photoshop, it's a little like that.
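Pulled out on its own, that technique is only a few lines; essentially the same computation appears, commented out, in the listing at the end of this post. Here, face is the cropped face from the previous step.

% Turn a cropped face into a rough line drawing: subtract a blurred copy
% to keep only the high-frequency detail, then rescale and threshold it.
grayI   = medfilt2(rgb2gray(face), [3 3]);                  % light denoising first
H       = fspecial('gaussian', 40, 25);
blurred = imfilter(grayI, H, 'replicate');
sketch  = min(double(grayI) - double(blurred) + 255, 255);
bwface  = 1 - max(min((sketch - 200) * 5, 255), 1) / 255;   % dark lines end up near 1
imshow(bwface)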
To generate the symbols I thought of going to Google Images again and restricting it to black and white line art, but then I thought of a better idea: I would use Unicode symbols.
It turns out some of the best Unicode symbols require more than one byte to represent, which confuses Matlab, but one can work around it using the techniques described in the comments of this post:
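The short version of the workaround: MATLAB's char type holds UTF-16 code units, so a code point above U+FFFF has to be split into a surrogate pair by hand. A minimal sketch (not the exact code from that post):

% Encode a code point above U+FFFF as a UTF-16 surrogate pair, which is
% what MATLAB's char arrays actually store.
cp   = hex2dec('1F426') - hex2dec('10000');    % U+1F426 (a bird) as an example
pair = char([hex2dec('D800') + floor(cp / 1024), ...
             hex2dec('DC00') + mod(cp, 1024)]);
figure
text(0.5, 0.5, pair, 'fontname', 'code2000', 'fontsize', 40)
% (it only renders if the chosen font actually contains the glyph)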
I did need to go through and pick out areas of the Unicode space that had interesting images. As I was doing that I accidentally hacked into the Matrix:
After I had the Unicode characters and could render them in Matlab, I needed to place them in the right spots on the card. You may have never noticed, but the clothes on a face card contain decorative patterns based on the suit. So I decorated the clothes with tiny versions of the Unicode characters. The flower the queen is holding is also made up of these symbols, rotated.
Of course, I had to erase the original decorations on the clothing first. I also recolored the suit randomly for each image.
You might notice that the card is not exactly rotationally symmetric. That was originally a bug, but I kind of liked the effect so I left it in.
Here are some results. It can generate at least a few hundred cards before it runs out of Unicode pictures. Sometimes the face detection went wrong, so I just threw those cards away. There were more than 52 left over, anyway. This is just a few of them. You can click to see them larger.

I thought the project might be improved if one measured semantic similarity between the symbols and the faces and tried to match them up, but I didn't do that. It would also be nice to have different clothing styles, but it was too much work.
I printed them out and cut them out and gave them to Lev Grossman at a book signing in DC.
Here is the code to generate them, just so you can see what it looks like. I have terrible coding practices when I know I'm the only person who will ever look at it and I'm not going to reuse any of it. I even used a little high school trig when I put the symbols around the half-ring.

addpath('C:\matlab apps\edges (and piotr toolbox)');
addpath('C:\matlab apps\face-release1.0-basic');
addpath('C:\matlab apps');
imsdir='C:\Users\Doug\Downloads\pics\New folder\';
Aims = dir([imsdir '/*.jpg']);
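% Note: this excerpt leans on variables built elsewhere in the full script and
% not shown here: chrs (the chosen Unicode code points), bwface (the
% pencil-sketch face images), template (the card background), collarchar,
% sleevechar, ringchar and sleevex (decorative symbols and their positions),
% parts2, and a partcount index that is initialized and incremented between
% the plotting calls. The saveas at the end assumes h is the figure handle by
% that point.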

for facecount=1:165
%these are interesting icons.
hold off;
ax = axes('XLim',[0 2.5],'YLim',[0 3.5]);
axis off;
%mark out the borders of clothing
quadx = .4:.01:2.1;
quady=interp1([.24 .25 1 1.5 2.25 2.26],[0 1.75 2.5 2.5 1.75 0],quadx);

axis equal
%parts(61)=fillout([ .25 1 1.5 2.25 ],[1.75 2.5 2.5 1.75],[0 3 0 3],'w');
%draw outline of card
hold on;
%choose color
%draw symbols in corners of card and draw a Q in a random font
h = uicontrol('Style','text','String','My Font','FontName','MyFont');
list = listfonts(h);
parts(partcount)=text(0.15,3.1,'Q','fontname','Card Characters','fontsize',20,'fontUnits','normalized','HorizontalAlignment','center','VerticalAlignment','middle');
parts(partcount)=text(0.15,2.75, char(chrs(facecount)), 'fontname', 'code2000', 'fontsize',20,'fontUnits','normalized','HorizontalAlignment','center','VerticalAlignment','middle');
parts(partcount)=text(0.55,2.9, char(chrs(facecount)), 'fontname', 'code2000', 'fontsize',30,'fontUnits','normalized','HorizontalAlignment','center','VerticalAlignment','middle');
%use line detection and face detection to draw the face

I=imresize(imread([imsdir    '/' Aims(facecount).name]),[600 NaN]);
load('C:\matlab apps\face-release1.0-basic\face-release1.0-basic\multipie_independent.mat');
model.thresh = min(-0.65, model.thresh);
%bs = detect(I, model, model.thresh);
%if size(bs,2)==0
%    continue;
%bs = clipboxes(I, bs);
%bs = nms_face(bs,0.3);
%posemap = [90:-15:15 0 0 0 0 0 0 -15:-15:-90];
%part=imcrop(I,[mina minb maxa-mina maxb-minb]);
%     if size(part,3)==3
%         grayI=medfilt2(rgb2gray(part),[3 3]);
%     else  grayI=medfilt2(part,[3 3]);
%     end
%     H = fspecial('gaussian', 40, 25);
%     blurred=imfilter(grayI,H,'replicate');
%     pencilsketch = min((double(grayI))-double(blurred)+255,255);
%     rescaled=1-max(min((pencilsketch-200)*5,255),1)/255;
    hold on
   parts(partcount)=image([1.1 1.8], [3.2 2.5],bwface{facecount}*255,'AlphaData',bwface{facecount}<1);
colormap gray
%random arcs of symbol across clothing
hold on

image([2.2 .3], [.3 3.2],template,'AlphaData',rgb2gray(template)<255);  
parts(partcount)=text(1.12, 2.25, char(chrs(facecount)), 'fontname', 'code2000', 'fontsize',7,'fontUnits','normalized','Color',[.85 .82 .06],'HorizontalAlignment','center','VerticalAlignment','middle');
parts(partcount)=text(1.11, 2.10, char(chrs(facecount)), 'fontname', 'code2000', 'fontsize',6,'fontUnits','normalized','Color',[.85 .82 .06],'HorizontalAlignment','center','VerticalAlignment','middle');
parts(partcount)=text(1.11, 1.97, char(chrs(facecount)), 'fontname', 'code2000', 'fontsize',5,'fontUnits','normalized','Color',[.85 .82 .06],'HorizontalAlignment','center','VerticalAlignment','middle');

parts(partcount)=text(1.71, 2.25, char(chrs(facecount)), 'fontname', 'code2000', 'fontsize',9,'fontUnits','normalized','Color',[.85 .82 .06],'HorizontalAlignment','center','VerticalAlignment','middle');
parts(partcount)=text(1.51, 2.10, char(chrs(facecount)), 'fontname', 'code2000', 'fontsize',8,'fontUnits','normalized','Color',[.85 .82 .06],'HorizontalAlignment','center','VerticalAlignment','middle');
parts(partcount)=text(1.42, 1.97, char(chrs(facecount)), 'fontname', 'code2000', 'fontsize',7,'fontUnits','normalized','Color',[.85 .82 .06],'HorizontalAlignment','center','VerticalAlignment','middle');

parts(partcount)=text(1.90, 3.08, char(chrs(facecount)), 'fontname', 'code2000', 'fontsize',9,'fontUnits','normalized','Color',[0 0 0],'HorizontalAlignment','center','VerticalAlignment','middle');
parts(partcount)=text(2.0, 2.92, char(chrs(facecount)), 'fontname', 'code2000', 'fontsize',7,'fontUnits','normalized','Color',[0 0 0],'HorizontalAlignment','center','VerticalAlignment','middle');
parts(partcount)=text(2.11, 2.75, char(chrs(facecount)), 'fontname', 'code2000', 'fontsize',6,'fontUnits','normalized','Color',[0 0 0],'HorizontalAlignment','center','VerticalAlignment','middle');
set(parts(partcount-2:partcount), 'rotation', -60);

%draw collar
for collarx=1.03:.08:1.8;
    parts(partcount)=text(collarx, 2.429, collarchar, 'fontname', 'code2000', 'fontsize',9,'fontUnits','normalized','Color',[.95 .95 .95],'HorizontalAlignment','center','VerticalAlignment','middle');
end   % collar loop

% draw sleeve
sleevey=interp1([.3200 .8197],[1.64 2.045],sleevex);
for sleeveindex=1:size(sleevex,2)
   parts(partcount)=text(sleevex(sleeveindex), sleevey(sleeveindex), sleevechar, 'fontname', 'code2000', 'fontsize',9,'fontUnits','normalized','Color',[.95 .95 .95],'HorizontalAlignment','center','VerticalAlignment','middle');
end   % sleeve loop

%draw ring
for ringtheta=-pi/4+.08:.2:3*pi/4+.08
   parts(partcount)=text(.66+.31*cos(ringtheta),1.75-.31*sin(ringtheta), ringchar, 'fontname', 'code2000', 'fontsize',9,'fontUnits','normalized','Color',[.95 .95 .95],'HorizontalAlignment','center','VerticalAlignment','middle');
end   % ring loop

for petal=1:6
    parts(partcount)=text(.60+.1*cos(petal*2*pi/6), 2.44+.1*sin(petal*2*pi/6), char(chrs(facecount)), 'fontname', 'code2000', 'fontsize',9,'fontUnits','normalized','Color',[0 0 0],'HorizontalAlignment','center','VerticalAlignment','middle');
    set(parts(partcount), 'rotation', petal*60+90);
end   % petal loop
%paste in hand and flower stem
%parts(68)=image([.5 .875], [2.5 2.125],hand,'AlphaData',255-hand(:,:,1));
%place symbols around flower
set(findobj(gcf, 'type','axes'), 'Visible','off')
t1 = hgtransform('Parent',ax);
t2 = hgtransform('Parent',ax);
Txy = makehgtform('zrotate',pi,'translate',[-2.5 -3.5 0]);
 set(parts2(3:5), 'rotation', 180); 
 axis off;
 saveas(h,['queen' num2str(facecount) '.png']);
end   % facecount loop