Sunday, September 11, 2016
No, really, why does word2vec work?
There are a lot of notes out there about why word2vec works. Here are a few from the first page of Google:
How does word2vec work? on Quora
Why word2vec works on Andy's Blog
How exactly does word2vec work?
Making sense of word2vec by Radim Rehurek
I've read some of these pages, and they've all been helpful in their own way, but I feel like they don't really get at the heart of it. They all say that word2vec is learning the relationships between words, and then show the math for the objective it maximizes to do that. Which is true as far as it goes, but it doesn't satisfy my need for an explanation I really understand.
What I would like to do is start with some simple principles, show that they imply the analogy-finding capability of word2vec, and show an easy test for when it will break down.
You only need to accept one premise: that the process of assigning word vectors assigns words that are similar to similar vectors.
High dimensional vectors behave differently than you might expect. There are three million words embedded in Mikolov's Google News word2vec space, and three million is a very tiny number compared to the number of possible locations in 300 dimensions. Because of this, there is plenty of room for a vector to be close to several different clusters at the same time.
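One way to get a feel for how much room there is: independent random directions in 300 dimensions are almost always nearly orthogonal to each other. Here's a small numpy sketch (the sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
vecs = rng.standard_normal((1000, 300))              # 1000 random 300-d vectors
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)  # unit-normalize

sims = vecs @ vecs.T                                 # all pairwise cosine similarities
off_diag = sims[~np.eye(len(sims), dtype=bool)]
print(off_diag.mean(), off_diag.std())               # mean near 0, std near 1/sqrt(300) ~ 0.06
```

With typical pairwise similarities that close to zero, a single vector can sit reasonably close to several different clusters at once without those clusters being anywhere near each other.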
This certainly works for some kinds of words. For example, here are the nearest neighbors of the word 'royal':
'royal' 'royals' 'monarch' 'prince' 'princes' 'Prince Charles' 'monarchy' 'palace' 'Windsors' 'commoner' 'Mrs Parker Bowles' 'queen' 'commoners' 'Camilla' 'fiance Kate Middleton' 'Queen Elizabeth II' 'monarchs' 'royal palace' 'princess' 'Queen Consort'
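Lists like this are easy to reproduce with gensim, assuming you've downloaded the pretrained Google News vectors; the filename below is the usual one, so adjust the path to wherever your copy lives:

```python
from gensim.models import KeyedVectors

# load Mikolov's pretrained Google News vectors (a few GB on disk)
model = KeyedVectors.load_word2vec_format(
    'GoogleNews-vectors-negative300.bin', binary=True)

# nearest neighbors by cosine similarity
for word, score in model.most_similar('royal', topn=20):
    print(word, round(score, 3))
```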
Clearly it has mapped words that have to do with royalty close together. There are similar clusters of terms having to do with male, female, and person, though finding them isn't quite as simple as searching for the words nearest 'male', 'female', or 'person'.
Because this is a vector space, the words near the average of two vectors a and b will be nearly the same as the intersection of the set of words near a and the set of words near b. If you picture a Venn diagram, the average of a and b will fall into the overlap of the circle surrounding a and the circle surrounding b.
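To see this concretely, here's a small sketch that normalizes two word vectors, averages them, and asks for the words nearest the midpoint (it reuses the `model` object loaded above; the word pair is just an illustration, and the exact output depends on the model):

```python
import numpy as np

def neighbors_of_average(model, word_a, word_b, topn=10):
    # unit-normalize before averaging so neither word dominates
    a = model[word_a] / np.linalg.norm(model[word_a])
    b = model[word_b] / np.linalg.norm(model[word_b])
    return model.similar_by_vector((a + b) / 2, topn=topn)

# words near the midpoint should sit in the overlap of both neighborhoods
print(neighbors_of_average(model, 'royal', 'woman'))
```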
This is all we need to show that word2vec works. Let's call royal r, female f, male m, and person p.
The word 'king' is in the intersection of 'royal' and 'male', so it is approximately r + m. (I should mention that I've normalized the vectors, so taking the sum is essentially the same as taking the average.) 'queen' is close to r + f, 'man' is close to m + p, and 'woman' is close to f + p.
Putting it like that, the analogical property just falls out:
king + woman - man = queen
because
( r + m ) + ( f + p ) - ( m + p ) = ( r + f )
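gensim's `most_similar` does exactly this arithmetic: it adds the positive vectors, subtracts the negative ones, and returns the words nearest the result, excluding the query words themselves. Reusing the `model` from above:

```python
# king + woman - man: nearest neighbors of the resulting vector
print(model.most_similar(positive=['king', 'woman'], negative=['man'], topn=5))
# on the Google News vectors, 'queen' is typically the top result
```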
This also gives us an easy way to tell when the property will fail completely. For example, I think this is a pretty obvious analogy:
blueberry:blue jay::strawberry:cardinal
If we try this in word2vec, we get
'blue jays' 'red bellied woodpecker' 'grackle' 'ovenbird' 'downy woodpecker' 'indigo bunting' 'tufted titmouse' 'Carolina wren' 'chickadee' 'nuthatch' 'rose breasted grackle' 'bluejays' 'raccoon' 'spruce grouse' 'robin'
It seems to have picked up that we are looking for a bird, but it has missed the idea that we are looking for a red bird. So what went wrong?
Let's break it down like we did above. Call red r, blue b, fruit f, and songbird s. The equation is
( b + s ) + ( r + f ) - ( b + f ) = ( r + s )
We can easily find clusters of fruit and of songbirds (terms near those words are close enough). But what about red things or blue things? Is there some cluster that contains firetrucks, strawberries, and cardinals? Probably not; at least, not a good one. Their color just isn't that relevant to the way newspapers talk about those things. (If we trained word2vec on books for toddlers, that might be different.) Because red things aren't clustered together in word2vec, the analogy fails.
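You can sanity-check that claim by comparing the average pairwise similarity of some 'red things' against a cluster we expect to be tight, like fruit. This is just a sketch reusing the `model` from above; the specific tokens are my own guesses, and some may not be in the Google News vocabulary (multi-word phrases are joined with underscores there):

```python
from itertools import combinations

def avg_pairwise_similarity(model, words):
    # mean cosine similarity over every pair of words in the list
    pairs = list(combinations(words, 2))
    return sum(model.similarity(a, b) for a, b in pairs) / len(pairs)

print(avg_pairwise_similarity(model, ['firetruck', 'strawberry', 'cardinal']))   # expect: low
print(avg_pairwise_similarity(model, ['strawberry', 'blueberry', 'raspberry']))  # expect: much higher
```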
This doesn't mean this word2vec space is useless for this analogy, though. Suppose we used a dictionary to look at the different senses of words. One sense of cardinal would be 'Catholic dignitary', while another would be 'red bird'. If we averaged those terms to get new vectors for 'cardinal: sense 1' and 'cardinal: sense 2', and did the same for strawberries, blueberries, and blue jays, we could engineer a version of word2vec that could solve the analogy. That's adding information in by hand rather than learning it from scratch, which some would call cheating, but I just call it efficiently mining the corpus consisting of the dictionary.
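Here's a minimal sketch of what that could look like, again reusing `model`; the glosses below are stand-ins for real dictionary definitions, not anything word2vec itself provides:

```python
import numpy as np

def sense_vector(model, gloss_words):
    # average the normalized vectors of the gloss words that are in the vocabulary
    vecs = [model[w] / np.linalg.norm(model[w]) for w in gloss_words if w in model]
    return np.mean(vecs, axis=0)

cardinal_bird   = sense_vector(model, ['cardinal', 'red', 'songbird'])
cardinal_cleric = sense_vector(model, ['cardinal', 'Catholic', 'dignitary'])

# the two sense vectors should pull toward different neighborhoods
print(model.similar_by_vector(cardinal_bird, topn=10))
print(model.similar_by_vector(cardinal_cleric, topn=10))
```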