
word2vec: sum or average word embeddings?

I am using word2vec to represent a small phrase (3 to 4 words) as a single vector, either by adding the embedding of each word or by computing the average of the word embeddings.

From the experiments I have run, both approaches always yield the same cosine similarity. I suspect this has to do with the word vectors produced by word2vec being normalized to unit length (Euclidean norm) after training? Or I have a bug in the code, or I am missing something.
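One quick way to test the unit-length hypothesis is to print the norms of a few raw word vectors directly. A minimal sketch, assuming the same model file and the old gensim API used in the code below:

from gensim.models import Word2Vec
from numpy.linalg import norm

word2vec = Word2Vec.load_word2vec_format("/data/word2vec/vectors_200.bin", binary=True)
# raw word2vec vectors are in general NOT unit length
for w in ["founder", "ceo", "chairman"]:
    print w, norm(word2vec[w])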

Here is the code:

import numpy as np
from nltk import PunktWordTokenizer
from gensim.models import Word2Vec
from numpy.linalg import norm

def pattern2vector(tokens, word2vec, AVG=False):
    # Build a phrase vector by summing (or, with AVG=True, averaging)
    # the embeddings of the in-vocabulary tokens.
    pattern_vector = np.zeros(word2vec.layer1_size)
    n_words = 0
    if len(tokens) > 1:
        for t in tokens:
            try:
                vector = word2vec[t.strip()]
                pattern_vector = np.add(pattern_vector, vector)
                n_words += 1
            except KeyError:
                # skip out-of-vocabulary tokens
                continue
        if AVG is True and n_words > 0:  # avoid dividing by zero if all tokens are OOV
            pattern_vector = np.divide(pattern_vector, n_words)
    elif len(tokens) == 1:
        try:
            pattern_vector = word2vec[tokens[0].strip()]
        except KeyError:
            pass
    return pattern_vector


def main():
    print "Loading word2vec model ...\n"
    word2vecmodelpath = "/data/word2vec/vectors_200.bin"
    word2vec = Word2Vec.load_word2vec_format(word2vecmodelpath, binary=True)
    print "Dimensions", word2vec.layer1_size
    pattern_1 = 'founder and ceo'
    pattern_2 = 'co-founder and former chairman'

    tokens_1 = PunktWordTokenizer().tokenize(pattern_1)
    tokens_2 = PunktWordTokenizer().tokenize(pattern_2)
    print "vec1", tokens_1
    print "vec2", tokens_2

    # phrase vectors built by summing the word vectors
    p1 = pattern2vector(tokens_1, word2vec, False)
    p2 = pattern2vector(tokens_2, word2vec, False)
    print "\nSUM"
    print "dot(vec1,vec2)", np.dot(p1, p2)
    print "norm(p1)", norm(p1)
    print "norm(p2)", norm(p2)
    print "dot(norm(vec1),norm(vec2))", np.dot(norm(p1), norm(p2))
    print "cosine(vec1,vec2)", np.divide(np.dot(p1, p2), np.dot(norm(p1), norm(p2)))
    print "\n"
    print "AVG"
    # phrase vectors built by averaging the word vectors
    p1 = pattern2vector(tokens_1, word2vec, True)
    p2 = pattern2vector(tokens_2, word2vec, True)
    print "dot(vec1,vec2)", np.dot(p1, p2)
    print "norm(p1)", norm(p1)
    print "norm(p2)", norm(p2)
    print "dot(norm(vec1),norm(vec2))", np.dot(norm(p1), norm(p2))
    print "cosine(vec1,vec2)", np.divide(np.dot(p1, p2), np.dot(norm(p1), norm(p2)))


if __name__ == "__main__":
    main()

Here is the output:

Loading word2vec model ... 

Dimensions 200 
vec1 ['founder', 'and', 'ceo'] 
vec2 ['co-founder', 'and', 'former', 'chairman'] 

SUM 
dot(vec1,vec2) 5.4008677771 
norm(p1) 2.19382594282 
norm(p2) 2.87226958166 
dot(norm(vec1),norm(vec2)) 6.30125952303 
cosine(vec1,vec2) 0.857109242583 


AVG 
dot(vec1,vec2) 0.450072314758 
norm(p1) 0.731275314273 
norm(p2) 0.718067395416 
dot(norm(vec1),norm(vec2)) 0.525104960252 
cosine(vec1,vec2) 0.857109242583 

I am using cosine similarity as defined here: Cosine Similarity (Wikipedia). The values of the norms and dot products are indeed different.

Can anyone explain why the cosine is the same?

Thanks, David

Answer


Cosine measures the angle between two vectors and does not take the length of either vector into account. When you divide by the length of the phrase (its word count), you are only shortening the vector, not changing its angular position. So your results look correct to me.
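To see why numerically: dividing a phrase vector by its word count only rescales it by a positive constant, and positive scale factors cancel out of the cosine formula cos(a, b) = dot(a, b) / (norm(a) * norm(b)). A minimal sketch with made-up vectors:

import numpy as np
from numpy.linalg import norm

def cos(a, b):
    return np.dot(a, b) / (norm(a) * norm(b))

# two made-up "sum" vectors for phrases of 3 and 4 words
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([2.0, 0.5, 1.0])

print cos(v1, v2)          # cosine of the summed vectors
print cos(v1 / 3, v2 / 4)  # cosine of the averaged vectors: identical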


Thanks for the answer. I found this page, which explains that cosine similarity, Pearson correlation, and the OLS coefficient can all be seen as variants of the inner product, adjusted in different ways for location and scale. http://brenocon.com/blog/2012/03/cosine-similarity-pearson-correlation-and-ols-coefficients/
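For illustration, here is a small sketch (with made-up data) of the relationships that post describes: Pearson correlation is the cosine similarity of the mean-centered vectors, and the slope of a no-intercept OLS fit is the same inner product rescaled by the squared norm of x:

import numpy as np
from numpy.linalg import norm

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 1.0, 4.0, 3.0])

cosine = np.dot(x, y) / (norm(x) * norm(y))

# Pearson correlation = cosine similarity of the centered vectors
xc, yc = x - x.mean(), y - y.mean()
pearson = np.dot(xc, yc) / (norm(xc) * norm(yc))

# no-intercept OLS slope = inner product rescaled by ||x||^2
ols_slope = np.dot(x, y) / np.dot(x, x)

print cosine, pearson, ols_slope
print np.corrcoef(x, y)[0, 1]  # matches the Pearson value above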