
I am working through the following code, which implements a Keras memory network for the bAbI dataset:

  '''Trains a memory network on the bAbI dataset. 
      References: 
      - Jason Weston, Antoine Bordes, Sumit Chopra, Tomas Mikolov, Alexander M. Rush, 
       "Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks", 
       http://arxiv.org/abs/1502.05698 
      - Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, Rob Fergus, 
       "End-To-End Memory Networks", 
       http://arxiv.org/abs/1503.08895 
      Reaches 98.6% accuracy on task 'single_supporting_fact_10k' after 120 epochs. 
      Time per epoch: 3s on CPU (core i7). 
      ''' 
      from __future__ import print_function 

      from keras.models import Sequential, Model 
      from keras.layers.embeddings import Embedding 
      from keras.layers import Input, Activation, Dense, Permute, Dropout, add, dot, concatenate 
      from keras.layers import LSTM 
      from keras.utils.data_utils import get_file 
      from keras.preprocessing.sequence import pad_sequences 
      from functools import reduce 
      import tarfile 
      import numpy as np 
      import re 


      def tokenize(sent): 
       '''Return the tokens of a sentence including punctuation. 
       >>> tokenize('Bob dropped the apple. Where is the apple?') 
       ['Bob', 'dropped', 'the', 'apple', '.', 'Where', 'is', 'the', 'apple', '?'] 
       ''' 
       return [x.strip() for x in re.split(r'(\W+)', sent) if x.strip()]


      def parse_stories(lines, only_supporting=False): 
       '''Parse stories provided in the bAbi tasks format 
       If only_supporting is true, only the sentences 
       that support the answer are kept. 
       ''' 
       data = [] 
       story = [] 
       for line in lines: 
        line = line.decode('utf-8').strip() 
        nid, line = line.split(' ', 1) 
        nid = int(nid) 
        if nid == 1: 
         story = [] 
        if '\t' in line: 
         q, a, supporting = line.split('\t') 
         q = tokenize(q) 
         substory = None 
         if only_supporting: 
          # Only select the related substory 
          supporting = map(int, supporting.split()) 
          substory = [story[i - 1] for i in supporting] 
         else: 
          # Provide all the substories 
          substory = [x for x in story if x] 
         data.append((substory, q, a)) 
         story.append('') 
        else: 
         sent = tokenize(line) 
         story.append(sent) 
       return data 


      def get_stories(f, only_supporting=False, max_length=None): 
       '''Given a file name, read the file, 
       retrieve the stories, 
       and then convert the sentences into a single story. 
       If max_length is supplied, 
       any stories longer than max_length tokens will be discarded. 
       ''' 
       data = parse_stories(f.readlines(), only_supporting=only_supporting) 
       flatten = lambda data: reduce(lambda x, y: x + y, data) 
       data = [(flatten(story), q, answer) for story, q, answer in data if not max_length or len(flatten(story)) < max_length] 
       return data 


      def vectorize_stories(data, word_idx, story_maxlen, query_maxlen): 
       X = [] 
       Xq = [] 
       Y = [] 
       for story, query, answer in data: 
        x = [word_idx[w] for w in story] 
        xq = [word_idx[w] for w in query] 
        # let's not forget that index 0 is reserved 
        y = np.zeros(len(word_idx) + 1) 
        y[word_idx[answer]] = 1 
        X.append(x) 
        Xq.append(xq) 
        Y.append(y) 
       return (pad_sequences(X, maxlen=story_maxlen), 
         pad_sequences(Xq, maxlen=query_maxlen), np.array(Y)) 

      try: 
       path = get_file('babi-tasks-v1-2.tar.gz', origin='https://s3.amazonaws.com/text-datasets/babi_tasks_1-20_v1-2.tar.gz') 
      except: 
       print('Error downloading dataset, please download it manually:\n' 
         '$ wget http://www.thespermwhale.com/jaseweston/babi/tasks_1-20_v1-2.tar.gz\n' 
         '$ mv tasks_1-20_v1-2.tar.gz ~/.keras/datasets/babi-tasks-v1-2.tar.gz') 
       raise 
      tar = tarfile.open(path) 

      challenges = { 
       # QA1 with 10,000 samples 
       'single_supporting_fact_10k': 'tasks_1-20_v1-2/en-10k/qa1_single-supporting-fact_{}.txt', 
       # QA2 with 10,000 samples 
       'two_supporting_facts_10k': 'tasks_1-20_v1-2/en-10k/qa2_two-supporting-facts_{}.txt', 
      } 
      challenge_type = 'single_supporting_fact_10k' 
      challenge = challenges[challenge_type] 

      print('Extracting stories for the challenge:', challenge_type) 
      train_stories = get_stories(tar.extractfile(challenge.format('train'))) 
      test_stories = get_stories(tar.extractfile(challenge.format('test'))) 

      vocab = set() 
      for story, q, answer in train_stories + test_stories: 
       vocab |= set(story + q + [answer]) 
      vocab = sorted(vocab) 

      # Reserve 0 for masking via pad_sequences 
      vocab_size = len(vocab) + 1 
      story_maxlen = max(map(len, (x for x, _, _ in train_stories + test_stories))) 
      query_maxlen = max(map(len, (x for _, x, _ in train_stories + test_stories))) 

      print('-') 
      print('Vocab size:', vocab_size, 'unique words') 
      print('Story max length:', story_maxlen, 'words') 
      print('Query max length:', query_maxlen, 'words') 
      print('Number of training stories:', len(train_stories)) 
      print('Number of test stories:', len(test_stories)) 
      print('-') 
      print('Here\'s what a "story" tuple looks like (input, query, answer):') 
      print(train_stories[0]) 
      print('-') 
      print('Vectorizing the word sequences...') 

      word_idx = dict((c, i + 1) for i, c in enumerate(vocab)) 
      inputs_train, queries_train, answers_train = vectorize_stories(train_stories, 
                      word_idx, 
                      story_maxlen, 
                      query_maxlen) 
      inputs_test, queries_test, answers_test = vectorize_stories(test_stories, 
                     word_idx, 
                     story_maxlen, 
                     query_maxlen) 

      print('-') 
      print('inputs: integer tensor of shape (samples, max_length)') 
      print('inputs_train shape:', inputs_train.shape) 
      print('inputs_test shape:', inputs_test.shape) 
      print('-') 
      print('queries: integer tensor of shape (samples, max_length)') 
      print('queries_train shape:', queries_train.shape) 
      print('queries_test shape:', queries_test.shape) 
      print('-') 
      print('answers: binary (1 or 0) tensor of shape (samples, vocab_size)') 
      print('answers_train shape:', answers_train.shape) 
      print('answers_test shape:', answers_test.shape) 
      print('-') 
      print('Compiling...') 

      # placeholders 
      input_sequence = Input((story_maxlen,)) 
      question = Input((query_maxlen,)) 

      # encoders 
      # embed the input sequence into a sequence of vectors 
      input_encoder_m = Sequential() 
      input_encoder_m.add(Embedding(input_dim=vocab_size, 
              output_dim=64)) 
      input_encoder_m.add(Dropout(0.3)) 
      # output: (samples, story_maxlen, embedding_dim) 

      # embed the input into a sequence of vectors of size query_maxlen 
      input_encoder_c = Sequential() 
      input_encoder_c.add(Embedding(input_dim=vocab_size, 
              output_dim=query_maxlen)) 
      input_encoder_c.add(Dropout(0.3)) 
      # output: (samples, story_maxlen, query_maxlen) 

      # embed the question into a sequence of vectors 
      question_encoder = Sequential() 
      question_encoder.add(Embedding(input_dim=vocab_size, 
              output_dim=64, 
              input_length=query_maxlen)) 
      question_encoder.add(Dropout(0.3)) 
      # output: (samples, query_maxlen, embedding_dim) 

      # encode input sequence and questions (which are indices) 
      # to sequences of dense vectors 
      input_encoded_m = input_encoder_m(input_sequence) 
      input_encoded_c = input_encoder_c(input_sequence) 
      question_encoded = question_encoder(question) 

      # compute a 'match' between the first input vector sequence 
      # and the question vector sequence 
      # shape: `(samples, story_maxlen, query_maxlen)` 
      match = dot([input_encoded_m, question_encoded], axes=(2, 2)) 
      match = Activation('softmax')(match) 

      # add the match matrix with the second input vector sequence 
      response = add([match, input_encoded_c]) # (samples, story_maxlen, query_maxlen) 
      response = Permute((2, 1))(response) # (samples, query_maxlen, story_maxlen) 

      # concatenate the match matrix with the question vector sequence 
      answer = concatenate([response, question_encoded]) 

      # the original paper uses a matrix multiplication for this reduction step. 
      # we choose to use a RNN instead. 
      answer = LSTM(32)(answer) # (samples, 32) 

      # one regularization layer -- more would probably be needed. 
      answer = Dropout(0.3)(answer) 
      answer = Dense(vocab_size)(answer) # (samples, vocab_size) 
      # we output a probability distribution over the vocabulary 
      answer = Activation('softmax')(answer) 

      # build the final model 
      model = Model([input_sequence, question], answer) 
      model.compile(optimizer='rmsprop', loss='categorical_crossentropy', 
          metrics=['accuracy']) 

      # train 
      model.fit([inputs_train, queries_train], answers_train, 
         batch_size=32, 
         epochs=120, 
         validation_data=([inputs_test, queries_test], answers_test)) 

Here is my understanding of the part that builds the model.

The code below creates dense vectors for the story and the question:

  input_encoded_m = input_encoder_m(input_sequence) 
      input_encoded_c = input_encoder_c(input_sequence) 
      question_encoded = question_encoder(question) 

These will have the following output shapes:

input_encoded_m will have shape (samples, story_maxlen, embedding_dim), input_encoded_c will have shape (samples, story_maxlen, query_maxlen), and question_encoded will have shape (samples, query_maxlen, embedding_dim).

So input_encoded_m and input_encoded_c embed the same input with two different embeddings, whose output dimensions differ (64 and query_maxlen, which is 4 here), while question_encoded embeds the question.
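
To double-check these shapes, a minimal sketch like this can be used (assuming the tensors defined in the script above are in scope; K.int_shape is the Keras backend helper that reports static shapes, with the batch dimension shown as None):

      from keras import backend as K

      print(K.int_shape(input_encoded_m))   # (None, story_maxlen, 64)
      print(K.int_shape(input_encoded_c))   # (None, story_maxlen, query_maxlen)
      print(K.int_shape(question_encoded))  # (None, query_maxlen, 64)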

The part below then matches the words of the story against the question and applies a softmax activation to the output, meaning the matching words are identified:

  match = dot([input_encoded_m, question_encoded], axes=(2, 2)) 
      match = Activation('softmax')(match) 
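
For intuition, here is a small NumPy sketch (with tiny, made-up sizes) of what the dot with axes=(2, 2) computes for a single sample: every story position is compared with every query word by an inner product of their embedding vectors, and the softmax then normalizes those scores into weights:

      import numpy as np

      # Made-up sizes for one sample: story_maxlen=3, query_maxlen=2, embedding_dim=5.
      story_emb = np.random.rand(3, 5)       # one embedding vector per story position
      question_emb = np.random.rand(2, 5)    # one embedding vector per query word

      # dot([input_encoded_m, question_encoded], axes=(2, 2)) pairs every story
      # position with every query word via the inner product of their embeddings:
      match = story_emb.dot(question_emb.T)  # shape (story_maxlen, query_maxlen)
      # Activation('softmax') then turns these similarity scores into weights.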

What I am not clear about is why a differently embedded version of the same input is added to the match matrix from the step above. The comment says "the second input vector sequence", but we are not dealing with a second input here. I cannot make sense of this; any help?

  # add the match matrix with the second input vector sequence
      response = add([match, input_encoded_c]) # (samples, story_maxlen, query_maxlen)

Also, what does the Permute of the output of the above step do in this context?

  response = Permute((2, 1))(response) # (samples, query_maxlen, story_maxlen)

And does the following simply concatenate the part above with the question before the LSTM layer? Please correct me if my understanding is wrong here:

  # concatenate the match matrix with the question vector sequence 
      answer = concatenate([response, question_encoded]) 
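
For reference, this is how I trace the shapes through this part (a sketch assuming the tensors from the script above are in scope, checked right after the concatenate):

      from keras import backend as K

      print(K.int_shape(response))          # (None, query_maxlen, story_maxlen)
      print(K.int_shape(question_encoded))  # (None, query_maxlen, embedding_dim)
      print(K.int_shape(answer))            # (None, query_maxlen, story_maxlen + embedding_dim)
      # The LSTM(32) that follows reads these query_maxlen timesteps and reduces them to (None, 32).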

I couldn't find an intuitive explanation of this anywhere, so I am posting it here.

Any help is highly appreciated!

Thanks.

Answer


First, the match variable does not just identify matching words; it gives a probability distribution over the input. These values can be seen as weights for each input sentence.

The input sequence is embedded with two different matrices, which gives input_encoded_m and input_encoded_c in the code. The first embedding is used to find the match weights, and those weights are then applied to the second embedding to produce the answer. Applying the weights to the same vectors they were computed from would not be meaningful.
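
In the paper's notation (Section 2.1), the memory is read with p_i = softmax(u·m_i) and o = Σ_i p_i c_i, where m_i comes from one embedding and c_i from the other. Here is a minimal NumPy sketch of that formulation, with made-up sizes; note that the Keras example above replaces the weighted sum and the final matrix multiplication with add, an LSTM and a Dense layer:

      import numpy as np

      def softmax(x):
          e = np.exp(x - x.max())
          return e / e.sum()

      n_sentences, d = 10, 64               # number of memory slots and embedding size (made up)
      m = np.random.rand(n_sentences, d)    # memory vectors from embedding A (role of input_encoded_m)
      c = np.random.rand(n_sentences, d)    # output vectors from embedding C (role of input_encoded_c)
      u = np.random.rand(d)                 # embedded question

      p = softmax(m.dot(u))                 # attention weights over the input sentences
      o = (p[:, None] * c).sum(axis=0)      # weighted sum of the *second* embedding
      # The paper then predicts the answer as softmax(W.dot(o + u)).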

Then comes the Permute. To generate the answer, the query is combined with the response, so the dimensions of the response are permuted to line up with those of the query.
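
A tiny Keras sketch of what Permute((2, 1)) does to the shape (the sizes are only illustrative):

      from keras.layers import Input, Permute

      x = Input((6, 3))        # think of it as (samples, story_maxlen, query_maxlen)
      y = Permute((2, 1))(x)   # (samples, 3, 6), i.e. (samples, query_maxlen, story_maxlen)
      # The second axis now lines up with question_encoded, so the two tensors
      # can be concatenated along the last axis.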

Reading Section 2.1 of the paper End-To-End Memory Networks will help you understand this.