
Slow text file reading in Python: how can I make this program read a large text file faster? My code takes nearly five minutes just to read the text file, and I need it to be faster. I don't think my algorithm runs in O(n).

Some sample data (the actual data is 470K+ lines):

Aarika 
Aaron 
aaron 
Aaronic 
aaronic 
Aaronical 
Aaronite 
Aaronitic 
Aaron's-beard 
Aaronsburg 
Aaronson 

My code:

import string 
import re 


WORDLIST_FILENAME = "words.txt" 

def load_words(): 
    wordlist = [] 
    print("Loading word list from file...") 
    with open(WORDLIST_FILENAME, 'r') as f: 
        for line in f: 
            wordlist = wordlist + str.split(line) 
    print(" ", len(wordlist), "words loaded.") 
    return wordlist 

def find_words(uletters): 
    wordlist = load_words() 
    foundList = [] 

    for word in wordlist: 
        wordl = list(word) 
        letters = list(uletters) 
        count = 0 
        if len(word) == 7: 
            for letter in wordl[:]: 
                if letter in letters: 
                    wordl.remove(letter) 
                    # print("word left" + str(wordl)) 
                    letters.remove(letter) 
                    # print(letters) 
                    count = count + 1 
                    # print(count) 
                    if count == 7: 
                        print("Matched:" + word) 
                        foundList = foundList + str.split(word) 
    foundList.sort() 
    result = '' 
    for items in foundList: 
        result = result + items + ',' 
    print(result[:-1]) 


# Test cases 
find_words("eabauea" "iveabdi") 
#pattern = "asa" " qlocved" 
#print("letters to look for: "+ pattern) 
#find_words(pattern) 

Sounds like a better fit for http://codereview.stackexchange.com/. – alecxe


It would help if you also explained what your program is supposed to do. – MYGz


One thing... 'wordlist = wordlist + str.split(line)' copies the whole word list for every line. Do 'wordlist.extend(line.strip().split())' instead. Or, if you want to drop duplicates and get faster word lookups, make 'wordlist' a 'set' and use '.update'. – tdelaney
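A minimal sketch of that suggestion (reusing the question's words.txt and WORDLIST_FILENAME; load_words_set is just an illustrative name for the set variant):

WORDLIST_FILENAME = "words.txt"

def load_words():
    # Grow one list in place; 'wordlist = wordlist + str.split(line)'
    # rebuilds and copies the whole list on every line.
    wordlist = []
    with open(WORDLIST_FILENAME, 'r') as f:
        for line in f:
            wordlist.extend(line.strip().split())
    return wordlist

def load_words_set():
    # Set variant: drops duplicates and gives O(1) membership tests.
    words = set()
    with open(WORDLIST_FILENAME, 'r') as f:
        for line in f:
            words.update(line.strip().split())
    return words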

Answer


Read the single-column file into a list with splitlines():

def load_words(): 
    with open("words.txt", 'r') as f: 
        wordlist = f.read().splitlines() 
    return wordlist 

You can time it with timeit:

from timeit import timeit 

# 'setup' was not shown in the original answer; a plausible definition is: 
setup = "from __main__ import load_words" 

timeit('load_words()', setup=setup, number=3) 
# Output: 0.1708553659846075 seconds 

As for the matching itself, which looks like a fuzzy-matching problem, you could try fuzzywuzzy:

# pip install fuzzywuzzy[speedup] 

from fuzzywuzzy import process 

wordlist = load_words() 
process.extract("eabauea", wordlist, limit=10) 

Output:

[('-a', 90), ('A', 90), ('A.', 90), ('a', 90), ("a'", 90), 
('a-', 90), ('a.', 90), ('AB', 90), ('Ab', 90), ('ab', 90)] 

The results are more interesting if you filter for longer matches:

results = process.extract("eabauea", wordlist, limit=100) 
[x for x in results if len(x[0]) > 4] 

Output:

[('abaue', 83), 
('Ababua', 77), 
('Abatua', 77), 
('Bauera', 77), 
('baulea', 77), 
('abattue', 71), 
('abature', 71), 
('ablaqueate', 71), 
('bauleah', 71), 
('ebauche', 71), 
('habaera', 71), 
('reabuse', 71), 
('Sabaean', 71), 
('sabaean', 71), 
('Zabaean', 71), 
('-acea', 68)] 

But with 470K+ rows it does take a while:

# As above, 'setup' must make process and wordlist available, e.g.: 
setup = "from __main__ import process, wordlist" 

timeit('process.extract("eabauea", wordlist, limit=3)', setup=setup, number=3) 
# Output: 384.97334043699084 seconds