I need to read some very large text files (100+ MB), process every line with a regular expression, and store the data in a structure. My structure inherits from defaultdict and has a read(self) method that reads the file named by self.file_name. I want to read several files using multiprocessing.
Look at this very simplified (and not real) example; here I don't use a regular expression, I just split the lines:
import multiprocessing
from collections import defaultdict

def SingleContainer():
    return list()

class Container(defaultdict):
    """
    This class stores odd lines in self["odd"] and even lines in
    self["even"]. It is stupid, but it's only an example. In the real
    case the class has additional methods that do computation on the
    data it has read.
    """
    def __init__(self, file_name):
        if type(file_name) != str:
            raise AttributeError, "%s is not a string" % file_name
        defaultdict.__init__(self, SingleContainer)
        self.file_name = file_name
        self.readen_lines = 0

    def read(self):
        f = open(self.file_name)
        print "start reading file %s" % self.file_name
        for line in f:
            self.readen_lines += 1
            values = line.split()
            key = {0: "even", 1: "odd"}[self.readen_lines % 2]
            self[key].append(values)
        print "read %d lines from file %s" % (self.readen_lines, self.file_name)

def do(file_name):
    container = Container(file_name)
    container.read()
    return container.items()

if __name__ == "__main__":
    file_names = ["r1_200909.log", "r1_200910.log"]
    pool = multiprocessing.Pool(len(file_names))
    result = pool.map(do, file_names)
    pool.close()
    pool.join()
    print "Finish"
Finally I need to join each result into a single container. It is important to preserve the order of the lines. My approach is too slow when returning the values. Is there a better solution? I'm using Python 2.6 on Linux.
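One point worth noting: pool.map returns its results in the same order as the input list, so merging the per-file results in that order keeps the lines ordered by file. A minimal sketch of such a merge (written with Python 3 print() syntax for brevity; the merge function itself is hypothetical and the sample data is just shaped like container.items()):

```python
# Sketch: merge per-file results while preserving file order.
# pool.map returns one result per input file, in input order, so
# extending each key's list in that order keeps lines ordered.
from collections import defaultdict

def merge(results):
    """Merge a list of (key, lines) item-lists into one dict of lists."""
    merged = defaultdict(list)
    for items in results:          # results arrive in input-file order
        for key, lines in items:
            merged[key].extend(lines)
    return merged

# Hypothetical data shaped like two containers' items():
r1 = [("even", [["a"]]), ("odd", [["b"]])]
r2 = [("even", [["c"]])]
print(merge([r1, r2])["even"])  # [['a'], ['c']]
```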
My problem is CPU-bound, not IO-bound. In this example I split the lines, but in the real case I process a long, complex regular expression, and the IO time (seek, ...) is small compared to the CPU time.
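For the CPU-bound regex case, one thing that helps regardless of the multiprocessing setup is compiling the pattern once at module level, so each worker process compiles it a single time instead of once per line. A small sketch (the pattern and log format below are hypothetical, and it uses Python 3 print() syntax):

```python
# Sketch: compile the regex once at module import time so every
# worker process pays the compilation cost only once, not per line.
import re

# Hypothetical pattern for a "LEVEL message" log line.
LINE_RE = re.compile(r"(?P<level>\w+)\s+(?P<msg>.*)")

def parse_line(line):
    """Return (level, msg) for a matching line, or None."""
    m = LINE_RE.match(line)
    return (m.group("level"), m.group("msg")) if m else None

print(parse_line("ERROR disk full"))  # ('ERROR', 'disk full')
```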