<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<document DateTime="2017-06-23T04:27:08.592Z">
<PeakInfo No="1" mz="505.2315648572003965"
Intensity="4531.0000000000000000"
Rel_Intensity="3.2737729673489735"
Resolution="1879.5638812957554364"
SNR="14.0278637770897561"
Area="1348.1007591467391649"
Rel_Area="2.3371194184605959"
Index="238.9999999999976694"/>
<PeakInfo No="2" mz="522.1330917856538463"
Intensity="3382.0000000000000000"
Rel_Intensity="2.4435886505350317"
Resolution="3502.9921209527169594"
SNR="10.4705882352940982"
Area="881.4468100654634100"
Rel_Area="1.5281101521284057"
Index="925.0000000000000000"/>
</document>
Above is part of an XML file I have been working with recently. Each file contains more than 400 PeakInfo entries, and I wrote a Python script to analyze each file, using lxml and XPath to speed up the XML parsing:
from lxml import etree
import pandas as pd
import tkinter.filedialog
import os
import pandas.io.formats.excel

# Pick the folder to process and create a subfolder for the output files.
full_path = tkinter.filedialog.askdirectory(initialdir='.')
newfolder = full_path + '\\xls files'
os.chdir(full_path)
os.makedirs(newfolder)

data = {}
for files in os.listdir(full_path):
    if os.path.isfile(os.path.join(full_path, files)):
        plist = pd.DataFrame()
        # Zero-pad two-character names, e.g. A1 -> A01.
        filename = os.path.basename(files).rpartition('.')[0]
        if len(filename) == 2:
            filename = filename[:1] + '0' + filename[1:]
        xmlp = etree.parse(files)
        for p in xmlp.xpath('//PeakInfo'):
            data['Exp. m/z'] = p.attrib['mz']
            data['Intensity'] = p.attrib['Intensity']
            plist = plist.append(data, ignore_index=True)
        # Format the columns: m/z to 4 decimals, Intensity as an integer.
        plist['Exp. m/z'] = plist['Exp. m/z'].astype(float)
        plist['Exp. m/z'] = plist['Exp. m/z'].map('{:.4f}'.format)
        plist['Intensity'] = plist['Intensity'].astype(float)
        plist['Intensity'] = plist['Intensity'].map('{:.0f}'.format)
        pandas.io.formats.excel.header_style = None
        plist.to_excel(os.path.join(newfolder, filename + '.xls'), index=False)
This code pads the filename if it is only two characters long (e.g. A1 becomes A01), then extracts mz and Intensity and saves them to an xls file. The problem is that parsing each file takes a very long time. Are there any tips or tricks to speed up the process significantly?
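It may be that the bottleneck is not the lxml parsing at all but the row-by-row `DataFrame.append`, which copies the entire frame on every call. A minimal sketch of the alternative, collecting the rows into a list and building the DataFrame once (the two-peak XML below is a trimmed stand-in for a real 400-peak file):

```python
from io import BytesIO

from lxml import etree
import pandas as pd

# Trimmed two-peak stand-in for a real file.
xml = b"""<?xml version="1.0" encoding="UTF-8"?>
<document>
  <PeakInfo No="1" mz="505.2315648572" Intensity="4531.0"/>
  <PeakInfo No="2" mz="522.1330917857" Intensity="3382.0"/>
</document>"""

tree = etree.parse(BytesIO(xml))

# Collect every row first, then build the DataFrame in a single call --
# DataFrame.append re-copies the whole frame on every iteration.
rows = [{'Exp. m/z': float(p.get('mz')), 'Intensity': float(p.get('Intensity'))}
        for p in tree.xpath('//PeakInfo')]
plist = pd.DataFrame(rows)

# Same formatting as before, done once on the finished frame.
plist['Exp. m/z'] = plist['Exp. m/z'].map('{:.4f}'.format)
plist['Intensity'] = plist['Intensity'].map('{:.0f}'.format)
print(plist)
```

Converting to `float` at extraction time also removes the separate `astype(float)` passes.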
This is the worst-case scenario for using `pandas`. Use an XML parser and write to Excel with an 'xlsx package'. – stovfl
@stovfl What do you mean by an xlsx package? Do you mean openpyxl or something else? –
Yes, for example `openpyxl` can write `xlsx` directly. – stovfl
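Following that suggestion, pandas can be skipped entirely: parse with lxml and write the rows straight into a workbook with openpyxl. A minimal sketch, assuming openpyxl is installed (the output name A01.xlsx and the two-peak XML are placeholders):

```python
from io import BytesIO

from lxml import etree
from openpyxl import Workbook

# Trimmed two-peak stand-in for a real file.
xml = b"""<?xml version="1.0" encoding="UTF-8"?>
<document>
  <PeakInfo No="1" mz="505.2315648572" Intensity="4531.0"/>
  <PeakInfo No="2" mz="522.1330917857" Intensity="3382.0"/>
</document>"""

wb = Workbook()
ws = wb.active
ws.append(['Exp. m/z', 'Intensity'])  # header row
for p in etree.parse(BytesIO(xml)).xpath('//PeakInfo'):
    # Keep the values numeric (rounded) instead of formatting to strings,
    # so Excel can still sort and compute on them.
    ws.append([round(float(p.get('mz')), 4), int(float(p.get('Intensity')))])
wb.save('A01.xlsx')  # placeholder output name
```

Each `ws.append` writes one row directly, so there is no intermediate DataFrame to rebuild, and the output is a modern `.xlsx` file rather than legacy `.xls`.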