I want to create a .csv file containing data I have already stored in a list from the Twitter Search API. I saved the last 100 tweets for a keyword of my choice ('reddit' in this example), and I'm trying to save each tweet to its own cell in the .csv file. My code is below, and the error I get back is:
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 0: ordinal not in range(128)
If anyone knows what I can do to fix this, it would be greatly appreciated!
import sys
import os
import urllib
import urllib2
import json
from pprint import pprint
import csv
import sentiment_analyzer
import codecs
class Twitter:
    def __init__(self):
        self.api_url = {}
        self.api_url['search'] = 'http://search.twitter.com/search.json?'

    def search(self, params):
        url = self.make_url(params, apitype='search')
        data = json.loads(urllib2.urlopen(url).read().decode('utf-8').encode('ascii', 'ignore'))
        txt = []
        for obj in data['results']:
            txt.append(obj['text'])
        return '\n'.join(txt)

    def make_url(self, params, apitype='search'):
        baseurl = self.api_url[apitype]
        return baseurl + urllib.urlencode(params)
if __name__ == '__main__':
    try:
        query = sys.argv[1]
    except IndexError:
        query = 'reddit'

    t = Twitter()
    s = sentiment_analyzer.SentimentAnalyzer()
    params = {'q': query, 'result_type': 'recent', 'rpp': 100}
    urlName = t.make_url(params)
    print urlName
    txt = t.search(params)
    print s.analyze_text(txt)

    myfile = open('reddit.csv', 'wb')
    wr = csv.writer(myfile, quoting=csv.QUOTE_MINIMAL)
    wr.writerow(txt)
+1. The cited example even seems to cover this exact situation. – abarnert
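There appear to be two separate problems here. First, the Python 2 csv module works on byte strings, so handing `wr.writerow` a unicode string containing a character such as u'\u2019' (a curly apostrophe) triggers the UnicodeEncodeError; encoding each field with `.encode('utf-8')` before writing avoids it. Second, `wr.writerow(txt)` passes a single string, which csv iterates character by character, putting each character in its own cell; wrapping each tweet in a one-element list writes one tweet per row instead. A minimal sketch of the row-writing fix (shown in Python 3 syntax, where csv accepts unicode directly; the sample tweets are hypothetical stand-ins for the API results):

```python
import csv
import io

# Hypothetical sample tweets; the real ones come from Twitter.search().
tweets = [u'reddit\u2019s front page', u'plain ascii tweet']

buf = io.StringIO()  # stands in for the reddit.csv file handle
wr = csv.writer(buf, quoting=csv.QUOTE_MINIMAL)
for tweet in tweets:
    # Wrap the tweet in a list so the whole text becomes one cell in one row,
    # rather than csv iterating the string character by character.
    wr.writerow([tweet])
```

Under Python 2, the same loop works against the original `myfile` handle if each field is encoded first: `wr.writerow([tweet.encode('utf-8')])`.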