I have some strange characters in one of my Spark DataFrame columns, and I want to remove them. When I select that column and run .show(), I see the following:
Dominant technology firm seeks ambitious, assertive, confident, headstrong salesperson to lead our organization into the next era! If you are ready to thrive in a highly competitive environment, this is the job for you. ¥ Superior oral and written communication skills¥ Extensive experience with negotiating and closing sales ¥ Outspoken ¥ Thrives in competitive environment¥ Self-reliant and able to succeed in an independent setting ¥ Manage portfolio of clients ¥ Aggressively close sales to exceed quarterly quotas ¥ Deliver expertise to clients as needed ¥ Lead the company into new markets
The strange character you can see is ¥.
I wrote the code below to remove it from the DataFrame's 'description' column:
from pyspark.sql.functions import udf

# UDF that strips the stray character from each value in the column
charReplace = udf(lambda x: x.replace('¥', ''))
train_cleaned = train_triLabel.withColumn('description', charReplace('description'))
train_cleaned.show(2, truncate=False)
However, it throws this error:
File "/Users/i854319/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "/Users/i854319/spark/python/pyspark/sql/functions.py", line 1563, in <lambda>
func = lambda _, it: map(lambda x: returnType.toInternal(f(*x)), it)
File "<ipython-input-32-864efe6f3257>", line 3, in <lambda>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 0: ordinal not in range(128)
Yet when I try the same thing on a test string, the replace method recognizes the character:
s='hello ¥'
print s
s.replace('¥','')
hello ¥
Out[37]:
'hello '
Any idea where I am going wrong?
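The traceback points at a Python 2 string issue rather than a Spark one: in a UTF-8 source file the literal '¥' is the byte string '\xc2\xa5', while Spark hands the UDF each column value as a unicode object. Calling unicode.replace() with a byte-string argument makes Python 2 implicitly decode that argument as ASCII, which fails on the byte 0xc2, exactly as the traceback reports. The interactive test works because there s is itself a byte string, so the replace happens byte-for-byte. A minimal sketch of the fix, assuming Python 2 and the same train_triLabel DataFrame, is to use unicode literals in the UDF, or to skip the UDF entirely with the built-in regexp_replace:

from pyspark.sql.functions import udf, regexp_replace

# Fix 1: make the pattern and replacement unicode literals, so no
# implicit ASCII decoding happens inside the UDF
charReplace = udf(lambda x: x.replace(u'¥', u''))
train_cleaned = train_triLabel.withColumn('description', charReplace('description'))

# Fix 2 (usually preferable): regexp_replace runs inside the JVM,
# avoiding the Python serialization round-trip altogether
train_cleaned = train_triLabel.withColumn('description', regexp_replace('description', u'¥', ''))
train_cleaned.show(2, truncate=False)

Either variant should remove the character; the regexp_replace version is also faster, since the rows never have to be shipped to a Python worker.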
AAH. What a silly mistake. Thanks a million! – Baktaawar