How do I check, through a SQL query, whether the numeric values of a column contain alphabetic characters? I have a CSV file in AWS S3 that I am loading into AWS Glue, i.e. to apply transformations to the source data files from S3; Glue provides a PySpark scripting environment. The data looks somewhat like this:
"ID","CNTRY_CD","SUB_ID","PRIME_KEY","DATE"
"123","IND","25635525","11243749772","2017-10-17"
"123","IND","25632349","112322abcd","2017-10-17"
"123","IND","25635234","11243kjsd434","2017-10-17"
"123","IND","25639822","1124374343","2017-10-17"
The expected result should look like this:
"123","IND","25632349","112322abcd","2017-10-17"
"123","IND","25635234","11243kjsd434","2017-10-17"
Here I am working with the field named PRIME_KEY, of type integer, which may contain alphabetic characters; that leads to incorrectly formatted data.
The requirement is to find out, using a SQL query with a regular expression, whether this integer primary key column contains any alphanumeric values. So far I have tried a couple of variants, like the one below, but with no luck:
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv, ['JOB_NAME'])
glueContext = GlueContext(SparkContext.getOrCreate())
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# s3 output directory
output_dir = "s3://aws-glue-scripts../.."

# Data Catalog: database and table name
db_name = "sampledb"
glue_tbl_name = "sampleTable"

# Load the catalog table as a DynamicFrame and convert it to a Spark DataFrame
datasource = glueContext.create_dynamic_frame.from_catalog(database = db_name, table_name = glue_tbl_name)
datasource_df = datasource.toDF()

# Register a temp view so the data can be queried with Spark SQL
datasource_df.registerTempTable("sample_tbl")

# Flag rows whose PRIME_KEY mixes letters and digits
invalid_primarykey_values_df = spark.sql("SELECT * FROM sample_tbl WHERE CAST(PRIME_KEY AS STRING) RLIKE '([a-z]+[0-9]+)|([0-9]+[a-z]+)'")
invalid_primarykey_values_df.show()
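For what it's worth, I also sketched a simpler filter that flags any value containing a non-digit character, rather than enumerating letter/digit orderings (a sketch only, assuming PRIME_KEY can always be cast to a plain string), though I suspect the underlying problem is elsewhere:

# Sketch (my variant): '[^0-9]' matches a single character that is not a digit,
# so this also catches symbols and uppercase letters that the pattern above misses.
non_numeric_df = spark.sql("SELECT * FROM sample_tbl WHERE CAST(PRIME_KEY AS STRING) RLIKE '[^0-9]'")
non_numeric_df.show()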
I also tried this variant as a plain SQL query, again with no luck:
SELECT *
FROM table_name
WHERE column_name IS NOT NULL
  AND CAST(column_name AS VARCHAR(100)) LIKE '%[0-9a-z0-9]%'
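I suspect the bracketed [0-9a-z] class is T-SQL LIKE syntax that Spark SQL does not treat as a character class, so my understanding is that the Spark-compatible form would use RLIKE instead, along these lines (a sketch reusing the temp view registered above):

# Sketch: the bracket-LIKE filter rewritten with Spark SQL's RLIKE operator,
# which takes a Java regular expression ('[a-z]' = any lowercase letter).
letters_df = spark.sql("""
    SELECT *
    FROM sample_tbl
    WHERE PRIME_KEY IS NOT NULL
      AND CAST(PRIME_KEY AS STRING) RLIKE '[a-z]'
""")
letters_df.show()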
The output of the original PySpark script is as follows:
+---+--------+--------+------------------+----------+
| ID|CNTRY_CD|  SUB_ID|         PRIME_KEY|      DATE|
+---+--------+--------+------------------+----------+
|123|     IND|25635525|[11243749772,null]|2017-10-17|
|123|     IND|25632349| [null,112322ab ..|2017-10-17|
|123|     IND|25635234|[null,11243kjsd ..|2017-10-17|
|123|     IND|25639822| [1124374343,null]|2017-10-17|
+---+--------+--------+------------------+----------+
I want to highlight the values of the field I am working with: they look somewhat different from the source data.
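From reading the AWS Glue docs, my guess is that Glue resolved PRIME_KEY as a choice type (both long and string), which would explain the [value,null] pairs above. What I have been considering is resolving the choice by casting everything to string before querying; a sketch (the cast:string spec is taken from the Glue DynamicFrame documentation, and I have not yet verified it on this data):

# Sketch: collapse the ambiguous PRIME_KEY choice type into a single string column,
# so that values like 11243749772 and 112322abcd can be checked with one regex.
resolved = datasource.resolveChoice(specs = [('PRIME_KEY', 'cast:string')])
resolved_df = resolved.toDF()
resolved_df.registerTempTable("sample_tbl_resolved")
spark.sql("SELECT * FROM sample_tbl_resolved WHERE PRIME_KEY RLIKE '[^0-9]'").show()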
Any help would be much appreciated. Thanks.