I am trying to retrieve pages from a web server over HTTPS, using Lua with luasec. For most pages my script works as intended, but if the resource contains special characters such as "é", I am sent into a loop of 301 responses. Why does the web server reply with a 301, and what location exactly is it redirecting to?
Let this code snippet illustrate my dilemma (redacted, to protect the details of the innocent actual server):
local https = require "ssl.https"

local prefix = "https://www.example.com"
local suffix = "/S%C3%A9ance"

-- myTostring is a small local helper (omitted here) that serialises
-- a table such as `headers` into a printable string.
local body, code, headers, status = https.request(prefix .. suffix)
print(status .. " - GET was for \"" .. prefix .. suffix .. "\"")
print("headers are " .. myTostring(headers))
print("body is " .. myTostring(body))

if suffix == headers.location then
    print("equal")
else
    print("not equal")
end

-- Follow the Location header once, by hand
local body, code, headers, status = https.request(prefix .. headers.location)
print(status .. " - GET was for \"" .. prefix .. suffix .. "\"")
This results in the contradictory output:
HTTP/1.1 301 Moved Permanently - GET was for "https://www.example.com/S%C3%A9ance"
headers are { ["content-type"]="text/html; charset=UTF-8";["set-cookie"]="PHPSESSID=e80oo5dkouh8gh0ruit7mj28t6; path=/";["content-length"]="0";["connection"]="close";["date"]="Wed, 15 Mar 2017 19:31:24 GMT";["location"]="S%C3%A9ance";}
body is ""
equal
HTTP/1.1 301 Moved Permanently - GET was for "https://www.example.com/S%C3%A9ance"
How might one retrieve this elusive page, using Lua and as few additional dependencies as possible?
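One approach I have been considering is a sketch like the following: follow redirects by hand, resolving each `Location` header against the URL just requested, and cap the number of hops so that a 301 pointing back at itself cannot loop forever. The `resolve_location` helper below is hand-rolled purely for illustration (it only handles absolute URLs, absolute paths, and bare relative paths); luasocket's `socket.url.absolute` would presumably do the same job without adding any dependency beyond what `ssl.https` already pulls in.

```lua
local https = require "ssl.https"   -- luasec, as in the snippet above

-- Resolve a Location header value against the URL that was requested.
-- Hand-rolled minimal resolver, for illustration only.
local function resolve_location(base, location)
  if location:match("^https?://") then
    -- already an absolute URL
    return location
  elseif location:sub(1, 1) == "/" then
    -- absolute path: keep only scheme://host from the base
    return base:match("^(https?://[^/]+)") .. location
  else
    -- bare relative path: replace everything after the last "/"
    return base:match("^(.*/)") .. location
  end
end

-- Follow 301/302 responses by hand, with a cap on the number of hops
-- so a self-referential redirect cannot loop forever.
local function fetch(url, max_redirects)
  for _ = 1, (max_redirects or 5) do
    local body, code, headers, status = https.request(url)
    if code ~= 301 and code ~= 302 then
      return body, code, headers, status
    end
    url = resolve_location(url, headers.location)
  end
  return nil, "too many redirects"
end

local body, err = fetch("https://www.example.com/S%C3%A9ance")
```

Note that resolving the server's `Location` value `"S%C3%A9ance"` against `"https://www.example.com/S%C3%A9ance"` yields the very same URL, which is exactly why the naive retry loops; the cap at least turns the infinite loop into a clean failure.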