If you want to obtain the contents of a web page into a variable, just read the response of urllib.request.urlopen:
import urllib.request
...
url = 'http://example.com/'
response = urllib.request.urlopen(url)
data = response.read() # a `bytes` object
text = data.decode('utf-8') # a `str`; this step can't be used if data is binary
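If the text encoding is not known in advance, one option is to ask the response headers for it. The sketch below assumes the server declares a charset in its Content-Type header and falls back to UTF-8 otherwise; the fallback choice is an assumption, not something guaranteed by the server.

import urllib.request
...
with urllib.request.urlopen(url) as response:
    data = response.read()
    # `get_content_charset()` returns the charset from the Content-Type
    # header, or None if the server did not declare one; UTF-8 is an
    # assumed fallback here.
    charset = response.headers.get_content_charset() or 'utf-8'
    text = data.decode(charset)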
The easiest way to download and save a file is to use the urllib.request.urlretrieve function:
import urllib.request
...
# Download the file from `url` and save it locally under `file_name`:
urllib.request.urlretrieve(url, file_name)
import urllib.request
...
# Download the file from `url`, save it in a temporary directory and get the
# path to it (e.g. '/tmp/tmpb48zma.txt') in the `file_name` variable:
file_name, headers = urllib.request.urlretrieve(url)
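As a side note, urlretrieve also accepts a reporthook callback, which is called with the number of blocks transferred so far, the block size, and the total file size. A minimal sketch of a progress printer follows; the function name report_progress is made up for illustration:

import urllib.request
...
def report_progress(block_num, block_size, total_size):
    # `total_size` is -1 when the server sends no Content-Length header.
    if total_size > 0:
        downloaded = block_num * block_size
        print('%.1f%% downloaded' % (100.0 * downloaded / total_size))

urllib.request.urlretrieve(url, file_name, reporthook=report_progress)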
But keep in mind that urlretrieve is considered legacy and might become deprecated (not sure why, though). So the most correct way to do this is to use the urllib.request.urlopen function, which returns a file-like object representing the HTTP response, and copy it to a real file using shutil.copyfileobj:
import urllib.request
import shutil
...
# Download the file from `url` and save it locally under `file_name`:
with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file:
    shutil.copyfileobj(response, out_file)
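shutil.copyfileobj copies the stream in fixed-size chunks, so memory use stays constant no matter how large the file is. If you want to handle the chunks yourself (say, to update a progress display), a roughly equivalent manual loop looks like this; the 64 KiB chunk size is an arbitrary choice:

import urllib.request
...
with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file:
    while True:
        chunk = response.read(64 * 1024)  # read at most 64 KiB at a time
        if not chunk:
            break  # an empty `bytes` object means end of stream
        out_file.write(chunk)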
If this seems too complicated, you may want to go simpler and store the whole download in a bytes object, then write it to a file. But this works well only for small files.
import urllib.request
...
# Download the file from `url` and save it locally under `file_name`:
with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file:
    data = response.read() # a `bytes` object
    out_file.write(data)
It is possible to extract .gz (and maybe other formats of) compressed data on the fly, but such an operation probably requires the HTTP server to support random access to the file.
import urllib.request
import gzip
...
# Read the first 64 bytes of the file inside the .gz archive located at `url`
url = 'http://example.com/something.gz'
with urllib.request.urlopen(url) as response:
    with gzip.GzipFile(fileobj=response) as uncompressed:
        file_header = uncompressed.read(64) # a `bytes` object

# Or do anything shown above using `uncompressed` instead of `response`.
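For what it's worth, "random access" over HTTP normally means byte-range requests. A minimal sketch of fetching only the beginning of a file follows, assuming the server honours the Range header (it answers with status 206 Partial Content when it does):

import urllib.request
...
# Ask for only the first 1024 bytes of the resource; servers that support
# range requests reply with HTTP 206 (Partial Content).
request = urllib.request.Request(url, headers={'Range': 'bytes=0-1023'})
with urllib.request.urlopen(request) as response:
    partial_data = response.read()  # at most 1024 bytes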
@alvas, a bounty for this? The answerer is still (and rather) active on SO. Why not just add a comment and ask? –
Because an answer that stands the test of time is a good one and deserves to be rewarded. Also, we should start doing this for a lot of other questions to check whether the answers are still relevant today. Especially since the sorting of SO answers is rather crazy; sometimes the outdated or even the worst answer rises to the top. – alvas