
Answers

83

The command is:

wget -r -np -l 1 -A zip http://example.com/download/ 

Meaning of the options (a short usage sketch follows the list):

-r, --recursive   specify recursive download. 
-np, --no-parent   don't ascend to the parent directory. 
-l, --level=NUMBER  maximum recursion depth (inf or 0 for infinite). 
-A, --accept=LIST  comma-separated list of accepted extensions. 
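As a minimal sketch of the same technique (not from the original answer), here is the command with a comma-separated accept list; the URL and the zip/pdf extensions are placeholder assumptions:

# Fetch .zip and .pdf files one level deep from the hypothetical URL,
# staying below the starting directory (-np).
wget -r -np -l 1 -A zip,pdf http://example.com/download/ 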
+11

The '-nd' (no directories) flag is handy if you don't want any extra directories created (i.e., all files end up in one folder). –
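For illustration only, a hedged variant of the answer's command with -nd added (the URL is a placeholder):

# Same recursive download, but -nd flattens the result: every file is
# saved directly into the current directory instead of mirroring the
# remote directory tree.
wget -r -np -nd -l 1 -A zip http://example.com/download/ 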

+0

How can this solution be adjusted to go deeper from the given page? I tried -l 20, but wget stops right away. – Wrench

47

The solution above did not work for me. For me, only this one works:

wget -r -l1 -H -t1 -nd -N -np -A.mp3 -erobots=off [url of website] 

Meaning of the options (a variant sketch follows the list):

-r   recursive 
-l1   maximum recursion depth (1=use only this directory) 
-H   span hosts (visit other hosts in the recursion) 
-t1   Number of retries 
-nd   Don't make new directories, put downloaded files in this one 
-N   turn on timestamping 
-A.mp3  download only mp3s 
-erobots=off execute "robots=off" as if it were part of .wgetrc (i.e. ignore robots.txt) 
+1

Source: http://www.commandlinefu.com/commands/view/12498/download-all-music-files-off-of-a-website-using-wget –

+0

Yes, thanks! I don't remember where it came from, it was just lying around in my scripts. –

+0

Sorry, no idea. Ask a new question! ;) –

1

For other scenarios, I use some parallel magic:

curl [url] | grep -i [filending] | sed -n 's/.*href="\([^"]*\).*/\1/p' | parallel -N5 wget - 
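A minimal sketch of that pipeline with the placeholders filled in with assumed values (.pdf files from a hypothetical page); it presumes the hrefs are absolute URLs and that GNU parallel is installed:

# Hypothetical concrete instance of the pipeline above.
# sed pulls out the href targets; parallel runs up to 4 wget downloads at once.
curl -s http://example.com/download/ \
  | grep -i '\.pdf' \
  | sed -n 's/.*href="\([^"]*\)".*/\1/p' \
  | parallel -j4 wget {} 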