You could use substr to just grab the first 1000 characters, or you can use:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://www.example.com/');
curl_setopt($ch, CURLOPT_RANGE, '0-500');
curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$result = curl_exec($ch);
curl_close($ch);
echo $result;
which will download only the first 500 bytes. You can verify this for yourself by running some extremely ugly throwaway code like this:
$url = 'http://www.example.com/';
$range = array();
$repeats = 10;

function average($a) {
    return array_sum($a) / count($a);
}

// Timed fetches WITH a Range header (first 500 bytes only).
for ($i = 0; $i < $repeats; $i++) {
    $time_start = microtime(true);
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RANGE, '0-500');
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $result = curl_exec($ch);
    $time_end = microtime(true);
    $time = $time_end - $time_start;
    curl_close($ch);
    $range[] = $time;
}
echo "With range: average = ".round(average($range),2)." seconds (Min: ".round(min($range),2).", Max: ".round(max($range),2).")\n";

// Timed fetches WITHOUT a Range header (full page).
$range = array();
for ($i = 0; $i < $repeats; $i++) {
    $time_start = microtime(true);
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $result = curl_exec($ch);
    $time_end = microtime(true);
    $time = $time_end - $time_start;
    curl_close($ch);
    $range[] = $time;
}
echo "Without range: average = ".round(average($range),2)." seconds (Min: ".round(min($range),2).", Max: ".round(max($range),2).")\n";
If I run this against my own site (http://www.focalstrategy.com/), I get:
With range: average = 0.38 seconds (Min: 0.35, Max: 0.41)
Without range: average = 0.56 seconds (Min: 0.53, Max: 0.7)
Against http://en.wikipedia.org/wiki/PHP, I get:
With range: average = 0.11 seconds (Min: 0.05, Max: 0.5)
Without range: average = 0.48 seconds (Min: 0.34, Max: 0.78)
Against Stack Overflow I get:
With range: average = 1.31 seconds (Min: 1.1, Max: 1.46)
Without range: average = 1.37 seconds (Min: 1.18, Max: 1.7)
And against eBay I get:
With range: average = 1.75 seconds (Min: 1.56, Max: 1.99)
Without range: average = 1.74 seconds (Min: 1.51, Max: 2.14)
As you can see from these tests, SO and eBay do not support range requests.
In short, sites that support range requests will be faster to fetch; sites that don't will not be, and you will simply get the whole page back.
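If you want a quick hint about a server's range support before timing it, you can look for an `Accept-Ranges: bytes` response header. This helper is a sketch that is not part of the original answer, and it is only a hint: some servers support ranges without advertising this header, and some advertise it but still ignore `Range` on dynamic pages, so the timing comparison above remains the reliable test.

```php
<?php
// Sketch: issue a HEAD request and look for "Accept-Ranges: bytes".
// Treat the result as a hint only; servers are not required to send
// this header even when they honour Range requests.
function supportsRangeRequests($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_NOBODY, true);        // HEAD request, no body
    curl_setopt($ch, CURLOPT_HEADER, true);        // include headers in output
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $headers = curl_exec($ch);
    curl_close($ch);
    return $headers !== false
        && stripos($headers, 'Accept-Ranges: bytes') !== false;
}

var_dump(supportsRangeRequests('http://www.example.com/'));
```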
This is mostly bad news. However, you can use `curl_multi` to run several connections at once, and any time one finishes, parse it, then grab a new URL from the queue and start downloading it. – DanRedux
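The `curl_multi` approach mentioned above can be sketched roughly like this. This is an illustrative outline under my own assumptions, not the commenter's actual code; the URL list, the concurrency limit, and the `addNext` helper are all hypothetical.

```php
<?php
// Hypothetical queue of URLs to crawl.
$queue = array('http://www.example.com/a', 'http://www.example.com/b');
$maxConcurrent = 2;   // how many downloads to run at once (assumed value)

// Adds the next queued URL to the multi handle, if any remain.
function addNext($mh, &$queue, &$active) {
    if (!$queue) return;
    $url = array_shift($queue);
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $active[(int) $ch] = $ch;
}

$mh = curl_multi_init();
$active = array();

// Prime the pool with up to $maxConcurrent transfers.
for ($i = 0; $i < $maxConcurrent; $i++) {
    addNext($mh, $queue, $active);
}

do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);   // wait until at least one transfer has activity

    // As each transfer finishes: parse it, then start the next queued URL.
    while ($info = curl_multi_info_read($mh)) {
        $ch = $info['handle'];
        $html = curl_multi_getcontent($ch);
        // ... parse $html here ...
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
        unset($active[(int) $ch]);
        addNext($mh, $queue, $active);
    }
} while ($running > 0 || $active);

curl_multi_close($mh);
```

The key point is that transfers overlap: while one response is in flight, the others keep downloading, so total wall-clock time approaches that of the slowest request rather than the sum of all of them.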
Also, shouldn't you be grabbing the first occurrence of "title" (index 0 of $titleArr)? You're currently grabbing the second one (remember, arrays start at 0). – DanRedux
Hmm, that's still bad news; running the crawl multiple times won't speed anything up –