I have the following code running in an Azure function (the code comes from Stack Overflow here), and most of the time it downloads the large file like it should. However, sometimes it just stops adding data to the file and never starts again. The bigger the file, the more often this happens. I get no errors, nothing. Is there a way to wake the whole thing up again after, say, 10 seconds without progress, or some other way to keep an eye on the process?

azure download node.js timeout
var azure = require('azure-storage');
var fs = require('fs');

module.exports = function (context, input) {
    context.done(); // note: the function is signalled as finished before the download below starts

    var accessKey = 'myaccesskey';
    var storageAccount = 'mystorageaccount';
    var containerName = 'mycontainer';

    var blobService = azure.createBlobService(storageAccount, accessKey);

    var recordName = "a_large_movie.mov";
    var blobName = "standard/mov/" + recordName;
    var blobSize;
    var chunkSize = (1024 * 512) * 8; // I'm experimenting with this variable
    var startPos = 0;
    var fullPath = "D:/home/site/wwwroot/myAzureFunction/input/";

    // Look up the blob's size, then start the chunked download.
    var blobProperties = blobService.getBlobProperties(containerName, blobName, null, function (error, blob) {
        if (error) {
            throw error;
        }
        else {
            blobSize = blob.contentLength;
            context.log('Registered length: ' + blobSize);
            fullPath = fullPath + recordName;
            console.log(fullPath);
            doDownload();
        }
    });

    // Appends one chunk to the file, then recurses until the whole blob is written.
    function doDownload() {
        var stream = fs.createWriteStream(fullPath, { flags: 'a' });
        var endPos = startPos + chunkSize;
        if (endPos > blobSize) {
            endPos = blobSize;
            context.log('Reached end of file endPos: ' + endPos);
        }
        context.log("Downloading " + (endPos - startPos) + " bytes starting from " + startPos + " marker.");
        blobService.getBlobToStream(
            containerName,
            blobName,
            stream,
            {
                "rangeStart": startPos,
                "rangeEnd": endPos - 1
            },
            function (error) {
                if (error) {
                    throw error;
                }
                else {
                    startPos = endPos;
                    if (startPos <= blobSize - 1) {
                        doDownload();
                    }
                }
            }
        );
    }
};
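For reference, the "no progress for 10 seconds" idea can be implemented as a watchdog timer around each ranged request. The sketch below is hypothetical and untested: it assumes azure-storage's blobService.createReadStream accepts the same rangeStart/rangeEnd options as getBlobToStream, buffers each chunk in memory, and re-requests the same range whenever no data has arrived for stallMs milliseconds. (The SDK's own options.timeoutIntervalInMs and options.maximumExecutionTimeInMs may cover part of this as well.)

// Hypothetical helper: download one byte range with a stall watchdog.
// If no 'data' event fires for stallMs, the request is abandoned and the
// same range is requested again, up to retriesLeft attempts.
function downloadRangeWithWatchdog(blobService, containerName, blobName,
                                   startPos, endPos, stallMs, retriesLeft, done) {
    var pieces = [];
    var readStream = blobService.createReadStream(containerName, blobName, {
        rangeStart: startPos,
        rangeEnd: endPos - 1
    });
    var timer;

    function armTimer() {
        clearTimeout(timer);
        timer = setTimeout(function () {
            // Stalled: drop this request and retry the whole range.
            readStream.removeAllListeners();
            readStream.on('error', function () {}); // swallow late errors from the abandoned request
            if (typeof readStream.destroy === 'function') readStream.destroy();
            if (retriesLeft > 0) {
                downloadRangeWithWatchdog(blobService, containerName, blobName,
                                          startPos, endPos, stallMs, retriesLeft - 1, done);
            } else {
                done(new Error('Download stalled at byte ' + startPos));
            }
        }, stallMs);
    }

    armTimer();
    readStream.on('data', function (data) {
        pieces.push(data);
        armTimer(); // progress seen, push the deadline forward
    });
    readStream.on('end', function () {
        clearTimeout(timer);
        done(null, Buffer.concat(pieces));
    });
    readStream.on('error', function (error) {
        clearTimeout(timer);
        done(error);
    });
}

Because a retried range is re-downloaded from its start, the caller should only commit the chunk after done reports success, e.g. with fs.appendFile(fullPath, chunk, ...) instead of the long-lived write stream with flags: 'a'; that way a half-written chunk is never appended to the file.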
Have you tried reducing the chunk size from 4 MB to 1 MB? –
Not 1 MB specifically. I did start out with a 512 KB chunk size, though, and it gave the same occasional timeouts mentioned above. –
Hi @GauravMantri - I changed the chunk size to 1 MB and it did make things better. However, a large file still times out now and then, so the problem hasn't gone away, but it did improve. What is the connection between chunk size and these possible timeouts? –
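One plausible reading of that connection (my own back-of-the-envelope, not from the SDK docs): each chunk is a single HTTP request, so the larger the chunk, the longer one request has to stay healthy end to end, and the more data a single stall costs to re-download. Illustrative arithmetic with assumed numbers:

// Assumed effective throughput; real numbers depend on the App Service plan.
var throughputBytesPerSec = 2 * 1024 * 1024; // ~2 MB/s, hypothetical

function secondsPerChunk(chunkBytes) {
    return chunkBytes / throughputBytesPerSec;
}

console.log(secondsPerChunk(4 * 1024 * 1024)); // 4 MB chunk: ~2.0 s per request
console.log(secondsPerChunk(1 * 1024 * 1024)); // 1 MB chunk: ~0.5 s per request

// A stall watchdog should scale with the chunk size, e.g. three times the
// expected transfer time, but never below 10 seconds:
var chunkSize = 1024 * 1024;
var stallMs = Math.max(10000, 3 * secondsPerChunk(chunkSize) * 1000);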