Here is a working example that pulls a test file over on the local machine. The file is far smaller than 1 GB, but it gives the general idea.
import paramiko
import os
import shutil
import time
import getpass
# get params
user = getpass.getuser()
pwd = getpass.getpass("Enter password: ")
bufsize = 2**20
host = 'localhost'
test_file_lines = 1000000
# create test file
now = time.asctime()
testfile_path = os.path.abspath('deleteme')
local_path = 'deleteme.copy'
print('writing test file...')
start = time.time()
with open(testfile_path, 'w') as fp:
    for _ in range(test_file_lines):
        fp.write(now + '\n')
delta = time.time() - start
file_size = os.stat(testfile_path).st_size
print("file size %d, %d KB/Sec" % (file_size, file_size/1024/delta))
# make connection
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, username=user, password=pwd)
sftp = ssh.open_sftp()
sftp_file = sftp.open(testfile_path, bufsize=bufsize)
print('copying file...')
start = time.time()
shutil.copyfileobj(sftp_file,
                   open(local_path, 'wb', bufsize),
                   bufsize)
delta = time.time() - start
print('%.3f seconds, %d KB/Sec' % (delta, file_size/1024/delta))
#assert open(testfile_path).read() == open(local_path).read(), "files don't match"
Running it on my machine I got:
Enter password:
writing test file...
file size 25000000, 21017 KB/Sec
copying file...
10.225 seconds, 2387 KB/Sec
We expect it to be somewhat slower because there is a read plus a write plus network cost (it's localhost, so it never really touches the wire), but this seems a bit slow. I'm using a low-power 2-core laptop, and a lot of CPU was burned between this program and sshd, presumably on encryption. A higher-powered machine may do better.
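Since the cost per copied byte drops as the buffer grows (fewer SFTP requests), one way to probe this is to repeat the copy with several buffer sizes and compare throughput. A minimal timing sketch, reusing the sftp connection, testfile_path, local_path, and file_size from the example above; the particular sizes in the list are just illustrative choices:

# time the same copy with a few different buffer sizes;
# larger buffers mean fewer SFTP round-trips per copied byte
for bs in (32 * 1024, 256 * 1024, 2**20, 4 * 2**20):
    src = sftp.open(testfile_path, bufsize=bs)
    dst = open(local_path, 'wb', bs)
    start = time.time()
    shutil.copyfileobj(src, dst, bs)
    delta = time.time() - start
    src.close()
    dst.close()
    print('bufsize %8d: %.3f seconds, %d KB/Sec' % (bs, delta, file_size / 1024 / delta))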
Could it be something like http://www.programcreek.com/python/example/618/shutil.copyfileobj, 'copyfileobj(sftp_file, open('save.csv','wb'), 1024)'? – sal
I'm confused by the naming conventions... does 'server_address' hold the name of the remote file? Then 'shutil.copyfileobj(sftp_file, open('localfile','wb'))' should do it. You may get better performance by specifying a larger block size (e.g., 'blocksize=1048576', then 'shutil.copyfileobj(sftp_file, open('localfile','wb',blocksize), blocksize)') – tdelaney
@tdelaney Thanks, yes, I've corrected the question to make it clearer – rlartiga
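For completeness, paramiko also ships its own download helper, SFTPClient.get(), which drives the read/write loop itself instead of going through shutil. A minimal sketch, assuming ssh is already connected as in the example above and that remote_path is a placeholder for the file's path on the server:

sftp = ssh.open_sftp()

# optional progress callback; paramiko calls it with
# (bytes transferred so far, total bytes), both ints
def progress(done, total):
    print('%d/%d bytes' % (done, total))

# get() handles buffering and the transfer loop internally
sftp.get(remote_path, local_path, callback=progress)
sftp.close()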