
Duplicity - unable to restore a single file

I am trying to restore a single file or directory from my Duplicity backups on Amazon S3, but I get the following error:

Local and Remote metadata are synchronized, no sync needed. 
Last full backup date: none 
Traceback (most recent call last): 
    File "/usr/bin/duplicity", line 1251, in <module> 
    with_tempdir(main) 
    File "/usr/bin/duplicity", line 1244, in with_tempdir 
    fn() 
    File "/usr/bin/duplicity", line 1198, in main 
    restore(col_stats) 
    File "/usr/bin/duplicity", line 538, in restore 
    restore_get_patched_rop_iter(col_stats)): 
    File "/usr/bin/duplicity", line 560, in restore_get_patched_rop_iter 
    backup_chain = col_stats.get_backup_chain_at_time(time) 
    File "/usr/lib/python2.6/dist-packages/duplicity/collections.py", line 934, in get_backup_chain_at_time 
    raise CollectionsError("No backup chains found") 
CollectionsError: No backup chains found 

What am I doing wrong?
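A single-file restore invocation generally takes the shape sketched below (path/to/file and the /tmp target are placeholders, and the bucket URL is masked as elsewhere in this post):

export PASSPHRASE=****
export AWS_ACCESS_KEY_ID=****
export AWS_SECRET_ACCESS_KEY=****

# --file-to-restore takes a path relative to the backup root (here /home/u54433)
duplicity --file-to-restore path/to/file s3+http://********** /tmp/restored-file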

Here is how I make the backup:

export PASSPHRASE=****
export AWS_ACCESS_KEY_ID=****
export AWS_SECRET_ACCESS_KEY=****
GPG_KEY=****
BACKUP_SIM_RUN=1

LOGFILE="/var/log/s3-backup.log" 
DAILYLOGFILE="/var/log/s3-backup-daily.log" 

# The source of your backup 
SOURCE=/home/u54433 

# The destination 
DEST=s3+http://********** 


trace() { 
     stamp=`date +%Y-%m-%d_%H:%M:%S` 
     echo "$stamp: $*" >> ${DAILYLOGFILE} 
} 

cat /dev/null > ${DAILYLOGFILE} 

trace "removing old backups..." 
duplicity remove-older-than 2M --force --sign-key=${GPG_KEY} ${DEST} >> ${DAILYLOGFILE} 2>&1 

trace "start backup files..." 
duplicity --sign-key=${GPG_KEY} --exclude="**/logs" --s3-european-buckets --s3-use-new-style ${SOURCE} ${DEST} >> ${DAILYLOGFILE} 2>&1 

cat "$DAILYLOGFILE" >> $LOGFILE 

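# Clear the credentials from the environment once the backup has finished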
export PASSPHRASE= 
export AWS_ACCESS_KEY_ID= 
export AWS_SECRET_ACCESS_KEY= 

Answers

Answer 1 (2 votes):

Use --s3-use-new-style on all duplicity invocations, not just the backup call.

I had the same problem as you. I added the missing option to the duplicity remove-older-than call as well, and now everything works fine.
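As a sketch, applying that to the commands from the question would mean passing the S3 flags on the restore and remove-older-than calls too (same masked values; path/to/file is a placeholder):

duplicity --s3-european-buckets --s3-use-new-style \
    --file-to-restore path/to/file \
    s3+http://********** /tmp/restored-file

duplicity remove-older-than 2M --force --s3-european-buckets --s3-use-new-style \
    --sign-key=${GPG_KEY} s3+http://**********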

Answer 2 (0 votes):

For anyone coming back to this question looking for a definitive answer: @shaikh-systems' link led me to realize that there is some issue with IAM sub-account keys in the Duplicity/AWS communication. To restore, I worked around it by using/exporting my primary account key/secret instead. I am using duplicity 0.6.21.
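A sketch of that workaround, assuming the same masked setup as the question: export the primary-account key/secret instead of the IAM sub-account keys before restoring.

# Primary (root) account credentials, not IAM sub-account keys
export AWS_ACCESS_KEY_ID=****
export AWS_SECRET_ACCESS_KEY=****
export PASSPHRASE=****

duplicity --s3-european-buckets --s3-use-new-style \
    s3+http://********** /home/u54433/restored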