Django storages not detecting changed static files

2013-07-06

I'm using django-storages with Amazon S3 for my static files. Following the documentation, I put these settings in my settings.py:

STATIC_URL = 'https://mybucket.s3.amazonaws.com/' 

ADMIN_MEDIA_PREFIX = 'https://mybucket.s3.amazonaws.com/admin/' 

INSTALLED_APPS += (
    'storages', 
) 

DEFAULT_FILE_STORAGE = 'storages.backends.s3boto.S3BotoStorage' 
AWS_ACCESS_KEY_ID = 'mybucket_key_id' 
AWS_SECRET_ACCESS_KEY = 'mybucket_access_key' 
AWS_STORAGE_BUCKET_NAME = 'mybucket' 
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage' 

The first time I run collectstatic, everything works fine and my static files are uploaded to my S3 bucket.

However, after modifying my static files and running python manage.py collectstatic, I get this, even though the static files were modified:

-----> Collecting static files 
    0 static files copied, 81 unmodified. 

However, if I rename a changed static file, it is copied to my S3 bucket correctly.

Why isn't django-storages uploading my changed static files? Is this a configuration problem, or is the issue deeper?

Answers

Answer (12 votes)

collectstatic skips files when the "target" file is "younger" (more recently modified) than the source file. It looks like the Amazon S3 storage backend is returning the wrong dates for your files.

You could investigate the [code][1] and debug the server response. Perhaps there is a problem with timezones.
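To see why a timezone skew produces exactly this symptom, here is a minimal sketch (hypothetical, not Django's actual source) of the kind of modified-time comparison that makes collectstatic skip a file:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of collectstatic-style skip logic: the copy is skipped
# whenever the target (S3) appears at least as new as the local source file.
def should_skip(source_mtime, target_mtime):
    return target_mtime is not None and target_mtime >= source_mtime

# If local mtimes are EST (UTC-5) but S3 reports UTC, a naive comparison
# shifts the target five hours "younger" than it really is.
source = datetime(2013, 7, 6, 12, 0, 0)                       # local EST wall clock
target = datetime(2013, 7, 6, 10, 0, 0) + timedelta(hours=5)  # older object, read as UTC

print(should_skip(source, target))  # True: the modified file is never re-copied
```

With consistent timezones the same comparison would return False and the modified file would be uploaded.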

Alternatively, you can just pass the --clear option to collectstatic so that all files are deleted from S3 before being collected again.
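For example (--clear is collectstatic's real flag for this; --noinput simply skips the confirmation prompt):

```shell
# Delete all files in the remote storage first, then copy everything afresh.
python manage.py collectstatic --clear --noinput
```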

+0

Thanks, this worked. I'm going to wait a bit to see if anyone posts an exact solution to my problem, but if not, you can have the +50 :) – bab

+0

--clear doesn't seem to work for me with S3. If I delete the files in S3 manually, they all get copied up again. – mgojohn

Answer (5 votes)

https://github.com/antonagestam/collectfast

From the readme: a custom management command that compares the MD5 checksum of each file with the ETag from S3 and skips copying the file if the two are identical. This makes running collectstatic much faster if you use git as a source control system, since git updates timestamps.
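The idea can be sketched in a few lines (a simplified illustration, not Collectfast's actual code; it assumes a plain single-part upload, where S3's ETag is just the quoted hex MD5 of the object body):

```python
import hashlib

# Simplified sketch of checksum-based skipping: compare a local MD5 with the
# ETag S3 reports, instead of trusting timestamps at all.
def needs_upload(local_bytes, s3_etag):
    local_md5 = hashlib.md5(local_bytes).hexdigest()
    return s3_etag is None or s3_etag.strip('"') != local_md5

print(needs_upload(b'body { color: red; }', None))  # True: not on S3 yet
```

Note that for multipart uploads the ETag is not a plain MD5, which is one reason a real tool like Collectfast has to handle more cases than this sketch.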

Answer (0 votes)

This question is a bit old, but I thought I'd share my experience in case it helps someone in the future. Following suggestions found in other threads, I confirmed that, for me, this was indeed caused by a timezone difference. My Django time was not wrong, but it was set to EST while S3 was set to GMT. In testing, I rolled back to django-storages 1.1.5, which did seem to get collectstatic running. Partly due to personal preference, however, I was unwilling to either (a) roll back three versions of django-storages and lose any potential bug fixes, or (b) alter the timezone of components of my project for what essentially boils down to a convenience function (albeit an important one).

I wrote a short script to do the same job as collectstatic without the above alterations. It will need some modification for your own app, but it should work for standard cases if it is placed at the application level and 'static_dirs' is replaced with the names of your project's apps. It runs from the terminal with 'python whatever_you_call_it.py -e environment_name' (set this to your aws bucket). Run it, and collectstatic, with this specific setting:

TIME_ZONE = 'UTC' 

import sys, os, subprocess 
import boto3 
import botocore 
from boto3.session import Session 
import argparse 
import os.path, time 
from datetime import datetime, timedelta 
import pytz 

utc = pytz.UTC 
DEV_BUCKET_NAME = 'dev-homfield-media-root' 
PROD_BUCKET_NAME = 'homfield-media-root' 
static_dirs = ['accounts', 'messaging', 'payments', 'search', 'sitewide'] 

def main(): 
    try: 
        parser = argparse.ArgumentParser(description='Homfield Collectstatic. Our version of collectstatic to fix django-storages bug.\n') 
        parser.add_argument('-e', '--environment', type=str, required=True, help='Name of environment (dev/prod)') 
        args = parser.parse_args() 
        vargs = vars(args) 
        if vargs['environment'] == 'dev': 
            selected_bucket = DEV_BUCKET_NAME 
            print "\nAre you sure? You're about to push to the DEV bucket. (Y/n)" 
        elif vargs['environment'] == 'prod': 
            selected_bucket = PROD_BUCKET_NAME 
            print "Are you sure? You're about to push to the PROD bucket. (Y/n)" 
        else: 
            raise ValueError 

        acceptable = ['Y', 'y', 'N', 'n'] 
        confirmation = raw_input().strip() 
        while confirmation not in acceptable: 
            print "That's an invalid response. (Y/n)" 
            confirmation = raw_input().strip() 

        if confirmation == 'Y' or confirmation == 'y': 
            run(selected_bucket) 
        else: 
            print "Collectstatic aborted." 
    except Exception as e: 
        print type(e) 
        print "An error occurred. S3 staticfiles may not have been updated." 


def run(bucket_name): 

    # open a session with S3 
    session = Session(aws_access_key_id='{aws_access_key_id}', 
        aws_secret_access_key='{aws_secret_access_key}', 
        region_name='us-east-1') 
    s3 = session.resource('s3') 
    bucket = s3.Bucket(bucket_name) 

    # loop through static directories 
    for directory in static_dirs: 
        rootDir = './' + directory + "/static" 
        print('Checking directory: %s' % rootDir) 

        # loop through subdirectories 
        for dirName, subdirList, fileList in os.walk(rootDir): 
            # loop through all files in the subdirectory 
            for fname in fileList: 
                try: 
                    if fname == '.DS_Store': 
                        continue 

                    # find the file's last-modified time and shift it to UTC (EST offset hard-coded) 
                    full_path = dirName + "/" + fname 
                    last_mod_string = time.ctime(os.path.getmtime(full_path)) 
                    file_last_mod = datetime.strptime(last_mod_string, "%a %b %d %H:%M:%S %Y") + timedelta(hours=5) 
                    file_last_mod = utc.localize(file_last_mod) 

                    # truncate the path for the S3 loop; delete and re-upload the object if it has been updated 
                    s3_path = full_path[full_path.find('static'):] 
                    found = False 
                    for key in bucket.objects.all(): 
                        if key.key == s3_path: 
                            found = True 
                            last_mod_date = key.last_modified 
                            if last_mod_date < file_last_mod: 
                                key.delete() 
                                # open in binary mode so non-text assets upload intact 
                                s3.Object(bucket_name, s3_path).put(Body=open(full_path, 'rb'), ContentType=get_mime_type(full_path)) 
                                print "\tUpdated : " + full_path 
                    if not found: 
                        # the file was not found in S3, so it is new; send it up 
                        print "\tFound a new file. Uploading : " + full_path 
                        s3.Object(bucket_name, s3_path).put(Body=open(full_path, 'rb'), ContentType=get_mime_type(full_path)) 
                except Exception: 
                    print "ALERT: Big time problems with: " + full_path + ". I'm bowin' out dawg, this shitz on u." 


def get_mime_type(full_path): 
    try: 
        last_index = full_path.rfind('.') 
        if last_index < 0: 
            return 'application/octet-stream' 
        extension = full_path[last_index:] 
        return { 
            '.js' : 'application/javascript', 
            '.css' : 'text/css', 
            '.txt' : 'text/plain', 
            '.png' : 'image/png', 
            '.jpg' : 'image/jpeg', 
            '.jpeg' : 'image/jpeg', 
            '.eot' : 'application/vnd.ms-fontobject', 
            '.svg' : 'image/svg+xml', 
            '.ttf' : 'application/octet-stream', 
            '.woff' : 'application/x-font-woff', 
            '.woff2' : 'application/octet-stream' 
        }[extension] 
    except KeyError: 
        # fix: the original built this message but never printed it, and returned None 
        print 'ALERT: Couldn\'t match mime type for ' + full_path + '. Sending to S3 as application/octet-stream.' 
        return 'application/octet-stream' 

if __name__ == '__main__': 
    main() 
Answer (2 votes)

Create a settings file that is used only for the collectstatic sync, with this configuration, and run:

python manage.py collectstatic --settings=settings.collectstatic 
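A minimal sketch of what such a settings file could look like (the module paths and names here are assumptions; adapt them to your project layout):

```python
# settings/collectstatic.py -- hypothetical override settings used only when
# syncing static files to S3, so timestamp comparisons are done in UTC.
from settings.base import *  # noqa: F401,F403 -- inherit the normal settings

TIME_ZONE = 'UTC'  # match S3's clock so collectstatic's mtime check lines up
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
```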
+1

This solved my problem. – kmomo