How to recursively upload a folder to AWS S3 using Ansible

(score 3)

I use Ansible to deploy my application. I have reached the point where I want to upload my Grunt assets to a newly created bucket. Here is what I did ({{hostvars.localhost.public_bucket}} is the bucket name, and {{client}}/{{version_id}}/assets/admin is the path to a folder containing several levels of subfolders to upload):

- s3: 
    aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}" 
    aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}" 
    bucket: "{{hostvars.localhost.public_bucket}}" 
    object: "{{client}}/{{version_id}}/assets/admin" 
    src: "{{trunk}}/public/assets/admin" 
    mode: put 

Here is the error message:

fatal: [x.y.z.t]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "s3"}, "module_stderr": "", "module_stdout": "\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 2868, in <module>\r\n main()\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 561, in main\r\n upload_s3file(module, s3, bucket, obj, src, expiry, metadata, encrypt, headers)\r\n File \"/home/ubuntu/.ansible/tmp/ansible-tmp-1468581761.67-193149771659393/s3\", line 307, in upload_s3file\r\n key.set_contents_from_filename(src, encrypt_key=encrypt, headers=headers)\r\n File \"/usr/local/lib/python2.7/dist-packages/boto/s3/key.py\", line 1358, in set_contents_from_filename\r\n with open(filename, 'rb') as fp:\r\nIOError: [Errno 21] Is a directory: '/home/abcd/efgh/public/assets/admin'\r\n", "msg": "MODULE FAILURE", "parsed": false} 

I went through the documentation and did not find a recursive option for the Ansible s3 module. Is this a bug, or am I missing something?

Answers

Answer (score 4):

Since you are using Ansible, it sounds like you want something idempotent, but Ansible does not yet support S3 directory uploads or any kind of recursion, so you should use the AWS CLI for a job like this:

command: "aws s3 cp {{client}}/{{version_id}}/assets/admin s3://{{hostvars.localhost.public_bucket}}/ --recursive" 
Answer (score 3):

The Ansible s3 module does not support directory uploads or any recursion. For this task, I would suggest using s3cmd; check the syntax below.

command: "aws s3 cp {{client}}/{{version_id}}/assets/admin s3://{{hostvars.localhost.public_bucket}}/ --recursive" 
Answer (score 1):

I was able to accomplish this with the s3 module by iterating over the output of a directory listing of the files I wanted to upload. The little inline Python script I run via the command module just prints the complete list of file paths in the directory, formatted as JSON.

- name: upload things
  hosts: localhost
  connection: local

  tasks:
    - name: Get all the files in the directory I want to upload, formatted as a JSON list
      command: python -c 'import os, json; print(json.dumps([os.path.join(dp, f)[2:] for dp, dn, fn in os.walk(".") for f in fn]))'
      args:
        chdir: ../../styles/img
      register: static_files_cmd

    - s3:
        bucket: "{{ bucket_name }}"
        mode: put
        object: "{{ item }}"
        src: "../../styles/img/{{ item }}"
        permission: "public-read"
      with_items: "{{ static_files_cmd.stdout | from_json }}"
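The same pattern can be built without the inline Python by using Ansible's find module instead (a sketch; find returns absolute paths, so the relpath filter turns them back into bucket keys, and img_dir is a hypothetical variable holding the source directory):

- name: upload things via the find module
  hosts: localhost
  connection: local
  vars:
    # hypothetical: point this at the directory you want to upload
    img_dir: "{{ playbook_dir }}/../../styles/img"

  tasks:
    - name: Recursively list every file under the source directory
      find:
        paths: "{{ img_dir }}"
        recurse: yes
      register: static_files

    - s3:
        bucket: "{{ bucket_name }}"
        mode: put
        # strip the source prefix so the key mirrors the on-disk layout
        object: "{{ item.path | relpath(img_dir) }}"
        src: "{{ item.path }}"
        permission: "public-read"
      with_items: "{{ static_files.files }}"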
Answer (score 2):

Since Ansible 2.3, you can use the s3_sync module:

- name: basic upload
  s3_sync:
    bucket: tedder
    file_root: roles/s3/files/

Note: if you are using a non-default region, you should set region explicitly; otherwise you get a somewhat vague error along the lines of: An error occurred (400) when calling the HeadObject operation: Bad Request

Here is a full playbook matching what you were trying to do above:

- hosts: localhost
  vars:
    aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
    aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
    bucket: "{{ hostvars.localhost.public_bucket }}"
  tasks:
    - name: Upload files
      s3_sync:
        aws_access_key: '{{ aws_access_key }}'
        aws_secret_key: '{{ aws_secret_key }}'
        bucket: '{{ bucket }}'
        file_root: "{{ trunk }}/public/assets/admin"
        key_prefix: "{{ client }}/{{ version_id }}/assets/admin"
        permission: public-read
        region: eu-central-1

Notes:

  1. You could probably remove region; I only included it to illustrate the point above.
  2. I added the keys just to be explicit. You can (and probably should) use environment variables instead:

From the docs:

If parameters are not set within the module, the following environment variables can be used in decreasing order of precedence: AWS_URL or EC2_URL, AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY or EC2_ACCESS_KEY, AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY or EC2_SECRET_KEY, AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN, AWS_REGION or EC2_REGION
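With those variables exported in the shell that launches ansible-playbook, the explicit key parameters can be dropped entirely, e.g. (a sketch, using the same paths as above):

- name: Upload files, credentials taken from the environment
  s3_sync:
    bucket: "{{ hostvars.localhost.public_bucket }}"
    file_root: "{{ trunk }}/public/assets/admin"
    key_prefix: "{{ client }}/{{ version_id }}/assets/admin"
    permission: public-read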
