In Ansible 2.2, I want to loop over a large number of files read from S3. Ansible supports array slicing.
Here is my role/tasks/main.yml:
```yaml
- name: Simulate variable read from S3
  set_fact:
    list_of_files_to_import: [ "a.tar.gz", "b.tar.gz", "c.tar.gz", "d.tar.gz", "e.tar.gz", "f.tar.gz", "g.tar.gz", "h.tar.gz", ..., "zz.tar.gz" ]

- name: Process each file from S3
  include: submodule.yml
  with_items: list_of_files_to_import
```
Here is role/tasks/submodule.yml:
```yaml
---
- name: Restore TABLE {{ item }}
  debug: var={{ item }}
```
This crashes because there are too many files.

I found that I can slice the array and send one part at a time:
```yaml
- name: Process each file from S3
  include: submodule.yml
  with_items: "{{ list_of_files_to_import[0:5] }}"

- name: Process each file from S3
  include: submodule.yml
  with_items: "{{ list_of_files_to_import[5:10] }}"

- name: Process each file from S3
  include: submodule.yml
  with_items: "{{ list_of_files_to_import[10:15] }}"

- name: Process each file from S3
  include: submodule.yml
  with_items: "{{ list_of_files_to_import[15:20] }}"
```
Instead of hard-coding these chunks, I would like to try something like:
```yaml
- name: Process each file from S3
  include: submodule.yml
  with_items: "{{ list_of_files_to_import[{{start}}:{{end}}] }}"
```
but nesting `{{ }}` expressions to get variable-defined slice bounds is not allowed.
How do I handle a large number of items in Ansible 2.2?
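One way to avoid the hard-coded slice boundaries (a sketch, not from the original post, assuming the standard Jinja2 `batch` filter is available) is to let `batch` split the list into fixed-size chunks:

```yaml
# Sketch: split the list into chunks of 5 with the Jinja2 `batch` filter.
# Each `item` passed to submodule.yml is then a sub-list of up to 5 files,
# so submodule.yml would need its own inner loop over `item`.
- name: Process each chunk of files from S3
  include: submodule.yml
  with_items: "{{ list_of_files_to_import | batch(5) | list }}"
```

This reduces the number of `include` invocations from one per file to one per chunk; whether that avoids the crash depends on what is actually exhausting memory.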
How big is the list? What is the cause of the crash? Too many includes? –

It's on the order of 300 files. The only cause I can see is "out of memory". But yes, it seems it cannot handle that many includes. –
I'm not sure about your details, but it looks like issue [#16391](https://github.com/ansible/ansible/issues/16391). If that's the case, the problem should be fixed in the next Ansible release. – nelsonda