I have a Kafka Connect jar that needs to run as a Docker container. I need to capture all Connect logs in a log file inside the container (ideally under a directory/file such as /etc/kafka/kafka-connect-logs) which can later be pushed to the localhost (on which the Docker engine is running) using Docker volumes. When I changed connect-log4j.properties to append the logs to a log file, I found that no log file was created. If I try the same thing without Docker — running Kafka Connect on a local Linux VM and writing logs to a file by changing connect-log4j.properties — it works perfectly, but it does not work from inside Docker. Any suggestions would be very helpful.
Dockerfile
FROM confluent/platform
COPY Test.jar /usr/local/bin/
COPY kafka-connect-docker.sh /usr/local/bin/
COPY connect-distributed.properties /usr/local/bin/
COPY connect-log4j.properties /etc/kafka/connect-log4j.properties
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-yq", "curl"]
RUN ["chown", "-R", "confluent:confluent", "/usr/local/bin/kafka-connect-docker.sh", "/usr/local/bin/connect-distributed.properties", "/usr/local/bin/Test.jar"]
RUN ["chmod", "+x", "/usr/local/bin/kafka-connect-docker.sh", "/usr/local/bin/connect-distributed.properties", "/usr/local/bin/Test.jar"]
RUN ["chown", "-R", "confluent:confluent", "/etc/kafka/connect-log4j.properties"]
RUN ["chmod", "777", "/usr/local/bin/kafka-connect-docker.sh", "/etc/kafka/connect-log4j.properties"]
EXPOSE 8083
CMD [ "/usr/local/bin/kafka-connect-docker.sh" ]
connect-log4j.properties
# Root logger option
log4j.rootLogger = INFO, FILE
# Direct log messages to a file
log4j.appender.FILE=org.apache.log4j.FileAppender
log4j.appender.FILE.File=/etc/kafka/log.out
# Define the layout for file appender
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%m%n
log4j.logger.org.apache.zookeeper=ERROR
log4j.logger.org.I0Itec.zkclient=ERROR
kafka-connect-docker.sh
#!/bin/bash
export CLASSPATH=/usr/local/bin/Test.jar
exec /usr/bin/connect-distributed /usr/local/bin/connect-distributed.properties
It works fine when I use the default connect-log4j.properties (which appends logs to the console), but I cannot get the log file created inside Docker. Also, the same process works fine (the log file is created) on the local VM without Docker.
Have you tried declaring the volume right away and putting the file directly into the volume folder? Something like VOLUME /etc/kafka in the Dockerfile, and then mapping this volume in docker run? – hecko84
Thanks! That helped :-) –
I also posted the result as an answer, see below. – hecko84
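The approach suggested in the comments can be sketched as follows. Note that bind-mounting directly over /etc/kafka would shadow the connect-log4j.properties copied into the image, so this sketch uses a dedicated subdirectory (matching the /etc/kafka/kafka-connect-logs path from the question); the host path and image name are assumptions for illustration.

```shell
# In connect-log4j.properties, point the appender at the dedicated log directory:
#   log4j.appender.FILE.File=/etc/kafka/kafka-connect-logs/log.out
#
# In the Dockerfile, declare that directory as a volume (after the COPY/RUN steps):
#   VOLUME /etc/kafka/kafka-connect-logs

# At run time, bind-mount a host directory onto the volume path so the log
# file written by the FILE appender is visible on the host.
# /tmp/kafka-connect-logs and my-kafka-connect-image are assumed names.
docker run -d \
  -p 8083:8083 \
  -v /tmp/kafka-connect-logs:/etc/kafka/kafka-connect-logs \
  my-kafka-connect-image

# log.out inside the container then appears on the host as
# /tmp/kafka-connect-logs/log.out
```

Mounting only the log subdirectory (rather than all of /etc/kafka) keeps the image's own config files intact while still exposing the logs to the host.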