mongodb service crashes after serverStatus was very slow

2017-02-20 · 96 views · score 1

I have a MongoDB service on EC2. It crashes on its own after some time. When I run systemctl status mongodb, I get the following output:

● mongodb.service - High-performance, schema-free document-oriented database
   Loaded: loaded (/etc/systemd/system/mongodb.service; enabled; vendor preset: enabled)
   Active: failed (Result: signal) since Tue 2017-02-17 02:00:00 UTC; 3h 37min ago
 Main PID: 1152 (code=killed, signal=KILL)

Feb 17 02:00:00 ip-172-31-29-240 systemd[1]: mongodb.service: Main process exited, code=killed, status=9/KILL
Feb 17 02:00:00 ip-172-31-29-240 systemd[1]: mongodb.service: Unit entered failed state.
Feb 17 02:00:00 ip-172-31-29-240 systemd[1]: mongodb.service: Failed with result 'signal'.
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
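Since systemd reports code=killed, signal=KILL without anyone stopping the service, the kernel OOM killer is a likely suspect. A quick way to check, sketched here assuming a standard Linux host (the grep patterns are common OOM-killer messages, not anything specific to this box):

```shell
# Look for OOM-killer activity in the kernel ring buffer; the fallback
# message just makes the result explicit when nothing matches.
dmesg -T 2>/dev/null | grep -iE 'out of memory|oom-killer|killed process' \
  || echo "no OOM events found in dmesg"
# The same kernel messages also land in the systemd journal:
journalctl -k --no-pager 2>/dev/null | grep -i 'out of memory' \
  || echo "no OOM events found in journal"
```

If an OOM event names the mongod PID, the crash is a memory problem on the host rather than a MongoDB bug.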

When I checked /var/log/mongodb/mongod.log, I found this:

2017-02-20T11:11:05.624+0000 I CONTROL [main] ***** SERVER RESTARTED ***** 
2017-02-20T11:11:05.638+0000 I CONTROL [initandlisten] MongoDB starting : pid=29177 port=27017 dbpath=/var/lib/mongodb 64-bit host=ip-172-31-29-240 
2017-02-20T11:11:05.638+0000 I CONTROL [initandlisten] db version v3.2.11 
2017-02-20T11:11:05.638+0000 I CONTROL [initandlisten] git version: 009580ad490190ba33d1c6253ebd8d91808923e4 
2017-02-20T11:11:05.638+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016 
2017-02-20T11:11:05.638+0000 I CONTROL [initandlisten] allocator: tcmalloc 
2017-02-20T11:11:05.638+0000 I CONTROL [initandlisten] modules: none 
2017-02-20T11:11:05.638+0000 I CONTROL [initandlisten] build environment: 
2017-02-20T11:11:05.638+0000 I CONTROL [initandlisten]  distmod: ubuntu1604 
2017-02-20T11:11:05.638+0000 I CONTROL [initandlisten]  distarch: x86_64 
2017-02-20T11:11:05.638+0000 I CONTROL [initandlisten]  target_arch: x86_64 
2017-02-20T11:11:05.638+0000 I CONTROL [initandlisten] options: { config: "/etc/mongod.conf", net: { bindIp: "127.0.0.1", port: 27017 }, storage: { dbPath: "/var/lib/mongodb", journal: { enabled: true } }, systemLog: { destination: "file", logAppend: true, path: "/var/log/mongodb/mongod.log", quiet: true } } 
2017-02-20T11:11:05.665+0000 I -  [initandlisten] Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'. 
2017-02-20T11:11:05.665+0000 W -  [initandlisten] Detected unclean shutdown - /var/lib/mongodb/mongod.lock is not empty. 
2017-02-20T11:11:05.665+0000 W STORAGE [initandlisten] Recovering data from the last clean checkpoint. 
2017-02-20T11:11:05.665+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=1G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0), 
2017-02-20T11:11:06.246+0000 I CONTROL [initandlisten] 
2017-02-20T11:11:06.246+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. 
2017-02-20T11:11:06.246+0000 I CONTROL [initandlisten] **  We suggest setting it to 'never' 
2017-02-20T11:11:06.246+0000 I CONTROL [initandlisten] 
2017-02-20T11:11:06.246+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'. 
2017-02-20T11:11:06.246+0000 I CONTROL [initandlisten] **  We suggest setting it to 'never' 
2017-02-20T11:11:06.246+0000 I CONTROL [initandlisten] 
2017-02-20T11:11:06.280+0000 I FTDC  [initandlisten] Initializing full-time diagnostic data capture with directory '/var/lib/mongodb/diagnostic.data' 
2017-02-20T11:11:06.280+0000 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker 
2017-02-20T11:11:06.283+0000 I NETWORK [initandlisten] waiting for connections on port 27017 
2017-02-20T11:11:28.189+0000 I COMMAND [conn3] command myDataBase.myCollection command: aggregate { aggregate: "myCollection", pipeline: [ { $match: { myFlag: true, timestamp: { $lt: new Date(1485907200000) } } }, { $group: { _id: null, startDate: { $min: "$timestamp" }, previousCount: { $sum: 1 } } }, { $project: { _id: 0, startDate: 1, previousCount: 1 } } ] } keyUpdates:0 writeConflicts:0 numYields:236 reslen:114 locks:{ Global: { acquireCount: { r: 478 } }, Database: { acquireCount: { r: 239 } }, Collection: { acquireCount: { r: 239 } } } protocol:op_query 2605ms 
2017-02-20T11:11:28.192+0000 I COMMAND [conn8] command myDataBase.myCollection command: aggregate { aggregate: "myCollection", pipeline: [ { $match: { myFlag: true, timestamp: { $lt: new Date(1485907200000) } } }, { $group: { _id: null, startDate: { $min: "$timestamp" }, previousCount: { $sum: 1 } } }, { $project: { _id: 0, startDate: 1, previousCount: 1 } } ] } keyUpdates:0 writeConflicts:0 numYields:244 reslen:114 locks:{ Global: { acquireCount: { r: 494 } }, Database: { acquireCount: { r: 247 } }, Collection: { acquireCount: { r: 247 } } } protocol:op_query 2764ms 
2017-02-20T11:11:28.235+0000 I COMMAND [conn5] command myDataBase.myCollection command: aggregate { aggregate: "myCollection", pipeline: [ { $match: { timestamp: { $gte: new Date(1484910685417) } } }, { $group: { _id: { $dateToString: { format: "%Y-%m-%d", date: "$timestamp" } }, count: { $sum: 1 } } }, { $sort: { _id: 1 } } ] } keyUpdates:0 writeConflicts:0 numYields:244 reslen:538 locks:{ Global: { acquireCount: { r: 494 } }, Database: { acquireCount: { r: 247 } }, Collection: { acquireCount: { r: 247 } } } protocol:op_query 2817ms 
2017-02-20T11:11:28.303+0000 I COMMAND [conn8] command myDataBase.myCollection command: aggregate { aggregate: "myCollection", pipeline: [ { $match: { timestamp: { $gte: new Date(1484910685581) } } }, { $group: { _id: { $dateToString: { format: "%Y-%m-%d", date: "$timestamp" } }, count: { $sum: 1 } } }, { $sort: { _id: 1 } } ] } keyUpdates:0 writeConflicts:0 numYields:144 reslen:538 locks:{ Global: { acquireCount: { r: 294 } }, Database: { acquireCount: { r: 147 } }, Collection: { acquireCount: { r: 147 } } } protocol:op_query 110ms 
2017-02-20T11:11:28.438+0000 I COMMAND [conn3] command myDataBase.myCollection command: aggregate { aggregate: "myCollection", pipeline: [ { $match: { timestamp: { $gte: new Date(1484910688306) } } }, { $group: { _id: { $dateToString: { format: "%Y-%m-%d", date: "$timestamp" } }, count: { $sum: 1 } } }, { $sort: { _id: 1 } } ] } keyUpdates:0 writeConflicts:0 numYields:144 reslen:538 locks:{ Global: { acquireCount: { r: 294 } }, Database: { acquireCount: { r: 147 } }, Collection: { acquireCount: { r: 147 } } } protocol:op_query 129ms 
2017-02-20T11:11:28.444+0000 I COMMAND [conn5] command myDataBase.myCollection command: aggregate { aggregate: "myCollection", pipeline: [ { $match: { timestamp: { $gte: new Date(1484910688272) } } }, { $group: { _id: { $dateToString: { format: "%Y-%m-%d", date: "$timestamp" } }, count: { $sum: 1 } } }, { $sort: { _id: 1 } } ] } keyUpdates:0 writeConflicts:0 numYields:144 reslen:538 locks:{ Global: { acquireCount: { r: 294 } }, Database: { acquireCount: { r: 147 } }, Collection: { acquireCount: { r: 147 } } } protocol:op_query 154ms 
2017-02-20T11:12:15.504+0000 I COMMAND [conn1] command myDataBase.myCollection command: aggregate { aggregate: "myCollection", pipeline: [ { $match: { timestamp: { $gte: new Date(1484910735374) } } }, { $group: { _id: { $dateToString: { format: "%Y-%m-%d", date: "$timestamp" } }, count: { $sum: 1 } } }, { $sort: { _id: 1 } } ] } keyUpdates:0 writeConflicts:0 numYields:143 reslen:538 locks:{ Global: { acquireCount: { r: 292 } }, Database: { acquireCount: { r: 146 } }, Collection: { acquireCount: { r: 146 } } } protocol:op_query 128ms 
2017-02-20T11:12:15.582+0000 I COMMAND [conn5] command myDataBase.myCollection command: aggregate { aggregate: "myCollection", pipeline: [ { $match: { timestamp: { $gte: new Date(1484910735439) } } }, { $group: { _id: { $dateToString: { format: "%Y-%m-%d", date: "$timestamp" } }, count: { $sum: 1 } } }, { $sort: { _id: 1 } } ] } keyUpdates:0 writeConflicts:0 numYields:145 reslen:538 locks:{ Global: { acquireCount: { r: 296 } }, Database: { acquireCount: { r: 148 } }, Collection: { acquireCount: { r: 148 } } } protocol:op_query 140ms 
2017-02-20T11:12:15.588+0000 I COMMAND [conn8] command myDataBase.myCollection command: aggregate { aggregate: "myCollection", pipeline: [ { $match: { timestamp: { $gte: new Date(1484910735428) } } }, { $group: { _id: { $dateToString: { format: "%Y-%m-%d", date: "$timestamp" } }, count: { $sum: 1 } } }, { $sort: { _id: 1 } } ] } keyUpdates:0 writeConflicts:0 numYields:144 reslen:538 locks:{ Global: { acquireCount: { r: 294 } }, Database: { acquireCount: { r: 147 } }, Collection: { acquireCount: { r: 147 } } } protocol:op_query 155ms 
2017-02-20T11:12:24.765+0000 I COMMAND [conn3] query myDataBase.myCollection query: { $query: {}, orderby: { timestamp: -1 } } planSummary: COLLSCAN, COLLSCAN cursorid:47222766812 ntoreturn:200 ntoskip:0 keysExamined:0 docsExamined:18423 hasSortStage:1 keyUpdates:0 writeConflicts:0 numYields:145 nreturned:37 reslen:4578802 locks:{ Global: { acquireCount: { r: 292 } }, Database: { acquireCount: { r: 146 } }, Collection: { acquireCount: { r: 146 } } } 177ms 
2017-02-20T11:12:25.268+0000 I COMMAND [conn3] getmore myDataBase.myCollection planSummary: COLLSCAN, COLLSCAN cursorid:47222766812 ntoreturn:200 keysExamined:0 docsExamined:18423 keyUpdates:0 writeConflicts:0 numYields:151 nreturned:200 reslen:2664936 locks:{ Global: { acquireCount: { r: 304 } }, Database: { acquireCount: { r: 152 } }, Collection: { acquireCount: { r: 152 } } } 321ms 
2017-02-20T11:12:25.518+0000 I COMMAND [conn3] command myDataBase.myCollection command: aggregate { aggregate: "myCollection", pipeline: [ { $match: { myFlag: true, timestamp: { $lt: new Date(1485907200000) } } }, { $group: { _id: null, startDate: { $min: "$timestamp" }, previousCount: { $sum: 1 } } }, { $project: { _id: 0, startDate: 1, previousCount: 1 } } ] } keyUpdates:0 writeConflicts:0 numYields:144 reslen:114 locks:{ Global: { acquireCount: { r: 294 } }, Database: { acquireCount: { r: 147 } }, Collection: { acquireCount: { r: 147 } } } protocol:op_query 173ms 
2017-02-20T11:12:25.691+0000 I COMMAND [conn3] command myDataBase.compressionstats command: aggregate { aggregate: "compressionstats", pipeline: [ { $group: { _id: null, originalDataSum: { $sum: "$originalDataSize" }, compressedDataSum: { $sum: "$compressedDataSize" } } } ] } keyUpdates:0 writeConflicts:0 numYields:2 reslen:125 locks:{ Global: { acquireCount: { r: 10 } }, Database: { acquireCount: { r: 5 } }, Collection: { acquireCount: { r: 5 } } } protocol:op_query 127ms 
2017-02-20T11:12:26.525+0000 I WRITE [conn3] update myDataBase.sessions query: { _id: "doOtZATLM5R61__wpN-PmFH_80BMIpFH" } update: { _id: "doOtZATLM5R61__wpN-PmFH_80BMIpFH", session: "{"cookie":{"originalMaxAge":null,"expires":null,"httpOnly":true,"path":"/"},"passport":{"user":"587cc6630bd9ff5c39e54e94"}}", expires: new Date(1488798745083) } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keyUpdates:1 writeConflicts:0 numYields:1 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } 601ms 
2017-02-20T11:12:26.532+0000 I COMMAND [conn2] command secondDB.secondCollection command: getMore { getMore: 16340341641, collection: "secondCollection", batchSize: 1000 } planSummary: COLLSCAN cursorid:16340341641 keysExamined:0 docsExamined:0 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:0 reslen:103 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } } } protocol:op_query 1372ms 
2017-02-20T11:12:26.570+0000 I COMMAND [conn3] command myDataBase.$cmd command: update { update: "sessions", writeConcern: { w: 1 }, ordered: true, updates: [ { q: { _id: "doOtZATLM5R61__wpN-PmFH_80BMIpFH" }, u: { _id: "doOtZATLM5R61__wpN-PmFH_80BMIpFH", session: "{"cookie":{"originalMaxAge":null,"expires":null,"httpOnly":true,"path":"/"},"passport":{"user":"587cc6630bd9ff5c39e54e94"}}", expires: new Date(1488798745083) }, multi: false, upsert: true } ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:55 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_query 702ms 
2017-02-20T11:12:26.905+0000 I COMMAND [conn3] query myDataBase.users query: { _id: ObjectId('587cc6630bd9ff5c39e54e94') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:1 nreturned:1 reslen:524 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } } } 204ms 
2017-02-20T11:12:27.126+0000 I COMMAND [conn3] killcursors myDataBase.myCollection keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 86ms 
2017-02-20T11:12:33.002+0000 I COMMAND [ftdc] serverStatus was very slow: { after basic: 120, after asserts: 170, after connections: 370, after extra_info: 580, after globalLock: 690, after locks: 990, after network: 1070, after opcounters: 1170, after opcountersRepl: 1240, after storageEngine: 1680, after tcmalloc: 1910, after wiredTiger: 2390, at end: 2820 } 
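One fix is suggested by the log itself: the two startup warnings recommend disabling transparent hugepages. A host-configuration sketch, assuming root access and the sysfs paths named in the warnings (standard on Ubuntu 16.04):

```shell
# Disable transparent hugepages, as the mongod startup warnings recommend.
# Needs root; not persistent across reboots unless placed in an init
# script or rc.local.
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
```

Separately, the repeated COLLSCAN entries above filter and sort on timestamp; an index on that field (for example db.myCollection.createIndex({ timestamp: -1 }) in the mongo shell, using the placeholder names from the log) would avoid scanning every document on each of those queries.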

Note: the service status output and the log are from different days; the problem is the same, though, I just didn't copy the service status output this time.

I am having the same problem. –

Try increasing the memory; that is what I did right after posting the question, and the problem has not come back since. –

The hard disk already has 14 GB of space on the AWS server. –

Answer

I had the same problem. The error went away after I increased the memory.
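When more RAM is not immediately available, a swap file is a common stopgap for mongod being OOM-killed on small EC2 instances. This is a variant of the "increase memory" fix, not what the answer itself did; a sketch, assuming root and an arbitrary 2 GB size:

```shell
# Create and enable a 2 GB swap file (stopgap for OOM kills on small instances).
sudo fallocate -l 2G /swapfile        # or: dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

Swap only delays the OOM killer and slows WiredTiger noticeably under pressure, so treat it as a bridge until the instance is resized.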

I am having the same problem too. –