
PHP-FPM 7.1 socket leak causing NGINX 504 Gateway Timeout

I use Laravel Forge to provision my EC2 environment, which sets up a LEMP stack for me. I recently started getting 504 timeouts on requests.

I don't have a sysadmin (hence the Forge subscription), but looking through the logs I narrowed the problem down to these repeating entries:

In: /var/log/nginx/default-error.log

2017/09/15 09:32:17 [error] 2308#2308: *1 upstream timed out (110: Connection timed out) while sending request to upstream, client: x.x.x.x, server: xxxx.com, request: "POST /upload HTTP/2.0", upstream: "fastcgi://unix:/var/run/php/php7.1-fpm.sock", host: "xxxx.com", referrer: "https://xxxx.com/rest/of/the/path" 

In: /var/log/php7.1-fpm-log

[15-Sep-2017 09:35:09] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 0 idle, and 14 total children 

It seems like FPM opens connections that never die, and from my RDS load logs I can see that RAM is constantly maxed out.

What I have tried so far:

  • Rolling back to a known stable version of my application (from 2 months ago)
  • Reinstalling my EC2 instance with PHP 5.6, 7.0 and 7.1 (along with their respective FPM packages)
  • Doing all of the above on both Ubuntu 14.04 and 16.04
  • Creating a bigger RDS instance

The only thing that works right now is a beefier RDS instance (8 GB of RAM) plus killing the FPM pool workers every 300 requests. But obviously throwing resources at the problem is not a solution.
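
For reference, the "kill the pool every 300 requests" part of that workaround is presumably the pm.max_requests directive visible in the pool config below: it recycles each FPM worker after it has served that many requests, which releases whatever the worker has leaked but doesn't fix the leak itself.

; /etc/php/7.1/fpm/pool.d/www.conf
; recycle each worker after it has handled 300 requests
pm.max_requests = 300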

Here is my /etc/php/7.1/fpm/pool.d/www.conf:

user = forge 
group = forge 
listen = /run/php/php7.1-fpm.sock 
listen.owner = www-data 
listen.group = www-data 
listen.mode = 0666 
pm = dynamic 
pm.max_children = 30 
pm.start_servers = 7 
pm.min_spare_servers = 6 
pm.max_spare_servers = 10 
pm.process_idle_timeout = 7s; 
pm.max_requests = 300 
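
For anyone debugging the same thing, php-fpm can also expose a per-pool status page and log slow requests, which makes it much easier to see what the busy children are stuck on. A sketch (the /status path, the 5s threshold and the slowlog path are example values, not part of my config above):

; hypothetical diagnostic additions to /etc/php/7.1/fpm/pool.d/www.conf
; exposes counters such as active/idle children and the listen queue
pm.status_path = /status
; dump a stack trace for any request that runs longer than 5 seconds
request_slowlog_timeout = 5s
slowlog = /var/log/php7.1-fpm-www-slow.log

The status page also needs a matching nginx location that forwards /status to the same php7.1-fpm.sock socket, ideally restricted to localhost.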

And here is my nginx config:

listen 80; 
listen [::]:80; 
listen 443 ssl http2; 
listen [::]:443 ssl http2; 
server_name xxxx.com; 
root /home/forge/xxxx.com/public; 

# FORGE SSL (DO NOT REMOVE!) 
ssl_certificate /etc/nginx/ssl/xxxx.com/111111/server.crt; 
ssl_certificate_key /etc/nginx/ssl/xxxx.com/111111/server.key; 

ssl_protocols xxxx; 
ssl_ciphers ...; 
ssl_prefer_server_ciphers on; 
ssl_dhparam /etc/nginx/dhparams.pem; 

add_header X-Frame-Options "SAMEORIGIN"; 
add_header X-XSS-Protection "1; mode=block"; 
add_header X-Content-Type-Options "nosniff"; 

index index.html index.htm index.php; 

charset utf-8; 

# FORGE CONFIG (DOT NOT REMOVE!) 
include forge-conf/xxxx.com/server/*; 

location / { 
    try_files $uri $uri/ /index.php?$query_string; 
} 

location = /favicon.ico { access_log off; log_not_found off; } 
location = /robots.txt  { access_log off; log_not_found off; } 

access_log /var/log/nginx/xxxx.com-access.log; 
error_log /var/log/nginx/xxxx.com-error.log error; 

error_page 404 /index.php; 

location ~ \.php$ { 
    fastcgi_split_path_info ^(.+\.php)(/.+)$; 
    fastcgi_pass unix:/var/run/php/php7.1-fpm.sock; 
    fastcgi_index index.php; 
    fastcgi_read_timeout 60; 
    include fastcgi_params; 
} 

location ~ /\.(?!well-known).* { 
    deny all; 
} 

location ~* \.(?:ico|css|js|gif|jpe?g|png)$ { 
    expires 30d; 
    add_header Pragma public; 
    add_header Cache-Control "public"; 
} 
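
The 504s themselves map to the fastcgi timeouts in the PHP location above: when every FPM child is busy (the "0 idle" warning from the FPM log), nginx can't get the request handled through php7.1-fpm.sock within fastcgi_send_timeout / fastcgi_read_timeout (60s by default, with fastcgi_read_timeout 60 set explicitly here), so it gives up and returns 504. Raising them is only a stop-gap, but for completeness this is where they would go (the 120s values are arbitrary):

location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
    fastcgi_index index.php;
    # more headroom for slow requests while debugging; not a fix for the leak itself
    fastcgi_send_timeout 120;
    fastcgi_read_timeout 120;
    include fastcgi_params;
}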

See if this helps: http://www.techietown.info/2017/01/tuning-nginx-php-fpm-for-high-traffic/ –

Answer


OK, after a lot of debugging and testing, I noticed a few causes for this:

  • The primary cause for me: the AWS RDS instance I was using for MySQL had 500 MB of memory. In retrospect, all of these problems started once the database size went past 400 MB.

    • Solution: make sure you always have at least 2x your database size in memory. Otherwise the whole B+ tree can't fit in RAM, so it does constant swapping, which can push query times past 15 seconds (see the quick check after this list).
  • The main cause of similar issues: unoptimized SQL queries.

    • Solution: keep the data on your localhost similar in size to the data on your server.
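
A quick way to check whether you are in the same situation (generic MySQL queries, not something specific to this setup): compare the total data + index size against the InnoDB buffer pool your RDS instance class actually gives you.

-- total data + index size per schema, in MB
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
GROUP BY table_schema;

-- how much of that can actually stay in memory
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

If the working set is noticeably larger than the buffer pool, queries start hitting disk and the FPM children pile up waiting on them, which is exactly the pattern above.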