
I'm running a Spring/Hibernate setup connected to MySQL, with c3p0 as the connection pool. For some strange reason, when the system is under load (of course), it runs out of database connections!

The site was fairly stable until we started hitting a new level of traffic (100+ concurrent users). At that point the database would melt down (peg the CPU). My first course of action was to improve the application's performance through extensive caching, query optimization, and so on.

Now it intermittently runs out of connections, and it doesn't even seem to depend on load. More and more this makes me think it's a leak, but for the life of me I can't figure out where it would be coming from.

WARN [2011-03-07 17:19:42,409] [TP-Processor38] (JDBCExceptionReporter.java:100) - SQL Error: 0, SQLState: null 
ERROR [2011-03-07 17:19:42,409] [TP-Processor38] (JDBCExceptionReporter.java:101) - An attempt by a client to checkout a Connection has timed out. 
ERROR [2011-03-07 17:19:42,410] [TP-Processor38] (HttpHeadFilter.java:46) - There was a problem passing thru filter:/is-this-guy-crazy-or-just-a-huge-dancing-with-the-stars-fan 
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.hibernate.exception.GenericJDBCException: could not execute query 
     at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:659) 
     at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:552) 
     at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) 
     at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) 
     at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) 
     at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) 
     at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:343) 
     at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:109) 

Caused by: java.sql.SQLException: An attempt by a client to checkout a Connection has timed out. 
    at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:106) 
    at com.mchange.v2.sql.SqlUtils.toSQLException(SqlUtils.java:65) 
    at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool.checkoutPooledConnection(C3P0PooledConnectionPool.java:527) 
    at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128) 

Here is my configuration:

<bean id="dataSource" class="org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy"> 
     <property name="targetDataSource" ref="rootDataSource" /> 
    </bean> 
    <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> 
     <property name="mappingLocations" value="classpath:hibernate-mapping.xml" /> 
     <property name="hibernateProperties"> 
      <props> 
       <prop key="hibernate.connection.provider_class">net.sf.hibernate.connection.C3P0ConnectionProvider</prop> 
       <prop key="hibernate.dialect">${hibernate.dialect}</prop> 
       <prop key="hibernate.show_sql">${hibernate.show_sql}</prop> 
       <prop key="hibernate.cache.use_second_level_cache">true</prop> 
       <prop key="hibernate.cache.use_query_cache">true</prop> 
       <prop key="hibernate.cache.generate_statistics">true</prop> 
       <prop key="hibernate.cache.provider_class">net.sf.ehcache.hibernate.EhCacheProvider</prop> 
       <prop key="hibernate.generate_statistics">${hibernate.generate_statistics}</prop> 
       <prop key="hibernate.connection.zeroDateTimeBehavior">convertToNull</prop> 
       <prop key="hibernate.bytecode.use_reflection_optimizer">${hibernate.bytecode.use_reflection_optimizer}</prop> 
       <!--<prop key="hibernate.hbm2ddl.auto">${hibernate.hbm2ddl.auto}</prop>--> 
       <prop key="hibernate.jdbc.batch_size">${hibernate.jdbc.batch_size}</prop> 

       <!--Actually, it seems the following property affects batch size (or explicit per relationship in the mapping)--> 
       <!--<prop key="hibernate.default_batch_fetch_size">${hibernate.jdbc.batch_size}</prop>--> 
      </props> 
     </property> 
     <property name="dataSource" ref="dataSource" /> 
    </bean> 

    <bean id="rootDataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"> 
     <property name="driverClass" value="${jdbc.driver}" /> 
     <property name="jdbcUrl" value="${jdbc.url}" /> 
     <property name="user" value="${jdbc.username}" /> 
     <property name="password" value="${jdbc.password}" /> 
     <property name="initialPoolSize" value="20" /> 
     <property name="maxPoolSize" value="200" /> 
     <property name="checkoutTimeout" value="30000" /> 
     <property name="maxStatements" value="180" /> 

     <property name="minPoolSize"> 
      <value>${hibernate.c3p0.minPoolSize}</value> 
     </property> 
     <property name="acquireRetryAttempts"> 
      <value>${hibernate.c3p0.acquireRetryAttempts}</value> 
     </property> 
     <property name="acquireIncrement"> 
      <value>${hibernate.c3p0.acquireIncrement}</value> 
     </property> 
     <property name="idleConnectionTestPeriod"> 
      <value>${hibernate.c3p0.idleConnectionTestPeriod}</value> 
     </property> 
     <property name="maxIdleTime"> 
      <value>${hibernate.c3p0.maxIdleTime}</value> 
     </property> 
     <property name="maxIdleTimeExcessConnections"> 
      <value>${hibernate.c3p0.maxIdleTimeExcessConnections}</value> 
     </property> 
     <property name="maxConnectionAge"> 
      <value>${hibernate.c3p0.maxConnectionAge}</value> 
     </property> 
     <property name="preferredTestQuery"> 
      <value>${hibernate.c3p0.preferredTestQuery}</value> 
     </property> 
     <property name="testConnectionOnCheckin"> 
      <value>${hibernate.c3p0.testConnectionOnCheckin}</value> 
     </property> 
     <property name="numHelperThreads"> 
      <value>${hibernate.c3p0.numHelperThreads}</value> 
     </property> 
     <property name="unreturnedConnectionTimeout"> 
      <value>${hibernate.c3p0.unreturnedConnectionTimeout}</value> 
     </property> 
     <property name="debugUnreturnedConnectionStackTraces"> 
      <value>${hibernate.c3p0.debugUnreturnedConnectionStackTraces}</value> 
     </property> 
     <property name="automaticTestTable"> 
      <value>${hibernate.c3p0.automaticTestTable}</value> 
     </property> 
    </bean> 
And the property values being substituted in:

hibernate.c3p0.acquireIncrement=5 
hibernate.c3p0.minPoolSize=20 
hibernate.c3p0.acquireRetryAttempts=30 
hibernate.c3p0.idleConnectionTestPeriod=3600 
hibernate.c3p0.maxIdleTime=7200 
hibernate.c3p0.maxIdleTimeExcessConnections=1800  
hibernate.c3p0.maxConnectionAge=14400 
hibernate.c3p0.preferredTestQuery=select 1; 
hibernate.c3p0.testConnectionOnCheckin=false 
hibernate.c3p0.numHelperThreads=6 
hibernate.c3p0.unreturnedConnectionTimeout=0 
hibernate.c3p0.debugUnreturnedConnectionStackTraces=true 
hibernate.c3p0.automaticTestTable=test_connection; 

I'm running the OpenSessionInViewInterceptor, which should be closing the connections:

<bean id="openSessionInViewInterceptor" class="org.springframework.orm.hibernate3.support.OpenSessionInViewInterceptor"> 
    <property name="sessionFactory"> 
     <ref bean="sessionFactory" /> 
    </property> 
    <property name="flushModeName"> 
     <value>FLUSH_AUTO</value> 
    </property> 

</bean> 

I'm also using Spring's @Transactional annotation, because I reuse my services in non-web front-end code.
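For reference, a typical service looks roughly like this (a simplified sketch rather than my actual code; the entity and method names are made up):

    import java.util.List; 

    import org.hibernate.SessionFactory; 
    import org.springframework.beans.factory.annotation.Autowired; 
    import org.springframework.stereotype.Service; 
    import org.springframework.transaction.annotation.Transactional; 

    @Service 
    public class VideoService { 

        @Autowired 
        private SessionFactory sessionFactory; 

        // Runs in a Spring-managed transaction whether it is called from a 
        // controller or from the non-web front end. 
        @Transactional(readOnly = true) 
        public List<?> findRecent(int max) { 
            return sessionFactory.getCurrentSession() 
                    .createQuery("from Video v order by v.created desc") // "Video" is a mapped entity 
                    .setMaxResults(max) 
                    .list(); 
        } 
    } 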

There really are only two options here: either it's not releasing connections when it's done with them, or it's checking out far more connections than it could possibly need. If anyone has any ideas I'd greatly appreciate it. Thanks!

Follow-up: It ultimately turned out that I was leaking connections because of the OpenSessionInViewInterceptor. My Spring Security setup runs as a filter, so it was checking out database connections before the interceptor ever ran and never closing them. The solution was to move from the OpenSessionInViewInterceptor to the OpenSessionInViewFilter, as sketched below.
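Roughly, the fix amounts to registering the filter in web.xml ahead of the Spring Security filter chain. This is a minimal sketch rather than my exact deployment descriptor; it assumes the session factory bean is named sessionFactory and that springSecurityFilterChain is already declared as a DelegatingFilterProxy:

    <filter> 
     <filter-name>openSessionInViewFilter</filter-name> 
     <filter-class>org.springframework.orm.hibernate3.support.OpenSessionInViewFilter</filter-class> 
     <!-- name of the SessionFactory bean in the application context --> 
     <init-param> 
      <param-name>sessionFactoryBeanName</param-name> 
      <param-value>sessionFactory</param-value> 
     </init-param> 
    </filter> 

    <!-- mapping order matters: the OSIV filter must wrap the security filter --> 
    <filter-mapping> 
     <filter-name>openSessionInViewFilter</filter-name> 
     <url-pattern>/*</url-pattern> 
    </filter-mapping> 
    <filter-mapping> 
     <filter-name>springSecurityFilterChain</filter-name> 
     <url-pattern>/*</url-pattern> 
    </filter-mapping> 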

Answers

5

It's very unlikely that @Transactional is leaking connections - otherwise your site would have stopped working after the first 100 requests.

But there is another reason this can happen:

Perhaps you have configured a timeout for "dead" connections, and some of your queries take longer than that. That would mean your pool removes a connection that is actually still busy and requests another one from the database - until the database pulls the plug.

To debug this, enable logging for your connection pool so you can see when it requests new connections.
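For example, with log4j something like the following turns up c3p0's internal pool activity (a sketch; c3p0 logs through its own MLog facade, which delegates to log4j when it is on the classpath):

    # log4j.properties 
    # show connection acquisition / checkout activity inside the pool 
    log4j.logger.com.mchange.v2.c3p0=DEBUG 
    log4j.logger.com.mchange.v2.resourcepool=DEBUG 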


My maxConnectionAge is set to 14400, which according to the docs is in seconds (not ms), so that's 240 minutes. I will definitely try turning on logging. The problem is that it opens and closes a TON of connections, so isolating where this happens is hard, especially under load. – matsientst 2011-03-08 18:16:38

12

Try enabling logging and setting the c3p0.debugUnreturnedConnectionStackTraces property to true. Also set c3p0.unreturnedConnectionTimeout to a small value on the order of your average query time (1 second?). Then anything that holds a connection longer than the timeout will log a stack trace. That should let you narrow things down quickly.
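With the placeholder properties from the question, that would look something like this (a temporary debugging setting, to be reverted once the culprit is found; the 1-second value is only a guess at the average query time):

    # seconds; any connection held longer than this is reaped and its checkout stack trace logged 
    hibernate.c3p0.unreturnedConnectionTimeout=1 
    hibernate.c3p0.debugUnreturnedConnectionStackTraces=true 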

If there's no pattern to the stack traces, it may simply be that your pool is too small. You said 100 concurrent users, but any idea how many queries per second that is? If it's 100 queries per second and you have 20 connections, then each SQL execution needs to take less than 200 ms (20 connections => each second of wall-clock time gives you 20 seconds of total connection time in which to execute 100 queries).
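As a rough back-of-the-envelope check (essentially Little's law, using assumed numbers rather than measurements):

    public class PoolSizing { 
        public static void main(String[] args) { 
            double queriesPerSecond = 100.0; // assumed load 
            double avgQuerySeconds = 0.2;    // assumed 200 ms per query 
            // connections busy at any instant ≈ arrival rate × average service time 
            System.out.println(queriesPerSecond * avgQuerySeconds); // 20.0 -- the entire pool 
        } 
    } 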


+1 for using a smaller timeout to track it down faster – 2011-03-09 15:30:05

3

Regardless of how c3p0 is configured (through Hibernate), you may be hitting a limit in MySQL itself. Keep in mind that by default MySQL allows a maximum of 100 connections! So even if you tell c3p0 to pool 200, 500, or 1000 connections, it won't be able to. Open a MySQL shell with:

$ mysql -u [user] -p 

and type the following to get the maximum number of connections allowed:

mysql> show variables where Variable_name='max_connections'; 

If the number returned is too low for your application, consider changing it (edit the my.cnf file, usually located in /etc/mysql/ on Linux systems).
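For example, a minimal change (the exact path and section header can vary by distribution):

    # /etc/mysql/my.cnf 
    [mysqld] 
    # raise the ceiling so a pool of up to 200 connections actually fits 
    max_connections = 250 

This needs a MySQL restart to take effect; alternatively it can be set at runtime with SET GLOBAL max_connections = 250;, which lasts until the next restart.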

0

I also had this problem. The cause was that the user was not authorized for the host, because the /etc/hosts entry had been modified.