Recently, we encountered a production problem in an application. This application was connecting to multiple systems of record (SOR), and an Oracle RAC cluster is one of its primary systems of record. This Oracle RAC cluster was slowing down due to resource constraints, and the slowdown degraded the entire application's response time. In this post, let's discuss the steps we pursued to troubleshoot this problem.

Video: To see the visual walk-through of this post, click below:

https://youtu.be/-Wvq7DJofSE

Capturing troubleshooting artifacts



Fig: 360 degree artifacts
The application was running on a WebLogic server. All of a sudden, it started to become unresponsive. We ran the yCrash open-source script against it. This script captures 16 different troubleshooting artifacts from the application stack, such as the Garbage Collection log, thread dump, heap dump, netstat, vmstat, iostat, top, … When an application becomes unresponsive, there can be multiple reasons for it: garbage collection pauses, blocked threads, network connectivity issues, CPU starvation, memory constraints, … Thus, it's ideal to capture all the troubleshooting artifacts.
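As a side note, if you want to capture a thread dump yourself, you would normally run a tool such as jstack (or send kill -3) against the target process ID. The snippet below is only a minimal, hypothetical sketch of what an in-process capture looks like using the standard ThreadMXBean API; it is not the yCrash script itself:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Hypothetical illustration only – not the yCrash script.
// Captures a thread dump of the *current* JVM via ThreadMXBean.
// Against a production process you would normally run 'jstack <pid>'
// (or 'kill -3 <pid>') instead.
public class ThreadDumpSnapshot {
    public static void main(String[] args) {
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
        // true, true => include locked monitors and locked synchronizers,
        // so lock-owner relationships (like the 'locked <0x...>' lines
        // shown later in this post) appear in the output.
        for (ThreadInfo info : threadMXBean.dumpAllThreads(true, true)) {
            System.out.print(info); // note: ThreadInfo.toString() may truncate very deep stacks
        }
    }
}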

Analyzing thread dumps
Once all the artifacts were captured, we uploaded them to the yCrash server. The yCrash server analyzed all these artifacts and generated a root cause analysis immediately.



Fig: Screenshot of last executed methods from yCrash


The tool generated a warning indicating that 46% of the threads in the application were executing 'sun.nio.ch.FileDispatcherImpl.read0(Native Method)' and that this might be a concern. Thus, we investigated the stack traces of those threads. Below is the top part of their stack trace:

java.lang.Thread.State: RUNNABLE
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:378)
- locked <0x00007fa93f8cfac0> (a java.lang.Object)
at oracle.net.nt.TimeoutSocketChannel.read(TimeoutSocketChannel.java:174)
at oracle.net.ns.NSProtocolNIO.doSocketRead(NSProtocolNIO.java:555)
at oracle.net.ns.NIOPacket.readHeader(NIOPacket.java:258)
at oracle.net.ns.NIOPacket.readPacketFromSocketChannel(NIOPacket.java:190)
at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:132)
at oracle.net.ns.NIOPacket.readFromSocketChannel(NIOPacket.java:105)
at oracle.net.ns.NIONSDataChannel.readDataFromSocketChannel(NIONSDataChannel.java:91)
at oracle.jdbc.driver.T4CMAREngineNIO.prepareForUnmarshall(T4CMAREngineNIO.java:764)
at oracle.jdbc.driver.T4CMAREngineNIO.unmarshalUB1(T4CMAREngineNIO.java:429)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:407)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:268)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:655)
at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:265)
at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:86)
at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:965)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1205)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3666)
at oracle.jdbc.driver.T4CCallableStatement.executeInternal(T4CCallableStatement.java:1358)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3778)
- locked <0x00007fa93f8b5998> (a oracle.jdbc.driver.T4CConnection)
at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:4251)
- locked <0x00007fa93f8b5998> (a oracle.jdbc.driver.T4CConnection)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1081)
at weblogic.jdbc.wrapper.PreparedStatement.execute(PreparedStatement.java:119)

From the stack trace, you can observe that all of these threads were waiting for a response from the Oracle database. Basically, the stack trace indicates that the threads are using the Oracle JDBC driver to make backend database calls, and those calls are getting stuck. For the amount of traffic this application processes, having 46% of its threads waiting on the Oracle database was far too high, which clearly indicated that threads were getting stuck while making calls to the Oracle database.
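yCrash computes this "last executed method" distribution automatically, but to make the idea concrete, here is a minimal, hypothetical sketch (assuming a jstack-style thread dump file is passed as the first argument) that tallies the top frame of every thread, so you can see what share of the threads is stuck in the same method:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical helper – not part of yCrash. It tallies the top ("last
// executed") frame of every thread in a jstack-style thread dump, so a
// hotspot like sun.nio.ch.FileDispatcherImpl.read0 stands out.
public class TopFrameTally {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get(args[0]));
        Map<String, Integer> tally = new HashMap<>();
        int totalThreads = 0;
        boolean expectTopFrame = false;

        for (String raw : lines) {
            String line = raw.trim();
            if (line.startsWith("\"")) {          // thread header lines start with a double quote
                totalThreads++;
                expectTopFrame = true;            // the next "at ..." line is the top frame
            } else if (expectTopFrame && line.startsWith("at ")) {
                tally.merge(line.substring(3), 1, Integer::sum);
                expectTopFrame = false;
            }
        }

        final int total = totalThreads;
        tally.entrySet().stream()
             .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
             .limit(10)
             .forEach(e -> System.out.printf("%5.1f%%  %s%n",
                     100.0 * e.getValue() / total, e.getKey()));
    }
}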

Flame graph in thread dump analysis


Fig: Flame graph of Threads generated by yCrash

Above is the flame graph of the thread dump generated by the yCrash tool. The flame graph consolidates all the threads into one single view, and the Oracle database slowdown was visible here as well: several tips of the flame graph contain the same 'sun.nio.ch.FileDispatcherImpl.read0(Native Method)' method, and the towers beneath those tips show the Oracle JDBC driver stack trace. It's hard to explain the flame graph flow in text, so you might want to watch the video linked above, where I walk through this flame graph visually.
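To make the "towers" idea a little more concrete in text: a flame graph of a thread dump is built by merging identical root-to-leaf stack paths, so the width of each tower reflects how many threads share that path. The sketch below is a hypothetical illustration of that folded-stack aggregation applied to the current JVM's threads (it is not yCrash's internal implementation):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical illustration – not yCrash's internal code. Merges identical
// root-to-leaf stack paths into "folded" lines ("root;...;tip count"),
// the aggregation that flame-graph style views are drawn from.
public class FoldedStacks {
    public static void main(String[] args) {
        Map<String, Integer> folded = new HashMap<>();

        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            StackTraceElement[] frames = e.getValue();
            if (frames.length == 0) continue;
            // Reverse so the root frame comes first and the "tip"
            // (e.g. sun.nio.ch.FileDispatcherImpl.read0) comes last.
            List<String> path = new ArrayList<>();
            for (int i = frames.length - 1; i >= 0; i--) {
                path.add(frames[i].getClassName() + "." + frames[i].getMethodName());
            }
            folded.merge(String.join(";", path), 1, Integer::sum);
        }

        // One line per unique stack path; the count is the "width" of that tower.
        folded.forEach((path, count) -> System.out.println(path + " " + count));
    }
}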

Oracle database slowdown
After this double confirmation, it was conclusive that 46% of the threads were stuck waiting for a response from the Oracle database. Thus, we escalated the issue to the Oracle Database Administrator (DBA). The DBA immediately confirmed that there were resource constraints on the Oracle database side, which caused transactions to slow down. Once sufficient capacity was allocated to the Oracle database, the application started to respond normally.