  SnappyData / SNAP-1999

JVM crash while executing a query in smart connector mode; hs_err reports failure at 'jj_scan_token' '(I)Z' in 'com/pivotal/gemfirexd/internal/impl/sql/compile/SQLParser'


    Details

    • Type: Bug
    • Status: Done
    • Priority: Highest
    • Resolution: Done
    • Affects Version/s: None
    • Fix Version/s: 1.0.1
    • Component/s: Column
    • Labels:

      Description

      This test creates tables in a snappyJob using the ct schema, with 1.5 GB of data in CSV files.
      After table creation and data loading, the test executes queries via a snappyJob, in smart connector mode, and through sqlScripts. Results are validated for the snappyJob and smart connector runs.

      The query execution fails in smart connector mode because one of the servers crashes. The crash happened during execution of Q11. The hs_err file points to the frame 'jj_scan_token' '(I)Z' in 'com/pivotal/gemfirexd/internal/impl/sql/compile/SQLParser'.

      Query Q11 from ct:
      select '5-CTFIX_ORDER' as SrcFl, a.* from ORDERS_DETAILS a, ORDERS_DETAILS b
      where a.glb_root_order_id = b.glb_root_order_id and a.trd_date >= '20160413' and b.trd_date >= '20160413'
      and b.src_sys = 'CRIO' order by a.glb_root_order_id, a.trd_date

      From the Spark driver log:
      Job started with 1 stages: SparkListenerJobStart(34,1504526859075,WrappedArray(org.apache.spark.scheduler.StageInfo@20e8a632),{spark.sql.execution.id=40, spark.scheduler.pool=default, spark.rdd.scope.noOverride=true, spark.rdd.scope={"id":"786","name":"CollectResults"}})
      Stage 68 completed with 128 tasks.
      Job completed with Result : JobSucceeded
      Job started with 1 stages: SparkListenerJobStart(35,1504526859911,WrappedArray(org.apache.spark.scheduler.StageInfo@32e46e13),{spark.sql.execution.id=40, spark.scheduler.pool=default, spark.rdd.scope={"id":"789","name":"Exchange"}, spark.rdd.scope.noOverride=true})
      17/09/04 05:09:46.461 PDT task-result-getter-0<tid=0x> WARN TaskSetManager: Lost task 26.0 in stage 69.0 (TID 6420, 10.80.141.108, executor 2): java.sql.SQLException: (SQLState=X0Z01 Severity=20000) (dev7.telx.snappydata.io[29404]) Node '[dev7.telx.snappydata.io[29404],connId=85]{caused by: Channel closed. [SOCKET END OF TRANSMISSION]}' went down or data no longer available while iterating the results (method 'prepareStatement'). Please retry the operation.
      at io.snappydata.thrift.common.ThriftExceptionUtil.newSnappyException(ThriftExceptionUtil.java:164)
      at io.snappydata.thrift.internal.ClientService.newExceptionForNodeFailure(ClientService.java:745)
      at io.snappydata.thrift.internal.ClientService.newSnappyExceptionForNodeFailure(ClientService.java:787)
      at io.snappydata.thrift.internal.ClientService.handleException(ClientService.java:678)
      at io.snappydata.thrift.internal.ClientService.prepareStatement(ClientService.java:1031)
      at io.snappydata.thrift.internal.ClientPreparedStatement.prepare(ClientPreparedStatement.java:126)
      at io.snappydata.thrift.internal.ClientPreparedStatement.<init>(ClientPreparedStatement.java:79)
      at io.snappydata.thrift.internal.ClientConnection.prepareStatement(ClientConnection.java:177)
      at io.snappydata.thrift.internal.ClientConnection.prepareStatement(ClientConnection.java:69)
      at sun.reflect.GeneratedMethodAccessor169.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:498)
      at org.apache.tomcat.jdbc.pool.ProxyConnection.invoke(ProxyConnection.java:126)
      at org.apache.tomcat.jdbc.pool.JdbcInterceptor.invoke(JdbcInterceptor.java:108)
      at org.apache.tomcat.jdbc.pool.DisposableConnectionFacade.invoke(DisposableConnectionFacade.java:81)
      at com.sun.proxy.$Proxy10.prepareStatement(Unknown Source)
      at org.apache.spark.sql.execution.columnar.ColumnBatchIteratorOnRS.<init>(ColumnBatch.scala:294)
      at org.apache.spark.sql.execution.columnar.impl.SmartConnectorColumnRDD.compute(JDBCSourceAsColumnarStore.scala:679)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
      at org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:113)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
      at org.apache.spark.sql.execution.WholeStageCodegenRDD.computeInternal(WholeStageCodegenExec.scala:543)
      at org.apache.spark.sql.execution.WholeStageCodegenRDD$$anon$1.<init>(WholeStageCodegenExec.scala:505)
      at org.apache.spark.sql.execution.WholeStageCodegenRDD.compute(WholeStageCodegenExec.scala:504)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
      at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
      at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
      at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
      at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:95)
      at org.apache.spark.scheduler.Task.run(Task.scala:126)
      at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:324)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
      at java.lang.Thread.run(Thread.java:748)
      Caused by: org.apache.thrift.transport.TTransportException: Channel closed.
      <snip>
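
      The X0Z01 failure above is the client-side symptom of the server crash, and the message itself asks the caller to retry. For illustration only (this is not the harness code; the JDBC URL form and retry policy are assumptions), a retry wrapper around the failing prepareStatement/executeQuery path could look like:

      import java.sql.{Connection, DriverManager, SQLException}

      // Illustrative retry helper for the X0Z01 "node went down, please retry" case.
      object RetryOnNodeFailure {
        def countRows(url: String, sql: String, maxAttempts: Int = 3): Int = {
          var attempt = 0
          while (attempt < maxAttempts) {
            attempt += 1
            var conn: Connection = null
            try {
              conn = DriverManager.getConnection(url)  // e.g. "jdbc:snappydata://locatorHost:1527/" (assumed form)
              val rs = conn.prepareStatement(sql).executeQuery()
              var count = 0
              while (rs.next()) count += 1
              return count
            } catch {
              // Retry only for the node-failure state, and only if attempts remain.
              case e: SQLException if e.getSQLState == "X0Z01" && attempt < maxAttempts =>
                Thread.sleep(1000L * attempt)
              case e: SQLException => throw e
            } finally {
              if (conn != null) conn.close()
            }
          }
          throw new IllegalStateException("retries exhausted")  // not reached: last attempt returns or throws
        }
      }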

      vm_6 was started on dev7.telx.snappydata.io[29404].
      Last entries in snappyserver.log for vm_6 while queries were executing in smart connector mode:
      17/09/04 05:07:10.086 PDT pool-1-thread-2<tid=0x2d4> INFO snappystore: TraceDDLReplay: EmbedStatement: Persisting statement on HDFS CREATE FUNCTION NUCLEUS_MATCHES(TEXT VARCHAR(8000), PATTERN VARCHAR(8000)) RETURNS INTEGER EXTERNAL NAME 'org.datanucleus.store.rdbms.adapter.DerbySQLFunction.matches' CALLED ON NULL INPUT LANGUAGE JAVA PARAMETER STYLE JAVA this : dev7(12812)<v2>:39106
      17/09/04 05:07:10.088 PDT pool-1-thread-2<tid=0x2d4> INFO snappystore: TraceDDLReplay: EmbedStatement: Sending GfxdDDLFinishMessage for DDL statement CREATE FUNCTION NUCLEUS_MATCHES(TEXT VARCHAR(8000), PATTERN VARCHAR(8000)) RETURNS INTEGER EXTERNAL NAME 'org.datanucleus.store.rdbms.adapter.DerbySQLFunction.matches' CALLED ON NULL INPUT LANGUAGE JAVA PARAMETER STYLE JAVA[126] to other VMs in the distributed system for execution: [dev9(16027)<v1>:43415, dev9(16364)<v3>:35228, dev8(2274)<v5>:33423, dev9(16849)<v6>:54574, dev8(1784)<v0>:59900, dev8(2029)<v4>:22857]. This VM is responsible for persisting the statement on HDFS.
      17/09/04 05:07:24.806 PDT Executor task launch worker for task 13905<tid=0x272> INFO FileOutputCommitter: File Output Committer Algorithm version is 1
      17/09/04 05:07:25.724 PDT Executor task launch worker for task 13905<tid=0x272> INFO FileOutputCommitter: Saved output of task 'attempt_20170904050724_0228_m_000000_0' to file:/export/dev7a/users/sagarwal/snappyHydraLogs/ct/2017-09-03-23-26-39/ctSplitMode-0904-045744/vm_0_lead1_dev8_1404/snappyQueryFiles/Snappy_Q15/_temporary/0/task_20170904050724_0228_m_000000
      17/09/04 05:07:25.724 PDT Executor task launch worker for task 13905<tid=0x272> INFO SparkHadoopMapRedUtil: attempt_20170904050724_0228_m_000000_0: Committed

      hs_err log for vm_6:
      #
      # A fatal error has been detected by the Java Runtime Environment:
      #
      #  SIGSEGV (0xb) at pc=0x00000000000001e0, pid=12812, tid=0x00007f3af1aa6700
      #
      # JRE version: Java(TM) SE Runtime Environment (8.0_131-b11) (build 1.8.0_131-b11)
      # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.131-b11 mixed mode linux-amd64 compressed oops)
      # Problematic frame:
      # C  0x00000000000001e0
      #
      # Core dump written. Default location: /export/dev7a/users/sagarwal/snappyHydraLogs/ct/2017-09-03-23-26-39/ctSplitMode-0904-045744/vm_6_snappyStore3_dev7_12442/core or core.12812
      #
      # If you would like to submit a bug report, please visit:
      #   http://bugreport.java.com/bugreport/crash.jsp
      #

      --------------- T H R E A D ---------------

      Current thread (0x00007f3e4531e000): JavaThread "pool-1-thread-116" [_thread_in_Java, id=14569, stack(0x00007f3af19a6000,0x00007f3af1aa7000)]

      siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x00000000000001e0

      Registers:
      RAX=0x00000006d26e26c0, RBX=0x00007f3dbea6a488, RCX=0x0000000000000006, RDX=0x00007f3f43670fb4
      RSP=0x00007f3af1aa0340, RBP=0x00000006bc81eb08, RSI=0x00007f3f43670fb4, RDI=0x00007f3af1aa02c0
      R8 =0x00000000000001e0, R9 =0x00007f3f3d000000, R10=0x0000000000000000, R11=0x00007f3f51435640
      R12=0x0000000000000000, R13=0x00000006927ee1d8, R14=0x00000006927ee668, R15=0x00007f3e4531e000
      RIP=0x00000000000001e0, EFLAGS=0x0000000000010202, CSGSFS=0x0000000000000033, ERR=0x0000000000000014
      TRAPNO=0x000000000000000e
      .... <snip> ...
      Instructions: (pc=0x00000000000001e0)
      0x00000000000001c0:
      [error occurred during error reporting (printing registers, top of stack, instructions near pc), id 0xb]

      Register to memory mapping:

      RAX=0x00000006d26e26c0 is an oop
      com.pivotal.gemfirexd.internal.impl.sql.compile.SQLParser$LookaheadSuccess
       - klass: 'com/pivotal/gemfirexd/internal/impl/sql/compile/SQLParser$LookaheadSuccess'
      RBX={method} {0x00007f3dbea6a488} 'jj_scan_token' '(I)Z' in 'com/pivotal/gemfirexd/internal/impl/sql/compile/SQLParser'
      RCX=0x0000000000000006 is an unknown value
      RDX=0x00007f3f43670fb4 is at entry_point+916 in (nmethod*)0x00007f3f43670890
      RSP=0x00007f3af1aa0340 is pointing into the stack for thread: 0x00007f3e4531e000
      RBP=0x00000006bc81eb08 is an oop
      com.pivotal.gemfirexd.internal.impl.sql.compile.SQLParser
       - klass: 'com/pivotal/gemfirexd/internal/impl/sql/compile/SQLParser'
      RSI=0x00007f3f43670fb4 is at entry_point+916 in (nmethod*)0x00007f3f43670890
      RDI=0x00007f3af1aa02c0 is pointing into the stack for thread: 0x00007f3e4531e000
      R8 =0x00000000000001e0 is an unknown value
      R9 =0x00007f3f3d000000 is an unknown value
      R10=0x0000000000000000 is an unknown value
      R11=0x00007f3f51435640: <offset 0x47d640> in /export/shared/software/jdk1.8.0_131/jre/lib/amd64/server/libjvm.so at 0x00007f3f50fb8000
      R12=0x0000000000000000 is an unknown value
      R13=
      [error occurred during error reporting (printing register info), id 0xb]

      Stack: [0x00007f3af19a6000,0x00007f3af1aa7000], sp=0x00007f3af1aa0340, free space=1000k
      Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
      C 0x00000000000001e0
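
      A possible follow-up diagnostic (not attempted in this run, and only an assumption that the JIT-compiled frame is involved) would be to keep HotSpot from compiling the implicated method on the servers and check whether the SIGSEGV still reproduces, e.g. by adding the server JVM option:

      -XX:CompileCommand=exclude,com.pivotal.gemfirexd.internal.impl.sql.compile.SQLParser::jj_scan_token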

            People

            • Assignee: Sonal Agarwal (sagarwal)
            • Reporter: Sonal Agarwal (sagarwal)