Issue while loading data to Essbase

Hi All,
I'm trying to load data to Essbase and I'm getting the following error; it says "Unknown Source". But when I do the reverse-engineering, it works fine without any errors.
I'm doing a simple data load from a file. Please help me.
org.apache.bsf.BSFException: exception from Jython:
Traceback (innermost last):
File "<string>", line 23, in ?
com.hyperion.odi.essbase.ODIEssbaseException: Error records reached the maximum error threshold : 1
     at com.hyperion.odi.essbase.ODIEssbaseDataWriter.loadData(Unknown Source)
     at sun.reflect.GeneratedMethodAccessor100.invoke(Unknown Source)
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
     at java.lang.reflect.Method.invoke(Unknown Source)
     at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java)
     at org.python.core.PyMethod.__call__(PyMethod.java)
     at org.python.core.PyObject.__call__(PyObject.java)
     at org.python.core.PyInstance.invoke(PyInstance.java)
     at org.python.pycode._pyx125.f$0(<string>:23)
     at org.python.pycode._pyx125.call_function(<string>)
     at org.python.core.PyTableCode.call(PyTableCode.java)
     at org.python.core.PyCode.call(PyCode.java)
     at org.python.core.Py.runCode(Py.java)
     at org.python.core.Py.exec(Py.java)
     at org.python.util.PythonInterpreter.exec(PythonInterpreter.java)
     at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:144)
     at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
     at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
     at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
     at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
     at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
     at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
     at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
     at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
     at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
     at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
     at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
     at com.sunopsis.dwg.cmd.e.j(e.java)
     at com.sunopsis.dwg.cmd.h.z(h.java)
     at com.sunopsis.dwg.cmd.e.run(e.java)
     at java.lang.Thread.run(Unknown Source)
Caused by: com.hyperion.odi.essbase.ODIEssbaseException: Error records reached the maximum error threshold : 1
     at com.hyperion.odi.essbase.ODIEssbaseDataWriter.checkMaxError(Unknown Source)
     at com.hyperion.odi.essbase.ODIEssbaseDataWriter.sendRecordArrayToEsbase(Unknown Source)
     ... 31 more
com.hyperion.odi.essbase.ODIEssbaseException: com.hyperion.odi.essbase.ODIEssbaseException: Error records reached the maximum error threshold : 1
     at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
     at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
     at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
     at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
     at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
     at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
     at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
     at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
     at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
     at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
     at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
     at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
     at com.sunopsis.dwg.cmd.e.j(e.java)
     at com.sunopsis.dwg.cmd.h.z(h.java)
     at com.sunopsis.dwg.cmd.e.run(e.java)
     at java.lang.Thread.run(Unknown Source)
Thanks
K

Sorry John,
I set up the log file, and I see the following in it; it says that it cannot connect to the Essbase server.
However, the user ID set up in the physical schema for the Essbase server has full rights to the Essbase server and the application.
2009-09-30 12:03:32,561 INFO [DwgCmdExecutionThread]: ODI Hyperion Essbase Adapter Version 9.3.1.1
2009-09-30 12:03:32,561 INFO [DwgCmdExecutionThread]: Connecting to Essbase application [Test] on [Server]:[1423] using username [hypadmin].
2009-09-30 12:03:33,311 INFO [DwgCmdExecutionThread]: Successfully connected to the Essbase application.
2009-09-30 12:03:33,311 INFO [DwgCmdExecutionThread]: Essbase Load IKM option RULES_FILE = AMORT3
2009-09-30 12:03:33,311 INFO [DwgCmdExecutionThread]: Essbase Load IKM option RULE_SEPARATOR = |
2009-09-30 12:03:33,311 INFO [DwgCmdExecutionThread]: Essbase Load IKM option PRE_LOAD_MAXL_SCRIPT =
2009-09-30 12:03:33,311 INFO [DwgCmdExecutionThread]: Essbase Load IKM option POST_LOAD_MAXL_SCRIPT =
2009-09-30 12:03:33,327 INFO [DwgCmdExecutionThread]: Essbase Load IKM option ABORT_ON_PRE_MAXL_ERROR = true
2009-09-30 12:03:33,327 INFO [DwgCmdExecutionThread]: Essbase Load IKM option CLEAR_DATABASE = None
2009-09-30 12:03:33,327 INFO [DwgCmdExecutionThread]: Essbase Load IKM option COMMIT_INTERVAL = 1000
2009-09-30 12:03:33,327 INFO [DwgCmdExecutionThread]: Essbase Load IKM option CALCULATION_SCRIPT = null
2009-09-30 12:03:33,327 INFO [DwgCmdExecutionThread]: Essbase Load IKM option RUN_CALC_SCRIPT_ONLY = false
2009-09-30 12:03:33,358 DEBUG [DwgCmdExecutionThread]: LoadData Begins
2009-09-30 12:03:33,358 DEBUG [DwgCmdExecutionThread]: Error occured in sending record chunk...Cannot begin data load. Analytic Server Error(1042015): Network error: Cannot Locate Connect Information For [server]
2009-09-30 12:03:33,374 DEBUG [DwgCmdExecutionThread]: Sending data record by record to essbase
2009-09-30 12:03:33,374 INFO [DwgCmdExecutionThread]: Logging out and disconnecting from the essbase application.
K
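
For what it's worth, Analytic Server error 1042015 usually means the Essbase client layer cannot resolve the server name it was given, so it can help to test the exact name from the ODI topology outside of ODI. A minimal connectivity sketch using the Essbase Java API in embedded mode (the server name and credentials below are placeholders; use the same values as your physical schema):

import com.essbase.api.datasource.IEssOlapServer;
import com.essbase.api.session.IEssbase;

public class EssbaseConnTest {
    public static void main(String[] args) throws Exception {
        IEssbase ess = IEssbase.Home.create(IEssbase.JAPI_VERSION);
        // Sign on with exactly the server name used in the ODI topology;
        // "essbaseserver", "hypadmin" and "password" are placeholders.
        IEssOlapServer srv = ess.signOn("hypadmin", "password", false, null,
                "Embedded", "essbaseserver");
        System.out.println("Connected.");
        srv.disconnect();
        ess.signOff();
    }
}

If this fails with the same 1042015 error, the server name in the topology is the thing to fix; a fully qualified host name or an IP address is a common workaround.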

Similar Messages

  • Issue while loading data to a sample Essbase app using ODI

    While executing the data load to a sample Essbase app using ODI, I am getting the following error:
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 23, in ?
    com.hyperion.odi.essbase.ODIEssbaseException: Error records reached the maximum error threshold : 1
    at com.hyperion.odi.essbase.ODIEssbaseDataWriter.loadData(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java)
    at org.python.core.PyMethod.__call__(PyMethod.java)
    at org.python.core.PyObject.__call__(PyObject.java)
    at org.python.core.PyInstance.invoke(PyInstance.java)
    at org.python.pycode._pyx4.f$0(<string>:23)
    at org.python.pycode._pyx4.call_function(<string>)
    at org.python.core.PyTableCode.call(PyTableCode.java)
    at org.python.core.PyCode.call(PyCode.java)
    at org.python.core.Py.runCode(Py.java)
    at org.python.core.Py.exec(Py.java)
    at org.python.util.PythonInterpreter.exec(PythonInterpreter.java)
    at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:144)
    at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
    at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
    at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
    at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
    at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
    at com.sunopsis.dwg.cmd.e.i(e.java)
    at com.sunopsis.dwg.cmd.g.y(g.java)
    at com.sunopsis.dwg.cmd.e.run(e.java)
    at java.lang.Thread.run(Unknown Source)
    Caused by: com.hyperion.odi.essbase.ODIEssbaseException: Error records reached the maximum error threshold : 1
    at com.hyperion.odi.essbase.ODIEssbaseDataWriter.sendRecordArrayToEsbase(Unknown Source)
    ... 32 more
    com.hyperion.odi.essbase.ODIEssbaseException: com.hyperion.odi.essbase.ODIEssbaseException: Error records reached the maximum error threshold : 1
    at org.apache.bsf.engines.jython.JythonEngine.exec(JythonEngine.java:146)
    at com.sunopsis.dwg.codeinterpretor.k.a(k.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.scripting(SnpSessTaskSql.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.execScriptingOrders(SnpSessTaskSql.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTaskTrt(SnpSessTaskSql.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSqlI.treatTaskTrt(SnpSessTaskSqlI.java)
    at com.sunopsis.dwg.dbobj.SnpSessTaskSql.treatTask(SnpSessTaskSql.java)
    at com.sunopsis.dwg.dbobj.SnpSessStep.treatSessStep(SnpSessStep.java)
    at com.sunopsis.dwg.dbobj.SnpSession.treatSession(SnpSession.java)
    at com.sunopsis.dwg.cmd.DwgCommandSession.treatCommand(DwgCommandSession.java)
    at com.sunopsis.dwg.cmd.DwgCommandBase.execute(DwgCommandBase.java)
    at com.sunopsis.dwg.cmd.e.i(e.java)
    at com.sunopsis.dwg.cmd.g.y(g.java)
    at com.sunopsis.dwg.cmd.e.run(e.java)
    at java.lang.Thread.run(Unknown Source)

    Hi,
    It means that in the options of the KM you have it set to quit when it hits one error; if you set it to 0 (infinite), it will not stop no matter how many data load errors it hits.
    If you set it to 0 and run the interface then, depending on how you set up the options in the KM, it can write to two log files, which you should check.
    Cheers
    John
    http://john-goodwin.blogspot.com/
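
    For reference, the relevant options on the IKM SQL to Hyperion Essbase (DATA) look roughly like this; the option names below are from the 9.3.x adapter and should be verified against your own KM:

        MAXIMUM_ERRORS_ALLOWED = 0     (0 = never abort, however many records are rejected)
        LOG_ENABLED            = true
        LOG_FILENAME           = <path for the adapter log>
        ERROR_LOG_FILENAME     = <path for the rejected-records log>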

  • Special character issue while loading data from SAP HR through VDS

    Hello,
    We have a special character issue while loading data from SAP HR to IdM, using a VDS and following the standard documentation: http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e09fa547-f7c9-2b10-3d9e-da93fd15dca1?quicklink=index&overridelayout=true
    French accented characters (é, à, è, ù) are loaded correctly, but Turkish ones (such as Ş, İ, ł) are transformed into "#" in IdM.
    The question is: does someone know of any special setting in the VDS or in IdM that would solve this issue with special characters?
    Our SAP HR version is ECC 6.0 (ABA/BASIS 7.0 SP21, SAP_HR 6.0 SP54), and we are using VDS 7.1 SP5 and SAP NW IdM 7.1 SP5 Patch 1 on Oracle 10.2.
    Thanks

    We are importing directly into the HR staging area, using the transactions/programs HRLDAP_MAP, LDAP and /RPLDAP_EXTRACT; then we have a job which extracts data from the staging area to a CSV file.
    So before the import the characters appear correctly in SAP HR, but by the time they come through the VDS to IdM's temporary table, they become "#".
    Yes, our data is coming from a Unicode system.
    So, could it be a Java parameter to change or add in the VDS?
    Regards.
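
    As a side note, a "#" substitution is the classic signature of a character that cannot be represented in some non-Unicode encoding along the chain. A purely illustrative Java snippet (nothing SAP- or VDS-specific) showing the mechanism:

        import java.nio.charset.Charset;

        public class CharsetDemo {
            public static void main(String[] args) {
                String s = "éàèù Şİł";
                // Latin-1 keeps the French accents but silently replaces the
                // Turkish characters (Java substitutes '?'; SAP shows '#').
                byte[] latin1 = s.getBytes(Charset.forName("ISO-8859-1"));
                System.out.println(new String(latin1, Charset.forName("ISO-8859-1")));
                // UTF-8 round-trips all of them losslessly.
                byte[] utf8 = s.getBytes(Charset.forName("UTF-8"));
                System.out.println(new String(utf8, Charset.forName("UTF-8")));
            }
        }

    If the VDS JVM runs with a non-UTF-8 default encoding, adding -Dfile.encoding=UTF-8 to its startup parameters is one assumption worth testing.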

  • Error encountered while loading data into an Essbase cube from an Oracle DB

    Hi All,
    I am trying to load data into an Essbase cube from an Oracle DB. Essbase is installed on a 64-bit HP-UX machine, and the Oracle database is on an HP-UX 11.23 B server. While performing the above, I am getting the error messages below:
    OK/INFO - 1021013 - ODBC Layer Error: [08001] ==> [[DataDirect][ODBC Oracle Wire Protocol driver][Oracle]ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    HP-UX Error: 2: No such file or directory].
    OK/INFO - 1021014 - ODBC Layer Error: Native Error code [1034] .
    ERROR - 1021001 - Failed to Establish Connection With SQL Database Server. See log for more information.
    Can you please help me identify why this occurred: is it a network problem, or something related to the database?
    Regards,
    Priya

    The error is related to Oracle, I would check that it is all up and running
    ORA-01034: ORACLE not available
    Cheers
    John
    http://john-goodwin.blogspot.com/
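
    One quick way to confirm the problem is on the Oracle side, independent of Essbase and the ODBC layer, is a bare JDBC connection test (host, port, SID and credentials below are placeholders). Getting ORA-01034 here as well confirms the instance is simply not started:

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class OraclePing {
            public static void main(String[] args) throws Exception {
                // Thin-driver URL format: jdbc:oracle:thin:@host:port:SID
                Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "password");
                System.out.println("Oracle is up: "
                        + con.getMetaData().getDatabaseProductVersion());
                con.close();
            }
        }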

  • Issue while loading data from DSO to Cube

    Gurus,
    I'm facing a typical problem while loading a cube from a DSO. The load is based upon certain conditions.
    Though the data in the DSO satisfies those conditions, the cube is still not being loaded.
    All the loads are managed through process chains.
    Would there be any specific reason for this type of problem?
    Any pointers would be of the greatest help!
    Regards,
    Yaseen & Soujanya

    Yaseen & Soujanya,
    It is very hard to guess the problem with the amount of information you have provided.
    - What do you mean by "cube is not being loaded"? Are no records extracted from the DSO? Is the load completing with 0 records?
    - How is data loaded from the DSO to the InfoCube? Full? Are you first loading to the PSA or not?
    - Is there data already in the InfoCube?
    - Is there change log data for the DSO, or did someone delete all the PSA data?
    Since there are so many reasons for the behavior you are witnessing, your best option is to approach your resident expert.
    Good luck.
    Sudhi Karkada
    Biking for MS Relief - http://main.nationalmssociety.org/site/TR/Bike/TXHBikeEvents?px=5888378&pg=personal&fr_id=10222

  • Error while loading data into the cube

    Hi,
    I loaded data into the PSA, and when I load the data to the cube through a data transfer process, I get an error (red).
    Through "Manage", I can see the request in red. How do I get to know the exact error, and what could be the possible reason for it?
    Also, can someone explain the data transfer process (outside a process chain)?
    Regards,
    Sam

    Hi Sam,
    After you load the data through the DTP (after clicking the Execute button), go to the monitor screen and press the Refresh button; you can find the logs there.
    Otherwise, on the request screen, beside the request number, you can see the logs icon and click on that.
    DTP means data transfer process; it is used to transfer data from the PSA into data targets/InfoProviders such as DSOs and cubes.
    Check this link for DTP:
    http://help.sap.com/saphelp_nw04s/helpdata/en/42/f98e07cc483255e10000000a1553f7/frameset.htm
    In your case, the problem may be the date formats or special characters; check those.
    Regards,
    @JAY

  • Short dump issue while loading data

    Hello All,
    We are trying to load data into the cube, but every time it throws a short dump with the error below.
    We have also found an SAP Note for this error. Please suggest how to implement it.
    Error:
    Runtime Errors         CONVT_NO_NUMBER
    METHOD end_routine.
    *=== Segments ===
        FIELD-SYMBOLS:
          <RESULT_FIELDS>    TYPE _ty_s_TG_1.
        DATA:
          MONITOR_REC     TYPE rstmonitor.
        ... "insert your code here
    *--  fill table "MONITOR" with values of structure "MONITOR_REC"
    *-   to make monitor entries
        ... "to cancel the update process
    *    raise exception type CX_RSROUT_ABORT.
    * Fiscal year quarter addition
    * Note: if /bic/zsp_Mon is a character field containing non-numeric values,
    * the BETWEEN comparison forces a numeric conversion and can raise CONVT_NO_NUMBER.
        loop at RESULT_PACKAGE assigning <RESULT_FIELDS>.
          if <RESULT_FIELDS>-/bic/zsp_Mon BETWEEN 1 and 3.
            CONCATENATE 'FY' <RESULT_FIELDS>-fiscper+2(2) 'Q3' INTO
            <RESULT_FIELDS>-/bic/zsp_FYQA.
          elseif <RESULT_FIELDS>-/bic/zsp_Mon BETWEEN 4 and 6.
            CONCATENATE 'FY' <RESULT_FIELDS>-fiscper+2(2) 'Q4' INTO
            <RESULT_FIELDS>-/bic/zsp_FYQA.
          elseif <RESULT_FIELDS>-/bic/zsp_Mon BETWEEN 7 and 9.
            CONCATENATE 'FY' <RESULT_FIELDS>-fiscper+2(2) 'Q1' INTO
            <RESULT_FIELDS>-/bic/zsp_FYQA.
          elseif <RESULT_FIELDS>-/bic/zsp_Mon BETWEEN 10 and 12.
            CONCATENATE 'FY' <RESULT_FIELDS>-fiscper+2(2) 'Q2' INTO
            <RESULT_FIELDS>-/bic/zsp_FYQA.
          endif.
        endloop.
    ENDMETHOD.                    "end_routine
    METHOD inverse_end_routine.
      IMPORTING

    Hi,
    As suggested by the note, you need to put this code in the end routine.
    Check the syntax; if there is an error, go through it and fix it (take the help of an ABAPer if required), then
    activate the transformation and the DTP.
    Thank-You.
    Regards,
    VB

  • Short Dump while loading data into the GL ODS

    Hello Guyz
    I am getting a database error while loading the GL ODS in BW. I am performing an INIT load, and the error occurs during the activation stage:
    Database Error

    Hello Atul,
    PSAPTEMP has nothing to do with the PSA. It is a temporary storage space and is present in all systems.
    You can check this in transaction DB02 if you are using BW 3.x, or DB02OLD if you are using BI 7.0. Once in DB02/DB02OLD, you will see a block for tablespaces. In this block there is a push button for freespace statistics; click on it and it will take you to a screen which shows the free space for all tablespaces.
    Hope this helps,
    Regards.

  • Semantic Partitioning Delta issue while loading data from DSO to Cube - BW 7.3

    Hi All,
    We have created the semantic partitions with the help of a BADI. Everything looks good:
    the first time I loaded the data, it was a full update.
    The second time, I ran an init and then pulled the delta from the DSO to the cube. The DSO is standard, whereas the cube is partitioned with the help of semantic partitioning. What I can see is that only the records updated in the latest delta are shown in the report; all the rest are ignored by the system.
    I tried compression, but it still did not work.
    Has someone faced this kind of issue?
    Thanks


  • Exception while loading data into the cache

    I'm getting the following error while attempting to pre-populate the cache:
    2010-11-01 16:27:21,766 ERROR [STDERR] (Logger@9229983 n/a) 2010-11-01 16:27:21.766/632.975 Oracle Coherence EE n/a <Error> (thread=DistributedCache, member=1): SynchronousListener cannot be added on the service thread:
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$BinaryMap.addMapListener(PartitionedCache.CDB:14)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ViewMap.addMapListener(PartitionedCache.CDB:1)
         at com.tangosol.coherence.component.util.SafeNamedCache.addMapListener(SafeNamedCache.CDB:27)
         at com.tangosol.net.cache.CachingMap.registerListener(CachingMap.java:1463)
         at com.tangosol.net.cache.CachingMap.ensureInvalidationStrategy(CachingMap.java:1579)
         at com.tangosol.net.cache.CachingMap.registerListener(CachingMap.java:1484)
         at com.tangosol.net.cache.CachingMap.get(CachingMap.java:487)
         at com.jpm.ibt.primegps.cachestore.AbstractGpsCacheStore.isEnabled(AbstractGpsCacheStore.java:54)
         at com.jpm.ibt.primegps.cachestore.AbstractGpsCacheStore.store(AbstractGpsCacheStore.java:83)
         at com.tangosol.net.cache.ReadWriteBackingMap$CacheStoreWrapper.store(ReadWriteBackingMap.java:4783)
         at com.tangosol.net.cache.ReadWriteBackingMap$CacheStoreWrapper.storeInternal(ReadWriteBackingMap.java:4468)
         at com.tangosol.net.cache.ReadWriteBackingMap.putInternal(ReadWriteBackingMap.java:1147)
         at com.tangosol.net.cache.ReadWriteBackingMap.put(ReadWriteBackingMap.java:853)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.put(PartitionedCache.CDB:98)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onPutAllRequest(PartitionedCache.CDB:41)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$PutAllRequest.onReceived(PartitionedCache.CDB:90)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:619)
    2010-11-01 16:27:21,829 ERROR [STDERR] (pool-14-thread-3) Exception in thread "pool-14-thread-3"
    2010-11-01 16:27:21,829 ERROR [STDERR] (Logger@9229983 n/a) 2010-11-01 16:27:21.766/632.975 Oracle Coherence EE n/a <Error> (thread=DistributedCache, member=1): Assertion failed: poll() is a blocking call and cannot be called on the Service thread
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:5)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$BinaryMap.get(PartitionedCache.CDB:26)
         at com.tangosol.util.ConverterCollections$ConverterMap.get(ConverterCollections.java:1559)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ViewMap.get(PartitionedCache.CDB:1)
         at com.tangosol.coherence.component.util.SafeNamedCache.get(SafeNamedCache.CDB:1)
         at com.tangosol.net.cache.CachingMap.get(CachingMap.java:491)
         at com.jpm.ibt.primegps.cachestore.AbstractGpsCacheStore.isEnabled(AbstractGpsCacheStore.java:54)
         at com.jpm.ibt.primegps.cachestore.AbstractGpsCacheStore.store(AbstractGpsCacheStore.java:83)
         at com.tangosol.net.cache.ReadWriteBackingMap$CacheStoreWrapper.store(ReadWriteBackingMap.java:4783)
         at com.tangosol.net.cache.ReadWriteBackingMap$CacheStoreWrapper.storeInternal(ReadWriteBackingMap.java:4468)
         at com.tangosol.net.cache.ReadWriteBackingMap.putInternal(ReadWriteBackingMap.java:1147)
         at com.tangosol.net.cache.ReadWriteBackingMap.put(ReadWriteBackingMap.java:853)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.put(PartitionedCache.CDB:98)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onPutAllRequest(PartitionedCache.CDB:41)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$PutAllRequest.onReceived(PartitionedCache.CDB:90)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:619)
    2010-11-01 16:27:21,922 ERROR [STDERR] (pool-14-thread-3) (Wrapped: Failed request execution for DistributedCache service on Member(Id=1, Timestamp=2010-11-01 16:20:09.372, Address=169.65.134.90:7850, MachineId=28250, Location=site:EMEA.AD.JPMORGANCHASE.COM,machine:WLDNTEC6WM754J,process:5416) (Wrapped: Failed to store key="9046019") poll() is a blocking call and cannot be called on the Service thread) com.tangosol.util.AssertionException: poll() is a blocking call and cannot be called on the Service thread
    I'm a bit stumped, as my code doesn't call poll() anywhere, and this appears to be caused by the following in my CacheStore class:
    public void store(Object key, Object value) {
        log.info("CacheStore currently " + isEnabled());
        if (isEnabled()) {
            throw new UnsupportedOperationException("Store method not currently supported");
        }
    }
    public boolean isEnabled() {
        // reads the flag from another cache while on the cache service thread
        return ((Boolean) CacheFactory.getCache(CacheNameEnum.CONTROL_CACHE.name()).get(ENABLED)).booleanValue();
    }
    The only thing I can think of is that maybe it has a problem calling a cache from within a CacheStore (if that makes sense). What I have is a CONTROL_CACHE which just stores a boolean value to indicate whether the store(), storeAll(), erase(), and eraseAll() methods should do anything. Is this correct?
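
    The first stack trace shows the near cache's synchronous invalidation listener being registered lazily by that get() while it is already running on the partitioned cache's service thread, which Coherence forbids. A hedged sketch of one workaround, assuming the control flag can be served by a plain distributed scheme (no near/front tier) so the lazy listener registration never happens:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public abstract class AbstractGpsCacheStore {
            private static final String ENABLED = "enabled"; // hypothetical key name
            // Obtained once, not per call; CONTROL_CACHE is assumed to be mapped
            // to a distributed-scheme rather than a near-scheme in the config.
            private final NamedCache controlCache = CacheFactory.getCache("CONTROL_CACHE");

            public boolean isEnabled() {
                Boolean flag = (Boolean) controlCache.get(ENABLED);
                return flag != null && flag.booleanValue();
            }
        }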

    Hi Jonathan,
    I am trying to implement a write-behind cache, but my configs may be wrong. The config for the cache with the cache store looks like:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd" >
    <cache-config>
         <caching-scheme-mapping>
         <cache-mapping>
              <cache-name>PARTY_CACHE</cache-name>
              <scheme-name>party_cache</scheme-name>
         </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <near-scheme>
                   <scheme-name>party_cache</scheme-name>
                   <service-name>partyCacheService</service-name>
                   <!-- a sensible default ? -->
                   <thread-count>5</thread-count>
                   <front-scheme>
                        <local-scheme>
                        </local-scheme>
                   </front-scheme>
                   <back-scheme>
                        <distributed-scheme>
                             <backing-map-scheme>
                                  <read-write-backing-map-scheme>
                                       <internal-cache-scheme>
                                            <local-scheme>
                                            </local-scheme>
                                       </internal-cache-scheme>
                                       <cachestore-scheme>
                                            <class-scheme>
                                                 <class-name>spring-bean:partyCacheStore</class-name>
                                            </class-scheme>
                                       </cachestore-scheme>
                                  </read-write-backing-map-scheme>
                             </backing-map-scheme>
                        </distributed-scheme>
                   </back-scheme>
                   <autostart>true</autostart>
              </near-scheme>
         </caching-schemes>
    </cache-config>
    and the control cache config looks like:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd" >
    <cache-config>
         <caching-scheme-mapping>
         <cache-mapping>
              <cache-name>CONTROL_CACHE</cache-name>
              <scheme-name>control_cache</scheme-name>
         </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <near-scheme>
                   <scheme-name>control_cache</scheme-name>
                   <service-name>controlCacheService</service-name>
                   <!-- a sensible default ? -->
                   <thread-count>5</thread-count>
                   <front-scheme>
                        <local-scheme>
                             <high-units>100</high-units>
                        </local-scheme>
                   </front-scheme>
                   <back-scheme>
                        <distributed-scheme>
                             <backing-map-scheme>
                                  <read-write-backing-map-scheme>
                                  </read-write-backing-map-scheme>
                             </backing-map-scheme>
                        </distributed-scheme>
                   </back-scheme>
                   <autostart>true</autostart>
              </near-scheme>
         </caching-schemes>
    </cache-config>
    They have different service names, but I'm guessing this isn't what you mean. I thought I was using write-behind, but again I'm guessing my config is not correct?
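
    On the write-behind point: a read-write backing map only queues writes asynchronously when a non-zero <write-delay> is configured; without it, the cache store is invoked write-through, synchronously on the service thread. A minimal sketch of the relevant fragment (verify placement against the cache-config DTD):

        <read-write-backing-map-scheme>
             <internal-cache-scheme>
                  <local-scheme/>
             </internal-cache-scheme>
             <cachestore-scheme>
                  <class-scheme>
                       <class-name>spring-bean:partyCacheStore</class-name>
                  </class-scheme>
             </cachestore-scheme>
             <write-delay>10s</write-delay>
        </read-write-backing-map-scheme>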

  • Error while loading data into ODS

    Hi, we have BI for NW04s and CRM 4.0. While loading data for a CRM ODS, I received an error message which I have never seen before. Yesterday's data load was smooth; I only see the issue while loading data today.
    The error message in the monitor is as follows:
    "Port 'A000000003' does not exists in the table of Port Description" (message ID E0, NO 31).
    Has anybody encountered this?
    Thanks
    Arun

    Thanks Marc,
    We found the table EDIPOA and found the port that is missing. It seems that the entry (row) for our CRM system, which has this port as a field in this table, is missing.
    I'm not sure whether to populate it manually or whether it's part of the source system repair, etc.
    Please let me know.
      Thanks
    Arunava

  • Issue while loading the Berkeley database

    Hi,
    I have an issue while loading data into the Berkeley database. When I load the XML files into the Berkeley database, some files are created; they are named something like db.001, db.002, log.00000001, etc. I have a server running which tries to access those files. When I try to reload the Berkeley DB while my server is running, I am unable to load it; I have to stop the server, load the Berkeley DB, and restart the server again. Is there any way I can reload the database without having to restart the server? Your response would help me find a solution to this issue. I am currently using Berkeley Database version 2.2.13.
    Thanks,
    Priyadarshini

    Hi Priyadarshini,
    The db.001 and db.002 files are the environment's region files, and log.00000001 is one of the environment's transaction logs. The region files are created when you use an environment; their size and number depend on the subsystems that you configure for the environment (memory pool, logging, transactions, locking). The log files reflect the modifications that you perform on the environment's database(s), and they are used along with the transaction subsystem to provide recoverability and ACID capabilities and to protect against application or system failures.
    Is there a reason why that server tries to access these files? The server process that runs while you load your database should not interfere with those files, as they are used by Berkeley DB.
    Regards,
    Andrei
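
    To make the file roles concrete, here is a hedged sketch (Berkeley DB Java API; the path is a placeholder) of opening a transactional environment. Each subsystem enabled below contributes to the db.00N region files, and the logging/transaction subsystems produce the log.0000000N files:

        import java.io.File;
        import com.sleepycat.db.Environment;
        import com.sleepycat.db.EnvironmentConfig;

        public class EnvOpen {
            public static void main(String[] args) throws Exception {
                EnvironmentConfig cfg = new EnvironmentConfig();
                cfg.setAllowCreate(true);
                cfg.setInitializeCache(true);   // memory pool region
                cfg.setInitializeLocking(true); // locking region
                cfg.setInitializeLogging(true); // write-ahead logs (log.0000000N)
                cfg.setTransactional(true);     // transactions / recoverability
                Environment env = new Environment(new File("/path/to/env"), cfg);
                // ... open databases / load XML containers here ...
                env.close();
            }
        }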

  • Memory dump while loading data

    Hello,
    We are facing memory issues while loading data in a BW system.
    This particular loading process is taking more than 4 GB (3 GB of extended memory and 1 GB of heap) and is still failing.
    We cannot increase the memory beyond this. Can you please suggest a way to load the data?
    Runtime Errors         TSV_TNEW_PAGE_ALLOC_FAILED
    Date and Time          09.03.2010 22:15:49
    Error analysis
         The internal table "{A:2*\TYPE=%_T00003S00000114O0000015636}" could not be
          further extended. To enable
         error handling, the table had to be delete before this log was written.
         As a result, the table is displayed further down or, if you branch to
         the ABAP Debugger, with 0 rows.
         At the time of the termination, the following data was determined for
         the relevant internal table:
         Memory location: "Session memory"
         Row width: 1140
         Number of rows: 30824
         Allocated rows: 30824
         Newly requested rows: 8 (in 1 blocks)
    How to correct the error
         The amount of storage space (in bytes) filled at termination time was:
         Roll area...................... 14627872
         Extended memory (EM)........... 3004121016
         Assigned memory (HEAP)......... 1000024976
         Short area..................... " "
         Paging area.................... 24576
         Maximum address space.......... 4294967295
         If the error occures in a non-modified SAP program, you may be able to
         find an interim solution in an SAP Note.
         If you have access to SAP Notes, carry out a search with the following
         keywords:
         "TSV_TNEW_PAGE_ALLOC_FAILED" " "
         "GP44XYX8R5ZMOGVGGMHG0U1U04O" or "GP44XYX8R5ZMOGVGGMHG0U1U04O"
         "CONVERT_FROM_MEMORY"

    Try to load with a reduced data package size.
    I think you have a routine in the transformation/update rule. Check whether you are using "SELECT *" in it; that may be one reason for this error.
    You can check OSS Notes 396642, 379266 and 997930.

  • DB Connect Load - "Unknow error while uploading data from the DB Table"

    Hi Experts,
    We have our BI 7 system connected to an Oracle DB based third-party tool. The loads are performing quite well in the DEV environment.
    I would like to know how we transport DB Connect DataSources to the quality system. Is there a different process to be followed for DB Connect DataSources?
    At present the connections between BI quality and the third-party quality system are established. We transported the DataSource from the BI DEV system to the BI quality system, but on triggering an InfoPackage we are not able to perform loads; it prompts "Unknow error while uploading data from the DB Table".
    Also, on comparing the DataSources in the DEV and quality systems, there are no fields in the "Proposal" tab of the DataSource in the quality system. I also cannot change or activate the DataSource in the quality system, as we don't have change access there.
    Please advise.
    Thanks,
    Abhijit

    Hi,
    Sorry for bumping an old thread, but did this issue ever get resolved?
    I am facing the same one. The loads work successfully in DEV, and the transport for the DB Connect DataSource also moved in successfully.
    One strange thing is that the DB user for DEV did not automatically change to the DB user for quality when I transported the DB Connect DataSource; it still shows me the DB user from DEV in the quality system.
    I get "Unknown Error" whenever I trigger the data package.
    Advait

  • Error while loading data onto the cube: Incompatible rule file

    Hi,
    I am trying to load data into an Essbase cube from a data file. I already have a rule file on the cube, and I am getting the following error while loading the data. Is there a problem with the rule file?
    SEVERE: Cannot begin data load. Essbase Error(1019058): Incompatible rule file. Duplicate member name rule file is used against unique name database.
    com.essbase.api.base.EssException: Cannot begin data load. Essbase Error(1019058): Incompatible rule file. Duplicate member name rule file is used against unique name database.
         at com.essbase.server.framework.EssOrbPluginDirect.ex_olap(Unknown Source)
         at com.essbase.server.framework.EssOrbPluginDirect.essMainBeginDataload(Unknown Source)
         at com.essbase.api.session.EssOrbPlugin._invokeMainMethod(Unknown Source)
         at com.essbase.api.session.EssOrbPlugin._invokeMethod2(Unknown Source)
         at com.essbase.api.session.EssOrbPlugin._invokeMethod(Unknown Source)
         at com.essbase.server.framework.EssOrbPluginDirect._invokeProtected(Unknown Source)
         at com.essbase.api.session.EssOrbPluginEmbedded.invokeMethod(Unknown Source)
         at com.essbase.api.session.EssOrbPluginEmbedded.invokeMethod(Unknown Source)
         at com.essbase.api.session.EssOrbPlugin.essMainBeginDataload(Unknown Source)
         at com.essbase.api.datasource.EssCube.beginDataload(Unknown Source)
         at grid.BudgetDataLoad.main(BudgetDataLoad.java:85)
    Error: Cannot begin data load. Essbase Error(1019058): Incompatible rule file. Duplicate member name rule file is used against unique name database.
    Feb 7, 2012 3:13:37 PM com.hyperion.dsf.server.framework.BaseLogger writeException
    SEVERE: Cannot Load buffer term. Essbase Error(1270040): Data load buffer [3] does not exist
    com.essbase.api.base.EssException: Cannot Load buffer term. Essbase Error(1270040): Data load buffer [3] does not exist
         at com.essbase.server.framework.EssOrbPluginDirect.ex_olap(Unknown Source)
         at com.essbase.server.framework.EssOrbPluginDirect.essMainLoadBufferTerm(Unknown Source)
         at com.essbase.api.session.EssOrbPlugin._invokeMainMethod(Unknown Source)
         at com.essbase.api.session.EssOrbPlugin._invokeMethod2(Unknown Source)
         at com.essbase.api.session.EssOrbPlugin._invokeMethod(Unknown Source)
         at com.essbase.server.framework.EssOrbPluginDirect._invokeProtected(Unknown Source)
         at com.essbase.api.session.EssOrbPluginEmbedded.invokeMethod(Unknown Source)
         at com.essbase.api.session.EssOrbPluginEmbedded.invokeMethod(Unknown Source)
         at com.essbase.api.session.EssOrbPlugin.essMainLoadBufferTerm(Unknown Source)
         at com.essbase.api.datasource.EssCube.loadBufferTerm(Unknown Source)
         at grid.BudgetDataLoad.main(BudgetDataLoad.java:114)
    Error: Cannot Load buffer term. Essbase Error(1270040): Data load buffer [3] does not exist
    Thanks,
    Santhosh

    " Incompatible rule file. Duplicate member name rule file is used against unique name database."
    I am just guessing here as I have never used the duplicate name functionality in Essbase, nor do I remember which versions it was in. However with that said I think your answer is in your error message.
    Just guessing again.... It appears that your rule file is set to allow duplicate member names while your database is not. With that information in hand (given to you in the error message) I would start to explore that.
