Error in loading a node

Hi Experts,
When I try to open a node (for example, a folder in Portal Content), it gives the error:
"Could not load or refresh node. Tree creation failed on node: <location>"
How can I resolve this?
Valuable answers will be rewarded.
Thanks in advance,
Suba

Hi
Please try the following to resolve the issue:
- Log in to the portal and navigate to System Administration -> System
  Configuration -> Service Configuration -> Applications ->
  com.sap.portal.productivity.desktop
- Set the values of "Upgrade all desktop objects at startup" and
  "Upgrade imported desktop objects" to True and save.
- Restart the J2EE engine.
Hope this helps

Similar Messages

  • DAC message while running execution plan - "Error while loading nodes"

    I have just installed and set up Informatica 8.6.1, DAC, and BI Apps 7.9.6 for an Oracle EBS R12.1.1 source instance.
    In Informatica I have defined two relational sources, "DataWarehouse" and "ORA_R1211" - the same names as in the physical data sources of DAC.
    I have set the flat-file parameter to "ORA_R1211_Flatfile" in DAC.
    After a successful build, when I run the ETL, the error "Error while loading nodes" occurs.
    The log file shows the following details:
    START OF ETL
    20 SEVERE Sat Apr 23 21:57:58 GST 2011
    ANOMALY INFO::: Error while loading nodes.
    EXCEPTION CLASS::: java.lang.NullPointerException
    com.siebel.etl.engine.core.SessionHandler.getNodes(SessionHandler.java:2842)
    com.siebel.etl.engine.core.SessionHandler.loadNodes(SessionHandler.java:473)
    com.siebel.etl.engine.core.ETL.thisETLProcess(ETL.java:372)
    com.siebel.etl.engine.core.ETL.run(ETL.java:658)
    com.siebel.etl.engine.core.ETL.execute(ETL.java:910)
    com.siebel.etl.etlmanager.EtlExecutionManager$1.executeEtlProcess(EtlExecutionManager.java:210)
    com.siebel.etl.etlmanager.EtlExecutionManager$1.run(EtlExecutionManager.java:165)
    java.lang.Thread.run(Thread.java:619)
    21 SEVERE Sat Apr 23 21:57:58 GST 2011
    *     CLOSING THE CONNECTION POOL DataWarehouse
    22 SEVERE Sat Apr 23 21:57:58 GST 2011
    *     CLOSING THE CONNECTION POOL ORA_R1211
    23 SEVERE Sat Apr 23 21:57:58 GST 2011
    END OF ETL
    *****************

    Hi,
    Mark the current EP as completed, re-assemble the subject area, generate the parameters, rebuild the EP, and run the load.
    Thanks,
    Navin KumarBolla

  • DIO Port Config & DIO Port Write Block Diagram Errors (Call Library Function Node: library not found or failed to load)

    Hi guys, I need help with this.
    I have a LabVIEW program that used to work on the old computer.
    The old computer crashed most of the time, so I upgraded the computer
    and used its hard drive as a slave in the new computer.
    I have no idea where its installers are, since the guy who made the program
    is no longer in my department.
    I downloaded all the needed drivers from NI: NI-DAQ 9.0, NI-VISA, NI-488.2,
    and the drivers for some instruments in the setup. I'm using LabVIEW 8.2.
    Everything was fine until I opened the LabVIEW program for our testing.
    Here is the error:
       DIO Port Config
       DIO Port Write
    Block Diagram Errors
       Call Library Function Node: library not found or failed to load
    Attachments: ErrorList.JPG (200 KB)

    Honestly, I'm a newbie in LabVIEW. I just want this old program to run on the new computer.
    The guys who installed the drivers on the old computer are no longer in my department,
    and I have no idea which drivers they used, so I just downloaded the drivers needed for my hardware and instruments.
    Here's my hardware: (cards: PCI-DIO-96, PCI-GPIB), (instruments: SCB100, E4407B, HP83623, HP3458, HP8657)
    OS: Windows XP Pro
    By the way, I have unzipped the Traditional DAQ drivers. First I tried 7.4.1, but an installation error appeared.
    I thought maybe the installer was corrupted, so I downloaded 7.4.4 and unzipped it.
    Still, the same installation error appears; I don't understand why both Traditional DAQ drivers give the same installation error.
    Now I have tried the DAQmx 8.7.2 driver, but DIO Port Config and DIO Port Write still have errors.

  • DACExecutionPlan: Error while loading nodes. Error while loading nodes.null

    Hi There,
    One of my DAC execution plans is giving the following error: "Error while loading nodes. Error while loading nodes.null"
    We have restarted the DAC services.
    I have reassembled the subject area, generated the execution plan parameters, and rebuilt the execution plan.
    But it is still giving the same error.
    Any thoughts, please?
    Thanks,
    Raghu

    Actually, while the EP was in a failed state, I added a few tasks, assembled the subject area, and built the EP.
    Later, when I restarted the failed EP, it gave the error above.
    Solution: I marked the EP as completed and ran the EP, and it went fine.
    Thanks for your help,
    Raghu

  • Error in Loading Hierarchies from BW

    Hi all,
    I am facing an error in loading hierarchies from BI to BPC.
    I successfully loaded the ID and all the other properties from BI, but I'm facing an error with the hierarchies.
    In my conversion file I have to load the data with CC_ appended to all nodes.
    Please help me out.
    Thanks,
    Rajesh

    I have created the transformation file with:
    NODENAME = NODENAME
    HIER_NODE = HIER_NODE
    HIER_NAME = *STR(CC_)+HIER_NAME(5:17)
    The error I'm getting is: "IS INVALID PARAMETER"

  • Error in Loading Meta Data File for Service 'CL_ACCOUNTING_DOCUMENT_DP'

    Hi Guys,
    Need to your assistance in solving the below Error.
    1. Error while loading the metadata file for various services for which connectors need to be created, for example CL_ACCOUNTING_DOCUMENT_DP:
    BEP | ZCB_COST_CENTER_SRV | 1 | Cost Center Service | CB_COST_CENTER_SRV | 1
    BEP | ZCB_GOODS_RECEIPT_SRV | 1 | Goods Receipt Service | CB_GOODS_RECEIPT_SRV | 1
    2. While expanding the node for connectors in ESH_COCKPIT for SAPAPPLH, the error below occurs:
    Could not rename Data Type "SIG_IL_USA_2" in SWC EAAPPLH - errors occurred during renaming
    Could not rename Data Type "SIG_IL_SDR_2" in SWC EAAPPLH - errors occurred during renaming
    Could not rename Data Type "SIG_IL_RES_2" in SWC EAAPPLH - errors occurred during renaming
    Could not rename Data Type "SIGN_TYPE_UD_1" in SWC EAAPPLH - errors occurred during renaming
    Could not rename Data Type "SIGN_TYPE_SM_1" in SWC EAAPPLH - errors occurred during renaming
    Could not rename Data Type "SIGN_TYPE_RR_1" in SWC EAAPPLH - errors occurred during renaming
    Could not rename Data Type "RMXTE_TRIALID_1" in SWC EAAPPLH - errors occurred during renaming
    Could not rename Data Type "QZUSMKZHL_1" in SWC EAAPPLH - errors occurred during renaming
    Could not rename Data Type "QWERKVORG_1" in SWC EAAPPLH - errors occurred during renaming
    Could not rename Data Type "QVNAME_2" in SWC EAAPPLH - errors occurred during renaming
    Could not rename Data Type "QVMENGE_2" in SWC EAAPPLH - errors occurred during renaming
    Could not rename Data Type "QVINSMK_2" in SWC EAAPPLH - errors occurred during renaming
    Could not rename Data Type "QVGRUPPE_2" in SWC EAAPPLH - errors occurred during renaming
    Could not rename Data Type "QVEZAEHLER_2" in SWC EAAPPLH - errors occurred during renaming

    Hi,
    have you solved this issue? We have the same problem with ESH_COCKPIT and the SAPAPPLH component.
    Regards,
    Martin Sindlar

  • Error 0(Native: listNetInterfaces:[3]) and error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

    Hi Gurus,
    I'm trying to upgrade my test 9.2.0.8 RAC to 10.1. I cannot upgrade to 10.2 because of RAM limitations on my test RAC. The 10.1 Clusterware software was installed successfully and the daemons are up, with the OCR and voting disk created. Then, at the end of the RAC software installation, root.sh needs to be run. When I run root.sh, it gives the error: "while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory". I have libpthread.so.0 in /lib.
    I looked it up on Metalink and found Doc ID 414163.1. I unset LD_ASSUME_KERNEL in vipca (this was not required in srvctl because there was no LD_ASSUME_KERNEL in srvctl). Then I tried to run vipca manually and received the following error: "Error 0(Native: listNetInterfaces:[3])". I'm able to see xclock and xeyes, so it's not a problem with X.
    OS: OEL5 32 bit
    oifcfg iflist
    eth0 192.168.2.0
    eth1 10.0.0.0
    oifcfg getif
    eth1 10.0.0.0 global cluster_interconnect
    eth1 10.1.1.0 global cluster_interconnect
    eth0 192.168.2.0 global public
    cat /etc/hosts
    192.168.2.3 sunny1pub.ezhome.com sunny1pub
    192.168.2.4 sunny2pub.ezhome.com sunny2pub
    192.168.2.33 sunny1vip.ezhome.com sunny1vip
    192.168.2.44 sunny2vip.ezhome.com sunny2vip
    10.1.1.1 sunny1prv.ezhome.com sunny1prv
    10.1.1.2 sunny2prv.ezhome.com sunny2prv
    My questions are:
    Should ping on sunny1vip and sunny2vip already be working? As of now they don't work.
    If you look at oifcfg getif, I initially had "eth1 10.0.0.0 global cluster_interconnect" and "eth0 192.168.2.0 global public"; then I created "eth1 10.1.1.0 global cluster_interconnect" with setif. Should it be 10.1.1.0 or 10.0.0.0? I looked at a subnet calculator and it says the subnet for 10.1.1.1 is 10.0.0.0. Metalink used 10.10.10.0, and hence I used 10.1.1.0.
    Any ideas on resolving this issue would be much appreciated. I have been searching the Oracle forums, Google, and Metalink, but all of them refer to Doc ID 414163.1, which doesn't seem to work. Please help. Thanks in advance.
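    For anyone hitting the same libpthread.so.0 failure: the Doc ID 414163.1 workaround described above boils down to editing $CRS_HOME/bin/vipca (and srvctl, where present) so LD_ASSUME_KERNEL is never exported. A minimal sketch of the edit, as an illustration rather than a verbatim patch (the exact lines vary by version):

    # In $CRS_HOME/bin/vipca, locate the block that sets the variable, e.g.
    #     LD_ASSUME_KERNEL=2.4.19
    #     export LD_ASSUME_KERNEL
    # and add directly after it:
    unset LD_ASSUME_KERNEL

    On newer glibc (such as OEL5's), the old LinuxThreads libraries that LD_ASSUME_KERNEL=2.4.19 asks for no longer exist, which is why the loader reports libpthread.so.0 as missing.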

    A step forward towards resolution, but I need some help from the gurus.
    root# cat /etc/hosts
    127.0.0.1 localhost.localdomain localhost
    ::1 localhost6.localdomain6 localhost6
    192.168.2.3 sunny1pub.ezhome.com sunny1pub
    192.168.2.4 sunny2pub.ezhome.com sunny2pub
    10.1.1.1 sunny1prv.ezhome.com sunny1prv
    10.1.1.2 sunny2prv.ezhome.com sunny2prv
    192.168.2.33 sunny1vip.ezhome.com sunny1vip
    192.168.2.44 sunny2vip.ezhome.com sunny2vip
    root# /u01/app/oracle/product/crs/bin/oifcfg iflist
    eth1 10.0.0.0
    eth0 192.168.2.0
    root# /u01/app/oracle/product/crs/bin/oifcfg getif
    eth1 10.0.0.0 global cluster_interconnect
    eth0 191.168.2.0 global public
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl config nodeapps -n sunny1pub -a
    ****ORACLE_HOME environment variable not set!
    ORACLE_HOME should be set to the main directory that contain oracle products. set and export ORACLE_HOME, then re-run.
    root# export ORACLE_BASE=/u01/app/oracle
    root# export ORACLE_HOME=/u01/app/oracle/product/10.1.0/Db_1
    root# export ORA_CRS_HOME=/u01/app/oracle/product/crs
    root# export PATH=$PATH:$ORACLE_HOME/bin
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl config nodeapps -n sunny1pub -a
    VIP does not exist.
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl add nodeapps -n sunny1pub -o $ORACLE_HOME -A 192.168.2.33/255.255.255.0
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl add nodeapps -n sunny2pub -o $ORACLE_HOME -A 192.168.2.44/255.255.255.0
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl config nodeapps -n sunny1pub -a
    VIP exists.: sunny1vip.ezhome.com/192.168.2.33/255.255.255.0
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl config nodeapps -n sunny2pub -a
    VIP exists.: sunny2vip.ezhome.com/192.168.2.44/255.255.255.0
    Once I executed the add nodeapps command as root on node 1, config nodeapps reported "VIP exists" on node 2 as well; the two config commands above returned the same values on both nodes. After this I executed root.sh on both nodes and did not receive any errors; it said the CRS resources are already configured.
    My questions to the gurus are as follows:
    Should ping on the VIPs work? It does not work now.
    srvctl status nodeapps -n sunny1pub (same result for sunny2pub):
    VIP is not running on node: sunny1pub
    GSD is not running on node: sunny1pub
    PRKO-2016 : Error in checking condition of listener on node: sunny1pub
    ONS daemon is not running on node: sunny1pub
    [root@sunny1pub ~]# /u01/app/oracle/product/crs/bin/crs_stat -t
    Name Type Target State Host
    ora....pub.gsd application OFFLINE OFFLINE
    ora....pub.ons application OFFLINE OFFLINE
    ora....pub.vip application OFFLINE OFFLINE
    ora....pub.gsd application OFFLINE OFFLINE
    ora....pub.ons application OFFLINE OFFLINE
    ora....pub.vip application OFFLINE OFFLINE
    Will crs_stat and "srvctl status nodeapps -n sunny1pub" work only after I upgrade my database, or should they already be working now? I chose to install just the 10.1.0.3 software, and after running root.sh on both nodes I clicked OK and the End of Installation screen appeared. Under installed products I see the 9i home, the 10g home, and the CRS home; under the 10g home and the CRS home I see the cluster nodes (sunny1pub and sunny2pub), so it looks like the 10g software is installed.
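    A note on the OFFLINE resources in the crs_stat output above: "srvctl add nodeapps" only registers the VIP/GSD/ONS resources, it does not start them, which would also explain why ping on the VIPs fails. A minimal sketch of the usual next step (run as root, using the same homes as in the transcript):

    /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl start nodeapps -n sunny1pub
    /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl start nodeapps -n sunny2pub
    /u01/app/oracle/product/crs/bin/crs_stat -t   # the vip/gsd/ons resources should now be ONLINE

    Only once the VIP resources are ONLINE should sunny1vip/sunny2vip answer ping.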

  • Error while loading service JCo RFC Provider

    When I click on the "JCo RFC Provider" node under Services in the Visual Administrator, I get an error:
    "Error while loading service JCo RFC Provider"
    Does anyone know what has gone wrong here, and how I can go about fixing it?
    Thanks in advance
    Regards
    Richard
    PS. We're using EP6sp14

    You have to use the correct Visual Admin version:
    if your NetWeaver is 6.40, use VA 6.40; likewise, for 7.00 use VA 7.00
    (you get them from the OS where NetWeaver is installed).
    Another possibility: use the Administrator login and check whether it works, then correct your permissions (mostly the 'Administrators' group is missing).
    Regards,
    Chris
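    For reference, a sketch of where the matching Visual Administrator launcher usually lives on the server (a standard NetWeaver layout is assumed; <SID> and <nn> are placeholders):

    # On the host where the engine runs (Unix shown; on Windows use go.bat):
    cd /usr/sap/<SID>/JC<nn>/j2ee/admin
    ./go    # starts the Visual Administrator shipped with this engine, so the versions always match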

  • R12 installation giving me error [while loading shared libraries: librt.so]

    Hello ..
    I am doing a fresh single-node R12 installation on Fedora. The installation was going fine until the point where it said to run "autoconfig.sh" for the APPS tier. I went to $INST_TOP/admin/scripts and ran "autoconfig.sh". After that the installer showed me all the errors, and when I tried to see the log file
    I got the following errors:
    [oracle@aurie / ]$ ls
    ls: error while loading shared libraries: librt.so.1: cannot open shared object file: No such file or directory
    [oracle@aurie / ]$ su - root
    su: error while loading shared libraries: libcrypt.so.1: cannot open shared object file: No such file or directory
    Now I can't do anything on this machine. I tried rebooting it, but no matter what command I try to run I get the above error.
    I can't find any solution online.
    Any help would be appreciated.
    Thanks

    Hello Hussein,
    I changed my Fedora to the following:
    [root@aurie /]# cat /etc/issue
    Red Hat Enterprise Linux Server release 5 (Tikanga)
    Kernel \r on an \m
    [root@aurie /]# cat /proc/version
    Linux version 2.6.18-8.el5 ([email protected]) (gcc version 4.1.1 20070105 (Red Hat 4.1.1-52)) #1 SMP Fri Jan 26 14:15:21 EST 2007
    But the installer is failing at the end of the DB installation; it is not able to create the database.
    ==== ApplyDatabase_05092341.log ===========
    Executable : /d01/oracle/VIS/db/tech_st/11.1.0/bin/sqlplus
    The log information will be written to
    /d01/oracle/VIS/db/tech_st/11.1.0/appsutil/log/VIS_aurie/adcrdb_VIS.txt
    Creating the control file for VIS_aurie database ...
    exit_code=127
    Checking for errors ...
    .end std out.
    ***sqlplus: error while loading shared libraries: libclntsh.so.11.1: cannot open shared object file: No such file or directory***
    ***egrep: /d01/oracle/VIS/db/tech_st/11.1.0/appsutil/log/VIS_aurie/adcrdb_VIS.txt: No such file or directory***
    **.end err out.**
    -------------------ADX Database Utility Finished---------------
    RC-00118: Error occurred during creation of database
    Raised by oracle.apps.ad.clone.ApplyDatabase
    =============
    ============05092156.log============
    Processing Disk68....
    runProcess_2
    Statusstring Registering Database
    Executing command: /R12_setup/startCD/Disk1/rapidwiz/jre/Linux/1.6.0//bin/java -DCONTEXT_VALIDATED=true -mx512M -classpath /R12_setup/startCD/Disk1/rapidwiz/jlib/java:/R12_setup/startCD/Disk1/rapidwiz/jlib/xmlparserv2.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/ojdbc14.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/oui/OraInstaller.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/oui/ewt3.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/oui/share.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/oui/srvm.jar oracle.apps.ad.clone.ApplyDatabase -e /d01/oracle/VIS/db/tech_st/11.1.0/appsutil/VIS_aurie.xml -stage /R12_setup/startCD/Disk1/rapidwiz -showProgress -phase reg -nopromptmsg
    Log file located at /d01/oracle/VIS/db/tech_st/11.1.0/appsutil/log/VIS_aurie/ApplyDatabase_05092341.log
    | 0% completed
    runProcess_3
    Statusstring Configuring Database
    Executing command: /R12_setup/startCD/Disk1/rapidwiz/jre/Linux/1.6.0//bin/java -DCONTEXT_VALIDATED=true -mx512M -classpath /R12_setup/startCD/Disk1/rapidwiz/jlib/java:/R12_setup/startCD/Disk1/rapidwiz/jlib/xmlparserv2.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/ojdbc14.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/oui/OraInstaller.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/oui/ewt3.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/oui/share.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/oui/srvm.jar oracle.apps.ad.clone.ApplyDatabase -e /d01/oracle/VIS/db/tech_st/11.1.0/appsutil/VIS_aurie.xml -stage /R12_setup/startCD/Disk1/rapidwiz -showProgress -phase cfg -nopromptmsg
    Log file located at /d01/oracle/VIS/db/tech_st/11.1.0/appsutil/log/VIS_aurie/ApplyDatabase_05092342.log
    | 0% completed
    RC-50004: Fatal: Error occurred in ApplyDatabase:
    Control file creation failed
    Cannot execute configure of database using RapidClone
    RW-50010: Error: - script has returned an error: 1
    RW-50004: Error code received when running external process. Check log file for details.
    Running Database Install Driver for VIS instance
    Processing DriverFile = /R12_setup/startCD/Disk1/rapidwiz/template/adridb.drv
    Running Instantiation Drivers for /R12_setup/startCD/Disk1/rapidwiz/template/adridb.drv
    instantiate file:
    source : /R12_setup/startCD/Disk1/rapidwiz/template/adrun11g.sh
    dest : /d01/oracle/VIS/db/tech_st/11.1.0/temp/VIS_aurie/adrun11g.sh
    backup : /d01/oracle/VIS/db/tech_st/11.1.0/temp/VIS_aurie/adrun11g.sh to /d01/oracle/VIS/db/tech_st/11.1.0/appsutil/out/VIS_aurie/templbac/adrun11g.sh
    setting permissions: 755
    setting ownership: oracle:dba
    instantiate file:
    source : /R12_setup/startCD/Disk1/rapidwiz/template/adrundb.sh
    dest : /d01/oracle/VIS/db/tech_st/11.1.0/temp/VIS_aurie/adrundb.sh
    backup : /d01/oracle/VIS/db/tech_st/11.1.0/temp/VIS_aurie/adrundb.sh to /d01/oracle/VIS/db/tech_st/11.1.0/appsutil/out/VIS_aurie/templbac/adrundb.sh
    setting permissions: 755
    setting ownership: oracle:dba
    Step 0 of 5
    Command: /d01/oracle/VIS/db/tech_st/11.1.0/temp/VIS_aurie/adrun11g.sh
    Step 1 of 5: Doing UNIX preprocessing
    Processing Step 1 of 5
    Step 1 of 5
    Command: /d01/oracle/VIS/db/tech_st/11.1.0/temp/VIS_aurie/adrundb.sh
    Step 2 of 5: Doing UNIX preprocessing
    Processing Step 2 of 5
    Executing: /d01/oracle/VIS/db/tech_st/11.1.0/temp/VIS_aurie/adrundb.sh
    STARTED INSTALL PHASE : DATABASE : Sun May 10 09:33:17 PDT 2009
    Preparing environment to install databases ...
    Setting LD_LIBRARY_PATH to - /R12_setup/startCD/Disk1/rapidwiz/lib/Linux -
    Setting PATH to - /R12_setup/startCD/Disk1/rapidwiz/jlib/java:/R12_setup/startCD/Disk1/rapidwiz/jlib/xmlparserv2.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/ojdbc14.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/oui/OraInstaller.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/oui/ewt3.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/oui/share.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/oui/srvm.jar -
    Setting CLASSPATH to - /R12_setup/startCD/Disk1/rapidwiz/jlib/java:/R12_setup/startCD/Disk1/rapidwiz/jlib/xmlparserv2.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/ojdbc14.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/oui/OraInstaller.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/oui/ewt3.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/oui/share.jar:/R12_setup/startCD/Disk1/rapidwiz/jlib/oui/srvm.jar -
    ... installing VISION demo database
    FINISHED INSTALL PHASE : DATABASE : Sun May 10 09:33:18 PDT 2009
    /d01/oracle/VIS/db/tech_st/11.1.0/temp/VIS_aurie/adrundb.sh has succeeded
    =============
    Now I am again hitting the same issue. Whatever I try to do after this failed installation, I get the following:
    [oracle@aurie 11.1.0]$ ls
    ls: error while loading shared libraries: librt.so.1: cannot open shared object file: No such file or directory
    [oracle@aurie 11.1.0]$ cat /etc/issue
    cat: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
    [oracle@aurie 11.1.0]$ cd /
    [oracle@aurie /]$ ls
    ls: error while loading shared libraries: librt.so.1: cannot open shared object file: No such file or directory
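    Worth noting for anyone debugging this: the install log above shows rapidwiz exporting LD_LIBRARY_PATH=/R12_setup/startCD/Disk1/rapidwiz/lib/Linux, and a stale or wrong LD_LIBRARY_PATH is the classic way basic commands like ls and cat start failing with "error while loading shared libraries". A minimal diagnostic sketch (this is an assumption about the cause, not a confirmed fix):

    echo $LD_LIBRARY_PATH                  # anything unexpected pointing at the stage area?
    unset LD_LIBRARY_PATH                  # then retry: ls /
    ls -l /lib/librt.so.1 /lib/libc.so.6   # confirm the system libraries themselves are intact

    If the commands still fail with LD_LIBRARY_PATH unset, the system C libraries themselves have been damaged, and glibc (or the OS) will need to be repaired before retrying the install.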

  • ORA-12709: error while loading create database character set after upgrade

    Dear all,
    I am getting "ORA-12709: error while loading create database character set" after upgrading the database from 10.2.0.3 to 11.2.0.3 in an E-Business Suite environment.
    The current application version is 12.0.6.
    Please help me to resolve it.
    SQL> startup;
    ORACLE instance started.
    Total System Global Area 1.2831E+10 bytes
    Fixed Size 2171296 bytes
    Variable Size 2650807904 bytes
    Database Buffers 1.0133E+10 bytes
    Redo Buffers 44785664 bytes
    ORA-12709: error while loading create database character set
    -bash-3.00$ echo $ORA_NLS10
    /u01/oracle/PROD/db/teche_st/11.2.0/nls/data/9idata
    export ORACLE_BASE=/u01/oracle
    export ORACLE_HOME=/u01/oracle/PROD/db/tech_st/11.2.0
    export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/perl/bin:$PATH
    export PERL5LIB=$ORACLE_HOME/perl/lib/5.10.0:$ORACLE_HOME/perl/site_perl/5.10.0
    export ORA_NLS10=/u01/oracle/PROD/db/teche_st/11.2.0/nls/data/9idata
    export ORACLE_SID=PROD
    -bash-3.00$ pwd
    /u01/oracle/PROD/db/tech_st/11.2.0/nls/data/9idata
    -bash-3.00$ ls -lh |more
    total 56912
    -rw-r--r-- 1 oracle oinstall 951 Jan 15 16:05 lx00001.nlb
    -rw-r--r-- 1 oracle oinstall 957 Jan 15 16:05 lx00002.nlb
    -rw-r--r-- 1 oracle oinstall 959 Jan 15 16:05 lx00003.nlb
    -rw-r--r-- 1 oracle oinstall 984 Jan 15 16:05 lx00004.nlb
    -rw-r--r-- 1 oracle oinstall 968 Jan 15 16:05 lx00005.nlb
    -rw-r--r-- 1 oracle oinstall 962 Jan 15 16:05 lx00006.nlb
    -rw-r--r-- 1 oracle oinstall 960 Jan 15 16:05 lx00007.nlb
    -rw-r--r-- 1 oracle oinstall 950 Jan 15 16:05 lx00008.nlb
    -rw-r--r-- 1 oracle oinstall 940 Jan 15 16:05 lx00009.nlb
    -rw-r--r-- 1 oracle oinstall 939 Jan 15 16:05 lx0000a.nlb
    -rw-r--r-- 1 oracle oinstall 1006 Jan 15 16:05 lx0000b.nlb
    -rw-r--r-- 1 oracle oinstall 1008 Jan 15 16:05 lx0000c.nlb
    -rw-r--r-- 1 oracle oinstall 998 Jan 15 16:05 lx0000d.nlb
    -rw-r--r-- 1 oracle oinstall 1005 Jan 15 16:05 lx0000e.nlb
    -rw-r--r-- 1 oracle oinstall 926 Jan 15 16:05 lx0000f.nlb
    -rw-r--r-- 1 oracle oinstall 1.0K Jan 15 16:05 lx00010.nlb
    -rw-r--r-- 1 oracle oinstall 958 Jan 15 16:05 lx00011.nlb
    -rw-r--r-- 1 oracle oinstall 956 Jan 15 16:05 lx00012.nlb
    -rw-r--r-- 1 oracle oinstall 1005 Jan 15 16:05 lx00013.nlb
    -rw-r--r-- 1 oracle oinstall 970 Jan 15 16:05 lx00014.nlb
    -rw-r--r-- 1 oracle oinstall 950 Jan 15 16:05 lx00015.nlb
    -rw-r--r-- 1 oracle oinstall 1.0K Jan 15 16:05 lx00016.nlb
    -rw-r--r-- 1 oracle oinstall 957 Jan 15 16:05 lx00017.nlb
    -rw-r--r-- 1 oracle oinstall 932 Jan 15 16:05 lx00018.nlb
    -rw-r--r-- 1 oracle oinstall 932 Jan 15 16:05 lx00019.nlb
    -rw-r--r-- 1 oracle oinstall 951 Jan 15 16:05 lx0001a.nlb
    -rw-r--r-- 1 oracle oinstall 944 Jan 15 16:05 lx0001b.nlb
    -rw-r--r-- 1 oracle oinstall 953 Jan 15 16:05 lx0001c.nlb
    Starting up:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options.
    ORACLE_HOME = /u01/oracle/PROD/db/tech_st/11.2.0
    System name: SunOS
    Node name: proddb3.zakathouse.org
    Release: 5.10
    Version: Generic_147440-19
    Machine: sun4u
    Using parameter settings in server-side spfile /u01/oracle/PROD/db/tech_st/11.2.0/dbs/spfilePROD.ora
    System parameters with non-default values:
    processes = 200
    sessions = 400
    timed_statistics = TRUE
    event = ""
    shared_pool_size = 416M
    shared_pool_reserved_size= 40M
    nls_language = "american"
    nls_territory = "america"
    nls_sort = "binary"
    nls_date_format = "DD-MON-RR"
    nls_numeric_characters = ".,"
    nls_comp = "binary"
    nls_length_semantics = "BYTE"
    memory_target = 11G
    memory_max_target = 12G
    control_files = "/u01/oracle/PROD/db/apps_st/data/cntrl01.dbf"
    control_files = "/u01/oracle/PROD/db/tech_st/10.2.0/dbs/cntrl02.dbf"
    control_files = "/u01/oracle/PROD/db/apps_st/data/cntrl03.dbf"
    db_block_checksum = "TRUE"
    db_block_size = 8192
    compatible = "11.2.0.0.0"
    log_archive_dest_1 = "LOCATION=/u01/oracle/PROD/db/apps_st/data/archive"
    log_archive_format = "%t_%s_%r.dbf"
    log_buffer = 14278656
    log_checkpoint_interval = 100000
    log_checkpoint_timeout = 1200
    db_files = 512
    db_file_multiblock_read_count= 8
    db_recovery_file_dest = "/u01/oracle/fast_recovery_area"
    db_recovery_file_dest_size= 14726M
    log_checkpoints_to_alert = TRUE
    dml_locks = 10000
    undo_management = "AUTO"
    undo_tablespace = "APPS_UNDOTS1"
    db_block_checking = "FALSE"
    session_cached_cursors = 500
    utl_file_dir = "/usr/tmp"
    utl_file_dir = "/usr/tmp"
    utl_file_dir = "/u01/oracle/PROD/db/tech_st/10.2.0/appsutil/outbound"
    utl_file_dir = "/u01/oracle/PROD/db/tech_st/10.2.0/appsutil/outbound/PROD_proddb3"
    utl_file_dir = "/usr/tmp"
    plsql_code_type = "INTERPRETED"
    plsql_optimize_level = 2
    job_queue_processes = 2
    cursor_sharing = "EXACT"
    parallel_min_servers = 0
    parallel_max_servers = 8
    core_dump_dest = "/u01/oracle/PROD/db/tech_st/10.2.0/admin/PROD_proddb3/cdump"
    audit_file_dest = "/u01/oracle/admin/PROD/adump"
    db_name = "PROD"
    open_cursors = 600
    pga_aggregate_target = 1G
    workarea_size_policy = "AUTO"
    optimizer_secure_view_merging= FALSE
    aq_tm_processes = 1
    olap_page_pool_size = 4M
    diagnostic_dest = "/u01/oracle"
    max_dump_file_size = "20480"
    Tue Jan 15 16:16:02 2013
    PMON started with pid=2, OS id=18608
    Tue Jan 15 16:16:02 2013
    PSP0 started with pid=3, OS id=18610
    Tue Jan 15 16:16:03 2013
    VKTM started with pid=4, OS id=18612 at elevated priority
    VKTM running at (10)millisec precision with DBRM quantum (100)ms
    Tue Jan 15 16:16:03 2013
    GEN0 started with pid=5, OS id=18616
    Tue Jan 15 16:16:03 2013
    DIAG started with pid=6, OS id=18618
    Tue Jan 15 16:16:03 2013
    DBRM started with pid=7, OS id=18620
    Tue Jan 15 16:16:03 2013
    DIA0 started with pid=8, OS id=18622
    Tue Jan 15 16:16:03 2013
    MMAN started with pid=9, OS id=18624
    Tue Jan 15 16:16:03 2013
    DBW0 started with pid=10, OS id=18626
    Tue Jan 15 16:16:03 2013
    LGWR started with pid=11, OS id=18628
    Tue Jan 15 16:16:03 2013
    CKPT started with pid=12, OS id=18630
    Tue Jan 15 16:16:03 2013
    SMON started with pid=13, OS id=18632
    Tue Jan 15 16:16:04 2013
    RECO started with pid=14, OS id=18634
    Tue Jan 15 16:16:04 2013
    MMON started with pid=15, OS id=18636
    Tue Jan 15 16:16:04 2013
    MMNL started with pid=16, OS id=18638
    DISM started, OS id=18640
    ORACLE_BASE from environment = /u01/oracle
    Tue Jan 15 16:16:08 2013
    ALTER DATABASE MOUNT
    ORA-12709 signalled during: ALTER DATABASE MOUNT...

    "ORA-12709 signalled during: ALTER DATABASE MOUNT..." - Do you have any trace files generated at the time you get this error?
    Please see these docs.
    ORA-12709: WHILE STARTING THE DATABASE [ID 1076156.6]
    Upgrading from 9i to 10gR2 Fails With ORA-12709 : Error While Loading Create Database Character Set [ID 732861.1]
    Ora-12709 While Trying To Start The Database [ID 311035.1]
    ORA-12709 when Mounting the Database [ID 160478.1]
    How to Move From One Database Character Set to Another at the Database Level [ID 1059300.6]
    Thanks,
    Hussein
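    In addition to the notes above, ORA-12709 at mount time is very often just ORA_NLS10 pointing at a directory that does not exist. A quick sanity check worth running here, sketched from the values quoted in this thread:

    echo $ORA_NLS10       # quoted above as /u01/oracle/PROD/db/teche_st/11.2.0/nls/data/9idata
    ls -d "$ORA_NLS10"    # fails if the path does not exist
    # the pwd output above shows the real directory is under tech_st, not teche_st, so:
    export ORA_NLS10=/u01/oracle/PROD/db/tech_st/11.2.0/nls/data/9idata

    Restart the instance after correcting the variable and retry ALTER DATABASE MOUNT.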

  • Error while loading hierarchies

    Dear all,
    I am facing the following errors while loading hierarchies from R/3.
    1. Error: "Error in the hierarchical structure"
    When I open the logs I can see the following analysis, but I am not able to understand it:
    Analysis
    The node with the technical name 0000104 appears more than once under the node with the ID 10400011. A node must be hung only once under its direct predecessor (parent). The nodes with the same technical name have the IDs 10400017 and 10400012.
    When one of the nodes is an interval, this means that the other node is contained in this interval. When both nodes are intervals, this means that they overlap.
    Possible solution
    Remove the Duplicate Nodes
    Can anyone explain this, please?
    2. Error: InfoObject ZH_LKSTOM is not available for DataSource ZH_LONKLA_HIER
    I am trying to load the hierarchy for the InfoObject ZH_LONKLA.
    Can anyone kindly help me out with these?
    Regards
    Veena.

    1. Error: "Error in the hierarchical structure"
    The hierarchy does have duplicate values. The likely cause is that you are loading a set from R/3, where the system allows duplicate values under different nodes, but that does not hold true in BI. So ask your functional folks to make the set unique, and the load should go fine.
    2. Error: InfoObject ZH_LKSTOM is not available for DataSource ZH_LONKLA_HIER
    ("I am trying to load the hierarchy for the InfoObject ZH_LONKLA")
    Not sure what the relation between ZH_LKSTOM and ZH_LONKLA is.

  • Error while loading the Web template "0ANALYSIS_PATTERN" (return value "4")

    I activated all the 0* objects in BTMP and checked that they were all activated, but I got errors when opening the BEx Web Analyzer.
    Error messages :
    Error while loading the Web template "0ANALYSIS_PATTERN" (return value "4")
    ERROR: Command type SWITCH_AXES of object not recognized
    ERROR: Parameter GENERAL_TEXT_ELEMENT not recognized; check your metadata
    So I ran the report RS_TEMPLATE_MAINTAIN_70 on 0ANALYSIS_PATTERN, version D, and chose Validate.
    Error messages :
    Template Include Item HEADER_TEMPLATE unresolved or empty. Close and reopen template.     
    Template Include Item FOOTER_TEMPLATE unresolved or empty. Close and reopen template.          
    Width of object item:GROUP_ITEM:GROUP_ITEM_1 is very small (1&1)
    Query  does not exist in the BI System (Object QU )
    Cannot instantiate data provider
    I also ran RS_TEMPLATE_MAINTAIN_70 on 0ANALYSIS_PATTERN, version A, and got the following errors:
    Command type SWITCH_AXES of object  not recognized
    Command type SWITCH_AXES of object  not recognized
    Command type SWITCH_AXES of object  not recognized
    Parameter GENERAL_TEXT_ELEMENT not recognized; check your metadata
    Parameter GENERAL_TEXT_ELEMENT of object item:GROUP_ITEM:GROUP_ITEM_1 not recognized
    Command type SWITCH_AXES of object item:GROUP_ITEM:GROUP_ITEM_1 not recognized
    Command type SWITCH_AXES of object item:GROUP_ITEM:GROUP_ITEM_1 not recognized
    Command type SWITCH_AXES of object item:GROUP_ITEM:GROUP_ITEM_1 not recognized
    Command SWITCH_AXES cannot be located in parameter ACTION
    Command SWITCH_AXES cannot be located in parameter ACTION
    Command SWITCH_AXES cannot be located in parameter ACTION
    Parameter GENERIC_TEXT_BINDING of object item:GROUP_ITEM:GROUP_ITEM_1 cannot be higher-level node of GENERAL_TEXT_ELEMENT
    Item item:GROUP_ITEM:GROUP_ITEM_1 of type DROPDOWN_ITEM cannot be a parent node of DATA_PROVIDER_REF
    Item item:GROUP_ITEM:GROUP_ITEM_1 of type CHART_ITEM cannot be a parent node of VISIBLE
    Item item:GROUP_ITEM:GROUP_ITEM_1 of type CHART_ITEM cannot be a parent node of LEGEND_POSITION
    Item item:GROUP_ITEM:GROUP_ITEM_1 of type CHART_ITEM cannot be a parent node of LEGEND_VISIBLE
    Item item:GROUP_ITEM:GROUP_ITEM_1 of type CHART_ITEM cannot be a parent node of LEGEND_ONLY
    Invalid parameter value  for parameter ACTION in object item:GROUP_ITEM:GROUP_ITEM_1
    Invalid parameter value  for parameter ACTION in object item:GROUP_ITEM:GROUP_ITEM_1
    Invalid parameter value  for parameter ACTION in object item:GROUP_ITEM:GROUP_ITEM_1
    bi-Tag bi:item has unknown attribute key. Please check your spelling
    bi-Tag bi:item has unknown attribute key. Please check your spelling
    bi-Tag bi:item has unknown attribute key. Please check your spelling
    bi-Tag bi:item has unknown attribute key. Please check your spelling
    What could be the problem?

    Hi,
    Please look into this note and let me know if it helps: Note 917950 - SAP NetWeaver 2004s: Setting Up BEx Web (https://websmp202.sap-ag.de/~form/handler?_APP=01100107900000000342&_EVENT=REDIR&_NNUM=917950&_NLANG=E)
    Check this link as well, for BEx Web Configuration: https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/ca6de811-0c01-0010-43b2-c64584eef943
    What is the SP level?
    Nagesh Ganisetti.

  • Error while loading the Hierarchy to 0GLACCEXT

    Dear all,
    I am trying to load a hierarchy, ZEM 1, that was manually created in R/3. When I execute the InfoPackage, it shows the errors:
    1. Record 8: Node characteristic 0GL_ACCOUNT is not entered as hierarchy characteristic for 0GLACCEXT
    2. Too many error records - update terminated
    Also, what is the concept of 0BAL_DEPEND being added as an attribute to 0GLACCEXT?
    Thanks

    Hi Dash,
    I am providing the long text for the error. Also, can you tell me how I should use the 0BAL_DEPEND field as an attribute of the hierarchy of 0GLACCEXT? Because I am supposed to do this:
    Hierarchy for Characteristic 0GLACCEXT with Attribute 0BAL_DEPEND
    The hierarchy for characteristic 0GLACCEXT is used as the financial statement version in BW queries. To technically enable the particular exception aggregation for contra items, the hierarchy table of characteristic 0GLACCEXT contains the attribute Balance-Dependency of a Hierarchy Node (technical name 0BAL_DEPEND).
    The long error message:
    Diagnosis
    The nodes NODEID = [00000194, 00001627] have the same node name NODENAME = '10000000950090'. This is not allowed because neither node is a leaf and both nodes are assigned to the same characteristic 0GL_ACCOUNT. Note that a maximum of 50 characters is available for message variables; the node name NODENAME = '10000000950090' might not be displayed in its full length.
    System Response
    Procedure
    Try to localize the problem. If the hierarchy is loaded from an SAP source system, you can check whether the extracted data is correct by executing transaction RSA3 in the source system. If necessary, check if the data is modified with a user exit. If the hierarchy is loaded from a file, check the contents of this file. The problem might also be due to an error in the transfer rules or in the transformation.
    You can identify the two duplicate nodes from the values for the node ID NODEID = [00000194, 00001627]. The problem can be caused by identical nodes delivered more than once from the source. It is also possible that incorrect values for parameters such as the node name NODENAME or the validity period [DATEFROM, DATETO] are the cause of the problem.
    First check if the value for node name NODENAME = '10000000950090' is correct. The node name consists of the characteristic value for the hierarchy basic characteristic and the characteristic values of all the characteristics compounded to this characteristic. Errors creating the node name often result in duplicate nodes. This problem also occurs if the node names of all nodes are initial.
    If the structure of the hierarchy is defined as time-dependent, the time validity of a node can be restricted with the fields DATEFROM and DATETO. Duplicate nodes generally do not occur if the validity intervals of two nodes do not overlap in time. In this case check if the fields DATEFROM and DATETO are correctly filled.
    In this case the duplicate nodes are not leaves. If you want to reuse an existing subtree at another location in the hierarchy, you can refer to this subtree with a link node. Possibly one of the two duplicate nodes was incorrectly not marked as a link node.
    Please suggest what to do.

  • Error while loading the runtime repository via HTTP

    Hi Experts,
    I am trying to delete an enhancement. I enter the component name and the enhancement set in BSP_WD_CMPWB, and when I right-click the enhanced view and select Delete I get the following error: "Error while loading the runtime repository via HTTP". How do I delete this enhancement?
    Regards
    Abdullah Ismail.

    If for some reason the runtime repository is not coherent, you get an error each time you try to read it (and this is the case when you open a component using transaction BSP_WD_CMPWB).
    This is because the XML file is interpreted by a CALL TRANSFORMATION statement, and any incorrect node will raise an uncaught exception.
    Solution:
    The enhanced view is contained in the BSP application you created the first time you enhanced the component.
    Go to SE80 and enter the BSP application where your objects are stored (the name you provided the first time).
    There you can modify the objects directly, including the runtime repository, which is stored under the node "Pages with Flow Logic".
    Once the correction is done, you can access your component again through transaction BSP_WD_CMPWB (and delete it properly, if that is what you want to do).

  • Error: Halting this cluster node due to unrecoverable service failure

    Our cluster has experienced some sort of fault that has only become apparent today. The origin appears to have been nearly a month ago, yet the symptoms have only just manifested.
    The node in question is a standalone instance running a DistributedCache service with local storage. It output the following to stdout on Jan-22:
    Coherence <Error>: Halting this cluster node due to unrecoverable service failure
    It finally failed today with OutOfMemoryError: Java heap space.
    We're running coherence-3.5.2.jar.
    Q1: It looks like this node failed on Jan-22 yet we did not notice. What is the best way to monitor node health?
    Q2: What might the root cause be for such a fault?
    I found the following in the logs:
    2011-01-22 01:18:58,296 Coherence Logger@9216774 3.5.2/463 ERROR 2011-01-22 01:18:58.296/9910749.462 Oracle Coherence EE 3.5.2/463 <Error> (thread=Cluster, member=33): Attempting recovery (due to soft timeout) of Guard{Daemon=DistributedCache}
    2011-01-22 01:19:04,772 Coherence Logger@9216774 3.5.2/463 ERROR 2011-01-22 01:19:04.772/9910755.938 Oracle Coherence EE 3.5.2/463 <Error> (thread=Cluster, member=33): Terminating guarded execution (due to hard timeout) of Guard{Daemon=DistributedCache}
    2011-01-22 01:19:05,785 Coherence Logger@9216774 3.5.2/463 ERROR 2011-01-22 01:19:05.785/9910756.951 Oracle Coherence EE 3.5.2/463 <Error> (thread=Termination Thread, member=33): Full Thread Dump
    Thread[Reference Handler,10,system]
    java.lang.Object.wait(Native Method)
    java.lang.Object.wait(Object.java:485)
    java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
    Thread[DistributedCache,5,Cluster]
    java.nio.Bits.copyToByteArray(Native Method)
    java.nio.DirectByteBuffer.get(DirectByteBuffer.java:224)
    com.tangosol.io.nio.ByteBufferInputStream.read(ByteBufferInputStream.java:123)
    java.io.DataInputStream.readFully(DataInputStream.java:178)
    java.io.DataInputStream.readFully(DataInputStream.java:152)
    com.tangosol.util.Binary.readExternal(Binary.java:1066)
    com.tangosol.util.Binary.<init>(Binary.java:183)
    com.tangosol.io.nio.BinaryMap$Block.readValue(BinaryMap.java:4304)
    com.tangosol.io.nio.BinaryMap$Block.getValue(BinaryMap.java:4130)
    com.tangosol.io.nio.BinaryMap.get(BinaryMap.java:377)
    com.tangosol.io.nio.BinaryMapStore.load(BinaryMapStore.java:64)
    com.tangosol.net.cache.SerializationPagedCache$WrapperBinaryStore.load(SerializationPagedCache.java:1547)
    com.tangosol.net.cache.SerializationPagedCache$PagedBinaryStore.load(SerializationPagedCache.java:1097)
    com.tangosol.net.cache.SerializationMap.get(SerializationMap.java:121)
    com.tangosol.net.cache.SerializationPagedCache.get(SerializationPagedCache.java:247)
    com.tangosol.net.cache.AbstractSerializationCache$1.getOldValue(AbstractSerializationCache.java:315)
    com.tangosol.net.cache.OverflowMap$Status.registerBackEvent(OverflowMap.java:4210)
    com.tangosol.net.cache.OverflowMap.onBackEvent(OverflowMap.java:2316)
    com.tangosol.net.cache.OverflowMap$BackMapListener.onMapEvent(OverflowMap.java:4544)
    com.tangosol.util.MultiplexingMapListener.entryDeleted(MultiplexingMapListener.java:49)
    com.tangosol.util.MapEvent.dispatch(MapEvent.java:214)
    com.tangosol.util.MapEvent.dispatch(MapEvent.java:166)
    com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
    com.tangosol.net.cache.AbstractSerializationCache.dispatchEvent(AbstractSerializationCache.java:338)
    com.tangosol.net.cache.AbstractSerializationCache.dispatchPendingEvent(AbstractSerializationCache.java:321)
    com.tangosol.net.cache.AbstractSerializationCache.removeBlind(AbstractSerializationCache.java:155)
    com.tangosol.net.cache.SerializationPagedCache.removeBlind(SerializationPagedCache.java:348)
    com.tangosol.util.AbstractKeyBasedMap$KeySet.remove(AbstractKeyBasedMap.java:556)
    com.tangosol.net.cache.OverflowMap.removeInternal(OverflowMap.java:1299)
    com.tangosol.net.cache.OverflowMap.remove(OverflowMap.java:380)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$Storage.clear(DistributedCache.CDB:24)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onClearRequest(DistributedCache.CDB:32)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ClearRequest.run(DistributedCache.CDB:1)
    com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheRequest.onReceived(DistributedCacheRequest.CDB:12)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    java.lang.Thread.run(Thread.java:619)
    Thread[Finalizer,8,system]
    java.lang.Object.wait(Native Method)
    java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
    java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
    java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)
    Thread[PacketReceiver,7,Cluster]
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
    com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketReceiver.onWait(PacketReceiver.CDB:2)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[RMI TCP Accept-0,5,system]
    java.net.PlainSocketImpl.socketAccept(Native Method)
    java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
    java.net.ServerSocket.implAccept(ServerSocket.java:453)
    java.net.ServerSocket.accept(ServerSocket.java:421)
    sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:369)
    sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:341)
    java.lang.Thread.run(Thread.java:619)
    Thread[PacketSpeaker,8,Cluster]
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.queue.ConcurrentQueue.waitForEntry(ConcurrentQueue.CDB:16)
    com.tangosol.coherence.component.util.queue.ConcurrentQueue.remove(ConcurrentQueue.CDB:7)
    com.tangosol.coherence.component.util.Queue.remove(Queue.CDB:1)
    com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketSpeaker.onNotify(PacketSpeaker.CDB:62)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    java.lang.Thread.run(Thread.java:619)
    Thread[Logger@9216774 3.5.2/463,3,main]
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[PacketListener1,8,Cluster]
    java.net.PlainDatagramSocketImpl.receive0(Native Method)
    java.net.PlainDatagramSocketImpl.receive(PlainDatagramSocketImpl.java:136)
    java.net.DatagramSocket.receive(DatagramSocket.java:712)
    com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:20)
    com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:4)
    com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:19)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    java.lang.Thread.run(Thread.java:619)
    Thread[main,5,main]
    java.lang.Object.wait(Native Method)
    com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:79)
    com.networkfleet.cacheserver.Launcher.main(Launcher.java:122)
    Thread[Signal Dispatcher,9,system]
    Thread[RMI TCP Accept-41006,5,system]
    java.net.PlainSocketImpl.socketAccept(Native Method)
    java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
    java.net.ServerSocket.implAccept(ServerSocket.java:453)
    java.net.ServerSocket.accept(ServerSocket.java:421)
    sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:369)
    sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:341)
    java.lang.Thread.run(Thread.java:619)
    ThreadCluster
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onWait(Grid.CDB:9)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[TcpRingListener,6,Cluster]
    java.net.PlainSocketImpl.socketAccept(Native Method)
    java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
    java.net.ServerSocket.implAccept(ServerSocket.java:453)
    java.net.ServerSocket.accept(ServerSocket.java:421)
    com.tangosol.coherence.component.net.socket.TcpSocketAccepter.accept(TcpSocketAccepter.CDB:18)
    com.tangosol.coherence.component.util.daemon.TcpRingListener.acceptConnection(TcpRingListener.CDB:10)
    com.tangosol.coherence.component.util.daemon.TcpRingListener.onNotify(TcpRingListener.CDB:9)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    java.lang.Thread.run(Thread.java:619)
    Thread[PacketPublisher,6,Cluster]
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
    com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketPublisher.onWait(PacketPublisher.CDB:2)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[RMI TCP Accept-0,5,system]
    java.net.PlainSocketImpl.socketAccept(Native Method)
    java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
    java.net.ServerSocket.implAccept(ServerSocket.java:453)
    java.net.ServerSocket.accept(ServerSocket.java:421)
    sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:34)
    sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:369)
    sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:341)
    java.lang.Thread.run(Thread.java:619)
    Thread[PacketListenerN,8,Cluster]
    java.net.PlainDatagramSocketImpl.receive0(Native Method)
    java.net.PlainDatagramSocketImpl.receive(PlainDatagramSocketImpl.java:136)
    java.net.DatagramSocket.receive(DatagramSocket.java:712)
    com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:20)
    com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:4)
    com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:19)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    java.lang.Thread.run(Thread.java:619)
    Thread[Invocation:Management,5,Cluster]
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onWait(Grid.CDB:9)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[DistributedCache:PofDistributedCache,5,Cluster]
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onWait(Grid.CDB:9)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[Invocation:Management:EventDispatcher,5,Cluster]
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
    com.tangosol.coherence.component.util.daemon.queueProcessor.Service$EventDispatcher.onWait(Service.CDB:7)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[Termination Thread,5,Cluster]
    java.lang.Thread.dumpThreads(Native Method)
    java.lang.Thread.getAllStackTraces(Thread.java:1487)
    sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    java.lang.reflect.Method.invoke(Method.java:597)
    com.tangosol.net.GuardSupport.logStackTraces(GuardSupport.java:791)
    com.tangosol.coherence.component.net.Cluster.onServiceFailed(Cluster.CDB:5)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$Guard.terminate(Grid.CDB:17)
    com.tangosol.net.GuardSupport$2.run(GuardSupport.java:652)
    java.lang.Thread.run(Thread.java:619)
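    On Q1 (monitoring node health): the thread dump above already shows JMX remote threads (RMI TCP Accept-41006), so one low-effort option is to lean on Coherence's standard JMX management and alert on it. A minimal launch sketch; the flags are standard Coherence 3.5 / JDK options, with the port and main class taken from this thread, so treat it as illustrative:

    java -Dcom.tangosol.coherence.management=all \
         -Dcom.tangosol.coherence.management.remote=true \
         -Dcom.sun.management.jmxremote.port=41006 \
         -Dcom.sun.management.jmxremote.authenticate=false \
         -Dcom.sun.management.jmxremote.ssl=false \
         -cp coherence.jar com.tangosol.net.DefaultCacheServer

    # Point JConsole (or a scripted JMX poller) at port 41006 and watch the
    # Coherence:type=Node,* and Coherence:type=Service,* MBeans; alerting on a
    # node that stops reporting would have caught the Jan-22 failure promptly.

    Given the eventual OutOfMemoryError, heap usage on each node is worth watching too, not just cluster membership.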
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[RMI TCP Accept-0,5,system]
    java.net.PlainSocketImpl.socketAccept(Native Method)
    java.net.PlainSocketImpl.accept(PlainSocketImpl.java:390)
    java.net.ServerSocket.implAccept(ServerSocket.java:453)
    java.net.ServerSocket.accept(ServerSocket.java:421)
    sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:34)
    sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:369)
    sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:341)
    java.lang.Thread.run(Thread.java:619)
    Thread[PacketListenerN,8,Cluster]
    java.net.PlainDatagramSocketImpl.receive0(Native Method)
    java.net.PlainDatagramSocketImpl.receive(PlainDatagramSocketImpl.java:136)
    java.net.DatagramSocket.receive(DatagramSocket.java:712)
    com.tangosol.coherence.component.net.socket.UdpSocket.receive(UdpSocket.CDB:20)
    com.tangosol.coherence.component.net.UdpPacket.receive(UdpPacket.CDB:4)
    com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketListener.onNotify(PacketListener.CDB:19)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    java.lang.Thread.run(Thread.java:619)
    Thread[Invocation:Management,5,Cluster]
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onWait(Grid.CDB:9)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[DistributedCache:PofDistributedCache,5,Cluster]
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onWait(Grid.CDB:9)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[Invocation:Management:EventDispatcher,5,Cluster]
    java.lang.Object.wait(Native Method)
    com.tangosol.coherence.component.util.Daemon.onWait(Daemon.CDB:18)
    com.tangosol.coherence.component.util.daemon.queueProcessor.Service$EventDispatcher.onWait(Service.CDB:7)
    com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:39)
    java.lang.Thread.run(Thread.java:619)
    Thread[Termination Thread,5,Cluster]
    java.lang.Thread.dumpThreads(Native Method)
    java.lang.Thread.getAllStackTraces(Thread.java:1487)
    sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    java.lang.reflect.Method.invoke(Method.java:597)
    com.tangosol.net.GuardSupport.logStackTraces(GuardSupport.java:791)
    com.tangosol.coherence.component.net.Cluster.onServiceFailed(Cluster.CDB:5)
    com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$Guard.terminate(Grid.CDB:17)
    com.tangosol.net.GuardSupport$2.run(GuardSupport.java:652)
    java.lang.Thread.run(Thread.java:619)
    2011-01-22 01:19:06,738 Coherence Logger@9216774 3.5.2/463 INFO 2011-01-22 01:19:06.738/9910757.904 Oracle Coherence EE 3.5.2/463 <Info> (thread=main, member=33): Restarting Service: DistributedCache
    2011-01-22 01:19:06,738 Coherence Logger@9216774 3.5.2/463 ERROR 2011-01-22 01:19:06.738/9910757.904 Oracle Coherence EE 3.5.2/463 <Error> (thread=main, member=33): Failed to restart services: java.lang.IllegalStateException: Failed to unregister: DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=16, BackupPartitions=16}

    Hi
    It seems the problem in this case is the call to clear(), which tries to load every entry stored in the overflow scheme so it can emit the corresponding cache events to listeners. That likely requires far more memory than the available Java heap, hence the OutOfMemoryError.
    Our recommendation in this case is to call destroy() instead, since that bypasses the event firing.
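    For illustration, here is a minimal sketch of that workaround (the cache name and class name are hypothetical, not from the original post):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class OverflowCacheReset {
        public static void main(String[] args) {
            // Hypothetical name for a cache backed by an overflow scheme.
            NamedCache cache = CacheFactory.getCache("example-overflow-cache");

            // clear() deserializes every overflow entry so it can fire
            // entryDeleted events to registered listeners, which can exhaust
            // the heap on a large cache.
            // destroy() releases the cache cluster-wide without firing
            // per-entry events, avoiding the OutOfMemoryError shown above.
            cache.destroy();

            CacheFactory.shutdown();
        }
    }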
    /Charlie
