Query Saving takes long time and giving error
Hi Gurus,
I am creating a query that has a lot of calculations (CKFs & RKFs).
When I try to save this query, it takes a long time and then fails with an error like RFC_ERROR_SYSTEM_FAILURE, "Query Designer must be restarted, further work not possible".
Please give me the solution for this.
Thanks,
RChowdary
Hi Chowdary,
Check the following note: 316470.
https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=316470
The note details are:
Symptom
There are no authorizations to change roles. Consequently, the system displays no roles when you save workbooks in the BEx Analyzer. In the BEx browser, you cannot move or change workbooks, documents, folders and so on.
Other terms
BW 2.0B, 2.0A, 20A, 20B, frontend, error 172, Business Explorer,
RFC_ERROR_SYSTEM_FAILURE, NOT_AUTHORIZED, S_USER_TCD, RAISE_EXCEPTION,
LPRGN_STRUCTUREU04, SAPLPRGN_STRUCTURE, PRGN_STRU_SAVE_NODES
Reason and Prerequisites
The authorizations below are not assigned to the user.
Solution
Assign authorization for roles
To assign authorizations for a role, execute the following steps:
1. Start Transaction Role maintenance (PFCG)
2. Select a role
3. Choose the "Change" switch
4. Choose tab title "Authorizations"
5. Choose the "Change authorization data" switch
6. Choose "+ Manually" switch
7. Enter "S_USER_AGR" as "Authorization object"
8. Expand "Basis: Administration"/"Authorization: Role check"
9. From "Activity" select "Create or generate" and others like "Display" or "Change"
10. Under "Role Name", enter all roles that are supposed to be shown or changed. Enter "*" for all roles.
11. You can re-enter authorization object "S_USER_AGR" for other activities.
Assign authorization for transactions
If a user is granted the authorization for changing a role, he/she should also be granted the authorization for all transactions contained in the role. Add these transaction codes to authorization object S_USER_TCD.
1. Start the role maintenance transaction (PFCG).
2. Select a role.
3. Click on "Change".
4. Choose the "Authorizations" tab.
5. Click on "Change authorization data".
6. Click on "+ manually".
7. Specify "S_USER_TCD" as "Authorization object".
8. Expand "Basis - Administration"/"Authorizations: Transactions in Roles".
9. Under "Transaction", choose at least "RRMX" (for BW reports), "SAP_BW_TEMPLATE" (for BW Web Templates), "SAP_BW_QUERY" (for BW Queries) and/or "SAP_BW_CRYSTAL" (for Crystal Reports), or "*". Values with "SAP_BW_..." are not transactions; they are special node types (see transaction code NODE_TYPE_DEFINITION).
Using the SAP System Trace (Transaction ST01), you can identify the transaction that causes NOT_AUTHORIZED.
Prevent user assignment
A user who has the authorization to change roles is not only able to change the menu but also to assign users. If you want to prevent the latter, the user must lose the authorizations for the transactions User Maintenance (SU01) and Role Maintenance (PFCG).
Note
Refer to Note 197601, which provides information on the different display of BEx Browser, BEx Analyzer and Easy Access menu.
Please refer to Note 373979 about authorizations to save workbooks.
Check transaction ST22 for more details on the Query Designer failure, or check the query log file.
With Regards,
Ravi Kanth.
Edited by: Ravi kanth on Apr 9, 2009 6:02 PM
Similar Messages
-
HT1351 for syncing it takes long time and not completed what to do???
Debbie:
deborahfromwindsor wrote:
he advises restarting by inserting the OSX disc and pressing down the C button to reboot from there then selecting disk utility, hard disk and repair.... Does he mean me to hold down the C key on the alpha keyboard or the ctrl key?
Should I just ask for my money back??? If it is a simple repair do I just literally push the disc in, push the power button and hold down the C button?
That's where I would begin, too, with
Repair Disk
Insert Installer disk and Restart, holding down the "C" key until grey Apple appears.
Go to Installer menu (Panther and earlier) or Utilities menu (Tiger) and launch Disk Utility.
Select your HDD (manufacturer ID) in the left panel.
Select First Aid in the Main panel.
(Check S.M.A.R.T. Status of HDD at the bottom of right panel, and report if it says anything but Verified)
Click Repair Disk on the bottom right.
If DU reports disk does not need repairs quit DU and restart.
If DU reports errors Repair again and again until DU reports disk is repaired.
If DU reports errors it cannot repair you will need to use a utility like TechTool Pro, Drive Genius or DiskWarrior
First we need to determine whether the issue you are experiencing with the computer is software or hardware based. Once we have gotten things sorted out there should be time enough to make your decision about keeping or returning it.
cornelius -
Problem Export to Excel it takes long time and Takes more space.
Hello All,
when we export to Excel in portal(reports), it takes long time and it takes more space.
how to overcome this problem please any one knows provide the proper solution for this issues
Regards,
Ch.
Hi Chetans,
I have had the same problem, and I had to create an OSS message to SAP in order to solve this issue; until now I don't have an answer. They had me change a lot of configuration in Java and BW, without luck. To tell you more, when we try to export to Excel the Java instance restarts automatically.
But we have this problem when we try to export a huge quantity of data, so I found a note which describes the limitations of exporting to Excel. Pay special attention to the Performance section.
Note 1178857
https://service.sap.com/sap/support/notes/1178857
I recommend that you create a message to SAP. If you find a solution, please let me know.
Regards, Federico -
Analyze a Query which takes longer time in Production server with ST03 only
Hi,
I want to analyze a query which takes a long time in the production server, with the ST03 t-code only.
Please provide detailed steps on how to perform this with ST03.
ST03 - Expert mode - then I need to know the steps after this. I have checked many threads, so please don't send me the links.
Please write the steps in detail.
<REMOVED BY MODERATOR>
Regards,
Sameer
Edited by: Alvaro Tejada Galindo on Jun 12, 2008 12:14 PM
Then please close the thread.
Greetings,
Blag. -
Query Prediction takes long time - After upgrade DB 9i to 10g
Hi all, Thanks for all your help.
we've got an issue in Discoverer; we are using Discoverer 10g (10.1.2.2) with APPS, and recently we upgraded the Oracle Database from 9i to 10g.
After the database upgrade, when we try to run reports in Discoverer Plus, query prediction takes much longer than it used to (double/triple); the query prediction alone takes a long time, and only then does the query actually run.
Has anyone seen this kind of issue before? Could you share your ideas/thoughts, so that I can ask the DBA or sysadmin to change any settings on the Discoverer server side?
Thanks in advance
skat
Hi skat
Did you also upgrade your Discoverer from 9i to 10g or did you always have 10g?
If you weren't always on 10g, take a look inside the EUL5_QPP_STATS table by running SELECT COUNT(*) FROM EUL5_QPP_STATS on both the old and new systems
I suspect you may well find that there are far more records in the old system than the new one. What this table stores is the statistics for the queries that have been run before. Using those statistics is how Discoverer can estimate how long queries will take to run. If you have few statistics then for some time Discoverer will not know how long previous queries will take. Also, the statistics table used by 9i is incompatible with the one used by 10g so you can't just copy them over, just in case you were thinking about it.
Personally, unless you absolutely rely on it, I would turn the query predictor off. You do this by editing your PREF.TXT (located on the middle tier server at $ORACLE_HOME\Discoverer\util) and changing the value of QPPEnable to 0. After you have done this you need to run the Applypreferences script located in the same folder and then stop and start your Discoverer service. From that point on, queries will no longer try to predict how long they will take; they will just start running.
There is something else to check. Please run a query and look at the SQL. Do you by chance see a database hint called NOREWRITE? If you do, then this will also cause poor performance. Should you see this, let me know and I will tell you how to override it.
If you have always been on 10g and you have only upgraded your database it could be that you have not generated your database statistics for the tables that Discoverer is using. You will need to speak with your DBA to see about having the statistics generated. Without statistics, the query predictor will be very, very slow.
Best wishes
Michael -
MDX report rendering takes long time and showing Conflict Message
Hi All,
This is my MDX Query
with member
[Measures].[Rent] as
IIF(IsEmpty([Measures].[Budget]),
NULL, [Measures].[Rent])
select {[Measures].[Rent]}
on columns,
[Property].[Address].[All].children *
DESCENDANTS([Account].[Account Hierarchy].[Account Group].[Expenditures],
[Account].[Account Tree].[Account]) *
[Property].[Property].[All].children
on rows
from
[Master]
When I comment out the [Property] dimension member, I am able to get the result, but I need the Property dimension in the MDX.
Can anyone give some idea ?
Thanks in advance
Hi Jarugulalaks,
According to your description, it takes a long time to render the report when using the [Property] dimension, right?
In this case, the issue can be caused by there being too many members under this dimension. In your query, you used the CrossJoin function to join multiple dimensions, which might cause the performance issue. If you cross-join medium-sized or large-sized sets (e.g., sets that contain more than 100 items each), you can end up with a result set that contains many thousands of items, enough to seriously impair performance. You can use the NonEmptyCrossjoin function instead of the Crossjoin function. For details, please see:
http://sqlmag.com/data-access/cross-join-performance
http://msdn.microsoft.com/en-us/library/ms144797.aspx
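The blow-up the reply warns about is simply that a cross-join's cardinality is the product of the set sizes. A minimal sketch of the arithmetic (a hypothetical helper, not Analysis Services code):

```java
public class CrossJoinSize {
    // Cardinality of a cross-join is the product of the input set sizes.
    public static long crossJoinCardinality(long... setSizes) {
        long total = 1;
        for (long size : setSizes) {
            total *= size;
        }
        return total;
    }

    public static void main(String[] args) {
        // Three "medium-sized" sets of 100 members each, as in the reply:
        // addresses x accounts x properties
        System.out.println(crossJoinCardinality(100, 100, 100)); // 1000000
    }
}
```

Even three modest sets of 100 members each yield a million tuples, which is why restricting to non-empty combinations (NonEmptyCrossjoin) matters.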
Besides, the total time to generate a reporting server report can be divided into 3 elements: time to retrieve the data (TimeDataRetrieval), time to process the report (TimeProcessing), and time to render the report (TimeRendering). For details, please refer to the link below to see Charlie's reply.
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/a0962b54-6fc2-4111-b8f1-a3722a65fa05/how-to-improve-performance-of-report?forum=sqlanalysisservices#a0962b54-6fc2-4111-b8f1-a3722a65fa05
Regards,
Charlie Liao
TechNet Community Support -
Validate map in FDM takes long time and finally expires
Hi,
I have an issue with FDM Mapping.
When I click the "Validate" option in the workflow, the FDM processing takes a long time (more than an hour) and finally expires.
But when I open a new web interface, I see a gold fish on Validate, which means the validate mapping has been done.
As we are in a production environment, can someone quickly clarify why this happens?
Thanks,
Siva
Hello Kelly,
Something you said concerns me: "Because we automated data loads for the large file, this issue is extremely low on the priority list."
In this type of scenario we notice customers/partners implementing FDM as an 'ETL' utility. FDM is not an ETL utility and should not be treated/used as such. In the event that you are doing this, Support has no control of the application, as it is a misuse of what FDM is for.
Files that are being pushed into FDM should always be broken down into the smallest components (locations) and used as a 'front end' user interface, not a server-side/IT application. If you meet this criterion for FDM, then there is not much Support can do.
If you do not think you are misusing FDM, I would highly suggest you open/create a Support SR. Consultants/partners are functional/design based, and not necessarily trained as Support Engineers; therefore they might not have the skills required to make such determinations.
Thank you,
Oracle Global Customer Support -
Installing patch 5217019 takes long time and does not end
Hi,
I am trying to install patch 5217109 on my TEST instance. While installing the patch, 8 workers are selected by default. 7 of the workers are in the status "completed", but 1 worker has been waiting for a long time and does not end.
What can my problem be?
Start time for file is: Mon Aug 10 2009 11:45:01
sqlplus -s APPS/***** @/oracle/TEST/testappl/ad/11.5.0/admin/sql/adpcpcmp.pls APPLSYS &pw_fnd APPS &pw_apps &systempwd 8 1 NONE FALSE
Arguments are:
AOL_schema = APPLSYS, AOL_password = *****,
Schema_to_compile = APPS, Schema_to_compile_pw = *****,
SYSTEM_password = *****, Total_workers = 8, Logical_worker_num = 1
Object_type_to_not_compile = NONE
Use_stored_dependencies = FALSE
Connected.
Checking for previously-invalid objects which are now valid...
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.03
Commit complete.
Elapsed: 00:00:00.00
Deleting any errors from previous run of this logical worker
0 rows deleted.
Elapsed: 00:00:00.00
Commit complete.
Elapsed: 00:00:00.00
Compiling invalid objects... -
Query execution takes long time
Hi All,
I have one critical problem in my production system.
I have three sales-related queries in the production system, and when I try to execute them in the BEx Analyzer (in Microsoft Excel) it takes too much time and at last gives an error: "Time Limit Exceeded".
Actually we have created these three queries on one InfoSet, and that InfoSet contains three DSOs and two master data objects.
Please give me the proper solution and help me to solve this production problem.
Dear James,
First, give some filter conditions on the query and try to restrict it to a smaller volume of data; from the message it is evident that you may be trying to fetch a large volume of data. So please execute the query once in RSRT and try to find the solution there; you can get all the statistics regarding the query. If you still can't find it, please let me know the message you are getting in RSRT, and then we can give a viable solution.
I hope you are aware of all the options regarding RSRT.
Assign points if it helps.
Thanks & Regards,
Ashok. -
Un-Registering takes long time and fails!!!
I am trying to unregister a schema that is already registered in XML DB. I am using JDeveloper to unregister it, and it really takes a long time and eventually fails.
What is going on? What is broken?
XML DB is flaky and unreliable, right?
First make sure that all connections that have used the XML Schema are disconnected. Schema deletion cannot start until all sessions that are using the schema have been closed, as it needs to get an exclusive lock on the SGA entries related to the XML Schema.
If there are a large number of rows in the table(s) associated with the XML Schema, truncate the table before dropping the XML Schema. If there are a large number of XDB repository resources associated with the table, truncate the table and then delete the resources with DBMS_XDB.DELETERESOURCE() mode 2 or 4 to ignore errors and avoid dangling REF issues.
To monitor the progress of deleteSchema itself, connect as SYS and execute select count(*) from all_objects where owner = 'XXXX'. XXXX should be the name of the database schema that owned the XML schema. See if the number of objects owned by that user is decreasing; if it is, have patience.
Application Builder takes long time and Popup messages during Build
Hi All,
My application is fairly big, with over 300 VIs. During the build process I receive pop-up messages saying that GMath library files etc. have been modified and asking whether I would like to save the files. When I click the OK button it starts saving these library files, and when it has saved almost half of them the Application Builder crashes and causes LabVIEW to close. As the build process takes more than 2 hours and the end result is always an unsuccessful build due to the Application Builder crash, I am losing my time.
Regards,
Pavan
Solved!
Go to Solution.
Hi,
I recently upgraded from LV 7.1 to 2009, both Professional versions. My program consists of several hundred VIs. Building an .exe in LV 7.1 was a snap and took maybe 30-50 seconds (not including an installer). Building the same thing in LV 2009 (build script converted to project) takes nearly 30 minutes, not including creating an installer; most of the time is spent 'processing' and then saving VIs, which LV 7 did not appear to do. I've tried the Ctrl+Shift+Run suggested by JB but this does not help. I've also applied the 2009f patch. It still takes a fair amount of memory, 0.5 GB, but my Core 2 Duo PC has 2 GB and there is plenty of available RAM. Any suggestions/details on the differences and something I could do to cut down on the build time?
thanks,
Dan -
Table Import Takes long time and still running
Hi All,
My DB Version: 10.2.0
OS: Windows Server 2003
I am trying to import one table. I have the export dump file, which I took using expdp previously when I loaded that table on the same host,
using the command below:
expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
After that I zipped the dump and moved it to an external USB drive. Now I need that table again, so I copied the dump back and unzipped it.
Command i am using to do the import is :
impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log
But the import is still running, without even showing any count of rows to be imported.
I had already created the tablespace in which the table was previously, before dropping it,
but when I check the space of that tablespace, it is not being consumed either.
One error I got previously while performing this task is:
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "CDR"."SYS_IMPORT_TABLE_03" successfully loaded/unloaded
Starting "CDR"."SYS_IMPORT_TABLE_03": cdr/********@tsiindia directory=TEST_DIR dumpfile=CAT_IN_DATA_042012.DMP tables=CAT_IN_DATA_042012 logfile=impdpCAT_IN_DATA_042012.log
ORA-39065: unexpected master process exception in RECEIVE
ORA-39078: unable to dequeue message for agent MCP from queue "KUPC$C_1_20120622102502"
Job "CDR"."SYS_IMPORT_TABLE_03" stopped due to fatal error at 12:10
I did some googling on the same problem,
and I checked streams_pool_size; it showed zero, so I set it to 48M, and after that:
SQL> show parameter streams_pool_size;
NAME TYPE VALUE
streams_pool_size big integer 48M
But it still takes time.
Any help?
1) Check the session:
SQL> select username,sid,serial#,status,event,seconds_in_wait,wait_time,state,module from v$session where username='CDR'
2 ;
USERNAME SID SERIAL# STATUS EVENT SECONDS_IN_WAIT WAIT_TIME STATE MODULE
CDR 73 1 ACTIVE wait for unread message on broadcast channel 3 0 WAITING
CDR 75 1 ACTIVE wait for unread message on broadcast channel 10 0 WAITING
CDR 77 1 ACTIVE wait for unread message on broadcast channel 10 0 WAITING
CDR 81 313 ACTIVE wait for unread message on broadcast channel 530 0 WAITING impdp.exe
CDR 87 70 ACTIVE enq: SS - contention 1581 0 WAITING toad.exe
CDR 90 1575 ACTIVE wait for unread message on broadcast channel 3 0 WAITING
CDR 92 1686 ACTIVE enq: SS - contention 619 0 WAITING
CDR 99 5 ACTIVE PX Deq Credit: send blkd 0 0 WAITING TOAD 9.1.0.62
CDR 103 3 ACTIVE direct path read 0 0 WAITING TOAD 9.1.0.62
CDR 105 6 ACTIVE direct path read 0 0 WAITING TOAD 9.1.0.62
CDR 107 6 ACTIVE PX Deq Credit: send blkd 0 0 WAITING TOAD 9.1.0.62
USERNAME SID SERIAL# STATUS EVENT SECONDS_IN_WAIT WAIT_TIME STATE MODULE
CDR 108 16 ACTIVE PX Deq Credit: send blkd 1 0 WAITING TOAD 9.1.0.62
CDR 109 40 ACTIVE PX Deq Credit: send blkd 1 0 WAITING TOAD 9.1.0.62
CDR 110 6 ACTIVE enq: TX - row lock contention 1 0 WAITING TOAD 9.1.0.62
CDR 111 21 ACTIVE direct path read 0 0 WAITING TOAD 9.1.0.62
CDR 112 27 ACTIVE PX Deq Credit: send blkd 1 0 WAITING TOAD 9.1.0.62
CDR 113 8 ACTIVE log buffer space 0 0 WAITING TOAD 9.1.0.62
CDR 117 4496 ACTIVE db file scattered read 0 0 WAITING TOAD 9.1.0.62
CDR 119 9 ACTIVE PX Deq Credit: send blkd 0 0 WAITING TOAD 9.1.0.62
CDR 120 27 ACTIVE PX Deq Credit: send blkd 0 0 WAITING TOAD 9.1.0.62
CDR 123 1 ACTIVE sort segment request 22349 0 WAITING
CDR 129 22 ACTIVE PX Deq Credit: send blkd 0 0 WAITING TOAD 9.1.0.62
USERNAME SID SERIAL# STATUS EVENT SECONDS_IN_WAIT WAIT_TIME STATE MODULE
CDR 131 14402 INACTIVE SQL*Net message from client 2580 0 WAITING TOAD 9.1.0.62
CDR 135 11 ACTIVE log buffer space 0 0 WAITING TOAD 9.1.0.62
CDR 136 6 ACTIVE direct path read 0 0 WAITING TOAD 9.1.0.62
CDR 138 234 ACTIVE sort segment request 19859 0 WAITING
CDR 162 782 INACTIVE SQL*Net message from client 550 0 WAITING TOAD 9.1.0.62
2) Check the import status:
SQL> select owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;
OWNER_NAME JOB_NAME OPERATION JOB_MODE STATE
CDR SYS_IMPORT_TABLE_01 IMPORT TABLE EXECUTING
3) in a new window
C:\Documents and Settings\vikas>impdp cdr/cdr123_awcc@tsiindia dumpfile=CAT_IN_DATA_042012.dmp tables=CAT_IN_DATA_042012 logfile=impdpCAT_IN_DATA_042012.log directory=test_dir parallel=4
Import: Release 10.1.0.2.0 - Production on Friday, 22 June, 2012 15:04
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Master table "CDR"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "CDR"."SYS_IMPORT_TABLE_01": cdr/********@tsiindia dumpfile=CAT_IN_DATA_042012.dmp tables=CAT_IN_DATA_042012 logfile=impdpCAT_IN_DATA_042012.log directory=test_dir parallel=4 -
Clear operation takes long time and gets interrupted in ThreadGate.doWait
Hi,
We are running Coherence 3.5.3 cluster with 16 storage enabled nodes and 24 storage disabled nodes. We have about hundred of partitioned caches with NearCaches (invalidation strategy = PRESENT, size limit for different caches 60-200K) and backup count = 1. For each cache we have a notion of cache A and cache B. Every day either A or B is active and is used by business logic while the other one is inactive, not used and empty. Daily we load fresh data to inactive caches, mark them as active (switch business logic to work with fresh data from those caches), and clear all yesterday's data in those caches which are not used today.
So at the end of the data load we execute a NamedCache.clear() operation for each inactive cache from a storage-disabled node. From time to time, 1-2 times a week, the clear operation fails on one of our 2 biggest caches (one has 1.2M entries and the other 350K entries). We did some investigation and found that the NamedCache.clear operation fires many events within the Coherence cluster to clear the NearCaches, so the operation is quite expensive. In some other similar posts there were suggestions not to use NamedCache.clear but rather NamedCache.destroy; however, that doesn't work for us in the current timelines. So we implemented simple retry logic that retries the NamedCache.clear() operation up to 4 times with increasing delay between the attempts (1 min, 2 min, 4 min).
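The retry wrapper described above might be sketched roughly like this (a hypothetical sketch; the `CacheOp` callback stands in for the real NamedCache.clear() call, and the doubling delay mirrors the 1/2/4-minute schedule):

```java
import java.util.concurrent.TimeUnit;

public class RetryingClear {
    // Stand-in for the cache operation, e.g. namedCache.clear()
    public interface CacheOp { void run() throws Exception; }

    // Retry up to maxAttempts times, doubling the delay after each failure.
    public static void clearWithRetry(CacheOp clearCache, int maxAttempts,
                                      long initialDelayMs) throws Exception {
        long delay = initialDelayMs;
        for (int attempt = 1; ; attempt++) {
            try {
                clearCache.run();                    // attempt the clear
                return;                              // success: stop retrying
            } catch (Exception e) {
                if (attempt >= maxAttempts) throw e; // give up after last try
                TimeUnit.MILLISECONDS.sleep(delay);  // back off before retry
                delay *= 2;                          // 1 min -> 2 min -> 4 min
            }
        }
    }

    public static void main(String[] args) throws Exception {
        clearWithRetry(() -> System.out.println("cleared"), 4, 60_000);
    }
}
```

As the rest of the thread shows, backoff alone cannot help here, because the underlying gate stays locked until the node is restarted.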
However that didn't help. 3 out of those attempts failed with the same error on one storage enabled node and 1 out of those 4 attempts failed on another storage enabled node. In all cases a Coherence worker thread that is executing ClearRequest on storage enabled node got interrupted by Guardian after it reached its timeout while it was waiting on lock object at ThreadGate.doWait. Please see below:
Log from the node that calls NamedCache.clear()
Portable(com.tangosol.util.WrapperException): (Wrapped: Failed request execution for ProductDistributedCache service on Member(Id=26, Timestamp=2012-09-04 13:37:43.922, Address=32.83.113.116:10000, MachineId=3149, Location=machine:mac305,process:2
7091,member:mac305.instance1, Role=storage) (Wrapped: ThreadGate{State=GATE_CLOSING, ActiveCount=3, CloseCount=0, ClosingT
hread= Thread[ProductDistributedCacheWorker:1,5,ProductDistributedCache]}) null) null
Caused by:
Portable(java.lang.InterruptedException) ( << comment: this came form storage enabled node >> )
at java.lang.Object.wait(Native Method)
at com.tangosol.util.ThreadGate.doWait(ThreadGate.java:489)
at com.tangosol.util.ThreadGate.close(ThreadGate.java:239)
at com.tangosol.util.SegmentedConcurrentMap.lock(SegmentedConcurrentMap.java:180)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onClearRequest(DistributedCache.CDB:27)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ClearRequest.run(DistributedCache.CDB:1)
at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:1)
at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:63)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
at java.lang.Thread.run(Thread.java:619)
Log from the that storage enabled node which threw an exception
Sat Sep 08 04:38:37 EDT 2012|**ERROR**| com.tangosol.coherence.component.util.logOutput.Log4j | 2012-09-08 04:38:37.720/31330
1.617 Oracle Coherence EE 3.5.3/465 <Error> (thread=DistributedCache:ProductDistributedCache, member=26): Attempting recovery
(due to soft timeout) of Guard{Daemon=ProductDistributedCacheWorker:1} |Client Details{sdpGrid:,ClientName: ClientInstanceN
ame: ,ClientThreadName: }| Logger@9259509 3.5.3/465
Sat Sep 08 04:38:37 EDT 2012|**WARN**| com.tangosol.coherence.component.util.logOutput.Log4j | 2012-09-08 04:38:37.720/313301
.617 Oracle Coherence EE 3.5.3/465 <Warning> (thread=Recovery Thread, member=26): A worker thread has been executing task: Message "ClearRequest"
FromMember=Member(Id=38, Timestamp=2012-09-07 10:12:27.402, Address=32.83.113.120:10000, MachineId=40810, Location=machine:
mac313,process:22837,member:mac313.instance1, Role=maintenance)
FromMessageId=5278229
Internal=false
MessagePartCount=1
PendingCount=0
MessageType=1
ToPollId=0
Poll=null
Packets
[000]=Directed{PacketType=0x0DDF00D5, ToId=26, FromId=38, Direction=Incoming, ReceivedMillis=04:36:49.718, ToMemberSet=nu
ll, ServiceId=6, MessageType=1, FromMessageId=5278229, ToMessageId=337177, MessagePartCount=1, MessagePartIndex=0, NackInProg
ress=false, ResendScheduled=none, Timeout=none, PendingResendSkips=0, DeliveryState=unsent, Body=0x000D551F0085B8DF9FAECE8001
0101010204084080C001C1F80000000000000010000000000000000000000000000000000000000000000000, Body.length=57}
Service=DistributedCache{Name=ProductDistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, B
ackupCount=1, AssignedPartitions=16, BackupPartitions=16}
ToMemberSet=MemberSet(Size=1, BitSetCount=2
Member(Id=26, Timestamp=2012-09-04 13:37:43.922, Address=32.83.113.116:10000, MachineId=3149, Location=machine:mac305,process:27091,member:mac305.instance1, Role=storage)
NotifySent=false
} for 108002ms and appears to be stuck; attempting to interrupt: ProductDistributedCacheWorker:1 |Client Details{sdpGrid:,C
lientName: ClientInstanceName: ,ClientThreadName: }| Logger@9259509 3.5.3/465
I am looking for your help. Please let me know if you see what is the reason for the issue and how to address it.
Thank you
Today we had that issue again and I have gathered some more information.
Everything was the same as I described in the previous posts in this thread: the first attempt to clear the cache failed and the next 3 retries also failed. All 4 times, 2 storage-enabled nodes had that "... A worker thread has been executing task: Message "ClearRequest" ..." error message and got interrupted by the Guardian.
However, after that I had some time to do further experiments. Our app has a cache management UI that allows clearing any cache. So I started repeatedly taking thread dumps on those 2 storage-enabled nodes which had failed to clear the cache, and executed the cache clear operation from that UI. One of the storage-enabled nodes successfully cleared its part, but the other still failed, with exactly the same error.
So, I have a thread dump which I took while cache clear operation was in progress. It shows that a thread which is processing that ClearRequest is stuck waiting in ThreadGate.close method:
at java.lang.Object.wait(Native Method)
at com.tangosol.util.ThreadGate.doWait(ThreadGate.java:489)
at com.tangosol.util.ThreadGate.close(ThreadGate.java:239)
at com.tangosol.util.SegmentedConcurrentMap.lock(SegmentedConcurrentMap.java:180)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onClearRequest(DistributedCache.CDB:27)
at
All subsequent attempts to clear the cache from the cache management UI failed until we restarted that storage-enabled node.
It looks like some thread left the ThreadGate in a locked state, and any further attempts to acquire the lock as part of a ClearRequest message fail. Maybe it is a known issue of Coherence 3.5.3?
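A general defensive pattern for this symptom (a thread blocked forever acquiring a lock that was never released) is to bound the wait with a timeout, so a stuck gate surfaces as a failure instead of a hang. This is only a sketch with java.util.concurrent, not Coherence's internal ThreadGate API:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedLockWait {
    private final ReentrantLock gate = new ReentrantLock();

    // Returns true if the action ran; false if the lock could not be
    // acquired within the timeout (instead of blocking indefinitely).
    public boolean runLocked(Runnable action, long timeout, TimeUnit unit)
            throws InterruptedException {
        if (!gate.tryLock(timeout, unit)) {
            return false;          // caller can log/alert rather than hang
        }
        try {
            action.run();
        } finally {
            gate.unlock();         // always release, even if the action fails
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedLockWait w = new BoundedLockWait();
        System.out.println(w.runLocked(() -> {}, 5, TimeUnit.SECONDS));
    }
}
```

The try/finally is the key point: a code path that acquires the gate but skips the release is exactly what leaves every later ClearRequest waiting forever.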
Thanks -
Starting iTunes Match on ATV2 takes a long time and doesn't stop?
I started iTunes Match on the ATV2. It says that it will take only a few minutes depending on the size of the media library, but it has now been running for 2 days. What can I do?
Do you mean iTunes Match has been running for several days on the initial match of your library? If so, depending on the size of your library this is not unusual.
If atv2 refers to Apple TV 2 then something is very wrong since this will start within minutes. -
Why update query takes long time ?
Hello everyone;
My update query takes a long time. In emp (self testing) there are just 2 records.
When I issue an update query, it takes a long time;
SQL> select * from emp;
EID ENAME EQUAL ESALARY ECITY EPERK ECONTACT_NO
2 rose mca 22000 calacutta 9999999999
1 sona msc 17280 pune 9999999999
Elapsed: 00:00:00.05
SQL> update emp set esalary=12000 where eid='1';
update emp set esalary=12000 where eid='1'
* ERROR at line 1:
ORA-01013: user requested cancel of current operation
Elapsed: 00:01:11.72
SQL> update emp set esalary=15000;
update emp set esalary=15000
* ERROR at line 1:
ORA-01013: user requested cancel of current operation
Elapsed: 00:02:22.27
Hi BCV;
Thanks for your reply, but it doesn't provide output; please see this.
SQL> update emp set esalary=15000;
........... Lock already occurred.
>> trying to trace >>
SQL> select HOLDING_SESSION from dba_blockers;
HOLDING_SESSION
144
SQL> select sid , username, event from v$session where username='HR';
SID USERNAME EVENT
144 HR SQL*Net message from client
151 HR enq: TX - row lock contention
159 HR SQL*Net message from client
>> It doesn't provide clear output about the transaction lock >>
SQL> SELECT username, v$lock.SID, TRUNC (id1 / POWER (2, 16)) rbs,
2 BITAND (id1, TO_NUMBER ('ffff', 'xxxx')) + 0 slot, id2 seq, lmode,
3 request
4 FROM v$lock, v$session
5 WHERE v$lock.TYPE = 'TX'
6 AND v$lock.SID = v$session.SID
7 AND v$session.username = USER;
no rows selected
SQL> select MACHINE from v$session where sid = :sid;
SP2-0552: Bind variable "SID" not declared.