Validate mapping in FDM takes a long time and finally expires

Hi,
I have an issue with FDM mapping.
When I click the "Validate" option in the workflow, FDM processing takes a long time (more than an hour) and finally expires.
But when I open a new web interface, I see a gold fish on Validate, which means the mapping validation has completed.
As we are in a production environment, can someone quickly clarify why this happens?
Thanks,
Siva

Hello Kelly,
Something you said concerns me: "Because we automated data loads for the large file, this issue is extremely low on the priority list."
In this type of scenario we see customers/partners implementing FDM as an ETL utility. FDM is not an ETL utility and should not be treated or used as such. If you are doing this, Support has no control over the application, because it is a misuse of what FDM is for.
Files pushed into FDM should always be broken down into the smallest components (locations), and FDM should be used as a front-end user interface, not a server-side/IT application. If this describes your use of FDM, there is not much Support can do.
If you do not think you are misusing FDM, I would strongly suggest you open a Support SR. Consultants/partners are functional/design focused and not necessarily trained as Support Engineers, so they might not have the skills required to make such determinations.
Thank you,
Oracle Global Customer Support

Similar Messages

  • Creation of Transfer Order takes a long time and finally throws spool error

    Hi,
    I create a Sales Order of type OR and complete the delivery successfully. But when I proceed to create the Transfer Order, it takes a painfully long time and, after 30 minutes or so, throws the error message 'Error in spool call' with a blank page. When I click the save icon, the transfer order is created.
    But the creation takes a very long time. I checked SP01 and there are no spool requests.
    Any suggestions will be rewarded.
    Regards,
    Mangesh

    Dear,
    Kindly contact your Basis team; they will be able to help.
    After the error occurs, take a screenshot of SU53 (it shows the last failed authorization check) and give it to the Basis team.
    That may help.
    Regards,
    Sandip

  • Problem: Export to Excel takes a long time and uses more space

    Hello All,
    When we export reports to Excel in the portal, it takes a long time and the resulting file takes up a lot of space.
    How can we overcome this problem? If anyone knows the proper solution for this issue, please share it.
    Regards,
    Ch.

    Hi Chetans,
    I have had the same problem, and I had to create an OSS message to SAP to resolve it; so far I don't have an answer. They had me change a lot of configuration in Java and BW, without luck. What's more, when we try to export to Excel, the Java instance restarts automatically.
    We only have this problem when we try to export a huge quantity of data, and I found a note that describes the limitations of exporting to Excel. Pay special attention to the Performance section.
    Note 1178857
    https://service.sap.com/sap/support/notes/1178857
    I recommend that you create a message to SAP. If you find a solution, please let me know.
    Regards, Federico

  • HT1351: syncing takes a long time and never completes; what should I do?

    Syncing takes a long time and never completes. What should I do?

    Debbie:
    deborahfromwindsor wrote:
    he advises restarting by inserting the OS X disc and holding down the C button to reboot from there, then selecting Disk Utility, hard disk and Repair.... Does he mean for me to hold down the C key on the alphabetic keyboard, or the Ctrl key?
    Should I just ask for my money back??? If it is a simple repair, do I literally just push the disc in, push the power button and hold down the C button?
    That's where I would begin, too, with
    Repair Disk:
    Insert the Installer disc and restart, holding down the "C" key until the grey Apple appears.
    Go to the Installer menu (Panther and earlier) or the Utilities menu (Tiger) and launch Disk Utility.
    Select your HDD (manufacturer ID) in the left panel.
    Select First Aid in the main panel.
    (Check the S.M.A.R.T. status of the HDD at the bottom of the right panel, and report if it says anything but Verified.)
    Click Repair Disk on the bottom right.
    If DU reports the disk does not need repairs, quit DU and restart.
    If DU reports errors, run Repair again and again until DU reports the disk is repaired.
    If DU reports errors it cannot repair, you will need to use a utility like TechTool Pro, Drive Genius or DiskWarrior.
    First we need to determine whether the issue you are experiencing is software or hardware based. Once we have gotten things sorted out, there should be time enough to make your decision about keeping or returning it.
    cornelius

  • Query saving takes a long time and gives an error

    Hi Gurus,
    I am creating a query that has a lot of calculations (CKFs & RKFs).
    When I try to save this query, it takes a long time and then gives an error like RFC_ERROR_SYSTEM_FAILURE, "Query Designer must be restarted, further work not possible".
    Please give me a solution for this.
    Thanks,
    RChowdary

    Hi Chowdary,
    Check the following note: 316470.
    https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=316470
    The note details are:
    Symptom
    There are no authorizations to change roles. Consequently, the system displays no roles when you save workbooks in the BEx Analyzer. In the BEx browser, you cannot move or change workbooks, documents, folders and so on.
    Other terms
    BW 2.0B, 2.0A, 20A, 20B, frontend, error 172, Business Explorer,
    RFC_ERROR_SYSTEM_FAILURE, NOT_AUTHORIZED, S_USER_TCD, RAISE_EXCEPTION,
    LPRGN_STRUCTUREU04, SAPLPRGN_STRUCTURE, PRGN_STRU_SAVE_NODES
    Reason and Prerequisites
    The authorizations below are not assigned to the user.
    Solution
    Assign authorization for roles
    To assign authorizations for a role, execute the following steps:
    1. Start Transaction Role maintenance (PFCG)
    2. Select a role
    3. Choose the "Change" switch
    4. Choose tab title "Authorizations"
    5. Choose the "Change authorization data" switch
    6. Choose "+ Manually" switch
    7. Enter "S_USER_AGR" as "Authorization object"
    8. Expand "Basis: Administration" / "Authorization: Role check".
    9. From "Activity" select "Create or generate" and others like "Display" or "Change"
    10. Under "Role Name", enter all roles that are supposed to be shown or changed. Enter "*" for all roles.
    11. You can re-enter authorization object "S_USER_AGR" for other activities.
    Assign authorization for transactions
    If a user is granted the authorization for changing a role, he/she should also be granted the authorization for all transactions contained in the role. Add these transaction codes to authorization object S_USER_TCD.
    1. Start the role maintenance transaction (PFCG).
    2. Select a role.
    3. Click on "Change".
    4. Choose the "Authorizations" tab.
    5. Click on "Change authorization data".
    6. Click on "+ manually".
    7. Specify "S_USER_TCD" as "Authorization object".
    8. Expand "Basis - Administration"/"Authorizations: Transactions in Roles".
    9. Under "Transaction", choose at least "RRMX" (for BW reports), "SAP_BW_TEMPLATE" (for BW Web Templates), "SAP_BW_QUERY" (for BW Queries) and/or "SAP_BW_CRYSTAL" (for Crystal reports), or "*". Values with "SAP_BW_..." are not transactions, they are special node types (see transaction code NODE_TYPE_DEFINITION).
    Using the SAP System Trace (Transaction ST01), you can identify the transaction that causes NOT_AUTHORIZED.
    Prevent user assignment
    If a user has the authorization for changing roles, he or she is not only able to change the menu but also to assign users. If you want to prevent the latter, the user must lose the authorization for the transactions User Maintenance (SU01) and Role Maintenance (PFCG).
    Note
    Refer to Note 197601, which provides information on the different display of the BEx Browser, BEx Analyzer and Easy Access menu.
    Please refer to Note 373979 about authorizations to save workbooks.
    Check transaction ST22 for more details on the Query Designer failure, or check the query log file.
    With Regards,
    Ravi Kanth.
    Edited by: Ravi kanth on Apr 9, 2009 6:02 PM

  • MDX report rendering takes a long time and shows a conflict message

    Hi All,
    This is my MDX Query
    with member
      [Measures].[Rent] as
        IIF(IsEmpty([Measures].[Budget]), NULL, [Measures].[Rent])
    select
      {[Measures].[Rent]} on columns,
      [Property].[Address].[All].children *
      DESCENDANTS([Account].[Account Hierarchy].[Account Group].[Expenditures],
                  [Account].[Account Tree].[Account]) *
      [Property].[Property].[All].children on rows
    from
      [Master]
    When I comment out the [Property] dimension, I am able to get the result, but I need the Property dimension in the MDX.
    Can anyone give me some ideas?
    Thanks in advance

    Hi Jarugulalaks,
    According to your description, it takes a long time to render the report when the [Property] dimension is used, right?
    In this case, the issue can be caused by there being too many members under this dimension. In your query you used the CrossJoin operator to join multiple dimensions, which might cause the performance issue. If you cross-join medium-sized or large-sized sets (e.g., sets that contain more than 100 items each), you can end up with a result set that contains many thousands of items, enough to seriously impair performance. You can use the NonEmptyCrossjoin function instead of CrossJoin. For more details, please see:
    http://sqlmag.com/data-access/cross-join-performance
    http://msdn.microsoft.com/en-us/library/ms144797.aspx
    Besides, the total time to generate a report server report can be divided into three elements: time to retrieve the data (TimeDataRetrieval), time to process the report (TimeProcessing), and time to render the report (TimeRendering). For details, please refer to the link below and see Charlie's reply.
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/a0962b54-6fc2-4111-b8f1-a3722a65fa05/how-to-improve-performance-of-report?forum=sqlanalysisservices#a0962b54-6fc2-4111-b8f1-a3722a65fa05
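    To see which of those three elements dominates, you can query the report server's execution log. This is only a minimal sketch; it assumes a default SSRS installation where the catalog database is named ReportServer and exposes the standard ExecutionLog3 view (durations are in milliseconds):
    -- Last 20 report executions with their timing breakdown.
    SELECT TOP 20
           ItemPath, TimeDataRetrieval, TimeProcessing, TimeRendering, TimeEnd
    FROM   ReportServer.dbo.ExecutionLog3
    ORDER  BY TimeEnd DESC;
    A large TimeDataRetrieval points at the MDX/data source side, while a large TimeRendering points at the chosen export format.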
    Regards,
    Charlie Liao
    TechNet Community Support

  • Installing patch 5217019 takes a long time and does not end

    Hi,
    I am trying to install patch 5217109 on my TEST instance. While installing the patch, the 8 workers are selected by default. 7 of the workers are in "completed" status, but 1 worker has been waiting for a long time and does not finish.
    What could the problem be?

    Start time for file is: Mon Aug 10 2009 11:45:01
    sqlplus -s APPS/***** @/oracle/TEST/testappl/ad/11.5.0/admin/sql/adpcpcmp.pls APPLSYS &pw_fnd APPS &pw_apps &systempwd 8 1 NONE FALSE
    Arguments are:
    AOL_schema = APPLSYS, AOL_password = *****,
    Schema_to_compile = APPS, Schema_to_compile_pw = *****,
    SYSTEM_password = *****, Total_workers = 8, Logical_worker_num = 1
    Object_type_to_not_compile = NONE
    Use_stored_dependencies = FALSE
    Connected.
    Checking for previously-invalid objects which are now valid...
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.03
    Commit complete.
    Elapsed: 00:00:00.00
    Deleting any errors from previous run of this logical worker
    0 rows deleted.
    Elapsed: 00:00:00.00
    Commit complete.
    Elapsed: 00:00:00.00
    Compiling invalid objects...

  • Un-registering takes a long time and fails!!!

    I am trying to unregister a schema that is already registered in XML DB. I am using JDeveloper to unregister it, and it takes a really long time and eventually fails.
    What is going on? What is broken?
    XML DB is flaky and unreliable, right?

    First make sure that all connections that have used the XML schema are disconnected. Schema deletion cannot start until all sessions using the schema have been closed, as it needs to get an exclusive lock on the SGA entries related to the XML schema.
    If there are a large number of rows in the table(s) associated with the XML schema, truncate the table before dropping the schema. If there are a large number of XDB repository resources associated with the table, truncate the table and then delete the resources with DBMS_XDB.DELETERESOURCE(), using mode 2 or 4 to ignore errors and avoid dangling-REF issues.
    To monitor the progress of deleteSchema itself, connect as SYS and execute select count(*) from all_objects where owner = 'XXXX', where XXXX is the name of the database schema that owns the XML schema. See whether the number of objects owned by that user is decreasing; if it is, have patience.
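    For reference, a minimal PL/SQL sketch of that sequence. The table name, repository path and schema URL below are hypothetical, and the specific DELETE_* constants are my assumption about which "force" modes are meant above:
    -- Empty the default table first so the drop has less to do (hypothetical table name).
    TRUNCATE TABLE purchase_order_tab;
    BEGIN
      -- Remove a repository resource, forcing deletion even if dangling REFs remain.
      DBMS_XDB.deleteResource('/home/SCOTT/po101.xml', DBMS_XDB.DELETE_RECURSIVE_FORCE);
      -- Unregister the XML schema; the CASCADE_FORCE option ignores errors during cleanup.
      DBMS_XMLSCHEMA.deleteSchema(
        schemaurl     => 'http://example.com/po.xsd',
        delete_option => DBMS_XMLSCHEMA.DELETE_CASCADE_FORCE);
    END;
    /
    -- From a SYS session, watch the owner's object count fall while deleteSchema runs.
    SELECT COUNT(*) FROM all_objects WHERE owner = 'SCOTT';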

  • Table import takes a long time and is still running

    Hi All,
    My DB version: 10.2.0
    OS: Windows Server 2003
    I am trying to import a table for which I have an export dump file that I previously took with expdp, when the table was loaded on the same host, using the command below:
    expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log
    After that I zipped the dump and moved it to an external USB drive; now that I need the table again, I copied the file back and unzipped the dump.
    The command I am using to do the import is:
    impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log
    But the import is still running and is not showing any count of rows imported.
    I already created the tablespace the table was in before it was dropped, but when I check the tablespace, no space is being consumed.
    One error I got previously while performing this task is:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Master table "CDR"."SYS_IMPORT_TABLE_03" successfully loaded/unloaded
    Starting "CDR"."SYS_IMPORT_TABLE_03":  cdr/********@tsiindia directory=TEST_DIR dumpfile=CAT_IN_DATA_042012.DMP tables=CAT_IN_DATA_042012 logfile=impdpCAT_IN_DATA_042012.log
    ORA-39065: unexpected master process exception in RECEIVE
    ORA-39078: unable to dequeue message for agent MCP from queue "KUPC$C_1_20120622102502"
    Job "CDR"."SYS_IMPORT_TABLE_03" stopped due to fatal error at 12:10
    I did some googling on the same problem and checked streams_pool_size; it showed zero, so I set it to 48M. After that:
    SQL> show parameter streams_pool_size;
    NAME                                 TYPE        VALUE
    streams_pool_size                    big integer 48M
    But the import still takes a long time.
    Any help?

    1) Check the sessions:
    SQL> select username,sid,serial#,status,event,seconds_in_wait,wait_time,state,module from v$session where username='CDR'
      2  ;
    USERNAME                              SID    SERIAL# STATUS   EVENT                                                    SECONDS_IN_WAIT  WAIT_TIME STATE               MODULE
    CDR                                    73          1 ACTIVE   wait for unread message on broadcast channel                                   3          0 WAITING
    CDR                                    75          1 ACTIVE   wait for unread message on broadcast channel                                  10          0 WAITING
    CDR                                    77          1 ACTIVE   wait for unread message on broadcast channel                                  10          0 WAITING
    CDR                                    81        313 ACTIVE   wait for unread message on broadcast channel                                 530          0 WAITING             impdp.exe
    CDR                                    87         70 ACTIVE   enq: SS - contention                                                1581          0 WAITING             toad.exe
    CDR                                    90       1575 ACTIVE   wait for unread message on broadcast channel                                   3          0 WAITING
    CDR                                    92       1686 ACTIVE   enq: SS - contention                                                 619          0 WAITING
    CDR                                    99          5 ACTIVE   PX Deq Credit: send blkd                                               0          0 WAITING             TOAD 9.1.0.62
    CDR                                   103          3 ACTIVE   direct path read                                                       0          0 WAITING             TOAD 9.1.0.62
    CDR                                   105          6 ACTIVE   direct path read                                                       0          0 WAITING             TOAD 9.1.0.62
    CDR                                   107          6 ACTIVE   PX Deq Credit: send blkd                                               0          0 WAITING             TOAD 9.1.0.62
    USERNAME                              SID    SERIAL# STATUS   EVENT                                                    SECONDS_IN_WAIT  WAIT_TIME STATE               MODULE
    CDR                                   108         16 ACTIVE   PX Deq Credit: send blkd                                               1          0 WAITING             TOAD 9.1.0.62
    CDR                                   109         40 ACTIVE   PX Deq Credit: send blkd                                               1          0 WAITING             TOAD 9.1.0.62
    CDR                                   110          6 ACTIVE   enq: TX - row lock contention                                          1          0 WAITING             TOAD 9.1.0.62
    CDR                                   111         21 ACTIVE   direct path read                                                       0          0 WAITING             TOAD 9.1.0.62
    CDR                                   112         27 ACTIVE   PX Deq Credit: send blkd                                               1          0 WAITING             TOAD 9.1.0.62
    CDR                                   113          8 ACTIVE   log buffer space                                                       0          0 WAITING             TOAD 9.1.0.62
    CDR                                   117       4496 ACTIVE   db file scattered read                                                 0          0 WAITING             TOAD 9.1.0.62
    CDR                                   119          9 ACTIVE   PX Deq Credit: send blkd                                               0          0 WAITING             TOAD 9.1.0.62
    CDR                                   120         27 ACTIVE   PX Deq Credit: send blkd                                               0          0 WAITING             TOAD 9.1.0.62
    CDR                                   123          1 ACTIVE   sort segment request                                               22349          0 WAITING
    CDR                                   129         22 ACTIVE   PX Deq Credit: send blkd                                               0          0 WAITING             TOAD 9.1.0.62
    USERNAME                              SID    SERIAL# STATUS   EVENT                                                    SECONDS_IN_WAIT  WAIT_TIME STATE               MODULE
    CDR                                   131      14402 INACTIVE SQL*Net message from client                                         2580          0 WAITING             TOAD 9.1.0.62
    CDR                                   135         11 ACTIVE   log buffer space                                                       0          0 WAITING             TOAD 9.1.0.62
    CDR                                   136          6 ACTIVE   direct path read                                                       0          0 WAITING             TOAD 9.1.0.62
    CDR                                   138        234 ACTIVE   sort segment request                                               19859          0 WAITING
    CDR                                   162        782 INACTIVE SQL*Net message from client                                          550          0 WAITING             TOAD 9.1.0.62
    2) Check the import status:
    SQL> select owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;
    OWNER_NAME                     JOB_NAME                       OPERATION                      JOB_MODE                       STATE
    CDR                            SYS_IMPORT_TABLE_01            IMPORT                         TABLE                  EXECUTING
    3) In a new window:
    C:\Documents and Settings\vikas>impdp cdr/cdr123_awcc@tsiindia dumpfile=CAT_IN_DATA_042012.dmp tables=CAT_IN_DATA_042012  logfile=impdpCAT_IN_DATA_042012.log directory=test_dir parallel=4
    Import: Release 10.1.0.2.0 - Production on Friday, 22 June, 2012 15:04
    Copyright (c) 2003, Oracle.  All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Master table "CDR"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
    Starting "CDR"."SYS_IMPORT_TABLE_01":  cdr/********@tsiindia dumpfile=CAT_IN_DATA_042012.dmp tables=CAT_IN_DATA_042012 logfile=impdpCAT_IN_DATA_042012.log directory=test_dir parallel=4
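    If the job reports EXECUTING but nothing seems to move, a rough check like the one below (the OPNAME filter matches the SYS_IMPORT_TABLE jobs listed by dba_datapump_jobs above) shows how far the worker has actually progressed:
    -- Data Pump progress; SOFAR/TOTALWORK are expressed in the units given by UNITS.
    SELECT sid, serial#, opname, sofar, totalwork, units, time_remaining
    FROM   v$session_longops
    WHERE  opname LIKE 'SYS_IMPORT_TABLE%'
    AND    sofar <> totalwork;
    If SOFAR keeps increasing, the import is simply slow rather than hung.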

  • Application Builder takes a long time and shows popup messages during build

    Hi All,
    My application is fairly large, with over 300 VIs. During the build process I receive pop-up messages saying that GMath library files etc. have been modified and asking whether I would like to save them. When I click OK, it starts saving these library files, and about halfway through saving them the Application Builder crashes and causes LabVIEW to close. As the build process takes more than 2 hours and the end result is always an unsuccessful build because of the Application Builder crash, I am losing a lot of time.
    Regards,
    Pavan

    Hi,
    I recently upgraded from LV 7.1 to 2009, both Professional versions. My program consists of several hundred VIs. Building an .exe in LV 7.1 was a snap and took maybe 30-50 seconds (not including an installer). Building the same thing in LV 2009 (build script converted to a project) takes nearly 30 minutes, not including creating an installer... most of the time is spent 'processing' and then saving VIs, which LV 7 did not appear to do. I've tried the Ctrl+Shift+Run suggested by JB, but this does not help. I've also applied the 2009f patch. It still takes a fair amount of memory, 0.5 GB, but my Core 2 Duo PC has 2 GB and there is plenty of available RAM. Any suggestions/details on the differences, and anything I could do to cut down on the build time?
    thanks,
    Dan

  • Clear operation takes a long time and gets interrupted in ThreadGate.doWait

    Hi,
    We are running a Coherence 3.5.3 cluster with 16 storage-enabled nodes and 24 storage-disabled nodes. We have about a hundred partitioned caches with NearCaches (invalidation strategy = PRESENT, size limits of 60-200K entries for different caches) and backup count = 1. For each cache we have a notion of cache A and cache B. Every day either A or B is active and used by the business logic while the other is inactive, unused and empty. Daily we load fresh data into the inactive caches, mark them as active (switch the business logic to work with the fresh data from those caches), and clear all of yesterday's data from the caches that are not used today.
    So at the end of the data load we execute the NamedCache.clear() operation for each inactive cache from a storage-disabled node. From time to time, 1-2 times a week, the clear operation fails on one of our 2 biggest caches (one has 1.2M entries and the other has 350K entries). We did some investigation and found that the NamedCache.clear operation fires many events within the Coherence cluster to clear the NearCaches, so the operation is quite expensive. In some other similar posts there were suggestions not to use NamedCache.clear but rather NamedCache.destroy; however, that doesn't work for us in the current timelines. So we implemented simple retry logic that retries the NamedCache.clear() operation up to 4 times with an increasing delay between attempts (1 min, 2 min, 4 min).
    However, that didn't help. 3 of those attempts failed with the same error on one storage-enabled node, and 1 of the 4 attempts failed on another storage-enabled node. In all cases a Coherence worker thread executing the ClearRequest on a storage-enabled node was interrupted by the Guardian after it reached its timeout while waiting on a lock object in ThreadGate.doWait. Please see below:
    Log from the node that calls NamedCache.clear()
    Portable(com.tangosol.util.WrapperException): (Wrapped: Failed request execution for ProductDistributedCache service on Member(Id=26, Timestamp=2012-09-04 13:37:43.922, Address=32.83.113.116:10000, MachineId=3149, Location=machine:mac305,process:2
    7091,member:mac305.instance1, Role=storage) (Wrapped: ThreadGate{State=GATE_CLOSING, ActiveCount=3, CloseCount=0, ClosingT
    hread= Thread[ProductDistributedCacheWorker:1,5,ProductDistributedCache]}) null) null
    Caused by:
    Portable(java.lang.InterruptedException) ( << comment: this came form storage enabled node >> )
    at java.lang.Object.wait(Native Method)
    at com.tangosol.util.ThreadGate.doWait(ThreadGate.java:489)
    at com.tangosol.util.ThreadGate.close(ThreadGate.java:239)
    at com.tangosol.util.SegmentedConcurrentMap.lock(SegmentedConcurrentMap.java:180)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onClearRequest(DistributedCache.CDB:27)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ClearRequest.run(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:1)
    at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
    at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:63)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:619)
    Log from the storage-enabled node that threw the exception
    Sat Sep 08 04:38:37 EDT 2012|**ERROR**| com.tangosol.coherence.component.util.logOutput.Log4j | 2012-09-08 04:38:37.720/31330
    1.617 Oracle Coherence EE 3.5.3/465 <Error> (thread=DistributedCache:ProductDistributedCache, member=26): Attempting recovery
    (due to soft timeout) of Guard{Daemon=ProductDistributedCacheWorker:1} |Client Details{sdpGrid:,ClientName:  ClientInstanceN
    ame: ,ClientThreadName:  }| Logger@9259509 3.5.3/465
    Sat Sep 08 04:38:37 EDT 2012|**WARN**| com.tangosol.coherence.component.util.logOutput.Log4j | 2012-09-08 04:38:37.720/313301
    .617 Oracle Coherence EE 3.5.3/465 <Warning> (thread=Recovery Thread, member=26): A worker thread has been executing task: Message "ClearRequest"
    FromMember=Member(Id=38, Timestamp=2012-09-07 10:12:27.402, Address=32.83.113.120:10000, MachineId=40810, Location=machine:
    mac313,process:22837,member:mac313.instance1, Role=maintenance)
    FromMessageId=5278229
    Internal=false
    MessagePartCount=1
    PendingCount=0
    MessageType=1
    ToPollId=0
    Poll=null
    Packets
    [000]=Directed{PacketType=0x0DDF00D5, ToId=26, FromId=38, Direction=Incoming, ReceivedMillis=04:36:49.718, ToMemberSet=nu
    ll, ServiceId=6, MessageType=1, FromMessageId=5278229, ToMessageId=337177, MessagePartCount=1, MessagePartIndex=0, NackInProg
    ress=false, ResendScheduled=none, Timeout=none, PendingResendSkips=0, DeliveryState=unsent, Body=0x000D551F0085B8DF9FAECE8001
    0101010204084080C001C1F80000000000000010000000000000000000000000000000000000000000000000, Body.length=57}
    Service=DistributedCache{Name=ProductDistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, B
    ackupCount=1, AssignedPartitions=16, BackupPartitions=16}
    ToMemberSet=MemberSet(Size=1, BitSetCount=2
    Member(Id=26, Timestamp=2012-09-04 13:37:43.922, Address=32.83.113.116:10000, MachineId=3149, Location=machine:mac305,process:27091,member:mac305.instance1, Role=storage)
    NotifySent=false
    } for 108002ms and appears to be stuck; attempting to interrupt: ProductDistributedCacheWorker:1 |Client Details{sdpGrid:,C
    lientName: ClientInstanceName: ,ClientThreadName: }| Logger@9259509 3.5.3/465
    I am looking for your help. Please let me know if you can see what the reason for the issue is and how to address it.
    Thank you

    Today we had the issue again, and I have gathered some more information.
    Everything was the same as I described in the previous posts in this thread: the first attempt to clear a cache failed and the next 3 retries also failed. All 4 times, 2 storage-enabled nodes had the "... A worker thread has been executing task: Message "ClearRequest" ..." error message and were interrupted by the Guardian.
    However, after that I had some time to do further experiments. Our app has a cache management UI that allows clearing any cache. So I started repeatedly taking thread dumps on the 2 storage-enabled nodes that had failed to clear the cache and executed the cache clear operation from that UI. One of the storage-enabled nodes successfully cleared its part, but the other still failed, with exactly the same error.
    So, I have a thread dump taken while the cache clear operation was in progress. It shows that the thread processing the ClearRequest is stuck waiting in the ThreadGate.close method:
    at java.lang.Object.wait(Native Method)
    at com.tangosol.util.ThreadGate.doWait(ThreadGate.java:489)
    at com.tangosol.util.ThreadGate.close(ThreadGate.java:239)
    at com.tangosol.util.SegmentedConcurrentMap.lock(SegmentedConcurrentMap.java:180)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onClearRequest(DistributedCache.CDB:27)
    at
    All subsequent attempts to clear the cache from the cache management UI failed until we restarted that storage-enabled node.
    It looks like some thread left the ThreadGate in a locked state, and any further attempts to acquire the lock as part of a ClearRequest message fail. Maybe it is a known issue in Coherence 3.5.3?
    Thanks

  • Starting iTunes Match on ATV2 takes a long time and doesn't stop?

    I started iTunes Match on ATV2. It says it will take only a few minutes, depending on the size of the media library, but it has now been running for 2 days. What can I do?

    Do you mean iTunes Match has been running for several days on the initial match of your library? If so, depending on the size of your library, this is not unusual.
    If "atv2" refers to the Apple TV 2, then something is very wrong, since this should start within minutes.

  • My laptop seemed to be frozen and I could not shut down from the menu. I held the power button for a long time and finally it shut off. When I started it again it turned on to show a black screen with an icon of a file folder with a question mark inside

    Any idea what a blank screen with a flashing folder with a question mark inside of it means???

    Ambinder55,
    it means that the forced shutdown corrupted some portion of the internal disk’s filesystem. Which version of OS X is installed on your MacBook Pro?

  • Report takes a long time for a few records

    Hi friends,
    I am facing a problem with my web-based ERP application, which is developed in .NET. When I open a report from the application, a file named "rpt conmgr cache" gets created in my temp folder.
    Because of this, even for a few records the report takes too much time and opens very slowly. It happens in only some of the reports; the other reports work fine and do not create any file in the temp folder. Can you tell me what this file is and what the solution might be?
    Thanks
    Mithun

    Hi Sabhajit,
    I have already checked the SQL query; it takes less than a second.
    If there are any other steps you want me to check, please let me know.
    Thanks, Mithun

  • INSERT INTO TABLE using SELECT takes a long time

    Hello Friends,
    --- Oracle version 10.2.0.4.0
    --- I am trying to insert around 2.5 lakh (250,000) records into a table using INSERT ... SELECT. The insert takes a long time and seems to be hung.
    --- When I run the SELECT on its own, the query fetches the rows in 10 seconds.
    --- Any clue why it is taking so much time?

    vishalrs wrote:
    Hello Friends,
    hello
    --- Oracle version 10.2.0.4.0
    alright
    --- I am trying to insert around 2.5 lakh records into a table using INSERT ... SELECT. The insert takes a long time and seems to be hung.
    I don't know how much a lakh is, but it sounds like a lot...
    --- When I run the SELECT on its own, the query fetches the rows in 10 seconds.
    How did you test this? Did you fetch the last record, or just the first couple of hundred?
    --- Any clue why it is taking so much time?
    Without seeing anything, it's impossible to tell the reason. Search the forum for "When your query takes too long".
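    One quick check before concluding the statement is hung is to look at what the inserting session is actually waiting on. This is only an illustration; the SID below is a placeholder:
    -- A lock wait or I/O wait here would explain the apparent "hang".
    SELECT sid, status, event, wait_class, seconds_in_wait, blocking_session
    FROM   v$session
    WHERE  sid = 123;  -- replace with the SID of the session running the INSERT ... SELECT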
