Takes a long time to get data

Hi,
I have some ABAP code that loads data into an internal table. If the code is
select * from xxxx into table itab where ...
then it works properly. However, I don't want to load the whole table into the internal table; I want to load the data based on another internal table, so the code looks like
select * from xxx into table itab for all entries in itab2 where ...
I thought this code would execute faster than the previous one, but it is really slow, and I see "Sequential Read" in SM50.
Can anyone explain why?
Thanks
Victor

Make sure that the internal table used in the FOR ALL ENTRIES is sorted by the field on which you are joining.
if not itab2[] is initial.
  sort itab2 ascending by field1.
  select * from xxx into table itab
    for all entries in itab2
    where field1 = itab2-field1.
endif.
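Two further notes, offered as a rough sketch rather than a guaranteed fix: FOR ALL ENTRIES sends the driver table to the database in packages, so duplicate keys in itab2 only cause redundant reads, and if itab2 is empty the WHERE condition is dropped entirely and the whole table is read (hence the IS INITIAL check above). The "Sequential Read" state in SM50 usually just means the work process is waiting on the database; if the fields in the WHERE clause are not backed by an index, each packaged SELECT can end up scanning the table, so it is worth checking the actual access path with an SQL trace (ST05). Assuming field1 is the join field, the full pattern would look like:
if not itab2[] is initial.
  " Remove duplicate keys so each value is sent to the database only once
  sort itab2 by field1.
  delete adjacent duplicates from itab2 comparing field1.
  select * from xxx into table itab
    for all entries in itab2
    where field1 = itab2-field1.
endif.
The number of entries sent per packaged SELECT is controlled by the database interface blocking factor (profile parameters such as rsdb/max_blocking_factor), which can also be worth reviewing for very large driver tables.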
Regards,
Rich Heilman
Message was edited by: Rich Heilman

Similar Messages

  • What could be the reason for Crawl process to take long time or get in to a hung state.

    Hi All,
What could be the reason for the crawl process taking a long time or getting into a hung state? Is it also related to a resource crunch on the DB server? Does this lead to index file corruption?
What is the process to follow when the index file is corrupted, and how do we come to know about it?
    Thanks in Advance.

    "The crawl time depends on what you are crawling -- the number of items, the size of the items, the location of the items. If you have a lot of content that needs to be crawled, it will take much time".
    http://social.msdn.microsoft.com/Forums/sharepoint/en-US/f4cad578-f3bc-4822-b660-47ad27ce094a/sharepoint-2007-crawl-taking-long-time-to-complete?forum=sharepointgeneralprevious
    "The only clean and recommended way to recover from an index corruption is to completely rebuild the index on all the servers in the farm."
    http://blogs.technet.com/b/victorbutuza/archive/2008/11/11/event-id-4138-an-index-corruption-was-detected-in-component-shadowmerge-in-catalog-portal-content.aspx
Whenever the search index file gets corrupted, the details are written to the event logs:
    http://technet.microsoft.com/en-us/library/ff468695%28v=office.14%29.aspx
    My Blog- http://www.sharepoint-journey.com|
    If a post answers your question, please click Mark As Answer on that post and Vote as Helpful

  • 10.4.8 Client takes long time to get to login window when bound to OD.

I am working on a system in a school. We have a dual-processor G5 Xserve with 4 GB of RAM, the RAID card, and three 500 GB drives.
Fresh install of 10.4.8 with all the updates.
RAID 5 split into 2 volumes, one for the server and one for data.
AFP service running.
Local DNS running.
Promoted to Open Directory master.
This is the test scenario I have:
There is a user called studenttest who belongs to a group called cccarstarmembers and is in a workgroup called student.
The studenttest user's home folder exists in a sharepoint called students that is set up on the data partition.
There is a sharepoint called cccarstar that holds data for some educational software we use. The owner is administrator with rw access, the group is cccarstarmembers with rw access, and others have no access.
The student workgroup only has a few changes, like dock location, just for testing purposes to verify that the workgroup is working properly.
When I bind a newly built 10.4.8 client with all the updates to the OD server, it intermittently takes a long time for the client to get to the login window when it is powered up. Sometimes it will get to the login window in 45 seconds and other times it will take 5 minutes. This is not consistent. If you unbind the client, then the computer behaves properly and consistently.
I have tried binding the client to the OD master using the fully qualified domain name and the IP address, with the same results.
The search path on the server is "dc=osx1,dc=erm,dc=sd,dc=bc,dc=ca" and on the client it auto-populates as "cn=config,dc=osx1,dc=erm,dc=sd,dc=bc,dc=ca".
I have changed the search path on the client to match the search path on the server with no success, as this is what used to work for us on Panther setups.
But this school has a Panther client I am working on at the same time, with the same applications installed and system preference settings, and when I bind it to the same OD master with the same search path that is displayed on the server, it works fine. All users work, all groups and workgroups work.
DNS appears to be working: lookup provides the correct forward and reverse info on the server, and if I use either the Panther or Tiger client and look up the server's fully qualified domain name and IP address, I get the correct answers back.
I had this problem before where Tiger gave me slow-to-login-screen problems but Panther wouldn't when bound, and Apple told me it was because I had AFP guest access disabled on the server. Enabling it resolved the issue about 6 months ago at another site, but this time when building the server I made sure it was on from the start even though it is off by default.
Any suggestions? I am pulling my hair out and am about 8 working hours from a deadline.

    I've seen this a lot.
    This Knowledge Base article refers to Active Directory but we've seen this fix login delays with OD-only environments too:
    http://docs.info.apple.com/article.html?artnum=303841
    Another one of the causes is when you have multiple network mounts and your AFP service has guest access disabled. The loginwindow is trying to authenticate to each share with the username given and it is failing when that user account is not authorised for that share.
    Another can be the LDAP timeout value(s). Try adjusting these in the LDAPv3 plug-in.
    Also make sure your network ports have portfast/faststart set on the Mac ports. Sometimes because of STP the port isn't initialised fast enough for the OS when it's ready to start LDAP'ing.
    Let me know if any of this helps.

  • Using Word Easy Table Under Report Generation takes long time to add data points to table and generate report

    Hi All,
We use the Report Generation Toolkit to generate the report in Word, and with the other APIs under it we get good reports.
But when there are more data points (> 100 on all channels) it takes a long time to write all the data, create the table in Word, and generate the report.
Any suggestions on how to make this happen in a few seconds?
    Please assist.

    Well, I just tried my suggestion.  I simulated a 24-channel data producer (I actually generated 25 numbers -- the first number was the row number, followed by 24 random numbers) and generated 100 of these for a total of 2500 double-precision values.  I then saved this table to Excel and closed the file.  I then opened Word (all using RGT), wrote a single text line "Text with Excel", inserted the previously-created "Excel Object", and saved and closed Word.
    First, it worked (sort of).  The Table in Word started on a new page, and was in a very tiny font (possibly trying to fit 25 columns on a page?  I didn't inspect it very carefully).  This is probably "too much data" to really try to write the whole table, unless you format it for, say, 3 significant figures.
    Now, timing.  I ran this four times, two duplicate sets, one with Excel and Word in "normal" mode, one in "minimized".  To my surprise, this didn't make a lot of difference (minimized was less than 10% faster).  Here are the approximate times:
         Generate the data -- about 1 millisecond.
         Write the Excel Report -- about 1.5 seconds
         Write the Word Report -- about 10.5 seconds
    Seems to me this is way faster than trying to do this directly in Word.
    Bob Schor

  • SSIS package takes longer time when inserting data into temp tables

Querying records from one server and inserting them into temp tables is taking a long time.
Are there any settings in the package which would enhance the performance?

Will a local temp table (#temp) enhance the performance?
If you're planning to use # tables in SSIS, make sure you read this:
    http://consultingblogs.emc.com/jamiethomson/archive/2006/11/19/SSIS_3A00_-Using-temporary-tables.aspx
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • DataBlAppend takes long time on registered data

    Greetings! I'm using DIAdem 2012 on a Win7/64-bit computer (16GB memory and solid-state hard drive).  I work with one tdms file at a time but that file can be up to 8GB so I bring it into the Data Portal via the Register Data option.  The tdms file contains about 40 channels and each channel has about 50M datapoints.  If it matters, the data type of each channel is U16 with appropriate scaling factors in the channel parameters.  I display one channel in View and my goal is to set the two cursors on either side of an "event" then copy that segment of data between the cursors to a new channel in another group.  Actually, there are about ten channels that I want to copy exactly the same segment out to ten new channels.  This is the standard technique for programmatically "copying-flagged-data-points", i.e. reading and using the X1,X2 cursor position.  I am using DataBlAppend to write these new channels (I have also tried DataBlCopy with identical results).  My VBS script works exactly as I desire.  The new channel group containing the segments will be written out as a tdms file using another script. 
    Copying out "small" segments takes a certain amount of time but copying larger segments takes an increasing amount of time, i.e. the increase is not linear.  I would like to do larger segments but I don't like waiting 20-30 minutes per segment.  The time culprit is the script line "Call DataBlAppend (CpyS, CurPosX1, CurPosX2-CurPosX1 +1, CpyT)" where CpyS and CpyT are strings containing the names of the source and target channels respectively (the empty target channels were previously created in the new group). 
    My question is, "is there a faster way to do this within DIAdem?"  The amount of data being written to the new group can range from 20-160MB but I need to be able to write up to 250MB.  TDMS files of this size can normally be loaded or written out quite quickly on this computer under normal circumstances, so what is slowing this process down?  Thanks!

    Greetings, Brad!! 
    I agree that DataBlCopy is fast when working "from channels loaded in the Data Portal" but the tdms file I am working with is only "registered" in the portal.  I do not know exactly why that makes a difference except that it must go out to the disk in order to read each channel.  The function DataBlCopy (or Append) is a black box to me so I was hoping for some insight as to why it is behaving like it is under these circumstances.  However, your suggestion to try the function DataFileLoadRed() may bear fruit!  I wrote up a little demo script to copy out a "large" segment from a 8GB file registered in the portal using DataFileLoadRed and it is much, much faster!  It was a little odd selecting "IntervalCount" as my method and the total number of intervals the same as the total number of data points between my begin and end points, and "eInterFirstValue" [in the interval] as the reduction method, but the results speak for themselves.  I will need to do some thorough checking to verify that I am getting exactly the data I want but DataFileLoadRed does look promising as an alternative.  Thanks!
    Chris

  • Query Takes Longer time as the Data Increases.

    Hi ,
We have the query below, which takes around 4 to 5 minutes to retrieve the data, and it gets slower as the data grows.
    DB Version=10.2.0.4
    OS=Solaris 10
    tst_trd_owner@MIFEX3> explain plan for select * from TIBEX_OrderBook as of scn 7785234991 where meid='ME4';
    Explained.
    tst_trd_owner@MIFEX3> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));
    PLAN_TABLE_OUTPUT
    Plan hash value: 3096779986
    | Id  | Operation                     | Name                     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                          |     1 |   303 |       |   609K  (1)| 01:46:38 |
    |*  1 |  HASH JOIN SEMI               |                          |     1 |   303 |   135M|   609K  (1)| 01:46:38 |
    |*  2 |   HASH JOIN                   |                          |   506K|   129M|       |   443K  (1)| 01:17:30 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| TIBEX_ORDERSTATUSENUM    |     1 |    14 |       |     2   (0)| 00:00:01 |
    |*  4 |     INDEX RANGE SCAN          | TIBEX_ORDERSTAT_ID_DESC  |     1 |       |       |     1   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | TIBEX_ORDER              |  3039K|   736M|       |   443K  (1)| 01:17:30 |
    |   6 |   VIEW                        | VW_NSO_1                 |  7931K|   264M|       |   159K  (1)| 00:27:53 |
    |   7 |    HASH GROUP BY              |                          |  7931K|   378M|   911M|   159K  (1)| 00:27:53 |
    |*  8 |     HASH JOIN RIGHT ANTI      |                          |  7931K|   378M|       | 77299   (1)| 00:13:32 |
    |*  9 |      VIEW                     | index$_join$_004         |     2 |    28 |       |     2  (50)| 00:00:01 |
    |* 10 |       HASH JOIN               |                          |       |       |       |            |          |
    |  11 |        INLIST ITERATOR        |                          |       |       |       |            |          |
    |* 12 |         INDEX RANGE SCAN      | TIBEX_ORDERSTAT_ID_DESC  |     2 |    28 |       |     2   (0)| 00:00:01 |
    |  13 |        INDEX FAST FULL SCAN   | XPKTIBEX_ORDERSTATUSENUM |     2 |    28 |       |     1   (0)| 00:00:01 |
    |  14 |      INDEX FAST FULL SCAN     | IX_ORDERBOOK             |    11M|   408M|       | 77245   (1)| 00:13:31 |
    Predicate Information (identified by operation id):
       1 - access("A"."MESSAGESEQUENCE"="$nso_col_1" AND "A"."ORDERID"="$nso_col_2")
       2 - access("A"."ORDERSTATUS"="ORDERSTATUS")
       4 - access("SHORTDESC"='ORD_OPEN')
       5 - filter("MEID"='ME4')
       8 - access("ORDERSTATUS"="ORDERSTATUS")
       9 - filter("SHORTDESC"='ORD_NOTFND' OR "SHORTDESC"='ORD_REJECT')
      10 - access(ROWID=ROWID)
      12 - access("SHORTDESC"='ORD_NOTFND' OR "SHORTDESC"='ORD_REJECT')
    33 rows selected.
The view query for TIBEX_OrderBook:
    SELECT  ORDERID, USERORDERID, ORDERSIDE, ORDERTYPE, ORDERSTATUS,
              BOARDID, TIMEINFORCE, INSTRUMENTID, REFERENCEID,
              PRICETYPE, PRICE, AVERAGEPRICE, QUANTITY, MINIMUMFILL,
              DISCLOSEDQTY, REMAINQTY, AON, PARTICIPANTID, ACCOUNTTYPE,
              ACCOUNTNO, CLEARINGAGENCY, 'OK' AS LASTINSTRESULT,
              LASTINSTMESSAGESEQUENCE, LASTEXECUTIONID, NOTE, TIMESTAMP,
              QTYFILLED, MEID, LASTINSTREJECTCODE, LASTEXECPRICE, LASTEXECQTY,
              LASTINSTTYPE, LASTEXECUTIONCOUNTERPARTY, VISIBLEQTY,
              STOPPRICE, LASTEXECCLEARINGAGENCY, LASTEXECACCOUNTNO,
              LASTEXECCPCLEARINGAGENCY, MESSAGESEQUENCE, LASTINSTUSERALIAS,
              BOOKTIMESTAMP, ParticipantIDMM, MarketState, PartnerExId,
              LastExecSettlementCycle, LastExecPostTradeVenueType,
              PriceLevelPosition, PrevReferenceID, EXPIRYTIMESTAMP, matchType,
              lastExecutionRole, a.MDEntryID, a.PegOffset, a.haltReason,
              a.LastInstFixSequence, A.COMPARISONPRICE, A.ENTEREDPRICETYPE
        FROM  tibex_Order A
    WHERE (A.MessageSequence, A.OrderID) IN (
            SELECT  max(B.MessageSequence), B.OrderID
              FROM  tibex_Order B
              WHERE orderStatus NOT IN (
                      SELECT orderStatus
                        FROM  tibex_orderStatusEnum
                        WHERE ShortDesc IN ('ORD_REJECT', 'ORD_NOTFND')
                    )
              GROUP BY B.OrderID
          )
      AND A.OrderStatus IN (
            SELECT OrderStatus
              FROM  tibex_orderStatusEnum
              WHERE ShortDesc IN ('ORD_OPEN')
          )
/
Any helpful suggestions?
    Regards
    NM

    Hi Centinul,
I tried your modified version of the query on the test machine. It used quite a lot of temp space (around 9 GB) and finally ran out of disk space.
On the test machine I have generated stats and executed the queries, but in production our stats will always be stale. The reason is that in the morning we have 3,000 records in Tibex_Order, and as the day progresses the data grows to about 20 million records by the end of the day; we then gather stats and truncate the transaction tables (Tibex_Order = 20 million records), so the next day our stats are stale, and if a user runs any query it takes ages to return. The example below is one such case.
    tst_trd_owner@MIFEX3>
    tst_trd_owner@MIFEX3> CREATE OR REPLACE VIEW TIBEX_ORDERBOOK_TEMP
      2  (ORDERID, USERORDERID, ORDERSIDE, ORDERTYPE, ORDERSTATUS,
      3   BOARDID, TIMEINFORCE, INSTRUMENTID, REFERENCEID, PRICETYPE,
      4   PRICE, AVERAGEPRICE, QUANTITY, MINIMUMFILL, DISCLOSEDQTY,
      5   REMAINQTY, AON, PARTICIPANTID, ACCOUNTTYPE, ACCOUNTNO,
      6   CLEARINGAGENCY, LASTINSTRESULT, LASTINSTMESSAGESEQUENCE, LASTEXECUTIONID, NOTE,
      7   TIMESTAMP, QTYFILLED, MEID, LASTINSTREJECTCODE, LASTEXECPRICE,
      8   LASTEXECQTY, LASTINSTTYPE, LASTEXECUTIONCOUNTERPARTY, VISIBLEQTY, STOPPRICE,
      9   LASTEXECCLEARINGAGENCY, LASTEXECACCOUNTNO, LASTEXECCPCLEARINGAGENCY, MESSAGESEQUENCE, LASTINSTUSERALIAS,
    10   BOOKTIMESTAMP, PARTICIPANTIDMM, MARKETSTATE, PARTNEREXID, LASTEXECSETTLEMENTCYCLE,
    11   LASTEXECPOSTTRADEVENUETYPE, PRICELEVELPOSITION, PREVREFERENCEID, EXPIRYTIMESTAMP, MATCHTYPE,
    12   LASTEXECUTIONROLE, MDENTRYID, PEGOFFSET, HALTREASON, LASTINSTFIXSEQUENCE,
    13   COMPARISONPRICE, ENTEREDPRICETYPE)
    14  AS
    15  SELECT orderid
    16       , MAX(userorderid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    17       , MAX(orderside) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    18       , MAX(ordertype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    19       , MAX(orderstatus) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    20       , MAX(boardid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    21       , MAX(timeinforce) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    22       , MAX(instrumentid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    23       , MAX(referenceid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    24       , MAX(pricetype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    25       , MAX(price) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    26       , MAX(averageprice) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    27       , MAX(quantity) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    28       , MAX(minimumfill) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    29       , MAX(disclosedqty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    30       , MAX(remainqty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    31       , MAX(aon) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    32       , MAX(participantid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    33       , MAX(accounttype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    34       , MAX(accountno) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    35       , MAX(clearingagency) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    36       , 'ok' as lastinstresult
    37       , MAX(lastinstmessagesequence) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    38       , MAX(lastexecutionid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    39       , MAX(note) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    40       , MAX(timestamp) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    41       , MAX(qtyfilled) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    42       , MAX(meid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    43       , MAX(lastinstrejectcode) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    44       , MAX(lastexecprice) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    45       , MAX(lastexecqty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    46       , MAX(lastinsttype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    47       , MAX(lastexecutioncounterparty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    48       , MAX(visibleqty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    49       , MAX(stopprice) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    50       , MAX(lastexecclearingagency) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    51       , MAX(lastexecaccountno) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    52       , MAX(lastexeccpclearingagency) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    53       , MAX(messagesequence) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    54       , MAX(lastinstuseralias) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    55       , MAX(booktimestamp) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    56       , MAX(participantidmm) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    57       , MAX(marketstate) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    58       , MAX(partnerexid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    59       , MAX(lastexecsettlementcycle) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    60       , MAX(lastexecposttradevenuetype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    61       , MAX(pricelevelposition) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    62       , MAX(prevreferenceid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    63       , MAX(expirytimestamp) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    64       , MAX(matchtype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    65       , MAX(lastexecutionrole) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    66       , MAX(mdentryid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    67       , MAX(pegoffset) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    68       , MAX(haltreason) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    69       , MAX(lastinstfixsequence) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    70       , MAX(comparisonprice) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    71       , MAX(enteredpricetype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    72  FROM   tibex_order
    73  WHERE  orderstatus IN (
    74                           SELECT orderstatus
    75                           FROM   tibex_orderstatusenum
    76                           WHERE  shortdesc IN ('ORD_OPEN')
    77                        )
    78  GROUP BY orderid
    79  /
    View created.
    tst_trd_owner@MIFEX3> SELECT /*+ gather_plan_statistics */    *   FROM   TIBEX_OrderBook_TEMP as of scn 7785234991 where meid='ME4';
    SELECT /*+ gather_plan_statistics */    *   FROM   TIBEX_OrderBook_TEMP as of scn 7785234991 where meid='ME4'
    ERROR at line 1:
    ORA-01114: IO error writing block to file %s (block # %s)
    ERROR:
ORA-03114: not connected to ORACLE
Any suggestion will be helpful.
    Regards
    NM

  • Clear operation takes long time and gets interrupted in ThreadGate.doWait

    Hi,
We are running a Coherence 3.5.3 cluster with 16 storage-enabled nodes and 24 storage-disabled nodes. We have about a hundred partitioned caches with NearCaches (invalidation strategy = PRESENT, size limits for the different caches 60-200K) and backup count = 1. For each cache we have a notion of cache A and cache B. Every day either A or B is active and is used by business logic while the other one is inactive, not used and empty. Daily we load fresh data into the inactive caches, mark them as active (switch business logic to work with fresh data from those caches), and clear all of yesterday's data in the caches which are not used today.
So at the end of the data load we execute a NamedCache.clear() operation for each inactive cache from a storage-disabled node. From time to time, 1-2 times a week, the clear operation fails on one of our 2 biggest caches (one has 1.2M entries and another one has 350K entries). We did some investigation and found that the NamedCache.clear operation fires many events within the Coherence cluster to clear the NearCaches, so that operation is quite expensive. In some other similar posts there were suggestions to not use NamedCache.clear but rather NamedCache.destroy; however, that doesn't work for us in the current timelines. So we implemented simple retry logic that retries the NamedCache.clear() operation up to 4 times with an increasing delay between the attempts (1 min, 2 min, 4 min).
However, that didn't help: 3 of those 4 attempts failed with the same error on one storage-enabled node, and 1 of the 4 attempts failed on another storage-enabled node. In all cases a Coherence worker thread executing the ClearRequest on a storage-enabled node got interrupted by the Guardian after it reached its timeout while waiting on a lock object in ThreadGate.doWait. Please see below:
    Log from the node that calls NamedCache.clear()
Portable(com.tangosol.util.WrapperException): (Wrapped: Failed request execution for ProductDistributedCache service on Member(Id=26, Timestamp=2012-09-04 13:37:43.922, Address=32.83.113.116:10000, MachineId=3149, Location=machine:mac305,process:27091,member:mac305.instance1, Role=storage) (Wrapped: ThreadGate{State=GATE_CLOSING, ActiveCount=3, CloseCount=0, ClosingThread= Thread[ProductDistributedCacheWorker:1,5,ProductDistributedCache]}) null) null
    Caused by:
    Portable(java.lang.InterruptedException) ( << comment: this came form storage enabled node >> )
    at java.lang.Object.wait(Native Method)
    at com.tangosol.util.ThreadGate.doWait(ThreadGate.java:489)
    at com.tangosol.util.ThreadGate.close(ThreadGate.java:239)
    at com.tangosol.util.SegmentedConcurrentMap.lock(SegmentedConcurrentMap.java:180)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onClearRequest(DistributedCache.CDB:27)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ClearRequest.run(DistributedCache.CDB:1)
    at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:1)
    at com.tangosol.coherence.component.util.DaemonPool$WrapperTask.run(DaemonPool.CDB:32)
    at com.tangosol.coherence.component.util.DaemonPool$Daemon.onNotify(DaemonPool.CDB:63)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:619)
Log from the storage-enabled node that threw the exception:
Sat Sep 08 04:38:37 EDT 2012|**ERROR**| com.tangosol.coherence.component.util.logOutput.Log4j | 2012-09-08 04:38:37.720/313301.617 Oracle Coherence EE 3.5.3/465 <Error> (thread=DistributedCache:ProductDistributedCache, member=26): Attempting recovery (due to soft timeout) of Guard{Daemon=ProductDistributedCacheWorker:1} |Client Details{sdpGrid:,ClientName: ClientInstanceName: ,ClientThreadName: }| Logger@9259509 3.5.3/465
Sat Sep 08 04:38:37 EDT 2012|**WARN**| com.tangosol.coherence.component.util.logOutput.Log4j | 2012-09-08 04:38:37.720/313301.617 Oracle Coherence EE 3.5.3/465 <Warning> (thread=Recovery Thread, member=26): A worker thread has been executing task: Message "ClearRequest"
FromMember=Member(Id=38, Timestamp=2012-09-07 10:12:27.402, Address=32.83.113.120:10000, MachineId=40810, Location=machine:mac313,process:22837,member:mac313.instance1, Role=maintenance)
    FromMessageId=5278229
    Internal=false
    MessagePartCount=1
    PendingCount=0
    MessageType=1
    ToPollId=0
    Poll=null
    Packets
[000]=Directed{PacketType=0x0DDF00D5, ToId=26, FromId=38, Direction=Incoming, ReceivedMillis=04:36:49.718, ToMemberSet=null, ServiceId=6, MessageType=1, FromMessageId=5278229, ToMessageId=337177, MessagePartCount=1, MessagePartIndex=0, NackInProgress=false, ResendScheduled=none, Timeout=none, PendingResendSkips=0, DeliveryState=unsent, Body=0x000D551F0085B8DF9FAECE80010101010204084080C001C1F80000000000000010000000000000000000000000000000000000000000000000, Body.length=57}
Service=DistributedCache{Name=ProductDistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=16, BackupPartitions=16}
    ToMemberSet=MemberSet(Size=1, BitSetCount=2
    Member(Id=26, Timestamp=2012-09-04 13:37:43.922, Address=32.83.113.116:10000, MachineId=3149, Location=machine:mac305,process:27091,member:mac305.instance1, Role=storage)
    NotifySent=false
} for 108002ms and appears to be stuck; attempting to interrupt: ProductDistributedCacheWorker:1 |Client Details{sdpGrid:,ClientName: ClientInstanceName: ,ClientThreadName: }| Logger@9259509 3.5.3/465
    I am looking for your help. Please let me know if you see what is the reason for the issue and how to address it.
    Thank you

    Today we had that issue again and I have gathered some more information.
Everything was the same as I described in the previous posts in this thread: the first attempt to clear a cache failed and the next 3 retries also failed. All 4 times, 2 storage-enabled nodes had that "... A worker thread has been executing task: Message "ClearRequest" ..." error message and got interrupted by the Guardian.
However, after that I had some time to do further experiments. Our app has a cache management UI that allows clearing any cache. So I started repeatedly taking thread dumps on those 2 storage-enabled nodes which had failed to clear the cache and executed the cache clear operation from that UI. One of the storage-enabled nodes successfully cleared its part, but the other still failed, with exactly the same error.
    So, I have a thread dump which I took while cache clear operation was in progress. It shows that a thread which is processing that ClearRequest is stuck waiting in ThreadGate.close method:
    at java.lang.Object.wait(Native Method)
    at com.tangosol.util.ThreadGate.doWait(ThreadGate.java:489)
    at com.tangosol.util.ThreadGate.close(ThreadGate.java:239)
    at com.tangosol.util.SegmentedConcurrentMap.lock(SegmentedConcurrentMap.java:180)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onClearRequest(DistributedCache.CDB:27)
    at
All subsequent attempts to clear the cache from the cache management UI failed until we restarted that storage-enabled node.
It looks like some thread left the ThreadGate in a locked state, and any further attempts to apply a lock as part of a ClearRequest message fail. Maybe it is a known issue in Coherence 3.5.3?
    Thanks

  • Cellular radio takes long time to get a signal.

The cellular radio on my iPad 3rd generation is starting to take quite a while to recognize a signal. It eventually recognizes the signal, but it takes too long. Is anyone else having this problem?

    I tried the hard reset, and it still took about 3 minutes to find a signal. My LTE signal never came back up. It's displaying a 3G signal now. Should I reset network settings or something?

  • 0CRM_SALES_ACT_1 takes a long time to extract data from CRM system

    Hi gurus,
I am using the datasource 0CRM_SALES_ACT_1 to extract activities data from the CRM side. However, it is taking too long to get any information there.
I applied SAP Note 829397 (Activity extraction takes a long time: 0crm_sales_act_1) but it did not solve my problem.
    Does anybody knows something about that?
    Thanks in advance,
    Silvio Messias.

    Hi Silvio,
    I've experienced a similar problem with this extractor.  I attempted to Initialize Delta with Data Transfer to no avail.  The job ran for 12+ hours and stayed in "yellow" status (0 records extracted).  The following steps worked for me:
    1.  Initialize Delta without Data Transfer
    2.  Run Delta Update
    3.  Run Full Update and Indicate Request as Repair Request
    Worked like a champ, data load finished in less than 2 minutes.
    Hopefully this will help.
    Regards.
    Jason

  • Takes Long time for Data Loading.

    Hi All,
    Good Morning.. I am new to SDN.
Currently I am using the datasource 0CRM_SRV_PROCESS_H, which contains 225 fields. I am using around 40 of those fields in my report.
Can I hide the remaining fields at the datasource level itself (TCODE: RSA6)?
Currently the data loading takes a long time to load the data from PSA to ODS (ODS 1).
Also, right now I am pulling some data from another ODS (ODS 2) as a lookup. It takes a long time to update the data in the active data table of the ODS.
Can you please suggest how to improve the performance of the data loading in this case?
    Thanks & Regards,
    Siva.

    Hi....
Yes, you can hide them; just check the Hide box for those fields. Are you on BI 7.0 or BW? Either way, is the number of records huge?
If so, you can split the records and execute, i.e. use the same InfoPackage but execute it with different selections.
Check in ST04 whether there are any locks or lock waits. If so, go to SM37 and check whether any long-running job is there or not, then check whether that job is progressing: double-click on the job, copy the PID from the job details, go to ST04, expand the node, and check whether you can find that PID there.
Also check the system log in SM21 and the short dumps in ST22.
Now, to improve performance you can try to increase the virtual memory or the number of servers, if possible; this will increase the number of work processes, since if many jobs run at the same time there will be no free work processes available.
    Regards,
    Debjani......

  • Error,13030 and others! unable to download, takes a long time to get to any page.

Error 13030 and others: Email - "Sending taking a little longer than usual...
We'll keep trying and let you know when it's been sent."
I am also unable to download (I use Send Space and Mega Upload), plus it takes a long time to get to any page; I have to reload several times and sometimes it still does not work!
I'm using a PC, XP Professional.
I was away for 9 days and came home to my computer not responding. I am using the free version of AVG security and read the instructions in the Mozilla help re: firewalls, changing the AVG settings by removing Mozilla and then adding it back in. However, I checked for updates on AVG and none were available, and the instructions Mozilla has are for a version of AVG that I do not have. I am using 8.5 and I think the instructions were for 9.0. At any rate I am stuck and desperately need to get this figured out! I hope that this is enough information for someone to point me in the right direction. Your help would be greatly appreciated. Thank you in advance! Charisme

    ldcorn wrote:
    My computer also has been taking a long time to find a wireless network, up to several minutes. I often have to have it "scan" for networks several times before it finally picks it up. Is there a solution? This has been going on for many months. Other computers (PCs, iPhone) pick up the signals within a few seconds, but my Powerbook takes several minutes, even over 5 minutes sometimes.
See how many networks are listed in Sys Prefs: Network: Advanced. Drag the preferred one to the top. Delete any you don't really need.
    Message was edited by: tjk

HT4759 Hello... I've been subscribed to iCloud for $20 per year and I found it useless for many reasons: I cannot disconnect my mobile during the uploading process and it takes a long time to upload my data. It's not a reliable system, that's why

Hello... I've been subscribed to iCloud for $20 per year and I found it useless for many reasons: I cannot disconnect my mobile during the uploading process, and it takes a long time to upload my data. It's not a reliable system, and that's why I need to deactivate the storage service and take my money back. Thanks

    The "issues" you've raised are nothing to do with the iCloud service.
    No service that uploads data allows you to disconnect the device you are uploading from while uploading data. Doing so would prevent the upload from completing. It is a basic requirement for any uploading service that you remain connected to it for uploading to be possible.
    The time it takes to upload data to iCloud is entirely dependent on how fast your Internet connection is, and how much data you are uploading. Both of these things are completely out of Apple's control. Whichever upload service you use will be affected by the speed of your Internet connection.

  • BPM Process chain takes long time to process

    We have BI7, Netweaver 2004s on Oracle and SUN Solaris
There is a process chain (BPM) which pulls data from the CRM system into BW. The scheduled time to run this chain is 0034 hrs. This chain should ideally complete before / around 0830 hrs. Now the problem is that every alternate day this chain behaves normally and gets completed well before 0830 hrs, but every alternate day this chain fails. There are almost 40 chains running daily; some are event-triggered (dependent on each other) and some run in parallel. In this (BPM) process chain there are usually 5 requests, with 3 delta and 2 full uploads (master data). The delta uploads finish in 30 minutes without any issues, with very few records transferred. The first full upload runs from 0034 hrs to approximately 0130 hrs and the 2nd from 0130 hrs to 0230 hrs. Now, if the 1st upload gets delayed, then the people who are initiating these chains stop the 2nd full upload and continue it after all the process chains are completed. This entire BPM process chain sometimes takes 17-18 hrs to complete!
    No other loads in CRM or BW when these process chains are running
    CRM has background jobs to push IDOCS to BW which run every 2 minutes which runs successfully
Yesterday this chain completed successfully (well within the stipulated time) with over 3,300,000 records transferred, but sometimes it has failed to transfer even 1,200,000 records!
I am attaching a zip file; please refer to "21 to 26 Analysis screen shot.doc" from the zip file.
Within the zip file I am also attaching "Normal timings of daily process chains.xls" - the name explains it.
Also within the zip file, please refer to "BPM Infoprovider and data source screen shot.doc"; the InfoPackage (page 2) which was used in the process chain is not displayed later on page 6, BUT THE CHAIN GOT SUCCESSFULLY COMPLETED.
    We have analyzed:--
    1)     The PSA data for BPM process chain for past few days
    2)     The info providers for BPM process chain for past few days
    3)     The ODS entries for BPM process chain for past few days
    4)     The point of failure of BPM process chain for past few days
    5)     The overall performance of all the process chains for past few days
    6)     The number of requests in BW for this process chain
    7)     The load on CRM system for past few days when this process chain ran on BW system
    As per our analysis, there are couple of things which can be fixed in the BW system:--
1)     The partner agreement (transaction WE20) defined for the partner LS/BP3CLNT475 mentions both message types RSSEND and RSINFO with "collect IDocs" and pack size = 1. Since pack size = 1 generates 1 tRFC call per IDoc, it should be changed to 10 so that fewer tRFCs are generated, meaning less overhead for the BW server and an increase in performance.
2)     In the definition of the destination for the concerned RFC in BW (SM59), the "Technical Settings" tab shows the "Load balancing" option = "No". We are planning to change it to "Yes".
    But we believe that though these changes will bring some increase in performance, this is not the root cause of the abnormal behavior of this chain as this chain runs successfully on every alternate day with approximately the same amount of load in it.
I was not able to attach the many screenshots or the other info which I had gathered during my analysis. Please advise how I can attach these files.
    Best Regards,

    Hi,
Normally, index creation or deletion can take a long time if your database statistics are not updated properly. So, after your data loading is completed and index generation is done, check the statistics and then create fresh database statistics.
Then try to recheck...
    Regards,
    Satya

  • Runbook takes long time to complete

    Hi,
I created a customized flow to get the data from MS SQL. The runbook is working fine, but it takes a long time to complete. Is there any option to increase the speed, or something like that?
    Regards,
    Soundarajan.

If you look on the Log tab you can see which activity took the longest. What does your runbook look like? If you have, for example, a Run .NET Script activity, you can do some tuning on the runbook server. But I think a good start is to share a figure of your runbook.
    Anders Bengtsson | Microsoft PFE | blog at http://www.contoso.se
