DataBlAppend takes long time on registered data

Greetings! I'm using DIAdem 2012 on a Win7/64-bit computer (16GB memory and solid-state hard drive).  I work with one tdms file at a time but that file can be up to 8GB so I bring it into the Data Portal via the Register Data option.  The tdms file contains about 40 channels and each channel has about 50M datapoints.  If it matters, the data type of each channel is U16 with appropriate scaling factors in the channel parameters.  I display one channel in View and my goal is to set the two cursors on either side of an "event" then copy that segment of data between the cursors to a new channel in another group.  Actually, there are about ten channels that I want to copy exactly the same segment out to ten new channels.  This is the standard technique for programmatically "copying-flagged-data-points", i.e. reading and using the X1,X2 cursor position.  I am using DataBlAppend to write these new channels (I have also tried DataBlCopy with identical results).  My VBS script works exactly as I desire.  The new channel group containing the segments will be written out as a tdms file using another script. 
Copying out "small" segments takes a certain amount of time but copying larger segments takes an increasing amount of time, i.e. the increase is not linear.  I would like to do larger segments but I don't like waiting 20-30 minutes per segment.  The time culprit is the script line "Call DataBlAppend (CpyS, CurPosX1, CurPosX2-CurPosX1 +1, CpyT)" where CpyS and CpyT are strings containing the names of the source and target channels respectively (the empty target channels were previously created in the new group). 
My question is, "is there a faster way to do this within DIAdem?"  The amount of data being written to the new group can range from 20-160MB but I need to be able to write up to 250MB.  TDMS files of this size can normally be loaded or written out quite quickly on this computer under normal circumstances, so what is slowing this process down?  Thanks!

Greetings, Brad!! 
I agree that DataBlCopy is fast when working "from channels loaded in the Data Portal", but the tdms file I am working with is only "registered" in the portal.  I do not know exactly why that makes a difference, except that it must go out to the disk in order to read each channel.  The function DataBlCopy (or Append) is a black box to me, so I was hoping for some insight as to why it behaves like this under these circumstances.  However, your suggestion to try the function DataFileLoadRed() may bear fruit!  I wrote up a little demo script to copy out a "large" segment from an 8GB file registered in the portal using DataFileLoadRed and it is much, much faster!  It was a little odd selecting "IntervalCount" as my method, with the total number of intervals equal to the total number of data points between my begin and end points, and "eInterFirstValue" [in the interval] as the reduction method, but the results speak for themselves.  I will need to do some thorough checking to verify that I am getting exactly the data I want, but DataFileLoadRed does look promising as an alternative.  Thanks!
Chris

Similar Messages

  • Using Word Easy Table Under Report Generation takes long time to add data points to table and generate report

    Hi All,
    We used the Report Generation Toolkit and the other APIs under it to generate the report in Word, and we get good reports.
    But when there are more data points (> 100 on all channels) it takes a long time to write all the data, create the table in Word, and generate the report.
    Any suggestions on how to make this happen in a few seconds?
    Please assist.

    Well, I just tried my suggestion.  I simulated a 24-channel data producer (I actually generated 25 numbers -- the first number was the row number, followed by 24 random numbers) and generated 100 of these for a total of 2500 double-precision values.  I then saved this table to Excel and closed the file.  I then opened Word (all using RGT), wrote a single text line "Text with Excel", inserted the previously-created "Excel Object", and saved and closed Word.
    First, it worked (sort of).  The Table in Word started on a new page, and was in a very tiny font (possibly trying to fit 25 columns on a page?  I didn't inspect it very carefully).  This is probably "too much data" to really try to write the whole table, unless you format it for, say, 3 significant figures.
    Now, timing.  I ran this four times, two duplicate sets, one with Excel and Word in "normal" mode, one in "minimized".  To my surprise, this didn't make a lot of difference (minimized was less than 10% faster).  Here are the approximate times:
         Generate the data -- about 1 millisecond.
         Write the Excel Report -- about 1.5 seconds
         Write the Word Report -- about 10.5 seconds
    Seems to me this is way faster than trying to do this directly in Word.
    Bob Schor

  • SSIS package takes longer time when inserting data into temp tables

    Querying records from one server and inserting them into temp tables is taking a long time.
    Are there any settings in the package which would enhance the performance?

    Will a local temp table (#temp) enhance the performance?
    If you're planning to use # tables in ssis make sure you read this
    http://consultingblogs.emc.com/jamiethomson/archive/2006/11/19/SSIS_3A00_-Using-temporary-tables.aspx
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs

  • Query Takes Longer time as the Data Increases.

    Hi ,
    We have the below query, which takes around 4 to 5 minutes to retrieve the data, and it appears to get slower as the data grows.
    DB Version=10.2.0.4
    OS=Solaris 10
    tst_trd_owner@MIFEX3> explain plan for select * from TIBEX_OrderBook as of scn 7785234991 where meid='ME4';
    Explained.
    tst_trd_owner@MIFEX3> select plan_table_output from table(dbms_xplan.display('plan_table',null,'serial'));
    PLAN_TABLE_OUTPUT
    Plan hash value: 3096779986
    | Id  | Operation                     | Name                     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                          |     1 |   303 |       |   609K  (1)| 01:46:38 |
    |*  1 |  HASH JOIN SEMI               |                          |     1 |   303 |   135M|   609K  (1)| 01:46:38 |
    |*  2 |   HASH JOIN                   |                          |   506K|   129M|       |   443K  (1)| 01:17:30 |
    |   3 |    TABLE ACCESS BY INDEX ROWID| TIBEX_ORDERSTATUSENUM    |     1 |    14 |       |     2   (0)| 00:00:01 |
    |*  4 |     INDEX RANGE SCAN          | TIBEX_ORDERSTAT_ID_DESC  |     1 |       |       |     1   (0)| 00:00:01 |
    |*  5 |    TABLE ACCESS FULL          | TIBEX_ORDER              |  3039K|   736M|       |   443K  (1)| 01:17:30 |
    |   6 |   VIEW                        | VW_NSO_1                 |  7931K|   264M|       |   159K  (1)| 00:27:53 |
    |   7 |    HASH GROUP BY              |                          |  7931K|   378M|   911M|   159K  (1)| 00:27:53 |
    |*  8 |     HASH JOIN RIGHT ANTI      |                          |  7931K|   378M|       | 77299   (1)| 00:13:32 |
    |*  9 |      VIEW                     | index$_join$_004         |     2 |    28 |       |     2  (50)| 00:00:01 |
    |* 10 |       HASH JOIN               |                          |       |       |       |            |          |
    |  11 |        INLIST ITERATOR        |                          |       |       |       |            |          |
    |* 12 |         INDEX RANGE SCAN      | TIBEX_ORDERSTAT_ID_DESC  |     2 |    28 |       |     2   (0)| 00:00:01 |
    |  13 |        INDEX FAST FULL SCAN   | XPKTIBEX_ORDERSTATUSENUM |     2 |    28 |       |     1   (0)| 00:00:01 |
    |  14 |      INDEX FAST FULL SCAN     | IX_ORDERBOOK             |    11M|   408M|       | 77245   (1)| 00:13:31 |
    Predicate Information (identified by operation id):
       1 - access("A"."MESSAGESEQUENCE"="$nso_col_1" AND "A"."ORDERID"="$nso_col_2")
       2 - access("A"."ORDERSTATUS"="ORDERSTATUS")
       4 - access("SHORTDESC"='ORD_OPEN')
       5 - filter("MEID"='ME4')
       8 - access("ORDERSTATUS"="ORDERSTATUS")
       9 - filter("SHORTDESC"='ORD_NOTFND' OR "SHORTDESC"='ORD_REJECT')
      10 - access(ROWID=ROWID)
      12 - access("SHORTDESC"='ORD_NOTFND' OR "SHORTDESC"='ORD_REJECT')
    33 rows selected.
    The view query for TIBEX_OrderBook:
    SELECT  ORDERID, USERORDERID, ORDERSIDE, ORDERTYPE, ORDERSTATUS,
              BOARDID, TIMEINFORCE, INSTRUMENTID, REFERENCEID,
              PRICETYPE, PRICE, AVERAGEPRICE, QUANTITY, MINIMUMFILL,
              DISCLOSEDQTY, REMAINQTY, AON, PARTICIPANTID, ACCOUNTTYPE,
              ACCOUNTNO, CLEARINGAGENCY, 'OK' AS LASTINSTRESULT,
              LASTINSTMESSAGESEQUENCE, LASTEXECUTIONID, NOTE, TIMESTAMP,
              QTYFILLED, MEID, LASTINSTREJECTCODE, LASTEXECPRICE, LASTEXECQTY,
              LASTINSTTYPE, LASTEXECUTIONCOUNTERPARTY, VISIBLEQTY,
              STOPPRICE, LASTEXECCLEARINGAGENCY, LASTEXECACCOUNTNO,
              LASTEXECCPCLEARINGAGENCY, MESSAGESEQUENCE, LASTINSTUSERALIAS,
              BOOKTIMESTAMP, ParticipantIDMM, MarketState, PartnerExId,
              LastExecSettlementCycle, LastExecPostTradeVenueType,
              PriceLevelPosition, PrevReferenceID, EXPIRYTIMESTAMP, matchType,
              lastExecutionRole, a.MDEntryID, a.PegOffset, a.haltReason,
              a.LastInstFixSequence, A.COMPARISONPRICE, A.ENTEREDPRICETYPE
        FROM  tibex_Order A
        WHERE (A.MessageSequence, A.OrderID) IN (
                SELECT  max(B.MessageSequence), B.OrderID
                  FROM  tibex_Order B
                  WHERE orderStatus NOT IN (
                          SELECT orderStatus
                            FROM tibex_orderStatusEnum
                            WHERE ShortDesc IN ('ORD_REJECT', 'ORD_NOTFND')
                        )
                  GROUP BY B.OrderID
              )
          AND A.OrderStatus IN (
                SELECT OrderStatus
                  FROM  tibex_orderStatusEnum
                  WHERE ShortDesc IN ('ORD_OPEN')
              )
    /
    Any helpful suggestions?
    Regards
    NM

    Hi Centinul,
    I tried your modified version of the query on the test machine. It used quite a lot of temp space (around 9GB) and finally ran out of disk space.
    On the test machine I generated stats and executed the queries, but in production our stats will always be stale. The reason is:
    In the morning we have 3000 records in Tibex_Order, and as the day progresses the data grows, reaching up to 20 million records by the end of the day. We then generate the stats and truncate the transaction tables (Tibex_Order = 20 million records), so the next day our stats are stale, and if a user runs any query it takes ages to retrieve. An example is the one below.
    tst_trd_owner@MIFEX3>
    tst_trd_owner@MIFEX3> CREATE OR REPLACE VIEW TIBEX_ORDERBOOK_TEMP
      2  (ORDERID, USERORDERID, ORDERSIDE, ORDERTYPE, ORDERSTATUS,
      3   BOARDID, TIMEINFORCE, INSTRUMENTID, REFERENCEID, PRICETYPE,
      4   PRICE, AVERAGEPRICE, QUANTITY, MINIMUMFILL, DISCLOSEDQTY,
      5   REMAINQTY, AON, PARTICIPANTID, ACCOUNTTYPE, ACCOUNTNO,
      6   CLEARINGAGENCY, LASTINSTRESULT, LASTINSTMESSAGESEQUENCE, LASTEXECUTIONID, NOTE,
      7   TIMESTAMP, QTYFILLED, MEID, LASTINSTREJECTCODE, LASTEXECPRICE,
      8   LASTEXECQTY, LASTINSTTYPE, LASTEXECUTIONCOUNTERPARTY, VISIBLEQTY, STOPPRICE,
      9   LASTEXECCLEARINGAGENCY, LASTEXECACCOUNTNO, LASTEXECCPCLEARINGAGENCY, MESSAGESEQUENCE, LASTINSTUSERALIAS,
    10   BOOKTIMESTAMP, PARTICIPANTIDMM, MARKETSTATE, PARTNEREXID, LASTEXECSETTLEMENTCYCLE,
    11   LASTEXECPOSTTRADEVENUETYPE, PRICELEVELPOSITION, PREVREFERENCEID, EXPIRYTIMESTAMP, MATCHTYPE,
    12   LASTEXECUTIONROLE, MDENTRYID, PEGOFFSET, HALTREASON, LASTINSTFIXSEQUENCE,
    13   COMPARISONPRICE, ENTEREDPRICETYPE)
    14  AS
    15  SELECT orderid
    16       , MAX(userorderid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    17       , MAX(orderside) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    18       , MAX(ordertype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    19       , MAX(orderstatus) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    20       , MAX(boardid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    21       , MAX(timeinforce) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    22       , MAX(instrumentid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    23       , MAX(referenceid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    24       , MAX(pricetype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    25       , MAX(price) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    26       , MAX(averageprice) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    27       , MAX(quantity) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    28       , MAX(minimumfill) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    29       , MAX(disclosedqty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    30       , MAX(remainqty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    31       , MAX(aon) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    32       , MAX(participantid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    33       , MAX(accounttype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    34       , MAX(accountno) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    35       , MAX(clearingagency) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    36       , 'ok' as lastinstresult
    37       , MAX(lastinstmessagesequence) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    38       , MAX(lastexecutionid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    39       , MAX(note) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    40       , MAX(timestamp) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    41       , MAX(qtyfilled) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    42       , MAX(meid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    43       , MAX(lastinstrejectcode) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    44       , MAX(lastexecprice) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    45       , MAX(lastexecqty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    46       , MAX(lastinsttype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    47       , MAX(lastexecutioncounterparty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    48       , MAX(visibleqty) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    49       , MAX(stopprice) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    50       , MAX(lastexecclearingagency) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    51       , MAX(lastexecaccountno) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    52       , MAX(lastexeccpclearingagency) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    53       , MAX(messagesequence) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    54       , MAX(lastinstuseralias) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    55       , MAX(booktimestamp) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    56       , MAX(participantidmm) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    57       , MAX(marketstate) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    58       , MAX(partnerexid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    59       , MAX(lastexecsettlementcycle) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    60       , MAX(lastexecposttradevenuetype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    61       , MAX(pricelevelposition) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    62       , MAX(prevreferenceid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    63       , MAX(expirytimestamp) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    64       , MAX(matchtype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    65       , MAX(lastexecutionrole) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    66       , MAX(mdentryid) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    67       , MAX(pegoffset) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    68       , MAX(haltreason) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    69       , MAX(lastinstfixsequence) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    70       , MAX(comparisonprice) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    71       , MAX(enteredpricetype) KEEP (DENSE_RANK FIRST ORDER BY messagesequence DESC)
    72  FROM   tibex_order
    73  WHERE  orderstatus IN (
    74                           SELECT orderstatus
    75                           FROM   tibex_orderstatusenum
    76                           WHERE  shortdesc IN ('ORD_OPEN')
    77                        )
    78  GROUP BY orderid
    79  /
    View created.
    tst_trd_owner@MIFEX3> SELECT /*+ gather_plan_statistics */    *   FROM   TIBEX_OrderBook_TEMP as of scn 7785234991 where meid='ME4';
    SELECT /*+ gather_plan_statistics */    *   FROM   TIBEX_OrderBook_TEMP as of scn 7785234991 where meid='ME4'
    ERROR at line 1:
    ORA-01114: IO error writing block to file %s (block # %s)
    ERROR:
    ORA-03114: not connected to ORACLE
    Any suggestion will be helpful.
    Regards
    NM
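
    For what it's worth, the stale-statistics pattern described above (3000 rows in the morning growing to ~20 million by end of day, with stats gathered only before the truncate) is conventionally handled by re-gathering stats right after the daily load rather than at end of day. A minimal sketch; the owner and table names are taken from the post, and the option values are assumptions to adapt:

    ```sql
    -- Sketch: re-gather optimizer statistics on the volatile table after
    -- each load so plans reflect the grown row counts. Owner/table names
    -- are from the post; estimate_percent and cascade are assumed values.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'TST_TRD_OWNER',
        tabname          => 'TIBEX_ORDER',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);   -- also refresh index statistics
    END;
    /
    ```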

  • Take long time to get data

    Hi,
    I have some ABAP code to load data into an internal table. If the code is
    select * from xxxx into table itab where ....
    then it works properly. However, I don't want to load the whole table into the internal table; I want to load the data based on another internal table. The code looks like
    select * from xxx into table itab for all entries in itab2 where ...
    I thought this code would execute faster than the previous one, however it is really slow; I got "sequential read" from SM50.
    Can anyone explain why?
    Thanks
    Victor

    Make sure that the internal table used in the FOR ALL ENTRIES is sorted by the field on which you are joining.
    if not itab2[] is initial.
      sort itab2 ascending by field1.
      select * from xxx into table itab
        for all entries in itab2
        where field1 = itab2-field1.
    endif.
    Regards,
    Rich Heilman
    Message was edited by: Rich Heilman

  • Takes Long time for Data Loading.

    Hi All,
    Good Morning.. I am new to SDN.
    Currently I am using the datasource 0CRM_SRV_PROCESS_H, which contains 225 fields. I am using around 40 of those fields in my report.
    Can I hide the remaining fields at the datasource level itself (TCODE: RSA6)?
    Currently data loading takes a long time from PSA to ODS (ODS 1).
    Also, right now I am pulling some data from another ODS (ODS 2) (lookup). It takes a long time to update the data in the active data table of the ODS.
    Can you please suggest how to improve the data loading performance in this case.
    Thanks & Regards,
    Siva.

    Hi,
    Yes, you can hide them; just check the hide box for those fields. Are you on BI 7.0 or BW? Either way, is the number of records huge?
    If so, you can split the records and execute; I mean, use the same InfoPackage, just execute it with different selections.
    Check in ST04 whether there are any locks or lock waits. If so, go to SM37 and check whether any long-running job is there and whether it is progressing: double-click on the job, copy the PID from the job details, go to ST04, expand the node, and check whether you can find that PID there.
    Also check the system log in SM21, and short dumps in ST22.
    To improve performance, you can try to increase the virtual memory or the number of servers if possible; that will increase the number of work processes, since if many jobs run at the same time there may be no free work processes left to proceed.
    Regards,
    Debjani

  • HT4759 Hello .. I've been subscribed to iCloud for $20 per year and I found it useless for many reasons: that I can not disconnect my mobile while the uploading process and it takes long time for uploading my data .. Its not a reliable system that's why

    Hello. I've been subscribed to iCloud for $20 per year and I found it useless for many reasons: I cannot disconnect my mobile during the uploading process, and it takes a long time to upload my data. It's not a reliable system, which is why I need to deactivate the storage service and take my money back. Thanks.

    The "issues" you've raised have nothing to do with the iCloud service.
    No service that uploads data allows you to disconnect the device you are uploading from while uploading data. Doing so would prevent the upload from completing. It is a basic requirement for any uploading service that you remain connected to it for uploading to be possible.
    The time it takes to upload data to iCloud is entirely dependent on how fast your Internet connection is, and how much data you are uploading. Both of these things are completely out of Apple's control. Whichever upload service you use will be affected by the speed of your Internet connection.

  • BPM Process chain takes long time to process

    We have BI7, Netweaver 2004s on Oracle and SUN Solaris
    There is a process chain (BPM) which pulls data from the CRM system into BW. The scheduled time to run this chain is 0034 hrs, and it should ideally complete before / around 0830 hrs. Now the problem is that every alternate day this chain behaves normally and completes well before 0830 hrs, but every alternate day it fails. There are almost 40 chains running daily; some are event-triggered (dependent on each other) and some run in parallel. In this BPM process chain there are usually 5 requests, with 3 delta and 2 full uploads (master data). The delta uploads finish in 30 minutes without any issues, with very few records transferred. The first full upload runs from 0034 hrs to approximately 0130 hrs and the 2nd upload from 0130 hrs to 0230 hrs. If the 1st upload gets delayed, the people initiating these chains stop the 2nd full upload and continue it after all the process chains are completed. This entire BPM process chain sometimes takes 17-18 hrs to complete!
    No other loads in CRM or BW when these process chains are running
    CRM has background jobs to push IDOCS to BW which run every 2 minutes which runs successfully
    Yesterday this chain completed successfully (well within the stipulated time) with over 33,00,000 records transferred, but sometimes it has failed transferring even 12,00,000 records!
    Attaching a zip file, please refer the “21 to 26 Analysis screen shot.doc” from the zip file
    Within the zip file, attaching “Normal timings of daily process chains.xls” – the name explains it….
    Also within the zip file, refer to "BPM Infoprovider and data source screen shot.doc": the infopackage (page 2) which was used in the process chain is not displayed later on page 6, BUT THE CHAIN COMPLETED SUCCESSFULLY.
    We have analyzed:--
    1)     The PSA data for BPM process chain for past few days
    2)     The info providers for BPM process chain for past few days
    3)     The ODS entries for BPM process chain for past few days
    4)     The point of failure of BPM process chain for past few days
    5)     The overall performance of all the process chains for past few days
    6)     The number of requests in BW for this process chain
    7)     The load on CRM system for past few days when this process chain ran on BW system
    As per our analysis, there are couple of things which can be fixed in the BW system:--
    1)     The partner agreement (transaction WE20) defined for the partner LS/BP3CLNT475 mentions, for both message types RSSEND and RSINFO, "collect IDOCs" and pack size = 1. Since pack size = 1 generates one tRFC call per IDOC, it should be changed to 10 so that fewer tRFCs are generated, and thus less overhead for the BW server, resulting in increased performance.
    2)     In the definition of destination for the concerned RFC in BW (SM59), the “Technical Setting” tab says the “Load balancing” option = “No”. We are planning to make it “Yes”
    But we believe that though these changes will bring some increase in performance, this is not the root cause of the abnormal behavior of this chain as this chain runs successfully on every alternate day with approximately the same amount of load in it.
    I was not able to attach the many screenshots or the info which I had gathered during my analysis. Please advise how I can attach these files.
    Best Regards,

    Hi,
    Normally index creation or deletion can take a long time when your database statistics are not updated properly. So, after your data load is completed and index generation is done, re-create the database statistics.
    Then try to recheck ...
    Regards,
    Satya

  • XI - J2ee takes long time to startup after SP19

    Hi everybody,
    we have patched our XI development system to SP 19; during startup of the J2EE engine it takes a long time, about 10 minutes, registering methods in com.sap.security.core.server.vsi.service.jni.VirusScanInterface!
    Here follows an extract of the dev_server0 trace file:
    JHVM_BuildArgumentList: main method arguments of node [server0]
    [Thr 3600] Wed Jan 24 15:10:14 2007
    [Thr 3600] JHVM_RegisterNatives: registering methods in com.sap.bc.proj.jstartup.JStartupFramework
    [Thr 3600] JLaunchISetClusterId: set cluster id 5501650
    [Thr 3600] JLaunchISetState: change state from [Initial (0)] to [Waiting for start (1)]
    [Thr 3600] JLaunchISetState: change state from [Waiting for start (1)] to [Starting (2)]
    [Thr 31100] Wed Jan 24 15:10:48 2007
    [Thr 31100] JHVM_RegisterNatives: registering methods in com.sap.mw.rfc.driver.CpicDriver
    [Thr 31100] JHVM_RegisterNatives: registering methods in com.sap.mw.jco.util.SAPConverters
    [Thr 31100] JHVM_RegisterNatives: registering methods in com.sap.mw.jco.util.SAPCharToNUCByteConverter
    [Thr 31100] Wed Jan 24 15:10:50 2007
    [Thr 31100] JHVM_RegisterNatives: registering methods in com.sap.mw.rfc.engine.Compress
    [Thr 25960] Wed Jan 24 15:11:02 2007
    [Thr 25960] JHVM_RegisterNatives: registering methods in com.sap.security.core.server.vsi.service.jni.VirusScanInterface
    [Thr 3600] Wed Jan 24 15:21:40 2007
    [Thr 3600] JLaunchISetState: change state from [Starting (2)] to [Starting applications (10)]
    [Thr 23390] Wed Jan 24 15:24:40 2007
    [Thr 23390] JLaunchISetState: change state from [Starting applications (10)] to [Running (3)]
    Is there any way to speed up this process, or to deactivate the VirusScanInterface service? What can I check to understand what happens while the VirusScanInterface methods are being registered?
    We are using XI on AIX platform, version 5.3 ML5 with Oracle database.
    Thanks in advance.
    Best regards.
    Tiziano

    Hello there, mabaaref.
    The following Knowledge Base article provides some great information in regards to battery functionality:
    iPhone and iPod touch: Charging the battery
    http://support.apple.com/kb/HT1476
    It's important to note that if the battery is extremely low on power, your device may display a black screen for up to two minutes before one of these images appears. Continue charging for at least 30 minutes, or until your device is fully charged.
    Thanks for reaching out to Apple Support Communities.
    Cheers,
    Pedro.

  • MVIEW refresh takes long time

    The materialized view takes a long time to refresh, but when I tried select & insert into a table it was very fast.
    I executed the SQL and it takes just 1 min (total rows is 447),
    but while I refresh the MVIEW it takes 1.5 hrs (total rows is 447).
    MVIEW configration :-
    CREATE MATERIALIZED VIEW EVAL.EVALSEARCH_PRV_LWC
    TABLESPACE EVAL_T_S_01
    NOCACHE
    NOLOGGING
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    REFRESH FORCE ON DEMAND
    WITH PRIMARY KEY
    Not sure why there is so much difference.
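
    One thing often worth testing when a complete refresh of a tiny MV is this slow is a non-atomic complete refresh, which lets Oracle truncate and reload the MV instead of deleting and re-inserting inside one transaction. A hedged sketch; the MV name is taken from the post, and atomic_refresh => FALSE leaves the MV empty while the refresh runs, so verify that is acceptable for your readers:

    ```sql
    -- Sketch: force a non-atomic complete refresh. With atomic_refresh
    -- set to FALSE, Oracle can TRUNCATE the MV and reload it with
    -- direct-path inserts instead of DELETE + conventional INSERT.
    BEGIN
      DBMS_MVIEW.REFRESH(
        list           => 'EVAL.EVALSEARCH_PRV_LWC',
        method         => 'C',        -- 'C' = complete refresh
        atomic_refresh => FALSE);
    END;
    /
    ```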

    infant_raj wrote:
    Materialized view takes long time to refresh but when i tried select & insert into table it's very fast.
    i executed SQL and it takes just 1 min (total rows is 447)
    but while i refresh the MVIEW it takes 1.5 hrs (total rows is 447)

    A SELECT does a consistent read.
    A MV refresh does that and also writes database data.
    These are not the same thing and cannot be directly compared.
    So instead of pointing at the SELECT execution time and asking why the MV refresh is not as fast, look instead at WHAT the refresh is doing and HOW it is doing it.
    Is the execution plan sane? What events are the top ones for the MV refresh? What are the wait states that contributes most to the processing time of the refresh?
    You cannot use the SELECT statement's execution time as a direct comparison metric. The work done by the refresh is more than the work done by the SELECT. You need to determine exactly what work is done by the refresh and whether that work is done in a reasonable time, and how other sessions are impacting the refresh (it could very well be blocked by another session).
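
    To put numbers on the wait-state questions above, here is a sketch of the kind of query that shows where the refresh session spends its time (the SID value 123 is a placeholder for the session actually running the refresh, e.g. found via V$SESSION):

    ```sql
    -- Sketch: top wait events for the session performing the MV refresh.
    -- Replace 123 with the refreshing session's SID.
    SELECT event, total_waits, time_waited
    FROM   v$session_event
    WHERE  sid = 123
    ORDER  BY time_waited DESC;
    ```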

  • The 0co_om_opa_6 ip in the process chains takes long time to run

    Hi experts,
    The 0co_om_opa_6 IP in the process chains takes a long time to run, around 5 hours, in production.
    I have checked note 382329:
    -> indexes 1 and 4 are active
    -> index 4 was set to "Index does not exist in database system ORACLE". I assigned it to "Indexes on all database systems" and ran the delta load in the development system, but I guess there is not much data in dev: it took 2-1/2 hrs to run, as it did earlier, so I didn't find much difference in performance.
    As per Note 549552 (CO line item extractors: performance), I have checked the table BWOM_SETTINGS; these are the settings in the ECC system:
    -> OLTPSOURCE -  is blank
       PARAM_NAME - OBJSELSIZE
       PARAM_VALUE- is blank
    -> OLTPSOURCE - is blank
       PARAM_NAME - NOTSSELECT
       PARAM_VALUE- is blank
    -> OLTPSOURCE- 0CO_OM_OPA_6
       PARAM_NAME - NOBLOCKING
       PARAM_VALUE- is blank.
    Could you please check if any other settings need to be made.
    Also, for the IP there is a selection criterion for FISCALYEAR/PERIOD from 2004-2099, and an init was done for the same period; as a result it is becoming difficult for me to load a single year.
    Please suggest.

    The problem was that index 4 was not active at the database level. It was recommended by the SAP team to activate it in SE14; however, while doing so we faced a few issues. SE14 is a very sensitive transaction and should be handled carefully: the index should be activated, not created.
    The OBJSELSIZE in the table BWOM_SETTINGS has to be marked 'X' to improve the quality, and index 4 should also be activated at the ABAP level, i.e. in table COEP -> Indexes -> Index 4 -> select "Index on all database systems" in place of "No database index". Once it is activated at the ABAP table level you can activate the same index at the database level.
    Be very careful when you execute this in SE14; it is best to use DB02 for the database step, as basis tends to make fewer mistakes there.
    Thanks, hope this helps.

  • Delete Index in Process Chain Takes long time after SAP BI 7.0 SP 27

    After upgrading to SAP BI 7.0 SP 27, the Delete Index and Create Index processes in the process chain take a long time.
    For example: Delete Index for 0SD_C03 takes around 55 minutes.
    Before the SP upgrade it took around 2 minutes to delete the index on 0SD_C03.
    Regards
    Madhu P Menon

    Hi,
    Normally, index creation or deletion can take a long time when the database statistics are not updated properly. After your data load has completed and the index generation is done, refresh the database statistics.
    Then try again ...
    Regards,
    Satya

  • SELECT statement takes long time

    Hi All,
    In the following code, if T_QMIH-EQUNR contains blank or space values, the SELECT statement takes a long time to access the data from the OBJK table. If T_QMIH-EQUNR contains values other than blank, performance is good and it fetches the data very fast.
    We already have an index on EQUNR in the OBJK table.
    Only for blank entries does it take much time. Can anybody tell me why it behaves this way for blank entries?
    IF NOT T_QMIH[] IS INITIAL.
      SORT T_QMIH BY EQUNR.
      REFRESH T_OBJK.
      SELECT EQUNR OBKNR
        FROM OBJK INTO TABLE T_OBJK
        FOR ALL ENTRIES IN T_QMIH
        WHERE TASER = 'SER01'
          AND EQUNR = T_QMIH-EQUNR.
    ENDIF.
    Thanks
    Ajay

    Hi
    You can use the field QMIH-QMNUM against OBJK-IHNUM.
    In table QMIH, EQUNR is not a primary key, so it can have multiple entries.
    To improve performance, copy T_QMIH into a temporary internal table, sort it on EQUNR,
    delete adjacent duplicates from that copy (e.g. d_qmih), and use it in the FOR ALL ENTRIES clause;
    this will improve the performance.
    Also list the fields in the WHERE clause in the order of the index and primary key fields.
    IF NOT T_QMIH[] IS INITIAL.
      SORT T_QMIH BY EQUNR.
      REFRESH T_OBJK.
      SELECT EQUNR OBKNR
        FROM OBJK INTO TABLE T_OBJK
        FOR ALL ENTRIES IN T_QMIH
        WHERE IHNUM = T_QMIH-QMNUM
          AND TASER = 'SER01'
          AND EQUNR = T_QMIH-EQUNR.
    ENDIF.
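    Putting the de-duplication advice together with the corrected WHERE clause, a minimal sketch (assuming T_QMIH carries both QMNUM and EQUNR; the helper table name D_QMIH is illustrative, not from the original system):

    * Work on a copy so the original driver table stays intact.
    DATA D_QMIH LIKE T_QMIH.
    D_QMIH = T_QMIH[].

    * Blank keys match a huge share of OBJK rows; drop them up front.
    DELETE D_QMIH WHERE EQUNR IS INITIAL.

    * De-duplicate the key combinations sent to the database.
    SORT D_QMIH BY QMNUM EQUNR.
    DELETE ADJACENT DUPLICATES FROM D_QMIH COMPARING QMNUM EQUNR.

    IF NOT D_QMIH[] IS INITIAL.
      SELECT EQUNR OBKNR
        FROM OBJK INTO TABLE T_OBJK
        FOR ALL ENTRIES IN D_QMIH
        WHERE IHNUM = D_QMIH-QMNUM
          AND TASER = 'SER01'
          AND EQUNR = D_QMIH-EQUNR.
    ENDIF.

    Blank EQUNR values are dropped because a blank key in FOR ALL ENTRIES behaves like a normal equality match against every row whose EQUNR is initial, which can be a very large set in OBJK.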
    try this and let me know
    regards
    Shiva

  • Oracle report formatting takes long time

    Hi,
    I am using Oracle 10g Application Server 9.0.4.0.0 on Red Hat Linux AS 3.
    Oracle Reports are run through JSPs.
    Some reports take a long time to execute, although their queries run fast when executed in SQL*Plus.
    It was found that formatting is what runs long for those reports.
    What might be the reason for this?
    Is the data fetched first, and then formatting takes place on that data?
    Waiting for a reply,
    Avinash


  • Runbook takes long time to complete

    Hi,
    I created a customized flow to get data from MS SQL. The runbook works fine, but it takes a long time to complete. Is there any option to increase the speed, or something like that?
    Regards,
    Soundarajan.

    If you look at the Log tab, you can see which activity took the longest. What does your runbook look like? If you have, for example, a Run .NET Script activity, you can do some tuning on the runbook server. But I think a good start is to share a figure of your runbook.
    Anders Bengtsson | Microsoft PFE | blog at http://www.contoso.se

Maybe you are looking for

  • Missing PDF Options in Acrobat Pro 8

    For some reason, on some machines in the office "PDF Options" is missing when we try to print to the Adobe PDF 8.0 printer under OS X 10.4.10. Both machines are Intel-based. Two of the machines in the office have the option, two don't. We thought it

  • Error on reading file in Applet

    I wrote an jfilechooser and allow the user to choose the folder. The applet will then load the file list of the folder. However, after i chose the folder, the following error occurs Exception in thread "AWT-EventQueue-25" java.security.AccessControlE

  • System is considering Freight during Excise.

    Hi SAP Gurus, We have one scenario where the  vendor is maintained  as supply as well as freight vendor. While doing PO for this vendor, in conditions it is mentioned that excise amount is added to inventory as cenvat cannot be availed( no set off).

  • Back up and sync my Iphone to my pc and some apps wont sync?

    I used an AppleID yrs ago to buy some apps on my Iphone well now I am trying to sync my Iphone to my itunes account on my pc and it wont allow me to cos it says my pc isnt authorized for it. It wants me to put in my password for those apps and I not

  • After upgrade my iphone 4s to 5.1

    i tried to upgarde my iphone 4s to 5.1. Will shut down by itself. I restore more then 6 times. Always stuck in the middle. after succesfully restore. will shut down by itself. I try to turn on again. cant get in the main page. only shows the apple ic