Data Extraction to Cube is very slow

Hi Experts,
I am working on an SAP APO project. I have created a cube to extract data (a backup of a planning area) from liveCache, but the data load is taking a lot of time. We analysed the job log and found that storing the data in the PSA and in the data target takes less than a minute, while reading the data from liveCache takes most of the time; the extraction also keeps creating and deleting views for the liveCache read. I created the cube in the BI instance within the APO system itself, so I believed it would be faster. Could anyone help me in this regard?
Thanks,
Vijay.

If you have two separate systems, APO and BW,
the preference for performance should be to extract the data from liveCache into a cube in APO, and then read the data in BW from the APO cube into the BW cube.
SAP recommends two different instances, especially for APO DP.
Hope it helps; otherwise you can find a lot of documentation on the SAP Service Marketplace on this subject.
Good luck.

Similar Messages

  • OBIEE 11g multidimensional cubes are very slow

    Hi,
    I've noticed that navigation in multidimensional cubes through OBIEE 11g is very, very slow, above all in comparison with the Microsoft Excel plug-in.
    I have Essbase 11.1.3 (if I remember correctly) and OBIEE 11.1.1.3.0.
    Does anyone have the same problem?
    Thanks

    Hi Andy,
    I observed a similar kind of performance problem in OBIEE 11g with an Oracle cube built using AWM 11g.
    The same cube accessed through an OBIEE 10g repository is very fast.
    I have seen one of my 10g reports come back within 30 seconds, where in 11g it hardly comes back within 5 minutes.
    I have just seen in an SR that there are inherent performance problems in the OBIEE 11.1.1.5 version, and Oracle asked us to apply a couple of patches to fix the behaviour.
    Regards,
    DxP

  • After 10g to 11g upgrade, 'Planning Data Pull Worker' performance is very slow

    Hi all,
    We upgraded the database from 10g to 11gR2. We have three production instances that work together: e-Business, Planning and Configurator. Some of you might be familiar with this kind of setup.
    We are running 'Planning Data Pull Worker' in the APS (planning) instance. This program SELECTs data from the EBIZ instance and INSERTs it into the planning instance.
    After the upgrade this works fine in one instance (PERF6, about half an hour), but takes very long, 6+ hours, in another instance, ARCH6.
    We figured out where the problem is: a select statement.
    "select count(1) from apps.MRP_AP3_SALES_ORDERS_V ;"
    Even though this is not exactly the select being used, we simplified it to just a count(1) of the main view, which is the culprit.
    I am pasting the trace from both instances, the one where it works fine (PERF6) and the one where it takes too long (ARCH6).
    Please have a look and help. We opened a Sev-1 TAR as well, but the problem persists.
    Thanks,
    Gopal

    =======================================
    PERF6 instance trace (where it works fine)
    =======================================
    SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY_AWR(('87dtkqstb5adq'),'961766442',null,'ADVANCED'));
    PLAN_TABLE_OUTPUT
    SQL_ID 87dtkqstb5adq
    select count(1) from apps.MRP_AP3_SALES_ORDERS_V
    Plan hash value: 961766442
    | Id | Operation                    | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
    |  0 | SELECT STATEMENT             |                           |       |       | 29253 (100)|          |
    |  1 | SORT AGGREGATE               |                           |     1 |   189 |            |          |
    |  2 | NESTED LOOPS                 |                           |     1 |   189 | 29253   (1)| 00:05:52 |
    |  3 | HASH JOIN                    |                           |     1 |   185 | 29253   (1)| 00:05:52 |
    |  4 | NESTED LOOPS OUTER           |                           |     1 |   176 | 29246   (1)| 00:05:51 |
    |  5 | NESTED LOOPS OUTER           |                           |     1 |   170 | 29245   (1)| 00:05:51 |
    |  6 | NESTED LOOPS                 |                           |     1 |   161 | 29245   (1)| 00:05:51 |
    |  7 | HASH JOIN                    |                           |  6090 |  654K | 19090   (1)| 00:03:50 |
    |  8 | TABLE ACCESS BY INDEX ROWID  | FND_LANGUAGES             |    20 |   100 |     2   (0)| 00:00:01 |
    |  9 | INDEX RANGE SCAN             | FND_LANGUAGES_N1          |    20 |       |     1   (0)| 00:00:01 |
    | 10 | HASH JOIN                    |                           |  6246 |  640K | 19088   (1)| 00:03:50 |
    | 11 | TABLE ACCESS FULL            | OE_TRANSACTION_TYPES_TL   |  1297 | 37613 |    10   (0)| 00:00:01 |
    | 12 | NESTED LOOPS                 |                           |       |       |            |          |
    | 13 | NESTED LOOPS                 |                           |  4103 |  304K | 19077   (1)| 00:03:49 |
    | 14 | NESTED LOOPS                 |                           |  4103 |  220K | 14517   (1)| 00:02:55 |
    | 15 | HASH JOIN                    |                           |  4100 |  104K |  9959   (1)| 00:02:00 |
    | 16 | TABLE ACCESS FULL            | MRP_DERIVED_SO_DEMANDS    |  4100 | 65600 |    17   (0)| 00:00:01 |
    | 17 | INDEX FAST FULL SCAN         | MTL_SYS_ITEMS_SN_N2       | 1559K |   14M |  1184   (1)| 00:00:15 |
    | 18 | TABLE ACCESS BY INDEX ROWID  | OE_ORDER_LINES_ALL        |     1 |    29 |     2   (0)| 00:00:01 |
    | 19 | INDEX UNIQUE SCAN            | OE_ORDER_LINES_U1         |     1 |       |     1   (0)| 00:00:01 |
    | 20 | INDEX UNIQUE SCAN            | OE_ORDER_HEADERS_U1       |     1 |       |     1   (0)| 00:00:01 |
    | 21 | TABLE ACCESS BY INDEX ROWID  | OE_ORDER_HEADERS_ALL      |     1 |    21 |     2   (0)| 00:00:01 |
    | 22 | TABLE ACCESS BY INDEX ROWID  | MTL_SALES_ORDERS          |     1 |    51 |     3   (0)| 00:00:01 |
    | 23 | INDEX RANGE SCAN             | MTL_SALES_ORDERS_N1       |     1 |       |     2   (0)| 00:00:01 |
    | 24 | INDEX UNIQUE SCAN            | PJM_PROJECT_PARAMETERS_U1 |     1 |     9 |     0   (0)|          |
    | 25 | INDEX RANGE SCAN             | OE_ODR_LINES_SN_N2        |     1 |     6 |     2   (0)| 00:00:01 |
    | 26 | TABLE ACCESS FULL            | AR_SYSTEM_PARAMETERS_ALL  |    43 |   387 |     6   (0)| 00:00:01 |
    | 27 | INDEX UNIQUE SCAN            | GL_SETS_OF_BOOKS_U2       |     1 |     4 |     0   (0)|          |
    Query Block Name / Object Alias (identified by operation id):
    1 - SEL$15993197
    8 - SEL$15993197 / FNL@SEL$2
    9 - SEL$15993197 / FNL@SEL$2
    11 - SEL$15993197 / OTTL@SEL$2
    16 - SEL$15993197 / MDSD@SEL$2
    17 - SEL$15993197 / MSIK@SEL$2
    18 - SEL$15993197 / OOL@SEL$2
    19 - SEL$15993197 / OOL@SEL$2
    20 - SEL$15993197 / OOHA@SEL$2
    21 - SEL$15993197 / OOHA@SEL$2
    22 - SEL$15993197 / MSO@SEL$2
    23 - SEL$15993197 / MSO@SEL$2
    24 - SEL$15993197 / MPP@SEL$2
    25 - SEL$15993197 / OOL1@SEL$2
    26 - SEL$15993197 / ASPA@SEL$2
    27 - SEL$15993197 / GSB@SEL$2
    Outline Data
    /*+
    BEGIN_OUTLINE_DATA
    IGNORE_OPTIM_EMBEDDED_HINTS
    OPTIMIZER_FEATURES_ENABLE('11.2.0.2')
    DB_VERSION('11.2.0.2')
    OPT_PARAM('optimizer_dynamic_sampling' 6)
    ALL_ROWS
    OUTLINE_LEAF(@"SEL$15993197")
    ELIMINATE_JOIN(@"SEL$5C160134" "MTL_SALES_ORDERS"@"SEL$3")
    OUTLINE(@"SEL$5C160134")
    MERGE(@"SEL$335DD26A")
    OUTLINE(@"SEL$1")
    OUTLINE(@"SEL$335DD26A")
    MERGE(@"SEL$3")
    OUTLINE(@"SEL$2")
    OUTLINE(@"SEL$3")
    INDEX_FFS(@"SEL$15993197" "MSIK"@"SEL$2" ("MRP_SN_SYS_ITEMS"."INVENTORY_ITEM_ID" "MRP_SN_SYS_ITEMS"."ORGANIZATION_ID"))
    FULL(@"SEL$15993197" "MDSD"@"SEL$2")
    INDEX_RS_ASC(@"SEL$15993197" "OOL"@"SEL$2" ("OE_ORDER_LINES_ALL"."LINE_ID"))
    INDEX(@"SEL$15993197" "OOHA"@"SEL$2" ("OE_ORDER_HEADERS_ALL"."HEADER_ID"))
    FULL(@"SEL$15993197" "OTTL"@"SEL$2")
    INDEX_RS_ASC(@"SEL$15993197" "FNL"@"SEL$2" ("FND_LANGUAGES"."INSTALLED_FLAG"))
    INDEX_RS_ASC(@"SEL$15993197" "MSO"@"SEL$2" ("MTL_SALES_ORDERS"."SEGMENT1"))
    INDEX(@"SEL$15993197" "MPP"@"SEL$2" ("PJM_PROJECT_PARAMETERS"."PROJECT_ID" "PJM_PROJECT_PARAMETERS"."ORGANIZATION_ID"))
    INDEX(@"SEL$15993197" "OOL1"@"SEL$2" ("MRP_SN_ODR_LINES"."LINE_ID"))
    FULL(@"SEL$15993197" "ASPA"@"SEL$2")
    INDEX(@"SEL$15993197" "GSB"@"SEL$2" ("GL_SETS_OF_BOOKS"."SET_OF_BOOKS_ID"))
    LEADING(@"SEL$15993197" "MSIK"@"SEL$2" "MDSD"@"SEL$2" "OOL"@"SEL$2" "OOHA"@"SEL$2" "OTTL"@"SEL$2" "FNL"@"SEL$2" "MSO"@"SEL$2" "MPP"@"SEL$2" "OOL1"@"SEL$2" "ASPA"@"SEL$2" "GSB"@"SEL$2")
    USE_HASH(@"SEL$15993197" "MDSD"@"SEL$2")
    USE_NL(@"SEL$15993197" "OOL"@"SEL$2")
    USE_NL(@"SEL$15993197" "OOHA"@"SEL$2")
    NLJ_BATCHING(@"SEL$15993197" "OOHA"@"SEL$2")
    USE_HASH(@"SEL$15993197" "OTTL"@"SEL$2")
    USE_HASH(@"SEL$15993197" "FNL"@"SEL$2")
    USE_NL(@"SEL$15993197" "MSO"@"SEL$2")
    USE_NL(@"SEL$15993197" "MPP"@"SEL$2")
    USE_NL(@"SEL$15993197" "OOL1"@"SEL$2")
    USE_HASH(@"SEL$15993197" "ASPA"@"SEL$2")
    USE_NL(@"SEL$15993197" "GSB"@"SEL$2")
    SWAP_JOIN_INPUTS(@"SEL$15993197" "MDSD"@"SEL$2")
    SWAP_JOIN_INPUTS(@"SEL$15993197" "OTTL"@"SEL$2")
    SWAP_JOIN_INPUTS(@"SEL$15993197" "FNL"@"SEL$2")
    END_OUTLINE_DATA
    */
    110 rows selected.
    SQL> spool off

  • Data extraction very slow

    Hello All,
    I am trying to extract data from the init of 2LIS_11_VAITM, but the extraction is very slow. The total data in the init set-up table is only 200k records.
    One thing I noticed is that we changed the extractor to increase the number of fields to 225. All these fields were available in the BI Content delivered communication/extract structure of 2LIS_11_VAITM in ECC.
    Is it because of the large number of fields being extracted from ECC? 200k records are being extracted; it has been 38 hrs and so far only 140k records have been extracted.
    Could you please suggest how I can improve the extraction performance?
    We will be moving this to production. I am afraid that the much larger number of records in production will take forever to extract.
    Thanks
    Shailini

    Yes, you are right: I/O means input and output, and generally refers to the data loading capability.
    BASIS will help you monitor the data loading capability.
    They will help check the sizing document, prepared before go-live, for the designed capacity; the BASIS team should have done some data loading tests before go-live.
    They will also help check the logs for problems during data loading.
    In the BI 7 statistics you can find some information about that load. Discuss it with the BASIS team; they can help analyse the problem even without the BI statistics.
    The system at hand does not load from 2LIS_12_VCITM, but here is some information for your reference:
    1. Production Server: 26K records, load to 2LIS_03_BF, takes 58s in all
    2. Testing Server: 1.5 Million records, full load to 0FI_GL_10, Runtime 34m 12s

  • My performance is very slow when I run graphs. How do I increase the speed at which I can do other things while the data is being updated and displayed on the graphs?

    I am doing a data acquisition and displaying the data on graphs. When I run the program it is slow. I think it is because I have tied the number of scans to read to my scan rate: I take the number of seconds I want to display on the chart times the scan rate and feed that into the number of samples to read at a time from the AI Read. The problem is that the VI stalls until those data points are acquired and displayed, so I cannot click or change values on the front panel until the graph updates. What can I do to help this?

    On Fri, 15 Aug 2003 11:55:03 -0500 (CDT), HAL wrote:
    >[original question quoted above]
    It may also be your graphics card. LabVIEW can max out the CPU, and your
    screen may not be refreshing very fast.
    --Ray
    "There are very few problems that cannot be solved by
    orders ending with 'or die.' " -Alistair J.R Young

  • Data Extraction and ODS/Cube loading: New date key field added

    Good morning.
    Your expert advise is required with the following:
    1. A data extract was done previously from a source with a full upload to the ODS and cube. An event is triggered from the source when data is available, and the process chain then first clears all the data in the ODS and cube and reloads, activates, etc.
    2. In the ODS, the 'forecast period' field has now been moved from the data fields to the key fields, as the user would like to report per period in future. The source will in future only provide the data for a specific period, not all the data as before.
    3. Data must be appended in future.
    4. The current InfoPackage for the ODS is a full upload.
    5. The 'old' data in the ODS and cube must not be deleted, as the source cannot provide it again. They will report on the data per forecast period key in future.
    I am not sure what to do in BW as far as the InfoPackages are concerned, loading the data and updating the cube.
    My questions are:
    Q1) How will I ensure that BW will append the data for each forecast period to the ODS and cube in future? What do I check in the InfoPackages?
    Q2) I have now removed the process chain event that used to delete the data in the ODS and cube before reloading it again. Was that the right thing to do?
    Your assistance will be highly appreciated. Thanks
    Cornelius Faurie

    Hi Cornelius,
    Q1) How will I ensure that BW will append the data for each forecast period to the ODS and cube in future? What do I check in the InfoPackages?
    -->> Try to load the data into the ODS in overwrite mode with a full update as before (this adds new records and overwrites previous records with the latest values), then push a delta from this ODS to the cube.
    If the existing ODS loads in addition mode, introduce one more ODS with the same granularity as the source, load it in overwrite mode (delta if possible, otherwise full), and push only the delta onwards.
    Q2) I have now removed the process chain event that used to delete the data in the ODS and cube before reloading it again. Was that the right thing to do?
    --> Yes, it is correct. Otherwise you would lose the historic data.
    Hope it Helps
    Srini

  • Data transfer very slow on Mac Pro!! Mac H/W unable to meet specifications.

    Hi,
    I am installing internal disks in the quad Mac Pro and the transfer rates on the Mac are very slow!
    The disks are Barracuda 7200.10, 500 GB, SATA II 300, model ST3500630AS, firmware 3.AAE, rated throughput 300 MB/s; the jumper has been removed to get the 300 MB/s mode.
    Apple states that the quad Mac Pro (2 x 2.66 GHz dual-core Intel Xeon) comes with a 3 Gigabit / 300 MB/s specification (Intel ESB2 AHCI, speed 3.0 Gigabit, AHCI version 1.10 supported).
    Benchmarks of the ST3500630AS show a maximum of 80 MB/s, a minimum of 37.3 MB/s, and an average of 63.1 MB/s (AnandTech tests).
    According to seagate.com the maximum sustained data transfer rate is 78 MB/s.
    But on the Mac the reality is different: I am getting a transfer rate of 22 MB/s, with no applications running and while only copying files.
    That means the Mac is providing only about 1/3 of the expected transfer rate for these disks.
    So, is there a way to get the normal transfer rate for those disks on the Mac Pro?
    Is the quad Mac Pro hardware unable to reach the maximum throughput of the disks?
    It seems the Mac cannot even reach the average transfer rate of those disks. How can this be corrected?
    Thanks a lot
    Alberto
    PS, just for fun: I am getting a faster transfer rate from an old PowerPC, an old PC, FireWire 400 and old IDE disks (36 MB/s)!

    Yup.
    The latest Mac Pros are better, so if you have a new or two-month-old unit, the Seagates should work, though I think there should be later firmware.
    Some would say those drives take the cake, or worse.
    http://www.barefeats.com/hard91.html
    SEAGATE PUZZLES
    +Firmware version 3.AEE or later solves the slow sustained large block write speed issue for a single 7200.10 inside the Mac Pro.+
    +The remaining performance issue is slow small random read speeds for one, two, three or four drives. No matter how many drives you configure in RAID 0 sets, the average random read speed for combined block sizes from 64K to 1024K is less than 30MB/s (based on QuickBench 3 testing).+
    +Until Seagate fixes this, we can't recommend the 7200.10 series as the ideal boot drive for the Mac Pro (or Power Mac).+
    http://www.barefeats.com/quad08.html

  • How to extract data from info cube into an internal table using ABAP code

    HI
    Can anyone please suggest
    how to extract data from an InfoCube into an internal table using ABAP code, such as BAPIs or function modules?
    Thankx in advance
    regds
    AJAY

    Hi Dinesh,
    Thank you for your reply,
    but I have already tried to use the function module.
    When I try to use the function module RSDRI_INFOPROV_READ
    I get an information message "ERROR GENERATION TEST FRAME".
    Can you please tell me what the problem could be?
    Bye
    AJAY
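
    For reference, the usual interface for reading InfoCube data into an ABAP internal table is the function module RSDRI_INFOPROV_READ, called package by package. The sketch below is untested and only illustrates the pattern; the cube name ZSALES_C01, the InfoObjects 0MATERIAL/0QUANTITY and the result structure are placeholders, and the exact interface should be verified in SE37 on your release.

    TYPE-POOLS: rs.

    DATA: lt_sfc   TYPE rsdri_th_sfc,
          ls_sfc   TYPE rsdri_s_sfc,
          lt_sfk   TYPE rsdri_th_sfk,
          ls_sfk   TYPE rsdri_s_sfk,
          lt_range TYPE rsdri_t_range,
          l_end    TYPE rs_bool,
          l_first  TYPE rs_bool VALUE rs_c_true.

    * Result table: one column per alias requested below (placeholder names).
    TYPES: BEGIN OF ty_row,
             material TYPE /bi0/oimaterial,
             quantity TYPE /bi0/oiquantity,
           END OF ty_row.
    DATA: lt_data TYPE STANDARD TABLE OF ty_row.

    * Characteristic to be read.
    ls_sfc-chanm    = '0MATERIAL'.
    ls_sfc-chaalias = 'MATERIAL'.
    INSERT ls_sfc INTO TABLE lt_sfc.

    * Key figure to be read, aggregated on the database.
    ls_sfk-kyfnm    = '0QUANTITY'.
    ls_sfk-kyfalias = 'QUANTITY'.
    ls_sfk-aggr     = 'SUM'.
    INSERT ls_sfk INTO TABLE lt_sfk.

    * Read the cube package by package until the end-of-data flag is set.
    DO.
      CALL FUNCTION 'RSDRI_INFOPROV_READ'
        EXPORTING
          i_infoprov           = 'ZSALES_C01'   "placeholder cube name
          i_th_sfc             = lt_sfc
          i_th_sfk             = lt_sfk
          i_t_range            = lt_range
          i_packagesize        = 50000
          i_use_db_aggregation = rs_c_true
        IMPORTING
          e_t_data             = lt_data
          e_end_of_data        = l_end
        CHANGING
          c_first_call         = l_first
        EXCEPTIONS
          OTHERS               = 1.
      IF sy-subrc <> 0.
        EXIT.
      ENDIF.
    * ... process the current package in lt_data here ...
      IF l_end = rs_c_true.
        EXIT.
      ENDIF.
    ENDDO.

    Restricting the read with selection rows in lt_range (characteristic name plus sign/operator/low/high) usually does more for runtime than anything else. As far as I know, the "ERROR GENERATION TEST FRAME" message is typically just SE37 failing to generate a test frame for the module's generic table parameters; it does not normally occur when the module is called from a program.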

  • Fetching data is very slow in workspace for planning application

    Hello Everyone,
    I am working in the web environment of a Planning application, and recently I have seen issues where fetching data through web forms is very slow, and the system locks up and kicks me out of Workspace when I refresh. Could you suggest what the reasons for these issues might be?
    Thanks in advance

    Hi,
    Sounds like your form is very large, or there may be a lot of dynamic calcs on the retrieval. How many rows and columns do you have on the form? Do you have dense or sparse dimensions as the rows or columns?
    Brian

  • How can I extract non-cumulative cube data to another cube?

    Dear Expert,
    I copied the inventory cube 0IC_C03 to a new cube, ZIC_C03, and then loaded the data of 0IC_C03 into the new cube ZIC_C03.
    The load was successful, but the result of the report on ZIC_C03 was not correct. After looking into the problem, I found
    that the new cube correctly extracts the data related to "Goods movement" and "Revaluation", yet the extraction of the "Open Balance" does not seem to work.
    I have read through Note 375098, in which I could not find the InfoObject 0RECORDTP in my start routine as mentioned in the solution,
    and the note is not listed as valid for the BW 3.5 release.
    How can I extract non-cumulative cube data to another cube correctly in BW 3.5, SP20?
    Thanks!

    Hi,
    Check the routines in the update rules, and whether the same code is written in the new cube.
    cheers,
    Satya

  • Update process very slow in Oracle 8 when updating bulk data

    Dear all
    I am updating data through a SQL sub-query, but I want to get two columns back from the sub-query to update my source table. The problem is that, used this way, the sub-query returns only a single column for the update, and I don't want to re-type another query for the other column because of the performance cost.
    The other issue is also performance-related: the update is very slow. How can I make this bulk update fast?
    Please suggest,
    Thanks

    Actually I am updating a time roster table with machine data: first I get the data from a file and insert it into machine_table, and then
    I run a join query and update the roster table, as shown below.
    The roster table contains data from the 1st to the last day of the month for every employee.
    update roster a
    set (a.timein,a.timeout) = (select timein,timeout from machine_table mch
    where a.roster_date = mch.roster_date and a.person_id = mch.person_id);
    This query updates around 7,750 rows and it takes too much time.
    Please help, urgent. Thanks.

  • SQL*Loader data load very slow...

    Hi,
    On my production server I have an insert problem. A regular SQL*Loader job loads files, and the inserts into the database take more and more time.
    For the first 2 to 3 hours one file takes 8 to 10 seconds; after that each file takes 5 minutes.
    As I understand it, the OS I/O is very slow: for the first 3 hours the DB buffer cache has free buffers and inserting into the buffer is normal,
    but once the buffer cache fills up we run into buffer waits and the inserts slow down. If that is right, please tell me how to improve the I/O.
    Here is some analysis from my server:
    [root@myserver ~]# iostat
    Linux 2.6.18-194.el5 (myserver) 06/01/2012
    avg-cpu: %user %nice %system %iowait %steal %idle
    3.34 0.00 0.83 6.66 0.00 89.17
    Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
    sda 107.56 2544.64 3140.34 8084953177 9977627424
    sda1 0.00 0.65 0.00 2074066 16
    sda2 21.57 220.59 1833.98 700856482 5827014296
    sda3 0.00 0.00 0.00 12787 5960
    sda4 0.00 0.00 0.00 8 0
    sda5 0.69 2.75 15.07 8739194 47874000
    sda6 0.05 0.00 0.55 5322 1736264
    sda7 0.00 0.00 0.00 2915 16
    sda8 0.50 9.03 5.24 28695700 16642584
    sda9 0.51 0.36 24.81 1128290 78829224
    sda10 0.52 0.00 5.98 9965 19004088
    sda11 83.71 2311.26 1254.71 7343426336 3986520976
    [root@myserver ~]# hdparm -tT /dev/sda11
    /dev/sda11:
    Timing cached reads: 10708 MB in 2.00 seconds = 5359.23 MB/sec
    Timing buffered disk reads: 540 MB in 3.00 seconds = 179.89 MB/sec
    [root@myserver ~]# sar -u -o datafile 1 6
    Linux 2.6.18-194.el5 (mca-webreporting2) 06/01/2012
    09:57:19 AM CPU %user %nice %system %iowait %steal %idle
    09:57:20 AM all 6.97 0.00 1.87 16.31 0.00 74.84
    09:57:21 AM all 6.74 0.00 1.25 17.48 0.00 74.53
    09:57:22 AM all 7.01 0.00 1.75 16.27 0.00 74.97
    09:57:23 AM all 6.75 0.00 1.12 13.88 0.00 78.25
    09:57:24 AM all 6.98 0.00 1.37 16.83 0.00 74.81
    09:57:25 AM all 6.49 0.00 1.25 14.61 0.00 77.65
    Average: all 6.82 0.00 1.44 15.90 0.00 75.84
    [root@myserver ~]# sar -u -o datafile 1 6
    Linux 2.6.18-194.el5 (mca-webreporting2) 06/01/2012
    09:57:19 AM CPU %user %nice %system %iowait %steal %idle
    mca-webreporting2;601;2012-05-27 16:30:01 UTC;2.54;1510.94;3581.85;0.00
    mca-webreporting2;600;2012-05-27 16:40:01 UTC;2.45;1442.78;3883.47;0.04
    mca-webreporting2;599;2012-05-27 16:50:01 UTC;2.44;1466.72;3893.10;0.04
    mca-webreporting2;600;2012-05-27 17:00:01 UTC;2.30;1394.43;3546.26;0.00
    mca-webreporting2;600;2012-05-27 17:10:01 UTC;3.15;1529.72;3978.27;0.04
    mca-webreporting2;601;2012-05-27 17:20:01 UTC;9.83;1268.76;3823.63;0.04
    mca-webreporting2;600;2012-05-27 17:30:01 UTC;32.71;1277.93;3495.32;0.00
    mca-webreporting2;600;2012-05-27 17:40:01 UTC;1.96;1213.10;3845.75;0.04
    mca-webreporting2;600;2012-05-27 17:50:01 UTC;1.89;1247.98;3834.94;0.04
    mca-webreporting2;600;2012-05-27 18:00:01 UTC;2.24;1184.72;3486.10;0.00
    mca-webreporting2;600;2012-05-27 18:10:01 UTC;18.68;1320.73;4088.14;0.18
    mca-webreporting2;600;2012-05-27 18:20:01 UTC;1.82;1137.28;3784.99;0.04
    [root@myserver ~]# vmstat
    procs -----------memory---------- -swap -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    0 1 182356 499444 135348 13801492 0 0 3488 247 0 0 5 2 89 4 0
    [root@myserver ~]# dstat -D sda
    ----total-cpu-usage---- dsk/sda -net/total- -paging -system
    usr sys idl wai hiq siq| read writ| recv send| in out | int csw
    3 1 89 7 0 0|1240k 1544k| 0 0 | 1.9B 1B|2905 6646
    8 1 77 14 0 1|4096B 3616k| 433k 2828B| 0 0 |3347 16k
    10 2 77 12 0 0| 0 1520k| 466k 1332B| 0 0 |3064 15k
    8 2 77 12 0 0| 0 2060k| 395k 1458B| 0 0 |3093 14k
    8 1 78 12 0 0| 0 1688k| 428k 1460B| 0 0 |3260 15k
    8 1 78 12 0 0| 0 1712k| 461k 1822B| 0 0 |3390 15k
    7 1 78 13 0 0|4096B 6372k| 449k 1950B| 0 0 |3322 15k
    AWR sheet output
    Wait Events
    ordered by wait time desc, waits desc (idle events last)
    Event Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn
    free buffer waits 1,591,125 99.95 19,814 12 129.53
    log file parallel write 31,668 0.00 1,413 45 2.58
    buffer busy waits 846 77.07 653 772 0.07
    control file parallel write 10,166 0.00 636 63 0.83
    log file sync 11,301 0.00 565 50 0.92
    write complete waits 218 94.95 208 955 0.02
    SQL> select 'free in buffer (NOT_DIRTY)',
                round(((select count(DIRTY) N_D from v$bh where DIRTY='N')*100)/(select count(*) from v$bh),2)||'%' DIRTY_PERCENT
           from dual
         union
         select 'keep in buffer (YES_DIRTY)',
                round(((select count(DIRTY) N_D from v$bh where DIRTY='Y')*100)/(select count(*) from v$bh),2)||'%' DIRTY_PERCENT
           from dual;
    'FREEINBUFFER(NOT_DIRTY)'  DIRTY_PERCENT
    free in buffer (NOT_DIRTY) 10.71%
    keep in buffer (YES_DIRTY) 89.29%
    Rag....

    1) Yes, this is a partitioned table with a local partitioned index on it.
    SQL> desc GR_CORE_LOGGING
    Name Null? Type
    APPLICATIONID VARCHAR2(20)
    SERVICEID VARCHAR2(25)
    ENTERPRISENAME VARCHAR2(25)
    MSISDN VARCHAR2(15)
    STATE VARCHAR2(15)
    FROMTIME VARCHAR2(25)
    TOTIME VARCHAR2(25)
    CAMP_ID VARCHAR2(50)
    TRANSID VARCHAR2(25)
    MSI_INDEX NUMBER
    SQL> select index_name,column_name from user_ind_columns where table_name='GR_CORE_LOGGING';
    INDEX_NAME
    COLUMN_NAME
    GR_CORE_LOGGING_IND
    MSISDN
    2) I did try a direct path load, but after that I dropped this table, created a new partitioned table and a fresh index again, and still have the same issue.

  • Internal Disk to Disk Data Transfer Speed Very Slow

    I have a G5 Xserve running Tiger with all updates applied that has recently started experiencing very slow Drive to Drive Data transfer speeds.
    When transferring data from one drive to another ( Internal to Internal, Internal to USB, Internal, Internal to FW, USB to USB or any other combination of the three ) we only are getting about 2GB / hr transfer speeds.
    I initially thought the internal drive was going bad. I tested the drive and found some minor header issues etc. that could be repaired, so I replaced the internal boot drive.
    I tested and immediately got the same issue.
    I also tried booting from a FW drive and I got the same issue.
    If I connect to the server over the ethernet network, I get what I would expect to be typical data transfer rates of about 20GB+ / hr. Much higher than the internal rates and I am copying data from the same internal drives so I really don't think the drive is the issue.
    I called AppleCare and discussed the issue with them. They said it sounded like a controller issue, so I purchased a replacement MLB from them. After the replacement, data transfer speeds jumped back to normal for about a day, maybe two.
    Now we are back to experiencing slow data transfer speeds internally ( 2GB / hr ) and normal transfer speeds ( 20GB+ / hr ) over the network.
    Any ideas on what might be causing the problem would be appreciated

    As suggested, do check for other I/O load on the spindles. And check for general system load.
    I don't know of a good GUI in-built I/O monitor here (and particularly for Tiger Server), though there is iopending and DTrace and Apple-provided [performance scripts|http://support.apple.com/kb/HT1992] with Leopard and Leopard Server. top would show you busy processes.
    Also look for memory errors and memory constraints and check for anything interesting in the contents of the system logs.
    The next spot after the controllers (and it's usually my first "hardware" stop for these sorts of cases, and usually before swapping the motherboard) are the disks that are involved, and whatever widgets are in the PCI slots. Loose cables, bad cables, and spindle-swaps. Yes, disks can sometimes slow down like this, and that's not usually a Good Thing. I know you think this isn't the disks, but that's one of the remaining common hardware factors. And don't presume any SMART disk monitoring has predictive value; SMART can miss a number of these cases.
    (Sometimes you have to use the classic "field service" technique of swapping parts and of shutting down software pieces until the problem goes away. Then work from there.)
    And the other question is around how much time and effort should be spent on this Xserve G5 box; whether you're now in the market for a replacement G5 box or a newer Intel Xserve box as a more cost-effective solution.
    (How current and how reliable is your disk archive?)

  • Best way to extract data from archived cube

    Hello Experts,
    Can anyone tell me the best way to extract data from an archived cube?
    Basically I am trying to pull all the data from archived cube and then load it into another brand new infoprovider which is in different box.
    Also I need to extract all the master data for all infoobjects.
    I have two options in my mind:
    1) Use open hub destination
    or
    2) Infoprovider>display data>select the fields and download the data.
    Is it really possible to extract the data using option (2) if the record count is very high, and then load it into another InfoProvider in the new system?
    Please suggest me the pros and cons for the two options.
    Thanks for your time in advance.

    Hello Reddy,
    Thanks a lot for your quick reply.
    Actually in my case I am trying to extract the archived InfoCube data and then load it into a new InfoProvider which is in a different system. If I had connectivity I could simply use the export DataSource of the archived InfoCube and reload into the new InfoProvider.
    But there is no connectivity between those two systems (where the archived cube is and where the new InfoProvider is), so I am left with the two options I mentioned:
    1) Use Open Hub
    or
    2) Extract the data manually from the InfoProvider into Excel.
    Can anyone let me know which of the two options is best? I also have doubts about using Excel for the extraction, as Excel has a limit of 65,536 rows.
    Thanks
    Edited by: saptrain on Mar 12, 2010 6:13 AM

  • DataSource extraction very slow (from source system to PSA it takes 23 hrs)

    Friends,
    We have enhanced the DataSource 0CRM_SALES_ORDER_I with the user exit. After the enhancement (i.e. adding the new fields and writing some code for them), the data extraction takes around 23 hours for approximately 250,000 records.
    Can you please suggest any steps to tune the performance of the DataSource?
    NOTE: the data extraction from the source system to the PSA alone takes 23 hrs; once the data has arrived in the PSA, the load of the data into the cube is fast.
    Please help me solve this issue.
    BASKAR

    Hi Friends,
    This is the code used for the datasource enhancement.(EXIT_SAPLRSAP_001)
    DATA : IS_CRMT_BW_SALES_ORDER_I LIKE CRMT_BW_SALES_ORDER_I.
    DATA:  MKT_ATTR TYPE STANDARD TABLE OF  CRMT_BW_SALES_ORDER_I.
    DATA: L_TABIX TYPE I.
    DATA: LT_LINK TYPE STANDARD TABLE OF CRMD_LINK,
          LS_LINK TYPE CRMD_LINK.
    DATA: LT_PARTNER TYPE STANDARD TABLE OF CRMD_PARTNER,
          LS_PARTNER TYPE CRMD_PARTNER.
    DATA: LT_BUT000 TYPE STANDARD TABLE OF BUT000,
          LS_BUT000 TYPE BUT000.
    DATA: GUID TYPE CRMT_OBJECT_GUID.
    DATA: GUID1 TYPE CRMT_OBJECT_GUID_TAB.
    DATA: ET_PARTNER TYPE CRMT_PARTNER_EXTERNAL_WRKT,
          ES_PARTNER TYPE CRMT_PARTNER_EXTERNAL_WRK.
    TYPES: BEGIN OF M_BINARY,
           OBJGUID_A_SEL TYPE CRMT_OBJECT_GUID,
            END OF M_BINARY.
    DATA: IT_BINARY TYPE STANDARD TABLE OF M_BINARY,
          WA_BINARY TYPE M_BINARY.
    TYPES : BEGIN OF M_COUPON,
             OFRCODE TYPE CRM_MKTPL_OFRCODE,
             END OF M_COUPON.
    DATA: IT_COUPON TYPE STANDARD TABLE OF M_COUPON,
          WA_COUPON TYPE M_COUPON.
    DATA: CAMPAIGN_ID TYPE CGPL_EXTID.
    TYPES : BEGIN OF M_ITEM,
             GUID TYPE CRMT_OBJECT_GUID,
            END OF M_ITEM.
    DATA: IT_ITEM TYPE STANDARD TABLE OF M_ITEM,
          WA_ITEM TYPE M_ITEM.
    TYPES : BEGIN OF M_PRICE,
                  KSCHL TYPE PRCT_COND_TYPE,
                  KWERT  TYPE PRCT_COND_VALUE,
                  KBETR   TYPE PRCT_COND_RATE,
            END OF M_PRICE.
    DATA: IT_PRICE TYPE STANDARD TABLE OF M_PRICE,
          WA_PRICE TYPE M_PRICE.
    DATA: PRODUCT_GUID TYPE COMT_PRODUCT_GUID.
    TYPES : BEGIN OF M_FRAGMENT,
             PRODUCT_GUID TYPE COMT_PRODUCT_GUID,
             FRAGMENT_GUID TYPE COMT_FRG_GUID,
             FRAGMENT_TYPE TYPE COMT_FRGTYPE_GUID,
           END OF M_FRAGMENT.
    DATA: IT_FRAGMENT TYPE STANDARD TABLE OF M_FRAGMENT,
          WA_FRAGMENT TYPE M_FRAGMENT.
    TYPES : BEGIN OF M_UCORD,
          PRODUCT_GUID TYPE     COMT_PRODUCT_GUID,
          FRAGMENT_TYPE     TYPE COMT_FRGTYPE_GUID,
          ZZ0010 TYPE     Z1YEARPLAN,
            ZZ0011 TYPE Z6YAERPLAN_1,
            ZZ0012 TYPE Z11YEARPLAN,
            ZZ0013 TYPE Z16YEARPLAN,
            ZZ0014 TYPE Z21YEARPLAN,
         END OF M_UCORD.
    DATA: IT_UCORD TYPE STANDARD TABLE OF M_UCORD,
          WA_UCORD TYPE M_UCORD.
    DATA: IT_CATEGORY TYPE STANDARD TABLE OF COMM_PRPRDCATR,
          WA_CATEGORY TYPE COMM_PRPRDCATR.
    DATA: IT_CATEGORY_MASTER TYPE STANDARD TABLE OF ZPROD_CATEGORY ,
          WA_CATEGORY_MASTER TYPE ZPROD_CATEGORY .
    types : begin of st_final,
               OBJGUID_B_SEL  TYPE CRMT_OBJECT_GUID,
               OFRCODE TYPE CRM_MKTPL_OFRCODE,
               PRODJ_ID TYPE CGPL_GUID16,
               OBJGUID_A_SEL type     CRMT_OBJECT_GUID,
              end of st_final.
    data : t_final1 type  standard table of st_final.
    data : w_final1 type  st_final.
    SELECT b~OBJGUID_B_SEL a~OFRCODE a~PROJECT_GUID b~OBJGUID_A_SEL INTO TABLE t_final1 FROM
       CRMD_MKTPL_COUP AS a INNER JOIN CRMD_BRELVONAE AS b ON b~OBJGUID_A_SEL = a~PROJECT_GUID.
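
    The exit is truncated here, but with 250,000 items taking 23 hours the usual suspect is database access executed once per document inside the loop over the extracted package. A common fix (only a sketch; it assumes the package arrives in the table parameter C_T_DATA and that the extract structure has an item GUID field, which may not match your exit exactly) is to buffer the link table once per package with FOR ALL ENTRIES and read from that buffer inside the loop:

    * Sketch: buffer CRMD_BRELVONAE once per data package instead of
    * selecting from it for every single order item (names are illustrative).
    TYPES: BEGIN OF ty_link,
             objguid_a_sel TYPE crmt_object_guid,
             objguid_b_sel TYPE crmt_object_guid,
           END OF ty_link.
    DATA: lt_link_buf TYPE STANDARD TABLE OF ty_link,
          ls_link_buf TYPE ty_link.

    IF c_t_data[] IS NOT INITIAL.   "FOR ALL ENTRIES on an empty table would read everything
      SELECT objguid_a_sel objguid_b_sel
        FROM crmd_brelvonae
        INTO TABLE lt_link_buf
        FOR ALL ENTRIES IN c_t_data
        WHERE objguid_b_sel = c_t_data-guid.   "item GUID field name is an assumption
      SORT lt_link_buf BY objguid_b_sel.
    ENDIF.

    LOOP AT c_t_data INTO is_crmt_bw_sales_order_i.
    * A binary search on the buffered table replaces one SELECT per item.
      READ TABLE lt_link_buf INTO ls_link_buf
           WITH KEY objguid_b_sel = is_crmt_bw_sales_order_i-guid
           BINARY SEARCH.
      IF sy-subrc = 0.
    *   ... derive the campaign/coupon fields from ls_link_buf here ...
      ENDIF.
    ENDLOOP.

    Comparing the RSA3 runtime with and without the exit active will show how much of the 23 hours is really spent in the enhancement.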
