Gather Schema Statistics Report taking more than 13 hours to complete: is this normal?

I ran the Gather Schema Statistics report at 9 pm and it completed at 11 am the next morning, so it took more than 13 hours. Is this behavior normal?
I used the following parameters:
Schema Name: ALL
Estimate Percent: 50
Backup Flag: NOBACKUP
History Mode: LASTRUN
Gather Option: GATHER
Invalidate Dependent Cursors: Y
My database size is about 250 GB.
Please advise.
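For reference, the Gather Schema Statistics concurrent program calls the FND_STATS package under the hood (the log in the next post shows APPS.FND_STATS in its error stack). A minimal sketch of an equivalent manual run, with parameter names assumed from the request log labels; verify the exact signature against your FND_STATS package spec:

    BEGIN
      -- Sketch only: parameter names are assumptions based on the log labels.
      apps.fnd_stats.gather_schema_stats(
        schemaname       => 'ALL',  -- all registered Applications schemas
        estimate_percent => 50,     -- same sample size as the 13-hour run
        degree           => 8);     -- parallel degree, as seen in the log below
    END;
    /

For what it's worth, a 50% sample of a 250 GB database is a heavy run; smaller samples (10-30%) are often adequate and may explain much of the elapsed time.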

Gather Schema Statistics errors out when I use the GATHER_AUTO option with 10%.
Here is the log file:
+---------------------------------------------------------------------------+
Application Object Library: Version : 12.0.0
Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
FNDGSCST module: Gather Schema Statistics
+---------------------------------------------------------------------------+
Current system time is 13-AUG-2013 10:42:12
+---------------------------------------------------------------------------+
**Starts**13-AUG-2013 10:42:12
ORACLE error 20001 in FDPSTP
Cause: FDPSTP failed due to ORA-20001: SYS_NTGNSVL1S+OCZGRAAHKD9MYG== is an invalid identifier
ORA-06512: at "APPS.FND_STATS", line 774
ORA-06512: at line 1
The SQL statement being executed at the time of the error was: SE
+---------------------------------------------------------------------------+
Start of log messages from FND_FILE
+---------------------------------------------------------------------------+
In GATHER_SCHEMA_STATS , schema_name= ALL percent= 10 degree = 8 internal_flag= NOBACKUP
ORA-20001: SYS_NTGNSVL1S+OCZGRAAHKD9MYG== is an invalid identifier
+---------------------------------------------------------------------------+
End of log messages from FND_FILE
+---------------------------------------------------------------------------+
+---------------------------------------------------------------------------+
Executing request completion options...
Finished executing request completion options.
+---------------------------------------------------------------------------+
Concurrent request completed
Current system time is 13-AUG-2013 10:43:29
+---------------------------------------------------------------------------+
I used the following parameters:
Schema Name: ALL
Estimate Percent: 10
Backup Flag: NOBACKUP
History Mode: LASTRUN
Gather Option: GATHER_AUTO
Invalidate Dependent Cursors: Y
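The identifier in the ORA-20001 message follows the SYS_NT... pattern Oracle generates for nested-table storage tables, which FND_STATS can choke on. A hedged diagnostic sketch for locating the parent object (the DBA view is standard Oracle; the literal name is copied from the log):

    -- Find which parent table owns the system-generated nested-table
    -- storage segment named in the ORA-20001 message.
    SELECT owner, table_name, parent_table_name, parent_table_column
    FROM   dba_nested_tables
    WHERE  table_name = 'SYS_NTGNSVL1S+OCZGRAAHKD9MYG==';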

Similar Messages

• WEBI report taking more than an hour while refreshing

    Hi All,
    I have a few WEBI reports that need to be refreshed automatically every day at a particular time. Each refresh should take around 10-15 minutes, but a few days back they started refreshing for hours without any error... I am unable to find the reason.
    Please guide me.
    Regards

    Hi Denis,
    Thanks for your reply.
    My report does not complete even after an hour of refreshing; it just remains in the same running status. The filter is unchanged, and I have tested the report SQL; it works fine as expected. No BOE users have been added, though yes, the data set on the DB has increased. I could not find anything in the Webi logs.
    Any guidance?
    Regards

• Sync and Create Project operations from DTR taking more than an hour

    Hi All,
    Recently the Basis team implemented the track for the ESS/MSS application, so when we import the track into NWDS it shows 500 DCs.
    I successfully ran the Sync and Create Project operations from DTR for 150 DCs, and they took about 5 minutes per DC.
    However, after that, when I try to sync a DC or create a project from DTR, the operation takes more than 3 hours per DC. This should not be the case, because the same operations on the first 150 DCs hardly took 5 minutes each. As this was taking so much time, I finally closed NWDS to stop the operation.
    I am using NWDS 2.0.15, EP 7.0 portal SP15, and NWDI 7.0.
    Can anybody tell me how to solve this issue so that I can Sync and Create Project from DTR for a DC within 5 minutes?
    Thanks
    Susmita

    Hi Susmita,
    If the DCs build fine in CBS, then I feel there is no need to test all of them locally in NWDS.
    You can verify certain applications in these DCs: sync & create projects for those DCs and test-run the applications.
    As I understand it, you only need to check (no changes will be made), so you can verify them in small groups (say 20-25 DCs per group) in different workspaces so that no workspace is overloaded.
    But why keep a local copy of them when you are not making any changes? You can Unsync & Remove these projects once verified and use the same workspace to work on the next set of DCs.
    Hope this clarifies your concerns.
    Kind Regards,
    Nitin
    Edited by: Nitin Jain on Apr 23, 2009 1:55 PM

  • XMLSerialize taking more than 10 hours to execute in Oracle

    Hi All,
    In my current project we first convert the Oracle query output into XML and then use XMLSerialize for printing, but execution takes more than 15 hours.
    The basic Oracle query takes hardly 10 seconds to execute, and converting its output to XML takes around 1 minute, but when I use XMLSerialize with an ORDER BY clause it does not finish.
    Can someone help fix this performance issue caused by XMLSerialize?
    Thanks in advance.
    The performance issue started after adding the clause below:
    select XMLSerialize(CONTENT rec_str as CLOB) as test_XML, 100 + rnum as ORDER_CLAUSE from xxtemp
    Edited by: redrose1405 on May 1, 2012 12:45 AM
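    One hedged idea, assuming the cost comes from sorting rows that already carry the serialized CLOB: compute the ordering key in a materialized subquery and serialize afterwards. Table and column names (xxtemp, rec_str, rnum) are taken from the post, and the MATERIALIZE hint is a suggestion to test, not a guaranteed fix:

        WITH keyed_rows AS (
          SELECT /*+ MATERIALIZE */ rec_str,
                 100 + rnum AS order_clause   -- sort key is a plain number
          FROM   xxtemp
        )
        SELECT XMLSerialize(CONTENT rec_str AS CLOB) AS test_xml,
               order_clause
        FROM   keyed_rows
        ORDER BY order_clause;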


• HT201274 My iPhone 4 is taking more than 5 hours to erase all the data and is still in progress. How much longer do I have to wait for my phone to turn on?

    My iPhone 4 is taking more than 5 hours to erase all the data and is still in progress. How much longer do I have to wait for my phone to turn on?

    I'm having this EXACT same problem with my iPhone 4, and I have the same computer stats (I have a Samsung Series 7)

• Syncing my iPad is taking more than 36 hours... how do I speed this up so I can update my iOS?

    Syncing my iPad is taking more than 36 hours... how do I speed this up so I can update my iOS?

    Try restarting the iPad. Disconnect the iPad first.
    Press and hold the On/Off Sleep/Wake button until the red slider appears. Slide your finger across the slider to turn off the iPad. To turn the iPad back on, press and hold the On/Off Sleep/Wake button until the Apple logo appears.
    Restart your PC.
    Try again.

• I'm trying to restore and back up my iPhone 4 and it's taking more than 45 hours. What should I do?

    I have been trying to restore and back up my iPhone, which I have just updated, because I wanted my apps, contacts, messages, etc. all back on my iPhone, but it's taking more than 45 hours and the phone is only 16 GB. I have really important contacts that I need on my phone, and it would take weeks to get them back one by one without using the backup.

    As has been said, you may have a corrupt backup on your computer. However, if it is a Windows computer you may also have a corrupt socket layer in Windows. So first try this (after canceling the backup):
    Open a command window with Administrator privileges and type:
       netsh winsock reset
    then reboot your computer and try again to do a backup (right click on the phone's name in iTunes and select Backup from the floating menu).
    If this doesn't work you will have to delete the existing backup. Go to iTunes preferences, Devices page.
    Note: If you have a Mac first try restarting it, launching iTunes, and back up as described above.

  • Query taking more than 1/2 hour for 80 million rows in fact table

    Hi All,
    I am stuck on this query, as it is taking more than 35 minutes to execute for 80 million rows. My SLA is less than 30 minutes for 160 million rows, i.e. double the number.
    Below is the query and the Execution Plan.
    SELECT txn_id AS txn_id,
           acntng_entry_src AS txn_src,
           f.hrarchy_dmn_id AS hrarchy_dmn_id,
           f.prduct_dmn_id AS prduct_dmn_id,
           f.pstng_crncy_id AS pstng_crncy_id,
           f.acntng_entry_typ AS acntng_entry_typ,
           MIN (d.date_value) AS min_val_dt,
           GREATEST (MAX (d.date_value),
                     LEAST ('07-Feb-2009', d.fin_year_end_dt)) AS max_val_dt
    FROM   Position_Fact f, Date_Dimension d
    WHERE  f.val_dt_dmn_id = d.date_dmn_id
    GROUP BY txn_id,
             acntng_entry_src,
             f.hrarchy_dmn_id,
             f.prduct_dmn_id,
             f.pstng_crncy_id,
             f.acntng_entry_typ,
             d.fin_year_end_dt
    The execution plan is:
    11 HASH JOIN Cost: 914,089 Bytes: 3,698,035,872 Cardinality: 77,042,414
      9 TABLE ACCESS FULL TABLE Date_Dimension Cost: 29 Bytes: 94,960 Cardinality: 4,748
      10 TABLE ACCESS FULL TABLE Position_Fact Cost: 913,693 Bytes: 2,157,187,592 Cardinality: 77,042,414
    Kindly suggest how to make it faster.
    Regards,
    Sid

    The above is just the part of the query that is taking the most time.
    Kindly find the entire query and plan below:
    WITH MIN_MX_DT
    AS
    ( SELECT
    TXN_ID AS TXN_ID,
    ACNTNG_ENTRY_SRC AS TXN_SRC,
    F.HRARCHY_DMN_ID AS HRARCHY_DMN_ID,
    F.PRDUCT_DMN_ID AS PRDUCT_DMN_ID,
    F.PSTNG_CRNCY_ID AS PSTNG_CRNCY_ID,
    F.ACNTNG_ENTRY_TYP AS ACNTNG_ENTRY_TYP,
    MIN (D.DATE_VALUE) AS MIN_VAL_DT,
    GREATEST (MAX (D.DATE_VALUE), LEAST (:B1, D.FIN_YEAR_END_DT))
    AS MAX_VAL_DT
    FROM
    proj_PSTNG_FCT F, proj_DATE_DMN D
    WHERE
    F.VAL_DT_DMN_ID = D.DATE_DMN_ID
    GROUP BY
    TXN_ID,
    ACNTNG_ENTRY_SRC,
    F.HRARCHY_DMN_ID,
    F.PRDUCT_DMN_ID,
    F.PSTNG_CRNCY_ID,
    F.ACNTNG_ENTRY_TYP,
    D.FIN_YEAR_END_DT),
    SLCT_RCRDS
    AS (
    SELECT
    M.TXN_ID,
    M.TXN_SRC,
    M.HRARCHY_DMN_ID,
    M.PRDUCT_DMN_ID,
    M.PSTNG_CRNCY_ID,
    M.ACNTNG_ENTRY_TYP,
    D.DATE_VALUE AS VAL_DT,
    D.DATE_DMN_ID,
    D.FIN_WEEK_NUM AS FIN_WEEK_NUM,
    D.FIN_YEAR_STRT AS FIN_YEAR_STRT,
    D.FIN_YEAR_END AS FIN_YEAR_END
    FROM
    MIN_MX_DT M, proj_DATE_DMN D
    WHERE
    D.HOLIDAY_IND = 0
    AND D.DATE_VALUE >= MIN_VAL_DT
    AND D.DATE_VALUE <= MAX_VAL_DT),
    DLY_HDRS
    AS (
    SELECT
    S.TXN_ID AS TXN_ID,
    S.TXN_SRC AS TXN_SRC,
    S.DATE_DMN_ID AS VAL_DT_DMN_ID,
    S.HRARCHY_DMN_ID AS HRARCHY_DMN_ID,
    S.PRDUCT_DMN_ID AS PRDUCT_DMN_ID,
    S.PSTNG_CRNCY_ID AS PSTNG_CRNCY_ID,
    SUM (
    DECODE (
    PNL_TYP_NM,
    :B5, DECODE (NVL (F.PSTNG_TYP, :B2),
    :B2, NVL (F.PSTNG_AMNT, 0) * (-1),
    NVL (F.PSTNG_AMNT, 0)),
    0))
    AS MTM_AMT,
    NVL (
    LAG (
    SUM (
    DECODE (
    PNL_TYP_NM,
    :B5, DECODE (NVL (F.PSTNG_TYP, :B2),
    :B2, NVL (F.PSTNG_AMNT, 0) * (-1),
    NVL (F.PSTNG_AMNT, 0)),
    0)))
    OVER (
    PARTITION BY S.TXN_ID,
    S.TXN_SRC,
    S.HRARCHY_DMN_ID,
    S.PRDUCT_DMN_ID,
    S.PSTNG_CRNCY_ID
    ORDER BY S.VAL_DT),
    0)
    AS YSTDY_MTM,
    SUM (
    DECODE (
    PNL_TYP_NM,
    :B4, DECODE (NVL (F.PSTNG_TYP, :B2),
    :B2, NVL (F.PSTNG_AMNT, 0) * (-1),
    NVL (F.PSTNG_AMNT, 0)),
    0))
    AS CASH_AMT,
    SUM (
    DECODE (
    PNL_TYP_NM,
    :B3, DECODE (NVL (F.PSTNG_TYP, :B2),
    :B2, NVL (F.PSTNG_AMNT, 0) * (-1),
    NVL (F.PSTNG_AMNT, 0)),
    0))
    AS PAY_REC_AMT,
    S.VAL_DT,
    S.FIN_WEEK_NUM,
    S.FIN_YEAR_STRT,
    S.FIN_YEAR_END,
    NVL (TRUNC (F.REVSN_DT), S.VAL_DT) AS REVSN_DT,
    S.ACNTNG_ENTRY_TYP AS ACNTNG_ENTRY_TYP
    FROM
    SLCT_RCRDS S,
    proj_PSTNG_FCT F,
    proj_ACNT_DMN AD,
    proj_PNL_TYP_DMN PTD
    WHERE
    S.TXN_ID = F.TXN_ID(+)
    AND S.TXN_SRC = F.ACNTNG_ENTRY_SRC(+)
    AND S.HRARCHY_DMN_ID = F.HRARCHY_DMN_ID(+)
    AND S.PRDUCT_DMN_ID = F.PRDUCT_DMN_ID(+)
    AND S.PSTNG_CRNCY_ID = F.PSTNG_CRNCY_ID(+)
    AND S.DATE_DMN_ID = F.VAL_DT_DMN_ID(+)
    AND S.ACNTNG_ENTRY_TYP = F.ACNTNG_ENTRY_TYP(+)
    AND SUBSTR (AD.ACNT_NUM, 0, 1) IN (1, 2, 3)
    AND NVL (F.ACNT_DMN_ID, 1) = AD.ACNT_DMN_ID
    AND NVL (F.PNL_TYP_DMN_ID, 1) = PTD.PNL_TYP_DMN_ID
    GROUP BY
    S.TXN_ID,
    S.TXN_SRC,
    S.DATE_DMN_ID,
    S.HRARCHY_DMN_ID,
    S.PRDUCT_DMN_ID,
    S.PSTNG_CRNCY_ID,
    S.VAL_DT,
    S.FIN_WEEK_NUM,
    S.FIN_YEAR_STRT,
    S.FIN_YEAR_END,
    TRUNC (F.REVSN_DT),
    S.ACNTNG_ENTRY_TYP,
    F.TXN_ID)
    SELECT
    D.TXN_ID,
    D.VAL_DT_DMN_ID,
    D.REVSN_DT,
    D.TXN_SRC,
    D.HRARCHY_DMN_ID,
    D.PRDUCT_DMN_ID,
    D.PSTNG_CRNCY_ID,
    D.YSTDY_MTM,
    D.MTM_AMT,
    D.CASH_AMT,
    D.PAY_REC_AMT,
    MTM_AMT + CASH_AMT + PAY_REC_AMT AS DLY_PNL,
    SUM (
    MTM_AMT + CASH_AMT + PAY_REC_AMT)
    OVER (
    PARTITION BY D.TXN_ID,
    D.TXN_SRC,
    D.HRARCHY_DMN_ID,
    D.PRDUCT_DMN_ID,
    D.PSTNG_CRNCY_ID,
    D.FIN_WEEK_NUM || D.FIN_YEAR_STRT || D.FIN_YEAR_END
    ORDER BY D.VAL_DT)
    AS WTD_PNL,
    SUM (
    MTM_AMT + CASH_AMT + PAY_REC_AMT)
    OVER (
    PARTITION BY D.TXN_ID,
    D.TXN_SRC,
    D.HRARCHY_DMN_ID,
    D.PRDUCT_DMN_ID,
    D.PSTNG_CRNCY_ID,
    D.FIN_YEAR_STRT || D.FIN_YEAR_END
    ORDER BY D.VAL_DT)
    AS YTD_PNL,
    D.ACNTNG_ENTRY_TYP AS ACNTNG_PSTNG_TYP,
    'EOD ETL' AS CRTD_BY,
    SYSTIMESTAMP AS CRTN_DT,
    NULL AS MDFD_BY,
    NULL AS MDFCTN_DT
    FROM
    DLY_HDRS D
    Plan
    SELECT STATEMENT ALL_ROWS Cost: 11,950,256 Bytes: 3,369,680,886 Cardinality: 7,854,734
    25 WINDOW SORT Cost: 11,950,256 Bytes: 3,369,680,886 Cardinality: 7,854,734
    24 WINDOW SORT Cost: 11,950,256 Bytes: 3,369,680,886 Cardinality: 7,854,734
    23 VIEW Cost: 10,519,225 Bytes: 3,369,680,886 Cardinality: 7,854,734
    22 WINDOW BUFFER Cost: 10,519,225 Bytes: 997,551,218 Cardinality: 7,854,734
    21 SORT GROUP BY Cost: 10,519,225 Bytes: 997,551,218 Cardinality: 7,854,734
    20 HASH JOIN Cost: 10,296,285 Bytes: 997,551,218 Cardinality: 7,854,734
    1 TABLE ACCESS FULL TABLE proj_PNL_TYP_DMN Cost: 3 Bytes: 45 Cardinality: 5
    19 HASH JOIN Cost: 10,296,173 Bytes: 2,695,349,628 Cardinality: 22,841,946
    5 VIEW VIEW index$_join$_007 Cost: 3 Bytes: 84 Cardinality: 7
    4 HASH JOIN
    2 INDEX FAST FULL SCAN INDEX (UNIQUE) proj_ACNT_DMN_PK Cost: 1 Bytes: 84 Cardinality: 7
    3 INDEX FAST FULL SCAN INDEX (UNIQUE) proj_ACNT_DMN_UNQ Cost: 1 Bytes: 84 Cardinality: 7
    18 HASH JOIN RIGHT OUTER Cost: 10,293,077 Bytes: 68,925,225,244 Cardinality: 650,237,974
    6 TABLE ACCESS FULL TABLE proj_PSTNG_FCT Cost: 913,986 Bytes: 4,545,502,426 Cardinality: 77,042,414
    17 VIEW Cost: 7,300,017 Bytes: 30,561,184,778 Cardinality: 650,237,974
    16 MERGE JOIN Cost: 7,300,017 Bytes: 230,184,242,796 Cardinality: 650,237,974
    8 SORT JOIN Cost: 30 Bytes: 87,776 Cardinality: 3,376
    7 TABLE ACCESS FULL TABLE proj_DATE_DMN Cost: 29 Bytes: 87,776 Cardinality: 3,376
    15 FILTER
    14 SORT JOIN Cost: 7,238,488 Bytes: 25,269,911,792 Cardinality: 77,042,414
    13 VIEW Cost: 1,835,219 Bytes: 25,269,911,792 Cardinality: 77,042,414
    12 SORT GROUP BY Cost: 1,835,219 Bytes: 3,698,035,872 Cardinality: 77,042,414
    11 HASH JOIN Cost: 914,089 Bytes: 3,698,035,872 Cardinality: 77,042,414
    9 TABLE ACCESS FULL TABLE proj_DATE_DMN Cost: 29 Bytes: 94,960 Cardinality: 4,748
    10 TABLE ACCESS FULL TABLE proj_PSTNG_FCT Cost: 913,693 Bytes: 2,157,187,592 Cardinality: 77,042,414
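    Since the plan above shows only optimizer estimates, one standard first step (a hedged sketch using documented DBMS_XPLAN functionality) is to capture actual row counts and timings for the slow run and compare them with the estimates:

        -- Enable runtime row-source statistics for the session.
        ALTER SESSION SET statistics_level = ALL;
        -- Re-run the slow query here, then pull the plan with actuals:
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));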

  • Cancelling disc burn taking more than half-an-hour

    I had just burned a cd before trying to burn another one but that wouldn't finish. The one before it took less than two minutes while this next one was taking more than ten. I decided to cancel it but now it is still on the cancelling sequence and I cannot get the cd out. Any advice?

    Given that it's so new, and what you've already tried, it sounds like an issue that requires service.
    You can arrange online service here.
    Service request.

• SSRS report taking more time, but fast in SSMS

    The SSRS report takes more time, but the same query is fast in SSMS. We are binding strings more than 43,000 characters long each, and the total number of records is 4,000.
    Please do the needful. It is not possible to create an index because the key crosses 900 bytes.

    Hi DBA5,
    As per my understanding, when previewing a report there are three components that affect report performance: data retrieval time, processing time, and rendering time.
    Data retrieval is the time taken to retrieve the data from the server using queries or stored procedures; processing is the time taken to process the data within the report using the operations in the report layout; and rendering is the time taken to display the report's information in a format such as Excel, PDF, or HTML. This is why the report takes more time than the query does in SSMS.
    To improve performance, we need to improve these three aspects. For details, please see the great blog: http://blogs.msdn.com/b/mariae/archive/2009/04/16/quick-tips-for-monitoring-and-improving-performance-in-reporting-services.aspx
    Hope this helps.
    Regards,
    Heidi Duan
    TechNet Community Support
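    To see which of the three phases dominates, a hedged sketch querying the Report Server execution log (the ExecutionLog3 view exists on SSRS 2008 R2 and later; adjust the view name for older versions):

        -- Per-execution breakdown into retrieval, processing, and rendering (ms).
        SELECT ItemPath,
               TimeDataRetrieval,
               TimeProcessing,
               TimeRendering,
               TimeStart
        FROM   ReportServer.dbo.ExecutionLog3
        ORDER BY TimeStart DESC;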

• How often do we need to run Gather Schema Statistics etc.?

    Hi,
    I am on 11.5.10.2, RDBMS 9.2.0.6.
    How often do we need to run the following requests in Production?
    1. Gather Schema Statistics
    2. Gather Column Statistics
    3. Gather Table Statistics
    4. Gather All Column Statistics
    Thanks

    Hi,
    We have discussed this issue here before. Please check the threads below, which could be helpful:
    How often we need to run gather schema statistics
    Re: Gather schema stats run
    How we can collect custom schema information wiht gather statistics
    gather schema stats for EBS 11.5.10
    gather schema stats conc. program taking too long time
    Re: gather schema stats conc. program taking too long time
    How it runs
    Gather Schema Statistics
    http://oracle-apps-dba.blogspot.com/2007/07/gather-statistics-for-oracle.html
    gather statistict collect which informations
    Gather Schema Statistics...
    Regards,
    Helios

  • Can we track the progress of Gather Schema Statistics?

    Is there a way that we can track the progress of gather schema statistics other than querying dba_tables?
    I am trying to find a better option. Please suggest.
    Thanks,
    Suneel

    SRV008 wrote:
    Is there a way that we can track the progress of gather schema statistics other than querying dba_tables?
    I am trying to find a better option. Please suggest.
    Thanks,
    Suneel
    No, it's either the request log file or dba_tables (last_analyzed column).
    Thanks,
    Hussein
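    For reference, a minimal sketch of the dba_tables check mentioned above, assuming you want to watch one schema while the request runs (the schema name is illustrative):

        -- Tables the run has not yet re-analyzed sort to the top.
        SELECT table_name, last_analyzed
        FROM   dba_tables
        WHERE  owner = 'AR'   -- hypothetical schema being gathered
        ORDER BY last_analyzed NULLS FIRST, table_name;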

  • DIAdem report with more than 3 signals

    Hi everybody!
    How can I create a report with more than 3 signals as input? The DIAdem report accepts only 3 signals as input.
    In my opinion this is a very easy question, but I could not solve it within a few hours!
    Thanks in advance,
        Manuel123

    Thank you for your answer!
    But I still have some problems.
    Following your help, I connected eight of the signal boxes to the connector. After that it was possible to connect up to eight signals to this sub-VI. Great!
    But now I want to show all these signals in the DIAdem report, so I created a new report layout (see attachment) and wired its path to the "report layout file" input. Unfortunately DIAdem still uses a layout with just two signals, and even if I open my own layout in DIAdem there are only two data lines listed in the "Data Portal: Internal Data" panel.
    Where is my mistake?
    Regards,
       Manuel123
    Message Edited by Manuel123 on 08-29-2007 10:00 AM
    Message Edited by Manuel123 on 08-29-2007 10:01 AM
    Attachments:
    man.TDR.zip ‏11 KB

  • Execution frequency of Gather Schema Statistics in Applications 11i

    Product: AOL
    Date written: 2003-12-02
    Execution frequency of Gather Schema Statistics in Applications 11i
    ================================================
    PURPOSE
    Execution frequency of Gather Schema Statistics in Applications 11i
    Explanation
    There is no fixed schedule for running Gather Schema Statistics. Some systems may need to run it weekly, while others can run it monthly. The frequency depends on the volume and nature of the data and on how often it changes.
    To determine the most effective schedule, run it on different schedules and monitor the results.
    In general, run it:
    1) After a large amount of data has been loaded or changed
    2) After a data import
    3) When performance degradation occurs
    In 11i the ANALYZE command and the DBMS_STATS package are not supported, so FND_STATS must be used.
    Gather Schema Statistics uses FND_STATS.
    Example
    Reference Documents
    Note 168136.1 - How Often Should Gather Schema Statistics Program be Run?

    John,
    You can do these things:
    1. Gather schema statistics regularly: a full gather once a week
    2. Gather schema statistics daily with at least a 10% sample
    3. Rebuild fragmented indexes regularly (every 15 days)
    4. Coalesce the tablespaces once a month
    5. Purge unwanted data once a week
    6. Pin the DB objects into the SGA with the DBMS_SHARED_POOL package (see the sketch after this reply)
    7. Find the objects that have become invalid and then revalidate them
    8. Purge workflow runtime data
    There are still some more things that, as system administrator, you should keep a watch on, but if you do the above, your job is well done.
    If you need any help, post here.
    regards
    sdsreenivas
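    As referenced in item 6 of the list above, a hedged sketch of pinning a package in the shared pool. DBMS_SHARED_POOL.KEEP is the documented call, but the object name here is only an example, and on some versions the package must first be installed with dbmspool.sql:

        BEGIN
          -- 'P' pins a package/procedure/function; the name is illustrative.
          DBMS_SHARED_POOL.KEEP('APPS.FND_STATS', 'P');
        END;
        /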

  • Reporting from more than one infocube and also from multiple ODS

    Hi all,
    Someone please help me with these issues.
    How can we report from more than one cube (when data from all the cubes is required, but only a few fields from each cube)? And how do we do the same with more than one ODS?
    Thanks in advance,
    Sekhar

    Hi Sekhar,
    You can just create a MultiProvider. Before creating the MultiProvider, check the common characteristics available in all the ODS objects and define the mapping based on them.
    Once you have completed the creation of the MultiProvider, execute it in the transaction LISTCUBE and then create the queries as the user requires.
    Regards,
    R M K
    Assigning points is the only way of saying thanks in SDN
