Performance issue with Oracle data source

Hi all,
I have a rather strange problem that I'm stuck on and need some assistance with.
I have a rules file that pulls data in via a SQL data source pointing at an Oracle server. If I cut and paste the three sections of "select", "from" and "where" into SQL Developer and run the query, it takes less than 1 second to complete. When I run "load data" with this rules file, or even use "Retrieve" in the rules file editor, it takes up to an hour to complete/retrieve the data.
The table in question has millions of rows, and I'm using one of the indexed fields to retrieve the data. It's as if Essbase/the rules file is ignoring the index, or I have a config issue with the ODBC settings on the server that is causing the problem.
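To rule the index in or out, one thing I plan to try (assuming I can get DBA access to the v$ views on the Oracle side) is catching the statement Essbase actually submits while the load runs and looking at its plan. A rough diagnostic sketch, using the LogonID from the ODBC entry below as the parsing schema:

-- run while the load rule is executing; 'XXX' is the schema from the ODBC entry
SELECT sql_id, executions, ROUND(elapsed_time / 1e6) AS elapsed_secs, sql_text
  FROM v$sql
 WHERE parsing_schema_name = 'XXX'
 ORDER BY last_active_time DESC;

-- then pull the plan Oracle actually used for the suspect statement
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('<sql_id from above>', NULL, 'TYPICAL'));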
The ODBC.INI file entry for the Oracle server is as follows (sensitive info changed to xxx or 999):
[XXX]
Driver=/opt/data01/hyperion/common/ODBC-64/Merant/5.2/lib/ARora22.so
Description=DataDirect 5.2 Oracle Wire Protocol
AlternateServers=
ApplicationUsingThreads=1
ArraySize=60000
CachedCursorLimit=32
CachedDescLimit=0
CatalogIncludesSynonyms=1
CatalogOptions=0
ConnectionRetryCount=0
ConnectionRetryDelay=3
DefaultLongDataBuffLen=1024
DescribeAtPrepare=0
EnableDescribeParam=0
EnableNcharSupport=0
EnableScrollableCursors=1
EnableStaticCursorsForLongData=0
EnableTimestampWithTimeZone=0
HostName=999.999.999.999
LoadBalancing=0
LocalTimeZoneOffset=
LockTimeOut=-1
LogonID=xxx
Password=xxx
PortNumber=1521
ProcedureRetResults=0
ReportCodePageConversionErrors=0
ServiceType=0
ServiceName=xxx
SID=
TimeEscapeMapping=0
UseCurrentSchema=1
Can anyone please advise on this lack of performance?
Thanks in advance
Bagpuss

One other thing that I've seen is that if your Oracle data source and Essbase server are in different geographic locations, you can get some delay when retrieving data over the WAN. I guess there is some handshaking going on when passing the data from Oracle to Essbase (either by record or by groups of records) that is slowed WAY down over the WAN.
Our solution was to take the query out of the load rule, run it via SQL*Plus on a command line at the location where the Oracle database is, then ftp the resulting file to where the Essbase server is.
With upwards of 6 million records being retrieved, the load rule took around 4 hours, but running the query via the command line took 10 minutes, and the ftp took less than 5.
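The extract itself was nothing fancy. A minimal SQL*Plus sketch of the idea (table and column names are placeholders, not the real ones):

SET PAGESIZE 0 LINESIZE 2000 HEADING OFF FEEDBACK OFF TRIMSPOOL ON
SPOOL /tmp/essbase_load.txt
SELECT dim1 || ',' || dim2 || ',' || amount  -- delimited the way the load rule expects
  FROM my_fact_table
 WHERE fiscal_period = '&period';
SPOOL OFF
EXIT

Then ftp the spool file to the Essbase box and point the load rule at the flat file instead of the SQL source.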

Similar Messages

  • Performance issues with Oracle EE 9.2.0.4 and RedHat 2.1

    Hello,
    I am having some serious performance issues with Oracle Enterprise Edition 9.2.0.4 and RedHat Linux 2.1. The processor goes berserk at 100% for long periods of time (some 5 min.), and all the RAM gets used.
    Some environment characteristics:
    Machine: Intel Pentium IV 2.0GHz with 1GB of RAM.
    OS: RedHat Linux 2.1 Enterprise.
    Oracle: Oracle Enterprise Edition 9.2.0.4
    Application: We have a small web-application with 10 users (for now) and very basic queries (all in stored procedures). Also we use the latest version of ODP.NET with default connection settings (some low pooling, etc).
    Does anyone know what could be going on?
    Is anybody else having this similar behavior?
    We changed from SQL Server, so we are not the world experts on the matter. But we want a reliable system nonetheless.
    Please help us out; give us some tips, tricks, or guides…
    Thanks to all,
    Frank

    Thank you very much and sorry I couldn’t write sooner. It seems that the administrator doesn’t see the kswap going on so much, so I don’t really know what is going on.
    We are looking at some queries and some indexing, but this is nuts; if I had some poor queries, which we don't really, the server would show a peak, right?
    But it goes crazy, with two Oracle processes taking all the resources. There seems to be little swapping going on.
    So now what? They are already talking about MS SQL; please help me out here, this is crazy!!!
    We have maybe the most powerful combination here. What is Oracle doing?
    We even killed the IIS worker process and had no one doing anything with the database, and still those two processes keep going.
    Can someone help me?
    Thanks,
    Frank

  • CrystalReports error in XI 3.1 with Oracle Data Source

    Having trouble running Crystal Reports with an Oracle data source off XI 3.1 SP3. Reports hang through CMC or InfoView. We have no trouble on a (presumably) identical environment with the same reports. Basically this happened after a migration.
    Reports can be run in the designer on the server box (suggesting the connection is good); however, once uploaded to the CMS, they hang with an error. This is not about a specific report; it affects only reports with an Oracle data source (the Oracle client is already installed and the connection established).
    Any comments appreciated.

    Thank you for your reply.
    Seems like a re-boot resolved the problem.

  • Issue with 2LIS_02_SCL Data source

    Hi Everybody,
    I am facing the below two issues with the 2LIS_02_SCL data source:
    1) It fetches only records with an ETENR (Delivery Schedule Line Counter) value of '1', ignoring others, e.g. 2, 3 and 4. Hence the data does not reconcile with the ECC system.
    2) The standard field GLMNG is not getting any data, although data exists at table level (EKET). So I wrote code and the data is coming through now. But the problem is that it does not seem to consider the ROCANCEL indicator: all the other key figure values come in with a negative sign when the ROCANCEL value is 'X' or 'R', but this field gets only positive values irrespective of the ROCANCEL indicator, hence showing incorrect values compared to ECC.
    Can anybody help me on this,
    Regards,
    Gopinath

    Hi Gopinath:
       Have you already applied any SAP Note to solve this problem?
    Please check if the SAP Note below is applicable to your system.
    668177 - "LIS BW: wrong quantity for documents with invoice plan"
    Regards,
    Francisco Milán.

  • 10g Reports issue with XML Data Source

    Hi,
    Has anybody ever encountered an issue with an Oracle 10g report using XML as the data source? What happens is that some of the values in the XML are printed in the wrong column.
    One of the elements in our XML file is a complex type with 10 elements under it. The first 5 are picked up properly, but the last 5 are not. Elements #6 to #9 have a minimum occurrence of 0. What happens is that when element #6 is absent but #7 is present, the value for element #7 is passed on to element #6.
    The XSD and XSL files are both valid, since the reports were working when we were still using 9i. There is no hidden logic in the report which might cause this issue, i.e., the report just picks up the values from the XML and prints them in the appropriate columns.
    Any help will be greatly appreciated.

    XSD used
    <?xml version="1.0" encoding="UTF-8"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
            <!-- trade instructions detail & trailer -->
            <xs:element name="TradeDetail">
                    <xs:complexType>
                            <xs:sequence>
                                    <xs:element ref="TradeType"/>
                                    <xs:element ref="TradeID"/>
                                    <xs:element ref="TradeDate"/>
                                    <xs:element ref="FundID"/>
                                    <xs:element ref="FundName"/>
                                    <xs:element ref="DollarValue" minOccurs="0"/>
                                    <xs:element ref="UnitValue" minOccurs="0"/>
                                    <xs:element ref="PercentageValue" minOccurs="0"/>
                                    <xs:element ref="OriginalTradeID" minOccurs="0"/>
                                    <xs:element ref="CancellationFlag"/>
                            </xs:sequence>
                    </xs:complexType>
            </xs:element>
            <xs:element name="Instruction">
                    <xs:complexType>
                            <xs:sequence minOccurs="0">
                                    <xs:element ref="TradeDetail" maxOccurs="unbounded"/>
                            </xs:sequence>
                    </xs:complexType>
            </xs:element>
            <!-- overall trade instruction message -->
            <xs:element name="InterchangeHeader">
                    <xs:complexType>
                            <xs:sequence>
                                    <xs:element ref="Instruction"/>
                            </xs:sequence>
                    </xs:complexType>
            </xs:element>
            <!-- definition of simple elements -->
            <xs:element name="FundID" type="xs:string"/>
            <xs:element name="TradeType" type="xs:string"/>
            <xs:element name="TradeID" type="xs:string"/>
            <xs:element name="TradeDate" type="xs:string"/>
            <xs:element name="FundName" type="xs:string"/>
            <xs:element name="DollarValue" type="xs:decimal"/>
            <xs:element name="UnitValue" type="xs:decimal"/>
            <xs:element name="PercentageValue" type="xs:decimal"/>
            <xs:element name="OriginalTradeID" type="xs:string"/>
            <xs:element name="CancellationFlag" type="xs:string"/>
    </xs:schema>
    XML used
    <?xml version = '1.0' encoding = 'UTF-8'?>
    <InterchangeHeader xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="TradeInstruction.xsd">
       <Instruction>
          <TradeDetail>
             <TradeType>Purchase</TradeType>
             <TradeID>M000038290</TradeID>
             <TradeDate>20061201</TradeDate>
             <FundID>ARO0011AU</FundID>
             <FundName>ABN Fund</FundName>
             <DollarValue>2111.53</DollarValue>
             <CancellationFlag>N</CancellationFlag>
          </TradeDetail>
          <TradeDetail>
             <TradeType>Redemption</TradeType>
             <TradeID>M000038292</TradeID>
             <TradeDate>20061201</TradeDate>
             <FundID>ARO0011AU</FundID>
             <FundName>AMRO Equity Fund</FundName>
             <UnitValue>104881.270200</UnitValue>
             <CancellationFlag>N</CancellationFlag>
          </TradeDetail>
          <TradeDetail>
             <TradeType>ISPurchase</TradeType>
             <TradeID>M000038312</TradeID>
             <TradeDate>20061201</TradeDate>
             <FundID>MLC0011AU</FundID>
             <FundName>Cash Fund</FundName>
             <OriginalTradeID>M000038311</OriginalTradeID>
             <CancellationFlag>N</CancellationFlag>
          </TradeDetail>
       </Instruction>
    </InterchangeHeader>
    XSLT used
    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
            <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
            <xsl:template match="/">
                    <InterchangeHeader>
                            <xsl:for-each select="InterchangeHeader/Instruction/TradeDetail">
                            <xsl:sort select="FundName"/>
                            <xsl:sort select="TradeDate"/>
                                    <TradeDetail>
                                            <TradeType><xsl:value-of select="TradeType"/></TradeType>
                                            <TradeID><xsl:value-of select="TradeID"/></TradeID>
                                            <TradeDate><xsl:value-of select="TradeDate"/></TradeDate>
                                            <FundID><xsl:value-of select="FundID"/></FundID>
                                            <FundName><xsl:value-of select="FundName"/></FundName>
                                            <DollarValue><xsl:value-of select="DollarValue"/></DollarValue>
                                            <UnitValue><xsl:value-of select="UnitValue"/></UnitValue>
                                            <PercentageValue><xsl:value-of select="PercentageValue"/></PercentageValue>
                                            <OriginalTradeID><xsl:value-of select="OriginalTradeID"/></OriginalTradeID>
                                            <CancellationFlag><xsl:value-of select="CancellationFlag"/></CancellationFlag>
                                    </TradeDetail>
                            </xsl:for-each>
                    </InterchangeHeader>
            </xsl:template>
    </xsl:stylesheet>

  • Powerpivot Data Refresh Not working with Oracle Data Source in sharePoint 2013

    I am using SQL Server 2012 PowerPivot for Excel 2010. Getting the following error in SharePoint 2013 environment, when using Oracle data source within a workbook -
    EXCEPTION: Microsoft.AnalysisServices.SPAddin.DataRefreshException: Engine error during processing of OLE DB or ODBC error: The specified module could not be found..:
    <Site\PPIV workbook> ---> Microsoft.AnalysisServices.SPAddin.DataRefreshException: OLE DB or ODBC error: The specified module could not be found..
       at Microsoft.AnalysisServices.SPAddin.DataRefresh.ASEngineInstance.ProcessDataSource(String server, String databaseName, String datasourceName, SecureStoreCredentialsWrapper runAsCredentials, SecureStoreCredentialsWrapper specificConfigurationCredentials, DataRefreshService dataRefreshService, String fileUrlForTracing)
       --- End of inner exception stack trace ---
       at Microsoft.AnalysisServices.SPAddin.DataRefresh.ASEngineInstance.ProcessDataSource(String server, String databaseName, String datasourceName, SecureStoreCredentialsWrapper runAsCredentials, SecureStoreCredentialsWrapper specificConfigurationCredentials, DataRefreshService dataRefreshService, String fileUrlForTracing)
       at Microsoft.AnalysisServices.SPAddin.DataRefresh.DataRefreshService.ProcessingJob(Object parameters)
    I created a simple Excel 2013 PPIV workbook with an Oracle data source and uploaded that to SharePoint 2013, but no change in the results - still getting the above error.
    What is this error? We have installed the Oracle client (64-bit, since we use 64-bit Excel and SharePoint is also 64-bit) on the SSAS PPIV server and the SharePoint content DB server. Do we need to install it anywhere else?
    Thanks,
    Sonal

    Hi Sonal,
    To use PowerPivot for SharePoint on SharePoint 2013, it is required to install PowerPivot for SharePoint with the Slipstream version of SQL Server 2012 SP1. If you install SQL Server 2012 and then use the upgrade version of SQL Server 2012 SP1 to upgrade,
    the environment will not support SharePoint 2013.
    I would suggest you refer to the following articles:
    Install SQL Server BI Features with SharePoint 2013 (SQL Server 2012 SP1):
    http://technet.microsoft.com/en-us/library/jj218795.aspx
    Upgrade SQL Server BI Features to SQL Server 2012 SP1:
    http://technet.microsoft.com/en-us/library/jj870987.aspx
    Regards,
    Elvis Long
    TechNet Community Support

  • Performance issues with Planning data load & Agg in 11.1.2.3.500

    We recently upgraded from 11.1.1.3 to 11.1.2.3. Post-upgrade we face performance issues with one of our Planning jobs (e.g. Job E): it takes 3x the time to complete in the new environment (11.1.2.3) compared to the old one (11.1.1.3). This job loads the actual data and then does the aggregation. The pattern we noticed is that if we run a restructure on the application and execute this job immediately afterwards, it completes in the same time as on 11.1.1.3. However, in current production (11.1.1.3) the jobs run in the sequence Job A -> Job B -> Job C -> Job D -> Job E and complete on time, whereas if we run the same sequence in 11.1.2.3, Job E takes 3x the time. We don't have a window to restructure the application before running Job E every time in Prod. The specs of the new environment are much higher than the old one.
    We have Essbase clustering (MS active/passive) in the new environment and the files are stored on a SAN drive. Could this be the cause? Has anyone faced performance issues in a clustered environment?

    Do you have exactly the same Essbase config settings and calculations performing the AGG? Remember, something very small like UPDATECALC ON/OFF can make a BIG difference in timing.
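    For example, the first line of the calc script can make that kind of difference on a load-then-aggregate job (a sketch only; check what your 11.1.1.3 environment actually used):

    SET UPDATECALC OFF;   /* turns off Intelligent Calculation, so all blocks are recalculated */
    CALC ALL;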

  • Data load issue with export data source - BW 3.5

    Hi,
    We are facing issues loading data via an export data source.
    We have created an export data source on the 0PCA_C01 cube. With the help of this export data source, we load data into another custom cube. The scenario works fine on the development server.
    But when we transported the objects to the quality server, data is not getting loaded into the custom target cube.
    It extracts zero records. All transports are OK, and we generated the export data source in quality before the transports. We also regenerated the export data source after transport and activated the InfoSource and update rules via the RS* programs. Every object is active, but data is not getting extracted.
    RSA3 for the 80PCA_C01 data source isn't extracting any records in quality; records are extracted in development. We are on BW 3.5 with patch level 19.
    Please guide us to resolve the issue.
    Thanks,
    Aditya

    Hi
    Make sure that you have the relevant roles & authorizations in Quality/PRS.
    You have to transport the source cube first and then generate the export data source in QAS. Then replicate the data sources for the BW QAS source system and make sure the data source is replicated in QAS. Only then can you transport the update rules for the second cube.
    Hope it helps and is clear.

  • Performance issue with Oracle Text index

    Hi Experts,
    We are on Oracle 11.2.0.3 on Solaris 10. I have implemented Oracle Text in our environment, and I am facing a strange performance issue.
    One SQL statement with a CONTAINS clause is taking forever - more than 20 minutes and still not complete. It has a CONTAINS clause, an EXISTS clause and a NOT EXISTS clause.
    If I remove the EXISTS and NOT EXISTS clauses, it completes fast, but with those two clauses it just takes forever. It is late night, so I am not able to post the table and SQL query details; I will do so tomorrow, but based on this general description, are there any pointers for me to review?
    --SQL query doing fine:
    SELECT
        U.CLNT_OID, U.USR_OID, S.MAILADDR
    FROM
        access_usr U
        INNER JOIN access_sia S
            ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
        WHERE U.CLNT_OID = 'ABCX32S'
        AND CONTAINS(LAST_NAME , 'TO%' ) >0
    --SQL query that hangs forever:
    SELECT
        U.CLNT_OID, U.USR_OID, S.MAILADDR
    FROM
        access_usr U
        INNER JOIN access_sia S
            ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
        WHERE U.CLNT_OID = 'ABCX32S'
        AND CONTAINS(LAST_NAME , 'TO%' ) >0
    and exists (--one clause here with a few table joins)
    and not exists (--one clause here with a few table joins);
    --Another strange thing I found: if instead of 'TO%' in this SQL I use 'ZZ%' or 'L1%', it works fast; but for 'TO%' it goes slow with those two EXISTS/NOT EXISTS clauses!
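    In case it is useful, one quick check I can run tomorrow is counting the hits for the text term on its own, to see how selective 'TO%' really is. A sketch using CTX_QUERY (the index name here is whatever the LAST_NAME text index is called):

    DECLARE
       v_hits NUMBER;
    BEGIN
       -- TRUE = exact count rather than an upper-bound estimate
       v_hits := CTX_QUERY.COUNT_HITS ('ACCESS_USR_IDX4', 'TO%', TRUE);
       DBMS_OUTPUT.put_line ('hits: ' || v_hits);
    END;
    /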
    I will be most thankful for the inputs.
    OrauserN

    Hi Barbara,
    First of all, thanks a lot for reviewing the issue.
    Unluckily, making the change to an empty stoplist did not work out. I am copying here today the entire SQL that has this issue, and I will be most thankful for more insights/pointers on what can be done.
    Here is the entire sql:
    SELECT U.CLNT_OID,
           U.USR_OID,
           S.EMAILADDRESS,
           U.FIRST_NAME,
           U.LAST_NAME,
           S.JOBCODE,
           S.LOCATION,
           S.DEPARTMENT,
           S.ASSOCIATEID,
           S.ENTERPRISECOMPANYCODE,
           S.EMPLOYEEID,
           S.PAYGROUP,
           S.PRODUCTLOCALE
      FROM    ACCESS_USR U
           INNER JOIN
              ACCESS_SIA S
           ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
    WHERE     U.CLNT_OID = 'G39NY3D25942TXDA'
           AND EXISTS
                  (SELECT 1
                     FROM ACCESS_USR_GROUP_XREF UGX
                          INNER JOIN ACCESS_GROUP RELG
                             ON     RELG.CLNT_OID = UGX.CLNT_OID
                                AND RELG.GROUP_OID = UGX.GROUP_OID
                          INNER JOIN ACCESS_GROUP G
                             ON     G.CLNT_OID = RELG.CLNT_OID
                                AND G.GROUP_TYPE_OID = RELG.GROUP_TYPE_OID
                    WHERE     UGX.CLNT_OID = U.CLNT_OID
                          AND UGX.USR_OID = U.USR_OID
                          AND G.GROUP_OID = 920512943
                          AND UGX.INCLUDED = 1)
           AND NOT EXISTS
                      (SELECT 1
                         FROM    ACCESS_USR_GROUP_XREF UGX
                              INNER JOIN
                                 ACCESS_GROUP G
                              ON     G.CLNT_OID = UGX.CLNT_OID
                                 AND G.GROUP_OID = UGX.GROUP_OID
                        WHERE     UGX.CLNT_OID = U.CLNT_OID
                              AND UGX.USR_OID = U.USR_OID
                              AND G.GROUP_OID = 920512943
                              AND UGX.INCLUDED = 1)
           AND CONTAINS (U.LAST_NAME, 'Bon%') > 0;
    Like I said before, if the EXISTS and NOT EXISTS clauses are removed, it works in under a second. But with those EXISTS and NOT EXISTS clauses it takes anywhere from 25 minutes to more than one hour.
    Note also that it was not 'TO%' but 'Bon%' in the CONTAINS clause that is giving the issue - sorry, that was wrong on my part.
    Also, please see below the Oracle Text indexes defined on the table ACCESS_USR:
    --definition of preferences used in the index:
    SET SERVEROUTPUT ON size unlimited
    WHENEVER SQLERROR EXIT SQL.SQLCODE
    DECLARE
       v_err       VARCHAR2 (1000);
       v_sqlcode   NUMBER;
       v_count     NUMBER;
    BEGIN
       ctxsys.ctx_ddl.create_preference ('cust_lexer', 'BASIC_LEXER');
       ctxsys.ctx_ddl.set_attribute ('cust_lexer', 'base_letter', 'YES'); -- removes diacritics
    EXCEPTION
       WHEN OTHERS
       THEN
          v_err := SQLERRM;
          v_sqlcode := SQLCODE;
          v_count := INSTR (v_err, 'DRG-10701');
          IF v_count > 0
          THEN
             DBMS_OUTPUT.put_line (
                'The required preference named CUST_LEXER with BASIC LEXER is already set up');
          ELSE
             RAISE;
          END IF;
    END;
    /
    DECLARE
       v_err       VARCHAR2 (1000);
       v_sqlcode   NUMBER;
       v_count     NUMBER;
    BEGIN
       ctxsys.ctx_ddl.create_preference ('cust_wl', 'BASIC_WORDLIST');
       ctxsys.ctx_ddl.set_attribute ('cust_wl', 'SUBSTRING_INDEX', 'true'); -- to improve performance
    EXCEPTION
       WHEN OTHERS
       THEN
          v_err := SQLERRM;
          v_sqlcode := SQLCODE;
          v_count := INSTR (v_err, 'DRG-10701');
          IF v_count > 0
          THEN
             DBMS_OUTPUT.put_line (
                'The required preference named CUST_WL with BASIC WORDLIST is already set up');
          ELSE
             RAISE;
          END IF;
    END;
    /
    --now below is the creation of the indexes:
    CREATE INDEX ACCESS_USR_IDX3 ON ACCESS_USR
    (FIRST_NAME)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
    CREATE INDEX ACCESS_USR_IDX4 ON ACCESS_USR
    (LAST_NAME)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
    The strange thing is that, like I said, if I remove the EXISTS clause the query returns very fast. If I keep only the NOT EXISTS clause and remove the EXISTS clause, it returns in less than one second; in another test with only the NOT EXISTS clause it returned in less than 4 seconds. But with both clauses it runs forever!
    When I tried to get dbms_xplan.display_cursor to show the query plan (for the case with both the EXISTS and NOT EXISTS clauses in the query), it said the previous statement's SQL ID was 0 or something like that, so I was not able to see the plan. I will keep trying to get this plan (it takes 25 minutes to one hour each time, but I will get this info soon). Again, any pointers are most helpful.
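    For reference, this is how I am trying to pull the plan (a sketch; run from a second session while the slow query is executing, since display_cursor without arguments only shows the last statement of one's own session):

    -- find the sql_id of the running statement
    SELECT sql_id, elapsed_time, sql_text
      FROM v$sql
     WHERE sql_text LIKE '%CONTAINS (U.LAST_NAME%';
    -- then display its plan explicitly
    SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR ('<sql_id from above>', NULL, 'TYPICAL'));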
    Regards
    OrauserN

  • Issue with Treasury Data Source - 0CFM_DELTA_POSITIONS

    Hi all,
    I'm trying to use data sources 0CFM_INIT_POSITIONS & 0CFM_DELTA_POSITIONS to load treasury data.
    Through the INIT data source the data has been initialized and the history data loaded from R/3 to BW. I'm able to see the entry in the timestamp table for the INIT data source. The initialization has also been done for DELTA_POSITIONS, but I couldn't see any entry in the timestamp table, although the queue has been created in RSA7.
    After that we made a few transactions in R/3 and tried to pick those up through DELTA_POSITIONS. I checked in RSA3: the system fetches more than 5000 records (which were not made after the initialization), and beyond that it gives a dump saying a termination point has been set in the program.
    Can someone please explain the logic on which these data sources work? Does any other setting need to be done?
    With regards,
    Muthu.


  • Master data load issue with flexible data source

    Hi All,
    I have the data source 1CL_OLIS001 for loading configuration data. Changed records are not updating into this data source regularly.
    I have a delta load on this every day. It loads correct data for a month and then suddenly starts loading 0 records every day without failing. If I reset the delta, it works fine for a month or two and then corrupts again.
    Has anyone had this kind of issue before? Please suggest how to resolve it permanently.
    Thanks and Regards,
    Pooja

    Hi Pooja,
    1) It might be a database tablespace issue in the background. Please check it out with BASIS.
    Regards,
    Marasa.

  • Issue with 0SRM_TD_IV data source delta

    Hi experts,
    I have an issue where invoice numbers are not updated correctly when one of the invoice items is deleted in SRM.
    For example:
    In SRM:
    Invoice number: 50001168
    Invoice item: 1, quantity: 30 - deleted (deletion indicator - X)
    Invoice item: 2, quantity: 4
    Invoice item: 3, quantity: 7
    In the standard extractor (0SRM_TD_IV) in RSA3: exactly the same as the SRM data.
    In the standard DSO (0SRIV_D3): only the invoice number and invoice item 1 are present - the item that is supposed to be deleted in BI.
    Invoice number: 50001168
    Invoice item: 1, quantity: 30
    When I did a full repair for this invoice number from SRM, I got all 3 invoice items. It did not overwrite invoice item 1, which is deleted in SRM. The way it works in SRM is that it doesn't delete the invoice item but flags it with the deletion indicator 'X'.
    Now everything looks good up to the extractor in RSA3. What should I do now to fix this issue? I see that we don't have transformation rules in BI 7.0; instead we are using transfer rules and update rules. Please let me know.
    Thanks,
    Chandra

    Hi Chandu,
    The way I understand your issue is that
    when you do a full upload, you get the information correctly in RSA3 and the DSO;
    when you run a delta, you are not getting all the line items.
    1) Please check whether there are any relevant SAP notes for this extractor to fix this delta issue.
    2) Please check whether there is any code in the update rule or transfer rule that deletes the missing line item
    Thanks,
    Krishnan

  • Crystal Reports XI R2 SP 6 - Issue with setting Data Source Location

    Hello,
    After some initial difficulty installing CR XI R2 with SP 6 on a Windows XP machine (see thread 'Error on installation of CR XI R2 SP6'), the Crystal Reports environment seemed fine. However, when I tried resetting the data source location on an ODBC data source between a development and a production server, I got the message 'Some Tables could not be replaced as no match was found....' The tables in the two databases are identical, so that isn't the issue.
    Here is some additional information
    The issue seems related only to DSNs that point to a Progress database. I am able to reset data source locations for DSNs that use the SQL Server driver and also for those that use the CR ODBC XML Driver 5.0. I am not able to reset the data source locations on DSNs that use a Progress OpenEdge 10.1 DB driver.
    I can create a new report using the DSN for the Progress driver and add tables, but the table names come up as aliases - i.e. if I add a table called PM_Plant, the table added to the report is PM_Plant1.
    I also found I can go into existing reports, rename the tables in the Database Expert to be an alias (appending 1 to the end of the table name), and then I am able to repoint them using the data source location screen.
    So it looks like there is a potential workaround, but I didn't run across any information saying we should need to do that.
    Any recommendations on how to fix the issue?
    Thanks,

    Hi Don,
    The reports were created with CR XI R1 on my PC initially, and the Progress drivers have not changed since. The reports were deployed to a server, and I pulled them back to my PC to test out any changes after the CR XI R2 SP 6 upgrade (so there is really only one machine involved, the one that had the upgrade).
    I did look at the settings for verifying and tried playing around with those and also with verifying the database but that didn't make any difference.
    I wasn't quite sure which registry keys to look at or what the values should be so wasn't able to pursue that option. 
    All the tables in the progress database use underscores as part of the table name (i.e. PM_Plant, PM_Company), do you think the upgrade to SP6 means that the underscore is now a reserved character and that is why the tables are getting aliased?  If so, do you know how to change the alias settings or the list of characters that are reserved?
    Just an FYI, I had to downgrade back to CR XI R1 at this point to get work done. If time allows, I'll retry the upgrade in about 5-7 weeks. I discussed the issue with a system administrator, and we will try removing the Progress drivers and DSNs prior to trying the upgrade again, to see if that makes a difference. I'll also make sure to keep track of all Report Options and Options and also registry keys to see what changes.
    Thanks

  • Performance issue with extreme data distribution using histogram

    Hi
    We have a performance stability issue which we later found out is caused by the bind variable and the histogram on a particular column when it is used as part of an equality predicate. Assume the column name is parent_id0.
    There is also an index on parent_id0.
    Our temporary workaround is to install the good plan when it is started.
    This is on database 10.2.0.3. I have a table with 2,570,149 rows; there is one common value (value 0) that represents about 99.91% of the total rows.
    When i do
    select parent_id0, count(*)
    from table1
    group by parent_id0
    order by parent_id0;
    I get 187 rows, and I would expect 187 buckets for a better representation. The first row has a count of nearly 99.91% of the total; the remaining rows have counts like 1 or 2, or less than 200.
    With auto gather, Oracle came up with 5 buckets. When I checked the sample size, I saw Oracle used only 2.215% of the total rows at the time.
    Column name    Endpoint num    Endpoint value
    PARENT_ID0     5,579           0
    PARENT_ID0     5,582           153,486,811
    PARENT_ID0     5,583           156,240,279
    PARENT_ID0     5,584           163,081,173
    PARENT_ID0     5,585           168,255,656
    Is the problem due to the wrong sample size, so that the histogram is miscalculated?
    When I trace the SQL with event 10053, I see something like this; it seems some values are not captured in the histogram:
    Using prorated density: 3.9124e-07 of col #2 as selectivity of out-of-range value pred
    What do I need to do to get a correct and stable execution plan?
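    One thing I am considering is gathering stats on that column with a full sample and the maximum bucket count, along these lines (a sketch; TABLE1 stands in for the real table name):

    BEGIN
       DBMS_STATS.GATHER_TABLE_STATS (
          ownname          => USER,
          tabname          => 'TABLE1',
          estimate_percent => 100,                               -- full sample, so all 187 values are seen
          method_opt       => 'FOR COLUMNS PARENT_ID0 SIZE 254', -- 254 is the 10g bucket maximum
          cascade          => TRUE);
    END;
    /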
    Thank you

    Hi, it's an OLTP environment.
    The problem is that this SQL joins 4 tables: table1 (2.5 million rows), table2 (4 million), table3 (4.6 million) and table4 (20 million).
    By rights, the table with the highest filter ratio is table1. However, from the plan, Oracle is using table3 as the driving table. The moment I take parent_id0 out of the predicate, Oracle chooses the right driving table (table1).
    Here is the sql structure
    select ...
    from table1, table2, table3, table4
    where table1.id = :1 and table1.parent_id0 = :2
    and ...
    We have an index on the id column too.
    The application will never pass in the value 0 for parent_id0. Therefore, that query will always be hitting the remaining 0.09 percent of the rows.
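    As a stopgap, a LEADING hint would at least pin the driving table while the statistics question is sorted out (a sketch using the same placeholder names):

    select /*+ LEADING(table1) */ ...
    from table1, table2, table3, table4
    where table1.id = :1 and table1.parent_id0 = :2
    and ...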
    p/s: i'm sorry that i'm not able to paste the exact sql text here

  • View objects performance issue with oracle seeded tables

    When I write a view object on Oracle seeded tables like MTL_PARAMETERS, it takes a long time to display in the OAF page. I am trying to display all the view object columns in the detail disclosure of an advanced table. My application takes more than two minutes to display the view columns of a query that returns just 200 rows. Please help me improve performance when my query uses seeded tables.
    This issue happens only with R12 view objects and advanced tables.

    Hi All,
    Here is the architecture of my application:
    The Java application creates XML from the screen values and then inserts that XML into a framework table (in a separate DB schema). Java then calls a stored procedure in the same framework DB, and in the SP we have the following steps.
    1. It fetches the XML from the XMLType table and inserts it into a screen-specific XMLType table in the framework DB schema. This table has a trigger which parses the XML and then inserts the XML values into GTTs (global temporary tables) created in separate product schemas.
    2. It calls the product SP, which contains the business logic. The product SP does the execution and then inserts the response into a response GTT.
    3. The response XML is created using an XML generation function and the response GTT.
    I hope you will understand my architecture this time; now let me know whether GTTs are good in this scenario or not. Also please note that I need the data in the GTTs only during execution and not after that; I don't want to do an explicit delete, which I would have to do if I were using normal tables.
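    On the delete point: if the GTTs are created with ON COMMIT DELETE ROWS, the rows disappear at commit without any explicit delete. A minimal sketch with placeholder names:

    CREATE GLOBAL TEMPORARY TABLE response_gtt (
       request_id    NUMBER,
       response_data CLOB
    ) ON COMMIT DELETE ROWS;   -- rows are private to the session and vanish at commit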
    Regards,
    Vikas Kumar
