Performance problem: 1,000 queries over a 1,000,000-row table

Hi everybody!
I have a difficult performance problem: I use JDBC over an Oracle database. I need to build a map using data from a table with around 1,000,000 rows. My query is very simple (see the code) and takes an average of 900 milliseconds, but I must perform around 1,000 queries with different parameters. The end result is that the user must wait several minutes (plus the time needed to draw the map and send it to the client).
The code, very simplified, is the following:
String sSQLCreateView =
    "CREATE VIEW " + sViewName + " AS " +
    "SELECT RIGHT_ASCENSION, DECLINATION " +
    "FROM T_EXO_TARGETS " +
    "WHERE (RIGHT_ASCENSION BETWEEN " + dRaMin + " AND " + dRaMax + ") " +
    "AND (DECLINATION BETWEEN " + dDecMin + " AND " + dDecMax + ")";
String sSQLSentence =
    "SELECT COUNT(*) FROM " + sViewName +
    " WHERE (RIGHT_ASCENSION BETWEEN ? AND ?) " +
    "AND (DECLINATION BETWEEN ? AND ?)";
PreparedStatement pstmt = in_oDbConnection.prepareStatement(sSQLSentence);
for (int i = 0; i < 1000; i++) {
    // a, b, c, d are the bounds of the i-th map cell (different on every iteration)
    pstmt.setDouble(1, a);
    pstmt.setDouble(2, b);
    pstmt.setDouble(3, c);
    pstmt.setDouble(4, d);
    ResultSet rset = pstmt.executeQuery();
    if (rset.next()) {
        X = rset.getInt(1); // COUNT(*) for this cell
    }
    rset.close();
}
I have already created indexes on the RIGHT_ASCENSION and DECLINATION columns (trying different combinations).
I have also tried multiple threads, with very bad results.
Does anybody have a suggestion?
Thank you very much!

How many total rows are there likely to be in the View you create?
Perhaps just do a select instead of a view, and loop through the result set totalling the ranges in Java instead of trying to have 1,000 queries do the job. Something like:
int     iMaxRanges = 1000;
int     iCount[] = new int[iMaxRanges];
class Range implements Comparable {
     float fAMIN;
     float fAMAX;
     float fDMIN;
     float fDMAX;
     float fDelta;
     public Range(float fASC_MIN, float fASC_MAX, float fDEC_MIN, float fDEC_MAX) {
          fAMIN = fASC_MIN;
          fAMAX = fASC_MAX;
          fDMIN = fDEC_MIN;
          fDMAX = fDEC_MAX;
     }
     public int compareTo(Object range) {
          Range comp = (Range) range;
          if (fAMIN < comp.fAMIN)
               return -1;
          if (fAMAX > comp.fAMAX)
               return 1;
          if (fDMIN < comp.fDMIN)
               return -1;
          if (fDMAX > comp.fDMAX)
               return 1;
          return 0;
     }
}
List listRanges = new ArrayList(iMaxRanges);
listRanges.add(new Range(1.05f, 1.10f, 120.5f, 121.5f));
//...etc.
String sSQL =
"SELECT RIGHT_ASCENSION, DECLINATION FROM T_EXO_TARGETS " +
"WHERE (RIGHT_ASCENSION BETWEEN " + dRaMin + " AND " + dRaMax + ") " +
"AND (DECLINATION BETWEEN " + dDecMin + " AND " + dDecMax + ")";
Statement stmt = in_oDbConnection.createStatement();
ResultSet rset = stmt.executeQuery(sSQL);
while (rset.next()) {
     float fASC = rset.getFloat("RIGHT_ASCENSION");
     float fDEC = rset.getFloat("DECLINATION");
     int iRange = Collections.binarySearch(listRanges, new Range(fASC, fASC, fDEC, fDEC));
     if (iRange >= 0)
          ++iCount[iRange];
}
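One caveat on the sketch above: Collections.binarySearch only gives meaningful results on a list sorted with the same comparator, and the compareTo shown compares a different field on each branch, so you should verify it defines a consistent ordering for your data. At minimum, sort the ranges once before the loop:
Collections.sort(listRanges); // binarySearch requires a sorted list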

Similar Messages

  • Facing problem in distributed queries over Oracle linked server

    Hi,
    I have a SQL Server 2005 x64 Standard Edition SP3 instance. On the instance, we had distributed (4-part) queries over an Oracle linked server running fine just a few hours back, but now they have started taking too long. They seem to work fine when OPENQUERY
    is used. I have a huge number of queries using this mechanism and it is not feasible for me to convert all of them to OPENQUERY; please help in getting this resolved.
    Thanks in advance.

    Hi Ashutosh,
    According to your description, you face performance issues with distributed queries, and it is not feasible for you to convert all queries to OPENQUERY. To improve the performance, you could try the solutions below:
    1. Make sure that you have a high-speed network between the local server and the linked server.
    2. Use the DRIVING_SITE hint. The DRIVING_SITE hint forces query execution to be done at a different site than the initiating instance.
    This is done when the remote table is much larger than the local table and you want the work (join, sorting) done remotely to save the back-and-forth network traffic. In the following example, we use the driving_site hint to force the "work"
    to be done on the site where the huge table resides:
    select /*+ DRIVING_SITE(h) */ ename
    from tiny_table t, huge_table@remote h
    where t.deptno = h.deptno;
    3. Use views. For instance, you could create a view on the remote site referencing the tables, and call the remote view via a local view, as in the following example:
    create view local_cust as select * from cust@remote;
    4. Use procedural code. On some rare occasions it can be more efficient to replace a distributed query with procedural code, such as a PL/SQL procedure or a precompiler program.
    For more information about the process, please refer to the article:
    http://www.dba-oracle.com/t_sql_dblink_performance.htm
    Regards,
    Michelle Li

  • Curious performance problem

    Hello,
    I have a very curious performance problem. I have a query which returns 0 rows and takes around 9 seconds to execute in TopLink. If I execute the generated SQL for that ReadAllQuery (taken from the log) directly through JDBC, it takes only 70 ms. I use TopLink 9.0.3 with Oracle9i 9.2.0.3. I've traced through the sources and identified that the problem is not in TopLink directly but in the call to the Oracle JDBC driver. But then I don't understand why it is so fast in my plain JDBC case. The problem is the same no matter whether I use the thin or the OCI driver.
    I've prepared a little test to show it up:
    import com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO;
    import java.util.Vector;
    import oracle.toplink.expressions.Expression;
    import oracle.toplink.expressions.ExpressionBuilder;
    import oracle.toplink.queryframework.ReadAllQuery;
    import oracle.toplink.queryframework.SQLCall;
    import oracle.toplink.sessions.DatabaseSession;
    import oracle.toplink.sessions.DefaultSessionLog;
    import oracle.toplink.sessions.Project;
    import oracle.toplink.tools.profiler.PerformanceProfiler;
    import oracle.toplink.tools.workbench.XMLProjectReader;
    /** @author mstraka */
    public class ToplinkTest {
    public static void main(String[] args) {
    try {
    // Pure JDBC test
    String sql =
    "SELECT object_type, MESSAGENUMBER, object_id, MESSAGETYPE, TIMESTAMP, VALUE1, POTORDER, " +
    "VALUE2, VALUE3, ORDERNUMBER, VALUE4, POTNAME, ISINCOMINGMESSAGE " +
    "FROM POTMESSAGELOG " +
    "WHERE " +
    "((((TIMESTAMP >= TO_DATE('2003-07-21 15:00:00', 'YYYY-MM-DD HH24:MI:SS')) " +
    "AND (TIMESTAMP <= TO_DATE('2003-07-21 16:00:00', 'YYYY-MM-DD HH24:MI:SS'))) " +
    "AND ((POTORDER >= 1) AND (POTORDER <= 172))) AND " +
    "(object_type = 'com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO')) " +
    "ORDER BY TIMESTAMP ASC";
    Class.forName("oracle.jdbc.driver.OracleDriver");
    java.sql.Connection con = java.sql.DriverManager.getConnection("jdbc:oracle:oci8:@katka", "sco", "sco");
    long time = System.currentTimeMillis();
    java.sql.PreparedStatement ps = con.prepareStatement(sql);
    java.sql.ResultSet rs = ps.executeQuery();
    int rows = 0;
    while (rs.next()) {
    rows++;
    }
    System.out.println("*** Pure JDBC test ****");
    System.out.println("Rows: " + rows);
    System.out.println("JDBC Time: " + String.valueOf(System.currentTimeMillis() - time) + " ms");
    rs.close();
    ps.close();
    con.close();
    // TopLink test
    XMLProjectReader xmlReader = new XMLProjectReader();
    Project project = xmlReader.read("./config/bc/tlproject.xml");
    project.getLogin().setUserName("sco");
    project.getLogin().setPassword("sco");
    DatabaseSession dbSession = project.createDatabaseSession();
    dbSession.logMessages();
    DefaultSessionLog log = (DefaultSessionLog) dbSession.getSessionLog();
    log.logDebug();
    log.logExceptions();
    log.logExceptionStackTrace();
    log.printDate();
    dbSession.login();
    java.util.Calendar cal = java.util.Calendar.getInstance();
    cal.set(java.util.Calendar.YEAR, 2003);
    cal.set(java.util.Calendar.MONTH, 6);
    cal.set(java.util.Calendar.DAY_OF_MONTH, 21);
    cal.set(java.util.Calendar.HOUR_OF_DAY, 15);
    cal.set(java.util.Calendar.MINUTE, 0);
    cal.set(java.util.Calendar.SECOND, 0);
    cal.set(java.util.Calendar.MILLISECOND, 0);
    ExpressionBuilder eb = new ExpressionBuilder();
    Expression ex = eb.get("timestamp").greaterThanEqual(new java.sql.Date(cal.getTimeInMillis()));
    cal.set(java.util.Calendar.HOUR_OF_DAY, 16);
    ex = ex.and(eb.get("timestamp").lessThanEqual(new java.sql.Date(cal.getTimeInMillis())));
    Expression pot = eb.get("potOrder").greaterThanEqual(1);
    pot = pot.and(eb.get("potOrder").lessThanEqual(172));
    dbSession.setProfiler(new PerformanceProfiler());
    ReadAllQuery rq = new ReadAllQuery(PotMessageLogJDO.class);
    rq.setSelectionCriteria(ex.and(pot));
    rq.addAscendingOrdering("timestamp");
    time = System.currentTimeMillis();
    Vector result = (Vector)dbSession.executeQuery(rq);
    System.out.println("*** TopLink ReadAllQuery test ****");
    System.out.println("Rows: " + result.size());
    System.out.println("TopLink Time: " + String.valueOf(System.currentTimeMillis() - time) + " ms");
    time = System.currentTimeMillis();
    result = (Vector)dbSession.executeSelectingCall(new SQLCall(sql));
    System.out.println("*** TopLink direct SQL test ****");
    System.out.println("Rows: " + result.size());
    System.out.println("TopLink SQL Time: " + String.valueOf(System.currentTimeMillis() - time) + " ms");
    } catch (Exception e) {
    e.printStackTrace();
    }
    }
    }
    ...and here is the output from a run:
    *** Pure JDBC test ****
    Rows: 0
    JDBC Time: 62 ms
    2003.07.21 06:07:44.127--DatabaseSession(30752603)--Connection(20092482)--TopLink, version:TopLink - 9.0.3 (Build 423)
    2003.07.21 06:07:44.736--DatabaseSession(30752603)--Connection(20092482)--connecting(DatabaseLogin(
         platform => OraclePlatform
         user name => "sco"
         datasource URL => "jdbc:oracle:oci8:@katka"
    2003.07.21 06:07:44.799--DatabaseSession(30752603)--Connection(20092482)--Connected: jdbc:oracle:oci8:@katka
         User: SCO
         Database: Oracle Version: Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.3.0 - Production
         Driver: Oracle JDBC driver Version: 9.2.0.1.0
    2003.07.21 06:07:44.971--DatabaseSession(30752603)--#executeQuery(ReadAllQuery(com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO))
    Begin Profile of{ReadAllQuery(com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO)
    2003.07.21 06:07:45.002--DatabaseSession(30752603)--Connection(20092482)--SELECT object_type, MESSAGENUMBER, object_id, MESSAGETYPE, TIMESTAMP, VALUE1, POTORDER, VALUE2, VALUE3, ORDERNUMBER, VALUE4, POTNAME, ISINCOMINGMESSAGE FROM POTMESSAGELOG WHERE ((((TIMESTAMP >= {ts '2003-07-21 15:00:00.0'}) AND (TIMESTAMP <= {ts '2003-07-21 16:00:00.0'})) AND ((POTORDER >= 1) AND (POTORDER <= 172))) AND (object_type = 'com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO')) ORDER BY TIMESTAMP ASC
    Profile(ReadAllQuery,
         class=com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO,
         total time=9453,
         local time=9453,
         query prepare=15,
         sql execute=9422,
    } End Profile
    *** TopLink ReadAllQuery test ****
    Rows: 0
    TopLink Time: 9468 ms
    2003.07.21 06:07:54.439--DatabaseSession(30752603)--#executeQuery(DataReadQuery())
    Begin Profile of{DataReadQuery()
    2003.07.21 06:07:54.439--DatabaseSession(30752603)--Connection(20092482)--SELECT object_type, MESSAGENUMBER, object_id, MESSAGETYPE, TIMESTAMP, VALUE1, POTORDER, VALUE2, VALUE3, ORDERNUMBER, VALUE4, POTNAME, ISINCOMINGMESSAGE FROM POTMESSAGELOG WHERE ((((TIMESTAMP >= TO_DATE('2003-07-21 15:00:00', 'YYYY-MM-DD HH24:MI:SS')) AND (TIMESTAMP <= TO_DATE('2003-07-21 16:00:00', 'YYYY-MM-DD HH24:MI:SS'))) AND ((POTORDER >= 1) AND (POTORDER <= 172))) AND (object_type = 'com.abilitydev.slovalco.parameter.messages.PotMessageLogJDO')) ORDER BY TIMESTAMP ASC
    Profile(DataReadQuery,
         total time=0,
         local time=0,
    } End Profile
    *** TopLink direct SQL test ****
    Rows: 0
    TopLink SQL Time: 16 ms
    Thanks a lot!
    Marcel

    Marcel,
    TopLink supports native SQL generation that will use the TO_DATE operators. You can turn on native SQL in a couple of ways.
    1. SESSIONS.XML
              <login>
                   <platform-class>oracle.toplink.internal.databaseaccess.OraclePlatform</platform-class>
                   <user-name>user</user-name>
                   <password>password</password>
                   <uses-native-sequencing>true</uses-native-sequencing>
              </login>
    2. Through DatabaseLogin API:
    After the project is read in or instantiated:
    project.getLogin().useNativeSQL();
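    For instance, in Marcel's test program above the call would slot in right after the project is read (a sketch based on that code):
    Project project = xmlReader.read("./config/bc/tlproject.xml");
    project.getLogin().useNativeSQL(); // emit TO_DATE(...) literals instead of {ts ...} JDBC escapes
    DatabaseSession dbSession = project.createDatabaseSession();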
    This should get the SQL you need and address your performance issue.
    Doug

  • Interactive report performance problem over database link - Oracle Gateway

    Hello all;
    This is regarding a thread Interactive report performance problem over database link that was posted by Samo.
    The issue I am facing is that when I use an Oracle function like apex_item.checkbox, the query slows down by 45 seconds.
    The query looks like this (due to sensitivity issues, I cannot disclose the real table names):
    SELECT apex_item.checkbox(1,b.col3)
    , a.col1
    , a.col2
    FROM table_one a
    , table_two b
    WHERE a.col3 = 12345
    AND a.col4 = 100
    AND b.col5 = a.col5
    table_one and table_two are remote tables (non-Oracle) connected using Oracle Gateway.
    Now if I run the above query without the apex_item.checkbox function, the response is less than a second, but with apex_item.checkbox the query runs for more than 30 seconds. I have worked around the issue by creating a collection, but that's not good practice.
    I would like to get ideas from people on how to resolve or speed up the query.
    Any idea how to use sub-factoring for the above scenario? Or other methods? (Creating a view or materialized view is not an option.)
    Thank you.
    Shaun S.

    Hi Shaun
    Okay, I have a million questions (could you tell me whether both tables are from the same remote source? It looks like they're possibly not), but let's just try some things first.
    By now you should understand the idea of what I termed 'sub-factoring' in a previous post. This is to do with using the WITH blah AS (SELECT... syntax. In most circumstances this 'materialises' the results of the inner select statement, meaning we 'get' the results and then do something with them afterwards. It's a handy trick when dealing with remote sites, as sometimes you want the remote database to do the work. The reason I ask you to use the MATERIALIZE hint for testing is just to force this; in 99.99% of cases it can be removed later. A WITH statement is also handled differently from an inline view like SELECT * FROM (SELECT..., but the same result can be mimicked with a NO_MERGE hint.
    Looking at your case, I would be interested to see the explain plans and results for something like the following two statements (sorry - you're going to have to check them, it's late!):
    WITH a AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_one),
    b AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_two),
    sourceqry AS
    (SELECT  b.col3 x
           , a.col1 y
           , a.col2 z
    FROM a
        , b
    WHERE a.col3 = 12345
    AND   a.col4 = 100
    AND   b.col5 = a.col5)
    SELECT apex_item.checkbox(1,x), y , z
    FROM sourceqry
    WITH a AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_one),
    b AS
    (SELECT /*+ MATERIALIZE */ *
    FROM table_two)
    SELECT apex_item.checkbox(1, b.col3), a.col1, a.col2
    FROM a
        , b
    WHERE a.col3 = 12345
    AND   a.col4 = 100
    AND   b.col5 = a.col5
    If the remote tables are at the same site, then you should have the same results. If they aren't, you should get the same results, but different to the original query.
    We aren't being told the real cardinality of the inner selects here, so the explain plan is distorted (this is normal for queries on remote and especially non-Oracle sites). This hinders tuning normally, but I don't think this is your problem at all. How many distinct values do you normally get for the column aliased 'x', and how many rows are normally returned in total? Also, how are you testing response times: in APEX, SQL Developer, Toad, SQL*Plus etc.?
    Sorry for all the questions but it helps to answer the question, if I can.
    Cheers
    Ben
    http://www.munkyben.wordpress.com
    Don't forget to mark replies helpful or correct ;)

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take so much longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not use XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs (rather than processing all records at once within the query). The performance is one record/second, which is minimally acceptable and interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
    SSN varchar2(20),
    XMLREC sys.xmltype
    )
    xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
    CODE varchar2(4),
    DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
    for $r in Root
    return
    <Root>
    <Id>123456789</Id>
    {for $e in $r/Element
        return
        <Element>
          <Subelement1>
            {$e/Subelement1/Code}
            <Description>
            {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
            </Description>
          </Subelement1>
          <Subelement2>
            {$e/Subelement2/Code}
            <Description>
            {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
            </Description>
          </Subelement2>
          <Subelement3>
            {$e/Subelement3/Code}
            <Description>
            {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
            </Description>
          </Subelement3>
        </Element>}
    </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization when joining the code tables. I'm not sure if registering a schema would help; using structured storage probably would. But should that be necessary given we're working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.

  • Performance problems after Update from 7.6.00.37 to 7.6.03.15

    Hi,
    after the update from 7.6.00.37 to 7.6.03.15 on our live system we noticed a lot of performance problems. We had tested the new version on our test system beforehand, but did not notice effects like this there.
    We have 2 identical systems (Opteron 64-bit, openSUSE 10.2) with log shipping between them. We updated the standby system first, switched from online to standby (with a copy of the cold log) and started the new server as the online system. After that we ran a complete backup (runtime: 1 hour) to start a new backup history and to activate autolog.
    With the update we changed USE_OPEN_DIRECT to YES, but the performance of the system was very slow afterwards. After the backup it remained at a high load average (> 10, the previous system had about 2-4), with nearly 100% CPU usage for the db kernel process.
    The next day we switched USE_OPEN_DIRECT back to NO. At first the system ran better, but it periodically rises to a load average of 6 and slows down the performance of various applications (somebody says about 10 times slower). Here we also noticed high usage (now 200-300%) by the db kernel process.
    Our questions are:
    1. Has something basically changed from 7.6.00.37 to 7.6.03.15, so that our various applications (JDBC, ODBC and Perl/SQLDBC, partially on old Linux systems with drivers from 7.5.00.23) don't reach the same performance as before?
    2. Are there any other (new) parameters which can help? Maybe reducing MAXCPU from 4 to 3 to reserve capacity for the system (there is only one MaxDB instance running)?
    3. Is there a possibility to switch back to 7.6.00.37 (only for the worst case)?
    I have made some first steps with x_cons, but don't see any anomalies at first glance.
    Regards,
    Thomas

    Thomas Schulz wrote:
    > > > Next day we switched USE_OPEN_DIRECT back to NO. The system first runs better, but
    > >
    > > What is it about this parameter that lets you think it may be the cause for your problems?
    > After changing it back to NO, the system runs better (lower load average) than with YES (but much slower than with the old version!)
    Hmm... that is really odd. When using USE_OPEN_DIRECT there is actually less work to do for the operating system.
    > > > Our questions are
    > > >
    > > > 1. Has something basically changed from 7.6.00.37 to 7.6.03.15, so that our various applications (JDBC, ODBC and Perl/SQLDBC partially on old linux systems with drivers from 7.5.00.23) don't reach same performance as before?
    > >
    > > Yes - of course. Changes are what Patches are all about!

    > Are there any known problems with updating from 7.6.00.37 to 7.6.03.15?
    Well, of course there are bugs that have been found in between the releases of the two versions, but I am not aware of anything like a performance killer.
    We will have to check this in detail here.
    > > > I have made some first steps with x_cons, but don't see any anomalies on the first look.
    > >
    > > Ok, looking into what the system does when it uses CPU is a first good step.
    > > But what would be "anomalies" to you?
    >
    > Good question! I don't really know.
    Well - then I guess 'looking at the system' won't bring you far...
    > > Do you use DBAnalyzer? If not -> activate it!
    > > Does it gives you any warnings?
    > > What about TIME_MEASUREMENT? Is it activated on the system? If not -> activate it!
    >
    > OK, that will be our next steps.
    Great - let's see the warnings you get.
    Let us also see the DB Parameters you set.
    > > What parameters have changed due  to the patch installations (check the parameter history file)?
    > >
    > > What queries take longer now? What is the execution plan of them?
    >
    > It seems to happen for all selects on tables with a lot of rows (>10,000, partially without indexes because they are automatically generated). With the old version we had no problems with missing indexes or the general performance. Unfortunately it is very difficult to extract SQL statements out of the JBoss applications. But even simple queries (without any join) run slower when the load average rises over 4-5.
    Hmm... the question here is still, if the execution plans are good enough to meet your expectations.
    E.g. for tables that you access via the primary key it actually doesn't matter how many rows a table has (not for MaxDB at least).
    > > BTW: how exactly do the tests look like that you've done on the testsystem?
    > Usage over 6 weeks with our JDBC development environment (JBoss), backup and restore with various combinations of USE_OPEN_DIRECT and USE_OPEN_DIRECT_FOR_BACKUP.
    Sounds like the I/O is your most suspect aspect for overall system performance...
    > > Was the testsystem a 1:1 copy of the productive machine before the upgrade test?
    > No - smaller hardware (32 bit), only 20% of data of the live system, few db users and applications.
    >
    > > How did you test the system performance with multiple parallel users?
    > Only while permanent development with the 2-3 developers and some parallel tests of backup/restore. Unfortunately no tests with many users/applications.
    Ok - so this is next to no testing at all when it comes to performance.
    > An UPDATE STATISTICS over all db users seems to change nothing. At the moment the system remains markedly slow and we are searching for reasons and solutions. Another attempt will be the change of MAXCPU from 4 to 3.
    Why do you want to do that? Have you observed any threads that don't get a CPU because all 4 cores are used by the MaxDB kernel?
    regards,
    Lars

  • Performance problem inserting lots of rows

    I'm a software developer; we have a J2EE-based product that works against Oracle or SQL Server. As part of a benchmarking suite, I insert about 70,000 records into our auditing table (using jdbc, going over the network to a database server). The database server is a smallish desktop Windows machine running Windows Server 2003; it has 384M of RAM, a 1GHz CPU, and plenty of disk.
    When using Oracle (9.2.0.3.0), I can insert roughly 2,000 rows per minute. Not too shabby!
    HOWEVER -- and this is what's making Oracle look bad -- SQL Server 2000 on the SAME MACHINE is inserting roughly 8,000 rows per minute!
    Why is Oracle so slow? The database server is using roughly 50% CPU, so I assume disk speed is an issue. My goal is to get Oracle to compare favorably with SQL Server on the same hardware. Any ideas or suggestions? (I've done a fair amount of Oracle tuning in the past to get SELECTs to run faster, but have never dealt with INSERT performance problems of this magnitude.)
    Thanks,
    Daniel Rabe

    I've tried using a PreparedStatement and a CallableStatement, always with bind variables. (I use a sequence to populate one of the columns, so initially my code was doing the insert, then a select to get the last inserted row. This was fast on SQL Server but slow on Oracle, so I conditionalized my code to use a PL/SQL block that does INSERT ... RETURNING so I could get the new row id without doing the extra select - that required switching from PreparedStatement to CallableStatement.) The Performance Manager shows "Executes without Parses" > 98%.
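    For reference, the INSERT ... RETURNING approach described above looks roughly like this in JDBC (a sketch only; the connection, table, sequence and column names here are hypothetical):
    CallableStatement cs = conn.prepareCall(
        "BEGIN INSERT INTO audit_log (id, message) " +
        "VALUES (audit_seq.NEXTVAL, ?) RETURNING id INTO ?; END;");
    cs.setString(1, "some audit message"); // bind variable for the row data
    cs.registerOutParameter(2, java.sql.Types.NUMERIC); // receives the generated key
    cs.execute();
    long newId = cs.getLong(2); // new key without a second SELECT round trip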
    Performance Manager also shows Application I/O Physical Reads approx 30/sec, and Background Process I/O Physical Writes approx 60/sec.
    File Write Operations showed most of the writes going to my tablespace (which is sized plenty big for the data I'm writing), but with occasional writes to UNDOTBS01.DBF as well.
    The database is in NOARCHIVELOG mode.
    I'm NOT committing very often - I'm doing all 70,000 rows as one transaction. BTW, I realize this isn't a real-life scenario - this is just the setup I do so that I can run some benchmarks on various queries that our application performs. Once I get into those, I'm sure I'll have a whole new slew of questions for the group. ;-)
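    (As an aside, one lever not mentioned in this thread is JDBC batch inserts, which cut the per-row network round trips; a minimal sketch, reusing the hypothetical audit_log table from the previous sketch. Note that batching does not combine easily with the RETURNING trick above.)
    PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO audit_log (id, message) VALUES (audit_seq.NEXTVAL, ?)");
    for (int i = 0; i < 70000; i++) {
        ps.setString(1, "row " + i);
        ps.addBatch();
        if (i % 500 == 499)
            ps.executeBatch(); // send 500 rows per round trip
    }
    ps.executeBatch(); // flush the remainder
    conn.commit();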
    I'll look into SQL TRACE and TKPROF - time to refresh some skills I haven't used in a while...
    Thanks,
    --Daniel Rabe

  • Performance problem in Zstick report...

    Hi Experts,
    I am facing a performance problem in a custom stock report of Material Management.
    In this report I am fetching all the materials with their batches to get the desired output; at a time this report processes 36,000-plus unique combinations of material and batch.
    This report takes around 30 minutes to execute, and it has to be viewed regularly every 2 hours.
    To read the batch characteristics value I am using the FM '/SAPMP/CE1_BATCH_GET_DETAIL'.
    Is there any way to increase the performance of this report? The output of the report is in ALV.
    Can I have a refresh button in the report so that the data is refreshed automatically without executing it again, or is there any caching concept?
    Note: I have declared all the itabs as type sorted, and all the select queries fetch with key and index.
    Thanks
    Rohit Gharwar

    Hello,
    SE30 is old. Switch on a trace in ST12 while running this program and identify where exactly most of the time is being spent. If you see high CPU time, the problem is with the ABAP code; you can figure out the exact program/function module from the ST12 trace. If you see high database time in ST12, the problem is a database-related issue, so you basically have to analyze the SQL statements from the performance traces in ST12. This should resolve your issue.
    Yours Sincerely
    Dileep

  • Performance problems with DFSN, ABE and SMB

    Hello,
    We have identified a problem with DFS-Namespace (DFSN), Access Based Enumeration (ABE) and SMB File Service.
    Currently we have two Windows Server 2008 R2 servers providing the domain-based DFSN in functional level Windows Server 2008 R2 with activated ABE.
    The DFSN servers have the most current hotfixes for DFSN and SMB installed, according to http://support.microsoft.com/kb/968429/en-us and http://support.microsoft.com/kb/2473205/en-us
    We have only one AD-site and don't use DFS-Replication.
    Servers have 2 Intel X5550 4 Core CPUs and 32 GB Ram.
    Network is a LAN.
    Our DFSN looks like this:
    \\contoso.com\home
        Contains 10,000 links
        Drive mapping on clients to subfolder \\contoso.com\home\username
    \\contoso.com\group
        Contains 2,500 links
        Drive mapping on clients directly to \\contoso.com\group
    On \\contoso.com\group we serve different folders for teams, projects and other groups with different access permissions based on AD groups.
    We have to use ABE so that users see only accessible links (folders).
    We sometimes encounter, multiple times a day, enterprise-wide performance problems lasting about 30 seconds when accessing our namespaces.
    After six weeks of researching and analyzing, we were able to identify the exact problem.
    Administrators create a new DFS-Link in our Namespace \\contoso.com\group with correct permissions using the following command line:
    dfsutil.exe link \\contoso.com\group\project123 \\fileserver1\share\project123
    dfsutil.exe property sd grant \\contoso.com\group\project123 CONTOSO\group-project123:RX protect replace
    This is done a few times a day.
    There is no possibility to create the folder and set the permissions in one step.
    The DFSN process on our DFSN servers creates the new link and the corresponding folder in C:\DFSRoots.
    At this time, we have for example 2000+ clients having an active session to the root of the namespace \\contoso.com\group.
    Active session means a Windows Explorer opened to the mapped drive or to any subfolder.
    The file server process (Lanmanserver) sends a change notification (SMB-Protocol) to each client with an active session \\contoso.com\group.
    All the clients which were getting the notification now start to refresh the folder listing of \\contoso.com\group
    This was identified by a network trace on our DFSN servers and different clients.
    Due to ABE the servers have to compute the folder listing for each request.
    The DFS service on the servers doesn't respond to any additional requests for probably 30 seconds. CPU usage increases significantly over this period and goes back to normal afterwards, on our hardware from about 5% to 50%.
    Users can't access all DFS-Namespaces during this time and applications using data from DFS-Namespace stop responding.
    Side effect: Windows reports a slow-link detection on clients for \\contoso.com\home, which can be offline-available for users (described here for WAN connections: http://blogs.technet.com/b/askds/archive/2011/12/14/slow-link-with-windows-7-and-dfs-namespaces.aspx)
    The problem doesn't occur when creating a link in \\contoso.com\home, because users only have mappings to subfolders.
    Currently the problem also doesn't occur for \\contoso.com\app, because users usually don't use Windows Explorer to access this mapping.
    Disabling ABE reduces the DFSN freeze time, but doesn't solve the problem.
    The problem also occurs with Windows Server 2012 R2 as the DFSN server.
    There is a registry key available for clients to avoid the response to the change notification (NoRemoteChangeNotify, see http://support.microsoft.com/kb/812669/en-us).
    This might fix the problem with DFSN, but it results in other problems for the users. For example, they have to press F5 to refresh every remote directory on change.
    Is there a possibility to disable the SMB change notification on the server side?
    TIA and regards,
    Ralf Gaudes

    Hi,
    Thanks for posting in Microsoft Technet Forums.
    I am trying to involve someone familiar with this topic to further look at this issue. There might be some time delay. Appreciate your patience.
    Thank you for your understanding and support.
    Regards.

  • (new?) performance problem using jDriver after a Sql Server 6.5 to 2000 conversion

    Hi,
    This is similar - yet different - to a few of the old postings about performance
    problems with using jdbc drivers against Sql Server 7 & 2000.
    Here's the situation:
    I am running a standalone java application on a Solaris box using BEA's jdbc driver
    to connect to a Sql Server database on another network. The application retrieves
    data from the database through joins on several tables for approximately 40,000
    unique ids. It then processes all of this data and produces a file. We tuned
    the app so that the execution time for a single run through the application was
    24 minutes running against Sql Server 6.5 with BEA's jdbc driver. After performing
    a DBMS conversion to upgrade it to Sql Server 2000 I switched the jDriver to the
    Sql Server 2000 version. I ran the app and got an alarming execution time of
    5hrs 32 min. After some research, I found the problem with unicode and nvarchar/varchar
    and set the "useVarChars" property to "true" on the driver. The execution time
    for a single run through the application is now 56 minutes.
    56 minutes compared to 5 1/2 hrs is an amazing improvement. However, it is still
    over twice the execution time that I was seeing against the 6.5 database. Theoretically,
    I should be able to switch out my jdbc driver and the DBMS conversion should be
    invisible to my application. That would also mean that I should be seeing the
    same execution times with both versions of the DBMS. Has anybody else seen a
    similar situation? Are there any other settings or fixes that I can put into place
    to get my performance back down to what I was seeing with 6.5? I would rather
    not have to go through and perform another round of performance tuning after having
    already done this when the app was originally built.
    thanks,
    mike
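    (For reference, driver properties like the "useVarChars" one mentioned above are usually passed to DriverManager along these lines; a sketch, with the URL, credentials and property values depending on your setup:)
    java.util.Properties props = new java.util.Properties();
    props.setProperty("user", "scott"); // hypothetical credentials
    props.setProperty("password", "tiger");
    props.setProperty("useVarChars", "true"); // send strings as varchar rather than unicode
    java.sql.Connection con = java.sql.DriverManager.getConnection(jdbcUrl, props); // jdbcUrl per the driver docs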

    Mike wrote:
    Joe,
    This was actually my next step. I replaced the BEA driver with
    the MS driver and let it run through with out making any
    configuration changes, just to see what happened. I got an
    execution time of about 7 1/2 hrs (which was shocking). So,
    (comparing apples to apples) while leaving the default unicode
    property on, BEA ran faster than MS, 5 1/2 hrs to 7 1/2 hrs.
    I then set the 'SendStringParametersAsUnicode' to 'false' on the
    MS driver and ran another test. This time the application
    executed in just over 24 minutes. The actual runtime was 24 min
    16 sec, which is still ever so slightly above the actual runtime
    against SS 6.5 which was 23 min 35 sec, but is twice as fast as the
    56 minutes that BEA's driver was giving me.
    I think that this is very interesting. I checked to make sure that
    there were no outside factors that may have been influencing the
    runtimes in either case, and there were none. Just to make sure,
    I ran each driver again and got the same results. It sounds like
    there are no known issues regarding this?
    We have people looking into things on the DBMS side and I'm still
    looking into things on my end, but so far none of us have found
    anything. We'd like to continue using BEA's driver for the
    support and the fact that we use Weblogic Server for all of our
    online applications, but this new data might mean that I have to
    switch drivers for this particular application.
    Thanks. No, there is no known issue, and if you put a packet sniffer
    between the client and DBMS, you will probably not see any appreciable
    difference in the content of the SQL sent be either driver. My suspicion is
    that it involves the historical backward compatibility built in to the DBMS.
    It must still handle several iterations of older applications, speaking obsolete
    versions of the DBMS protocol, and expecting different DBMS behavior!
    Our driver presents itself as a SQL7-level application, and may well be treated
    differently than a newer one. This may include different query processing.
    Because our driver is deprecated, it is unlikely that it will be changed in
    future. We will certainly support you using the MS driver, and if you look
    in the MS JDBC newsgroup, you'll see more answers from BEA folks than
    from MS people!
    Joe
    Mike,
    The next test you should do, to isolate the issue, is to try another JDBC driver.
    MS provides a type-4 driver now, for free. If it is significantly faster, that would be
    interesting. However, it would still not isolate the problem, because we would still
    need to know what query plan is created by the DBMS, and why.
    Joe Weinstein at BEA
    PS: I can only tell you that our driver has not changed in its semantic function.
    It essentially sends SQL to the DBMS. It doesn't alter it.

  • OBIEE 11g performance problem

    Hi,
    I am facing a performance problem in OBIEE 11g. When I run the query taken from nqquery.log directly against the database, it gives me the result within a few seconds. But in OBIEE Answers the query runs forever, never showing any data.
    I am attaching the query below.
    Please help to solve this.
    Thanks
    Titas
    [2012-10-16T18:07:34.000+00:00] [OracleBIServerComponent] [TRACE:2] [USER-23] [] [ecid: 3a39339b45a46ab4:-70b1919f:13a1f282668:-8000-00000000000769b2] [tid: 44475940] [requestid: 26e1001e] [sessionid: 26e10000] [username: weblogic] -------------------- General Query Info: [[
    Repository: Star, Subject Area: BM_BG Pascua Lama, Presentation: BG PL Project Analysis
    [2012-10-16T18:07:34.000+00:00] [OracleBIServerComponent] [TRACE:2] [USER-18] [] [ecid: 3a39339b45a46ab4:-70b1919f:13a1f282668:-8000-00000000000769b2] [tid: 44475940] [requestid: 26e1001e] [sessionid: 26e10000] [username: weblogic] -------------------- Sending query to database named XXBG Pascua Lama (id: <<26911>>), connection pool named Connection Pool, logical request hash e3feca59, physical request hash 5ab00db6: [[
    WITH
    SAWITH0 AS (select sum(T6051.COST_AMT_PROJ_RATE) as c1,
    sum(T6051.COST_AMOUNT) as c2,
    T6051.AFE_NUMBER as c3,
    T6051.BUDGET_OWNER as c4,
    T6051.COMMENTS as c5,
    T6051.COMMODITY as c6,
    T6051.COST_PERIOD as c7,
    T6051.COST_SOURCE as c8,
    T6051.COST_TYPE as c9,
    T6051.DATA_SEL as c10,
    T6051.FACILITY as c11,
    T6051.HISTORICAL as c12,
    T6051.OPERATING_UNIT as c13,
    T5633.project_number as c14,
    T5637.task_number as c15
    from
    (SELECT project_id proj_id
    ,segment1 project_number
    ,org_id
    FROM pa.pa_projects_all
    WHERE org_id IN (825, 865, 962, 2161)) T5633,
    (SELECT project_id proj_id
    ,task_id
    ,task_number
    ,task_name
    FROM pa.pa_tasks) T5637,
    (SELECT xxbg_pl_proj_analysis_cost_v.AFE_NUMBER,
    xxbg_pl_proj_analysis_cost_v.BUDGET_OWNER,
    xxbg_pl_proj_analysis_cost_v.COMMENTS,
    xxbg_pl_proj_analysis_cost_v.COMMODITY,
    xxbg_pl_proj_analysis_cost_v.COST_PERIOD,
    xxbg_pl_proj_analysis_cost_v.COST_SOURCE,
    xxbg_pl_proj_analysis_cost_v.COST_TYPE,
    xxbg_pl_proj_analysis_cost_v.FACILITY,
    xxbg_pl_proj_analysis_cost_v.HISTORICAL,
    xxbg_pl_proj_analysis_cost_v.PO_NUMBER_COST_CONTROL,
    xxbg_pl_proj_analysis_cost_v.PREVIOUS_PROJECT,
    xxbg_pl_proj_analysis_cost_v.PREV_AFE_NUMBER,
    xxbg_pl_proj_analysis_cost_v.PREV_COST_CONTROL_ACC_CODE,
    xxbg_pl_proj_analysis_cost_v.PREV_COST_TYPE,
    xxbg_pl_proj_analysis_cost_v.PROJECT_NUMBER,
    xxbg_pl_proj_analysis_cost_v.SUPPLIER_NAME,
    xxbg_pl_proj_analysis_cost_v.TASK_DESCRIPTION,
    xxbg_pl_proj_analysis_cost_v.TASK_NUMBER,
    xxbg_pl_proj_analysis_cost_v.TRANSACTION_NUMBER,
    xxbg_pl_proj_analysis_cost_v.WORK_PACKAGE,
    xxbg_pl_proj_analysis_cost_v.WP_OWNER,
    xxbg_pl_proj_analysis_cost_v.OPERATING_UNIT,
    xxbg_pl_proj_analysis_cost_v.DATA_SEL,
    pa_periods_all.PERIOD_NAME,
    xxbg_pl_proj_analysis_cost_v.ORG_ID,
    xxbg_pl_proj_analysis_cost_v.COST_AMT_PROJ_RATE COST_AMT_PROJ_RATE,
    xxbg_pl_proj_analysis_cost_v.COST_AMOUNT COST_AMOUNT,
    xxbg_pl_proj_analysis_cost_v.project_id,
    xxbg_pl_proj_analysis_cost_v.task_id
    FROM (select xpac.*,
    decode(xpac.historical, 'Y', 'Historical', 'N', 'Current') data_sel
    from apps.xxbg_pl_proj_analysis_cost_v xpac
    union
    select xpac.*, 'All' data_sel
    from apps.xxbg_pl_proj_analysis_cost_v xpac) xxbg_pl_proj_analysis_cost_v,
    (select period_name, org_id from apps.pa_periods_all) pa_periods_all
    WHERE ((xxbg_pl_proj_analysis_cost_v.ORG_ID = pa_periods_all.ORG_ID))
    AND (xxbg_pl_proj_analysis_cost_v.ORG_ID IN (825,865,962,2161))
    AND (APPS.XXBG_PL_PA_COMMITMENT_PKG.GET_LAST_DAY(xxbg_pl_proj_analysis_cost_v.COST_PERIOD) <=
    APPS.XXBG_PL_PA_COMMITMENT_PKG.GET_LAST_DAY(pa_periods_all.PERIOD_NAME))) T6051
    where ( T5633.proj_id = T5637.proj_id and T5633.project_number = 'SUDPALAPAS11' and T5637.proj_id = T6051.PROJECT_ID and T5637.task_id = T6051.TASK_ID and T5637.task_number = '2100.2000.01.BC0100' and T6051.DATA_SEL = 'All' and T6051.OPERATING_UNIT = 'Compañía Minera Nevada SpA' and T6051.PERIOD_NAME = 'JUL-12' )
    group by T5633.project_number, T5637.task_number, T6051.AFE_NUMBER, T6051.BUDGET_OWNER, T6051.COMMENTS, T6051.COMMODITY, T6051.COST_PERIOD, T6051.COST_SOURCE, T6051.COST_TYPE, T6051.DATA_SEL, T6051.FACILITY, T6051.HISTORICAL, T6051.OPERATING_UNIT)
    select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4, D1.c5 as c5, D1.c6 as c6, D1.c7 as c7, D1.c8 as c8, D1.c9 as c9, D1.c10 as c10, D1.c11 as c11, D1.c12 as c12, D1.c13 as c13, D1.c14 as c14, D1.c15 as c15, D1.c16 as c16 from ( select distinct 0 as c1,
    D1.c3 as c2,
    D1.c4 as c3,
    D1.c5 as c4,
    D1.c6 as c5,
    D1.c7 as c6,
    D1.c8 as c7,
    D1.c9 as c8,
    D1.c10 as c9,
    D1.c11 as c10,
    D1.c12 as c11,
    D1.c13 as c12,
    D1.c14 as c13,
    D1.c15 as c14,
    D1.c2 as c15,
    D1.c1 as c16
    from
    SAWITH0 D1
    order by c13, c14, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11, c12 ) D1 where rownum <= 65001

    Hi Titas,
with such problems the cause typically turns out, at least for me, to be something simple and embarrassing, like:
    - I am connected to another database,
    - The database is right but I have made some manual adjustments without committing them,
    - I have got the wrong query from the query log,
    - I have got the right query but my request is based on multiple queries.
    Do other OBIEE reports work fine?
    Have you tried removing columns one by one to see whether it makes a difference?
    -JR
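Another way to narrow this down is to replay the same logical SQL through the BI Server from the command line, bypassing the presentation layer. A hedged sketch, assuming the default AnalyticsWeb DSN and that the logical SQL from nqquery.log has been saved to logical.sql:
nqcmd -d AnalyticsWeb -u weblogic -p <password> -s logical.sql -o out.txt
If nqcmd is also slow while the physical SQL is fast in SQL*Plus, the time is being spent in the BI Server rather than in the database; if nqcmd is fast, the problem is in Answers/Presentation Services.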

  • Performance problem because of ignored index

    Hi,
    We have a performance problem with kodo ignoring indexes in Oracle:
The base class of all our persistent classes (LogasPoImpl) has a subclass
CODEZOLLMASSNAHMENIMPL.
    We use vertical mapping for all subclasses and have 400.000 instances of
    CODEZOLLMASSNAHMENIMPL.
    We defined an additional index on an attribute of CODEZOLLMASSNAHMENIMPL.
    A query with a filter like "myIndexedAttribute = 'DE'" takes about 15
    seconds on Oracle 8.1.7.
    Kodo logs something like the following:
    [14903 ms] executing prepstmnt 6156689 SELECT (...)
    FROM CODEZOLLMASSNAHMENIMPL t0, LOGASPOIMPL t1
    WHERE (t0.myIndexedAttribute = ?)
    AND t1.JDOCLASS = ?
    AND t0.JDOID = t1.JDOID
    [params=(String) DE, (String)
    de.logas.zoll.eztneu.CodeZollMassnahmenImpl] [reused=0]
When I execute the same statement from an SQL prompt, it takes that long as
well, but when I swap the table names in the FROM part
(to "FROM LOGASPOIMPL t1, CODEZOLLMASSNAHMENIMPL t0") the result comes
    immediately.
I've had a look at the query plans Oracle creates for the two statements
and found that our index on myIndexedAttribute is not used
by the first statement, but it is by the second.
    How can I make Kodo use the faster statement?
    I've tried to use the "jdbc-indexed" tag, but without success so far.
    Thanks,
    Wolfgang

    Thank you very much, Stefan & Alex.
    After computing statistics the index is used and the performance is fine
    now.
    - Wolfgang
    Alex Roytman wrote:
    ANALYZE TABLE MY_TABLE COMPUTE STATISTICS;
    "Stefan" <[email protected]> wrote in message
    news:btlqsj$f18$[email protected]..
When I execute the same statement from an SQL prompt, it takes that long as
well, but when I swap the table names in the FROM part
    (to "FROM LOGASPOIMPL t1, CODEZOLLMASSNAHMENIMPL t0") the result comes
    immediately.
I've had a look at the query plans Oracle creates for the two statements
and found that our index on myIndexedAttribute is not used
by the first statement, but it is by the second.
    How can I make Kodo use the faster statement?
I've tried to use the "jdbc-indexed" tag, but without success so far.
I know that in DB2 there is a function called "Run Statistics" which you
can (and should) run on all tables involved in a query (at least once a
month, when there are heavy changes in the tables).
Based on the information gathered by these statistics, DB2 can optimize your
queries and execution paths.
Since I was once involved in query performance optimization on DB2, I can
say you can get improvements of 80% on big tables depending on whether
statistics have been run or not, since the execution plans created by the
optimizer differ heavily.
Since I'm working with Oracle now as well, I can at least say that Oracle
has a feature like statistics too (go into the Enterprise Manager
Console and click on a table; you will find a row "statistics last run").
I don't know how to trigger these statistics, nor whether they would
influence the query execution path on Oracle (thus "swapping" the table names
by itself), since I didn't have time to do further research on that matter.
But it's worth a try to find out, and maybe it helps with your problem.
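For the archives: on Oracle the equivalent is ANALYZE (the statement Alex posted) or, on later releases, the DBMS_STATS package. A minimal sketch, reusing the table names from this thread:
-- gather optimizer statistics so the cost-based optimizer can pick the index
ANALYZE TABLE LOGASPOIMPL COMPUTE STATISTICS;
ANALYZE TABLE CODEZOLLMASSNAHMENIMPL COMPUTE STATISTICS;
-- on newer Oracle releases DBMS_STATS is the recommended interface:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER,
                                tabname => 'CODEZOLLMASSNAHMENIMPL',
                                cascade => TRUE);  -- also gathers index stats
END;
/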

  • 10.1.3.3 ESB DB Adapter Performance Problem

    Hello,
We are trying to update an Oracle database using the DB Adapter. Insertion into the database via the DB Adapter (and only with the DB Adapter) is slow: even transferring 50 records of ~1K each takes 5-6 seconds.
    Environment:
    Oracle SOA suite 10.1.3 with 10.1.3.3 Patch Applied
    AIX 5
    8 CPU & 20 GB RAM
    Our test setup.
    Tool:ESB & BPEL
    Inbound Adapter to read data from Oracle Table
    TransformActivity to convert source schema to destination schema
Outbound Adapter to write data into the same Oracle table on the same machine (this is where the performance problem occurs).
The ESB Console shows that most of the total time is spent in the Outbound Adapter activity.
    We also created a BPEL process to do the data transfer between Oracle Databases. Adapter statistics for outbound insert activity in BPEL console shows higher value under "Commit" listed under "Adapter Post Processing".
If we read data from an Oracle table using the DB adapter and write it to a file using the File adapter, transferring 10,000 records (~2K each) happens in 2 seconds. Only writing into the database takes a long time, and we are unsure why.
We have modified the DB values stated by the Oracle documentation for performance improvement. We have done the JVM tuning. We tried using UsesBatchWriting and UseDirectSql=true. However, there is no improvement.
We also tried creating an outbound adapter which executes custom SQL that inserts 10,000 records into the destination table (insert into dest_table select * from source_table). There is no performance issue with this approach; the custom SQL executes in less than 2 seconds. We also don't see any performance problem when we update data in the same destination table using any SQL client. Only via the DB Adapter do we face this issue.
Interestingly, in a different setup, a Windows machine with just 1 CPU and 1 GB RAM running 10.1.3 is able to transfer 10,000 records (~2K per record) to a different Oracle database over the network (within the LAN).
Please let me know if you would like to know the setting of any parameter in the system. We would appreciate any help in finding where the bottleneck is.
    Thanks

I'm presuming this is just merge and not insert.
Do ALTER SYSTEM SET sql_trace = TRUE and capture the trace files on the database. It's probably only waiting on 'SQL*Net message from client', but we need to rule that out.
dmstool should show you some of the activity inside the client; it may also be worth doing a truss on the Java process to see which syscalls it is waiting on.
Also, are you up to MLR7, the latest ESB release?
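A minimal sketch of that tracing step (instance-wide; on a busy system ALTER SESSION in the pooled session would be preferable, and the trace file name below is only an example):
-- enable SQL trace while the ESB test case runs
ALTER SYSTEM SET sql_trace = TRUE;
-- ... run the outbound insert test ...
ALTER SYSTEM SET sql_trace = FALSE;
-- then format the trace files from user_dump_dest, e.g.:
--   tkprof orcl_ora_12345.trc out.txt sort=exeela
-- and check whether the time goes into the INSERT itself, the COMMIT,
-- or 'SQL*Net message from client' waits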

  • Report program Performance problem

    Hi All,
one object is taking 30 hours to execute. Someone developed this in 1998, but by now it is a big performance problem. Please can someone help with what to do; I am including the code below.
    *--DOCUMENTATION--
Program written by :  31.03.1998 .
Purpose : this program updates the car status into the table zsdtab1 .
This program is to be scheduled in the background periodically .
Queries can be fired on the table zsdtab1 to get the details of the
    car .
This program looks at the changes made in the material master since the
last updated date and at the new entries in the material master, and
updates the table zsdtab1 .
Changes in the sales order are not taken into account .
To get fresh data, set the value of zupddate in table ZSTATUS to
01.01.1998 . All the data will be refreshed from that date .
    Program Changed on 23/7/2001 after version upgrade 46b by jyoti
    Addition of New tables for Ibase
    tables used -
    tables : mara ,                        " Material master
             ausp ,                        " Characteristics table .
             zstatus ,                     " Last updated status table .
             zsdtab1 ,    " Central database table to be maintained .
             vbap ,                        " Sales order header table .
             vbak ,                        " Sales order item table .
             kna1 ,                        " Customer master .
             vbrk ,
             vbrp ,
             bkpf ,
             bseg ,
             mseg ,
             mkpf ,
             vbpa ,
             vbfa ,
             t005t .                         " Country details tabe .
    --NEW TABLES ADDEDFOR VERSION 4.6B--
    tables :   ibsymbol ,ibin , ibinvalues .
    data : vatinn like ibsymbol-atinn , vatwrt like ibsymbol-atwrt ,
           vatflv like ibsymbol-atflv .
    *--types definition--
    types : begin of mara_itab_type ,
               matnr like mara-matnr ,
               cuobf like mara-cuobf ,
            end of mara_itab_type ,
            begin of ausp_itab_type ,
               atinn like ausp-atinn ,
               atwrt like ausp-atwrt ,
               atflv like ausp-atflv ,
            end of ausp_itab_type .
    data : mara_itab type mara_itab_type occurs 500 with header line ,
           zsdtab1_itab like zsdtab1 occurs 500 with header line ,
           ausp_itab type ausp_itab_type occurs 500 with header line ,
           last_date type d ,
           date type d .
    data: length type i.
    clear mara_itab . refresh mara_itab .
    clear zsdtab1_itab . refresh zsdtab1_itab .
    select single  zupddate into last_date from zstatus
           where programm = 'ZSDDET01' .
    select matnr cuobf into (mara_itab-matnr , mara_itab-cuobf) from mara
          where mtart eq 'FERT' or mtart = 'ZCBU'.
*    where MATNR IN MATERIA
*     and ERSDA IN C_Date
*     and MTART in M_TYP.
        append mara_itab .
    endselect .
    loop at mara_itab.
    clear zsdtab1_itab .
    zsdtab1_itab-commno = mara_itab-matnr .
*   Get the detailed data into internal table ausp_itab .----------->>>
    clear ausp_itab . refresh ausp_itab .
*--change starts--
    select atinn atwrt atflv into (ausp_itab-atinn , ausp_itab-atwrt ,
                               ausp_itab-atflv) from ausp
          where objek = mara_itab-matnr .
          append ausp_itab .
       endselect .
       clear ausp_itab .
    select  atinn  atwrt atflv  into (ausp_itab-atinn , ausp_itab-atwrt ,
    ausp_itab-atflv) from ibin as a inner join ibinvalues as b
                  on a~in_recno = b~in_recno
           inner join  ibsymbol as c
                  on b~symbol_id = c~symbol_id
        where a~instance = mara_itab-cuobf  .
      append ausp_itab .
    endselect .
*----CHANGE ENDS HERE -
    sort ausp_itab by atwrt.
    loop at ausp_itab .
    clear date .
    case ausp_itab-atinn .
      when '0000000094' .
        zsdtab1_itab-model = ausp_itab-atwrt .  " model  .
      when '0000000101' .
        zsdtab1_itab-drive = ausp_itab-atwrt .  " drive
      when '0000000095' .
        zsdtab1_itab-converter = ausp_itab-atwrt . "converter
      when '0000000096' .
        zsdtab1_itab-transmssn = ausp_itab-atwrt . "transmission
      when '0000000097' .
        zsdtab1_itab-colour = ausp_itab-atwrt .    "colour
      when '0000000098' .
        zsdtab1_itab-ztrim = ausp_itab-atwrt .     "trim
      when '0000000103' .
    *=========Sujit 14-Mar-2006
       IF AUSP_ITAB-ATWRT(3) EQ 'WDB' OR AUSP_ITAB-ATWRT(3) EQ 'WDD'
       OR AUSP_ITAB-ATWRT(3) EQ 'WDC' OR AUSP_ITAB-ATWRT(3) EQ 'KPD'.
           ZSDTAB1_ITAB-CHASSIS_NO = AUSP_ITAB-ATWRT+3(14).
       ELSE.
           ZSDTAB1_ITAB-CHASSIS_NO = AUSP_ITAB-ATWRT .     "chassis no
       ENDIF.
*    zsdtab1_itab-chassis_no = ausp_itab-atwrt .    "superseded by the IF above
    *=========14-Mar-2006
      when '0000000166' .
*----25.05.04
      length = strlen( ausp_itab-atwrt ).
      if length < 15.                       "***aded by patil
       zsdtab1_itab-engine_no = ausp_itab-atwrt .     "ENGINE NO
      else.
    zsdtab1_itab-engine_no = ausp_itab-atwrt+13(14)."Aded on 21.05.04 patil
      endif.
*----25.05.04
      when '0000000104' .
        zsdtab1_itab-body_no = ausp_itab-atwrt .     "BODY NO
      when '0000000173' .                                          "21.06.98
        zsdtab1_itab-cockpit = ausp_itab-atwrt .     "COCKPIT NO . "21.06.98
      when '0000000102' .
        zsdtab1_itab-dest = ausp_itab-atwrt .     "destination
      when '0000000105' .
        zsdtab1_itab-airbag = ausp_itab-atwrt .     "AIRBAG
      when '0000000110' .
        zsdtab1_itab-trailer_no = ausp_itab-atwrt .     "TRAILER_NO
      when '0000000109' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-fininspdat = date .   "FIN INSP DATE
      when '0000000108' .
        zsdtab1_itab-entrydate = ausp_itab-atwrt .     "ENTRY DATE
      when '0000000163' .
        zsdtab1_itab-regist_no = ausp_itab-atwrt .     "REGIST_NO
      when '0000000164' .
        zsdtab1_itab-mech_key = ausp_itab-atwrt .     "MECH_KEY
      when '0000000165' .
        zsdtab1_itab-side_ab_rt = ausp_itab-atwrt .     "SIDE_AB_RT
      when '0000000171' .
        zsdtab1_itab-side_ab_lt = ausp_itab-atwrt .     "SIDE_AB_LT
      when '0000000167' .
        zsdtab1_itab-elect_key = ausp_itab-atwrt .     "ELECT_KEY
      when '0000000168' .
        zsdtab1_itab-head_lamp = ausp_itab-atwrt .     "HEAD_LAMP
      when '0000000169' .
        zsdtab1_itab-tail_lamp = ausp_itab-atwrt .     "TAIL_LAMP
      when '0000000170' .
        zsdtab1_itab-vac_pump = ausp_itab-atwrt .     "VAC_PUMP
      when '0000000172' .
        zsdtab1_itab-sd_ab_sn_l = ausp_itab-atwrt .     "SD_AB_SN_L
      when '0000000174' .
        zsdtab1_itab-sd_ab_sn_r = ausp_itab-atwrt .     "SD_AB_SN_R
      when '0000000175' .
        zsdtab1_itab-asrhydunit = ausp_itab-atwrt .     "ASRHYDUNIT
      when '0000000176' .
        zsdtab1_itab-gearboxno = ausp_itab-atwrt .     "GEARBOXNO
      when '0000000177' .
        zsdtab1_itab-battery = ausp_itab-atwrt .     "BATTERY
      when '0000000178' .
        zsdtab1_itab-tyretype = ausp_itab-atwrt .     "TYRETYPE
      when '0000000179' .
        zsdtab1_itab-tyremake = ausp_itab-atwrt .     "TYREMAKE
      when '0000000180' .
        zsdtab1_itab-tyresize = ausp_itab-atwrt .     "TYRESIZE
      when '0000000181' .
        zsdtab1_itab-rr_axle_no = ausp_itab-atwrt .     "RR_AXLE_NO
      when '0000000183' .
        zsdtab1_itab-ff_axl_nor = ausp_itab-atwrt .     "FF_AXLE_NO_rt
      when '0000000182' .
        zsdtab1_itab-ff_axl_nol = ausp_itab-atwrt .     "FF_AXLE_NO_lt
      when '0000000184' .
        zsdtab1_itab-drivairbag = ausp_itab-atwrt .     "DRIVAIRBAG
      when '0000000185' .
        zsdtab1_itab-st_box_no = ausp_itab-atwrt .     "ST_BOX_NO
      when '0000000186' .
        zsdtab1_itab-transport = ausp_itab-atwrt .     "TRANSPORT
      when '0000000106' .
        zsdtab1_itab-trackstage = ausp_itab-atwrt .  " tracking stage
      when '0000000111' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_1 = date .    " tracking date for 1.
      when '0000000112' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_5 = date .    " tracking date for 5.
      when '0000000113' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_10 = date .   "tracking date for 10
      when '0000000114' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_15 = date .   "tracking date for 15
      when '0000000115' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_20 = date .   " tracking date for 20
      when '0000000116' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_25 = date .   " tracking date for 25
      when '0000000117' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_30 = date .   "tracking date for 30
      when '0000000118' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_35 = date .   "tracking date for 35
      when '0000000119' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_40 = date .   " tracking date for 40
      when '0000000120' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_45 = date .   " tracking date for 45
      when '0000000121' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_50 = date .   "tracking date for 50
      when '0000000122' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_55 = date .   "tracking date for 55
      when '0000000123' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_60 = date .   " tracking date for 60
      when '0000000124' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_65 = date .   " tracking date for 65
      when '0000000125' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_70 = date .   "tracking date for 70
      when '0000000126' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_75 = date .   "tracking date for 75
      when '0000000127' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_78 = date .   " tracking date for 78
      when '0000000203' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_79 = date .   " tracking date for 79
      when '0000000128' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_80 = date .   " tracking date for 80
      when '0000000129' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_85 = date .   "tracking date for 85
      when '0000000130' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_90 = date .   "tracking date for 90
      when '0000000131' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dat_trk_95 = date .   "tracking date for 95
      when '0000000132' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dattrk_100 = date .   " tracking date for100
      when '0000000133' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dattrk_110 = date .   " tracking date for110
      when '0000000134' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dattrk_115 = date .   "tracking date for 115
      when '0000000135' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dattrk_120 = date .   "tracking date for 120
      when '0000000136' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-dattrk_105 = date .   "tracking date for 105
      when '0000000137' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_1 = date .     "plan trk date for 1
      when '0000000138' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_5 = date .     "plan trk date for 5
      when '0000000139' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_10 = date .    "plan trk date for 10
      when '0000000140' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_15 = date .    "plan trk date for 15
      when '0000000141' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_20 = date .    "plan trk date for 20
      when '0000000142' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_25 = date .    "plan trk date for 25
      when '0000000143' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_30 = date .    "plan trk date for 30
      when '0000000144' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_35 = date .    "plan trk date for 35
      when '0000000145' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_40 = date .    "plan trk date for 40
      when '0000000146' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_45 = date .    "plan trk date for 45
      when '0000000147' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_50 = date .    "plan trk date for 50
      when '0000000148' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_55 = date .    "plan trk date for 55
      when '0000000149' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_60 = date .    "plan trk date for 60
      when '0000000150' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_65 = date .    "plan trk date for 65
      when '0000000151' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_70 = date .    "plan trk date for 70
      when '0000000152' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_75 = date .    "plan trk date for 75
      when '0000000153' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_78 = date .    "plan trk date for 78
      when '0000000202' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_79 = date .    "plan trk date for 79
      when '0000000154' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_80 = date .    "plan trk date for 80
      when '0000000155' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_85 = date .    "plan trk date for 85
      when '0000000156' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_90 = date .    "plan trk date for 90
      when '0000000157' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_95 = date .    "plan trk date for 95
      when '0000000158' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_100 = date .   "plan trk date for 100
      when '0000000159' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_105 = date .   "plan trk date for 105
      when '0000000160' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_110 = date .   "plan trk date for 110
      when '0000000161' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_115 = date .   "plan trk date for 115
      when '0000000162' .
        perform date_convert using  ausp_itab-atflv changing date .
        zsdtab1_itab-pdt_tk_120 = date .   "plan trk date for 120
    ********Additional fields / 24.05.98**********************************
      when '0000000099' .
        case ausp_itab-atwrt .
          when '540' .
            zsdtab1_itab-roll_blind = 'X' .
          when '482' .
            zsdtab1_itab-ground_clr = 'X' .
          when '551' .
            zsdtab1_itab-anti_theft = 'X' .
          when '882' .
            zsdtab1_itab-anti_tow = 'X' .
          when '656' .
            zsdtab1_itab-alloy_whel = 'X' .
          when '265' .
            zsdtab1_itab-del_class = 'X' .
          when '280' .
            zsdtab1_itab-str_wheel = 'X' .
          when 'CDC' .
            zsdtab1_itab-cd_changer = 'X' .
          when '205' .
            zsdtab1_itab-manual_eng = 'X' .
          when '273' .
            zsdtab1_itab-conn_handy = 'X' .
          when '343' .
            zsdtab1_itab-aircleaner = 'X' .
          when '481' .
            zsdtab1_itab-metal_sump = 'X' .
          when '533' .
            zsdtab1_itab-speaker = 'X' .
          when '570' .
            zsdtab1_itab-arm_rest = 'X' .
          when '580' .
            zsdtab1_itab-aircond = 'X' .
          when '611' .
            zsdtab1_itab-exit_light = 'X' .
          when '613' .
            zsdtab1_itab-headlamp = 'X' .
          when '877' .
            zsdtab1_itab-readlamp = 'X' .
          when '808' .
            zsdtab1_itab-code_ckd = 'X' .
          when '708' .
            zsdtab1_itab-del_prt_lc = 'X' .
          when '593' .
            zsdtab1_itab-ins_glass = 'X' .
          when '955' .
            zsdtab1_itab-zelcl = 'Elegance' .
          when '593' .
            zsdtab1_itab-zelcl = 'Classic' .
        endcase .
    endcase .
    endloop .
    *--Update the sales data .--
    perform get_sales_order using mara_itab-matnr .
    perform get_cartype using mara_itab-matnr .
    append zsdtab1_itab .
    endloop.
*<<<
    loop at zsdtab1_itab .
      if zsdtab1_itab-cartype <> 'W-203'
      or zsdtab1_itab-cartype <> 'W-210'
      or zsdtab1_itab-cartype <> 'W-211'.
          clear zsdtab1_itab-zelcl.
      endif.
*SELECT SINGLE * FROM ZSDTAB1 WHERE COMMNO = MARA_ITAB-MATNR .
    select single * from zsdtab1 where commno = zsdtab1_itab-commno.
    if sy-subrc <> 0 .
        insert into zsdtab1 values zsdtab1_itab .
    else .
    update zsdtab1 set vbeln = zsdtab1_itab-vbeln
                       bill_doc = zsdtab1_itab-bill_doc
                       dest = zsdtab1_itab-dest
                       lgort = zsdtab1_itab-lgort
                       ship_tp = zsdtab1_itab-ship_tp
                       country = zsdtab1_itab-country
                       kunnr = zsdtab1_itab-kunnr
                       vkbur = zsdtab1_itab-vkbur
                       customer = zsdtab1_itab-customer
                       city   = zsdtab1_itab-city
                       region = zsdtab1_itab-region
                       model = zsdtab1_itab-model
                       drive = zsdtab1_itab-drive
                       converter = zsdtab1_itab-converter
                       transmssn = zsdtab1_itab-transmssn
                       colour = zsdtab1_itab-colour
                       ztrim = zsdtab1_itab-ztrim
                       commno = zsdtab1_itab-commno
                       trackstage = zsdtab1_itab-trackstage
                       chassis_no    =   zsdtab1_itab-chassis_no
                       engine_no     =   zsdtab1_itab-engine_no
                       body_no       =   zsdtab1_itab-body_no
                       cockpit       =   zsdtab1_itab-cockpit
                       airbag        =   zsdtab1_itab-airbag
                       trailer_no    =   zsdtab1_itab-trailer_no
                       fininspdat    =   zsdtab1_itab-fininspdat
                       entrydate     =   zsdtab1_itab-entrydate
                       regist_no     =   zsdtab1_itab-regist_no
                       mech_key      =   zsdtab1_itab-mech_key
                       side_ab_rt    =   zsdtab1_itab-side_ab_rt
                       side_ab_lt    =   zsdtab1_itab-side_ab_lt
                       elect_key     =   zsdtab1_itab-elect_key
                       head_lamp     =   zsdtab1_itab-head_lamp
                       tail_lamp     =   zsdtab1_itab-tail_lamp
                       vac_pump      =   zsdtab1_itab-vac_pump
                       sd_ab_sn_l    =   zsdtab1_itab-sd_ab_sn_l
                       sd_ab_sn_r    =   zsdtab1_itab-sd_ab_sn_r
                       asrhydunit    =   zsdtab1_itab-asrhydunit
                       gearboxno     =   zsdtab1_itab-gearboxno
                       battery       =   zsdtab1_itab-battery
                       tyretype      =   zsdtab1_itab-tyretype
                       tyremake      =   zsdtab1_itab-tyremake
                       tyresize      =   zsdtab1_itab-tyresize
                       rr_axle_no    =   zsdtab1_itab-rr_axle_no
                       ff_axl_nor    =   zsdtab1_itab-ff_axl_nor
                       ff_axl_nol    =   zsdtab1_itab-ff_axl_nol
                       drivairbag    =   zsdtab1_itab-drivairbag
                       st_box_no     =   zsdtab1_itab-st_box_no
                       transport     =   zsdtab1_itab-transport
*   OPTIONS-
                       roll_blind    = zsdtab1_itab-roll_blind
                       ground_clr    = zsdtab1_itab-ground_clr
                       anti_theft    = zsdtab1_itab-anti_theft
                       anti_tow      = zsdtab1_itab-anti_tow
                       alloy_whel    = zsdtab1_itab-alloy_whel
                       del_class     = zsdtab1_itab-del_class
                       str_wheel     = zsdtab1_itab-str_wheel
                       cd_changer    = zsdtab1_itab-cd_changer
                       manual_eng    = zsdtab1_itab-manual_eng
                       conn_handy    = zsdtab1_itab-conn_handy
                       aircleaner    = zsdtab1_itab-aircleaner
                       metal_sump    = zsdtab1_itab-metal_sump
                       speaker       = zsdtab1_itab-speaker
                       arm_rest      = zsdtab1_itab-arm_rest
                       aircond       = zsdtab1_itab-aircond
                       exit_light    = zsdtab1_itab-exit_light
                       headlamp      = zsdtab1_itab-headlamp
                       readlamp      = zsdtab1_itab-readlamp
                       code_ckd      = zsdtab1_itab-code_ckd
                       del_prt_lc    = zsdtab1_itab-del_prt_lc
                       ins_glass     = zsdtab1_itab-ins_glass
                       dat_trk_1 = zsdtab1_itab-dat_trk_1
                       dat_trk_5 = zsdtab1_itab-dat_trk_5
                       dat_trk_10 = zsdtab1_itab-dat_trk_10
                       dat_trk_15 = zsdtab1_itab-dat_trk_15
                       dat_trk_20 = zsdtab1_itab-dat_trk_20
                       dat_trk_25 = zsdtab1_itab-dat_trk_25
                       dat_trk_30 = zsdtab1_itab-dat_trk_30
                       dat_trk_35 = zsdtab1_itab-dat_trk_35
                       dat_trk_40 = zsdtab1_itab-dat_trk_40
                       dat_trk_45 = zsdtab1_itab-dat_trk_45
                       dat_trk_50 = zsdtab1_itab-dat_trk_50
                       dat_trk_55 = zsdtab1_itab-dat_trk_55
                       dat_trk_60 = zsdtab1_itab-dat_trk_60
                       dat_trk_65 = zsdtab1_itab-dat_trk_65
                       dat_trk_70 = zsdtab1_itab-dat_trk_70
                       dat_trk_75 = zsdtab1_itab-dat_trk_75
                       dat_trk_78 = zsdtab1_itab-dat_trk_78
                       dat_trk_79 = zsdtab1_itab-dat_trk_79
                       dat_trk_80 = zsdtab1_itab-dat_trk_80
                       dat_trk_85 = zsdtab1_itab-dat_trk_85
                       dat_trk_90 = zsdtab1_itab-dat_trk_90
                       dat_trk_95 = zsdtab1_itab-dat_trk_95
                       dattrk_100 = zsdtab1_itab-dattrk_100
                       dattrk_105 = zsdtab1_itab-dattrk_105
                       dattrk_110 = zsdtab1_itab-dattrk_110
                       dattrk_115 = zsdtab1_itab-dattrk_115
                       dattrk_120 = zsdtab1_itab-dattrk_120
                       pdt_tk_1 = zsdtab1_itab-pdt_tk_1
                       pdt_tk_5 = zsdtab1_itab-pdt_tk_5
                       pdt_tk_10 = zsdtab1_itab-pdt_tk_10
                       pdt_tk_15 = zsdtab1_itab-pdt_tk_15
                       pdt_tk_20 = zsdtab1_itab-pdt_tk_20
                       pdt_tk_25 = zsdtab1_itab-pdt_tk_25
                       pdt_tk_30 = zsdtab1_itab-pdt_tk_30
                       pdt_tk_35 = zsdtab1_itab-pdt_tk_35
                       pdt_tk_40 = zsdtab1_itab-pdt_tk_40
                       pdt_tk_45 = zsdtab1_itab-pdt_tk_45
                       pdt_tk_50 = zsdtab1_itab-pdt_tk_50
                       pdt_tk_55 = zsdtab1_itab-pdt_tk_55
                       pdt_tk_60 = zsdtab1_itab-pdt_tk_60
                       pdt_tk_65 = zsdtab1_itab-pdt_tk_65
                       pdt_tk_70 = zsdtab1_itab-pdt_tk_70
                       pdt_tk_75 = zsdtab1_itab-pdt_tk_75
                       pdt_tk_78 = zsdtab1_itab-pdt_tk_78
                       pdt_tk_79 = zsdtab1_itab-pdt_tk_79
                       pdt_tk_80 = zsdtab1_itab-pdt_tk_80
                       pdt_tk_85 = zsdtab1_itab-pdt_tk_85
                       pdt_tk_90 = zsdtab1_itab-pdt_tk_90
                       pdt_tk_95 = zsdtab1_itab-pdt_tk_95
                       pdt_tk_100 = zsdtab1_itab-pdt_tk_100
                       pdt_tk_105 = zsdtab1_itab-pdt_tk_105
                       pdt_tk_110 = zsdtab1_itab-pdt_tk_110
                       pdt_tk_115 = zsdtab1_itab-pdt_tk_115
                       pdt_tk_120 = zsdtab1_itab-pdt_tk_120
                       cartype = zsdtab1_itab-cartype
                       zelcl = zsdtab1_itab-zelcl
                       excise_no = zsdtab1_itab-excise_no
    where commno = zsdtab1_itab-commno .
*   Update table .---------<<<
    endif .
    endloop .
    perform update_excise_date .
    perform update_post_goods_issue_date .
    perform update_time.
*///////////////////// end of program /////////////////////////////////
*   Get sales data -
    form get_sales_order using matnr .
      data : corr_vbeln like vbrk-vbeln .
*   ADDED BY ADITYA / 22.06.98 **************************************
    perform get_order using matnr .
    select single vbeln lgort into (zsdtab1_itab-vbeln , zsdtab1_itab-lgort)
                from vbap where matnr = matnr .   " C-22.06.98
*              from vbap where vbeln = zsdtab1_itab-vbeln .
      if sy-subrc = 0 .
    ************Get the Excise No from Allocation Field*******************
        select single * from zsdtab1 where commno = matnr .
        if zsdtab1-excise_no =  '' .
          select * from vbrp where matnr = matnr .
            select single vbeln into corr_vbeln from vbrk where
            vbeln = vbrp-vbeln and vbtyp = 'M'.
            if sy-subrc eq 0.
              select single * from vbrk where vbtyp = 'N'
              and sfakn = corr_vbeln.      "cancelled doc.
              if sy-subrc ne 0.
                select single * from vbrk where vbeln = corr_vbeln.
                if sy-subrc eq 0.
                  data : year(4) .
                  move sy-datum+0(4) to year .
      select single * from bkpf where awtyp = 'VBRK' and awkey = vbrk-vbeln
                                      and  bukrs = 'MBIL' and gjahr = year .
                  if sy-subrc = 0 .
      select single * from bseg where bukrs = 'MBIL' and belnr = bkpf-belnr
                                       and gjahr = year and koart = 'D' and
                                                               shkzg = 'S' .
                    zsdtab1_itab-excise_no = bseg-zuonr .
                  endif .
                endif.
              endif.
            endif.
          endselect.
        endif .
        select single kunnr vkbur into (zsdtab1_itab-kunnr ,
                zsdtab1_itab-vkbur) from vbak
                where vbeln = zsdtab1_itab-vbeln .
        if sy-subrc = 0 .
          select single name1 ort01 regio into (zsdtab1_itab-customer ,
             zsdtab1_itab-city , zsdtab1_itab-region) from kna1
             where kunnr = zsdtab1_itab-kunnr .
        endif.
*   Get Ship to Party **************************************************
        select single * from vbpa where vbeln = zsdtab1_itab-vbeln and
                        parvw = 'WE' .
        if sy-subrc = 0 .
            zsdtab1_itab-ship_tp = vbpa-kunnr .
*   Get Destination Country of Ship to Party .************
            select single * from kna1 where kunnr = vbpa-kunnr .
            if sy-subrc = 0 .
               select single * from t005t where land1 = kna1-land1
                                       and spras = 'E' .
               if sy-subrc = 0 .
                   zsdtab1_itab-country = t005t-landx .
               endif .
            endif .
        endif .
      endif .
    endform.                               " GET_SALES
    form update_time.
      update zstatus set zupddate = sy-datum
                         uzeit = sy-uzeit
      where programm = 'ZSDDET01' .
    endform.                               " UPDATE_TIME
    *&      Form  DATE_CONVERT
    form date_convert using atflv changing date .
      data : dt(8) , dat type i .
      dat = atflv .
      dt = dat .
      date = dt .
    endform.                               " DATE_CONVERT
    *&      Form  UPDATE_POST_GOODS_ISSUE_DATE
    form update_post_goods_issue_date .
      types : begin of itab1_type ,
                mblnr like mseg-mblnr ,
                budat like mkpf-budat ,
              end of itab1_type .
      data : itab1 type itab1_type occurs 10 with header line .
      loop at mara_itab .
        select single * from zsdtab1 where commno = mara_itab-matnr .
        if sy-subrc =  0  and zsdtab1-postdate =  '00000000' .
          refresh itab1 . clear itab1 .
        select * from mseg where matnr = mara_itab-matnr and bwart = '601' .
            itab1-mblnr = mseg-mblnr .
            append itab1 .
          endselect .
          loop at itab1 .
            select single * from mkpf where mblnr = itab1-mblnr .
            if sy-subrc = 0 .
              itab1-budat = mkpf-budat .
              modify itab1 .
            endif .
          endloop .
          sort itab1 by budat .
          read table itab1 index 1 .
          if sy-subrc = 0 .
            update zsdtab1 set postdate = itab1-budat
                         where commno = mara_itab-matnr .
          endif .
        endif .
      endloop .
    endform.                               " UPDATE_POST_GOODS_ISSUE_DATE
    *&      Form  UPDATE_EXCISE_DATE
    form update_excise_date.
      types : begin of itab2_type ,
                mblnr like mseg-mblnr ,
                budat like mkpf-budat ,
              end of itab2_type .
      data : itab2 type itab2_type occurs 10 with header line .
      loop at mara_itab .
        select single * from zsdtab1 where commno = mara_itab-matnr .
        if sy-subrc =  0  and zsdtab1-excise_dat  = '00000000' .
          refresh itab2 . clear itab2 .
          select * from mseg where matnr = mara_itab-matnr and
                                  (  bwart = '601' or  bwart = '311' ) .
            itab2-mblnr = mseg-mblnr .
            append itab2 .
          endselect .
          loop at itab2 .
            select single * from mkpf where mblnr = itab2-mblnr .
            if sy-subrc = 0 .
              itab2-budat = mkpf-budat .
              modify itab2 .
            endif .
          endloop .
          sort itab2 by budat .
          read table itab2 index 1 .
          if sy-subrc = 0 .
            update zsdtab1 set excise_dat = itab2-budat
                         where commno = mara_itab-matnr .
          endif .
        endif .
      endloop .
    endform.                               " UPDATE_EXCISE_DATE
    form get_order using matnr .
    types :  begin of itab_type ,
                vbeln like vbap-vbeln ,
                posnr like vbap-posnr ,
             end of itab_type .
    data : itab type itab_type occurs 10 with header line .
    refresh itab . clear itab .
    select * from vbap where matnr = mara_itab-matnr .
       itab-vbeln = vbap-vbeln .
       itab-posnr = vbap-posnr .
       append itab .
    endselect .
    loop at itab .
      select single * from vbak where vbeln = itab-vbeln .
      if vbak-vbtyp <> 'C' .
        delete itab .
      endif .
    endloop .
    loop at itab .
    select single * from vbfa where vbelv = itab-vbeln and
             posnv = itab-posnr and vbtyp_n = 'H' .
    if sy-subrc = 0 .
      delete itab .
    endif .
    endloop .
    clear :  zsdtab1_itab-vbeln ,  zsdtab1_itab-bill_doc .
    loop at itab .
      zsdtab1_itab-vbeln = itab-vbeln .
      select single * from vbfa where vbelv = itab-vbeln and
             posnv = itab-posnr and vbtyp_n = 'M' .
    if sy-subrc = 0 .
      zsdtab1_itab-bill_doc = vbfa-vbeln .
    endif .
    endloop .
    endform .
    *&      Form  GET_CARTYPE
    form get_cartype using matnr .
    select single * from mara where matnr = matnr .
    zsdtab1_itab-cartype = mara-satnr .
    endform.                    " GET_CARTYPE

Hi,
I have analysed your program and I would like to share the following points for better performance of this report:
(a) Use explicit field lists instead of SELECT * or SELECT SINGLE *. Naming only the fields you need consumes far fewer resources inside the loops, and you have a lot of SELECT SINGLE * statements, many of them against very big tables like VBAP.
(b) Trace with ST05 to see which particular queries are hurting the system most, or use ST12 in current mode with a reduced input set that runs the report in 20-30 minutes, so that you get an idea which queries are taking most of the time.
(c) Sort your internal tables properly and use BINARY SEARCH when reading them (see the sketch below).
I think this will help.
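As an illustration of (a) and (c), here is a minimal, hypothetical sketch of replacing the SELECT SINGLE * FROM vbap fired once per material inside the loop with a single field-list read plus a sorted-table lookup (the exact field list must of course match what the report really needs):
* hypothetical sketch -- one array fetch instead of SELECT SINGLE per material
types : begin of ty_vbap ,
          matnr like vbap-matnr ,
          vbeln like vbap-vbeln ,
          lgort like vbap-lgort ,
        end of ty_vbap .
data : lt_vbap type standard table of ty_vbap with header line .
* FOR ALL ENTRIES must only run when the driver table is not empty,
* otherwise the WHERE clause is dropped and the whole table is read
if not mara_itab[] is initial .
  select matnr vbeln lgort from vbap
         into table lt_vbap
         for all entries in mara_itab
         where matnr = mara_itab-matnr .
endif .
sort lt_vbap by matnr .
loop at mara_itab .
* binary search on the sorted table replaces a database round trip
  read table lt_vbap with key matnr = mara_itab-matnr binary search .
  if sy-subrc = 0 .
    zsdtab1_itab-vbeln = lt_vbap-vbeln .
    zsdtab1_itab-lgort = lt_vbap-lgort .
  endif .
endloop .
The same pattern applies to the MSEG/MKPF and VBFA lookups in the UPDATE_* and GET_ORDER forms.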
    Thanks and Regards,
    Harsh

  • Report painter performance problem...

I have a client which runs a report group consisting of 14 reports... When we run this program, it takes about 20 minutes to get results... I was assigned to optimize this report...
This is what I've done so far
(this is an SAP-generated program)...
1. I've checked the tables that the program is using... (a customized table with more than 20,000 entries, and many others)
2. I've created secondary indexes on the main customized table with 20,000 entries - it improves the performance a bit (results in about 18 minutes)...
3. I divided the report group into smaller groups of 3 reports each... It greatly improves the performance... (but this is not what the client wants)...
4. I've read an article about report group performance saying that it is a bug.
(SAP support recognized the fact that we are dealing with a bug in the SAP standard functionality)
http://it.toolbox.com/blogs/sap-on-db2/sap-report-painter-performance-problem-26000
Has anyone had the same problem as mine?

Report Painter/Writer always creates a performance issue. I have never preferred them, since I have the option of writing a Z report instead.
For now you can do only one thing: put more checks on the selection screen to filter the data. I think that's the only way (a tiny sketch of what that might look like follows).
Amit.
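A tiny hypothetical sketch of such selection-screen checks for a Z report (table and field names are made up; the point is to force the user to restrict the data volume before the report runs):
* hypothetical sketch -- obligatory filters cut the data read by the report
tables : zcustom .                           " the customized table
select-options : s_bukrs for zcustom-bukrs obligatory ,
                 s_gjahr for zcustom-gjahr obligatory .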
