Query performance differs between databases

Hi,
I have a payroll query which executes in 1 minute in one database.
The same query takes hours to execute in another instance and still produces no results.
Gather Schema Statistics has also been run, but with no improvement.
Any suggestions on this?
Thanks
--Kumar

Thank you so much for the advice.
I got my issue resolved.
After comparing the explain plans,
I ran "Gather Table Statistics" for the two tables which I suspected.
Now it works fine, taking no more than 20 seconds.
Thanks
--Kumar
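
For reference, a minimal sketch of the fix described above: gathering statistics on a suspect table with DBMS_STATS (the schema and table names here are placeholders, not the actual payroll tables):

BEGIN
   -- Placeholder owner/table; repeat for each suspect table.
   DBMS_STATS.GATHER_TABLE_STATS(
      ownname          => 'APPS',
      tabname          => 'PAY_RUN_RESULTS',
      cascade          => TRUE,   -- refresh index statistics as well
      estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/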

Similar Messages

  • Query for UDT over different Databases on another Server

    Hi,
    I want to query a UDT from another database on another server...
    Normal tables can be used like this:
    select * from [SVR-Name].Database.dbo.OITM
    But if I try the same for a UDT:
    select * from [SVR-Name].Database.dbo.[@UDT]
    it doesn't work.
    Does anyone have an idea for me?
    Thx.
    Markus

    Thx for your replies,
    a query across different databases on the same SQL Server is no problem.
    ... but I want to select from a UDT that is on another SAP server (at another location) in a different database.
    Servername = SVR
    Database = LIVE
    UDT = [@portfolio]
    select * from [SVR].LIVE.DBO.[@portfolio]
    doesn't work
    (a normal query to reach OITM, for example
    select * from [SVR].LIVE.DBO.OITM
    works fine!).
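
    One workaround to try (an untested sketch, assuming the linked server SVR is already configured): pass the statement through with OPENQUERY, so that the remote server's own parser resolves the @-prefixed UDT name:

    -- The remote server parses the query text itself, so the
    -- bracketed @-name is resolved on SVR rather than locally.
    SELECT *
    FROM OPENQUERY(SVR, 'SELECT * FROM LIVE.dbo.[@portfolio]')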

  • Compare performance of two different databases... any idea for a report?

    Dear Colleagues,
    We would like to compare the performance of two different databases running under SAP. We don't want to start with LoadRunner; we are looking more for an ABAP program which generates a lot of load on the database, so we could just run the program on both databases. Any idea for a good report? We thought about SGEN, but we think it exercises the application server more than the database.
    Regards,
    alexander

    Hello,
    If you are not willing to use tools like HP LoadRunner or IBM Rational performance testing,
    SGEN could be an option. It uses the DB quite intensively, and it will be hard to find a report that performs only DB accesses.
    SGEN at least allows you to test reads (REPOSRC, DYNPSOURCE...) and writes (REPOLOAD, DYNPLOAD...). If each DB is on the same server as its SAP system and both servers have the same power, it can be a good way to test.
    Best regards

  • Import of BaseView to a different schema in database

    Hi All,
    We have developed some BAM reports. The base view and meta view refer to a schema name (say SchemaName1) in the database.
    In the plans we are using "SQL query" to get data from the database, so we have queries like "Select SchemaName1.table1.column1 ... from ...".
    We have now taken an export of the base view, meta view, plans and all other components of BAM.
    All is fine till now... but now I need to import all of this into a different database. My problem is that in the new database instance I cannot use the same schema name (i.e. SchemaName1). I have a different schema name there (say SchemaName2) and I need to use that; there is no way I can use SchemaName1.
    Will the import of base view/meta view/plans work, or do we need to make configuration changes during the import?
    Do we have to create the base view for the new schema name from scratch, or can the imported code be used?
    The plans have SQL queries referring to SchemaName1, but we need them to refer to the other schema (SchemaName2). What is the possible way of referring to the new schema without developing the plans again?
    Waiting for your valuable feedback...

    ***Backup your repository database***
    ***Please review all steps before attempting***
    Sarputil Export
    1.     Start | Run, type “sarputil”
    2.     Do a partial export. (Note down directory that will contain files to be exported.).
    3.     Select your Baseview, Metaview, Security:Login Profile, and Plan. (Note, the check boxes are in front of each object name and might appear very light on some clients.)
    4.     On the XML dialog and with the question on what you would like to do – select Execute Now.
    5.     Finish the wizard and from the directory noted above, verify .csv files were created. (Typically C:\OracleBAM\EnterpriseLink\Data\rp\export).
    6.     Copy all .csv files and paste to different temporary folder on 2nd server.
    Sarputil Import
    1.     Start | Run, type “sarputil” on the 2nd server where you’ll import into.
    2.     Partial import
    3.     Under Repository Information dialog, set the Data Source information on the right and remember to change the directory now containing your .csv files.
    4.     Like before, select “Execute now” and Finish the wizard.
    Oracle BAM Admin
    Modify connect string (host string or tns name) if you need to:
    1.     Connect in Oracle BAM Admin
    2.     Expand Baseviews and select the imported Baseview
    3.     Check under “Server” on the left if the connect string is correct.
    4.     If not, click Modify button and change.
    Modify Baseview Login if you need to:
    1.     With your Baseview still selected on the left side, click on “Baseview Logins” tab
    2.     Logins on the left correspond to actual database user id’s and password. Add a new Login with a database User ID and Password to access the new schema.
    3.     Associate or Set the new Login (right) to the BAM User on the left.
    Sarpbv Modification
    Use sarpbv utility to change references of the old schema name to a new schema name.
    General syntax for Oracle:
    sarpbv /R"username:pwd:Oracle:TNS Name::DB UserID:DBUser pwd” /B"BaseView name" /O"NewSchema"
    **Use Capital letters for New Schema name!
    **Notice 2 colons (::) after TNS Name (because you do not have a database name in Oracle).
    1.     Open DOS
    2.     Type your sarpbv command. Below is an example where I changed a Baseview called “scott” to use a new schema name called Jack.
    sarpbv /R"sa::ORACLE:baminst::sagent:sagent" /B"scott" /O"JACK"
    3.     Open Design Studio, locate the plan and check the SQL Query now reflects the new schema.
    4.     Test the plan to ensure it’s pulling data correctly from the new schema.

  • Same Query returning different result (Different execution plan)

    Hi all,
    Today I discovered a strange thing: a query that returns a different result when using a different execution plan.
    The query:
    SELECT  *
      FROM schema.table@database a
    WHERE     column1 IN ('3')
           AND column2 = '101'
           AND EXISTS
                  (SELECT null
                     FROM schema.table2 c
                    WHERE a.column3 = SUBSTR (c.column1, 2, 12));
    where schema.table@database is a remote table.
    When executed with the hint /*+ ordered use_nl(a c) */ the query returns no rows, and its execution plan is:
    Rows     Row Source Operation
          0  NESTED LOOPS  (cr=31 r=0 w=0 time=4894659 us)
       4323   SORT UNIQUE (cr=31 r=0 w=0 time=50835 us)
       4336    TABLE ACCESS FULL TABLE2 (cr=31 r=0 w=0 time=7607 us)
          0   REMOTE  (cr=0 r=0 w=0 time=130536 us)
    When I change the execution plan with the hint /*+ use_hash(c a) */, I get:
    Rows     Row Source Operation
       3702  HASH JOIN SEMI (cr=35 r=0 w=0 time=497839 us)
      22556   REMOTE  (cr=0 r=0 w=0 time=401176 us)
       4336   TABLE ACCESS FULL TABLE2 (cr=35 r=0 w=0 time=7709 us)
    It seems that when the execution plan changes, the remote part of the query returns no rows.
    Is this a bug, or have I missed something?
    PS: The two tables are not subject to any insert or update statements.
    Oracle version: 9.2.0.2.0
    System version: HP-UX v1
    Thanks.

    H.Mahmoud wrote:
    Oracle version: 9.2.0.2.0
    System version: HP-UX v1
    Hard to say. You're using a very old and deprecated version of the database, and one that was known to contain bugs.
    9.2.0.7 was really the lowest version of 9i that was considered 'stable', but even so, it's old and lacking in many ways.
    Consider upgrading to the latest database version at your earliest opportunity (or at least apply patches up to the latest 9i patch set before investigating whether you are hitting a bug in your very buggy release).

  • Same query different timings, different sessions at the same time

    I am running exactly the same query from 2 different sessions almost simultaneously; in one session it takes 2 seconds and in the other it takes 20 seconds. The explain plans in both sessions (via SET AUTOTRACE ON) are exactly the same. The timing is almost the same for successive runs of the query in the same session: when I rerun the query in the "slow session" it always takes around 20 seconds, and when I rerun it in the "fast session" it is always fast. The queries are run within a few seconds of each other, so the load on the database is almost the same.
    My hunch is that a database parameter needs to be changed to solve this problem. Can someone guide me on which parameters I should ask our DBAs to adjust? Our database is Oracle 10g.
    Regards
    Amitabha

    Duplicate thread
    Same query different timings, different sessions at the same time
    Gints Plivna
    http://www.gplivna.eu
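
    Before asking the DBAs to change parameters, one low-risk diagnostic (a sketch; the two SIDs are placeholders you would first look up in v$session) is to compare what the fast and the slow session actually wait on:

    SELECT sid, event, total_waits, time_waited
      FROM v$session_event
     WHERE sid IN (:fast_sid, :slow_sid)
     ORDER BY time_waited DESC;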

  • Insert, search, delete performance overhead for different collections

    Hi,
    I am trying to create a table which compares the performance overheads of different collection data structures. Does anyone want to help me out? I want to put a number from 1 to 9 in each of the question marks, 1 being very poor performance and 9 being very good performance. (The reason I am doing this is that I had this question in a job interview test and I didn't pass it.)
    anyone have any comments?
                Searching   Inserting   Deleting
    ArrayList       ?           ?           ?
    LinkedList      ?           ?           ?
    TreeSet         ?           ?           ?
    TreeMap         ?           ?           ?
    HashMap         ?           ?           ?
    HashSet         ?           ?           ?
    Stack           ?           ?           ?

    sorry the formatting has screwed up a bit when I posted it. It should have a list of the collection types and three columns (inserting, deleting, searching)

  • Execute the same query twice, get two different results

    I have a query that returns two different results:
    Oracle Version : 10.2.0.1.0
    I am running the following query on the Oracle server in SQL*Plus Worksheet.
    SELECT COUNT(*)
    FROM AEJOURNAL_S1
    WHERE CHAR_TIME BETWEEN TO_DATE('12-AUG-10 01:17:39 PM','DD-MON-YY HH:MI:SS AM') AND
    TO_DATE('13-AUG-10 14:17:34','DD-MON-YY HH24:MI:SS')
    AND DESC2 LIKE '%'
    AND DESC1 LIKE '%'
    AND DESC2 LIKE '%'
    AND ETYPE LIKE '%'
    AND MODULE LIKE '%'
    AND LEVELL = '11-WARNING'
    ORDER BY ORDD DESC;
    The very first time the query is run, it will return a count of 259. The next time the query is run, let's say 10 seconds later, it will return a count of 260. The above query is representative of the kind of thing I'm trying to do. It seems that the more fields are filtered against '%', the more random the returned count becomes. Sometimes you have to execute the query three or four times before it levels out to a consistent number.
    I'm using '%' as the default for various fields because this was the easiest way to support a data-driven web interface. Maybe I have to build the entire WHERE clause 'dynamically', instead of just parameterizing the elements with a default of '%'. Anyway, to eliminate the web interface, for troubleshooting purposes the above query was run directly on the Oracle server.
    This query runs against a view. The view does a transpose of data from a table.
    Below is the view AEJOURNAL_S1
    SELECT
    CHAR_TIME,
    CHAR_INST,
    BATCH_ID,
    MIN(DECODE(CHAR_ID,6543,CHAR_VALUE)) AS ORDD,
    MIN(DECODE(CHAR_ID,6528,CHAR_VALUE)) AS AREAA,
    MIN(DECODE(CHAR_ID,6529,CHAR_VALUE)) AS ATT,
    COALESCE(MIN(DECODE(CHAR_ID,6534,CHAR_VALUE)),'N/A') AS CATAGORY,
    MIN(DECODE(CHAR_ID,6535,CHAR_VALUE)) AS DESC1,
    MIN(DECODE(CHAR_ID,6536,CHAR_VALUE)) AS DESC2,
    MIN(DECODE(CHAR_ID,6537,CHAR_VALUE)) AS ETYPE,
    MIN(DECODE(CHAR_ID,6538,CHAR_VALUE)) AS LEVELL,
    MIN(DECODE(CHAR_ID,6539,CHAR_VALUE)) AS MODULE,
    MIN(DECODE(CHAR_ID,6540,CHAR_VALUE)) AS MODULE_DESCRIPTION,
    MIN(DECODE(CHAR_ID,6541,CHAR_VALUE)) AS NODE,
    MIN(DECODE(CHAR_ID,6542,CHAR_VALUE)) AS STATE,
    MIN(DECODE(CHAR_ID,6533,CHAR_VALUE)) AS UNIT
    FROM CHAR_BATCH_DATA
    WHERE subbatch_id = 1774
    GROUP BY CHAR_TIME, CHAR_INST, BATCH_ID
    So... why does the query omit rows on the first execution? Is this some sort of optimizer issue? Do I need to rebuild indexes? I looked at the indexes; they are all valid.
    Thanks for looking,
    Dan

    user2188367 wrote:
    I have a query that returns two different results: [...]
    In fact, the first time you ran the query the data was retrieved from disk into memory; the second time, the data was already in memory, so the response time should be faster. But if you change any condition, column, or letter case, the optimizer will perform the first step again (the data will be retrieved from disk into memory).
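
    One way to test whether the drifting count comes from rows being committed between executions (rather than from caching, which by itself would not change a COUNT) is to pin the query to a fixed point in time with a flashback query; a sketch, assuming flashback query is usable on this 10.2 system:

    SELECT COUNT(*)
      FROM AEJOURNAL_S1 AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE)
     WHERE LEVELL = '11-WARNING';
    -- If this count stays stable across runs while the plain query's
    -- count drifts, new rows are arriving between executions.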

  • Performing searches across different Human Tasks

    On our project we have been using Custom Views within BPM Workspace to perform searches on running and closed workflow instances.
    We have found that we first have to choose what Human Task we want to build the custom search on (e.g. "Vacation request Manager's review") and then build the view.
    However, we do not know how to perform searches across different tasks (e.g. list all vacation requests either "under Manager's review" or "pending HR acceptance") and get the results in a single page.
    How can we do this?
    Regards
    Juan Algaba Colera

    Hi,
    This link may help you : http://beatechnologies.wordpress.com/2011/06/29/using-flex-fields-in-oracle-bpm-11g/

  • One Essbase Server to Different shared services Database

    I need to know if it is possible to point one Essbase server to two different Shared Services databases and then have both of them working.

    Hi Vasvya,
    So, correct me if I am wrong: the way to do this is to
    go to the computer where the Essbase server is installed, run the configurator, and, where it asks to set up the Shared Services and Registry database connection, give the schema we created for the new Shared Services.
    Is that it? Do I have to change the instance home or something?
    Thanks. Would really appreciate it if you could help me more.

  • UCCX 8.5 - Historical Reports - Traffic Analysis report and Application Performance Analysis report different calls presented

    Hi,
    Please Advice.
    When I compare the Traffic Analysis report and the Application report, the Calls Presented values are not the same. Please help!
    I have also attached the reports.

    Mohammed,
    It is common, in many of the Cisco Express 8.5 environments we have looked at, for the Total Incoming Calls given on a Traffic Analysis report to be a higher number than an Application Report.
    The Traffic Analysis Report counts every unique sessionID (unique call) that is inbound (contact type of 1).  The Application Reports do a similar thing but qualify (filter) only the records that have an application assigned.
    There are simply times when inbound calls have been directed to an "agent" without having an application assigned.
    The best thing the reporting user can do is to run a query on his or her database such as:
         select * from ContactCallDetail where contactType = 1 and startDateTime > '2012-11-16 10:00:00' and startDateTime < '2012-11-16 11:00:00';
    Usually, when an application is not assigned to the record in the ContactCallDetail table, it is because the destination type is equal to 1, which is an 'Agent' instead of a 'route point'.
    So if you modify your select statement to filter by destinationType, you can quickly find the records that don't have an application assigned.
    Example:  select * from ContactCallDetail where contactType = 1 and destinationType = 1 and startDateTime > '2012-11-16 10:00:00' and startDateTime < '2012-11-16 11:00:00';
    When you look at these records, you will see the agent that took the call in the destinationID field. The number in that field should match up with the field called 'resourceID' in a table called 'resource'.
         Example:  select * from resource where resourceID = 6011; where 6011 is the number you found in the destinationID field.
    If there is still confusion about the source of the call, talk to that agent and find out what it was.
    Good Luck and let me know if you need further help.
    Ron Reif
    [email protected]

  • Query performance and data loading performance issues

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes. It's urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes. It's urgent.
    Will reward full points.
    REGARDS
    GURU

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows this option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance (a sketch follows this list).
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
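    As a concrete illustration of tips 6 and 9, a hypothetical secondary index on a DataSource table's selection fields could look like the following at the database level (the table and field names are made up; in an SAP system you would define the index in the ABAP Dictionary via SE11 rather than directly in SQL):

    -- erdat/auart stand in for whatever selection fields the
    -- extraction actually filters on.
    CREATE INDEX zvbak_sel ON vbak (erdat, auart);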
    Hope it Helps
    Chetan
    @CP..

  • Problems with Full loads/Decreased query performance in Prod

    We have a table which serves as a base for a complex view. The table has roughly around 10 million records and it's a daily full load. (I know that delta loads are much better for handling large sets of data, but this information is very dynamic, and with the business time constraints and project deliverables, the best decision was to do a full load.)
    This is the process we follow:
    > Drop Indexes (All columns individual indexes which are used inside the complex view as joins)
    > Truncate table
    > Load data
    > Recreate indexes.
    All the above steps are performed from SAP Data Services through scripts and the sql() function to execute the commands; no manual intervention whatsoever.
    After the job completes successfully, the view doesn't refresh at all (it sits there forever). The same job, run across the same volume of production data in the Test environment, performs much faster.
    The only way I can get the view to refresh is to manually log into SQL Developer, drop all the indexes on the parent table, and re-create them in the same order as the Data Services script. It then performs very well until the next load (the next morning).
    Any suggestions would be very helpful.
    My main question is: why does it run faster when I drop and recreate the indexes manually, yet not complete when the indexes are created by sql() from the Data Services tool?
    Tried:
    Explain plan (in Dev, Test, Prod): query cost varied across environments but returned results with the same response times (in Production, after manual index creation).
    Tuning Advisor (Test only): the DBA evaluated the result as good.
    Thanks
    Nash
    DB Version Oracle 11.0.7
    Dataservices 3.2

    BluShadow and Harman
    Thank You!
    I'm using a regular view, not a materialized view, and yes, the query plan is completely different between Test and Production. In Test the query ran entirely on hash joins, whereas in Production the execution plan uses nested-loop joins.
    Will try gathering statistics after the load and, as per BluShadow, will look into writing a function that makes a call to the database.
    Thank you all for taking sometime. I will try to test this out starting today and will extend tests over a couple of days.
    Regards
    Nash
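
    A small diagnostic that may help pinpoint the difference between the scripted and the manual rebuild (a sketch; BASE_TABLE is a placeholder for the actual table name): check the state of the indexes and their statistics immediately after the Data Services job finishes:

    SELECT index_name, status, last_analyzed
      FROM all_indexes
     WHERE table_name = 'BASE_TABLE';
    -- STATUS = 'UNUSABLE', or a missing/stale LAST_ANALYZED, right after
    -- the scripted rebuild would explain why the manual rebuild behaves
    -- differently.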

  • Query performance using distinct

    Greetings! We're on Oracle 8.1.7, Solaris 2.8.
    I have a query that takes a different access path if I use the word distinct in the select from this view. Here is our query:
    SELECT
    DISTINCT SETID,
    VENDOR_ID,
    VENDOR_NAME_SHORT,
    AR_NUM,
    NAME1,
    ADDRESS1,
    ADDRESS2,
    CITY
    FROM
    PS_VENDOR_VW
    WHERE
    SETID LIKE 'MNSA' AND
    NAME1='FUN' ORDER BY NAME1, SETID, VENDOR_ID
    The view SQL is:
    SELECT /*+ FIRST_ROWS */ a.setid
    , a.vendor_name_short
    , a.name1
    , c.address1
    , c.address2
    , c.city
    , a.vendor_id
    FROM SYSADM.ps_vendor a
    , (SELECT /*+ INDEX_ASC(B PSAVENDOR_ADDR) */ b.*
       FROM SYSADM.ps_vendor_addr b
       WHERE b.effdt = (
       SELECT MAX(effdt)
       FROM SYSADM.ps_vendor_addr
       WHERE setid = b.setid
       AND vendor_id = b.vendor_id
       AND address_seq_num = b.address_seq_num
       AND effdt <= sysdate)) c
    WHERE a.setid = c.setid (+)
    AND a.vendor_id = c.vendor_id (+)
    AND a.prim_addr_seq_num = c.address_seq_num (+)
    This query does an index full scan on an index on ps_vendor_addr. It takes 2+ minutes to run. Now, if I remove the word distinct, it uses an index range scan and a "view pushed predicate", and runs in 2 seconds.
    I've tried with and without the FIRST_ROWS hint in the view. If I leave off the INDEX_ASC hint, it does a full table scan of ps_vendor_addr. It refuses to do a range scan even with the hint. Can anybody tell me how I can get the 'distinct' query tuned?
    2 minutes may not seem like a lot, but when online users run the query many times a day, it is very frustrating.
    Thanks! I hope I provided enough info.

    Thomas:
    The different behaviours you are seeing are a result of the DISTINCT in the query. This causes a sort to be performed, and will influence the way that the CBO will execute the query. (You do know that you are using the Cost Based Optimizer because of the hints, and that you should analyze the tables and indexes?) You need to be able to re-write the view to avoid the need for the DISTINCT in the query.
    Without knowing the meaning of the fields it is really hard to say anything meaningful, but my guess is that it is the correlated sub-query that is ultimately causing the need for the DISTINCT. Is the combination of setid, vendor_id and address_seq_num truly unique, or is address_seq_num just a sequence?
    For example in one of my databases, I have a table with INDV_ID, EFF_DT, EMPSTAT_SEQ. The empstat_seq field is just there to allow for more than one thing happening on the same day. The way we query this table is:
    SELECT *
    FROM empstat_t a
    WHERE indv_id = :emp_id and
          TO_CHAR(eff_dt,'yyyymmdd')||TO_CHAR(empstat_seq,'009') =
                  (SELECT MAX(TO_CHAR(eff_dt,'yyyymmdd')||TO_CHAR(empstat_seq,'009'))
                   FROM empstat_t
                   WHERE a.indv_id = indv_id);
    Could something similar work in your case?
    If not, assuming your statistics are up to date, I would also look at creating the view without hints to see what the optimizer comes up with on its own. It may be better than you think.
    TTFN
    John

  • Reg: Process Chain, query performance tuning steps

    Hi All,
    I came across a question like this: there is a process chain of 20 processes, of which 5 completed; at the 6th step an error occurred that cannot be rectified, and I should start the chain again from the 7th step. If I go to a particular step I can run that individual step, but how can I restart the entire chain from step 7? I know that I need to use a function module, but I don't know the name of the FM. Please, somebody help me out.
    Please also let me know the steps involved in query performance tuning and aggregate tuning.
    Thanks & Regards
    Omkar.K

    Hi,
    Process Chain
    Method 1 (when it fails in a step/request)
    /people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
    How is it possible to restart a process chain at a failed step/request?
    Sometimes it doesn't help to just set a request to green status in order to run the process chain from that step to the end.
    You need to set the failed request/step to green in the database, and also raise the event that forces the process chain to run to the end from the next request/step on.
    Therefore you need to open the messages of a failed step by right clicking on it and selecting 'display messages'.
    In the opened popup click on the tab 'Chain'.
    In a parallel session, go to transaction SE16 for table rspcprocesslog and display the entries with the following selections (the lookup is sketched as a query after this procedure):
    1. copy the variant from the popup to the variante of table rspcprocesslog
    2. copy the instance from the popup to the instance of table rspcprocesslog
    3. copy the start date from the popup to the batchdate of table rspcprocesslog
    Press F8 to display the entries of table rspcprocesslog.
    Now open another session and go to transaction SE37. Enter RSPC_PROCESS_FINISH as the name of the function module and run the FM in test mode.
    Now copy the entries of table rspcprocesslog to the input parameters of the function module like described as follows:
    1. rspcprocesslog-log_id -> i_logid
    2. rspcprocesslog-type -> i_type
    3. rspcprocesslog-variante -> i_variant
    4. rspcprocesslog-instance -> i_instance
    5. enter 'G' for parameter i_state (sets the status to green).
    Now press F8 to run the fm.
    Now the actual process will be set to green and the following process in the chain will be started and the chain can run to the end.
    Of course you can also set the state of a specific step in the chain to any other possible value like 'R' = ended with errors, 'F' = finished, 'X' = cancelled ....
    Check out the value help on field rspcprocesslog-state in transaction se16 for the possible values.
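    For reference, a sketch of the SE16 lookup above as a plain query (field names as listed in the steps; verify them in your system before relying on this):

    SELECT log_id, type, variante, instance, state
      FROM rspcprocesslog
     WHERE variante  = :variant_from_popup
       AND instance  = :instance_from_popup
       AND batchdate = :start_date_from_popup;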
    Query performance tuning
    General tips
    Using aggregates and compression.
    Using fewer and less complex cell definitions where possible.
    1. Avoid using too many nav. attr
    2. Avoid RKF and CKF
    3. Many chars in row.
    By using T-codes ST03 or ST03N
    Go to transaction ST03, switch to expert mode, and from the left-side menu, under system load history and distribution for a particular day, check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table rsddstats to get the statistics
    Using cache memory will decrease the loading time of the report.
    Run the reporting agent at night and send the results by e-mail. This ensures use of the OLAP cache, so later report executions will retrieve the results faster from the OLAP cache.
    Also try
    1.  Use different parameters in ST03 to see the two important metrics: the aggregation ratio, and the ratio of records transferred to the front end to records selected from the database.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW infocubes) to see the aggregation ratio for the cube. If the cube does not appear in the list of this report, try to run RSRV checks on the cube and aggregates.
    Go to SE38 > Run the program SAP_INFOCUBE_DESIGNS
    It will show dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as the cube's performance metric, measure query runtime.
    3. The -/+ signs are the valuation of the aggregate; for instance, -3 is a valuation of the aggregate's design and usage. ++ means that its compression is good and it is accessed often (in effect, performance is good); if you check its compression ratio, it should be good. -- means the compression ratio is not so good and access is also not so good (performance is not so good). The more plus signs, the more useful the aggregate and the more queries it satisfies; the more minus signs, the worse the evaluation of the aggregate.
    If the valuation is "-----", the aggregate is just overhead and can potentially be deleted; "+++++" means the aggregate is potentially very useful.
    Refer.
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    4. Run your query in RSRT and run the query in the debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug mode. This will tell you if it hit any aggregates while running. If it does not show any aggregates, you might want to redesign your aggregates for the query.
    Also, your query performance can depend on the selection criteria; since you have given a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7 statistics need to be activated for ST03 and BI admin cockpit to work.
    By implementing the BW Statistics Business Content: you need to install it, feed it data, and then analyze through the ready-made reports.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-Code DB20 which gives you all the performance related information like
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc
    Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them.
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    Thanks,
    JituK

  • Query performance : TOAD Vs SQL Plus, and caching of query output

    Hi,
    I have one query which takes more than 2 minutes in TOAD, but the same one takes less than 30 seconds when executed in SQL*Plus. Can somebody please tell me why the performance is so different in the two? Is this a known issue? Can I get the actual time required for the query?
    Also, when I execute the same query in TOAD a second time in succession, the execution time drops drastically from 2 minutes to 10 seconds. Does caching occur in TOAD? Can I disable it? I am using TOAD for Oracle, version 9.0.0.160.
    Thanks,
    Rahul.

    user641207 wrote:
    I have one query which takes more than 2 minutes in TOAD [...]
    Rahul,
    it's unlikely but possible that the session established via TOAD has different settings than the session established via SQL*Plus.
    You can check V$SQL and V$SQL_SHARED_CURSOR to find out whether the SQL you're executing has different plans, which would show up as multiple rows in V$SQL for the same SQL_ID (10g) or ADDRESS/HASH_VALUE (pre-10g) with different CHILD_NUMBERs. V$SQL_SHARED_CURSOR can tell you why the child cursors can't share a plan, in case you have multiple plans.
    In addition, you can use V$SES_OPTIMIZER_ENV to find out whether the optimizer-related settings differ between the TOAD and SQL*Plus sessions.
    The caching effect is probably caching in the database rather than anything else; I don't think TOAD caches the result on the client side.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
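
    For reference, a sketch of the checks Randolf describes (the SQL_ID and the two SIDs are placeholders you would look up first):

    -- Child cursors and their plans for one statement:
    SELECT sql_id, child_number, plan_hash_value, executions
      FROM v$sql
     WHERE sql_id = :the_sql_id;
    -- Why the child cursors could not be shared:
    SELECT *
      FROM v$sql_shared_cursor
     WHERE sql_id = :the_sql_id;
    -- Optimizer settings of the TOAD session vs. the SQL*Plus session:
    SELECT sid, name, value
      FROM v$ses_optimizer_env
     WHERE sid IN (:toad_sid, :sqlplus_sid)
     ORDER BY name, sid;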
