Capturing 9i SQL for Replay Against 10g in Oracle ERP Environment

We are preparing to upgrade our installation of Oracle ERP from 9i to 10g. This is a very large installation (9 instances, 1 TB of data).
We were researching Streams as a means of capturing the redo log and replaying the SQL from our 9i instances against new 10g instances to test memory, the optimizer, etc., but Streams does not support this.
Any thoughts on how/where we could "trap" the SQL from our production 9i environment and replay it against our 10g environment?
I know replay is slated for 11g, but we can't wait that long.
Thanks

Thanks for the quick reply, N.G.
I had looked at LogMiner briefly before I posted my question. I wasn't clear whether it is a suitable tool for gathering (for example) a couple of days' worth of transactions. Since we are attempting to replicate our 9i production environment, we will have to process and replay hundreds of thousands of transactions.
Is this a proper use of LogMiner?
DL Rusk
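
For reference, pulling the captured statements back out of the redo with LogMiner looks roughly like this (a minimal sketch; the archive log path and the APPS schema filter are placeholders):

    begin
      -- register one or more archived redo logs from the 9i source
      dbms_logmnr.add_logfile(logfilename => '/arch/arch_1_1234.arc',
                              options     => dbms_logmnr.new);
      -- use the online catalog as the dictionary
      dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
    end;
    /
    -- the reconstructed statements are in SQL_REDO
    select scn, timestamp, sql_redo
    from   v$logmnr_contents
    where  seg_owner = 'APPS';

One caveat: SQL_REDO holds reconstructed row-level equivalents of the original statements, so replaying it reproduces the data changes but not the original statement mix or bind usage, which matters if the goal is to exercise the 10g optimizer.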

Similar Messages

  • Capture all SQL for a session

    I'd like to be able to capture all the SQL run for a session in SQL Developer.  I am aware of the SQL History, however that does not include updates made "directly" to tables via the Data view.  I am really more interested in inserts, updates and deletes than queries (selects), if that makes a difference to anyone.
    Thanks,
    Ray

    RayDeCampo wrote:
    I'd like to be able to capture all the SQL run for a session in SQL Developer.  I am aware of the SQL History, however that does not include updates made "directly" to tables via the Data view.  I am really more interested in inserts, updates and deletes than queries (selects), if that makes a difference to anyone.
    Thanks,
    Ray
    You can also use Oracle Trace with tkprof to capture all of the SQL from the time you turn it on to the time you turn it off
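
    A sketch of that approach (the session IDs and file names are illustrative; DBMS_MONITOR needs 10g or later):

    -- from the session itself:
    alter session set sql_trace = true;
    -- ... run the statements you want captured ...
    alter session set sql_trace = false;

    -- or for another session, identified by SID and SERIAL#:
    exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456)

    -- then format the trace file found in user_dump_dest:
    -- tkprof orcl_ora_12345.trc session_sql.txt sys=no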

  • Identifying the SQL for a non-persistent session

    We have a number of queries which are being directed to our Oracle 10.2.0.5 database.
    I am being informed that there is something wrong with these queries and I need to view the SQL for these queries to investigate.
    The problem I have is that the sequence of events is as follows :
    1. Connection is made to the database.
    2. Query is run and completes in less than 1 second.
    3. Database session is disconnected.
    Normally to investigate this I would connect to the session in question and interrogate the SQL.
    However, because steps 1-3 complete in a split second I am unable to do this.
    Is there any way that I can capture this SQL for investigation?
    Thank you in advance.

    user6502667 wrote:
    We have a number of queries which are being directed to our Oracle 10.2.0.5 database.
    I am being informed that there is something wrong with these queries and I need to view the SQL for these queries to investigate.
    The problem I have is that the sequence of events is as follows :
    1. Connection is made to the database.
    2. Query is run and completes in less than 1 second.
    3. Database session is disconnected.
    Normally to investigate this I would connect to the session in question and interrogate the SQL.
    However, because steps 1-3 complete in a split second I am unable to do this.
    Is there any way that I can capture this SQL for investigation?
    Thank you in advance.

    From which system(s) is the connection made, and as which user? Either way: AUDIT.
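
    A sketch of the AUDIT approach (assumes AUDIT_TRAIL is set to an extended DB value so the statement text is recorded; APP_USER is a placeholder for the connecting user):

    -- one-time setup (instance restart required):
    alter system set audit_trail = db_extended scope = spfile;

    -- record one audit row, including the statement text, per query:
    audit select table by app_user by access;

    -- after the short-lived session has come and gone:
    select timestamp, sql_text
    from   dba_audit_trail
    where  username = 'APP_USER'
    order  by timestamp;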

  • Capture Excise Invoice for material document created against a PGR

    Hi SD Gurus,
    We have a scenario of reverse subcontracting where we have to receive raw material from vendor, process it and then dispatch the finished goods to the vendor or the vendor's customer.
    For receiving the raw materials, I have created a document type copied from the standard RE type. Against this order, a returns delivery is created, which is then PGR-ed. With reference to the previous document, a service order is created in SD, which is then billed.
    But I am facing a problem when I try to capture the incoming excise invoice for the raw materials. When I tried to capture the excise against the material document generated for the PGR, the system throws an error that it cannot capture the excise for this material document type.
    I would be greatly obliged if anyone can throw some light on this. Do I need to maintain any special configuration for this? Why is SAP not allowing it? How can this be done?
    With Regards,
    Arindam Datta.

    Dear Arun R,
    Kindly go to the following path, maintain your movement type there, and then do the MIGO. I am quite sure that at the time of MIGO the excise tab will come up for the Part 1 update, and you can subsequently capture Part 2 in J1IEX.
    SPRO -> Logistics General -> Tax on Goods Movement -> India -> Business Transactions -> Incoming Excise Invoices -> Specify Which Movement Types Involve Excise Invoices.
    Hope this helps.
    Regards
    AKS

  • Reports 6i against 10g database

    Hi,
    We are upgrading our database from 9i to 10g. We are running our (6i) reports against the 10g database in batch mode. For some of our reports we get the following errors:
    REP-0736: There exist uncompiled program unit(s).
    REP-1247: Report contains uncompiled PL/SQL.
    When a report is opened in Report Builder and recompiled against 10g, it runs fine.
    Please advise how to solve this problem.

    This is not because the database changed. The compilation dates of some packages/program units may have changed, and hence Reports wants the report recompiled.
    Rajesh Alex

  • Reports 6i against 10g in batch

    Hi,
    I am running a bunch of reports against 10g in batch mode. When run against 9i everything is OK, but when run against 10g I get the following for each report:
    ERR REP-0736: There exist uncompiled program unit(s).
    ERR REP-1247: Report contains uncompiled PL/SQL.
    When a report is opened in Report Builder and compiled against 10g there are no errors. I tried using the .rdf, and also compiling to a .rep and using that, but no help.
    Is this known problem? Is there a solution?
    Thanks,
    Nenad.

    Hi,
    Could you solve this problem? How?
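
    As a workaround, the reports can be batch-recompiled against the 10g database before the batch run, e.g. with the Reports 6i converter (a sketch; the connect string and file names are placeholders, and the parameter list should be verified against your rwcon60 documentation):

    rwcon60 userid=scott/tiger@orcl10g stype=rdffile source=myreport.rdf dtype=repfile dest=myreport.rep compile_all=yes batch=yes overwrite=yes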

  • Designer 10g R2 Design Capture of SQL Server DDL File Failed

    I have been using Designer 10g R2 to design-capture Oracle database schemas into the Server Model Diagram with no problems at all.
    I have DDL exported from SQL Server 2005 into an ASCII text file. When I import the DDL script into the Server Model Diagram, I get errors.
    As a workaround, I use SQL Server 2005 to import the DDL script and then create an ODBC connection to the SQL Server; I can then design-capture the SQL Server schema into Designer.
    What has gone wrong? Does Designer 10g R2 support design capture from a SQL Server DDL file directly?
    By the way, if I want to generate a MySQL DDL script or design-capture a MySQL DDL script into Designer, how could this be done, as there is no ODBC driver for MySQL?

    Hello,
    Please see the following links:
    http://social.msdn.microsoft.com/Forums/en-US/sqlsetupandupgrade/thread/df35f9f5-9c52-4ec4-8f5a-03a8dbef4352/
    http://social.msdn.microsoft.com/forums/en-US/sqlsetupandupgrade/thread/e8e27857-7bb7-46a2-af9b-25e397b37374/
    http://ask.sqlservercentral.com/questions/3582/sqlbol-cab-is-corrupt-and-cannot-be-used-in-sql-server-2005
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • ORA-01489 Received Generating SQL for Report Region

    I am new to APEX and I am running into an issue with a report region that I am puzzled by. Just a foreword: I'm sure this hack of a solution will get its share of facepalms and chuckles from those with far more experience. I welcome suggestions and criticism that are helpful and edifying!
    I am on APEX 4.0.2.00.07 running on 10g (R2, I believe).
    A little background: my customer has asked that an Excel spreadsheet be converted into a database application. As part of the transition they would like an export from the database in the same format as the current spreadsheet. Because the column count in this export is dynamic, based on the number of records in a specific table, I decided to create a temporary table for the export. The column names in this temp table are based on a "name" column from the same data table, so I end up with columns named 'REC_NAME A', 'REC_NAME B', etc. (e.g. Alpha Record, Papa Record, Echo Record, X-Ray Record). The column count is currently ~350 for the spreadsheet version.
    Because the column count is so large and the column names are dynamic, I've run into a host of challenges and errors creating this export. I am a contractor in a corporate environment, so making changes to the APEX environment or installation is beyond my influence, and really beyond what could be justified by this single requirement. I have tried procedures and APEX plug-ins for generating the file, but the UTL_FILE package is not available to me. I am currently generating the SQL for the query in a function and returning it to the report region as a single column (the user will do a text-to-columns conversion later). The data is successfully being generated; the SQL for the headers, however, is where I am stumped.
    At first I thought it was because I returned both queries as one, joined with a UNION ALL. However, after looking closer: the SQL being returned for the headers is about 10K characters long, and the SQL being returned for the data is about 14K. As mentioned above, the data is being generated and exported, but when I generate the SQL for the headers I receive a report error with "ORA-01489: result of string concatenation is too long" in the file. I am puzzled why the shorter string generates this message. I took the functions from both pages and ran them at a SQL command prompt, and both return their string values without errors.
    I'm hopeful that it's something obvious and noobish that I'm overlooking.
    here is the code:
    data SQL function:
    declare
      l_tbl varchar2(20);
      l_ret varchar2(32767);
      l_c number := 0;
      l_dlim varchar2(3) := '''|''';
    begin
      l_tbl := 'EXPORT_STEP';
      l_ret := 'select ';
      for rec in (select column_name from user_tab_columns where table_name = l_tbl order by column_id)
      loop
        if l_c = 1 then
            l_ret := l_ret || '||' || l_dlim || '|| to_char("'||rec.column_name||'")';
        else
            l_c := 1;
            l_ret := l_ret || ' to_char("' || rec.column_name || '")';
        end if;
      end loop;
        l_ret := l_ret || ' from ' || l_tbl;
      dbms_output.put_line(l_ret);
    end;

    header SQL function:
    declare
      l_tbl varchar2(20);
      l_ret varchar2(32767);
      l_c number := 0;
      l_dlim varchar2(3) := '''|''';
    begin
      l_tbl := 'EXPORT_STEP';
      for rec in (select column_name from user_tab_columns where table_name = l_tbl order by column_id)
      loop
        if l_c = 1 then
            l_ret := l_ret || '||' || l_dlim || '||'''||rec.column_name||'''';
        else
            l_c := 1;
            l_ret := l_ret || '''' || rec.column_name || '''';
        end if;
      end loop;
        l_ret := l_ret || ' from dual';
      dbms_output.put_line(l_ret);
    end;
    EDIT: just a comment on the complexity of this export, each record in the back-end table adds 12 columns to my export table. Those 12 columns are coming from 5 different tables and are the product of a set of functions calculating or looking up their values. This is export is really a pivot table based on the records in another table.
    Edited by: nimda xinu on Mar 8, 2013 1:28 PM

    Thank you, Denes, for looking into my issue. I appreciate your time!
    It is unfortunately a business requirement. My customer has required that the data we are migrating to this app from a spreadsheet be exported in the same format, albeit temporarily. I still must meet the requirement. I'm working around the 350 columns by dumping everything into a single column, which is working for the data; the headers export, however, is throwing the 01489 error. I did run into the error you posted in your reply. I attempted to work around it with the CLOB type but ended up running into my string concatenation error again.
    I'm open to any suggestions at this point given that I have the data. I'm so close because the data is exporting, but because the columns are dynamic, the export does me little good without the headers to go along with it.
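
    A guess at why the shorter string fails: ORA-01489 is raised by the SQL engine when a concatenation result exceeds 4000 bytes. The generated header query concatenates ~10K of literals in a single SQL expression against DUAL, while the data query concatenates each row's values, which individually stay under the limit (the 32K limit only applies inside PL/SQL). One possible workaround, sketched below under the assumption that the region can render (or you can download) a CLOB column, is to build the header in a PL/SQL function returning a CLOB; EXPORT_STEP_HEADER is a hypothetical name:

    create or replace function export_step_header return clob is
      l_ret   clob;
      l_first boolean := true;
    begin
      dbms_lob.createtemporary(l_ret, true);
      for rec in (select column_name
                  from   user_tab_columns
                  where  table_name = 'EXPORT_STEP'
                  order  by column_id)
      loop
        if not l_first then
          dbms_lob.writeappend(l_ret, 1, '|');   -- same '|' delimiter as the data rows
        end if;
        dbms_lob.writeappend(l_ret, length(rec.column_name), rec.column_name);
        l_first := false;
      end loop;
      return l_ret;
    end;
    /
    -- header row for the export:
    select export_step_header() as hdr from dual;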

  • Capture all SQL statements and archive to file in real time

    Want to capture all SQL statements and archive them to file in real time?
    Oracle Session Manager is just the tool you need.
    Get it at http://www.wangz.net
    This tool monitors how connected sessions use database instance resources in real time. You can obtain an overview of session activity sorted by a statistic of your choosing. For any given session, you can then drill down for more detail. You can further customize the information displayed by specifying manual or automatic data refresh and the rate of automatic refresh.
    In addition to these monitoring capabilities, OSM allows you to send LAN pop-up messages to the users of Oracle sessions.
    Features:
    --Captures all SQL statement text and archives it to files in real time
    --Pinpoints problematic database sessions and displays detailed performance and resource consumption data
    --Dynamically lists sessions holding locks and the sessions waiting on them
    --Supports killing several selected sessions
    --Sends LAN pop-up messages to the users of Oracle sessions
    --Gives hit/miss ratios for the library cache, dictionary cache and buffer cache periodically, to help tune memory
    --Exports necessary data to file
    --Modifies dynamic system parameters on the fly
    --Syntax highlighting for SQL statements
    --An overview of your currently connected instance's information, such as version, SGA, license, etc.
    --Finds objects by file ID and block ID
    Gudu Software
    http://www.wangz.net

    AnkitV wrote:
    Hi All
    I have 3 statements, and I write something to a file using UTL_FILE.PUT_LINE after each statement finishes. Each statement takes the time mentioned below to complete.
    I am opening file in append mode.
    statement1 (takes 2 mins)
    UTL_FILE.PUT_LINE
    statement2 (takes 5 mins)
    UTL_FILE.PUT_LINE
    statement3 (takes 10 mins)
    UTL_FILE.PUT_LINE
    I noticed that I am able to see the contents written by UTL_FILE.PUT_LINE only after statement3 is over, not immediately after statement1 and statement2 are done.
    Can anybody tell me if this is correct behavior or am I missing something here?

    The calling procedure must terminate before the data is actually written to the file.
    It is expected & correct behavior.

  • How to do pivoting on part of an SQL output row in 10g

    Hi,
    I'm using Oracle 10.1.0.5.0.
    I would like to know what the general decode is for pivoting in 10g. I need to display PAGE_DISPLAY_NAME, ITEM_DISPLAY_NAME, ITEM_TYPE_DISPLAY_NAME essentially once and then display ATTRIBUTE_DISPLAY_NAME and ITEM_ATTRIBUTE_VALUE as a separate column for each:
    So my query starts me with the following output.
    PAGE_DISPLAY_NAME  ITEM_DISPLAY_NAME                 ITEM_TYPE_DISPLAY_NAME  ATTRIBUTE_DISPLAY_NAME                       ITEM_ATTRIBUTE_VALUE
    Content            Rpt-Yearly-EMC Transactions-2007  RIKR Content Record     Benefit                                      0
    Content            Rpt-Yearly-EMC Transactions-2007  RIKR Content Record     Business Function: Accounting                0
    Content            Rpt-Yearly-EMC Transactions-2007  RIKR Content Record     Business Function: Consulting Services       0
    Content            Rpt-Yearly-EMC Transactions-2007  RIKR Content Record     Business Function: Customer Data Management  0
    Content            Rpt-Yearly-EMC Transactions-2007  RIKR Content Record     Business Function: Facilities                0
    Content            Rpt-Yearly-EMC Transactions-2007  RIKR Content Record     Business Function: Finance                   0
    Content            Rpt-Yearly-EMC Transactions-2007  RIKR Content Record     Business Function: HR                        0
    Content            Rpt-Yearly-EMC Transactions-2007  RIKR Content Record     Business Function: IT                        0
    Content            Rpt-Yearly-EMC Transactions-2007  RIKR Content Record     Business Function: International             0
    Content            Rpt-Yearly-EMC Transactions-2007  RIKR Content Record     Business Function: Inventory Control         0
    Content            Rpt-Yearly-EMC Transactions-2007  RIKR Content Record     Business Function: Legal                     0
    Content            Rpt-Yearly-EMC Transactions-2007  RIKR Content Record     Business Function: Marketing                 0
    Content            Rpt-Yearly-EMC Transactions-2007  RIKR Content Record     Business Function: Sales                     1
    Content            Rpt-Yearly-EMC Transactions-2007  RIKR Content Record     Business Function: Sales Operations          1

    I'd like to show the following:

    PAGE_DISPLAY_NAME  ITEM_DISPLAY_NAME                 ITEM_TYPE_DISPLAY_NAME  decode(...)  decode(...)  .....  decode(etc...)
    Content            Rpt-Yearly-EMC Transactions-2007  RIKR Content Record     Benefit      Business Function: Accounting  ...
    Content            Rpt-Yearly-EMC Transactions-2007  RIKR Content Record     0            0

    What's the general SQL for doing this type of thing?
    For the record, here's the initial SQL:
    select p.DISPLAY_NAME               PAGE_DISPLAY_NAME,
    i.DISPLAY_NAME                      ITEM_DISPLAY_NAME,
    it.DISPLAY_NAME                     ITEM_TYPE_DISPLAY_NAME,
    attr.DISPLAY_NAME                   ATTRIBUTE_DISPLAY_NAME,
    ia.VALUE                            ITEM_ATTRIBUTE_VALUE
    from
    wwsbr_item_attributes ia,
    wwsbr_attributes attr,
    wwsbr_all_items i,
    wwsbr_item_types it,
    wwsbr_all_folders p
    where
        ia.attribute_id = attr.id
    and ia.attribute_caid = attr.CAID
    and ia.item_masterid = i.id
    and ia.item_caid = i.caid
    and i.subtype = it.id
    and i.subtype_caid = it.caid
    and i.folder_id = p.id
    and i.active = 1
    and i.visible = 1
    and i.is_current_version = 1
    and upper(it.DISPLAY_NAME) like '%RIKR%'
    and substr(attr.DISPLAY_NAME,1,5) <> '-----' 
    order by i.DISPLAY_NAME, attr.DISPLAY_NAME;

    Hi,
    There are a few errors in the INSERT statements, but it looks like the ones that work provide a good enough sample.
    In your sample data,
    attribute_display_name = 'Benefit' only when item_type_display_name LIKE '%RIKR%', and
    attribute_display_name = 'Billing & Collections' only when item_type_display_name LIKE '%OMKR%',
    and so on for the other values. In other words, just by looking at attribute_display_name, one could predict whether item_type_display_name was LIKE '%RIKR%' or '%OMKR%'.
    If that's always the case, then you can say:
    SELECT    page_display_name
    ,         item_display_name
    ,         item_type_display_name
    ,         MAX (DECODE ( attribute_display_name
                          , 'Benefit'               , item_attribute_value
                          , 'Billing & Collections' , item_attribute_value
                          )
                  )                                   AS p1
    ,         MAX (DECODE ( attribute_display_name
                          , 'Business Function: Accounting' , item_attribute_value
                          , 'Change Description'            , item_attribute_value
                          )
                  )                                   AS p2
    ,         MAX (DECODE ( attribute_display_name
                          , 'Business Function: Consulting Services' , item_attribute_value
                          , 'Deal Type: Counrty Federal'             , item_attribute_value
                          )
                  )                                   AS p3
    ,         MAX (DECODE ( attribute_display_name
                          , 'Deal Type: Counrty International Order' , item_attribute_value
                          )
                  )                                   AS p4
    FROM      taxonomy
    GROUP BY  page_display_name
    ,         item_display_name
    ,         item_type_display_name
    ORDER BY  page_display_name
    ,         item_display_name
    ,         item_type_display_name
    ;

    This produces generically-named columns, 'p1', 'p2', ...:
                               ITEM_
    PAGE_      ITEM_           TYPE_
    DISPLAY_   DISPLAY_        DISPLAY_
    NAME       NAME            NAME                 P1  P2  P3  P4
    Archive    Rpt-AR          RIKR Content Record      1   0
    Content    Rpt-Booking     RIKR Content Record      1   0
    Documents  Calc-Discount   OMKR Document        0       0   0
    Documents  Calc-Early      OMKR Document        0       0   0

    I shortened item_display_name to make the output more readable.
    If you add a WHERE clause like

    WHERE   item_type_display_name LIKE '%RIKR%'

    then, of course, you would only get the first two rows of output from the result set above, but you would get all the same columns, including the p4 column that will necessarily be NULL.
    If my earlier assumption was wrong (for example, if attribute_display_name can be 'Benefit' even when item_type_display_name is NOT LIKE '%RIKR%', but, when that happens, you want to ignore it), then change the pivot columns like this:
    ,         MAX ( CASE
                        WHEN  (     item_type_display_name LIKE '%RIKR%'
                                AND attribute_display_name = 'Benefit'
                              )
                          OR  (     item_type_display_name LIKE '%OMKR%'
                                AND attribute_display_name = 'Billing & Collections'
                              )
                        THEN  item_attribute_value
                    END
                  )                                   AS p1
    ,         MAX ( CASE
                        WHEN  (     item_type_display_name LIKE '%RIKR%'
                                AND attribute_display_name = 'Business Function: Accounting'
                              )
                          OR  (     item_type_display_name LIKE '%OMKR%'
                                AND attribute_display_name = 'Change Description'
                              )
                        THEN  item_attribute_value
                    END
                  )                                   AS p2
    ,         MAX ( CASE
                        WHEN  (     item_type_display_name LIKE '%RIKR%'
                                AND attribute_display_name = 'Business Function: Consulting Services'
                              )
                          OR  (     item_type_display_name LIKE '%OMKR%'
                                AND attribute_display_name = 'Deal Type: Counrty Federal'
                              )
                        THEN  item_attribute_value
                    END
                  )                                   AS p3
    ,         MAX ( CASE
                        WHEN  (     item_type_display_name LIKE '%OMKR%'
                                AND attribute_display_name = 'Deal Type: Counrty International Order'
                              )
                        THEN  item_attribute_value
                    END
                  )                                   AS p4

    For the sample data given, this produces the same output as above.
    All this assumes that the conditions
    item_type_display_name LIKE '%RIKR%' and
    item_type_display_name LIKE '%OMKR%'
    are mutually exclusive; that is, you never have item_type_display_name like 'RIKR Content Record/OMKR Document'.
    If you do have values like that, then you'll need separate columns for each of the possible values of attribute_display_name. Given your sample data, that means 7 pivoted columns. Depending on your WHERE clause and your data, some of those columns might be NULL for all rows. As you suggested, you could use the analytic ROW_NUMBER function to number the rows that actually occurred in the query. You would still have 7 pivoted columns, but the ones that were not always NULL would appear on the left, where they are easier to read.
    The number of columns, and their aliases, must be hard-coded into the query. If you want something that has column aliases like BENEFIT instead of P1, or only has 4 pivoted columns when you only need 4, then you have to use dynamic SQL.
    You can fake column headings by doing a separate query, which includes some of the same conditions as the main query, figures out what the pivoted columns will be, and displays appropriate headings. I've done this when producing CSV files, where the heading only had to appear once, at the very beginning. Getting such a heading to appear after, say, every 50 rows of output is much more complicated.
    You can fake the number of columns by using string aggregation to put all the pivoted data into one humongous VARCHAR2 column, concatenating spaces so that it looks like 3, or 4, or however many columns.

  • How to integrate from MS SQL Server 2005 and flat files to Oracle 10g

    Hi
    I am new to ODI. I am trying to load sample data from MS SQL Server 2005 and Flatfile to Oracle 10g.
    1. I have created three models.
    1-1. SQL2005 (SRC_CUSTOMER table)
    1-2. Flatfile (SRC_AGE_GROUP.txt & SRC_SALES_PERSON.txt)
    1-3. Oracle 10g (TRG_CUSTOMER table)
    You may know I got those environments from the ODI DEMO environment.
    2. I was also able to reverse-engineer the tables.
    3. I have created an interface which contains the source table (from MSSQL 2005), the flat files, and the target table from the Oracle model.
    4. I have imported the knowledge modules, but I am confused about selecting the knowledge modules for the source and target tables.
    I've selected LKM File to SQL for the flat file model.
    I've also selected LKM SQL to SQL for the MSSQL 2005 model and IKM Oracle Incremental Update for the target table (Oracle).
    I have also executed the interface that I created. It ran without errors, but there is no data in the target table, TRG_CUSTOMER.
    I really would like to know what happened and what the problems are.
    You can email me [email protected]
    Thanks in advance
    Jason Lee

    What did you give for the SRC_AGE_GROUP / SRC_CUSTOMER join condition?
    If it is
    (SRC_CUSTOMER.AGE = SRC_AGE_GROUP.AGE_MIN) AND (SRC_CUSTOMER.AGE = SRC_AGE_GROUP.AGE_MAX)
    give it as
    (SRC_CUSTOMER.AGE > SRC_AGE_GROUP.AGE_MIN) AND (SRC_CUSTOMER.AGE < SRC_AGE_GROUP.AGE_MAX)

  • Trying to find the SQL for the below 3 values

    Hi Experts,
    I am trying to find the SQL that can give me the three values below. Can someone help me with these?
    Free buffer waits (%)
    Local write wait (%)
    Latch: cache buffer chains (%)
    These are metrics which were available in OEM for DB releases up to 9i; in post-9i releases these metrics are obsolete.
    So I am trying to find the SQL for them and use them as UDMs for the 10g and 11g DBs.
    Thanks in advance.
    Thanks,
    Naveen kumar.

    And is there any way to find out what SQL a metric is computed from?
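
    For what it's worth, these percentages can be approximated from V$SYSTEM_EVENT. A sketch, assuming the old OEM metrics were each event's share of total system wait time (an assumption about the metric definition, not something documented here):

    select e.event,
           round(100 * e.time_waited / t.total_waited, 2) as pct_of_total_wait
    from   v$system_event e,
           (select sum(time_waited) as total_waited from v$system_event) t
    where  e.event in ('free buffer waits',
                       'local write wait',
                       'latch: cache buffers chains');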

  • SQL Tuning Advisor against the session (is it possible?)

    My company's customer has observed that there is a job that has been running longer than expected (5 days).
    They did not give any other information; they want me to run the SQL Tuning Advisor against the session running this job.
    Can you run the SQL Tuning Advisor against a running session?
    If so, how?
    Please suggest your valuable tips so that I approach this work properly.
    DB = 11g
    OS = Solaris 10

    >
    ...SQL Tuning Advisor against the session running this job.
    Can you run the SQL Tuning Advisor against a running session?
    >
    SQL Tuning Advisor is run on statements, not sessions. I don't do much with the SQL Tuning Advisor, but I'd consider the currently running execution a lost cause until it completes or you kill it. You can see an estimate of how long the currently running SQL will take in v$session_longops. You can use a script like Tanel's sw.sql
    http://blog.tanelpoder.com/2008/01/08/updated-session-wait-script/
    to see what the wait interface has to say.
    >
    Please suggest your valuable tips so that I approach this work properly.
    >
    My approach would be to determine what the current explain plan is, compare it to one that ran (correctly) in the past, and then try to determine why it changed (bad stats, a dropped index, parameter changes, etc.).
    Cheers.
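
    If you still want the advisor's opinion on the statement itself, you can take its SQL_ID from V$SESSION and feed that to DBMS_SQLTUNE (a sketch; the :sid bind and SQL_ID value are placeholders, and the ADVISOR privilege is required):

    -- find what the session is currently executing
    select sql_id from v$session where sid = :sid;

    -- create and run a tuning task for that statement
    declare
      l_task varchar2(128);
    begin
      l_task := dbms_sqltune.create_tuning_task(sql_id => '&sql_id', time_limit => 600);
      dbms_sqltune.execute_tuning_task(task_name => l_task);
      dbms_output.put_line('task: ' || l_task);
    end;
    /
    -- read the recommendations
    select dbms_sqltune.report_tuning_task('&task_name') from dual;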

  • SQL performance slow -- oracle 10g to oracle 11g

    Hi,
    We have two development servers; call them server10 and server11.
    The servers have the same hardware and OS but different Oracle versions: server10 runs Oracle 10g and server11 runs Oracle 11g.
    The problem: when we run a SQL query on the Oracle 11g server it is very slow compared to the Oracle 10g server.
    Here is what I have checked:
    1) SGA size: the 11g SGA is comparatively big
    2) no full table scans
    OS: SUN
    DB: 11.1.0.7.0, 10.2.0.4.0

    mmee wrote:
    Hi,
    We have two development servers; call them server10 and server11.
    The servers have the same hardware and OS but different Oracle versions: server10 runs Oracle 10g and server11 runs Oracle 11g.
    The problem: when we run a SQL query on the Oracle 11g server it is very slow compared to the Oracle 10g server.
    Here is what I have checked:
    1) SGA size: the 11g SGA is comparatively big
    2) no full table scans
    OS: SUN
    DB: 11.1.0.7.0, 10.2.0.4.0

    If the query is running slow, the SGA size should not be the first thing to check. Please post the execution plans of the queries from both servers, and try to post the tkprof output of a trace from each. The reason is that an explain plan may lie to us about what happened, while the trace shows the real picture.
    HTH
    Aman....
    PS: Don't forget to use the code tag, and use the Preview tab to see how the code looks. A better formatted post will most likely get better attention and responses.
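
    A sketch of capturing such a trace on each server (level 12 includes wait events and bind values; the trace file name is illustrative):

    alter session set tracefile_identifier = 'slow_q';
    alter session set events '10046 trace name context forever, level 12';
    -- run the slow query here
    alter session set events '10046 trace name context off';

    -- then, on the server, format the trace found in user_dump_dest:
    -- tkprof server11_ora_12345_slow_q.trc slow_q.txt sys=no sort=exeela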

  • No PL/SQL window in Forms 10g Debug

    Dear gurus... In 6i Forms I used to have a PL/SQL window in the debug tool, displayed at the bottom of the module being debugged. It was very useful for looking at the database values of the current session, but I'm unable to find this PL/SQL interaction tool in the 10g Forms debug module. Am I just missing it, or does it really not exist in 10g Forms?

    Caz,
    Two things. First; it is considered poor Forums Etiquette to hijack someone else's posting. Always create your own post and include a link to a related posting if your issue is similar to someone else's.
    Second; now that you have the FORMS_TRACE_PATH defined, you create the trace file by adding the "RECORD=FORMS" parameter to the URL. Optionally, you can add the "TRACEGROUP=DEBUG/CUSTOM1" parameter as well, as long as you have specified the CUSTOM1 trace group in the ftrace.cfg file. Just adding RECORD=FORMS should be enough, however.
    If that still doesn't produce the results you are looking for, check out the following references: Oracle9i Forms Diagnostic Techniques and Oracle9iAS Forms Services Tracing and Diagnostics. I know these are for Forms 9i, but the process did not change between Forms 9i and Forms 10g. If you are using Forms 11g, I'm not sure if these technigues will work the same as I am not using Forms 11g yet. ;-)
    Craig...
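
    For example, the run-form URL with tracing enabled would look something like this (the hostname, port, and config name are placeholders):

    http://myserver:8889/forms/frmservlet?config=myapp&record=forms&tracegroup=debug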
