SQL Performance Help

We are seeing a performance difference for a select query in two development environments (same version: Oracle Database 10g Enterprise Edition Release 10.1.0.4.0). The query takes 30 seconds in one environment, while execution time in the other environment is about 5 minutes. We ran an explain plan in both; the major difference is that one shows an INDEX FAST FULL SCAN (the 30-second environment) while the other shows a TABLE ACCESS FULL (the 5-minute environment). The same indexes exist on all tables involved in the query. The DBA says all system parameters are the same in both environments and has asked the development team to identify the cause. Load on both environments is pretty much the same. There is room for improvement in the query itself, but we are trying to find out what causes the performance degradation in one environment.
Can someone help me with some clues to identify the source of this problem?
Thanks
Deepak

Hello
The first thing I'd check would be the statistics for the tables and indexes involved. Is there any difference in the last_analyzed date in user_tables/dba_tables/user_indexes/dba_indexes for the tables and indexes in question?
Also, are there any differences in data volumes for the two environments or are they both copies of the same source?
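For example, a quick comparison of statistics freshness and row counts in the two environments could look like the sketch below (TABLE1/TABLE2 are placeholders for the tables actually used in your query):
select table_name, num_rows, last_analyzed
from   dba_tables
where  table_name in ('TABLE1', 'TABLE2');
select index_name, table_name, num_rows, last_analyzed
from   dba_indexes
where  table_name in ('TABLE1', 'TABLE2');
If one environment shows stale or missing statistics, or very different num_rows, that alone can flip the plan from an index fast full scan to a full table scan.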
HTH
David

Similar Messages

  • Help needed in SQL performance - Using CASE in SQL statement versus 2 queries

    Hi,
    I have a requirement to find count from a bunch of tables.
    The SQL I have gives the count of all members.
    I have created 2 queries to find count of active and inactive members.
    The key difference is only the active dates.
    Each query takes 20 seconds to execute.
    I modified the SQL to use a CASE statement in the SELECT.
    So after the data is fetched, the CASE statement evaluates the active date and gives 2 counts (active and inactive).
    Is it advisable to use this approach? Will CASE improve SQL performance? I have to justify this.
    Please let me know your thoughts.
    Thanks,
    J

    Hi,
    If it can be done in a single SQL statement, do it in a single SQL statement.
    You asked:
    "Will CASE improve SQL performance?" It can go either way; there are cases where the performance ends up better and cases where it ends up worse.
    In your case, you should test it and tell us how it goes.
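    To illustrate the single-statement idea, conditional counts with CASE could look something like this (a minimal sketch; the member table, date column and SYSDATE cutoff are assumptions, not your actual schema):
    select count(case when active_date >= sysdate then 1 end) as active_cnt,
           count(case when active_date <  sysdate then 1 end) as inactive_cnt
    from   members;
    COUNT ignores NULLs, so each CASE contributes only the rows matching its condition. Whether this beats two separate 20-second queries depends on whether both of them scan the same data, so measure both versions and compare.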
    Regards,
    Bhushan

  • SQL Performance issue: Using user defined function with group by

    Hi Everyone,
    I'm new here and I could really use some help with a weird performance issue. I hope this is the right topic for SQL performance issues.
    Well, OK, I created a function for converting a date from timezone GMT to a specified timezone.
    CREATE OR REPLACE FUNCTION I3S_REP_1.fnc_user_rep_date_to_local (date_in IN date, tz_name_in IN VARCHAR2) RETURN date
    IS
    tz_name VARCHAR2(100);
    date_out date;
    BEGIN
    SELECT
    to_date(to_char(cast(from_tz(cast( date_in AS TIMESTAMP),'GMT')AT
    TIME ZONE (tz_name_in) AS DATE),'dd-mm-yyyy hh24:mi:ss'),'dd-mm-yyyy hh24:mi:ss')
    INTO date_out
    FROM dual;
    RETURN date_out;
    END fnc_user_rep_date_to_local;
    The following statement is just an example; the real statement is much more complex. So I select some date values from a table and aggregate a little.
    select
    stp_end_stamp,
    count(*) noi
    from step
    where
    stp_end_stamp
    BETWEEN
    to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')      
    AND
    to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
    group by
    stp_end_stamp;
    This statement selects ~70000 rows and takes ~70 ms.
    If I use the function it selects the same number of rows ;-) but takes ~4 sec ...
    select
    fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin'),
    count(*) noi
    from step
    where
    stp_end_stamp
    BETWEEN
    to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')      
    AND
    to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
    group by
    fnc_user_rep_date_to_local(stp_end_stamp,'Europe/Berlin');
    I understand that the DB has to execute the function for each row.
    But if I execute the following statement, it takes only ~90ms ...
    select
    fnc_user_rep_date_to_gmt(stp_end_stamp,'Europe/Berlin','ny21654'),
    noi
    from (
    select
    stp_end_stamp,
    count(*) noi
    from step
    where
    stp_end_stamp
    BETWEEN
    to_date('23-05-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')      
    AND
    to_date('23-07-2009 00:00:00','dd-mm-yyyy hh24:mi:ss')
    group by
    stp_end_stamp
    )
    The execution plan for all three statements is EXACTLY the same!!!
    Usually I would say that I'd just use the third statement and the world would be in order. BUT I'm working on a BI project with a tool called Business Objects which generates the SQL, so my hands are tied and I can't make the tool generate the SQL with a subselect.
    My questions are:
    Why is the second statement sooo much slower than the third?
    and
    How can I force the optimizer to do whatever it is doing to make the third statement so fast?
    I would really appreciate some help on this really weird issue.
    Thanks in advance,
    Andi

    Hi,
    The execution plan for all three statements is EXACTLY the same!!!
    Not exactly. The plans are the same - true. But they use slightly different approaches to call the function. See:
    drop table t cascade constraints purge;
    create table t as select mod(rownum,10) id, cast('x' as char(500)) pad from dual connect by level <= 10000;
    exec dbms_stats.gather_table_stats(user, 't');
    create or replace function test_fnc(p_int number) return number is
    begin
        return trunc(p_int);
    end;
    explain plan for select id from t group by id;
    select * from table(dbms_xplan.display(null,null,'advanced'));
    explain plan for select test_fnc(id) from t group by test_fnc(id);
    select * from table(dbms_xplan.display(null,null,'advanced'));
    explain plan for select test_fnc(id) from (select id from t group by id);
    select * from table(dbms_xplan.display(null,null,'advanced'));
    Output:
    PLAN_TABLE_OUTPUT
    Plan hash value: 47235625
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   1 |  HASH GROUP BY     |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   2 |   TABLE ACCESS FULL| T    | 10000 | 30000 |   159   (1)| 00:00:02 |
    Query Block Name / Object Alias (identified by operation id):
       1 - SEL$1
       2 - SEL$1 / T@SEL$1
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          FULL(@"SEL$1" "T"@"SEL$1")
          OUTLINE_LEAF(@"SEL$1")
          ALL_ROWS
          OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
          IGNORE_OPTIM_EMBEDDED_HINTS
          END_OUTLINE_DATA
    Column Projection Information (identified by operation id):
       1 - (#keys=1) "ID"[NUMBER,22]
       2 - "ID"[NUMBER,22]
    34 rows selected.
    SQL>
    Explained.
    SQL>
    PLAN_TABLE_OUTPUT
    Plan hash value: 47235625
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   1 |  HASH GROUP BY     |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   2 |   TABLE ACCESS FULL| T    | 10000 | 30000 |   159   (1)| 00:00:02 |
    Query Block Name / Object Alias (identified by operation id):
       1 - SEL$1
       2 - SEL$1 / T@SEL$1
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          FULL(@"SEL$1" "T"@"SEL$1")
          OUTLINE_LEAF(@"SEL$1")
          ALL_ROWS
          OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
          IGNORE_OPTIM_EMBEDDED_HINTS
          END_OUTLINE_DATA
    Column Projection Information (identified by operation id):
       1 - (#keys=1) "TEST_FNC"("ID")[22]
       2 - "ID"[NUMBER,22]
    34 rows selected.
    SQL>
    Explained.
    SQL> select * from table(dbms_xplan.display(null,null,'advanced'));
    PLAN_TABLE_OUTPUT
    Plan hash value: 47235625
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   1 |  HASH GROUP BY     |      |    10 |    30 |   162   (3)| 00:00:02 |
    |   2 |   TABLE ACCESS FULL| T    | 10000 | 30000 |   159   (1)| 00:00:02 |
    Query Block Name / Object Alias (identified by operation id):
       1 - SEL$F5BB74E1
       2 - SEL$F5BB74E1 / T@SEL$2
    Outline Data
      /*+
          BEGIN_OUTLINE_DATA
          FULL(@"SEL$F5BB74E1" "T"@"SEL$2")
          OUTLINE(@"SEL$2")
          OUTLINE(@"SEL$1")
          MERGE(@"SEL$2")
          OUTLINE_LEAF(@"SEL$F5BB74E1")
          ALL_ROWS
          OPTIMIZER_FEATURES_ENABLE('10.2.0.4')
          IGNORE_OPTIM_EMBEDDED_HINTS
          END_OUTLINE_DATA
    Column Projection Information (identified by operation id):
       1 - (#keys=1) "ID"[NUMBER,22]
       2 - "ID"[NUMBER,22]
    37 rows selected.

  • [sql performance] inline view , group by , max, join

    Hi. everyone.
    I have a question with regard to "group by" inline view ,
    max value, join, and sql performance.
    I will give you simple table definitions in order for you
    to understand my intention.
    Table A (parent)
    C1
    C2
    C3
    Table B (child)
    C1
    C2
    C3
    C4 number type(sequence number)
    1. c1, c2, c3 are the key columns of table A.
    2. c1, c2, c3, c4 are the key columns of table B.
    3. table A is the parent table of Table B.
    4. c4 column of table B is the serial number.
    (c4 increases from 1 by 1 for every (c1, c2, c3) combination.)
    The following is a simple example of the SQL query:
    select .................................
    from table_a,
    (select c1, c2, c3, max(c4)
    from table_b
    group by c1, c2, c3) table_c
    where table_a.c1 = table_c.c1
    and table_a.c2 = table_c.c2
    and table_a.c3 = table_c.c3
    The real query is not as simple as the above; more tables come
    after the FROM clause.
    Table A and table B are big tables, each with more than
    100,000,000 rows.
    The response time of this SQL is very, very slow,
    as everyone can expect.
    Are there any solutions or SQL tips for the slow response time?
    I am considering adding a new column to table B in
    order to identify the row that has the max serial number.
    At this point, I am not sure adding a column is a good
    idea in every respect.
    I will be waiting for your advice, and every response
    will be appreciated even if it is not the solution.
    Have a good day.
    HO.
    Message was edited by:
    user507290

    For such big sources check that
    1) you use full scans, hash joins or at least merge joins
    2) you scan your source data as little as possible, in the best case each necessary table only once (for example, not using an EXISTS clause that effectively scans a whole table via an index scan).
    3) how much time you are spending on sorts and hash joins (either from v$session_longops directly or some tool that visualises this info). If you are using workarea_size_policy = auto, you can probably switch to manual for this particular select and set sort_area_size and hash_area_size big enough to do as few sorts on disk as possible (see the sketch after this list).
    4) if you have enough free resources, i.e. a big box, you can probably consider using some parallelism
    5) if your full scans are taking a long time, check your db_file_multiblock_read_count; increasing it for this select will probably give some gain.
    6) run a trace and check what you are waiting for
    7) most probably your problem is I/O bound, so perhaps you can do something on the OS side to make I/O faster
    8) if your query is now optimized as much as you can, the disks are running like mad and you are using all the RAM, then that is probably the most you can get out of your box :)
    9) if nothing helps, then you can start thinking about precalculations, either using your idea of a derived column or some materialized views.
    10) I hope you have a test box; try at least point (9) on it first and see whether it helps.
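    As a rough illustration of point 3, switching the session to manual workarea sizing might look like this (the sizes are placeholder assumptions, not recommendations; test them on your own box):
    alter session set workarea_size_policy = manual;
    alter session set sort_area_size = 104857600;  -- ~100 MB per sort workarea
    alter session set hash_area_size = 209715200;  -- ~200 MB per hash workarea
    Then run the problem select in that session and compare the sort/hash activity in v$session_longops with what you saw under the automatic policy.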
    Gints Plivna
    http://www.gplivna.eu

  • SQL Performance and Security

    Help needed here, please. I am new to this concept and I am working on a tutorial based on SQL performance and security. I have tried to work my way through this but now I am stuck.
    Here are the questions:
    1. Analyse possible performance problems, and suggest solutions for each of the following transactions against the database
    a) A manager of a project needs to inspect total planned and actual hours spent on a project broken down by activity.
    e.g     
    Project: xxxxxxxxxxxxxx
    Activity Code          planned     actual (to date)
         1          20          25
         2          30          30
         3          40          24
    Total               300          200
    Note that actual time spent on an activity must be calculated from the WORK UNIT table.
    b)On several lists (e.g. list or combo boxes) in the on-line system it is necessary to identify completed, current, or future projects.
    2. Security: Justify and implement solutions at the server that meet the following security requirements
    (i)Only members of the Corporate Strategy Department (which is an organisation unit) should be able to enter, update and delete data in the project table. All users should be able to read this information.
    (ii)Employees should only be able to read information from the project table (excluding the budget) for projects they are assigned to.
    (iii)Only the manager of a project should be able to update (insert, update, delete) any non-key information in the project table relating to that project.
    Here are the project tables:
    set echo on
    -- Changes
    -- 4.10.00
    -- manager of employee on a project included in the employee on project table
    -- activity table now has compound key, based on ID dependence between project
    -- and activity
    drop table org_unit cascade constraints;
    drop table project cascade constraints;
    drop table employee cascade constraints;
    drop table employee_on_project cascade constraints;
    drop table employee_on_activity cascade constraints;
    drop table activity cascade constraints;
    drop table activity_order cascade constraints;
    drop table work_unit cascade constraints;
    -- org_unit
    -- type - for example in lmu might be FACULTY, or SCHOOL
    CREATE TABLE org_unit (
    ou_id              NUMBER(4)      CONSTRAINT ou_pk PRIMARY KEY,
    ou_name            VARCHAR2(40)   CONSTRAINT ou_name_uq UNIQUE
                                      CONSTRAINT ou_name_nn NOT NULL,
    ou_type            VARCHAR2(30)   CONSTRAINT ou_type_nn NOT NULL,
    ou_parent_org_id   NUMBER(4)      CONSTRAINT ou_parent_org_unit_fk
                                      REFERENCES org_unit
    );
    -- project
    CREATE TABLE project (
    proj_id                  NUMBER(5)      CONSTRAINT project_pk PRIMARY KEY,
    proj_name                VARCHAR2(40)   CONSTRAINT proj_name_uq UNIQUE
                                            CONSTRAINT proj_name_nn NOT NULL,
    proj_budget              NUMBER(8,2)    CONSTRAINT proj_budget_nn NOT NULL,
    proj_ou_id               NUMBER(4)      CONSTRAINT proj_ou_fk REFERENCES org_unit,
    proj_planned_start_dt    DATE,
    proj_planned_finish_dt   DATE,
    proj_actual_start_dt     DATE
    );
    -- employee
    CREATE TABLE employee (
    emp_id         NUMBER(6)      CONSTRAINT emp_pk PRIMARY KEY,
    emp_name       VARCHAR2(40)   CONSTRAINT emp_name_nn NOT NULL,
    emp_hiredate   DATE           CONSTRAINT emp_hiredate_nn NOT NULL,
    ou_id          NUMBER(4)      CONSTRAINT emp_ou_fk REFERENCES org_unit
    );
    -- activity
    -- note each activity is associated with a project
    -- act_type is the type of the activity, for example ANALYSIS, DESIGN, BUILD,
    -- USER ACCEPTANCE TESTING ...
    -- each activity has a people budget, in other words an amount to spend on
    -- wages
    CREATE TABLE activity (
    act_id                 NUMBER(6),
    act_proj_id            NUMBER(5)      CONSTRAINT act_proj_fk REFERENCES project
                                          CONSTRAINT act_proj_id_nn NOT NULL,
    act_name               VARCHAR2(40)   CONSTRAINT act_name_nn NOT NULL,
    act_type               VARCHAR2(30)   CONSTRAINT act_type_nn NOT NULL,
    act_planned_start_dt   DATE,
    act_actual_start_dt    DATE,
    act_planned_end_dt     DATE,
    act_actual_end_dt      DATE,
    act_planned_hours      NUMBER(6)      CONSTRAINT act_planned_hours_nn NOT NULL,
    act_people_budget      NUMBER(8,2)    CONSTRAINT act_people_budget_nn NOT NULL,
    CONSTRAINT act_pk PRIMARY KEY (act_id, act_proj_id)
    );
    -- employee on project
    -- when an employee is assigned to a project, an hourly rate is set
    -- remember that the person's manager depends on the project they are on
    -- the implication being that the manager needs to be assigned to the project
    -- before the 'managed'
    CREATE TABLE employee_on_project (
    ep_emp_id        NUMBER(6)     CONSTRAINT ep_emp_fk REFERENCES employee,
    ep_proj_id       NUMBER(5)     CONSTRAINT ep_proj_fk REFERENCES project,
    ep_hourly_rate   NUMBER(5,2)   CONSTRAINT ep_hourly_rate_nn NOT NULL,
    ep_mgr_emp_id    NUMBER(6),
    CONSTRAINT ep_pk PRIMARY KEY (ep_emp_id, ep_proj_id),
    CONSTRAINT ep_mgr_fk FOREIGN KEY (ep_mgr_emp_id, ep_proj_id) REFERENCES employee_on_project
    );
    -- employee on activity
    -- type - for example in lmu might be FACULTY, or SCHOOL
    CREATE TABLE employee_on_activity (
    ea_emp_id          NUMBER(6),
    ea_proj_id         NUMBER(5),
    ea_act_id          NUMBER(6),
    ea_planned_hours   NUMBER(3)   CONSTRAINT ea_planned_hours_nn NOT NULL,
    CONSTRAINT ea_pk PRIMARY KEY (ea_emp_id, ea_proj_id, ea_act_id),
    CONSTRAINT ea_act_fk FOREIGN KEY (ea_act_id, ea_proj_id) REFERENCES activity,
    CONSTRAINT ea_ep_fk FOREIGN KEY (ea_emp_id, ea_proj_id) REFERENCES employee_on_project
    );
    -- activity order
    -- only need a prior activity. If activity A is followed by activity B then
    -- (B is the prior activity of A)
    CREATE TABLE activity_order (
    ao_act_id         NUMBER(6),
    ao_proj_id        NUMBER(5),
    ao_prior_act_id   NUMBER(6),
    CONSTRAINT ao_pk PRIMARY KEY (ao_act_id, ao_prior_act_id, ao_proj_id),
    CONSTRAINT ao_act_fk FOREIGN KEY (ao_act_id, ao_proj_id) REFERENCES activity (act_id, act_proj_id),
    CONSTRAINT ao_prior_act_fk FOREIGN KEY (ao_prior_act_id, ao_proj_id) REFERENCES activity (act_id, act_proj_id)
    );
    -- work unit
    -- remember that DATE includes time
    CREATE TABLE work_unit (
    wu_emp_id     NUMBER(5),
    wu_act_id     NUMBER(6),
    wu_proj_id    NUMBER(5),
    wu_start_dt   DATE CONSTRAINT wu_start_dt_nn NOT NULL,
    wu_end_dt     DATE CONSTRAINT wu_end_dt_nn NOT NULL,
    CONSTRAINT wu_pk PRIMARY KEY (wu_emp_id, wu_proj_id, wu_act_id, wu_start_dt),
    CONSTRAINT wu_ea_fk FOREIGN KEY (wu_emp_id, wu_proj_id, wu_act_id)
              REFERENCES employee_on_activity (ea_emp_id, ea_proj_id, ea_act_id)
    );
    /* enter data */
    start ouins
    start empins
    start projins
    start actins
    start aoins
    start epins
    start eains
    start wuins
    start pmselect
    I have the insert scripts, ouins and the rest. Email me at [email protected] if you want to have a look at the tables.

    The answer to your 2nd question is easy. Create database roles for the various groups of people who are allowed to access or perform various DML actions.
    Then assign the various users to these roles. The users will be restricted to whatever the roles are restricted to.
    Look up roles if you are not familiar with them.
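    A minimal sketch of the role-based part for requirement (i); the role name and the user are made-up examples, not objects from your script:
    create role corp_strategy_role;
    -- members of the Corporate Strategy Department may change project data
    grant select, insert, update, delete on project to corp_strategy_role;
    grant corp_strategy_role to jane_doe;  -- hypothetical Corporate Strategy user
    -- everyone else may only read the project table
    grant select on project to public;
    Requirements (ii) and (iii) are row- and column-level restrictions, so roles alone won't cover them; you would typically layer views or similar on top of this.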

  • How to improve my PL/SQL performance tuning skills

    Hi all, I would like to learn more about PL/SQL performance tuning. Where or how can I get more knowledge in this area?
    Are there any tutorials which can help me understand explain plan, DBMS_PROFILER, DBMS_ADVISOR, etc.? Thanks. Bcj

    Explain plan
    http://www.psoug.org/reference/explain_plan.html
    DBMS_PROFILER (10g)
    http://www.psoug.org/reference/dbms_profiler.html
    DBMS_HPROF (11g)
    http://www.psoug.org/reference/dbms_hprof.html
    DBMS_ADVISOR
    http://www.psoug.org/reference/dbms_advisor.html
    DBMS_MONITOR
    http://www.psoug.org/reference/dbms_monitor.html
    DBMS_SUPPORT
    http://www.psoug.org/reference/dbms_support.html
    DBMS_TRACE
    http://www.psoug.org/reference/dbms_trace.html
    DBMS_SQLTUNE
    http://www.psoug.org/reference/dbms_sqltune.html
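    To get started, generating a plan and collecting a profiler run can be as simple as the sketch below (DEPT and my_proc are placeholders, and the profiler queries assume the plsql_profiler tables from proftab.sql have been created):
    -- explain plan: show the optimizer's plan for a statement
    explain plan for select * from dept where deptno = 10;
    select * from table(dbms_xplan.display);
    -- dbms_profiler: collect line-level timings for a PL/SQL run
    exec dbms_profiler.start_profiler('test run 1')
    exec my_proc
    exec dbms_profiler.stop_profiler
    select u.unit_name, d.line#, d.total_occur, d.total_time
    from   plsql_profiler_data d join plsql_profiler_units u
           on u.runid = d.runid and u.unit_number = d.unit_number;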

  • Regarding SQL performance Analyzer

    Hi,
    I am working with the Oracle 11g new feature Real Application Testing. I want to test the performance of the DB after setting the DB_BLOCK_CHECKSUM and DB_BLOCK_CHECKING parameters to TRUE (currently FALSE) by using SQL Performance Analyzer. I have collected SQL Tuning Sets from production and imported them into the test server (a replica of PROD), and will run the test on the test server only.
    Will it be feasible to check this by using SPA? My concern is that in the test environment concurrent transactions and DML operations will not be running.
    Please help me out on this.
    Rgds,
    At

    Hi,
    Look at http://download.oracle.com/docs/cd/B28359_01/server.111/e12253/dbr_intro.htm
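    In outline, the SPA flow on your test server might look like the sketch below; MY_STS and the task/execution names are assumptions, and you should check the exact DBMS_SQLPA parameters against the 11g documentation linked above:
    variable tname varchar2(64)
    exec :tname := dbms_sqlpa.create_analysis_task(sqlset_name => 'MY_STS', task_name => 'blk_check_test')
    -- trial 1: current parameter values
    exec dbms_sqlpa.execute_analysis_task(task_name => :tname, execution_type => 'TEST EXECUTE', execution_name => 'before_change')
    -- change the parameters, then run trial 2
    -- alter system set db_block_checksum = true;
    -- alter system set db_block_checking = true;
    exec dbms_sqlpa.execute_analysis_task(task_name => :tname, execution_type => 'TEST EXECUTE', execution_name => 'after_change')
    -- compare the two trials and report
    exec dbms_sqlpa.execute_analysis_task(task_name => :tname, execution_type => 'COMPARE PERFORMANCE', execution_name => 'compare')
    select dbms_sqlpa.report_analysis_task(:tname, 'text', 'typical', 'summary') from dual;
    Since SPA test-executes each statement in isolation, it answers the per-statement overhead question well, but it will not reproduce concurrent DML load; for that you would look at Database Replay, the other half of Real Application Testing.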
    Regards,

  • No of columns in a table and SQL performance

    How does table size affect SQL performance?
    I am comparing 2 tables , with same number of rows(54 million rows) ,
    table1(columns a,b,c,d,e,f..) has 40 columns
    table2 (columns (a,b,c,d)
    SQL uses columns a,b.
    SQL using table2 runs in 1 sec.
    SQL using table1 runs in 30 min.
    Can anyone please let me know how the table size and the number of columns in a table affect the performance of SQL statements?
    Thanks
    jeevan.

    user600431 wrote:
    This is a general question. I just want to compare a table with more columns and a table with fewer columns, both with the same number of rows.
    I am finding that the table with fewer columns performs better than the table with more columns.
    Assuming there are no row chains, will there be any difference in performance based on the number of columns in a table?
    Jeevan,
    the question is not how many columns your table has, but how large your table segment is. If your query runs a full table scan it has to read through the whole table segment, so in that case the size of the table matters.
    A table with more columns potentially has a larger row size than a table with fewer columns, but this is not a general rule. Think of large columns, e.g. VARCHAR2 columns, and think of blank (NULL) columns: you can easily end up with a table consisting of a single column that takes up more space per row than a table with 200 columns consisting only of VARCHAR2(1) columns.
    Check the DBA/ALL/USER_SEGMENTS view to determine the size of your two table segments. If you gather statistics on the tables then the dictionary will contain information about the average row size.
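    For example (a sketch; substitute your own table names):
    select segment_name, segment_type, blocks, bytes/1024/1024 as mb
    from   user_segments
    where  segment_name in ('TABLE1', 'TABLE2');
    select table_name, num_rows, avg_row_len, blocks
    from   user_tables
    where  table_name in ('TABLE1', 'TABLE2');
    If the wider table's segment is many times larger, a full table scan has to read that many more blocks, regardless of how few of its columns the query actually touches.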
    If your query is using indexes then the size of the table won't affect the query performance significantly in many cases.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • SQL PERFORMANCE SQL ANALYZER of 11g

    I am using 11g on Windows 2000. I want to run SQL Performance Analyzer to see the impact of a parameter change on some SQL statements. Currently, I am using it in a test environment, but eventually I want to apply it to the production environment.
    Let us say I want to see the effect of different values of db_file_multiblock_read_count.
    When I run this in my database, will the changed values impact only the session where I am running SQL Performance Analyzer, or will they impact other sessions which are accessing the same database instance? I think it impacts only the session where SQL Performance Analyzer is being run, but I want to make sure that is the case.
    Appreciate your feedback.

    I think it impacts only the session where SQL Performance Analyzer is being run, but I want to make sure that is the case.
    The database instance is part of a larger 'system' which includes a fixed set of physical resources. Your session, and every other session, works within the constraints of those resources. When you change how the current SQL statement is executed, you will be shifting the balance between those resources.
    For example, a disk can only respond to one access request at a time. A memory location can be used for one piece of data at a time. A DB cache buffer can only reflect one block at a time. There are a lot of 'points of serialization'.
    Although the major impact should be on the current session, there will be some impact on every other session in the system.
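    For a parameter such as db_file_multiblock_read_count specifically, the scope depends on how you set it (a sketch; 128 is just an example value):
    -- affects only the current session
    alter session set db_file_multiblock_read_count = 128;
    -- affects every session on the instance
    alter system set db_file_multiblock_read_count = 128;
    Even with ALTER SESSION, though, the extra multiblock I/O your trial generates still competes with everyone else for the same disks and buffer cache, which is the point made above.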
    By the way, there is an 'edit' button available to you for every post you create. As a courtesy, you could edit the title of the duplicate and let us know it is indeed a duplicate - or you could edit that other thread to ask the other question you were going to ask.

  • SQL*Plus Help

    Hi! Does anyone know where I can download the SQL*Plus help? I need to get info on SELECT syntax, SQL built-in functions, etc.
    Thanks!

    hi
    you can proceed like this:
    go to OTN,
    then click on Products.

  • EXEC SQL PERFORMING

    When I run the following program, it reports the error
    "the error occurred in the current database connection "DEFAULT"".
    How do I solve the problem?
    =================================
    DATA : BEGIN OF WA,
      CLIENT(3),
      ARG1(3),
      ARG2(3),
      END OF WA.
    DATA F3  VALUE '1'.
    EXEC SQL PERFORMING LOOP_OUTPUT.
      SELECT CLIENT , ARG1 INTO  :WA FROM TABLE_001 WHERE ARG2 = :F3.
    ENDEXEC.
    FORM LOOP_OUTPUT.
      WRITE : / WA-CLIENT,WA-ARG2.
    ENDFORM.
    ==================================

    hi
       try
        SELECT * FROM TABLE_001 INTO CORRESPONDING FIELDS OF WA WHERE ARG2 = F3.
          PERFORM LOOP_OUTPUT.
        ENDSELECT.
    KUMAR

  • How can I improve below SQL performance.

    Hi,
    How can I improve the performance of the SQL below? This SQL consumes CPU and causes wait events. It runs every 10 seconds. When I look at the session information in Enterprise Manager I can see "Histogram for Wait Event: PX Deq Credit: send blkd".
    I created some indexes. I heard that indexes are not used when there is a NULL, but when I checked the execution plan it uses an index.
    SELECT i.ID
    FROM EXPRESS.invoices i
    WHERE i.nbr IS NOT NULL
    AND i.EXTRACT_BATCH IS NULL
    AND i.SUB_TYPE='COD'
    Explain Plan from Toad
    SELECT STATEMENT CHOOSECost: 77 Bytes: 6,98 Cardinality: 349                     
         4 PX COORDINATOR                
              3 PX SEND QC (RANDOM) SYS.:TQ10000 Cost: 77 Bytes: 6,98 Cardinality: 349           
                   2 PX BLOCK ITERATOR Cost: 77 Bytes: 6,98 Cardinality: 349      
                        1 INDEX FAST FULL SCAN INDEX EXPRESS.INVC_TRANS_INDX Cost: 77 Bytes: 6,98 Cardinality: 349
    Execution Plan from Sqlplus
    | Id | Operation | Name | Rows | Bytes | Cost | TQ |IN-OUT| PQ Distrib |
    | 0 | SELECT STATEMENT | | 349 | 6980 | 77 | | | |
    | 1 | PX COORDINATOR | | | | | | | |
    | 2 | PX SEND QC (RANDOM) | :TQ10000 | 349 | 6980 | 77 | Q1,00 | P->S | QC (RAND) |
    | 3 | PX BLOCK ITERATOR | | 349 | 6980 | 77 | Q1,00 | PCWC | |
    |* 4 | INDEX FAST FULL SCAN| INVC_TRANS_INDX | 349 | 6980 | 77 | Q1,00 | PCWP | |
    Predicate Information (identified by operation id):
    4 - filter("I"."NBR" IS NOT NULL AND "I"."EXTRACT_BATCH" IS NULL AND "I"."SUB_TYPE"='COD')
    Note
    - 'PLAN_TABLE' is old version
    - cpu costing is off (consider enabling it)
    Statistics
    141 recursive calls
    0 db block gets
    5568 consistent gets
    0 physical reads
    0 redo size
    319 bytes sent via SQL*Net to client
    458 bytes received via SQL*Net from client
    1 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    0 rows processed
    Instance Efficiency Percentages (Target 100%)
    Buffer Nowait %: 100.00
    Redo NoWait %: 100.00
    Buffer Hit %: 99.70
    In-memory Sort %: 100.00
    Library Hit %: 99.81
    Soft Parse %: 99.77
    Execute to Parse %: 63.56
    Latch Hit %: 90.07
    Parse CPU to Parse Elapsd %: 0.81
    % Non-Parse CPU: 98.88
    Top 5 Timed Events
    Event                        Waits        Time(s)   Avg Wait(ms)   % Total Call Time   Wait Class
    latch: library cache         12,626       16,757    1,327          62.6                Concurrency
    CPU time                                  5,712                    21.3
    latch: session allocation    1,848,987    1,99      1              7.4                 Other
    PX Deq Credit: send blkd     1,242,265    981       1              3.7                 Other
    PX qref latch                1,405,819    726       1              2.7                 Other
    The database version is 10.2.0.1; we haven't installed the 10.2.0.5 patch set yet.
    I am awaiting your comments.
    Thanks in advance

    Welcome to the forum.
    I created some indexes. I heard that the indexes are not used when there is a NULL but when I checked the execution plan it uses an index.
    What columns are indexed?
    And what do:
    select i.sub_type
    ,      count(*)
    from   express.invoices i
    where  i.nbr is not null
    and    i.extract_batch is null
    group by i.sub_type;
    and
    select i.sub_type
    ,      count(*)
    from   express.invoices i
    group by i.sub_type;
    return?
    Also, try to use the {noformat}{noformat} tag when posting examples/execution plans etc.
    See: HOW TO: Post a SQL statement tuning request - template posting for more tuning instructions.
    It'll make a big difference.

  • How to install SQL*Plus help facilities and demos.

    Hi, everyone.
    An error message saying "failure to login" appears during the SQL*Plus
    part of the Oracle8 full installation. I know the installer wants
    to install the SQL*Plus help and demos by logging in to one of the
    DBA accounts (probably the SYSTEM account). However, because of a
    password problem it cannot log in to SQL*Plus, so the installer can't
    execute the pupbld.sql script, and as a result the SQL*Plus help and
    demos cannot be installed.
    Now I intend to install this stuff manually on my own.
    Could anyone help me? Thanks a lot.
    William

    Hi,
    pupbld.sql isn't the correct script to create the help
    facility; it just creates the product and user profile tables.
    The help script is at $ORACLE_HOME/sqlplus/admin/help (run as
    system)
    cd $ORACLE_HOME/sqlplus/admin/help
    sqlplus system/<password> @helptbl
    sqlldr system/<password> control=plushelp.ctl
    sqlldr system/<password> control=plshelp.ctl
    sqlldr system/<password> control=sqlhelp.ctl
    sqlplus system/<password> @helpindx
    I think it is still necessary to run the pupbld.sql script; without
    it, everyone who logs in to Oracle with SQL*Plus will see
    an error message. Run the script again:
    $ORACLE_HOME/sqlplus/admin/pupbld.sql
    Best regards,
    Ari

  • Will Oracle pl/sql certification help me get  IT job

    Hello guys,
    I have completed my B.Tech in Computer Science. I am a bit confused. Can I get a job after getting certified as an Oracle Associate PL/SQL Developer?

    1005323 wrote:
    Hello guys,
    I have completed my B.Tech in Computer Science. I am a bit confused. Can I get a job after getting certified as an Oracle Associate PL/SQL Developer?
    You may get a job after achieving the PL/SQL Developer OCA.
    You may get a job without achieving the PL/SQL Developer OCA.
    You may fail to get a job after achieving the PL/SQL Developer OCA.
    You may fail to get a job without achieving the PL/SQL Developer OCA.
    There are several factors involved in getting a job, and there are several ways a job may be obtained. But usually there are three stages:
    - Stage zero: A company has a job to offer. And you need to be aware of it: a friend may tell you, or an agency may tell you. And it must suit you for location, remuneration, etc.
    - Stage one: An interview is obtained with the company.
    - Stage two: The job is offered to you rather than anyone else and you find it acceptable.
    So ... to your question:
    "Can i get a job after getting certified in Oracle Associate Pl/sql developer?"
    Well ... there are only three possible answers: yes, no, and maybe; and maybe is probably the only correct answer. Most people will have worked this out, which means the question may not have been the best question to have asked.
    (That said, I now read the title of the thread and it says: Re: Will Oracle pl/sql certification help me get IT job.)
    I have been known on occasion to have been given a question by a boss.
    And I have answered him:
    "You have given me the wrong question
    The question you should have answer me is this.
    And the answer I will give you is this."
    And the boss goes away happy
    So you you a better question would have been:
    How much will an OCA PL/SQL certification increase my chances of getting a job?
    Mind you, even that question won't get you a much better answer.
    For a proportion of jobs where PL/SQL is relevant it will help (for those where it is not, it might occasionally even be a problem); for people with otherwise identical CVs it might sometimes help you get to the interview stage. But there are other factors as well. For instance, if I were thinking of giving you a job on the basis of your post, I might for example:
    - Not be impressed with a "Hello guys" greeting (though this is a forum, so that isn't relevant here).
    - Not be impressed with you being confused.
    - etc.
    You probably need to get a good appreciation of the job market in your locality and of the number of applicants for each job; of which jobs you can apply for and what your skillset is; and you need to know yourself as well.
    Sometimes an ITIL certification may be a better differentiator for some positions in business. But it will depend on the job you think you can get.

  • (SQL*PLUS HELP) RUNNING PUPBLD OR HELPINS ASKS FOR SYSTEM_PASS

    Product: ORACLE SERVER
    Date written: 2002-04-22
    (SQL*PLUS HELP) RUNNING PUPBLD OR HELPINS ASKS FOR SYSTEM_PASS
    ==============================================================
    PURPOSE
    This note describes how to view help for SQL*Plus commands from within SQL*Plus.
    Problem Description
    When you run helpins to install the help for SQL*Plus commands, you may
    get an error message saying that SYSTEM_PASS is not set.
    This note explains how to set the SYSTEM_PASS environment variable
    before running pupbld or helpins.
    It covers what to do when the following error occurs:
    SYSTEM_PASS not set.
    Set and export SYSTEM_PASS, then restart help (for helpins or
    profile for pupbld) installation.
    Workaround
    none
    Solution Description
    These scripts must be run while connected to the database as the SYSTEM user.
    The SYSTEM_PASS environment variable must be set before helpins can be run.
    NOTE
    For security reasons, do not set this variable in your shell
    startup scripts. (i.e. .login or .profile.).
    Set this environment variable at the prompt.
    1. Setting the environment variable at the prompt
    For C shell:
    % setenv SYSTEM_PASS system/<password>
    For Korn or Bourne shells:
    $ SYSTEM_PASS=system/<password> ;export SYSTEM_PASS
    2. Now run "$ORACLE_HOME/bin/pupbld" or "$ORACLE_HOME/bin/helpins".
    % cd $ORACLE_HOME/bin
    % pupbld
    or
    % helpins
    Caution
    The $ORACLE_HOME/bin/pupbld and $ORACLE_HOME/bin/helpins scripts
    both require the SYSTEM_PASS environment variable to be set.
    Reference Document
    <Note:1037075.6>

    Please check whether it is a database installation or whether you are just installing a client. Install the Enterprise database on the Windows 2000 system. If you are running client software, then you need to deinstall it.
