How to make SQL use an index / make the query perform better

Hi,
I have two SQL queries that return the same results, but their SQL traces differ.
create table test_table
(u_id number(10),
u_no number(4),
s_id number(10),
s_no number(4),
o_id number(10),
o_no number(4),
constraint pk_test primary key(u_id, u_no));
insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
values (2007030301, 1, 1001, 1, 2001, 1);
insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
values (2007030302, 1, 1001, 1, 2001, 2);
insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
values (2007030303, 1, 1001, 1, 2001, 3);
insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
values (2007030304, 1, 1001, 1, 2001, 4);
insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
values (2007030305, 1, 1002, 1, 1001, 2);
insert into test_table(u_id, u_no, s_id, s_no, o_id, o_no)
values (2007030306, 1, 1002, 1, 1002, 1);
commit;
CREATE INDEX idx_test_s_id ON test_table(s_id, s_no);
set autotrace on
select s_id, s_no, o_id, o_no
from test_table
where s_id <> o_id
and s_no <> o_no
union all
select o_id, o_no, s_id, s_no
from test_table
where s_id <> o_id
and s_no <> o_no;
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=8 Bytes=416)
1 0 UNION-ALL
2 1 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=4 Bytes=208)
3 1 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=4 Bytes=208)
Statistics
223 recursive calls
2 db block gets
84 consistent gets
0 physical reads
0 redo size
701 bytes sent via SQL*Net to client
508 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
8 rows processed
-- I didn't understand why the above query is not using the index idx_test_s_id,
-- but it is still faster.
select s_id, s_no, o_id, o_no
from test_table
where (u_id, u_no) in
(select u_id, u_no from test_table
minus
select u_id, u_no from test_table
where s_id = o_id
or s_no = o_no)
union all
select o_id, o_no, s_id, s_no
from test_table
where (u_id, u_no) in
(select u_id, u_no from test_table
minus
select u_id, u_no from test_table
where s_id = o_id
or s_no = o_no);
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=16 Card=2 Bytes=156)
1 0 UNION-ALL
2 1 FILTER
3 2 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=6 Bytes=468)
4 2 MINUS
5 4 INDEX (UNIQUE SCAN) OF 'PK_TEST' (INDEX (UNIQUE)) (Cost=1 Card=1 Bytes=26)
6 4 TABLE ACCESS (BY INDEX ROWID) OF 'TEST_TABLE' (TABLE) (Cost=2 Card=1 Bytes=78)
7 6 INDEX (UNIQUE SCAN) OF 'PK_TEST' (INDEX (UNIQUE)) (Cost=1 Card=1)
8 1 FILTER
9 8 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=6 Bytes=468)
10 8 MINUS
11 10 INDEX (UNIQUE SCAN) OF 'PK_TEST' (INDEX (UNIQUE)) (Cost=1 Card=1 Bytes=26)
12 10 TABLE ACCESS (BY INDEX ROWID) OF 'TEST_TABLE' (TABLE) (Cost=2 Card=1 Bytes=78)
13 12 INDEX (UNIQUE SCAN) OF 'PK_TEST' (INDEX (UNIQUE)) (Cost=1 Card=1)
Statistics
53 recursive calls
8 db block gets
187 consistent gets
0 physical reads
0 redo size
701 bytes sent via SQL*Net to client
508 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
4 sorts (memory)
0 sorts (disk)
8 rows processed
-- The above query uses the index PK_TEST, but it still does two full scans of the
-- table and has a higher cost.
1st query --> SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=8 Bytes=416)
2nd query --> SELECT STATEMENT Optimizer=ALL_ROWS (Cost=16 Card=2 Bytes=156)
My questions are:
1) Performance-wise, which query is better?
2) How do I make the first query use an index?
3) Is there any other way to get the same result using an index?
I appreciate your immediate help.
Best regards
Muthu

Hi William,
Nice... it works. I have added "o_id" and "o_no" to the index,
and now the query uses it.
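Roughly like this (the exact index definition here is an assumption, reconstructed from the plan below):
DROP INDEX idx_test_s_id;
CREATE INDEX idx_test_s_id ON test_table(s_id, s_no, o_id, o_no);
With all four selected columns present in the index, the optimizer can answer each branch of the UNION ALL from the index alone, which is why the plan now shows INDEX (FULL SCAN) instead of TABLE ACCESS (FULL).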
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=8 Bytes=416)
1 0 UNION-ALL
2 1 INDEX (FULL SCAN) OF 'IDX_TEST_S_ID' (INDEX) (Cost=1 Card=4 Bytes=208)
3 1 INDEX (FULL SCAN) OF 'IDX_TEST_S_ID' (INDEX) (Cost=1 Card=4 Bytes=208)
Statistics
7 recursive calls
0 db block gets
21 consistent gets
0 physical reads
0 redo size
701 bytes sent via SQL*Net to client
507 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8 rows processed
But my questions are:
1) If a "<>" condition is used in a WHERE clause, will the system use the index? I have observed in several situations that even though the column in the WHERE clause is indexed, the index is not used when the condition is "like" or "is null/is not null".
Along the same lines, I assumed that if we use <>, indexes will not be used. Is that true?
2) Now, after adding "o_id" and "o_no" columns to the index, the Execution plan is:
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=8 Bytes=416)
1 0 UNION-ALL
2 1 INDEX (FULL SCAN) OF 'IDX_TEST_S_ID' (INDEX) (Cost=1 Card=4 Bytes=208)
3 1 INDEX (FULL SCAN) OF 'IDX_TEST_S_ID' (INDEX) (Cost=1 Card=4 Bytes=208)
Before it was :
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=6 Card=8 Bytes=416)
1 0 UNION-ALL
2 1 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=4 Bytes=208)
3 1 TABLE ACCESS (FULL) OF 'TEST_TABLE' (TABLE) (Cost=3 Card=4 Bytes=208)
The difference is only in Cost (reduced), not in Card or Bytes.
Can you explain how I can decide which makes the performance better (Cost / Card / Bytes)? Full Scan / Range Scan?
On statistics also:
Before:
Statistics
52 recursive calls
0 db block gets
43 consistent gets
0 physical reads
0 redo size
701 bytes sent via SQL*Net to client
507 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8 rows processed
After:
Statistics
7 recursive calls
0 db block gets
21 consistent gets
0 physical reads
0 redo size
701 bytes sent via SQL*Net to client
507 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8 rows processed
The difference is in recursive calls and consistent gets.
Which one shows the query with better performance?
Please explain.
Regards
Muthu

Similar Messages

  • How can Photoshop CS6 be used to make a collage of photos

    How can Photoshop CS5 or CS6 be used to make collages of photos, say 6 per page? Are there templates available for drag and drop? Can text be added?

    Do you simply want a contact sheet? File>Automate>Contact Sheet II...
    There are a number of actions available for making collages. Google search yields:
    https://www.google.com/search?client=safari&rls=en&q=collage+action+photoshop&ie=UTF-8&oe=UTF-8

  • Query tuning and how to force  table to use index?

    Dear Experts,
    I have two (2) questions regarding performance during DRL.
    Question # 1
    There is a column named co_id in every transaction table. My DBA suggests adding [co_id='KPG'] to every WHERE clause, which forces the query to use an index and results in immediate processing, since an index was created on the co_id column of each table.
    Please note that co_id has the constant value 'KPG' throughout the table. Does it make sense to add that column to the WHERE clause, like
    select a,b,c from tab1
    where a='89' and co_id='KPG'
    Question # 2
    If I use a column in the WHERE clause that has an index on it, but that column is not in my select list, will the index still be used instead of a full table scan? For example:
    select a,b,c,d from tabletemp
    where e='ABC';
    Thanks in advance
    Edited by: Fiz Dosani on Mar 27, 2009 12:00 PM

    Fiz Dosani wrote:
    Dear Experts,
    I have two (2) questions regarding performance during DRL.
    Question # 1
    There is a column named co_id in every transaction table. My DBA suggests adding [co_id='KPG'] to every WHERE clause, which forces the query to use an index and results in immediate processing, since an index was created on the co_id column of each table.
    Please note that co_id has the constant value 'KPG' throughout the table. Does it make sense to add that column to the WHERE clause, like
    select a,b,c from tab1
    where a='89' and co_id='KPG'
    If co_id is always 'KPG', there is no need to add this condition to the query. It would be very stupid to add a (normal) index on that column. An index is used to reduce the result set of a query by storing the values and the rowids in a specified order. When all the values are equal, an index just makes all DML operations slower without making any SELECT faster.
    And of course the CBO is clever enough not to use such an index.
    >
    Question # 2
    If I use a column in the WHERE clause that has an index on it, but that column is not in my select list, will the index still be used instead of a full table scan? For example:
    select a,b,c,d from tabletemp
    where e='ABC';
    Yes, this is possible. However, it depends on a few things.
    1) How selective the condition is. In general an index will be used when the selectivity is less than 5% (this factor depends a bit on the database version). It means that when fewer than 5% of your rows have the value 'ABC', an index access will be faster than a full table scan.
    2) Whether the statistics are up to date. The cost based optimizer (CBO) needs to know how many values are in the table, in the columns and in the index to make a good decision about using an index access or a full table scan. Often one forgets to create statistics for freshly created data, as in temp tables.
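    For example, a minimal sketch of gathering statistics on such a freshly loaded table (the table name comes from the question; the parameter values are only an illustration):
    exec dbms_stats.gather_table_stats(ownname => user, tabname => 'TABLETEMP', estimate_percent => dbms_stats.auto_sample_size, method_opt => 'for all columns size auto', cascade => true);
    With current statistics in place, the CBO can judge the selectivity of e='ABC' and decide between the index and the full table scan on its own.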
    Edited by: Sven W. on Mar 27, 2009 8:53 AM

  • How is it possible to use Index Seek for LIKE %search-string% case?

    Hello,
    I have the following SP:
    CREATE PROCEDURE dbo.USP_SAMPLE_PROCEDURE(@Beginning nvarchar(15))
    AS
    SELECT * FROM HumanResources.Employee
    WHERE NationalIDNumber LIKE @Beginning + N'%';
    GO
    If I run the sp first time with param: N'94', then the following plan is generated and added to the cache:
    SQL Server "sniffs" the input value (94) when compiling the query. So for this param using Index Seek for AK_Employee_NationalIDNumber index will be the best option. On the other hand, the query plan should be generic enough to be able to handle
    any values specified in the @Beginning param.
    If I call the sp with @Beginning =N'%94':
    EXEC dbo.USP_SAMPLE_PROCEDURE N'%94'
    I see the same execution plan as above. The question is: how is it possible to reuse this execution plan in this case? To be more precise, how can an Index Seek be used in the LIKE %search-string% case? I expected that ONLY an Index Scan operation could be used here.
    Alexey

    The key is that the index seek operator includes both seek (greater than and less than) and a predicate (LIKE).  With the leading wildcard, the seek is effectively returning all rows just like a scan and the filter returns only rows matching
    the LIKE expression.
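    As a rough sketch of that idea (these are not the actual internal expressions; the optimizer computes them as Expr1007/Expr1008), a trailing-wildcard search such as N'94%' behaves as if it were rewritten into a seekable range plus a residual LIKE:
    SELECT *
    FROM HumanResources.Employee
    WHERE NationalIDNumber >= N'94' AND NationalIDNumber < N'95'  -- seek predicate (index range)
      AND NationalIDNumber LIKE N'94%';                           -- residual predicate
    With a leading wildcard (N'%94') no useful range boundary exists, so the seek range effectively covers the whole index and only the residual LIKE predicate filters the rows.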
    Do you want to say that in the case of a leading wildcard, expressions Expr1007 and Expr1008 (see image below) are calculated in such a way that the Seek Predicates retrieve all rows from the index, and only the Predicate does the real job by keeping just the rows matching the LIKE expression? If this is the case, then it explains how an Index Seek can be used to resolve such queries: LIKE N'%94'.
    However, it leads me to another question: since the Index Seek in this particular case scans all the rows, what is the difference between Index Seek and Index Scan?
    According to
    MSDN:
    The Index Seek operator uses the seeking ability of indexes to retrieve rows from a nonclustered index.
    The storage engine uses the index to process
    only those rows that satisfy the SEEK:() predicate. It optionally may include a WHERE:() predicate, which the storage engine will evaluate against all rows that satisfy the SEEK:() predicate (it does not use the indexes to do this).
    The Index Scan operator retrieves
    all rows from the nonclustered index specified in the Argument column. If an optional WHERE:() predicate appears in the Argument column, only those rows that satisfy the predicate are returned.
    It seems like Index Scan is a special case of Index Seek, which means that when we see Index Seek in the execution plan, it does NOT mean that the storage engine does NOT scan all rows. Right?
    Alexey

  • How can i use index in select query.. facing problem with the select query.

    Hi Friends,
    I am facing a serious problem with one of my SELECT queries. It is taking a lot of time to fetch data in the production scenario.
    Here is the query:
      SELECT * APPENDING CORRESPONDING FIELDS OF TABLE tbl_summary
        FROM ztftelat LEFT JOIN ztfzberep
         ON  ztfzberep~gjahr = st_input-gjahr
         AND ztfzberep~poper = st_input-poper
         AND ztfzberep~cntr  = ztftelat~rprctr
        WHERE rldnr  = c_telstra_accounting
          AND rrcty  = c_actual
          AND rvers  = c_ver_001
          AND rbukrs = st_input-bukrs
          AND racct  = st_input-saknr
          AND ryear  = st_input-gjahr
          And rzzlstar in r_lstar                            
          AND rpmax  = c_max_period.
    There are 5 indices present for Table ZTFTELAT.
    Indices of ZTFTELAT:
      Name   Description                                               
      0        Primary key( RCLNT,RLDNR,RRCTY,RVERS,RYEAR,ROBJNR,SOBJNR,RTCUR,RUNIT,DRCRK,RPMAX)                                          
      005    Profit (RCLNT,RPRCTR)
      1        Ledger, company code, account (RLDNR,RBUKRS, RACCT)                                
      2        Ledger, company code, cost center (RLDNR, RBUKRS,RCNTR)                           
      3        Account, cost center (RACCT,RCNTR)                                        
      4        RCLNT/RLDNR/RRCTY/RVERS/RYEAR/RZZAUFNR                        
      Z01    Activity Type, Account (RZZLSTAR,RACCT)                                        
      Z02    RYEAR-RBUKRS- RZZZBER-RLDNR       
    Can anyone help me understand why it is taking so much time and how we can reduce it? Also, if I want to use index number 1, how can I do that?
    Thanks in advance.

    Hi Shiva,
    I am using two more SELECT queries in the same manner.
    Here are the other two:
    ***************1************************
    SELECT * APPENDING CORRESPONDING FIELDS OF TABLE tbl_summary
        FROM ztftelpt LEFT JOIN ztfzberep
         ON  ztfzberep~gjahr = st_input-gjahr
         AND ztfzberep~poper = st_input-poper
         AND ztfzberep~cntr  = ztftelpt~rprctr
        WHERE rldnr  = c_telstra_projects
          AND rrcty  = c_actual
          AND rvers  = c_ver_001
          AND rbukrs = st_input-bukrs
          AND racct  = st_input-saknr
          AND ryear  = st_input-gjahr
          and rzzlstar in r_lstar             
          AND rpmax  = c_max_period.
    and the second one is
    *************************2************************
      SELECT * APPENDING CORRESPONDING FIELDS OF TABLE tbl_summary
        FROM ztftelnt LEFT JOIN ztfzberep
         ON  ztfzberep~gjahr = st_input-gjahr
         AND ztfzberep~poper = st_input-poper
         AND ztfzberep~cntr  = ztftelnt~rprctr
        WHERE rldnr  = c_telstra_networks
          AND rrcty  = c_actual
          AND rvers  = c_ver_001
          AND rbukrs = st_input-bukrs
          AND racct  = st_input-saknr
          AND ryear  = st_input-gjahr
          and rzzlstar in r_lstar                              
          AND rpmax  = c_max_period.
    For both of the above tables the program takes very little time, although the tables used in these queries hold a similar amount of data. And I cannot remove the APPENDING CORRESPONDING, because I have to append the data after fetching from the tables; if I don't use it, all the data fetched earlier will be deleted.
    Thanks in advance.
    Sourabh

  • How to connect sql server using oracle Client

    Hi,
    I am working with Oracle 9i on 32-bit Windows.
    I need to connect to SQL Server 2000 from my Oracle client.
    I have heard about heterogeneous connectivity.
    Please explain the steps: what to add and how to connect to SQL Server.
    Regards

    Are you trying to connect to SQL Server from your Oracle database (i.e. create a database link in Oracle to SQL Server)? Or are you trying to connect to SQL Server using your Oracle client software (i.e. SQL*Plus)?
    The former just requires Heterogeneous Services with Generic Connectivity. The latter is functionality that has been deprecated and probably no longer works in your environment.
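    For the database link route, a minimal sketch (the link name, credentials, table name and the ODBC/TNS alias below are placeholders; the Generic Connectivity agent, listener and tnsnames entries must already be configured):
    CREATE DATABASE LINK mssql_link
      CONNECT TO "sql_user" IDENTIFIED BY "sql_password"
      USING 'hsodbc_alias';
    SELECT * FROM some_table@mssql_link;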
    Justin

  • How to execute SQL Script using windows powershell(using invoke-sqlcmd or any if)

    OS : Windows server 2008
    SQL Server : SQL Server 2012
    Script: Test.sql (T-SQL)  example : "select name from sys.databases"
    Batch script: Windows MyBatchscript.bat (it connects to SQL Server using sqlcmd and writes the output to c:\Testoutput.txt)
    (sqlcmd.exe -S DBserverName -U username -P p@ssword -i C:\test.sql -o "c:\Testoutput.txt") --- it is working without any issues.
    It runs if I double-click the MyBatchscript.bat file, and I can see the output in c:\Testoutput.txt.
    PowerShell: Similarly, how can I do this in PowerShell 2.0 or higher? Can anyone give full details with each step?
    I found some examples online, but nowhere with clear details, and it is not executing through the command line (or a batch script).
    Example: Invoke-Sqlcmd -ServerInstance ServerName -InputFile "c:\test.sql" | Out-File -FilePath "c:\psOutput.txt"  --(call this file MyTest.ps1)
    (The above script works if I run it manually. I want to run it automatically, like a double-click (or scheduled with a 3rd-party tool/scheduler) from a batch file, and see the output in the C drive (c:\psOutput.txt).)
    Can any PowerShell experts suggest full details/steps for this? How do I proceed? Is any configuration required to run it automatically?
    Thanks in advance.

    Tested the following code and it's working. Thanks all.
    Execute a SQL script using Invoke-Sqlcmd, with and without a batch script.
    Option 1: importing sqlps
    1. Save the SQL script as "C:\scripts\Test.sql". Script inside Test.sql: select name from sys.databases
    2. Save the batch script as "C:\scripts\MyTest.bat". Script inside the batch script:
    powershell.exe C:\scripts\mypowershell.ps1
    3. Save the PowerShell script as "C:\scripts\mypowershell.ps1"
    import-module "sqlps" -DisableNameChecking
    invoke-sqlcmd -ServerInstance ServerName -inputFile "C:\scripts\Test.sql" | out-File -filepath "C:\scripts\TestOutput.txt"
    4. Run the batch script from the command line or double-click it; then you can see the output in the "C:\scripts\TestOutput.txt" file.
    5. Or connect to the scripts location: cd C:\scripts (enter)
    C:\scripts\dir (enter)
    C:\scripts\MyTest.bat (enter)
    Note: you can see the output in the "C:\scripts" location as a file named "TestOutput.txt".
    Option 2: another way, running the script through sqlps so the import is not needed
    1. Save the SQL script as "C:\scripts\Test.sql". Script inside Test.sql: select name from sys.databases
    2. Save the PowerShell script as "C:\scripts\mypowershell.ps1"
    # import-module "sqlps" -DisableNameChecking  # ...not required here.
    invoke-sqlcmd -ServerInstance ServerName -inputFile "C:\scripts\Test.sql" | out-File -filepath "C:\scripts\TestOutput.txt"
    3.Connect to current scripts location
    cd C:\scripts (enter)
    C:\scripts\dir (enter )
    C:\scripts\powershell.exe sqlps C:\scripts\mypowershell.ps1 (enter)
    Note: you can see the output in the "C:\scripts" location as a file named "TestOutput.txt".

  • How to execute .sql file using ODI

    Hi All,
    I need to execute a .sql file using ODI.
    I tried the @{path}{file} command in an ODI procedure with Oracle technology selected, but it is failing.
    Does anyone have any other idea for executing a .sql file?
    Thanks in advance

    OK... I think you can try creating a batch file (.bat), if it's Windows, and call that from ODIOSCommand.
    The .bat file should contain a script which calls the .sql file using SQL*Plus, and there you can use the @{path}{file} format.
    See if this helps.
    Regards,
    Santy

  • How to find CPU_time being used by currently executing query

    Hello,
    Is there a way to know how much CPU is being used by a query that is currently running? I tried looking into the dm_exec_sessions and dm_exec_requests DMVs, but it looks like the CPU_time stat doesn't get updated in those views until the query actually completes or gets cancelled. Sometimes it's important to find out which of the actively running sessions/queries is consuming the most CPU, and there seems to be no way to find that out until the query completes or gets cancelled. I was testing on SQL 2008 R2, but the behavior could be the same on 2014 or 2012.
    Appreciate any feedback.
    thanks,
    Raj

    I have been using Adam's great utility
    Who Is Active? v10.00 (2010-10-21)
    (C) 2007-2010, Adam Machanic
    Updates: http://sqlblog.com/blogs/adam_machanic/archive/tags/who+is+active/default.aspx
    "Beta" Builds: http://sqlblog.com/files/folders/beta/tags/who+is+active/default.aspx
    License: 
    Who is Active? is free to download and use for personal, educational, and internal 
    corporate purposes, provided that this header is preserved. Redistribution or sale 
    of Who is Active?, in whole or in part, is prohibited without the author's express 
    written consent.
    CPU filter
    /*CPU*/
    EXEC dbo.sp_WhoIsActive
    @get_transaction_info=0,
    @output_column_list ='[session_id][start_time]
                      [cpu][status][context_switches][wait_info][program_name]
                     [database_name][sql_text][host_name][open_tran_count]', 
    @sort_order='[CPU]DESC'
    /*delta*/
    EXEC dbo.sp_WhoIsActive
    @delta_interval=5, @get_task_info = 2,
    @output_column_list ='[session_id][start_time][context switches]
                      [CPU_delta][reads_delta][writes_delta][tempdb_writes_delta]
                      [tempdb_reads_delta][tempdb_current_delta]
                      [database_name][host_name][login_name]', 
    @sort_order='[CPU_delta]DESC'
    Best Regards,Uri Dimant SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/
    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting:
    Large scale of database and data cleansing
    Remote DBA Services:
    Improves MS SQL Database Performance
    SQL Server Integration Services:
    Business Intelligence

  • Construct a Sql block using With Clause to improve the performance

    I have four different parameterized cursors in my PL/SQL procedure. As the performance of the procedure is very poor, I have been asked to tune the SELECT statements used in those cursors.
    So I am trying to use the WITH clause in order to combine those four SELECT statements.
    I would appreciate it if anybody could help me construct the SQL block using the WITH clause.
    My DB version is..
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    Four Diff cursors are defined below.
    CURSOR all_iss (
          b_batch_end_date   IN   TIMESTAMP)
       IS
          SELECT isb.*
                FROM IMPLMN_STEP_BREKPN  isb
               , ISSUE iss
          WHERE isb.issue_id = iss.issue_id
           AND iss.issue_status_id  =  50738
           AND ewo_no IN
          (SELECT TO_CHAR(wo_no)
            FROM MGO_PLANT_AUDIT
           WHERE dml_status = 'U' OR dml_status = 'I')
          UNION ALL
          SELECT isb.*
           FROM IMPLMN_STEP_BREKPN  isb
            , ISSUE iss
           WHERE isb.issue_id = iss.issue_id
           AND iss.issue_status_id  =  50738
           AND CAST (isb.last_updt_timstm AS TIMESTAMP) >=
                                                                  b_batch_end_date;
          CURSOR ewo_plant  ( p_ewo_no IN  IMPLMN_STEP_BREKPN.ewo_no%TYPE)
          IS
          SELECT DISTINCT wo_no ,
          plant_code
          FROM MGO_PLANT
          WHERE TO_CHAR(wo_no) = p_ewo_no;
          CURSOR iss_ewo_plnt (
          p_issue_id IN IMPLMN_STEP_BREKPN.issue_id%TYPE ,
          p_ewo_no IN IMPLMN_STEP_BREKPN.EWO_NO%TYPE,
          p_plnt_code IN IMPLMN_STEP_BREKPN.PLT_FACLTY_ID%TYPE)
          IS
          SELECT *
          FROM IMPLMN_STEP_BREKPN
          WHERE issue_id = p_issue_id
          AND ewo_no = p_ewo_no
          AND
          (plt_faclty_id = p_plnt_code
          OR
          plt_faclty_id IS NULL);
          CURSOR iss_ewo_plnt_count (
          p_issue_id IN IMPLMN_STEP_BREKPN.issue_id%TYPE ,
          p_ewo_no IN IMPLMN_STEP_BREKPN.EWO_NO%TYPE,
          p_plnt_code IN IMPLMN_STEP_BREKPN.PLT_FACLTY_ID%TYPE)
          IS
          SELECT COUNT(*)
          FROM IMPLMN_STEP_BREKPN
          WHERE issue_id = p_issue_id
          AND ewo_no = p_ewo_no
          AND
          (plt_faclty_id = p_plnt_code
          OR
          plt_faclty_id IS NULL);

    Not tested. Something like below. I just turned the queries into inline views named a, b and c, and substituted columns for the parameters used in the second and third cursors. Try it like this.
    CURSOR all_iss (
    b_batch_end_date IN TIMESTAMP)
    IS
    select a.*,b.*,c.* from
    ( SELECT isb.*
    FROM IMPLMN_STEP_BREKPN isb
    , ISSUE iss
    WHERE isb.issue_id = iss.issue_id
    AND iss.issue_status_id = 50738
    AND ewo_no IN
    (SELECT TO_CHAR(wo_no)
    FROM MGO_PLANT_AUDIT
    WHERE dml_status = 'U' OR dml_status = 'I')
    UNION ALL
    SELECT isb.*
    FROM IMPLMN_STEP_BREKPN isb
    , ISSUE iss
    WHERE isb.issue_id = iss.issue_id
    AND iss.issue_status_id = 50738
    AND CAST (isb.last_updt_timstm AS TIMESTAMP) >=
    b_batch_end_date) a,
    ( SELECT DISTINCT wo_no ,
    plant_code
    FROM MGO_PLANT
    WHERE TO_CHAR(wo_no) = p_ewo_no) b,
    ( SELECT *
    FROM IMPLMN_STEP_BREKPN
    WHERE issue_id = p_issue_id
    AND ewo_no = p_ewo_no
    AND (plt_faclty_id = p_plnt_code OR plt_faclty_id IS NULL)) c
    where b.wo_no = c.ewo_no and
    c.issue_id = a.issue_id ;
    vinodh
    Edited by: Vinodh2 on Jul 11, 2010 12:03 PM
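    If the goal is specifically a WITH clause, the same sketch can also be written with subquery factoring, for example (untested; :b_batch_end_date stands in for the cursor parameter, and the joins follow the reply above):
    WITH all_iss AS (
      SELECT isb.*
        FROM IMPLMN_STEP_BREKPN isb, ISSUE iss
       WHERE isb.issue_id = iss.issue_id
         AND iss.issue_status_id = 50738
         AND ewo_no IN (SELECT TO_CHAR(wo_no)
                          FROM MGO_PLANT_AUDIT
                         WHERE dml_status = 'U' OR dml_status = 'I')
      UNION ALL
      SELECT isb.*
        FROM IMPLMN_STEP_BREKPN isb, ISSUE iss
       WHERE isb.issue_id = iss.issue_id
         AND iss.issue_status_id = 50738
         AND CAST(isb.last_updt_timstm AS TIMESTAMP) >= :b_batch_end_date
    ),
    ewo_plant AS (
      SELECT DISTINCT TO_CHAR(wo_no) AS ewo_no, plant_code
        FROM MGO_PLANT
    )
    SELECT a.*, b.plant_code
      FROM all_iss a
      JOIN ewo_plant b ON b.ewo_no = a.ewo_no;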

  • How many files should be used to make 1 website?

    This is a relatively simple beginner question, but my beginner tutorials have led me astray and confused me a bit.
    (I did katie's cafe tutorial and it has an Embed file, a Master Page file, a Visit Page file, a Widgets file, and a Publish file)
    I was just wondering how many files I should include for my website. It seems like I will only need one.
    Say I have 5 pages on my site, do I need 5 different files? Or may I just edit all the pages in one Adobe Muse file, and when it uploads via FTP it will create the separate pages for me?
    Thanks
    Ben

    Hello Ben,
    You're correct in thinking that you need only one file for your site with 5 pages, and that you don't need 5 files for 5 pages.
    Hope this helps.
    Cheers
    Parikshit

  • How to create sql database using java frame or appelet?

    Hi! I am working on a database project. I want to create a database using a Java frame or applet that asks the user to select the location where the database should be created; after the user has specified the path, the program creates the database. I also want that database to be read and written by another frame or applet, but since the user selects the path, how do I set up the connectivity? I have only basic knowledge of Java. Please give me an idea of how to proceed further.
    thanks a lot

    While duffymo is correct in regard to most major database products, it's my understanding (warning! wild-ass guess coming) that the Hypersonic DB is more "application-centric" and the dynamic creation of databases is part of its design. It's pure Java database software, and therefore is not appropriate for all database projects, in particular those that require extremely high-performance.
    See http://hsqldb.org/
    I've not used it yet (but soon though), and I can't really advise anyone about it.
    However, I'm wondering if you phrased your question in a way that is confusing us. To most of us in casual conversation, a "database" is both (1) a large organized collection of data and (2) the software that is used to organize and access it. However, the phrase "create a database" usually means creating a (1) database (a collection of data) using already-created (2) database software, such as Oracle, MySQL, DB2, HSQLDB, etc. If your question is how to create new database software using Java, the answer is that this is a very big and hard thing to do for the general case, and probably not something you want to be doing.

  • How oracle decide whetehr to use index or full scan (statistics)

    Hi Guys,
    Let's say I have an index on a column.
    The table and index statistics have been gathered (without histograms).
    Let's say I perform a select * from table where a=5;
    Oracle will perform a full scan.
    But from which statistics will it be able to know that indeed most of the column values = 5? (histograms not used)
    After analyzing, we get the below:
    Table Statistics :
    (NUM_ROWS)
    (BLOCKS)
    (EMPTY_BLOCKS)
    (AVG_SPACE)
    (CHAIN_COUNT)
    (AVG_ROW_LEN)
    Index Statistics :
    (BLEVEL)
    (LEAF_BLOCKS)
    (DISTINCT_KEYS)
    (AVG_LEAF_BLOCKS_PER_KEY)
    (AVG_DATA_BLOCKS_PER_KEY)
    (CLUSTERING_FACTOR)
    thanks
    Index Column (A)
    ======
    1
    1
    2
    2
    5
    5
    5
    5
    5
    5

    I had prepared some explanation and had not noticed that the topic has been marked as answered.
    This sentence of mine is not completely true.
    A column "without histograms" means that the column has only one bucket. More correctly: even without histograms there are data in dba_tab_histograms which we can consider as one bucket for the whole column. In fact these data are retrieved from hist_head$, not from histgrm$ as usual buckets are.
    Technically there are no buckets without gathered histograms.
    Let's create a table with skewed data distribution.
    SQL> create table t as
      2  select least(rownum,3) as val, '*' as pad
      3    from dual
      4  connect by level <= 1000000;
    Table created
    SQL> create index idx on t(val);
    Index created
    SQL> select val, count(*)
      2    from t
      3   group by val;
           VAL   COUNT(*)
             1          1
             2          1
             3     999998
    So, we have a table with a very skewed data distribution.
    Let's gather statistics without histograms.
    SQL> exec dbms_stats.gather_table_stats( user, 'T', estimate_percent => 100, method_opt => 'for all columns size 1', cascade => true);
    PL/SQL procedure successfully completed
    SQL> select blocks, num_rows  from dba_tab_statistics
      2   where table_name = 'T';
        BLOCKS   NUM_ROWS
          3106    1000000
    SQL> select blevel, leaf_blocks, clustering_factor
      2    from dba_ind_statistics t
      3   where table_name = 'T'
      4     and index_name = 'IDX';
        BLEVEL LEAF_BLOCKS CLUSTERING_FACTOR
             2        4017              3107
    SQL> select column_name,
      2         num_distinct,
      3         density,
      4         num_nulls,
      5         low_value,
      6         high_value
      7    from dba_tab_col_statistics
      8   where table_name = 'T'
      9     and column_name = 'VAL';
    COLUMN_NAME  NUM_DISTINCT    DENSITY  NUM_NULLS      LOW_VALUE      HIGH_VALUE
    VAL                     3 0,33333333          0           C102            C104
    So, Oracle suggests that values between 1 and 3 (raw C102 and C104) are distributed uniformly and that the density of the distribution is 0.33.
    Let's try to explain plan
    SQL> explain plan for
      2  select --+ no_cpu_costing
      3         *
      4    from t
      5   where val = 1
      6  ;
    Explained
    SQL> @plan
    | Id  | Operation         | Name | Rows  | Cost  |
    |   0 | SELECT STATEMENT  |      |   333K|   300 |
    |*  1 |  TABLE ACCESS FULL| T    |   333K|   300 |
    Predicate Information (identified by operation id):
       1 - filter("VAL"=1)
    Note
       - cpu costing is off (consider enabling it)Below is an excerpt from trace 10053
    BASE STATISTICAL INFORMATION
    Table Stats::
      Table:  T  Alias:  T
        #Rows: 1000000  #Blks:  3106  AvgRowLen:  5.00
    Index Stats::
      Index: IDX  Col#: 1
        LVLS: 2  #LB: 4017  #DK: 3  LB/K: 1339.00  DB/K: 1035.00  CLUF: 3107.00
    SINGLE TABLE ACCESS PATH
      BEGIN Single Table Cardinality Estimation
      Column (#1): VAL(NUMBER)
        AvgLen: 3.00 NDV: 3 Nulls: 0 Density: 0.33333 Min: 1 Max: 3
      Table:  T  Alias: T
        Card: Original: 1000000  Rounded: 333333  Computed: 333333.33  Non Adjusted: 333333.33
      END   Single Table Cardinality Estimation
      Access Path: TableScan
        Cost:  300.00  Resp: 300.00  Degree: 0
          Cost_io: 300.00  Cost_cpu: 0
          Resp_io: 300.00  Resp_cpu: 0
      Access Path: index (AllEqRange)
        Index: IDX
        resc_io: 2377.00  resc_cpu: 0
        ix_sel: 0.33333  ix_sel_with_filters: 0.33333
        Cost: 2377.00  Resp: 2377.00  Degree: 1
      Best:: AccessPath: TableScan
           Cost: 300.00  Degree: 1  Resp: 300.00  Card: 333333.33  Bytes: 0
    Cost of FTS here is 300 and cost of Index Range Scan here is 2377.
    I have disabled cpu costing, so selectivity does not affect the cost of FTS.
    cost of Index Range Scan is calculated as
    blevel + (leaf_blocks * selectivity + clustering_factor * selectivity) = 2 + (4017*0.33333 + 3107*0.33333) = 2377.
    Oracle considers that it has to read 2 root/branch blocks of the index, 1339 leaf blocks of the index and 1036 blocks of the table.
    Pay attention that selectivity is the major component of the cost of the Index Range Scan.
    Let's try to gather histograms:
    SQL> exec dbms_stats.gather_table_stats( user, 'T', estimate_percent => 100, method_opt => 'for columns val size 3', cascade => true);
    PL/SQL procedure successfully completed
    If you look at dba_tab_histograms you will see the following:
    SQL> select endpoint_value,
      2         endpoint_number
      3    from dba_tab_histograms
      4   where table_name = 'T'
      5     and column_name = 'VAL'
      6  ;
    ENDPOINT_VALUE ENDPOINT_NUMBER
                 1               1
                 2               2
                 3         1000000
    ENDPOINT_VALUE is the column value (as a number, for any type of data) and ENDPOINT_NUMBER is the cumulative number of rows.
    Number of rows for any ENDPOINT_VALUE = ENDPOINT_NUMBER for this ENDPOINT_VALUE - ENDPOINT_NUMBER for the previous ENDPOINT_VALUE. For example, for VAL = 3: 1000000 - 2 = 999998 rows.
    explain plan and 10053 trace of the same query:
    | Id  | Operation                   | Name | Rows  | Cost  |
    |   0 | SELECT STATEMENT            |      |     1 |     4 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| T    |     1 |     4 |
    |*  2 |   INDEX RANGE SCAN          | IDX  |     1 |     3 |
    Predicate Information (identified by operation id):
       2 - access("VAL"=1)
    Note
       - cpu costing is off (consider enabling it)
    BASE STATISTICAL INFORMATION
    Table Stats::
      Table:  T  Alias:  T
        #Rows: 1000000  #Blks:  3106  AvgRowLen:  5.00
    Index Stats::
      Index: IDX  Col#: 1
        LVLS: 2  #LB: 4017  #DK: 3  LB/K: 1339.00  DB/K: 1035.00  CLUF: 3107.00
    SINGLE TABLE ACCESS PATH
      BEGIN Single Table Cardinality Estimation
      Column (#1): VAL(NUMBER)
        AvgLen: 3.00 NDV: 3 Nulls: 0 Density: 5.0000e-07 Min: 1 Max: 3
        Histogram: Freq  #Bkts: 3  UncompBkts: 1000000  EndPtVals: 3
      Table:  T  Alias: T
        Card: Original: 1000000  Rounded: 1  Computed: 1.00  Non Adjusted: 1.00
      END   Single Table Cardinality Estimation
      Access Path: TableScan
        Cost:  300.00  Resp: 300.00  Degree: 0
          Cost_io: 300.00  Cost_cpu: 0
          Resp_io: 300.00  Resp_cpu: 0
      Access Path: index (AllEqRange)
        Index: IDX
        resc_io: 4.00  resc_cpu: 0
        ix_sel: 1.0000e-06  ix_sel_with_filters: 1.0000e-06
        Cost: 4.00  Resp: 4.00  Degree: 1
      Best:: AccessPath: IndexRange  Index: IDX
           Cost: 4.00  Degree: 1  Resp: 4.00  Card: 1.00  Bytes: 0
    Pay attention to the selectivity: ix_sel: 1.0000e-06
    Cost of the FTS is still the same = 300,
    but cost of the Index Range Scan is 4 now: 2 root/branch blocks + 1 leaf block + 1 table block.
    Thus, the conclusion: histograms allow the optimizer to calculate selectivity more accurately. The aim is to get more efficient execution plans.
    Alexander Anokhin
    http://alexanderanokhin.wordpress.com/

  • How to run sql scripts using batch file for a web dynpro data dictionary

    Hi,
    I want to develop a SQL script to be executed on the server along with the installation of a product, to pre-populate the Web Dynpro data dictionary tables required for the application.
    I further need to make the script independent of the database name, so that it can be run in any client environment.
    Your help will be appreciated and rewarded.

    See shoblock's answer
    call sql script from unix
    masterfile.sql:
    @file1 &1
    @file2 &2
    @file3 &3
    @file4 &4
    then just call the master script:
    sqlplus userid/password @masterfile <p1> <p2> <p3> <p4>

  • How to connect SQL server using JRun

    I am relatively new to Java. I am running a JRun server on a Unix box, and I need to connect to SQL Server (full marks for guessing that SQL Server is on the Win2K box).
    Now I want to connect to the SQL server from Jrun using JSPs.
    Question:
    1. Is JDBC-ODBC the only way to connect to SQL Server? If yes:
    2. Do I have to create a DSN on the server for the JDBC-ODBC bridge to work?
    3. Can I write my own database connection package?
    I would appreciate it if anybody could help me with this problem; and if you have written a package to connect to the database, it would be nice if you would let me try it.
    If you have any responses, please email me at [email protected]
    Thanks in Advance for the anticipated help.
    Viviar Prasad

    Hi Prasad,
    There are two ways (that I have done) to connect to SQL Server in JRun:
    1) using JDBC-ODBC connectivity with a type 1 JDBC driver;
    2) using a type 4 driver (type 2 and 3 drivers are not available right now for SQL Server).
    For 1) you have to create a DSN and use it directly in your program, bypassing the JRun Management Console (I mean you don't have to configure anything there).
    For 2) you need to use JRun's proprietary JDBC driver. There is a problem here:
    this driver is available in JRun 3.0 Enterprise Edition only and still has some problems, so you need to apply Service Pack 2 to your JRun installation and download the driver. The JDBC driver is at the following link:
    ftp://ftp.allaire.com/kbftp/jrun/all/jrun_drivers.jar.zip
    This will supply an HTML doc along with the driver which will explain how to apply it.
    You can also download a JDBC driver specifically for SQL Server from this link: http://www.j-netdirect.com/jsqlconnect2_26.zip
    but it will expire in a month.
    Anyway, once you have obtained either of the above drivers, follow these steps to configure JDBC in JRun. You need to put the driver in the classpath:
    1 - open your management console
    2 - select the web application you want to configure from the left frame of the MC (Management Console)
    3 - select JDBC data source, click the Add button and follow the wizard
    For more info, refer to the JRun Quick Start on configuring JDBC settings that came with your JRun.
    Or you can directly click the link there and type:
    first, the name of the datasource (you can supply any name; you will be using it to access the DB in your Java program);
    second, the name of the JDBC driver:
    for JRun it's
    allaire.jrun.jdbc.JRunDriver
    for Net Direct
    com.jnetdirect.jsql.JSQLDriver
    third, the URL. For the JRun driver:
    jdbc:jrun:sqlserver://hostname:portno/databasename=dbname; USER=uname; Password=password
    where dbname is your database name,
    uname is your SQL Server user name, e.g. sa,
    password is the password for the corresponding user name,
    and hostname is the name of the computer where your SQL Server is running; it could be an IP, e.g. 10.0.0.32:1433
    (1433 is the default port number for SQL Server if you are using the TCP protocol to connect).
    For Net Direct:
    jdbc:JSQLConnect//hostname:port/databasename=dbname; USER=uname; Password=password
    for any more information on this you can mail me in this id
    [email protected]
    All the best..
    regards
    Bishwa
