SQL Optimization Help

Hi,
I'm relatively new to SQL and I'm trying to figure out a good way to run a select statement multiple times (with different parameters) and get all the output in a single result set.
SELECT *
FROM table
WHERE date=&1
This is what I'm doing now, but I'm sure it's not optimal:
SELECT *
FROM table
WHERE date=date1
UNION ALL
SELECT *
FROM table
WHERE date=date2
UNION ALL
SELECT *
FROM table
WHERE date=date3
etc.
Help please.
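If the list of dates is small and fixed, the UNION ALL branches can be collapsed into a single statement with an IN list. A minimal sketch, using the same placeholder table and column names as above:
SELECT *
FROM table
WHERE date IN (date1, date2, date3);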

Hi,
Can you insert the parameters into a global temporary table? If so, you could join to the global temporary table like this:
/* Set up demo table */
create table source_table (
id number,
date_field date
);
create index idx_date_field on source_table (date_field);
/* Insert 10,000 rows with consecutive dates */
insert into source_table
select rownum, trunc(sysdate) + rownum from dual connect by rownum <= 10000;
commit;
/* Set up the global temporary table */
create global temporary table gtt_dates (date_val date) on commit preserve rows;
/* Insert 10 random dates */
insert into gtt_dates
select * from (select date_field from source_table order by dbms_random.random) where rownum <= 10;
/* Demo query */
select a.*
from source_table a,
gtt_dates g
where a.date_field = g.date_val;
Regards,
Melvin
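To reuse Melvin's temporary table for a different set of dates within the same session, the pattern would be as follows (a sketch; the date literals are placeholders):
delete from gtt_dates;
insert into gtt_dates values (date '2024-01-15');
insert into gtt_dates values (date '2024-03-02');
select a.*
from source_table a,
gtt_dates g
where a.date_field = g.date_val;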

Similar Messages

  • SQL*Plus Help

    Hi! Does anyone know where I can download the SQL*Plus help? I need to get info on SELECT syntax, SQL built-in functions, etc.
    Thanks!

    Hi,
    You can proceed like this:
    Go to OTN, then click on Products.

  • How to install SQL*Plus help facilities and demos.

    Hi, everyone
    An error message saying "failure to login" appears during the SQL*Plus
    part of the full Oracle8 installation. I know the installer wants to
    install the SQL*Plus help and demos by logging in to a DBA account
    (probably the SYSTEM account). However, because of a password problem it
    cannot log in to SQL*Plus, so the installer can't execute the pupbld.sql
    script, and as a result the SQL*Plus help and demos can't be installed.
    Now I intend to install these pieces separately.
    Could anyone help me? Thanks a lot.
    William

    Hi,
    The pupbld.sql isn't the correct script to create the help
    facility, it just creates product and user profile tables.
    The help script is at $ORACLE_HOME/sqlplus/admin/help (run as
    system)
    cd $ORACLE_HOME/sqlplus/admin/help
    sqlplus system/<password> @helptbl
    sqlldr system/<password> control=plushelp.ctl
    sqlldr system/<password> control=plshelp.ctl
    sqlldr system/<password> control=sqlhelp.ctl
    sqlplus system/<password> @helpindx
    I think it is still necessary to run the pupbld.sql script; without it,
    everyone who logs in to Oracle with SQL*Plus will see an error message.
    Run the script again:
    $ORACLE_HOME/sqlplus/admin/pupbld.sql
    Best regards,
    Ari

  • Will Oracle pl/sql certification help me get  IT job

    Hello guys,
    I have completed my B.Tech in Computer Science. I am a bit confused: can I get a job after getting certified as an Oracle Associate PL/SQL developer?

    1005323 wrote:
    Hello guys,
    I have completed my B.Tech in Computer Science. I am a bit confused: can I get a job after getting certified as an Oracle Associate PL/SQL developer?
    You may get a job after achieving the PL/SQL developer OCA.
    You may get a job without achieving the PL/SQL developer OCA.
    You may fail to get a job after achieving the PL/SQL developer OCA.
    You may fail to get a job without achieving the PL/SQL developer OCA.
    There are several factors involved in getting a job, and there are several ways a job may be obtained. But usually there are three stages:
    - Stage zero: A company has a job to offer.
    - And you need to be aware of it. A friend may tell you, or an agency may tell you. And it must suit you for location, remuneration, etc.
    - Stage one: An interview is obtained with the company.
    - Stage two: The job is offered to you rather than anyone else, and you find it acceptable.
    So ... to your question:
    "Can I get a job after getting certified as an Oracle Associate PL/SQL developer?"
    Well ... there are only three possible answers: yes, no, and maybe. Maybe is probably the only correct answer, and most people will have worked that out, which means the question may not have been the best one to ask.
    (That said, I have now read the title of the thread, and it says: "Will Oracle PL/SQL certification help me get an IT job?")
    I have been known, on occasion, to be given a question by a boss and to answer him:
    "You have given me the wrong question.
    The question you should have asked me is this.
    And the answer I will give you is this."
    And the boss goes away happy.
    So a better question would have been:
    How much will an OCA PL/SQL certification increase my chances of getting a job?
    Mind you, even that question won't get you a much better answer.
    For a proportion of the jobs where PL/SQL is relevant, it will help (for those where it is not, it might occasionally even be a problem); for people with otherwise identical CVs it might sometimes help to get to the interview stage. But there are other factors as well. For instance, if I were considering giving you a job on the basis of your post, I might:
    - Not be impressed with a "Hello guys" greeting (though this is a forum, so that isn't relevant here).
    - Not be impressed with you being confused.
    - etc.
    You probably need a good appreciation of the job market in your locality, the number of applicants for each job, which jobs you can apply for, and your own skillset; knowing yourself helps as well.
    Sometimes an ITIL certification may be a better differentiator for some positions in business. But it will depend on the job you think you can get.

  • HTMLDB 1.5  SQL Optimization

    Hi All
    I'm using HTML DB 1.5, and SQL optimizer hints (e.g. /*+ INDEX */) vanish from all regions when my app is migrated from development to production.
    I tested re-importing the app into the dev environment and had the same issue.
    Is this an HTML DB bug, or am I doing something wrong?
    Thanks
    Kezie

    Kezie - Actually that particular bug was fixed in 1.5.1. If you can apply the 1.5.1 patch, the application installation page will not strip out hints. For SQL*Plus import/install, you must connect as FLOWS_010500 (DBA can change password) or connect as any schema assigned to the workspace into which your application will be installed. The workspace ID and app ID must be identical to those from the source HTML DB instance for this to work.
    Scott

  • Optimization of SQL with many IN-list values and multiple OR operations

    Product: ORACLE SERVER
    Date written: 2004-04-19
    Optimization of SQL with many IN-list values and multiple OR operations
    =========================================================
    PURPOSE
    This document describes how the CBO handles SQL statements that contain
    many values in an IN list and many OR operators.
    Explanation
    Many developers and DBAs have experienced SQL statements using the IN and
    OR operators that cause excessive optimization time.
    This document explains how the CBO processes IN lists and OR operators.
    When the CBO encounters an IN-list operation, it decides between a few options.
    1. Split the SQL statement into a series of statements combined with UNION ALL.
    Consider the statement:
    SELECT empno FROM emp WHERE deptno IN (10,20,30);
    It can be rewritten as:
    SELECT empno FROM emp WHERE deptno = 10
    UNION ALL
    SELECT empno FROM emp WHERE deptno = 20
    UNION ALL
    SELECT empno FROM emp WHERE deptno = 30
    If the deptno column is indexed, the index can be used for the lookup in
    each branch.
    If this split does not happen automatically under the Cost Based Optimizer,
    it can be forced with the USE_CONCAT hint.
    See <Note:17214.1> for details.
    2. Leave the IN list as a list and use the values as a filter.
    In Oracle 7 this option cannot use an index.
    In Oracle 8 this option is provided through the 'inlist iterator', which
    can use an index.
    The NO_EXPAND hint tells the CBO not to perform the expansion.
    Very long IN lists can cause problems in a CBO environment, especially when
    the IN list is expanded into a large number of UNION ALL branches, because
    the CBO must cost each of the expanded statements. With many branches,
    costing these expanded statements is time-consuming.
    In an RBO (Rule Based Optimizer) environment this is not an issue, since no
    costing is done.
    Workaround
    If parsing problems occur because of a very long IN list, the workarounds are:
    1) Use the NO_EXPAND hint. With this hint Oracle 7 will not use an index,
    while Oracle 8 still can.
    2) Use the RBO.
    3) Rewrite the query: store the IN-list values in a lookup table and join to
    that table instead of using the IN list (a sketch follows this note).
    Note: remember that using a hint causes the statement to be optimized by the CBO.
    Example
    none
    Reference Documents
    <Note:62153.1>
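    As an illustration of workaround 3 above (a minimal sketch, not part of the original note, reusing the emp/deptno example): store the IN-list values in a small lookup table and join to it instead of listing them inline.
    create table deptno_list (deptno number);
    insert into deptno_list values (10);
    insert into deptno_list values (20);
    insert into deptno_list values (30);
    SELECT e.empno
    FROM emp e, deptno_list d
    WHERE e.deptno = d.deptno;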

  • (SQL*PLUS HELP) RUNNING PUPBLD OR HELPINS ASKS FOR SYSTEM_PASS

    Product: ORACLE SERVER
    Date written: 2002-04-22
    (SQL*PLUS HELP) RUNNING PUPBLD OR HELPINS ASKS FOR SYSTEM_PASS
    ==============================================================
    PURPOSE
    This note describes how to make help for SQL*Plus commands available from
    within SQL*Plus.
    Problem Description
    When helpins is run to install the SQL*Plus command help, it can fail with
    the error message "SYSTEM_PASS is not set".
    This note explains how to set the SYSTEM_PASS environment variable before
    running pupbld or helpins.
    It covers what to do when the following error occurs:
    SYSTEM_PASS not set.
    Set and export SYSTEM_PASS, then restart help (for helpins or
    profile for pupbld) installation.
    Workaround
    none
    Solution Description
    These scripts must be run while connected to the database as the SYSTEM user.
    To run helpins, the SYSTEM_PASS environment variable must be set.
    NOTE
    For security reasons, do not set this variable in your shell
    startup scripts (i.e. .login or .profile).
    Set this environment variable at the prompt.
    1. Set the environment variable at the prompt.
    For C shell:
    % setenv SYSTEM_PASS system/<password>
    For Korn or Bourne shells:
    $ SYSTEM_PASS=system/<password> ; export SYSTEM_PASS
    2. Now run "$ORACLE_HOME/bin/pupbld" or "$ORACLE_HOME/bin/helpins".
    % cd $ORACLE_HOME/bin
    % pupbld
    or
    % helpins
    Caution
    The $ORACLE_HOME/bin/pupbld and $ORACLE_HOME/bin/helpins scripts both
    require the SYSTEM_PASS environment variable to be set.
    Reference Document
    <Note:1037075.6>

    Please check whether you installed a full database or just the client software. Install the Enterprise database on your Windows 2000 system; if you are running only the client software, you should deinstall it first.

  • Download Oracle SQL*Plus help related like word help

    Hello all
    I just want to ask where I can download the Oracle SQL*Plus help, something like the Word help files?
    ty

    You can access SQL*Plus help from the command line in a SQL*Plus session by typing 'help index'. If you want more information than that, take a look at the SQL*Plus Quick Reference or the SQL*Plus User's Guide and Reference on OTN. These docs are all for Oracle 10g; documentation for other versions can be found there as well.
    Tom

  • SQL Server2008 help needed

    Having trouble with SQLServer 2008 (not MySQL) and my database connection in Dreamweaver CS6.  My document type is set as .asp using VBScript.  I can list the table information  but cannot use the insert wizard to add new records.  I don't get any errors after creating the insert form, but no records get inserted.  I'm not a VBScript expert, but do I have to manually write some code to insert records?  How do I attach it to a button?

    Thanks for the quick reply.  I won't be back in the office for a few days, but I'll try to post it when I get back in.  It's pretty much the code generated from the Dreamweaver Insert Record wizard.  I see where the submit button is created and the value is set but the action on the form is set to MM_insert, so I don't see where the submit code is actually called.
    bregent replied in Dreamweaver General ("Re: SQL Server2008 help needed"):
    This post should be moved to the app dev forum. Please post the code from your form and the insert script pages.

  • How to optimize this SQL. Help needed.

    Hi All,
    Can you please help with this SQL:
    SELECT /*+ INDEX(zl1 zipcode_lat1) */
    zl2.zipcode as zipcode,l.location_id as location_id,
    sqrt(POWER((69.1 * ((zl2.latitude*57.295779513082320876798154814105) - (zl1.latitude*57.295779513082320876798154814105))),2) + POWER((69.1 * ((zl2.longitude*57.295779513082320876798154814105) - (zl1.longitude*57.295779513082320876798154814105)) * cos((zl1.latitude*57.295779513082320876798154814105)/57.3)),2)) as distance
    FROM location_atao l, zipcode_atao zl1, client c, zipcode_atao zl2
    WHERE zl1.zipcode = l.zipcode
    AND l.client_id = c.client_id
    AND c.client_id = 306363
    And l.appType = 'HOURLY'
    and c.milessearchzipcode >= sqrt(POWER((69.1 * ((zl2.latitude*57.295779513082320876798154814105) - (zl1.latitude*57.295779513082320876798154814105))),2) + POWER((69.1 * ((zl2.longitude*57.295779513082320876798154814105) - (zl1.longitude*57.295779513082320876798154814105)) * cos((zl1.latitude*57.295779513082320876798154814105)/57.3)),2))
    I tried to optimize it by adding a country column to the zipcode_atao table, so that the search in zipcode_atao can be limited by country.
    Any other suggestions?
    Thanks

    Welcome to the forum.
    Please follow the instructions given in this thread:
    How to post a SQL statement tuning request
    HOW TO: Post a SQL statement tuning request - template posting
    and add the necessary details to your thread.
    Depending on your database version (the result of: select * from v$version; ):
    Have you tried running the query without the index hint?
    Are your table (and index) statistics up-to-date?
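    If the statistics might be stale, they can be refreshed with DBMS_STATS before re-testing (a sketch; replace the table name and repeat for the other tables in the query):
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => USER,
        tabname => 'ZIPCODE_ATAO',
        cascade => TRUE);  -- also gathers statistics for the table's indexes
    END;
    /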

  • Optimization Help ?? Pl/sql procedure

    Hi people,
    I have 2 procedures and they are taking forever to complete.
    I need your help in optimizing those procedures to finish my task.
    My task is data migration. I am using Oracle 11g and SQL Developer 3.0.
    I have to migrate about 2 million records.
    I would like to know why my procedure is taking a long time when it comes to insertion.
    My current code inserts about 300 records every 10 minutes.
    Could it be my CASE statements?
    If it is, what would be a better solution?
    Note: The code is working, but it is slow. I need it to be faster.
    Thanks in advance for your help.
    Here is some sample data, just to show what the code is doing.
    Please look at DOC_TYPE_CODE in the procedure; maybe that is where the problem is (the CASE statement):
    20100121085123687(SOR_DOC)-00010641.PDF
    20100121103547355(PSR_DOC)-058744_148631696.pdf
    20100121115927953(JC_DOC)-00013741.PDF
    20100122102257379(REV_DOC)-6-034848_141278871.pdf
    20100128105556824(OTHER_DOC)-2-059399_590886456.pdf
    20100203113810388(PLEA_DOC)-059019_44397339.pdf
    Here are the 2 procedures:
    create or replace
    PROCEDURE RUN
    AS
    BEGIN
         FOR I IN (SELECT DCOLLECTIONID FROM COLMETA)
         LOOP
              DOCUMENT_INSERTION(TO_NUMBER(I.DCOLLECTIONID));
         END LOOP;
    END;
    create or replace
    PROCEDURE DOCUMENT_INSERTION(XCOL_ID NUMBER) AS
    COUNT_NUM NUMBER := 0;
    BEGIN
         -- GET THE COUNT
         SELECT      COUNT(*)
         INTO      COUNT_NUM
         FROM      DOCMETA D,
                   CASES_DOCUMENTS DOC,
                   COLLECTIONS C
         WHERE     D.XCOLLECTIONID = XCOL_ID
         AND      DOC.DID = D.DID
         AND      C.DCOLLECTIONID = D.XCOLLECTIONID
         AND      D.XFOLDERNAME = C.DCOLLECTIONNAME
         AND      DOC.DISPRIMARY = 1;
         -- START THE INSERTION PROCESS
         INSERT INTO DOCUMENTS
         SELECT
              DM.DID AS DID,
              C.DCOLLECTIONID AS SUBMISSION_ID,
              CASE
                   WHEN SUBSTR(D.DORIGINALNAME, INSTR(D.DORIGINALNAME,'(') + 1,
                         INSTR(D.DORIGINALNAME,'_DOC') - INSTR(D.DORIGINALNAME,'(') - 1) = 'PSR' THEN 10
                   WHEN SUBSTR(D.DORIGINALNAME, INSTR(D.DORIGINALNAME,'(') + 1,
                         INSTR(D.DORIGINALNAME,'_DOC') - INSTR(D.DORIGINALNAME,'(') - 1) = 'JC' THEN 20
                   WHEN SUBSTR(D.DORIGINALNAME, INSTR(D.DORIGINALNAME,'(') + 1,
                         INSTR(D.DORIGINALNAME,'_DOC') - INSTR(D.DORIGINALNAME,'(') - 1) = 'SOR' THEN 30
                   WHEN SUBSTR(D.DORIGINALNAME, INSTR(D.DORIGINALNAME,'(') + 1,
                         INSTR(D.DORIGINALNAME,'_DOC') - INSTR(D.DORIGINALNAME,'(') - 1) = 'PLEA' THEN 40
                   WHEN SUBSTR(D.DORIGINALNAME, INSTR(D.DORIGINALNAME,'(') + 1,
                         INSTR(D.DORIGINALNAME,'_DOC') - INSTR(D.DORIGINALNAME,'(') - 1) = 'INDICT' THEN 50
                   WHEN SUBSTR(D.DORIGINALNAME, INSTR(D.DORIGINALNAME,'(') + 1,
                         INSTR(D.DORIGINALNAME,'_DOC') - INSTR(D.DORIGINALNAME,'(') - 1) = 'OTHER' THEN 80
                   WHEN SUBSTR(D.DORIGINALNAME, INSTR(D.DORIGINALNAME,'(') + 1,
                         INSTR(D.DORIGINALNAME,'_DOC') - INSTR(D.DORIGINALNAME,'(') - 1) = 'AMEND' THEN 80
                   WHEN SUBSTR(D.DORIGINALNAME, INSTR(D.DORIGINALNAME,'(') + 1,
                         INSTR(D.DORIGINALNAME,'_DOC') - INSTR(D.DORIGINALNAME,'(') - 1) = 'REV' THEN 80
                   WHEN COUNT_NUM = 1 THEN 150   /* CODE FOR MAIN IS 150 */
                   ELSE 160       /* CODE FOR UNKNOWN IS 160 */
              END AS DOC_TYPE_CODE,
              NULL AS BFILEDATA,
              CASE
                   WHEN SUBSTR(D.DORIGINALNAME, INSTR(D.DORIGINALNAME,'(') + 1,
                       INSTR(D.DORIGINALNAME,'_DOC') - INSTR(D.DORIGINALNAME,'(') - 1) = 'AMEND' THEN 'OTHER'
                   WHEN SUBSTR(D.DORIGINALNAME, INSTR(D.DORIGINALNAME,'(') + 1,
                       INSTR(D.DORIGINALNAME,'_DOC') - INSTR(D.DORIGINALNAME,'(') - 1) = 'REV' THEN 'OTHER'
                   WHEN SUBSTR(D.DORIGINALNAME, INSTR(D.DORIGINALNAME,'(') + 1,
                       INSTR(D.DORIGINALNAME,'_DOC') - INSTR(D.DORIGINALNAME,'(') - 1) IS NULL
                         AND COUNT_NUM = 1 THEN 'MAIN'
                   WHEN SUBSTR(D.DORIGINALNAME, INSTR(D.DORIGINALNAME,'(') + 1,
                       INSTR(D.DORIGINALNAME,'_DOC') - INSTR(D.DORIGINALNAME,'(') - 1) IS NULL
                         AND COUNT_NUM > 1 THEN 'UNKNOWN'
                   ELSE SUBSTR(D.DORIGINALNAME, INSTR(D.DORIGINALNAME,'(') + 1,
                       INSTR(D.DORIGINALNAME,'_DOC') - INSTR(D.DORIGINALNAME,'(') - 1)
              END AS DOC_NAME,
              D.DFORMAT AS MIME_TYPE
    FROM      CASES_DOCUMENTS D,
              DOCMETA DM,
              COLMETA C
    WHERE     DM.XCOLLECTIONID = XCOL_ID
    AND      DM.XCOLLECTIONID = C.DCOLLECTIONID
    AND      D.DID = DM.DID
    AND      D.DISPRIMARY = 1;
    COMMIT;     /* Committing the record */
    END;
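    A common restructuring for this kind of migration (a sketch only, not the poster's code) is to extract the '(XXX_DOC)' token once per row in an inline view, so each CASE compares a single value, and to replace the per-collection loop with one set-based INSERT ... SELECT. Table and column names are taken from the post; the per-collection "single document" rule is approximated here with an analytic COUNT(*), so the semantics need to be verified before use:
    INSERT INTO documents
    SELECT did,
           submission_id,
           CASE
                WHEN doc_token = 'PSR'    THEN 10
                WHEN doc_token = 'JC'     THEN 20
                WHEN doc_token = 'SOR'    THEN 30
                WHEN doc_token = 'PLEA'   THEN 40
                WHEN doc_token = 'INDICT' THEN 50
                WHEN doc_token IN ('OTHER', 'AMEND', 'REV') THEN 80
                WHEN docs_in_collection = 1 THEN 150   /* CODE FOR MAIN */
                ELSE 160                               /* CODE FOR UNKNOWN */
           END AS doc_type_code,
           NULL AS bfiledata,
           CASE
                WHEN doc_token IN ('AMEND', 'REV') THEN 'OTHER'
                WHEN doc_token IS NULL AND docs_in_collection = 1 THEN 'MAIN'
                WHEN doc_token IS NULL THEN 'UNKNOWN'
                ELSE doc_token
           END AS doc_name,
           dformat AS mime_type
    FROM  (SELECT dm.did,
                  c.dcollectionid AS submission_id,
                  d.dformat,
                  /* token between '(' and '_DOC', e.g. PSR, JC, SOR */
                  SUBSTR(d.doriginalname,
                         INSTR(d.doriginalname, '(') + 1,
                         INSTR(d.doriginalname, '_DOC') - INSTR(d.doriginalname, '(') - 1) AS doc_token,
                  COUNT(*) OVER (PARTITION BY dm.xcollectionid) AS docs_in_collection
           FROM   cases_documents d
                  JOIN docmeta dm ON dm.did = d.did
                  JOIN colmeta  c ON c.dcollectionid = dm.xcollectionid
           WHERE  d.disprimary = 1);
    COMMIT;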

    Thanks everyone for the help.
    Here is the DDL:
    CREATE TABLE "CASES_DOCUMENTS"
        "DDOCID"        NUMBER(38,0) NOT NULL ENABLE,
        "DID"           NUMBER(38,0),
        "DISPRIMARY"    NUMBER(1,0) NOT NULL ENABLE,
        "DORIGINALNAME" VARCHAR2(255 CHAR),
        "DFORMAT"       VARCHAR2(80 CHAR),
        "DFILESIZE"     NUMBER(38,0)
      CREATE TABLE "DOCMETA"
        "DID"                   NUMBER(38,0) NOT NULL ENABLE,
        "XCOLLECTIONID"         NUMBER(38,0),
        "XUSSCID"              VARCHAR2(30 CHAR),
        "XSUBMISSIONID"        VARCHAR2(30 CHAR),
        "XMISSINGCASE"         VARCHAR2(30 CHAR),
         "XFOLDERNAME"          VARCHAR2(200 CHAR),
        "XDEFTYPE"       VARCHAR2(30 CHAR),
        CONSTRAINT "PK_DOCMETA" PRIMARY KEY ("DID")
      CREATE TABLE "COLMETA"
        "DCOLLECTIONID"         NUMBER(38,0) NOT NULL ENABLE,
        "XUSSCID"              VARCHAR2(30 CHAR),
        CONSTRAINT "PK_COLLECTIONMETA" PRIMARY KEY ("DCOLLECTIONID")
      CREATE TABLE "COLLECTIONS"
        "DCOLLECTIONID"       NUMBER(38,0) NOT NULL ENABLE,
        "DCOLLECTIONGUID"     VARCHAR2(36 CHAR) NOT NULL ENABLE,
         "DPARENTCOLLECTIONID" NUMBER(38,0),
        CONSTRAINT "PK_COLLECTIONID" PRIMARY KEY ("DCOLLECTIONID")
        CONSTRAINT "UK_COLLECTIONNAMES" UNIQUE ("DCOLLECTIONNAME", "DPARENTCOLLECTIONID")
    /* insert statements */
    Insert into "cases_documents" (DDOCID,DID,DISPRIMARY,DORIGINALNAME,DFORMAT,DFILESIZE) values (271787,135894,1,'20100121085123687(SOR_DOC)-00010641.PDF','Application/pdf',202115);
    Insert into "cases_documents" (DDOCID,DID,DISPRIMARY,DORIGINALNAME,DFORMAT,DFILESIZE) values (271823,135912,1,'20100121103547355(PSR_DOC)-058744_148631696.pdf','Application/pdf',1118423);
    Insert into "cases_documents" (DDOCID,DID,DISPRIMARY,DORIGINALNAME,DFORMAT,DFILESIZE) values (271861,135931,1,'20100121110415669(SOR_DOC)-058011_58041316.pdf','Application/pdf',219272);
    Insert into "cases_documents" (DDOCID,DID,DISPRIMARY,DORIGINALNAME,DFORMAT,DFILESIZE) values (271937,135969,1,'20100121115927953(JC_DOC)-00013741.PDF','Application/pdf',259548);
    Insert into "cases_documents" (DDOCID,DID,DISPRIMARY,DORIGINALNAME,DFORMAT,DFILESIZE) values (272051,136026,1,'20100122102257379(REV_DOC)-6-034848_141278871.pdf','Application/pdf',230619);
    Insert into "cases_documents" (DDOCID,DID,DISPRIMARY,DORIGINALNAME,DFORMAT,DFILESIZE) values (272087,136044,1,'20100125111357271(JC_DOC)-00011510.PDF','Application/pdf',102264);
    Insert into "cases_documents" (DDOCID,DID,DISPRIMARY,DORIGINALNAME,DFORMAT,DFILESIZE) values (272159,136080,1,'20100126112526792(SOR_DOC)-1-00013020.PDF','Application/pdf',244530);
    Insert into "cases_documents" (DDOCID,DID,DISPRIMARY,DORIGINALNAME,DFORMAT,DFILESIZE) values (272235,136118,1,'20100126182111728(PSR_DOC)-059335_498074466.pdf','Application/pdf',963139);
    Insert into "cases_documents" (DDOCID,DID,DISPRIMARY,DORIGINALNAME,DFORMAT,DFILESIZE) values (272275,136138,1,'20100127115025011(REV_DOC)-1-00006915.PDF','Application/pdf',649596);
    Insert into "docmeta" (DID,XCOLLECTIONID,XUSSCID,XSUBMISSIONID,XMISSINGCASE,XFOLDERNAME,XDEFTYPE) values (135894,798044066,'1303497','416738','false','124341_20100121085123687','10');
    Insert into "docmeta" (DID,XCOLLECTIONID,XUSSCID,XSUBMISSIONID,XMISSINGCASE,XFOLDERNAME,XDEFTYPE) values (135912,798044069,'1302059','416812','false','124341_20100121103547355','10');
    Insert into "docmeta" (DID,XCOLLECTIONID,XUSSCID,XSUBMISSIONID,XMISSINGCASE,XFOLDERNAME,XDEFTYPE) values (135931,798044072,'1300935','416834','false','124341_20100121110415669','10');
    Insert into "docmeta" (DID,XCOLLECTIONID,XUSSCID,XSUBMISSIONID,XMISSINGCASE,XFOLDERNAME,XDEFTYPE) values (135969,798044079,'1301693','416897','false','124341_20100121115927953','10');
    Insert into "docmeta" (DID,XCOLLECTIONID,XUSSCID,XSUBMISSIONID,XMISSINGCASE,XFOLDERNAME,XDEFTYPE) values (136026,798044088,null,'417263','false','124341_20100122102257379','12');
    Insert into "docmeta" (DID,XCOLLECTIONID,XUSSCID,XSUBMISSIONID,XMISSINGCASE,XFOLDERNAME,XDEFTYPE) values (136044,798044092,'901763','417720','false','124341_20100125111357271','12');
    Insert into "docmeta" (DID,XCOLLECTIONID,XUSSCID,XSUBMISSIONID,XMISSINGCASE,XFOLDERNAME,XDEFTYPE) values (136080,798044099,'1214058','418182','false','124341_20100126112526792','11');
    Insert into "docmeta" (DID,XCOLLECTIONID,XUSSCID,XSUBMISSIONID,XMISSINGCASE,XFOLDERNAME,XDEFTYPE) values (136118,798044105,'1304859','418444','false','124341_20100126182111728','10');
    Insert into "docmeta" (DID,XCOLLECTIONID,XUSSCID,XSUBMISSIONID,XMISSINGCASE,XFOLDERNAME,XDEFTYPE) values (1361380,798451924,'1273153','386886','false','124152_20090930102348552','10');
    Insert into "colmeta" (DCOLLECTIONID,XUSSCID) values (798044066,'1303497');
    Insert into "colmeta" (DCOLLECTIONID,XUSSCID) values (798044069,'1302059');
    Insert into "colmeta" (DCOLLECTIONID,XUSSCID) values (798044072,'1300935');
    Insert into "colmeta" (DCOLLECTIONID,XUSSCID) values (798044079,'1301693');
    Insert into "colmeta" (DCOLLECTIONID,XUSSCID) values (798044088,null);
    Insert into "colmeta" (DCOLLECTIONID,XUSSCID) values (798044092,'901763');
    Insert into "colmeta" (DCOLLECTIONID,XUSSCID) values (798044099,'1214058');
    Insert into "colmeta" (DCOLLECTIONID,XUSSCID) values (798044105,'1304859');
    Insert into "colmeta" (DCOLLECTIONID,XUSSCID) values (798451924,'1273153');
    Insert into "collections" (DCOLLECTIONID,DCOLLECTIONGUID,DPARENTCOLLECTIONID) values (798044066,'89532951-9E1A-1EAF-D74C-F317AD2A4880',798044005);
    Insert into "collections" (DCOLLECTIONID,DCOLLECTIONGUID,DPARENTCOLLECTIONID) values (798044069,'0533F5D5-5C70-6661-EF54-74F2B78F342C',798044005);
    Insert into "collections" (DCOLLECTIONID,DCOLLECTIONGUID,DPARENTCOLLECTIONID) values (798044072,'58A26BB4-3BE9-28C8-8A83-1DAD6FDBAD4D',798044005);
    Insert into "collections" (DCOLLECTIONID,DCOLLECTIONGUID,DPARENTCOLLECTIONID) values (798044079,'61471A50-1711-DA1B-90C9-9BE7EE4A1A1A',798044005);
    Insert into "collections" (DCOLLECTIONID,DCOLLECTIONGUID,DPARENTCOLLECTIONID) values (798044088,'412F77D8-9784-AE91-7B28-F469B852C70D',798044005);
    Insert into "collections" (DCOLLECTIONID,DCOLLECTIONGUID,DPARENTCOLLECTIONID) values (798044092,'E161DDD6-A297-A514-2BCA-C4E43415C3A6',798044005);
    Insert into "collections" (DCOLLECTIONID,DCOLLECTIONGUID,DPARENTCOLLECTIONID) values (798044099,'6CB6366A-83B1-ADD3-2F86-75C13401C866',798044005);
    Insert into "collections" (DCOLLECTIONID,DCOLLECTIONGUID,DPARENTCOLLECTIONID) values (798044105,'46556A82-E55A-DD4E-7D21-D39C9DC365AB',798044005);
    Insert into "collections" (DCOLLECTIONID,DCOLLECTIONGUID,DPARENTCOLLECTIONID) values (798451924,'C959D755-E792-6C32-A804-0F9DE9FDAC21',798451840);
    Note: xcollectionid in docmeta = dcollectionid in colmeta

  • Oracle 10.2.0.4 vs 10.2.0.5 SQL optimizer

    Hello,
    Recently we upgraded from Oracle 10.2.0.4 to 10.2.0.5, deployed on AIX 5. Immediately we could see slowness for a particular SQL statement that used a partition as well as an indexed column in the predicate.
    e.g.
    SELECT COL1, COL2
    FROM TAB1 PARTITION (P1)
    WHERE TAB1.COL3 = 123;
    There is an index created on COL3. However, the explain plan for this SQL showed that the index was not being used. Surprisingly, when we removed the partition clause from the SQL, it used the index:
    SELECT COL1, COL2
    FROM TAB1
    WHERE TAB1.COL3 = 123;
    There is one more observation - when we reverted to the 10.2.0.4 optimization strategy on Oracle 10.2.0.5, the original SQL with the partition clause used the index as it should have, and the explain plan matched what it was before the Oracle upgrade.
    I have a few questions based on these observations. Any help will be appreciated.
    1. Are there any changes in the 10.2.0.5 optimizer that could make SQL slower?
    2. Is there any problem in the SQL that is making it slow?
    3. I believe moving to the 10.2.0.4 optimizer on Oracle 10.2.0.5 is a short-term solution. Is there any permanent fix for this problem?
    4. Does Oracle 11g support the 10.2.0.4 optimizer?
    Please let me know if more details are needed.
    Thank you!

    Onkar Talekar wrote:
    1. Are there any changes in the 10.2.0.5 optimizer that could make SQL slower?
    There are always changes happening with the CBO; it's a complicated bit of software. Some bugs will be fixed, others introduced. You may have been unfortunate enough to hit a bug; search MOS or raise an SR with Oracle Support if you feel that is the case.
    Onkar Talekar wrote:
    2. Is there any problem in the SQL that is making it slow?
    Entirely possible you have a poorly written SQL statement, yes.
    Onkar Talekar wrote:
    3. I believe moving to the 10.2.0.4 optimizer on Oracle 10.2.0.5 is a short-term solution. Is there any permanent fix for this problem?
    Yes, raise an SR with Oracle.
    Onkar Talekar wrote:
    4. Does Oracle 11g support the 10.2.0.4 optimizer?
    Yes, but I wouldn't recommend running an 11g instance with optimizer compatibility set to less than the current version without a very compelling reason (the one you've posted doesn't seem compelling to me at the moment).
    What happens if you specify the partition key column in the WHERE clause instead of naming the partition in the FROM clause? Oracle should use partition elimination to visit only that partition and use the local index on COL3 (I am assuming there is a local index in play here). A sketch of this rewrite follows below.
    I would guess, a very speculative guess, that you hit a bug related to specifying the partition name, and that if you can get Oracle to do partition elimination on its own (instead of 'hard coding' the partition name) it will smarten up and you'll get the execution plan you want / expect ... just a guess.
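    A sketch of that suggestion, assuming TAB1 is range-partitioned on a date column here called PART_DATE (the column name and binds are placeholders, not from the original post):
    SELECT col1, col2
    FROM   tab1
    WHERE  tab1.col3 = 123
    AND    tab1.part_date >= :p1_low
    AND    tab1.part_date <  :p1_high;  -- partition pruning selects P1, and the local index on COL3 can then be used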

  • 10.2.0.4 vs 10.2.0.5 SQL optimizer

    Hello,
    Recently we upgraded from Oracle 10.2.0.4 to 10.2.0.5, deployed on AIX 5. Immediately we could see slowness for a particular SQL statement that used a partition as well as an indexed column in the predicate.
    e.g.
    SELECT COL1, COL2
    FROM TAB1 PARTITION (P1)
    WHERE TAB1.COL3 = 123;
    There is an index created on COL3. However, the explain plan for this SQL showed that the index was not being used. Surprisingly, when we removed the partition clause from the SQL, it used the index:
    SELECT COL1, COL2
    FROM TAB1
    WHERE TAB1.COL3 = 123;
    There is one more observation - when we reverted to the 10.2.0.4 optimization strategy on Oracle 10.2.0.5, the original SQL with the partition clause used the index as it should have, and the explain plan matched what it was before the Oracle upgrade.
    I have a few questions based on these observations. Any help will be appreciated.
    1. Are there any changes in the 10.2.0.5 optimizer that could make SQL slower?
    2. Is there any problem in the SQL that is making it slow?
    3. I believe moving to the 10.2.0.4 optimizer on Oracle 10.2.0.5 is a short-term solution. Is there any permanent fix for this problem?
    4. Does Oracle 11g support the 10.2.0.4 optimizer?
    Please let me know if more details are needed.
    Thank you!

    Have statistics been gathered after the upgrade ? Has the OPTIMIZER_FEATURES_ENABLE init.ora parameter been set to 10.2.0.5 after the upgrade ?
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams142.htm#CHDFABEF
    Please post the explain plans for the statement in both 10.2.0.4 and 10.2.0.5, following the instructions in this thread - HOW TO: Post a SQL statement tuning request - template posting
    Every upgrade/patch can introduce changes in the optimizer. You can use 10.2.0.4 optimizer features in 11g using the parameter mentioned above - but why would you want to do that?
    HTH
    Srini
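    To check and, if needed, explicitly set the OPTIMIZER_FEATURES_ENABLE parameter mentioned in the reply above (a sketch; SCOPE=BOTH requires an spfile):
    SHOW PARAMETER optimizer_features_enable
    ALTER SYSTEM SET optimizer_features_enable = '10.2.0.5' SCOPE = BOTH;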

  • SQL Optimization - Exit on First Match

    Hi,
    I have a requirement where a query sometimes takes more than 25 seconds. It should complete in less than 1 second.
    About the Query :
    SELECT 1 FROM DUAL
    WHERE     exists
    (SELECT TM.AD
    FROM     TM,
         GM
    WHERE     TM.AD = GM.AD
    AND     TM.LOA = :b1
    and     GM.soid='Y');
    The way this query has been written, it fetches only 1 row. The plan of this query is shown below (not from production but from my test instance, where I could reproduce the issue; only the row counts differ):
    Rows Row Source Operation
    1 FILTER (cr=26 pr=0 pw=0 time=433 us)
    1 FAST DUAL (cr=0 pr=0 pw=0 time=12 us)
    1 NESTED LOOPS (cr=26 pr=0 pw=0 time=398 us)
    9 TABLE ACCESS BY INDEX ROWID TM (cr=6 pr=0 pw=0 time=150 us)
    9 INDEX RANGE SCAN TM_LOA (cr=2 pr=0 pw=0 time=21 us)(object id 56302)
    1 TABLE ACCESS BY INDEX ROWID GM (cr=20 pr=0 pw=0 time=258 us)
    9 INDEX UNIQUE SCAN PK_GM (cr=11 pr=0 pw=0 time=123 us)(object id 56304)
    The plan in production is exactly the same. The issues here are:
    1. LOA has an index, and for certain values of LOA the number of records is around 1000. The issue is normally reported when the number of rows fetched is more than 800.
    2. The clustering factor of the LOA index is not good, and from the plan it is observed that roughly one table block is read for every row fetched from the index.
    3. The AD column of GM is a primary key.
    Also, the problem is visible when the disk reads of this query are very high, i.e. if CR is 800, PR is 700. For any subsequent executions it returns the results in less than a second.
    In my view it is the table access of TM that is increasing the response time, so I would like to eliminate these (unwanted) table accesses. One way is reorganizing the table to improve the clustering factor, but that can have a negative impact, so optimizing the query seems the better option. Based on the query plan, I assume the optimizer gets the 1000 rows from the index and table TM and then joins to GM; fetching those 1000 rows seems to be the issue. The query could be optimized if the scan of TM stopped as soon as a matching row is found in GM: instead of fetching all 1000 rows, it would check each row and exit immediately on the first match. AD in TM is not unique, so for each AD from TM it checks for existence in GM. So, if there are 10 matching ADs between TM and GM, the search should complete immediately on the first matching AD.
    Would appreciate help on this.
    Regards
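    One way to express the stop-at-first-match requirement directly in SQL (a sketch using the table and column names from the post; the plan should be verified on real data) is to query the join itself and cap it with ROWNUM, so Oracle stops after the first joined row:
    SELECT 1
    FROM   tm, gm
    WHERE  tm.ad   = gm.ad
    AND    tm.loa  = :b1
    AND    gm.soid = 'Y'
    AND    rownum  = 1;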

    Hi,
    I will check the performance with FIRST_ROWS and arrays, but I feel these will not yield any benefit because 1) the code is run directly on the server, and 2) it is already doing a proper index scan; the code needs a modification so that it exits immediately once the first match is found.
    A PL/SQL representation of this code is pasted below:
    create or replace function check_exists(la in varchar2)
    return number
    as
      cursor tm_csr is
        select ad from tm
        where loa = la;
      l_number number := 0;
      l_ad     tm.ad%type;
    begin
      open tm_csr;
      loop
        fetch tm_csr into l_ad;
        exit when tm_csr%notfound;      -- stop when the driving cursor is exhausted
        begin
          select 1 into l_number from gm
          where gm.ad = l_ad
          and   gm.soid = 'Y';
        exception
          when no_data_found then
            l_number := 0;
        end;
        exit when l_number = 1;         -- stop on the first match
      end loop;
      close tm_csr;
      return l_number;
    end;
    The code, while not a feasible solution, is just a representation of the requirement: it fetches AD values from TM, checks for their existence in GM, and exits the loop as soon as a matching row is found.
    Edited by: Vivek Sharma on Jul 1, 2009 12:20 PM

  • Sql query help

    hi guys
    I have sample data as mentioned below. I need to find the duplicate rows where cd = cd, dt1 = dt1, and the tm1 difference is less than or equal to 4 hours.
    I can get the data with the query written below, but my problem is that I am not allowed to use built-in SQL Server functions. Can you help me write the same thing without using built-in functions?
    CREATE TABLE #t (id INT,dt1 datetime, tm1 datetime,cd varchar(10))
    INSERT INTO #t VALUES (101,'2013-04-24','1900-01-01 12:20:00.000','TC')
    INSERT INTO #t VALUES (101,'2013-04-24','1900-01-01 12:30:00.000','TC')
    INSERT INTO #t VALUES (101,'2013-08-02','1900-01-01 14:30:00.000','MN')
    INSERT INTO #t VALUES (101,'2013-08-02','1900-01-01 15:07:00.000','MN')
    INSERT INTO #t VALUES (101,'2013-07-06','1900-01-01 09:07:00.000','XY')
    INSERT INTO #t VALUES (101,'2013-11-27','1900-01-01 09:50:00.000','LM')
    INSERT INTO #t VALUES (101,'2013-07-06','1900-01-01 15:07:00.000','XY')
    select * From #t
    WITH MyCTE (rn, id, dt1, tm1, cd)
    AS (
    select row_number() over (partition by id ORDER BY dt1, tm1) rn, * from #t
    )
    select case when ((dt1 = lead_start_date) and (ct <= '4.0') and (base_cd = lead_cd)) then 'Duplicate_Req' else '' end dt123, * from
    (
    select abs(convert(decimal(5,1), datediff(MI, lead_start_time, tm1)/60.00)) ct, * from
    (
    SELECT base.rn b_rn, lead.rn l_rn, BASE.id
    ,BASE.dt1
    ,BASE.tm1
    ,base.cd base_cd
    ,LEAD.dt1 LEAD_START_DATE
    ,LEAD.tm1 LEAD_START_TIME
    ,lead.cd lead_cd
    --,DATEADD(dd,-1,LEAD.dt1) EXPECTED_END_DATE
    FROM MyCTE BASE
    LEFT JOIN MyCTE LEAD ON BASE.id = LEAD.id
    AND BASE.rn = LEAD.rn+1
    ) b
    ) c

    If this code does not work for you, then I am not sure there are any other options.
    I converted the CTE into an actual temp table.
    Both the CTE version and the barebones T-SQL code are included in the script below.
    CREATE TABLE #t (id INT,dt1 datetime, tm1 datetime,cd varchar(10))
    INSERT INTO #t VALUES (101,'2013-04-24','1900-01-01 12:20:00.000','TC')
    INSERT INTO #t VALUES (101,'2013-04-24','1900-01-01 12:30:00.000','TC')
    INSERT INTO #t VALUES (101,'2013-08-02','1900-01-01 14:30:00.000','MN')
    INSERT INTO #t VALUES (101,'2013-08-02','1900-01-01 15:07:00.000','MN')
    INSERT INTO #t VALUES (101,'2013-07-06','1900-01-01 09:07:00.000','XY')
    INSERT INTO #t VALUES (101,'2013-11-27','1900-01-01 09:50:00.000','LM')
    INSERT INTO #t VALUES (101,'2013-07-06','1900-01-01 15:07:00.000','XY')
    INSERT INTO #t VALUES (101,'2013-08-02','1900-01-01 15:07:00.000','MN')
    select * From #t
    ;WITH MyCTE (rn, id, dt1, tm1, cd)
    AS (
    select row_number() over (partition by id ORDER BY dt1, tm1) rn, * from #t
    )
    select case when ((dt1 = lead_start_date) and (ct <= '4.0') and (base_cd = lead_cd)) then 'Duplicate_Req' else '' end dt123, * from
    (
    select abs(convert(decimal(5,1), datediff(MI, lead_start_time, tm1)/60.00)) ct, * from
    (
    SELECT base.rn b_rn, lead.rn l_rn, BASE.id
    ,BASE.dt1
    ,BASE.tm1
    ,base.cd base_cd
    ,LEAD.dt1 LEAD_START_DATE
    ,LEAD.tm1 LEAD_START_TIME
    ,lead.cd lead_cd
    --,DATEADD(dd,-1,LEAD.dt1) EXPECTED_END_DATE
    FROM MyCTE BASE
    LEFT JOIN MyCTE LEAD ON BASE.id = LEAD.id
    AND BASE.rn = LEAD.rn+1
    ) b
    ) c
    select * into #copy From #t order by id, cd, dt1, tm1
    alter table #copy add seqno int identity(1,1)
    select distinct y.id,y.cd,y.dt1,y.tm1,y.seqno,case when z.cd is not null then 'Duplicate_Req' else '' end dt123
    from #copy y
    left outer join
    (select a.id,a.cd,a.dt1,a.tm1
    From #copy a
    left outer join #copy b
    on a.id = b.id
    and a.cd = b.cd
    and a.dt1 = b.dt1
    where a.seqno > b.seqno
    and abs(datediff(MINUTE,b.tm1,a.tm1)) <= 240) z
    on y.id = z.id
    and y.cd = z.cd
    and y.dt1 = z.dt1
    and y.tm1 = z.tm1
    order by y.dt1,y.tm1
    drop table #copy
    drop table #t
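    If the restriction really rules out built-in functions such as ABS and DATEDIFF, the four-hour window can also be expressed with plain datetime arithmetic (a sketch against the sample #t table from the script above; adding 4.0/24 to a datetime adds four hours, and rows with exactly equal tm1 values would still need a tiebreaker such as the seqno column above):
    SELECT a.id, a.cd, a.dt1, a.tm1
    FROM   #t a
    JOIN   #t b
      ON   a.id  = b.id
     AND   a.cd  = b.cd
     AND   a.dt1 = b.dt1
     AND   a.tm1 >  b.tm1              -- a is the later row of each pair
     AND   a.tm1 <= b.tm1 + 4.0/24     -- within four hours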
