Logic for this query

Hi friends,
I have a requirement where the organization id (ORG_ID / ORGANIZATION_ID) of an employee changes over a period of time.
CREATE TABLE ASSIGNMENTS (
  ASSIGNMENT_ID             NUMBER(10) NOT NULL,
  EFFECTIVE_START_DATE      DATE       NOT NULL,
  EFFECTIVE_END_DATE        DATE       NOT NULL,
  BUSINESS_GROUP_ID         NUMBER(15) NOT NULL,
  PERSON_ID                 NUMBER(15),
  JOB_ID                    NUMBER(15),
  ASSIGNMENT_STATUS_TYPE_ID NUMBER(9)  NOT NULL,
  SUPERVISOR_ID             NUMBER(10),
  ORGANIZATION_ID           NUMBER(15) NOT NULL
  -- further columns omitted in the original post
);
This is the table structure. Now, when the organization id changes for the same employee, I need to capture those employees whose organization has changed, using ORGANIZATION_ID and PERSON_ID.
How do I do that?

Audit the table, use a trigger and a history/journaling table, for example:
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14251/adfns_triggers.htm#ABC1032282
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:59412348055
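If the ASSIGNMENTS table already keeps one row per effective-date period (which the EFFECTIVE_START_DATE/EFFECTIVE_END_DATE columns suggest), the change can also be detected with a plain query. A minimal sketch, assuming one assignment row per person per period:
-- compare each assignment row with the previous row for the same person
SELECT person_id,
       effective_start_date,
       prev_org_id,
       organization_id AS new_org_id
FROM  (SELECT person_id,
              effective_start_date,
              organization_id,
              LAG(organization_id) OVER (PARTITION BY person_id
                                         ORDER BY effective_start_date) AS prev_org_id
       FROM   assignments)
WHERE  prev_org_id IS NOT NULL
AND    prev_org_id <> organization_id;
Each returned row is the first period in which a person appears under a different ORGANIZATION_ID than before; the trigger/journaling approach above is still the better fit if rows are updated in place rather than versioned by date.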

Similar Messages

  • Is there a way to create a plan guide for this query?

    How can I create a plan guide for this query, supposing I can't change the query text:
    USE AdventureWorks2008R2;
    GO
    SET NOCOUNT ON;
    GO
    -- query plan statement starts
    DECLARE @Group nvarchar(50), @Sales money;
    SET @Group = N'North America';
    SET @Sales = 2000000;
    SET NOCOUNT OFF;
    SELECT FirstName, LastName, SalesYTD
    FROM Sales.vSalesPerson
    WHERE TerritoryGroup = @Group and SalesYTD >= @Sales;
    -- query plan statement ends
    AdventureWorks2008R2's parameterization option is SIMPLE; I want this type of query to be able to reuse the plan:
    DECLARE @Group nvarchar(50), @Sales money;
    SET @Group = N'Other Country';
    SET @Sales = 88;
    SET NOCOUNT OFF;
    SELECT FirstName, LastName, SalesYTD
    FROM Sales.vSalesPerson
    WHERE TerritoryGroup = @Group and SalesYTD >= @Sales;
    I tried many times, but it didn't work:
    declare @xml nvarchar(max) -- the plan i want to reuse
    set @xml = (select cast (query_plan as nvarchar(max)) 
    from sys.dm_exec_query_plan (0x060006001464570B405D92620200000001000000000000000000000000000000000000000000000000000000))
    -- create plan guide 
    exec sp_create_plan_guide 
    @name ='Test'
    ,@stmt=N'SELECT FirstName, LastName, SalesYTD
    FROM Sales.vSalesPerson
    WHERE TerritoryGroup = @Group and SalesYTD >= @Sales;'
    ,@type =N'sql'
    ,@params =N'@Group nvarchar(50), @Sales money'
    ,@hints = @xml;
    Thanks.

    I guess you don't want to fire these queries "ad hoc" but as prepared statements instead, so the plan can be reused:
    exec sp_executesql N'SELECT FirstName, LastName, SalesYTD FROM Sales.vSalesPerson WHERE TerritoryGroup = @Group and SalesYTD >= @Sales',
    N'@Group nvarchar(50), @Sales money', N'Other Country',88
    exec sp_executesql N'SELECT FirstName, LastName, SalesYTD FROM Sales.vSalesPerson WHERE TerritoryGroup = @Group and SalesYTD >= @Sales',
    N'@Group nvarchar(50), @Sales money', N'North America',2000000
    Bodo Michael Danitz - MCT, MCITP - free consultant - performance specialist - www.sql-server.de
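    If the goal is plan reuse for ad hoc statements that differ only in their literal values, another documented option is a TEMPLATE plan guide that forces parameterization of that statement shape. A sketch only (the guide name is an assumption, and this has not been verified against this exact batch):
    -- derive the parameterized form of the statement, then attach PARAMETERIZATION FORCED to it
    DECLARE @stmt nvarchar(max), @params nvarchar(max);
    EXEC sp_get_query_template
        N'SELECT FirstName, LastName, SalesYTD
          FROM Sales.vSalesPerson
          WHERE TerritoryGroup = N''North America'' and SalesYTD >= 2000000;',
        @stmt OUTPUT,
        @params OUTPUT;
    EXEC sp_create_plan_guide
        @name   = N'SalesPersonTemplateGuide',
        @stmt   = @stmt,
        @type   = N'TEMPLATE',
        @module_or_batch = NULL,
        @params = @params,
        @hints  = N'OPTION (PARAMETERIZATION FORCED)';
    This only helps when the literals appear directly in the statement text; with local variables and SET, the sp_executesql approach shown above is the simpler route.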

  • Any room for improvement for this query? Explain Plan attached.

    Is there any room for improvement for this query? Table stats are up-to-date. Any suggestions? A query rewrite, addition of indexes, etc.?
    select sum(case
                 when (cd.actl_qty - cd.total_alloc_qty - lsd.Q < 0) then
                  0
                 else
                  cd.actl_qty - cd.total_alloc_qty - lsd.Q
               end)
      from (select sum(reqd_qty) as Q, ITEM_ID as ITEM
              from SHIP_DTL SD
             where exists (select 1
                      from CONF_dtl
                     where CONF_nbr = '1'
                       and ITEM_id = SD.ITEM_id)
             group by ITEM_id) lsd,
           CONF_dtl cd
    where lsd.ITEM = cd.ITEM_id
       and cd.CONF_nbr = '1'
    Total number of rows in the tables involved:
    select count(*) from CONF_DTL;
      COUNT(*)
       1785889
    select count(*) from ship_dtl;
      COUNT(*)
        286675
      Explain Plan
    PLAN_TABLE_OUTPUT
    Plan hash value: 2325658044
    | Id  | Operation                           | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                    |                    |     1 |    39 |     4  (25)| 00:00:01 |
    |   1 |  SORT AGGREGATE                     |                    |     1 |    39 |            |          |
    |   2 |   VIEW                              |                    |     1 |    39 |     4  (25)| 00:00:01 |
    |   3 |    HASH GROUP BY                    |                    |     1 |   117 |     4  (25)| 00:00:01 |
    |   4 |     TABLE ACCESS BY INDEX ROWID     | SHIP_DTL           |     1 |    15 |     1   (0)| 00:00:01
    |   5 |      NESTED LOOPS                   |                    |     1 |   117 |     3   (0)| 00:00:01 |
    |   6 |       MERGE JOIN CARTESIAN          |                    |     1 |   102 |     2   (0)| 00:00:01 |
    |   7 |        TABLE ACCESS BY INDEX ROWID  | CONF_DTL           |     1 |    70 |     1   (0)| 00:00:01 |
    |*  8 |         INDEX RANGE SCAN            | PK_CONF_DTL        |     1 |       |     1   (0)| 00:00:01 |
    |   9 |        BUFFER SORT                  |                    |     1 |    32 |     1   (0)| 00:00:01 |
    |  10 |         SORT UNIQUE                 |                    |     1 |    32 |     1   (0)| 00:00:01 |
    |  11 |          TABLE ACCESS BY INDEX ROWID| CONF_DTL           |     1 |    32 |     1   (0)| 00:00:01 |
    |* 12 |           INDEX RANGE SCAN          | PK_CONF_DTL        |     1 |       |     1   (0)| 00:00:01 |
    |* 13 |       INDEX RANGE SCAN              | SHIP_DTL_IND_6 |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       8 - access("CD"."CONF_NBR"='1')
      12 - access("CONF_NBR"='1')
      13 - access("ITEM_ID"="SD"."ITEM_ID")
           filter("ITEM_ID"="CD"."ITEM_ID")

    Citizen_2 wrote:
    Is there any room for improvement for this query? Table stats are up-to-date. Any suggestions? A query rewrite, addition of indexes, etc.?
    You say that the table stats are up-to-date, but is the following assumption of the optimizer correct:
    select count(*)
    from CONF_dtl
    where CONF_nbr = '1';
    Does this query return a count of 1? I doubt that, but that's what Oracle estimates in the EXPLAIN PLAN output. Based on that assumption you get a cartesian join between the two CONF_DTL table instances, and the result - which is still expected to be one row at most - is then joined to the SHIP_DTL table using a NESTED LOOP.
    If above assumption is incorrect, the number of rows generated by the cartesian join can be tremendous rendering the NESTED LOOP operation quite inefficient.
    You can verify this by using the DBMS_XPLAN.DISPLAY_CURSOR function together with the GATHER_PLAN_STATISTICS hint, if you're already on 10g or later.
    For more information regarding the DISPLAY_CURSOR function, see e.g. here: http://jonathanlewis.wordpress.com/2006/11/09/dbms_xplan-in-10g/
    It will show you the actual cardinalities compared to the estimated cardinalities.
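    For illustration, that pattern looks roughly like this (a sketch only; the statement shown is the verification count from above, and the exact format options accepted by DISPLAY_CURSOR vary slightly between versions):
    -- run the statement once with the hint so row source statistics are collected
    select /*+ gather_plan_statistics */ count(*)
    from   CONF_dtl
    where  CONF_nbr = '1';
    -- then compare estimated (E-Rows) and actual (A-Rows) cardinalities for the last cursor of this session
    select *
    from   table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));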
    If the estimate of the optimizer is incorrect, you should find out why. There still might be some issues with the statistics, since this is the most obvious reason for incorrect estimates.
    Are your index statistics up-to-date?
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Logic for this comparison of character type operands.

    Source field name: ZDCNFG
    Target Info Objects:
    ZDVD59
    ZDVD10
    ZDVD14
    ZDVD18
    ZDVD25
    ZDVD50
    Scenario 1: If ZDCNFG = 2(10). 2 is the quantity and 10 in brackets is the part number (DVD format). So I have to update the respective quantity field as shown below.
    ZDVD59: 0
    ZDVD10: 2 -->updated 10 part
    ZDVD14: 0
    ZDVD18: 0
    ZDVD25: 0
    ZDVD50: 0
    Scenario 2: If ZDCNFG: 2(25)3(50)
    ZDVD59: 0
    ZDVD10: 0
    ZDVD14: 0
    ZDVD18: 0
    ZDVD25: 2 --> Updated
    ZDVD50: 3 --> Updated
    Scenario 3: If ZDCNFG: 2(5)3(9). This is an exceptional case: if we have parts 5 and 9, add the two quantities and assign the sum to ZDVD59; all the rest are 0.
    ZDVD59: 5 --> sum of the DVD5 and DVD9 quantities
    ZDVD10: 0
    ZDVD14: 0
    ZDVD18: 0
    ZDVD25: 0
    ZDVD50: 0
    The quantity is the value outside the brackets. For example, in 10(5), 10 is the quantity.
    Can anyone give me the logic for this?
    I was trying with CA, CS etc., but no luck.
    Thanks
    Kiran

    Kiran,
    This should work
    parameters: zdcnfg(50).
    data: zdvd59(3),
    zdvd10(3),
    zdvd14(3),
    zdvd18(3),
    zdvd25(3),
    zdvd50(3),
    rcnt type i,
    roff type i,
    temp type i.
    data: static(50).
    start-of-selection.
      static = zdcnfg.
      replace all occurrences of '(5)' in zdcnfg with '+' replacement count rcnt.
      if rcnt ge 1.
        zdcnfg = static.
        do rcnt times.
          replace first occurrence of '(5)' in zdcnfg with '+' replacement offset roff.
          temp = roff - 1.
          add zdcnfg+temp(1) to zdvd59.
        enddo.
      endif.
      zdcnfg = static.
      clear: roff, rcnt, temp.
      replace all occurrences of '(9)' in zdcnfg with '+' replacement count rcnt.
      if rcnt ge 1.
        zdcnfg = static.
        do rcnt times.
          replace first occurrence of '(9)' in zdcnfg with '+' replacement offset roff.
          temp = roff - 1.
          add zdcnfg+temp(1) to zdvd59.
        enddo.
      endif.
        zdcnfg = static.
      clear: roff, rcnt, temp.
      replace all occurrences of '(10)' in zdcnfg with '+' replacement count rcnt.
      if rcnt ge 1.
        zdcnfg = static.
        do rcnt times.
          replace first occurrence of '(10)' in zdcnfg with '+' replacement offset roff.
          temp = roff - 1.
          add zdcnfg+temp(1) to zdvd10.
        enddo.
      endif.
        zdcnfg = static.
      clear: roff, rcnt, temp.
      replace all occurrences of '(14)' in zdcnfg with '+' replacement count rcnt.
      if rcnt ge 1.
        zdcnfg = static.
        do rcnt times.
          replace first occurrence of '(14)' in zdcnfg with '+' replacement offset roff.
          temp = roff - 1.
          add zdcnfg+temp(1) to zdvd14.
        enddo.
      endif.
        zdcnfg = static.
      clear: roff, rcnt, temp.
      replace all occurrences of '(18)' in zdcnfg with '+' replacement count rcnt.
      if rcnt ge 1.
        zdcnfg = static.
        do rcnt times.
          replace first occurrence of '(18)' in zdcnfg with '+' replacement offset roff.
          temp = roff - 1.
          add zdcnfg+temp(1) to zdvd18.
        enddo.
      endif.
        zdcnfg = static.
      clear: roff, rcnt, temp.
      replace all occurrences of '(25)' in zdcnfg with '+' replacement count rcnt.
      if rcnt ge 1.
        zdcnfg = static.
        do rcnt times.
          replace first occurrence of '(25)' in zdcnfg with '+' replacement offset roff.
          temp = roff - 1.
          add zdcnfg+temp(1) to zdvd25.
        enddo.
      endif.
        zdcnfg = static.
      clear: roff, rcnt, temp.
      replace all occurrences of '(50)' in zdcnfg with '+' replacement count rcnt.
      if rcnt ge 1.
        zdcnfg = static.
        do rcnt times.
          replace first occurrence of '(50)' in zdcnfg with '+' replacement offset roff.
          temp = roff - 1.
          add zdcnfg+temp(1) to zdvd50.
        enddo.
      endif.

  • Need the Logic for this Prg issue Pls

    Hi Friends,
    I have an urgent requirement.
    I am developing a report, as follows:
    based on the selection criterion kunnr (KNVV-KUNNR),
    I want to delete the
        Internet mail (SMTP) address from ADR6-SMTP_ADDR
    and the teletex number from ADR4-TTX_NUMBER.
    The tables used are KNVV, ADR6 and ADR4.
    Please, how do I write the logic for this program?
    Help me, it is urgent.
    regards,

    Hi Alchermi,
    Thanks for your quick reply.
    Based on the selection on kunnr, I want to delete ADR4-TTX_NUMBER and ADR6-SMTP_ADDR from these two tables,
    for these two fields:
    kunnr from KNVV is the selection field;
    the fields below should be deleted:
    TTX_NUMBER from ADR4,
    SMTP_ADDR from ADR6.
    It is urgent, help me.
    regards,

  • Logic for this scenario -- UDF

    Hi XI Gurus,
    I have to create a UDF. This is an IDOC - XI - FILE scenario. In this scenario I have an IDoc whose 7th segment contains a field. This field can store values between 0 and 9. On the other hand, from the message-mapping perspective, I have a file structure on the left-hand side which contains 9 similarly sized fields.
    Now the logic is: if the field in the segment of the IDoc has a value of, say, '3', then the file structure on the left-hand side should have 3 in field no. 3 and all other fields (1, 2, 4, 5, 6, 7, 8, 9) should be zero.
    Can you guys suggest some logic for this?
    Thanks in advance!
    Points would be rewarded.
    Arnab

    Hi Mohd,
    Select the Queue option for getting the individual nodes.
    public void test(String[] a, ResultList result, Container container) {
        if (a[i] == 7) {                // this is for selecting the node
            for (i = 0; i < 10; i++) {  // this is for collecting the correct value
                if (i == 3) {
                    // write whatever you have to do
                } else {
                    // write the exceptional condition
                }
            }
        }
    }
    Don't forget to reward points if this helps.
    Regards
    Pragathi.

  • JZ0R2 -- No result set for this query

    hi,
    I'm using this code to update records in the database:
    facsk = myArray[ctr];
    query = "execute dp_autogeo_accept " + facsk;
    stmt.executeQuery(query);
    No records are returned so I'm not sending anything to a result set or trying to read the next string. I get this error: JZ0R2: No result for this query. I'm not sure why the original developer chose to use executeQuery but I've also tried stmt.updateQuery and got this error.
    Can someone help me?

    hi,
    I'm using this code to update records in the database:
    facsk = myArray[ctr];
    query = "execute dp_autogeo_accept " + facsk;
    stmt.executeQuery(query);
    No records are returned so I'm not sending anything to a result set or trying to read the next string. I get this error: JZ0R2: No result for this query. I'm not sure why the original developer chose to use executeQuery but I've also tried stmt.updateQuery and got this error.
    Can someone help me?
    When you're executing a statement that returns no result set, use stmt.executeUpdate() instead of stmt.executeQuery().

  • I need solution for this query

    hi all,
    Could anyone please send me a solution for this query? These are the database tables I have:
    TABLE NAME :USERS
    ATTRIBUTES
    UNAME
    PASSWORD
    GROUPNAME
    TABLE NAME:GROUPS
    ATTRIBUTES
    GROUPID
    GROUPNAME
    My requirement is that I need to access the group name of a particular user and the remaining groups to which the user doesn't belong, in a SINGLE QUERY.
    My result needs to look like this:
    Authorised group
    consultant
    UNAuthorised groups
    sales
    vender
    recruiter
    admin
    If anybody has the solution, please send it.

    hi All,
    I have 3 tables: 1) PREVILEGES (groupname, previleges (values y or n only)),
    2) GROUPS (groupid, groupname),
    3) USERS (uname, groupname).
    Here each user belongs to one group and each user has a default privilege; for example, if a user is a consultant then he can access only the consultant group, i.e. the default privilege is used. For default privileges there is no record in the PREVILEGES table.
    The problem is that I need to retrieve the group names from USERS to which the user doesn't belong, as well as his privileges from PREVILEGES. If there is no value in PREVILEGES, the query should return 'n', otherwise whatever value is there.
    The different groups are
    SALES, CONSULTANT, VENDER, RECRUTER, ADMIN
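    Since no query was posted in this thread, here is purely an illustrative sketch against the table and column names given above; the bind variable :uname (the user being checked) and the Oracle-style NVL default are assumptions:
    -- flag the user's own group as authorised and every other group as unauthorised,
    -- defaulting the privilege to 'n' when no PREVILEGES row exists
    SELECT g.groupname,
           CASE WHEN u.uname IS NOT NULL THEN 'Authorised' ELSE 'Unauthorised' END AS access_type,
           NVL(p.previleges, 'n') AS previleges
    FROM   groups g
           LEFT JOIN users u
                  ON u.groupname = g.groupname
                 AND u.uname     = :uname
           LEFT JOIN previleges p
                  ON p.groupname = g.groupname;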

  • Please help me in finding the solution for this query

    Hi Experts,
    How can I print the second word of a column heading on the next line?
    Suppose I am taking the alias of ename as "employee name";
    I want to print the second word, "name", on the next line,
    like
    "Employee
    Name"
    What is the way of writing this query?
    Please help me out.
    Thanks in advance

    Hi,
    914618 wrote:
    WITH mydata AS
    (
        SELECT 1 emp_id, 'John Smith' emp_name FROM DUAL UNION ALL
        SELECT 2 emp_id, 'Steve Jobs' emp_name FROM DUAL UNION ALL
        SELECT 3 emp_id, 'Larry Allison' emp_name FROM DUAL
    )
    SELECT emp_id, emp_name
    FROM mydata;
    After executing the above query, the output I am getting is in the normal format, with headings like EMP_ID and EMP_NAME,
    not according to my requirement, like
    Emp Emp
    Id  Name
    What are the results you really want? Do you want this?
        EMP_ID SPLIT_NAME
             1 John
               Smith
             2 Steve
               Jobs
             3 Larry
               Allison
    If so:
    SELECT  emp_id
    ,       REPLACE (emp_name, ' ', CHR (10))     AS split_name
    FROM    mydata
    ;
    The results above are 3 rows, not 6.
    What results would you want if emp_name contained 3 or more words? For example, if you add this to the sample data:
    SELECT 9 emp_id, 'Aung San Suu Kyi' emp_name FROM DUAL UNION ALL
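    If the goal is only a two-line column heading (rather than splitting the data itself) and the client is SQL*Plus, the COLUMN command can do it; a small sketch:
    -- SQL*Plus: '|' in a HEADING splits the heading across lines
    COLUMN emp_name HEADING 'Employee|Name'
    -- any subsequent query that selects a column called emp_name picks up the two-line heading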

  • Index is not using for this query

    I have this query and it doesn't use an index. Can you give me your suggestions, please?
    SELECT /*+ ORDERED USE_HASH(IC_GSMRELATION) USE_HASH(IC_UTRANCELL) USE_HASH(IC_SECTOR) USE_HASH(bt) */
    /* cp */
    bt.value value,
    bt.tstamp tstamp,
    ic_GsmRelation.instance_id instance_id
    FROM
    xr_scenario_tmp IC_GSMRELATION,
    xr_scenario_tmp IC_UTRANCELL,
    xr_scenario_tmp IC_SECTOR,
    rg_busyhour_tmp bt
    WHERE
    bt.instance_id != -1
    AND (IC_GSMRELATION.entity_id = 133)
    AND (IC_GSMRELATION.parentinstance_id = ic_UtranCell.instance_id)
    AND (IC_UTRANCELL.entity_id = 254)
    AND (IC_UTRANCELL.parentinstance_id = ic_Sector.instance_id)
    AND (IC_SECTOR.entity_id = 227)
    AND (IC_SECTOR.parentinstance_id = bt.instance_id);
    table: xr_scenario_tmp
      entity_id          num
      instance_id        num
      parentinstance_id  num
      localkey           varchar
    indexes: 1. entity_id + instance_id
             2. entity_id + parentinstance_id
    table: rg_busyhour_tmp
      instance_id   not null   num
      tstamp        not null   date
      rank          not null   num
      value                    float
    index: instance_id + tstamp + rank
    thanks

    user5797895 wrote:
    Thanks for the update
    1. I don't understand where to put {}. you meant in the forum page like below
    Use the {code} tag. Read the FAQ (http://wiki.oracle.com/page/Oracle+Discussion+Forums+FAQ?t=anon) for more information. It's the link in the top right corner.
    >
    2. AROUND 8000 IN DEV MACHINE. BUT 1.5M IN PRODUCTION
    It's a more or less useless exercise if you have that vast difference between the two systems. You need to test this thoroughly using a similar amount of data.
    3.
    Note: cpu costing is off, 'PLAN_TABLE' is old version
    You need to re-create your PLAN_TABLE. That's the reason why important information is missing from your plans. It's the so called "Predicate Information" section below the execution plan and it requires the correct version of the plan table. Drop your current plan table and re-run in SQL*Plus on the server:
    @?/rdbms/admin/utlxplan
    to re-create the plan table.
    Dynamic sampling doesn't alter the plan in any way no matter what sampling level I choose.
    When I added Cardinality it switched from 1 full table scan and 2 index read
    Can you post the statements with the hints included resp. just the first line including the hints used for the different attempts?
    # WITH dbms_stats.gather_table_stats, without cardinality it uses indexes all the time.
    How did you call DBMS_STATS.GATHER_TABLE_STATS, i.e. which parameter values were you using?
    # After deleting the table stats performance improved back
    All these different attempts are not really helpful if you don't say which of them was more effective than the other ones. That's why I'm asking for the "Predicate Information" section so that this information can be used to determine which of your tables might benefit from an indexed access path and which don't.
    As already mentioned several times if you use SQL tracing as described in one of the links provided you could see which operation produces how many rows. This would allow to determine if it is efficient or not.
    But given that you're doing all this with your test data it doesn't say much about the performance in your production environment.
    4. whether GTT created with "ON COMMIT PRESERVE ROWS"?
    YES - BUT DIFFERENT SESSIONS HAS DIFFERENT NUMBER OF ROWS
    The question is, whether the number of rows differs significantly, if yes, then you shouldn't use the DBMS_STATS approach
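    For illustration only, the two approaches being weighed here look roughly like this; the parameter values are assumptions, not what the poster actually used:
    -- gather shared statistics on the GTT (in 10g, GTT statistics are shared by all sessions)
    begin
      dbms_stats.gather_table_stats(
        ownname          => user,
        tabname          => 'XR_SCENARIO_TMP',
        estimate_percent => null,   -- compute rather than sample
        cascade          => true);  -- include the indexes
    end;
    /
    -- the alternative is to delete those statistics and rely on a dynamic sampling hint, e.g.
    -- /*+ dynamic_sampling(ic_gsmrelation 4) */, so each session's row count is sampled at parse time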
    5. neither (48 sec. / 25 sec. run time) is sufficient, then what is expected?
    ACTUALLY I AM DOING IT IN DEVELOPMENT MACHINVE. IN PRODUCTION THE NUMBER OF ROWS ARE DIFFERENT. LAST TIME WHEN WE RELEASED THE
    PATCH WITH THIS CODE, THE PERFORMANCE WAS BAD.
    See 2., you need to have a suitable test environment. It's a more or less useless exercise if you only have a fraction of the actual amount of data.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Can anyone please give me the logic for this....

    I have a database table with a field department code. That department code might have '00' or ' ' (blank) or any other value. I want to add up all the currency amounts with department code '00' and ' ' (blank) which have the same company code and currency type. I read all the entries with '00' into one internal table and the entries with ' ' (blank) into another internal table.
    Can anyone give me the logic for adding up all the entries at a time?
    Thanks in advance.

    Hi Srinivas,
    I have a solution for this question.
    Use an AT control-break statement for this:
    loop over the internal table,
    use the event AT LAST and in it use the SUM statement; you will get the sum of all the numeric fields.
    Or else you can use ON CHANGE OF the currency type
    and then find the total amount by adding the previous value to your current value,
    like itab1-amount = total + itab1-amount.
    Append this to your internal table or write it to your report.
    Reward me if it's useful.
    Regards
    Ravi
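    If the summing can be pushed down to the database instead of being done in internal tables, a single aggregate query does it in one pass. A sketch only, with hypothetical table and column names, since the post doesn't give them:
    SELECT company_code,
           currency_type,
           SUM(amount) AS total_amount
    FROM   dbtab                            -- hypothetical table name
    WHERE  dept_code IN ('00', ' ')         -- '00' and blank department codes together
    GROUP  BY company_code, currency_type;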

  • How to write SQL hints for this query?

    The query is like:
    select * from foo, t
    where foo.name in
          (select name from bar
           where id in (select id from car
                        where date < sysdate))
    and foo.a = t.b;
    I want the innermost subquery 'select id from car ...' to be executed first, the subquery 'select name from bar ...' to be executed second, and the outermost query 'select * from foo, t ...' to be executed last. How can I write the Oracle SQL hints to force that order?
    Thanks.

    user553560
    You might be able to create a large set of hints to force the access path you want - but unless you really know what you are doing with hints, you may find that your solution is very unstable (it might be luck rather than correctness that lets it work to start with).
    The difficulty in this query is the double layer of IN subqueries, so if you can rewrite the query, you might try manually unnesting as follows:
    select
         t1.*, t.*
    from
         (
              select distinct
                     t2.name
              from   t2
              where  t2.id in (
                     select t3.id
                     from   t3
                     where  t3.dated < sysdate
                     )
         ) v,
         t1,
         t
    where
         t1.name = v.name
    and  t1.a = t.b
    Depending on your indexing and statistics, you may find that a simple /*+ unnest */ hint in the first subquery will be sufficient to do this for you. Again depending on the statistics, you may find that you have to put extra hints into the above to make Oracle use the join method and indexes you think appropriate.
    N.B. The first step (as others have noted) is to check that your statistics are good before you start manipulating the code or using hints.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
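    For reference, this is roughly where the /*+ unnest */ hints mentioned above would sit in the originally posted query (a sketch that keeps the posted table and column names; whether it actually changes the plan depends on version and statistics):
    select *
    from   foo, t
    where  foo.name in (select /*+ unnest */ name
                        from   bar
                        where  id in (select /*+ unnest */ id
                                      from   car
                                      where  date < sysdate))  -- "date" kept as posted; a real column could not be named DATE without quoting
    and    foo.a = t.b;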

  • Alternative method for this query

    I wrote two procedures:
    one for getting data from the source table into a staging table, and
    a second one that is a cross-tab query to get the data into the target table format.
    Can you help me make this query simpler?
    Thanks for your help in advance.
    PROCEDURE repo_bk_proc is
    Begin
    --To Get all discrete Portfolios                           
    INSERT INTO repo_bk_staging
    (DATA_DT,
    REPO_BK_IND,
    PORTFOLIO,
    BUCKET,
    ACCT_CNT,
    PRIN_BAL)
    SELECT V_DATA_DT,
    CASE WHEN REPO_IND = 'N' AND BKRPT_IND = 'N' THEN '01A_NONREPO_NONBK'
    WHEN REPO_IND = 'N' AND BKRPT_IND = 'B' THEN '01B_NONREPO_BK'
    WHEN REPO_IND = 'R' AND (STAT_SECND_1 IN ('B2', 'B3','B7','BC')
    OR STAT_SECND_2 IN ('B2', 'B3','B7','BC')
    OR STAT_SECND_3 IN ('B2', 'B3','B7','BC')
    OR STAT_SECND_4 IN ('B2', 'B3','B7','BC')
    OR STAT_SECND_5 IN ('B2', 'B3','B7','BC')) THEN '02B_REPO_BK'
    WHEN REPO_IND = 'R' AND BKRPT_IND = 'N' THEN '02A_REPO_NONBK'
    ELSE 'UNKNOWN' END AS REPO_BK_IND,
    case
    when acct_num like '110050000%' then
    (case when LGL_ENT_CURR<=8999 then '4'
    else '10' end)
    when acct_num like '110050004%' then
    (case when LGL_ENT_CURR<=8999 then '2'
    else '8' end)
    when acct_num like '110050006%' then
    (case when LGL_ENT_CURR<=8999 then '5'
    else '11' end)
    when acct_num like '110050008%' then
    (case when LGL_ENT_CURR<=8999 then '1'
    else '7' end)
    ELSE 'ALL_0' END AS PORTFOLIO,
    CASE WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,1),'mm')-1 THEN '00_CURRENT'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,0),'mm')-1 THEN '0_1_29'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-1),'mm')-1 THEN '1_30_59'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-2),'mm')-1 THEN '2_60_89'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-3),'mm')-1 THEN '3_90_119'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-4),'mm')-1 THEN '4_120_149'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-5),'mm')-1 THEN '5_150_179'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-6),'mm')-1 THEN '6_180_209'
    ELSE '7_210_PLUS' END AS BUCKET,
    COUNT(*) AS ACCT_CNT, SUM(PRIN_BAL) AS PRIN_BAL
    FROM AUTOR2.DLQ_RPT WHERE DATA_DT = trunc(v_data_Dt)
    GROUP BY
    CASE WHEN REPO_IND = 'N' AND BKRPT_IND = 'N' THEN '01A_NONREPO_NONBK'
    WHEN REPO_IND = 'N' AND BKRPT_IND = 'B' THEN '01B_NONREPO_BK'
    WHEN REPO_IND = 'R' AND (STAT_SECND_1 IN ('B2', 'B3','B7','BC')
    OR STAT_SECND_2 IN ('B2', 'B3','B7','BC')
    OR STAT_SECND_3 IN ('B2', 'B3','B7','BC')
    OR STAT_SECND_4 IN ('B2', 'B3','B7','BC')
    OR STAT_SECND_5 IN ('B2', 'B3','B7','BC')) THEN '02B_REPO_BK'
    WHEN REPO_IND = 'R' AND BKRPT_IND = 'N' THEN '02A_REPO_NONBK'
    ELSE 'UNKNOWN' END,
    case
    when acct_num like '110050000%' then
    (case when LGL_ENT_CURR<=8999 then '4'
    else '10' end)
    when acct_num like '110050004%' then
    (case when LGL_ENT_CURR<=8999 then '2'
    else '8' end)
    when acct_num like '110050006%' then
    (case when LGL_ENT_CURR<=8999 then '5'
    else '11' end)
    when acct_num like '110050008%' then
    (case when LGL_ENT_CURR<=8999 then '1'
    else '7' end)
    ELSE 'ALL_0' END,
    CASE WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,1),'mm')-1 THEN '00_CURRENT'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,0),'mm')-1 THEN '0_1_29'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-1),'mm')-1 THEN '1_30_59'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-2),'mm')-1 THEN '2_60_89'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-3),'mm')-1 THEN '3_90_119'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-4),'mm')-1 THEN '4_120_149'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-5),'mm')-1 THEN '5_150_179'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-6),'mm')-1 THEN '6_180_209'
    ELSE '7_210_PLUS' END
    union
    -- To Get ALL portfolios total
    SELECT v_DATA_DT,
    CASE WHEN REPO_IND = 'N' AND BKRPT_IND = 'N' THEN '01A_NONREPO_NONBK'
    WHEN REPO_IND = 'N' AND BKRPT_IND = 'B' THEN '01B_NONREPO_BK'
    WHEN REPO_IND = 'R' AND (STAT_SECND_1 IN ('B2', 'B3','B7','BC')
    OR STAT_SECND_2 IN ('B2', 'B3','B7','BC')
    OR STAT_SECND_3 IN ('B2', 'B3','B7','BC')
    OR STAT_SECND_4 IN ('B2', 'B3','B7','BC')
    OR STAT_SECND_5 IN ('B2', 'B3','B7','BC')) THEN '02B_REPO_BK'
    WHEN REPO_IND = 'R' AND BKRPT_IND = 'N' THEN '02A_REPO_NONBK'
    ELSE 'UNKNOWN' END AS REPO_BK_IND,
    case when LGL_ENT_CURR<=8999 then '01_ALL'
    else '01_ALL_CO' end AS PORTFOLIO,
    CASE WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,1),'mm')-1 THEN '00_CURRENT'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,0),'mm')-1 THEN '0_1_29'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-1),'mm')-1 THEN '1_30_59'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-2),'mm')-1 THEN '2_60_89'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-3),'mm')-1 THEN '3_90_119'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-4),'mm')-1 THEN '4_120_149'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-5),'mm')-1 THEN '5_150_179'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-6),'mm')-1 THEN '6_180_209'
    ELSE '7_210_PLUS' END AS BUCKET,
    COUNT(*) AS ACCT_CNT, SUM(PRIN_BAL) AS PRIN_BAL
    FROM AUTOR2.DLQ_RPT WHERE DATA_DT = trunc(v_data_Dt)
    GROUP BY
    CASE WHEN REPO_IND = 'N' AND BKRPT_IND = 'N' THEN '01A_NONREPO_NONBK'
    WHEN REPO_IND = 'N' AND BKRPT_IND = 'B' THEN '01B_NONREPO_BK'
    WHEN REPO_IND = 'R' AND (STAT_SECND_1 IN ('B2', 'B3','B7','BC')
    OR STAT_SECND_2 IN ('B2', 'B3','B7','BC')
    OR STAT_SECND_3 IN ('B2', 'B3','B7','BC')
    OR STAT_SECND_4 IN ('B2', 'B3','B7','BC')
    OR STAT_SECND_5 IN ('B2', 'B3','B7','BC')) THEN '02B_REPO_BK'
    WHEN REPO_IND = 'R' AND BKRPT_IND = 'N' THEN '02A_REPO_NONBK'
    ELSE 'UNKNOWN' END,
    case when LGL_ENT_CURR<=8999 then '01_ALL'
    else '01_ALL_CO' end,
    CASE WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,1),'mm')-1 THEN '00_CURRENT'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,0),'mm')-1 THEN '0_1_29'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-1),'mm')-1 THEN '1_30_59'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-2),'mm')-1 THEN '2_60_89'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-3),'mm')-1 THEN '3_90_119'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-4),'mm')-1 THEN '4_120_149'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-5),'mm')-1 THEN '5_150_179'
    WHEN CONT_DATE > trunc(add_months(v_data_Dt+1,-6),'mm')-1 THEN '6_180_209'
    ELSE '7_210_PLUS' END
    ORDER BY REPO_BK_IND, BUCKET,PORTFOLIO;
    COMMIT;
    End repo_bk_proc;
    PROCEDURE repo_bk_rpt_proc is
    Begin
    INSERT INTO REPO_BK_RPT (
    DATA_DATE,PORTFOLIO,REPO_BK_IND,CURRENT_PRIN_BAL,CURRENT_ACCT_CNT,ONE_MTH_PRIN_BAL,ONE_MTH_ACCT_CNT,
    TWO_MTH_PRIN_BAL,TWO_MTH_ACCT_CNT,THR_MTH_PRIN_BAL,THR_MTH_ACCT_CNT,FOUR_MTH_PRIN_BAL,FOUR_MTH_ACCT_CNT,
    FIVE_MTH_PRIN_BAL,FIVE_MTH_ACCT_CNT,SIX_MTH_PRIN_BAL,SIX_MTH_ACCT_CNT,SEVE_MTH_PRIN_BAL,SEVE_MTH_ACCT_CNT,
    SEVE_MTH_PLUS_PRIN_BAL,SEVE_MTH_PLUS_ACCT_CNT)
    SELECT A.DATA_DT, A.PORTFOLIO, A.REPO_BK_IND,
    (NVL((SELECT B.PRIN_BAL FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '00_CURRENT'),0))/1000 AS CURRENT_PRIN_BAL,
    NVL((SELECT B.ACCT_CNT FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '00_CURRENT'),0) AS CURRENT_ACCT_CNT,
    (NVL((SELECT B.PRIN_BAL FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '0_1_29'),0))/1000 AS ONE_MTH_PRIN_BAL,
    NVL((SELECT B.ACCT_CNT FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '0_1_29'),0) AS ONE_MTH_ACCT_CNT,
    (NVL((SELECT B.PRIN_BAL FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '1_30_59'),0))/1000 AS TWO_MTH_PRIN_BAL,
    NVL((SELECT B.ACCT_CNT FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '1_30_59'),0) AS TWO_MTH_ACCT_CNT,
    (NVL((SELECT B.PRIN_BAL FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '2_60_89'),0))/1000 AS THR_MTH_PRIN_BAL,
    NVL((SELECT B.ACCT_CNT FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '2_60_89'),0) AS THR_MTH_ACCT_CNT,
    (NVL((SELECT B.PRIN_BAL FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '3_90_119'),0))/1000 AS FOUR_MTH_PRIN_BAL,
    NVL((SELECT B.ACCT_CNT FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '3_90_119'),0) AS FOUR_MTH_ACCT_CNT,
    (NVL((SELECT B.PRIN_BAL FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '4_120_149'),0))/1000 AS FIVE_MTH_PRIN_BAL,
    NVL((SELECT B.ACCT_CNT FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '4_120_149'),0) AS FIVE_MTH_ACCT_CNT,
    (NVL((SELECT B.PRIN_BAL FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '5_150_179'),0))/1000 AS SIX_MTH_PRIN_BAL,
    NVL((SELECT B.ACCT_CNT FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '5_150_179'),0) AS SIX_MTH_ACCT_CNT,
    (NVL((SELECT B.PRIN_BAL FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '6_180_209'),0))/1000 AS SEVE_MTH_PRIN_BAL,
    NVL((SELECT B.ACCT_CNT FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '6_180_209'),0) AS SEVE_MTH_ACCT_CNT,
    (NVL((SELECT B.PRIN_BAL FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '7_210_PLUS'),0))/1000 AS SEVE_MTH_PLUS_PRIN_BAL,
    NVL((SELECT B.ACCT_CNT FROM REPO_BK_STAGING B WHERE A.PORTFOLIO = B.PORTFOLIO
    AND A.REPO_BK_IND =B.REPO_BK_IND AND B.BUCKET = '7_210_PLUS'),0) AS SEVE_MTH_PLUS_ACCT_CNT
    FROM REPO_BK_STAGING A
    GROUP BY DATA_DT, PORTFOLIO, REPO_BK_IND
    ORDER BY PORTFOLIO,REPO_BK_IND;
    COMMIT;
    End repo_bk_rpt_proc;

    can you help me make it more simple this query.
    Not really sure what you're expecting us to do. If those are your business rules, those are your business rules. You might be able to make it slightly more readable by putting the basic selects in an inner view and putting the aggregating calls in an outer view, but then you would have to distinguish between the "discrete portfolios" and "all portfolios" queries.
    Which leads me to a question: do you know why you are using UNION rather than UNION ALL? UNION does an additional sort and may produce a different set of data than running either with UNION ALL or two separate queries.
    Cheers, APC
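    One structural simplification for the second procedure (a sketch only, not a drop-in replacement; the column names come from the staging insert above) is conditional aggregation, which reads REPO_BK_STAGING once instead of running one scalar subquery per bucket and column:
    SELECT data_dt,
           portfolio,
           repo_bk_ind,
           SUM(CASE WHEN bucket = '00_CURRENT' THEN prin_bal ELSE 0 END) / 1000 AS current_prin_bal,
           SUM(CASE WHEN bucket = '00_CURRENT' THEN acct_cnt ELSE 0 END)        AS current_acct_cnt,
           SUM(CASE WHEN bucket = '0_1_29'     THEN prin_bal ELSE 0 END) / 1000 AS one_mth_prin_bal,
           SUM(CASE WHEN bucket = '0_1_29'     THEN acct_cnt ELSE 0 END)        AS one_mth_acct_cnt
           -- ...the remaining buckets follow the same pattern...
    FROM   repo_bk_staging
    GROUP  BY data_dt, portfolio, repo_bk_ind
    ORDER  BY portfolio, repo_bk_ind;
    Once the full column list is written out, the same SELECT can feed the INSERT INTO REPO_BK_RPT directly.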

  • Why between for date is not returning data for this query ?

    Hello,
    I have a table with this structure and I am writing this query to fetch some rows based on a condition, but the query is not returning any data. Can you please tell me why?
    ID     DT
    003     11/8/2011
    002     10/8/2011
    001     9/8/2011
    And the query is :
    SELECT * FROM TABLE_NAME WHERE DT BETWEEN TO_DATE('08/08/2011','dd/mm/yyyy') AND TO_DATE('12/08/2011','dd/mm/yyyy');
    Edited by: bootstrap on Aug 13, 2011 7:10 AM

    > but what is the problem with that, why is that date not matched when I am providing the date format?
    Which part don't you understand? You did not use TO_DATE while inserting the data and the default date format was mm/dd/yyyy, right? The same default date format is used if you issue:
    SELECT * FROM TABLE_NAME
    Your original post states the above select returns:
    ID     DT
    003    11/8/2011
    002    10/8/2011
    001    9/8/2011
    So the dates you inserted are November 8 2011, October 8 2011 and September 8 2011. Now TO_DATE('08/08/2011','dd/mm/yyyy') is August 8 2011 and TO_DATE('12/08/2011','dd/mm/yyyy') is August 12 2011. So obviously:
    SELECT * FROM TABLE_NAME WHERE DT BETWEEN TO_DATE('08/08/2011','dd/mm/yyyy') AND TO_DATE('12/08/2011','dd/mm/yyyy');
    will not return any rows. Bottom line - never write code that uses implicit date conversions, since your code becomes dependent on client NLS settings and might work for one client and fail or produce wrong results for another client.
    SY.
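    To see what was actually stored, and to avoid the implicit conversion on the way in, something along these lines can be used (table and column names as posted):
    -- display the stored dates with an explicit mask instead of relying on NLS settings
    SELECT id, TO_CHAR(dt, 'dd-mon-yyyy') AS dt
    FROM   table_name;
    -- and always insert with an explicit mask as well
    INSERT INTO table_name (id, dt)
    VALUES ('003', TO_DATE('08/11/2011', 'dd/mm/yyyy'));  -- 8 November 2011, stated explicitly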

  • Is there any alternative logic for this issue

    Hello all ,
    I have a program which is executed as a background job every 15 minutes. My requirement is: when this batch job finishes successfully, I have to capture the date and time at which it finished and update the corresponding date and time in a ZTABLE.
    In order to achieve this, I have created a separate program which also runs in the background. In this program, using the table TBTCO, I get the last date and time at which the batch job finished successfully and modify the ZTABLE accordingly.
    Is there an alternative solution for this?
    thanks & regards,
    G.Pavan

    Hi gpavansap,
    When you define the background job in SM36, your first step should be the program which you have scheduled now.
    Create a second step in the same job with another program which just writes the system date and time to the ZTABLE. There is no need to read TBTCO at all.
    The second step executes immediately as soon as the first step ends successfully.
    If you don't want to create a separate program, you can have a checkbox "Update ZTable Only" on the selection screen of the first program itself. If this checkbox is selected, the program should only update the ZTABLE and do nothing else. Create a variant of your selection screen with this checkbox selected and schedule this variant as the second step of your background job.
    Regards
    Suresh
