Help with querying a 200 million record table

Hi,
I need to query a 200 million record table which is partitioned by monthly activity.
My problem is that I need to see how many activities occurred on one account within a time frame.
If there are 200 partitions, I need to go into all the partitions, get the activities of the account in each partition, and at the end return the total number of activities.
Fortunately, at most one activity is expected for an account in each partition; it may be present or absent.
If this table had 100 records, I would use this:
select account_no, count(*)
from Acct_actvy
group by account_no;

I must stress that it is critical that you not write code (SQL or PL/SQL) that uses hardcoded partition names to find data.
That approach is very risky, prone to runtime errors, difficult to maintain, and does not scale. It is not worth it.
From the developer's side, there should be total ignorance of the fact that a table is partitioned. A developer should treat a partitioned table no differently from any other table.
To give you an idea, this is a copy-and-paste from a SQL*Plus session doing what you want to do, against a partitioned table at least 3x bigger than yours. It covers about a 12 month period. There is a partition per day, plus empty daily partitions for the next 2 years. The SQL aggregation is monthly. I selected a random network address to illustrate.
SQL> select count(*) from x25_calls;
  COUNT(*)
619491919
Elapsed: 00:00:19.68
SQL>
SQL>  select TRUNC(callendtime,'MM') AS MONTH, sourcenetworkaddress, count(*) from x25_calls where sourcenetworkaddress = '3103165962'
  2  group by TRUNC(callendtime,'MM'), sourcenetworkaddress;
MONTH               SOURCENETWORKADDRESS   COUNT(*)
2005/09/01 00:00:00 3103165962                 3599
2005/10/01 00:00:00 3103165962                 1184
2005/12/01 00:00:00 3103165962                    4
2005/06/01 00:00:00 3103165962                    1
2005/04/01 00:00:00 3103165962                  560
2005/08/01 00:00:00 3103165962                  101
2005/03/01 00:00:00 3103165962                 3330
7 rows selected.
Elapsed: 00:00:19.72
As you can see - not a single reference to any partitioning. Excellent performance, despite running on an old K-class HP server.
The reason for the performance is simple. A correctly designed and implemented partitioning scheme that caters for most of the queries against the table. Correctly designed and implemented indexes - especially local bitmap indexes. Without any hacks like partition names and the like...
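For the original question, a minimal sketch of the same idea against the Acct_actvy table might look like the following. The activity date column name (activity_dt) is an assumption, since it was not given; the point is simply that the account number and the partition key go into the WHERE clause, never a partition name.

-- Hypothetical sketch: count one account's activities in a time frame.
-- Partition pruning happens automatically because the (assumed) partition
-- key activity_dt is filtered in the WHERE clause - no partition names needed.
select account_no, count(*)
from Acct_actvy
where account_no  = :account_no
and   activity_dt >= :period_start
and   activity_dt <  :period_end
group by account_no;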

Similar Messages

  • Table with 200 million records

    Dear all,
    I have to create a table which will hold 200 million records. I have to produce monthly reports from this data.
    The performance concerns me very much; does anyone have any suggestions?
    Thanks in advance.

    Hi,
    I have a situation like yours.
    Each month, you need to create a new partition; for the next year, that is yet another partition.
    For example, you have a table
    CREATE TABLE sales99_cpart(
      sale_id   NUMBER NOT NULL,
      sale_date DATE,
      prod_id   NUMBER,
      qty       NUMBER)
    PARTITION BY RANGE(sale_date)
    SUBPARTITION BY HASH(prod_id) SUBPARTITIONS 4
    STORE IN (data01, data02, data03, data04)
    (PARTITION cp1 VALUES LESS THAN ('01-APR-1999'),
     PARTITION cp2 VALUES LESS THAN ('01-JUL-1999'),
     PARTITION cp3 VALUES LESS THAN ('01-OCT-1999'),
     PARTITION cp4 VALUES LESS THAN ('01-JAN-2000'))
    /
    For the next year, add new partitions and subpartitions, as sketched below.
    Subpartitions behave like tables, and with them you can use parallel query, which is very good for performance.
    You can partition the table by range on the date and subpartition by hash on, for example, the call-center id.
    Next year, if you only want to keep recent history, you can drop just one partition.
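    A minimal sketch of that yearly maintenance, using hypothetical partition names and boundary dates for the table above:

    -- Hypothetical sketch: extend the range-partitioned table into the next
    -- year, and drop the oldest quarter once its history is no longer needed.
    ALTER TABLE sales99_cpart
      ADD PARTITION cp5 VALUES LESS THAN ('01-APR-2000');

    ALTER TABLE sales99_cpart
      DROP PARTITION cp1;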
    The cost: Oracle Partitioning is an extra-cost option of Oracle Enterprise Edition; it is not included by default.
    Nicolas.

  • Selecting Records from 125 million record table to insert into smaller table

    Oracle 11g
    I have a large table of 125 million records - t3_universe.  This table never gets updated or altered once loaded,  but holds data that we receive from a lead company.
    I need to select records from this large table that fit certain demographic criteria and insert those into a smaller table - T3_Leads -  that will be updated with regard to when the lead is mailed and for other relevant information.
    My question is what is the best (fastest) approach to select records from this 125 million record table to insert into the smaller table.  I have tried a variety of things - views, materialized views, direct insert into smaller table...I think I am probably missing other approaches.
    My current attempt has been to create a View using the query that selects the records as shown below.  Then use a second query that inserts into T3_Leads from this View V_Market.  This is very slow. Can I just use an Insert Into T3_Leads with this query - it did not seem to work with the WITH clause?    My Index on the large table is t3_universe_composite and includes zip_code, address_key, household_key. 
    CREATE VIEW V_Market  as
    WITH got_pairs    AS
     (   SELECT /*+ INDEX_FFS(t3_universe t3_universe_composite) */  l.zip_code, l.zip_plus_4, l.p1_givenname, l.surname, l.address, l.city, l.state, l.household_key, l.hh_type as l_hh_type, l.address_key, l.narrowband_income, l.p1_ms, l.p1_gender, l.p1_exact_age, l.p1_personkey, e.hh_type as filler_data, l.p1_seq_no, l.p2_seq_no 
         ,      ROW_NUMBER () OVER ( PARTITION BY  l.address_key 
                                      ORDER BY      l.hh_verification_date  DESC 
                      ) AS r_num   
         FROM   t3_universe  e   
         JOIN   t3_universe  l  ON   
                l.address_key  = e.address_key
                AND l.zip_code = e.zip_code
              AND   l.p1_gender != e.p1_gender
                 AND   l.household_key != e.household_key         
                 AND  l.hh_verification_date  >= e.hh_verification_date 
     )
  SELECT  * 
      FROM  got_pairs
      where l_hh_type !=1 and l_hh_type !=2 and filler_data != 1 and filler_data != 2 and zip_code in (select * from M_mansfield_02048) and p1_exact_age BETWEEN 25 and 70 and narrowband_income >= '8' and r_num = 1
    Then
    INSERT INTO T3_leads(zip, zip4, firstname, lastname, address, city, state, household_key, hh_type, address_key, income, relationship_status, gender, age, person_key, filler_data, p1_seq_no, p2_seq_no)
    select zip_code, zip_plus_4, p1_givenname, surname, address, city, state, household_key, l_hh_type, address_key, narrowband_income, p1_ms, p1_gender, p1_exact_age, p1_personkey, filler_data, p1_seq_no, p2_seq_no
    from V_Market;

    I had no trouble creating the view exactly as you posted it.  However, be careful here:
    and zip_code in (select * from M_mansfield_02048)
    You should name the column explicitly rather than select *.  (do you really have separate tables for different zip codes?)
    About the performance, it's hard to tell because you haven't posted anything we can use, like explain plans or traces, but simply encapsulating your query in a view is not likely to make it any faster.
    Depending on the size of the subset of rows you're selecting, the /*+ INDEX_FFS */ hint may be doing you more harm than good.
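    On the WITH-clause question: Oracle generally accepts a subquery factoring clause directly inside the subquery of an INSERT, so the intermediate view is not strictly needed. A minimal sketch of the shape, reduced to a hypothetical three-column subset rather than the full column lists above:

    -- Hypothetical, simplified sketch: the WITH clause sits inside the INSERT's
    -- subquery instead of being wrapped in a view.
    INSERT INTO t3_leads (zip, household_key, address_key)
    WITH got_pairs AS (
        SELECT l.zip_code,
               l.household_key,
               l.address_key,
               ROW_NUMBER() OVER (PARTITION BY l.address_key
                                  ORDER BY l.hh_verification_date DESC) AS r_num
        FROM   t3_universe l
    )
    SELECT zip_code, household_key, address_key
    FROM   got_pairs
    WHERE  r_num = 1;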

  • Handling internal table with more than 1 million records

    Hi All,
    We are facing a dump because storage parameters are set wrongly.
    Basically the dump is due to the internal table having more than 1 million records. We have increased the storage parameter size from 512 to 2048, but the dump still happens.
    Please advise whether there is any other way to handle these kinds of internal tables.
    P.S. We have tried the option of using a hashed table; this does not suit our scenario.
    Thanks and Regards,
    Vijay

    Hi
    Your problem can be solved by populating the internal table in chunks. For that you have to use the database cursor concept.
    Hope this code helps.
    " Declarations (assumption: IT_ZTABLE is typed like the database table ZTABLE)
    DATA: DB_CURSOR      TYPE CURSOR,
          IT_ZTABLE      TYPE STANDARD TABLE OF ZTABLE,
          G_PACKAGE_SIZE TYPE I.

    G_PACKAGE_SIZE = 50000.

    " Using a DB cursor to fetch the data in packages.
    OPEN CURSOR WITH HOLD DB_CURSOR FOR
      SELECT * FROM ZTABLE.
    DO.
      FETCH NEXT CURSOR DB_CURSOR
        INTO CORRESPONDING FIELDS OF TABLE IT_ZTABLE
        PACKAGE SIZE G_PACKAGE_SIZE.
      IF SY-SUBRC NE 0.
        CLOSE CURSOR DB_CURSOR.
        EXIT.
      ENDIF.
      " Process the current package of IT_ZTABLE here,
      " then clear it before fetching the next package.
      CLEAR IT_ZTABLE.
    ENDDO.

  • How can I update a particular column in a 7 million record table, where there are many conditions involved?

    I am designing a table for which I am loading the data from different tables using joins. But I have a Status column, for which I have about 16 different statuses from different tables; for each case I have a condition, and if it is satisfied
    then that particular status should appear in the status column, so I need to write the query as 16 different cases.
    Now, my question is: what is the best way to write these cases so that all the conditions are satisfied and the data still gets to the table quickly? The data we are getting is mostly from big tables of about 7 million records, and if we write the logic as a CASE
    it will scan the table for each case, about 16 times. How can I do this faster? Can anyone help me out?

    Here is the code I have written to get the data from temp tables, which take records from the 7 million row table filtered to records of year 2013. This is taking more than an hour to run. I am posting the part of the code which is running slow, mainly
    the part for the Status column.
    SELECT
    z.SYSTEMNAME
    --,Case when ZXC.[Subsystem Name] <> 'NULL' Then zxc.[SubSystem Name]
    --else NULL
    --End AS SubSystemName
    , CASE
    WHEN z.TAX_ID IN
    (SELECT DISTINCT zxc.TIN
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE zxc.[SubSystem Name] <> 'NULL')
    THEN
    (SELECT DISTINCT [Subsystem Name]
    FROM .dbo.SQS_Provider_Tracking zxc
    WHERE z.TAX_ID = zxc.TIN)
    End As SubSYSTEMNAME
    ,z.PROVIDERNAME
    ,z.STATECODE
    ,z.TAX_ID
    ,z.SRC_PAR_CD
    ,SUM(z.SEQUEST_AMT) Actual_Sequestered_Amt
    , CASE
    WHEN z.SRC_PAR_CD IN ('E','O','S','W')
    THEN 'Nonpar Waiver'
    -- --Is Puerto Rico of Lifesynch
    WHEN z.TAX_ID IN
    (SELECT DISTINCT a.TAX_ID
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.Bucket <> 'Nonpar')
    THEN
    (SELECT DISTINCT a.Bucket
    FROM .dbo.SQS_NonPar_PR_LS_TINs a
    WHERE a.TAX_ID = z.TAX_ID)
    --**Amendment Mailed**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT b.PROV_TIN
    FROM .dbo.SQS_Mailed_TINs_010614 b WITH (NOLOCK )
    where not exists (select * from dbo.sqs_objector_TINs t where b.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN
    (SELECT DISTINCT b.Mailing
    FROM .dbo.SQS_Mailed_TINs_010614 b
    WHERE z.TAX_ID = b.PROV_TIN)
    -- --**Amendment Mailed Wave 3-5**
    WHEN z.TAX_ID In
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (3rd Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (3rd Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (4th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (4th Wave)'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Amendment Mailed (5th Wave)'
    and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
    and z.Hosp_Ind = 'P'
    THEN 'Amendment Mailed (5th Wave)'
    -- --**Top Objecting Systems**
    WHEN z.SYSTEMNAME IN
    ('ADVENTIST HEALTH SYSTEM','ASCENSION HEALTH ALLIANCE','AULTMAN HEALTH FOUNDATION','BANNER HEALTH SYSTEM')
    THEN 'Top Objecting Systems'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Top Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H'
    THEN 'Top Objecting Systems'
    -- --**Other Objecting Hospitals**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Objector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Other Objecting Hospitals'
    -- --**Objecting Physicians**
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE obj.[Objector?] in ('Objector','Top Objector')
    and z.TAX_ID = obj.TIN)
    and z.Hosp_Ind = 'P')
    THEN 'Objecting Physicians'
    --****Rejecting Hospitals****
    WHEN (z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    INNER JOIN .dbo.SQS_Provider_Tracking obj
    ON h.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector'
    WHERE z.TAX_ID = h.TAX_ID
    OR h.SMG_ID IS NOT NULL
    )and z.Hosp_Ind = 'H')
    THEN 'Rejecting Hospitals'
    --****Rejecting Physicians****
    WHEN
    (z.TAX_ID IN
    (SELECT DISTINCT
    obj.TIN
    FROM .dbo.SQS_Provider_Tracking obj
    WHERE z.TAX_ID = obj.TIN
    AND obj.[Objector?] = 'Rejector')
    and z.Hosp_Ind = 'P')
    THEN 'Rejecting Physicians'
    ----**********ALL OBJECTORS SHOULD HAVE BEEN BUCKETED AT THIS POINT IN THE QUERY**********
    -- --**Non-Objecting Hospitals**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    h.TAX_ID
    FROM
    #HIHO_Records h
    WHERE
    (z.TAX_ID = h.TAX_ID)
    OR h.SMG_ID IS NOT NULL)
    and z.Hosp_Ind = 'H'
    THEN 'Non-Objecting Hospitals'
    -- **Outstanding Contracts for Review**
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'Non-Objecting Bilateral Physicians'
    AND z.TAX_ID = qz.PROV_TIN)
    Then 'Non-Objecting Bilateral Physicians'
    When z.TAX_ID in
    (select distinct
    p.TAX_ID
    from dbo.SQS_CoC_Potential_Mail_List p
    where p.amendmentrights <> 'Unilateral'
    AND z.TAX_ID = p.TAX_ID)
    THEN 'Non-Objecting Bilateral Physicians'
    WHEN z.TAX_ID IN
    (SELECT DISTINCT
    qz.PROV_TIN
    FROM
    [SQS_Mailed_TINs] qz
    where qz.Mailing = 'More Research Needed'
    AND qz.PROV_TIN = z.TAX_ID)
    THEN 'More Research Needed'
    WHEN z.TAX_ID IN (SELECT DISTINCT qz.PROV_TIN FROM [SQS_Mailed_TINs] qz where qz.Mailing = 'Objector' AND qz.PROV_TIN = z.TAX_ID)
    THEN 'ERROR'
    else 'Market Review/Preparing to Mail'
    END AS [STATUS Column]
    Please suggest how this can be improved.
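    One commonly suggested rework for this kind of multi-branch CASE (a sketch only, not from a reply in this thread, and showing just two of the sixteen buckets) is to turn the correlated IN subqueries into LEFT JOINs against de-duplicated lookup sets, so each lookup table is read once instead of once per row and branch. Here dbo.base_table is a hypothetical stand-in for the real 7 million row source, and the sketch assumes each TIN maps to at most one Bucket / one Mailing.

    -- Hypothetical sketch: one LEFT JOIN per lookup set, then a single CASE.
    SELECT  z.SYSTEMNAME,
            z.TAX_ID,
            CASE
                WHEN npw.TAX_ID IS NOT NULL THEN npw.Bucket      -- Puerto Rico / Lifesynch
                WHEN mail.PROV_TIN IS NOT NULL
                     AND z.Hosp_Ind = 'P'   THEN mail.Mailing    -- Amendment Mailed
                ELSE 'Market Review/Preparing to Mail'
            END AS [STATUS Column]
    FROM    dbo.base_table z
    LEFT JOIN (SELECT DISTINCT a.TAX_ID, a.Bucket
               FROM dbo.SQS_NonPar_PR_LS_TINs a
               WHERE a.Bucket <> 'Nonpar') npw
           ON npw.TAX_ID = z.TAX_ID
    LEFT JOIN (SELECT DISTINCT b.PROV_TIN, b.Mailing
               FROM dbo.SQS_Mailed_TINs_010614 b
               WHERE NOT EXISTS (SELECT 1 FROM dbo.sqs_objector_TINs t
                                 WHERE t.prov_tin = b.PROV_TIN)) mail
           ON mail.PROV_TIN = z.TAX_ID;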

  • Performance issues in million records table

    I have a scenario wherein I have some 20 tables, each with a million or more records. [Historical]
    On average I add 1,500 - 2,500 records a day, i.e. roughly a million records every year.
    I am looking for archival solutions for these master tables.
    Operations on the archival tables would be limited to reads.
    Expected benefits:
    The user base would be around 2,500 users in total, but expect 300 - 500 parallel users at the most.
    Very limited usage of historical data compared to operations on current data.
    Performance of operations over current data is more important than that on historical data.
    Environment - Oracle 9i - should be migrating to Oracle 10g soon.
    Some solutions I could think of:
    [ 1 ] Put every archived record into an archival table and fetch it from there,
    i.e. clearly distinguish searches as current or archival prior to searching.
    The impact, I feel, is that the archival tables are again ever increasing by approximately a million a year.
    [ 2 ] Put records into various archival tables, each differentiated by year.
    For instance, every year I replicate the set of tables and that year's data goes into its own table.
    But then how do I do a fetch?
    Note - I do have a unique way of identifying each record in my master table - the primary key is based on a YYYYMMXXXXXXXXXX format, e.g. 2008070000562330. Will the year part help me in any way to pick the correct table?
    The major concern is that I currently have very good response times thanks to indexing and other common measures, but I do not want this to degrade in a year or more; I expect to improve on the current response times and also ensure they stay the same over time.
    Also, I don't want to change every query in my app - unless there is no way out.

    Hi,
    Read the following documentation link about Partitioning in Oracle.
    Best Regards,
    Alex
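    A minimal sketch of what that suggestion looks like, with entirely hypothetical table and column names: a master table range-partitioned by year, so historical partitions can be compressed, moved, or dropped without changing the application's queries.

    -- Hypothetical sketch: one partition per year of history.
    CREATE TABLE master_hist (
        record_id   NUMBER        NOT NULL,   -- e.g. keys like 2008070000562330
        created_dt  DATE          NOT NULL,
        payload     VARCHAR2(200)
    )
    PARTITION BY RANGE (created_dt)
    (
        PARTITION p2007 VALUES LESS THAN (TO_DATE('01-01-2008','DD-MM-YYYY')),
        PARTITION p2008 VALUES LESS THAN (TO_DATE('01-01-2009','DD-MM-YYYY')),
        PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );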

  • Need help with Query

    Hi,
    Good day everyone! I need help writing a query. I have this table with the following data in it:
    ACCT_CODE  FSYR      YTD_AMT
    A123            11          100
    A456            11          200
    A123            10          50
    A456            10          100
    I want the output to look like this:
    ACCT_CODE     CURRENT_YEAR(11)       PRIOR_YEAR(10)
    A123               100                              50
    A456               200                              100
    The user will input the fiscal year and based on that input, I want to get the prior year value as well.
    Thank you for all your help!!
    Edited by: user5737516 on Jun 29, 2011 6:48 AM
    Edited by: user5737516 on Jun 29, 2011 6:50 AM

    user5737516 wrote:
    Hi,
    Good day everyone! I need help writing a query. I have this table with the following data in them...
    ACCT_CODE FSYR YTD_AMT
    A123 11 100
    A456 11 200
    A123 10 50
    A456 10 100
    I want the output to look like this:
    ACCT_CODE CURRENT_YEAR PRIOR_YEAR
    A123 100 50
    A456 200 100
    The user will input the fiscal year and based on that input, I want to get the prior year value as well.
    Thank you for all your help!!
    What is prior year?
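    A common way to produce that layout (a sketch only, not from a reply in this thread; it assumes the table is called acct_ytd, that FSYR is numeric, and that the fiscal year arrives as a bind variable) is conditional aggregation:

    -- Hypothetical sketch: pivot the requested year and the prior year into columns.
    SELECT acct_code,
           SUM(CASE WHEN fsyr = :input_fsyr     THEN ytd_amt END) AS current_year,
           SUM(CASE WHEN fsyr = :input_fsyr - 1 THEN ytd_amt END) AS prior_year
    FROM   acct_ytd
    WHERE  fsyr IN (:input_fsyr, :input_fsyr - 1)
    GROUP  BY acct_code
    ORDER  BY acct_code;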

  • Help with query output

    Hello, I have the following query that I'm running in Oracle SQL Developer 1.2.1
    WITH group_by_4_column_results AS
    (SELECT m_atschunk.employee AS employee,
    SUM(CASE
    WHEN
    M_ATSCHUNK.rolloffDaysCount = '108545043' AND m_atschunk.infractiondate BETWEEN SYSDATE - 365 AND SYSDATE THEN 1 ELSE 0 END) as rolloffs,
    M_ATSCHUNK.INFRACTIONDATE + 365 as infractionDate
    FROM M_ATSCHUNK
    group by employee, infractionDate, rolloffDaysCount
    )
    SELECT g4.*,
    SUM (rolloffs) OVER (PARTITION BY employee) AS total_rolloffs
    FROM group_by_4_column_results g4
    It will output the key elements of what I need. But where it sums up the 'total_rolloffs', I need to add that number back into the infractiondate column. Any help would be greatly appreciated.
    CREATE TABLE M_ATSCHUNK
    (EMPLOYEE varchar(50),
    ROLLOFFDAYSCOUNT varchar(3),
    INFRACTIONDATE date)
    INSERT ALL
      INTO M_ATSCHUNK (EMPLOYEE, ROLLOFFDAYSCOUNT, INFRACTIONDATE) VALUES ('PHIL','YES', to_date('2010/01/01 08:00:00', 'yyyy/mm/dd hh24:mi:ss'))
      INTO M_ATSCHUNK (EMPLOYEE, ROLLOFFDAYSCOUNT, INFRACTIONDATE) VALUES ('PHIL','YES', to_date('2010/01/02 08:00:00', 'yyyy/mm/dd hh24:mi:ss'))
      INTO M_ATSCHUNK (EMPLOYEE, ROLLOFFDAYSCOUNT, INFRACTIONDATE) VALUES ('PHIL','YES', to_date('2010/01/03 08:00:00', 'yyyy/mm/dd hh24:mi:ss'))
      INTO M_ATSCHUNK (EMPLOYEE, ROLLOFFDAYSCOUNT, INFRACTIONDATE) VALUES ('PHIL','YES', to_date('2010/01/04 08:00:00', 'yyyy/mm/dd hh24:mi:ss'))
      INTO M_ATSCHUNK (EMPLOYEE, ROLLOFFDAYSCOUNT, INFRACTIONDATE) VALUES ('PHIL','YES', to_date('2010/01/05 08:00:00', 'yyyy/mm/dd hh24:mi:ss'))
      INTO M_ATSCHUNK (EMPLOYEE, ROLLOFFDAYSCOUNT, INFRACTIONDATE) VALUES ('PHIL','NO',  to_date('2010/02/01 08:00:00', 'yyyy/mm/dd hh24:mi:ss'))
      INTO M_ATSCHUNK (EMPLOYEE, ROLLOFFDAYSCOUNT, INFRACTIONDATE) VALUES ('PHIL','NO',  to_date('2010/03/01 08:00:00', 'yyyy/mm/dd hh24:mi:ss'))
      INTO M_ATSCHUNK (EMPLOYEE, ROLLOFFDAYSCOUNT, INFRACTIONDATE) VALUES ('NIKI','YES', to_date('2010/01/01 08:00:00', 'yyyy/mm/dd hh24:mi:ss'))
      INTO M_ATSCHUNK (EMPLOYEE, ROLLOFFDAYSCOUNT, INFRACTIONDATE) VALUES ('NIKI','YES', to_date('2010/01/01 08:00:00', 'yyyy/mm/dd hh24:mi:ss'))
      INTO M_ATSCHUNK (EMPLOYEE, ROLLOFFDAYSCOUNT, INFRACTIONDATE) VALUES ('NIKI','YES', to_date('2010/01/01 08:00:00', 'yyyy/mm/dd hh24:mi:ss'))
      INTO M_ATSCHUNK (EMPLOYEE, ROLLOFFDAYSCOUNT, INFRACTIONDATE) VALUES ('NIKI','NO',  to_date('2010/01/01 08:00:00', 'yyyy/mm/dd hh24:mi:ss'))
      INTO M_ATSCHUNK (EMPLOYEE, ROLLOFFDAYSCOUNT, INFRACTIONDATE) VALUES ('NIKI','NO',  to_date('2010/01/01 08:00:00', 'yyyy/mm/dd hh24:mi:ss'))
      INTO M_ATSCHUNK (EMPLOYEE, ROLLOFFDAYSCOUNT, INFRACTIONDATE) VALUES ('NIKI','NO',  to_date('2010/01/01 08:00:00', 'yyyy/mm/dd hh24:mi:ss'))
      INTO M_ATSCHUNK (EMPLOYEE, ROLLOFFDAYSCOUNT, INFRACTIONDATE) VALUES ('NIKI','NO',  to_date('2010/01/01 08:00:00', 'yyyy/mm/dd hh24:mi:ss'))
    SELECT * FROM dual;

    Phil3061 wrote:
    I need to add that number back into the infractiondate column.
    Well, in general you need to use UPDATE or, better, MERGE. However, your data sample does not show any rolloffs:
    SQL> SELECT * FROM M_ATSCHUNK;
    EMPLOYEE                                           ROL INFRACTIO
    PHIL                                               YES 01-JAN-10
    PHIL                                               YES 02-JAN-10
    PHIL                                               YES 03-JAN-10
    PHIL                                               YES 04-JAN-10
    PHIL                                               YES 05-JAN-10
    PHIL                                               NO  01-FEB-10
    PHIL                                               NO  01-MAR-10
    NIKI                                               YES 01-JAN-10
    NIKI                                               YES 01-JAN-10
    NIKI                                               YES 01-JAN-10
    NIKI                                               NO  01-JAN-10
    EMPLOYEE                                           ROL INFRACTIO
    NIKI                                               NO  01-JAN-10
    NIKI                                               NO  01-JAN-10
    NIKI                                               NO  01-JAN-10
    14 rows selected.
    SQL> WITH group_by_4_column_results AS
      2  (SELECT m_atschunk.employee AS employee,
      3  SUM(CASE
      4  WHEN
      5  M_ATSCHUNK.rolloffDaysCount = '108545043' AND m_atschunk.infractiondate BETWEEN SYSDATE - 365 AND SYSDATE THEN 1 ELSE 0 END) as
    rolloffs,
      6  M_ATSCHUNK.INFRACTIONDATE + 365 as infractionDate
      7  FROM M_ATSCHUNK
      8  group by employee, infractionDate, rolloffDaysCount
      9  )
    10  SELECT g4.*,
    11  SUM (rolloffs) OVER (PARTITION BY employee) AS total_rolloffs
    12  FROM group_by_4_column_results g4
    13  /
    EMPLOYEE                                             ROLLOFFS INFRACTIO TOTAL_ROLLOFFS
    NIKI                                                        0 01-JAN-11              0
    NIKI                                                        0 01-JAN-11              0
    PHIL                                                        0 01-JAN-11              0
    PHIL                                                        0 02-JAN-11              0
    PHIL                                                        0 03-JAN-11              0
    PHIL                                                        0 04-JAN-11              0
    PHIL                                                        0 05-JAN-11              0
    PHIL                                                        0 01-FEB-11              0
    PHIL                                                        0 01-MAR-11              0
    9 rows selected.
    SQL>
    So adjust the data sample and, based on it, tell us what the expected results are.
    SY.

  • Help with query calculations (recursive)

    Hi All,
    I want some help with a query that uses a base rate and then uses the result in the next calculation year.
    Here an example:
    create table rate_type(
    rate_type_id    number,
    rate_desc       nvarchar2(50),
    rate_base_year  number
    );
    insert into rate_type(rate_type_id, rate_desc, rate_base_year) values (1, 'Desc1', 4.6590);
    insert into rate_type(rate_type_id, rate_desc, rate_base_year) values (2, 'Desc2', 4.6590);
    create table rates (
    rate_type_id number,
    rate_year    number,
    rate_value   number
    );
    insert into rates(rate_type_id, rate_year, rate_value) values (1, 2012, 1.2);
    insert into rates(rate_type_id, rate_year, rate_value) values (1, 2013, 1.3);
    insert into rates(rate_type_id, rate_year, rate_value) values (1, 2014, 1.4);
    insert into rates(rate_type_id, rate_year, rate_value) values (2, 2012, 1.2);
    insert into rates(rate_type_id, rate_year, rate_value) values (2, 2013, 1.3);
    insert into rates(rate_type_id, rate_year, rate_value) values (2, 2014, 1.4);
    The calculation for the first year should use the base rate of the rate type. The next year should use the result of the previous year, and so on.
    The result of my sample data is:
    2012 = 4.6590 + 1.2 + 4.6590 * (1.2 * 0.01) = 5.9149
    2013 = 5.9149 + 1.3 + 5.9149 * (1.3 * 0.01) = 7.2918
    2014 = 7.2918 + 1.4 + 7.2918 * (1.4 * 0.01) = 8.7939
    Query result:
    NAME     2012      2013      2014
    Desc1    5.9149    7.2918    8.7939
    Desc2    XXXX      XXXX      XXXX
    How can I do this in one select statement? Any ideas?
    Thanks!

    Assuming you are on 11.2:
    with t as (
               select  a.rate_type_id,
                       rate_desc,
                       rate_year,
                       rate_base_year,
                       rate_value,
                       count(*) over(partition by a.rate_type_id) cnt,
                       row_number() over(partition by a.rate_type_id order by rate_year) rn
                 from  rate_type a,
                       rates b
                 where a.rate_type_id = b.rate_type_id
               ),
        r(
          rate_type_id,
          rate_desc,
          rate_year,
          rate_base_year,
          rate_value,
          cnt,
          rn,
          result
         ) as (
                select  rate_type_id,
                        rate_desc,
                        rate_year,
                        rate_base_year,
                        rate_value,
                        cnt,
                        rn,
                        rate_base_year + rate_value + rate_base_year * rate_value * 0.01 result
                  from  t
                  where rn = 1
               union all
                select  t.rate_type_id,
                        t.rate_desc,
                        t.rate_year,
                        t.rate_base_year,
                        t.rate_value,
                        t.cnt,
                        t.rn,
                        r.result + t.rate_value + r.result * t.rate_value * 0.01 result
                  from  r,
                        t
                  where t.rate_type_id = r.rate_type_id
                    and t.rn = r.rn + 1
               )
    select  *
      from  (
             select  rate_desc name,
                     rate_year,
                     result
               from  r
               where rn <= cnt
            )
      pivot (sum(result) for rate_year in (2012,2013,2014))
      order by name
    NAME             2012       2013       2014
    Desc1        5.914908  7.2918018 8.79388703
    Desc2        5.914908  7.2918018 8.79388703
    SQL>
    Obviously, pivoting assumes you know the rate_year values upfront. If not, then without pivoting:
    with t as (
               select  a.rate_type_id,
                       rate_desc,
                       rate_year,
                       rate_base_year,
                       rate_value,
                       count(*) over(partition by a.rate_type_id) cnt,
                       row_number() over(partition by a.rate_type_id order by rate_year) rn
                 from  rate_type a,
                       rates b
                 where a.rate_type_id = b.rate_type_id
               ),
        r(
          rate_type_id,
          rate_desc,
          rate_year,
          rate_base_year,
          rate_value,
          cnt,
          rn,
          result
         ) as (
                select  rate_type_id,
                        rate_desc,
                        rate_year,
                        rate_base_year,
                        rate_value,
                        cnt,
                        rn,
                        rate_base_year + rate_value + rate_base_year * rate_value * 0.01 result
                  from  t
                  where rn = 1
               union all
                select  t.rate_type_id,
                        t.rate_desc,
                        t.rate_year,
                        t.rate_base_year,
                        t.rate_value,
                        t.cnt,
                        t.rn,
                        r.result + t.rate_value + r.result * t.rate_value * 0.01 result
                  from  r,
                        t
                  where t.rate_type_id = r.rate_type_id
                    and t.rn = r.rn + 1
               )
    select  rate_desc name,
            rate_year,
            result
      from  r
      where rn <= cnt
      order by name,
               rate_year
    NAME        RATE_YEAR     RESULT
    Desc1            2012   5.914908
    Desc1            2013  7.2918018
    Desc1            2014 8.79388703
    Desc2            2012   5.914908
    Desc2            2013  7.2918018
    Desc2            2014 8.79388703
    6 rows selected.
    SQL>
    SY.

  • Reg update of a 10 million record table from 1 million record table

    I have 2 tables.
    Table 1: 10 million records
    21 indexes --> 1) acct_id, acct_seq_no -- index_1
    2) c1, c2, c3 -- index_2
    Table 2: 1.5 million records
    1 index on (acct_id, acct_seq_no) - idx_1
    The common keys are acct_id and acct_seq_no.
    I'm updating table 1 from table 2.
    I need to use index_1 from table 1 and idx_1 from table 2.
    How can I make my query use only these particular indexes?
    My query is as follows:
    UPDATE csban_&1 csb
    SET (
    duns_no,
    hdqtrs_duns_no,
    us_ultmt_duns_no,
    sci_id,
    blg_cl_id,
    cl_id
    ) =
    ( SELECT temp.duns_no,
    temp.hdqtrs_duns_no,
    temp.us_ultmt_duns_no,
    temp.sci_id,
    temp.blg_cl_id,
    temp.cl_id
    FROM csban_abi_temp temp
    WHERE csb.acct_id = temp.acct_id
    AND csb.acct_seq_no = temp.acct_seq_no
    AND rownum < 2
    )
    WHERE
    EXISTS
    (
    SELECT 1
    FROM csban_abi_temp temp1
    WHERE csb.acct_id = temp1.acct_id
    AND csb.acct_seq_no = temp1.acct_seq_no
    )
    Do I need to put an index hint after this?
    UPDATE csban_&1 csb --???????? /*+ index (csb.index_1) */
    Thanks in advance
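    For reference, an index hint names the table alias and the index, separated by a space (no "csb." prefix), and goes right after the UPDATE keyword. A minimal sketch of the syntax only, abbreviated to a single SET column and without claiming the hint is actually needed here:

    -- Hypothetical sketch: hint syntax is /*+ INDEX(alias index_name) */.
    UPDATE /*+ INDEX(csb index_1) */ csban_1 csb
    SET csb.duns_no = ( SELECT temp.duns_no
                        FROM csban_abi_temp temp
                        WHERE csb.acct_id = temp.acct_id
                        AND csb.acct_seq_no = temp.acct_seq_no
                        AND rownum < 2 )
    WHERE EXISTS ( SELECT 1
                   FROM csban_abi_temp temp1
                   WHERE csb.acct_id = temp1.acct_id
                   AND csb.acct_seq_no = temp1.acct_seq_no );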

    Thanks a lot david and rob for sharing the info.
    Please find the details
    SQL> EXPLAIN PLAN FOR
    UPDATE csban_2 csb
    SET (
    duns_no,
    hdqtrs_duns_no,
    us_ultmt_duns_no,
    sci_id,
    blg_cl_id,
    cl_id
    ) =
    ( SELECT duns_no,
    hdqtrs_duns_no,
    us_ultmt_duns_no,
    sci_id,
    blg_cl_id,
    cl_id
    FROM csban_abi_temp temp
    WHERE csb.acct_id = temp.acct_id
    AND csb.acct_seq_no = temp.acct_seq_no
    AND rownum < 2
    )
    WHERE
    EXISTS
    (
    SELECT 1
    FROM csban_abi_temp temp1
    WHERE csb.acct_id = temp1.acct_id
    AND csb.acct_seq_no = temp1.acct_seq_no
    )
    Explained.
    SQL>
    SQL>
    SQL> SELECT * FROM TABLE(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 584770029

    | Id | Operation                   | Name            | Rows | Bytes | Cost |    TQ |IN-OUT| PQ Distrib |
    |  0 | UPDATE STATEMENT            |                 | 530K |   19M | 8213 |       |      |            |
    |  1 | UPDATE                      | CSBAN_2         |      |       |      |       |      |            |
    |* 2 | FILTER                      |                 |      |       |      |       |      |            |
    |  3 | PX COORDINATOR              |                 |      |       |      |       |      |            |
    |  4 | PX SEND QC (RANDOM)         | :TQ10000        | 530K |   19M | 8213 | Q1,00 | P->S | QC (RAND)  |
    |  5 | PX BLOCK ITERATOR           |                 | 530K |   19M | 8213 | Q1,00 | PCWC |            |
    |  6 | TABLE ACCESS FULL           | CSBAN_2         | 530K |   19M | 8213 | Q1,00 | PCWP |            |
    |* 7 | INDEX RANGE SCAN            | IDX_CSB_ABI_TMP |    1 |    10 |    3 |       |      |            |
    |* 8 | COUNT STOPKEY               |                 |      |       |      |       |      |            |
    |  9 | TABLE ACCESS BY INDEX ROWID | CSBAN_ABI_TEMP  |    1 |    38 |    4 |       |      |            |
    |* 10| INDEX RANGE SCAN            | IDX_CSB_ABI_TMP |    1 |       |    3 |       |      |            |

    Predicate Information (identified by operation id):

       2 - filter( EXISTS (SELECT 0 FROM "CSBAN_ABI_TEMP" "TEMP1"
                   WHERE "TEMP1"."ACCT_SEQ_NO"=:B1 AND "TEMP1"."ACCT_ID"=:B2))
       7 - access("TEMP1"."ACCT_ID"=:B1 AND "TEMP1"."ACCT_SEQ_NO"=:B2)
       8 - filter(ROWNUM<2)
      10 - access("TEMP"."ACCT_ID"=:B1 AND "TEMP"."ACCT_SEQ_NO"=:B2)

    Note
       - cpu costing is off (consider enabling it)

    30 rows selected.
    The query completed and took 1 hr 47 min.
    SQL> Updating CSBAN from TEMP table
    old 1: UPDATE /*+ INDEX(acct_id,acct_seq_no) */ csban_&1 csb
    new 1: UPDATE /*+ INDEX(acct_id,acct_seq_no) */ csban_1 csb
    1611807 rows updated.
    Elapsed: 01:47:16.40

  • 11g: Multicolumn pivot - help with query

    Hi everyone,
    My first attempt at posting here was not very successful. I have now read the rules, and done more homework.
    I'm fairly new to Oracle databases, but have some experience with MySQL and Postgres from earlier.
    First of all, here is an example table of what I have today. This is only an extract of the full table, but these fields are the interesting ones.
    CREATE TABLE trans
         ("FROM_LIC" int, "FROM_LOCATION" varchar2(18), "TO_LOCATION" varchar2(18), "CREATE_DT" timestamp)
    INSERT ALL
         INTO trans ("FROM_LIC", "FROM_LOCATION", "TO_LOCATION", "CREATE_DT")
               VALUES (4100002563, '215', 'INN_MONO_05', '04-Mar-2013 11:54:21 AM')
         INTO trans ("FROM_LIC", "FROM_LOCATION", "TO_LOCATION", "CREATE_DT")
               VALUES (4100002563, 'INN_MONO_05', 'INN_MONO_06_BANE_R', '04-Mar-2013 11:55:08 AM')
         INTO trans ("FROM_LIC", "FROM_LOCATION", "TO_LOCATION", "CREATE_DT")
               VALUES (4100002563, 'INN_MONO_06_BANE_R', 'TROLLEY_19', '04-Mar-2013 12:01:06 PM')
         INTO trans ("FROM_LIC", "FROM_LOCATION", "TO_LOCATION", "CREATE_DT")
               VALUES (4100002563, 'TROLLEY_19', 'UT_OPPLAST_5_2', '04-Mar-2013 12:01:56 PM')
         INTO trans ("FROM_LIC", "FROM_LOCATION", "TO_LOCATION", "CREATE_DT")
               VALUES (4100002563, 'UT_OPPLAST_5_2', 'STG010801', '04-Mar-2013 12:01:56 PM')
         INTO trans ("FROM_LIC", "FROM_LOCATION", "TO_LOCATION", "CREATE_DT")
               VALUES (4100003259, '231', 'INN_MONO_04', '04-Mar-2013 02:18:31 PM')
         INTO trans ("FROM_LIC", "FROM_LOCATION", "TO_LOCATION", "CREATE_DT")
               VALUES (4100003259, 'INN_MONO_04', 'INN_MONO_04_BANE_L', '04-Mar-2013 02:19:28 PM')
         INTO trans ("FROM_LIC", "FROM_LOCATION", "TO_LOCATION", "CREATE_DT")
               VALUES (4100003259, 'INN_MONO_04_BANE_L', 'TROLLEY_5', '04-Mar-2013 02:22:41 PM')
         INTO trans ("FROM_LIC", "FROM_LOCATION", "TO_LOCATION", "CREATE_DT")
               VALUES (4100003259, 'TROLLEY_5', 'UT_OPPLAST_3_1', '04-Mar-2013 02:23:35 PM')
         INTO trans ("FROM_LIC", "FROM_LOCATION", "TO_LOCATION", "CREATE_DT")
               VALUES (4100003262, '243', 'INN_MONO_06', '04-Mar-2013 01:37:49 PM')
         INTO trans ("FROM_LIC", "FROM_LOCATION", "TO_LOCATION", "CREATE_DT")
               VALUES (4100003262, 'INN_MONO_06', 'INN_MONO_06_BANE_R', '04-Mar-2013 01:39:09 PM')
         INTO trans ("FROM_LIC", "FROM_LOCATION", "TO_LOCATION", "CREATE_DT")
               VALUES (4100003262, 'INN_MONO_06_BANE_R', 'TROLLEY_10', '04-Mar-2013 01:43:48 PM')
         INTO trans ("FROM_LIC", "FROM_LOCATION", "TO_LOCATION", "CREATE_DT")
               VALUES (4100003262, 'TROLLEY_10', 'UT_OPPLAST_5_2', '04-Mar-2013 01:44:58 PM')
         INTO trans ("FROM_LIC", "FROM_LOCATION", "TO_LOCATION", "CREATE_DT")
               VALUES (4100003262, 'UT_OPPLAST_5_2', 'STG010904', '04-Mar-2013 01:46:30 PM')
         INTO trans ("FROM_LIC", "FROM_LOCATION", "TO_LOCATION", "CREATE_DT")
               VALUES (4100003263, '231', 'INN_MONO_04', '04-Mar-2013 02:18:31 PM')
         INTO trans ("FROM_LIC", "FROM_LOCATION", "TO_LOCATION", "CREATE_DT")
               VALUES (4100003263, 'INN_MONO_04', 'INN_MONO_04_BANE_R', '04-Mar-2013 02:19:20 PM')
    SELECT * FROM dual
    ;
    As of now, I have a query that returns a data grid from the table, looking exactly like the example table I've included.
    Here is a copy of the query that gives the example table as a result from the table in our database:
    select * from (
        select distinct from_lic, from_location, to_location, create_dt from trans where to_location like 'INN_MONO___' and create_dt > sysdate-(3/24)
        union
        select distinct from_lic, from_location, to_location, create_dt from trans where from_location like 'INN_MONO___' and to_location like 'INN_MONO%BANE%' and create_dt > sysdate-(3/24)
        union
        select distinct from_lic, from_location, to_location, create_dt from trans where from_location like 'INN_MONO%BANE%' and to_location like 'TROLLEY%' and create_dt > sysdate-(3/24)
        union
        select distinct from_lic, from_location, to_location, create_dt from trans where from_location like 'TROLLEY%' and to_location like 'UT_OPPLAST____' and create_dt > sysdate-(3/24)
        union
        select distinct from_lic, from_location, to_location, create_dt from trans where from_location like 'UT_OPPLAST____%' and (to_location like 'STG%' or to_location like 'UT_OPPLAST%GULV') and create_dt > sysdate-(3/24)
      ) order by from_lic, create_dt
    I would be delighted if you could help me formulate a new query that gives THIS output, from the same table, with the same constraints as in the original query:
    CREATE TABLE new_table
         ("FROM_LIC" int, "FLOOR" timestamp, "INFEED" timestamp, "TROLLEY" timestamp, "OUTFEED" timestamp, "STAGING" varchar2(16))
    INSERT ALL
         INTO new_table ("FROM_LIC", "FLOOR", "INFEED", "TROLLEY", "OUTFEED", "STAGING")
               VALUES (4100002563, '04-Mar-2013 11:54:00 AM', '04-Mar-2013 11:55:00 AM', '04-Mar-2013 12:01:00 PM', '04-Mar-2013 12:01:00 PM', '03.04.2013 12:01')
         INTO new_table ("FROM_LIC", "FLOOR", "INFEED", "TROLLEY", "OUTFEED", "STAGING")
               VALUES (4100003259, '04-Mar-2013 02:18:00 PM', '04-Mar-2013 02:19:00 PM', '04-Mar-2013 02:22:00 PM', '04-Mar-2013 02:23:00 PM', NULL)
         INTO new_table ("FROM_LIC", "FLOOR", "INFEED", "TROLLEY", "OUTFEED", "STAGING")
               VALUES (4100003262, '04-Mar-2013 01:37:00 PM', '04-Mar-2013 01:39:00 PM', '04-Mar-2013 01:43:00 PM', '04-Mar-2013 01:44:00 PM', '03.04.2013 13:46')
    SELECT * FROM dual
    ;
    What happened here is that for each instance of from_lic, the line yielded from
    select distinct from_lic, from_location, to_location, create_dt from trans where to_location like 'INN_MONO___' and create_dt > sysdate-(3/24)
    The value for CREATE_DT ---> in the new column called FLOOR.
    And the following for the rest of the columns:
    select distinct from_lic, from_location, to_location, create_dt from trans where from_location like 'INN_MONO___' and to_location like 'INN_MONO%BANE%' and create_dt > sysdate-(3/24)
    The value for CREATE_DT ---> INFEED.
    select distinct from_lic, from_location, to_location, create_dt from trans where from_location like 'INN_MONO%BANE%' and to_location like 'TROLLEY%' and create_dt > sysdate-(3/24)
    The value for CREATE_DT ---> TROLLEY.
    select distinct from_lic, from_location, to_location, create_dt from trans where from_location like 'TROLLEY%' and to_location like 'UT_OPPLAST____' and create_dt > sysdate-(3/24)
    The value for CREATE_DT ---> OUTFEED.
    select distinct from_lic, from_location, to_location, create_dt from trans where from_location like 'UT_OPPLAST____%' and (to_location like 'STG%' or to_location like 'UT_OPPLAST%GULV') and create_dt > sysdate-(3/24)
    The value for CREATE_DT ---> STAGING.
    Originally I was thinking of a pivot for this, but there are possibly better ways to accomplish it.
    I would like to have the data in the described format to make data mining easier.
    The query will run on a table with >3M lines.
    I hope this way of formulating the question is better, and more comprehensible.
    Please do not hesitate to ask questions, and I'll do my best to answer.
    I can test any suggested queries in the production database when necessary.
    Edited by: 997749 on Apr 3, 2013 5:18 AM
    Edited by: 997749 on Apr 3, 2013 6:20 AM

    Hi,
    what about something like this:
    with trans as
    (
       SELECT 4100002563 from_lic, 'INN_MONO_05'        to_location, TO_DATE('03.04.2013 11:54:21', 'DD.MM.YYYY HH24:MI:SS') create_dt FROM DUAL UNION ALL
       SELECT 4100002563 from_lic, 'INN_MONO_06_BANE_R' to_location, TO_DATE('03.04.2013 11:55:08', 'DD.MM.YYYY HH24:MI:SS') create_dt FROM DUAL UNION ALL
       SELECT 4100002563 from_lic, 'TROLLEY_19'         to_location, TO_DATE('03.04.2013 12:01:06', 'DD.MM.YYYY HH24:MI:SS') create_dt FROM DUAL UNION ALL
       SELECT 4100002563 from_lic, 'STG010801'          to_location, TO_DATE('03.04.2013 12:01:56', 'DD.MM.YYYY HH24:MI:SS') create_dt FROM DUAL UNION ALL
       SELECT 4100002563 from_lic, 'UT_OPPLAST_5_2'     to_location, TO_DATE('03.04.2013 12:01:56', 'DD.MM.YYYY HH24:MI:SS') create_dt FROM DUAL UNION ALL
       SELECT 4100005037 from_lic, 'INN_MONO_05'        to_location, TO_DATE('03.04.2013 11:18:31', 'DD.MM.YYYY HH24:MI:SS') create_dt FROM DUAL UNION ALL
       SELECT 4100005037 from_lic, 'INN_MONO_04_BANE_R' to_location, TO_DATE('03.04.2013 11:21:54', 'DD.MM.YYYY HH24:MI:SS') create_dt FROM DUAL UNION ALL
       SELECT 4100005037 from_lic, 'TROLLEY_16'         to_location, TO_DATE('03.04.2013 11:25:43', 'DD.MM.YYYY HH24:MI:SS') create_dt FROM DUAL UNION ALL
       SELECT 4100005037 from_lic, 'UT_OPPLAST_3_4'     to_location, TO_DATE('03.04.2013 11:26:37', 'DD.MM.YYYY HH24:MI:SS') create_dt FROM DUAL UNION ALL
       SELECT 4100005037 from_lic, 'STG010703'          to_location, TO_DATE('03.04.2013 11:27:31', 'DD.MM.YYYY HH24:MI:SS') create_dt FROM DUAL UNION ALL
       SELECT 4100006658 from_lic, 'INN_MONO_08'        to_location, TO_DATE('03.04.2013 11:00:31', 'DD.MM.YYYY HH24:MI:SS') create_dt FROM DUAL UNION ALL
       SELECT 4100006658 from_lic, 'INN_MONO_08_BANE_L' to_location, TO_DATE('03.04.2013 11:02:35', 'DD.MM.YYYY HH24:MI:SS') create_dt FROM DUAL UNION ALL
       SELECT 4100006658 from_lic, 'TROLLEY_22'         to_location, TO_DATE('03.04.2013 11:07:54', 'DD.MM.YYYY HH24:MI:SS') create_dt FROM DUAL UNION ALL
       SELECT 4100006658 from_lic, 'UT_OPPLAST_7_3'     to_location, TO_DATE('03.04.2013 11:09:03', 'DD.MM.YYYY HH24:MI:SS') create_dt FROM DUAL
    )
    , got_category AS
    (
       SELECT from_lic
            , CASE
                 WHEN to_location like 'INN_MONO%BANE%'
                 THEN 'INFEED'
                 WHEN to_location like 'INN_MONO%'
                 THEN 'FLOOR'
                 WHEN to_location like 'TROLLEY%'
                 THEN 'TROLLEY'
                 WHEN to_location like 'STG' OR to_location like 'UT%GULV' 
                 THEN 'STAGED'
                 WHEN to_location like 'UT_OPPLAST%'
                 THEN 'OUTFEED'
                 ELSE 'UNKNOWN'
              END as cat
            , create_dt
         FROM trans
        WHERE create_dt > (sysdate -10/24) -- adjusted for different timezone
    )
    SELECT * --FROM_LIC, lic_a, lic_b, lic_c, lic_d, lic_e
      FROM got_category
    PIVOT(MAX(create_dt) FOR cat IN('FLOOR'   as floor,
                                     'INFEED'  as infeed,
                                     'TROLLEY' as trolley,
                                     'OUTFEED' as ooutfeed,
                                     'STAGED'  as staged));
      FROM_LIC FLOOR                  INFEED                 TROLLEY                OOUTFEED               STAGED               
    4100002563 03-APR-2013 11:54:21   03-APR-2013 11:55:08   03-APR-2013 12:01:06   03-APR-2013 12:01:56                        
    4100005037 03-APR-2013 11:18:31   03-APR-2013 11:21:54   03-APR-2013 11:25:43   03-APR-2013 11:26:37                        
    4100006658 03-APR-2013 11:00:31   03-APR-2013 11:02:35   03-APR-2013 11:07:54   03-APR-2013 11:09:03
    If not useful, please post your input data (CREATE TABLE and INSERT statements) and your expected output.
    Regards.
    Al

  • Update performance on a 38 million records table

    Hi all,
    I'm trying to create a script to update a table that has around 38 million records. The table isn't partitioned and I just have to update one CHAR(1 byte) field and set it to 'N'.
    The database is 10g R2 running on Unix Tru64.
    The script I created has a LOOP over a CURSOR that bulk-collects 200,000 records per pass and does a FORALL to update the table by ROWID.
    The problem is that in the performance tests this method took about 20 minutes to update 1 million rows, so it should take about 13 hours to update the whole table.
    My question is: is there any way to improve the performance?
    The Script:
    DECLARE
    CURSOR C1
    IS
    SELECT ROWID
    FROM RTG.TCLIENTE_RTG;
    type rowidtab is table of rowid;
    d_rowid rowidtab;
    v_char char(1) := 'N';
    BEGIN
    OPEN C1;
    LOOP
    FETCH C1
    BULK COLLECT INTO d_rowid LIMIT 200000;
    EXIT WHEN d_rowid.COUNT = 0;  -- guard against an empty final fetch
    FORALL i IN d_rowid.FIRST..d_rowid.LAST
    UPDATE RTG.TCLIENTE_RTG
    SET CLI_VALID_IND = v_char
    WHERE ROWID = d_rowid(i);
    COMMIT;
    EXIT WHEN C1%NOTFOUND;
    END LOOP;
    CLOSE C1;
    END;
    Kind Regards,
    Fabio

    I'm just curious... Is this a new varchar2(1) column that has been added to that table? If so will the value for this column remain 'N' for the future for the majority of the rows in that table?
    Has this column specifically been introduced to support one of the business functions in your application / will it not be used everywhere where the table is currently in use?
    If your answers to the above questions contain many yeses, then why did you choose to add a column for this that needs to be initialized to 'N' for all existing rows?
    Why not add a new single-column table for this requirement: the single column being the pk-column(s) of the existing table. And the meaning being if a pk is present in this new table, then the "CLI_VALID_IND" for this client is 'yes'. And if a pk is not present, then the "CLI_VALID_IND" for this client is 'no'.
    That way you only have to add the new table. And do nothing more. Of course your SQL statements in support for the business logic of this new business function will have to use, and maybe join, this new table. But is that really a huge disadvantage?
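    A minimal sketch of that suggestion, with hypothetical names and assuming the existing table has a single-column primary key called client_id:

    -- Hypothetical sketch: record only the "valid" clients in a small companion
    -- table instead of initializing CLI_VALID_IND on all 38 million rows.
    CREATE TABLE tcliente_valid (
        client_id NUMBER NOT NULL,
        CONSTRAINT tcliente_valid_pk PRIMARY KEY (client_id),
        CONSTRAINT tcliente_valid_fk FOREIGN KEY (client_id)
            REFERENCES rtg.tcliente_rtg (client_id)
    );

    -- A client counts as valid if and only if its key is present here:
    SELECT c.client_id,
           CASE WHEN v.client_id IS NULL THEN 'N' ELSE 'Y' END AS cli_valid_ind
    FROM   rtg.tcliente_rtg c
    LEFT JOIN tcliente_valid v ON v.client_id = c.client_id;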

  • I need help with paginating my Oracle database records in JSP

    Please can someone help with a script that allows me to split my resultsets from an Oracle database in JSP...really urgent..any help would be really appreciated. I'm using the Oracle Apache http server and JSP environment that comes with the database...thanks

    The first thing you have to do is decide on a platform and database. Check to see what your server supports, whether it is ASP and MSSQL, PHP and MySQL or, less likely, Cold Fusion.
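    For the database side of splitting the result set, the classic Oracle pattern is the nested ROWNUM query; a sketch with a hypothetical table and with the page boundaries supplied as bind variables from the JSP layer:

    -- Hypothetical sketch: return rows :lo+1 .. :hi of an ordered result set.
    SELECT *
    FROM   ( SELECT inner_q.*, ROWNUM AS rn
             FROM   ( SELECT emp_id, emp_name
                      FROM   employees
                      ORDER  BY emp_name ) inner_q
             WHERE  ROWNUM <= :hi )
    WHERE  rn > :lo;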

  • Producing nested xml with querying attributes from a nested table

    How can one produce nested XML when querying columns from a nested table? Looking at the object documentation, I can readily unnest the tables. Using your examples in the book, unnesting is select po.pono, ..., l.* from purchaseorder_objtab po, table (po.lineitemlist_ntab) l where l.quantity = 2;
    What if I don't want to unnest and don't want a cursor? I would like to produce nested XML.

    Gail,
    Although you can use XSU (XML-SQL Util) in 8.1.7, I would recommend that you upgrade to 9i for much better support (both in functionality and performance) of XML in the database. For example, in Oracle9i there are:
    - a new datatype - XMLType for storing and retrieving XML documents
    - DBMS_XMLGEN package and SYS_XMLGEN, SYS_XMLAGG functions for generating XML document from complex SQL queries
    - A pipelined table function to break down large XML documents into rows.
    You can check out some examples using SYS_XMLGEN and DBMS_XMLGEN for your specific needs at http://download-west.oracle.com/otndoc/oracle9i/901_doc/appdev.901/a88894/adx05xml.htm#1017141
    Regards,
    Geoff
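    As a rough illustration of the DBMS_XMLGEN route (a sketch only; it reuses purchaseorder_objtab, pono and lineitemlist_ntab from the question, and the value 1001 is made up), the package renders the nested table column as nested elements without unnesting it in the query:

    -- Hypothetical sketch: the nested table column is emitted as nested XML.
    SELECT DBMS_XMLGEN.getXML(
             'SELECT pono, lineitemlist_ntab
                FROM purchaseorder_objtab
               WHERE pono = 1001'
           ) AS po_xml
      FROM dual;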

  • Updating table with more than 50 million records

    Hi Team,
    Updating three columns in a table which has 60 million records is taking a lot of time. How can the performance of the query be improved? The table has to be updated based on the values from different database tables.
    Thanks,
    Eshwar.

    Split the update into small batches.
    When you are using multiple tables, combine them with JOINs
    and have the corresponding indexes on the joining columns.
    Disable triggers if any (if possible and if it doesn't affect your logic - this would be the least favourable step).
    These should also help:
    http://blogs.msdn.com/b/repltalk/archive/2011/10/10/lessons-learned-updating-100-millions-rows.aspx
    http://www.sqlservergeeks.com/blogs/AhmadOsama/personal/450/sql-server-optimizing-update-queries-for-large-data-volumes
    http://stackoverflow.com/questions/7344984/update-large-number-of-rows-sql-server-2005
    Thanks,
    Jay
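    A minimal sketch of the batching idea (T-SQL, entirely hypothetical table and column names; the filter that identifies not-yet-updated rows is an assumption):

    -- Hypothetical sketch: update in 50,000-row batches so each transaction
    -- stays small. Assumes col_a starts out NULL and the source value never is.
    DECLARE @batch INT = 50000;

    WHILE 1 = 1
    BEGIN
        UPDATE TOP (@batch) t
        SET    t.col_a = s.col_a,
               t.col_b = s.col_b,
               t.col_c = s.col_c
        FROM   dbo.big_table t
        JOIN   dbo.source_table s
               ON s.key_id = t.key_id
        WHERE  t.col_a IS NULL;

        IF @@ROWCOUNT = 0 BREAK;   -- stop once every row has been updated
    END;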
