Table with 200 million records.

Dear all,
I have to create a table which will hold 200 million records, and I have to produce monthly reports from these data.
The performance makes me very concerned; does anyone have any suggestions?
Thanks in advance.

Hi,
I have a situation like yours.
Each month you create a new partition; for the next year, you add another set of partitions.
For example, you have a table:
SQL> CREATE TABLE sales99_cpart(
       sale_id   NUMBER NOT NULL,
       sale_date DATE,
       prod_id   NUMBER,
       qty       NUMBER)
     PARTITION BY RANGE(sale_date)
     SUBPARTITION BY HASH(prod_id) SUBPARTITIONS 4
     STORE IN (data01, data02, data03, data04)
     (PARTITION cp1 VALUES LESS THAN (TO_DATE('01-APR-1999','DD-MON-YYYY')),
      PARTITION cp2 VALUES LESS THAN (TO_DATE('01-JUL-1999','DD-MON-YYYY')),
      PARTITION cp3 VALUES LESS THAN (TO_DATE('01-OCT-1999','DD-MON-YYYY')),
      PARTITION cp4 VALUES LESS THAN (TO_DATE('01-JAN-2000','DD-MON-YYYY')))
     /
For the next year, add new partitions (and their subpartitions).
Each subpartition behaves like a table of its own, so you can run parallel query against them, which is very good for performance.
You could partition your table by range on the date and subpartition by hash on the call-center id.
Next year, when you no longer need the old history, you can simply drop the corresponding partition.
The cost: Oracle Partitioning is an option of Oracle Enterprise Edition; it is not included by default.
Nicolas.
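To give a concrete, hedged sketch of the yearly maintenance described above (the partition bound simply continues the example; names are otherwise assumptions): you add the next range partition, and when a quarter is no longer needed you drop its partition in one operation.
SQL> ALTER TABLE sales99_cpart
       ADD PARTITION cp5 VALUES LESS THAN (TO_DATE('01-APR-2000','DD-MON-YYYY'));

SQL> ALTER TABLE sales99_cpart DROP PARTITION cp1;   -- purge a quarter that is no longer needed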

Similar Messages

  • Deleting records from a table with 12 million records

    We need to delete some records on this table.
    SQL> desc CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak;
    Name Null? Type
    CLM_PMT_CHCK_NUM NOT NULL NUMBER(9)
    CLM_PMT_CHCK_ACCT NOT NULL VARCHAR2(5)
    CLM_PMT_PAYEE_POSTAL_EXT_CD VARCHAR2(4)
    CLM_PMT_CHCK_AMT NUMBER(9,2)
    CLM_PMT_CHCK_DT DATE
    CLM_PMT_PAYEE_NAME VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_1 VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_2 VARCHAR2(30)
    CLM_PMT_PAYEE_CITY VARCHAR2(19)
    CLM_PMT_PAYEE_STATE_CD CHAR(2)
    CLM_PMT_PAYEE_POSTAL_CD VARCHAR2(5)
    CLM_PMT_SUM_CHCK_IND CHAR(1)
    CLM_PMT_PAYEE_TYPE_CD CHAR(1)
    CLM_PMT_CHCK_STTS_CD CHAR(2)
    SYSTEM_INSERT_DT DATE
    SYSTEM_UPDATE_DT
    I only need to delete the records based on this condition
    select * from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;
    This table has 12 million records.
    Please advise
    Regards,
    Narayan

    user7202581 wrote:
    We need to delete some records on this table. [...] This table has 12 million records. Please advise.
    Regards,
    Narayan

    DELETE FROM CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    WHERE CLM_PMT_CHCK_ACCT = '00107'
      AND CLM_PMT_CHCK_NUM >= 002196611
      AND CLM_PMT_CHCK_NUM <= 002197018;
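    A practical, hedged note on making such a targeted delete fast on a 12-million-row table: unless an index already covers the filter columns, the DELETE will full-scan the table. Something along these lines can help (the index name is made up), and counting the rows first confirms the predicate hits only the intended range:

    CREATE INDEX mdl_clm_pmt_ent_bak_ix1
        ON CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak (CLM_PMT_CHCK_ACCT, CLM_PMT_CHCK_NUM);

    -- check how many rows will be affected before deleting
    SELECT COUNT(*)
      FROM CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
     WHERE CLM_PMT_CHCK_ACCT = '00107'
       AND CLM_PMT_CHCK_NUM BETWEEN 2196611 AND 2197018;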

  • Internal Table with 22 Million Records

    Hello,
    I am faced with the problem of working with an internal table which has 22 million records, and it keeps growing. The following code has been written in an APD. I have tried every possible way to optimize the code using sorted/hashed tables, but it ends in a dump as a result of insufficient memory.
    Any tips on how I can optimize my coding? I have attached the short dump.
    Thanks,
    SD
      DATA: ls_source TYPE y_source_fields,
            ls_target TYPE y_target_fields.
      DATA: it_source_tmp TYPE yt_source_fields,
            et_target_tmp TYPE yt_target_fields.
      TYPES: BEGIN OF IT_TAB1,
              BPARTNER TYPE /BI0/OIBPARTNER,
              DATEBIRTH TYPE /BI0/OIDATEBIRTH,
              ALTER TYPE /GKV/BW01_ALTER,
              ALTERSGRUPPE TYPE /GKV/BW01_ALTERGR,
              END OF IT_TAB1.
      DATA: IT_XX_TAB1 TYPE SORTED TABLE OF IT_TAB1
            WITH NON-UNIQUE KEY BPARTNER,
            WA_XX_TAB1 TYPE IT_TAB1.
      it_source_tmp[] = it_source[].
      SORT it_source_tmp BY /B99/S_BWPKKD ASCENDING.
      DELETE ADJACENT DUPLICATES FROM it_source_tmp
                            COMPARING /B99/S_BWPKKD.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
      LOOP AT it_source INTO ls_source.
        READ TABLE IT_XX_TAB1
          INTO WA_XX_TAB1
          WITH TABLE KEY BPARTNER = ls_source-/B99/S_BWPKKD.
        IF sy-subrc = 0.
          ls_target-DATEBIRTH = WA_XX_TAB1-DATEBIRTH.
        ENDIF.
        MOVE-CORRESPONDING ls_source TO ls_target.
        APPEND ls_target TO et_target.
        CLEAR ls_target.
      ENDLOOP.

    Hi SD,
    Please put the select query inside the condition shown below (marked in bold).
    IF it_source_tmp[] IS NOT INITIAL.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
    ENDIF.
    This will solve your performance issue. When the internal table it_source_tmp has no records, the SELECT with FOR ALL ENTRIES was fetching all the records from the database. With this condition in place, it will not select any records if the driver table is empty.
    Regards,
    Pravin

  • Delete 50 Million records from a table with 60 Million records

    Hi,
    I'm using Oracle 9.2.0.7 on Win2k3 32-bit.
    I need to delete 50 million rows from a table that contains 60 million records. This DB was just passed on to me. I tried a DELETE statement but it takes too long. From the articles and forums I have read, the best way to delete that many records is to create a temp table, transfer the data to keep into the temp table, drop the big table, then rename the temp table to the big table. But the key here is creating an exact replica of the big table. I have the create table, index and constraint scripts from the export file of my production DB, but I noticed in the forums that I don't have the create grant script; is there a view I could use to get this? Can dbms_metadata get this?
    When I need to create an exact replica of my big table, I only need:
    the create table, index, constraint, and grant scripts, right? Did I miss anything?
    I just want to make sure that I haven't left anything out. Kindly help.
    Thanks and Best Regards

    Can dbms_metadata get this?
    Yes, dbms_metadata can get the grants.
    YAS@10GR2 > select dbms_metadata.GET_DEPENDENT_DDL('OBJECT_GRANT','TEST') from dual;
    DBMS_METADATA.GET_DEPENDENT_DDL('OBJECT_GRANT','TEST')
      GRANT SELECT ON "YAS"."TEST" TO "SYS"
    When I need to create an exact replica of my big table, I only need:
    create table, indexes, constraints, and grants script right? Did I miss anything?
    There are also triggers, foreign keys referencing this table (which will not let you drop the table unless you take care of them), snapshot logs on the table, snapshots based on the table, etc.
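    A minimal sketch of the keep-and-swap approach being described (the table name and keep-predicate below are assumptions; the real DDL for indexes, constraints, grants, triggers, etc. should come from the export file or dbms_metadata):

    CREATE TABLE big_table_keep NOLOGGING AS
        SELECT * FROM big_table
         WHERE keep_flag = 'Y';      -- assumed predicate selecting the ~10 million rows to keep

    -- recreate indexes, constraints, grants and triggers on big_table_keep from the scripts, then:
    DROP TABLE big_table;
    ALTER TABLE big_table_keep RENAME TO big_table;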

  • Importing a Partitioned Table with 10 Million Records.

    I've been trying to import from a dump file using:
    imp system/######@******** fromuser=fusr touser=tusr file=/f1/f2/expfl.dbf log=/o1/implg.log grants=N &
    import done in US7ASCII character set and UTF8 NCHAR character set
    import server uses UTF8 character set (possible charset conversion)
    This dump contains a table, 'Tab_Mil_Rec', with almost 10 million records and 10 partitions.
    Done in 9i, on Solaris 9.
    The problem is that the process abruptly ends at 'Tab_Mil_Rec'. The table is created but nothing is imported. I checked the log file; it has logged events before this table, but nothing (not even errors or a termination message) after that. No errors are thrown even at the OS level, though I can't be sure because this was run as a background job.
    Can anybody guess what went wrong and what the next step should be?

    Hi,
    Can you try importing this table partition by partition?
    Cheers
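    For reference, Oracle's imp utility supports partition-level import with the tables=(table:partition) syntax. A hedged sketch, reusing the original command's connection details and file names (the partition name P1 is an assumption; repeat per partition and check each log):

    imp system/######@******** fromuser=fusr touser=tusr file=/f1/f2/expfl.dbf \
        log=/o1/implg_p1.log grants=N ignore=Y tables=(Tab_Mil_Rec:P1) &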

  • Like % in a query running on an Oracle Apps table with 8 million records

    I am running the query below. As per the explain plan it is using the index on organization_id and inventory_item_id.
    select segment1 from mtl_system_items where organization_id = 100 and inventory_item_id like '123456%'
    It takes about 15 minutes to run, which is a long time because this query returns values to a front end written in ASP; the web page times out before the query completes. Do you have any suggestions on how to make this query run faster?

    It is an Oracle Apps table; below is the structure -
    Name Null? Type
    INVENTORY_ITEM_ID NOT NULL NUMBER
    ORGANIZATION_ID NOT NULL NUMBER
    LAST_UPDATE_DATE NOT NULL DATE
    LAST_UPDATED_BY NOT NULL NUMBER
    CREATION_DATE NOT NULL DATE
    CREATED_BY NOT NULL NUMBER
    LAST_UPDATE_LOGIN NUMBER
    SUMMARY_FLAG NOT NULL VARCHAR2(1)
    ENABLED_FLAG NOT NULL VARCHAR2(1)
    START_DATE_ACTIVE DATE
    END_DATE_ACTIVE DATE
    DESCRIPTION VARCHAR2(240)
    BUYER_ID NUMBER(9)
    ACCOUNTING_RULE_ID NUMBER
    INVOICING_RULE_ID NUMBER
    SEGMENT1 VARCHAR2(40)
    SEGMENT2 VARCHAR2(40)
    SEGMENT3 VARCHAR2(40)
    SEGMENT4 VARCHAR2(40)
    SEGMENT5 VARCHAR2(40)
    SEGMENT6 VARCHAR2(40)
    SEGMENT7 VARCHAR2(40)
    SEGMENT8 VARCHAR2(40)
    SEGMENT9 VARCHAR2(40)
    SEGMENT10 VARCHAR2(40)
    SEGMENT11 VARCHAR2(40)
    SEGMENT12 VARCHAR2(40)
    SEGMENT13 VARCHAR2(40)
    SEGMENT14 VARCHAR2(40)
    SEGMENT15 VARCHAR2(40)
    SEGMENT16 VARCHAR2(40)
    SEGMENT17 VARCHAR2(40)
    SEGMENT18 VARCHAR2(40)
    SEGMENT19 VARCHAR2(40)
    SEGMENT20 VARCHAR2(40)
    ATTRIBUTE_CATEGORY VARCHAR2(30)
    ATTRIBUTE1 VARCHAR2(150)
    ATTRIBUTE2 VARCHAR2(150)
    ATTRIBUTE3 VARCHAR2(150)
    ATTRIBUTE4 VARCHAR2(150)
    ATTRIBUTE5 VARCHAR2(150)
    ATTRIBUTE6 VARCHAR2(150)
    ATTRIBUTE7 VARCHAR2(150)
    ATTRIBUTE8 VARCHAR2(150)
    ATTRIBUTE9 VARCHAR2(150)
    ATTRIBUTE10 VARCHAR2(150)
    ATTRIBUTE11 VARCHAR2(150)
    ATTRIBUTE12 VARCHAR2(150)
    ATTRIBUTE13 VARCHAR2(150)
    ATTRIBUTE14 VARCHAR2(150)
    ATTRIBUTE15 VARCHAR2(150)
    PURCHASING_ITEM_FLAG NOT NULL VARCHAR2(1)
    SHIPPABLE_ITEM_FLAG NOT NULL VARCHAR2(1)
    CUSTOMER_ORDER_FLAG NOT NULL VARCHAR2(1)
    INTERNAL_ORDER_FLAG NOT NULL VARCHAR2(1)
    SERVICE_ITEM_FLAG NOT NULL VARCHAR2(1)
    INVENTORY_ITEM_FLAG NOT NULL VARCHAR2(1)
    ENG_ITEM_FLAG NOT NULL VARCHAR2(1)
    INVENTORY_ASSET_FLAG NOT NULL VARCHAR2(1)
    PURCHASING_ENABLED_FLAG NOT NULL VARCHAR2(1)
    CUSTOMER_ORDER_ENABLED_FLAG NOT NULL VARCHAR2(1)
    INTERNAL_ORDER_ENABLED_FLAG NOT NULL VARCHAR2(1)
    SO_TRANSACTIONS_FLAG NOT NULL VARCHAR2(1)
    MTL_TRANSACTIONS_ENABLED_FLAG NOT NULL VARCHAR2(1)
    STOCK_ENABLED_FLAG NOT NULL VARCHAR2(1)
    BOM_ENABLED_FLAG NOT NULL VARCHAR2(1)
    BUILD_IN_WIP_FLAG NOT NULL VARCHAR2(1)
    REVISION_QTY_CONTROL_CODE NUMBER
    ITEM_CATALOG_GROUP_ID NUMBER
    CATALOG_STATUS_FLAG VARCHAR2(1)
    RETURNABLE_FLAG VARCHAR2(1)
    DEFAULT_SHIPPING_ORG NUMBER
    COLLATERAL_FLAG VARCHAR2(1)
    TAXABLE_FLAG VARCHAR2(1)
    QTY_RCV_EXCEPTION_CODE VARCHAR2(25)
    ALLOW_ITEM_DESC_UPDATE_FLAG VARCHAR2(1)
    INSPECTION_REQUIRED_FLAG VARCHAR2(1)
    RECEIPT_REQUIRED_FLAG VARCHAR2(1)
    MARKET_PRICE NUMBER
    HAZARD_CLASS_ID NUMBER
    RFQ_REQUIRED_FLAG VARCHAR2(1)
    QTY_RCV_TOLERANCE NUMBER
    LIST_PRICE_PER_UNIT NUMBER
    UN_NUMBER_ID NUMBER
    PRICE_TOLERANCE_PERCENT NUMBER
    ASSET_CATEGORY_ID NUMBER
    ROUNDING_FACTOR NUMBER
    UNIT_OF_ISSUE VARCHAR2(25)
    ENFORCE_SHIP_TO_LOCATION_CODE VARCHAR2(25)
    ALLOW_SUBSTITUTE_RECEIPTS_FLAG VARCHAR2(1)
    ALLOW_UNORDERED_RECEIPTS_FLAG VARCHAR2(1)
    ALLOW_EXPRESS_DELIVERY_FLAG VARCHAR2(1)
    DAYS_EARLY_RECEIPT_ALLOWED NUMBER
    DAYS_LATE_RECEIPT_ALLOWED NUMBER
    RECEIPT_DAYS_EXCEPTION_CODE VARCHAR2(25)
    RECEIVING_ROUTING_ID NUMBER
    INVOICE_CLOSE_TOLERANCE NUMBER
    RECEIVE_CLOSE_TOLERANCE NUMBER
    AUTO_LOT_ALPHA_PREFIX VARCHAR2(30)
    START_AUTO_LOT_NUMBER VARCHAR2(30)
    LOT_CONTROL_CODE NUMBER
    SHELF_LIFE_CODE NUMBER
    SHELF_LIFE_DAYS NUMBER
    SERIAL_NUMBER_CONTROL_CODE NUMBER
    START_AUTO_SERIAL_NUMBER VARCHAR2(30)
    AUTO_SERIAL_ALPHA_PREFIX VARCHAR2(30)
    SOURCE_TYPE NUMBER
    SOURCE_ORGANIZATION_ID NUMBER
    SOURCE_SUBINVENTORY VARCHAR2(10)
    EXPENSE_ACCOUNT NUMBER
    ENCUMBRANCE_ACCOUNT NUMBER
    RESTRICT_SUBINVENTORIES_CODE NUMBER
    UNIT_WEIGHT NUMBER
    WEIGHT_UOM_CODE VARCHAR2(3)
    VOLUME_UOM_CODE VARCHAR2(3)
    UNIT_VOLUME NUMBER
    RESTRICT_LOCATORS_CODE NUMBER
    LOCATION_CONTROL_CODE NUMBER
    SHRINKAGE_RATE NUMBER
    ACCEPTABLE_EARLY_DAYS NUMBER
    PLANNING_TIME_FENCE_CODE NUMBER
    DEMAND_TIME_FENCE_CODE NUMBER
    LEAD_TIME_LOT_SIZE NUMBER
    STD_LOT_SIZE NUMBER
    CUM_MANUFACTURING_LEAD_TIME NUMBER
    OVERRUN_PERCENTAGE NUMBER
    MRP_CALCULATE_ATP_FLAG VARCHAR2(1)
    ACCEPTABLE_RATE_INCREASE NUMBER
    ACCEPTABLE_RATE_DECREASE NUMBER
    CUMULATIVE_TOTAL_LEAD_TIME NUMBER
    PLANNING_TIME_FENCE_DAYS NUMBER
    DEMAND_TIME_FENCE_DAYS NUMBER
    END_ASSEMBLY_PEGGING_FLAG VARCHAR2(1)
    REPETITIVE_PLANNING_FLAG VARCHAR2(1)
    PLANNING_EXCEPTION_SET VARCHAR2(10)
    BOM_ITEM_TYPE NOT NULL NUMBER
    PICK_COMPONENTS_FLAG NOT NULL VARCHAR2(1)
    REPLENISH_TO_ORDER_FLAG NOT NULL VARCHAR2(1)
    BASE_ITEM_ID NUMBER
    ATP_COMPONENTS_FLAG NOT NULL VARCHAR2(1)
    ATP_FLAG NOT NULL VARCHAR2(1)
    FIXED_LEAD_TIME NUMBER
    VARIABLE_LEAD_TIME NUMBER
    WIP_SUPPLY_LOCATOR_ID NUMBER
    WIP_SUPPLY_TYPE NUMBER
    WIP_SUPPLY_SUBINVENTORY VARCHAR2(10)
    PRIMARY_UOM_CODE VARCHAR2(3)
    PRIMARY_UNIT_OF_MEASURE VARCHAR2(25)
    ALLOWED_UNITS_LOOKUP_CODE NUMBER
    COST_OF_SALES_ACCOUNT NUMBER
    SALES_ACCOUNT NUMBER
    DEFAULT_INCLUDE_IN_ROLLUP_FLAG VARCHAR2(1)
    INVENTORY_ITEM_STATUS_CODE VARCHAR2(10)
    INVENTORY_PLANNING_CODE NUMBER
    PLANNER_CODE VARCHAR2(10)
    PLANNING_MAKE_BUY_CODE NUMBER
    FIXED_LOT_MULTIPLIER NUMBER
    ROUNDING_CONTROL_TYPE NUMBER
    CARRYING_COST NUMBER
    POSTPROCESSING_LEAD_TIME NUMBER
    PREPROCESSING_LEAD_TIME NUMBER
    FULL_LEAD_TIME NUMBER
    ORDER_COST NUMBER
    MRP_SAFETY_STOCK_PERCENT NUMBER
    MRP_SAFETY_STOCK_CODE NUMBER
    MIN_MINMAX_QUANTITY NUMBER
    MAX_MINMAX_QUANTITY NUMBER
    MINIMUM_ORDER_QUANTITY NUMBER
    FIXED_ORDER_QUANTITY NUMBER
    FIXED_DAYS_SUPPLY NUMBER
    MAXIMUM_ORDER_QUANTITY NUMBER
    ATP_RULE_ID NUMBER
    PICKING_RULE_ID NUMBER
    RESERVABLE_TYPE NUMBER
    POSITIVE_MEASUREMENT_ERROR NUMBER
    NEGATIVE_MEASUREMENT_ERROR NUMBER
    ENGINEERING_ECN_CODE VARCHAR2(50)
    ENGINEERING_ITEM_ID NUMBER
    ENGINEERING_DATE DATE
    SERVICE_STARTING_DELAY NUMBER
    VENDOR_WARRANTY_FLAG NOT NULL VARCHAR2(1)
    SERVICEABLE_COMPONENT_FLAG VARCHAR2(1)
    SERVICEABLE_PRODUCT_FLAG NOT NULL VARCHAR2(1)
    BASE_WARRANTY_SERVICE_ID NUMBER
    PAYMENT_TERMS_ID NUMBER
    PREVENTIVE_MAINTENANCE_FLAG VARCHAR2(1)
    PRIMARY_SPECIALIST_ID NUMBER
    SECONDARY_SPECIALIST_ID NUMBER
    SERVICEABLE_ITEM_CLASS_ID NUMBER
    TIME_BILLABLE_FLAG VARCHAR2(1)
    MATERIAL_BILLABLE_FLAG VARCHAR2(30)
    EXPENSE_BILLABLE_FLAG VARCHAR2(1)
    PRORATE_SERVICE_FLAG VARCHAR2(1)
    COVERAGE_SCHEDULE_ID NUMBER
    SERVICE_DURATION_PERIOD_CODE VARCHAR2(10)
    SERVICE_DURATION NUMBER
    WARRANTY_VENDOR_ID NUMBER
    MAX_WARRANTY_AMOUNT NUMBER
    RESPONSE_TIME_PERIOD_CODE VARCHAR2(30)
    RESPONSE_TIME_VALUE NUMBER
    NEW_REVISION_CODE VARCHAR2(30)
    INVOICEABLE_ITEM_FLAG NOT NULL VARCHAR2(1)
    TAX_CODE VARCHAR2(50)
    INVOICE_ENABLED_FLAG NOT NULL VARCHAR2(1)
    MUST_USE_APPROVED_VENDOR_FLAG NOT NULL VARCHAR2(1)
    REQUEST_ID NUMBER
    PROGRAM_APPLICATION_ID NUMBER
    PROGRAM_ID NUMBER
    PROGRAM_UPDATE_DATE DATE
    OUTSIDE_OPERATION_FLAG NOT NULL VARCHAR2(1)
    OUTSIDE_OPERATION_UOM_TYPE VARCHAR2(25)
    SAFETY_STOCK_BUCKET_DAYS NUMBER
    AUTO_REDUCE_MPS NUMBER(22)
    COSTING_ENABLED_FLAG NOT NULL VARCHAR2(1)
    AUTO_CREATED_CONFIG_FLAG NOT NULL VARCHAR2(1)
    CYCLE_COUNT_ENABLED_FLAG NOT NULL VARCHAR2(1)
    ITEM_TYPE VARCHAR2(30)
    MODEL_CONFIG_CLAUSE_NAME VARCHAR2(10)
    SHIP_MODEL_COMPLETE_FLAG VARCHAR2(1)
    MRP_PLANNING_CODE NUMBER
    RETURN_INSPECTION_REQUIREMENT NUMBER
    ATO_FORECAST_CONTROL NUMBER
    RELEASE_TIME_FENCE_CODE NUMBER
    RELEASE_TIME_FENCE_DAYS NUMBER
    CONTAINER_ITEM_FLAG VARCHAR2(1)
    VEHICLE_ITEM_FLAG VARCHAR2(1)
    MAXIMUM_LOAD_WEIGHT NUMBER
    MINIMUM_FILL_PERCENT NUMBER
    CONTAINER_TYPE_CODE VARCHAR2(30)
    INTERNAL_VOLUME NUMBER
    WH_UPDATE_DATE DATE
    PRODUCT_FAMILY_ITEM_ID NUMBER
    GLOBAL_ATTRIBUTE_CATEGORY VARCHAR2(150)
    GLOBAL_ATTRIBUTE1 VARCHAR2(150)
    GLOBAL_ATTRIBUTE2 VARCHAR2(150)
    GLOBAL_ATTRIBUTE3 VARCHAR2(150)
    GLOBAL_ATTRIBUTE4 VARCHAR2(150)
    GLOBAL_ATTRIBUTE5 VARCHAR2(150)
    GLOBAL_ATTRIBUTE6 VARCHAR2(150)
    GLOBAL_ATTRIBUTE7 VARCHAR2(150)
    GLOBAL_ATTRIBUTE8 VARCHAR2(150)
    GLOBAL_ATTRIBUTE9 VARCHAR2(150)
    GLOBAL_ATTRIBUTE10 VARCHAR2(150)
    PURCHASING_TAX_CODE VARCHAR2(50)
    The query is as below
    select segment1 from mtl_system_items where organization_id = 100 and inventory_item_id like '123456%'
    The explain plan is as below -
    Plan
    SELECT STATEMENT RULE          
         2 TABLE ACCESS BY INDEX ROWID INV.MTL_SYSTEM_ITEMS      
              1 INDEX RANGE SCAN NON-UNIQUE INV.MTL_SYSTEM_ITEMS_N1
    The INV.MTL_SYSTEM_ITEMS_N1 index is created on
    ORGANIZATION_ID and SEGMENT1
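    One point worth noting, as a hedged suggestion rather than a definitive fix: INVENTORY_ITEM_ID is a NUMBER, so the predicate inventory_item_id like '123456%' forces an implicit TO_CHAR() on the column, which generally stops a normal index on that column from being used for the LIKE. Also, the plan above was produced under the RULE optimizer, which does not use function-based indexes, so this assumes the cost-based optimizer with fresh statistics, that mtl_system_items resolves to a base table in this release, and that a custom index is acceptable on an Apps table (the index name below is made up):

    CREATE INDEX xx_mtl_sys_items_n99
        ON mtl_system_items (organization_id, TO_CHAR(inventory_item_id));

    SELECT segment1
      FROM mtl_system_items
     WHERE organization_id = 100
       AND TO_CHAR(inventory_item_id) LIKE '123456%';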

  • Help with querying a 200 million record table

    Hi ,
    I need to query a 200 million record table which is partitioned by monthly activity.
    But my problem is that I need to see how many activities occurred on one account in a given time frame.
    If there are 200 partitions, I need to go into all the partitions, get the activities of the account in each partition, and at the end return the total number of activities.
    Fortunately, only one activity is expected for an account in each partition, and it may be present or absent.
    If this table had 100 records, I would use this:
    select account_no, count(*)
    from Acct_actvy
    group by account_no;

    I must stress that it is critical that you do not write code (SQL or PL/SQL) that uses hardcoded partition names to find data.
    That approach is very risky, prone to runtime errors, difficult to maintain, and does not scale. It is not worth it.
    From the developer's side, there should be total ignorance of the fact that a table is partitioned. A developer should treat a partitioned table no differently than any other table.
    To give you an idea, this is a copy-and-paste from a SQL*Plus session doing what you want to do, against a partitioned table at least 3x bigger than yours. It covers about a 12-month period. There is a partition per day, plus empty daily partitions for the next 2 years. The SQL aggregation is monthly. I selected a random network address to illustrate.
    SQL> select count(*) from x25_calls;
      COUNT(*)
    619491919
    Elapsed: 00:00:19.68
    SQL>
    SQL>  select TRUNC(callendtime,'MM') AS MONTH, sourcenetworkaddress, count(*) from x25_calls where sourcenetworkaddress = '3103165962'
      2  group by TRUNC(callendtime,'MM'), sourcenetworkaddress;
    MONTH               SOURCENETWORKADDRESS   COUNT(*)
    2005/09/01 00:00:00 3103165962                 3599
    2005/10/01 00:00:00 3103165962                 1184
    2005/12/01 00:00:00 3103165962                    4
    2005/06/01 00:00:00 3103165962                    1
    2005/04/01 00:00:00 3103165962                  560
    2005/08/01 00:00:00 3103165962                  101
    2005/03/01 00:00:00 3103165962                 3330
    7 rows selected.
    Elapsed: 00:00:19.72
    As you can see, there is not a single reference to any partitioning. Excellent performance, despite running on an old K-class HP server.
    The reason for the performance is simple: a correctly designed and implemented partitioning scheme that caters for most of the queries against the table, plus correctly designed and implemented indexes - especially local bitmap indexes. Without any hacks like partition names and the like.
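    Applied to the original question, the aggregation stays plain SQL with no partition names. If the query also filters on the partitioning key (assumed below to be a date column such as activity_date), the optimizer can prune to just the relevant monthly partitions. A sketch using the table and column names from the question, plus that assumed date column:

    select account_no, count(*)
      from Acct_actvy
     where account_no = :account_no
       and activity_date >= :period_start   -- filter on the (assumed) partition key column
       and activity_date <  :period_end     -- so the optimizer prunes to the relevant partitions
     group by account_no;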

  • Updating table with 13 million records

    Hi
    I need to update a table A with 14 million records from table B, and I need to complete the update within 4 hours. The problem is that updating just 4 million records is already taking 6 hours.
    Detailed design
    Table B is the staging table, which has 14 million records in it.
    Table A is the table which needs to be updated with the data from table B.
    I am reading table B through a cursor and updating table A row by row; the columns on which I join table B and table A are indexed.
    Is there any other, better design with which I can complete the task in under 4 hours?
    Thanks for your answers in advance.

    update
      ( select
            a.column2       a_column2,
            a.column3       a_column3,
            a.column4       a_column4,
            b.column2       b_column2,
            b.column3       b_column3,
            b.column4       b_column4
        from
            table_a a, table_b b
        where
            a.column1 = b.column1 )
    set
        a_column2 = b_column2,
        a_column3 = b_column3,
        a_column4 = b_column4;

    Or

    update table_a a set (column2, column3, column4) =
      ( select b.column2, b.column3, b.column4
          from table_b b
         where b.column1 = a.column1 )
     where exists
      ( select null from table_b b
         where b.column1 = a.column1 );
    Doh! Too Late. :) Me too!
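    A third option, not shown in the reply above, is a MERGE statement, which expresses the same single, set-based update and often reads more clearly. This sketch assumes column1 is unique in table_b (each target row matches at most one staging row) and an Oracle release of 10g or later, where the WHEN NOT MATCHED branch can be omitted:

    merge into table_a a
    using table_b b
       on (a.column1 = b.column1)
     when matched then update
      set a.column2 = b.column2,
          a.column3 = b.column3,
          a.column4 = b.column4;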

  • Planning function in IP or with BW modelling - case with 15 million records

    Hi,
    we need to implement a simple planning function (qty * price) which has to be executed for 15 million records at a time (the quantity on 15 million records multiplied by an average price calculated at a higher level). I would still like to implement this with a simple FOX formula but I fear the performance, given the number of records. Does anyone have experience with this number of records? Would you suggest doing this within IP or using BW modelling? The maximum lead time accepted for this planning function is 24 hours.
    The planning function is expected to be executed in batch or background mode, but should be triggered from an IP input query and not via RSPC, for example.
    please advise.
    D

    Hi Dries,
    using BI IP you should definitely partition the work via planning sequences in a process chain, cf.
    http://help.sap.com/saphelp_nw70ehp1/helpdata/en/45/946677f8fb0cf2e10000000a114a6b/frameset.htm
    Planning functions load the requested data into main memory; with 15 million records you will have a problem. In addition, it is not a good idea to give the whole workload to a single work process (a planning function uses only one work process). So partition the problem so that you can use parallelization.
    Process chains can be triggered via an API, cf. function group RSPC_API. So you can easily start a process chain from a planning function.
    Regards,
    Gregor

  • Issue in adding a NOT NULL constraint on a 250 GB table with 50 million rows

    Guys,
    I need to add NOT NULL constraints on 2 columns of a table with 50 million rows and ~250 GB in size. These 2 columns are newly added, and I have already updated each of these columns to a non-null value for every row.
    After that I am adding the NOT NULL constraints on these 2 columns; this is taking 1 hour to complete. Is there any way to speed this up? I don't want to use the ENABLE NOVALIDATE option - or rather, I can't use that option.

    user445775 wrote:
    I need to add not null constraint on 2 column of a table with 50 million rows and ~250 GB in size. [...] Is there any way to speed up this, I don't want to use ENABLE NOVALIDATE option or rather I can't use that option.

    And what's wrong with it taking an hour? Presumably, this is a one-time operation, and it doesn't really interfere with anything else.
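    One small, hedged optimization if most of that hour is spent in the validation scan: add both NOT NULL constraints in a single ALTER TABLE, which at least avoids issuing two separate DDL statements and may let the validation pass over the 250 GB table only once (column and constraint names below are assumptions):

    ALTER TABLE big_table MODIFY
      ( new_col1 CONSTRAINT big_table_new_col1_nn NOT NULL,
        new_col2 CONSTRAINT big_table_new_col2_nn NOT NULL );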

  • How to get the number of records in a table after filtering in Web Dynpro

    Dear Gurus,
    How do I get the number of records in a table after a filter has been applied in Web Dynpro?
    Thanks in advance.
    Sankar

    Hello Sankar,
    Please explain your requirement more clearly so that we can help you.
    To get the table records from your context node, use the method get_static_attributes_table():
    data lo_nd_mynode       type ref to if_wd_context_node. 
    data lt_atrributes_table  type wd_this->elements_mynode. 
    lo_nd_mynode = wd_context->get_child_node( name = wd_this->wdctx_mynode ). 
    lo_nd_mynode->get_static_attributes_table( importing table = lt_atrributes_table ). 
    Note: You should have already defined your context node as a Dictionary Structure.
    BR,
    RAM

  • Problem with Fetching Millions of Records from Table COEP into an Internal Table

    Hi everyone! Hope things are going well.
    Table COEP has 6 million records.
    I am trying to get records based on certain criteria; there are at least 5 conditions in the WHERE clause.
    I've noticed it takes about 15 minutes to populate the internal table. How can I improve the performance to under a minute for a fetch of 500 records from a database set of 6 million?
    Regards,
    Owais...

    The first obvious suggestion would be to use the proper indexes. I had a similar issue with COVP, which is a join of COEP and COBK. I got a substantial performance improvement by adding "where LEDNR EQ '00'" to the where clause.
    Here is my select:
              SELECT kokrs
                     belnr
                     buzei
                     ebeln
                     ebelp
                     wkgbtr
                     refbn
                     bukrs
                     gjahr
                FROM covp CLIENT SPECIFIED
                INTO TABLE i_coep
                 FOR ALL ENTRIES IN i_objnr
               WHERE mandt EQ sy-mandt
                 AND lednr EQ '00'
                 AND objnr = i_objnr-objnr
                 AND kokrs = c_conarea.

  • Database table with potentially millions of records

    Hello,
    We want to keep track of users' transaction history from the performance database.  The workload statistics contain the user transaction history information; however, since the workload performance statistics are intended for temporary purposes and data from these tables is deleted every few months, we lose all of the users' historical records.
    We want to keep track of the following in a table that we can query later:
    User ID      - Length 12
    Transaction  - Length 20
    Date         - Length 8
    With over 20,000 end users in production, this can translate into thousands of records being inserted into this table daily.
    What is the best way to store this type of information?  Is there a specific table type that is designed for storing massive data quantities?  Also, over time (a few years) this table can grow into millions or hundreds of millions of records.  How can we manage that in terms of performance and storage space?
    If anyone has worked with database tables with very large numbers of records and would like to share your experiences, please let us know how we could/should structure this function in our environment.
    Best Regards.

    Hi SS
    Alternatively, you can use a cluster table. For more help, refer to the F1 help on the "IMPORT TO / EXPORT FROM DATABASE" statements.
    Or you can store the data as a file on the application server using the "OPEN DATASET, TRANSFER, CLOSE DATASET" statements.
    You can also choose to archive data older than some cut-off date.
    You can also mix these alternatives for recent and archived data.
    Serdar

  • How to update a table that has millions of records

    Hi,
    Let's consider the basic EMP table and let's assume that it has around 20 million records. We need an update statement. A normal UPDATE statement may hang the system or take a lot of time.
    The basic or normal update statement goes like this, and I expect it may not work:
    update emp set hiredate = sysdate where comm is null and hiredate is null;
    The basic statement may not work, so suggestions are needed.
    Regards,
    Vinesh

    sri wrote:
    I heard Bulk Collect will resolve these types of issues and I am really poor at Bulk Collect concepts.
    Exactly what type of issue are you concerned with? The business requirements here are pretty important - what problem is the UPDATE causing, specifically, that you are trying to work around?
    so looking for a solution to the problem using Bulk Collect.
    Without knowing the problem, it's very tough to suggest a solution. If you process data in batches using BULK COLLECT, your UPDATE will take longer to run and will consume more resources on the database. If the problem you are trying to solve is that your UPDATE is not fast enough, this is a poor approach.
    On the other hand, if you process data in batches, and do interim commits, you can probably hold locks on individual rows for a shorter amount of time. That would only be a concern, though, if you have some other process that is trying to update the same rows that you are updating at the same time that you're updating them, which is pretty rare. And breaking your update into multiple transactions introduces a whole bunch of complexity. You now have to write a bunch of code to ensure that your process is restartable should the update fail mid-way through leaving some number of updates committed and some number rolled back. You have to have a very detailed understanding of the data and data consistency to ensure that breaking up the transaction isn't going to negatively impact any process, report, etc. To do it correctly is a pile of work and then it's something that is constantly at risk of creating problems in the future when requirements change.
    In the vast majority of cases, you're better off issuing a simple SQL statement during a time when the system isn't particularly busy.
    Justin
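    If the concern is simply that the single UPDATE is too slow for the quiet window, one hedged alternative to BULK COLLECT batching (assuming an Oracle edition and release where parallel DML is supported for this table, and that the extra resource usage is acceptable) is to let the one statement run in parallel:

    alter session enable parallel dml;

    update /*+ parallel(emp 4) */ emp
       set hiredate = sysdate
     where comm is null
       and hiredate is null;

    commit;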

  • Counting rows for db tables with 500 million+ entries

    Hi
    The SE16 transaction times out in the foreground when showing the number of entries for DB tables with 500 million+ entries. In the background this takes too long.
    I am writing a custom report to get the number of records of a table, using the OPEN CURSOR concept to determine the number of records.
    Is there any other, more efficient way to read the number of records from such huge tables?
            OPEN CURSOR l_cursor FOR
            SELECT COUNT(*)
               FROM (u_str_param-p_tabn)
              WHERE (l_tab_cond).        " u_str_param-p_tabn is the input table name,
              DO.                        " l_tab_cond is a dynamic where condition
                FETCH NEXT CURSOR l_cursor INTO l_new_count PACKAGE SIZE p_pack.
                IF sy-subrc NE 0.
                  EXIT.
                ELSE.
                  l_tot_cnt = l_tot_cnt + l_new_count. " l_tot_cnt will contain number of records at end of loops
                  CLEAR l_new_count.
                ENDIF.
              ENDDO.
              CLOSE CURSOR l_cursor.

    Hello,
    For sure it is a huge number of entries!!!
    Are there any key fields?
    Use a variable as a counter over the key field, for example row_table, and also a low and a high variable.
    Try to use a DO loop, and in this loop use:
    do.
      low_row  = row_table.
      low_high = row_table + 200000.   " increase the selection by 200,000 entries per pass
      Do the "select statement into table" based on the key field.
      For example, if the key field is docnr, do a
      "select into table itab where docnr > low_row and docnr < low_high".
      Count the lines of itab and keep the total in a variable.
      Free the itab.
    enddo.
    Good luck.
    Antonis
