Optimize query on table with 10 million transactions

Is there a better way than the one shown below to retrieve transactions from a table that contains more than 9 million rows?

SELECT * FROM TAB_COMM WHERE CARR_CD='ABC' AND POL_NUM LIKE 'HPA%';

How can I optimize this query to show results for policies starting with 'HPA'? It returns more than 1 million transactions. There are already many indexes on this table, and I am not sure whether adding another index will help with the LIKE operator.

Thanks for all the responses. So much to learn from you all!
I understand that we may need to add another index to this table. Before adding one, please look at the existing ones:
CREATE TABLE TAB_COMM (
  CARR_COMM_KY_ID     NUMBER                    NOT NULL,
  PRU_WK_CD           NUMBER                    NOT NULL,
  CARR_CD             VARCHAR2(5 BYTE)          NOT NULL,
  SSN_TIN_NUM         VARCHAR2(9 BYTE),
  POL_NUM             VARCHAR2(20 BYTE),
  GRP_POL_NUM         VARCHAR2(20 BYTE),
  REC_TYP_CD          VARCHAR2(1 BYTE),
  EXTRCT_DT           DATE,
  OUT_BRKRG_ORG_CD    VARCHAR2(3 BYTE),
  SSN_TIN_CD          VARCHAR2(1 BYTE),
  CARR_AGT_CNTR_NUM   VARCHAR2(20 BYTE),
  SP_PCT_RT           NUMBER(5,4),
  INS_FRST_NAME       VARCHAR2(20 BYTE),
  INS_MID_NAME        VARCHAR2(25 BYTE),
  INS_LST_NAME        VARCHAR2(50 BYTE),
  INS_SFX_NAME        VARCHAR2(20 BYTE),
  ISS_DT              DATE,
  POL_DT              DATE,
  YR_INFRC_NUM        DATE,
  PROD_CD             VARCHAR2(20 BYTE)         NOT NULL,
  APPL_DT             DATE,
  IPP_DT              DATE,
  PREM_DUE_DT         DATE,
  ISS_ST_CD           VARCHAR2(2 BYTE),
  RES_ST_CD           VARCHAR2(2 BYTE),
  FRST_RNWL_CD        VARCHAR2(4 BYTE)          NOT NULL,
  TRANS_TYP_CD        VARCHAR2(4 BYTE)          NOT NULL,
  GRS_PREM_AMT        NUMBER(11,2),
  COMM_PREM_AMT       NUMBER(11,2),
  COMM_RT             NUMBER(5,4),
  PREM_MODE_CD        VARCHAR2(2 BYTE),
  PRU_REVN_AMT        NUMBER(11,2),
  AGNT_EARN_COMM_AMT  NUMBER(11,2),
  CK_NUM              VARCHAR2(8 BYTE),
  CK_DT               DATE,
  ISS_AGE_NUM         VARCHAR2(2 BYTE),
  CNTR_NUM            VARCHAR2(6 BYTE),
  SM_CNTR_NUM         VARCHAR2(6 BYTE),
  SM_OVRD_AMT         NUMBER(11,2),
  PDI_REVN_AMT        NUMBER(11,2),
  OVRD_STAT_CD        VARCHAR2(2 BYTE),
  MOD_BY_ID           VARCHAR2(50 BYTE),
  MOD_DT              DATE,
  CRT_BY_ID           VARCHAR2(50 BYTE),
  CRT_DT              DATE,
  BAT_ID              VARCHAR2(20 BYTE),
  ST_SRC_CD           VARCHAR2(1 BYTE),
  PRODTN_STAT_CD      VARCHAR2(2 BYTE),
  CHANL_CD            VARCHAR2(4 BYTE)          DEFAULT '00',
  COMP_PLN_CD         VARCHAR2(2 BYTE),
  GA_CD               VARCHAR2(12 BYTE)         NOT NULL,
  CARR_PROD_CD        VARCHAR2(25 BYTE)         NOT NULL,
  AGNT_SHR_RT         NUMBER(5,4),
  SRC_FILE_NAME       VARCHAR2(100 BYTE),
  GA_OVRD_AMT         NUMBER(11,2),
  PDI_OVRD_RT         NUMBER(5,4),
  DAMIS_WK_CD         NUMBER,
  RECNCL_STAT_CD      VARCHAR2(2 BYTE),
  REC_ID              NUMBER(10),
  TERM_DUR            NUMBER(4),
  SALES_CD            CHAR(1 BYTE),
  INS_ADDR            VARCHAR2(50 BYTE),
  INS_CITY_NAME       VARCHAR2(30 BYTE),
  INS_ZIP_CD          VARCHAR2(5 BYTE),
  TRKG_CD             VARCHAR2(12 BYTE),
  SUPL_KIND_CD        CHAR(1 BYTE)
);

This huge table has the following indexes:
Index 1 columns:
POL_NUM, CNTR_NUM, POL_DT, ISS_DT, TRANS_TYP_CD, COMM_PREM_AMT, PRU_REVN_AMT

Index 2 columns:
SRC_FILE_NAME, REC_ID

Indexes 3, 4, 5, 6, 7, 8, 9, 10 - single-column indexes on:
CARR_CD
SP_PCT_RT
CNTR_NUM
POL_NUM
INS_FRST_NAME
INS_LST_NAME
INS_MID_NAME
CARR_COMM_KY_ID <- primary key

Index 6 columns:
POL_NUM, CNTR_NUM, "INS_FRST_NAME"||"INS_MID_NAME"||"INS_LST_NAME"||"INS_SFX_NAME", TRUNC("EXTRCT_DT"), SRC_FILE_NAME, PROD_CD, TRANS_TYP_CD, FRST_RNWL_CD, PRODTN_STAT_CD, GA_CD

Index 11 columns:
POL_NUM, CNTR_NUM, "INS_FRST_NAME"||"INS_MID_NAME"||"INS_LST_NAME"||"INS_SFX_NAME", TRUNC("EXTRCT_DT"), SRC_FILE_NAME, PROD_CD, TRANS_TYP_CD, FRST_RNWL_CD, PRODTN_STAT_CD, GA_CD

I should add an index only if it makes the query faster; I don't want to add yet another index to this huge list. Please advise.
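
If none of the existing indexes leads with CARR_CD followed by POL_NUM, one option worth testing is a composite index covering both predicates, with the equality column first. A LIKE pattern with no leading wildcard, such as 'HPA%', is a simple range condition, so an ordinary B-tree index can serve it. The sketch below is only an illustration (the index name is made up):

-- Equality column first, then the column probed with LIKE 'HPA%'.
CREATE INDEX tab_comm_carr_pol_ix ON tab_comm (carr_cd, pol_num);

Bear in mind that if the query really returns over 1 million of the 9 million rows for some carriers, a full table scan may still be the cheaper plan; compare the execution plans before and after.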

Similar Messages

  • Issue in adding not null constraint on 250 GB table with 50 million rows.

    Guys,
    I need to add a not null constraint on 2 columns of a table with 50 million rows and ~250 GB in size. These 2 columns are newly added, and I have already updated them to a non-null value for every row.
    After that, adding the not null constraint on these 2 columns takes 1 hour to complete. Is there any way to speed this up? I don't want to use the ENABLE NOVALIDATE option, or rather, I can't use that option.

    user445775 wrote:
    Guys,
    I need to add a not null constraint on 2 columns of a table with 50 million rows and ~250 GB in size. These 2 columns are newly added, and I have already updated them to a non-null value for every row.
    After that, adding the not null constraint on these 2 columns takes 1 hour to complete. Is there any way to speed this up? I don't want to use the ENABLE NOVALIDATE option, or rather, I can't use that option.

    And what's wrong with it taking an hour? Presumably this is a one-time operation, and it doesn't really interfere with anything else.

  • ORA-00604 ORA-00904 When query partitioned table with partitioned indexes

    Got ORA-00604 and ORA-00904 when querying a partitioned table with partitioned indexes in the data warehouse environment.
    The query runs fine against the partitioned table without the partitioned indexes.
    Here is the query.
    SELECT al2.vdc_name, al7.model_series_name, COUNT (DISTINCT (al1.vin)),
    al27.accessory_code
    FROM vlc.veh_vdc_accessorization_fact al1,
    vlc.vdc_dim al2,
    vlc.model_attribute_dim al7,
    vlc.ppo_list_dim al18,
    vlc.ppo_list_indiv_type_dim al23,
    vlc.accy_type_dim al27
    WHERE ( al2.vdc_id = al1.vdc_location_id
    AND al7.model_attribute_id = al1.model_attribute_id
    AND al18.mydppolist_id = al1.ppo_list_id
    AND al23.mydppolist_id = al18.mydppolist_id
    AND al23.mydaccytyp_id = al27.mydaccytyp_id
    AND ( al7.model_series_name IN ('SCION TC', 'SCION XA', 'SCION XB')
    AND al2.vdc_name IN
    ('PORT OF BALTIMORE',
    'PORT OF JACKSONVILLE - LEXUS',
    'PORT OF LONG BEACH',
    'PORT OF NEWARK',
    'PORT OF PORTLAND')
    AND al27.accessory_code IN ('42', '43', '44', '45')))
    GROUP BY al2.vdc_name, al7.model_series_name, al27.accessory_code

    I would recommend that you post this at the following OTN forum:
    Database - General
    General Database Discussions
    and perhaps at:
    Oracle Warehouse Builder
    Warehouse Builder
    The Oracle OLAP forum typically does not cover general data warehousing topics.

  • Database table with potentially millions of records

    Hello,
    We want to keep track of users' transaction history from the performance database.  The workload statistics contain the user transaction history information; however, since the workload performance statistics are intended for temporary purposes and data in these tables is deleted every few months, we lose all the users' historical records.
    We want to keep track of the following in a table that we can query later:
    User ID      - Length 12
    Transaction  - Length 20
    Date         - Length 8
    With over 20,000 end users in production this can translate into thousands of records to be inserted into this table daily.
    What is the best way to store this type of information?  Is there a specific table type designed for storing massive quantities of data?  Also, over time (a few years) this table can grow to millions or hundreds of millions of records.  How can we manage that in terms of performance and storage space?
    If anyone has worked with database tables with very large amounts of records, and would like to share your experiences, please let us know how we could/should structure this function in our environment.
    Best Regards.

    Hi SS
    Alternatively, you can use a <u>cluster table</u>. For more help, refer to the F1 help on the <b>"IMPORT TO / EXPORT FROM DATABASE"</b> statements.
    Or you can store the data as a <u>file</u> on the application server using the <b>"OPEN DATASET, TRANSFER, CLOSE DATASET"</b> statements.
    You can also choose to archive data older than some definite date.
    You can also mix these alternatives, keeping recent data online and archiving older data.

  • Counting rows for db tables with 500 million+ entries

    Hi
    The SE16 transaction times out in the foreground when showing the number of entries for DB tables with 500 million+ entries. In the background this takes too long.
    I am writing a custom report to get the number of records of a table, using the OPEN CURSOR concept to determine the number of records.
    Is there any more efficient way to read the number of records from such huge tables?
        OPEN CURSOR l_cursor FOR
          SELECT COUNT(*)
            FROM (u_str_param-p_tabn)   " u_str_param-p_tabn is the table name from the input
           WHERE (l_tab_cond).          " l_tab_cond is a dynamic WHERE condition
        DO.
          FETCH NEXT CURSOR l_cursor INTO l_new_count.
          IF sy-subrc NE 0.
            EXIT.
          ELSE.
            l_tot_cnt = l_tot_cnt + l_new_count. " l_tot_cnt holds the number of records at the end of the loop
            CLEAR l_new_count.
          ENDIF.
        ENDDO.
        CLOSE CURSOR l_cursor.

    Hello,
    For sure it is a huge number of entries!!!
    Are there any key fields?
    Use a variable as a counter over the key field, for example row_table, and also low and high variables.
    Try to use a DO loop, and inside it:

    do.
      low_row  = row_table.
      high_row = row_table + 200000.  " advance the selection window by 200,000 entries each pass
      " select into an internal table based on the key field;
      " for example, if the key field is docnr:
      select ... into table itab
        where docnr > low_row
          and docnr < high_row.
      " count the lines of itab, add them to a running total, then free itab.
    enddo.

    Good luck.
    Antonis
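
    The same windowed approach can be expressed directly in SQL. A minimal sketch with made-up names (big_table for the table, docnr for the key column): run it repeatedly, advancing the binds by a fixed step, and sum the counts on the caller's side.

        -- Count one key-range window per round; :low_row and :high_row
        -- advance by the window size (e.g. 200000) each time.
        SELECT COUNT(*)
          FROM big_table
         WHERE docnr >  :low_row
           AND docnr <= :high_row;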

  • Deleting records from a table with 12 million records

    We need to delete some records from this table.
    SQL> desc CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak;
    Name Null? Type
    CLM_PMT_CHCK_NUM NOT NULL NUMBER(9)
    CLM_PMT_CHCK_ACCT NOT NULL VARCHAR2(5)
    CLM_PMT_PAYEE_POSTAL_EXT_CD VARCHAR2(4)
    CLM_PMT_CHCK_AMT NUMBER(9,2)
    CLM_PMT_CHCK_DT DATE
    CLM_PMT_PAYEE_NAME VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_1 VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_2 VARCHAR2(30)
    CLM_PMT_PAYEE_CITY VARCHAR2(19)
    CLM_PMT_PAYEE_STATE_CD CHAR(2)
    CLM_PMT_PAYEE_POSTAL_CD VARCHAR2(5)
    CLM_PMT_SUM_CHCK_IND CHAR(1)
    CLM_PMT_PAYEE_TYPE_CD CHAR(1)
    CLM_PMT_CHCK_STTS_CD CHAR(2)
    SYSTEM_INSERT_DT DATE
    SYSTEM_UPDATE_DT
    I only need to delete the records based on this condition
    select * from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;
    This table has 12 million records.
    Please advise
    Regards,
    Narayan

    user7202581 wrote:
    We need to delete some records on this table.
    SQL> desc CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak;
    Name Null? Type
    CLM_PMT_CHCK_NUM NOT NULL NUMBER(9)
    CLM_PMT_CHCK_ACCT NOT NULL VARCHAR2(5)
    CLM_PMT_PAYEE_POSTAL_EXT_CD VARCHAR2(4)
    CLM_PMT_CHCK_AMT NUMBER(9,2)
    CLM_PMT_CHCK_DT DATE
    CLM_PMT_PAYEE_NAME VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_1 VARCHAR2(30)
    CLM_PMT_PAYEE_ADDR_LINE_2 VARCHAR2(30)
    CLM_PMT_PAYEE_CITY VARCHAR2(19)
    CLM_PMT_PAYEE_STATE_CD CHAR(2)
    CLM_PMT_PAYEE_POSTAL_CD VARCHAR2(5)
    CLM_PMT_SUM_CHCK_IND CHAR(1)
    CLM_PMT_PAYEE_TYPE_CD CHAR(1)
    CLM_PMT_CHCK_STTS_CD CHAR(2)
    SYSTEM_INSERT_DT DATE
    SYSTEM_UPDATE_DT
    I only need to delete the records based on this condition
    select * from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;
    This table has 12 million records.
    Please advise
    Regards,
    Narayan

    DELETE from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
    where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;

  • Select query on a table with 13 million of rows

    Hi guys,
    I have been trying to perform a select query on a table which has 13 million entries; however, it took around 58 minutes to complete.
    The table has 8 columns, 4 of which form the primary key, and looks like below:
    (PK) SegmentID > INT
    (PK) IPAddress > VARCHAR (45)
    MAC Address > VARCHAR (45)
    (PK) Application Name > VARCHAR (45)
    Total Bytes > INT
    Dates > VARCHAR (45)
    Times > VARCHAR (45)
    (PK) DateTime > DATETIME
    The SQL query format is:
    select ipaddress, macaddress, sum(totalbytes), applicationname, dates, times
    from appstat
    where segmentid = 1
      and datetime between '2011-01-03 15:00:00.0' and '2011-01-04 15:00:00.0'
    group by ipaddress, applicationname
    order by applicationname, sum(totalbytes) desc
    Is there a way I can improve this query to be faster (through my.conf or any other method)?
    Any feedback is welcomed.
    Thank you.
    Mus

    Tolls wrote:
    What db is this?
    You never said.
    Anyway, it looks like it's using the Primary Key to find the correct rows.
    Is that the correct number of rows returned?
    5 million?
    Sorted?

    I am using MySQL. By the way, the query time has become much faster (22 sec) after I changed the configuration file (based on my-huge.cnf).
    The number of rows returned is 7,999.
    This is some portion of the my.cnf
    # The MySQL server
    [mysqld]
    port = 3306
    socket = /var/lib/mysql/mysql.sock
    skip-locking
    key_buffer = 800M
    max_allowed_packet = 1M
    table_cache = 256
    sort_buffer_size = 1M
    read_buffer_size = 1M
    read_rnd_buffer_size = 4M
    myisam_sort_buffer_size = 64M
    thread_cache_size = 8
    query_cache_size= 16M
    log = /var/log/mysql.log
    log-slow-queries = /var/log/mysqld.slow.log
    long_query_time=10
    # Try number of CPU's*2 for thread_concurrency
    thread_concurrency = 6
    Is there anything else I need to tune so it can be faster ?
    Thanks a bunch.
    Edited by: user578505 on Jan 17, 2011 6:47 PM
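
    Beyond my.cnf tuning, a query of this shape usually benefits most from an index matching the WHERE clause. A hedged sketch, assuming no secondary index on (segmentid, datetime) exists yet (the index name is made up):

        -- Lets the server seek on segmentid = 1 and range-scan the
        -- datetime window instead of reading all 13 million rows.
        ALTER TABLE appstat ADD INDEX ix_segmentid_datetime (segmentid, datetime);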

  • Like % in a query running on an Oracle Apps table with 8 million records

    I am running the query below. As per the explain plan, it is using the index on organization_id and inventory_item_id.
    select segment1 from mtl_system_items where organization_id = 100 and inventory_item_id like '123456%'
    It takes about 15 minutes to run this query, which is a long time, as it returns values to a front end written in ASP. The web page would time out by the time this query completes. Do you have any suggestions on how to make this query run faster?

    It is an Oracle Apps table. Below is the structure:
    Name Null? Type
    INVENTORY_ITEM_ID NOT NULL NUMBER
    ORGANIZATION_ID NOT NULL NUMBER
    LAST_UPDATE_DATE NOT NULL DATE
    LAST_UPDATED_BY NOT NULL NUMBER
    CREATION_DATE NOT NULL DATE
    CREATED_BY NOT NULL NUMBER
    LAST_UPDATE_LOGIN NUMBER
    SUMMARY_FLAG NOT NULL VARCHAR2(1)
    ENABLED_FLAG NOT NULL VARCHAR2(1)
    START_DATE_ACTIVE DATE
    END_DATE_ACTIVE DATE
    DESCRIPTION VARCHAR2(240)
    BUYER_ID NUMBER(9)
    ACCOUNTING_RULE_ID NUMBER
    INVOICING_RULE_ID NUMBER
    SEGMENT1 VARCHAR2(40)
    SEGMENT2 VARCHAR2(40)
    SEGMENT3 VARCHAR2(40)
    SEGMENT4 VARCHAR2(40)
    SEGMENT5 VARCHAR2(40)
    SEGMENT6 VARCHAR2(40)
    SEGMENT7 VARCHAR2(40)
    SEGMENT8 VARCHAR2(40)
    SEGMENT9 VARCHAR2(40)
    SEGMENT10 VARCHAR2(40)
    SEGMENT11 VARCHAR2(40)
    SEGMENT12 VARCHAR2(40)
    SEGMENT13 VARCHAR2(40)
    SEGMENT14 VARCHAR2(40)
    SEGMENT15 VARCHAR2(40)
    SEGMENT16 VARCHAR2(40)
    SEGMENT17 VARCHAR2(40)
    SEGMENT18 VARCHAR2(40)
    SEGMENT19 VARCHAR2(40)
    SEGMENT20 VARCHAR2(40)
    ATTRIBUTE_CATEGORY VARCHAR2(30)
    ATTRIBUTE1 VARCHAR2(150)
    ATTRIBUTE2 VARCHAR2(150)
    ATTRIBUTE3 VARCHAR2(150)
    ATTRIBUTE4 VARCHAR2(150)
    ATTRIBUTE5 VARCHAR2(150)
    ATTRIBUTE6 VARCHAR2(150)
    ATTRIBUTE7 VARCHAR2(150)
    ATTRIBUTE8 VARCHAR2(150)
    ATTRIBUTE9 VARCHAR2(150)
    ATTRIBUTE10 VARCHAR2(150)
    ATTRIBUTE11 VARCHAR2(150)
    ATTRIBUTE12 VARCHAR2(150)
    ATTRIBUTE13 VARCHAR2(150)
    ATTRIBUTE14 VARCHAR2(150)
    ATTRIBUTE15 VARCHAR2(150)
    PURCHASING_ITEM_FLAG NOT NULL VARCHAR2(1)
    SHIPPABLE_ITEM_FLAG NOT NULL VARCHAR2(1)
    CUSTOMER_ORDER_FLAG NOT NULL VARCHAR2(1)
    INTERNAL_ORDER_FLAG NOT NULL VARCHAR2(1)
    SERVICE_ITEM_FLAG NOT NULL VARCHAR2(1)
    INVENTORY_ITEM_FLAG NOT NULL VARCHAR2(1)
    ENG_ITEM_FLAG NOT NULL VARCHAR2(1)
    INVENTORY_ASSET_FLAG NOT NULL VARCHAR2(1)
    PURCHASING_ENABLED_FLAG NOT NULL VARCHAR2(1)
    CUSTOMER_ORDER_ENABLED_FLAG NOT NULL VARCHAR2(1)
    INTERNAL_ORDER_ENABLED_FLAG NOT NULL VARCHAR2(1)
    SO_TRANSACTIONS_FLAG NOT NULL VARCHAR2(1)
    MTL_TRANSACTIONS_ENABLED_FLAG NOT NULL VARCHAR2(1)
    STOCK_ENABLED_FLAG NOT NULL VARCHAR2(1)
    BOM_ENABLED_FLAG NOT NULL VARCHAR2(1)
    BUILD_IN_WIP_FLAG NOT NULL VARCHAR2(1)
    REVISION_QTY_CONTROL_CODE NUMBER
    ITEM_CATALOG_GROUP_ID NUMBER
    CATALOG_STATUS_FLAG VARCHAR2(1)
    RETURNABLE_FLAG VARCHAR2(1)
    DEFAULT_SHIPPING_ORG NUMBER
    COLLATERAL_FLAG VARCHAR2(1)
    TAXABLE_FLAG VARCHAR2(1)
    QTY_RCV_EXCEPTION_CODE VARCHAR2(25)
    ALLOW_ITEM_DESC_UPDATE_FLAG VARCHAR2(1)
    INSPECTION_REQUIRED_FLAG VARCHAR2(1)
    RECEIPT_REQUIRED_FLAG VARCHAR2(1)
    MARKET_PRICE NUMBER
    HAZARD_CLASS_ID NUMBER
    RFQ_REQUIRED_FLAG VARCHAR2(1)
    QTY_RCV_TOLERANCE NUMBER
    LIST_PRICE_PER_UNIT NUMBER
    UN_NUMBER_ID NUMBER
    PRICE_TOLERANCE_PERCENT NUMBER
    ASSET_CATEGORY_ID NUMBER
    ROUNDING_FACTOR NUMBER
    UNIT_OF_ISSUE VARCHAR2(25)
    ENFORCE_SHIP_TO_LOCATION_CODE VARCHAR2(25)
    ALLOW_SUBSTITUTE_RECEIPTS_FLAG VARCHAR2(1)
    ALLOW_UNORDERED_RECEIPTS_FLAG VARCHAR2(1)
    ALLOW_EXPRESS_DELIVERY_FLAG VARCHAR2(1)
    DAYS_EARLY_RECEIPT_ALLOWED NUMBER
    DAYS_LATE_RECEIPT_ALLOWED NUMBER
    RECEIPT_DAYS_EXCEPTION_CODE VARCHAR2(25)
    RECEIVING_ROUTING_ID NUMBER
    INVOICE_CLOSE_TOLERANCE NUMBER
    RECEIVE_CLOSE_TOLERANCE NUMBER
    AUTO_LOT_ALPHA_PREFIX VARCHAR2(30)
    START_AUTO_LOT_NUMBER VARCHAR2(30)
    LOT_CONTROL_CODE NUMBER
    SHELF_LIFE_CODE NUMBER
    SHELF_LIFE_DAYS NUMBER
    SERIAL_NUMBER_CONTROL_CODE NUMBER
    START_AUTO_SERIAL_NUMBER VARCHAR2(30)
    AUTO_SERIAL_ALPHA_PREFIX VARCHAR2(30)
    SOURCE_TYPE NUMBER
    SOURCE_ORGANIZATION_ID NUMBER
    SOURCE_SUBINVENTORY VARCHAR2(10)
    EXPENSE_ACCOUNT NUMBER
    ENCUMBRANCE_ACCOUNT NUMBER
    RESTRICT_SUBINVENTORIES_CODE NUMBER
    UNIT_WEIGHT NUMBER
    WEIGHT_UOM_CODE VARCHAR2(3)
    VOLUME_UOM_CODE VARCHAR2(3)
    UNIT_VOLUME NUMBER
    RESTRICT_LOCATORS_CODE NUMBER
    LOCATION_CONTROL_CODE NUMBER
    SHRINKAGE_RATE NUMBER
    ACCEPTABLE_EARLY_DAYS NUMBER
    PLANNING_TIME_FENCE_CODE NUMBER
    DEMAND_TIME_FENCE_CODE NUMBER
    LEAD_TIME_LOT_SIZE NUMBER
    STD_LOT_SIZE NUMBER
    CUM_MANUFACTURING_LEAD_TIME NUMBER
    OVERRUN_PERCENTAGE NUMBER
    MRP_CALCULATE_ATP_FLAG VARCHAR2(1)
    ACCEPTABLE_RATE_INCREASE NUMBER
    ACCEPTABLE_RATE_DECREASE NUMBER
    CUMULATIVE_TOTAL_LEAD_TIME NUMBER
    PLANNING_TIME_FENCE_DAYS NUMBER
    DEMAND_TIME_FENCE_DAYS NUMBER
    END_ASSEMBLY_PEGGING_FLAG VARCHAR2(1)
    REPETITIVE_PLANNING_FLAG VARCHAR2(1)
    PLANNING_EXCEPTION_SET VARCHAR2(10)
    BOM_ITEM_TYPE NOT NULL NUMBER
    PICK_COMPONENTS_FLAG NOT NULL VARCHAR2(1)
    REPLENISH_TO_ORDER_FLAG NOT NULL VARCHAR2(1)
    BASE_ITEM_ID NUMBER
    ATP_COMPONENTS_FLAG NOT NULL VARCHAR2(1)
    ATP_FLAG NOT NULL VARCHAR2(1)
    FIXED_LEAD_TIME NUMBER
    VARIABLE_LEAD_TIME NUMBER
    WIP_SUPPLY_LOCATOR_ID NUMBER
    WIP_SUPPLY_TYPE NUMBER
    WIP_SUPPLY_SUBINVENTORY VARCHAR2(10)
    PRIMARY_UOM_CODE VARCHAR2(3)
    PRIMARY_UNIT_OF_MEASURE VARCHAR2(25)
    ALLOWED_UNITS_LOOKUP_CODE NUMBER
    COST_OF_SALES_ACCOUNT NUMBER
    SALES_ACCOUNT NUMBER
    DEFAULT_INCLUDE_IN_ROLLUP_FLAG VARCHAR2(1)
    INVENTORY_ITEM_STATUS_CODE VARCHAR2(10)
    INVENTORY_PLANNING_CODE NUMBER
    PLANNER_CODE VARCHAR2(10)
    PLANNING_MAKE_BUY_CODE NUMBER
    FIXED_LOT_MULTIPLIER NUMBER
    ROUNDING_CONTROL_TYPE NUMBER
    CARRYING_COST NUMBER
    POSTPROCESSING_LEAD_TIME NUMBER
    PREPROCESSING_LEAD_TIME NUMBER
    FULL_LEAD_TIME NUMBER
    ORDER_COST NUMBER
    MRP_SAFETY_STOCK_PERCENT NUMBER
    MRP_SAFETY_STOCK_CODE NUMBER
    MIN_MINMAX_QUANTITY NUMBER
    MAX_MINMAX_QUANTITY NUMBER
    MINIMUM_ORDER_QUANTITY NUMBER
    FIXED_ORDER_QUANTITY NUMBER
    FIXED_DAYS_SUPPLY NUMBER
    MAXIMUM_ORDER_QUANTITY NUMBER
    ATP_RULE_ID NUMBER
    PICKING_RULE_ID NUMBER
    RESERVABLE_TYPE NUMBER
    POSITIVE_MEASUREMENT_ERROR NUMBER
    NEGATIVE_MEASUREMENT_ERROR NUMBER
    ENGINEERING_ECN_CODE VARCHAR2(50)
    ENGINEERING_ITEM_ID NUMBER
    ENGINEERING_DATE DATE
    SERVICE_STARTING_DELAY NUMBER
    VENDOR_WARRANTY_FLAG NOT NULL VARCHAR2(1)
    SERVICEABLE_COMPONENT_FLAG VARCHAR2(1)
    SERVICEABLE_PRODUCT_FLAG NOT NULL VARCHAR2(1)
    BASE_WARRANTY_SERVICE_ID NUMBER
    PAYMENT_TERMS_ID NUMBER
    PREVENTIVE_MAINTENANCE_FLAG VARCHAR2(1)
    PRIMARY_SPECIALIST_ID NUMBER
    SECONDARY_SPECIALIST_ID NUMBER
    SERVICEABLE_ITEM_CLASS_ID NUMBER
    TIME_BILLABLE_FLAG VARCHAR2(1)
    MATERIAL_BILLABLE_FLAG VARCHAR2(30)
    EXPENSE_BILLABLE_FLAG VARCHAR2(1)
    PRORATE_SERVICE_FLAG VARCHAR2(1)
    COVERAGE_SCHEDULE_ID NUMBER
    SERVICE_DURATION_PERIOD_CODE VARCHAR2(10)
    SERVICE_DURATION NUMBER
    WARRANTY_VENDOR_ID NUMBER
    MAX_WARRANTY_AMOUNT NUMBER
    RESPONSE_TIME_PERIOD_CODE VARCHAR2(30)
    RESPONSE_TIME_VALUE NUMBER
    NEW_REVISION_CODE VARCHAR2(30)
    INVOICEABLE_ITEM_FLAG NOT NULL VARCHAR2(1)
    TAX_CODE VARCHAR2(50)
    INVOICE_ENABLED_FLAG NOT NULL VARCHAR2(1)
    MUST_USE_APPROVED_VENDOR_FLAG NOT NULL VARCHAR2(1)
    REQUEST_ID NUMBER
    PROGRAM_APPLICATION_ID NUMBER
    PROGRAM_ID NUMBER
    PROGRAM_UPDATE_DATE DATE
    OUTSIDE_OPERATION_FLAG NOT NULL VARCHAR2(1)
    OUTSIDE_OPERATION_UOM_TYPE VARCHAR2(25)
    SAFETY_STOCK_BUCKET_DAYS NUMBER
    AUTO_REDUCE_MPS NUMBER(22)
    COSTING_ENABLED_FLAG NOT NULL VARCHAR2(1)
    AUTO_CREATED_CONFIG_FLAG NOT NULL VARCHAR2(1)
    CYCLE_COUNT_ENABLED_FLAG NOT NULL VARCHAR2(1)
    ITEM_TYPE VARCHAR2(30)
    MODEL_CONFIG_CLAUSE_NAME VARCHAR2(10)
    SHIP_MODEL_COMPLETE_FLAG VARCHAR2(1)
    MRP_PLANNING_CODE NUMBER
    RETURN_INSPECTION_REQUIREMENT NUMBER
    ATO_FORECAST_CONTROL NUMBER
    RELEASE_TIME_FENCE_CODE NUMBER
    RELEASE_TIME_FENCE_DAYS NUMBER
    CONTAINER_ITEM_FLAG VARCHAR2(1)
    VEHICLE_ITEM_FLAG VARCHAR2(1)
    MAXIMUM_LOAD_WEIGHT NUMBER
    MINIMUM_FILL_PERCENT NUMBER
    CONTAINER_TYPE_CODE VARCHAR2(30)
    INTERNAL_VOLUME NUMBER
    WH_UPDATE_DATE DATE
    PRODUCT_FAMILY_ITEM_ID NUMBER
    GLOBAL_ATTRIBUTE_CATEGORY VARCHAR2(150)
    GLOBAL_ATTRIBUTE1 VARCHAR2(150)
    GLOBAL_ATTRIBUTE2 VARCHAR2(150)
    GLOBAL_ATTRIBUTE3 VARCHAR2(150)
    GLOBAL_ATTRIBUTE4 VARCHAR2(150)
    GLOBAL_ATTRIBUTE5 VARCHAR2(150)
    GLOBAL_ATTRIBUTE6 VARCHAR2(150)
    GLOBAL_ATTRIBUTE7 VARCHAR2(150)
    GLOBAL_ATTRIBUTE8 VARCHAR2(150)
    GLOBAL_ATTRIBUTE9 VARCHAR2(150)
    GLOBAL_ATTRIBUTE10 VARCHAR2(150)
    PURCHASING_TAX_CODE VARCHAR2(50)
    The query is as below
    select segment1 from mtl_system_items where organization_id = 100 and inventory_item_id like '123456%'
    The explain plan is as below:
    Plan
    SELECT STATEMENT (RULE)
      TABLE ACCESS BY INDEX ROWID INV.MTL_SYSTEM_ITEMS
        INDEX RANGE SCAN (NON-UNIQUE) INV.MTL_SYSTEM_ITEMS_N1
    The INV.MTL_SYSTEM_ITEMS_N1 index is created on
    ORGANIZATION_ID and SEGMENT1
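
    A side note, not from the thread: INVENTORY_ITEM_ID is a NUMBER, so the predicate inventory_item_id LIKE '123456%' forces an implicit TO_CHAR on each row, and a plain index on the numeric column cannot serve the prefix match. One possible fix, sketched under the assumption that the cost-based optimizer can be used (the plan above shows RULE) and with a made-up index name:

        -- Function-based index matching the implicit conversion:
        CREATE INDEX mtl_system_items_fbi
            ON mtl_system_items (organization_id, TO_CHAR(inventory_item_id));

        -- Query rewritten to reference the indexed expression exactly:
        SELECT segment1
          FROM mtl_system_items
         WHERE organization_id = 100
           AND TO_CHAR(inventory_item_id) LIKE '123456%';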

  • Internal Table with 22 Million Records

    Hello,
    I am faced with the problem of working with an internal table which has 22 million records, and it keeps growing. The following code has been written in an APD. I have tried every possible way to optimize the code using sorted/hashed tables, but it ends in a dump as a result of insufficient memory.
    Any tips on how I can optimize my coding? I have attached the Short-Dump.
    Thanks,
    SD
      DATA: ls_source TYPE y_source_fields,
            ls_target TYPE y_target_fields.
      DATA: it_source_tmp TYPE yt_source_fields,
            et_target_tmp TYPE yt_target_fields.
      TYPES: BEGIN OF IT_TAB1,
              BPARTNER TYPE /BI0/OIBPARTNER,
              DATEBIRTH TYPE /BI0/OIDATEBIRTH,
              ALTER TYPE /GKV/BW01_ALTER,
              ALTERSGRUPPE TYPE /GKV/BW01_ALTERGR,
              END OF IT_TAB1.
      DATA: IT_XX_TAB1 TYPE SORTED TABLE OF IT_TAB1
            WITH NON-UNIQUE KEY BPARTNER,
            WA_XX_TAB1 TYPE IT_TAB1.
      it_source_tmp[] = it_source[].
      SORT it_source_tmp BY /B99/S_BWPKKD ASCENDING.
      DELETE ADJACENT DUPLICATES FROM it_source_tmp
                            COMPARING /B99/S_BWPKKD.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
      LOOP AT it_source INTO ls_source.
        READ TABLE IT_XX_TAB1
          INTO WA_XX_TAB1
          WITH TABLE KEY BPARTNER = ls_source-/B99/S_BWPKKD.
        IF sy-subrc = 0.
          ls_target-DATEBIRTH = WA_XX_TAB1-DATEBIRTH.
        ENDIF.
        MOVE-CORRESPONDING ls_source TO ls_target.
        APPEND ls_target TO et_target.
        CLEAR ls_target.
      ENDLOOP.

    Hi SD,
    Please put the SELECT query inside the condition shown below:
    IF it_source_tmp[] IS NOT INITIAL.
      SELECT BPARTNER
              DATEBIRTH
        FROM /B99/ABW00GO0600
        INTO TABLE IT_XX_TAB1
        FOR ALL ENTRIES IN it_source_tmp
        WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
    ENDIF.
    This will solve your performance issue. When the internal table it_source_tmp has no records, the FOR ALL ENTRIES clause was fetching all the records from the database. With this condition in place, nothing is selected if the table contains no records.
    Regards,
    Pravin

  • SQL query - 2 tables with no relationship

    I'm fairly new to SQL, but I'm trying to put together a query using 2 tables. They have no relationship. I'm using this information in a C# app, and I'm trying to pull it as cleanly as possible to use in a class.
    For example, I have a table called Server where I am selecting 2 columns,
    ServerID and Type. From the second table, called
    App, I am selecting the AppVersion column.
    The question is: how can I output the data without duplicate AppVersion values? I would rather see null or empty.
    I have tried the query below, along with a CROSS JOIN, and I get duplicates in AppVersion.
    SELECT ServerID, Type, AppVersion
    FROM Server, App;
    I know that is wrong as it outputs duplicates for AppVersion in the results.
    Thanks in advance. 
    A

    I didn't get your question. When you cross join two different tables the output is a Cartesian product of the two tables.
    The record (A, 1, 1.1) is different from (B, 1, 1.1).
    You can use DISTINCT to eliminate duplicates in the data if you don't want to see them.
    SELECT DISTINCT ServerID, Type, AppVersion
    FROM Server, App;

  • Table with 200 millions records.

    Dear all,
    I have to create a table which will accept 200 million records, and I have to produce a monthly report from these data.
    Performance is a serious concern; does anyone have any suggestions?
    Thanks in advance.

    Hi,
    I have a situation like yours.
    Each month you load into the corresponding partition; for the next year, you add another partition.
    For example, you have a table
    SQL> CREATE TABLE sales99_cpart(
    2> sale_id NUMBER NOT NULL,
    3> sale_date DATE,
    4> prod_id NUMBER,
    5> qty NUMBER)
    6> PARTITION BY RANGE(sale_date)
    7> SUBPARTITION BY HASH(prod_id) SUBPARTITIONS 4
    8> STORE IN (data01,data02,data03,data04)
    9> (PARTITION cp1 VALUES LESS THAN('01-APR-1999'),
    10> PARTITION cp2 VALUES LESS THAN('01-JUL-1999'),
    11> PARTITION cp3 VALUES LESS THAN('01-OCT-1999'),
    12> PARTITION cp4 VALUES LESS THAN('01-JAN-2000'))
    13> /
    For the next year, add a new partition and subpartitions.
    Subpartitions behave like tables, and you can run parallel queries against them, which is very interesting for performance.
    You can partition the table by range on a date, and subpartition by hash on an id (for example, a call-center id).
    Next year, if you want to age out history, you can drop just one partition.
    The cost: Oracle Partitioning is an extra-cost option of Oracle Enterprise Edition; it is not included by default.
    Nicolas.
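
    To extend the example into the following year, the sketch below adds one more range partition (the partition name and bound are made up; the hash subpartitions are created according to the table's subpartitioning defaults):

        ALTER TABLE sales99_cpart
          ADD PARTITION cp5 VALUES LESS THAN ('01-APR-2000');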

  • JSF Best Prac.: Single ADF Query, Many Tables with Different Qry. Params

    One query (ex. queryPeopleBySurnameFirstLetter) needs to be used multiple times on the same page to generate different tables. Each table represents a different letter (A,B,...Z) and would list the first 5 for that letter. Using design view, the first table is easy and prompts you for the query params on creation, but subsequent tables bind to that same request (and param).
    This is just a sample, but this situation seems to affect a lot of developers (aka our developers). Does anyone have input on the easiest way for someone to do this in JDeveloper? Is there a way to handle it all from the design view without having to manually change data bindings in the data definitions? Is there an existing tutorial that I'm missing?
    Preemptive Thanks for Your Input,
    Raymond

    I think you'll need to go and change the pagedef in such a case to add further iterators.
    But for the page scenario you are describing, instead of having multiple queries you can probably achieve it with a single query and smart use of the af:foreach tag.
    See: http://groundside.com/blog/DuncanMills.php?p=458&more=1&c=1&tb=1&pb=1#more458
    and
    http://kuba.zilp.pl/?id=61

  • Get min and max for a column from table with 24 million rows.

    What is the best way to re-write the following query in a procedure for the table which has around 24 million rows?
    SELECT MIN(ft_src_ref_id), MAX(ft_src_ref_id )
    INTO gn_Min_ID, gn_Max_ID
    from UI_PURGE_FT;
    Thanks
    Edited by: tcode on Jun 21, 2012 12:31 PM

    Which of the following plans is better? Can you please briefly explain?
    Also, I need to gather statistics such as recursive calls, db block gets, consistent gets, etc. How can I get those?
    Thanks
    1.
    Execution Plan
    Plan hash value: 3702745568
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 6 | 13991 (2)| 00:02:48 |
    | 1 | SORT AGGREGATE | | 1 | 6 | | |
    | 2 | INDEX FAST FULL SCAN | UI_PURGE_FT_PK | 23M| 136M| 13991 (2)| 00:02:48 |
    2.
    Execution Plan
    Plan hash value: 1974183109
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | | 2 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 6 | | |
    | 2 | INDEX FULL SCAN (MIN/MAX)| UI_PURGE_FT_PK | 23M| 136M| 2 (0)| 00:00:01 |
    | 3 | SORT AGGREGATE | | 1 | 6 | | |
    | 4 | INDEX FULL SCAN (MIN/MAX)| UI_PURGE_FT_PK | 23M| 136M| 2 (0)| 00:00:01 |
    | 5 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
    ---------------------------------------------------------------------------------------------
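
    As an aside that is not from the thread: in SQL*Plus, statistics such as recursive calls, db block gets and consistent gets can be shown with AUTOTRACE (this assumes the PLUSTRACE role has been set up):

        SET AUTOTRACE TRACEONLY STATISTICS
        SELECT MIN(ft_src_ref_id), MAX(ft_src_ref_id) FROM ui_purge_ft;
        -- the statistics block is printed after the query runs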

  • Locating row with invalid number from table with few million rows

    We need to query a table (or a view of it) to obtain an id of type VARCHAR, with the bind value being a NUMBER. Every attempt I make seems to result in ORA-01722: invalid number. One such case is below:
    SQL> select TEXT_ID FROM ATT_TABLE where to_number(TEXT_ID) = 9411956;
    select TEXT_ID FROM ATT_TABLE where to_number(TEXT_ID) = 9411956
    ERROR at line 1:
    ORA-12801: error signaled in parallel query server P010
    ORA-01722: invalid number
    1. The TEXT_ID column is meant to have numeric values in a VARCHAR type. It is possible that some records/rows do not comply. How can I locate the non-complying records?
    2. What would be a VIEW definition that returns only records with valid numeric data in the TEXT_ID column?
    Edited by: bonjonbovi on Feb 13, 2009 3:58 PM

    Of course REGEXP_LIKE is really CPU-expensive, but there's no other solution in your situation.
    The only way to distinguish numeric values from character values is the REGEXP_LIKE function.
    But you can also do it using PL/SQL:
    BEGIN
       FOR c IN (SELECT TO_NUMBER (ID) ID
                   FROM test_number)
       LOOP
          DBMS_OUTPUT.put_line (c.ID);
       END LOOP;
    EXCEPTION
       WHEN OTHERS
       THEN
          NULL;
    END;
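
    To make the REGEXP_LIKE approach concrete, here is a hedged sketch; the pattern assumes TEXT_ID should contain plain unsigned integers, and the view name is made up:

        -- 1. Locate the non-complying rows:
        SELECT text_id
          FROM att_table
         WHERE NOT REGEXP_LIKE(text_id, '^[0-9]+$');

        -- 2. A view returning only rows with valid numeric TEXT_ID values:
        CREATE OR REPLACE VIEW att_table_numeric AS
        SELECT *
          FROM att_table
         WHERE REGEXP_LIKE(text_id, '^[0-9]+$');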

  • XML SQL - query the table with XML in column [urgent]

    I have a table which has to be queried with the
    oracle.xml.sql.query.OracleXMLQuery class to produce an XML
    document for the web.
    The problem is that one column already contains XML data (well-formed
    HTML) like this:
    <font face="Arial">click</font>
    and when I call OracleXMLQuery.getXMLString() it encodes every '<'
    as '&lt;' and so on.
    Can I turn off that kind of encoding somehow, or is there a better
    solution for this?
    thanks in advance
    Bojan

    You can use an XSLT Stylesheet to achieve what you are wanting to do.
    Say the column containing
    the markup is named "DOC"; then
    transforming the output of OracleXMLQuery
    by the following stylesheet using
    the oracle.xml.parser.v2.XSLProcessor...
    <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
    <xsl:output method="xml" omit-xml-declaration="yes"/>
    <xsl:template match="@*|node()">
    <xsl:copy>
    <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
    </xsl:template>
    <xsl:template match="DOC">
    <xsl:value-of select="." disable-output-escaping="yes"/>
    </xsl:template>
    </xsl:stylesheet>
    Will give the results you are looking for.
    You can use the XSQL Servlet to make this combination even easier if that is applicable to your situation.
