Table has 80 million records - Performance impact if we stop archiving
Hi all,
I have an Oracle 11g table which currently has around 80 million records. Until now we have archived it weekly to keep its size down, but one of the architects at my firm suggested that Oracle has no problem maintaining even billions of records with just a little performance tuning.
I was wondering whether that is true, and what effect it would have on querying and insertion if the table holds 80 million rows and keeps growing every day?
Any comments welcome.
It is true that Oracle Database can manage tables with billions of rows. But when talking about data volume you should quote the table size rather than the number of rows, because you won't get the same table size if the average row is 50 bytes as you will if it is 5 KB.
As for performance impact, it depends on the queries that access this table: the more data a query needs to process and/or return as a result set, the greater the impact on that query's performance.
You haven't given enough input for a good answer. Ideally you should post the DDL statements that create this table and its indexes, plus the SQL queries that use them.
In some cases table partitioning can really help, but not always (and partitioning requires Enterprise Edition plus additional licensing).
Please read http://docs.oracle.com/cd/E11882_01/server.112/e25789/schemaob.htm#CNCPT112 .
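To talk in terms of table size rather than row count, as suggested above, you can query the data dictionary for the actual segment sizes. This is a hedged sketch; MY_BIG_TABLE is a placeholder name:

```sql
-- Approximate space used by a table and its indexes; MY_BIG_TABLE is hypothetical
SELECT segment_name, segment_type,
       ROUND(bytes / 1024 / 1024) AS size_mb
FROM   user_segments
WHERE  segment_name = 'MY_BIG_TABLE'
   OR  segment_name IN (SELECT index_name
                        FROM   user_indexes
                        WHERE  table_name = 'MY_BIG_TABLE');
```

Summing size_mb gives a rough figure for the storage the table actually occupies, which is a far better input to a capacity discussion than the row count alone.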
Similar Messages
-
Deleting records from a table with 12 million records
We need to delete some records on this table.
SQL> desc CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak;
Name Null? Type
CLM_PMT_CHCK_NUM NOT NULL NUMBER(9)
CLM_PMT_CHCK_ACCT NOT NULL VARCHAR2(5)
CLM_PMT_PAYEE_POSTAL_EXT_CD VARCHAR2(4)
CLM_PMT_CHCK_AMT NUMBER(9,2)
CLM_PMT_CHCK_DT DATE
CLM_PMT_PAYEE_NAME VARCHAR2(30)
CLM_PMT_PAYEE_ADDR_LINE_1 VARCHAR2(30)
CLM_PMT_PAYEE_ADDR_LINE_2 VARCHAR2(30)
CLM_PMT_PAYEE_CITY VARCHAR2(19)
CLM_PMT_PAYEE_STATE_CD CHAR(2)
CLM_PMT_PAYEE_POSTAL_CD VARCHAR2(5)
CLM_PMT_SUM_CHCK_IND CHAR(1)
CLM_PMT_PAYEE_TYPE_CD CHAR(1)
CLM_PMT_CHCK_STTS_CD CHAR(2)
SYSTEM_INSERT_DT DATE
SYSTEM_UPDATE_DT
I only need to delete the records based on this condition
select * from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;
This table has 12 million records.
Please advise
Regards,
Narayan

user7202581 wrote:
We need to delete some records on this table.
SQL> desc CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak;
Name Null? Type
CLM_PMT_CHCK_NUM NOT NULL NUMBER(9)
CLM_PMT_CHCK_ACCT NOT NULL VARCHAR2(5)
CLM_PMT_PAYEE_POSTAL_EXT_CD VARCHAR2(4)
CLM_PMT_CHCK_AMT NUMBER(9,2)
CLM_PMT_CHCK_DT DATE
CLM_PMT_PAYEE_NAME VARCHAR2(30)
CLM_PMT_PAYEE_ADDR_LINE_1 VARCHAR2(30)
CLM_PMT_PAYEE_ADDR_LINE_2 VARCHAR2(30)
CLM_PMT_PAYEE_CITY VARCHAR2(19)
CLM_PMT_PAYEE_STATE_CD CHAR(2)
CLM_PMT_PAYEE_POSTAL_CD VARCHAR2(5)
CLM_PMT_SUM_CHCK_IND CHAR(1)
CLM_PMT_PAYEE_TYPE_CD CHAR(1)
CLM_PMT_CHCK_STTS_CD CHAR(2)
SYSTEM_INSERT_DT DATE
SYSTEM_UPDATE_DT
I only need to delete the records based on this condition
select * from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018;
This table has 12 million records.
Please advise
Regards,
Narayan

DELETE from CDR_CLMS_ADMN.MDL_CLM_PMT_ENT_bak
where CLM_PMT_CHCK_ACCT='00107' AND CLM_PMT_CHCK_NUM>=002196611 AND CLM_PMT_CHCK_NUM<=002197018; -
Internal Table with 22 Million Records
Hello,
I am faced with the problem of working with an internal table which has 22 million records and it keeps growing. The following code has been written in an APD. I have tried every possible way to optimize the coding using Sorted/Hashed Tables but it ends in a dump as a result of insufficient memory.
Any tips on how I can optimize my coding? I have attached the Short-Dump.
Thanks,
SD
DATA: ls_source TYPE y_source_fields,
ls_target TYPE y_target_fields.
DATA: it_source_tmp TYPE yt_source_fields,
et_target_tmp TYPE yt_target_fields.
TYPES: BEGIN OF IT_TAB1,
BPARTNER TYPE /BI0/OIBPARTNER,
DATEBIRTH TYPE /BI0/OIDATEBIRTH,
ALTER TYPE /GKV/BW01_ALTER,
ALTERSGRUPPE TYPE /GKV/BW01_ALTERGR,
END OF IT_TAB1.
DATA: IT_XX_TAB1 TYPE SORTED TABLE OF IT_TAB1
WITH NON-UNIQUE KEY BPARTNER,
WA_XX_TAB1 TYPE IT_TAB1.
it_source_tmp[] = it_source[].
SORT it_source_tmp BY /B99/S_BWPKKD ASCENDING.
DELETE ADJACENT DUPLICATES FROM it_source_tmp
COMPARING /B99/S_BWPKKD.
SELECT BPARTNER
DATEBIRTH
FROM /B99/ABW00GO0600
INTO TABLE IT_XX_TAB1
FOR ALL ENTRIES IN it_source_tmp
WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
LOOP AT it_source INTO ls_source.
READ TABLE IT_XX_TAB1
INTO WA_XX_TAB1
WITH TABLE KEY BPARTNER = ls_source-/B99/S_BWPKKD.
IF sy-subrc = 0.
ls_target-DATEBIRTH = WA_XX_TAB1-DATEBIRTH.
ENDIF.
MOVE-CORRESPONDING ls_source TO ls_target.
APPEND ls_target TO et_target.
CLEAR ls_target.
ENDLOOP.

Hi SD,
Please put the SELECT query inside the condition shown below.
IF it_source_tmp[] IS NOT INITIAL.
SELECT BPARTNER
DATEBIRTH
FROM /B99/ABW00GO0600
INTO TABLE IT_XX_TAB1
FOR ALL ENTRIES IN it_source_tmp
WHERE BPARTNER = it_source_tmp-/B99/S_BWPKKD.
ENDIF.
This should solve your performance issue. When the internal table it_source_tmp had no records, FOR ALL ENTRIES was fetching all the records from the database. With this condition in place, nothing is selected when the table is empty.
Regards,
Pravin -
Hi Guys,
I have to add a new Big INT column to existing table in production, which holds 700 million records and would like to know the impact?
I have been told by one of my colleagues that the last time they tried adding a column to the same table during working hours, it locked the table and impacted the users.
Please suggest/share if anyone has had a similar experience.
Thanks Shiven :) If Answer is Helpful, Please Vote

If you add a new column to a table using an ALTER TABLE ADD command, specify that the new column allows NULLs, and do not define a default value, then it will take a table lock. However, once it gets the table lock, it will essentially run instantly and then free the table lock. That will add the new column as the last column in the table, for example:
ALTER TABLE MyTable ADD MyNewColumn bigint NULL;
But if your change adds a new column with a default value, or you do something like using the table designer to add the new column in the middle of the current list of columns, then SQL Server will have to rewrite the table. It will get a table lock, rewrite the whole table, and then free the table lock. That will take a considerable amount of time, and the table lock will be held for that whole period.
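The two cases described above can be sketched like this (hedged; MyTable and the column names are hypothetical, and exact behavior varies by SQL Server version and edition):

```sql
-- Metadata-only: nullable column, no default -> the lock is held only briefly
ALTER TABLE MyTable ADD MyNewColumn bigint NULL;

-- May rewrite the table: the default has to be materialized for existing
-- rows (on older versions/editions), holding the table lock throughout
ALTER TABLE MyTable ADD MyOtherColumn bigint NOT NULL
    CONSTRAINT DF_MyOtherColumn DEFAULT 0;
```

On a 700-million-row table the difference between the two forms is the difference between an instant and potentially hours of blocking.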
But, no matter how you make the change, if at all possible, I would not alter a table schema on a production database during working hours. Do it when nothing else is going on.
Tom -
Table with 200 millions records.
Dear all,
I have to create a table which will hold 200 million records, and I have to produce a monthly report from this data.
The performance makes me very concerned; does anyone have any suggestions?
Thanks in advance.

Hi,
I have a situation like yours.
Each month you need to create a new partition; the following year gets its own new partitions.
For example, you have a table
SQL> CREATE TABLE sales99_cpart(
2> sale_id NUMBER NOT NULL,
3> sale_date DATE,
4> prod_id NUMBER,
5> qty NUMBER)
6> PARTITION BY RANGE(sale_date)
7> SUBPARTITION BY HASH(prod_id) SUBPARTITIONS 4
8> STORE IN (data01,data02,data03,data04)
9> (PARTITION cp1 VALUES LESS THAN('01-APR-1999'),
10> PARTITION cp2 VALUES LESS THAN('01-JUL-1999'),
11> PARTITION cp3 VALUES LESS THAN('01-OCT-1999'),
12> PARTITION cp4 VALUES LESS THAN('01-JAN-2000'))
13> /
For the next year, add new partition and subpartition.
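That yearly maintenance step might look like this (a hedged sketch continuing the CREATE TABLE example above; the partition name and date boundary are illustrative):

```sql
-- Hypothetical partition covering Q1 of the following year; with the
-- SUBPARTITION clause above, hash subpartitions are created automatically
ALTER TABLE sales99_cpart
  ADD PARTITION cp5 VALUES LESS THAN ('01-APR-2000');
```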
Subpartitions behave like tables; against them you can use parallel query, which is very interesting for performance.
You can partition table by range on date, and subpartition by hash on id callcenter.
Next year, if you want history, you can drop only one subpartition.
The cost: Oracle Partitioning is an option of Oracle Enterprise Edition; it is not included by default.
Nicolas. -
Reg update of a 10 million record table from a 1 million record table
I have 2 tables
Table 1 : 10 million records
21 indexes --> 1). acct_id , acct_seq_no -- index_1
2). c1 , c2 , c3 -- index_2
Table 2 : 1.5 million records
1 index on ( acct_id, acct_seq_no) - idx_1
common keys are acct_id and acct_seq_no
I'm updating table1 from table 2
I need to use index ( index_1 from table_1 ) and (idx_1 from table_2)
How can I make my query to use only this particular index.
My query is as follows:
UPDATE csban_&1 csb
SET (
duns_no,
hdqtrs_duns_no,
us_ultmt_duns_no,
sci_id,
blg_cl_id,
cl_id
) =
( SELECT duns_no,
hdqtrs_duns_no,
us_ultmt_duns_no,
sci_id,
blg_cl_id,
cl_id
FROM csban_abi_temp temp
WHERE csb.acct_id = temp.acct_id
AND csb.acct_seq_no = temp.acct_seq_no
AND rownum < 2 )
WHERE
EXISTS
( SELECT 1
FROM csban_abi_temp temp1
WHERE csb.acct_id = temp1.acct_id
AND csb.acct_seq_no = temp1.acct_seq_no )
Do I need to put an index hint after this?
UPDATE csban_&1 csb --???????? /*+ index (csb.index_1) */
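For reference, an Oracle INDEX hint names the table alias and the index, not the key columns. A hedged sketch of the intended hint form (the SET and WHERE clauses here are purely illustrative placeholders):

```sql
-- Hint form is /*+ INDEX(alias index_name) */, not INDEX(col1, col2)
UPDATE /*+ INDEX(csb index_1) */ csban_1 csb
SET    csb.duns_no = NULL          -- placeholder assignment
WHERE  csb.acct_id = 12345         -- placeholder predicate
AND    csb.acct_seq_no = 1;
```

A hint with a malformed argument list is silently ignored by the optimizer, which is why the column-name form has no effect.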
Thanks in advance

Thanks a lot David and Rob for sharing the info.
Please find the details
SQL> EXPLAIN PLAN FOR
UPDATE csban_2 csb
SET (
duns_no,
hdqtrs_duns_no,
us_ultmt_duns_no,
sci_id,
blg_cl_id,
cl_id
) =
( SELECT duns_no,
hdqtrs_duns_no,
us_ultmt_duns_no,
sci_id,
blg_cl_id,
cl_id
FROM csban_abi_temp temp
WHERE csb.acct_id = temp.acct_id
AND csb.acct_seq_no = temp.acct_seq_no
AND rownum < 2
WHERE
EXISTS
SELECT 1
FROM csban_abi_temp temp1
WHERE 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
21 22 23 24 25 26 27 28 29
Explained.
SQL>
SQL>
SQL> SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 584770029
| Id | Operation | Name | Rows | Bytes | Cost
| TQ |IN-OUT| PQ Distrib |
PLAN_TABLE_OUTPUT
| 0 | UPDATE STATEMENT | | 530K| 19M| 8213
| | | |
| 1 | UPDATE | CSBAN_2 | | |
| | | |
|* 2 | FILTER | | | |
| | | |
| 3 | PX COORDINATOR | | | |
| | | |
PLAN_TABLE_OUTPUT
| 4 | PX SEND QC (RANDOM) | :TQ10000 | 530K| 19M| 8213
| Q1,00 | P->S | QC (RAND) |
| 5 | PX BLOCK ITERATOR | | 530K| 19M| 8213
| Q1,00 | PCWC | |
| 6 | TABLE ACCESS FULL | CSBAN_2 | 530K| 19M| 8213
| Q1,00 | PCWP | |
|* 7 | INDEX RANGE SCAN | IDX_CSB_ABI_TMP | 1 | 10 | 3
PLAN_TABLE_OUTPUT
| | | |
|* 8 | COUNT STOPKEY | | | |
| | | |
| 9 | TABLE ACCESS BY INDEX ROWID| CSBAN_ABI_TEMP | 1 | 38 | 4
| | | |
|* 10 | INDEX RANGE SCAN | IDX_CSB_ABI_TMP | 1 | | 3
| | | |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
2 - filter( EXISTS (SELECT 0 FROM "CSBAN_ABI_TEMP" "TEMP1" WHERE "TEMP1"."ACC
T_SEQ_NO"=:B1 AND
"TEMP1"."ACCT_ID"=:B2))
PLAN_TABLE_OUTPUT
7 - access("TEMP1"."ACCT_ID"=:B1 AND "TEMP1"."ACCT_SEQ_NO"=:B2)
8 - filter(ROWNUM<2)
10 - access("TEMP"."ACCT_ID"=:B1 AND "TEMP"."ACCT_SEQ_NO"=:B2)
Note
- cpu costing is off (consider enabling it)
30 rows selected.
The query completed and took 1 hr 47 min.
SQL> SQL> SQL> Updating CSBAN from TEMP table
old 1: UPDATE /*+ INDEX(acct_id,acct_seq_no) */ csban_&1 csb
new 1: UPDATE /*+ INDEX(acct_id,acct_seq_no) */ csban_1 csb
1611807 rows updated.
Elapsed: 01:47:16.40 -
Delete 50 Million records from a table with 60 Million records
Hi,
I'm using oracle9.2.0.7 on win2k3 32bit.
I need to delete 50M rows from a table that contains 60M records. This DB was just passed on to me. I tried a plain DELETE statement but it takes too long. From the articles and forums I have read, the best way to delete that many records is to create a temp table, transfer the data to keep into it, drop the big table, then rename the temp table to the big table's name. The key is creating an exact replica of the big table. I have the create table, index, and constraint scripts from the export file of my production DB, but I noticed that I do not have the grant scripts. Is there a view I could use to get them? Can dbms_metadata get this?
When I need to create an exact replica of my big table, I only need:
create table, indexes, constraints, and grants script right? Did I miss anything?
I just want to make sure that I haven't left anything out. Kindly help.
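The keep-and-swap approach described above can be sketched as follows (hedged; the table name and filter are placeholders, and indexes, constraints, grants, and triggers must be recreated on the new table around the rename):

```sql
-- Copy only the ~10M rows to keep, then swap the tables
CREATE TABLE big_table_keep NOLOGGING AS
  SELECT * FROM big_table WHERE keep_flag = 'Y';

-- (recreate indexes, constraints, grants, and triggers on big_table_keep)

DROP TABLE big_table;
RENAME big_table_keep TO big_table;
```

Creating 10M rows this way is usually far cheaper than deleting 50M, because no undo is generated for the rows you discard.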
Thanks and Best Regards

Can dbms_metadata get this?
Yes, dbms_metadata can get the grants.
YAS@10GR2 > select dbms_metadata.GET_DEPENDENT_DDL('OBJECT_GRANT','TEST') from dual;
DBMS_METADATA.GET_DEPENDENT_DDL('OBJECT_GRANT','TEST')
GRANT SELECT ON "YAS"."TEST" TO "SYS"
When I need to create an exact replica of my big table, I only need:
create table, indexes, constraints, and grants script right? Did I miss anything?
There are triggers, foreign keys referencing this table (which will not permit you to drop the table if you do not take care of them), snapshot logs on the table, snapshots based on the table, etc... -
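Those additional dependencies can be pulled the same way with dbms_metadata (a hedged sketch; TEST is the example table from above, and the object-type names follow the dbms_metadata documentation):

```sql
-- Triggers defined on the table
SELECT dbms_metadata.get_dependent_ddl('TRIGGER', 'TEST') FROM dual;

-- Referential constraints involving the table
SELECT dbms_metadata.get_dependent_ddl('REF_CONSTRAINT', 'TEST') FROM dual;
```

Note that these calls raise an error if no dependent object of that type exists, so wrap them accordingly when scripting.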
Importing a Partitioned Table with 10 Million Records.
I've been trying to import from a dump file using:
imp system/######@******** fromuser=fusr touser=tusr file=/f1/f2/expfl.dbf log=/o1/implg.log grants=N &
import done in US7ASCII character set and UTF8 NCHAR character set
import server uses UTF8 character set (possible charset conversion)
This contains a table 'Tab_Mil_Rec', with almost 10 million records and 10 partitions.
Done in 9i, on Solaris9.
Problem is that the process abruptly ends at 'Tab_Mil_Rec'. The table is created but nothing is imported. I checked the log file: it has logged events before this table, but nothing (not even errors or a termination message) after it. No errors were thrown at the OS level either, though I cannot be sure because this was run as a background job.
Can anybody guess what went wrong and what the next step is?

Hi,
Can you try importing this table partition by partition?
Cheers -
I am using the "fetch all" VI to do this, but it is retrieving one record at a time (as you can see if you examine the block diagram). How can I retrieve all records in a faster, more efficient manner?
If this isn't faster than your previous method, then I think you found the wrong example. If you have the Database Connectivity Toolkit installed, then go to the LabVIEW Help menu and select "Find Examples". It defaults to searching for tasks, so open the "Communicating with External Applications" and "Databases" and open the Read All Data. The List Column names just gives the correct header to the resulting table and is a fast operation. That's not what you are supposed to be looking at ... it's the DBTools Select All Data subVI that is the important one. If you open it and look at its diagram, you'll see that it uses a completely different set of ADO methods and properties to retrieve all the data.
-
Like % in a query running on an Oracle Apps table with 8 million records
I am running the below query. As per the explain plan it is using the index on organization_id and inventory_item_id.
select segment1 from mtl_system_items where organization_id = 100 and inventory_item_id like '123456%'
It takes about 15 minutes to run this query, which is a long time because it returns values to a front end built in ASP; the web page times out before the query completes. Do you have any suggestions on how to run this query faster?

It is an Oracle Apps table; below is the structure -
Name Null? Type
INVENTORY_ITEM_ID NOT NULL NUMBER
ORGANIZATION_ID NOT NULL NUMBER
LAST_UPDATE_DATE NOT NULL DATE
LAST_UPDATED_BY NOT NULL NUMBER
CREATION_DATE NOT NULL DATE
CREATED_BY NOT NULL NUMBER
LAST_UPDATE_LOGIN NUMBER
SUMMARY_FLAG NOT NULL VARCHAR2(1)
ENABLED_FLAG NOT NULL VARCHAR2(1)
START_DATE_ACTIVE DATE
END_DATE_ACTIVE DATE
DESCRIPTION VARCHAR2(240)
BUYER_ID NUMBER(9)
ACCOUNTING_RULE_ID NUMBER
INVOICING_RULE_ID NUMBER
SEGMENT1 VARCHAR2(40)
SEGMENT2 VARCHAR2(40)
SEGMENT3 VARCHAR2(40)
SEGMENT4 VARCHAR2(40)
SEGMENT5 VARCHAR2(40)
SEGMENT6 VARCHAR2(40)
SEGMENT7 VARCHAR2(40)
SEGMENT8 VARCHAR2(40)
SEGMENT9 VARCHAR2(40)
SEGMENT10 VARCHAR2(40)
SEGMENT11 VARCHAR2(40)
SEGMENT12 VARCHAR2(40)
SEGMENT13 VARCHAR2(40)
SEGMENT14 VARCHAR2(40)
SEGMENT15 VARCHAR2(40)
SEGMENT16 VARCHAR2(40)
SEGMENT17 VARCHAR2(40)
SEGMENT18 VARCHAR2(40)
SEGMENT19 VARCHAR2(40)
SEGMENT20 VARCHAR2(40)
ATTRIBUTE_CATEGORY VARCHAR2(30)
ATTRIBUTE1 VARCHAR2(150)
ATTRIBUTE2 VARCHAR2(150)
ATTRIBUTE3 VARCHAR2(150)
ATTRIBUTE4 VARCHAR2(150)
ATTRIBUTE5 VARCHAR2(150)
ATTRIBUTE6 VARCHAR2(150)
ATTRIBUTE7 VARCHAR2(150)
ATTRIBUTE8 VARCHAR2(150)
ATTRIBUTE9 VARCHAR2(150)
ATTRIBUTE10 VARCHAR2(150)
ATTRIBUTE11 VARCHAR2(150)
ATTRIBUTE12 VARCHAR2(150)
ATTRIBUTE13 VARCHAR2(150)
ATTRIBUTE14 VARCHAR2(150)
ATTRIBUTE15 VARCHAR2(150)
PURCHASING_ITEM_FLAG NOT NULL VARCHAR2(1)
SHIPPABLE_ITEM_FLAG NOT NULL VARCHAR2(1)
CUSTOMER_ORDER_FLAG NOT NULL VARCHAR2(1)
INTERNAL_ORDER_FLAG NOT NULL VARCHAR2(1)
SERVICE_ITEM_FLAG NOT NULL VARCHAR2(1)
INVENTORY_ITEM_FLAG NOT NULL VARCHAR2(1)
ENG_ITEM_FLAG NOT NULL VARCHAR2(1)
INVENTORY_ASSET_FLAG NOT NULL VARCHAR2(1)
PURCHASING_ENABLED_FLAG NOT NULL VARCHAR2(1)
CUSTOMER_ORDER_ENABLED_FLAG NOT NULL VARCHAR2(1)
INTERNAL_ORDER_ENABLED_FLAG NOT NULL VARCHAR2(1)
SO_TRANSACTIONS_FLAG NOT NULL VARCHAR2(1)
MTL_TRANSACTIONS_ENABLED_FLAG NOT NULL VARCHAR2(1)
STOCK_ENABLED_FLAG NOT NULL VARCHAR2(1)
BOM_ENABLED_FLAG NOT NULL VARCHAR2(1)
BUILD_IN_WIP_FLAG NOT NULL VARCHAR2(1)
REVISION_QTY_CONTROL_CODE NUMBER
ITEM_CATALOG_GROUP_ID NUMBER
CATALOG_STATUS_FLAG VARCHAR2(1)
RETURNABLE_FLAG VARCHAR2(1)
DEFAULT_SHIPPING_ORG NUMBER
COLLATERAL_FLAG VARCHAR2(1)
TAXABLE_FLAG VARCHAR2(1)
QTY_RCV_EXCEPTION_CODE VARCHAR2(25)
ALLOW_ITEM_DESC_UPDATE_FLAG VARCHAR2(1)
INSPECTION_REQUIRED_FLAG VARCHAR2(1)
RECEIPT_REQUIRED_FLAG VARCHAR2(1)
MARKET_PRICE NUMBER
HAZARD_CLASS_ID NUMBER
RFQ_REQUIRED_FLAG VARCHAR2(1)
QTY_RCV_TOLERANCE NUMBER
LIST_PRICE_PER_UNIT NUMBER
UN_NUMBER_ID NUMBER
PRICE_TOLERANCE_PERCENT NUMBER
ASSET_CATEGORY_ID NUMBER
ROUNDING_FACTOR NUMBER
UNIT_OF_ISSUE VARCHAR2(25)
ENFORCE_SHIP_TO_LOCATION_CODE VARCHAR2(25)
ALLOW_SUBSTITUTE_RECEIPTS_FLAG VARCHAR2(1)
ALLOW_UNORDERED_RECEIPTS_FLAG VARCHAR2(1)
ALLOW_EXPRESS_DELIVERY_FLAG VARCHAR2(1)
DAYS_EARLY_RECEIPT_ALLOWED NUMBER
DAYS_LATE_RECEIPT_ALLOWED NUMBER
RECEIPT_DAYS_EXCEPTION_CODE VARCHAR2(25)
RECEIVING_ROUTING_ID NUMBER
INVOICE_CLOSE_TOLERANCE NUMBER
RECEIVE_CLOSE_TOLERANCE NUMBER
AUTO_LOT_ALPHA_PREFIX VARCHAR2(30)
START_AUTO_LOT_NUMBER VARCHAR2(30)
LOT_CONTROL_CODE NUMBER
SHELF_LIFE_CODE NUMBER
SHELF_LIFE_DAYS NUMBER
SERIAL_NUMBER_CONTROL_CODE NUMBER
START_AUTO_SERIAL_NUMBER VARCHAR2(30)
AUTO_SERIAL_ALPHA_PREFIX VARCHAR2(30)
SOURCE_TYPE NUMBER
SOURCE_ORGANIZATION_ID NUMBER
SOURCE_SUBINVENTORY VARCHAR2(10)
EXPENSE_ACCOUNT NUMBER
ENCUMBRANCE_ACCOUNT NUMBER
RESTRICT_SUBINVENTORIES_CODE NUMBER
UNIT_WEIGHT NUMBER
WEIGHT_UOM_CODE VARCHAR2(3)
VOLUME_UOM_CODE VARCHAR2(3)
UNIT_VOLUME NUMBER
RESTRICT_LOCATORS_CODE NUMBER
LOCATION_CONTROL_CODE NUMBER
SHRINKAGE_RATE NUMBER
ACCEPTABLE_EARLY_DAYS NUMBER
PLANNING_TIME_FENCE_CODE NUMBER
DEMAND_TIME_FENCE_CODE NUMBER
LEAD_TIME_LOT_SIZE NUMBER
STD_LOT_SIZE NUMBER
CUM_MANUFACTURING_LEAD_TIME NUMBER
OVERRUN_PERCENTAGE NUMBER
MRP_CALCULATE_ATP_FLAG VARCHAR2(1)
ACCEPTABLE_RATE_INCREASE NUMBER
ACCEPTABLE_RATE_DECREASE NUMBER
CUMULATIVE_TOTAL_LEAD_TIME NUMBER
PLANNING_TIME_FENCE_DAYS NUMBER
DEMAND_TIME_FENCE_DAYS NUMBER
END_ASSEMBLY_PEGGING_FLAG VARCHAR2(1)
REPETITIVE_PLANNING_FLAG VARCHAR2(1)
PLANNING_EXCEPTION_SET VARCHAR2(10)
BOM_ITEM_TYPE NOT NULL NUMBER
PICK_COMPONENTS_FLAG NOT NULL VARCHAR2(1)
REPLENISH_TO_ORDER_FLAG NOT NULL VARCHAR2(1)
BASE_ITEM_ID NUMBER
ATP_COMPONENTS_FLAG NOT NULL VARCHAR2(1)
ATP_FLAG NOT NULL VARCHAR2(1)
FIXED_LEAD_TIME NUMBER
VARIABLE_LEAD_TIME NUMBER
WIP_SUPPLY_LOCATOR_ID NUMBER
WIP_SUPPLY_TYPE NUMBER
WIP_SUPPLY_SUBINVENTORY VARCHAR2(10)
PRIMARY_UOM_CODE VARCHAR2(3)
PRIMARY_UNIT_OF_MEASURE VARCHAR2(25)
ALLOWED_UNITS_LOOKUP_CODE NUMBER
COST_OF_SALES_ACCOUNT NUMBER
SALES_ACCOUNT NUMBER
DEFAULT_INCLUDE_IN_ROLLUP_FLAG VARCHAR2(1)
INVENTORY_ITEM_STATUS_CODE VARCHAR2(10)
INVENTORY_PLANNING_CODE NUMBER
PLANNER_CODE VARCHAR2(10)
PLANNING_MAKE_BUY_CODE NUMBER
FIXED_LOT_MULTIPLIER NUMBER
ROUNDING_CONTROL_TYPE NUMBER
CARRYING_COST NUMBER
POSTPROCESSING_LEAD_TIME NUMBER
PREPROCESSING_LEAD_TIME NUMBER
FULL_LEAD_TIME NUMBER
ORDER_COST NUMBER
MRP_SAFETY_STOCK_PERCENT NUMBER
MRP_SAFETY_STOCK_CODE NUMBER
MIN_MINMAX_QUANTITY NUMBER
MAX_MINMAX_QUANTITY NUMBER
MINIMUM_ORDER_QUANTITY NUMBER
FIXED_ORDER_QUANTITY NUMBER
FIXED_DAYS_SUPPLY NUMBER
MAXIMUM_ORDER_QUANTITY NUMBER
ATP_RULE_ID NUMBER
PICKING_RULE_ID NUMBER
RESERVABLE_TYPE NUMBER
POSITIVE_MEASUREMENT_ERROR NUMBER
NEGATIVE_MEASUREMENT_ERROR NUMBER
ENGINEERING_ECN_CODE VARCHAR2(50)
ENGINEERING_ITEM_ID NUMBER
ENGINEERING_DATE DATE
SERVICE_STARTING_DELAY NUMBER
VENDOR_WARRANTY_FLAG NOT NULL VARCHAR2(1)
SERVICEABLE_COMPONENT_FLAG VARCHAR2(1)
SERVICEABLE_PRODUCT_FLAG NOT NULL VARCHAR2(1)
BASE_WARRANTY_SERVICE_ID NUMBER
PAYMENT_TERMS_ID NUMBER
PREVENTIVE_MAINTENANCE_FLAG VARCHAR2(1)
PRIMARY_SPECIALIST_ID NUMBER
SECONDARY_SPECIALIST_ID NUMBER
SERVICEABLE_ITEM_CLASS_ID NUMBER
TIME_BILLABLE_FLAG VARCHAR2(1)
MATERIAL_BILLABLE_FLAG VARCHAR2(30)
EXPENSE_BILLABLE_FLAG VARCHAR2(1)
PRORATE_SERVICE_FLAG VARCHAR2(1)
COVERAGE_SCHEDULE_ID NUMBER
SERVICE_DURATION_PERIOD_CODE VARCHAR2(10)
SERVICE_DURATION NUMBER
WARRANTY_VENDOR_ID NUMBER
MAX_WARRANTY_AMOUNT NUMBER
RESPONSE_TIME_PERIOD_CODE VARCHAR2(30)
RESPONSE_TIME_VALUE NUMBER
NEW_REVISION_CODE VARCHAR2(30)
INVOICEABLE_ITEM_FLAG NOT NULL VARCHAR2(1)
TAX_CODE VARCHAR2(50)
INVOICE_ENABLED_FLAG NOT NULL VARCHAR2(1)
MUST_USE_APPROVED_VENDOR_FLAG NOT NULL VARCHAR2(1)
REQUEST_ID NUMBER
PROGRAM_APPLICATION_ID NUMBER
PROGRAM_ID NUMBER
PROGRAM_UPDATE_DATE DATE
OUTSIDE_OPERATION_FLAG NOT NULL VARCHAR2(1)
OUTSIDE_OPERATION_UOM_TYPE VARCHAR2(25)
SAFETY_STOCK_BUCKET_DAYS NUMBER
AUTO_REDUCE_MPS NUMBER(22)
COSTING_ENABLED_FLAG NOT NULL VARCHAR2(1)
AUTO_CREATED_CONFIG_FLAG NOT NULL VARCHAR2(1)
CYCLE_COUNT_ENABLED_FLAG NOT NULL VARCHAR2(1)
ITEM_TYPE VARCHAR2(30)
MODEL_CONFIG_CLAUSE_NAME VARCHAR2(10)
SHIP_MODEL_COMPLETE_FLAG VARCHAR2(1)
MRP_PLANNING_CODE NUMBER
RETURN_INSPECTION_REQUIREMENT NUMBER
ATO_FORECAST_CONTROL NUMBER
RELEASE_TIME_FENCE_CODE NUMBER
RELEASE_TIME_FENCE_DAYS NUMBER
CONTAINER_ITEM_FLAG VARCHAR2(1)
VEHICLE_ITEM_FLAG VARCHAR2(1)
MAXIMUM_LOAD_WEIGHT NUMBER
MINIMUM_FILL_PERCENT NUMBER
CONTAINER_TYPE_CODE VARCHAR2(30)
INTERNAL_VOLUME NUMBER
WH_UPDATE_DATE DATE
PRODUCT_FAMILY_ITEM_ID NUMBER
GLOBAL_ATTRIBUTE_CATEGORY VARCHAR2(150)
GLOBAL_ATTRIBUTE1 VARCHAR2(150)
GLOBAL_ATTRIBUTE2 VARCHAR2(150)
GLOBAL_ATTRIBUTE3 VARCHAR2(150)
GLOBAL_ATTRIBUTE4 VARCHAR2(150)
GLOBAL_ATTRIBUTE5 VARCHAR2(150)
GLOBAL_ATTRIBUTE6 VARCHAR2(150)
GLOBAL_ATTRIBUTE7 VARCHAR2(150)
GLOBAL_ATTRIBUTE8 VARCHAR2(150)
GLOBAL_ATTRIBUTE9 VARCHAR2(150)
GLOBAL_ATTRIBUTE10 VARCHAR2(150)
PURCHASING_TAX_CODE VARCHAR2(50)
The query is as below
select segment1 from mtl_system_items where organization_id = 100 and inventory_item_id like '123456%'
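One thing worth checking: INVENTORY_ITEM_ID is a NUMBER, so comparing it with LIKE forces an implicit character conversion on the column, which tends to defeat an index on that column. If the '123456%' pattern is really aimed at the item number, a hedged alternative is to filter on SEGMENT1 instead, since it is a VARCHAR2 and is the second column of the (ORGANIZATION_ID, SEGMENT1) index discussed in this thread:

```sql
-- LIKE on SEGMENT1 (a VARCHAR2) can use an index led by ORGANIZATION_ID
SELECT segment1
FROM   mtl_system_items
WHERE  organization_id = 100
AND    segment1 LIKE '123456%';
```

Whether this is equivalent depends on whether SEGMENT1 holds the same item numbers being matched; treat it as a direction to test, not a drop-in fix.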
The explain plan is as below -
Plan
SELECT STATEMENT RULE
2 TABLE ACCESS BY INDEX ROWID INV.MTL_SYSTEM_ITEMS
1 INDEX RANGE SCAN NON-UNIQUE INV.MTL_SYSTEM_ITEMS_N1
The INV.MTL_SYSTEM_ITEMS_N1 index is created on
ORGANIZATION_ID and SEGMENT1 -
SQL Query to fetch records from tables which have 75+ million records
Hi,
I have the explain plan for a SQL statement and would appreciate your suggestions for improving it.
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | 340 | 175K| 19075 |
| 1 | TEMP TABLE TRANSFORMATION | | | | |
| 2 | LOAD AS SELECT | | | | |
| 3 | SORT GROUP BY | | 32M| 1183M| 799K|
| 4 | TABLE ACCESS FULL | CLM_DETAIL_PRESTG | 135M| 4911M| 464K|
| 5 | LOAD AS SELECT | | | | |
| 6 | TABLE ACCESS FULL | CLM_HEADER_PRESTG | 1 | 274 | 246K|
PLAN_TABLE_OUTPUT
| 7 | LOAD AS SELECT | | | | |
| 8 | SORT UNIQUE | | 744K| 85M| 8100 |
| 9 | TABLE ACCESS FULL | DAILY_PROV_PRESTG | 744K| 85M| 1007 |
| 10 | UNION-ALL | | | | |
| 11 | SORT UNIQUE | | 177 | 97350 | 9539 |
| 12 | HASH JOIN | | 177 | 97350 | 9538 |
| 13 | HASH JOIN OUTER | | 3 | 1518 | 9533 |
| 14 | HASH JOIN | | 1 | 391 | 8966 |
| 15 | TABLE ACCESS BY INDEX ROWID | CLM_DETAIL_PRESTG | 1 | 27 | 3 |
| 16 | NESTED LOOPS | | 1 | 361 | 10 |
| 17 | NESTED LOOPS OUTER | | 1 | 334 | 7 |
PLAN_TABLE_OUTPUT
| 18 | NESTED LOOPS OUTER | | 1 | 291 | 4 |
| 19 | VIEW | | 1 | 259 | 2 |
| 20 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66C9_DA2D01AD | 1 | 269 | 2 |
| 21 | INDEX RANGE SCAN | CLM_PAYMNT_CLMEXT_PRESTG_IDX | 1 | 32 | 2 |
| 22 | TABLE ACCESS BY INDEX ROWID| CLM_PAYMNT_CHKEXT_PRESTG | 1 | 43 | 3 |
| 23 | INDEX RANGE SCAN | CLM_PAYMNT_CHKEXT_PRESTG_IDX | 1 | | 2 |
| 24 | INDEX RANGE SCAN | CLM_DETAIL_PRESTG_IDX | 6 | | 2 |
| 25 | VIEW | | 32M| 934M| 8235 |
| 26 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66C8_DA2D01AD | 32M| 934M| 8235 |
| 27 | VIEW | | 744K| 81M| 550 |
| 28 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66CA_DA2D01AD | 744K| 81M| 550 |
PLAN_TABLE_OUTPUT
| 29 | TABLE ACCESS FULL | CCP_MBRSHP_XREF | 5288 | 227K| 5 |
| 30 | SORT UNIQUE | | 163 | 82804 | 9536 |
| 31 | HASH JOIN | | 163 | 82804 | 9535 |
| 32 | HASH JOIN OUTER | | 3 | 1437 | 9530 |
| 33 | HASH JOIN | | 1 | 364 | 8963 |
| 34 | NESTED LOOPS OUTER | | 1 | 334 | 7 |
| 35 | NESTED LOOPS OUTER | | 1 | 291 | 4 |
| 36 | VIEW | | 1 | 259 | 2 |
| 37 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66C9_DA2D01AD | 1 | 269 | 2 |
| 38 | INDEX RANGE SCAN | CLM_PAYMNT_CLMEXT_PRESTG_IDX | 1 | 32 | 2 |
| 39 | TABLE ACCESS BY INDEX ROWID | CLM_PAYMNT_CHKEXT_PRESTG | 1 | 43 | 3 |
PLAN_TABLE_OUTPUT
| 40 | INDEX RANGE SCAN | CLM_PAYMNT_CHKEXT_PRESTG_IDX | 1 | | 2 |
| 41 | VIEW | | 32M| 934M| 8235 |
| 42 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66C8_DA2D01AD | 32M| 934M| 8235 |
| 43 | VIEW | | 744K| 81M| 550 |
| 44 | TABLE ACCESS FULL | SYS_TEMP_0FD9D66CA_DA2D01AD | 744K| 81M| 550 |
| 45 | TABLE ACCESS FULL | CCP_MBRSHP_XREF | 5288 | 149K| 5 |
The CLM_DETAIL_PRESTG table has 100 million records and the CLM_HEADER_PRESTG table has 75 million records.
Any suggestions on how to fetch huge record sets from tables of this size will help.
Regards,
Narayan

WITH CLAIM_DTL
AS ( SELECT
ICN_NUM,
MIN (FIRST_SRVC_DT) AS FIRST_SRVC_DT,
MAX (LAST_SRVC_DT) AS LAST_SRVC_DT,
MIN (PLC_OF_SRVC_CD) AS PLC_OF_SRVC_CD
FROM CCP_STG.CLM_DETAIL_PRESTG CD WHERE ACT_CD <>'D'
GROUP BY ICN_NUM),
CLAIM_HDR
AS (SELECT
ICN_NUM,
SBCR_ID,
MBR_ID,
MBR_FIRST_NAME,
MBR_MI,
MBR_LAST_NAME,
MBR_BIRTH_DATE,
GENDER_TYPE_CD,
SBCR_RLTNSHP_TYPE_CD,
SBCR_FIRST_NAME,
SBCR_MI,
SBCR_LAST_NAME,
SBCR_ADDR_LINE_1,
SBCR_ADDR_LINE2,
SBCR_ADDR_CITY,
SBCR_ADDR_STATE,
SBCR_ZIP_CD,
PRVDR_NUM,
CLM_PRCSSD_DT,
CLM_TYPE_CLASS_CD,
AUTHO_NUM,
TOT_BILLED_AMT,
HCFA_DRG_TYPE_CD,
FCLTY_ADMIT_DT,
ADMIT_TYPE,
DSCHRG_STATUS_CD,
FILE_BILLING_NPI,
CLAIM_LOCATION_CD,
CLM_RELATED_ICN_1,
SBCR_ID||0
|| MBR_ID
|| GENDER_TYPE_CD
|| SBCR_RLTNSHP_TYPE_CD
|| MBR_BIRTH_DATE
AS MBR_ENROLL_ID,
SUBSCR_INSGRP_NM ,
CAC,
PRVDR_PTNT_ACC_ID,
BILL_TYPE,
PAYEE_ASSGN_CODE,
CREAT_RUN_CYC_EXEC_SK,
PRESTG_INSRT_DT
FROM CCP_STG.CLM_HEADER_PRESTG P WHERE ACT_CD <>'D' AND SUBSTR(CLM_PRCSS_TYPE_CD,4,1) NOT IN ('1','2','3','4','5','6') ),
PROV AS ( SELECT DISTINCT
PROV_ID,
PROV_FST_NM,
PROV_MD_NM,
PROV_LST_NM,
PROV_BILL_ADR1,
PROV_BILL_CITY,
PROV_BILL_STATE,
PROV_BILL_ZIP,
CASE WHEN PROV_SEC_ID_QL='E' THEN PROV_SEC_ID
ELSE NULL
END AS PROV_SEC_ID,
PROV_ADR1,
PROV_CITY,
PROV_STATE,
PROV_ZIP
FROM CCP_STG.DAILY_PROV_PRESTG),
MBR_XREF AS (SELECT SUBSTR(MBR_ENROLL_ID,1,17)||DECODE ((SUBSTR(MBR_ENROLL_ID,18,1)),'E','1','S','2','D','3')||SUBSTR(MBR_ENROLL_ID,19) AS MBR_ENROLLL_ID,
NEW_MBR_FLG
FROM CCP_STG.CCP_MBRSHP_XREF)
SELECT DISTINCT CLAIM_HDR.ICN_NUM AS ICN_NUM,
CLAIM_HDR.SBCR_ID AS SBCR_ID,
CLAIM_HDR.MBR_ID AS MBR_ID,
CLAIM_HDR.MBR_FIRST_NAME AS MBR_FIRST_NAME,
CLAIM_HDR.MBR_MI AS MBR_MI,
CLAIM_HDR.MBR_LAST_NAME AS MBR_LAST_NAME,
CLAIM_HDR.MBR_BIRTH_DATE AS MBR_BIRTH_DATE,
CLAIM_HDR.GENDER_TYPE_CD AS GENDER_TYPE_CD,
CLAIM_HDR.SBCR_RLTNSHP_TYPE_CD AS SBCR_RLTNSHP_TYPE_CD,
CLAIM_HDR.SBCR_FIRST_NAME AS SBCR_FIRST_NAME,
CLAIM_HDR.SBCR_MI AS SBCR_MI,
CLAIM_HDR.SBCR_LAST_NAME AS SBCR_LAST_NAME,
CLAIM_HDR.SBCR_ADDR_LINE_1 AS SBCR_ADDR_LINE_1,
CLAIM_HDR.SBCR_ADDR_LINE2 AS SBCR_ADDR_LINE2,
CLAIM_HDR.SBCR_ADDR_CITY AS SBCR_ADDR_CITY,
CLAIM_HDR.SBCR_ADDR_STATE AS SBCR_ADDR_STATE,
CLAIM_HDR.SBCR_ZIP_CD AS SBCR_ZIP_CD,
CLAIM_HDR.PRVDR_NUM AS PRVDR_NUM,
CLAIM_HDR.CLM_PRCSSD_DT AS CLM_PRCSSD_DT,
CLAIM_HDR.CLM_TYPE_CLASS_CD AS CLM_TYPE_CLASS_CD,
CLAIM_HDR.AUTHO_NUM AS AUTHO_NUM,
CLAIM_HDR.TOT_BILLED_AMT AS TOT_BILLED_AMT,
CLAIM_HDR.HCFA_DRG_TYPE_CD AS HCFA_DRG_TYPE_CD,
CLAIM_HDR.FCLTY_ADMIT_DT AS FCLTY_ADMIT_DT,
CLAIM_HDR.ADMIT_TYPE AS ADMIT_TYPE,
CLAIM_HDR.DSCHRG_STATUS_CD AS DSCHRG_STATUS_CD,
CLAIM_HDR.FILE_BILLING_NPI AS FILE_BILLING_NPI,
CLAIM_HDR.CLAIM_LOCATION_CD AS CLAIM_LOCATION_CD,
CLAIM_HDR.CLM_RELATED_ICN_1 AS CLM_RELATED_ICN_1,
CLAIM_HDR.SUBSCR_INSGRP_NM,
CLAIM_HDR.CAC,
CLAIM_HDR.PRVDR_PTNT_ACC_ID,
CLAIM_HDR.BILL_TYPE,
CLAIM_DTL.FIRST_SRVC_DT AS FIRST_SRVC_DT,
CLAIM_DTL.LAST_SRVC_DT AS LAST_SRVC_DT,
CLAIM_DTL.PLC_OF_SRVC_CD AS PLC_OF_SRVC_CD,
PROV.PROV_LST_NM AS BILL_PROV_LST_NM,
PROV.PROV_FST_NM AS BILL_PROV_FST_NM,
PROV.PROV_MD_NM AS BILL_PROV_MID_NM,
PROV.PROV_BILL_ADR1 AS BILL_PROV_ADDR1,
PROV.PROV_BILL_CITY AS BILL_PROV_CITY,
PROV.PROV_BILL_STATE AS BILL_PROV_STATE,
PROV.PROV_BILL_ZIP AS BILL_PROV_ZIP,
PROV.PROV_SEC_ID AS BILL_PROV_EIN,
PROV.PROV_ID AS SERV_FAC_ID ,
PROV.PROV_ADR1 AS SERV_FAC_ADDR1 ,
PROV.PROV_CITY AS SERV_FAC_CITY ,
PROV.PROV_STATE AS SERV_FAC_STATE ,
PROV.PROV_ZIP AS SERV_FAC_ZIP ,
CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_1,
CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_2,
CHK_PAYMNT.CLM_PMT_PAYEE_CITY,
CHK_PAYMNT.CLM_PMT_PAYEE_STATE_CD,
CHK_PAYMNT.CLM_PMT_PAYEE_POSTAL_CD,
CLAIM_HDR.CREAT_RUN_CYC_EXEC_SK
FROM CLAIM_DTL,(select * FROM CCP_STG.CLM_DETAIL_PRESTG WHERE ACT_CD <>'D') CLM_DETAIL_PRESTG, CLAIM_HDR,CCP_STG.MBR_XREF,PROV,CCP_STG.CLM_PAYMNT_CLMEXT_PRESTG CLM_PAYMNT,CCP_STG.CLM_PAYMNT_CHKEXT_PRESTG CHK_PAYMNT
WHERE
CLAIM_HDR.ICN_NUM = CLM_DETAIL_PRESTG.ICN_NUM
AND CLAIM_HDR.ICN_NUM = CLAIM_DTL.ICN_NUM
AND CLAIM_HDR.ICN_NUM=CLM_PAYMNT.ICN_NUM(+)
AND CLM_PAYMNT.CLM_PMT_CHCK_ACCT=CHK_PAYMNT.CLM_PMT_CHCK_ACCT
AND CLM_PAYMNT.CLM_PMT_CHCK_NUM=CHK_PAYMNT.CLM_PMT_CHCK_NUM
AND CLAIM_HDR.MBR_ENROLL_ID = MBR_XREF.MBR_ENROLLL_ID
AND CLM_DETAIL_PRESTG.FIRST_SRVC_DT >= 20110101
AND MBR_XREF.NEW_MBR_FLG = 'Y'
AND PROV.PROV_ID(+)=SUBSTR(CLAIM_HDR.PRVDR_NUM,6)
AND MOD(SUBSTR(CLAIM_HDR.ICN_NUM,14,2),2)=0
UNION ALL
SELECT DISTINCT CLAIM_HDR.ICN_NUM AS ICN_NUM,
CLAIM_HDR.SBCR_ID AS SBCR_ID,
CLAIM_HDR.MBR_ID AS MBR_ID,
CLAIM_HDR.MBR_FIRST_NAME AS MBR_FIRST_NAME,
CLAIM_HDR.MBR_MI AS MBR_MI,
CLAIM_HDR.MBR_LAST_NAME AS MBR_LAST_NAME,
CLAIM_HDR.MBR_BIRTH_DATE AS MBR_BIRTH_DATE,
CLAIM_HDR.GENDER_TYPE_CD AS GENDER_TYPE_CD,
CLAIM_HDR.SBCR_RLTNSHP_TYPE_CD AS SBCR_RLTNSHP_TYPE_CD,
CLAIM_HDR.SBCR_FIRST_NAME AS SBCR_FIRST_NAME,
CLAIM_HDR.SBCR_MI AS SBCR_MI,
CLAIM_HDR.SBCR_LAST_NAME AS SBCR_LAST_NAME,
CLAIM_HDR.SBCR_ADDR_LINE_1 AS SBCR_ADDR_LINE_1,
CLAIM_HDR.SBCR_ADDR_LINE2 AS SBCR_ADDR_LINE2,
CLAIM_HDR.SBCR_ADDR_CITY AS SBCR_ADDR_CITY,
CLAIM_HDR.SBCR_ADDR_STATE AS SBCR_ADDR_STATE,
CLAIM_HDR.SBCR_ZIP_CD AS SBCR_ZIP_CD,
CLAIM_HDR.PRVDR_NUM AS PRVDR_NUM,
CLAIM_HDR.CLM_PRCSSD_DT AS CLM_PRCSSD_DT,
CLAIM_HDR.CLM_TYPE_CLASS_CD AS CLM_TYPE_CLASS_CD,
CLAIM_HDR.AUTHO_NUM AS AUTHO_NUM,
CLAIM_HDR.TOT_BILLED_AMT AS TOT_BILLED_AMT,
CLAIM_HDR.HCFA_DRG_TYPE_CD AS HCFA_DRG_TYPE_CD,
CLAIM_HDR.FCLTY_ADMIT_DT AS FCLTY_ADMIT_DT,
CLAIM_HDR.ADMIT_TYPE AS ADMIT_TYPE,
CLAIM_HDR.DSCHRG_STATUS_CD AS DSCHRG_STATUS_CD,
CLAIM_HDR.FILE_BILLING_NPI AS FILE_BILLING_NPI,
CLAIM_HDR.CLAIM_LOCATION_CD AS CLAIM_LOCATION_CD,
CLAIM_HDR.CLM_RELATED_ICN_1 AS CLM_RELATED_ICN_1,
CLAIM_HDR.SUBSCR_INSGRP_NM,
CLAIM_HDR.CAC,
CLAIM_HDR.PRVDR_PTNT_ACC_ID,
CLAIM_HDR.BILL_TYPE,
CLAIM_DTL.FIRST_SRVC_DT AS FIRST_SRVC_DT,
CLAIM_DTL.LAST_SRVC_DT AS LAST_SRVC_DT,
CLAIM_DTL.PLC_OF_SRVC_CD AS PLC_OF_SRVC_CD,
PROV.PROV_LST_NM AS BILL_PROV_LST_NM,
PROV.PROV_FST_NM AS BILL_PROV_FST_NM,
PROV.PROV_MD_NM AS BILL_PROV_MID_NM,
PROV.PROV_BILL_ADR1 AS BILL_PROV_ADDR1,
PROV.PROV_BILL_CITY AS BILL_PROV_CITY,
PROV.PROV_BILL_STATE AS BILL_PROV_STATE,
PROV.PROV_BILL_ZIP AS BILL_PROV_ZIP,
PROV.PROV_SEC_ID AS BILL_PROV_EIN,
PROV.PROV_ID AS SERV_FAC_ID ,
PROV.PROV_ADR1 AS SERV_FAC_ADDR1 ,
PROV.PROV_CITY AS SERV_FAC_CITY ,
PROV.PROV_STATE AS SERV_FAC_STATE ,
PROV.PROV_ZIP AS SERV_FAC_ZIP ,
CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_1,
CHK_PAYMNT.CLM_PMT_PAYEE_ADDR_LINE_2,
CHK_PAYMNT.CLM_PMT_PAYEE_CITY,
CHK_PAYMNT.CLM_PMT_PAYEE_STATE_CD,
CHK_PAYMNT.CLM_PMT_PAYEE_POSTAL_CD,
CLAIM_HDR.CREAT_RUN_CYC_EXEC_SK
FROM CLAIM_DTL, CLAIM_HDR,MBR_XREF,PROV,CCP_STG.CLM_PAYMNT_CLMEXT_PRESTG CLM_PAYMNT,CCP_STG.CLM_PAYMNT_CHKEXT_PRESTG CHK_PAYMNT
WHERE CLAIM_HDR.ICN_NUM = CLAIM_DTL.ICN_NUM
AND CLAIM_HDR.ICN_NUM=CLM_PAYMNT.ICN_NUM(+)
AND CLM_PAYMNT.CLM_PMT_CHCK_ACCT=CHK_PAYMNT.CLM_PMT_CHCK_ACCT
AND CLM_PAYMNT.CLM_PMT_CHCK_NUM=CHK_PAYMNT.CLM_PMT_CHCK_NUM
AND CLAIM_HDR.MBR_ENROLL_ID = MBR_XREF.MBR_ENROLLL_ID
-- AND TRUNC(CLAIM_HDR.PRESTG_INSRT_DT) = TRUNC(SYSDATE)
AND CLAIM_HDR.CREAT_RUN_CYC_EXEC_SK = 123638.000000000000000
AND MBR_XREF.NEW_MBR_FLG = 'N'
AND PROV.PROV_ID(+)=SUBSTR(CLAIM_HDR.PRVDR_NUM,6)
AND MOD(SUBSTR(CLAIM_HDR.ICN_NUM,14,2),2)=0; -
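The original question in this thread is about deleting records from the 12-million-row table described at the top. A common pattern on large Oracle tables is to delete in batches with periodic commits so that undo stays bounded. A minimal PL/SQL sketch, using the table name from the post but with a placeholder retention predicate and batch size (neither is from the thread):

```sql
-- Hypothetical sketch: batched delete with periodic commits.
-- The retention predicate and batch size are placeholders, not from the thread.
BEGIN
  LOOP
    DELETE FROM cdr_clms_admn.mdl_clm_pmt_ent_bak
     WHERE clm_pmt_chck_dt < ADD_MONTHS(SYSDATE, -24)  -- assumed retention rule
       AND ROWNUM <= 10000;                            -- batch size
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;                                            -- release undo per batch
  END LOOP;
  COMMIT;
END;
/
```

If most of the table is being removed, CTAS (create a new table keeping only the wanted rows, then swap) is usually faster than deleting row by row.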
I am designing a table, loading its data from several other tables via joins. It has a Status column that can take 16 different statuses, each driven by a condition against a different table, so the query ends up with 16 CASE branches.
My question is: what is the best way to write these cases so that all the conditions are satisfied and the data still loads quickly? The source tables are large, around 7 million records, and with CASE logic written this way the table gets scanned once per branch, about 16 times in total. How can I make this faster? Can anyone help me out?
Here is the code I have written to get the data from temp tables, which in turn are loaded from the 7-million-row table filtered to records from year 2013. It takes more than an hour to run. I am posting the part that runs slowly, mainly
the Status column logic.
SELECT
z.SYSTEMNAME
--,Case when ZXC.[Subsystem Name] <> 'NULL' Then zxc.[SubSystem Name]
--else NULL
--End AS SubSystemName
, CASE
WHEN z.TAX_ID IN
(SELECT DISTINCT zxc.TIN
FROM .dbo.SQS_Provider_Tracking zxc
WHERE zxc.[SubSystem Name] <> 'NULL')
THEN
(SELECT DISTINCT [Subsystem Name]
FROM .dbo.SQS_Provider_Tracking zxc
WHERE z.TAX_ID = zxc.TIN)
End As SubSYSTEMNAME
,z.PROVIDERNAME
,z.STATECODE
,z.TAX_ID
,z.SRC_PAR_CD
,SUM(z.SEQUEST_AMT) Actual_Sequestered_Amt
, CASE
WHEN z.SRC_PAR_CD IN ('E','O','S','W')
THEN 'Nonpar Waiver'
-- --Is Puerto Rico of Lifesynch
WHEN z.TAX_ID IN
(SELECT DISTINCT a.TAX_ID
FROM .dbo.SQS_NonPar_PR_LS_TINs a
WHERE a.Bucket <> 'Nonpar')
THEN
(SELECT DISTINCT a.Bucket
FROM .dbo.SQS_NonPar_PR_LS_TINs a
WHERE a.TAX_ID = z.TAX_ID)
--**Amendment Mailed**
WHEN z.TAX_ID IN
(SELECT DISTINCT b.PROV_TIN
FROM .dbo.SQS_Mailed_TINs_010614 b WITH (NOLOCK )
where not exists (select * from dbo.sqs_objector_TINs t where b.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN
(SELECT DISTINCT b.Mailing
FROM .dbo.SQS_Mailed_TINs_010614 b
WHERE z.TAX_ID = b.PROV_TIN)
-- --**Amendment Mailed Wave 3-5**
WHEN z.TAX_ID In
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Amendment Mailed (3rd Wave)'
and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN 'Amendment Mailed (3rd Wave)'
WHEN z.TAX_ID IN
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Amendment Mailed (4th Wave)'
and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN 'Amendment Mailed (4th Wave)'
WHEN z.TAX_ID IN
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Amendment Mailed (5th Wave)'
and not exists (select * from dbo.sqs_objector_TINs t where qz.PROV_TIN = t.prov_tin))
and z.Hosp_Ind = 'P'
THEN 'Amendment Mailed (5th Wave)'
-- --**Top Objecting Systems**
WHEN z.SYSTEMNAME IN
('ADVENTIST HEALTH SYSTEM','ASCENSION HEALTH ALLIANCE','AULTMAN HEALTH FOUNDATION','BANNER HEALTH SYSTEM')
THEN 'Top Objecting Systems'
WHEN z.TAX_ID IN
(SELECT DISTINCT
h.TAX_ID
FROM
#HIHO_Records h
INNER JOIN .dbo.SQS_Provider_Tracking obj
ON h.TAX_ID = obj.TIN
AND obj.[Objector?] = 'Top Objector'
WHERE z.TAX_ID = h.TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.Hosp_Ind = 'H'
THEN 'Top Objecting Systems'
-- --**Other Objecting Hospitals**
WHEN (z.TAX_ID IN
(SELECT DISTINCT
h.TAX_ID
FROM
#HIHO_Records h
INNER JOIN .dbo.SQS_Provider_Tracking obj
ON h.TAX_ID = obj.TIN
AND obj.[Objector?] = 'Objector'
WHERE z.TAX_ID = h.TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.Hosp_Ind = 'H')
THEN 'Other Objecting Hospitals'
-- --**Objecting Physicians**
WHEN (z.TAX_ID IN
(SELECT DISTINCT
obj.TIN
FROM .dbo.SQS_Provider_Tracking obj
WHERE obj.[Objector?] in ('Objector','Top Objector')
and z.TAX_ID = obj.TIN
and z.Hosp_Ind = 'P'))
THEN 'Objecting Physicians'
--****Rejecting Hospitals****
WHEN (z.TAX_ID IN
(SELECT DISTINCT
h.TAX_ID
FROM
#HIHO_Records h
INNER JOIN .dbo.SQS_Provider_Tracking obj
ON h.TAX_ID = obj.TIN
AND obj.[Objector?] = 'Rejector'
WHERE z.TAX_ID = h.TAX_ID
OR h.SMG_ID IS NOT NULL
)and z.Hosp_Ind = 'H')
THEN 'Rejecting Hospitals'
--****Rejecting Physicians****
WHEN
(z.TAX_ID IN
(SELECT DISTINCT
obj.TIN
FROM .dbo.SQS_Provider_Tracking obj
WHERE z.TAX_ID = obj.TIN
AND obj.[Objector?] = 'Rejector')
and z.Hosp_Ind = 'P')
THEN 'Rejecting Physicians'
----**********ALL OBJECTORS SHOULD HAVE BEEN BUCKETED AT THIS POINT IN THE QUERY**********
-- --**Non-Objecting Hospitals**
WHEN z.TAX_ID IN
(SELECT DISTINCT
h.TAX_ID
FROM
#HIHO_Records h
WHERE
(z.TAX_ID = h.TAX_ID)
OR h.SMG_ID IS NOT NULL)
and z.Hosp_Ind = 'H'
THEN 'Non-Objecting Hospitals'
-- **Outstanding Contracts for Review**
WHEN z.TAX_ID IN
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'Non-Objecting Bilateral Physicians'
AND z.TAX_ID = qz.PROV_TIN)
Then 'Non-Objecting Bilateral Physicians'
When z.TAX_ID in
(select distinct
p.TAX_ID
from dbo.SQS_CoC_Potential_Mail_List p
where p.amendmentrights <> 'Unilateral'
AND z.TAX_ID = p.TAX_ID)
THEN 'Non-Objecting Bilateral Physicians'
WHEN z.TAX_ID IN
(SELECT DISTINCT
qz.PROV_TIN
FROM
[SQS_Mailed_TINs] qz
where qz.Mailing = 'More Research Needed'
AND qz.PROV_TIN = z.TAX_ID)
THEN 'More Research Needed'
WHEN z.TAX_ID IN (SELECT DISTINCT qz.PROV_TIN FROM [SQS_Mailed_TINs] qz where qz.Mailing = 'Objector' AND qz.PROV_TIN = z.TAX_ID)
THEN 'ERROR'
else 'Market Review/Preparing to Mail'
END AS [STATUS Column]
Please suggest on this -
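One way to avoid scanning the large tables once per CASE branch is to join each lookup table a single time (a LEFT JOIN against a de-duplicated derived table) and branch on the joined columns instead of using correlated IN subqueries. A sketch of the idea for the first two buckets only; `base_table` is a placeholder since the FROM clause is not shown in the post, while the lookup table and column names are taken from the query above:

```sql
-- Sketch only: each lookup is joined once instead of being probed per row.
SELECT  z.SYSTEMNAME,
        pt.[Subsystem Name] AS SubSystemName,
        CASE
            WHEN z.SRC_PAR_CD IN ('E','O','S','W') THEN 'Nonpar Waiver'
            WHEN pr.Bucket IS NOT NULL             THEN pr.Bucket
            ELSE 'Market Review/Preparing to Mail'
        END AS [Status Column]
FROM    base_table z                 -- placeholder for the omitted FROM clause
LEFT JOIN (SELECT DISTINCT TIN, [Subsystem Name]
           FROM dbo.SQS_Provider_Tracking
           WHERE [SubSystem Name] <> 'NULL') pt
        ON pt.TIN = z.TAX_ID
LEFT JOIN (SELECT DISTINCT TAX_ID, Bucket
           FROM dbo.SQS_NonPar_PR_LS_TINs
           WHERE Bucket <> 'Nonpar') pr
        ON pr.TAX_ID = z.TAX_ID;
```

If a TIN can map to more than one subsystem name or bucket, the DISTINCT derived tables would duplicate rows; in that case add ROW_NUMBER() or an aggregate to force the join to be one-to-one.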
Problem with Fetching Million Records from Table COEP into an Internal Table
Hi Everyone ! Hope things are going well.
Table : COEP has 6 million records.
I am trying to get records based on certain criteria, that is, there are atleast 5 conditions in the WHERE clause.
I've noticed it takes about 15 minutes to populate the internal table. How can I improve the performance to under a minute for a fetch of 500 records from a database set of 6 million?
Regards,
Owais...
The first obvious suggestion would be to use the proper indexes. I had a similar issue with COVP, which is a join of COEP and COBK. I got a substantial performance improvement by adding "where LEDNR EQ '00'" in the where clause.
Here is my select:
SELECT kokrs
belnr
buzei
ebeln
ebelp
wkgbtr
refbn
bukrs
gjahr
FROM covp CLIENT SPECIFIED
INTO TABLE i_coep
FOR ALL ENTRIES IN i_objnr
WHERE mandt EQ sy-mandt
AND lednr EQ '00'
AND objnr = i_objnr-objnr
AND kokrs = c_conarea. -
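For context, the FOR ALL ENTRIES select above is executed as ordinary SQL against the database, roughly like the sketch below. The LEDNR EQ '00' predicate helps because, assuming an index whose leading columns are MANDT, LEDNR, OBJNR (the standard object-number index on COEP), all the leading columns are then constrained and the optimizer can use a tight index range scan:

```sql
-- Illustrative only: roughly the SQL the ABAP SELECT above generates.
-- FOR ALL ENTRIES becomes batched IN lists on OBJNR.
SELECT kokrs, belnr, buzei, ebeln, ebelp, wkgbtr, refbn, bukrs, gjahr
FROM   covp
WHERE  mandt = :sy_mandt
  AND  lednr = '00'                 -- constrains the leading index column
  AND  objnr IN (:o1, :o2, :o3)    -- one batch from i_objnr
  AND  kokrs = :c_conarea;
```

Two further FOR ALL ENTRIES habits that help: make sure i_objnr is never empty (an empty driver table selects everything), and sort/de-duplicate it so the generated IN lists are as small as possible.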
Selecting Records from 125 million record table to insert into smaller table
Oracle 11g
I have a large table of 125 million records - t3_universe. This table never gets updated or altered once loaded, but holds data that we receive from a lead company.
I need to select records from this large table that fit certain demographic criteria and insert those into a smaller table - T3_Leads - that will be updated with regard to when the lead is mailed and for other relevant information.
My question is what is the best (fastest) approach to select records from this 125 million record table to insert into the smaller table. I have tried a variety of things - views, materialized views, direct insert into smaller table...I think I am probably missing other approaches.
My current attempt has been to create a view using the query that selects the records, as shown below, then use a second query that inserts into T3_Leads from this view V_Market. This is very slow. Can I just use an INSERT INTO T3_Leads with this query? It did not seem to work with the WITH clause. My index on the large table is t3_universe_composite and includes zip_code, address_key, household_key.
CREATE VIEW V_Market as
WITH got_pairs AS (
SELECT /*+ INDEX_FFS(t3_universe t3_universe_composite) */ l.zip_code, l.zip_plus_4, l.p1_givenname, l.surname, l.address, l.city, l.state, l.household_key, l.hh_type as l_hh_type, l.address_key, l.narrowband_income, l.p1_ms, l.p1_gender, l.p1_exact_age, l.p1_personkey, e.hh_type as filler_data, l.p1_seq_no, l.p2_seq_no
, ROW_NUMBER () OVER ( PARTITION BY l.address_key
ORDER BY l.hh_verification_date DESC
) AS r_num
FROM t3_universe e
JOIN t3_universe l ON
l.address_key = e.address_key
AND l.zip_code = e.zip_code
AND l.p1_gender != e.p1_gender
AND l.household_key != e.household_key
AND l.hh_verification_date >= e.hh_verification_date
)
SELECT *
FROM got_pairs
where l_hh_type !=1 and l_hh_type !=2 and filler_data != 1 and filler_data != 2 and zip_code in (select * from M_mansfield_02048) and p1_exact_age BETWEEN 25 and 70 and narrowband_income >= '8' and r_num = 1
Then
INSERT INTO T3_leads(zip, zip4, firstname, lastname, address, city, state, household_key, hh_type, address_key, income, relationship_status, gender, age, person_key, filler_data, p1_seq_no, p2_seq_no)
select zip_code, zip_plus_4, p1_givenname, surname, address, city, state, household_key, l_hh_type, address_key, narrowband_income, p1_ms, p1_gender, p1_exact_age, p1_personkey, filler_data, p1_seq_no, p2_seq_no
from V_Market;
I had no trouble creating the view (after adding the missing parentheses around the WITH subquery). However, be careful here:
and zip_code in (select * from M_mansfield_02048)
You should name the column explicitly rather than select *. (do you really have separate tables for different zip codes?)
About the performance, it's hard to tell because you haven't posted anything we can use, like explain plans or traces but simply encapsulating your query into a view is not likely to make it any faster.
Depending on the size of the subset of rows you're selecting, the /*+ INDEX hint may be doing you more harm than good. -
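On the "did not seem to work with the WITH clause" point: in Oracle the WITH clause belongs to the subquery, so it goes after INSERT INTO table (columns) and immediately before the SELECT, not in front of INSERT. A sketch abbreviating the column lists from the post (the APPEND hint is an optional direct-path-load suggestion, not something the poster used):

```sql
-- Sketch: WITH is part of the subquery, so it follows the INSERT INTO clause.
INSERT /*+ APPEND */ INTO t3_leads (zip, zip4 /* ... remaining columns ... */)
WITH got_pairs AS (
    SELECT l.zip_code, l.zip_plus_4, /* ... other columns ... */
           ROW_NUMBER() OVER (PARTITION BY l.address_key
                              ORDER BY l.hh_verification_date DESC) AS r_num
    FROM   t3_universe e
    JOIN   t3_universe l ON l.address_key = e.address_key
    /* ... remaining join conditions from the post ... */
)
SELECT zip_code, zip_plus_4 /* ... */
FROM   got_pairs
WHERE  r_num = 1;
```

This removes the view layer entirely, and with APPEND the rows are written above the high-water mark, which is usually the fastest option for a large one-off load.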
Creating table to hold 10 million records
What should be the TABLESPACE,PCTUSED,PCTFREE,INITRANS, MAXTRANS, STORAGE details for creating a table to hold 10 million records.
TABLESPACE: A tablespace big enough to hold 10 million rows. You may decide to have a separate tablespace for a big table, you may not.
PCTUSED: Are these records likely to be deleted?
PCTFREE: Are these records likely to be updated?
INITRANS, MAXTRANS: How many concurrent users are likely to be working with these records?
STORAGE: Do you want to override the default storage values of the tablespace you are using?
In short, these questions can only be answered by somebody who understands your application, i.e. you. The required values of these parameters have little to do with the fact that the table has 10 million rows. You would need to answer these same questions for a table that held only ten thousand rows.
Cheers, APC
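For reference, this is where those clauses sit syntactically in the DDL. The values below are placeholders, not recommendations, for exactly the reasons given above. (Also note that in 11g MAXTRANS is deprecated and silently ignored, and PCTUSED is ignored in ASSM tablespaces.)

```sql
-- Placeholder values only: the right settings depend on your workload.
CREATE TABLE big_table (
    id   NUMBER(12)     NOT NULL,
    data VARCHAR2(100)
)
TABLESPACE big_data_ts             -- sized for the expected data volume
PCTFREE   10                       -- block space reserved for row updates
PCTUSED   40                       -- refill threshold; ignored under ASSM
INITRANS  2                        -- initial transaction slots per block
STORAGE  (INITIAL 100M NEXT 100M); -- overrides the tablespace defaults
```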