Implementing SCD Type 2 using SQL MERGE
Hello,
I'm working on implementing SCD Type 2 to keep track of changes in dim_A, which has the following fields:
Table A
VQ_dim_Id, VQ_Name, Contact_Type, Center_Node_Id, Sed_id, Eed_Id, Insert_date, IsCurrent
Table B
CT_Number, Virtual_Queue, Center_Node_ID, Center, Contact_Type
Table B acts as the source for determining the updates and inserts into Table A, with a join condition on VQ_Name = Virtual_Queue (which is bad, but the business rules give no way to match by ID).
My question: I can update VQ_Name, Contact_Type, and Center_Node_Id from the source (Table B). Is there a best practice for also populating EED_Id and SED_id when they are missing from the source? Any code snippet implementing SCD with this kind of functionality would be helpful!
Thank you ,
Vishal.
You can use
WHEN NOT MATCHED THEN
INSERT .......
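A minimal sketch of that idea, assuming the dim_A / Table B column names from the question, and assuming (since the post doesn't say) that a missing SED should default to today's date key and a missing EED to an open-ended sentinel:

```sql
-- Hypothetical sketch only; dim_A / Table_B names come from the question,
-- the defaulting rules (today / 20700101) are assumptions.
MERGE dbo.dim_A AS tgt
USING (
    SELECT Virtual_Queue  AS VQ_Name,
           Contact_Type,
           Center_Node_ID,
           -- Default the effective-date keys when the source does not supply them:
           CONVERT(INT, CONVERT(VARCHAR(8), GETDATE(), 112)) AS Sed_Id,  -- today as YYYYMMDD
           20700101                                          AS Eed_Id   -- open-ended sentinel
    FROM dbo.Table_B
) AS src
    ON tgt.VQ_Name = src.VQ_Name          -- joining on the name, per the business rule
WHEN MATCHED AND tgt.IsCurrent = 1
     AND (tgt.Contact_Type   <> src.Contact_Type
       OR tgt.Center_Node_Id <> src.Center_Node_ID)
    THEN UPDATE SET tgt.IsCurrent = 0,
                    tgt.Eed_Id    = CONVERT(INT, CONVERT(VARCHAR(8), GETDATE(), 112))
WHEN NOT MATCHED BY TARGET
    THEN INSERT (VQ_Name, Contact_Type, Center_Node_Id, Sed_Id, Eed_Id, Insert_date, IsCurrent)
         VALUES (src.VQ_Name, src.Contact_Type, src.Center_Node_ID,
                 src.Sed_Id, src.Eed_Id, GETDATE(), 1);
```

Note that a single MERGE can expire the old row but cannot also insert its replacement in the same statement; the OUTPUT-into-INSERT wrapper shown in the linked tip below handles that second step.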
A good example of an SCD Type 2 MERGE can be found here:
http://www.mssqltips.com/sqlservertip/2883/using-the-sql-server-merge-statement-to-process-type-2-slowly-changing-dimensions/
Similar Messages
-
Implementing SCD Type 2 without MERGE
Hi ,
While using MERGE for handling SCD Types 1/2 is ideal, I'd like to implement SCD Type 2 without using MERGE. Can anyone post some good blogs/references on achieving this without MERGE?
Thanks,
Vishal.
Have a look at these links, please:
sql server 2005 - T-SQL Syntax for SCD Type 2
SSIS - Using Checksum
to Load Data into Slowly Changing Dimensions
sqldevelop.wordpress.com -
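For reference, the MERGE-free pattern those links describe boils down to an expire-then-insert pair of statements. A sketch, reusing the dimension and source names from the first post (all names and the date-key defaults are assumptions):

```sql
-- Hedged sketch of SCD2 without MERGE: expire changed rows, then insert new versions.
BEGIN TRANSACTION;

-- 1) Expire the current row wherever a tracked attribute changed.
UPDATE d
SET    d.IsCurrent = 0,
       d.Eed_Id    = CONVERT(INT, CONVERT(VARCHAR(8), GETDATE(), 112))
FROM   dbo.dim_A  AS d
JOIN   dbo.Table_B AS s ON s.Virtual_Queue = d.VQ_Name
WHERE  d.IsCurrent = 1
  AND (d.Contact_Type <> s.Contact_Type OR d.Center_Node_Id <> s.Center_Node_ID);

-- 2) Insert a new current row for every source row that has no open version.
INSERT dbo.dim_A (VQ_Name, Contact_Type, Center_Node_Id, Sed_Id, Eed_Id, Insert_date, IsCurrent)
SELECT s.Virtual_Queue, s.Contact_Type, s.Center_Node_ID,
       CONVERT(INT, CONVERT(VARCHAR(8), GETDATE(), 112)),  -- SED = today
       20700101,                                           -- open-ended EED sentinel
       GETDATE(), 1
FROM   dbo.Table_B AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.dim_A AS d
                   WHERE d.VQ_Name = s.Virtual_Queue AND d.IsCurrent = 1);

COMMIT;
```

Wrapping both statements in one transaction keeps the dimension consistent if the second statement fails.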
Error in MERGE statement when trying to implement SCD Type 2 using MERGE...
Hi ,
I'm trying to implement SCD Type 2 using MERGE, using the blog below as a reference, but somehow it errors out with:
http://www.made2mentor.com/2013/08/how-to-load-slowly-changing-dimensions-using-t-sql-merge/
Msg 207, Level 16, State 1, Line 40
Invalid column name 'Current'.
Msg 207, Level 16, State 1, Line 38
Invalid column name 'Current'.
Msg 207, Level 16, State 1, Line 47
Invalid column name 'Current'.
Here is the code:
--Create temporary table to hold dimension records
IF OBJECT_ID('tempdb..#DimVirtualQueue') IS NOT NULL
DROP TABLE #DimVirtualQueue;
CREATE TABLE #DimVirtualQueue
( [VQ_name] [varchar](50) NULL,
[contact_type] [varchar](50) NULL,
[center_node_id] [int] NULL,
[sed_id] [datetime] NULL,
[eed_id] [datetime] NULL,
[insert_date] [datetime] NULL,
[Current] [char](1) NOT NULL
);
INSERT INTO #DimVirtualQueue(VQ_name, contact_type, center_node_id, sed_id, eed_id, insert_date, [Current])
SELECT VQ_name, contact_type, center_node_id, sed_id , eed_id,GETDATE(),'Y'
FROM
( --Declare Source and Target tables.
MERGE dbo.tblSwDM_dim_VQ_test AS TARGET
--Source
USING (SELECT
RTRIM(LTRIM(Stage.RESOURCE_NAME)) AS VQ_name,
'Unknown' AS contact_type,
0 AS center_node_id,
CONVERT(INT,CONVERT(VARCHAR(8),GMT_START_TIME,112)) AS sed_id,
CONVERT(INT,CONVERT(VARCHAR(8),ISNULL(GMT_END_TIME,'2070-01-01'),112)) AS eed_id,
GETDATE() AS insert_date
FROM dbo.tblGenesys_stg_RESOURCE_ Stage
WHERE resource_type = 'queue'
AND resource_subtype = 'VirtualQueue'
AND NOT EXISTS (SELECT 1 FROM dbo.tblSwDM_dim_VQ AS dim
WHERE RTRIM(LTRIM(stage.RESOURCE_NAME)) = RTRIM(LTRIM(dim.vq_name))) ) SOURCE
ON TARGET.VQ_name = SOURCE.VQ_name
WHEN NOT MATCHED BY TARGET
THEN
INSERT ( VQ_name, contact_type, center_node_id, sed_id, eed_id, insert_date,[Current] )
VALUES (SOURCE.VQ_name,SOURCE.contact_type,SOURCE.center_node_id,SOURCE.sed_id,SOURCE.eed_id,SOURCE.insert_date,'Y')
WHEN MATCHED AND TARGET.[Current] = 'Y'
AND EXISTS (
SELECT SOURCE.VQ_name
EXCEPT
SELECT TARGET.VQ_name
)
--Expire the records in target if they exist in source.
THEN UPDATE SET TARGET.[Current] = 'N',
TARGET.[eed_id] = SOURCE.eed_id
OUTPUT $Action ActionOut, SOURCE.VQ_name,SOURCE.contact_type,SOURCE.center_node_id,SOURCE.sed_id,SOURCE.eed_id) AS MergeOut
WHERE MergeOut.ActionOut = 'UPDATE';
--Insert data into dimension
INSERT tblSwDM_dim_VQ_test
SELECT VQ_name,contact_type,center_node_id,sed_id,eed_id,insert_date,[Current] FROM #DimVirtualQueue
Any help resolving this issue is appreciated...
Thanks,
Vishal.
You need to show the DDL of your target table, dbo.tblSwDM_dim_VQ_test.
Do you have a column named [Current] in that table? -
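The Msg 207 errors suggest exactly that: the temp table declares [Current], but the MERGE references TARGET.[Current] on a table that may not have it. If the target was indeed created without the flag column, a sketch of the fix (the constraint name and default value are assumptions):

```sql
-- Add the missing SCD2 flag column to the target; backfills existing rows to 'Y'.
ALTER TABLE dbo.tblSwDM_dim_VQ_test
    ADD [Current] CHAR(1) NOT NULL
        CONSTRAINT DF_tblSwDM_dim_VQ_test_Current DEFAULT 'Y';
```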
External Content Type Using SQL
I'm trying to set up an external content type using just SharePoint Designer.
I will need to be able to show other individual within the organization how to create these content types so getting it to work with just SharePoint designer is the goal.
No Programming involved please.
Abbreviated Steps I have taken
The account that we will use to connect to 2012 SQL server has been created and the correct permissions have been granted to the database.
The secure store service account has been created and setup
Go into SharePoint Designer 2010 to create the external content type using an
impersonated custom identity, inputting the username and password that was set up in SQL Server and added to the Secure Store credentials for the Secure Store Service. I receive the error "cannot login with the provided credentials."
I have searched the web trying to correct this issue and cannot find anything that will assist.
Everything either shows how to connect with a Windows identity (not an option) or breezes past this issue.
To connect as an impersonated custom identity you have to:
1. Be sure SQL login on SQL Server uses SQL authentication
(not Windows one!)
2. Create target application in Secure Store with:
a. Target app type - "Group"
b. Field types - Username and
Password
3. Don't forget to grant the required permissions to the new Target Application.
Here is a good guide (look at steps 4-8):
http://lightningtools.com/bcs_meta_man/sharepoint-2010-secure-store-service-and-oracle/ -
How to read a varbinary data type using SQL Server
Hello,
I converted a text file's data into varbinary format and stored it in a DB table. Now I need to read the varbinary column and store the data in a temp table in SQL Server.
Can someone help with this?
Regards,
Praveen
I understand this question is related to your previous thread, and I believe what Erland suggested is the best way to implement this.
https://social.msdn.microsoft.com/Forums/en-US/a96a8952-0378-4238-9d9d-85b053182174/send-direct-text-file-as-a-input-parameter-to-sp?forum=transactsql
I believe that when the user uploads the data, it gets uploaded onto the application server. You could then open the file, import it into SQL Server (all using application code), and do the manipulations. I believe this is the better way of handling this.
Also what is the expected file size and what is the kind of data you are expecting?
Satheesh
My Blog | How to ask questions in technical forum -
Hi ,
I'm trying to implement SCD Type 2 using MERGE, but unlike a typical MERGE where you have one target and one source table, my inserts come from one table and my updates/changes are determined from another. I'm having an issue with the updates.
below is structure of three tables :
Dimension Table :
VQ_id, VQ_name,
contact_type, center_node_id,
sed_id, eed_id,
IsCurrent, insert_date
VQ_Id is the dimension ID based on which inserts and updates are determined.
VQ_Name : type 1 change
Contact_type , Center_node_ID : type 2 changes
IsCurrent : flag
sed_id, eed_id : start and end effective date IDs
Insert table :
VQ_id,VQ_Name ,Contact_Type , Center_node_ID , Sed_id , eed_id , Insert_date
From the above table, new records are determined based on VQ_ID.
Updates/history records :
Type 2 changes are tracked based on the table below.
VQ_ID, contact_type,
center_node_id, Start_Effective_Date,
CT_ID, Submit_Date
Based on VQ_ID, contact_type, center_node_id, and
Start_Effective_Date, the end effective date is determined.
Any help in this regard is appreciated...
Thanks ,
Vishal.
-- This is the dimension table
CREATE TABLE [dbo].[tblSwDM_dim_VQ](
[VQ_dim_id] [int] IDENTITY(1,1) NOT NULL,
[VQ_id] [int] NOT NULL,
[VQ_name] [varchar](50) NOT NULL,
[contact_type] [varchar](50) NULL,
[center_node_id] [int] NULL,
[sed_id] [int] NULL,
[eed_id] [int] NULL,
[IsCurrent] [bit] NOT NULL,
[insert_date] [datetime] NULL,
[Start_Effective_Date] AS (CONVERT([datetime],CONVERT([varchar](8),[sed_id],(0)),(0))),
[End_Effective_Date] AS (CONVERT([datetime],CONVERT([varchar](8),[eed_id],(0)),(0))),
CONSTRAINT [Pk_tblswDM_dim_VQ] PRIMARY KEY CLUSTERED
(
[VQ_dim_id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
-- This is where updates/type 2 changes would be loaded from
CREATE TABLE [dbo].[tblSwDM_stg_change_control_next_gen](
[row_id] [int] IDENTITY(1,1) NOT NULL,
[VQ_id] [int] NOT NULL,
[contact_type] [varchar](50) NOT NULL,
[center_node_id] [int] NOT NULL,
[Start_Effective_Date] [datetime] NOT NULL,
[CT_ID] [int] NULL,
[Submit_Date] [datetime] NOT NULL,
[isValid] [bit] NULL,
[Remarks] [varchar](100) NULL
) ON [PRIMARY]
Example...
For a record in the dimension table [dbo].[tblSwDM_dim_VQ]...
Before Updates :
VQ_dim_id VQ_id VQ_name contact_type center_node_id sed_id eed_id IsCurrent insert_date Start_Effective_Date End_Effective_Date
2203 376946 Fraud_Span_Det_VQ RFD USCC Detection 4536 20131018 20700101 1 2014-03-21 12:02:42.750 2013-10-18 00:00:00.000 2070-01-01 00:00:00.000
Final Result :
VQ_dim_id VQ_id VQ_name contact_type center_node_id sed_id eed_id IsCurrent insert_date Start_Effective_Date End_Effective_Date
2203 376946 Fraud_Span_Det_VQ RFD USCC Detection 4536 20131018 20140423 0 2014-03-21 12:02:42.750 2013-10-18 00:00:00.000 2014-04-23 00:00:00.000
2605 376946 Fraud_Span_Det_VQ RFS USCC Spanish 4537 20140424 20700101 1 2014-05-07 11:51:00.543 2014-04-24 00:00:00.000 2070-01-01 00:00:00.000 -
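The before/after rows above can be produced with a two-statement pattern rather than a single MERGE, since the inserts and the type 2 changes come from different tables. A hypothetical sketch using the DDL above; the date arithmetic (new SED = change date, old EED = the day before) matches the example rows but is otherwise an assumption:

```sql
-- 1) Expire the open dimension row for each changed member.
UPDATE d
SET    d.IsCurrent = 0,
       d.eed_id    = CONVERT(INT, CONVERT(VARCHAR(8),
                        DATEADD(DAY, -1, c.Start_Effective_Date), 112))
FROM   dbo.tblSwDM_dim_VQ AS d
JOIN   dbo.tblSwDM_stg_change_control_next_gen AS c ON c.VQ_id = d.VQ_id
WHERE  d.IsCurrent = 1
  AND (d.contact_type <> c.contact_type OR d.center_node_id <> c.center_node_id);

-- 2) Insert the new current version from the change-control table.
INSERT dbo.tblSwDM_dim_VQ
       (VQ_id, VQ_name, contact_type, center_node_id, sed_id, eed_id, IsCurrent, insert_date)
SELECT c.VQ_id,
       (SELECT MAX(VQ_name) FROM dbo.tblSwDM_dim_VQ WHERE VQ_id = c.VQ_id),  -- carry the type-1 name forward
       c.contact_type, c.center_node_id,
       CONVERT(INT, CONVERT(VARCHAR(8), c.Start_Effective_Date, 112)),
       20700101, 1, GETDATE()
FROM   dbo.tblSwDM_stg_change_control_next_gen AS c;
```

Filtering the staging table to valid, not-yet-applied rows (e.g. via isValid) is left out for brevity.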
I want to get all the documents of a given content type using a SQL Server query. I know that querying the content database without using the API is not advisable, but I still want to perform this action through a SQL Server query. Can someone assist?
You're right, it's not advisable - it may result in corruption of your databases and might impact performance and stability. But assuming you're happy with that, then it is possible.
Before you go down that route, have you considered using something safer, like PowerShell? I've seen a script exactly like the one you describe, and it would take far less time to do it through PS than through SQL. -
SQL MERGE and "after insert or update on ... for each row" fires too often?
Hello,
there is a base table, which has a companion history table
- lets say USER_DATA & USER_DATA_HIST.
For each update on USER_DATA, the old state of the USER_DATA record has to be recorded in USER_DATA_HIST (as a new insert)
- to keep a history of changes to USER_DATA.
The first approach was to do the insert in a row trigger:
trigger user_data_tr_aiu after insert or update on user_data for each row
But the performance was bad, because a bulk update to USER_DATA resulted in individual inserts per record.
So I tried a trick:
Instead of doing the real insert into USER_DATA_HIST, I collect the USER_DATA_HIST data into a PL/SQL collection first.
Later, I do a bulk insert from the collection into the USER_DATA_HIST table with a statement trigger:
trigger user_data_tr_ra after insert or update on user_data
But sometimes I notice that the list of entries saved in the PL/SQL collection contains more rows than the number of USER_DATA records being updated.
(BTW, for the update I use SQL MERGE, because it's driven by another table.)
Since there is a unique tracking_id in each USER_DATA record, I could identify that there are duplicates.
If I sort by tracking_id and remove duplicates, I get exactly the number of records updated by the SQL MERGE.
So how come there are duplicates?
I can try to make a sample SQL*Plus program, but it will take some time.
But maybe somebody already knows about some issues here(?!)
- many thanks!
best regards,
Frank
Hello
Not sure really. Although it shouldn't take long to do a test case - it only took me 10 mins....
SQL>
SQL> create table USER_DATA
2 ( id number,
3 col1 varchar2(100)
4 )
5 /
Table created.
SQL>
SQL> CREATE TABLE USER_DATA_HIST
2 ( id number,
3 col1 varchar2(100),
4 tmsp timestamp
5 )
6 /
Table created.
SQL>
SQL> CREATE OR REPLACE PACKAGE pkg_audit_user_data
2 IS
3
4 PROCEDURE p_Init;
5
6 PROCEDURE p_Log
7 ( air_UserData IN user_data%ROWTYPE
8 );
9
10 PROCEDURE p_Write;
11 END;
12 /
Package created.
SQL> CREATE OR REPLACE PACKAGE BODY pkg_audit_user_data
2 IS
3
4 TYPE tt_UserData IS TABLE OF user_data_hist%ROWTYPE INDEX BY BINARY_INTEGER;
5
6 pt_UserData tt_UserData;
7
8 PROCEDURE p_Init
9 IS
10
11 BEGIN
12
13
14 IF pt_UserData.COUNT > 0 THEN
15
16 pt_UserData.DELETE;
17
18 END IF;
19
20 END;
21
22 PROCEDURE p_Log
23 ( air_UserData IN user_data%ROWTYPE
24 )
25 IS
26 ln_Idx BINARY_INTEGER;
27
28 BEGIN
29
30 ln_Idx := pt_UserData.COUNT + 1;
31
32 pt_UserData(ln_Idx).id := air_UserData.id;
33 pt_UserData(ln_Idx).col1 := air_UserData.col1;
34 pt_UserData(ln_Idx).tmsp := SYSTIMESTAMP;
35
36 END;
37
38 PROCEDURE p_Write
39 IS
40
41 BEGIN
42
43 FORALL li_Idx IN INDICES OF pt_UserData
44 INSERT
45 INTO
46 user_data_hist
47 VALUES
48 pt_UserData(li_Idx);
49
50 END;
51 END;
52 /
Package body created.
SQL>
SQL> CREATE OR REPLACE TRIGGER preu_s_user_data BEFORE UPDATE ON user_data
2 DECLARE
3
4 BEGIN
5
6 pkg_audit_user_data.p_Init;
7
8 END;
9 /
Trigger created.
SQL> CREATE OR REPLACE TRIGGER preu_r_user_data BEFORE UPDATE ON user_data
2 FOR EACH ROW
3 DECLARE
4
5 lc_Row user_data%ROWTYPE;
6
7 BEGIN
8
9 lc_Row.id := :NEW.id;
10 lc_Row.col1 := :NEW.col1;
11
12 pkg_audit_user_data.p_Log
13 ( lc_Row
14 );
15
16 END;
17 /
Trigger created.
SQL> CREATE OR REPLACE TRIGGER postu_s_user_data AFTER UPDATE ON user_data
2 DECLARE
3
4 BEGIN
5
6 pkg_audit_user_data.p_Write;
7
8 END;
9 /
Trigger created.
SQL>
SQL>
SQL> insert
2 into
3 user_data
4 select
5 rownum,
6 dbms_random.string('u',20)
7 from
8 dual
9 connect by
10 level <=10
11 /
10 rows created.
SQL> select * from user_data
2 /
ID COL1
1 GVZHKXSSJZHUSLLIDQTO
2 QVNXLTGJXFUDUHGYKANI
3 GTVHDCJAXLJFVTFSPFQI
4 CNVEGOTDLZQJJPVUXWYJ
5 FPOTZAWKMWHNOJMMIOKP
6 BZKHAFATQDBUVFBCOSPT
7 LAQAIDVREFJZWIQFUPMP
8 DXFICIPCBCFTPAPKDGZF
9 KKSMMRAQUORRPUBNJFCK
10 GBLTFZJAOPKFZFCQPGYW
10 rows selected.
SQL> select * from user_data_hist
2 /
no rows selected
SQL>
SQL> MERGE
2 INTO
3 user_data a
4 USING
5 ( SELECT
6 rownum + 8 id,
7 dbms_random.string('u',20) col1
8 FROM
9 dual
10 CONNECT BY
11 level <= 10
12 ) b
13 ON (a.id = b.id)
14 WHEN MATCHED THEN
15 UPDATE SET a.col1 = b.col1
16 WHEN NOT MATCHED THEN
17 INSERT(a.id,a.col1)
18 VALUES (b.id,b.col1)
19 /
10 rows merged.
SQL> select * from user_data_hist
2 /
ID COL1 TMSP
9 XGURXHHZGSUKILYQKBNB 05-AUG-11 10.04.15.577989
10 HLVUTUIFBAKGMXBDJTSL 05-AUG-11 10.04.15.578090
SQL> select * from v$version
2 /
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
HTH
David -
Creating an SCD Type 2 in T-SQL without using MERGE
I am attempting to create an SCD Type 2 using T-SQL without MERGE (I'm not allowed to use it, as a condition of the development work I am doing). I could use a CTE, but I have not tried that yet.
I have a temp table that contains ten records that I am testing with. The following is one variant of the code I have used to try to make this work:
declare
@System_User nchar(50)
,@CurrentDate datetime
,@MaxCheckDate datetime
set @System_User = system_user
set @CurrentDate = getdate()
--INSERT
insert dim.slot
(
Source_Slot_ID
,Slot_Start_DateTime
,Patients_PerSlot
,IsSlotSearchable
,IsSlotVisible
,[Created_Date]
,[Created_By]
)
select
src.IdSlot
,src.SlotDateTime
,src.PatientsPerSlot
,src.IsSlotSearchable
,src.IsSlotVisible
,@CurrentDate
,@System_User
from #TmepSlot src
left join dim.Slot dest
on src.IdSlot = dest.Source_Slot_ID
left join (select source_slot_id, max(created_date) as created_date from dim.slot group by Source_Slot_ID) MaxRecord
on dest.Source_Slot_ID = MaxRecord.Source_Slot_ID
and dest.Created_Date = MaxRecord.created_date
where dest.Source_Slot_ID is null
or
src.PatientsPerSlot
<> dest.Patients_PerSlot
or
src.IsSlotSearchable <> dest.IsSlotSearchable
or
src.IsSlotVisible
<> dest.IsSlotVisible
The problem with this variation is that when I change a value in the source, like src.PatientsPerSlot, and then run the query, I get the new record I expect; but when I run the query again, a duplicate record gets created.
How do I correctly isolate the latest record and add the changed record without inserting it more than once?
Thank you for your help.
cdun2
Hi,
shouldn't you use an inner join between dest and maxrecord like so:
from #TmepSlot src
left join (dim.Slot dest
inner join (select source_slot_id, max(created_date) as created_date from dim.slot group by Source_Slot_ID) MaxRecord
on dest.Source_Slot_ID = MaxRecord.Source_Slot_ID
and dest.Created_Date = MaxRecord.created_date)
on src.IdSlot = dest.Source_Slot_ID
where dest.Source_Slot_ID is null
regards,
Rudolf
Rudolf Swiers
Thanks! I don't remember when I've done a join that way, but it makes sense.
cdun2 -
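As an alternative to the MAX(created_date) join discussed above, the latest dimension row can also be isolated with ROW_NUMBER(). A sketch using the names from the thread (treat it as an illustration, not the agreed fix):

```sql
-- Pick the most recent dim.Slot row per Source_Slot_ID, then detect
-- new slots and changed attributes against it.
;WITH latest AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY Source_Slot_ID
                              ORDER BY Created_Date DESC) AS rn
    FROM dim.Slot
)
SELECT src.*
FROM   #TmepSlot AS src
LEFT   JOIN latest AS dest
       ON dest.Source_Slot_ID = src.IdSlot AND dest.rn = 1   -- only the latest version
WHERE  dest.Source_Slot_ID IS NULL                    -- brand-new slot
   OR  src.PatientsPerSlot  <> dest.Patients_PerSlot  -- or a tracked attribute changed
   OR  src.IsSlotSearchable <> dest.IsSlotSearchable
   OR  src.IsSlotVisible    <> dest.IsSlotVisible;
```

Because only the rn = 1 row is compared, re-running the load after an insert no longer sees the older version and so does not create a duplicate.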
Hello
I am writing PL/SQL which will perform SCD Type 1.
The Type 1 methodology overwrites old data with new data, and therefore does not track historical data at all. This is most appropriate when correcting certain types of data errors, such as the spelling of a name. (Assuming you won't ever need to know how it used to be misspelled in the past.)
Another example would be of a database table that keeps supplier information.
Supplier_key Supplier_Name Supplier_State
001 Phlogistical Supply Company CA
so hence I created two tables
create table ssn_load1
( ssn number(10,0),
credit_score number(6,0));
create table ssn_load2
( ssn number(10,0),
credit_score number(6,0));
and the target table
create table ssn_target
( sq_id number(8,0) primary key,
ssn number(10,0),
credit_score number(6,0));
Since I want sq_id to be auto-incremented, I have created the following trigger:
CREATE SEQUENCE test_sequence
START WITH 1
INCREMENT BY 1;
CREATE OR REPLACE TRIGGER test_trigger
BEFORE INSERT
ON ssn_target
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT test_sequence.nextval INTO :NEW.SQ_ID FROM dual;
END;
Now, in order to perform Type 1, I have followed this approach:
You have a source table
tbl1(col1, col2, col3) - where col1 is the key and (col2, col3) are the attributes
target table
tbl2(col21, col22, col23) - where col21 is the key and (col22, col23) are the attributes
do this join for change data capture
select <col list>,
case when col22 is null and col23 is null then 'NEW'
when col2 = col22 and col3 = col23 then 'NO CHANGE'
else 'MODIFIED' end
from
tbl1 LEFT OUTER JOIN tbl2
ON tbl1.col1 = tbl2.col21
The following is my PL/SQL, but I don't know how to write the insert/update statements in the switch-case.
declare
cursor ssn1
is select * from ssn_load1;
begin
for row in ssn1 loop
select s2.ssn, s2.credit_score from ssn_load2 s2 left outer join ssn_load1 s1 on s1.ssn = s2.ssn;
SWITCH (ssn)
DO
CASE 'Insert': if s2.ssn != s1.ssn
Insert into ssn_target(ssn, credit_score) values();
CASE 'No Change':
CASE 'Modify': Update ssn_target set
DEFAULT:
DO END;
end loop;
END;
Please help me figure out how to proceed from here.
I will be waiting for a reply.
Thank You!!
Isn't this a MERGE?
In the outer join approach, it might be simpler to do the join in the cursor and keep the PL/SQL simple. I didn't really understand your three-table scenario though. -
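A sketch of the MERGE the reply alludes to, in Oracle syntax with the table names from the post (treating ssn_load2 as the incoming load is an assumption):

```sql
-- Type 1 SCD as a single MERGE: overwrite changed attributes, insert new keys.
-- The BEFORE INSERT trigger on ssn_target still fills sq_id for inserted rows.
MERGE INTO ssn_target t
USING ssn_load2 s
   ON (t.ssn = s.ssn)
WHEN MATCHED THEN
    UPDATE SET t.credit_score = s.credit_score   -- type 1: overwrite in place, no history
WHEN NOT MATCHED THEN
    INSERT (ssn, credit_score)
    VALUES (s.ssn, s.credit_score);
```

This removes the need for the cursor loop and the switch-case entirely.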
Has anybody got SCD Type 2s performing quickly using the dimension operator?
Hi there,
We're hitting major performance problems running mappings to populate SCD Type 2s when they have large amounts of pre-existing data.
Has anybody got this performing acceptably? We tried indexing, but to no avail.
Many Thanks
Hi there,
Thanks for getting back to me - I found the patch, and it had already been applied.
An example of the SQL generated by a really simple mapping with the dimension operator, for small tables, is as follows:
MERGE
/*+ APPEND PARALLEL("NS_0") */
INTO
"RETAILER_PUBLISHER_NS"
USING
(SELECT
"MERGE_DELTA_ROW_0"."NS_OUTLET_SRC_ID$1" "NS_OUTLET_SRC_ID",
"MERGE_DELTA_ROW_0"."NS_PUBLISHER_CODE$1" "NS_PUBLISHER_CODE",
"MERGE_DELTA_ROW_0"."NS_TITLE_CLASSIFICATION_CODE$1" "NS_TITLE_CLASSIFICATION_CODE",
"MERGE_DELTA_ROW_0"."NS_SUPPLY_FLAG$1" "NS_SUPPLY_FLAG",
"MERGE_DELTA_ROW_0"."NS_EFF_DATE$1" "NS_EFF_DATE",
"MERGE_DELTA_ROW_0"."NS_EXP_DATE$1" "NS_EXP_DATE",
"MERGE_DELTA_ROW_0"."NS_ID$1" "NS_ID"
FROM
(SELECT
"NS_ID" "NS_ID$1",
"NS_OUTLET_SRC_ID" "NS_OUTLET_SRC_ID$1",
"NS_PUBLISHER_CODE" "NS_PUBLISHER_CODE$1",
"NS_TITLE_CLASSIFICATION_CODE" "NS_TITLE_CLASSIFICATION_CODE$1",
"NS_SUPPLY_FLAG" "NS_SUPPLY_FLAG$1",
"NS_EFF_DATE" "NS_EFF_DATE$1",
"NS_EXP_DATE" "NS_EXP_DATE$1"
FROM
(SELECT
(Case When (("SPLITTER_INPUT_SUBQUERY"."NS_ID_0_0" IS NULL) OR ((("SPLITTER_INPUT_SUBQUERY"."NS_EXP_DATE_0_0" IS NULL AND TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_0_0", 'J.HH24.MI.SS') <= TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_1", 'J.HH24.MI.SS')) OR ("SPLITTER_INPUT_SUBQUERY"."NS_EXP_DATE_0_0" IS NOT NULL AND TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_0_0", 'J.HH24.MI.SS') <= TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_1", 'J.HH24.MI.SS') AND TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EXP_DATE_0_0", 'J.HH24.MI.SS') >= TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_1", 'J.HH24.MI.SS'))) AND (("SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_1" IS NULL AND "SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_2" IS NOT NULL) OR ("SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_1" IS NOT NULL AND "SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_2" IS NULL) OR ("SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_1" != "SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_2") OR ("SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_1" IS NULL AND "SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_0_0" IS NOT NULL) OR ("SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_1" IS NOT NULL AND "SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_0_0" IS NULL) OR ("SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_1" != "SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_0_0")))) then ("SPLITTER_INPUT_SUBQUERY"."NS_ID_1") else ("SPLITTER_INPUT_SUBQUERY"."NS_ID_0_0") end)/* MERGE_DELTA_ROW.OUTGRP1.NS_ID */ "NS_ID",
"SPLITTER_INPUT_SUBQUERY"."NS_OUTLET_SRC_ID_1"/* MERGE_DELTA_ROW.OUTGRP1.NS_OUTLET_SRC_ID */ "NS_OUTLET_SRC_ID",
"SPLITTER_INPUT_SUBQUERY"."NS_PUBLISHER_CODE_1"/* MERGE_DELTA_ROW.OUTGRP1.NS_PUBLISHER_CODE */ "NS_PUBLISHER_CODE",
"SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_1"/* MERGE_DELTA_ROW.OUTGRP1.NS_TITLE_CLASSIFICATION_CODE */ "NS_TITLE_CLASSIFICATION_CODE",
"SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_1"/* MERGE_DELTA_ROW.OUTGRP1.NS_SUPPLY_FLAG */ "NS_SUPPLY_FLAG",
(Case When (("SPLITTER_INPUT_SUBQUERY"."NS_ID_0_0" IS NULL)) then ((case when ("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_1" < SYSDATE ) then ("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_1") else ( SYSDATE ) end)) when ((("SPLITTER_INPUT_SUBQUERY"."NS_EXP_DATE_0_0" IS NULL AND TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_0_0", 'J.HH24.MI.SS') <= TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_1", 'J.HH24.MI.SS')) OR ("SPLITTER_INPUT_SUBQUERY"."NS_EXP_DATE_0_0" IS NOT NULL AND TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_0_0", 'J.HH24.MI.SS') <= TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_1", 'J.HH24.MI.SS') AND TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EXP_DATE_0_0", 'J.HH24.MI.SS') >= TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_1", 'J.HH24.MI.SS'))) AND (("SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_1" IS NULL AND "SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_2" IS NOT NULL) OR ("SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_1" IS NOT NULL AND "SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_2" IS NULL) OR ("SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_1" != "SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_2") OR ("SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_1" IS NULL AND "SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_0_0" IS NOT NULL) OR ("SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_1" IS NOT NULL AND "SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_0_0" IS NULL) OR ("SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_1" != "SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_0_0"))) then ("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_1") else ("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_0_0") end)/* MERGE_DELTA_ROW.OUTGRP1.NS_EFF_DATE */ "NS_EFF_DATE",
(Case When ((ROW_NUMBER() OVER (PARTITION BY "SPLITTER_INPUT_SUBQUERY"."NS_OUTLET_SRC_ID_1","SPLITTER_INPUT_SUBQUERY"."NS_PUBLISHER_CODE_1" ORDER BY "SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_1" DESC)) = 1) then (Case When (("SPLITTER_INPUT_SUBQUERY"."NS_ID_0_0" IS NULL) OR ((("SPLITTER_INPUT_SUBQUERY"."NS_EXP_DATE_0_0" IS NULL AND TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_0_0", 'J.HH24.MI.SS') <= TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_1", 'J.HH24.MI.SS')) OR ("SPLITTER_INPUT_SUBQUERY"."NS_EXP_DATE_0_0" IS NOT NULL AND TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_0_0", 'J.HH24.MI.SS') <= TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_1", 'J.HH24.MI.SS') AND TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EXP_DATE_0_0", 'J.HH24.MI.SS') >= TO_CHAR("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_1", 'J.HH24.MI.SS'))) AND (("SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_1" IS NULL AND "SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_2" IS NOT NULL) OR ("SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_1" IS NOT NULL AND "SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_2" IS NULL) OR ("SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_1" != "SPLITTER_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CO_2") OR ("SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_1" IS NULL AND "SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_0_0" IS NOT NULL) OR ("SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_1" IS NOT NULL AND "SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_0_0" IS NULL) OR ("SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_1" != "SPLITTER_INPUT_SUBQUERY"."NS_SUPPLY_FLAG_0_0")))) then ( TO_DATE('31-DEC-4000') ) else ("SPLITTER_INPUT_SUBQUERY"."NS_EXP_DATE_0_0") end) else (("SPLITTER_INPUT_SUBQUERY"."NS_EFF_DATE_1" - INTERVAL '1' SECOND)) end)/* MERGE_DELTA_ROW.OUTGRP1.NS_EXP_DATE */ "NS_EXP_DATE"
FROM
(SELECT
"INGRP1"."NS_ID" "NS_ID_1",
"INGRP1"."NS_OUTLET_SRC_ID" "NS_OUTLET_SRC_ID_1",
"INGRP1"."NS_PUBLISHER_CODE" "NS_PUBLISHER_CODE_1",
"INGRP1"."NS_TITLE_CLASSIFICATION_CODE" "NS_TITLE_CLASSIFICATION_CO_1",
"INGRP1"."NS_SUPPLY_FLAG" "NS_SUPPLY_FLAG_1",
"INGRP1"."NS_EFF_DATE" "NS_EFF_DATE_1",
"INGRP1"."NS_EXP_DATE" "NS_EXP_DATE_1",
"INGRP2"."NS_ID" "NS_ID_0_0",
"INGRP2"."NS_OUTLET_SRC_ID" "NS_OUTLET_SRC_ID_0_0",
"INGRP2"."NS_PUBLISHER_CODE" "NS_PUBLISHER_CODE_0_0",
"INGRP2"."NS_TITLE_CLASSIFICATION_CODE" "NS_TITLE_CLASSIFICATION_CO_2",
"INGRP2"."NS_SUPPLY_FLAG" "NS_SUPPLY_FLAG_0_0",
"INGRP2"."NS_EFF_DATE" "NS_EFF_DATE_0_0",
"INGRP2"."NS_EXP_DATE" "NS_EXP_DATE_0_0",
"INGRP2"."DIMENSION_KEY" "DIMENSION_KEY_0"
FROM
( SELECT
"RETAILER_PUBLISHER_NS"."NS_ID" "NS_ID",
"RETAILER_PUBLISHER_NS"."NS_OUTLET_SRC_ID" "NS_OUTLET_SRC_ID",
"RETAILER_PUBLISHER_NS"."NS_PUBLISHER_CODE" "NS_PUBLISHER_CODE",
"RETAILER_PUBLISHER_NS"."NS_TITLE_CLASSIFICATION_CODE" "NS_TITLE_CLASSIFICATION_CODE",
"RETAILER_PUBLISHER_NS"."NS_SUPPLY_FLAG" "NS_SUPPLY_FLAG",
"RETAILER_PUBLISHER_NS"."NS_EFF_DATE" "NS_EFF_DATE",
"RETAILER_PUBLISHER_NS"."NS_EXP_DATE" "NS_EXP_DATE",
"RETAILER_PUBLISHER_NS"."DIMENSION_KEY" "DIMENSION_KEY"
FROM
"RETAILER_PUBLISHER_NS" "RETAILER_PUBLISHER_NS"
WHERE
( "RETAILER_PUBLISHER_NS"."DIMENSION_KEY" = "RETAILER_PUBLISHER_NS"."NS_ID" ) AND
( "RETAILER_PUBLISHER_NS"."NS_ID" IS NOT NULL ) ) "INGRP2"
RIGHT OUTER JOIN ( SELECT
NULL "NS_ID",
"LOOKUP_INPUT_SUBQUERY"."NS_OUTLET_SRC_ID$2" "NS_OUTLET_SRC_ID",
"LOOKUP_INPUT_SUBQUERY"."NS_PUBLISHER_CODE$2" "NS_PUBLISHER_CODE",
"LOOKUP_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CODE$2" "NS_TITLE_CLASSIFICATION_CODE",
"LOOKUP_INPUT_SUBQUERY"."NS_SUPPLY_FLAG$2" "NS_SUPPLY_FLAG",
"LOOKUP_INPUT_SUBQUERY"."NS_EFF_DATE$2" "NS_EFF_DATE",
"LOOKUP_INPUT_SUBQUERY"."NS_EXP_DATE$2" "NS_EXP_DATE"
FROM
(SELECT
"DEDUP_SRC"."NS_ID$3" "NS_ID$2",
"DEDUP_SRC"."NS_OUTLET_SRC_ID$3" "NS_OUTLET_SRC_ID$2",
"DEDUP_SRC"."NS_PUBLISHER_CODE$3" "NS_PUBLISHER_CODE$2",
"DEDUP_SRC"."NS_TITLE_CLASSIFICATION_CODE$3" "NS_TITLE_CLASSIFICATION_CODE$2",
"DEDUP_SRC"."NS_SUPPLY_FLAG$3" "NS_SUPPLY_FLAG$2",
"DEDUP_SRC"."NS_EFF_DATE$3" "NS_EFF_DATE$2",
"DEDUP_SRC"."NS_EXP_DATE$3" "NS_EXP_DATE$2"
FROM
(SELECT
NULL/* DEDUP_SRC.OUTGRP1.NS_ID */ "NS_ID$3",
("PUB_AGENT_MATRIX_CC"."PAM_CUSTOMER_ID"/* EXPR_SRC.OUTGRP1.NS_OUTLET_SRC_ID */)/* DEDUP_SRC.OUTGRP1.NS_OUTLET_SRC_ID */ "NS_OUTLET_SRC_ID$3",
((to_char("PUB_AGENT_MATRIX_CC"."PAM_PUBLISHER_CODE")/* EXP.OUTGRP1.PAM_PUBLISHER_CODE */)/* EXPR_SRC.OUTGRP1.NS_PUBLISHER_CODE */)/* DEDUP_SRC.OUTGRP1.NS_PUBLISHER_CODE */ "NS_PUBLISHER_CODE$3",
("PUB_AGENT_MATRIX_CC"."PAM_TITLCLAS_CODE"/* EXPR_SRC.OUTGRP1.NS_TITLE_CLASSIFICATION_CODE */)/* DEDUP_SRC.OUTGRP1.NS_TITLE_CLASSIFICATION_CODE */ "NS_TITLE_CLASSIFICATION_CODE$3",
("PUB_AGENT_MATRIX_CC"."PAM_SUPPLY_FLAG"/* EXPR_SRC.OUTGRP1.NS_SUPPLY_FLAG */)/* DEDUP_SRC.OUTGRP1.NS_SUPPLY_FLAG */ "NS_SUPPLY_FLAG$3",
MIN(("PUB_AGENT_MATRIX_CC"."PAM_EFFECTIVE_DATE"/* EXPR_SRC.OUTGRP1.NS_EFF_DATE */)) KEEP (DENSE_RANK FIRST ORDER BY NULL/* EXPR_SRC.OUTGRP1.NS_ID */)/* DEDUP_SRC.OUTGRP1.NS_EFF_DATE */ "NS_EFF_DATE$3",
NULL/* DEDUP_SRC.OUTGRP1.NS_EXP_DATE */ "NS_EXP_DATE$3"
FROM
"REFSTG"."PUB_AGENT_MATRIX_CC" "PUB_AGENT_MATRIX_CC"
WHERE
( "PUB_AGENT_MATRIX_CC"."PAM_ADD_REMOVE_FLAG" = 'A' )
GROUP BY
("PUB_AGENT_MATRIX_CC"."PAM_CUSTOMER_ID"/* EXPR_SRC.OUTGRP1.NS_OUTLET_SRC_ID */), ((to_char("PUB_AGENT_MATRIX_CC"."PAM_PUBLISHER_CODE")/* EXP.OUTGRP1.PAM_PUBLISHER_CODE */)/* EXPR_SRC.OUTGRP1.NS_PUBLISHER_CODE */), ("PUB_AGENT_MATRIX_CC"."PAM_TITLCLAS_CODE"/* EXPR_SRC.OUTGRP1.NS_TITLE_CLASSIFICATION_CODE */), ("PUB_AGENT_MATRIX_CC"."PAM_SUPPLY_FLAG"/* EXPR_SRC.OUTGRP1.NS_SUPPLY_FLAG */),NULL,NULL/* RETAILER_PUBLISHER_NS.DEDUP_SRC */) "DEDUP_SRC") "LOOKUP_INPUT_SUBQUERY"
WHERE
( NOT ( "LOOKUP_INPUT_SUBQUERY"."NS_OUTLET_SRC_ID$2" IS NULL AND "LOOKUP_INPUT_SUBQUERY"."NS_PUBLISHER_CODE$2" IS NULL ) ) ) "INGRP1" ON ( ( ( "INGRP2"."NS_EFF_DATE" IS NULL OR ( ( "INGRP2"."NS_EXP_DATE" IS NULL AND TO_CHAR ( "INGRP2"."NS_EFF_DATE" , 'J.HH24.MI.SS' ) <= TO_CHAR ( "INGRP1"."NS_EFF_DATE" , 'J.HH24.MI.SS' ) ) OR ( "INGRP2"."NS_EXP_DATE" IS NOT NULL AND TO_CHAR ( "INGRP2"."NS_EFF_DATE" , 'J.HH24.MI.SS' ) <= TO_CHAR ( "INGRP1"."NS_EFF_DATE" , 'J.HH24.MI.SS' ) AND TO_CHAR ( "INGRP2"."NS_EXP_DATE" , 'J.HH24.MI.SS' ) >= TO_CHAR ( "INGRP1"."NS_EFF_DATE" , 'J.HH24.MI.SS' ) ) ) ) ) AND ( ( "INGRP2"."NS_PUBLISHER_CODE" = "INGRP1"."NS_PUBLISHER_CODE" ) ) AND ( ( "INGRP2"."NS_OUTLET_SRC_ID" = "INGRP1"."NS_OUTLET_SRC_ID" ) ) )) "SPLITTER_INPUT_SUBQUERY"
WHERE
( ( ( "SPLITTER_INPUT_SUBQUERY"."NS_OUTLET_SRC_ID_1" = "SPLITTER_INPUT_SUBQUERY"."NS_OUTLET_SRC_ID_0_0" AND "SPLITTER_INPUT_SUBQUERY"."NS_PUBLISHER_CODE_1" = "SPLITTER_INPUT_SUBQUERY"."NS_PUBLISHER_CODE_0_0" ) ) OR ( "SPLITTER_INPUT_SUBQUERY"."NS_OUTLET_SRC_ID_0_0" IS NULL AND "SPLITTER_INPUT_SUBQUERY"."NS_PUBLISHER_CODE_0_0" IS NULL ) )
UNION
SELECT
"DEDUP_SCD_SRC"."NS_ID$4" "NS_ID",
"DEDUP_SCD_SRC"."NS_OUTLET_SRC_ID$4" "NS_OUTLET_SRC_ID",
"DEDUP_SCD_SRC"."NS_PUBLISHER_CODE$4" "NS_PUBLISHER_CODE",
"DEDUP_SCD_SRC"."NS_TITLE_CLASSIFICATION_CODE$4" "NS_TITLE_CLASSIFICATION_CODE",
"DEDUP_SCD_SRC"."NS_SUPPLY_FLAG$4" "NS_SUPPLY_FLAG",
"DEDUP_SCD_SRC"."NS_EFF_DATE$4" "NS_EFF_DATE",
"DEDUP_SCD_SRC"."NS_EXP_DATE$4" "NS_EXP_DATE"
FROM
(SELECT
"AGG_INPUT"."NS_ID$5"/* DEDUP_SCD_SRC.OUTGRP1.NS_ID */ "NS_ID$4",
"AGG_INPUT"."NS_OUTLET_SRC_ID$5"/* DEDUP_SCD_SRC.OUTGRP1.NS_OUTLET_SRC_ID */ "NS_OUTLET_SRC_ID$4",
"AGG_INPUT"."NS_PUBLISHER_CODE$5"/* DEDUP_SCD_SRC.OUTGRP1.NS_PUBLISHER_CODE */ "NS_PUBLISHER_CODE$4",
MIN("AGG_INPUT"."NS_TITLE_CLASSIFICATION_CODE$5") KEEP (DENSE_RANK FIRST ORDER BY "AGG_INPUT"."NS_TITLE_CLASSIFICATION_CODE$5" NULLS LAST)/* DEDUP_SCD_SRC.OUTGRP1.NS_TITLE_CLASSIFICATION_CODE */ "NS_TITLE_CLASSIFICATION_CODE$4",
MIN("AGG_INPUT"."NS_SUPPLY_FLAG$5") KEEP (DENSE_RANK FIRST ORDER BY "AGG_INPUT"."NS_SUPPLY_FLAG$5" NULLS LAST)/* DEDUP_SCD_SRC.OUTGRP1.NS_SUPPLY_FLAG */ "NS_SUPPLY_FLAG$4",
MIN("AGG_INPUT"."NS_EFF_DATE$5") KEEP (DENSE_RANK FIRST ORDER BY "AGG_INPUT"."NS_EFF_DATE$5" NULLS LAST)/* DEDUP_SCD_SRC.OUTGRP1.NS_EFF_DATE */ "NS_EFF_DATE$4",
MIN("AGG_INPUT"."NS_EXP_DATE$5") KEEP (DENSE_RANK FIRST ORDER BY "AGG_INPUT"."NS_EXP_DATE$5" NULLS LAST)/* DEDUP_SCD_SRC.OUTGRP1.NS_EXP_DATE */ "NS_EXP_DATE$4"
FROM
(SELECT
"SPLITTER_INPUT_SUBQUERY$1"."NS_ID_0_0$1"/* UPDATE_DELTA_ROW.OUTGRP1.NS_ID */ "NS_ID$5",
"SPLITTER_INPUT_SUBQUERY$1"."NS_OUTLET_SRC_ID_1$1"/* UPDATE_DELTA_ROW.OUTGRP1.NS_OUTLET_SRC_ID */ "NS_OUTLET_SRC_ID$5",
"SPLITTER_INPUT_SUBQUERY$1"."NS_PUBLISHER_CODE_1$1"/* UPDATE_DELTA_ROW.OUTGRP1.NS_PUBLISHER_CODE */ "NS_PUBLISHER_CODE$5",
"SPLITTER_INPUT_SUBQUERY$1"."NS_TITLE_CLASSIFICATION_CO_2$1"/* UPDATE_DELTA_ROW.OUTGRP1.NS_TITLE_CLASSIFICATION_CODE */ "NS_TITLE_CLASSIFICATION_CODE$5",
"SPLITTER_INPUT_SUBQUERY$1"."NS_SUPPLY_FLAG_0_0$1"/* UPDATE_DELTA_ROW.OUTGRP1.NS_SUPPLY_FLAG */ "NS_SUPPLY_FLAG$5",
"SPLITTER_INPUT_SUBQUERY$1"."NS_EFF_DATE_0_0$1"/* UPDATE_DELTA_ROW.OUTGRP1.NS_EFF_DATE */ "NS_EFF_DATE$5",
("SPLITTER_INPUT_SUBQUERY$1"."NS_EFF_DATE_1$1" - INTERVAL '1' SECOND)/* UPDATE_DELTA_ROW.OUTGRP1.NS_EXP_DATE */ "NS_EXP_DATE$5"
FROM
(SELECT
"INGRP1"."NS_ID" "NS_ID_1$1",
"INGRP1"."NS_OUTLET_SRC_ID" "NS_OUTLET_SRC_ID_1$1",
"INGRP1"."NS_PUBLISHER_CODE" "NS_PUBLISHER_CODE_1$1",
"INGRP1"."NS_TITLE_CLASSIFICATION_CODE" "NS_TITLE_CLASSIFICATION_CO_1$1",
"INGRP1"."NS_SUPPLY_FLAG" "NS_SUPPLY_FLAG_1$1",
"INGRP1"."NS_EFF_DATE" "NS_EFF_DATE_1$1",
"INGRP1"."NS_EXP_DATE" "NS_EXP_DATE_1$1",
"INGRP2"."NS_ID" "NS_ID_0_0$1",
"INGRP2"."NS_OUTLET_SRC_ID" "NS_OUTLET_SRC_ID_0_0$1",
"INGRP2"."NS_PUBLISHER_CODE" "NS_PUBLISHER_CODE_0_0$1",
"INGRP2"."NS_TITLE_CLASSIFICATION_CODE" "NS_TITLE_CLASSIFICATION_CO_2$1",
"INGRP2"."NS_SUPPLY_FLAG" "NS_SUPPLY_FLAG_0_0$1",
"INGRP2"."NS_EFF_DATE" "NS_EFF_DATE_0_0$1",
"INGRP2"."NS_EXP_DATE" "NS_EXP_DATE_0_0$1",
"INGRP2"."DIMENSION_KEY" "DIMENSION_KEY_0$1"
FROM
( SELECT
"RETAILER_PUBLISHER_NS"."NS_ID" "NS_ID",
"RETAILER_PUBLISHER_NS"."NS_OUTLET_SRC_ID" "NS_OUTLET_SRC_ID",
"RETAILER_PUBLISHER_NS"."NS_PUBLISHER_CODE" "NS_PUBLISHER_CODE",
"RETAILER_PUBLISHER_NS"."NS_TITLE_CLASSIFICATION_CODE" "NS_TITLE_CLASSIFICATION_CODE",
"RETAILER_PUBLISHER_NS"."NS_SUPPLY_FLAG" "NS_SUPPLY_FLAG",
"RETAILER_PUBLISHER_NS"."NS_EFF_DATE" "NS_EFF_DATE",
"RETAILER_PUBLISHER_NS"."NS_EXP_DATE" "NS_EXP_DATE",
"RETAILER_PUBLISHER_NS"."DIMENSION_KEY" "DIMENSION_KEY"
FROM
"RETAILER_PUBLISHER_NS" "RETAILER_PUBLISHER_NS"
WHERE
( "RETAILER_PUBLISHER_NS"."DIMENSION_KEY" = "RETAILER_PUBLISHER_NS"."NS_ID" ) AND
( "RETAILER_PUBLISHER_NS"."NS_ID" IS NOT NULL ) ) "INGRP2"
RIGHT OUTER JOIN ( SELECT
NULL "NS_ID",
"LOOKUP_INPUT_SUBQUERY"."NS_OUTLET_SRC_ID$2" "NS_OUTLET_SRC_ID",
"LOOKUP_INPUT_SUBQUERY"."NS_PUBLISHER_CODE$2" "NS_PUBLISHER_CODE",
"LOOKUP_INPUT_SUBQUERY"."NS_TITLE_CLASSIFICATION_CODE$2" "NS_TITLE_CLASSIFICATION_CODE",
"LOOKUP_INPUT_SUBQUERY"."NS_SUPPLY_FLAG$2" "NS_SUPPLY_FLAG",
"LOOKUP_INPUT_SUBQUERY"."NS_EFF_DATE$2" "NS_EFF_DATE",
"LOOKUP_INPUT_SUBQUERY"."NS_EXP_DATE$2" "NS_EXP_DATE"
FROM
(SELECT
"DEDUP_SRC"."NS_ID$3" "NS_ID$2",
"DEDUP_SRC"."NS_OUTLET_SRC_ID$3" "NS_OUTLET_SRC_ID$2",
"DEDUP_SRC"."NS_PUBLISHER_CODE$3" "NS_PUBLISHER_CODE$2",
"DEDUP_SRC"."NS_TITLE_CLASSIFICATION_CODE$3" "NS_TITLE_CLASSIFICATION_CODE$2",
"DEDUP_SRC"."NS_SUPPLY_FLAG$3" "NS_SUPPLY_FLAG$2",
"DEDUP_SRC"."NS_EFF_DATE$3" "NS_EFF_DATE$2",
"DEDUP_SRC"."NS_EXP_DATE$3" "NS_EXP_DATE$2"
FROM
(SELECT
NULL/* DEDUP_SRC.OUTGRP1.NS_ID */ "NS_ID$3",
("PUB_AGENT_MATRIX_CC"."PAM_CUSTOMER_ID"/* EXPR_SRC.OUTGRP1.NS_OUTLET_SRC_ID */)/* DEDUP_SRC.OUTGRP1.NS_OUTLET_SRC_ID */ "NS_OUTLET_SRC_ID$3",
((to_char("PUB_AGENT_MATRIX_CC"."PAM_PUBLISHER_CODE")/* EXP.OUTGRP1.PAM_PUBLISHER_CODE */)/* EXPR_SRC.OUTGRP1.NS_PUBLISHER_CODE */)/* DEDUP_SRC.OUTGRP1.NS_PUBLISHER_CODE */ "NS_PUBLISHER_CODE$3",
("PUB_AGENT_MATRIX_CC"."PAM_TITLCLAS_CODE"/* EXPR_SRC.OUTGRP1.NS_TITLE_CLASSIFICATION_CODE */)/* DEDUP_SRC.OUTGRP1.NS_TITLE_CLASSIFICATION_CODE */ "NS_TITLE_CLASSIFICATION_CODE$3",
("PUB_AGENT_MATRIX_CC"."PAM_SUPPLY_FLAG"/* EXPR_SRC.OUTGRP1.NS_SUPPLY_FLAG */)/* DEDUP_SRC.OUTGRP1.NS_SUPPLY_FLAG */ "NS_SUPPLY_FLAG$3",
MIN(("PUB_AGENT_MATRIX_CC"."PAM_EFFECTIVE_DATE"/* EXPR_SRC.OUTGRP1.NS_EFF_DATE */)) KEEP (DENSE_RANK FIRST ORDER BY NULL/* EXPR_SRC.OUTGRP1.NS_ID */)/* DEDUP_SRC.OUTGRP1.NS_EFF_DATE */ "NS_EFF_DATE$3",
NULL/* DEDUP_SRC.OUTGRP1.NS_EXP_DATE */ "NS_EXP_DATE$3"
FROM
"REFSTG"."PUB_AGENT_MATRIX_CC" "PUB_AGENT_MATRIX_CC"
WHERE
( "PUB_AGENT_MATRIX_CC"."PAM_ADD_REMOVE_FLAG" = 'A' )
GROUP BY
("PUB_AGENT_MATRIX_CC"."PAM_CUSTOMER_ID"/* EXPR_SRC.OUTGRP1.NS_OUTLET_SRC_ID */), ((to_char("PUB_AGENT_MATRIX_CC"."PAM_PUBLISHER_CODE")/* EXP.OUTGRP1.PAM_PUBLISHER_CODE */)/* EXPR_SRC.OUTGRP1.NS_PUBLISHER_CODE */), ("PUB_AGENT_MATRIX_CC"."PAM_TITLCLAS_CODE"/* EXPR_SRC.OUTGRP1.NS_TITLE_CLASSIFICATION_CODE */), ("PUB_AGENT_MATRIX_CC"."PAM_SUPPLY_FLAG"/* EXPR_SRC.OUTGRP1.NS_SUPPLY_FLAG */),NULL,NULL/* RETAILER_PUBLISHER_NS.DEDUP_SRC */) "DEDUP_SRC") "LOOKUP_INPUT_SUBQUERY"
WHERE
( NOT ( "LOOKUP_INPUT_SUBQUERY"."NS_OUTLET_SRC_ID$2" IS NULL AND "LOOKUP_INPUT_SUBQUERY"."NS_PUBLISHER_CODE$2" IS NULL ) ) ) "INGRP1" ON ( ( ( "INGRP2"."NS_EFF_DATE" IS NULL OR ( ( "INGRP2"."NS_EXP_DATE" IS NULL AND TO_CHAR ( "INGRP2"."NS_EFF_DATE" , 'J.HH24.MI.SS' ) <= TO_CHAR ( "INGRP1"."NS_EFF_DATE" , 'J.HH24.MI.SS' ) ) OR ( "INGRP2"."NS_EXP_DATE" IS NOT NULL AND TO_CHAR ( "INGRP2"."NS_EFF_DATE" , 'J.HH24.MI.SS' ) <= TO_CHAR ( "INGRP1"."NS_EFF_DATE" , 'J.HH24.MI.SS' ) AND TO_CHAR ( "INGRP2"."NS_EXP_DATE" , 'J.HH24.MI.SS' ) >= TO_CHAR ( "INGRP1"."NS_EFF_DATE" , 'J.HH24.MI.SS' ) ) ) ) ) AND ( ( "INGRP2"."NS_PUBLISHER_CODE" = "INGRP1"."NS_PUBLISHER_CODE" ) ) AND ( ( "INGRP2"."NS_OUTLET_SRC_ID" = "INGRP1"."NS_OUTLET_SRC_ID" ) ) )) "SPLITTER_INPUT_SUBQUERY$1"
WHERE
( "SPLITTER_INPUT_SUBQUERY$1"."NS_OUTLET_SRC_ID_1$1" = "SPLITTER_INPUT_SUBQUERY$1"."NS_OUTLET_SRC_ID_0_0$1" AND "SPLITTER_INPUT_SUBQUERY$1"."NS_PUBLISHER_CODE_1$1" = "SPLITTER_INPUT_SUBQUERY$1"."NS_PUBLISHER_CODE_0_0$1" ) AND
( ( "SPLITTER_INPUT_SUBQUERY$1"."NS_EXP_DATE_0_0$1" IS NULL AND TO_CHAR ( "SPLITTER_INPUT_SUBQUERY$1"."NS_EFF_DATE_0_0$1" , 'J.HH24.MI.SS' ) <= TO_CHAR ( "SPLITTER_INPUT_SUBQUERY$1"."NS_EFF_DATE_1$1" , 'J.HH24.MI.SS' ) ) OR ( "SPLITTER_INPUT_SUBQUERY$1"."NS_EXP_DATE_0_0$1" IS NOT NULL AND TO_CHAR ( "SPLITTER_INPUT_SUBQUERY$1"."NS_EFF_DATE_0_0$1" , 'J.HH24.MI.SS' ) <= TO_CHAR ( "SPLITTER_INPUT_SUBQUERY$1"."NS_EFF_DATE_1$1" , 'J.HH24.MI.SS' ) AND TO_CHAR ( "SPLITTER_INPUT_SUBQUERY$1"."NS_EXP_DATE_0_0$1" , 'J.HH24.MI.SS' ) >= TO_CHAR ( "SPLITTER_INPUT_SUBQUERY$1"."NS_EFF_DATE_1$1" , 'J.HH24.MI.SS' ) ) ) AND
( ( "SPLITTER_INPUT_SUBQUERY$1"."NS_TITLE_CLASSIFICATION_CO_1$1" IS NULL AND "SPLITTER_INPUT_SUBQUERY$1"."NS_TITLE_CLASSIFICATION_CO_2$1" IS NOT NULL ) OR ( "SPLITTER_INPUT_SUBQUERY$1"."NS_TITLE_CLASSIFICATION_CO_1$1" IS NOT NULL AND "SPLITTER_INPUT_SUBQUERY$1"."NS_TITLE_CLASSIFICATION_CO_2$1" IS NULL ) OR ( "SPLITTER_INPUT_SUBQUERY$1"."NS_TITLE_CLASSIFICATION_CO_1$1" != "SPLITTER_INPUT_SUBQUERY$1"."NS_TITLE_CLASSIFICATION_CO_2$1" ) OR ( "SPLITTER_INPUT_SUBQUERY$1"."NS_SUPPLY_FLAG_1$1" IS NULL AND "SPLITTER_INPUT_SUBQUERY$1"."NS_SUPPLY_FLAG_0_0$1" IS NOT NULL ) OR ( "SPLITTER_INPUT_SUBQUERY$1"."NS_SUPPLY_FLAG_1$1" IS NOT NULL AND "SPLITTER_INPUT_SUBQUERY$1"."NS_SUPPLY_FLAG_0_0$1" IS NULL ) OR ( "SPLITTER_INPUT_SUBQUERY$1"."NS_SUPPLY_FLAG_1$1" != "SPLITTER_INPUT_SUBQUERY$1"."NS_SUPPLY_FLAG_0_0$1" ) )) "AGG_INPUT"
GROUP BY
"AGG_INPUT"."NS_ID$5", "AGG_INPUT"."NS_OUTLET_SRC_ID$5", "AGG_INPUT"."NS_PUBLISHER_CODE$5"/* RETAILER_PUBLISHER_NS.DEDUP_SCD_SRC */) "DEDUP_SCD_SRC") ) "MERGE_DELTA_ROW_0"
MERGE_SUBQUERY
ON (
"RETAILER_PUBLISHER_NS"."NS_OUTLET_SRC_ID" = "MERGE_SUBQUERY"."NS_OUTLET_SRC_ID" AND
"RETAILER_PUBLISHER_NS"."NS_PUBLISHER_CODE" = "MERGE_SUBQUERY"."NS_PUBLISHER_CODE" AND
"RETAILER_PUBLISHER_NS"."NS_EFF_DATE" = "MERGE_SUBQUERY"."NS_EFF_DATE" AND
"RETAILER_PUBLISHER_NS"."NS_ID" = "MERGE_SUBQUERY"."NS_ID"
WHEN MATCHED THEN
UPDATE
SET
"NS_TITLE_CLASSIFICATION_CODE" = "MERGE_SUBQUERY"."NS_TITLE_CLASSIFICATION_CODE",
"NS_SUPPLY_FLAG" = "MERGE_SUBQUERY"."NS_SUPPLY_FLAG",
"NS_EXP_DATE" = "MERGE_SUBQUERY"."NS_EXP_DATE"
WHEN NOT MATCHED THEN
INSERT
("RETAILER_PUBLISHER_NS"."NS_ID",
"RETAILER_PUBLISHER_NS"."NS_OUTLET_SRC_ID",
"RETAILER_PUBLISHER_NS"."NS_PUBLISHER_CODE",
"RETAILER_PUBLISHER_NS"."NS_TITLE_CLASSIFICATION_CODE",
"RETAILER_PUBLISHER_NS"."NS_SUPPLY_FLAG",
"RETAILER_PUBLISHER_NS"."NS_EFF_DATE",
"RETAILER_PUBLISHER_NS"."NS_EXP_DATE",
"RETAILER_PUBLISHER_NS"."DIMENSION_KEY")
VALUES
("RETAILER_PUBLISHER_NS_SEQ".NEXTVAL,
"MERGE_SUBQUERY"."NS_OUTLET_SRC_ID",
"MERGE_SUBQUERY"."NS_PUBLISHER_CODE",
"MERGE_SUBQUERY"."NS_TITLE_CLASSIFICATION_CODE",
"MERGE_SUBQUERY"."NS_SUPPLY_FLAG",
"MERGE_SUBQUERY"."NS_EFF_DATE",
"MERGE_SUBQUERY"."NS_EXP_DATE",
"RETAILER_PUBLISHER_NS_SEQ".CURRVAL)
Explain plan:
MERGE STATEMENT, GOAL = ALL_ROWS 1412 2 286
MERGE DW RETAILER_PUBLISHER_NS
VIEW DW
SEQUENCE DW RETAILER_PUBLISHER_NS_SEQ
HASH JOIN OUTER 1412 2 256
VIEW DW 940 2 170
SORT UNIQUE 940 2 218
UNION-ALL
WINDOW SORT 470 1 133
FILTER
NESTED LOOPS OUTER 468 1 133
VIEW DW 4 1 65
SORT GROUP BY 4 1 25
TABLE ACCESS FULL REFSTG PUB_AGENT_MATRIX_CC 3 1 25
VIEW SYS 464 1 68
VIEW DW 464 1 68
TABLE ACCESS FULL DW RETAILER_PUBLISHER_NS 464 1 43
VIEW DW 469 1 85
SORT GROUP BY 469 1 90
NESTED LOOPS 468 1 90
VIEW DW 4 1 37
SORT GROUP BY 4 1 25
TABLE ACCESS FULL REFSTG PUB_AGENT_MATRIX_CC 3 1 25
VIEW SYS 464 1 53
VIEW DW 464 1 68
TABLE ACCESS FULL DW RETAILER_PUBLISHER_NS 464 1 43
TABLE ACCESS FULL DW RETAILER_PUBLISHER_NS 467 337417 14508931
Is this similar to the SQL generated at your end? Do you use special loading hints, or anything special with indexing? We have tried standard indexing.
Does this look untoward? Have you any other suggestions?
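For comparison, a hand-written SCD Type 2 load (independent of the generated code above) is often done as an expire-then-insert pair. This is only a sketch: dim_a, src_b, and all column names below are hypothetical, and the default SED/EED values are assumptions.

```sql
-- Step 1: expire the current version when a tracked attribute changed.
-- (is_current is deliberately kept out of the ON clause; Oracle raises
-- ORA-38104 if you update a column referenced there.)
MERGE INTO dim_a tgt
USING src_b src
   ON (tgt.vq_name = src.virtual_queue)
 WHEN MATCHED THEN UPDATE
      SET tgt.eed_id     = TRUNC(SYSDATE) - 1,  -- close the old version
          tgt.is_current = 0
    WHERE tgt.is_current = 1
      AND (tgt.contact_type   <> src.contact_type
        OR tgt.center_node_id <> src.center_node_id);

-- Step 2: insert a fresh "current" version for new and just-expired rows,
-- defaulting SED/EED when the source does not supply them.
INSERT INTO dim_a (vq_name, contact_type, center_node_id,
                   sed_id, eed_id, insert_date, is_current)
SELECT src.virtual_queue, src.contact_type, src.center_node_id,
       TRUNC(SYSDATE),        -- SED defaults to the load date
       DATE '9999-12-31',     -- EED defaults to a high date
       SYSDATE, 1
  FROM src_b src
 WHERE NOT EXISTS (SELECT 1
                     FROM dim_a d
                    WHERE d.vq_name   = src.virtual_queue
                      AND d.is_current = 1);
```

Splitting the expire and the insert into two statements avoids the single-MERGE limitation that a source row can only match once.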
Thanks for your interest. -
Has anyone ever implemented SCD Type-4 using SSIS??
Hello Experts!!
I have been trying to implement SCD Type 4 using SSIS and got stuck; I searched online for help and, to my surprise, there isn't much on this topic.
I know the theory behind SCD Type 4 is to maintain history in a separate table for rapidly changing dimensions.
Please help if any of you have ever implemented SCD Type 4 using SSIS.
Hi,
The stock Slowly Changing Dimension Transformation in SSIS only supports SCD Type 1 and Type 2. SCD Type 4 maintains two tables: one to keep the current data, and the other to keep the historical data. As a workaround, you can implement SCD Type 1 via the SCD Transformation and implement Change Data Capture at the same time. SSIS also provides a CDC Control Task and related Data Flow components.
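The two-table Type 4 layout described above can be sketched in plain T-SQL (all table and column names below are hypothetical):

```sql
-- SCD Type 4: keep only the latest row in the current table and move
-- every prior version into a companion history table.

-- 1) Copy the about-to-change rows into the history table first.
INSERT INTO customer_history (customer_id, city, valid_to)
SELECT c.customer_id, c.city, GETDATE()
FROM customer_current c
JOIN staging_customer s ON s.customer_id = c.customer_id
WHERE s.city <> c.city;

-- 2) Then overwrite the current table in place (a Type 1 style update).
UPDATE c
SET c.city = s.city
FROM customer_current c
JOIN staging_customer s ON s.customer_id = c.customer_id
WHERE s.city <> c.city;
```

In an SSIS package these two statements would typically be Execute SQL Tasks, or the history insert can be driven by the CDC components mentioned above.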
References:
http://www.bidn.com/blogs/TomLannen/bidn-blog/2606/slowly-changing-dimension-type-1-changes-using-ssis-scd-component
http://www.mattmasson.com/2011/12/cdc-in-ssis-for-sql-server-2012-2/
Regards,
Mike Yin
TechNet Community Support -
Date type is not captured in Visual Composer 7.0 using SQL Server as DB
The Date type is not captured in Visual Composer 7.0 when using SQL Server as the DB and the field type is "SmallDateTime" in the DB.
Create a new Text field in the table's Fields tab and select Type as Date in VC, or use the DSRT date function.
-
How to create longtext or blob data types in SQL using labview
Hello,
I am fairly new to SQL, and I'm using the LabVIEW Database Connectivity Toolset with LabVIEW 6.1.
I am using the DB Tools Create Table VI to create my tables. I want the tables to hold 1000-character strings, but the longest string I can insert seems to be 255 characters. I think it is a limitation of the "String" data type, so I need to use a text or BLOB type. The problem is that when I created a control for the "Column Information" field, I only see the following selections for the data type: String, Long, Single, Double, Date/Time, Binary. I don't see any selection for text or BLOBs. How do I define another data type that is not part of the selection in the control?
Thanks for any help.
I don't know about defining long text, but the equivalent of a BLOB should be the binary data type, which just holds the data you put into it without formatting it.
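If the toolset's column-type selection is the only obstacle, one workaround is to create the table yourself with a plain DDL statement and only use the toolset for inserts. This is a sketch in SQL Server 2000-era syntax; the table and column names are hypothetical:

```sql
-- VARCHAR(255) is the usual mapping for a generic "String" type;
-- TEXT holds much longer character data, and IMAGE is the BLOB
-- equivalent (on newer SQL Server, VARCHAR(MAX)/VARBINARY(MAX)).
CREATE TABLE measurement_log (
    id          INT IDENTITY PRIMARY KEY,
    note        TEXT,     -- long free-form text (> 255 characters)
    raw_capture IMAGE     -- unformatted binary payload
);
```

The DDL could be sent through the toolset's generic SQL-execution VI rather than DB Tools Create Table, which is limited to the types in its enum.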
Try to take over the world! -
Managing statistics for object collections used as table types in SQL
Hi All,
Is there a way to manage statistics for collections used as table types in SQL.
Below is my test case
Oracle Version :
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
SQL>
Original Query:
SELECT
9999,
tbl_typ.FILE_ID,
tf.FILE_NM ,
tf.MIME_TYPE ,
dbms_lob.getlength(tfd.FILE_DATA)
FROM
TG_FILE tf,
TG_FILE_DATA tfd,
( SELECT
*
FROM
TABLE (
SELECT
CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
)
) tbl_typ
WHERE
tf.FILE_ID = tfd.FILE_ID
AND tf.FILE_ID = tbl_typ.FILE_ID
AND tfd.FILE_ID = tbl_typ.FILE_ID;
Elapsed: 00:00:02.90
Execution Plan
Plan hash value: 3970072279
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 194 | 4567 (2)| 00:00:55 |
|* 1 | HASH JOIN | | 1 | 194 | 4567 (2)| 00:00:55 |
|* 2 | HASH JOIN | | 8168 | 287K| 695 (3)| 00:00:09 |
| 3 | VIEW | | 8168 | 103K| 29 (0)| 00:00:01 |
| 4 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 8168 | 16336 | 29 (0)| 00:00:01 |
| 5 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 6 | TABLE ACCESS FULL | TG_FILE | 565K| 12M| 659 (2)| 00:00:08 |
| 7 | TABLE ACCESS FULL | TG_FILE_DATA | 852K| 128M| 3863 (1)| 00:00:47 |
Predicate Information (identified by operation id):
1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
Statistics
7 recursive calls
0 db block gets
16783 consistent gets
16779 physical reads
0 redo size
916 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
Indexes are present on column FILE_ID in both tables (TG_FILE, TG_FILE_DATA).
select
index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
from
all_indexes
where table_name in ('TG_FILE','TG_FILE_DATA');
INDEX_NAME BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR NUM_ROWS SAMPLE_SIZE
TG_FILE_PK 2 2160 552842 21401 552842 285428
TG_FILE_DATA_PK 2 3544 852297 61437 852297 852297
Ideally the view should have used a NESTED LOOP join and the indexes, since the number of rows coming from the object collection is only 2.
But the optimizer assumes the default of 8168 rows, leading to a HASH join between the tables and therefore FULL TABLE access.
So my question is: is there any way to influence the statistics when using collections in SQL?
I could use hints to force the indexes, but I am planning to avoid that for now. Currently the time shown in the explain plan is not accurate.
Modified query with hints :
SELECT
/*+ index(tf TG_FILE_PK ) index(tfd TG_FILE_DATA_PK) */
9999,
tbl_typ.FILE_ID,
tf.FILE_NM ,
tf.MIME_TYPE ,
dbms_lob.getlength(tfd.FILE_DATA)
FROM
TG_FILE tf,
TG_FILE_DATA tfd,
( SELECT
*
FROM
TABLE (
SELECT
CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
)
) tbl_typ
WHERE
tf.FILE_ID = tfd.FILE_ID
AND tf.FILE_ID = tbl_typ.FILE_ID
AND tfd.FILE_ID = tbl_typ.FILE_ID;
Elapsed: 00:00:00.01
Execution Plan
Plan hash value: 1670128954
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 194 | 29978 (1)| 00:06:00 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 1 | 194 | 29978 (1)| 00:06:00 |
| 3 | NESTED LOOPS | | 8168 | 1363K| 16379 (1)| 00:03:17 |
| 4 | VIEW | | 8168 | 103K| 29 (0)| 00:00:01 |
| 5 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 8168 | 16336 | 29 (0)| 00:00:01 |
| 6 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 7 | TABLE ACCESS BY INDEX ROWID | TG_FILE_DATA | 1 | 158 | 2 (0)| 00:00:01 |
|* 8 | INDEX UNIQUE SCAN | TG_FILE_DATA_PK | 1 | | 1 (0)| 00:00:01 |
|* 9 | INDEX UNIQUE SCAN | TG_FILE_PK | 1 | | 1 (0)| 00:00:01 |
| 10 | TABLE ACCESS BY INDEX ROWID | TG_FILE | 1 | 23 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
filter("TF"."FILE_ID"="TFD"."FILE_ID")
Statistics
0 recursive calls
0 db block gets
16 consistent gets
8 physical reads
0 redo size
916 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
Thanks,
B
Thanks Tubby,
While searching I had found that the CARDINALITY hint can be used to set statistics for a TABLE function,
but I preferred not to mention it, as it is currently an undocumented hint. I now think I should have mentioned it when posting the first time.
http://www.oracle-developer.net/display.php?id=427
Going through that article, it mentions the following ways to set statistics:
1) CARDINALITY (Undocumented)
2) OPT_ESTIMATE ( Undocumented )
3) DYNAMIC_SAMPLING ( Documented )
4) Extensible Optimiser
I tried it out with the different hints and each works as expected:
CARDINALITY and OPT_ESTIMATE take the value I set,
but the DYNAMIC_SAMPLING hint provides the most accurate estimate of the rows (which is 2 in this particular case).
With CARDINALITY hint
SELECT
/*+ cardinality( e, 5) */*
FROM
TABLE (
SELECT
CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
) e ;
Elapsed: 00:00:00.00
Execution Plan
Plan hash value: 1467416936
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5 | 10 | 29 (0)| 00:00:01 |
| 1 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 5 | 10 | 29 (0)| 00:00:01 |
| 2 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
With OPT_ESTIMATE hint
SELECT
/*+ opt_estimate(table, e, scale_rows=0.0006) */*
FROM
TABLE (
SELECT
CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
) e ;
Execution Plan
Plan hash value: 4043204977
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5 | 485 | 29 (0)| 00:00:01 |
| 1 | VIEW | | 5 | 485 | 29 (0)| 00:00:01 |
| 2 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 5 | 10 | 29 (0)| 00:00:01 |
| 3 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
With DYNAMIC_SAMPLING hint
SELECT
/*+ dynamic_sampling( e, 5) */*
FROM
TABLE (
SELECT
CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
FROM
dual
) e ;
Elapsed: 00:00:00.00
Execution Plan
Plan hash value: 1467416936
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 4 | 11 (0)| 00:00:01 |
| 1 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 2 | 4 | 11 (0)| 00:00:01 |
| 2 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
Note
- dynamic sampling used for this statement (level=2)
I will be testing the last option, "Extensible Optimizer", and will put my findings here.
I hope Oracle improves statistics gathering in future releases for collections used in DML, rather than just deriving a default from the block size.
By the way, do you know why it uses the default based on block size? Is it because that is the smallest granular unit Oracle provides?
Regards,
B