Inner join on two large tables breaks the connection?
I am doing an inner join on two large tables (172,818 and 146,215 rows) and the connection breaks. Using Oracle 8.1.7.0.0.
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Originally I was trying to run ALTER TABLE to add constraints, and it gave the error as well.
ALTER TABLE a ADD (CONSTRAINT a_FK
FOREIGN KEY (a_ID, a_VERSION)
REFERENCES b(b_ID, b_VERSION)
DEFERRABLE INITIALLY IMMEDIATE)
This also gives the same error. The trace file does not make sense to me.
Thanks for the reply, no luck yet.
SQL> show parameter optimizer_max_permutations ;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
optimizer_max_permutations           integer     80000
SQL> show parameter resource_limit ;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
resource_limit                       boolean     FALSE
SQL>
Similar Messages
-
Joining two internal tables (urgent)
Dear all experts,
I am going to join two internal tables: one contains some fields from MARA and the other contains some fields from MAKT.
I have to join these tables without using FOR ALL ENTRIES.
The program is below; I am not sure how to write the logic that fills table itab_3.
*------- defining internal tables
DATA: BEGIN OF itab_1 OCCURS 0,
        matnr TYPE mara-matnr,
      END OF itab_1.
DATA: BEGIN OF itab_2 OCCURS 0,
        matnr TYPE makt-matnr,
        maktx TYPE makt-maktx,
        spras TYPE makt-spras,
      END OF itab_2.
DATA: BEGIN OF itab_3 OCCURS 0,
        matnr TYPE mara-matnr,
        spras TYPE makt-spras,
      END OF itab_3.
*------- taking data into the first internal table
SELECT matnr
  FROM mara
  INTO TABLE itab_1
  WHERE ernam = 'RUDISILL'.
*------- taking data into the second internal table
SELECT matnr maktx spras
  FROM makt
  INTO TABLE itab_2.
sort itab_1 by matnr.
sort itab_2 by matnr.
Can anybody please tell me how to move the fields of itab_2 into itab_3 for every matnr in itab_2 that is also present in itab_1?
Points will surely be assigned for your help.
Waiting.
Warm regards,
Vinay.
Hi,
Kindly check this sample:
DATA: BEGIN OF itab1 OCCURS 0,        "itab with header line
        key_field1 LIKE ztable1-key_field1,
        field1     LIKE ztable1-field1,
        field2     LIKE ztable1-field2,
      END OF itab1.
DATA: BEGIN OF itab2 OCCURS 0,        "itab with header line
        key_field2 LIKE ztable2-key_field2,
        field3     LIKE ztable2-field3,
        field4     LIKE ztable2-field4,
      END OF itab2.
DATA: BEGIN OF itab_final OCCURS 0,
        key_field1 LIKE ztable1-key_field1,
        field1     LIKE ztable1-field1,
        field2     LIKE ztable1-field2,
        field3     LIKE ztable2-field3,
        field4     LIKE ztable2-field4,
      END OF itab_final.
Put the data into the final (merged) internal table:
1. LOOP AT itab1.
     READ TABLE itab2 WITH KEY key_field2 = itab1-key_field1.
     IF sy-subrc = 0.
       itab_final-key_field1 = itab1-key_field1.
       itab_final-field1 = itab1-field1.
       itab_final-field2 = itab1-field2.
       itab_final-field3 = itab2-field3.
       itab_final-field4 = itab2-field4.
       APPEND itab_final.
       CLEAR itab_final.
     ENDIF.
   ENDLOOP.
or
LOOP AT itab1.
  MOVE-CORRESPONDING itab1 TO itab_final.
  READ TABLE itab2 WITH KEY field1 = itab1-field1.
  IF sy-subrc = 0.
    MOVE-CORRESPONDING itab2 TO itab_final.
  ENDIF.
  READ TABLE itab3 WITH KEY field1 = itab1-field1.
  IF sy-subrc = 0.
    MOVE-CORRESPONDING itab3 TO itab_final.
  ENDIF.
  APPEND itab_final.
  CLEAR itab_final.
ENDLOOP.
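Outside ABAP, the same LOOP AT / READ TABLE ... WITH KEY join pattern can be sketched in Python; the table and field names below are illustrative, mirroring itab1/itab2 above:

```python
# Sketch of the loop-and-lookup join pattern shown above.
# Field names mirror the ABAP sample and are illustrative only.
itab1 = [
    {"key_field1": 1, "field1": "a", "field2": "x"},
    {"key_field1": 2, "field1": "b", "field2": "y"},
    {"key_field1": 3, "field1": "c", "field2": "z"},
]
itab2 = [
    {"key_field2": 1, "field3": "f3-1", "field4": "f4-1"},
    {"key_field2": 3, "field3": "f3-3", "field4": "f4-3"},
]

# Build a lookup keyed like READ TABLE itab2 WITH KEY key_field2 = ...
by_key = {row["key_field2"]: row for row in itab2}

itab_final = []
for r1 in itab1:                       # LOOP AT itab1
    r2 = by_key.get(r1["key_field1"])  # READ TABLE itab2 WITH KEY
    if r2 is not None:                 # IF sy-subrc = 0
        itab_final.append({
            "key_field1": r1["key_field1"],
            "field1": r1["field1"],
            "field2": r1["field2"],
            "field3": r2["field3"],
            "field4": r2["field4"],
        })

print(itab_final)  # only rows whose key exists in both tables
```

The dictionary lookup plays the role of the sorted READ TABLE, so each row of itab1 costs one hash probe instead of a scan of itab2.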
Regards,
Anversha -
Join between two nested tables question
Hi. I have two nested tables with a join statement drawing info from both. The query looks like this:
select to_char(N.LASTLOGONDATE, 'YYYY'), count(n.u_name), A.ACCOUNTDISABLED
from coclastlogon, TABLE(COCLASTLOGON.RLLS) N , userpwaudit, TABLE(USERPWAUDIT.PVSS) A
where N.U_NAME = A.USERNAME (+)
and coclastlogon.AUDITID = 12
and userpwaudit.AUDITID = 12
group by to_char(N.LASTLOGONDATE, 'YYYY'), A.ACCOUNTDISABLED
The query runs fine, except it's not generating the non-matching values from the A.ACCOUNTDISABLED field. It is my understanding that placing the (+) instructs the statement to include these rows in the result. If I use the same query with the same un-nested data, it produces the non-matching result set as I require.
My feeling is there might be an issue with the way I've set up my "FROM", but I'm not too sure how to proceed.
Any suggestions?
Thanks!
You're right; by non-matching I mean null values.
Running the query I wrote returns this result:
TO_CHAR(N.LASTLOGONDATE,'YYYY')   COUNT(N.U_NAME)   ACCOUNTDISABLED
2005                              3408              No
2002                              1                 Yes
Running the query using un-nested data returns this result (this is what I'm after):
TO_CHAR(N.LASTLOGONDATE,'YYYY')   COUNT(N.U_NAME)   ACCOUNTDISABLED
2005                              3408              No
2002                              1                 Yes
2005                              27                -
As you can see, the row I'm after is the one with a null ACCOUNTDISABLED value. Essentially, "coclastlogon, TABLE(COCLASTLOGON.RLLS) N" has more values in the N.U_NAME column than the table it is being joined to, which is why I added the (+). I need to account for all values in TABLE(COCLASTLOGON.RLLS), not just the ones that find a match in "userpwaudit, TABLE(USERPWAUDIT.PVSS) A". -
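As an aside on the thread above: the behavior being sought (unmatched rows surfacing with nulls) is exactly what a left outer join produces. A minimal sketch with hypothetical tables standing in for the unnested collections, using ANSI LEFT JOIN syntax (SQLite via Python standing in for Oracle):

```python
import sqlite3

# Sketch of the desired outer-join behavior: rows of the left table
# with no match appear with NULL in the right table's columns.
# Table and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE logons (u_name TEXT, yr TEXT);
    CREATE TABLE audit  (username TEXT, accountdisabled TEXT);
    INSERT INTO logons VALUES ('alice', '2005'), ('bob', '2005'),
                              ('carol', '2002'), ('dave', '2005');
    INSERT INTO audit  VALUES ('alice', 'No'), ('bob', 'No'),
                              ('carol', 'Yes');  -- 'dave' has no match
""")

rows = conn.execute("""
    SELECT l.yr, COUNT(l.u_name), a.accountdisabled
    FROM logons l
    LEFT JOIN audit a ON l.u_name = a.username
    GROUP BY l.yr, a.accountdisabled
""").fetchall()
print(rows)  # the unmatched 'dave' row appears with accountdisabled = None
```

In Oracle's old syntax the (+) must appear on every predicate that touches the outer-joined table, which is easy to get wrong with unnested collections; the explicit LEFT JOIN form avoids that ambiguity.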
Inner Join between two big tables
Hi There,
I have a situation in which I have to write an inner join between two tables on the order of 30k to 60k rows.
My query is as simple as,
select A.a,B.b from A ,B where A.a = B.b;
N.B: a and b are of type varchar
But the problem is it takes nearly 15 minutes to run. Is there a better way of doing an inner join between such large tables?
Thanks,
Jose John.
Thank you all for your help. Indexing works. :)
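For illustration, here is a small sketch of the indexing fix (SQLite via Python standing in for the real database; the single-column tables A and B mirror the query in the question):

```python
import sqlite3

# Sketch: an index on the joined column lets the database probe for
# matches instead of scanning table B once per row of A.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (a TEXT);
    CREATE TABLE b (b TEXT);
""")
# a holds k0..k999; b holds the even-numbered keys k0, k2, ..., k1998.
conn.executemany("INSERT INTO a VALUES (?)",
                 [(f"k{i}",) for i in range(1000)])
conn.executemany("INSERT INTO b VALUES (?)",
                 [(f"k{i}",) for i in range(0, 2000, 2)])
conn.execute("CREATE INDEX b_b_idx ON b(b)")

rows = conn.execute("SELECT a.a, b.b FROM a, b WHERE a.a = b.b").fetchall()
print(len(rows))  # the even keys of a that also appear in b
```

Without the index, the join degenerates to a nested scan (roughly rows(A) x rows(B) comparisons); with it, each row of A costs one B-tree probe.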
--JJ -
Hi Guys,
I have two internal tables with the same structure; ITAB1 has 100 records and ITAB2 has 150 records. I need to combine the two internal tables into ITAB3.
I know we can loop over one internal table and append record by record.
Is there any other way to combine two internal tables?
Thanks,
Gourisankar.
Hi,
You can use INSERT LINES OF ITAB1 INTO TABLE ITAB3 and afterwards do the same with ITAB2.
Regards, Gerd Rother -
I've got 2 tables: pay_run_results (+/- 35,000,000 records) and XX_PAY_COSTS (25,000,000 records).
When I join those tables I get an error: ORA-01652: unable to extend temp segment by 128 in tablespace temp1.
So I thought the temp space was too small, but a DBA told me the temp space is 4.4 GB.
To reduce the total number of records I join another table; see below:
select
from
pay_run_results,
XX_PAY_COSTS,
PAY_PAYROLL_ACTIONS
where
PAY_RUN_RESULTS.RUN_RESULT_ID = XX_PAY_COSTS.run_result_id
and PAY_PAYROLL_ACTIONS.PAYROLL_ACTION_ID = XX_PAY_COSTS.PAYROLL_ACTION_ID
and PAY_PAYROLL_ACTIONS.ACTION_TYPE ='C'
and XX_PAY_COSTS.DEBIT_OR_CREDIT = 'C'
When running the above query it took 44 minutes to complete, but I did not get the ORA-01652 :)
So I have 2 questions:
1) Why do you get an ORA-01652 when there is no sorting in the query? Is the temp space also used to temporarily store the result of a query? The result should be roughly 25,000,000 records.
2) The query below returns roughly 3,000,000 records but still runs for 44 minutes. How do you know what's normal? I think 44 minutes is quite long.
Thanks for helping....
You'll need to provide more information, like the database version and the execution plan (or even better: a tkprof/trace report with wait events).
It is explained here:
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html -
Can we apply join between two internal tables?
Itab has fields A, B, C.
Data: begin of itab occurs 1,
        A type i,
        B type i,
        C type i,
      End of itab.
Jtab has fields A, I, J.
Data: begin of jtab occurs 1,
        A type i,
        I type i,
        J type i,
      End of jtab.
The common field between itab and jtab is "A".
Now I need to collect A,B,C,I, J in another internal table ktab.
How should I be doing this.
If I use a SELECT query with an inner join between itab and jtab, it says "itab is not a database table".
How should I get the result ktab with A B C I J fields?
Please help; any help will be highly appreciated.
a®s wrote:
>
> sort itab_all by A
> delete adjacent duplicates from itab_all comparing A.
>
>
Do you have the above code in ?
Here A is the common field between both tables. First we append itab_1 and itab_2 into table itab_all, then delete the adjacent duplicates from itab_all. Then we read itab_1 and itab_2 for possible matches and update the corresponding values in itab_all,
so there will NOT be a chance of duplicates in itab_all.
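The append-then-deduplicate approach described above can be sketched in Python (field names are hypothetical, matching the itab/jtab example in the question):

```python
# Sketch of the approach above: collect the keys of both tables,
# de-duplicate on the common field A, then look each key up in the
# originals to fill in the remaining fields. Field names illustrative.
itab_1 = [{"A": 1, "B": "b1", "C": "c1"}, {"A": 2, "B": "b2", "C": "c2"}]
itab_2 = [{"A": 1, "I": "i1", "J": "j1"}, {"A": 3, "I": "i3", "J": "j3"}]

# APPEND LINES OF both tables, then SORT + DELETE ADJACENT DUPLICATES
keys = sorted({r["A"] for r in itab_1} | {r["A"] for r in itab_2})

lookup_1 = {r["A"]: r for r in itab_1}
lookup_2 = {r["A"]: r for r in itab_2}

ktab = []
for a in keys:  # read both tables for possible matches
    row = {"A": a, "B": None, "C": None, "I": None, "J": None}
    row.update({k: v for k, v in lookup_1.get(a, {}).items() if k != "A"})
    row.update({k: v for k, v in lookup_2.get(a, {}).items() if k != "A"})
    ktab.append(row)

print(ktab)
```

Note this produces a full outer join (rows present in only one table keep None in the other table's fields); if only matching keys are wanted, restrict `keys` to the intersection of the two key sets.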
a® -
I have some queries that use an inner join between a table with a few hundred rows and a table that will eventually have many millions of rows. The join is on an integer value that is part of the primary key on the larger table. The primary key on that table consists of the integer and another field which is a BigInt (representing date/time to the millisecond). The query also has a predicate (where clause) with an exact match on the BigInt.
The query take about a second to execute at the moment but I was wondering whether I should expect a large increase in execution time as the years go by.
Is an inner join on the large table advisable?
By the way, the first field in the primary key is the integer followed by the BigInt, so any thought of selecting on the BigInt into temp table before attempting the join probably won't help.
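As a rough illustration of the query shape in question (SQLite via Python standing in for SQL Server; names loosely mirror the Tags/NumericSamples schema, and row counts are tiny stand-ins):

```python
import sqlite3

# Sketch: joining a small table to a large one whose composite primary
# key leads on the join column, with an exact predicate on the second
# key column. Names loosely mirror the Tags/NumericSamples schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tags (id INTEGER PRIMARY KEY, tagname TEXT);
    CREATE TABLE numericsamples (
        tagid INTEGER, sampledatetime INTEGER, samplevalue REAL,
        PRIMARY KEY (tagid, sampledatetime));
""")
conn.executemany("INSERT INTO tags VALUES (?, ?)",
                 [(i, f"tag{i}") for i in range(100)])
conn.executemany("INSERT INTO numericsamples VALUES (?, ?, ?)",
                 [(i, t, float(i + t)) for i in range(100) for t in range(50)])

rows = conn.execute("""
    SELECT t.tagname, s.samplevalue
    FROM tags t
    INNER JOIN numericsamples s ON t.id = s.tagid
    WHERE s.sampledatetime = ?
""", (7,)).fetchall()
print(len(rows))  # one sample per tag at that timestamp
```

Because the clustered key leads on tagid, each tag costs one B-tree seek plus a short range check, so growth in the history length should affect the join roughly logarithmically rather than linearly, though only an execution plan on the real data can confirm that.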
R Campbell
Just in case anyone wants to see the full picture (which I am not actually expecting), this is a script for all the SQL objects involved.
The numbers of rows in the tables are:
Tags 5,000
NumericSamples millions (over time)
TagGroups 50
GroupTags 500
CREATE TABLE [dbo].[Tags](
[ID] [int] NOT NULL,
[TagName] [nvarchar](110) NOT NULL,
[Address] [nvarchar](80) NULL,
[DataTypeID] [smallint] NOT NULL,
[DatasourceID] [smallint] NOT NULL,
[Location] [nvarchar](4000) NULL,
[Properties] [nvarchar](4000) NULL,
[LastReadSampleTime] [bigint] NOT NULL,
[Archived] [bit] NOT NULL,
CONSTRAINT [Tags_ID_PK] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Tags] WITH NOCHECK ADD CONSTRAINT [Tags_DatasourceID_Datasources_ID_FK] FOREIGN KEY([DatasourceID])
REFERENCES [dbo].[Datasources] ([ID])
GO
ALTER TABLE [dbo].[Tags] CHECK CONSTRAINT [Tags_DatasourceID_Datasources_ID_FK]
GO
ALTER TABLE [dbo].[Tags] WITH NOCHECK ADD CONSTRAINT [Tags_DataTypeID_DataTypes_ID_FK] FOREIGN KEY([DataTypeID])
REFERENCES [dbo].[DataTypes] ([ID])
GO
ALTER TABLE [dbo].[Tags] CHECK CONSTRAINT [Tags_DataTypeID_DataTypes_ID_FK]
GO
ALTER TABLE [dbo].[Tags] ADD CONSTRAINT [DF_Tags_LastReadSampleTime] DEFAULT ((552877956000000000.)) FOR [LastReadSampleTime]
GO
ALTER TABLE [dbo].[Tags] ADD DEFAULT ((0)) FOR [Archived]
GO
CREATE TABLE [dbo].[NumericSamples](
[TagID] [int] NOT NULL,
[SampleDateTime] [bigint] NOT NULL,
[SampleValue] [float] NULL,
[QualityID] [smallint] NOT NULL,
CONSTRAINT [NumericSamples_TagIDSampleDateTime_PK] PRIMARY KEY CLUSTERED
(
[TagID] ASC,
[SampleDateTime] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[NumericSamples] WITH NOCHECK ADD CONSTRAINT [NumericSamples_QualityID_Qualities_ID_FK] FOREIGN KEY([QualityID])
REFERENCES [dbo].[Qualities] ([ID])
GO
ALTER TABLE [dbo].[NumericSamples] CHECK CONSTRAINT [NumericSamples_QualityID_Qualities_ID_FK]
GO
ALTER TABLE [dbo].[NumericSamples] WITH NOCHECK ADD CONSTRAINT [NumericSamples_TagID_Tags_ID_FK] FOREIGN KEY([TagID])
REFERENCES [dbo].[Tags] ([ID])
GO
ALTER TABLE [dbo].[NumericSamples] CHECK CONSTRAINT [NumericSamples_TagID_Tags_ID_FK]
GO
CREATE TABLE [dbo].[TagGroups](
[ID] [int] IDENTITY(1,1) NOT NULL,
[TagGroup] [varchar](50) NULL,
[Aggregates] [varchar](250) NULL,
[NumericData] [bit] NULL,
CONSTRAINT [PK_TagGroups] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[TagGroups] ADD CONSTRAINT [DF_Tag_Groups_Aggregates] DEFAULT ('First') FOR [Aggregates]
GO
ALTER TABLE [dbo].[TagGroups] ADD CONSTRAINT [DF_TagGroups_NumericData] DEFAULT ((1)) FOR [NumericData]
GO
CREATE TABLE [dbo].[GroupTags](
[ID] [int] IDENTITY(1,1) NOT NULL,
[TagGroupID] [int] NULL,
[TagName] [varchar](150) NULL,
[ColumnName] [varchar](50) NULL,
[SortOrder] [int] NULL,
[TotalFactor] [float] NULL,
CONSTRAINT [PK_GroupTags] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[GroupTags] WITH CHECK ADD CONSTRAINT [FK_GroupTags_TagGroups] FOREIGN KEY([TagGroupID])
REFERENCES [dbo].[TagGroups] ([ID])
ON UPDATE CASCADE
ON DELETE CASCADE
GO
ALTER TABLE [dbo].[GroupTags] CHECK CONSTRAINT [FK_GroupTags_TagGroups]
GO
ALTER TABLE [dbo].[GroupTags] ADD CONSTRAINT [DF_GroupTags_TotalFactor] DEFAULT ((1)) FOR [TotalFactor]
GO
CREATE VIEW [dbo].[vw_GroupTags]
AS
SELECT TOP (10000) dbo.TagGroups.TagGroup AS TableName, dbo.TagGroups.Aggregates AS SortOrder, dbo.GroupTags.SortOrder AS TagIndex, dbo.GroupTags.TagName,
dbo.Tags.ID AS TagId, dbo.TagGroups.NumericData, dbo.GroupTags.TotalFactor, dbo.GroupTags.ColumnName
FROM dbo.TagGroups INNER JOIN
dbo.GroupTags ON dbo.TagGroups.ID = dbo.GroupTags.TagGroupID INNER JOIN
dbo.Tags ON dbo.GroupTags.TagName = dbo.Tags.TagName
ORDER BY SortOrder, TagIndex
GO
CREATE procedure [dbo].[GetTagTableValues]
@SampleDateTime bigint,
@TableName varchar(50),
@PadRows int = 0
as
BEGIN
DECLARE @i int
DECLARE @ResultSet table(TagName varchar(150), SampleValue float, ColumnName varchar(50), SortOrder int, TagIndex int)
set @i = 0
INSERT INTO @ResultSet
SELECT vw_GroupTags.TagName, NumericSamples.SampleValue, vw_GroupTags.ColumnName, vw_GroupTags.SortOrder, vw_GroupTags.TagIndex
FROM vw_GroupTags INNER JOIN NumericSamples ON vw_GroupTags.TagId = NumericSamples.TagID
WHERE (vw_GroupTags.TableName = @TableName) AND (NumericSamples.SampleDateTime = @SampleDateTime)
set @i = @@ROWCOUNT
if @i < @PadRows
BEGIN
WHILE @i < @PadRows
BEGIN
INSERT @ResultSet (TagName, SampleValue, ColumnName, SortOrder, TagIndex) VALUES ('', NULL, '', 0, 0)
set @i = @i + 1
END
END
select TagName, SampleValue, ColumnName, SortOrder, TagIndex
from @ResultSet
END
R Campbell -
Slow query due to large table and full table scan
Hi,
We have a large Oracle database, v 10g. Two of the tables in the database have over one million rows.
We have a few queries which take a lot of time to execute, though not always; it seems that when load is high the queries tend
to take much longer. Average time may be 1 or 2 seconds, but the max time can be up to 2 minutes.
We have now used Oracle Grid to help us examine the queries. We have found that some of the queries require two or three full table scans.
Two of the full table scans are of the two large tables mentioned above.
This is an example query:
SELECT table1.column, table2.column, table3.column
FROM table1
JOIN table2 on table1.table2Id = table2.id
LEFT JOIN table3 on table2.table3id = table3.id
WHERE table1.id IN(
SELECT id
FROM (
(SELECT a.*, rownum rnum FROM(
SELECT table1.id
FROM table1,
table2,
table3
WHERE
table1.table2id = table2.id
AND
table2.table3id IS NULL OR table2.table3id = :table3IdParameter
) a
WHERE rownum <= :end))
WHERE rnum >= :start
Table1 and table2 are the large tables in this example. This query starts two full table scans on those tables.
Can we avoid this? We have, what we think are, the correct indexes.
/best regards, Håkan
Hi Håkan - welcome to the forum.
Firstly, please read the forum FAQ - top right of page.
Please format your SQL using tags [code /code].
In order to help us to help you.
Please post the table structures - relevant fields only (i.e. joined, FK, PK fields) in the following form - note the use of code tags - so we can just run the table create script.
CREATE TABLE table1 (
Field1 Type1,
Field2 Type2,
FieldN TypeN
);
Then give us some table data - not 100's of records - just enough, in the form
INSERT INTO Table1 VALUES(Field1, Field2.... FieldN);
Please post the EXPLAIN PLAN - again with code tags.
HTH,
Paul...
/best regards, Håkan -
Efficiently Querying Large Table
I have to query a recordset of 14k records against (join) a very large table: billions of rows - even a count of the table does not return any result after 15 minutes.
I tried a PL/SQL procedure that stores the first recordset in a temp table and then prepares two cursors: one on the temp table and the other on the large table.
However, the PL/SQL procedure runs for a long time and just gives up with this error:
SQL> exec match;
ERROR:
ORA-01041: internal error. hostdef extension doesn't exist
BEGIN match; END;
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Is there is way through which I can query more efficiently?
- Using chunks of records from the large table at a time - how do I do that? (rowid)
- Or just ask the DBA to partition the table - but the whole table would still need to be queried.
The temp table is:
CREATE TABLE test AS SELECT a.mon_ord_no, a.mo_type_id, a.p2a_pbu_id, a.creation_date,b.status_date_time,
a.expiry_date, a.current_mo_status_desc_id, a.amount,
a.purchaser_name, a.recipent_name, a.mo_id_type_id,
a.mo_redeemed_by_id, a.recipient_type, c.pbu_id, c.txn_seq_no, c.txn_date_time
FROM mon_order a, mo_status b, host_txn_log c
where a.mon_ord_no = b.mon_ord_no
and a.mon_ord_no = c.mon_ord_no
and b.status_date_time = c.txn_date_time
and b.status_desc_id = 7
and a.current_mo_status_desc_id = 7
and a.amount is not null
and a.amount > 0
order by b.status_date_time;
and the PL/SQL Procedure is:
CREATE OR REPLACE PROCEDURE MATCH
IS
--DECLARE
deleted INTEGER :=0;
counter INTEGER :=0;
CURSOR v_table IS
SELECT DISTINCT pbu_id, txn_seq_no, create_date
FROM host_v
WHERE status = 4;
v_table_record v_table%ROWTYPE;
CURSOR temp_table (v_pbu_id NUMBER, v_txn_seq_no NUMBER, v_create_date DATE) IS
SELECT * FROM test
WHERE pbu_id = v_pbu_id
AND txn_seq_no = v_txn_seq_no
AND creation_date = v_create_date;
temp_table_record temp_table%ROWTYPE;
BEGIN
OPEN v_table;
LOOP
FETCH v_table INTO v_table_record;
EXIT WHEN v_table%NOTFOUND;
OPEN temp_table (v_table_record.pbu_id, v_table_record.txn_seq_no, v_table_record.create_date);
LOOP
FETCH temp_table INTO temp_table_record;
EXIT WHEN temp_table%NOTFOUND;
DELETE FROM test WHERE pbu_id = v_table_record.pbu_id AND
txn_seq_no = v_table_record.txn_seq_no AND
creation_date = v_table_record.create_date;
END LOOP;
CLOSE temp_table;
END LOOP;
CLOSE v_table;
END MATCH;
/
Many thanks.
I can get the explain plan for the SQL statement, but I am not sure how to get it for the PL/SQL. For which section of the PL/SQL do I get the explain plan? I am using SQL Navigator.
I can create the cursor with the join, and if the delete statement is not needed, then there is no requirement for the procedure itself. Should I just run the query as a SQL statement?
You have not said what I should do with the rowid.
Regards -
Calculating count from two fact tables
Hi Guys,
My requirement,
I have two fact tables, F1 and F2, connected through a dimension table D1; all tables are connected by an id column.
F1 (n) ---- (1) D1 (1) ---- (n) F2
I want to find the distinct count from fact F1 where the value is also present in F2 (i.e. the F1.id = F2.id condition).
This measure is useful throughout my project, so I want to create a logical column. How do I create it?
I tried this way:
I combined the two fact tables into one fact table by adding a logical source, and created a logical column from the existing logical source where F1.id = D1.id and D1.id = F2.id, but it is wrong.
Can anyone suggest a way to solve my problem?
Thanks in advance!!!
Hi,
You can create an opaque view in the physical layer and write a SQL like:
Select f1.id, count(*) from f1, f2 where f1.id = f2.id group by f1.id
Join this through id with the dimension, add this fact table to the LTS of your fact, and use count(*) in the report.
Regards,
Sandeep -
Unable to join Unit Cost(Cost table) to Sales Table.
Hi All,
I am a beginner with OBIEE. This question may be silly for you, but I am facing an issue, which I describe below:
We have two tables: SALES (fact table) and COST (fact table). I want to apply calculated measures on these logical tables (BMM layer). Per the Oracle document, when they drag the Unit_Cost column from the COST table (physical layer) to the SALES table, no new logical table source is created for them; but when I try this, I get one more logical table source (COST). So my doubts are:
1) Why is the extra LTS created for me?
2) Is a join required between the COST and SALES tables to remove this logical table source (COST)?
Please let me know if any clarification is required on this issue.
Thanks
Shashank Jain
It is correct. When you have two or more sources for the same logical table, OBIEE will create an LTS for each source. You can't join these two fact tables physically, but they will join through a conformed dimension (common dim), based on the physical schema design in the physical layer.
Thanks
Jay. -
How many network or database calls are made when joining more than one table?
Hi Friends,
Could anybody please let me know how many network calls are made when joining more than one table?
Thanks
Rinky
Hi Rinky,
Normally, when a JOIN between two database tables is made, the following steps occur:
1) Control goes to the database. Based on the JOIN and WHERE conditions, an intermediate result set is built and filled at the DATABASE level, so the computation is done in the database.
2) Once that result set is filled at the database level, it is sent back to the application level.
A join operation normally minimizes round trips to the database, as most of the computation is done at the database level and only the results are sent back to the application layer.
Thus a simple JOIN operation makes a single DATABASE call.
NOTE: If you are satisfied with the explanation, then please reward points
accordingly :).
Thanks and regards,
Ravi . -
Why oh why, weird performance on joining large tables
Hello.
I have a large table cotaining dates and customer data. Organised as:
DATE CUSTOMER_ID INFOCOLUMN1 INFOCOLUMN2 etc...
Rows per date are a couple of million.
What I'm trying to do is to make a comparison between date a and date b and track changes in the database.
When I do a:
SELECT stuff
FROM table t1
INNER JOIN table t2
ON t1.CUSTOMER_ID = t2.CUSTOMER_ID
WHERE t1.date = TO_DATE(SOME_DATE)
AND t2.date = TO_DATE(SOME_OTHER_DATE)
I get a result in about 40 seconds, which is acceptable.
Then I try doing:
SELECT stuff
FROM (SELECT TO_DATE(LAST_DAY(ADD_MONTHS(SYSDATE, 0 - r.l))) AS DATE
      FROM dual
      INNER JOIN (SELECT level l FROM dual CONNECT BY LEVEL <= 1) r ON 1 = 1) time
INNER JOIN table t1
ON t1.date = time.date
INNER JOIN table t2
ON t1.CUSTOMER_ID = t2.CUSTOMER_ID
WHERE t2.date = ADD_MONTHS(time.date, -1)
I.e. I generate a date field in a subselect, which I then use to join the tables.
When I try that, the query takes an hour or two to complete, with the same result set as the first example.
The only difference is that in the first case I give the dates literally, while in the other case I generate them in the subselect. They are the same dates, and they are dates in both cases.
Any ideas?
Thanks
Edited by: user1970293 on 2010-apr-29 00:52
Edited by: user1970293 on 2010-apr-29 00:59
If you get the same results, then why change the query to the second one?
Dates are dates... the formatting is just "pretty".
This
select to_date(last_day(add_months(sysdate, 0 - r.l)))
from dual
inner join (select level l from dual connect by level <= 1) r on 1 = 1
doesn't make much sense... what is it supposed to do?
(by the way: you are doing a TO_DATE on a DATE...) -
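For what it's worth, the subselect in the reply above appears to compute the last day of the month l months before today. Assuming that intent, a sketch of the same calculation outside SQL:

```python
from datetime import date, timedelta

# Sketch of what the CONNECT BY subselect appears to compute:
# the last day of the month that lies months_back months before today.
def last_day_months_back(today: date, months_back: int) -> date:
    # Step back the requested number of months, then take the last day
    # of that month by rolling to the 1st of the following month and
    # subtracting one day.
    total = today.year * 12 + (today.month - 1) - months_back
    year, month = divmod(total, 12)
    month += 1
    if month == 12:
        first_of_next = date(year + 1, 1, 1)
    else:
        first_of_next = date(year, month + 1, 1)
    return first_of_next - timedelta(days=1)

print(last_day_months_back(date(2010, 4, 29), 1))  # 2010-03-31
```

If that is indeed the intent, computing the dates once (in the client or via bind variables) and passing them as literals would keep the fast plan from the first query.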
Can we implement the custom sql query in CR for joining the two tables
Hi All,
Is there any way to implement a custom SQL query in CR for joining the two tables?
My requirement is that I need to write the SQL logic for joining the two tables...
Thanks,
Gana
In the Database Expert, expand the Create New Connection folder and browse the subfolders to locate your data source.
Log on to your data source if necessary.
Under your data source, double-click the Add Command node.
In the Add Command to Report dialog box, enter an appropriate query/command for the data source you have opened.
For example:
SELECT
Customer.`Customer ID`,
Customer.`Customer Name`,
Customer.`Last Year's Sales`,
Customer.`Region`,
Customer.`Country`,
Orders.`Order Amount`,
Orders.`Customer ID`,
Orders.`Order Date`
FROM
Customer Customer INNER JOIN Orders Orders ON
Customer.`Customer ID` = Orders.`Customer ID`
WHERE
(Customer.`Country` = 'USA' OR
Customer.`Country` = 'Canada') AND
Customer.`Last Year's Sales` < 10000.
ORDER BY
Customer.`Country` ASC,
Customer.`Region` ASC
Note: The use of double or single quotes (and other SQL syntax) is determined by the database driver used by your report. You must, however, manually add the quotes and other elements of the syntax as you create the command.
Optionally, you can create a parameter for your command by clicking Create and entering information in the Command Parameter dialog box.
For more information about creating parameters, see To create a parameter for a command object.
Click OK.
You are returned to the Report Designer. In the Field Explorer, under Database Fields, a Command table appears listing the database fields you specified.
Note:
To construct the virtual table from your Command, the command must be executed once. If the command has parameters, you will be prompted to enter values for each one.
By default, your command is called Command. You can change its alias by selecting it and pressing F2.