Why Data Sort is required for Merge Join Transformation
hi,
In my understanding, Merge Join works very similarly to T-SQL joins, but I am trying to understand why sorting is a compulsory requirement. I have looked into the following article, but it was not helpful:
http://technet.microsoft.com/en-us/library/ms137653(v=sql.105).aspx
aamertaimor
Merge Join walks through two sorted sets, in the order given by your inputs or produced by the Sort transformation. It loads one record from each input. If the keys match, it outputs a row with information from both inputs. If the left input's key is less than the right input's key, then either the left row is sent out with no right-input information (for an outer join), or, for an inner join, the Merge Join component simply fetches the next row from the left input. Likewise, if the right input's key is less than the left input's key, then for a full outer join the right row is output with no left-input information; otherwise the component fetches the next row from the right input.
Here is the beauty of it: SSIS only needs to retain a couple of rows in memory.
What if Microsoft decided there were no requirement for sorting? Then, for the Merge Join to work, it would have to load all of the rows from one input into memory and look each row up there. That would require a large amount of memory.
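The walk described above can be sketched in a few lines of Python. This is a minimal illustration of the streaming idea, not SSIS internals, and it assumes unique join keys on both inputs:

```python
def merge_join(left, right, key=lambda row: row[0]):
    """Inner-join two inputs that are already sorted on the join key,
    holding only the current row from each input in memory."""
    left, right = iter(left), iter(right)
    l, r = next(left, None), next(right, None)
    out = []
    while l is not None and r is not None:
        if key(l) == key(r):              # keys match: emit combined row
            out.append((l, r))
            l, r = next(left, None), next(right, None)
        elif key(l) < key(r):             # inner join: advance the left input
            l = next(left, None)
        else:                             # advance the right input
            r = next(right, None)
    return out

# Only two rows are ever held at once, no matter how large the inputs are.
print(merge_join([(1, "a"), (3, "b")], [(1, "x"), (2, "y"), (3, "z")]))
```

Because each input is consumed strictly in key order, the component never needs to look backwards, which is exactly why the sort is compulsory.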
By the way, if you want Merge Join behavior without the sort, use a Lookup component.
Russel Loski, MCT, MCSE Data Platform/Business Intelligence. Twitter: @sqlmovers; blog: www.sqlmovers.com
Similar Messages
-
Why XL line cards required for OTV?
According to Cisco's website, there are specific types of line cards required to support OTV on N7K (N7K-8 port 10GbE with XL options, or N7K-48 port 10/100/1000 Module with XL option).
Q1: Do I have to use the physical ports on those line cards to make OTV related interfaces? Can I use logical interfaces, and use physical ports on those line cards for non-OTV connectivity, such as used as a layer 2 access port?
Q2: Since the OTV feature is enabled globally, why are specific line cards required in the first place? What features/services do the line cards provide for OTV operations?
Thanks.
Has anyone seen design docs that suggest using 10-Gig ports for the OTV interfaces? I know the Q&A indicates support for all M1 line cards (not F1 or F2), but I'm wondering if there's any clearly defined design reason for not using the 1-G line cards for this (such as the N7K-M148GS-11L).
I'm basically asking if it would be recommended (for a system with no additional open 10-G ports) to purchase another 10-G blade (such as the N7K-M108X2-12L at $27), or could we do the job with spare ports on an existing 1-G card? I've been through several docs, and while none of them indicate 10-G connectivity is mandatory, I'm hoping for a reason why Cisco might be pushing/recommending this for any size of OTV deployment.
Thank you. -
Why is my password required for printing?
Hi everyone,
I just got a brand new Mac Pro and I'm a bit puzzled about some of the security hoops it makes me jump through. I didn't mess much with the security settings, besides having to fix some permissions to share files with my MacBook, so I feel I'm unlikely to have caused this behavior.
I was thus surprised (and annoyed) when I attempted to print and was asked each time to enter my password. The printer is connected to a Windows XP machine on a home network. I regularly use this printer with my MacBook and never have to enter any password. I must admit that there’s a chance I may have been asked for my password the first time I printed, and decided to store it in my keychain, and this could explain why I’m never asked for it. I just don’t remember.
Nevertheless, my question is whether it is standard for a new Mac Pro to ask me for my password when printing on such a setup. Is the only remedy to this to include the password in the keychain?
By the way, storing passwords for printers, web sites, etc., in a keychain is pretty unsafe unless access to my computer is password-protected, right?
Comments and suggestions will be greatly appreciated.
Hiya, Jon99 from Midwest,
I am surprised that printing asks you for your password. I have printed for a long time, both over USB and wirelessly via my WPA2 hidden network, and the printer has never asked me for a password! On a PC I would suspect a virus, but I have not heard of that for a Mac (there are only about 40, as opposed to tens of thousands on PCs).
The only explanation I can think of is that it is a shared network printer: because yours "belongs" to another machine (the PC), it may be that it is not your Mac asking for a password to print, but your Mac asking you to access your (hopefully) secure PC. And yes, you probably stored your password in your MacBook keychain. Why don't you try to print again and check the "details" of what Keychain actually wants (it's 3/4 down the window; click on Details)?
A Mac normally only asks for passwords when installing programs, perhaps for mail sending/receiving problems, or for AirPort and similar. -
Can I get some advice on why data usage is so high since joining Meteor broadband?
I have been a Meteor 4G broadband customer for 3 months and I have two plans that allow me 80GB per month. Before joining Meteor I was with Three and had a 60GB monthly allowance, and I never went over it. Since joining Meteor we are constantly having to switch off the router for most of the day because our usage is really high. On average we turn off the WiFi each day until 6pm in the evening, and we still use 1.8GB when only using the laptop for one hour! We have updated all settings on our phones for manual updates; my son has a PS4 but he does not play online any more. We only use the laptop for reading news etc. We never watch movies online; occasionally we use YouTube. Can you offer any reasons why this is happening? We are not getting any enjoyment from this service as we are constantly obsessing about how much data we use! We are seriously thinking about cancelling the service.
Hi there NulaF
Have you checked your usage on MyMeteor to see if this matches? Don't forget that MyMeteor can take up to 24 hours to update.
Have a look at your usage and if there is anything that you want to query, call Care. They will then get in touch with Tech and raise a ticket if they have to
-Kyle -
Auto Date/Time stamp required for Numbers 3.1
I would like to automatically add a Date/Time stamp to the last column's cell, when a person's name is selected from the pop-up menu in Column C.
As a number of different people will be using the spreadsheet, I'd prefer not to have a keyboard short cut, as it can be easily overlooked.
I was wondering if it were possible to use a script to copy a cell's (column D) results, containing the 'NOW()' or 'TODAY()' function, and paste the results into the 2nd cell (Column E), allowing the last column's formula to reference the 2nd cell?
The formula in the last cell =if($C2="","",$E2) can then reference a static cell, rather than a live cell.
I know enough to be dangerous, so I'm not sure if I'm heading in the right direction.
Thanking you all in advance.
Hi Gordon,
Is this the same as a script?
Good question! It's fun to see people try to define what a service is. It may be one of the least understood and appreciated features on the Mac. It was certainly a mystery to me not long ago.
I think of a service as anything that can be made to appear in the Services menu. Apple has made that easy with Automator. You fire up Automator, create a new service "workflow" document, plop in a script, save, and see the result immediately in the Services menu. You can also send the .workflow package to someone else, who only needs to double-click to install it.
This particular date-time service contains a short AppleScript (you can view it by opening the .workflow in Automator). But other languages (Python, Perl, etc.) and Unix utilities can be used too, as illustrated here.
Since Numbers has decent support for AppleScript, it is quite easy for the non-technical user to "bolt on" menu choices to do quite sophisticated things that are not built into the interface itself. I find it more accessible than (and just as powerful as) the VBA used for Excel macros. But that could be mostly a matter of familiarity. In any case, it adds a whole new dimension to Numbers.
SG -
Why is security user required for Tux Domain?
I have to add a user in Weblogic Security Realm with user name equal to the Tuxedo
Remote Domain name. Otherwise the service request from Tuxedo to WTC is rejected
"Failed to get user identity".
In WTC Local WLS Domain "AIRCORE-WLS"... Security=None
In the Remote Tuxedo Domain "AIRCORE-TUX" DMCONFIG... SECURITY=NONE
So why is Weblogic trying to authenticate the remote TDOM as a user?
####<Feb 18, 2003 6:43:28 PM CST> <Debug> <WTC> <EA-LAWSTUC-W2K> <aircoreserver>
<ExecuteThread: '11' for queue: 'default'> <kernel identity> <> <180046> <]/rdsession(0)/dispatch/15/Failed
to get user identity: javax.security.auth.login.FailedLoginException: Authentication
Failed: User AIRCORE-TUX javax.security.auth.login.LoginException: Identity Assertion
Failed: User AIRCORE-TUX does not exist>
After creating user "AIRCORE-TUX" in Security/Realms/myrealm/users, service request
works.
####<Feb 18, 2003 6:49:11 PM CST> <Debug> <WTC> <EA-LAWSTUC-W2K> <aircoreserver>
<ExecuteThread: '1' for queue: 'default'> <kernel identity> <> <180046> <[InvokeInfo/setTargetSubject/(principals=[AIRCORE-TUX])>
Carl,
Carl Lawstuen wrote:
>
I have to add a user in Weblogic Security Realm with user name equal to the Tuxedo
Remote Domain name. Otherwise the service request from Tuxedo to WTC is rejected
"Failed to get user identity".
This is because WTC needs to get the correct user credential to access the WLS EJB. You either add users to WLS, or add the remote domain id (access point id) as a user to WLS, depending on your configuration and the release of Tuxedo the request came from.
>
In WTC Local WLS Domain "AIRCORE-WLS"... Security=None
In the Remote Tuxedo Domain "AIRCORE-TUX" DMCONFIG... SECURITY=NONE
This SECURITY is not for ACLs or user credentials; it is for authenticating the TDOMAIN session. It is done at session negotiation/establishment time. It has to do with the connection principal but nothing to do with ordinary users. Since you set it to NONE, no session authentication is done.
>
So why is Weblogic trying to authenticate the remote TDOM as a user?
As I mentioned before, WTC needs a user credential to access WLS properly.
>
####<Feb 18, 2003 6:43:28 PM CST> <Debug> <WTC> <EA-LAWSTUC-W2K> <aircoreserver>
<ExecuteThread: '11' for queue: 'default'> <kernel identity> <> <180046> <]/rdsession(0)/dispatch/15/Failed
to get user identity: javax.security.auth.login.FailedLoginException: Authentication
Failed: User AIRCORE-TUX javax.security.auth.login.LoginException: Identity Assertion
Failed: User AIRCORE-TUX does not exist>
After creating user "AIRCORE-TUX" in Security/Realms/myrealm/users, the service request works.
As I mentioned before, depending on your configuration and Tuxedo release, you have to use either the access point id (domain id/connection principal) or a real user. Once you have this in place it should work fine.
>
####<Feb 18, 2003 6:49:11 PM CST> <Debug> <WTC> <EA-LAWSTUC-W2K> <aircoreserver>
<ExecuteThread: '1' for queue: 'default'> <kernel identity> <> <180046> <[InvokeInfo/setTargetSubject/(principals=[AIRCORE-TUX])>
Regards,
Honghsi -
Why is email address required for digital sig?
Hi, all. Thanks, in advance, for any help :-)
We have created an application in LiveCycle and have incorporated digital signatures to allow applicants to sign the various forms within the application, including W-4 and I-9. Here is the next question....
By default, the digital signature cannot be created without entering an email address. The problem is that some of our applicants do not have an email address (yes, there are still humans on the planet who don't have one :-). Is there a way to get around this? Surely that can't be the only legally valid piece of personal information that can be used to create a digital sig? Can we offer an option if they don't have an email address? We don't want to instruct them to enter bogus information.
Thanks,
Michelle
Michelle,
When Acrobat is used to create a digital certificate, it creates a self-signed certificate that conforms to the X.509 certificate standard. I haven't read the X.509 spec from start to finish, but I would assume that the standard dictates that the e-mail address (which is part of the "Subject" section of a certificate) is mandatory.
See http://en.wikipedia.org/wiki/X.509 for a bit more info on digital certificates
As for your statement "That can't be the only legally valid personal information that can be used to create a digital sig?", I don't believe there is anything legally binding or valid about a self-signed certificate as anyone can create one with any information they want.
Regards
Steve -
Data transfer rate requirements for iMovie?
Hi,
I was considering picking up a mobile external hard drive. One that I'm interested in is
http://www.lacie.com/products/product.htm?pid=10691
I was wondering whether the transfer speeds would be adequate for use with iMovie projects.
Could anyone recommend this or any other mobile HD for around that price?
Thanks
Powerbook G4, 15", 1.67GHz, Mac OS X (10.4.7)
Well, the first one was USB2. I agree with Leo that I'd prefer to stick with Firewire, but USB2 would probably be adequate. (USB1 would be completely unacceptable for this.)
(Oh, and raw DV is 3.6MB per second -- just to get more specific. Any modern Mac should have no trouble doing this. My dual-867 could handle it easily from the factory, and now I have much bigger and faster drives anyway. In the old days, grabbing analog video at 320x240 meant around 5MB/sec, and my old Power Mac 9500 with a SCSI2 drive could almost sustain that.)
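The 3.6MB/sec figure checks out arithmetically, assuming the commonly quoted ~28.8 Mbit/s total DV stream rate (video plus audio and subcode; the exact rate is an assumption here):

```python
# Rough arithmetic behind the ~3.6 MB/s raw DV figure quoted above.
full_stream_bits_per_sec = 28.8e6            # approx. total DV stream rate
bytes_per_sec = full_stream_bits_per_sec / 8  # bits -> bytes
print(bytes_per_sec / 1e6)  # -> 3.6 (MB/s)
```

Any modern drive interface comfortably exceeds this sustained rate, which is why USB2 is adequate and Firewire 400 has plenty of headroom.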
Do you have a tower case with a spare hard drive bay? If so, unless you really need the portability, you could just slap another hard drive in the case.
And LaCie has a good reputation, but wow, those are expensive. You could put together a 120GB drive and an external case for just under $100. Or for $250-270 (still cheaper than the second one, and not much more than the first), you could put a 500GB drive in an external case. (And the external cases I'm talking about have both Firewire 400 and USB2.) -
Why does Merge Join need its sources to be sorted
Hi,
Can anyone explain why Merge Join needs its sources to be sorted ?
I saw an explanation in another thread that this is the way to avoid loading all the data into memory, but isn't the data that is about to be merged already all in memory, because the SORT component waits for all the rows to arrive?
Why are there no options for Full Cache, Partial Cache and No Cache like in the Lookup component?
MERGE Join is one of the join algorithms in SQL Server; it needs sorted inputs and is efficient for large sets of records.
You may refer to the below link (Craig Freedman) to understand the MERGE JOIN algorithm in detail.
http://blogs.msdn.com/b/craigfr/archive/2006/08/03/merge-join.aspx
http://technet.microsoft.com/en-us/library/ms137653.aspx
Nope.
This is the SSIS forum, and I guess the poster is asking about the MERGE JOIN transformation in SSIS.
Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs -
Using SSIS 2012 - merge join component to transfer data to destination provided it does not exist
HI Folks,
I have a table - parts_amer - and this table exists on both the source and destination servers.
CREATE TABLE [dbo].[Parts_AMER](
[ServiceTag] [varchar](30) NOT NULL,
[ComponentID] [decimal](18, 0) NOT NULL,
[PartNumber] [varchar](20) NULL,
[Description] [varchar](400) NULL,
[Qty] [decimal](8, 0) NOT NULL,
[SrcCommodityCod] [varchar](40) NULL,
[PartShortDesc] [varchar](100) NULL,
[SKU] [varchar](30) NULL,
[SourceInsertUpdateDate] [datetime2](7) NOT NULL,
CONSTRAINT [PK_Parts_AMER] PRIMARY KEY CLUSTERED
(
[ServiceTag] ASC,
[ComponentID] ASC
)
)
I need to execute the following query using SSIS components, so that only the data which does not exist at the destination is transferred:
select source.*
from parts_amer source left join parts_amer destination
on source.ServiceTag = destination.ServiceTag
and source.ComponentID=destination.ComponentID
where destination.ServiceTag is null and destination.ComponentID is null
Question - Can Merge component help with this?
Please help out.
Thanks.
Hi Rvn_venky2605,
The Merge Join Transformation is used to join two sorted datasets using a FULL, LEFT, or INNER join, hence, not suitable in your scenario. As James mentioned, you can use Lookup Transformation to redirect the not matched records to the destination table.
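The poster's LEFT JOIN ... WHERE NULL query is effectively an anti-join on the composite key (ServiceTag, ComponentID), which is the same logic the Lookup redirect implements. A minimal in-memory sketch of that logic (the sample rows are hypothetical):

```python
# Keep only source rows whose (ServiceTag, ComponentID) key does not
# already exist at the destination -- the anti-join from the query above.
source = [("TAG1", 1, "part-a"), ("TAG1", 2, "part-b"), ("TAG2", 1, "part-c")]
destination = [("TAG1", 1, "part-a")]

dest_keys = {(tag, comp) for tag, comp, *_ in destination}
to_transfer = [row for row in source if (row[0], row[1]) not in dest_keys]
print(to_transfer)  # the rows not yet present at the destination
```

The Lookup Transformation's "redirect non-matching rows" output does the equivalent filtering row by row against the destination table.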
Another option is to write a T-SQL script that makes use of the MERGE statement, and execute the script via an Execute SQL Task.
References:
http://oakdome.com/programming/SSIS_Lookup.php
http://www.mssqltips.com/sqlservertip/1511/lookup-and-cache-transforms-in-sql-server-integration-services/
Regards,
Mike Yin
TechNet Community Support -
Query Plan Question for Outer Join
I have the following join query:
select unique ep.ACT_UID,
ep.SE_REQ_TS,
sr.SERVICE_NM
from
eaisvcs.Service_requests sr,
eaisvcs.SERVICE_EPISODE ep
where
ep.ACT_UID = sr.ACT_UID;
I have an index on the act_uid in both tables, and the statistics are up to date. There are about 3.2 million rows in each table. About 10% of the act_uid's in the ep table do not have a match in the sr table; other than that, most ids match up between the tables.
Here is the query plan:
ID PARENT_ID OPERATION OPTIONS
0 SELECT STATEMENT
1 0 SORT UNIQUE
2 1 MERGE JOIN
3 2 SORT JOIN
4 3 TABLE ACCESS FULL
5 2 SORT JOIN
6 5 TABLE ACCESS FULL
I cannot figure out why a table scan is being performed every time on both tables. The Act_UID is the PK in the ep table. There is a composite index (act_uid, sev_seq_no) in the sr table. The act_uid is a varchar2 column in both tables. The sev_seq_no column is numeric.
Any ideas or insights about this would be greatly appreciated.
Thanks!
The plan can be good.
Oracle chooses a full table scan for this reason too:
- sr.SERVICE_NM is not part of the index, so a full table scan is less expensive. If you add this field to your composite index, Oracle will probably use the new index.
Bye, Aron -
Hi experts, please help solve the below. Thanks.
Q1: In OVB8, how do I change SyRoutine from 1 to 103, as the field is greyed out?
Q2: How can sales orders capture the relevant data after maintaining requirements for TOR? In other words, are any other configurations needed if this maintenance is applied to sales orders?
Background info: Trying to suppress MRP in case of credit block (MTO scenario).
Maybe I need to be a bit more specific on the requirement.
Strategy group = 40 with requirement type VSF (ind. req.) and KSV (Cust. req.).
My client works in 2 distribution channels:
- For the standard sales (DC 01) the confirmed quantities should reduce the independent requirement quantity, so KSV is the correct requirement type (it shows by default on the sales item).
- For aftersales (DC 04) the independent requirement should not be reduced by the SO quantity, so KSV is not wanted, but 011 instead.
In my opinion it should be possible to override the default KSV with 011 in case the distribution channel is 04. The SPRO activity below looks suitable:
An alternative could be to change program MV45AFZZ / user exit USEREXIT_SAVE_DOCUMENT_PREPARE.
Please advise. -
Why is LOWER function producing a cartesian merge join, when UPPER doesn't?
Hi there,
I have an odd scenario that I would like to understand correctly...
We have a query that is taking a long time to run on one of our databases. Further investigation of the explain plan showed that the query was in fact producing a Cartesian merge join, even though there is clearly join criteria specified. I know that the optimiser can and will do this if it is a more efficient way of producing the results; however, in this scenario it is producing the Cartesian merge on two unrelated tables and seemingly ignoring the join condition...
*** ORIGINAL QUERY ***
SELECT count(*)
FROM srs_sce sce,
srs_scj scj,
men_mre mre,
srs_mst mst,
cam_smo cam,
ins_spr spr,
men_mua mua,
temp_webct_users u
WHERE sce.sce_scjc = scj.scj_code
AND sce.sce_stuc = mre.mre_code
AND mst.mst_code = mre.mre_mstc
AND mre.mre_mrcc = 'STU'
AND mst.mst_code = mua.mua_mstc
AND cam.ayr_code = sce.sce_ayrc
AND cam.spr_code = scj.scj_sprc
AND spr.spr_code = scj.scj_sprc
-- Ignored Join Condition
AND LOWER(mua.mua_extu) = LOWER(u.login)
AND SUBSTR (sce.sce_ayrc, 1, 4) = '2008'
AND sce.sce_stac IN ('RCE', 'RLL', 'RPD', 'RIN', 'RSAS', 'RHL_R', 'RCO', 'RCI', 'RCA');
*** CARTESIAN EXPLAIN PLAN ***
SELECT STATEMENT CHOOSECost: 83
20 NESTED LOOPS Cost: 83 Bytes: 176 Cardinality: 1
18 NESTED LOOPS Cost: 82 Bytes: 148 Cardinality: 1
15 NESTED LOOPS Cost: 80 Bytes: 134 Cardinality: 1
13 NESTED LOOPS Cost: 79 Bytes: 123 Cardinality: 1
10 NESTED LOOPS Cost: 78 Bytes: 98 Cardinality: 1
7 NESTED LOOPS Cost: 77 Bytes: 74 Cardinality: 1
NOTE: The Cartesian product is performed on the men_mre & temp_webct_users tables not the men_mua mua & temp_webct_users tables specified in the join condition.
4 MERGE JOIN CARTESIAN Cost: 74 Bytes: 32 Cardinality: 1
1 TABLE ACCESS FULL EXETER.TEMP_WEBCT_USERS Cost: 3 Bytes: 6 Cardinality: 1
3 BUFFER SORT Cost: 71 Bytes: 1,340,508 Cardinality: 51,558
2 TABLE ACCESS FULL SIPR.MEN_MRE Cost: 71 Bytes: 1,340,508 Cardinality: 51,558
6 TABLE ACCESS BY INDEX ROWID SIPR.SRS_SCE Cost: 3 Bytes: 42 Cardinality: 1
5 INDEX RANGE SCAN SIPR.SRS_SCEI3 Cost: 2 Cardinality: 3
9 TABLE ACCESS BY INDEX ROWID SIPR.SRS_SCJ Cost: 1 Bytes: 24 Cardinality: 1
8 INDEX UNIQUE SCAN SIPR.SRS_SCJP1 Cardinality: 1
12 TABLE ACCESS BY INDEX ROWID SIPR.INS_SPR Cost: 1 Bytes: 25 Cardinality: 1
11 INDEX UNIQUE SCAN SIPR.INS_SPRP1 Cardinality: 1
14 INDEX UNIQUE SCAN SIPR.SRS_MSTP1 Cost: 1 Bytes: 11 Cardinality: 1
17 TABLE ACCESS BY INDEX ROWID SIPR.MEN_MUA Cost: 2 Bytes: 14 Cardinality: 1
16 INDEX RANGE SCAN SIPR.MEN_MUAI3 Cost: 2 Cardinality: 1
19 INDEX RANGE SCAN SIPR.CAM_SMOP1 Cost: 2 Bytes: 28 Cardinality: 1
After speaking with data experts I realised that one of the fields being LOWERed for the join condition generally always had uppercase values, so I tried modifying the query to use the UPPER function rather than the LOWER one originally used. In this scenario the query executed in seconds and the Cartesian merge had been eradicated, which by all accounts is a good result.
*** WORKING QUERY ***
SELECT count(*)
FROM srs_sce sce,
srs_scj scj,
men_mre mre,
srs_mst mst,
cam_smo cam,
ins_spr spr,
men_mua mua,
temp_webct_users u
WHERE sce.sce_scjc = scj.scj_code
AND sce.sce_stuc = mre.mre_code
AND mst.mst_code = mre.mre_mstc
AND mre.mre_mrcc = 'STU'
AND mst.mst_code = mua.mua_mstc
AND cam.ayr_code = sce.sce_ayrc
AND cam.spr_code = scj.scj_sprc
AND spr.spr_code = scj.scj_sprc
-- Working Join Condition
AND UPPER(mua.mua_extu) = UPPER(u.login)
AND SUBSTR (sce.sce_ayrc, 1, 4) = '2008'
AND sce.sce_stac IN ('RCE', 'RLL', 'RPD', 'RIN', 'RSAS', 'RHL_R', 'RCO', 'RCI', 'RCA');
*** WORKING EXPLAIN PLAN ***
SELECT STATEMENT CHOOSECost: 13
20 SORT AGGREGATE Bytes: 146 Cardinality: 1
19 NESTED LOOPS Cost: 13 Bytes: 146 Cardinality: 1
17 NESTED LOOPS Cost: 12 Bytes: 134 Cardinality: 1
15 NESTED LOOPS Cost: 11 Bytes: 115 Cardinality: 1
12 NESTED LOOPS Cost: 10 Bytes: 91 Cardinality: 1
9 NESTED LOOPS Cost: 7 Bytes: 57 Cardinality: 1
6 NESTED LOOPS Cost: 6 Bytes: 31 Cardinality: 1
4 NESTED LOOPS Cost: 5 Bytes: 20 Cardinality: 1
1 TABLE ACCESS FULL EXETER.TEMP_WEBCT_USERS Cost: 3 Bytes: 6 Cardinality: 1
3 TABLE ACCESS BY INDEX ROWID SIPR.MEN_MUA Cost: 2 Bytes: 42 Cardinality: 3
2 INDEX RANGE SCAN EXETER.TEST Cost: 1 Cardinality: 1
5 INDEX UNIQUE SCAN SIPR.SRS_MSTP1 Cost: 1 Bytes: 11 Cardinality: 1
8 TABLE ACCESS BY INDEX ROWID SIPR.MEN_MRE Cost: 2 Bytes: 26 Cardinality: 1
7 INDEX RANGE SCAN SIPR.MEN_MREI2 Cost: 2 Cardinality: 1
11 TABLE ACCESS BY INDEX ROWID SIPR.SRS_SCE Cost: 3 Bytes: 34 Cardinality: 1
10 INDEX RANGE SCAN SIPR.SRS_SCEI3 Cost: 2 Cardinality: 3
14 TABLE ACCESS BY INDEX ROWID SIPR.SRS_SCJ Cost: 1 Bytes: 24 Cardinality: 1
13 INDEX UNIQUE SCAN SIPR.SRS_SCJP1 Cardinality: 1
16 INDEX RANGE SCAN SIPR.CAM_SMOP1 Cost: 2 Bytes: 19 Cardinality: 1
18 INDEX UNIQUE SCAN SIPR.INS_SPRP1 Bytes: 12 Cardinality: 1
*** RESULT ***
COUNT(*)
83299
I am still struggling to understand why this worked, as to my knowledge the LOWER and UPPER functions are similar enough in behaviour; and regardless of that, why would one version cause the optimiser to effectively ignore a join condition?
If anyone can shed any light on this for me it would be very much appreciated.
Regards,
Kieron
Edited by: Kieron_Bird on Nov 19, 2008 6:09 AM
Edited by: Kieron_Bird on Nov 19, 2008 6:41 AM
My mistake on the predicate information; I was in a rush to run off to a meeting when I posted the entry...
*** UPPER Version of the Explain Plan ***
| Id | Operation | Name | Rows | Bytes | Cost | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | 146 | 736 | | | |
| 1 | SORT AGGREGATE | | 1 | 146 | | | | |
| 2 | SORT AGGREGATE | | 1 | 146 | | 86,10 | P->S | QC (RAND) |
|* 3 | HASH JOIN | | 241 | 35186 | 736 | 86,10 | PCWP | |
|* 4 | HASH JOIN | | 774 | 105K| 733 | 86,09 | P->P | HASH |
|* 5 | HASH JOIN | | 12608 | 1489K| 642 | 86,08 | P->P | BROADCAST |
| 6 | NESTED LOOPS | | 14657 | 1531K| 491 | 86,07 | P->P | HASH |
|* 7 | HASH JOIN | | 14657 | 1359K| 490 | 86,07 | PCWP | |
|* 8 | HASH JOIN | | 14371 | 996K| 418 | 86,06 | P->P | HASH |
|* 9 | TABLE ACCESS FULL | SRS_SCE | 3211 | 106K| 317 | 86,00 | S->P | BROADCAST |
|* 10 | HASH JOIN | | 52025 | 1879K| 101 | 86,06 | PCWP | |
|* 11 | TABLE ACCESS FULL | MEN_MRE | 51622 | 1310K| 71 | 86,01 | S->P | HASH |
| 12 | INDEX FAST FULL SCAN| SRS_MSTP1 | 383K| 4119K| 30 | 86,05 | P->P | HASH |
| 13 | TABLE ACCESS FULL | SRS_SCJ | 114K| 2672K| 72 | 86,02 | S->P | HASH |
|* 14 | INDEX UNIQUE SCAN | INS_SPRP1 | 1 | 12 | | 86,07 | PCWP | |
| 15 | TABLE ACCESS FULL | MEN_MUA | 312K| 4268K| 151 | 86,03 | S->P | HASH |
| 16 | INDEX FAST FULL SCAN | CAM_SMOP1 | 527K| 9796K| 91 | 86,09 | PCWP | |
| 17 | TABLE ACCESS FULL | TEMP_WEBCT_USERS | 33276 | 194K| 3 | 86,04 | S->P | HASH |
Predicate Information (identified by operation id):
3 - access(UPPER("MUA"."MUA_EXTU")=UPPER("U"."LOGIN"))
4 - access("CAM"."AYR_CODE"="SCE"."SCE_AYRC" AND "CAM"."SPR_CODE"="SCJ"."SCJ_SPRC")
5 - access("MST"."MST_CODE"="MUA"."MUA_MSTC")
7 - access("SCE"."SCE_SCJC"="SCJ"."SCJ_CODE")
8 - access("SCE"."SCE_STUC"="MRE"."MRE_CODE")
9 - filter(SUBSTR("SCE"."SCE_AYRC",1,4)='2008' AND ("SCE"."SCE_STAC"='RCA' OR "SCE"."SCE_STAC"='RCE' OR
"SCE"."SCE_STAC"='RCI' OR "SCE"."SCE_STAC"='RCO' OR "SCE"."SCE_STAC"='RHL_R' OR "SCE"."SCE_STAC"='RIN' OR
"SCE"."SCE_STAC"='RLL' OR "SCE"."SCE_STAC"='RPD' OR "SCE"."SCE_STAC"='RSAS'))
10 - access("MST"."MST_CODE"="MRE"."MRE_MSTC")
11 - filter("MRE"."MRE_MRCC"='STU')
14 - access("SPR"."SPR_CODE"="SCJ"."SCJ_SPRC")
Note: cpu costing is off
40 rows selected.
*** LOWER Version of the Explain Plan ***
| Id | Operation | Name | Rows | Bytes | Cost | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | 146 | 736 | | | |
| 1 | SORT AGGREGATE | | 1 | 146 | | | | |
| 2 | SORT AGGREGATE | | 1 | 146 | | 88,10 | P->S | QC (RAND) |
|* 3 | HASH JOIN | | 257K| 35M| 736 | 88,10 | PCWP | |
|* 4 | HASH JOIN | | 774 | 105K| 733 | 88,09 | P->P | HASH |
|* 5 | HASH JOIN | | 12608 | 1489K| 642 | 88,08 | P->P | BROADCAST |
| 6 | NESTED LOOPS | | 14657 | 1531K| 491 | 88,07 | P->P | HASH |
|* 7 | HASH JOIN | | 14657 | 1359K| 490 | 88,07 | PCWP | |
|* 8 | HASH JOIN | | 14371 | 996K| 418 | 88,06 | P->P | HASH |
|* 9 | TABLE ACCESS FULL | SRS_SCE | 3211 | 106K| 317 | 88,00 | S->P | BROADCAST |
|* 10 | HASH JOIN | | 52025 | 1879K| 101 | 88,06 | PCWP | |
|* 11 | TABLE ACCESS FULL | MEN_MRE | 51622 | 1310K| 71 | 88,01 | S->P | HASH |
| 12 | INDEX FAST FULL SCAN| SRS_MSTP1 | 383K| 4119K| 30 | 88,05 | P->P | HASH |
| 13 | TABLE ACCESS FULL | SRS_SCJ | 114K| 2672K| 72 | 88,02 | S->P | HASH |
|* 14 | INDEX UNIQUE SCAN | INS_SPRP1 | 1 | 12 | | 88,07 | PCWP | |
| 15 | TABLE ACCESS FULL | MEN_MUA | 312K| 4268K| 151 | 88,03 | S->P | HASH |
| 16 | INDEX FAST FULL SCAN | CAM_SMOP1 | 527K| 9796K| 91 | 88,09 | PCWP | |
| 17 | TABLE ACCESS FULL | TEMP_WEBCT_USERS | 33276 | 194K| 3 | 88,04 | S->P | HASH |
Predicate Information (identified by operation id):
3 - access(LOWER("MUA"."MUA_EXTU")=LOWER("U"."LOGIN"))
4 - access("CAM"."AYR_CODE"="SCE"."SCE_AYRC" AND "CAM"."SPR_CODE"="SCJ"."SCJ_SPRC")
5 - access("MST"."MST_CODE"="MUA"."MUA_MSTC")
7 - access("SCE"."SCE_SCJC"="SCJ"."SCJ_CODE")
8 - access("SCE"."SCE_STUC"="MRE"."MRE_CODE")
9 - filter(SUBSTR("SCE"."SCE_AYRC",1,4)='2008' AND ("SCE"."SCE_STAC"='RCA' OR "SCE"."SCE_STAC"='RCE' OR
"SCE"."SCE_STAC"='RCI' OR "SCE"."SCE_STAC"='RCO' OR "SCE"."SCE_STAC"='RHL_R' OR "SCE"."SCE_STAC"='RIN' OR
"SCE"."SCE_STAC"='RLL' OR "SCE"."SCE_STAC"='RPD' OR "SCE"."SCE_STAC"='RSAS'))
10 - access("MST"."MST_CODE"="MRE"."MRE_MSTC")
11 - filter("MRE"."MRE_MRCC"='STU')
14 - access("SPR"."SPR_CODE"="SCJ"."SCJ_SPRC")
Note: cpu costing is off
40 rows selected.
As you state, something has obviously changed, but nothing obvious has been changed.
We gather statistics via...
exec dbms_stats.gather_schema_stats(ownname => 'USERNAME', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, degree => 4, granularity => 'ALL', cascade => TRUE);
We run a script nightly which works out which indexes require a rebuild and rebuilds only those; it doesn't just rebuild all indexes.
It would be nice to be able to use the 10g statistics history, but on this instance we aren't yet at that version, hopefully we will be there soon though.
Hope this helps,
Kieron -
Hi, can anyone please help me understand why this simple query is doing a merge Cartesian join but not an equi join?
I have two tables with a column named id on both tables.
CREATE TABLE id_a (ID VARCHAR2(10), description VARCHAR2(100));
CREATE TABLE id_b (ID VARCHAR2(10), long_description VARCHAR2(100),short_description VARCHAR2(100));
/* Formatted on 2010/01/13 12:16 (Formatter Plus v4.8.8) */
SET DEFINE OFF;
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0001');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0002');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0003');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0004');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0005');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0006');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0007');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0008');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0009');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0010');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0011');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0012');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0013');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0014');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0015');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0016');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0017');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0018');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0019');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0020');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0021');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0022');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0023');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0024');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0025');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0026');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0027');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0028');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0029');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0030');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0031');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0032');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0033');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0034');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0035');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0036');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0037');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC0038');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC8000');
INSERT INTO id_a (ID, description) VALUES ('10ACCC', '10ACCC9998');
COMMIT ;
Insert into ID_B
(ID, LONG_DESCRIPTION, SHORT_DESCRIPTION)
Values
('10ACCC', NULL, NULL);
Insert into ID_B
(ID, LONG_DESCRIPTION, SHORT_DESCRIPTION)
Values
('20LNDM', NULL, NULL);
Insert into ID_B
(ID, LONG_DESCRIPTION, SHORT_DESCRIPTION)
Values
('10NQWR', NULL, NULL);
Insert into ID_B
(ID, LONG_DESCRIPTION, SHORT_DESCRIPTION)
Values
('40KKRA ', NULL, NULL);
Insert into ID_B
(ID, LONG_DESCRIPTION, SHORT_DESCRIPTION)
Values
('10VUIU', NULL, NULL);
Insert into ID_B
(ID, LONG_DESCRIPTION, SHORT_DESCRIPTION)
Values
('10NCMM', NULL, NULL);
Insert into ID_B
(ID, LONG_DESCRIPTION, SHORT_DESCRIPTION)
Values
('10EAQL', NULL, NULL);
Insert into ID_B
(ID, LONG_DESCRIPTION, SHORT_DESCRIPTION)
Values
('10BLGP', NULL, NULL);
Insert into ID_B
(ID, LONG_DESCRIPTION, SHORT_DESCRIPTION)
Values
('10LWGV', NULL, NULL);
commit;
SELECT a.description
FROM id_a a, id_b b
WHERE a.id = '10ACCC'
AND a.id = b.id;
These stats accurately reflect the data. If I use an EXISTS clause for each table I can avoid the merge cartesian join, but is that a better way to rewrite the query? Please suggest.
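For reference, the EXISTS rewrite mentioned above might look like this on the two demo tables (a sketch only; the real three-table query would need one EXISTS per lookup table):

```sql
-- Drive from id_a and probe id_b with EXISTS instead of joining,
-- so the optimizer cannot replace the join predicate with a
-- constant and fall back to a cartesian merge join.
SELECT a.description
FROM   id_a a
WHERE  a.id = '10ACCC'
AND    EXISTS (SELECT 1
               FROM   id_b b
               WHERE  b.id = a.id);
```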
One more thing: I read the article http://jonathanlewis.wordpress.com/2006/12/13/cartesian-merge-join/ and it says I need to set the parameter
_optimizer_transitivity_retain to true. But how do I do that?
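That is a hidden (underscore-prefixed) parameter. Hidden parameters are unsupported and normally should only be changed under guidance from Oracle Support, but if you want to try it for a single session, the usual form is (a sketch, assuming you have the privilege to run ALTER SESSION):

```sql
-- The name must be quoted because of the leading underscore
ALTER SESSION SET "_optimizer_transitivity_retain" = TRUE;
```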
SQL> show parameter user_dump_dest
NAME TYPE VALUE
user_dump_dest string /u01/app/oracle/admin/dnsprx/u
dump
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.1.0.5
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 16
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL>
SQL> select
2 sname
3 , pname
4 , pval1
5 , pval2
6 from
7 sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 02-21-2006 12:14
SYSSTATS_INFO DSTOP 02-21-2006 12:14
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 604.316547
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
Elapsed: 00:00:00.09
SQL>
SQL> --============================================================================
SQL> ----------------- put your sql statement here---------------------------------
SQL> explain plan for
2 SELECT DISTINCT b.sub_nasp_id
3 FROM r_ref_cust_status_unmnpip_nasp a,
4 ref_nasp_eref b,
5 ref_garm_naspid c
6 WHERE a.nasp_id = '10ACCC'
7 AND a.nasp_id = b.nasp_id
8 AND a.nasp_id = SUBSTR (c.ID, 1, 6)
9 AND NVL (INSTR ('1', NVL (c.sensitivity_level, '@!')), 1) != 0;
Explained.
Elapsed: 00:00:00.07
SQL>
SQL>
SQL> ---------------------- end of sql statement ---------------------------------
SQL> --============================================================================
SQL>
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 3987722552
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 6 | 192 | 3578 (2)| 00:00:43 |
| 1 | SORT UNIQUE | | 6 | 192 | 3578 (2)| 00:00:43 |
| 2 | MERGE JOIN CARTESIAN | | 3105 | 99360 | 3577 (2)| 00:00:43 |
| 3 | MERGE JOIN CARTESIAN | | 528 | 8448 | 1991 (4)| 00:00:24 |
|* 4 | TABLE ACCESS FULL | REF_GARM_NASPID | 528 | 4752 | 1990 (3)| 00:00:24 |
| 5 | BUFFER SORT | | 1 | 7 | 1 (100)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | UNMNPIP_NASP_INDX | 1 | 7 | 0 (0)| 00:00:01 |
| 7 | BUFFER SORT | | 6 | 96 | 3577 (2)| 00:00:43 |
| 8 | TABLE ACCESS BY INDEX ROWID| REF_NASP_EREF | 6 | 96 | 3 (0)| 00:00:01 |
|* 9 | INDEX RANGE SCAN | REF_NASP_EREF_INDX | 6 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - filter(SUBSTR("C"."ID",1,6)='10ACCC' AND
NVL(INSTR('1',NVL("C"."SENSITIVITY_LEVEL",'@!')),1)<>0)
6 - access("A"."NASP_ID"='10ACCC')
9 - access("B"."NASP_ID"='10ACCC')
24 rows selected.
Elapsed: 00:00:00.23
SQL> rollback;
Rollback complete.
Elapsed: 00:00:00.04
SQL>
SQL> -- rem Set the ARRAYSIZE according to your application
SQL> set autotrace traceonly arraysize 100
SQL>
SQL> alter session set tracefile_identifier = 'mytrace1';
Session altered.
Elapsed: 00:00:00.04
SQL> alter session set events '10046 trace name context forever, level 8';
Session altered.
Elapsed: 00:00:00.04
SQL>
SQL> --============================================================================
SQL> ----------------- put your sql statement here---------------------------------
SQL> explain plan for
2 SELECT DISTINCT b.sub_nasp_id
3 FROM r_ref_cust_status_unmnpip_nasp a,
4 ref_nasp_eref b,
5 ref_garm_naspid c
6 WHERE a.nasp_id = '10ACCC'
7 AND a.nasp_id = b.nasp_id
8 AND a.nasp_id = SUBSTR (c.ID, 1, 6)
9 AND NVL (INSTR ('1', NVL (c.sensitivity_level, '@!')), 1) != 0;
Explained.
Elapsed: 00:00:00.06
SQL>
SQL> ---------------------- end of sql statement ---------------------------------
SQL> --============================================================================
SQL>
SQL>
SQL> ----------------------------- Step 2 ------------------------------------------
SQL> --============================================================================
SQL>
SQL> -- if you're on 10g or later
SQL> -- get the row source statistics
SQL> -- if the SQL is sensitive
SQL> -- don't switch on the ECHO
SQL> -- set echo off
SQL>
SQL> set echo on
SQL>
SQL> ---set timing on trimspool on linesize 250 pagesize 999
SQL> set arraysize 100 termout off
SQL> -- alter system flush buffer_cache;
SQL> -- spool off
SQL>
SQL> -- put your statement here
SQL> -- use the GATHER_PLAN_STATISTICS hint
SQL> -- if you're not using STATISTICS_LEVEL = ALL
SQL>
SQL> SELECT /*+ gather_plan_statistics */ DISTINCT b.sub_nasp_id
2 FROM r_ref_cust_status_unmnpip_nasp a,
3 ref_nasp_eref b,
4 ref_garm_naspid c
5 WHERE a.nasp_id = '10ACCC'
6 AND a.nasp_id = b.nasp_id
7 AND a.nasp_id = SUBSTR (c.ID, 1, 6)
8 AND NVL (INSTR ('1', NVL (c.sensitivity_level, '@!')), 1) != 0;
40 rows selected.
Elapsed: 00:00:04.65
Execution Plan
Plan hash value: 3987722552
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 6 | 192 | 3578 (2)| 00:00:43 |
| 1 | SORT UNIQUE | | 6 | 192 | 3578 (2)| 00:00:43 |
| 2 | MERGE JOIN CARTESIAN | | 3105 | 99360 | 3577 (2)| 00:00:43 |
| 3 | MERGE JOIN CARTESIAN | | 528 | 8448 | 1991 (4)| 00:00:24 |
|* 4 | TABLE ACCESS FULL | REF_GARM_NASPID | 528 | 4752 | 1990 (3)| 00:00:24 |
| 5 | BUFFER SORT | | 1 | 7 | 1 (100)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | UNMNPIP_NASP_INDX | 1 | 7 | 0 (0)| 00:00:01 |
| 7 | BUFFER SORT | | 6 | 96 | 3577 (2)| 00:00:43 |
| 8 | TABLE ACCESS BY INDEX ROWID| REF_NASP_EREF | 6 | 96 | 3 (0)| 00:00:01 |
|* 9 | INDEX RANGE SCAN | REF_NASP_EREF_INDX | 6 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - filter(SUBSTR("C"."ID",1,6)='10ACCC' AND
NVL(INSTR('1',NVL("C"."SENSITIVITY_LEVEL",'@!')),1)<>0)
6 - access("A"."NASP_ID"='10ACCC')
9 - access("B"."NASP_ID"='10ACCC')
Statistics
1 recursive calls
0 db block gets
8836 consistent gets
5736 physical reads
0 redo size
855 bytes sent via SQL*Net to client
243 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
3 sorts (memory)
0 sorts (disk)
40 rows processed
SQL> set termout on
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
Elapsed: 00:00:00.26
Execution Plan
Plan hash value: 3602215112
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 8168 | 16336 | 24 (0)| 00:00:01 |
| 1 | COLLECTION ITERATOR PICKLER FETCH| DISPLAY_CURSOR | | | | |
Statistics
11 recursive calls
0 db block gets
108 consistent gets
0 physical reads
0 redo size
304 bytes sent via SQL*Net to client
243 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL>
SQL>
SQL>
SQL>
SQL>
SQL> disconnect
Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
SQL> spool off -
Outer-Join with requirement for sub-query
I am working on a view that needs to return 1 record per payment date within the primary payment table.
Part of the information that is to be returned in this row comes from a date-controlled table containing superannuation information. This data may or may not be present, requiring an outer-join. The final problem is that there may be multiple rows in this superannuation table requiring me to retrieve the record with the most recent start date, prior to the payment date. This is where I'm breaking down currently as I cannot outer-join to a sub-query.
I had an idea that I could create an inline view of the superannuation table with 1 row for each of the last 365 days (The reports that will be built off the view are always run within 1-2 months of the payment date), with the date and either the required information or blanks if it did not exist. This would avoid me requiring an outer-join as I could just join on the payment date to the date in the inline query and return whatever data was there.
I'm pretty sure I should be able to do this with analytics rather than creating a date table, by restricting the rows to 365 and then having a column that is effectively the previous column minus 1. Unfortunately I'm fairly new to analytics and find the Oracle documentation hard to decipher.
Can anyone help with this or perhaps suggest an alternate solution?
Please don't do that. That's the most painful way to generate rows: ALL_OBJECTS is a very nasty view, and Oracle reserves the right to merge the view into the rest of your query, making performance less than you had hoped for. How about this instead:
select ...
from (
       select trunc(sysdate - rownum + 1) super_date
       from   dual
       connect by level <= 365
     ) dates,
     your_table_here
where dates.super_date = your_column (+)
...
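And for the "most recent start date prior to the payment date" part, an analytic ROW_NUMBER() is the usual pattern. A sketch along these lines (the table and column names payment, super, pay_date, start_date, super_info are placeholders, not your real schema):

```sql
-- Rank the superannuation rows per payment date, newest first,
-- and keep only the top-ranked row. The LEFT JOIN preserves
-- payments that have no superannuation data at all.
SELECT pay_date, start_date, super_info
FROM  (SELECT p.pay_date,
              s.start_date,
              s.super_info,
              ROW_NUMBER() OVER (
                PARTITION BY p.pay_date
                ORDER BY s.start_date DESC
              ) AS rn
       FROM payment p
       LEFT JOIN super s
         ON s.start_date <= p.pay_date)
WHERE rn = 1;
```

This avoids both the generated date table and the outer-join-to-subquery restriction, since the "latest row" logic moves into the window function.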
Maybe you are looking for
-
Hi, I am trying to create the DB link across two different platforms where 32 bit oracle is installed in 32 bit windows platform and 64 bit mysql is installed in 64 bit HP_UX platform. After the initial configuration, I finally execute the following
-
Wiped macbook pro will not boot from install DVD
I recently purchased an older MacBook Pro that came with the OS system wiped but it did have the original Install DVDs. The computer will not boot from the DVD. I have booted holding down the C key and nothing happens. I have booted holding down t
-
How to default save as JPEG?
My current workflow has been placing rectangular photos onto white square backgrounds and then saving as JPEGs. Each time I have to save the resulting image, I have to manually opt for JPEGs. Is there any way of making this a default? Thanks if anyon
-
Pressed "Never for This Website" by Accident: How to get my log/pass back?
Now password for my GMail account won't autofill. I've tried going through Keychain Access to set it straight, but no joy. iMac, OS X 10.4.11 Any Ideas? TIA
-
HT2500 Can I import my contacts from Gmail into Apple mail (Snow Leopard)?
Can I import my contacts from Gmail into Apple mail (Snow Leopard)?