Optimizer questions
Hello, I am pretty new to Oracle, and I am seeing something strange in the optimizer's explain plan.
I have a fact table and several dimension tables, and I want to get the MAX value of a field from the fact table. It is extremely slow: if I don't use the MAX function, the query returns within 3 sec; if I do, it takes 5-10 minutes.
Each dimension table with its condition returns its result within 1 sec, and searching the dimension table keys in the fact table returns within a couple of seconds (the fact table has millions of records, maybe more than that).
And the explain plan looks strange.
If I join 2 dimension tables, the MAX function seems to run after all the joins.
The explain plan looks like this, as I expected:
Hash (GROUP BY)
  NL ...
    Index Unique Scan (Dim table 1)
    Index Range Scan (Dim table 2)
    Partition Range
      Partition Range
But if I join 3 dimension tables, the MAX function seems to run before all the joins.
The explain plan looks like this:
Temp table transformation
  Load as select
    View
      Filter predicates (Dim table 1)
      Hash join
        Access predicates (ROWID = ROWID)
        Index range scan (Dim table 1 again?)
  Load as select
    Table access by index rowid
      Index range scan (Dim table 2)
  Hash (group by) <== this runs here, after joining only 1 dim table with the fact table
    Nested loops
      Hash join
        Access predicates
        Inlist iterator
          Table access by global index rowid
            Index unique scan (Dim table 3)
        View
          Nested loops
            Partition range (subquery)
              Partition list (subquery)
                Bitmap conversion (to rowids)
                  Access predicates (FACT_TABLE.KEY_OF_DIMTABLE3 = C0) <=== what is C0?
                  BITMAP MERGE
                    BITMAP KEY ITERATION
                      BUFFER (SORT)
                        TABLE ACCESS (FULL) <==== why full?
                        BITMAP INDEX (RANGE SCAN)
                          ACCESS PREDICATES (FACT_TABLE.KEY_OF_DIMTABLE2 = C0) <== why use the key of dim table 2 here, when the condition on dim table 2 was already evaluated?
Adding just one more table produces not only a much more complex explain plan but also much slower performance. Does it really run the MAX function before joining all the tables, so that it runs on so many rows?
If so, what can I do to make it run after joining all the tables, so that it operates on the reduced number of records?
Please help me! Thank you.
The decisions of the optimizer are cost-based: a lot of (more or less) complex calculations are used to determine the plan that seems to the optimizer to be the most appropriate choice. Depending on the statistics (on which the calculation is based) and some limitations of the calculation model, this decision is good in many cases, not bad in a lot of cases - and not so good in some other cases. So an additional join and the use of a function can lead to a plan that needs far more resources and multiplies the execution time. (Having worked with some other RDBMSs, I think it's safe to say that the Oracle optimizer is quite good at its job.)
That said, I would suggest that you add the complete execution plans, since your representation shows the operations but not the costing. If you can also add rowsource statistics (with the gather_plan_statistics hint or a fitting statistics_level setting), that would make the analysis simpler.
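A minimal recipe for gathering those rowsource statistics (standard DBMS_XPLAN tooling; the MAX query itself is only sketched here, your real statement goes in its place):

```sql
-- run the statement once with runtime statistics collection:
SELECT /*+ GATHER_PLAN_STATISTICS */ MAX(f.some_measure) AS max_val
FROM   fact_table f
       JOIN dim1 d1 ON d1.dim1_key = f.dim1_key
WHERE  d1.attr = :b1;

-- then fetch the actual plan of the last statement in this session,
-- including estimated (E-Rows) vs. actual (A-Rows) row counts:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

Comparing E-Rows with A-Rows usually shows exactly where the optimizer's estimates go wrong.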
You could also take a look at https://blogs.oracle.com/optimizer/entry/star_transformation, since your query seems to be based on a classical star schema, and Oracle provides some options to improve access in such cases.
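Since the 3-dimension plan contains a temp table transformation, the optimizer is most likely applying a star transformation. One quick experiment is to disable it and compare plans (the hint and parameter are standard; the query shape below is only an illustration, not your actual statement):

```sql
-- per statement:
SELECT /*+ NO_STAR_TRANSFORMATION */ MAX(f.some_measure)
FROM   fact_table f
       JOIN dim1 d1 ON d1.dim1_key = f.dim1_key
       JOIN dim2 d2 ON d2.dim2_key = f.dim2_key
       JOIN dim3 d3 ON d3.dim3_key = f.dim3_key
WHERE  d1.attr = :b1 AND d2.attr = :b2 AND d3.attr = :b3;

-- or for the whole session:
ALTER SESSION SET star_transformation_enabled = FALSE;
```

If the plan then reverts to the fast 2-dimension shape, the star transformation (and its temp-table costing) is the part worth investigating.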
Similar Messages
-
Examples of Output Files in Adobe CS5.5 - Todd Kopriva Disk Optimization Question
Hello Todd Kopriva
Your video on Disk Drive Optimization is excellent. I went out to purchase two RAID-5 (8 TB) units. I had a question regarding Read Files going to one disk and Output Files going to another: can you give some examples of what type of Output Files you are talking about? I am fairly new to Adobe CS5.5 Production Premium; I am presently using Premiere and Encore as well as After Effects. I just want to make sure what files will be going to my Output RAID drive. I am thinking that one of them may be the file created when you export out of Premiere via Media Encoder.
Thanks for the help and keep up the Tutorials.
Thanks,
Steve
PS: I missed your free FAQ video and cannot seem to find it. If you or anyone else knows where I can view it, please let me know.
> By the way, is there a link to the FAQ video that you made?
There are links to a couple of video series that I made at the top of the FAQ forum:
http://forums.adobe.com/community/premiere/faq_list?view=overview -
RMAN backup optimization question
Hi,
I have come across an RMAN-related question and would like to understand the answer:
The RMAN configuration has backup optimization set to ON. Which of the following commands will this setting affect?
a) Backup Archivelog with ALL or LIKE options, b) Backup Backupset ALL, c) Backup Tablespace, d) Backup Database
I believe all of the above are affected by backup optimization; however, only a, b and d seem to be correct, not c. I wonder why tablespace backup is not affected - it does not say read-only, etc.
Any ideas? Thanks.

Thanks for the quick response!
Meanwhile I did a few tests (below), and yes, BACKUP TABLESPACE obviously does not use backup optimization. But why? Doesn't the same logic as BACKUP DATABASE apply? Isn't it using the same retention policy?
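As far as I recall, the Backup and Recovery Reference lists exactly BACKUP DATABASE, BACKUP ARCHIVELOG with ALL or LIKE, and BACKUP BACKUPSET ALL as the commands that honor the setting, which matches a, b and d. A sketch of how to inspect and exercise it (standard RMAN commands):

```
RMAN> SHOW BACKUP OPTIMIZATION;           # displays the current ON/OFF setting
RMAN> CONFIGURE BACKUP OPTIMIZATION OFF;  # disable it for this target database
RMAN> BACKUP DATABASE FORCE;              # FORCE overrides optimization and
                                          # backs up files it would otherwise skip
```

BACKUP TABLESPACE simply isn't on the documented list, which is why the datafile is backed up again each time in the tests below.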
RMAN> backup database;
skipping datafile 5; already backed up 2 time(s)
RMAN> backup archivelog ALL;
skipping archived log file /u02/fra/ORCL/ar...; already backed up 1 time(s)
RMAN> backup tablespace example;
input datafile file number=00005
piece handle=/u02/fra/ORCL/backupset....
RMAN> backup tablespace example;
input datafile file number=00005
piece handle=/u02/fra/ORCL/backupset.... -
I'm importing videos shot with my digital camera, which takes great HD movies. I notice that when I import them into iMovie 11, some of them come out more grainy than the original. I'm selecting the Optimize video checkbox when importing; is this what's causing the problem? I don't quite understand what the Optimize function is all about.
I see some iMovies that others have made that are just crystal clear, but a few of my clips, especially the ones that are a bit darker, come out at a lower quality once they've been imported. I'm not sure of the proper way to do this. Any ideas/suggestions? Should I always be optimizing my video imports, and if so, how do I get them to be as crystal clear as the originals?
The movies my camera takes are 30 fps Motion JPEG. They are 1280 x 720 HD, and the QT info panel says:
Photo - JPEG, 1280 x 720, Millions
Linear PCM, 16 bit big-endian signed integer, 2 channels, 16000 Hz

Optimize usually indicates that iMovie is transcoding the videos into the Apple Intermediate Codec format so that iMovie can easily preview and edit the video accurately, down to the 1/30th-of-a-second video frame. If you don't want iMovie to do that, you can import not optimized and at Full Size, just to force iMovie to touch the video a little less compared to Large Size and Optimized. You might try a test with one batch of Events coming in (don't erase the clips off the camera), then re-import with the Full Size, unoptimized settings and see if you can tell a difference between the two. Full Size, unoptimized is going to take up a lot of hard drive space, so be aware there's no free lunch when choosing those settings (Full Size = large file sizes).
-
Simple string optimization question
ABAPpers,
Here is the problem description:
In a SELECT query loop, I need to build a string by concatenating all the column values for the current row. Each column is represented as a length-value pair. For example, let's say the current values are:
Column1 (type string) : 'abc '
Column2 (type integer) : 7792
Column3 (type string) : 'def '
The resulting string must be of the form:
0003abc000477920003def...
The length is always represented by a four character value, followed by the actual value.
Note that the input columns may be of mixed types - numeric, string, date-time, etc.
Given that data is of mixed type and that the length of each string-type value is not known in advance, can someone suggest a good algorithm to build such a result? Or, is there any built-in function that already does something similar?
Thank you in advance for your help.
Pradeep

Hi,
At the bottom of this message I have posted a program that I recently wrote. Essentially, as I know the size of each string-type column, I use a specific form to build the string; for any non-string-type column, I assume that the length can never exceed 25 characters. Once I have filled 255 characters in the output buffer, I have to write the output buffer to the screen.
The reason I have so many forms, one per string length, is that I wanted to optimize the CONCATENATE operation. Had I used just one form with the large buffer, the CONCATENATE statement would have taken additional time to fill the remaining part of the buffer with spaces.
As my experience in ABAP programming is limited to just a couple of days, I'd appreciate it if someone could suggest a better way to deal with my problem.
Thank you in advance for all your help. Sorry about posting such a large piece of code - I just wanted to show you the complexity that I have created for myself :-(.
Pradeep
REPORT ZMYREPORTTEST no standard page heading line-size 255.
TABLES: GLPCA.
data: GLPCAGL_SIRID like GLPCA-GL_SIRID.
data: GLPCARLDNR like GLPCA-RLDNR.
data: GLPCARRCTY like GLPCA-RRCTY.
data: GLPCARVERS like GLPCA-RVERS.
data: GLPCARYEAR like GLPCA-RYEAR.
data: GLPCAPOPER like GLPCA-POPER.
data: GLPCARBUKRS like GLPCA-RBUKRS.
data: GLPCARPRCTR like GLPCA-RPRCTR.
data: GLPCAKOKRS like GLPCA-KOKRS.
data: GLPCARACCT like GLPCA-RACCT.
data: GLPCAKSL like GLPCA-KSL.
data: GLPCACPUDT like GLPCA-CPUDT.
data: GLPCACPUTM like GLPCA-CPUTM.
data: GLPCAUSNAM like GLPCA-USNAM.
data: GLPCABUDAT like GLPCA-BUDAT.
data: GLPCAREFDOCNR like GLPCA-REFDOCNR.
data: GLPCAAWORG like GLPCA-AWORG.
data: GLPCAKOSTL like GLPCA-KOSTL.
data: GLPCAMATNR like GLPCA-MATNR.
data: GLPCALIFNR like GLPCA-LIFNR.
data: GLPCASGTXT like GLPCA-SGTXT.
data: GLPCAAUFNR like GLPCA-AUFNR.
data: data(255).
data: currentPos type i.
data: lineLen type i value 255.
data len(4).
data startIndex type i.
data charsToBeWritten type i.
data remainingRow type i.
SELECT GL_SIRID
RLDNR
RRCTY
RVERS
RYEAR
POPER
RBUKRS
RPRCTR
KOKRS
RACCT
KSL
CPUDT
CPUTM
USNAM
BUDAT
REFDOCNR
AWORG
KOSTL
MATNR
LIFNR
SGTXT
AUFNR into (GLPCAGL_SIRID,
GLPCARLDNR,
GLPCARRCTY,
GLPCARVERS,
GLPCARYEAR,
GLPCAPOPER,
GLPCARBUKRS,
GLPCARPRCTR,
GLPCAKOKRS,
GLPCARACCT,
GLPCAKSL,
GLPCACPUDT,
GLPCACPUTM,
GLPCAUSNAM,
GLPCABUDAT,
GLPCAREFDOCNR,
GLPCAAWORG,
GLPCAKOSTL,
GLPCAMATNR,
GLPCALIFNR,
GLPCASGTXT,
GLPCAAUFNR) FROM GLPCA.
perform BuildFullColumnString18 using GLPCAGL_SIRID.
perform BuildFullColumnString2 using GLPCARLDNR.
perform BuildFullColumnString1 using GLPCARRCTY.
perform BuildFullColumnString3 using GLPCARVERS.
perform BuildFullColumnString4 using GLPCARYEAR.
perform BuildFullColumnString3 using GLPCAPOPER.
perform BuildFullColumnString4 using GLPCARBUKRS.
perform BuildFullColumnString10 using GLPCARPRCTR.
perform BuildFullColumnString4 using GLPCAKOKRS.
perform BuildFullColumnString10 using GLPCARACCT.
perform BuildFullColumnNonString using GLPCAKSL.
perform BuildFullColumnNonString using GLPCACPUDT.
perform BuildFullColumnNonString using GLPCACPUTM.
perform BuildFullColumnString12 using GLPCAUSNAM.
perform BuildFullColumnNonString using GLPCABUDAT.
perform BuildFullColumnString10 using GLPCAREFDOCNR.
perform BuildFullColumnString10 using GLPCAAWORG.
perform BuildFullColumnString10 using GLPCAKOSTL.
perform BuildFullColumnString18 using GLPCAMATNR.
perform BuildFullColumnString10 using GLPCALIFNR.
perform BuildFullColumnString50 using GLPCASGTXT.
perform BuildFullColumnString12 using GLPCAAUFNR.
ENDSELECT.
if currentPos > 0.
move '' to data+currentPos.
write: / data.
else.
write: / '+'.
endif.
data fullColumn25(29).
data fullColumn1(5).
Form BuildFullColumnString1 using value(currentCol).
len = STRLEN( currentCol ).
concatenate len currentCol into fullColumn1.
data startIndex type i.
data charsToBeWritten type i.
charsToBeWritten = STRLEN( fullColumn1 ).
data remainingRow type i.
do.
remainingRow = lineLen - currentPos.
if remainingRow > charsToBeWritten.
move fullColumn1+startIndex(charsToBeWritten) to
data+currentPos(charsToBeWritten).
currentPos = currentPos + charsToBeWritten.
exit.
endif.
if remainingRow EQ charsToBeWritten.
move fullColumn1+startIndex(charsToBeWritten) to
data+currentPos(charsToBeWritten).
write: / data.
currentPos = 0.
exit.
endif.
move fullColumn1+startIndex(remainingRow) to
data+currentPos(remainingRow).
write: / data.
startIndex = startIndex + remainingRow.
charsToBeWritten = charsToBeWritten - remainingRow.
currentPos = 0.
enddo.
EndForm.
data fullColumn2(6).
Form BuildFullColumnString2 using value(currentCol).
len = STRLEN( currentCol ).
concatenate len currentCol into fullColumn2.
data startIndex type i.
data charsToBeWritten type i.
charsToBeWritten = STRLEN( fullColumn2 ).
data remainingRow type i.
do.
remainingRow = lineLen - currentPos.
if remainingRow > charsToBeWritten.
move fullColumn2+startIndex(charsToBeWritten) to
data+currentPos(charsToBeWritten).
currentPos = currentPos + charsToBeWritten.
exit.
endif.
if remainingRow EQ charsToBeWritten.
move fullColumn2+startIndex(charsToBeWritten) to
data+currentPos(charsToBeWritten).
write: / data.
currentPos = 0.
exit.
endif.
move fullColumn2+startIndex(remainingRow) to
data+currentPos(remainingRow).
write: / data.
startIndex = startIndex + remainingRow.
charsToBeWritten = charsToBeWritten - remainingRow.
currentPos = 0.
enddo.
EndForm.
data fullColumn3(7).
Form BuildFullColumnString3 using value(currentCol).
* same logic as BuildFullColumnString1, but using fullColumn3
EndForm.
data fullColumn4(8).
Form BuildFullColumnString4 using value(currentCol).
* same logic as BuildFullColumnString1, but using fullColumn4
EndForm.
data fullColumn50(54).
Form BuildFullColumnString50 using value(currentCol).
* same logic as BuildFullColumnString1, but using fullColumn50
EndForm.
Form BuildFullColumnNonString using value(currentCol).
move currentCol to fullColumn25.
condense fullColumn25.
len = STRLEN( fullColumn25 ).
concatenate len fullColumn25 into fullColumn25.
data startIndex type i.
data charsToBeWritten type i.
charsToBeWritten = STRLEN( fullColumn25 ).
data remainingRow type i.
do.
remainingRow = lineLen - currentPos.
if remainingRow > charsToBeWritten.
move fullColumn25+startIndex(charsToBeWritten) to
data+currentPos(charsToBeWritten).
currentPos = currentPos + charsToBeWritten.
exit.
endif.
if remainingRow EQ charsToBeWritten.
move fullColumn25+startIndex(charsToBeWritten) to
data+currentPos(charsToBeWritten).
write: / data.
currentPos = 0.
exit.
endif.
move fullColumn25+startIndex(remainingRow) to
data+currentPos(remainingRow).
write: / data.
startIndex = startIndex + remainingRow.
charsToBeWritten = charsToBeWritten - remainingRow.
currentPos = 0.
enddo.
EndForm. -
Font embed and PDF optimizer question
Using Quark 7, Acrobat 8 Professional, non-Intel Mac OS X 10.4.11. According to the Quark dialog, the fonts are embedded. When I look in PDF Optimizer, the fonts don't show up. Does this mean they were never embedded? Also, when I save a file as PDF Optimized I get a zero-KB figure in the Finder panel. Is this correct?
Thanks

Although you may have embedding turned on, that doesn't necessarily mean the fonts are embedded. The only fonts allowed to be embedded are those from Adobe, plus those from other font houses that have agreements with Adobe to allow it. If you're trying to embed Microsoft fonts, forget it: they share nothing with anyone. The chances of it happening are, as the old saying goes, *Slim* and *None*, *and Slim just left town*.
Also, if you receive a PDF from someone else, they may have run it through the Optimizer to reduce the size and removed all instances of embedded fonts.
If you use fonts common to Windows and Apple, say for example Arial, there should be only subtle differences in the look of the PDF if you have Use System Fonts turned on. -
Hi All
Can anyone please point me to a document, or let me know, what the need for SAP SCM 5.1 - SCM Optimizer is? I am trying to understand which SCM components use the SCM Optimizer.
Thanks, everyone.

Hi,
https://websmp208.sap-ag.de/scm
http://service.sap.com/notes ---> Note 1019666 - Installing SCM Optimizer Version 5.1
Note 1165080 - SCM Optimizer 5.1 Support Package 06
You can find the SAR archive of the optimizer in the SAP Software Distribution Center (http://service.sap.com/patches) under the menu path "Entry by Application Group -> SAP application components
-> SAP SCM -> SAP SCM 2007 -> Entry by Component -> SCM Optimizer -> SAP SCM Optimizer 5.1 -> <platform>"
-> SAP TRANSPORTATION MANAGEMENT -> SAP TM 6.0 -> Entry by Component -> SAP Optimizer -> SAP SCM Optimizer 5.1 -> <platform>"
Regards,
Srini Nookala -
Say a table (tab1) has two columns, col1 and col2.
Now, when I look at the execution plans of the two following queries, they are the same - but is there any difference between these two queries?
select * from tab1
select col1, col2 from tab1

Certainly there are ways to make one query faster than the other. However, this would require some extreme circumstances, and even your scenario is not necessarily one of them.
SQL> create table tab1 (col1 number(1), col2 number(5));
Table created.
SQL> create index ixtest on tab1(col1, col2);
Index created.
SQL> insert into tab1 (col1, col2)
2 (select mod(rn, 5), mod(rn,10)
3 from (select level rn from dual connect by level < 10000));
9999 rows created.
SQL> explain plan for select *from tab1
2 ;
Explained.
SQL> select * from table(dbms_xplan.display());
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | | | |
| 1 | TABLE ACCESS FULL | TAB1 | | | |
Note: rule based optimization, PLAN_TABLE' is old version
9 rows selected.
SQL> explain plan for select col1, col2 from tab1;
Explained.
SQL> select * from table(dbms_xplan.display());
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | | | |
| 1 | TABLE ACCESS FULL | TAB1 | | | |
Note: rule based optimization, PLAN_TABLE' is old version
9 rows selected.
SQL> -
A whole load of optimization questions
Hi
I am making my new website in iWeb, and I am a composer with a fairly large portfolio.
I need to be able to show potential clients lots of bits of different genres to get across the idea of what I am capable of. The trouble is that I am not really up on how to get things loading fast enough to be usable. So the things I need to know are these:
1. What bit rate of MP3 is high enough to be good enough for showing off audio (bearing in mind that I am a composer and quality is important) but small enough to load quickly in most browsers?
2. How long an audio clip is sensible for this purpose?
3. Am I right in thinking that iWeb converts MP3 into .mov files? If so, does this create a problem in terms of loading speed?
4. Does iwebenhancer help speed up loading when it simplifies the folder structure?
5. What size (in terms of megabytes, not pixels!) of images is acceptable to accompany audio?
6. Is iWeb the right tool for this job, or am I trying to get it to do something it is not intended for?
I hope someone can help me, because I am spending hour upon hour trying to get this right, and the site is ridiculously slow at the moment. It took 10 seconds to load a page on a machine with a 10 Mb broadband connection!
Thanks in advance
David Tobin
Hello
I am making a change to an existing custom report. I have to pull all the orders except CANCELLED orders for the parameters passed by the user.
I made the change as FLOW_STATUS_CODE <> 'CANCELLED' in the RDF. The not-equal is causing performance issues, and the report is taking a lot of time to complete.
Can anyone suggest what would be best to use in place of the not-equal?
Thanks

Is there an index on column FLOW_STATUS_CODE?
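If the set of live status codes is small and known, one option is a positive IN-list, which can use an index on the status column, whereas a <> predicate generally cannot. A sketch (the table name and status values here are purely hypothetical, not from the original report):

```sql
SELECT order_number, flow_status_code
FROM   order_headers                                         -- hypothetical table name
WHERE  flow_status_code IN ('ENTERED', 'BOOKED', 'CLOSED');  -- hypothetical values
```

Whether this helps depends on the data distribution: if almost every row is non-cancelled, a full scan may be the right plan anyway.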
Run your query in sqlplus through explain plan, and check the execution plan if a query has performance issues.
set pages 999
set lines 400
set trimspool on
spool explain.lst
explain plan for
<your statement>;
select * from table(dbms_xplan.display);
spool off

For optimization questions you'd better go to the SQL forum. -
One-to-many self-join removing records with the same ranking or with a substitute
Sorry for my bad choice of discussion title; feel free to suggest a more pertinent one.
I've rewritten the post for clarity, following the FAQ.
DB Version
I'm using Oracle Enterprise 10g 10.2.0.1.0 64-bit.
Tables involved
CREATE TABLE wrhwr (
wr_id INTEGER PRIMARY KEY,
eq_id VARCHAR2(50) NULL,
date_completed DATE NULL,
status VARCHAR2(20) NOT NULL,
pmp_id VARCHAR2(20) NOT NULL,
description VARCHAR2(20) NULL);
Sample data
INSERT into wrhwr VALUES (1,'MI-EXT-0001',date'2013-07-16','Com','VER-EXC','Revisione');
INSERT into wrhwr VALUES (2,'MI-EXT-0001',date'2013-07-01','Com','VER-EXC','Verifica');
INSERT into wrhwr VALUES (3,'MI-EXT-0001',date'2013-06-15','Com','VER-EXC','Revisione');
INSERT into wrhwr VALUES (4,'MI-EXT-0001',date'2013-06-25','Com','VER-EXC','Verifica');
INSERT into wrhwr VALUES (5,'MI-EXT-0001',date'2013-04-14','Com','VER-EXC','Revisione');
INSERT into wrhwr VALUES (6,'MI-EXT-0001',date'2013-04-30','Com','VER-EXC','Verifica');
INSERT into wrhwr VALUES (7,'MI-EXT-0001',date'2013-03-14','Com','VER-EXC','Collaudo');
Query used
SELECT *
FROM (SELECT eq_id,
date_completed,
RANK ()
OVER (PARTITION BY eq_id
ORDER BY date_completed DESC NULLS LAST)
rn
FROM wrhwr
WHERE status != 'S'
AND pmp_id LIKE 'VER-EX%'
AND description LIKE '%Verifica%') table1,
(SELECT eq_id,
date_completed,
RANK ()
OVER (PARTITION BY eq_id
ORDER BY date_completed DESC NULLS LAST)
rn
FROM wrhwr
WHERE status != 'S'
AND pmp_id LIKE 'VER-EX%'
AND description LIKE '%Revisione%') table2,
(SELECT eq_id,
date_completed,
RANK ()
OVER (PARTITION BY eq_id
ORDER BY date_completed DESC NULLS LAST)
rn
FROM wrhwr
WHERE status != 'S'
AND pmp_id LIKE 'VER-EX%'
AND description LIKE '%Collaudo%') table3
WHERE table1.eq_id = table3.eq_id
AND table2.eq_id = table3.eq_id
AND table1.eq_id = table2.eq_id
The purpose of the above query is to self-join the wrhwr table 3 times in order to have, for every row:
eq_id;
the completion date of a work request of type Verifica for this eq_id (table1 alias);
the completion date of a WR of type Revisione (table2 alias) for this eq_id;
the completion date of a WR of type Collaudo (table3 alias) for this eq_id.
A distinct eq_id:
can have many work requests (wrhwr records) with different completion dates, or without a completion date (date_completed column NULL);
in a date range, can have all the types of wrhwr ('Verifica', 'Revisione', 'Collaudo') or only some of them (e.g. Verifica and Revisione but not Collaudo; Collaudo but not Verifica and Revisione; etc.);
substrings in description shouldn't repeat;
(eq_id, date_completed) isn't unique, but (eq_id, date_completed, description, pmp_id) should be unique.
Expected output
Using sample data above I expect this output:
eq_id,table1.date_completed,table2.date_completed,table3.date_completed
MI-EXT-001,2013-07-01,2013-07-16,2013-03-14 <--- for this eq_id table3 doesn't have 3 rows but only 1; I want to repeat the top-ranked value of table3 on every result row
MI-EXT-001,2013-07-01,2013-06-15,2013-03-14 <-- I don't want this row, because table1 and table2 both have 3 rows, so the match must be in rank terms: (1st,1st), (2nd,2nd), (3rd,3rd)
MI-EXT-001,2013-06-25,2013-06-15,2013-03-14 <-- 2nd rank of table1 joins 2nd rank of table2
MI-EXT-001,2013-04-30,2013-04-14,2013-03-14 <-- 3rd rank of table1, 3rd rank of table2, 1st rank of table3
In vector-style syntax, the expected tuple output must be:
ix = i-th ranking of tableX
(i1, i2, i3) if an i-th-ranking row exists in every table, else
(i1, b, b)
where b is the first available lower ranking of table2, or NULL if there isn't any row of lower ranking.
Any clues?
With the query above I'm unable to remove the "spurious" rows.
I'm thinking of a solution based on analytic functions like LAG() and LEAD(), or on ROLLUP(), CUBE(), or nested queries, but I would like a solution that is elegant, easy, fast and easy to maintain.
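For what it's worth, a sketch of the rank-matching idea using only the table and filters from the question: tag each row with its type, rank within (eq_id, type), pivot per rank with MAX(CASE ...), and carry the single Collaudo date down to the lower ranks with LAST_VALUE ... IGNORE NULLS (the column aliases are mine):

```sql
WITH typed AS (
  SELECT eq_id, date_completed,
         CASE WHEN description LIKE '%Verifica%'  THEN 'V'
              WHEN description LIKE '%Revisione%' THEN 'R'
              WHEN description LIKE '%Collaudo%'  THEN 'C'
         END AS r_type
  FROM   wrhwr
  WHERE  status != 'S'
  AND    pmp_id LIKE 'VER-EX%'
), ranked AS (
  SELECT eq_id, date_completed, r_type,
         RANK () OVER (PARTITION BY eq_id, r_type
                       ORDER BY date_completed DESC NULLS LAST) AS rn
  FROM   typed
  WHERE  r_type IS NOT NULL
)
SELECT eq_id,
       MAX (CASE WHEN r_type = 'V' THEN date_completed END) AS verifica,
       MAX (CASE WHEN r_type = 'R' THEN date_completed END) AS revisione,
       -- repeat the most recent Collaudo date on every rank row
       LAST_VALUE (MAX (CASE WHEN r_type = 'C' THEN date_completed END) IGNORE NULLS)
         OVER (PARTITION BY eq_id ORDER BY rn) AS collaudo
FROM   ranked
GROUP  BY eq_id, rn
ORDER  BY eq_id, rn;
```

Against the sample data this should produce the three expected rows, with the lone Collaudo date repeated on ranks 2 and 3.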
Thanks

FrankKulash wrote:
About duplicate dates: I was most interested in what you wanted when 2 (or more) rows with the same eq_id and row type (e.g. 'Collaudo') had exactly the same completed_date.
In the new results, did you get the columns mixed up? It looks like the row with eq_id = 'MI-EXT-0002' has 'Collaudo' in the description, but the date appears in the Verifica column of the output, not the Collaudo column.
Why don't you want 'MI-EXT-0001' in the results? Is it related to the non-unique date?
For all optimization questions, see the forum FAQ:https://forums.oracle.com/message/9362003
If you can explain what you need to do in the view (and post some sample data and output as examples) then someone might help you find a better way to do it.
It looks like there's a lot of repetition in the code. Whatever you're trying to do, I suspect there's a simpler, more efficient way to do it.
About duplicate dates: the query must show ONLY one date_completed and ignore duplicates. Those records are "bad data"; you can't have two Collaudos with the same completion date.
Collaudo stands for equipment check. A craftsperson does an equipment check once a day and, with a mobile app, updates the work request related to the equipment and the preventive-maintenance procedure, so it is impossible, by design, to complete more than one check (Collaudo) in a day.
About the new results: it's my fault; while typing I swapped the columns.
With "I don't want 'MI-EXT-0001'" I meant: "I don't want to show MI-EXT-0001 AGAIN." The output in the previous post, including MI-EXT-0001, was correct.
Regarding optimisation...
the repetition of
LAST_VALUE (
MIN (CASE WHEN r_type = THEN column_name END) IGNORE NULLS)
OVER (PARTITION BY eq_id ORDER BY r_num) AS alias_column_name
is because I don't know another feasible way to have all the needed columns of table wrhwr in the main query while maintaining the correct order, so I get them in got_r_type and propagate them through all the subqueries.
In the main query I join the eq table (which contains all the information about a specific piece of equipment) with the "correct" dates and columns of the wrhwr table.
I filter the eq table on the specific equipment standard (eq_std column).
efm_eq_tablet table and where clause
AND e.eq_id = e2.eq_id
AND e2.is_active = 'S';
means: show only rows in the eq table that have a matching row in the efm_eq_tablet table AND are active (represented by the value 'S' in the is_active column).
About the tables v2, r2 and c2
(SELECT doc_data, doc_data_rinnovo, eq_id
FROM efm_doc_rep edr
WHERE edr.csi_id = '1011503' AND edr.doc_validita_temp = 'LIM') v2,
(SELECT doc_data, doc_data_rinnovo, eq_id
FROM efm_doc_rep edr
WHERE eq_id = edr.eq_id
AND edr.csi_id = '1011504'
AND edr.doc_validita_temp = 'LIM') r2,
(SELECT doc_data, doc_data_rinnovo, eq_id
FROM efm_doc_rep edr
WHERE edr.csi_id IN ('1011505', '1011507')
AND edr.doc_validita_temp = 'LIM'
AND adempimento_ok = 'SI') c2,
Those tables contain "alternate" completion dates, to be used when there isn't any wrhwr row for an eq_id OR when all the date_completed values are NULL.
The NVL() and NVL2() functions are used in the main query to implement this.
The CASE/WHEN blocks inside the main query implement the behavior of selecting the correct date based on the above conditions. -
Optimizing this query....
I have a query like the following:
select x.date_required, sum(x.t_score) as t-score from
(select tv.business_date as date_required, sum(tv.score) as t_score from table1 tv
where tv.type like 'blue'
union all
select ta.business_date as date_required, sum(ta.score) as t_score from table1 ta
where ta.type like 'White'
union all
select tg.business_date as date_required, sum(tg.score) as t_score from table1 tg
where tg.type like 'green') x
group by x.date_required;
I would like to optimize it and make it simpler. Thank you.

Hi,
There are errors in the query you posted, so it's hard to know what you're trying to do.
For starters, find a way to avoid the UNION. For example:
SELECT business_date AS date_required
, SUM (score) AS t_score
FROM table1
WHERE type IN ('blue', 'white', 'green')
GROUP BY business_date
;

If you're not using wild-cards, don't use LIKE; it's slower than = or IN.
Do you have an index on type?
The following thread shows what you need to do for optimization questions:
When your query takes too long ... -
We have a multi-terabyte database which daily receives tens of thousands of records to insert or update. Most online advice on reducing memory paging covers server memory options and dynamic memory management techniques.
Are there any tested programmatic or database-design techniques that could also be explored to increase efficiency?
Thanks,
Tom

> How long is it taking now to process this workload?
About four hours. The record data is inserted/updated across multiple related tables.
Across about how many tables?
Four hours seems a bit long for 100k rows to merge - say, half updates and half new inserts (and whatever you do about deletes; maybe treat them as slowly changing dimensions). My first move would be to treat it as an optimization question and use the profiler to get a list of the longest-running and most expensive queries, then see if they can be individually improved.
Just basic stuff: do you handle the rows one at a time and commit for each one, or are the processes done relationally, with one commit per batch?
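A set-based, one-commit-per-batch alternative could be sketched like this (the staging and target table names and columns are hypothetical):

```sql
BEGIN TRANSACTION;

-- one relational MERGE per batch instead of row-at-a-time insert/update logic
MERGE dbo.fact_orders AS tgt
USING dbo.staging_orders AS src
    ON tgt.order_id = src.order_id
WHEN MATCHED THEN
    UPDATE SET tgt.amount     = src.amount,
               tgt.updated_at = src.updated_at
WHEN NOT MATCHED BY TARGET THEN
    INSERT (order_id, amount, updated_at)
    VALUES (src.order_id, src.amount, src.updated_at);

COMMIT TRANSACTION;
```

Tens of thousands of rows per day is usually well within what a single set-based MERGE can absorb quickly, provided the join column is indexed.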
Josh -
Restoration of cold backup on different server
Question: 1) I have a cold backup from Sunday, taken with user-managed backup, on server A, plus all archive logs up to Wednesday on server A.
2) I transfer my cold backup and archive logs to server B.
3) Now I need to recover server B's database up to Wednesday.
Note: 1) The locations of the datafiles, controlfiles and redo logs on server B differ from those on server A.
2) OS is AIX 5.3
3) Oracle database version 10.2.0.4 on both servers.

Duplicate thread:
RMAN backup optimization question
Aman.... -
Hello.
I have a lot of SQL statements (like 2-3 million) - plain inserts, deletes and updates. In a previous post I asked how to wrap the statements in exception handlers:
Re: exceptions for a small SQL script (now with optimization question)
Here is the solution, and this is how my script looks now (with only 7 statements in 3 blocks out of those millions of statements):
BEGIN
BEGIN
DELETE ...
COMMIT;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line(sqlerrm);
ROLLBACK;
raise;
END;
BEGIN
INSERT ...
INSERT ...
INSERT ...
COMMIT;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line(sqlerrm);
ROLLBACK;
raise;
END;
BEGIN
INSERT ...
UPDATE ...
COMMIT;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line(sqlerrm);
ROLLBACK;
raise;
END;
END;
/

So it seems that for Oracle this is only one very big statement (the .sql file is over 1 GB), and Oracle tries to parse everything first. In my case, obviously unsuccessfully, because I received ORA-24373: invalid length specified for statement. So my question is: what is the maximum length of a statement? Is there a workaround for this issue?
Thanks a lot!

Great, I found the solution by myself. Here's what I needed:
BEGIN
DELETE ...
COMMIT;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line(sqlerrm);
ROLLBACK;
raise;
END;
BEGIN
INSERT ...
INSERT ...
INSERT ...
COMMIT;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line(sqlerrm);
ROLLBACK;
raise;
END;
BEGIN
INSERT ...
UPDATE ...
COMMIT;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line(sqlerrm);
ROLLBACK;
raise;
END;
/

My script will still be gigabytes big and will work just fine. Get it?
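Presumably the key difference is that each anonymous block now ends with its own / on a line by itself, so SQL*Plus sends the blocks to the server one at a time instead of as a single gigantic PL/SQL statement. A sketch of the pattern (the DELETE/INSERT statements and table names are placeholders):

```sql
BEGIN
  DELETE FROM t1 WHERE id = 1;   -- placeholder statement
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line (sqlerrm);
    ROLLBACK;
    RAISE;
END;
/
BEGIN
  INSERT INTO t2 VALUES (1);     -- placeholder statement
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line (sqlerrm);
    ROLLBACK;
    RAISE;
END;
/
```

Each / both terminates and executes the preceding block, so no single statement ever approaches the parser's length limit.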