MathScript contourf function produces strange results
The code "contourf(magic(8))" typed into the MathScript window produces a different result from the same single line in MATLAB. The resolution and colormap differences don't bother me, but the line fragments are clearly incorrect. This is a simple example of a problem I'm having with my data. Further, if I'd like to plot the contour lines myself, the "hold on" function does not seem to work with the contour plot. Any help would be much appreciated. A simple example demonstrating both problems:
clf
colormap(jet)
contourf(magic(8));
hold on
plot(1:8,1:8,'r*')
shg
Hello,
I have looked into this issue some more. I talked to the developer who wrote the colormap function and it turns out we do have all the colormaps. However, their use is not documented. They are passed to the function as an option, e.g. colormap('jet').
In addition, the order of calls matters. The colormap only applies to the current plot. If you generate a new one, the colormap disappears. Consequently, reverse the order of your colormap and contourf calls as follows:
contourf(magic(8));
colormap('jet')
Grant M.
Staff Software Engineer | LabVIEW Math & Signal Processing | National Instruments
Similar Messages
-
Help - count function returns strange results
hi everyone,
here's my scenario: i'm trying to get the NUMBER OF REPORTS and NUMBER OF
IMAGES GROUP BY MONTH from the two tables below.
REPORT
reportid(*primary key)
date
IMAGE
reportid(*foreign key referring to report's table)
image
sample output:
MONTH NO.OF REPORTS NO. OF IMAGES
feb 01 10 9
mar 01 12 8
my SQL goes like this:
select to_char(date, 'month-yy'),
count(REPORT.reportid), count(IMAGE.reportid)
from REPORT, IMAGE
where REPORT.reportid = IMAGE.reportid
group by to_char(date, 'month-yy')
The above SQL yields strange results: the number of images always equals the number of reports, which is of course wrong, since a report may contain zero, one, or more images.
I don't know what's wrong with the above statement, but if I group
by REPORTID and DAY rather than MONTH, then amazingly it works! What's
wrong with the count? Why does it give me the same result if I group by
MONTH?
Can anyone shed some light on this?
Try using the following example:
Table TEST_REPORT
RPTID RPTDATE
1 02-JAN-01
3 02-JAN-01
2 02-JAN-01
5 11-FEB-01
6 11-FEB-01
7 11-FEB-01
Table TEST_IMAGE
RPTID IM
1 1
2 1
3 1
SQL:
select to_char(rptdate,'MON-YYYY'),
sum(decode(a.rptid,null,0,1)) report_cnt,
sum(decode(b.rptid,null,0,1)) image_cnt
from test_report a, test_image b
where a.rptid = b.rptid(+)
group by to_char(rptdate,'MON-YYYY');
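For what it's worth, the root cause here: after the inner join, each report row is repeated once per matching image, so count(REPORT.reportid) and count(IMAGE.reportid) both count the same set of joined rows. A minimal sketch of the effect, and of a count(DISTINCT ...) fix, in Python with sqlite3 (the miniature tables are invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE report (reportid INTEGER PRIMARY KEY, rptdate TEXT);
CREATE TABLE image  (reportid INTEGER, im TEXT);
INSERT INTO report VALUES (1, 'FEB-2001'), (2, 'FEB-2001'), (3, 'FEB-2001');
-- report 1 has three images, report 2 has one, report 3 has none
INSERT INTO image VALUES (1, 'a'), (1, 'b'), (1, 'c'), (2, 'd');
""")

# The problem query: after the join, both counts see the same joined rows
joined = con.execute("""
    SELECT count(report.reportid), count(image.reportid)
    FROM report JOIN image ON report.reportid = image.reportid
""").fetchone()
print(joined)  # (4, 4) -- equal, and neither is the number of reports

# COUNT(DISTINCT ...) plus an outer join gives the intended numbers
fixed = con.execute("""
    SELECT count(DISTINCT report.reportid), count(image.reportid)
    FROM report LEFT JOIN image ON report.reportid = image.reportid
""").fetchone()
print(fixed)  # (3, 4) -- 3 reports, 4 images
```

The same count(DISTINCT REPORT.reportid) change, with the outer join, carries straight over to the month-grouped Oracle query.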
-
Masks not working, producing strange results?...
Hello, I'm having problems with an animated mask.
I have a leg graphic (contains 3 tweened leg pieces), and then a gradient.
In the editor they behave how they should, properly masking the leg. However,
the SWF output produces some crazy results.
I've tried setting both as movieclips, using the "Layer" blend mode on the leg, using leg.mask = legMask;
but whatever I try produces the exact same result.
Any ideas?
Thanks!
It sounds like you've applied the Blur filter to the mask instead of the object the Mask is applied to.
This will blur the masked footage:
GROUP
Footage copy
Mask
Gaussian Blur
Footage
This will blur the edges of the mask:
GROUP
Footage copy
Mask
Gaussian Blur
Footage
Hope that helps. -
Exporting to Quicktime produces strange results
Greetings:
I have been working on a fairly large project in FCE for many months. I am finally ready to export to burn onto a DVD but am having some strange problems. Although everything looks fine within FCE, when exported to a QuickTime movie as either a self-contained movie or as a reference file, the movie is not playing properly. Specifically, there are problems with images that I have animated using keyframes and with some transitions between images using cross dissolve.
There are no problems with the video or sound, only when still images are involved in some fashion or another.
Does anyone have any experience with this issue? Does anyone have a suggestion for a fix?
Thanks in advance for any help...
A couple of different problems.
The first is as follows:
I have an image in one video track. Using keyframe animation the image slides across the screen, stops, and then fills the screen. The underlying video track is a movie interview that I conducted. In FCE, all looks as it should. Once exported as a QuickTime movie, the image is transparent, allowing the underlying clip to be seen, and the image appears in duplicate: one image overlying another, with the images being different sizes and not matching the keyframes I used.
The other problem relates to a photo montage that I did. I recreated the 'Ken Burns effect' using key frames and added cross dissolve transitions between photos. Again, in FCE all appears as it should but in Quicktime, some images are again duplicated and move in ways not defined by me, transitions flicker or flash and occasionally the images appear very dark.
Does this help you help me? -
ALTER SESSION in sql script produces strange results
Why would "alter session set NLS_DATE_FORMAT" interfere with the operation of a sql script?
First, the script without the ALTER SESSION:
SQL> !cat doit1.sql
--alter session set NLS_DATE_FORMAT = "DD-MON-YYYY"
set pagesize 99
set trimspool on
col BIRTH_MONTH for a11
break on BIRTH_MONTH skip 1
spo doit.txt
prompt My Membership
set feedback off heading off
select 'As of ' || sysdate from dual;
set heading on
select count(*) from dual;
spool off
And its execution:
SQL> @doit1
My Membership
As of 14-JAN-10
COUNT(*)
1
All's well with the world. Now let's throw in the ALTER SESSION:
SQL> !cat doit2.sql
alter session set NLS_DATE_FORMAT = "DD-MON-YYYY"
set pagesize 99
set trimspool on
col BIRTH_MONTH for a11
break on BIRTH_MONTH skip 1
spo doit.txt
prompt My Membership
set feedback off heading off
select 'As of ' || sysdate from dual;
set heading on
select count(*) from dual;
spool off
And its execution:
SQL> @doit2
set pagesize 99
ERROR at line 2:
ORA-00922: missing or invalid option
COUNT(*)
1
not spooling currently
SQL> Doesn't like the SET commands (I've played around with several; it doesn't like any of them) and ignores the opening SPOOL command.
I first noticed this with a 10.2.0.1 client (Windows) connected to a 10.2.0.4 database (HP-UX). Confirmed on my laptop VM lab, 10.2.0.4 under OEL5, client and db on the same VM.
Chris Poole wrote:
You're just missing the trailing ';' at the end of the alter session line. Without a terminator, SQL*Plus keeps reading the following lines into the SQL buffer as part of the statement, which is why the ORA-00922 points at line 2 ("set pagesize 99"). Add the semicolon and it all works.
DOH! (in my best Homer Simpson imitation!) -
Strange symbols appear on my desktop, producing even stranger results!
Strange symbols appear on my desktop, producing even stranger results!
These symbols are for shortcuts. It could be a command to move the text insertion point to the beginning of a document, or, if you have been in System Preferences > Keyboard and set one of the modifier key options, it could be for setting caps lock. I don't know why you are showing a screen shot of it. Have you clicked on it? Have you been doing any work with documents? You could go to Disk Utility and repair permissions, or do a PRAM reset: hold Command-Option-P-R through three chimes and release.
-
Why is LOWER function producing a cartesian merge join, when UPPER doesn't?
Hi there,
I have an odd scenario that I would like to understand correctly...
We have a query that is taking a long time to run on one of our databases, further investigation of the explain plan showed that the query was in fact producing a Cartesian merge join even though there is clearly join criteria specified. I know that the optimiser can and will do this if it is a more efficient way of producing the results, however in this scenario it is producing the Cartesian merge on two unrelated tables and seemingly ignoring the Join condition...
*** ORIGINAL QUERY ***
SELECT count(*)
FROM srs_sce sce,
srs_scj scj,
men_mre mre,
srs_mst mst,
cam_smo cam,
ins_spr spr,
men_mua mua,
temp_webct_users u
WHERE sce.sce_scjc = scj.scj_code
AND sce.sce_stuc = mre.mre_code
AND mst.mst_code = mre.mre_mstc
AND mre.mre_mrcc = 'STU'
AND mst.mst_code = mua.mua_mstc
AND cam.ayr_code = sce.sce_ayrc
AND cam.spr_code = scj.scj_sprc
AND spr.spr_code = scj.scj_sprc
-- Ignored Join Condition
AND LOWER(mua.mua_extu) = LOWER(u.login)
AND SUBSTR (sce.sce_ayrc, 1, 4) = '2008'
AND sce.sce_stac IN ('RCE', 'RLL', 'RPD', 'RIN', 'RSAS', 'RHL_R', 'RCO', 'RCI', 'RCA');
*** CARTESIAN EXPLAIN PLAN ***
SELECT STATEMENT CHOOSECost: 83
20 NESTED LOOPS Cost: 83 Bytes: 176 Cardinality: 1
18 NESTED LOOPS Cost: 82 Bytes: 148 Cardinality: 1
15 NESTED LOOPS Cost: 80 Bytes: 134 Cardinality: 1
13 NESTED LOOPS Cost: 79 Bytes: 123 Cardinality: 1
10 NESTED LOOPS Cost: 78 Bytes: 98 Cardinality: 1
7 NESTED LOOPS Cost: 77 Bytes: 74 Cardinality: 1
NOTE: The Cartesian product is performed on the men_mre & temp_webct_users tables not the men_mua mua & temp_webct_users tables specified in the join condition.
4 MERGE JOIN CARTESIAN Cost: 74 Bytes: 32 Cardinality: 1
1 TABLE ACCESS FULL EXETER.TEMP_WEBCT_USERS Cost: 3 Bytes: 6 Cardinality: 1
3 BUFFER SORT Cost: 71 Bytes: 1,340,508 Cardinality: 51,558
2 TABLE ACCESS FULL SIPR.MEN_MRE Cost: 71 Bytes: 1,340,508 Cardinality: 51,558
6 TABLE ACCESS BY INDEX ROWID SIPR.SRS_SCE Cost: 3 Bytes: 42 Cardinality: 1
5 INDEX RANGE SCAN SIPR.SRS_SCEI3 Cost: 2 Cardinality: 3
9 TABLE ACCESS BY INDEX ROWID SIPR.SRS_SCJ Cost: 1 Bytes: 24 Cardinality: 1
8 INDEX UNIQUE SCAN SIPR.SRS_SCJP1 Cardinality: 1
12 TABLE ACCESS BY INDEX ROWID SIPR.INS_SPR Cost: 1 Bytes: 25 Cardinality: 1
11 INDEX UNIQUE SCAN SIPR.INS_SPRP1 Cardinality: 1
14 INDEX UNIQUE SCAN SIPR.SRS_MSTP1 Cost: 1 Bytes: 11 Cardinality: 1
17 TABLE ACCESS BY INDEX ROWID SIPR.MEN_MUA Cost: 2 Bytes: 14 Cardinality: 1
16 INDEX RANGE SCAN SIPR.MEN_MUAI3 Cost: 2 Cardinality: 1
19 INDEX RANGE SCAN SIPR.CAM_SMOP1 Cost: 2 Bytes: 28 Cardinality: 1
After speaking with data experts I realised that one of the fields being LOWERed for the join condition almost always holds uppercase values, so I tried modifying the query to use the UPPER function rather than the LOWER one originally used. With this change the query executed in seconds and the Cartesian merge was eradicated, which by all accounts is a good result.
*** WORKING QUERY ***
SELECT count(*)
FROM srs_sce sce,
srs_scj scj,
men_mre mre,
srs_mst mst,
cam_smo cam,
ins_spr spr,
men_mua mua,
temp_webct_users u
WHERE sce.sce_scjc = scj.scj_code
AND sce.sce_stuc = mre.mre_code
AND mst.mst_code = mre.mre_mstc
AND mre.mre_mrcc = 'STU'
AND mst.mst_code = mua.mua_mstc
AND cam.ayr_code = sce.sce_ayrc
AND cam.spr_code = scj.scj_sprc
AND spr.spr_code = scj.scj_sprc
-- Working Join Condition
AND UPPER(mua.mua_extu) = UPPER(u.login)
AND SUBSTR (sce.sce_ayrc, 1, 4) = '2008'
AND sce.sce_stac IN ('RCE', 'RLL', 'RPD', 'RIN', 'RSAS', 'RHL_R', 'RCO', 'RCI', 'RCA');
*** WORKING EXPLAIN PLAN ***
SELECT STATEMENT CHOOSECost: 13
20 SORT AGGREGATE Bytes: 146 Cardinality: 1
19 NESTED LOOPS Cost: 13 Bytes: 146 Cardinality: 1
17 NESTED LOOPS Cost: 12 Bytes: 134 Cardinality: 1
15 NESTED LOOPS Cost: 11 Bytes: 115 Cardinality: 1
12 NESTED LOOPS Cost: 10 Bytes: 91 Cardinality: 1
9 NESTED LOOPS Cost: 7 Bytes: 57 Cardinality: 1
6 NESTED LOOPS Cost: 6 Bytes: 31 Cardinality: 1
4 NESTED LOOPS Cost: 5 Bytes: 20 Cardinality: 1
1 TABLE ACCESS FULL EXETER.TEMP_WEBCT_USERS Cost: 3 Bytes: 6 Cardinality: 1
3 TABLE ACCESS BY INDEX ROWID SIPR.MEN_MUA Cost: 2 Bytes: 42 Cardinality: 3
2 INDEX RANGE SCAN EXETER.TEST Cost: 1 Cardinality: 1
5 INDEX UNIQUE SCAN SIPR.SRS_MSTP1 Cost: 1 Bytes: 11 Cardinality: 1
8 TABLE ACCESS BY INDEX ROWID SIPR.MEN_MRE Cost: 2 Bytes: 26 Cardinality: 1
7 INDEX RANGE SCAN SIPR.MEN_MREI2 Cost: 2 Cardinality: 1
11 TABLE ACCESS BY INDEX ROWID SIPR.SRS_SCE Cost: 3 Bytes: 34 Cardinality: 1
10 INDEX RANGE SCAN SIPR.SRS_SCEI3 Cost: 2 Cardinality: 3
14 TABLE ACCESS BY INDEX ROWID SIPR.SRS_SCJ Cost: 1 Bytes: 24 Cardinality: 1
13 INDEX UNIQUE SCAN SIPR.SRS_SCJP1 Cardinality: 1
16 INDEX RANGE SCAN SIPR.CAM_SMOP1 Cost: 2 Bytes: 19 Cardinality: 1
18 INDEX UNIQUE SCAN SIPR.INS_SPRP1 Bytes: 12 Cardinality: 1
*** RESULT ***
COUNT(*)
83299
I am still struggling to understand why this would have worked, as to my knowledge the LOWER and UPPER functions are similar enough in behaviour; and regardless of that, why would one version cause the optimiser to effectively ignore a join condition?
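One plausible explanation, though it is an assumption since no DDL is shown: the working plan does an INDEX RANGE SCAN on EXETER.TEST against MEN_MUA, which looks like a function-based index on UPPER(mua_extu). That gives the optimiser an access path (and statistics) for the UPPER predicate, while the LOWER predicate has neither, so its selectivity is mis-estimated and the plan degrades. The same mechanism can be sketched with an expression index in Python's sqlite3 (table and index names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mua (mua_extu TEXT)")
con.executemany("INSERT INTO mua VALUES (?)", [("USER%d" % i,) for i in range(100)])
# Expression index on upper(...) only -- the analogue of a function-based index
con.execute("CREATE INDEX mua_upper_idx ON mua (upper(mua_extu))")

def plan(sql):
    # EXPLAIN QUERY PLAN rows end with a human-readable detail string
    return " ".join(r[-1] for r in con.execute("EXPLAIN QUERY PLAN " + sql))

upper_plan = plan("SELECT * FROM mua WHERE upper(mua_extu) = 'USER7'")
lower_plan = plan("SELECT * FROM mua WHERE lower(mua_extu) = 'user7'")
print(upper_plan)  # a SEARCH using mua_upper_idx
print(lower_plan)  # a full scan -- no index matches the lower() expression
```

If that is what is happening here, creating the matching index on LOWER(mua_extu) (or standardising on UPPER) would make both variants behave.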
If anyone can shed any light on this for me it would be very much appreciated.
Regards,
Kieron
Edited by: Kieron_Bird on Nov 19, 2008 6:09 AM
Edited by: Kieron_Bird on Nov 19, 2008 6:41 AM
My mistake on the predicate information, I was in a rush to run off to a meeting when I posted the entry...
*** UPPER Version of the Explain Plan ***
| Id | Operation | Name | Rows | Bytes | Cost | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | 146 | 736 | | | |
| 1 | SORT AGGREGATE | | 1 | 146 | | | | |
| 2 | SORT AGGREGATE | | 1 | 146 | | 86,10 | P->S | QC (RAND) |
|* 3 | HASH JOIN | | 241 | 35186 | 736 | 86,10 | PCWP | |
|* 4 | HASH JOIN | | 774 | 105K| 733 | 86,09 | P->P | HASH |
|* 5 | HASH JOIN | | 12608 | 1489K| 642 | 86,08 | P->P | BROADCAST |
| 6 | NESTED LOOPS | | 14657 | 1531K| 491 | 86,07 | P->P | HASH |
|* 7 | HASH JOIN | | 14657 | 1359K| 490 | 86,07 | PCWP | |
|* 8 | HASH JOIN | | 14371 | 996K| 418 | 86,06 | P->P | HASH |
|* 9 | TABLE ACCESS FULL | SRS_SCE | 3211 | 106K| 317 | 86,00 | S->P | BROADCAST |
|* 10 | HASH JOIN | | 52025 | 1879K| 101 | 86,06 | PCWP | |
|* 11 | TABLE ACCESS FULL | MEN_MRE | 51622 | 1310K| 71 | 86,01 | S->P | HASH |
| 12 | INDEX FAST FULL SCAN| SRS_MSTP1 | 383K| 4119K| 30 | 86,05 | P->P | HASH |
| 13 | TABLE ACCESS FULL | SRS_SCJ | 114K| 2672K| 72 | 86,02 | S->P | HASH |
|* 14 | INDEX UNIQUE SCAN | INS_SPRP1 | 1 | 12 | | 86,07 | PCWP | |
| 15 | TABLE ACCESS FULL | MEN_MUA | 312K| 4268K| 151 | 86,03 | S->P | HASH |
| 16 | INDEX FAST FULL SCAN | CAM_SMOP1 | 527K| 9796K| 91 | 86,09 | PCWP | |
| 17 | TABLE ACCESS FULL | TEMP_WEBCT_USERS | 33276 | 194K| 3 | 86,04 | S->P | HASH |
Predicate Information (identified by operation id):
3 - access(UPPER("MUA"."MUA_EXTU")=UPPER("U"."LOGIN"))
4 - access("CAM"."AYR_CODE"="SCE"."SCE_AYRC" AND "CAM"."SPR_CODE"="SCJ"."SCJ_SPRC")
5 - access("MST"."MST_CODE"="MUA"."MUA_MSTC")
7 - access("SCE"."SCE_SCJC"="SCJ"."SCJ_CODE")
8 - access("SCE"."SCE_STUC"="MRE"."MRE_CODE")
9 - filter(SUBSTR("SCE"."SCE_AYRC",1,4)='2008' AND ("SCE"."SCE_STAC"='RCA' OR "SCE"."SCE_STAC"='RCE' OR
"SCE"."SCE_STAC"='RCI' OR "SCE"."SCE_STAC"='RCO' OR "SCE"."SCE_STAC"='RHL_R' OR "SCE"."SCE_STAC"='RIN' OR
"SCE"."SCE_STAC"='RLL' OR "SCE"."SCE_STAC"='RPD' OR "SCE"."SCE_STAC"='RSAS'))
10 - access("MST"."MST_CODE"="MRE"."MRE_MSTC")
11 - filter("MRE"."MRE_MRCC"='STU')
14 - access("SPR"."SPR_CODE"="SCJ"."SCJ_SPRC")
Note: cpu costing is off
40 rows selected.
*** LOWER Version of the Explain Plan ***
| Id | Operation | Name | Rows | Bytes | Cost | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 1 | 146 | 736 | | | |
| 1 | SORT AGGREGATE | | 1 | 146 | | | | |
| 2 | SORT AGGREGATE | | 1 | 146 | | 88,10 | P->S | QC (RAND) |
|* 3 | HASH JOIN | | 257K| 35M| 736 | 88,10 | PCWP | |
|* 4 | HASH JOIN | | 774 | 105K| 733 | 88,09 | P->P | HASH |
|* 5 | HASH JOIN | | 12608 | 1489K| 642 | 88,08 | P->P | BROADCAST |
| 6 | NESTED LOOPS | | 14657 | 1531K| 491 | 88,07 | P->P | HASH |
|* 7 | HASH JOIN | | 14657 | 1359K| 490 | 88,07 | PCWP | |
|* 8 | HASH JOIN | | 14371 | 996K| 418 | 88,06 | P->P | HASH |
|* 9 | TABLE ACCESS FULL | SRS_SCE | 3211 | 106K| 317 | 88,00 | S->P | BROADCAST |
|* 10 | HASH JOIN | | 52025 | 1879K| 101 | 88,06 | PCWP | |
|* 11 | TABLE ACCESS FULL | MEN_MRE | 51622 | 1310K| 71 | 88,01 | S->P | HASH |
| 12 | INDEX FAST FULL SCAN| SRS_MSTP1 | 383K| 4119K| 30 | 88,05 | P->P | HASH |
| 13 | TABLE ACCESS FULL | SRS_SCJ | 114K| 2672K| 72 | 88,02 | S->P | HASH |
|* 14 | INDEX UNIQUE SCAN | INS_SPRP1 | 1 | 12 | | 88,07 | PCWP | |
| 15 | TABLE ACCESS FULL | MEN_MUA | 312K| 4268K| 151 | 88,03 | S->P | HASH |
| 16 | INDEX FAST FULL SCAN | CAM_SMOP1 | 527K| 9796K| 91 | 88,09 | PCWP | |
| 17 | TABLE ACCESS FULL | TEMP_WEBCT_USERS | 33276 | 194K| 3 | 88,04 | S->P | HASH |
Predicate Information (identified by operation id):
3 - access(LOWER("MUA"."MUA_EXTU")=LOWER("U"."LOGIN"))
4 - access("CAM"."AYR_CODE"="SCE"."SCE_AYRC" AND "CAM"."SPR_CODE"="SCJ"."SCJ_SPRC")
5 - access("MST"."MST_CODE"="MUA"."MUA_MSTC")
7 - access("SCE"."SCE_SCJC"="SCJ"."SCJ_CODE")
8 - access("SCE"."SCE_STUC"="MRE"."MRE_CODE")
9 - filter(SUBSTR("SCE"."SCE_AYRC",1,4)='2008' AND ("SCE"."SCE_STAC"='RCA' OR "SCE"."SCE_STAC"='RCE' OR
"SCE"."SCE_STAC"='RCI' OR "SCE"."SCE_STAC"='RCO' OR "SCE"."SCE_STAC"='RHL_R' OR "SCE"."SCE_STAC"='RIN' OR
"SCE"."SCE_STAC"='RLL' OR "SCE"."SCE_STAC"='RPD' OR "SCE"."SCE_STAC"='RSAS'))
10 - access("MST"."MST_CODE"="MRE"."MRE_MSTC")
11 - filter("MRE"."MRE_MRCC"='STU')
14 - access("SPR"."SPR_CODE"="SCJ"."SCJ_SPRC")
Note: cpu costing is off
40 rows selected.
As you state, something has obviously changed, but nothing obvious has been changed.
We gather statistics via...
exec dbms_stats.gather_schema_stats(ownname => 'USERNAME', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, degree => 4, granularity => 'ALL', cascade => TRUE);
We run a script nightly which works out which indexes require a rebuild and rebuilds only those; it doesn't just rebuild all indexes.
It would be nice to be able to use the 10g statistics history, but on this instance we aren't yet at that version, hopefully we will be there soon though.
Hope this helps,
Kieron -
Formula Node Floor(x) Produces Different Result
Hi, A search didn't find anything about the floor(x) function, so... I'm using LabVIEW 6.0.2, and the floor(x) function in a Formula Node seems to be producing different results depending on previous calculations. I've got the following 2 lines in a Formula Node:
MSS = Ref / RefDiv;
MDN = floor(RF / MSS);
Ref is always 20.0M, RefDiv always 500.0, and for this calculation RF is always 1539.4, all numbers Double with 6 digits of precision. I generate an array of frequencies given a start, step, and frequency count. These frequencies then go to a subVI with a Formula Node that calculates the byte values to send to a couple of PLLs.
If Start = 70.1, Step = .025, and Count = 20, at frequency 70.2 the Floor function gives 38.485.
If Start = 70.0, Step = .025, and Count = 20, at frequency 70.2 the Floor function gives 38.484.
I've omitted some calc steps here, but I've verified the starting values in the subVI are the same in both cases. Why the result changes I'm hoping someone can tell me...
Thanks,
Steve
I want to thank those involved again for their help. With ideas and hints from others I found a solution without scaling.
In recap, what had bothered me was it *appeared* like the same subVI was giving correct results one time and incorrect results only randomly. While I understand binary fractional imprecision, I wasn't doing any looped calculations 100+ times or anything.
I did some more checking though. The problem was indeed introduced by cumulative fractional addition; in this case 10 additions were enough to cause the error. A value that displays as 72.0 can actually be 71.999_94, and floor of that produces 71.0, not 72.0. Using a shift register and a constant fraction to add an offset to produce the array introduces enough error in under 10 iterations to be a problem. By the time the loop got to what was supposed to be 72.0, it was actually 71.999_84 or something, enough to throw the floor function. Now I understand why the error occurred, and why it wasn't a problem before.
I fixed this problem by converting the real frequency number to an I32 before introduction to the Formula Node. This corrected the error introduced by the fractional addition by forcing 71.999_84 to 72, instead of letting it propagate through the rest of the calculations. And it was a whole lot easier than changing all the VIs to allow scaling! Also, I prefer to know where and why the problem happened, instead of just scaling all my calculations. Maybe I can recognise potential problems in the future.
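The mechanism described above is easy to reproduce outside LabVIEW. A minimal sketch in Python (the 0.1 step is illustrative; any fraction with no exact binary representation behaves the same way):

```python
import math

# Ten cumulative additions of 0.1 "should" give 1.0, but 0.1 has no exact
# binary representation, so the rounding error accumulates step by step.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)              # 0.9999999999999999
print(math.floor(total))  # 0 -- not the 1 you would expect

# Computing the value directly, instead of accumulating it, avoids the drift
direct = 10 * 0.1
print(math.floor(direct))  # 1
```

This is exactly why deriving each frequency from the start value and an iteration count (or rounding to an integer first, as Steve did) is safer than repeatedly adding a fractional step in a shift register.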
My boss wants to go back and look at his program to see if HPVee somehow bypassed the problem or if he did the calculations differently.
Thanks again for the insight and help,
Steve -
Having installed Aperture 3 and imported my iPhoto library (15,000 photos), Aperture 3 does not render most of my photos correctly; most are blurred, pixellated, and distorted. Reverting to photo produces perfect results. Any suggestions?
Galdaplh,
What do you mean by "reverting to photo"? "Revert" is not a function in Aperture so you must be talking about something else.
If you just performed the import of that many photos, Aperture will take some time to generate thumbnails. You can see whether it is doing this sort of thing by pressing {Command}-{Shift}-0 to show the Activity window.
Once Aperture generates all its thumbnails, you will have nearly instantaneous access to much, much better thumbnails than Aperture uses immediately after importing photos.
nathan -
Strange results with Insert statement having select query
Hi all,
I am facing a strange issue with Insert statement based on a select query having multiple joins.
DB- Oracle 10g
Following is the layout of my query -
Insert into Table X
Select distinct Col1, Col2, Col3, Col4, Function(Col 5) from Table A, B
where trunc(updated_date) >= trunc(sysdate-3)
and join conditions for A, B
Union
Select Col1, Col2, Col3, Col4, Function(Col 5) from Table C, D
where trunc(updated_date) >= trunc(sysdate-3)
and join conditions for C, D
Union
.... till 4 unions. All tables reside in the local database and none has more than 50,000 records.
If I execute the above insert in a DBMS job, it inserts, say, 50 records, whereas if I execute the select query alone it gives 56 records.
We observed following things-
a) no issue with size of tablespace
b) no error while inserting
c) since the query takes a lot of time, we have not used a cursor and PL/SQL block for inserting.
d) this discrepancy in the number of records happens frequently but not every time.
e) we examined the records left out of the insert and couldn't find any specific pattern.
f) there is no constraint on table X, into which we are inserting, nor on tables A, B, C, ...
I went through this thread -SQL insert with select statement having strange results but mainly users are having either DB Links or comparison of literal dates, in my case there is none.
Can somebody explain the discrepancy and what the solution for it is?
Or at least give some pointers on how to proceed with the analysis.
Edited by: Pramod Verma on Mar 5, 2013 4:59 AM
Updated query and added more details>
Since I am using Trunc() in the where clause, timing should not matter much. Also, I manually ruled out records which were updated after the job run.
>
The first rule of troubleshooting is to not let your personal opinion get in the way of finding out what is wrong.
Actually this code, and the process it represents, is the most likely CAUSE of the problem.
>
where trunc(updated_date) > = trunc(sysdate-3)
>
You CANNOT reliably use columns like UPDATED_DATE to select records for processing. Your process is flawed.
The value of that column is NOT the date/time that the data was actually committed; it is the date/time that the row was populated.
If you insert a row into a table right now, using SYSDATE (8 AM on 3/5/2013), and don't commit that row until April, your process will NEVER see that 3/5/2013 date until April.
Here is the more typical scenario that I see all the time.
1. Data is inserted/updated all day long on 3/4/2013.
2. A column, for example UPDATED_DATE is given a value of SYSDATE (3/4/2013) in a query or by a trigger on the table.
3. The insert/update query takes place at 11:55 PM - so the SYSDATE values are for THE DAY THE QUERY BEGAN
4. The data pull begins at 12:05 am (on 3/5/2013 - just after midnight)
5. The transaction is COMMITTED at 12:10 AM (on 3/5/2013); 5 minutes after the data pull began.
That data extract in step 4 will NEVER see those records! They DO NOT EXIST when the data pull query is executed since they haven't been committed.
Even worse, the next night's data pull will not see them either! That is because the next pull will pull data for 3/5/2013 but those records have a date of 3/4/2013. They will never get processed.
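That timeline can be reproduced with two database connections. A sketch in Python with sqlite3, using plain strings for the dates (table and column names invented):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "reports.db")
writer = sqlite3.connect(path)
writer.execute("CREATE TABLE report (reportid INTEGER, updated_date TEXT)")
writer.commit()

# 11:55 PM on 3/4: a row is stamped '2013-03-04' but NOT yet committed
writer.execute("INSERT INTO report VALUES (1, '2013-03-04')")

# 12:05 AM on 3/5: the nightly pull for 3/4 runs on a second connection
puller = sqlite3.connect(path)
pull_for_day1 = puller.execute(
    "SELECT count(*) FROM report WHERE updated_date = '2013-03-04'"
).fetchall()[0][0]
print(pull_for_day1)  # 0 -- the uncommitted row is invisible to the pull

# 12:10 AM on 3/5: the writing transaction finally commits
writer.commit()

# The next night's pull asks for 3/5 rows -- and misses the row forever
pull_for_day2 = puller.execute(
    "SELECT count(*) FROM report WHERE updated_date = '2013-03-05'"
).fetchall()[0][0]
print(pull_for_day2)  # 0 -- stamped 3/4 but committed on 3/5, never processed
```

The row exists and is committed, yet neither nightly pull ever selects it: timestamp-based delta extraction filters on statement time, not commit time.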
>
Job timing is 4am and 10pm EST
>
Another wrinkle is when data is inserted/updated from different timezones and the UPDATED_DATE value is from the CLIENT pc or server. Then you can get even more data missed since the client dates may be hours different than the server date used for the data pull process.
DO NOT try to use UPDATED_DATE type columns to do delta extraction or you can have this issue. -
Analog clock MC -- how to produce accurate results?
I'm using Flash 8.
I am trying to program an analog clock movieClip -- if any of
you have seen the UK game show "Countdown", it's a replica of the
Countdown clock. It's a 30-second clock and the second hand runs
from top ("12") to bottom ("6").
My first crack at this has been timeline-based. I've used a
motion tween to animate the hand moving from top to bottom. My
movie is set to run at 12fps, but it's only running at ~11.3fps.
Even though it's a slight deviation, it causes the clock to run
about 32 seconds instead of 30. Note that during the animation, the
CPU usage isn't exceeding about 4%.
I do understand that you can't depend on the Flash Player to
run at a particular framerate.
So I changed the logic behind the clock to use setInterval().
Unfortunately, even my intervals aren't firing at the exact rate
they need to. I've set the interval to 100ms, but it appears to be
firing at 110ms or so. Even changing the interval to a higher
value (200ms, 500ms) doesn't produce exact results.
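One frame-rate-independent approach, sketched here in Python rather than ActionScript (the function and names are mine, not from the thread): on every tick, recompute the hand position from elapsed wall-clock time instead of advancing it by a fixed step per tick, so timer jitter can never accumulate. In Flash 8 the elapsed milliseconds would come from getTimer() inside the interval callback.

```python
CLOCK_SECONDS = 30.0  # the 30-second Countdown clock

def hand_angle(start_ms, now_ms):
    """Degrees swept from '12' (0) to '6' (180), driven by wall-clock time."""
    elapsed = (now_ms - start_ms) / 1000.0
    elapsed = max(0.0, min(elapsed, CLOCK_SECONDS))  # clamp to the clock's run
    return 180.0 * elapsed / CLOCK_SECONDS

# Even if ticks fire late, the hand lands where the true time says it should
print(hand_angle(0, 15000))  # 90.0 -- halfway down at 15 s
print(hand_angle(0, 31000))  # 180.0 -- pinned at '6' once 30 s have passed
```

A late tick then only delays a redraw by a few milliseconds; it can no longer stretch the clock from 30 seconds to 32.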
Any ideas or suggestions on how I can make this work?
Thanks,
Curt
When you say “DAQ express” I assume you mean DAQmx, in LabVIEW 7.0 or later. You could go the route of trying to do precise software-timed stuff, but using WinXP you will always run into the problems that you mentioned. Hardware timing will always be more accurate, but setting up the control may be a bit tricky. To program a hardware-timed acquisition you can either do it explicitly with the DAQmx functions, or you can use the DAQ Assistant (which is an Express VI). With the DAQ Assistant, once you drop it down, it should be pretty straightforward to configure: a window will pop up and you fill in what you want. I would also recommend taking a look at some of the shipping examples to get a feel for DAQmx programming without using the DAQ Assistant. You can find these examples in LabVIEW by going to the Help Menu >> Find Example, then Hardware Input and Output >> DAQmx >> Analog Generation. The Internally Clocked examples will be the easiest to work with initially. Now, I am not too sure how well this will work for stage control; generally that is done either with a motion board or, if fine enough control is needed, FPGA. But those examples should give you a good starting point to work from.
-GDE -
Strange result from insert into...select query
Hello guys,
I need your precious help with a question, maybe simple, but that I can't explain by myself!
I have an "insert into...select" query that, as the title says, returns a strange result. In fact, if I execute ONLY the SELECT statement, the query returns the expected result (2 rows); instead, if I execute the entire "insert into...select" statement, it reports 0 rows inserted!!
Here is an example of the query:
INSERT
INTO TITOLI_ORI (
COD_TITOLO_RICCONS ,
D_ESTRAZIONE ,
COD_SOCIETA ,
COD_PIANO_CONTABILE ,
COD_CONTO_CONTABILE ,
COD_RUBRICATO_STATISTICO_1 ,
COD_NDG ,
NUM_ESEGUITO ,
CUR_IMPORTO_RICCONS ,
CUR_IMPORTO_BICO ,
FLG_MODIFICATO ,
CUR_NON_ASSEGNATO ,
FLG_QUOTATO ,
COD_CATEG ,
TIP_COPERTURA ,
TIPTAS_TITOLO )
SELECT NEWID,
'28-feb-2111',
COD_SOCIETA,
COD_PIANO_CONTABILE,
COD_CONTO_CONTABILE,
COD_RUBRICATO_STATISTICO_1,
COD_NDG,
NUM_ESEGUITO,
CUR_VAL_IMPEGNI,
'ABC' as CUR_IMPORTO_BICO,
0 as FLG_MODIFICATO,
NULL as CUR_NON_ASSEGNATO,
FLG_QUOTATO,
COD_CATEG,
TIP_COPERTURA,
TIP_TASSO
FROM
(SELECT S.COD_SOC AS COD_SOCIETA,
S.TIP_PIANO_CNTB AS COD_PIANO_CONTABILE,
S.COD_CONTO_CNTB AS COD_CONTO_CONTABILE,
S.COD_RUBR_STAT AS COD_RUBRICATO_STATISTICO_1,
TRC.COD_RAGGR_IAS AS COD_RAGGRUPPAMENTO_IAS,
TRC.COD_NDG AS COD_NDG,
TRC.COD_ESEG AS NUM_ESEGUITO,
CAST((TRC.IMP_PLUS_MINUS_VAL/TRC.IMP_CAMB) AS FLOAT) AS CUR_VAL_IMPEGNI,
TRC.TIP_QUOTAZ AS FLG_QUOTATO,
TRC.COD_CAT_TIT AS COD_CATEG,
TIP_COP AS TIP_COPERTURA,
T.TIP_TASSO AS TIP_TASSO
FROM S_SLD_CNTB S
INNER JOIN
(SELECT DISTINCT COD_SOC,
TIP_PIANO_CNTB,
COD_CONTO_CNTB,
COD_RUBR_STAT ,
COD_INTER_TIT AS COD_INTER
FROM S_COLLEG_CONTO_CNTB_TIT
WHERE COD_SOC = 'ME'
) CCC
ON S.COD_SOC = CCC.COD_SOC
AND S.TIP_PIANO_CNTB = CCC.TIP_PIANO_CNTB
AND S.COD_CONTO_CNTB = CCC.COD_CONTO_CNTB
AND S.COD_RUBR_STAT = CCC.COD_RUBR_STAT
INNER JOIN S_TIT_RICCONS TRC
ON CCC.COD_INTER = TRC.COD_INTER_TIT
AND CCC.COD_SOC = TRC.COD_SOC
AND TRC.COD_RAGGR_IAS = RTRIM('VALUE1 ')
AND TRC.COD_RAGGR_IAS NOT IN ('VALUE2')
AND TRC.DES_TIP_SLD_TIT_RICCONS IN ('VALUE3')
AND TRC.DES_MOV_TIT = RTRIM('VALUE4 ')
AND TRC.COD_CAT_TIT = RTRIM('VALUE4 ')
AND TRC.COD_INTER_TIT = RTRIM('VALUE5')
AND '28-feb-2011' = TRC.DAT_RIF
LEFT JOIN S_TIT T
ON T.COD_INTER_TIT = TRC.COD_INTER_TIT
AND T.COD_SOC = TRC.COD_SOC
AND '28-feb-2011' = T.DAT_RIF
INNER JOIN S_ANAG_SOGG AG
ON TRC.COD_NDG = AG.COD_NDG
AND AG.COD_SOC = TRC.COD_SOC
AND '28-feb-2011' = AG.DAT_RIF
WHERE S.DAT_RIF = '28-feb-2011'
AND (S.FLG_ANULL_BICO = 0
OR S.FLG_ANULL_BICO IS NULL)
AND S.COD_SOC = 'V6'
AND LENGTH(RTRIM(S.COD_CONTO_CNTB)) = 10
AND S.TIP_PIANO_CNTB = 'V7'
AND TRC.IMP_PLUS_MINUS_VAL < 0
AND SUBSTR(S.COD_CONTO_CNTB,1,7) IN (RTRIM('VALUE8 '))
);
Thanks a lot
Right, I have executed these steps:
- I have changed the query with the select count(*)
- Changed the insert into with the select count(*)
- Executed the insert into
These are the results:
SQL> select count(*) from TITOLI_ORI2;
COUNT(*)
1
BUT:
SQL> select * from TITOLI_ORI2;
A
0
The insert into that I've modified is this:
INSERT INTO bsc.TITOLI_ORI2
select count(*)
FROM
(SELECT bsc.NEWID,
TO_DATE('28-feb-2111','DD-MON-YYYY') as data,
COD_SOCIETA,
COD_PIANO_CONTABILE,
COD_CONTO_CONTABILE,
COD_RUBRICATO_STATISTICO_1,
COD_NDG,
NUM_ESEGUITO,
CUR_VAL_IMPEGNI,
'ABC' AS CUR_IMPORTO_BICO,
0 AS FLG_MODIFICATO,
NULL CUR_NON_ASSEGNATO,
FLG_QUOTATO,
COD_CATEG,
TIP_COPERTURA,
TIP_TASSO
FROM
(SELECT S.COD_SOC AS COD_SOCIETA,
S.TIP_PIANO_CNTB AS COD_PIANO_CONTABILE,
S.COD_CONTO_CNTB AS COD_CONTO_CONTABILE,
S.COD_RUBR_STAT AS COD_RUBRICATO_STATISTICO_1,
TRC.COD_RAGGR_IAS AS COD_RAGGRUPPAMENTO_IAS,
TRC.COD_NDG AS COD_NDG,
TRC.COD_ESEG AS NUM_ESEGUITO,
CAST((TRC.IMP_PLUS_MINUS_VAL/TRC.IMP_CAMB) AS FLOAT) AS CUR_VAL_IMPEGNI,
TRC.TIP_QUOTAZ AS FLG_QUOTATO,
TRC.COD_CAT_TIT AS COD_CATEG,
TIP_COP AS TIP_COPERTURA,
T.TIP_TASSO AS TIP_TASSO
FROM bsc.S_SLD_CNTB S
INNER JOIN
(SELECT DISTINCT COD_SOC,
TIP_PIANO_CNTB,
COD_CONTO_CNTB,
COD_RUBR_STAT ,
COD_INTER_TIT AS COD_INTER
FROM bsc.S_COLLEG_CONTO_CNTB_TIT
WHERE COD_SOC = 'ME'
) CCC
ON S.COD_SOC = CCC.COD_SOC
AND S.TIP_PIANO_CNTB = CCC.TIP_PIANO_CNTB
AND S.COD_CONTO_CNTB = CCC.COD_CONTO_CNTB
AND S.COD_RUBR_STAT = CCC.COD_RUBR_STAT
INNER JOIN bsc.S_TIT_RICCONS TRC
ON CCC.COD_INTER = TRC.COD_INTER_TIT
AND CCC.COD_SOC = TRC.COD_SOC
AND TRC.COD_RAGGR_IAS = RTRIM('HFT ')
AND TRC.COD_RAGGR_IAS NOT IN ('GPO')
AND TRC.DES_TIP_SLD_TIT_RICCONS IN ('DISPONIBILI')
AND TRC.DES_MOV_TIT = RTRIM('CONSEGNARE ')
AND TRC.COD_CAT_TIT = RTRIM('OBBLIGAZIONE ')
AND TRC.COD_INTER_TIT = RTRIM('334058')
AND '28-feb-2011' = TRC.DAT_RIF
LEFT JOIN bsc.S_TIT T
ON T.COD_INTER_TIT = TRC.COD_INTER_TIT
AND T.COD_SOC = TRC.COD_SOC
AND '28-feb-2011' = T.DAT_RIF
INNER JOIN bsc.S_ANAG_SOGG AG
ON TRC.COD_NDG = AG.COD_NDG
AND AG.COD_SOC = TRC.COD_SOC
AND '28-feb-2011' = AG.DAT_RIF
WHERE S.DAT_RIF = '28-feb-2011'
AND (S.FLG_ANULL_BICO = 0
OR S.FLG_ANULL_BICO IS NULL)
AND S.COD_SOC = 'ME'
AND LENGTH(RTRIM(S.COD_CONTO_CNTB)) = 10
AND S.TIP_PIANO_CNTB = 'IS'
AND TRC.IMP_PLUS_MINUS_VAL < 0
AND SUBSTR(S.COD_CONTO_CNTB,1,7) IN (RTRIM('P044C11 '))
The strange result appears again!
I created the table as create table TITOLI_ORI2 (a number); to hold the numeric result of the query.
-
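The COUNT(*)-equals-1 result above is actually expected. A minimal sketch using an in-memory sqlite3 database (a stand-in, not the original Oracle schema) shows why: COUNT(*) over zero matching rows still produces exactly one row, holding the value 0, so the INSERT always adds one row even when the inner query matches nothing.

```python
# Sketch: why "insert into ... select count(*)" always inserts exactly one
# row, even when the inner query matches nothing.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE titoli_ori2 (a NUMBER)")
con.execute("CREATE TABLE src (x INTEGER)")  # deliberately left empty

# COUNT(*) over zero rows yields a single row containing 0,
# so exactly one row lands in the target table.
con.execute("INSERT INTO titoli_ori2 SELECT COUNT(*) FROM src WHERE x > 100")

rows = con.execute("SELECT COUNT(*) FROM titoli_ori2").fetchone()[0]
value = con.execute("SELECT a FROM titoli_ori2").fetchone()[0]
print(rows, value)  # 1 0 -- one row exists, and its stored value is 0
```

So `select count(*) from TITOLI_ORI2` returning 1 while `select * from TITOLI_ORI2` shows 0 means the inner query matched no rows at all; the filters (the date literals, the RTRIM comparisons, and so on) are worth checking one at a time.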
Filter expression producing different results after upgrade to 11.1.1.7
Hello,
We recently did an upgrade and noticed that on a number of reports where we're using the FILTER expression the numbers are very inflated. Where we are not using the FILTER expression the numbers are as expected. In the example below we ran the 'Bookings' report in 10g and got one number, then ran the same report in 11g (11.1.1.7.0) after the upgrade and got a different result. The data source is the same database for each environment. Also, when running the physical SQL generated by the 10g and 11g versions of the report, we get the inflated numbers from the 11g SQL. Any ideas on what might be happening or causing the issue?
10g report: 2016-Q3......Bookings..........72,017
11g report: 2016-Q3......Bookings..........239,659
This is the simple FILTER expression that is being used in the column formula on the report itself for this particular scenario which produces different results in 10g and 11g.
FILTER("Fact - Opportunities"."Won Opportunity Amount" USING ("Opportunity Attributes"."Business Type" = 'New Business'))
-------------- Physical SQL created by 10g report -------- results as expected --------------------------------------------
WITH
SAWITH0 AS (select sum(case when T33142.OPPORTUNITY_STATUS = 'Won-closed' then T33231.USD_LINE_AMOUNT else 0 end ) as c1,
T28761.QUARTER_YEAR_NAME as c2,
T28761.QUARTER_RANK as c3
from
XXFI.XXFI_GL_FISCAL_MONTHS_V T28761 /* Dim_Periods */ ,
XXFI.XXFI_OSM_OPPTY_HEADER_ACCUM T33142 /* Fact_Opportunity_Headers(CloseDate) */ ,
XXFI.XXFI_OSM_OPPTY_LINE_ACCUM T33231 /* Fact_Opportunity_Lines(CloseDate) */
where ( T28761.PERIOD_NAME = T33142.CLOSE_PERIOD_NAME and T28761.QUARTER_YEAR_NAME = '2012-Q3' and T33142.LEAD_ID = T33231.LEAD_ID and T33231.LINES_BUSINESS_TYPE = 'New Business' and T33142.OPPORTUNITY_STATUS <> 'Duplicate' )
group by T28761.QUARTER_YEAR_NAME, T28761.QUARTER_RANK)
select distinct SAWITH0.c2 as c1,
'Bookings10g' as c2,
SAWITH0.c1 as c3,
SAWITH0.c3 as c5,
SAWITH0.c1 as c7
from
SAWITH0
order by c1, c5
-------------- Physical SQL created by the same report as above but in 11g (11.1.1.7.0) -------- results much higher --------------------------------------------
WITH
SAWITH0 AS (select sum(case when T33142.OPPORTUNITY_STATUS = 'Won-closed' then T33142.TOTAL_OPPORTUNITY_AMOUNT_USD else 0 end ) as c1,
T28761.QUARTER_YEAR_NAME as c2,
T28761.QUARTER_RANK as c3
from
XXFI.XXFI_GL_FISCAL_MONTHS_V T28761 /* Dim_Periods */ ,
XXFI.XXFI_OSM_OPPTY_HEADER_ACCUM T33142 /* Fact_Opportunity_Headers(CloseDate) */ ,
XXFI.XXFI_OSM_OPPTY_LINE_ACCUM T33231 /* Fact_Opportunity_Lines(CloseDate) */
where ( T28761.PERIOD_NAME = T33142.CLOSE_PERIOD_NAME and T28761.QUARTER_YEAR_NAME = '2012-Q3' and T33142.LEAD_ID = T33231.LEAD_ID and T33231.LINES_BUSINESS_TYPE = 'New Business' and T33142.OPPORTUNITY_STATUS <> 'Duplicate' )
group by T28761.QUARTER_YEAR_NAME, T28761.QUARTER_RANK),
SAWITH1 AS (select distinct 0 as c1,
D1.c2 as c2,
'Bookings2' as c3,
D1.c3 as c4,
D1.c1 as c5
from
SAWITH0 D1),
SAWITH2 AS (select D1.c1 as c1,
D1.c2 as c2,
D1.c3 as c3,
D1.c4 as c4,
D1.c5 as c5,
sum(D1.c5) as c6
from
SAWITH1 D1
group by D1.c1, D1.c2, D1.c3, D1.c4, D1.c5)
select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4, D1.c5 as c5, D1.c6 as c6 from ( select D1.c1 as c1,
D1.c2 as c2,
D1.c3 as c3,
D1.c4 as c4,
D1.c5 as c5,
sum(D1.c6) over () as c6
from
SAWITH2 D1
order by c1, c4, c3 ) D1 where rownum <= 2000001
Thank you,
Mike
Edited by: Mike Jelen on Jun 7, 2013 2:05 PM
Thank you for the info. They are definitely different values, since one is on the header and the other is on the lines. As the "Won Opportunity" logical column is mapped to multiple LTSs, it appears OBI 11g uses a different algorithm than 10g to determine the most efficient table to use in query generation. I'll need to spend some time researching the impact of adding a 'sort' to the LTS. I'm hoping there's a way to get OBI to use logic similar to 10g when it prioritizes tables.
Thx again,
Mike -
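The inflation described above is classic join fan-out: the 11g SQL sums the header-level `TOTAL_OPPORTUNITY_AMOUNT_USD` while still joining to the lines table, so each header amount is counted once per matching line. A small sqlite3 sketch with hypothetical tables (not the XXFI schema) makes the mechanism concrete:

```python
# Fan-out demo: summing a header-level amount across a header-to-lines join
# inflates the total by the number of matching lines.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE opp_header (lead_id INTEGER, header_amount INTEGER);
CREATE TABLE opp_line   (lead_id INTEGER, line_amount   INTEGER);
INSERT INTO opp_header VALUES (1, 100);
INSERT INTO opp_line   VALUES (1, 40), (1, 35), (1, 25);
""")

# Summing the line-level measure gives the true total.
line_total = con.execute("""
    SELECT SUM(l.line_amount)
    FROM opp_header h JOIN opp_line l ON h.lead_id = l.lead_id
""").fetchone()[0]

# Summing the header-level measure over the same join counts the header
# amount once per matching line: 100 * 3 lines = 300, the inflated number.
header_total = con.execute("""
    SELECT SUM(h.header_amount)
    FROM opp_header h JOIN opp_line l ON h.lead_id = l.lead_id
""").fetchone()[0]

print(line_total, header_total)  # 100 300
```

This mirrors the 10g SQL (which summed `T33231.USD_LINE_AMOUNT`, the line grain) versus the 11g SQL (which summed `T33142.TOTAL_OPPORTUNITY_AMOUNT_USD`, the header grain, across the same header-to-lines join).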
SQL Query produces different results when inserting into a table
I have an SQL query which produces different results when run as a simple query compared to when it is run as an INSERT INTO table SELECT ...
The query is:
SELECT mhldr.account_number
, NVL(MAX(DECODE(ap.party_sysid, mhldr.party_sysid,ap.empcat_code,NULL)),'UNKNWN') main_borrower_status
, COUNT(1) num_apps
FROM app_parties ap
, (SELECT accsta.account_number
, actply.party_sysid
, RANK() OVER (PARTITION BY actply.table_sysid, actply.loanac_latype_code ORDER BY start_date, SYSID) ranking
FROM activity_players actply
, account_status accsta
WHERE 1 = 1
AND actply.table_id (+) = 'ACCGRP'
AND actply.acttyp_code (+) = 'MHLDRM'
AND NVL(actply.loanac_latype_code (+),TO_NUMBER(SUBSTR(accsta.account_number,9,2))) = TO_NUMBER(SUBSTR(accsta.account_number,9,2))
AND actply.table_sysid (+) = TO_NUMBER(SUBSTR(accsta.account_number,1,8))
) mhldr
WHERE 1 = 1
AND ap.lenapp_account_number (+) = TO_NUMBER(SUBSTR(mhldr.account_number,1,8))
GROUP BY mhldr.account_number;
The INSERT INTO code:
TRUNCATE TABLE applicant_summary;
INSERT /*+ APPEND */
INTO applicant_summary
( account_number
, main_borrower_status
, num_apps
)
SELECT mhldr.account_number
, NVL(MAX(DECODE(ap.party_sysid, mhldr.party_sysid,ap.empcat_code,NULL)),'UNKNWN') main_borrower_status
, COUNT(1) num_apps
FROM app_parties ap
, (SELECT accsta.account_number
, actply.party_sysid
, RANK() OVER (PARTITION BY actply.table_sysid, actply.loanac_latype_code ORDER BY start_date, SYSID) ranking
FROM activity_players actply
, account_status accsta
WHERE 1 = 1
AND actply.table_id (+) = 'ACCGRP'
AND actply.acttyp_code (+) = 'MHLDRM'
AND NVL(actply.loanac_latype_code (+),TO_NUMBER(SUBSTR(accsta.account_number,9,2))) = TO_NUMBER(SUBSTR(accsta.account_number,9,2))
AND actply.table_sysid (+) = TO_NUMBER(SUBSTR(accsta.account_number,1,8))
) mhldr
WHERE 1 = 1
AND ap.lenapp_account_number (+) = TO_NUMBER(SUBSTR(mhldr.account_number,1,8))
GROUP BY mhldr.account_number;
When run as a query, this code consistently returns 2 for the num_apps field (for a certain group of accounts), but when run as an INSERT INTO command, the num_apps field is logged as 1. I have secured the tables used within the query to ensure that nothing is changing the data in the underlying tables.
If I run the query as a cursor FOR loop with an insert into the applicant_summary table inside the loop, I get the same results in the table as when I run it as a stand-alone query.
I would appreciate any suggestions for what could be causing this odd behaviour.
Cheers,
Steve
Oracle database details:
Oracle Database 10g Release 10.2.0.2.0 - Production
PL/SQL Release 10.2.0.2.0 - Production
CORE 10.2.0.2.0 Production
TNS for 32-bit Windows: Version 10.2.0.2.0 - Production
NLSRTL Version 10.2.0.2.0 - Production
Edited by: stevensutcliffe on Oct 10, 2008 5:26 AM
Edited by: stevensutcliffe on Oct 10, 2008 5:27 AM
stevensutcliffe wrote:
Yes, using COUNT(*) gives the same result as COUNT(1).
I have found another example of this kind of behaviour:
Running the following INSERT statements produce different values for the total_amount_invested and num_records fields. It appears that adding the additional aggregation (MAX(amount_invested)) is causing problems with the other aggregated values.
Again, I have ensured that the source data and destination tables are not being accessed / changed by any other processes or users. Is this potentially a bug in Oracle?
Just as a side note, these are not INSERT statements but CTAS statements.
The only non-bug explanation for this behaviour would be a potential query rewrite happening only under particular circumstances (but not always) in the lower integrity modes "trusted" or "stale_tolerated". So if you're not aware of any corresponding materialized views, your QUERY_REWRITE_INTEGRITY parameter is set to the default of "enforced" and your explain plan doesn't show any "MAT_VIEW REWRITE ACCESS" lines, I would consider this as a bug.
Since you're running on 10.2.0.2 it's not unlikely that you hit one of the various "wrong result" bugs that exist(ed) in Oracle. I'm aware of a particular one I've hit in 10.2.0.2 when performing a parallel NESTED LOOP ANTI operation which returned wrong results, but only in parallel execution. Serial execution was showing the correct results.
If you're performing parallel ddl/dml/query operations, try to do the same in serial execution to check if it is related to the parallel feature.
You could also test if omitting the "APPEND" hint changes anything but still these are just workarounds for a buggy behaviour.
I suggest to consider installing the latest patch set 10.2.0.4 but this requires thorough testing because there were (more or less) subtle changes/bugs introduced with [10.2.0.3|http://oracle-randolf.blogspot.com/2008/02/nasty-bug-introduced-with-patch-set.html] and [10.2.0.4|http://oracle-randolf.blogspot.com/2008/04/overview-of-new-and-changed-features-in.html].
You could also open a SR with Oracle and clarify if there is already a one-off patch available for your 10.2.0.2 platform release. If not it's quite unlikely that you are going to get a backport for 10.2.0.2.
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
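On the COUNT(1)-versus-COUNT(*) point raised in this thread, the two are interchangeable for counting rows; only COUNT(column) differs, because it skips NULLs. An illustrative sqlite3 check (not Oracle, but the semantics match):

```python
# COUNT(1) and COUNT(*) count rows the same way; COUNT(column) skips NULLs.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (v INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (None,), (3,)])

c_star, c_one, c_col = con.execute(
    "SELECT COUNT(*), COUNT(1), COUNT(v) FROM t"
).fetchone()
print(c_star, c_one, c_col)  # 3 3 2 -- COUNT(v) ignores the NULL row
```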
Spatial Queries Not Always Producing Accurate Results
Hi,
Spatial queries are not always producing accurate results. Here are the issues. We would appreciate any clarification you could provide to resolve these issues.
1. When querying for points inside a polygon that is not an MBR (minimum bounded rectangle), some of the coordinates returned are not inside the polygon. It is as though the primary filter is working, but not the secondary filter when using sdo_relate. How can we validate that the spatial query using sdo_relate is using the secondary filter?
2. SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT returns true when validating geometries even though we find results that are invalid.
3. Illegal geodetic coordinates can be inserted into a table: latitude > 90.0, latitude < -90.0, longitude > 180.0 or longitude < -180.0.
4. Querying for coordinates outside the MBR for the world where illegal coordinates existed did NOT return any rows, yet there were coordinates of long, lat: 181,91.
The following are examples and information relating to the above-referenced points.
select * from USER_SDO_GEOM_METADATA
TABLE_NAME COLUMN_NAME DIMINFO(SDO_DIMNAME, SDO_LB, SDO_UB, SDO_TOLERANCE) SRID
LASTKNOWNPOSITIONS THE_GEOM SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', -180, 180, .05), SDO_DIM_ELEMENT('Y', -90, 90, .05)) 8307
POSITIONS THE_GEOM SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', -180, 180, .05), SDO_DIM_ELEMENT('Y', -90, 90, .05)) 8307
Example 1: Query for coordinates inside NON-rectangular polygon includes points outside of polygon.
SELECT l.vesselid, l.latitude, l.longitude, TO_CHAR(l.observationtime,
'YYYY-MM-DD HH24:MI:SS') as obstime FROM lastknownpositions l where
SDO_RELATE(l.the_geom,SDO_GEOMETRY(2003, 8307, NULL,
SDO_ELEM_INFO_ARRAY(1, 1003, 1),
SDO_ORDINATE_ARRAY(-98.20268,18.05079,-57.30101,18.00705,-57.08229,
54.66061,-98.59638,32.87842,-98.20268,18.05079)),'mask=inside')='TRUE'
This query returns the following coordinates that are outside of the polygon:
vesselid : 1152 obstime : 2005-08-24 06:00:00 long : -82.1 lat : 45.3
vesselid : 3140 obstime : 2005-08-28 12:00:00 long : -80.6 lat : 44.6
vesselid : 1253 obstime : 2005-08-22 09:00:00 long : -80.0 lat : 45.3
Example 2a: Using SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT
Select areaid, the_geom,
SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(the_geom, 0.005) from area where
areaid=24
ResultSet:
AREAID THE_GEOM(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO,
SDO_ORDINATES) SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(THE_GEOM,0.005)
24 SDO_GEOMETRY(2003, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 1003, 1), SDO_ORDINATE_ARRAY(-98.20268, 18.05079, -57.30101, 18.00705, -57.08229, 54.66061, -98.59638, 32.87842, -98.20268, 18.05079)) TRUE
Example 2b: Using SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT
Select positionid, vesselid, the_geom,
SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(the_geom, 0.005) from positions where vesselid=1152
ResultSet:
POSITIONID VESSELID THE_GEOM(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z),
SDO_ELEM_INFO, SDO_ORDINATES) DO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(THE_GEOM,0.005)
743811 1152 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-82.1, 45.3, NULL), NULL, NULL) TRUE
743812 1152 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-82.1, 45.3, NULL), NULL, NULL) TRUE
743813 1152 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-80.2, 42.5, NULL), NULL, NULL) TRUE
743814 1152 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-80.2, 42.5, NULL), NULL, NULL) TRUE
Example 3: Invalid Coordinate values found in POSITIONS table.
SELECT p.positionid, p.latitude, p.longitude, p.the_geom FROM positions p
WHERE p.latitude < -180.0
2 lines from ResultSet:
POSITIONID LATITUDE LONGITUDE THE_GEOM(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO, SDO_ORDINATES)
714915 -210.85408 -79.74449 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-79.74449, -210.85408, NULL), NULL, NULL)
714938 -211.13632 -79.951256 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-79.951256, -211.13632, NULL), NULL, NULL)
SELECT p.positionid, p.latitude, p.longitude, p.the_geom FROM positions p
WHERE p.longitude > 180.0
3 lines from ResultSet:
POSITIONID LATITUDE LONGITUDE THE_GEOM(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO, SDO_ORDINATES)
588434 91 181 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(181, 91, NULL), NULL, NULL)
589493 91 181 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(181, 91, NULL), NULL, NULL)
589494 91 181 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(181, 91, NULL), NULL, NULL)
Example 4: Failure to locate illegal coordinates by querying for disjoint coordinates outside of MBR for the world:
SELECT p.vesselid, p.latitude, p.longitude, p.the_geom,
TO_CHAR(p.observationtime, 'YYYY-MM-DD HH24:MI:SS') as obstime,
SDO_GEOM.RELATE(p.the_geom, 'determine',
SDO_GEOMETRY(2003, 8307, NULL,SDO_ELEM_INFO_ARRAY(1, 1003, 1),
SDO_ORDINATE_ARRAY(-180.0,-90.0,180.0,-90.0,180.0,90.0,
-180.0,90.0,-180.0,-90.0)), .005) relationship FROM positions p where
SDO_GEOM.RELATE(p.the_geom, 'disjoint', SDO_GEOMETRY(2003, 8307,
NULL,SDO_ELEM_INFO_ARRAY(1, 1003, 1),
SDO_ORDINATE_ARRAY(-180.0,-90.0,180.0,-90.0,180.0,90.0,-80.0,90.0,
-180.0,-90.0)),.005)='TRUE'
no rows selected
Carol Saah
Hi Carol,
1) I think the results are correct. Note that in a geodetic coordinate system, adjacent points in a linestring or polygon are connected via geodesics. You are probably applying planar thinking to an ellipsoidal problem! I don't have time to do the full analysis right now, but a first guess is that this is what is happening.
2) The query window seems to be valid. I don't think this is a problem.
3) Oracle will let you insert most anything into a table. In the index, it probably wraps. If you validate, I think the validation routines will tell you it is illegal if you use the signature with diminfo, where the coordinate system bounds are included in the validation.
4) Your query window is not valid. Your data is not valid. As the previous reply stated, you need to have valid data. If you think in terms of a geodetic coordinate system, you will realize that -180.0,-90.0 and 180.0,-90.0 are really the same point. Also, Oracle has a rule that polygon geometries cannot be greater than half the surface of the Earth.
Hope this helps.
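Since the database accepts out-of-range coordinates (point 3 above), a simple client-side guard before insertion catches rows like the lat = -210 / lon = 181 examples in this thread. This is a minimal sketch, an assumption about your loading pipeline rather than an Oracle Spatial feature:

```python
# Hypothetical pre-insert guard: reject geodetic coordinates outside the
# legal ranges, since the table itself will happily store lat=91 / lon=181.
def is_valid_lonlat(lon: float, lat: float) -> bool:
    """True when (lon, lat) lies in [-180, 180] x [-90, 90]."""
    return -180.0 <= lon <= 180.0 and -90.0 <= lat <= 90.0

print(is_valid_lonlat(-79.74449, -210.85408))  # False -- bad row from POSITIONS
print(is_valid_lonlat(181, 91))                # False -- out-of-range pair
print(is_valid_lonlat(-82.1, 45.3))            # True  -- legitimate position
```

Pairing a guard like this with the diminfo signature of VALIDATE_GEOMETRY_WITH_CONTEXT on the database side keeps illegal points out of both the table and the spatial index.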