Inconsistent results in CCM
Hi,
When I schedule a job for a control in CCM on a monthly time frame, it gives adequate results, whereas when I schedule the same control with a weekly or daily frequency, it reports deficiencies.
Our understanding is that there are millions of records in the monthly analysis and the job is not fetching all of the results. Has anyone come across this kind of issue? Looking forward to your valuable suggestions.
Regards,
Sumanth
Hello Marcpa,
This is a bug. It is fixed in the next release of SQL Developer.
Thanks,
Jim
Similar Messages
-
Query resulting inconsistent results
Hi,
I'm running Oracle 10.2.0.4 and have a partitioned table.
When I run the query
select bse_rem_bse, ben_aly_num from dw_cn2.prs_old
where bse_rem_bse = 3
I'm getting records with null data (i.e. bse_rem_bse is 'null') and the correct data (i.e. bse_rem_bse is 3).
bse_rem_bse is a number type data field.
Can anyone help me understand the inconsistent results
and how to find what's broken in the table?
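For anyone hitting the same symptom, a few illustrative checks (my suggestion, not from the original thread; the table and column names are taken from the post): rows whose filtered column displays as NULL under an index-driven plan often point at a corrupt index, so compare a forced full scan with the normal plan and validate the segment structure.

```sql
-- Illustrative only: if these two counts disagree, the index and the table
-- have diverged; VALIDATE STRUCTURE CASCADE cross-checks each index entry
-- (and each partition) against the table rows.
SELECT /*+ FULL(t) */ COUNT(*) FROM dw_cn2.prs_old t WHERE bse_rem_bse = 3;
SELECT COUNT(*) FROM dw_cn2.prs_old WHERE bse_rem_bse = 3;

ANALYZE TABLE dw_cn2.prs_old VALIDATE STRUCTURE CASCADE;
```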
Thanks
Tarun
Post your table structure along with data types and some sample data here. Also, post your DB version by executing the following query:
select * from v$version;
Regards,
Satyaki De. -
Inconsistent results with MDX formula
Hi. I'm converting a BSO cube to ASO, and it has dynamically calculated formulas that I'm converting to MDX. I have a formula that is supposed to accumulate an account (Order Intake) through the months and years until it gets to the current month of the current year (set by substitution variables) and then just carries that balance forward until the end.
This is the formula I wrote in MDX.
IIF( Count( Intersect( {MemberRange([Years].[FY95], [&Auto_CurYr].Lag(1))}, {Years.CurrentMember} ) ) = 1,
  IIF( CurrentMember ([Period]) = [Jan],
    [Order Intake] + ([Contract Value],[Adj],[Years].CurrentMember.PrevMember),
    [Order Intake] + ([Contract Value],[Period].CurrentMember.PrevMember) ),
  IIF( CurrentMember ([Years]) = [&Auto_CurYr],
    IIF( CurrentMember ([Period]) = [Jan],
      [Order Intake] + ([Contract Value],[Adj],[Years].CurrentMember.PrevMember),
      IIF( Count( Intersect( {MemberRange([Period].[Feb], [&Auto_CurMoNext_01].Lag(1))}, {Period.CurrentMember} ) ) = 1,
        [Order Intake] + ([Contract Value],[Period].CurrentMember.PrevMember),
        ([Contract Value],[Period].CurrentMember.PrevMember) ) ),
    ([Contract Value],[Adj],[Years].CurrentMember.PrevMember) /*This is the statement that evaluates for months and years after the current month and year*/ ) )
The inconsistent results are as follows:
I have a spreadsheet that has the years and months across the top in columns. The substitution variables are set to FY09 for the year and Oct for the month. The formula works fine until it gets to Jan of FY10, at which point it produces a number out of thin air, and carries that incorrect number through to the end.
When I put the years and months into my rows, however, and then drill down on the months, I get different results. Not only different, but different results at different times, too. When I first drilled, all results were correct. Now when I drill, it produces a random number in October of FY09 (not entirely random, but actually double what it's supposed to be), then #missing in Nov of FY09, then the correct number thereafter. Same exact data intersection on both spreadsheets, different results. I've retrieved over and over again, and the only time it might change is if I re-drill. I've used both Essbase Add-in and Smart View with consistently inconsistent results.
Has anyone ever encountered this sort of behavior with an MDX formula?
Well, I finally got a formula that works. I did end up using a combination of CASE and IIF, but I never did figure out how to correctly sum up ranges of data while accounting for changing substitution variables, so I had to do a lot of hard-coding by month. For instance, I couldn't ask it to sum([Order Intake],[Jan],[&Auto_CurYr]:([Order Intake],[&Auto_CurMo],[&Auto_CurYr]). Although it validated fine, when I tried to retrieve it said the members were not of the same generation, presumably because my substitution variable could potentially be a non-level-0 month (it worked if I hard-coded the end month). Also, I really don't like the MDX versions of @LSIBLINGS and @RSIBLINGS.
But this works.
CASE
When Count( Intersect( {MemberRange([Years].[FY95], [&Auto_CurYr].Lag(1))}, {Years.CurrentMember} ) ) = 1
THEN IIF(CurrentMember ([Period]) = [Jan],
[Order Intake] + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Feb],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Feb]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Mar],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Mar]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Apr],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Apr]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [May],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[May]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Jun],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Jun]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Jul],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Jul]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Aug],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Aug]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Sep],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Sep]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Oct],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Oct]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Nov],
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Nov]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Dec]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)) )))))))))))
When CurrentMember ([Years]) IS [&Auto_CurYr]
THEN IIF(CurrentMember ([Period]) = [Jan],
[Order Intake] + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Feb] AND CONTAINS([Feb], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Feb]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Mar] AND CONTAINS([Mar], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Mar]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Apr] AND CONTAINS([Apr], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Apr]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [May] AND CONTAINS([May], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[May]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Jun] AND CONTAINS([Jun], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Jun]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Jul] AND CONTAINS([Jul], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Jul]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Aug] AND CONTAINS([Aug], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Aug]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Sep] AND CONTAINS([Sep], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Sep]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Oct] AND CONTAINS([Oct], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Oct]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
IIF(CurrentMember ([Period]) = [Nov] AND CONTAINS([Nov], {MEMBERRANGE([&Auto_CurMoNext_01].FirstSibling, [&Auto_CurMoNext_01].Lag(1))}),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[Nov]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)),
Sum(CrossJoin({[Order Intake]}, {[Jan]:[&Auto_CurMo]})) + sum(([Order Intake],[YearTotal],[FY95]):([Order Intake],[YearTotal],[Years].CurrentMember.PrevMember)) )))))))))))
WHEN CONTAINS([Years].CurrentMember, {MemberRange([&Auto_CurYr].Lead(1), [Years].[FY15])})
THEN ([Contract Value],[Adj],[Years].&Auto_CurYr)
END
Thanks for looking at it, Gary, I appreciate it.
Sabrina
-
Hi Oracle Forums,
I am installing packages required for Oracle Database 11gR2, and am having problems with rpm responses.
When I query rpm about a package, it tells me that the package is not installed. When I go to install the package, rpm informs me that the package is already installed. An example is shown below:
[root@OELVM02 Server]# uname -a
Linux OELVM02.localdomain 2.6.32-300.10.1.el5uek #1 SMP Wed Feb 22 17:37:40 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
[root@OELVM02 Server]#
[root@OELVM02 Server]# pwd
/media/OL5.8 x86_64 dvd 20120229/Server
[root@OELVM02 Server]#
[root@OELVM02 Server]# ls -alrt binutils-2.17.50.0.6-20.el5.x86_64.rpm
-rw-r--r-- 1 root root 3069914 Dec 28 2011 binutils-2.17.50.0.6-20.el5.x86_64.rpm
[root@OELVM02 Server]#
[root@OELVM02 Server]# rpm -q ./binutils-2.17.50.0.6-20.el5.x86_64.rpm
package ./binutils-2.17.50.0.6-20.el5.x86_64.rpm is not installed
[root@OELVM02 Server]#
[root@OELVM02 Server]# rpm -ivh ./binutils-2.17.50.0.6-20.el5.x86_64.rpm
warning: ./binutils-2.17.50.0.6-20.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
package binutils-2.17.50.0.6-20.el5.x86_64 is already installed
[root@OELVM02 Server]#
[root@OELVM02 Server]# rpm -q ./binutils-2.17.50.0.6-20.el5.x86_64.rpm
package ./binutils-2.17.50.0.6-20.el5.x86_64.rpm is not installed
[root@OELVM02 Server]#
I do not understand the inconsistent results that rpm is giving me.
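A likely explanation (mine, not from the thread): `rpm -q NAME` looks up an installed package by name in the RPM database, while `rpm -qp FILE` reads the header of a package file on disk. Passing a filename to plain `rpm -q` therefore always reports "is not installed", even when the package is installed. A small Python sketch of how the name to pass to `rpm -q` relates to the file name:

```python
# Derive the name to query with "rpm -q" from an .rpm file name.
# (rpm -qp ./file.rpm queries the file itself and needs no translation.)
def query_name(rpm_filename: str) -> str:
    base = rpm_filename.rsplit("/", 1)[-1]   # drop any leading path
    base = base.removesuffix(".rpm")          # name-version-release.arch
    nvr, _, _arch = base.rpartition(".")      # split off the architecture
    return nvr

print(query_name("./binutils-2.17.50.0.6-20.el5.x86_64.rpm"))
# -> binutils-2.17.50.0.6-20.el5
```

So `rpm -q binutils` or `rpm -q binutils-2.17.50.0.6-20.el5` should report the installed package, while `rpm -qp ./binutils-2.17.50.0.6-20.el5.x86_64.rpm` queries the file on disk.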
Any help would be greatly appreciated
Thanks
Gavin
Hi Avi,
Thanks for your quick response!!
The Oracle documentation requires that both the 32-bit and 64-bit RPMs be installed for some packages. In the scenario below, is rpm telling me that both the 32-bit and 64-bit packages are installed?
[root@OELVM02 Server]# uname -a
Linux OELVM02.localdomain 2.6.32-300.10.1.el5uek #1 SMP Wed Feb 22 17:37:40 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
[root@OELVM02 Server]#
[root@OELVM02 Server]# pwd
/media/OL5.8 x86_64 dvd 20120229/Server
[root@OELVM02 Server]#
[root@OELVM02 Server]# ls -alrt *glibc-2*
-rw-r--r-- 1 root root 1544040 Nov 18 2010 compat-glibc-2.3.4-2.26.x86_64.rpm
-rw-r--r-- 1 root root 1069214 Nov 18 2010 compat-glibc-2.3.4-2.26.i386.rpm
-rw-r--r-- 1 root root 5607577 Feb 26 2012 glibc-2.5-81.i686.rpm
-rw-r--r-- 1 root root 4997627 Feb 26 2012 glibc-2.5-81.x86_64.rpm
[root@OELVM02 Server]#
[root@OELVM02 Server]# rpm -q glibc-2.5-81.i686
glibc-2.5-81
[root@OELVM02 Server]#
[root@OELVM02 Server]# rpm -q glibc-2.5-81.x86_64
glibc-2.5-81
[root@OELVM02 Server]#
[root@OELVM02 Server]#
Thanks heaps
Gavin -
Inconsistent Results Installing WebApps in Console. The Secret?
Hi:
I'm having a difficult time understanding what exactly happens on my
server when I use the mydomain->Deployments->Web Applications->Install
a New Web Application dialog.
Sometimes when I upload a .war file, it then displays in the Web
Application section of the left pane where certificate and
DefaultWebApp are shown. Other times? Nothing.
I have tried installing my web application in several ways, and the
results are not consistent. What settings should be made and where
(for example, in files like config.xml) when I install a web
application?
I have also tried using the Configure a new Web Application dialog,
and have similarly inconsistent results.
Sometimes my config.xml gets updated, sometimes not. Sometimes it
updates with an ineffective <Application> tag that does not include
the <WebAppComponent> tag, and sometimes it works.
Thanks for any insight. I'm really having a tough time of this.
Thanks,
Bill
Hi.
there should be no other file updated. You should open a case with support.
Regards,
Michael
bill b3nac wrote:
Yes, I'm using wls6.1 with sp2. jdk131 on windows 2000. my browser is
msie 6.0.26.
On installing a new web app through the console, is there any file
that gets updated that I should keep my eye on other than
./wlserver6.1SP2/config/mydomain/config.xml?
Thanks.
Bill
Michael Young <[email protected]> wrote in message news:<[email protected]>...
Hi.
Hmm. I'll presume you are running under wls 6.1. Make sure you are
running with the latest service pack - sp2. Also, what platform/jdk are
you using?
If you already are using sp2 and are still seeing this inconsistency I
recommend you open a case with support. However, be forwarned that they
may not be able to help much if this only occurs randomly or rarely.
Regards,
Michael
Michael Young
Developer Relations Engineer
BEA Support -
Inconsistent results with localtimestamp and current_timestamp
Running XE on Windows XP with the system timezone set to GMT (rebooted, restarted XE):
Oracle Database 10g Express Edition Release 10.2.0.1.0 - Product
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
I'm getting incorrect and inconsistent results with current_timestamp and localtimestamp:
With SQL, localtimestamp computes the wrong offset (appears to use 1987-2006 DST rules):
select
dbtimezone
, sessiontimezone
, current_timestamp
, current_timestamp + numtodsinterval(18,'day') as current_timestamp18
, localtimestamp
from dual;
+00:00
US/Eastern
17-MAR-10 10.27.17.376000000 AM US/EASTERN
04-APR-10 10.27.17.376000000 AM US/EASTERN
17-MAR-10 09.27.17.376000000 AM
However, in PL/SQL, both current_timestamp and localtimestamp return the wrong hour value, and adding 18 days to current_timestamp shows it is using the 1987-2006 DST rules (first Sunday of April). Note that this happens in straight PL/SQL and in embedded SQL (same results when selecting from tables other than DUAL):
begin
for r1 in (
select
dbtimezone
, sessiontimezone
, current_timestamp
, current_timestamp + numtodsinterval(18,'day') as current_timestamp18
, localtimestamp
from dual )
loop
dbms_output.put_line('SQL dbtimezone = ' || r1.dbtimezone);
dbms_output.put_line('SQL sessiontimezone = ' || r1.sessiontimezone);
dbms_output.put_line('SQL current_timestamp = ' || r1.current_timestamp);
dbms_output.put_line('SQL current_timestamp +18 = ' || r1.current_timestamp18);
dbms_output.put_line('SQL localtimestamp = ' || r1.localtimestamp);
end loop;
dbms_output.put_line('dbtimezone = ' || dbtimezone);
dbms_output.put_line('sessiontimezone = ' || sessiontimezone);
dbms_output.put_line('systimestamp = ' || systimestamp);
dbms_output.put_line('current_timestamp = ' || current_timestamp);
dbms_output.put_line('current_timestamp +18 = ' || (current_timestamp + numtodsinterval(18,'day')));
dbms_output.put_line('localtimestamp = ' || localtimestamp);
end;
SQL dbtimezone = +00:00
SQL sessiontimezone = US/Eastern
SQL current_timestamp = 17-MAR-10 09.29.32.784000 AM US/EASTERN
SQL current_timestamp +18 = 04-APR-10 10.29.32.784000000 AM US/EASTERN
SQL localtimestamp = 17-MAR-10 09.29.32.784000 AM
dbtimezone = +00:00
sessiontimezone = US/Eastern
systimestamp = 17-MAR-10 02.29.32.784000000 PM +00:00
current_timestamp = 17-MAR-10 09.29.32.784000000 AM US/EASTERN
current_timestamp +18 = 04-APR-10 10.29.32.784000000 AM US/EASTERN
localtimestamp = 17-MAR-10 09.29.32.784000000 AM
dbtimezone = +00:00
sessiontimezone = US/Eastern
systimestamp = 17-MAR-10 02.16.21.366000000 PM +00:00
current_timestamp = 17-MAR-10 09.16.21.366000000 AM US/EASTERN
current_timestamp +18 = 04-APR-10 10.16.21.366000000 AM US/EASTERN
localtimestamp = 17-MAR-10 09.16.21.366000000 AM
is this a known bug?
is there a patch or a work-around for XE?
are other database versions affected?
Can't patch XE; unfortunately, it comes with pre-2007 DST rules.
There is a Metalink note describing how to fix the DST changes, and while it's not really a "supported" method, neither is XE. If you can get updated timezone files from a later patch set for the same release (10gR2) on the right operating system, then after a shutdown/startup of the database the updated DST rules will be in place. The timezone files are in $ORACLE_HOME/oracore/zoneinfo.
Another unfortunate consequence: any values already stored in the database using timestamp with local time zone datatypes for the period affected by the DST changes won't be correct. For example, there is no 2010-03-14 02:01 (?), but with the older timezone rules in place that would be a valid timestamp. The data has to be saved before updating the timezone file, and re-translated to timestamp with local time zone datatypes after the update.
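As a cross-check of the expected values (a sketch of mine, not from the thread, assuming Python 3.9+ with zoneinfo and system tzdata): under the post-2007 US rules, 2010-03-17 falls inside daylight time, so 14:27 UTC is 10:27 EDT (UTC-4), while the pre-2007 rules (DST starting the first Sunday of April) would still give EST (UTC-5), matching the 09:27 the poster saw.

```python
# What current_timestamp should show for US/Eastern on 2010-03-17:
# daylight time (UTC-4) under the post-2007 rules.
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

utc_time = datetime(2010, 3, 17, 14, 27, tzinfo=timezone.utc)
eastern = utc_time.astimezone(ZoneInfo("US/Eastern"))
print(eastern.hour)                                 # 10, i.e. 10:27 AM EDT
print(eastern.utcoffset() == timedelta(hours=-4))   # True
```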
IMHO storing literal timezone info isn't an ideal practice; let the client settings do the time interpretation. Time is always changing; it's the interpretation of the time that gets changed. From time to time. :( -
Inconsistent results when exporting projects as new libraries
I am trying to export a project as a new library without the original ("master") images and am getting inconsistent results. Each time I've tried this, I am careful not to check "Copy originals into exported library". Sometimes it works exactly as expected -- i.e., a new library file is created and, when I examine the contents of the new library, there are no master image files. Other times when I examine the new Aperture library file, I find that it contains master image files. Indeed, sometimes, the new library contains more master image files than there are images in the project from which the library was created.
What am I doing wrong or missing? Is this normal behavior?
(Aperture 3.4.3)
After a bit of exploring, here is what I determined:
1. Most of my projects were created by importing images from a folder without moving the original images (Files> Import> Folder as Projects> Store Files: In their original location). Using this method, the images in the project are all referenced images (i.e., the originals are not moved or copied into the Aperture library). When I later export these projects as described above, the resulting library also does not contain any of the original image files or masters. This is my desired state, and for most of my 2012 projects exactly what happened.
2. If, however, any of the images in a given library were originally imported so that the master image resided in the Aperture library and that image was subsequently deleted, then the exported library will still contain the masters.
The solution I found was to open the exported library file and empty the trash (Aperture> Empty Aperture Trash).
(Of course, the longer term solution within my given workflow is to be careful not to import the masters at the beginning of the process.)
Hope this helps someone. -
Inconsistent results with ANSI LEFT JOIN on 9iR2
Is this a known issue? Is it solved in 10g?
With the following data setup, I get inconsistent results. It seems to be linked to the combination of using LEFT JOIN with the NULL comparison within the JOIN.
create table titles (title_id int, title varchar(50));
insert into titles values (1, 'Red Book');
insert into titles values (2, 'Yellow Book');
insert into titles values (3, 'Blue Book');
insert into titles values (4, 'Orange Book');
create table sales (stor_id int, title_id int, qty int, email varchar(60));
insert into sales values (1, 1, 1, '[email protected]');
insert into sales values (1, 2, 1, '[email protected]');
insert into sales values (3, 3, 4, null);
insert into sales values (3, 4, 5, '[email protected]');
SQL> SELECT titles.title_id, title, qty
2 FROM titles LEFT OUTER JOIN sales
3 ON titles.title_id = sales.title_id
4 AND stor_id = 3
5 AND sales.email is not null
6 ;
TITLE_ID TITLE QTY
4 Orange Book 5
3 Blue Book
1 Red Book
2 Yellow Book
SQL>
SQL> SELECT titles.title_id, title, qty
2 FROM titles LEFT OUTER JOIN sales
3 ON titles.title_id = sales.title_id
4 AND 3 = stor_id
5 AND sales.email is not null;
TITLE_ID TITLE QTY
2 Yellow Book 1
4 Orange Book 5
3 Blue Book
1 Red Book
It seems to matter what order I specify the operands stor_id = 3, or 3 = stor_id.
In the older (+) environment, I would understand this, but here? I'm pretty sure most other databases don't care about the order.
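To make the expected answer concrete, here is a small Python sketch (mine, not from the thread) of the left-outer-join semantics both spellings of the predicate must produce: every titles row survives, and qty is NULL unless a sales row satisfies all three ON conditions.

```python
# Expected ANSI LEFT OUTER JOIN result for the data in the post; the operand
# order of "stor_id = 3" vs "3 = stor_id" cannot change this.
titles = [(1, "Red Book"), (2, "Yellow Book"), (3, "Blue Book"), (4, "Orange Book")]
sales = [(1, 1, 1, "a@b"), (1, 2, 1, "a@b"), (3, 3, 4, None), (3, 4, 5, "a@b")]

def left_outer_join(titles, sales):
    result = []
    for title_id, title in titles:
        # a sales row joins only if ALL three ON conditions hold
        qtys = [qty for stor_id, s_title_id, qty, email in sales
                if s_title_id == title_id and stor_id == 3 and email is not None]
        result.append((title_id, title, qtys[0] if qtys else None))
    return result

for row in left_outer_join(titles, sales):
    print(row)
# Only Orange Book gets a qty (5); the 9i run that also reported
# "Yellow Book 1" is inconsistent with these semantics.
```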
thanks for your insight
Kevin
Don't have a 9i around right now to test ... but in 10 ...
SQL> create table titles (title_id int, title varchar(50));
Table created.
SQL> insert into titles values (1, 'Red Book');
1 row created.
SQL> insert into titles values (2, 'Yellow Book');
1 row created.
SQL> insert into titles values (3, 'Blue Book');
1 row created.
SQL> insert into titles values (4, 'Orange Book');
1 row created.
SQL> create table sales (stor_id int, title_id int, qty int, email varchar(60));
Table created.
SQL> insert into sales values (1, 1, 1, '[email protected]');
1 row created.
SQL> insert into sales values (1, 2, 1, '[email protected]');
1 row created.
SQL> insert into sales values (3, 3, 4, null);
1 row created.
SQL> insert into sales values (3, 4, 5, '[email protected]');
1 row created.
SQL> SELECT titles.title_id, title, qty
2 FROM titles LEFT OUTER JOIN sales
3 ON titles.title_id = sales.title_id
4 AND stor_id = 3
5 AND sales.email is not null
6 ;
TITLE_ID TITLE QTY
4 Orange Book 5
3 Blue Book
1 Red Book
2 Yellow Book
SQL>
SQL> SELECT titles.title_id, title, qty
2 FROM titles LEFT OUTER JOIN sales
3 ON titles.title_id = sales.title_id
4 AND 3 = stor_id
5 AND sales.email is not null;
TITLE_ID TITLE QTY
4 Orange Book 5
3 Blue Book
1 Red Book
2 Yellow Book
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options -
Inconsistent results for SDO_RELATE
Using SDO_VERSION = 10.2.0.2.0
I am getting inconsistent results using SDO_RELATE. If I run it on the whole table (over 200,000 records), one particular record that I know of is skipped, whereas if I pick a smaller range of records (4 records in this case), the record is not skipped. The relationship for this particular record is "touch". Is there any limitation on table size, or is this something else? Here is the example:
-- The column is set to null
SQL> update nad_als_fixed_stn_10G_HQ set insidecheck = null;
231484 rows updated.
-- SDO_RELATE on a few records which actually finds the correct relationship
SQL> UPDATE nad_als_fixed_stn_10G_HQ C SET C.insidecheck = '1'
2 WHERE EXISTS (SELECT 1 FROM MetroRegions A, nad_als_fixed_stn_10G_HQ B
3 WHERE SDO_RELATE(B.location83r, A.geoloc, 'mask=anyinteract') = 'TRUE'
4 AND C.lic_no = B.lic_no and C.lic_no between 4687157 and 4687223 )
5 ;
3 rows updated.
-- Displays the correct relationship for that record (this is a "touch")
SQL> select insidecheck from nad_als_fixed_stn_10G_HQ where lic_no = 4687161;
Inside
check
1
-- Reset the column to null
SQL> update nad_als_fixed_stn_10G_HQ set insidecheck = null;
231484 rows updated.
-- SDO_RELATE on the complete table
SQL> UPDATE nad_als_fixed_stn_10G_HQ C SET C.insidecheck = '1'
2 WHERE EXISTS (SELECT 1 FROM MetroRegions A, nad_als_fixed_stn_10G_HQ B
3 WHERE SDO_RELATE(B.location83r, A.geoloc, 'mask=anyinteract') = 'TRUE'
4 AND C.lic_no = B.lic_no );
48488 rows updated.
-- This particular record which was located correctly earlier appears to be skipped
SQL> select insidecheck from nad_als_fixed_stn_10G_HQ where lic_no = 4687161;
Inside
check
SQL>
François Sigouin
Thanks, but it did not solve the problem of inconsistent results. The response time for the first update is much improved, though. When I added the hint on the second update (which updates about 600 records), it never came back, so I tested without it. Any other ideas?
François.
TEST
SQL> update nad_als_fixed_stn_10G_HQ set insidecheck = null;
231484 rows updated.
-- First update with hint, response time is improved but same results obtained
SQL> UPDATE nad_als_fixed_stn_10G_HQ C SET C.insidecheck = '1'
2 WHERE EXISTS (SELECT /*+ ORDERED */ 1 FROM MetroRegions A, nad_als_fixed_stn_10G_HQ B
3 WHERE SDO_RELATE(B.location83r, A.geoloc, 'mask=anyinteract') = 'TRUE'
4 AND C.lic_no = B.lic_no );
48488 rows updated.
SQL> select insidecheck from nad_als_fixed_stn_10G_HQ where lic_no = 4687161;
Inside
check
SQL> update nad_als_fixed_stn_10G_HQ set insidecheck = null;
231484 rows updated.
-- The second update has to run without the hint; otherwise it does not come back.
SQL> UPDATE nad_als_fixed_stn_10G_HQ C SET C.insidecheck = '1'
2 WHERE EXISTS (SELECT 1 FROM MetroRegions A, nad_als_fixed_stn_10G_HQ B
3 WHERE SDO_RELATE(B.location83r, A.geoloc, 'mask=anyinteract') = 'TRUE'
4 AND C.lic_no = B.lic_no and C.lic_no between 4687157 and 4687223 )
5 ;
3 rows updated.
SQL> select insidecheck from nad_als_fixed_stn_10G_HQ where lic_no = 4687161;
Inside
check
1 -
Inconsistent results while searching with TREX
Hi all, I am getting inconsistent results for the same search terms. I am searching for content in a document; one user has read permission on this document and the other doesn't. Searching as the user without read access displays no results. I then logged in as the user who has read permission in a different window, and searching for the same content displays the document. Now, if I search for the same content as the user who doesn't have read permission, the document is displayed as well, when ideally it should not be. It would be very helpful if somebody could point out what the problem is. Thanks in advance.
regards
kranthi
Hi Kranthi,
could this be a browser caching or credentials per browser session issue?
- Do you open the new Window with Ctrl-N?
- Or do you start a completely new browser (click browser icon a second time)? Does it still happen in that case?
- Does it also happen, if you completely close the browser in between and then re-open?
- Does it still happen, if you delete the temporary internet files in between? And/or the cookies?
Regards,
Karsten -
Inconsistent Results from dbms_output.get_lines
Hi,
I am getting inconsistent results from using dbms_output.get_lines.
I'm using get_lines in a procedure A that executes a function B to test whether the function returns 0 or > 0 to indicate the validity of my data. In that function, I use dbms_output.put_line to communicate data points that I want to use. My procedure A does a get_lines after executing function B, then either logs the lines into a table or sends an email.
Right now, get_lines is behaving sporadically for me. Sometimes the chararr parameter returns some lines, while other times it doesn't. The strange thing is that numlines does return a value, and it's the value that I expected.
Can someone please help?
Thanks.
Use parameters or even global package variables to transport data from one procedure to the other. dbms_output is not meant for this; it will not work reliably.
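A minimal sketch of that suggestion (hypothetical names, not the poster's actual code): let function B hand its data points back through an OUT parameter instead of the dbms_output buffer.

```sql
-- Hypothetical rewrite: B returns its validity flag AND its data points,
-- so A never has to scrape the dbms_output buffer with get_lines.
CREATE OR REPLACE FUNCTION validate_data (p_details OUT VARCHAR2)
RETURN NUMBER
AS
BEGIN
  p_details := 'rows checked: 42';  -- the "data point" travels with the result
  RETURN 0;                         -- 0 = valid, > 0 = number of problems
END validate_data;
/
```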
-
Inconsistent results from CallableStatement
We are getting inconsistent results from a call to an Oracle database. We have a CallableStatement that returns 4 INTEGERs.
It runs perfectly if I run the Oracle procedure directly from the sql command line and then call it from the java code.
However if I recompile the Oracle stored procedures and execute the Callable Statement (without first running it from sql) then I am getting incorrect results...1,2,1,1 is being returned instead of 1,2,0,3. Are these results being cached somewhere???
We are using Oracle 8.1.7, JDK 1.3.1, and Oracle thin driver 8.1.7.
The code is as follows:
<pre>
private CallableStatement autoLBNumbersNeeded;

private void prepareStatements() {
    autoLBNumbersNeeded = dbConnection.prepareCall(
        "{CALL pkg_scheduler.pr_auto_lb_nos(?,?,?,?)}");
    autoLBNumbersNeeded.clearParameters();
    autoLBNumbersNeeded.registerOutParameter(1, java.sql.Types.INTEGER);
    autoLBNumbersNeeded.registerOutParameter(2, java.sql.Types.INTEGER);
    autoLBNumbersNeeded.registerOutParameter(3, java.sql.Types.INTEGER);
    autoLBNumbersNeeded.registerOutParameter(4, java.sql.Types.INTEGER);
}

public UserNumber[] doAutoLoadBalancing() throws SQLException {
    autoLBNumbersNeeded.clearParameters();
    autoLBNumbersNeeded.execute();
    mwFF = autoLBNumbersNeeded.getLong(1);
    mwHF = autoLBNumbersNeeded.getLong(2);
    wmFF = autoLBNumbersNeeded.getLong(3);
    wmHF = autoLBNumbersNeeded.getLong(4);
    autoLBNumbersNeeded.clearParameters();
</pre>
The Oracle procedure is
<pre>
PROCEDURE proc1( p_parm_1 OUT NUMBER, p_parm_2 OUT NUMBER, p_parm_3
OUT NUMBER, p_parm_4 OUT NUMBER)
AS
BEGIN
-- Get Counts
SELECT COUNT(*)
INTO p_parm_1
FROM blah ......
SELECT COUNT(*)
INTO p_parm_2
FROM blah ......
SELECT COUNT(*)
INTO p_parm_3
FROM blah ......
SELECT COUNT(*)
INTO p_parm_4
FROM blah ......
EXCEPTION
blah .......
END proc1;
</pre>
Can anyone help?
Many thanks
Fionnuala
Hi
Yes I have a debug line that prints them out in the doAutoLoadBalancing method - this is the point where I am diagnosing the problem. It only prints the correct results after I have already run the procedure from the SQL command line.
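One thing worth ruling out (my assumption, not established in the thread): recompiling the stored procedures can leave dependent objects invalid or discard package state, and the first call after that may behave oddly; running the procedure from SQL*Plus would revalidate it, which matches the symptom. An illustrative check:

```sql
-- After recompiling, look for objects left INVALID and revalidate them
-- before the Java code calls the procedure again.
SELECT object_name, object_type, status
  FROM user_objects
 WHERE status = 'INVALID';

ALTER PACKAGE pkg_scheduler COMPILE;
```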
<pre>
public UserNumber[] doAutoLoadBalancing () throws SQLException {
autoLBNumbersNeeded.clearParameters();
autoLBNumbersNeeded.execute();
mwFF = autoLBNumbersNeeded.getLong(1);
mwHF = autoLBNumbersNeeded.getLong(2);
wmFF = autoLBNumbersNeeded.getLong(3);
wmHF = autoLBNumbersNeeded.getLong(4);
autoLBNumbersNeeded.clearParameters();
if (m_cat.isDebugEnabled()) {m_cat.debug("mwFF: "+mwFF+"; mwHF: " +mwHF+"; wmFF: "+wmFF+"; wmHF: "+wmHF);}
</pre> -
Inconsistent results using the "places" view on the iPad with geotagged photos
I am getting inconsistent results with geotagged photos on the iPad (both 1 and 2), OS 5.01. Sometimes photos show up in the correct location in the Places view; others do not show up at all. In fact, I can have a geotagged photo in one folder, sync with iTunes, and it will not show up in the Places view; then I move that same picture to a different folder (which becomes an album on the iPad), re-sync, and it now shows up at the correct location in the Places view. All photos show up in the album view. Everything is the latest version: iTunes, iPad, etc. I can also have many photos geotagged to exactly the same location (exact same coordinates); some show up, others do not.
I have used different methods to geotag my photos. At first I was using Photoshop Elements 8, until they stopped supporting geotagging photos. Now I am using a third-party application, GeoSetter (an excellent free application). The GPS data is clearly in the EXIF data, but the results are not consistent.
Any ideas?

I believe I have solved this problem myself and thought others might benefit from what I found. It turns out to be what I believe is a glitch in iTunes. iTunes creates a cache of all the photos that you sync with your iPad, iPhone, etc. Apparently it does not update that cache when you change the photo, at least not the geotagging data. The file modify date was changing, but that wasn't enough to cause iTunes to update its cache of the photos. I deleted the cache of all the photos, re-synced, and presto: now all my photos were showing up at the correct location in the iPad Places view.
I might add that I called Apple tech support and was routed to a higher-tier support person, and I was very disappointed with his response. He basically told me that the Places view on the iPad and iPhone was designed to be used only with an Apple computer and Apple software, and he was surprised the Places view was working at all with pictures synced from a Windows-based machine. He also said he was unable to help me since he couldn't provide support for non-Apple products. Although I understand this to a point, I think he was too quick to make that call. The solution, after all, was a simple one, and one I think he should have known. I would think this cache problem would happen even if I were using Apple products. I view this as a bug in iTunes: Apple needs to compare the modified date of the file and update the cache if it has changed.
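For anyone curious what the fix amounts to, the stale-cache behavior described above can be sketched as a lookup keyed only by file path; adding the file's modification time to the check forces a refresh whenever the photo changes. This is a toy illustration in Java, not Apple's actual cache code; every name here is invented.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of an mtime-aware photo cache: an entry is reused only
// while the file's modification time is unchanged; otherwise it is rebuilt.
public class PhotoCache {
    static class Entry {
        final long mtime;
        final String data;
        Entry(long mtime, String data) { this.mtime = mtime; this.data = data; }
    }

    private final Map<String, Entry> cache = new HashMap<>();

    // Returns cached data for the path, refreshing it when mtime has changed.
    String get(String path, long currentMtime, String freshData) {
        Entry e = cache.get(path);
        if (e == null || e.mtime != currentMtime) { // missing or stale: refresh
            e = new Entry(currentMtime, freshData);
            cache.put(path, e);
        }
        return e.data;
    }

    public static void main(String[] args) {
        PhotoCache c = new PhotoCache();
        System.out.println(c.get("img.jpg", 100L, "no-gps"));     // cached fresh
        System.out.println(c.get("img.jpg", 200L, "gps-tagged")); // mtime changed: refreshed
    }
}
```

A cache that ignored `currentMtime` would keep returning "no-gps" after the photo was geotagged, which is exactly the symptom reported above; deleting the cache wholesale (the workaround that worked here) forces the same refresh by another route.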
Scripts run too fast? Inconsistent Results
In Automator I wrote a simple workflow combining Applescript and Automator. I saved it as an application and it appears to work every time.
In a new Automator workflow, I then tried looping the application 5 times. This yields inconsistent results, though. It seems to loop, but steps are often missed in the workflow. Furthermore, which steps are missed is not consistent.
The workflow includes importing sound files and so on, which takes a bit of time, so I tried adding some delays here and there. I also tried quitting other apps to ensure it can run at full speed. No luck, though.
Surely there is a more efficient and robust solution than guessing where to put delays and how long they should be?
Any ideas?
Thank you soooo much!

My solution: STOP USING AUTOMATOR! Especially any mouse-click tracking with the record button.
Applescript is so much more versatile and reliable. I spent about 10 hours learning it from scratch to make this simple script, and I still suck at it, but it's been really worth it for me. It's really pretty simple. -
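For what it's worth, the robust alternative to sprinkling fixed delays is to poll for the condition you are actually waiting on (e.g. the imported file exists) with a timeout. In AppleScript that is a `repeat until` loop with a short `delay`; a generic sketch of the same idea (written in Java here purely for illustration) looks like this:

```java
import java.util.function.BooleanSupplier;

// Generic "wait for a condition" helper: polls every pollMs until the
// condition holds or timeoutMs elapses, instead of guessing a fixed delay.
public class WaitFor {
    static boolean until(BooleanSupplier condition, long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) return false; // timed out
            try {
                Thread.sleep(pollMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // preserve interrupt status
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // e.g. wait up to 5 s for an import flag to flip, checking every 100 ms
        boolean ready = WaitFor.until(() -> true, 5000, 100);
        System.out.println("ready: " + ready);
    }
}
```

The payoff over fixed delays: on a fast run the loop exits almost immediately, and on a slow run it keeps waiting up to the timeout instead of silently skipping steps.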
Start/Remove-CMContentDistribution Inconsistent Results
I have been seeing inconsistent results with the powershell cmdlets of remove/start-cmcontentdistribution.
We are running ConfigMgr 2012 R2 CU3 at the site systems and remote consoles.
My remote console is on Windows 7 Pro SP1 with .NET 4.5.2, R2 CU3, and PowerShell 4.
I have been using the following syntax:
Get-Content "E:\pkgs\ABC00123.txt" | ForEach-Object {Start-CMContentDistribution -PackageId ABC00123 -DistributionPointName $_}
In the text file, is just a list of distribution points associated with the package ABC00123
Results I get:
Sometimes I get the following messages/errors:
WARNING: The result set exceeded the maximum size. Only first 1000 items used. You can use Set-CMQueryResultMaximum
Looking at some of the logs, it appears the cmdlet is enumerating through all distributions, regardless of status, for EACH DP/package that I specify; this can take hours for 100 entries.
Sometimes it deletes the DP from the package, lately it does not.
The other message I get:
No collection specified
No action actually occurs, and the package is still assigned to the distribution point.
Supposedly these were fixed in previous updates like CU2, and it looks like there were some issues in CU1 with the "no collection specified" error.
I have tried emptying the %TEMP% folder to see if that was causing it
Sometimes, though not consistently, removing, rebooting, and reinstalling the console helps.
Anyone have any other insight?
MCP, MCTS, MCSA, MCSE

This definitely does not look like correct behavior. Are you using the latest CU (CU4)? If so, please file feedback on Connect at
https://connect.microsoft.com/ConfigurationManagervnext/feedback and we'll look into this for our next release of the cmdlets.
If you file feedback on Connect, it would be very helpful if you provided cmdlet output with -Verbose.
Check out my Configuration Manager blog at http://aka.ms/ameltzer