Different results on a SQL sort depending on the tool?
Hi,
I have a problem sorting a column: I get different results depending on the tool.
Environment:
Oracle 8.1.6 on Solaris, Oracle Client on NT 4.0 (SP5)
The Query:
SELECT * FROM mytable ORDER BY mycolumn ASC
This is the result of a query by SQL Plus Worksheet:
0000000006
0000100100
A00000
A06015
A06016
This is the result of a query by SQL*Plus (and Oracle ODBC 32Bit Test and a jdbc query) :
A00000
A06015
A06016
0000000006
0000100100
Why does the same SQL statement give different results?
Why are there different collating orders?
How can this be solved?
Can anyone help me?
Regards Stefan
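The most likely cause is that one client session sorts binary (by character code, so digits come before letters) while the other applies a linguistic sort; in Oracle this is controlled by the NLS_SORT/NLS_LANG settings each tool picks up, so checking and aligning those settings on both clients should make the orderings agree. A rough sketch of the two orderings in Java (the letters-first comparator below is only a stand-in for a linguistic collation, not Oracle's actual algorithm):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class CollationDemo {
    static final List<String> VALUES =
            List.of("A00000", "0000000006", "A06015", "0000100100", "A06016");

    // Binary (code-point) order: '0'..'9' sort before 'A'..'Z'.
    static List<String> binarySort() {
        List<String> v = new ArrayList<>(VALUES);
        v.sort(Comparator.naturalOrder());
        return v;
    }

    // A collation that ranks digits after letters, as some linguistic
    // sorts do, reproduces the other ordering.
    static List<String> lettersFirstSort() {
        List<String> v = new ArrayList<>(VALUES);
        v.sort(Comparator.comparing((String s) -> Character.isDigit(s.charAt(0)) ? 1 : 0)
                .thenComparing(Comparator.naturalOrder()));
        return v;
    }

    public static void main(String[] args) {
        System.out.println(binarySort());      // digits first, like SQL Plus Worksheet
        System.out.println(lettersFirstSort()); // letters first, like SQL*Plus here
    }
}
```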
At this point I think you should get Applejack...
http://www.macupdate.com/info.php/id/15667/applejack
After installing, reboot holding down CMD+S, then when the DOS-like prompt shows, type in...
applejack AUTO
Then let it do all 6 of its things.
At least it'll eliminate some questions if it doesn't fix it.
The 6 things it does are...
Correct any Disk problems.
Repair Permissions.
Clear out Cache Files.
Repair/check several plist files.
Dump the VM files for a fresh start.
Trash old Log files.
First reboot will be slower, sometimes 2 or 3 restarts will be required for full benefit... my guess is files relying upon other files relying upon other files! :-)
Disconnect the USB cable from any Uninterruptible Power Supply so the system doesn't shut down in the middle of the process.
Then... Try putting these numbers in Network>TCP/IP>DNS Servers, for the Interface you connect with...
208.67.222.222
208.67.220.220
Then Apply. For 10.5/10.6 Network, highlight Interface>Advanced button>DNS tab>little + icon.
DNS Servers are a bit like Phone books where you look up a name and it gives you the phone number, in our case, you put in apple.com and it comes back with 17.149.160.49 behind the scenes.
These Servers have been patched to guard against DNS poisoning, and are faster/more reliable than most ISP's DNS Servers.
The Interface that connects to the Internet needs to be dragged to the top of System Preferences>Network>Show:>Network Port Configurations and checked ON.
10.5.x/10.6.x instructions...
System Preferences>Network, click on the little gear at the bottom next to the + & - icons, (unlock lock first if locked), choose Set Service Order.
The interface that connects to the Internet should be dragged to the top of the list.
Similar Messages
-
We have a business scenario where the receipt days for a location depend on the relationship between the Start-Destination Location. We need to include this constraint in an optimizer run with daily buckets.
Example:
Factory A (Start Location) ships to DC X (destination location) only on Mondays, with a lead time of 2 days. DC X should receive the stock from Factory A on Wednesday
Factory B (Start Location) ships to DC X (destination location) only on Thursdays, with a lead time of 1 day. DC X should receive the stock from Factory B on Saturday
Has anyone been able to model this scenario?
I tried using transportation calendars (time streams) in the means of transport, with undesired results:
a) Transportation Lane A-X. Calendar 1A, only Monday is available
The system creates stock transfers on Monday on Week 1 and ends on Monday of Week 3.
b) Transportation Lane A-X. Calendar 2A, Monday and Wednesday are available
The system creates stock transfers on Monday and Wednesday on Week 1:
The stock transfer that starts on Monday-Week 1 ends on Monday-Week 2.
The stock transfer that starts on Wednesday-Week 1 ends on Wednesday-Week 2.
c) Transportation Lane A-X. Calendar 3A, Monday Tuesday and Wednesday are available
The system creates stock transfers on Monday, Tuesday and Wednesday on Week 1.
The stock transfer that starts on Monday-Week 1 ends on Wednesday-Week 1. (this is actually what I need)
The stock transfer that starts on Tuesday-Week 1 ends on Monday-Week 2.
The stock transfer that starts on Wednesday-Week 1 ends on Tuesday-Week 2.
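For reference, the receipt-date arithmetic the example expects (ship on the next allowed weekday, then add the lead time) can be sketched as follows; this only illustrates the desired behaviour, it is not how SNP models transportation calendars:

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.temporal.TemporalAdjusters;

public class ReceiptDay {
    // Ship on the first allowed weekday on or after 'from',
    // then add the transportation lead time in days.
    static LocalDate receiptDate(LocalDate from, DayOfWeek shipDay, int leadTimeDays) {
        LocalDate ship = from.with(TemporalAdjusters.nextOrSame(shipDay));
        return ship.plusDays(leadTimeDays);
    }

    public static void main(String[] args) {
        // Factory A: ships only Mondays, lead time 2 days -> DC X receives Wednesday
        LocalDate monday = LocalDate.of(2024, 1, 1); // a Monday
        System.out.println(receiptDate(monday, DayOfWeek.MONDAY, 2).getDayOfWeek()); // WEDNESDAY
    }
}
```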
Regards
You really don't need to post all that code; few people will read it or try to compile/run it.
Just a tree in a scrollpane in a frame is all you need; then add the method/problem to that basic example, and post it so it's only 20 to 30 lines long.
here's your basic tree/scrollpane/frame, with an expandAll()
import javax.swing.*;

class Testing
{
    public void buildGUI()
    {
        JTree tree = new JTree();
        expandAll(tree);
        JFrame f = new JFrame();
        f.getContentPane().add(new JScrollPane(tree));
        f.pack();
        f.setLocationRelativeTo(null);
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.setVisible(true);
    }
    public void expandAll(JTree tree)
    {
        // re-check getRowCount() each pass: expanding a row can add new rows
        int row = 0;
        while (row < tree.getRowCount())
        {
            tree.expandRow(row);
            row++;
        }
    }
    public static void main(String[] args)
    {
        SwingUtilities.invokeLater(new Runnable(){
            public void run(){
                new Testing().buildGUI();
            }
        });
    }
}
So, is your problem with the expandAll(), or with reading the properties file? -
I'm trying to fill in some missing color in a small spot on a hat, and I may need to set an anchor point. I want to use the Blob Brush, though it is too big. What tool would I use to isolate a specific section of the hat?
RC,
That depends on the shape of the area to have the colour. You could use a circle, maybe.
If there is nothing there, you could also create a path larger than the area, give it the desired colour, and drag it behind the (rest of the) hat, which you can do in the Layers palette.
But depending on the artwork, you may have better options.
And depending on the version, you may be able to just use Live Paint to fill the gap. -
Different result from same SQL statement
The following SQL statement brings back records using Query Analyzer on the SQL Server. However, when I run it in a ColdFusion page it comes back with no results. Any idea why?
SELECT COUNT(h.userID) AS hits, u.OCD
FROM dbo.tbl_hits h INNER JOIN
dbo.tlkp_users u ON h.userID = u.PIN
WHERE (h.appName LIKE 'OPwiz%') AND (h.lu_date BETWEEN
'05/01/07' AND '06/01/07')
GROUP BY u.OCD
ORDER BY u.OCD
Anthony Spears wrote:
> That didn't work either.
>
> But here is something interesting. If we use the dates 05/01/2007 and
> 06/01/2007 we get results in SQL Server Query Analyzer but not using a cold
> fusion page. But if we use the dates 05/01/2007 and 09/01/2007 both get back
> the same results.
>
Are you absolutely, 100% sure that you are connecting to the same database instance with both CF and Query Analyzer? That kind of symptom is, 9 times out of 10, because the user is looking at different data. One is looking at production and the other at development, or a backup, or a recent copy, or something different. -
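A plausible explanation for the date observation in the thread above: string literals like '05/01/07' are ambiguous and are interpreted under whatever date format and locale settings the session applies, so two clients can end up querying different ranges; passing typed date parameters (in ColdFusion, cfqueryparam with cfsqltype="CF_SQL_TIMESTAMP") avoids the ambiguity entirely. The ambiguity itself, sketched in Java for illustration:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DateLiterals {
    // The same text parses to different dates under different format assumptions.
    static LocalDate parseAs(String text, String pattern) {
        return LocalDate.parse(text, DateTimeFormatter.ofPattern(pattern));
    }

    public static void main(String[] args) {
        System.out.println(parseAs("05/01/07", "MM/dd/yy")); // US reading: May 1, 2007
        System.out.println(parseAs("05/01/07", "dd/MM/yy")); // European reading: Jan 5, 2007
    }
}
```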
Different result comparing AWR to TKPROF
Hi,
Last night I ran event 10046 on my database using the following commands:
ALTER SYSTEM SET statistics_level = ALL;
ALTER SYSTEM SET events '10046 trace name context forever, level 12';
Today I compared a single statement from the TKPROF output to the AWR report and found very different results:
The TKPROF shows:
Executions: 1
Elapsed time: 51.39 seconds
CPU time: 0.23 seconds
Gets per Exec: 72
SELECT CM_CUST_DIM_INST_PROD.INST_PROD_ID, CM_CUST_DIM_INST_PROD.NAP_PRODUCT_ID, CM_CUST_DIM_INST_PROD.NAP_PACKEAGE, CM_CUST_DIM_INST_PROD.PRODUCT_ID, CM_CUST_DIM_INST_PROD.PRODUCT_DESCR, CM_CUST_DIM_INST_PROD.PRODUCT_GROUP, CM_CUST_DIM_INST_PROD.PRODUCT_GROUP_DESCR, CM_CUST_DIM_INST_PROD.PROD_CATEGORY, CM_CUST_DIM_INST_PROD.PROD_CATEGORY_DESCR, CM_CUST_DIM_INST_PROD.PROD_GRP_TYPE, CM_CUST_DIM_INST_PROD.PROD_GRP_TYPE_DESCR, CM_CUST_DIM_INST_PROD.NAP_AREA2, CM_CUST_DIM_INST_PROD.NAP_PHONE_NUM, CM_CUST_DIM_INST_PROD.NAP_CANCEL_DT, CM_CUST_DIM_INST_PROD.NAP_SERVICE_OPN_DT, CM_CUST_DIM_INST_PROD.NAP_MAKAT_CD, CM_CUST_DIM_INST_PROD.NAP_CRM_STATUS, CM_CUST_DIM_INST_PROD.NAP_CRM_STATUS_DESCR, CM_CUST_DIM_INST_PROD.NAP_RTRV_INSPRD_ID
FROM CM_CUST_DIM_INST_PROD ,
cm_ip_service_delta, cm_ip_service_delta cm_ip_service_delta2
WHERE CM_CUST_DIM_INST_PROD.prod_grp_type in ('INTR', 'HOST') and
CM_CUST_DIM_INST_PROD.Inst_Prod_Id = cm_ip_service_delta.inst_prod_id(+) and
CM_CUST_DIM_INST_PROD.Nap_Makat_Cd = cm_ip_service_delta.nap_billing_catnum(+)
and cm_ip_service_delta.nap_billing_catnum is null and
cm_ip_service_delta.inst_prod_id is null
and cm_ip_service_delta2.inst_prod_id = CM_CUST_DIM_INST_PROD.Nap_Packeage
ORDER BY INST_PROD_ID
call count cpu elapsed disk query current rows
Parse 1 0.01 0.03 0 22 0 0
Execute 1 0.02 1.79 0 32 0 0
Fetch 13 0.19 49.56 0 18 0 661
total 15 0.23 51.39 0 72 0 661
The AWR report shows:
Executions: 1
Elapsed time: 697.91 seconds
CPU time: 41.89 seconds
Gets per Exec: 351,105.00
Executions Gets per Exec CPU Time (s) Elapsed Time (s) SQL Id SQL
1 351,105.00 41.89 697.91 6hh4jdx9dvjzw
6hh4jdx9dvjzw
SELECT CM_CUST_DIM_INST_PROD.INST_PROD_ID, CM_CUST_DIM_INST_PROD.NAP_PRODUCT_ID, CM_CUST_DIM_INST_PROD.NAP_PACKEAGE, CM_CUST_DIM_INST_PROD.PRODUCT_ID, CM_CUST_DIM_INST_PROD.PRODUCT_DESCR, CM_CUST_DIM_INST_PROD.PRODUCT_GROUP, CM_CUST_DIM_INST_PROD.PRODUCT_GROUP_DESCR, CM_CUST_DIM_INST_PROD.PROD_CATEGORY, CM_CUST_DIM_INST_PROD.PROD_CATEGORY_DESCR, CM_CUST_DIM_INST_PROD.PROD_GRP_TYPE, CM_CUST_DIM_INST_PROD.PROD_GRP_TYPE_DESCR, CM_CUST_DIM_INST_PROD.NAP_AREA2, CM_CUST_DIM_INST_PROD.NAP_PHONE_NUM, CM_CUST_DIM_INST_PROD.NAP_CANCEL_DT, CM_CUST_DIM_INST_PROD.NAP_SERVICE_OPN_DT, CM_CUST_DIM_INST_PROD.NAP_MAKAT_CD, CM_CUST_DIM_INST_PROD.NAP_CRM_STATUS, CM_CUST_DIM_INST_PROD.NAP_CRM_STATUS_DESCR, CM_CUST_DIM_INST_PROD.NAP_RTRV_INSPRD_ID FROM CM_CUST_DIM_INST_PROD , cm_ip_service_delta, cm_ip_service_delta cm_ip_service_delta2 WHERE CM_CUST_DIM_INST_PROD.prod_grp_type in ('INTR', 'HOST') and CM_CUST_DIM_INST_PROD.Inst_Prod_Id = cm_ip_service_delta.inst_prod_id(+) and CM_CUST_DIM_INST_PROD.Nap_Makat_Cd = cm_ip_service_delta.nap_billing_catnum(+) and cm_ip_service_delta.nap_billing_catnum is null and cm_ip_service_delta.inst_prod_id is null and cm_ip_service_delta2.inst_prod_id = CM_CUST_DIM_INST_PROD.Nap_Packeage ORDER BY INST_PROD_ID
Can anyone explain the different results?
Thank you
Hi Virag,
I ran the statement from SQL*Plus and after that I generated an ADDM report:
As you can see below, TKPROF shows that the elapsed time was 50.76 seconds, while ADDM shows:
"was executed 1 times and had an average elapsed time of 751 seconds."
ALTER SESSION SET max_dump_file_size = unlimited;
ALTER SESSION SET tracefile_identifier = '10046';
ALTER SESSION SET statistics_level = ALL;
ALTER SESSION SET events '10046 trace name context forever, level 12';
SELECT CM_CUST_DIM_INST_PROD.INST_PROD_ID, CM_CUST_DIM_INST_PROD.NAP_PRODUCT_ID, CM_CUST_DIM_INST_PROD.NAP_PACKEAGE, CM_CUST_DIM_INST_PROD.PRODUCT_ID, CM_CUST_DIM_INST_PROD.PRODUCT_DESCR, CM_CUST_DIM_INST_PROD.PRODUCT_GROUP, CM_CUST_DIM_INST_PROD.PRODUCT_GROUP_DESCR, CM_CUST_DIM_INST_PROD.PROD_CATEGORY, CM_CUST_DIM_INST_PROD.PROD_CATEGORY_DESCR, CM_CUST_DIM_INST_PROD.PROD_GRP_TYPE, CM_CUST_DIM_INST_PROD.PROD_GRP_TYPE_DESCR, CM_CUST_DIM_INST_PROD.NAP_AREA2, CM_CUST_DIM_INST_PROD.NAP_PHONE_NUM, CM_CUST_DIM_INST_PROD.NAP_CANCEL_DT, CM_CUST_DIM_INST_PROD.NAP_SERVICE_OPN_DT, CM_CUST_DIM_INST_PROD.NAP_MAKAT_CD, CM_CUST_DIM_INST_PROD.NAP_CRM_STATUS, CM_CUST_DIM_INST_PROD.NAP_CRM_STATUS_DESCR, CM_CUST_DIM_INST_PROD.NAP_RTRV_INSPRD_ID
FROM CM_CUST_DIM_INST_PROD ,
cm_ip_service_delta, cm_ip_service_delta cm_ip_service_delta2
WHERE CM_CUST_DIM_INST_PROD.prod_grp_type in ('INTR', 'HOST') and
CM_CUST_DIM_INST_PROD.Inst_Prod_Id = cm_ip_service_delta.inst_prod_id(+) and
CM_CUST_DIM_INST_PROD.Nap_Makat_Cd = cm_ip_service_delta.nap_billing_catnum(+)
and cm_ip_service_delta.nap_billing_catnum is null and
cm_ip_service_delta.inst_prod_id is null
and cm_ip_service_delta2.inst_prod_id = CM_CUST_DIM_INST_PROD.Nap_Packeage
ORDER BY INST_PROD_ID
ALTER SESSION SET EVENTS '10046 trace name context off';
EXIT
call count cpu elapsed disk query current rows
Parse 1 0.05 0.05 0 0 0 0
Execute 1 0.02 1.96 24 32 0 0
Fetch 46 0.19 48.74 6 18 0 661
total 48 0.26 50.76 30 50 0 661
Rows Row Source Operation
661 PX COORDINATOR (cr=50 pr=30 pw=0 time=50699289 us)
0 PX SEND QC (ORDER) :TQ10003 (cr=0 pr=0 pw=0 time=0 us)
0 SORT ORDER BY (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND RANGE :TQ10002 (cr=0 pr=0 pw=0 time=0 us)
0 FILTER (cr=0 pr=0 pw=0 time=0 us)
0 HASH JOIN RIGHT OUTER (cr=0 pr=0 pw=0 time=0 us)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND BROADCAST :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
3366 INDEX FAST FULL SCAN IDX_CM_SERVICE_DELTA (cr=9 pr=6 pw=0 time=47132 us)(object id 1547887)
0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND BROADCAST :TQ10001 (cr=0 pr=0 pw=0 time=0 us)
3366 INDEX FAST FULL SCAN IDX_CM_SERVICE_DELTA (cr=9 pr=0 pw=0 time=20340 us)(object id 1547887)
0 PX BLOCK ITERATOR PARTITION: 1 4 (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL CM_CUST_DIM_INST_PROD PARTITION: 1 4 (cr=0 pr=0 pw=0 time=0 us)
RECOMMENDATION 1: SQL Tuning, 56% benefit (615 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"6wd7sw8adqaxv".
RELEVANT OBJECT: SQL statement with SQL_ID 6wd7sw8adqaxv and
PLAN_HASH 2594021963
SELECT CM_CUST_DIM_INST_PROD.INST_PROD_ID,
CM_CUST_DIM_INST_PROD.NAP_PRODUCT_ID,
CM_CUST_DIM_INST_PROD.NAP_PACKEAGE, CM_CUST_DIM_INST_PROD.PRODUCT_ID,
CM_CUST_DIM_INST_PROD.PRODUCT_DESCR,
CM_CUST_DIM_INST_PROD.PRODUCT_GROUP,
CM_CUST_DIM_INST_PROD.PRODUCT_GROUP_DESCR,
CM_CUST_DIM_INST_PROD.PROD_CATEGORY,
CM_CUST_DIM_INST_PROD.PROD_CATEGORY_DESCR,
CM_CUST_DIM_INST_PROD.PROD_GRP_TYPE,
CM_CUST_DIM_INST_PROD.PROD_GRP_TYPE_DESCR,
CM_CUST_DIM_INST_PROD.NAP_AREA2, CM_CUST_DIM_INST_PROD.NAP_PHONE_NUM,
CM_CUST_DIM_INST_PROD.NAP_CANCEL_DT,
CM_CUST_DIM_INST_PROD.NAP_SERVICE_OPN_DT,
CM_CUST_DIM_INST_PROD.NAP_MAKAT_CD,
CM_CUST_DIM_INST_PROD.NAP_CRM_STATUS,
CM_CUST_DIM_INST_PROD.NAP_CRM_STATUS_DESCR,
CM_CUST_DIM_INST_PROD.NAP_RTRV_INSPRD_ID
FROM CM_CUST_DIM_INST_PROD ,
cm_ip_service_delta, cm_ip_service_delta cm_ip_service_delta2
WHERE CM_CUST_DIM_INST_PROD.prod_grp_type in ('INTR', 'HOST') and
CM_CUST_DIM_INST_PROD.Inst_Prod_Id =
cm_ip_service_delta.inst_prod_id(+) and
CM_CUST_DIM_INST_PROD.Nap_Makat_Cd =
cm_ip_service_delta.nap_billing_catnum(+)
and cm_ip_service_delta.nap_billing_catnum is null and
cm_ip_service_delta.inst_prod_id is null
and cm_ip_service_delta2.inst_prod_id =
CM_CUST_DIM_INST_PROD.Nap_Packeage
ORDER BY INST_PROD_ID
RATIONALE: SQL statement with SQL_ID "6wd7sw8adqaxv" was executed 1
times and had an average elapsed time of 751 seconds.
RATIONALE: At least one execution of the statement ran in parallel.
Thanks. -
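A likely reconciliation, hinted at by the last RATIONALE line above: a 10046 trace (and its TKPROF summary) of the issuing session covers only the query coordinator, while AWR/ADDM aggregate DB time across the coordinator and all parallel slave processes, so the AWR figure can be many times the coordinator's elapsed time. Schematically (the slave count and per-slave time below are made-up numbers, not taken from this report):

```java
public class ParallelDbTime {
    // AWR-style DB time: coordinator time plus the time of every PX slave.
    static double awrDbTime(double coordinatorSec, int slaves, double perSlaveSec) {
        return coordinatorSec + slaves * perSlaveSec;
    }

    public static void main(String[] args) {
        double tkprof = 51.39;                     // coordinator-only elapsed time
        double awr = awrDbTime(tkprof, 14, 46.2);  // hypothetical 14 slaves
        System.out.println(awr);                   // far larger than the TKPROF figure
    }
}
```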
Different results from 3.0 to 3i
I am creating worksheets on Oracle Discoverer 3.1 User Edition (Version 3.1.36.06) and trying to view them on Oracle Discoverer 3i Viewer Edition v3.3.57.24.
I get normal results on the user edition. I get various results on the 3i version. I noticed that just adding totals to a worksheet caused different results in 3i. In some cases, the worksheet would display rows in 3i but no totals. It would also run extremely longer in 3i (minutes versus seconds). In other cases, 3i Viewer would return the message that the worksheet was empty. I also received an error message "An error occurred while trying to run the query. (Internal EUL Error: ExprPArseNodeType - Invalid type in canonical formula".
I have developed various reports in a different Business Area that are working as expected in 3i.
Any help would be appreciated.
Please tell us your first name and put it into your OTN profile and handle as a courtesy and help to us. Thanks.
There is a patch for this problem on Metalink for bug 7138068, file p7138068_111060_GENERIC.zip.
Scott -
Purchase order(me21) number range depending upon the plant
Hello All,
I have to generate different number ranges for purchase orders depending on the plant.
I have found one user exit, MM06E003, but it has only EKKO as an importing structure.
In which exit can I get the plant (ekpo-werks) and export it to a memory ID, which will then be used in MM06E003?
Regards
Mohit
Hi Mohit,
You can try the Enhancement MM06E005. Both EKKO and EKPO structures are available in various function exits in the above Enhancement.
Also you can try using BADI ME_PROCESS_PO_CUST. This BADI is called whenever transactions ME21N/22N/23N are called.
Hope the above info will help.
Regards.
Abhisek Biswas. -
SQL Query produces different results when inserting into a table
I have an SQL query which produces different results when run as a simple query to when it is run as an INSERT INTO table SELECT ...
The query is:
SELECT mhldr.account_number
, NVL(MAX(DECODE(ap.party_sysid, mhldr.party_sysid,ap.empcat_code,NULL)),'UNKNWN') main_borrower_status
, COUNT(1) num_apps
FROM app_parties ap,
(SELECT accsta.account_number
, actply.party_sysid
, RANK() OVER (PARTITION BY actply.table_sysid, actply.loanac_latype_code ORDER BY start_date, SYSID) ranking
FROM activity_players actply
, account_status accsta
WHERE 1 = 1
AND actply.table_id (+) = 'ACCGRP'
AND actply.acttyp_code (+) = 'MHLDRM'
AND NVL(actply.loanac_latype_code (+),TO_NUMBER(SUBSTR(accsta.account_number,9,2))) = TO_NUMBER(SUBSTR(accsta.account_number,9,2))
AND actply.table_sysid (+) = TO_NUMBER(SUBSTR(accsta.account_number,1,8))
) mhldr
WHERE 1 = 1
AND ap.lenapp_account_number (+) = TO_NUMBER(SUBSTR(mhldr.account_number,1,8))
GROUP BY mhldr.account_number;
The INSERT INTO code:
TRUNCATE TABLE applicant_summary;
INSERT /*+ APPEND */
INTO applicant_summary
( account_number
, main_borrower_status
, num_apps
)
SELECT mhldr.account_number
, NVL(MAX(DECODE(ap.party_sysid, mhldr.party_sysid,ap.empcat_code,NULL)),'UNKNWN') main_borrower_status
, COUNT(1) num_apps
FROM app_parties ap,
(SELECT accsta.account_number
, actply.party_sysid
, RANK() OVER (PARTITION BY actply.table_sysid, actply.loanac_latype_code ORDER BY start_date, SYSID) ranking
FROM activity_players actply
, account_status accsta
WHERE 1 = 1
AND actply.table_id (+) = 'ACCGRP'
AND actply.acttyp_code (+) = 'MHLDRM'
AND NVL(actply.loanac_latype_code (+),TO_NUMBER(SUBSTR(accsta.account_number,9,2))) = TO_NUMBER(SUBSTR(accsta.account_number,9,2))
AND actply.table_sysid (+) = TO_NUMBER(SUBSTR(accsta.account_number,1,8))
) mhldr
WHERE 1 = 1
AND ap.lenapp_account_number (+) = TO_NUMBER(SUBSTR(mhldr.account_number,1,8))
GROUP BY mhldr.account_number;
When run as a query, this code consistently returns 2 for the num_apps field (for a certain group of accounts), but when run as an INSERT INTO command, the num_apps field is logged as 1. I have secured the tables used within the query to ensure that nothing is changing the data in the underlying tables.
If I run the query as a cursor for loop with an insert into the applicant_summary table within the loop, I get the same results in the table as I get when I run as a stand alone query.
I would appreciate any suggestions for what could be causing this odd behaviour.
Cheers,
Steve
Oracle database details:
Oracle Database 10g Release 10.2.0.2.0 - Production
PL/SQL Release 10.2.0.2.0 - Production
CORE 10.2.0.2.0 Production
TNS for 32-bit Windows: Version 10.2.0.2.0 - Production
NLSRTL Version 10.2.0.2.0 - Production
Edited by: stevensutcliffe on Oct 10, 2008 5:26 AM
Edited by: stevensutcliffe on Oct 10, 2008 5:27 AM
stevensutcliffe wrote:
Yes, using COUNT(*) gives the same result as COUNT(1).
I have found another example of this kind of behaviour:
Running the following INSERT statements produce different values for the total_amount_invested and num_records fields. It appears that adding the additional aggregation (MAX(amount_invested)) is causing problems with the other aggregated values.
Again, I have ensured that the source data and destination tables are not being accessed / changed by any other processes or users. Is this potentially a bug in Oracle?
Just as a side note, these are not INSERT statements but CTAS statements.
The only non-bug explanation for this behaviour would be a potential query rewrite happening only under particular circumstances (but not always) in the lower integrity modes "trusted" or "stale_tolerated". So if you're not aware of any corresponding materialized views, your QUERY_REWRITE_INTEGRITY parameter is set to the default of "enforced" and your explain plan doesn't show any "MAT_VIEW REWRITE ACCESS" lines, I would consider this as a bug.
Since you're running on 10.2.0.2 it's not unlikely that you hit one of the various "wrong result" bugs that exist(ed) in Oracle. I'm aware of a particular one I've hit in 10.2.0.2 when performing a parallel NESTED LOOP ANTI operation which returned wrong results, but only in parallel execution. Serial execution was showing the correct results.
If you're performing parallel ddl/dml/query operations, try to do the same in serial execution to check if it is related to the parallel feature.
You could also test if omitting the "APPEND" hint changes anything but still these are just workarounds for a buggy behaviour.
I suggest to consider installing the latest patch set 10.2.0.4 but this requires thorough testing because there were (more or less) subtle changes/bugs introduced with [10.2.0.3|http://oracle-randolf.blogspot.com/2008/02/nasty-bug-introduced-with-patch-set.html] and [10.2.0.4|http://oracle-randolf.blogspot.com/2008/04/overview-of-new-and-changed-features-in.html].
You could also open a SR with Oracle and clarify if there is already a one-off patch available for your 10.2.0.2 platform release. If not it's quite unlikely that you are going to get a backport for 10.2.0.2.
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
JOIN ON 2 different sets of table depending on the result of first set
I have a query that returns results. I want to join this query to one of
2 different sets of tables, depending on whether the first set has a result or not.
If the first set didn't return any records, then check the second set.
SELECT
peo.email_address,
r.segment1 requistion_num,
to_char(l.line_num) line_num,
v.vendor_name supplier,
p.CONCATENATED_SEGMENTS category,
to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
TO_CHAR(l.need_by_date,'MM/DD/YYYY') need_by_date,
pe.full_name requestor,
l.item_description,
pr.segment1 project_num,
t.task_number,
c.segment1,
c.segment2
FROM po_requisition_headers_all r,
po_requisition_lines_all l,
(SELECT project_id,task_id,code_combination_id, distribution_id,requisition_line_id,creation_date FROM
(SELECT project_id,task_id,code_combination_id,distribution_id,creation_date,requisition_line_id,ROW_NUMBER ()
OVER (PARTITION BY requisition_line_id ORDER BY requisition_line_id,distribution_id ) rn
FROM po_req_distributions_all pod) WHERE rn = 1) d,
gl_code_combinations c,
POR_CATEGORY_LOV_V p,
per_people_v7 pe,
PA_PROJECTS_ALL pr,
PA_TASKS_ALL_V t,
ap_vendors_v v
WHERE d.creation_date >= nvl(to_date(:DATE_LAST_CHECKED,
'DD-MON-YYYY HH24:MI:SS'),SYSDATE-1)
AND
l.requisition_header_id = r.requisition_header_id
AND l.requisition_line_id = d.requisition_line_id
AND d.code_combination_id = c.code_combination_id
AND r.APPS_SOURCE_CODE = 'POR'
AND l.category_id = p.category_id
AND r.authorization_status IN ('IN PROCESS','PRE-APPROVED','APPROVED')
AND l.to_person_id = pe.person_id
AND pr.project_id(+) = d.project_id
AND t.project_id(+) = d.project_id
AND t.task_id(+) = d.task_id
AND v.vendor_id(+) = l.vendor_id
and r.requisition_header_id in(
SELECT requisition_header_id FROM po_requisition_lines_all pl
GROUP BY requisition_header_id HAVING SUM(nvl(pl.quantity,0) * nvl(pl.unit_price, 0)) >=100000)
group by
peo.email_address,
r.REQUISITION_HEADER_ID,
r.segment1 ,
to_char(l.line_num) ,
v.vendor_name,
p.CONCATENATED_SEGMENTS ,
to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
TO_CHAR(l.need_by_date,'MM/DD/YYYY') ,
pe.full_name ,
l.item_description,
c.segment1,
c.segment2,
pr.segment1 ,
t.task_number
I want to join this query with this first set:
SELECT b.NAME, c.segment1 CO, c.segment2 CC,
a.org_information2 Commodity_mgr,
b.organization_id, p.email_address
FROM hr_organization_information a, hr_all_organization_units b, pay_cost_allocation_keyflex c, per_people_v7 p
WHERE a.org_information_context = 'Financial Approver Information'
AND a.organization_id = b.organization_id
AND b.COST_ALLOCATION_KEYFLEX_ID = c.COST_ALLOCATION_KEYFLEX_ID
and a.ORG_INFORMATION2 = p.person_id
AND NVL (b.date_to, SYSDATE + 1) >= SYSDATE
AND b.date_from <= SYSDATE;
If this doesn't return any result, then I need to join the query with the 2nd set:
select lookup_code, meaning, v.attribute1 company, v.attribute2 cc,
decode(v.attribute3,null,null,p1.employee_number || '-' || p1.full_name) sbu_controller,
decode(v.attribute4,null,null,p2.employee_number || '-' || p2.full_name) commodity_mgr
from fnd_lookup_values_vl v,
per_people_v7 p1, per_people_v7 p2
where lookup_type = 'BIO_FIN_APPROVER_INFO'
and v.attribute3 = p1.person_id(+)
and v.attribute4 = p2.person_id(+)
order by lookup_code
How do i do it?
I have hard-coded the 2 join sets into one using UNION ALL, but if one record exists in both sets, how would I differentiate between the 2 sets?
COUNT(*) will only give the total records.
Suppose there are 14 in total: the first set gives 12 records and the second set gives 4 records.
But I want only 14 records, which could be 12 from set 1 and 2 from set 2, since set 1 and set 2 can have common records.
SELECT
peo.email_address,
r.segment1 requistion_num,
to_char(l.line_num) line_num,
v.vendor_name supplier,
p.CONCATENATED_SEGMENTS category,
to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
TO_CHAR(l.need_by_date,'MM/DD/YYYY') need_by_date,
pe.full_name requestor,
l.item_description,
pr.segment1 project_num,
t.task_number,
c.segment1,
c.segment2
FROM po_requisition_headers_all r,
po_requisition_lines_all l,
(SELECT project_id,task_id,code_combination_id, distribution_id,requisition_line_id,creation_date FROM
(SELECT project_id,task_id,code_combination_id,distribution_id,creation_date,requisition_line_id,ROW_NUMBER ()
OVER (PARTITION BY requisition_line_id ORDER BY requisition_line_id,distribution_id ) rn
FROM po_req_distributions_all pod) WHERE rn = 1) d,
gl_code_combinations c,
POR_CATEGORY_LOV_V p,
per_people_v7 pe,
PA_PROJECTS_ALL pr,
PA_TASKS_ALL_V t,
ap_vendors_v v
WHERE d.creation_date >= nvl(to_date(:DATE_LAST_CHECKED,
'DD-MON-YYYY HH24:MI:SS'),SYSDATE-1)
AND
l.requisition_header_id = r.requisition_header_id
AND l.requisition_line_id = d.requisition_line_id
AND d.code_combination_id = c.code_combination_id
AND r.APPS_SOURCE_CODE = 'POR'
AND l.category_id = p.category_id
AND r.authorization_status IN ('IN PROCESS','PRE-APPROVED','APPROVED')
AND l.to_person_id = pe.person_id
AND pr.project_id(+) = d.project_id
AND t.project_id(+) = d.project_id
AND t.task_id(+) = d.task_id
AND v.vendor_id(+) = l.vendor_id
and r.requisition_header_id in(
SELECT requisition_header_id FROM po_requisition_lines_all pl
GROUP BY requisition_header_id HAVING SUM(nvl(pl.quantity,0) * nvl(pl.unit_price, 0)) >=100000)
group by
peo.email_address,
r.REQUISITION_HEADER_ID,
r.segment1 ,
to_char(l.line_num) ,
v.vendor_name,
p.CONCATENATED_SEGMENTS ,
to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
TO_CHAR(l.need_by_date,'MM/DD/YYYY') ,
pe.full_name ,
l.item_description,
c.segment1,
c.segment2,
pr.segment1 ,
t.task_number
UNION ALL
SELECT
r.segment1 requistion_num,
to_char(l.line_num) line_num,
v.vendor_name supplier,
p.CONCATENATED_SEGMENTS category,
to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
TO_CHAR(l.need_by_date,'MM/DD/YYYY') need_by_date,
pe.full_name requestor,
l.item_description,
pr.segment1 project_num,
t.task_number,
c.segment1,
c.segment2
FROM po_requisition_headers_all r,
po_requisition_lines_all l,
(SELECT project_id,task_id,code_combination_id, distribution_id,requisition_line_id,creation_date FROM
(SELECT project_id,task_id,code_combination_id,distribution_id,creation_date,requisition_line_id,ROW_NUMBER ()
OVER (PARTITION BY requisition_line_id ORDER BY requisition_line_id,distribution_id ) rn
FROM po_req_distributions_all pod) WHERE rn = 1) d,
gl_code_combinations c,
POR_CATEGORY_LOV_V p,
per_people_v7 pe,
PA_PROJECTS_ALL pr,
PA_TASKS_ALL_V t,
ap_vendors_v v,
fnd_lookup_values_vl flv,
per_people_v7 p1,
per_people_v7 p2
WHERE d.creation_date >= nvl(to_date('11-APR-2008',
'DD-MON-YYYY HH24:MI:SS'),SYSDATE-1)
AND
l.requisition_header_id = r.requisition_header_id
AND l.requisition_line_id = d.requisition_line_id
AND d.code_combination_id = c.code_combination_id
AND r.APPS_SOURCE_CODE = 'POR'
AND l.org_id = 141
AND l.category_id = p.category_id
AND r.authorization_status IN ('IN PROCESS','PRE-APPROVED','APPROVED')
AND l.to_person_id = pe.person_id
AND pr.project_id(+) = d.project_id
AND t.project_id(+) = d.project_id
AND t.task_id(+) = d.task_id
AND v.vendor_id(+) = l.vendor_id
AND flv.attribute1=c.segment1
AND flv.attribute2=c.segment2
AND flv.lookup_type = 'BIO_FIN_APPROVER_INFO'
and flv.attribute3 = p1.person_id(+)
and flv.attribute4 = p2.person_id(+)
and r.requisition_header_id in(
SELECT requisition_header_id FROM po_requisition_lines_all pl
GROUP BY requisition_header_id HAVING SUM(nvl(pl.quantity,0) * nvl(pl.unit_price, 0)) >=100000)
group by
r.REQUISITION_HEADER_ID,
r.segment1 ,
to_char(l.line_num) ,
v.vendor_name,
p.CONCATENATED_SEGMENTS ,
to_char(round((nvl(l.quantity, 0) * nvl(l.unit_price, 0))),'99,999,999,999.99'),
TO_CHAR(l.need_by_date,'MM/DD/YYYY') ,
pe.full_name ,
l.item_description,
c.segment1,
c.segment2,
pr.segment1 ,
t.task_number -
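One way to get the "12 from set 1 plus only the non-overlapping 2 from set 2" behaviour is to keep the UNION ALL but filter the second branch with a NOT EXISTS against the first set's key, so set-1 rows always take precedence. The deduplication logic itself, sketched in Java (the key/value shapes here are hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PreferredLookup {
    // Rows from set 1 always win; a set-2 row is kept only when
    // no set-1 row exists for the same key.
    static <K, V> Map<K, V> preferFirst(Map<K, V> set1, Map<K, V> set2) {
        Map<K, V> merged = new LinkedHashMap<>(set2);
        merged.putAll(set1); // overlapping keys are overwritten by set 1
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> set1 = Map.of("REQ1", "set1", "REQ2", "set1");
        Map<String, String> set2 = Map.of("REQ2", "set2", "REQ3", "set2");
        System.out.println(preferFirst(set1, set2)); // REQ2 comes from set 1
    }
}
```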
SQL Server 2012 Physical vs. Hyper-V Same Query Different Results
I have a database that is on physical hardware (16 CPU's, 32GB Ram).
I have a copy of the database that was attached to a virtual Hyper-V server (16 CPU's, 32GB Ram).
Both servers and SQL Server installations are identical: OS = Windows Server 2008 R2 Standard, SQL Server 2012 Standard, same patch level (SP1 CU8).
Same query run on both servers return same data set, but the time is much different 26 Sec on Physical, 5 minutes on virtual.
Statistics are identical on both databases, query execution plane is identical on both queries.
Indices are identical on both databases.
When I use set statistics IO, I get different results between the two servers.
One table in particular (366k rows) on physical shows logical reads of 15400, on Hyper-V reports logical reads of 418,000,000 that is four hundred eighteen million for the same table.
When the query is run on the physical it uses no CPU, when run on the Hyper-V it takes 100% of all 16 processors.
I have experimented with Maxdop and it does exactly what it should by limiting processors, but it doesn't fix the issue.
A massive difference in logical reads usually hints at differences in the query plan.
When you compare query plans, it is essential that you look at actual query plans.
Please note that if your server / Hyper-V supports parallelism (which is almost always the case nowadays), then you are likely to have two query plans: a parallel and a serial query plan. Of course the actual query plan will make clear which one is used in which case.
To say this again, this is by far the most likely reason for your problem.
There are other (unlikely) reasons that could be the case here:
runaway parallel threads or other bugs in the optimizer or engine. Make sure you have installed the latest service pack
Maybe the slow server (Hyper-V) has extreme fragmentation in the relevant tables
As mentioned by Erland, you have much much more information about the query and query plan than we do. You already know whether or not parallelism is used, how many threads are being used in it, if you have no, one or several Loop Joins in the query (my bet is on at least one, possibly more), etc. etc.
With the limited information you can share (or choose to share), involving PSS is probably your best course of action.
Gert-Jan -
Different results of same calculation between SQL and PL/SQL
This SQL statement:
select 1074 * (4 / 48) from dual;
gives the result 89.5.
However this PL/SQL block
declare
tmp NUMBER;
begin
SELECT 1074 * (4 / 48) into tmp from dual;
dbms_output.put_line('Result '||tmp);
end;
gives a different result:
Result 89.49999999999999999999999999999999999996
If I declare my variable tmp with an explicit precision and scale, say NUMBER(38,36), then the result is 89.5.
Edit. I have done this on both 10g (10.2.0.4.0) and 11g (11.1.0.7.0) with the same result in both.
Edited by: kendenny on Jul 9, 2010 10:19 AM for additional information
What's your current NUMWIDTH value in SQL*Plus (I'm assuming that's the tool you are using)?
SQL> set numwidth 50
SQL> select 1074 * (4 / 48) from dual;
1074*(4/48)
89.49999999999999999999999999999999999996 -
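The effect above is easy to reproduce outside Oracle. The sketch below (Python, used purely as a runnable illustration; it does not model Oracle NUMBER internals, and the truncating rounding mode is an assumption made for demonstration) contrasts exact rational arithmetic with fixed-precision arithmetic in which the intermediate 4/48 loses its repeating tail before the multiplication:

```python
from decimal import Decimal, getcontext, ROUND_DOWN
from fractions import Fraction

# Exact rational arithmetic: 1074 * 4/48 is exactly 179/2 = 89.5.
exact = Fraction(1074) * Fraction(4, 48)
print(exact)  # 179/2

# Fixed-precision arithmetic with truncation: the intermediate 4/48
# is cut off after 20 significant digits, and the small error is then
# magnified by the multiplication by 1074.
ctx = getcontext()
ctx.prec = 20
ctx.rounding = ROUND_DOWN
approx = Decimal(1074) * (Decimal(4) / Decimal(48))
print(approx)  # 89.499999999999999999
```

The general lesson is the same in both systems: dividing first and multiplying second forces the non-terminating intermediate result 1/12 to be rounded, whereas 1074 * 4 / 48 (multiply first) stays exact.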
Same SQL statement, two different results?
Hi,
I was wondering if anyone knows why I am getting different results from the same query.
Basically, when I run this query in a view in SQL Server, I get the results I need; however, when I run it from ColdFusion, I get totally different results...
the query:
SELECT DISTINCT
tbl_employees.indexid, tbl_employees.[Employee ID] as
employeeid, tbl_employees.[First Name] as firstname,
tbl_employees.[Last Name] as lastname,
tbl_employees.[Supervisor ID] as supervisorid,
tbl_workaddress_userdata.firstname,
tbl_workaddress_userdata.lastname,
tbl_workaddress_userdata.supervisorid,
tbl_workaddress_userdata.location,
tbl_workaddress_userdata.employeeid,
tbl_workaddress_userdata.locationdescription
FROM tbl_employees FULL OUTER JOIN
tbl_workaddress_userdata ON tbl_employees.[Employee ID] =
tbl_workaddress_userdata.employeeid
WHERE (tbl_employees.[Supervisor ID] = 7) AND
(tbl_workaddress_userdata.location IS NULL)
I suspect you and your CF DSN are looking at two different
DBs...
Adam -
Different results using Matcher.replaceAll on a literal depending on the Pattern compiled
I would have expected that the results for all the following scenarios would have been the same:
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PatternMatcher {
public static void main(String[] args) {
Pattern p;
Matcher m;
// duplicates
p = Pattern.compile("(.*)");
m = p.matcher("abc");
if (m.matches()) System.out.println(p + " : " + m.replaceAll("xyz"));
// single
p = Pattern.compile("(.+)");
m = p.matcher("abc");
if (m.matches()) System.out.println(p + " : " + m.replaceAll("xyz"));
// wtf
p = Pattern.compile("(.*?)");
m = p.matcher("abc");
if (m.matches()) System.out.println(p + " : " + m.replaceAll("xyz"));
// duplicates
p = Pattern.compile("(.*+)");
m = p.matcher("abc");
if (m.matches()) System.out.println(p + " : " + m.replaceAll("xyz"));
// single
p = Pattern.compile("^(.*)");
m = p.matcher("abc");
if (m.matches()) System.out.println(p + " : " + m.replaceAll("xyz"));
// duplicates
p = Pattern.compile("(.*)$");
m = p.matcher("abc");
if (m.matches()) System.out.println(p + " : " + m.replaceAll("xyz"));
// single
p = Pattern.compile("^(.*)$");
m = p.matcher("abc");
if (m.matches()) System.out.println(p + " : " + m.replaceAll("xyz"));
// single
p = Pattern.compile("(.(.*).)");
m = p.matcher("abc");
if (m.matches()) System.out.println(p + " : " + m.replaceAll("xyz"));
}
}
But the results vary depending on the pattern compiled:
(.*) : xyzxyz
(.+) : xyz
(.*?) : xyzaxyzbxyzcxyz
(.*+) : xyzxyz
^(.*) : xyz
(.*)$ : xyzxyz
^(.*)$ : xyz
(.(.*).) : xyz
Since all of the patterns have an all-encompassing capture group, but the replacement string does not have any group references, I was expecting that in every case the replacement string would simply be returned unchanged (so just "xyz").
Have I uncovered a bug in the core library, or am I misunderstanding how this should work?
jwenting wrote:
And such is the case here.
"You're doing it wrong, but I'm not going to tell you what it is you're doing wrong nah nah nah"
That's the good thing about long-lasting platforms such as Java: the APIs are mature because they've been tested and re-tested by thousands of people for more than a decade. So when you go to use them and find yourself thinking "wtf", you can be almost certain that you simply don't understand something, and the meanies in the forums can say so without needing any ammunition to back up the accusation. -
To_char displays different results on SQL client and server
Hi,
I am executing the below query on my database with 8.1.7.4 version:
SQL> select to_char(to_date('20-OCT-07'),'D') from dual;
The following result is displayed:
T
7
When the same query is run through the SQL*Plus client (9.2.0.3) connecting to the same database, the following result is displayed:
SQL> select to_char(to_date('20-OCT-07'),'D') from dual;
T
6
Could anyone please explain why there is this difference, and what parameter setting needs to be made to get the same result?
Thanks in advance,
Vishwanath
Or it can come from the territory part of the NLS_LANG OS variable:
oracle@xxx:/home/oracle# echo $NLS_LANG
AMERICAN_AMERICA.UTF8
oracle@xxx:/home/oracle# sqlplus '/ as sysdba'
SQL*Plus: Release 9.2.0.8.0 - Production on Mon Oct 22 10:44:59 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production
SQL> select to_char(to_date('20-OCT-07'),'D') from dual;
T
7
SQL> quit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production
oracle@xxx:/home/oracle# export NLS_LANG=AMERICAN_FRANCE.UTF8
oracle@xxx:/home/oracle# sqlplus '/ as sysdba'
SQL*Plus: Release 9.2.0.8.0 - Production on Mon Oct 22 10:45:38 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production
SQL> select to_char(to_date('20-OCT-07'),'D') from dual;
T
6
SQL>
Check it on both sides (client and server).
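The underlying cause is that the 'D' format element is territory-dependent: NLS_TERRITORY determines which day the week starts on (Sunday for AMERICA, Monday for FRANCE). 20-OCT-07 is a Saturday, so it is day 7 in a Sunday-first week and day 6 in a Monday-first week. A quick cross-check of that arithmetic in Python (used here only as a runnable illustration):

```python
from datetime import date

d = date(2007, 10, 20)       # 20-OCT-07 is a Saturday
iso = d.isoweekday()         # ISO numbering: Monday=1 .. Sunday=7
print(iso)                   # 6 -> matches the FRANCE territory result

sunday_first = iso % 7 + 1   # Renumber: Sunday=1 .. Saturday=7
print(sunday_first)          # 7 -> matches the AMERICA territory result
```

So to get consistent 'D' values on client and server, both must use the same NLS_TERRITORY (via NLS_LANG or an explicit ALTER SESSION SET NLS_TERRITORY).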
Nicolas. -
Filter expression producing different results after upgrade to 11.1.1.7
Hello,
We recently did an upgrade and noticed that, on a number of reports where we're using the FILTER expression, the numbers are very inflated. Where we are not using the FILTER expression, the numbers are as expected. In the example below, we ran the 'Bookings' report in 10g and got one number, then ran the same report in 11g (11.1.1.7.0) after the upgrade and got a different result. The data source is the same database for each environment. Also, when running the physical SQL generated by the 10g and 11g versions of the report, we get the inflated numbers from the 11g SQL. Any ideas on what might be happening or causing the issue?
10g report: 2016-Q3......Bookings..........72,017
11g report: 2016-Q3......Bookings..........239,659
This is the simple FILTER expression that is being used in the column formula on the report itself for this particular scenario which produces different results in 10g and 11g.
FILTER("Fact - Opportunities"."Won Opportunity Amount" USING ("Opportunity Attributes"."Business Type" = 'New Business'))
-------------- Physical SQL created by 10g report -------- results as expected --------------------------------------------
WITH
SAWITH0 AS (select sum(case when T33142.OPPORTUNITY_STATUS = 'Won-closed' then T33231.USD_LINE_AMOUNT else 0 end ) as c1,
T28761.QUARTER_YEAR_NAME as c2,
T28761.QUARTER_RANK as c3
from
XXFI.XXFI_GL_FISCAL_MONTHS_V T28761 /* Dim_Periods */ ,
XXFI.XXFI_OSM_OPPTY_HEADER_ACCUM T33142 /* Fact_Opportunity_Headers(CloseDate) */ ,
XXFI.XXFI_OSM_OPPTY_LINE_ACCUM T33231 /* Fact_Opportunity_Lines(CloseDate) */
where ( T28761.PERIOD_NAME = T33142.CLOSE_PERIOD_NAME and T28761.QUARTER_YEAR_NAME = '2012-Q3' and T33142.LEAD_ID = T33231.LEAD_ID and T33231.LINES_BUSINESS_TYPE = 'New Business' and T33142.OPPORTUNITY_STATUS <> 'Duplicate' )
group by T28761.QUARTER_YEAR_NAME, T28761.QUARTER_RANK)
select distinct SAWITH0.c2 as c1,
'Bookings10g' as c2,
SAWITH0.c1 as c3,
SAWITH0.c3 as c5,
SAWITH0.c1 as c7
from
SAWITH0
order by c1, c5
-------------- Physical SQL created by the same report as above but in 11g (11.1.1.7.0) -------- results much higher --------------------------------------------
WITH
SAWITH0 AS (select sum(case when T33142.OPPORTUNITY_STATUS = 'Won-closed' then T33142.TOTAL_OPPORTUNITY_AMOUNT_USD else 0 end ) as c1,
T28761.QUARTER_YEAR_NAME as c2,
T28761.QUARTER_RANK as c3
from
XXFI.XXFI_GL_FISCAL_MONTHS_V T28761 /* Dim_Periods */ ,
XXFI.XXFI_OSM_OPPTY_HEADER_ACCUM T33142 /* Fact_Opportunity_Headers(CloseDate) */ ,
XXFI.XXFI_OSM_OPPTY_LINE_ACCUM T33231 /* Fact_Opportunity_Lines(CloseDate) */
where ( T28761.PERIOD_NAME = T33142.CLOSE_PERIOD_NAME and T28761.QUARTER_YEAR_NAME = '2012-Q3' and T33142.LEAD_ID = T33231.LEAD_ID and T33231.LINES_BUSINESS_TYPE = 'New Business' and T33142.OPPORTUNITY_STATUS <> 'Duplicate' )
group by T28761.QUARTER_YEAR_NAME, T28761.QUARTER_RANK),
SAWITH1 AS (select distinct 0 as c1,
D1.c2 as c2,
'Bookings2' as c3,
D1.c3 as c4,
D1.c1 as c5
from
SAWITH0 D1),
SAWITH2 AS (select D1.c1 as c1,
D1.c2 as c2,
D1.c3 as c3,
D1.c4 as c4,
D1.c5 as c5,
sum(D1.c5) as c6
from
SAWITH1 D1
group by D1.c1, D1.c2, D1.c3, D1.c4, D1.c5)
select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4, D1.c5 as c5, D1.c6 as c6 from ( select D1.c1 as c1,
D1.c2 as c2,
D1.c3 as c3,
D1.c4 as c4,
D1.c5 as c5,
sum(D1.c6) over () as c6
from
SAWITH2 D1
order by c1, c4, c3 ) D1 where rownum <= 2000001
Thank you,
Mike
Edited by: Mike Jelen on Jun 7, 2013 2:05 PM
Thank you for the info. They are definitely different values, since one is on the header and the other is on the lines. As the "Won Opportunity" logical column is mapped to multiple LTSs, it appears OBIEE 11g uses a different algorithm than 10g to determine the most efficient table to use during query generation. I'll need to spend some time researching the impact of adding a 'sort' to the LTS. I'm hoping there's a way to get OBIEE to use logic similar to 10g when it prioritizes tables.
Thx again,
Mike
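The inflation pattern in the 11g SQL is the classic fan-trap: the 11g plan sums a header-level amount (T33142.TOTAL_OPPORTUNITY_AMOUNT_USD) after joining to the lines table, so each header amount is counted once per matching line, whereas the 10g plan summed the line-level amount (T33231.USD_LINE_AMOUNT). A minimal sketch of the effect (Python with hypothetical data, purely illustrative):

```python
# Hypothetical opportunity headers: lead_id -> header-level amount.
headers = {1: 100, 2: 50}

# Hypothetical opportunity lines: (lead_id, line-level amount).
# Lead 1 has two lines whose amounts sum to its header amount.
lines = [(1, 60), (1, 40), (2, 50)]

# Summing the LINE amount across the header-line join is correct:
line_total = sum(amount for _, amount in lines)
print(line_total)      # 150

# Summing the HEADER amount across the same join counts each header
# once per line -- the fan-trap that inflates the 11g numbers:
inflated_total = sum(headers[lead_id] for lead_id, _ in lines)
print(inflated_total)  # 250
```

Any header with more than one line inflates the total, which is consistent with the 11g figure (239,659) being several times the 10g figure (72,017).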