SQL+-MULTI TABLE QUERY PROBLEM
HI ALL,
ANY SUGGESTION PLEASE?
SUB: SQL+-MULTI TABLE QUERY PROBLEM
SQL+ QUERY GIVEN:
SELECT PATIENT_NUM, PATIENT_NAME, HMTLY_TEST_NAME, HMTLY_RBC_VALUE,
HMTLY_RBC_NORMAL_VALUE, DLC_TEST_NAME, DLC_POLYMORPHS_VALUE,
DLC_POLYMORPHS_NORMAL_VALUE FROM PATIENTS_MASTER1, HAEMATOLOGY1,
DIFFERENTIAL_LEUCOCYTE_COUNT1
WHERE PATIENT_NUM = HMTLY_PATIENT_NUM AND PATIENT_NUM = DLC_PATIENT_NUM AND PATIENT_NUM
= &PATIENT_NUM;
RESULT GOT:
&PATIENT_NUM =1
no rows selected
&PATIENT_NUM=2
no rows selected
&PATIENT_NUM=3
PATIENT_NUM 3
PATIENT_NAME KKKK
HMTLY_TEST_NAME HAEMATOLOGY
HMTLY_RBC_VALUE 4
HMTLY_RBC_NORMAL 4.6-6.0
DLC_TEST_NAME DIFFERENTIAL LEUCOCYTE COUNT
DLC_POLYMORPHS_VALUE 60
DLC_POLYMORPHS_NORMAL_VALUE 40-65
EXPECTED RESULT:
&PATIENT_NUM=1
PATIENT_NUM 1
PATIENT_NAME BBBB
HMTLY_TEST_NAME HAEMATOLOGY
HMTLY_RBC_VALUE 5
HMTLY_RBC_NORMAL 4.6-6.0
&PATIENT_NUM=2
PATIENT_NUM 2
PATIENT_NAME GGGG
DLC_TEST_NAME DIFFERENTIAL LEUCOCYTE COUNT
DLC_POLYMORPHS_VALUE 42
DLC_POLYMORPHS_NORMAL_VALUE 40-65
&PATIENT_NUM=3
PATIENT_NUM 3
PATIENT_NAME KKKK
HMTLY_TEST_NAME HAEMATOLOGY
HMTLY_RBC_VALUE 4
HMTLY_RBC_NORMAL 4.6-6.0
DLC_TEST_NAME DIFFERENTIAL LEUCOCYTE COUNT
DLC_POLYMORPHS_VALUE 60
DLC_POLYMORPHS_NORMAL_VALUE 40-65
4 TABLES FOR A CLINICAL LAB: INPUT DATA AND GET A REPORT ONLY FOR THE TESTS MADE FOR A PARTICULAR
PATIENT.
TABLE1:PATIENTS_MASTER1
COLUMNS:PATIENT_NUM, PATIENT_NAME,
VALUES:
PATIENT_NUM
1
2
3
4
PATIENT_NAME
BBBB
GGGG
KKKK
PPPP
TABLE2:TESTS_MASTER1
COLUMNS:TEST_NUM, TEST_NAME
VALUES:
TEST_NUM
1
2
TEST_NAME
HAEMATOLOGY
DIFFERENTIAL LEUCOCYTE COUNT
TABLE3:HAEMATOLOGY1
COLUMNS:
HMTLY_NUM,HMTLY_PATIENT_NUM,HMTLY_TEST_NAME,HMTLY_RBC_VALUE,HMTLY_RBC_NORMAL_VALUE
VALUES:
HMTLY_NUM
1
2
HMTLY_PATIENT_NUM
1
3
HMTLY_TEST_NAME
HAEMATOLOGY
HAEMATOLOGY
HMTLY_RBC_VALUE
5
4
HMTLY_RBC_NORMAL_VALUE
4.6-6.0
4.6-6.0
TABLE4:DIFFERENTIAL_LEUCOCYTE_COUNT1
COLUMNS:DLC_NUM,DLC_PATIENT_NUM,DLC_TEST_NAME,DLC_POLYMORPHS_VALUE,DLC_POLYMORPHS_
NORMAL_VALUE,
VALUES:
DLC_NUM
1
2
DLC_PATIENT_NUM
2
3
DLC_TEST_NAME
DIFFERENTIAL LEUCOCYTE COUNT
DIFFERENTIAL LEUCOCYTE COUNT
DLC_POLYMORPHS_VALUE
42
60
DLC_POLYMORPHS_NORMAL_VALUE
40-65
40-65
THANKS
RCS
E-MAIL:[email protected]
--------
I think you want an OUTER JOIN
SELECT PATIENT_NUM, PATIENT_NAME, HMTLY_TEST_NAME, HMTLY_RBC_VALUE,
HMTLY_RBC_NORMAL_VALUE, DLC_TEST_NAME, DLC_POLYMORPHS_VALUE,
DLC_POLYMORPHS_NORMAL_VALUE
FROM PATIENTS_MASTER1, HAEMATOLOGY1, DIFFERENTIAL_LEUCOCYTE_COUNT1
WHERE PATIENT_NUM = HMTLY_PATIENT_NUM (+)
AND PATIENT_NUM = DLC_PATIENT_NUM (+)
AND PATIENT_NUM = &PATIENT_NUM;Edited by: shoblock on Nov 5, 2008 12:17 PM
outer join marks became stupid emoticons or something. attempting to fix
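For reference, the same fix in ANSI join syntax, which avoids the (+) marks entirely (a sketch using the table and column names given above; behaviour should match the (+) version):

```sql
-- Keep the patient row even when one of the two test tables has no match.
SELECT p.PATIENT_NUM, p.PATIENT_NAME,
       h.HMTLY_TEST_NAME, h.HMTLY_RBC_VALUE, h.HMTLY_RBC_NORMAL_VALUE,
       d.DLC_TEST_NAME, d.DLC_POLYMORPHS_VALUE, d.DLC_POLYMORPHS_NORMAL_VALUE
FROM   PATIENTS_MASTER1 p
LEFT   JOIN HAEMATOLOGY1 h
       ON p.PATIENT_NUM = h.HMTLY_PATIENT_NUM
LEFT   JOIN DIFFERENTIAL_LEUCOCYTE_COUNT1 d
       ON p.PATIENT_NUM = d.DLC_PATIENT_NUM
WHERE  p.PATIENT_NUM = &PATIENT_NUM;
```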
Similar Messages
-
Hi Friends,
I created a new MultiProvider which contains two InfoCubes. The first InfoCube contains Site and Material, and the second InfoCube contains only Material. When I execute the query it shows the first cube's values in some rows, and after the first cube's values end it shows the second cube's values. I am unable to find the solution.
Plz help me.
Regards,
Rams.
Message was edited by: Rams Rams
Hi,
Check in the multiprovider option if you have selected the material number to match with both infoproviders, in order to do this :
1)Double click in MultiProvider
2)Next
3)Characteristic Tab, click in Identification Button.
4)Check if the material number is selected in both infoprovider
End
Regards
Assign points if useful -
A challenging dynamic SQL query problem
hi All,
I have a very interesting problem at work:
We have this particular table defined as follows :
CREATE TABLE sales_data (
sales_id NUMBER,
sales_m01 NUMBER,
sales_m02 NUMBER,
sales_m03 NUMBER,
sales_m04 NUMBER,
sales_m05 NUMBER,
sales_m06 NUMBER,
sales_m07 NUMBER,
sales_m08 NUMBER,
sales_m09 NUMBER,
sales_m10 NUMBER,
sales_m11 NUMBER,
sales_m12 NUMBER,
sales_prior_yr NUMBER );
The columns sales_m01 ... sales_m12 represent aggregated monthly sales: 'sales_m01' translates to sales for the month of January (January being the first month), 'sales_m02' to sales for February, and so on.
The problem I face is that we have a project which requires that a parameter be passed to a stored procedure which stands for the month number which is then used to build a SQL query with the following required field aggregations, which depends on the parameter passed :
Sample 1 : parameter input: 4
Dynamically-built SQL query should be :
SELECT
SUM(sales_m04) as CURRENT_SALES,
SUM(sales_m01+sales_m02+sales_m03+sales_m04) SALES_YTD
FROM
sales_data
WHERE
sales_id = '0599768';
Sample 2 : parameter input: 8
Dynamically-built SQL query should be :
SELECT
SUM(sales_m08) as CURRENT_SALES,
SUM(sales_m01+sales_m02+sales_m03+sales_m04+
sales_m05+sales_m06+sales_m07+sales_m08) SALES_YTD
FROM
sales_data
WHERE
sales_id = '0599768';
So the contents of SUM(sales_m01 ... n) vary depending on the parameter passed, which should be a number between 1 and 12 corresponding to a month, which in turn corresponds to an actual column range in the table itself. The resulting dynamic query should aggregate only those columns in the table which fall within the range given by the input parameter and disregard all the remaining columns.
Any solution is greatly appreciated.
Thanks.
Hi, another simpler approach is using DECODE.
try like this
SQL> CREATE TABLE sales_data (
2 sales_id NUMBER,
3 sales_m01 NUMBER,
4 sales_m02 NUMBER,
5 sales_m03 NUMBER,
6 sales_m04 NUMBER,
7 sales_m05 NUMBER,
8 sales_m06 NUMBER,
9 sales_m07 NUMBER,
10 sales_m08 NUMBER,
11 sales_m09 NUMBER,
12 sales_m10 NUMBER,
13 sales_m11 NUMBER,
14 sales_m12 NUMBER,
15 sales_prior_yr NUMBER );
Table created.
SQL> select * from sales_data;
SALES_ID SALES_M01 SALES_M02 SALES_M03 SALES_M04 SALES_M05 SALES_M06 SALES_M07 SALES_M08 SALES_M09 SALES_M10 SALES_M11 SALES_M12 SALES_PRIOR_YR
1 124 123 145 146 124 126 178 189 456 235 234 789 19878
2 124 123 145 146 124 126 178 189 456 235 234 789 19878
1 100 200 300 400 500 150 250 350 450 550 600 700 10000
1 101 201 301 401 501 151 251 351 451 551 601 701 10000
Now for your requirement: see the query below, and if there is some problem then tell me.
SQL> SELECT sum(sales_m&input_data), DECODE (&input_data,
2 1, SUM (sales_m01),
3 2, SUM (sales_m01 + sales_m02),
4 3, SUM (sales_m01 + sales_m02 + sales_m03),
5 4, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04),
6 5, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04 + sales_m05),
7 6, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06),
8 7, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07),
9 8, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07+sales_m08),
10 9, SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07+sales_m08+sales_m09),
11 10,SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07+sales_m08+sales_m09+sales_m10),
12 11,SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07+sales_m08+sales_m09+sales_m10+sales_m11),
13 12,SUM (sales_m01 + sales_m02 + sales_m03 + sales_m04+sales_m05+sales_m06+sales_m07+sales_m08+sales_m09+sales_m10+sales_m11+sales_m12)
14 ) total
15 FROM sales_data
16 WHERE sales_id = 1;
Enter value for input_data: 08
Enter value for input_data: 08
old 1: SELECT sum(sales_m&input_data), DECODE (&input_data,
new 1: SELECT sum(sales_m08), DECODE (08,
SUM(SALES_M08) TOTAL
890 5663 -
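Since the original request was for a stored procedure that builds the SQL dynamically, here is a minimal PL/SQL sketch as an alternative to the DECODE approach. The procedure name and OUT parameters are illustrative, not from the thread:

```sql
-- Hypothetical procedure: builds SUM(sales_m01+...+sales_mNN) from the month number.
CREATE OR REPLACE PROCEDURE get_sales (
    p_month    IN  PLS_INTEGER,   -- 1..12
    p_sales_id IN  NUMBER,
    p_current  OUT NUMBER,
    p_ytd      OUT NUMBER
) AS
    v_cols VARCHAR2(400);
BEGIN
    -- Build 'sales_m01+sales_m02+...' up to the requested month.
    FOR i IN 1 .. p_month LOOP
        v_cols := v_cols || CASE WHEN i > 1 THEN '+' END
                         || 'sales_m' || TO_CHAR(i, 'FM00');
    END LOOP;
    EXECUTE IMMEDIATE
        'SELECT SUM(sales_m' || TO_CHAR(p_month, 'FM00') || '), '
        || 'SUM(' || v_cols || ') FROM sales_data WHERE sales_id = :1'
        INTO p_current, p_ytd
        USING p_sales_id;
END;
/
```

Only the month number is concatenated into the statement (and it is a validated PLS_INTEGER); the sales_id goes in as a bind variable, which avoids SQL injection and keeps the cursor shareable.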
Problems with ReportDocument and multi-table dataset...
Hi,
First post here and hoping someone can help with me with this problem.
I have an ASP.NET app running using CR 2008 Basic to produce invoices - these are fairly complex and use multiple tables - data used is provided in lists and they work fine when displayed/printed/exported from the viewer.
My client now wants the ability to batch print these, so I'm trying to develop a page which will create a ReportDocument object and then print/export as required.
As soon as I started down this path I received the 'invalid logon params' problem - so to simplify things and get past this I've developed a simple one-page report which takes data from a dataset I've populated and passed to it.
Here's the problem:
1) If I put one table in the dataset and add a field from that table to the report it works OK.
2) If I use a second table and field (without the first) it works OK.
3) as soon as I have both tables and the report has field from each I get the 'Invalid logon parameters' error.
The tables are fine (since I can use each individually), the report is fine (it only has 1 or 2 fields and works with individual tables) ... it's only when I have both tables that I get problems...
... this is driving me up the wall and if CR can't handle this there's no way it's going to handle the more complex invoices with subreports.
Can anyone suggest what I'm doing wrong ... or tell me whether I'm just pushing CR beyond its capabilities?
The code I'm using to generate the ReportDocument is:
List<Invoice> rinv = Invoice.SelectOneContract(inv.Invoice_ID);
List<InvoiceLine> rline = InvoiceLine.SelectInvoice(inv.Invoice_ID);
DataSet ds = new DataSet();
ds.Tables.Add(InvoiceLineTable(rline));
ds.Tables.Add(InvoiceTable(rinv));
rdoc.FileName = Server.MapPath("~/Invoicing/test.rpt");
rdoc.SetDataSource(ds.Tables);
rdoc.ExportToDisk(ExportFormatType.PortableDocFormat, @"c:\test\test.pdf");
... so not rocket science and the error is always caused at the 'ExportToDisk' line.
Thanks in advance!
I've got nowhere trying to create a ReportDocument and pass it a multi-table dataset, so I decided to do it the 'dirty' way by adding all the controls to an aspx page and referring to them.
I know I can do this because the whole issue is printing a report that is currently viewed.
So ... I've now added the ObjectDataSources to the page as well as the CrystalReportSource and the CrystalViewer ...
.. I've tested the page and the report appears within the viewer with all the correct data ...
...so with a certain amount of excitement I've added the following line to the code behind file:
rptSrcContract.ReportDocument.PrintToPrinter(1, true, 1, 1);
... then I run the page and predictably the first thing that comes up is:
Unable to connect: incorrect log on parameters.
.. this is madness!
1) The data is retrieved and is in the correct format otherwise the report would not display.
2) the rptSrcContract.ReportDocument exists ... otherwise it would not display in the viewer.
So why does this want to log on to a database when the data is retrieved successfully and the app is running under a 'Network Service' account anyway? (Actually I know it has nothing to do with logging on to the database; this is just the generic Crystal Reports 'I have a problem' message.)
... sorry if this is a bit of an angry rant .. didn't get much sleep last night because of this ... all I want to be able to do is print a report from code .... surely this should be possible?? -
Multi table insert and ....tricky problem
I have three account related table
--staging table where all raw records exist
acc_event_stage
(acc_id NUMBER(22) NOT NULL,
event_code varchar2(50) NOT NULL,
event_date date not null);
--production table where valid record are moved from staging table
acc_event
(acc_id NUMBER(22) NOT NULL,
event_code varchar2(50) NOT NULL,
event_date date not null);
--error records from staging are moved in here
err_file
(acc_id NUMBER(22) NOT NULL,
error_code varchar2(50) NOT NULL);
--summary records from production account table
acc_event_summary
(acc_id NUMBER(22) NOT NULL,
event_date date NOT NULL,
instance_flag char(1) not null);
records in the staging table may look like this(I have put it in simple english for ease)
open account A on June 12 8 am
close account A on June 12 9 am
open account A on June 12 11 am
Rules
An account cannot be closed if an open account doesn't exist.
An account cannot be updated if an open account doesn't exist.
An account cannot be opened if an open account already exists.
Since I am using
Insert all
when open account record and account already exist
into ...error table
when close account record and open account doesnt exist
into ...error table
else
into account_event table.
select * from acc_event_stage order by ..
wondering if the staging table has records like this
open account A on June 12 8 am
close account A on June 12 9 am
open account A on June 12 11 am
then how can I validate some of the records given the rules above?
I can do the validation from existing records (before the staging table data arrived) without any problem. But the tricky part is the new records in the staging table.
This can be easily achieved if I do cursor loop method,but doing a multi table insert looks like a problem
any opinion.?
thx
m
In short, as a simple example:
through a multi-table insert I insert records into the production account event table from the
staging account event table, making sure that the record doesn't exist in the production table. This will bomb if 2 open-account records exist in the current staging table.
It will also bomb if an open account and then close account exist.
etc. -
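One way to make the rules see earlier rows of the same batch, without a cursor loop, is to precompute each staging row's previous event with an analytic function and drive the multi-table insert off that. A sketch against the tables above (the event codes 'OPEN'/'CLOSE' and the error codes are assumed, not from the thread):

```sql
-- Classify each staging row using the previous event for the same account,
-- so the rules can see earlier rows of the same batch.
INSERT FIRST
  WHEN event_code = 'OPEN' AND prev_event = 'OPEN' THEN
    INTO err_file (acc_id, error_code) VALUES (acc_id, 'ALREADY_OPEN')
  WHEN event_code = 'CLOSE' AND (prev_event IS NULL OR prev_event = 'CLOSE') THEN
    INTO err_file (acc_id, error_code) VALUES (acc_id, 'NOT_OPEN')
  ELSE
    INTO acc_event (acc_id, event_code, event_date)
    VALUES (acc_id, event_code, event_date)
SELECT s.acc_id, s.event_code, s.event_date,
       LAG(s.event_code) OVER (PARTITION BY s.acc_id
                               ORDER BY s.event_date) AS prev_event
FROM   acc_event_stage s;
```

Note this only looks within the staging batch; the first row per account still needs a check against existing acc_event rows (e.g. an outer join in the SELECT), which is omitted here for brevity.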
Problem Loading Microsoft Sql Serer table data to flat file
Hi Experts,
i am trying to load data from a SQL Server table to a flat file but it errors out.
I have selected a staging area different from the target (I am using SQL Server as my staging area).
Knowledge module used: IKM SQL to File Append
I receive the following error:
ODI-1217: Session table to file (124001) fails with return code 7000.
ODI-1226: Step table to file fails after 1 attempt(s).
ODI-1240: Flow table to file fails while performing a Integration operation. This flow loads target table test.
ODI-1227: Task table to file (Integration) fails on the source MICROSOFT_SQL_SERVER connection POS_XSTORE.
Caused By: java.sql.SQLException: Could not read heading rows from file
at com.sunopsis.jdbc.driver.file.FileResultSet.<init>(FileResultSet.java:164)
Please help!!!
Thanks,
Prateek
Have you defined the File Datastore correctly, with the appropriate delimiter and file properties?
Although the example is for Oracle to file, see if this helps you in any way: http://odiexperts.com/oracle-to-flat-file.
Any general tips on getting better performance out of multi table insert?
I have been struggling with coding a multi-table insert, which is the first time I have ever used one, and my Oracle skills are pretty poor in general, so now that the query is built and works fine I am sad to see it's quite slow.
I have checked numerous articles on optimizing, but the things I try don't seem to get me much better performance.
First let me describe my scenario to see if you agree that my performance is slow...
It's an INSERT ALL command which ends up inserting into 5 separate tables conditionally (at least 4 inserts, sometimes 5, but the fifth is the smallest table). Some stats on these tables follow:
Source table: 5.3M rows, ~150 columns wide. Parallel degree 4. everything else default.
Target table 1: 0 rows, 27 columns wide. Parallel 4. everything else default.
Target table 2: 0 rows, 63 columns wide. Parallel 4. default.
Target table 3: 0 rows, 33 columns wide. Parallel 4. default.
Target table 4: 0 rows, 9 columns wide. Parallel 4. default.
Target table 5: 0 rows, 13 columns wide. Parallel 4. default.
The parallelism is just about the only customization I myself have done. Why 4? I don't know; it's pretty arbitrary, to be honest.
Indexes?
Table 1 has 3 index + PK.
Table 2 has 0 index + FK + PK.
Table 3 has 4 index + FK + PK
Table 4 has 3 index + FK + PK
Table 5 has 4 index + FK + PK
None of the indexes are anything crazy, maybe 3 or 4 of all of them are on multiple columns, 2-3 max. The rest are on single columns.
The query itself looks something like this:
insert /*+ append */ all
when 1=1 then
into table1 (...) values (...)
into table2 (...) values (...)
when a=b then
into table3 (...) values (...)
when a=c then
into table3 (...) values (...)
when p=q then
into table4(...) values (...)
when x=y then
into table5(...) values (...)
select .... from source_table
Hints I tried are with append, without append, and parallel (though adding parallel seemed to make the query behave in serial, according to my session browser).
Now for the performance:
It does about 8,000 rows per minute on table1. So that means it should also have that much in table2, table3 and table4, and then a subset of that in table5.
Does that seem normal or am I expecting too much?
I find articles talking about millions of rows per minute... Obviously I don't think I can achieve that much... but maybe 30k or so on each table is a reasonable goal?
If it seems my performance is slow, what else do you think I should try? Is there any information I may try to get to see if maybe its a poorly configured database for this?
P.S. Is it possible to run this so that it commits every x rows or something? I had the heartbreaking event of a network issue giving me a sudden "ora-25402: transaction must roll back" after it had been running for 3.5 hours, so I lost all the progress it made and have to start over. Plus I wonder if the sheer amount of data queued for commit/rollback is causing some of the problem?
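One session setting worth checking for the 'parallel behaves as serial' symptom described above: a multi-table INSERT is DML, and parallel DML is disabled by default, so the PARALLEL hint alone only parallelises the SELECT side. A sketch (illustrative statement, not the poster's actual one; the statement-level PARALLEL(4) form needs 11gR2, use object-level hints on earlier versions):

```sql
-- Without this, the insert step of INSERT /*+ APPEND PARALLEL */ runs serially.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(4) */ ALL
  WHEN 1 = 1 THEN INTO table1 (c1) VALUES (x)
SELECT x FROM source_table;

-- Direct-path / parallel DML must commit (or roll back) before the session
-- can query the target tables again.
COMMIT;
```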
Edited by: trant on Jun 27, 2011 9:29 PM
Looks like there are about 54 sessions on my database; 7 of the sessions belong to me (2 taken by TOAD, 4 by my parallel slave sessions, and 1 by the master of those 4).
In v$session_event there are 546 rows; if I filter it to the SIDs of my current sessions and order by micro_wait_time desc:
510 events in waitclass Other 30670 9161 329759 10.75 196 3297590639 1736664284 1893977003 0 Other
512 events in waitclass Other 32428 10920 329728 10.17 196 3297276553 1736664284 1893977003 0 Other
243 events in waitclass Other 21513 5 329594 15.32 196 3295935977 1736664284 1893977003 0 Other
223 events in waitclass Other 21570 52 329590 15.28 196 3295898897 1736664284 1893977003 0 Other
241 row cache lock 1273669 0 42137 0.03 267 421374408 1714089451 3875070507 4 Concurrency
241 events in waitclass Other 614793 0 34266 0.06 12 342660764 1736664284 1893977003 0 Other
241 db file sequential read 13323 0 3948 0.3 13 39475015 2652584166 1740759767 8 User I/O
241 SQL*Net message from client 7 0 1608 229.65 1566 16075283 1421975091 2723168908 6 Idle
241 log file switch completion 83 0 459 5.54 73 4594763 3834950329 3290255840 2 Configuration
241 gc current grant 2-way 5023 0 159 0.03 0 1591377 2685450749 3871361733 11 Cluster
241 os thread startup 4 0 55 13.82 26 552895 86156091 3875070507 4 Concurrency
241 enq: HW - contention 574 0 38 0.07 0 378395 1645217925 3290255840 2 Configuration
512 PX Deq: Execution Msg 3 0 28 9.45 28 283374 98582416 2723168908 6 Idle
243 PX Deq: Execution Msg 3 0 27 9.1 27 272983 98582416 2723168908 6 Idle
223 PX Deq: Execution Msg 3 0 25 8.26 24 247673 98582416 2723168908 6 Idle
510 PX Deq: Execution Msg 3 0 24 7.86 23 235777 98582416 2723168908 6 Idle
243 PX Deq Credit: need buffer 1 0 17 17.2 17 171964 2267953574 2723168908 6 Idle
223 PX Deq Credit: need buffer 1 0 16 15.92 16 159230 2267953574 2723168908 6 Idle
512 PX Deq Credit: need buffer 1 0 16 15.84 16 158420 2267953574 2723168908 6 Idle
510 direct path read 360 0 15 0.04 4 153411 3926164927 1740759767 8 User I/O
243 direct path read 352 0 13 0.04 6 134188 3926164927 1740759767 8 User I/O
223 direct path read 359 0 13 0.04 5 129859 3926164927 1740759767 8 User I/O
241 PX Deq: Execute Reply 6 0 13 2.12 10 127246 2599037852 2723168908 6 Idle
510 PX Deq Credit: need buffer 1 0 12 12.28 12 122777 2267953574 2723168908 6 Idle
512 direct path read 351 0 12 0.03 5 121579 3926164927 1740759767 8 User I/O
241 PX Deq: Parse Reply 7 0 9 1.28 6 89348 4255662421 2723168908 6 Idle
241 SQL*Net break/reset to client 2 0 6 2.91 6 58253 1963888671 4217450380 1 Application
241 log file sync 1 0 5 5.14 5 51417 1328744198 3386400367 5 Commit
510 cursor: pin S wait on X 3 2 2 0.83 1 24922 1729366244 3875070507 4 Concurrency
512 cursor: pin S wait on X 2 2 2 1.07 1 21407 1729366244 3875070507 4 Concurrency
243 cursor: pin S wait on X 2 2 2 1.06 1 21251 1729366244 3875070507 4 Concurrency
241 library cache lock 29 0 1 0.05 0 13228 916468430 3875070507 4 Concurrency
241 PX Deq: Join ACK 4 0 0 0.07 0 2789 4205438796 2723168908 6 Idle
241 SQL*Net more data from client 6 0 0 0.04 0 2474 3530226808 2000153315 7 Network
241 gc current block 2-way 5 0 0 0.04 0 2090 111015833 3871361733 11 Cluster
241 enq: KO - fast object checkpoint 4 0 0 0.04 0 1735 4205197519 4217450380 1 Application
241 gc current grant busy 4 0 0 0.03 0 1337 2277737081 3871361733 11 Cluster
241 gc cr block 2-way 1 0 0 0.06 0 586 737661873 3871361733 11 Cluster
223 db file sequential read 1 0 0 0.05 0 461 2652584166 1740759767 8 User I/O
223 gc current block 2-way 1 0 0 0.05 0 452 111015833 3871361733 11 Cluster
241 latch: row cache objects 2 0 0 0.02 0 434 1117386924 3875070507 4 Concurrency
241 enq: TM - contention 1 0 0 0.04 0 379 668627480 4217450380 1 Application
512 PX Deq: Msg Fragment 4 0 0 0.01 0 269 77145095 2723168908 6 Idle
241 latch: library cache 3 0 0 0.01 0 243 589947255 3875070507 4 Concurrency
510 PX Deq: Msg Fragment 3 0 0 0.01 0 215 77145095 2723168908 6 Idle
223 PX Deq: Msg Fragment 4 0 0 0 0 145 77145095 2723168908 6 Idle
241 buffer busy waits 1 0 0 0.01 0 142 2161531084 3875070507 4 Concurrency
243 PX Deq: Msg Fragment 2 0 0 0 0 84 77145095 2723168908 6 Idle
241 latch: cache buffers chains 4 0 0 0 0 73 2779959231 3875070507 4 Concurrency
241 SQL*Net message to client 7 0 0 0 0 51 2067390145 2000153315 7 Network
(yikes, is there a way to wrap that in equivalent of other forums' tag?)
v$session_wait;
223 835 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 10 WAITING
241 22819 row cache lock cache id 13 000000000000000D mode 0 00 request 5 0000000000000005 3875070507 4 Concurrency -1 0 WAITED SHORT TIME
243 747 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 7 WAITING
510 10729 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 2 WAITING
512 12718 PX Deq Credit: send blkd sleeptime/senderid 268697599 000000001003FFFF passes 1 0000000000000001 qref 0 00 1893977003 0 Other 0 4 WAITING
v$sess_io:
223 0 5779 5741 0 0
241 38773810 2544298 15107 27274891 0
243 0 5702 5688 0 0
510 0 5729 5724 0 0
512 0 5682 5678 0 0 -
OJ syntax for multi-table left outer join with MS Oracle Driver
I have a multi-table left outer join that works fine in SQL Server ODBC Driver, Oracle ODBC driver 8.01.07.00, but not with Microsoft ODBC Driver for Oracle 2.573.7326.0
SELECT * from { oj A LEFT OUTER JOIN B ON A.col1 = B.col1 LEFT OUTER JOIN C ON A.col1 = C.col1 }
I noticed someone had a similar problem (the proposed solution doesn't work):
http://www.justpbinfo.com/listarchive/msg02874.html
Does anyone know how to get this working with the Microsoft ODBC Driver for Oracle? Or does it just not work?
The Microsoft ODBC Driver for Oracle 2.573.7326.0 does perform the same 'fix up' with {oj} in Oracle 8i. The problem is that it doesn't work when joining more than two tables:
This works:
SELECT * from { oj A LEFT OUTER JOIN B ON A.col1 = B.col1}
This doesn't work:
SELECT * from { oj A LEFT OUTER JOIN B ON A.col1 = B.col1 LEFT OUTER JOIN C ON B.col1 = C.col1 }
(The second query will work with the Oracle Oracle ODBC driver, with a bit of tweaking. But I haven't found a way to get it to work with the Microsoft ODBC Driver for Oracle 2.573.7326.0. My suspicion is that it just doesn't work.)
Gavin -
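If the {oj} escape simply can't be made to work for three tables with that driver, one workaround is to skip the ODBC escape and send Oracle's native (+) syntax, which the driver passes through unchanged. A sketch of the equivalent of the failing query (table and column names as in the example above):

```sql
-- Native Oracle outer-join syntax: no {oj} escape for the driver to rewrite.
SELECT *
FROM   A, B, C
WHERE  A.col1 = B.col1 (+)
AND    B.col1 = C.col1 (+);
```

The obvious cost is portability: this statement is Oracle-only, whereas the {oj} form is (in theory) driver-neutral.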
ODI - IKM Oracle Multi Table Insert
Hi All,
I am new to ODI. I tried to use "IKM Oracle Multi Table Insert"; one interface generates a query like
insert all
when 1=1 then
into BI.EMP_TOTAL_SAL
*(EMPNO, ENAME, JOB, MGR, HIREDATE, TOTAL_SAL, DEPTNO)*
values
*(C1_EMPNO, C2_ENAME, C3_JOB, C4_MGR, C5_HIREDATE, C6_TOTAL_SAL, C7_DEPTNO)*
select
C1_EMPNO EMPNO,
C2_ENAME ENAME,
C3_JOB JOB,
C4_MGR MGR,
C5_HIREDATE HIREDATE,
C6_TOTAL_SAL TOTAL_SAL,
C7_DEPTNO DEPTNO
from BI.C$_0EMP_TOTAL_SAL
where (1=1)
because of the aliases this insert fails. Could anyone please explain what exactly happens and how to control the query generation?
Thanks & Regards
M Thiyagu
What David is asking is for you to go to Operator and review the failed task, copy the SQL and paste it up here, then run the SQL in your SQL client (Toad / SQL Developer) and try to ascertain what objects you are missing that are causing your SQL error.
Have you followed the link posted above? Have you placed the interfaces in a package in the right order? Are you running the package as a whole or individual interfaces? I don't think the individual interfaces will work with this IKM, as it's designed for one to feed the other.
Please detail the steps you've taken, how many interfaces you have and what options you have chosen in the IKM options for each interface. It's tricky to diagnose your problem, and when you say "I can't understand what to do and how to do...
So please give the step wise solution to do that interface.. or please give with an example.." it means a lot of people will ignore your post, as we can't see any evidence of you trying!
p.s. I see you have resurrected a thread from 2009. 1) I don't think the multi-table insert KM was available with ODI at that time (10G) 2) The thread is answered/closed so not many people will look at it 3) Procedures should only really be used when you can't do it with an interface; you lose all the lovely lineage between objects that you get with an interface.
Hope this helps - please post your setup , your error and how you have configured the interfaces and package so far. -
Can PQO been used in DML statements to speed up multi-dimension query ?
We have
limit dim1 to var1 ne na
where dim1 is a dimension.
var1 is a variable dimensioned by dim1.
This is pretty much like a SQL table scan to find the records. Is there a PQO (parallel query option)-like option in DML to speed up a multi-dimension query?
This is one of the beauties of the OLAP Option: all the query optimisation is managed by the engine itself. It resolves the best way to get you the result set. If you have partitioned your cube, the query engine will perform the same way as the relational query engine and employ partition elimination.
Where things can slow down is where you used compression, since to answer a question such as "is this cell NA?" all rows in the cube need to be uncompressed before the query can be answered and result set generated. Compression technology works best (i.e. is least intrusive in terms of affecting query performance) where you have a very fast CPU. However, the overall benefits of compression (smaller cubes) nearly always outweigh the impact of having to uncompress data to answer a specific question. Usually the impact is minimal if you partition your cube, since the un-compress function only needs to work on a specific partition.
In summary, the answer to your question is "No", because the OLAP engine automatically optimises the allocation of resources to return the result-set as fast as possible.
Is there a specific problem you are experiencing with this query?
Keith Laker
Oracle EMEA Consulting
OLAP Blog: http://oracleOLAP.blogspot.com/
OLAP Wiki: http://wiki.oracle.com/page/Oracle+OLAP+Option
DM Blog: http://oracledmt.blogspot.com/
OWB Blog : http://blogs.oracle.com/warehousebuilder/
OWB Wiki : http://wiki.oracle.com/page/Oracle+Warehouse+Builder
DW on OTN : http://www.oracle.com/technology/products/bi/db/11g/index.html -
Multi table inheritance and performance
I really like the idea of multi-table inheritance, since a have a main
class and three subclasses which just add one integer to the main class.
It would be a waste to spend 4 tables on this, so I decided to put them
all into one.
My problem now is, that when I query for a specific class, kodo will build
SQL like:
select ... from table where
JDOCLASSX='de.mycompany.myprojectname.mysubpack.classname'
this is pretty slow when the table grows, because string comparisons are
awful - and even worse: the database has to compare nearly the whole
string because it differs only in the last letters.
Indexing would help a bit but wouldn't outperform integer comparisons.
Is it possible to get kodo to do one more step of normalization ?
Having an extra table containing all classnames and IDs for them (and
references in the original table) would improve performance of
multi-tables quite a lot !
Even with standard classes it would save a lot of memory not having the full
classname in each row.
Stefan -
Thanks for the feedback. Note that 3.0 does make this simpler: we have
extensions that allow you to define the mechanism for subclass
identification purely in the metadata file(s). See:
http://solarmetric.com/Software/Documentation/3.0.0RC1/docs/manual.html#ref_guide_mapping_classind
The idea for having a separate table mapping numbers to class names is
good, but we prefer to have as few Kodo-managed tables as possible. It
is just as easy to do this in the metadata file.
In article <[email protected]>, Stefan wrote:
First of all: thx for the fast help, this one (IntegerProvider) helped and
solves my problem.
kodo is really amazing with all it's places where customization can be
done !
Anyway, as a wish for future releases: exactly this technique - using
integers as class identifiers rather than the full class names - is what I
meant by "normalization".
The only thing missing is a table containing information about how classIDs
are mapped to classnames (which is now contained as an explicit statement
in the jdo file). This table is not mapped to the primary key of the main
table (as you suggested), but to the classID integer which acts as a
foreign key.
A query for a specific class would be solved with a query like:
select * from classValues, classMapping where
classValues.JDOCLASSX=classmapping.IDX and
classmapping.CLASSNAMEX='de.company.whatever'
This table should be managed by kodo of course !
Imagine a table with 300,000 rows containing only 3 different derived
classes.
You would have an extra table with 4 rows (base class + 3 derived types).
Searching for the classID is done in that 4-row table, while searching the
actual class instances would then be done over an indexed integer classID
field.
This is much faster than having the database do 300,000 string
comparisons (even when indexed).
(By the way - it would save a lot memory as well, even on classes which
are not derived)
If this technique were done by kodo transparently, maybe turned on with an
extra option ... that would be great, since you wouldn't need to take care
of different "subclass-indicator-values", could go on as usual and have
far better performance ...
a far better performance ...
Stephen Kim wrote:
You could push off fields to separate tables (as long as the pk column
is the same); however, I doubt that would add much performance benefit
in this case, since we'd simply add a join (e.g. select data.name,
info.jdoclassx, info.jdoidx where data.jdoidx = info.jdoidx and
info.jdoclassx = 'foo'). One could turn off the default fetch group for
fields stored in data, but now you're adding a second select to load one
"row" of data.
However, we DO provide an integer subclass provider which can speed
these sorts of queries a lot if you need to constrain your queries by
class, esp. with indexing, at the expense of simple legibility:
http://solarmetric.com/Software/Documentation/2.5.3/docs/ref_guide_meta_class.html#meta-class-subclass-provider
Stefan wrote:
(original post quoted above)
Steve Kim
[email protected]
SolarMetric Inc.
http://www.solarmetric.com
Marc Prud'hommeaux [email protected]
SolarMetric Inc. http://www.solarmetric.com -
Multi-table INSERT with PARALLEL hint on 2 node RAC
A multi-table INSERT statement with parallelism set to 5 works fine and spawns multiple parallel
servers to execute. It's just that it sticks to only one instance of a 2-node RAC. The code I
used is given below.
create table t1 ( x int );
create table t2 ( x int );
insert /*+ APPEND parallel(t1,5) parallel (t2,5) */
when (dummy='X') then into t1(x) values (y)
when (dummy='Y') then into t2(x) values (y)
select dummy, 1 y from dual;
I can see multiple sessions using the query below, but only on one instance. This happens not
only for the above statement but also for statements where real tables (tables with more
than 20 million records) are used.
select p.server_name,ps.sid,ps.qcsid,ps.inst_id,ps.qcinst_id,degree,req_degree,
sql.sql_text
from Gv$px_process p, Gv$sql sql, Gv$session s , gv$px_session ps
WHERE p.sid = s.sid
and p.serial# = s.serial#
and p.sid = ps.sid
and p.serial# = ps.serial#
and s.sql_address = sql.address
and s.sql_hash_value = sql.hash_value
and qcsid=945
Won't parallel servers be spawned across instances for multi-table insert with parallelism on RAC?
Thanks,
Mahesh
Please take a look at these two articles below:
http://christianbilien.wordpress.com/2007/09/12/strategies-for-rac-inter-instance-parallelized-queries-part-12/
http://christianbilien.wordpress.com/2007/09/14/strategies-for-parallelized-queries-across-rac-instances-part-22/
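For reference, how the slaves are distributed can be inspected, and inter-instance allocation influenced, with settings like these (the parameter names are standard Oracle parameters, but verify availability and semantics for your version; the values are examples only):

```sql
-- Show how PX slaves are currently distributed for a coordinator:
SELECT inst_id, COUNT(*) AS slaves
FROM gv$px_session
WHERE qcsid = 945
GROUP BY inst_id;

-- 11gR2 onwards: force slaves onto the coordinator's instance ...
ALTER SESSION SET parallel_force_local = TRUE;

-- ... or allow them to span instances again.
ALTER SESSION SET parallel_force_local = FALSE;

-- Older releases: restrict PX to a named instance group
-- (requires instance_groups to be configured on each instance).
ALTER SESSION SET parallel_instance_group = 'GRP_ALL';
```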
thanks
http://swervedba.wordpress.com -
SSRS - SQL Server Spatial Query
Hi, I was hoping someone could help me.
I'm creating an SSRS report with a Map and importing points via a SQL Server spatial query. I have a db table with 8,000 points but the import only ever brings in the first 2,000 queried (no matter what the filter is). Is there a limitation in SSRS that can be
changed via a configuration setting?
Many thanks in advance for any help on this,
Steve.
Really appreciate you coming back so quick, Olaf.
No its a straightforward query (select AccountNum, Name, Address, Longitude, Lattitude, UrbanArea, County, MI_STYLE, MI_PRINX, SP_GEOMETRY from <table>)
Just to give the whole picture. I had 8,000 points in MapInfo version 10. I imported them to SQL Server 2012 via MapInfo Easy Loader.
The SQL Server spatial points import to SSRS and the Bing overlay map works great (so they import into the correct space), but it's only a subset of the points that imports. I can pick any subset using the WHERE clause, but I cannot get all 8,000 points.
I was hoping there was a configuration setting somewhere in SSRS that was causing the problem. -
Materialized views on prebuilt tables - query rewrite
Hi Everyone,
I am currently counting on implementing the query rewrite functionality via materialized views to leverage existing aggregated tables.
Goal*: to use aggregate-awareness for our queries
How*: by creating views on existing aggregates loaded via ETL (+CREATE MATERIALIZED VIEW xxx ON PREBUILT TABLE ENABLE QUERY REWRITE+)
Advantage*: leverage oracle functionalities + render logical model simpler (no aggregates)
Disadvantage*: existing ETLs need to be rewritten as SQL in the view creation statement --> the aggregation rule exists twice (once in the db, once in the ETL)
Issue*: certain ETLs are quite complex via lookups, functions, ... --> might create overly complex SQL in the view creation statements
My question: is there a way around the issue described? (I'm assuming the SQL in the view creation is necessary for Oracle to know when an aggregate can be used.)
Best practices & shared experiences are welcome as well, of course.
Kind regards,
Peter
streefpo wrote:
I'm still in the process of testing, but the drops should not be necessary.
Remember: the materialized view is nothing but a definition; the table itself continues to exist as before.
So as long as the definition doesn't change (added column, changed calculation, ...), the materialized view doesn't need to be re-created (as the data is not maintained by Oracle).
Thanks for reminding me, but if you find a documented approach I will be waiting, because this was the basis of my argument from the beginning.
SQL> select * from v$version ;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production
SQL> desc employees
Name Null? Type
EMPLOYEE_ID NOT NULL NUMBER(6)
FIRST_NAME VARCHAR2(20)
LAST_NAME NOT NULL VARCHAR2(25)
EMAIL NOT NULL VARCHAR2(25)
PHONE_NUMBER VARCHAR2(20)
HIRE_DATE NOT NULL DATE
JOB_ID NOT NULL VARCHAR2(10)
SALARY NUMBER(8,2)
COMMISSION_PCT NUMBER(2,2)
MANAGER_ID NUMBER(6)
DEPARTMENT_ID NUMBER(4)
SQL> select count(*) from employees ;
COUNT(*)
107
SQL> create table mv_table nologging as select department_id, sum(salary) as totalsal from employees group by department_id ;
Table created.
SQL> desc mv_table
Name Null? Type
DEPARTMENT_ID NUMBER(4)
TOTALSAL NUMBER
SQL> select count(*) from mv_table ;
COUNT(*)
12
SQL> create materialized view mv_table on prebuilt table with reduced precision enable query rewrite as select department_id, sum(salary) as totalsal from employees group by department_id ;
Materialized view created.
SQL> select count(*) from mv_table ;
COUNT(*)
12
SQL> select object_name, object_type from user_objects where object_name = 'MV_TABLE' ;
OBJECT_NAME OBJECT_TYPE
MV_TABLE TABLE
MV_TABLE MATERIALIZED VIEW
SQL> insert into mv_table values (999, 100) ;
insert into mv_table values (999, 100)
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view
SQL> update mv_table set totalsal = totalsal * 1.1 where department_id = 10 ;
update mv_table set totalsal = totalsal * 1.1 where department_id = 10
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view
SQL> delete from mv_table where totalsal <= 10000 ;
delete from mv_table where totalsal <= 10000
ERROR at line 1:
ORA-01732: data manipulation operation not legal on this view
While investigating for this thread I actually made my own question redundant, as the answer became gradually clear:
When using complex ETLs, I just need to make sure the complexity is located in the ETL loading the detailed table, not the aggregate.
I'll try to clarify through an example:
- A detailed Table DET_SALES exists with Sales per Day, Store & Product
- An aggregated table AGG_SALES_MM exists with Sales, SalesStore per Month, Store & Product
- An ETL exists to load AGG_SALES_MM where Sales = SUM(Sales) & SalesStore = (SUM(Sales) Across Store)
--> i.e. the SalesStore measure will be derived out of a lookup
- A (Prebuilt) Materialized View will exist with the same column definitions as the ETL
--> to allow query-rewrite to know when to access the table
My concern was how to include the SalesStore in the materialized view definition (--> complex SQL!)
--> I should actually include SalesStore in the DET_SALES table, thus:
- including the 'Across Store' function in the detailed ETL
- rendering my Aggregation ETL into a simple GROUP BY
- rendering my materialized view definition into a simple GROUP BY as well
Not sure how close your example is to your actual problem. Also don't know whether you are doing an incremental or complete data load, or what the data volume is.
But the "SalesStore = (SUM(Sales) Across Store)" can be derived from the aggregated MV using an analytic function. One could just create a normal view on top of the MV for querying. It is hard to believe that aggregating in the detail table during the ETL load is the best approach, but what do I know? -
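The analytic-function suggestion could be sketched like this, using the example's table name AGG_SALES_MM (the column names are assumed, so treat this as an illustration only):

```sql
-- Normal view on top of the aggregate table/MV.
-- sales_store = total sales across all stores for the month & product,
-- computed with an analytic SUM instead of a second ETL lookup.
CREATE OR REPLACE VIEW v_agg_sales_mm AS
SELECT month_id,
       store_id,
       product_id,
       sales,
       SUM(sales) OVER (PARTITION BY month_id, product_id) AS sales_store
FROM agg_sales_mm;
```

Queries then read from the view, while query rewrite still targets the underlying aggregate.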
I have the below network setup:-
1. It's a simple network at my father's office in a small town called Ichalkaranji (District - Kolhapur, Maharashtra).
2. We are using private network range 192.168.1.xxx with two Windows Server 2003 Enterprise Edition with SP2 licensed copies and 15 local Windows 7 clients who are only using Server A.
3. The network has a TP-Link broadband router connected to the internet with the IP 192.168.1.1.
4. Both these Windows Server 2003 Enterprise Edition with SP2 machines are running separate copies of SQL Server 2005 Express with Advanced Services; you can treat them as Server A (the problematic server, IP 192.168.1.2)
and Server B (which has no issue, IP 192.168.1.3).
5. Server A is also being used by 6 remote users from our Kolkata office using a DDNS facility through the No-IP client software, which is installed separately on both servers. Kolkata remote users
do not use or access Server B.
6. Server B is being used by only 2 remote users from our Erode office (under Salem District, Tamilnadu) using a DDNS facility through the No-IP client software, which is installed separately on both
servers. Erode remote users do not use or access Server A.
7. The front-end application, which runs separately on both servers, was developed in VB by a local vendor at Ichalkaranji (District - Kolhapur, Maharashtra).
8. Both servers have the same database structure in terms of design and table format. The only difference is that the two servers are used separately.
9. This error OR problem is only related to Server A, where on the clients we get the message "error [hyt00] [microsoft][odbc sql server driver] query timeout expired" every now and then.
10. We have to frequently reboot the server whenever we get this message on the client machines. After rebooting, everything may work perfectly for 2 hours / 5 hours / maybe one full day, but
the error will come back for sure.
11. The current database backup size on Server A is around 35 GB, and the daily backup takes around 1 hour 15 minutes.
12. The current database backup size on Server B is around 3 GB, and the daily backup takes around 5 to 10 minutes.
13. One thing I have noticed is that whenever we reboot Server A, for some time sqlservr.exe shows memory usage of 200 to 300 MB, but it slowly starts using more; what I understand is that
this is the way SQL Server works.
14. Both servers are also running separate licensed copies of Quick Heal Antivirus Server Edition.
15. Server B is also running a licensed copy of Tally ERP 9, which is being used locally by the same 15 users at Ichalkaranji (District - Kolhapur, Maharashtra).
Can anyone help to resolve this issue? Any help will be highly appreciated.
The error message "query timeout expired" occurs because, by default, many APIs, including ODBC, only wait 30 seconds for SQL Server to send any output. If no data has been seen for this period of time, they tell SQL Server to cancel execution
and return this error to the caller.
This timeout could be seen as a token that the query is taking too long to execute and that this needs to be fixed. But it can also be a pain in the rear parts, if you are content with a report taking five minutes, because you only run it once a day.
The simplest way to get rid of the error is to set the timeout to zero, which means "wait forever". This is something your vendor would have to do. This may, however, not resolve the problem, as the users may just find that the application is hanging.
To wit, there are two reasons why the query takes more than 30 seconds to complete. One is that there is simply that much work to do. This can be reduced by adding indexes or by doing other tuning, if the execution time is not acceptable. The other possibility
is blocking. That is, there is a process blocking this query from completing. This is much more likely to require attention.
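To check the blocking possibility described above, the dynamic management views (available from SQL Server 2005 on) can be queried while a client is stuck; a minimal sketch:

```sql
-- Requests that are currently blocked, and who blocks them
-- (run this while the timeout is occurring).
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```

If this returns rows, the blocking session's ID points at the process that needs attention; if it is empty while the query still times out, tuning or indexing is the more likely fix.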
It is not clear to me whether the vendor has developed the database part as well. If so, you should probably call the vendor's support desk to have them sort this out.
Finally, I am a little puzzled. You say that you are using Express Edition, but one of the databases is 35 GB in size. 35 GB is far above the limit for Express Edition.
Erland Sommarskog, SQL Server MVP, [email protected]