Best way to write the following query
Hi,
I have the following table structures and data.
I've written a query to get the records whose rating is greater than 'BBB-'.
Could you please hint at a simpler way to write it?
create table obj (ob_id )
as select 1 from dual union all
select 2 from dual union all
select 3 from dual union all
select 4 from dual union all
select 5 from dual union all
select 6 from dual
create table og_dt (or_id , rt_cd,rt_ct_cd)
AS SELECT 1 ,'B','BRID' FROM DUAL UNION ALL
SELECT 1 ,'B','BRD' FROM DUAL UNION ALL
SELECT 2 ,'BB-','ACR' FROM DUAL UNION ALL
SELECT 2 ,'BB-','AQCR' FROM DUAL UNION ALL
SELECT 3 ,'BBB','QYRE' FROM DUAL UNION ALL
SELECT 4 ,'BB+','TUR' FROM DUAL UNION ALL
SELECT 5 ,'BBB-','KUYR' FROM DUAL
create table rt_srt (srt_ord,rt_cd,rt_ct_cd)
as select 50 ,'B','VID' FROM DUAL UNION ALL
SELECT 50 ,'B','BRD' FROM DUAL UNION ALL
SELECT 40 ,'BB-','ACR' FROM DUAL UNION ALL
SELECT 41 ,'BB-','AQCR' FROM DUAL UNION ALL
SELECT 30 ,'BBB','QYRE' FROM DUAL UNION ALL
SELECT 33 ,'BB+','TUR' FROM DUAL UNION ALL
SELECT 20 ,'BBB-','KUYR' FROM DUAL
select distinct *
from obj, og_dt, rt_srt
where obj.ob_id = og_dt.or_id
and og_dt.rt_cd = rt_srt.rt_cd
and og_dt.rt_ct_cd = rt_srt.rt_ct_cd
and rt_srt.srt_ord > all (select srt_ord from rt_srt
                          where rt_cd = 'BBB-')
I've used the table rt_srt twice in the above query.
Could you please hint at a simpler way to write it?
Thank you
Here are execution plans for 3 possible solutions (including the one you posted). The second and third solutions assume rt_srt.srt_ord is not null:
SQL> explain plan for
2 select distinct *
3 from obj,
4 og_dt,
5 rt_srt
6 where obj.ob_id = og_dt.or_id
7 and og_dt.rt_cd = rt_srt.rt_cd
8 and og_dt.rt_ct_cd = rt_srt.rt_ct_cd
9 and rt_srt.srt_ord > all (
10 select rt_srt.srt_ord
11 from rt_srt
12 where rt_cd = 'BBB-'
13 )
14 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3210303028
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 7 | 504 | 16 (25)| 00:00:01 |
| 1 | HASH UNIQUE | | 7 | 504 | 16 (25)| 00:00:01 |
| 2 | MERGE JOIN ANTI NA | | 7 | 504 | 15 (20)| 00:00:01 |
| 3 | SORT JOIN | | 7 | 385 | 11 (19)| 00:00:01 |
|* 4 | HASH JOIN | | 7 | 385 | 10 (10)| 00:00:01 |
|* 5 | HASH JOIN | | 7 | 238 | 7 (15)| 00:00:01 |
PLAN_TABLE_OUTPUT
| 6 | TABLE ACCESS FULL| OBJ | 6 | 78 | 3 (0)| 00:00:01 |
| 7 | TABLE ACCESS FULL| OG_DT | 7 | 147 | 3 (0)| 00:00:01 |
| 8 | TABLE ACCESS FULL | RT_SRT | 7 | 147 | 3 (0)| 00:00:01 |
|* 9 | SORT UNIQUE | | 1 | 17 | 4 (25)| 00:00:01 |
|* 10 | TABLE ACCESS FULL | RT_SRT | 1 | 17 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("OG_DT"."RT_CD"="RT_SRT"."RT_CD" AND
PLAN_TABLE_OUTPUT
"OG_DT"."RT_CT_CD"="RT_SRT"."RT_CT_CD")
5 - access("OBJ"."OB_ID"="OG_DT"."OR_ID")
9 - access("RT_SRT"."SRT_ORD"<="RT_SRT"."SRT_ORD")
filter("RT_SRT"."SRT_ORD"<="RT_SRT"."SRT_ORD")
10 - filter("RT_CD"='BBB-')
Note
- dynamic sampling used for this statement (level=2)
31 rows selected.
SQL> explain plan for
2 select distinct *
3 from obj,
4 og_dt,
5 rt_srt
6 where obj.ob_id = og_dt.or_id
7 and og_dt.rt_cd = rt_srt.rt_cd
8 and og_dt.rt_ct_cd = rt_srt.rt_ct_cd
9 and rt_srt.srt_ord > (
10 select max(rt_srt.srt_ord)
11 from rt_srt
12 where rt_cd = 'BBB-'
13 )
14 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 3391900174
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 55 | 14 (15)| 00:00:01 |
| 1 | HASH UNIQUE | | 1 | 55 | 14 (15)| 00:00:01 |
|* 2 | HASH JOIN | | 1 | 55 | 10 (10)| 00:00:01 |
| 3 | MERGE JOIN CARTESIAN| | 2 | 68 | 6 (0)| 00:00:01 |
|* 4 | TABLE ACCESS FULL | RT_SRT | 1 | 21 | 3 (0)| 00:00:01 |
| 5 | SORT AGGREGATE | | 1 | 17 | | |
PLAN_TABLE_OUTPUT
|* 6 | TABLE ACCESS FULL| RT_SRT | 1 | 17 | 3 (0)| 00:00:01 |
| 7 | BUFFER SORT | | 6 | 78 | 3 (0)| 00:00:01 |
| 8 | TABLE ACCESS FULL | OBJ | 6 | 78 | 3 (0)| 00:00:01 |
| 9 | TABLE ACCESS FULL | OG_DT | 7 | 147 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("OBJ"."OB_ID"="OG_DT"."OR_ID" AND
"OG_DT"."RT_CD"="RT_SRT"."RT_CD" AND
PLAN_TABLE_OUTPUT
"OG_DT"."RT_CT_CD"="RT_SRT"."RT_CT_CD")
4 - filter("RT_SRT"."SRT_ORD"> (SELECT MAX("RT_SRT"."SRT_ORD") FROM
"RT_SRT" "RT_SRT" WHERE "RT_CD"='BBB-'))
6 - filter("RT_CD"='BBB-')
Note
- dynamic sampling used for this statement (level=2)
30 rows selected.
SQL> explain plan for
2 select distinct obj.*,
3 og_dt.*,
4 rt_srt.srt_ord,
5 rt_srt.rt_cd,
6 rt_srt.rt_ct_cd
7 from obj,
8 og_dt,
9 (
10 select t.*,
11 max(case rt_cd when 'BBB-' then srt_ord end) over() max_srt_ord
12 from rt_srt t
13 ) rt_srt
14 where obj.ob_id = og_dt.or_id
15 and og_dt.rt_cd = rt_srt.rt_cd
16 and og_dt.rt_ct_cd = rt_srt.rt_ct_cd
17 and rt_srt.srt_ord > max_srt_ord
18 /
Explained.
SQL> @?\rdbms\admin\utlxpls
PLAN_TABLE_OUTPUT
Plan hash value: 998396165
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 7 | 476 | 11 (19)| 00:00:01 |
| 1 | HASH UNIQUE | | 7 | 476 | 11 (19)| 00:00:01 |
|* 2 | HASH JOIN | | 7 | 476 | 10 (10)| 00:00:01 |
|* 3 | HASH JOIN | | 7 | 238 | 7 (15)| 00:00:01 |
| 4 | TABLE ACCESS FULL | OBJ | 6 | 78 | 3 (0)| 00:00:01 |
| 5 | TABLE ACCESS FULL | OG_DT | 7 | 147 | 3 (0)| 00:00:01 |
PLAN_TABLE_OUTPUT
|* 6 | VIEW | | 7 | 238 | 3 (0)| 00:00:01 |
| 7 | WINDOW BUFFER | | 7 | 147 | 3 (0)| 00:00:01 |
| 8 | TABLE ACCESS FULL| RT_SRT | 7 | 147 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("OG_DT"."RT_CD"="RT_SRT"."RT_CD" AND
"OG_DT"."RT_CT_CD"="RT_SRT"."RT_CT_CD")
3 - access("OBJ"."OB_ID"="OG_DT"."OR_ID")
PLAN_TABLE_OUTPUT
6 - filter("RT_SRT"."SRT_ORD">"MAX_SRT_ORD")
Note
- dynamic sampling used for this statement (level=2)
27 rows selected.
SQL> SY.
Edited by: Solomon Yakobson on May 7, 2012 4:46 PM
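For anyone who wants to sanity-check the rewrites outside Oracle, here is a rough sketch using Python's sqlite3 (not Oracle, and SQLite has no > ALL, so only the MAX-subquery and analytic forms are compared). It rebuilds the sample data and confirms both rewrites return the same rows:

```python
# Illustrative only: SQLite stands in for Oracle here. Assumes
# rt_srt.srt_ord is not null, as noted above.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE obj (ob_id INTEGER);
INSERT INTO obj VALUES (1),(2),(3),(4),(5),(6);
CREATE TABLE og_dt (or_id INTEGER, rt_cd TEXT, rt_ct_cd TEXT);
INSERT INTO og_dt VALUES (1,'B','BRID'),(1,'B','BRD'),(2,'BB-','ACR'),
                         (2,'BB-','AQCR'),(3,'BBB','QYRE'),(4,'BB+','TUR'),
                         (5,'BBB-','KUYR');
CREATE TABLE rt_srt (srt_ord INTEGER, rt_cd TEXT, rt_ct_cd TEXT);
INSERT INTO rt_srt VALUES (50,'B','VID'),(50,'B','BRD'),(40,'BB-','ACR'),
                          (41,'BB-','AQCR'),(30,'BBB','QYRE'),(33,'BB+','TUR'),
                          (20,'BBB-','KUYR');
""")

# Solution 2: scalar MAX subquery
max_rewrite = """
SELECT DISTINCT obj.ob_id, og_dt.rt_cd, rt_srt.srt_ord
FROM obj, og_dt, rt_srt
WHERE obj.ob_id = og_dt.or_id
  AND og_dt.rt_cd = rt_srt.rt_cd
  AND og_dt.rt_ct_cd = rt_srt.rt_ct_cd
  AND rt_srt.srt_ord > (SELECT MAX(srt_ord) FROM rt_srt WHERE rt_cd = 'BBB-')
"""

# Solution 3: analytic MAX, reading rt_srt only once
analytic_rewrite = """
SELECT DISTINCT obj.ob_id, og_dt.rt_cd, rt_srt.srt_ord
FROM obj, og_dt,
     (SELECT t.*,
             MAX(CASE rt_cd WHEN 'BBB-' THEN srt_ord END) OVER () AS max_srt_ord
        FROM rt_srt t) rt_srt
WHERE obj.ob_id = og_dt.or_id
  AND og_dt.rt_cd = rt_srt.rt_cd
  AND og_dt.rt_ct_cd = rt_srt.rt_ct_cd
  AND rt_srt.srt_ord > rt_srt.max_srt_ord
"""

rows_max = sorted(con.execute(max_rewrite).fetchall())
rows_win = sorted(con.execute(analytic_rewrite).fetchall())
print(rows_max)
assert rows_max == rows_win
```

Only the BBB- row itself (srt_ord 20) is filtered out; everything ranked above it survives.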
Similar Messages
-
Can you write the following query without using a group by clause?
select sp.sid, p.pid, p.name from product p, supp_prod sp
where sp.pid= p.pid and
sp.sid = ( select sid from supp_prod group by sid
having count(*) =(select count(*) from product));
Through this we are retrieving all the products delivered by the supplier.
Can you write the following query without using the group by clause?
select sp.sid, p.pid, p.name
from product p, supp_prod sp
where sp.pid= p.pid
The above query will still retrieve all the products supplied by the supplier; the sub-query is not necessary.
Maybe if you can post some sample data and expected output it will help us understand what you want to achieve. -
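For what it's worth, the "supplier who supplies all products" requirement behind that HAVING COUNT(*) subquery can also be written without GROUP BY using classic relational division (a double NOT EXISTS). A sketch in Python/SQLite with invented sample data:

```python
# Hypothetical data: supplier 10 supplies every product, supplier 20 does not.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE product (pid INTEGER, name TEXT);
INSERT INTO product VALUES (1,'bolt'),(2,'nut');
CREATE TABLE supp_prod (sid INTEGER, pid INTEGER);
INSERT INTO supp_prod VALUES (10,1),(10,2),(20,1);
""")

q = """
SELECT sp.sid, p.pid, p.name
FROM product p JOIN supp_prod sp ON sp.pid = p.pid
WHERE NOT EXISTS (
        SELECT 1 FROM product p2
        WHERE NOT EXISTS (
            SELECT 1 FROM supp_prod sp2
            WHERE sp2.sid = sp.sid AND sp2.pid = p2.pid))
"""
rows = sorted(con.execute(q).fetchall())
print(rows)  # only supplier 10 qualifies
```

The inner NOT EXISTS finds a product the supplier is missing; the outer one keeps only suppliers with no such product.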
Write the following query using JOIN keyword ?
Question:
Write the following query using the JOIN keyword.
select ID_category from category_master X
where exists (select 1 from sub_category Y where
X.ID_category=Y.ID_category)
Edited by: 799660 on Oct 3, 2010 6:05 AM
select X.ID_category
from category_master X join (select distinct ID_category from sub_category) Y
on (X.ID_category=Y.ID_category)
/
SY. -
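A quick sanity check of the two forms (SQLite here rather than Oracle, with invented data, including a category that has several sub-categories so the DISTINCT in the inline view matters):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE category_master (ID_category INTEGER);
INSERT INTO category_master VALUES (1),(2),(3);
CREATE TABLE sub_category (ID_category INTEGER);
INSERT INTO sub_category VALUES (1),(1),(3);  -- category 1 appears twice
""")

exists_q = """
SELECT ID_category FROM category_master X
WHERE EXISTS (SELECT 1 FROM sub_category Y
              WHERE X.ID_category = Y.ID_category)
"""
join_q = """
SELECT X.ID_category
FROM category_master X
JOIN (SELECT DISTINCT ID_category FROM sub_category) Y
  ON X.ID_category = Y.ID_category
"""
a = sorted(con.execute(exists_q).fetchall())
b = sorted(con.execute(join_q).fetchall())
print(a)
assert a == b  # no duplicate for category 1 in either form
```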
Best way to show the following data?
Hi Experts,
I am designing a validation solution in java webdynpro.
I need to process 1000 records against certain logic and show the errors in the interface. Each record has more than 30 columns. I would like to get your suggestions on:
1. How to display 1000 records with nearly 30 columns? Is a normal table with scrolling enough, or should I split them into 2 or 3 tabs based on some grouping?
2. What would be the best way to show the error messages on these records? Is it advisable to show the error messages in a tooltip? Is it possible to show a tooltip for each record in a table?
Any suggestions would be really helpful for my work...
Thanks in advance..
Regards,
Senthil
Edited by: Senthilkumar.Paulchamy on Jun 23, 2011 9:18 PM
Hi,
Depending on your data, I would use a TableColumnGroup to group your 30 columns under a common header (displaying only the most significant data).
Expanding the row would then show all 30 columns' data
To display the errors, I would display the common header rows in bold red to make them stand out from the non-erroneous rows.
Furthermore, I would add a toggle button, to alternate between 'all records' and 'erroneous records only'
Best,
Robin -
What's the best way to model the following requirements in a data model?
I need a data model to represent the following scenario but i just can't get my head around it.
Class A has many Class B's. Class B's can have many Class C's and vice versa. So ultimately the relationship for the above 3 classes is as follows:
Class A -- (1 : M) --> Class B --- (M : M) ---> Class C
And Class C must know about its parent referenced (Class B) and also same applies to Class B where it needs to know who owns it (Class A).
Whats the best way to construct this? In a Tree? Graph?
Or just simply:
Class A has list of Class B and Class B has list of Class C.
But wouldn't this create tight dependencies?
Theresonly1 wrote:
Basically:
A's own multiple B's (B's need to know who owns them), BUT a B can only be owned by one A.
B's own multiple C's AND C's can be owned by multiple B's (this is a many-to-many relationship, right?). Again, C's need to know who owns them.
I'd reckon that you'd need some references. First, figure out the names of each tier of class. I would say maybe A is a principal/school, B's are teachers (because typically teachers only teach under one principal / in one school), and C's are students (because teachers can have multiple students, and each student has multiple teachers).
Now that you have the names, make some base classes. If I understand your problem correctly, A's really don't need to know who they own, but B's need to know who owns them, so A's don't need much of anything. B's have a reference to the A that owns them, but not much else, because they don't need to know who they own. C's own nothing, but they are owned by multiple B's, so they have an array of references to each of the B's that own them. I'd use an ArrayList, since each could have a different number of B's, but you could do it with an array if you tried. I'll leave the implementation up to you, but here is a guide to how I might do it:
public class Principal{
}
public class Teacher{
    public Principal owner;
    public Teacher(Principal owner){
        this.owner = owner;
    }
}
public class Student{
    public Teacher[] owners;
    public Student(Teacher... owners){
        this.owners = owners;
    }
    public void addOwner(Teacher newOwner){
        //basically copy and paste the old array into a temporary one
        //with an extra spot, add newOwner to the end of that, then
        //make the owners array reference that array.
    }
}
In Student, the Teacher... parameter is how you allow an undetermined number of arguments, and it comes in as an array. I hope this helps you! -
Hi just want to know where is the best way to type the statement?
Just asking: what is the best way to write the statement, please?
Could you please explain what the topic and statement is about?
Helps to narrow the focus, down to a specific hardware or software
question pertaining to the Apple products involved, where applicable.
Basic word processing can be accomplished in TextEdit, if you do
not have Pages or any other software to write with, for example.
Yet, there is no indication of the purpose, direction, or problem.
Good luck & happy computing! -
What is the best way to write 10 channels of data each sampled at 4kHz to file?
Hi everyone,
I have developed a vi with about 8 AI channels and 2 AO channels... The vi uses a number of parallel while loops to acquire, process, and display continuous data. All data are read at 400 points per loop iteration and all synchronously sampled at 4kHz...
My question is: which is the best way of writing the data to file, the "Write Measurement To File.vi" or the low-level "open/create file" and "close file" functions? From my understanding there are limitations with both approaches, which I have outlined below.
The "Write Measurement To File.vi" is simple to use and closes the file after each iteration, so if the program crashes not all data would necessarily be lost; however, the fact that it closes and opens the file on each iteration consumes the processor and takes time... This may cause lags or data to be lost, which I absolutely do not want.
The low-level "open/create file" and "close file" functions involve a bit more coding, but do not require the file to be closed/opened after each iteration, so processor consumption is reduced and the associated lag due to continuous open/close operations will not occur. However, if the program crashes while data is being acquired, ALL data in the buffer yet to be written will be lost... This is risky to me...
Does anyone have any comments or suggestions about which way I should go?... At the end of the day, I want to be able to start/stop the write-to-file process within a running while loop... To do this, can the open/create file and close file functions even be used (as they will need to be inside a while loop)?
I think I am OK with the coding... I just need some help clarifying which direction I should go and the pros and cons of each...
Regards,
Jack
Attachments:
TMS [PXI] FINAL DONE.vi 338 KB
One thing you have not mentioned is how you are consuming the data after you save it. Your solution should be compatible with whatever software you are using at both ends.
Your data rate (40kS/s) is relatively slow. You can achieve it using just about any format from ASCII, to raw binary and TDMS, provided you keep your file open and close operations out of the write loop. I would recommend a producer/consumer architecture to decouple the data collection from the data writing. This may not be necessary at the low rates you are using, but it is good practice and would enable you to scale to hardware limited speeds.
TDMS was designed for logging and is a safe format (<fullDisclosure> I am a National Instruments employee </fullDisclosure> ). If you are worried about power failures, you should flush it after every write operation, since TDMS can buffer data and write it in larger chunks to give better performance and smaller file sizes. This will make it slower, but should not be an issue at your write speeds. Make sure you read up on the use of TDMS and how and when it buffers data so you can make sure your implementation does what you would like it to do.
If you have further questions, let us know.
This account is no longer active. Contact ShadesOfGray for current posts and information. -
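The producer/consumer decoupling suggested above can be sketched in a few lines of Python (file name and data are invented; in LabVIEW this would be two loops connected by a queue). The acquisition loop only enqueues; a single consumer owns the file, opening it once and flushing after each write to bound data loss:

```python
import os
import queue
import tempfile
import threading

q = queue.Queue()
path = os.path.join(tempfile.gettempdir(), "acq_demo.csv")

def consumer():
    with open(path, "w") as f:          # opened once, not per iteration
        while True:
            chunk = q.get()
            if chunk is None:           # sentinel: stop writing
                break
            f.write(",".join(map(str, chunk)) + "\n")
            f.flush()                   # bound data loss on a crash

t = threading.Thread(target=consumer)
t.start()
for i in range(3):                      # stands in for the DAQ read loop
    q.put([i, i * 2, i * 3])
q.put(None)
t.join()

lines = open(path).read().splitlines()
print(lines)
```

The same shape scales to hardware-limited rates because the writer never blocks the acquisition loop.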
I am moving from PC to Mac. My PC has two internal drives and I have a 3Tb external. What is the best way to move the data from the internal drives to the Mac, and the best way to make the external drive read/write without losing data?
Paragon even has a non-destructive conversion utility if you do want to change the drive.
Hard to imagine using 3TB that isn't NTFS. The Mac uses GPT as the default partition type, as well as HFS+.
www.paragon-software.com
Some general Apple Help www.apple.com/support/
Also,
Mac OS X Help
http://www.apple.com/support/macbasics/
Isolating Issues in Mac OS
http://support.apple.com/kb/TS1388
https://www.apple.com/support/osx/
https://www.apple.com/support/quickassist/
http://www.apple.com/support/mac101/help/
http://www.apple.com/support/mac101/tour/
Get Help with your Product
http://docs.info.apple.com/article.html?artnum=304725
Apple Mac App Store
https://discussions.apple.com/community/mac_app_store/using_mac_apple_store
How to Buy Mac OS X Mountain Lion/Lion
http://www.apple.com/osx/how-to-upgrade/
TimeMachine 101
https://support.apple.com/kb/HT1427
http://www.apple.com/support/timemachine
Mac OS X Community
https://discussions.apple.com/community/mac_os -
What is the best way to Optimize a SQL query : call a function or do a join?
Hi, I want to know the best way to optimize a SQL query: call a function inside the SELECT statement, or do a simple join?
Hi,
If you're even considering a join, then it will probably be faster. As Justin said, it depends on lots of factors.
A user-defined function is only necessary when you can't figure out how to do something in pure SQL, using joins and built-in functions.
You might choose to have a user-defined function even though you could get the same result with a join. That is, you realize that the function is slow, but you believe that the convenience of using a function is more important than better performance in that particular case. -
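As a toy illustration of the trade-off (SQLite, with a Python dict lookup standing in for a PL/SQL user-defined function; tables are invented): both forms return the same data, but the function is invoked once per row, while the join is processed as a set and can be optimized as a whole:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE emp (id INTEGER, dept_id INTEGER);
INSERT INTO emp VALUES (1, 10), (2, 20);
CREATE TABLE dept (id INTEGER, name TEXT);
INSERT INTO dept VALUES (10, 'SALES'), (20, 'HR');
""")

# A scalar function called per row -- the analogue of a PL/SQL function
lookup = dict(con.execute("SELECT id, name FROM dept"))
con.create_function("dept_name", 1, lambda d: lookup[d])

fn_rows = sorted(con.execute("SELECT id, dept_name(dept_id) FROM emp"))
join_rows = sorted(con.execute(
    "SELECT e.id, d.name FROM emp e JOIN dept d ON d.id = e.dept_id"))
print(fn_rows)
assert fn_rows == join_rows   # same result, different execution shape
```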
Hi,
Please suggest the best way to fetch the records from the table designed below. It is Oracle 10gR2 on Linux.
Whenever a client visits the office a record is created for him. The company policy is to maintain 10 years of data in the transaction table, and the table accumulates about 3 million records per year.
The table has the following key columns for the select (sample table):
Client_Visit
ID Number(12,0) --sequence-generated number
EFF_DTE DATE --effective date of the customer (sometimes the client becomes invalid and then valid again)
Create_TS Timestamp(6)
Client_ID Number(9,0)
Cascade_Flg varchar2(1)
On most of the reports the records are fetched by Max(eff_dte) and Max(create_ts) and cascade flag = 'Y'.
I have the following queries, but both of them are not cost effective and take 8 minutes to display the records.
Code 1:
SELECT au_subtyp1.au_id_k,
       au_subtyp1.pgm_struct_id_k
  FROM au_subtyp au_subtyp1
 WHERE au_subtyp1.create_ts =
          (SELECT MAX (au_subtyp2.create_ts)
             FROM au_subtyp au_subtyp2
            WHERE au_subtyp2.au_id_k = au_subtyp1.au_id_k
              AND au_subtyp2.create_ts <
                     TO_DATE ('2013-01-01', 'YYYY-MM-DD')
              AND au_subtyp2.eff_dte =
                     (SELECT MAX (au_subtyp3.eff_dte)
                        FROM au_subtyp au_subtyp3
                       WHERE au_subtyp3.au_id_k = au_subtyp2.au_id_k
                         AND au_subtyp3.create_ts <
                                TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                         AND au_subtyp3.eff_dte <=
                                TO_DATE ('2012-12-31', 'YYYY-MM-DD')))
   AND au_subtyp1.exists_flg = 'Y'
Explain Plan
Plan hash value: 2534321861
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 91 | | 33265 (2)| 00:06:40 |
|* 1 | FILTER | | | | | | |
| 2 | HASH GROUP BY | | 1 | 91 | | 33265 (2)| 00:06:40 |
|* 3 | HASH JOIN | | 1404K| 121M| 19M| 33178 (1)| 00:06:39 |
|* 4 | HASH JOIN | | 307K| 16M| 8712K| 23708 (1)| 00:04:45 |
| 5 | VIEW | VW_SQ_1 | 307K| 5104K| | 13493 (1)| 00:02:42 |
| 6 | HASH GROUP BY | | 307K| 13M| 191M| 13493 (1)| 00:02:42 |
|* 7 | INDEX FULL SCAN | AUSU_PK | 2809K| 125M| | 13493 (1)| 00:02:42 |
|* 8 | INDEX FAST FULL SCAN| AUSU_PK | 2809K| 104M| | 2977 (2)| 00:00:36 |
|* 9 | TABLE ACCESS FULL | AU_SUBTYP | 1404K| 46M| | 5336 (2)| 00:01:05 |
Predicate Information (identified by operation id):
1 - filter("AU_SUBTYP1"."CREATE_TS"=MAX("AU_SUBTYP2"."CREATE_TS"))
3 - access("AU_SUBTYP2"."AU_ID_K"="AU_SUBTYP1"."AU_ID_K")
4 - access("AU_SUBTYP2"."EFF_DTE"="VW_COL_1" AND "AU_ID_K"="AU_SUBTYP2"."AU_ID_K")
7 - access("AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
filter("AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND
"AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
8 - filter("AU_SUBTYP2"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
9 - filter("AU_SUBTYP1"."EXISTS_FLG"='Y')
Code 2:
I raised a thread a week back and Dom suggested the following query; it is cost effective, but the performance is the same and it used the same amount of temp tablespace.
select au_id_k,pgm_struct_id_k from (
SELECT au_id_k
, pgm_struct_id_k
, ROW_NUMBER() OVER (PARTITION BY au_id_k ORDER BY eff_dte DESC, create_ts DESC) rn,
create_ts, eff_dte,exists_flg
FROM au_subtyp
WHERE create_ts < TO_DATE('2013-01-01','YYYY-MM-DD')
AND eff_dte <= TO_DATE('2012-12-31','YYYY-MM-DD')
) d where rn =1 and exists_flg = 'Y'
--Explain Plan
Plan hash value: 4039566059
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2809K| 168M| | 40034 (1)| 00:08:01 |
|* 1 | VIEW | | 2809K| 168M| | 40034 (1)| 00:08:01 |
|* 2 | WINDOW SORT PUSHED RANK| | 2809K| 133M| 365M| 40034 (1)| 00:08:01 |
|* 3 | TABLE ACCESS FULL | AU_SUBTYP | 2809K| 133M| | 5345 (2)| 00:01:05 |
Predicate Information (identified by operation id):
1 - filter("RN"=1 AND "EXISTS_FLG"='Y')
2 - filter(ROW_NUMBER() OVER ( PARTITION BY "AU_ID_K" ORDER BY
INTERNAL_FUNCTION("EFF_DTE") DESC ,INTERNAL_FUNCTION("CREATE_TS") DESC )<=1)
3 - filter("CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND "EFF_DTE"<=TO_DATE('
2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
Thanks,
Vijay
Hi Justin,
Thanks for your reply. I am running this on our test environment, as I don't want to run it on the production environment now. The test environment holds 2809605 records (about 2.8 million).
The query output count is 281699 records and the selectivity is 0.099. The number of distinct (create_ts, eff_dte, exists_flg) values is 2808905. I am sure the index scan is not going to help much, as you said.
The core problem is that both queries use a lot of temp tablespace. When we use this query to join the tables, the other table has the same design as below, so the temp tablespace grows even bigger.
Both the production and test environment are 3 Node RAC.
First Query...
CPU used by this session 4740
CPU used when call started 4740
Cached Commit SCN referenced 21393
DB time 4745
OS Involuntary context switches 467
OS Page reclaims 64253
OS System time used 26
OS User time used 4562
OS Voluntary context switches 16
SQL*Net roundtrips to/from client 9
bytes received via SQL*Net from client 2487
bytes sent via SQL*Net to client 15830
calls to get snapshot scn: kcmgss 37
consistent gets 52162
consistent gets - examination 2
consistent gets from cache 52162
enqueue releases 19
enqueue requests 19
enqueue waits 1
execute count 2
ges messages sent 1
global enqueue gets sync 19
global enqueue releases 19
index fast full scans (full) 1
index scans kdiixs1 1
no work - consistent read gets 52125
opened cursors cumulative 2
parse count (hard) 1
parse count (total) 2
parse time cpu 1
parse time elapsed 1
physical write IO requests 69
physical write bytes 17522688
physical write total IO requests 69
physical write total bytes 17522688
physical write total multi block requests 69
physical writes 2139
physical writes direct 2139
physical writes direct temporary tablespace 2139
physical writes non checkpoint 2139
recursive calls 19
recursive cpu usage 1
session cursor cache hits 1
session logical reads 52162
sorts (memory) 2
sorts (rows) 760
table scan blocks gotten 23856
table scan rows gotten 2809607
table scans (short tables) 1
user I/O wait time 1
user calls 11
workarea executions - onepass 1
workarea executions - optimal 9
Second Query
CPU used by this session 1197
CPU used when call started 1197
Cached Commit SCN referenced 21393
DB time 1201
OS Involuntary context switches 8684
OS Page reclaims 21769
OS System time used 14
OS User time used 1183
OS Voluntary context switches 50
SQL*Net roundtrips to/from client 9
bytes received via SQL*Net from client 767
bytes sent via SQL*Net to client 15745
calls to get snapshot scn: kcmgss 17
consistent gets 23871
consistent gets from cache 23871
db block gets 16
db block gets from cache 16
enqueue releases 25
enqueue requests 25
enqueue waits 1
execute count 2
free buffer requested 1
ges messages sent 1
global enqueue get time 1
global enqueue gets sync 25
global enqueue releases 25
no work - consistent read gets 23856
opened cursors cumulative 2
parse count (hard) 1
parse count (total) 2
parse time elapsed 1
physical read IO requests 27
physical read bytes 6635520
physical read total IO requests 27
physical read total bytes 6635520
physical read total multi block requests 27
physical reads 810
physical reads direct 810
physical reads direct temporary tablespace 810
physical write IO requests 117
physical write bytes 24584192
physical write total IO requests 117
physical write total bytes 24584192
physical write total multi block requests 117
physical writes 3001
physical writes direct 3001
physical writes direct temporary tablespace 3001
physical writes non checkpoint 3001
recursive calls 25
session cursor cache hits 1
session logical reads 23887
sorts (disk) 1
sorts (memory) 2
sorts (rows) 2810365
table scan blocks gotten 23856
table scan rows gotten 2809607
table scans (short tables) 1
user I/O wait time 2
user calls 11
workarea executions - onepass 1
workarea executions - optimal 5
Thanks,
Vijay
Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:17 AM
Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:19 AM -
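As a side note, the ROW_NUMBER() pattern in Code 2 (latest eff_dte, then latest create_ts, per key) can be sketched on a toy data set. SQLite stands in for Oracle here, and the table values are invented since the real data isn't available:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE au_subtyp (au_id_k INT, pgm_struct_id_k INT,
                        eff_dte TEXT, create_ts TEXT, exists_flg TEXT);
INSERT INTO au_subtyp VALUES
 (1, 100, '2012-01-01', '2012-01-01 10:00', 'Y'),
 (1, 101, '2012-06-01', '2012-06-01 09:00', 'Y'),  -- latest row for key 1
 (2, 200, '2012-03-01', '2012-03-01 08:00', 'N');  -- dropped by exists_flg
""")

q = """
SELECT au_id_k, pgm_struct_id_k FROM (
  SELECT au_id_k, pgm_struct_id_k, exists_flg,
         ROW_NUMBER() OVER (PARTITION BY au_id_k
                            ORDER BY eff_dte DESC, create_ts DESC) rn
  FROM au_subtyp
  WHERE create_ts < '2013-01-01' AND eff_dte <= '2012-12-31'
) d WHERE rn = 1 AND exists_flg = 'Y'
"""
rows = con.execute(q).fetchall()
print(rows)  # one latest row per au_id_k, then filtered on exists_flg
```

One full scan plus one window sort, versus the correlated MAX subqueries of Code 1.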
Best way to write SELECT statement
Hi,
I am selecting fields from one table, and need to use two fields on that table to look up additional fields in two other tables.
I do not want to use a VIEW to do this.
I need to keep all records in the original selection, yet I've been told that it's not good practice to use LEFT OUTER joins. What I really need to do is multiple LEFT OUTER joins.
What is the best way to write this? Please reply with actual code.
I could use 3 internal tables, where the second 2 use "FOR ALL ENTRIES" to obtain the additional data. But then how do I append the 2 internal tables back to the first? I've been told it's bad practice to use nested loops as well.
Thanks.
Hi,
In your case you have 2 internal tables and need to use one to update the other.
Do the following steps:
*get the records from the tables
sort: itab1 by keyfield,  "Sorting by the key is very important
      itab2 by keyfield.  "Same key that is used in the where condition
loop at itab1 into wa_tab1.
  read table itab2 into wa_tab2  "This sets sy-tabix
       with key keyfield = wa_tab1-keyfield
       binary search.
  if sy-subrc = 0.  "Skip the inner loop when there is no match
    v_kna1_index = sy-tabix.
    loop at itab2 into wa_tab2 from v_kna1_index.  "Avoiding a where clause
      if wa_tab2-keyfield <> wa_tab1-keyfield.  "Exit once past the matching rows
        exit.
      endif.
****** Your actual logic within the inner loop ******
    endloop.  "itab2 loop
  endif.
endloop.  "itab1 loop
Refer the link also you can get idea about the Parallel Cursor - Loop Processing.
http://wiki.sdn.sap.com/wiki/display/Snippets/CopyofABAPCodeforParallelCursor-Loop+Processing
Regards,
Dhina.. -
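The parallel-cursor technique above can also be sketched outside ABAP. Here is a rough Python equivalent with invented sample data: bisect stands in for READ ... BINARY SEARCH, and the inner scan stops as soon as the key changes, just like the EXIT in the ABAP loop:

```python
from bisect import bisect_left

itab1 = sorted([("A", 1), ("B", 2)])              # (keyfield, payload)
itab2 = sorted([("A", 10), ("A", 11), ("B", 20)])
keys2 = [row[0] for row in itab2]                 # sorted keys for bisect

matches = []
for key, payload in itab1:
    i = bisect_left(keys2, key)                   # the BINARY SEARCH step
    while i < len(itab2) and itab2[i][0] == key:  # LOOP ... FROM index
        matches.append((key, payload, itab2[i][1]))
        i += 1                                    # EXIT when the key changes
print(matches)
```

Each inner row is visited at most once across the whole outer loop, which is why this beats a nested loop with a WHERE clause.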
Whats the best way to get the server name in a servlet deployed to a cluster?
Hi,
I have a servlet in a web application that is deployed to a cluster, just
wondering what is the best way to get the name of the node that the server is
running on at run time??
Thanks
Please try to modify the following code and test for your purpose: (check Weblogic
class document for detail)
import javax.naming.*;
import weblogic.jndi.*;
import weblogic.management.*;
import weblogic.management.configuration.*;
import weblogic.management.runtime.*;
MBeanHome home = null;
Context ctx = null;
try{
    //The Environment class represents the properties used to create
    //an initial Context. The default constructor constructs an Environment
    //with default properties, that is, with a WebLogic initial context.
    //If unset, the properties for principal and credentials default to
    //guest/guest, and the provider URL defaults to "t3://localhost:7001".
    Environment env = new Environment();
    if(admin_url != null){
        //Sets the Context.PROVIDER_URL property value to the value of
        //the argument url.
        env.setProviderUrl(admin_url);
        //Sets the Context.SECURITY_PRINCIPAL property to the value of
        //the argument principal.
        env.setSecurityPrincipal(username);
        //Sets the value of the Context.SECURITY_CREDENTIALS property to
        //the value of the argument credentials.
        env.setSecurityCredentials(password);
        //Returns an initial context based on the properties in an Environment.
        ctx = env.getInitialContext();
    }else{
        ctx = new InitialContext();
    }
    home = (MBeanHome) ctx.lookup(MBeanHome.ADMIN_JNDI_NAME);
    ctx.close(); //free resource
    // or if looking up a specific MBeanHome
    //home = (MBeanHome) ctx.lookup(MBeanHome.JNDI_NAME + "." + serverName);
    DomainMBean dmb = home.getActiveDomain(); //Get active domain
    ServerMBean[] sbeans = dmb.getServers(); //Get all servers
    if(sbeans != null){
        for(int s1 = 0; s1 < sbeans.length; s1++){
            String privip = sbeans[s1].getListenAddress();
            sbeans[s1].getName();
            sbeans[s1].getListenPort();
            WebServerMBean wmb = sbeans[s1].getWebServer();
        }
    }
}catch(Exception ex){
    ex.printStackTrace();
}
"Xiang Rao" <[email protected]> wrote:
>>Physical: InetAddress.getLocalHost().getHostName()
>>Weblogic: System.getProperty("weblogic.Name");
"Me" <[email protected]> wrote:
>Thanks for your reply, I was hoping to find a better way, for example a class in the weblogic API.
"Xiang Rao" <[email protected]> wrote:
>> Sure. You can use the Weblogic management APIs to query ServerBean.
"Gao Jun" <[email protected]> wrote:
>Is there any sample code? Thanks
>
>Best Regards,
>Jun Gao
-
Best way to spool DYNAMIC SQL query to file from PL/SQL
Best way to spool DYNAMIC SQL query to file from PL/SQL [Package], not SqlPlus
I'm looking for suggestions on how to create an output file (fixed width and comma delimited) from a SELECT that is dynamically built. Basically, I've got some tables that are used to define the SELECT and to describe the output format. For instance, one table has the SELECT while another is used to defined the column "formats" (e.g., Column Order, Justification, FormatMask, Default value, min length, ...). The user has an app that they can use to customize the output...which leaving the gathering of the data untouched. I'm trying to keep this formatting and/or default logic out of the actual query. This lead me into a problem.
Example query :
SELECT CONTRACT_ID,PV_ID,START_DATE
FROM CONTRACT
WHERE CONTRACT_ID = <<value>>
Customization Table:
CONTRACT_ID : 2,Numeric,Right
PV_ID : 1,Numeric,Mask(0000)
START_DATE : 3,Date,Mask(mm/dd/yyyy)The first value is the kicker (ColumnOrder) as well as the fact that the number of columns is dynamic. Technically, if I could use SqlPlus...then I could just use SPOOL. However, I'm not.
So basically, I'm trying to build a generic routine that can take a SQL string execute the SELECT and map the output using data from another table to a file.
Any suggestions?
Thanks,
Jason
You could build the select statement within PL/SQL and open it using a cursor variable. You could write it to a file using the 'UTL_FILE' package. If you want to display the output using SQL*Plus, you could have an out parameter as a ref cursor.
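For illustration, here is a rough Python sketch of the formatting side of that idea (the actual file writing would use UTL_FILE in PL/SQL). The column names and masks mirror the customization table above, but the formatting rules are simplified assumptions:

```python
# One fetched row, as a dict keyed by column name (values invented)
rows = [{"CONTRACT_ID": 7, "PV_ID": 42, "START_DATE": "01/15/2013"}]

# Spec table: (column, order, width, justify).
# PV_ID's Mask(0000) is approximated as zero-padding to 4 characters.
spec = [("PV_ID", 1, 4, "zero"),
        ("CONTRACT_ID", 2, 10, "right"),
        ("START_DATE", 3, 10, "left")]

def format_row(row, spec):
    """Emit one fixed-width line, columns ordered by the spec's order field."""
    out = []
    for col, _, width, justify in sorted(spec, key=lambda s: s[1]):
        v = str(row[col])
        if justify == "zero":
            out.append(v.zfill(width))
        elif justify == "right":
            out.append(v.rjust(width))
        else:
            out.append(v.ljust(width))
    return "".join(out)

line = format_row(rows[0], spec)
print(repr(line))
```

The same loop works no matter how many columns the dynamic SELECT returns, which is the point of keeping the formatting rules in a table.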
-
What is the best way to mimic the data from production to another server?
Hi,
Here we use Streams and advanced replication to send the data for 90% of the tables from production to another production database server, so if one goes down we can use the other one. Is there any better option than Streams and replication? We are having a lot of problems with Streams these days; they keep breaking and we get calls.
I heard about Data Guard but don't know what it is used for. Please advise on the best way to replicate the data.
Thanks a lot.....
RAC, Data Guard. The first one is active-active, that is, you have two or more nodes accessing the same database on shared storage and you get both HA and load balancing. The second is active-passive (unless you're on 11.2 with Active Data Guard or Snapshot Standby), that is, one database is primary and the other is standby, which you normally cannot query or modify, but to which you can quickly switch in case the primary fails. There's also Logical Standby - it's based on Streams and generally looks like what you seem to be using now (sort of). But it definitely has issues. You can also take a look at GoldenGate or SharePlex.
-
What is the best way to get the notes content from my iPhone 4 to a 4S without doing a restore? I want the new phone to be totally new, but I'm not sure how to get the notes content across. When I have previously done a restore from one iPhone to another, it has shown (in Settings > Usage) the cumulative usage from previous phones, so all the hours of calls on all previous iPhones are displayed even though the phone is brand new. Does anyone know how I can get my notes (from the standard iPhone Notes app) to my new iPhone 4S without restoring from my previous iPhone 4? Thanks for any help offered.
First, if you haven't updated to iTunes 10.5, please update now as you will need it for the iPhone 4S and iOS 5.
Once you're done installing iTunes 10.5, open it. Connect your iPhone to iTunes using the USB cable. Once your iPhone pops up right click on it. For example: an iPhone will appear and it will say "Ryan's iPhone."
Right click on it and select "Backup" from the dropdown menu. It will start backing up. This should backup your notes.
Please tell me if you have any problems with backing up.
Once you backup and get your iPhone 4S, you must follow these steps. If you don't follow these steps, you will not be able to get your notes on your new iPhone 4S.
Open up iTunes again then right click on your device (iPhone 4S). Once you do you will see a dropdown menu. It will say "Restore from Backup..." Select this and it'll ask for a backup, select it from the dropdown menu. For example "Ryan's iPhone - October 12, 2011." Pick that and it will restore to your backup. Do this when you get your iPhone 4S so you will not lose anything. Even though you're restoring, you're getting back, since you're getting the previous settings, notes, contacts, mail and other settings from your old iPhone. You'll still have Siri though! So, restore when you first get it. Also frequently backup your device, as it will be worth it. You can restore from a backup if something goes wrong or save your data for a future update.
Once you do that, you should have your notes on your new iPhone 4S and iOS 5.
Please tell me if you need any help.
I hoped I answered your questions and solved your problem!