Performance issue when using select count on large tables
Hello Experts,
I have a requirement where I need to get a count of data from a database table. Later on I need to display the count in ALV format.
As per my requirement, I have to use this SELECT COUNT inside nested loops.
Below is the count snippet:
LOOP AT systems ASSIGNING <fs_sc_systems>.
  LOOP AT date ASSIGNING <fs_sc_date>.
    SELECT COUNT( DISTINCT crmd_orderadm_i~header )
      FROM crmd_orderadm_i
      INNER JOIN bbp_pdigp
        ON crmd_orderadm_i~client EQ bbp_pdigp~client  "MANDT is referred to as client
        AND crmd_orderadm_i~guid EQ bbp_pdigp~guid
      INTO w_sc_count
      WHERE crmd_orderadm_i~created_at BETWEEN <fs_sc_date>-start_timestamp
                                       AND <fs_sc_date>-end_timestamp
        AND bbp_pdigp~zz_scsys EQ <fs_sc_systems>-sys_name.
  ENDLOOP.
ENDLOOP.
In the above code snippet:
<fs_sc_systems>-sys_name holds the system name,
<fs_sc_date>-start_timestamp holds the first day of the month,
and <fs_sc_date>-end_timestamp holds the last day of the month.
The data in tables crmd_orderadm_i and bbp_pdigp is very large and grows every day.
The above SELECT is taking a long time to return the count, which is causing performance issues.
Can anyone please help me optimize this code?
Thanks,
Suman
Hi Choudhary Suman,
Try this:
SELECT crmd_orderadm_i~header
  INTO TABLE it_header                 " internal table
  FROM crmd_orderadm_i
  INNER JOIN bbp_pdigp
    ON crmd_orderadm_i~client EQ bbp_pdigp~client
    AND crmd_orderadm_i~guid EQ bbp_pdigp~guid
  FOR ALL ENTRIES IN date
  WHERE crmd_orderadm_i~created_at GE date-start_timestamp  " BETWEEN is not allowed
    AND crmd_orderadm_i~created_at LE date-end_timestamp    " on FOR ALL ENTRIES fields
    AND bbp_pdigp~zz_scsys EQ <fs_sc_systems>-sys_name.     " sys_name comes from the systems table
SORT it_header BY header.
DELETE ADJACENT DUPLICATES FROM it_header
  COMPARING header.
DESCRIBE TABLE it_header LINES v_lines.
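If one count per system and month is what is ultimately needed, the database round trips can also be reduced to a single SELECT over the whole date window, with the distinct counts derived in memory afterwards. This is only a sketch: lv_min_start, lv_max_end, ty_doc and it_docs are assumed declarations, not from the original post.

```abap
" Sketch: fetch the joined keys once for the whole window, count in memory.
TYPES: BEGIN OF ty_doc,
         zz_scsys   TYPE bbp_pdigp-zz_scsys,
         created_at TYPE crmd_orderadm_i-created_at,
         header     TYPE crmd_orderadm_i-header,
       END OF ty_doc.
DATA: it_docs TYPE STANDARD TABLE OF ty_doc.

SELECT bbp_pdigp~zz_scsys crmd_orderadm_i~created_at crmd_orderadm_i~header
  INTO TABLE it_docs
  FROM crmd_orderadm_i
  INNER JOIN bbp_pdigp
    ON crmd_orderadm_i~client EQ bbp_pdigp~client
    AND crmd_orderadm_i~guid EQ bbp_pdigp~guid
  WHERE crmd_orderadm_i~created_at GE lv_min_start   " earliest start_timestamp
    AND crmd_orderadm_i~created_at LE lv_max_end.    " latest end_timestamp

" Inside the nested loops, count DISTINCT header values per <system, month>
" bucket from it_docs instead of hitting the database on every iteration.
```

Whether this helps depends on how much of the two tables falls inside the window; if it is most of them, a database-side aggregation (or a secondary index covering created_at and zz_scsys) is the better lever.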
Hope this information helps.
Regards,
José
Similar Messages
-
Performance issue when using the same query in a different way
Hello,
I have a performance problem with the statement below when running it with an insert or with execute immediate.
n.b.: This statement could be more optimized, but it is a generated statement.
When I run this statement I get one row back within one second, so there is no performance problem.
select sysdate
,5
,'testje'
,count (1)
,'NL' groupby
from (select 'different (target)' compare_type
,t.id_org_addr id_org_addr -- ID_ORG_ADDR
,t.vpd_country vpd_country -- CTL_COUNTRY
,t.addr_type addr_type -- ADDRESSTYP_COD
from (select *
from (select t.*
from ods.ods_org_addr t
left outer join
m_sy_foreign_key m
on m.vpd_country = t.vpd_country
and m.key_type = 'ORGADDR2'
and m.target_value = t.id_org_addr
where coalesce (t.end_date, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where vpd_country = 'NL' /*EGRB*/
) t
where exists
(select null
from (select *
from (select m.target_value id_org_addr
,s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod id_cegedim
,s.*
from okc_mdl_workplace_address s
left outer join
m_sy_foreign_key m
on m.vpd_country = s.ctl_country
and m.key_type = 'ORGADDR2'
and m.source_value = s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod
where coalesce (s.end_val_dat, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where ctl_country = 'NL' /*EGRB*/
) s
where t.id_org_addr = s.id_org_addr)
minus
select 'different (target)' compare_type
,s.id_org_addr id_org_addr -- ID_ORG_ADDR
,s.ctl_country vpd_country -- CTL_COUNTRY
, (select to_number (l.target_value)
from okc_code_foreign l
where l.source_code_type = 'TYS'
and l.target_code_type = 'ADDRLINKTYPE'
and l.source_value = upper (s.addresstyp_cod)
and l.vpd_country = s.ctl_country)
addr_type -- ADDRESSTYP_COD
from (select *
from (select m.target_value id_org_addr
,s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod id_cegedim
,s.*
from okc_mdl_workplace_address s
left outer join
m_sy_foreign_key m
on m.vpd_country = s.ctl_country
and m.key_type = 'ORGADDR2'
and m.source_value = s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod
where coalesce (s.end_val_dat, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where ctl_country = 'NL' /*EGRB*/
) s)
When I run this statement using an insert by placing
insert into okc_compare_results (
datetime
,compare_tables_id
,compare_target
,record_count
,groupby
)
before the statement, the statement runs for about *3 to 4 minutes*. The same happens when running only the select part using execute immediate.
Below the execution plans of the insert with the select and the select only.
Could somebody tell me what causes the different behavior of the "same" statement, and what I could do to avoid it?
The database version is: 11.1.0.7.0
Regards,
Fred.
SQL Statement which produced this data:
select * from table(dbms_xplan.display_cursor ('cuk3uwnxx344q',0 /*3431532430 */))
union all
select * from table(dbms_xplan.display_cursor ('862aq599gfd6n',0/*3531428851 */))
plan_table_output
SQL_ID cuk3uwnxx344q, child number 0
select sysdate ,:"SYS_B_00" ,:"SYS_B_01"
,count (:"SYS_B_02") ,:"SYS_B_03" groupby from ( (select
:"SYS_B_04" compare_type ,t.id_org_addr id_org_addr
-- ID_ORG_ADDR ,t.vpd_country vpd_country --
CTL_COUNTRY ,t.addr_type addr_type -- ADDRESSTYP_COD
from (select * from (select t.*
from ods.ods_org_addr t
left outer join
m_sy_foreign_key m on
m.vpd_country = t.vpd_country ; and
m.key_type = :"SYS_B_05" and
m.target_value = t.id_org_addr ; where
coalesce (t.end_date, to_date (:"SYS_B_06", :"SYS_B_07")) >= sysdate)
/*SGRB*/ where vpd_country = :"SYS_B_08" /*EGRB*/
Plan hash value: 3431532430
Id Operation Name Rows Bytes Cost (%CPU) Time Pstart Pstop
0 SELECT STATEMENT 1772 (100)
1 SORT AGGREGATE 1
2 VIEW 3 1772 (1) 00:00:22
3 MINUS
4 SORT UNIQUE 3 492 1146 (1) 00:00:14
* 5 HASH JOIN OUTER 3 492 1145 (1) 00:00:14
6 NESTED LOOPS
7 NESTED LOOPS 3 408 675 (1) 00:00:09
* 8 HASH JOIN 42 4242 625 (1) 00:00:08
9 PARTITION LIST SINGLE 3375 148K 155 (2) 00:00:02 KEY KEY
* 10 TABLE ACCESS FULL OKC_MDL_WORKPLACE_ADDRESS 3375 148K 155 (2) 00:00:02 KEY KEY
* 11 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 49537 2709K 469 (1) 00:00:06
* 12 INDEX UNIQUE SCAN UK_ODS_ORG_ADDR 1 1 (0) 00:00:01
* 13 TABLE ACCESS BY GLOBAL INDEX ROWID ODS_ORG_ADDR 1 35 2 (0) 00:00:01 ROWID ROWID
* 14 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 49537 1354K 469 (1) 00:00:06
15 NESTED LOOPS
16 NESTED LOOPS 1 67 9 (12) 00:00:01
17 NESTED LOOPS 1 48 8 (13) 00:00:01
* 18 HASH JOIN 1 23 6 (17) 00:00:01
* 19 TABLE ACCESS BY GLOBAL INDEX ROWID ODS_COUNTRY_SYSTEM 1 11 2 (0) 00:00:01 ROWID ROWID
* 20 INDEX RANGE SCAN PK_ODS_DIVISION_SYSTEM 1 1 (0) 00:00:01
* 21 TABLE ACCESS FULL SY_SOURCE_CODE 8 96 3 (0) 00:00:01
22 TABLE ACCESS BY INDEX ROWID SY_FOREIGN_CODE 1 25 2 (0) 00:00:01
* 23 INDEX RANGE SCAN PK_SY_FOREIGN_CODE 1 1 (0) 00:00:01
* 24 INDEX UNIQUE SCAN PK_SY_TARGET_CODE 1 0 (0)
* 25 TABLE ACCESS BY INDEX ROWID SY_TARGET_CODE 1 19 1 (0) 00:00:01
26 SORT UNIQUE 3375 332K 626 (1) 00:00:08
* 27 HASH JOIN OUTER 3375 332K 625 (1) 00:00:08
28 PARTITION LIST SINGLE 3375 148K 155 (2) 00:00:02 KEY KEY
* 29 TABLE ACCESS FULL OKC_MDL_WORKPLACE_ADDRESS 3375 148K 155 (2) 00:00:02 KEY KEY
* 30 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 49537 2709K 469 (1) 00:00:06
Predicate Information (identified by operation id):
5 - access("M"."TARGET_VALUE"="T"."ID_ORG_ADDR" AND "M"."VPD_COUNTRY"="T"."VPD_COUNTRY")
8 - access("M"."SOURCE_VALUE"="S"."WKP_ID_CEGEDIM" :SYS_B_12 S."ADR_ID_CEGEDIM" :SYS_B_13 S."ADDRESSTYP_COD" AND
"M"."VPD_COUNTRY"="S"."CTL_COUNTRY")
10 - filter(COALESCE("S"."END_VAL_DAT",TO_DATE(:SYS_B_14,:SYS_B_15))>=SYSDATE@!)
11 - access("M"."KEY_TYPE"=:SYS_B_11 AND "M"."VPD_COUNTRY"=:SYS_B_16)
12 - access("T"."ID_ORG_ADDR"="M"."TARGET_VALUE")
13 - filter(("T"."VPD_COUNTRY"=:SYS_B_08 AND COALESCE("T"."END_DATE",TO_DATE(:SYS_B_06,:SYS_B_07))>=SYSDATE@!))
14 - access("M"."KEY_TYPE"=:SYS_B_05 AND "M"."VPD_COUNTRY"=:SYS_B_08)
18 - access("CS"."ID_SYSTEM"="SK"."ID_SOURCE_SYSTEM")
19 - filter("CS"."SYSTEM_TYPE"=1)
20 - access("CS"."VPD_COUNTRY"=:B1 AND "CS"."EXP_IMP_TYPE"='I')
filter("CS"."EXP_IMP_TYPE"='I')
21 - filter("SK"."CODE_TYPE"=:SYS_B_18)
23 - access("FK"."ID_SOURCE_CODE"="SK"."ID_SOURCE_CODE" AND "FK"."SOURCE_VALUE"=UPPER(:B1) AND
"CS"."VPD_COUNTRY"="FK"."VPD_COUNTRY")
filter(("FK"."VPD_COUNTRY"=:B1 AND "FK"."SOURCE_VALUE"=UPPER(:B2) AND "CS"."VPD_COUNTRY"="FK"."VPD_COUNTRY"))
24 - access("FK"."ID_TARGET_CODE"="TK"."ID_TARGET_CODE")
25 - filter("TK"."CODE_TYPE"=:SYS_B_19)
27 - access("M"."SOURCE_VALUE"="S"."WKP_ID_CEGEDIM" :SYS_B_23 S."ADR_ID_CEGEDIM" :SYS_B_24 S."ADDRESSTYP_COD" AND
"M"."VPD_COUNTRY"="S"."CTL_COUNTRY")
29 - filter(COALESCE("S"."END_VAL_DAT",TO_DATE(:SYS_B_25,:SYS_B_26))>=SYSDATE@!)
30 - access("M"."KEY_TYPE"=:SYS_B_22 AND "M"."VPD_COUNTRY"=:SYS_B_27)
SQL_ID 862aq599gfd6n, child number 0
insert into okc_compare_results ( datetime
,compare_tables_id ,compare_target
,record_count ,groupby )
select sysdate ,:"SYS_B_00" ,:"SYS_B_01"
,count (:"SYS_B_02") ,:"SYS_B_03" groupby from ( (select
:"SYS_B_04" compare_type ,t.id_org_addr id_org_addr
-- ID_ORG_ADDR ,t.vpd_country vpd_country --
CTL_COUNTRY ,t.addr_type addr_type -- ADDRESSTYP_COD
from (select * from (select t.*
from ods.ods_org_addr t
left outer join
m_sy_foreign_key m on
m.vpd_country = t.vpd_country ; and
m.key_type = :"SYS_B_05" and
m.target_value = t.id_org_addr
Plan hash value: 3531428851
Id Operation Name Rows Bytes Cost (%CPU) Time Pstart Pstop
0 INSERT STATEMENT 1646 (100)
1 LOAD TABLE CONVENTIONAL
2 SORT AGGREGATE 1
3 VIEW 1 1646 (1) 00:00:20
4 MINUS
5 SORT UNIQUE 1 163
6 NESTED LOOPS OUTER 1 163 1067 (1) 00:00:13
7 NESTED LOOPS 1 135 599 (1) 00:00:08
* 8 HASH JOIN 19 1919 577 (2) 00:00:07
9 PARTITION LIST SINGLE 1535 69075 107 (4) 00:00:02 KEY KEY
* 10 TABLE ACCESS FULL OKC_MDL_WORKPLACE_ADDRESS 1535 69075 107 (4) 00:00:02 KEY KEY
* 11 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 49537 2709K 469 (1) 00:00:06
* 12 TABLE ACCESS BY GLOBAL INDEX ROWID ODS_ORG_ADDR 1 34 2 (0) 00:00:01 ROWID ROWID
* 13 INDEX UNIQUE SCAN UK_ODS_ORG_ADDR 25 1 (0) 00:00:01
* 14 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 1 28 468 (1) 00:00:06
15 NESTED LOOPS
16 NESTED LOOPS 1 67 8 (0) 00:00:01
17 NESTED LOOPS 1 48 7 (0) 00:00:01
18 NESTED LOOPS 1 23 5 (0) 00:00:01
* 19 TABLE ACCESS BY GLOBAL INDEX ROWID ODS_COUNTRY_SYSTEM 1 11 2 (0) 00:00:01 ROWID ROWID
* 20 INDEX RANGE SCAN PK_ODS_DIVISION_SYSTEM 1 1 (0) 00:00:01
* 21 TABLE ACCESS FULL SY_SOURCE_CODE 1 12 3 (0) 00:00:01
22 TABLE ACCESS BY INDEX ROWID SY_FOREIGN_CODE 1 25 2 (0) 00:00:01
* 23 INDEX RANGE SCAN PK_SY_FOREIGN_CODE 1 1 (0) 00:00:01
* 24 INDEX UNIQUE SCAN PK_SY_TARGET_CODE 1 0 (0)
* 25 TABLE ACCESS BY INDEX ROWID SY_TARGET_CODE 1 19 1 (0) 00:00:01
26 SORT UNIQUE 1535 151K
* 27 HASH JOIN OUTER 1535 151K 577 (2) 00:00:07
28 PARTITION LIST SINGLE 1535 69075 107 (4) 00:00:02 KEY KEY
* 29 TABLE ACCESS FULL OKC_MDL_WORKPLACE_ADDRESS 1535 69075 107 (4) 00:00:02 KEY KEY
* 30 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 49537 2709K 469 (1) 00:00:06
Predicate Information (identified by operation id):
8 - access("M"."SOURCE_VALUE"="S"."WKP_ID_CEGEDIM" :SYS_B_12 S."ADR_ID_CEGEDIM" :SYS_B_13 S."ADDRESSTYP_COD" AND
"M"."VPD_COUNTRY"="S"."CTL_COUNTRY")
10 - filter(COALESCE("S"."END_VAL_DAT",TO_DATE(:SYS_B_14,:SYS_B_15))>=SYSDATE@!)
11 - access("M"."KEY_TYPE"=:SYS_B_11 AND "M"."VPD_COUNTRY"=:SYS_B_16)
12 - filter((COALESCE("T"."END_DATE",TO_DATE(:SYS_B_06,:SYS_B_07))>=SYSDATE@! AND "T"."VPD_COUNTRY"=:SYS_B_08))
13 - access("T"."ID_ORG_ADDR"="M"."TARGET_VALUE")
14 - access("M"."KEY_TYPE"=:SYS_B_05 AND "M"."VPD_COUNTRY"=:SYS_B_08 AND "M"."TARGET_VALUE"="T"."ID_ORG_ADDR")
filter("M"."TARGET_VALUE"="T"."ID_ORG_ADDR")
19 - filter("CS"."SYSTEM_TYPE"=1)
20 - access("CS"."VPD_COUNTRY"=:B1 AND "CS"."EXP_IMP_TYPE"='I')
filter("CS"."EXP_IMP_TYPE"='I')
21 - filter(("SK"."CODE_TYPE"=:SYS_B_18 AND "CS"."ID_SYSTEM"="SK"."ID_SOURCE_SYSTEM"))
23 - access("FK"."ID_SOURCE_CODE"="SK"."ID_SOURCE_CODE" AND "FK"."SOURCE_VALUE"=UPPER(:B1) AND
"CS"."VPD_COUNTRY"="FK"."VPD_COUNTRY")
filter(("FK"."VPD_COUNTRY"=:B1 AND "FK"."SOURCE_VALUE"=UPPER(:B2) AND "CS"."VPD_COUNTRY"="FK"."VPD_COUNTRY"))
24 - access("FK"."ID_TARGET_CODE"="TK"."ID_TARGET_CODE")
25 - filter("TK"."CODE_TYPE"=:SYS_B_19)
27 - access("M"."SOURCE_VALUE"="S"."WKP_ID_CEGEDIM" :SYS_B_23 S."ADR_ID_CEGEDIM" :SYS_B_24 S."ADDRESSTYP_COD" AND
"M"."VPD_COUNTRY"="S"."CTL_COUNTRY")
29 - filter(COALESCE("S"."END_VAL_DAT",TO_DATE(:SYS_B_25,:SYS_B_26))>=SYSDATE@!)
30 - access("M"."KEY_TYPE"=:SYS_B_22 AND "M"."VPD_COUNTRY"=:SYS_B_27)
Edited by: BluShadow on 20-Jun-2012 10:30
added {noformat}{noformat} tags for readability. Please read {message:id=9360002} and learn to do this yourself.
Yes, all the used tables are analyzed.
Thanks for pointing to the Metalink bug; I also searched Metalink, but didn't find this bug.
I have a little bit more information about the problem.
I use the following select (now in a readable format)
select count (1)
from ( (select 'different (target)' compare_type
,t.id_org_addr id_org_addr -- ID_ORG_ADDR
,t.vpd_country vpd_country -- CTL_COUNTRY
,t.addr_type addr_type -- ADDRESSTYP_COD
from (select *
from (select t.*
from ods.ods_org_addr t
left outer join
m_sy_foreign_key m
on m.vpd_country = t.vpd_country
and m.key_type = 'ORGADDR2'
and m.target_value = t.id_org_addr
where coalesce (t.end_date, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where vpd_country = 'NL' /*EGRB*/
) t
where exists
(select null
from (select *
from (select m.target_value id_org_addr
,s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod id_cegedim
,s.*
from okc_mdl_workplace_address s
left outer join
m_sy_foreign_key m
on m.vpd_country = s.ctl_country
and m.key_type = 'ORGADDR2'
and m.source_value = s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod
where coalesce (s.end_val_dat, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where ctl_country = 'NL' /*EGRB*/
) s
where t.id_org_addr = s.id_org_addr)
minus
select 'different (target)' compare_type
,s.id_org_addr id_org_addr -- ID_ORG_ADDR
,s.ctl_country vpd_country -- CTL_COUNTRY
, (select to_number (l.target_value)
from okc_code_foreign l
where l.source_code_type = 'TYS'
and l.target_code_type = 'ADDRLINKTYPE'
and l.source_value = upper (s.addresstyp_cod)
and l.vpd_country = s.ctl_country)
addr_type -- ADDRESSTYP_COD
from (select *
from (select m.target_value id_org_addr
,s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod id_cegedim
,s.*
from okc_mdl_workplace_address s
left outer join
m_sy_foreign_key m
on m.vpd_country = s.ctl_country
and m.key_type = 'ORGADDR2'
and m.source_value = s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod
where coalesce (s.end_val_dat, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where ctl_country = 'NL' /*EGRB*/
) s))
The select is executed in 813 msecs.
When I execute the same select using execute immediate like:
declare
ln_count number;
begin
execute immediate q'[<select statement>]' into ln_count;
end;
This takes 3:56 minutes to complete.
When I change the second coalesce part (the one within the exists) in the following way:
the part
coalesce (s.end_val_dat, to_date ('99991231', 'yyyymmdd')) >= sysdate
is replaced by
s.end_val_dat >= sysdate or s.end_val_dat is null
then the execution time is even faster (560 msecs) in both the plain select and the select using execute immediate. -
Performance issues when using Smart View and Excel 2010
Hello, we are experiencing very slow retrieval times when using Smart View and Excel 2010. We are currently on v11.1.3.00 and moved over from Excel 2003 in the last quarter. The same spreadsheets in 2010 (recreated) are running much slower than they used to in 2003, and I was wondering if anyone else out there has experienced similar problems?
It looks like there is some background caching going on, as when you copy and paste the contents into a new file and retrieve, it is better... initially. The files are generally less than 2 MB and there isn't an especially large number of subcubes requested, so I am at a loss to explain or alleviate the issues.
Any advice / tips on how to optimise the performance would be greatly appreciated.
Thanks,
Nick
Hi Nick,
Office 2010 (32 bit) only is supported.
Also check these documents:
Refresh in Smart View 11.1.2.1 is Slow with MS Office 2010. (Doc ID 1362557.1)
Smart View Refresh Returns Zeros (Doc ID 758892.1)
Internet Explorer (IE7, IE8 and IE9) Recommended Settings for Oracle Hyperion Products (Doc ID 820892.1)
Thank you,
Charles Babu J
Edited by: CJX on Nov 15, 2011 12:21 PM -
Zoom issue when using selection tool in Photoshop CS5
Hi, i have a macbook pro with a multi touch trackpad. For some reason when i'm using the selection tool and i try zooming in or out using the multi touch track pad nothing happens or sometimes it takes forever for anything to take place. Other times, i have to resort to using the keypad to zoom in or out.
What could be causing this issue? It is really slowing me down.
Hi - thanks for your comments. It was happening on all of the selection tools; however, I have solved the problem by changing the 'Advanced Graphics Processor Setting' to 'Basic' in Photoshop CC.
Thanks anyway -
Performance issues when using AQ notification with one consumer
We have developed a system to load data from a reservation database to a reporting database.
At a certain point in the process, a message with the identifier of the reservation is enqueued to a queue (multi-consumer) on the same DB and then propagated to a similar queue on the REP database.
This queue (multi-consumer) has AQ notification enabled (with one consumer) which calls the queue_callback procedure which
- dequeues the message
- calls a procedure to load the Resv data into the Reporting schema (through DB link)
We need each message to be processed ONLY ONCE thus the usage of one single subscriber (consumer)
But when load testing our application with multiple threads, the number of records created in the Reservation Database becomes quite large, meaning a large number of messages going through the first queue and propagating to the second queue (very quickly).
But messages are not processed fast enough by the 2nd queue (notification) which falls behind.
I would like to keep using notification as processing is automatic (no need to set up dbms_jobs to dequeue etc..) or something similar
So having read articles, I feel I need to use:
- multiple subscribers to the 2nd queue where each message is processed only by one subscriber (using a rule : say 10 subscribers S0 to S10 with Si processing messages where last number of the identifier is i )
The problem with this is that there is an attempt to process the message for each subscriber, isn't there?
- a different dequeuing method where many processes are used in parallel , with each message is processed only by one subscriber
Does anyone have experience and recommendations to make on how to improve throughput of messages?
Rgds
Philippe
Hi, thanks for your interest.
I am working with 10.2.0.4
My objective is to load a subset of the reservation data from the tables in the first DB (Reservation-OLTP-150 tables)
to the tables in the second DB (Reporting - about 15 tables at the moment), without affecting performance on the Reservation DB.
Thus the choice of Advanced Queuing (asynchronous).
- I have 2 similar queues in 2 separate databases (Reservation and Reporting)
The message payload is the same on both (the identifier of the reservation)
When a certain event happens on the RESERVATION database, I enqueue a message on the first database
Propagation moves the same message data to the second queue.
And there I have notification sending the message to a single consumer, which:
- calls dequeue
- and the data load procedure, which load this reservation
My performance difficulties start at the notification but I will post all the relevant code before notification, in case it has an impact.
- The 2nd queue was created with a script containing the following (similar script for the first queue):
dbms_aqadm.create_queue_table( queue_table => '&&CQT_QUEUE_TABLE_NAME',
queue_payload_type => 'RESV_DETAIL',
comment => 'Report queue table',
multiple_consumers => TRUE,
message_grouping => DBMS_AQADM.NONE,
compatible => '10.0.0',
sort_list => 'ENQ_TIME',
primary_instance => '0',
secondary_instance => '0');
dbms_aqadm.create_queue (
queue_name => '&&CRQ_QUEUE_NAME',
queue_table => '&&CRQ_QUEUE_TABLE_NAME',
max_retries => 5);
- ENQUEUING on the first queue (snippet of code)
o_resv_detail DLEX_AQ_ADMIN.RESV_DETAIL;
o_resv_detail:= DLEX_AQ_ADMIN.RESV_DETAIL(resvcode, resvhistorysequence);
DLEX_RESVEVENT_AQ.enqueue_one_message (o_resv_detail);
where DLEX_RESVEVENT_AQ.enqueue_one_message is :
PROCEDURE enqueue_one_message (msg IN RESV_DETAIL)
IS
  enqopt    DBMS_AQ.enqueue_options_t;
  mprop     DBMS_AQ.message_properties_t;
  enq_msgid dlex_resvevent_aq_admin.msgid_t;
BEGIN
  DBMS_AQ.enqueue (queue_name         => dlex_resvevent_aq_admin.c_resvevent_queue,
                   enqueue_options    => enqopt,
                   message_properties => mprop,
                   payload            => msg,
                   msgid              => enq_msgid);  -- closing parenthesis was missing
END;
- PROPAGATION: The message is dequeued from 1st queue and enqueued automatically by AQ propagation into this 2nd queue.
(using a call to the following 'wrapper' procedure)
PROCEDURE schedule_propagate (
  src_queue_name IN VARCHAR2,
  destination    IN VARCHAR2 DEFAULT NULL
)
IS
  sprocname dlex_types.procname_t := 'dlex_resvevent_aq_admin.schedule_propagate';
BEGIN
  DBMS_AQADM.SCHEDULE_PROPAGATION(queue_name  => src_queue_name,
                                  destination => destination,
                                  latency     => 10);
EXCEPTION
  WHEN OTHERS
  THEN
    DBMS_OUTPUT.put_line (SQLERRM || ' occurred in ' || sprocname);
END schedule_propagate;
- For 'NOTIFICATION': ONE subscriber was created using:
EXECUTE DLEX_REPORT_AQ_ADMIN.add_subscriber('&&STQ_QUEUE_NAME','&&STQ_SUBSCRIBER',NULL,NULL, NULL);
this is a wrapper procedure that uses:
DBMS_AQADM.add_subscriber (queue_name => p_queue_name, subscriber => subscriber_agent );
Then notification is registered with:
EXECUTE dlex_report_aq_admin.register_notification_action ('&&AQ_SCHEMA','&&REPORT_QUEUE_NAME','&&REPORT_QUEUE_SUBSCRIBER');
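For context, a wrapper like this normally boils down to a DBMS_AQ.REGISTER call for a PL/SQL callback. The sketch below shows the usual shape; the queue, consumer and callback names are assumed, not taken from the wrapper's source.

```sql
-- Sketch of a typical AQ PL/SQL notification registration (names assumed).
BEGIN
  DBMS_AQ.REGISTER(
    SYS.AQ$_REG_INFO_LIST(
      SYS.AQ$_REG_INFO(
        'DLEX_AQ_ADMIN.REPORT_QUEUE:REPORT_SUBSCRIBER',  -- queue:consumer
        DBMS_AQ.NAMESPACE_AQ,
        'plsql://DLEX_AQ_ADMIN.QUEUE_CALLBACK',          -- callback procedure
        HEXTORAW('FF'))),                                -- user context
    1);
END;
/
```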
- job_queue_processes is set to 10
- The callback procedure is as follows
CREATE OR REPLACE PROCEDURE DLEX_AQ_ADMIN.queue_callback (
  context  RAW,
  reginfo  SYS.AQ$_REG_INFO,
  descr    SYS.AQ$_DESCRIPTOR,
  payload  RAW,
  payloadl NUMBER
)
IS
  s_procname           CONSTANT VARCHAR2 (40) := UPPER ('queue_callback');
  r_dequeue_options    DBMS_AQ.DEQUEUE_OPTIONS_T;
  r_message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
  v_message_handle     RAW(16);
  o_payload            RESV_DETAIL;
BEGIN
  r_dequeue_options.msgid         := descr.msg_id;
  r_dequeue_options.consumer_name := descr.consumer_name;
  DBMS_AQ.DEQUEUE(
    queue_name         => descr.queue_name,
    dequeue_options    => r_dequeue_options,
    message_properties => r_message_properties,
    payload            => o_payload,
    msgid              => v_message_handle);  -- closing parenthesis was missing
  -- Call procedure to load data from reservation database to Reporting DB through the DB link
  dlex_report.dlex_data_load.load_reservation
    ( in_resvcode            => o_payload.resv_code,
      in_resvHistorySequence => o_payload.resv_history_sequence );
  COMMIT;
END queue_callback;
- I noticed that messages are not taken out of the 2nd queue,
I guess I would need to use the REMOVE option to delete messages from the queue?
Would this be a large source of performance degradation after just a few thousand messages?
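On the REMOVE question: DBMS_AQ.REMOVE is actually the default dequeue_mode, so the callback above already consumes messages; processed messages that remain visible in the queue table are normally kept only for the queue's retention_time. Making the mode explicit is a one-liner (a sketch, reusing the callback's variables):

```sql
-- REMOVE is the default dequeue mode; shown here only for clarity.
r_dequeue_options.dequeue_mode := DBMS_AQ.REMOVE;
```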
- The data load through the DB may be a little bit intensive but I feel that doing things in parallel would help.
I would like to understand if Oracle has a way of dequeuing in parallel (with or without the use of notification)
In the case of multiple subscribers with notification , does 'job_queue_processes' value has an impact on the degree of parallelism? If not what setting has?
And is there a way supplied by Oracle to set the queue to notify only one subscriber per message?
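On the rule-based idea mentioned above: a rule can be supplied when the subscriber is added, and AQ evaluates it against the payload (referenced as tab.user_data), so each message can be routed to exactly one of N subscribers. A sketch with assumed names, bucketing on the payload's numeric attribute:

```sql
-- Sketch: subscriber S0 receives only messages whose sequence ends in 0.
BEGIN
  DBMS_AQADM.ADD_SUBSCRIBER(
    queue_name => 'DLEX_AQ_ADMIN.REPORT_QUEUE',
    subscriber => SYS.AQ$_AGENT('S0', NULL, NULL),
    rule       => 'MOD(tab.user_data.resv_history_sequence, 10) = 0');
END;
/
```

Each message then triggers a notification only for the matching subscriber, rather than one attempt per subscriber.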
Your advice would be very much appreciated
Philippe
Edited by: user528100 on Feb 23, 2009 8:14 AM -
I have some code that takes a csv and loads it into a type table. I am now trying to select from this table but am not sure what column name to use in my cursor. (See below I have c2.???).
FUNCTION in_list_varchar2 (p_in_list IN VARCHAR2) RETURN VARCHAR2_TT is
l_tab VARCHAR2_TT := VARCHAR2_TT();
l_text VARCHAR2(32767) := p_in_list || ',';
l_idx NUMBER;
BEGIN
LOOP l_idx := INSTR(l_text, ',');
EXIT WHEN NVL(l_idx, 0) = 0;
l_tab.extend;
l_tab(l_tab.last) := TRIM(SUBSTR(l_text, 1, l_idx - 1));
l_text := SUBSTR(l_text, l_idx + 1);
END LOOP;
RETURN l_tab;
END in_list_varchar2;
schema_list := 'SCOTT, TOM, MIKE';
schema_names := regexp_replace(schema_list, '([^,]+)', '''\1''');
schema_list_t := DATAPUMP_UTIL.in_list_varchar2(schema_list);
for c2 in
(select * from table(schema_list_t))
loop
dml_str := 'DROP USER ' || c2.??? || 'CASADE';
EXECUTE IMMEDIATE dml_str;
end loop;
Chris wrote:
I have some code that takes a csv and loads it into a type table.
Why a type table? Where is this type table defined as such?
with PL/SQL, objects must be declared before they can be referenced.
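That said, the answer to the column-name question is the pseudocolumn COLUMN_VALUE, which Oracle exposes when selecting from a collection of scalars with TABLE(). A sketch, assuming VARCHAR2_TT is a schema-level nested table of VARCHAR2 and the surrounding variables are declared:

```sql
-- COLUMN_VALUE is the pseudocolumn for scalar collection elements.
FOR c2 IN (SELECT column_value AS schema_name
             FROM TABLE(schema_list_t))
LOOP
  dml_str := 'DROP USER ' || c2.schema_name || ' CASCADE';  -- note the space and spelling of CASCADE
  EXECUTE IMMEDIATE dml_str;
END LOOP;
```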
You need to correct syntax errors before tackling any runtime/logic flaws. -
Performance Issues when editing large PDFs
We are using Adobe Acrobat 9 and X Professional and are experiencing performance issues when attempting to edit large PDF files (Windows 7 OS). When editing PDFs that are 200+ pages, we are seeing pregnant pauses (that feel like lockups), slow open times, and slow printing.
Are there any tips or tricks with regard to working with these large documents that would improve performance?You said "edit." If you are talking about actual editing, that should be done in the original and a new PDF created. Acrobat is not a very good editing tool and should only be used for minor, critical edits.
If you are talking about simply using the PDF, a lot depends on the structure of the PDF. If it is full of graphics, it will be slow. You can improve performance by using the PDF Optimizer to reduce graphic resolution and such. You may very likely have a bloated PDF that is causing the problem, and optimizing the structure should help.
Be sure to work on a copy. -
Performance issue when a Direct I/O option is selected
Hello Experts,
One of my customers has a performance issue when the Direct I/O option is selected. They report an increase in memory usage when the Direct I/O storage option is selected compared to the Buffered I/O option.
There are two applications on the server of type BSO. When using Buffered I/O, they experienced a high level of read and write I/Os. Using Direct I/O reduces the read and write I/Os, but dramatically increases memory usage.
Other Information -
a) Environment Details
HSS - 9.3.1.0.45, AAS - 9.3.1.0.0.135, Essbase - 9.3.1.2.00 (64-bit)
OS: Microsoft Windows x64 (64-bit) 2003 R2
b) What is the memory usage when Buffered I/O and Direct I/O is used? How about running calculations, database restructures, and database queries? Do these processes take much time for execution?
Application 1: Buffered 700MB, Direct 5GB
Application 2: Buffered 600MB to 1.5GB, Direct 2GB
Calculation times may increase from 15 minutes to 4 hours. Same with restructure.
c) What is the current Database Data cache; Data file cache and Index cache values?
Application 1: Buffered (Index 80MB, Data 400MB), Direct (Index 120MB; Data File 4GB, Data 480MB).
Application 2: Buffered (Index 100MB, Data 300MB), Direct (Index 700MB, Data File 1.5GB, Data 300MB)
d) What is the total size of the ess0000x.pag files and ess0000x.ind files?
Application 1: Page File 20GB, Index 1.7GB.
Application 2: Page 3GB, index 700MB.
Any suggestions on how to improve the performance when Direct I/O is selected? Any performance documents relating to above scenario would be of great help.
Thanks in advance.
Regards,
Sudhir
Sudhir,
Do you work at a help desk or are you a consultant? You ask such a varied range of questions that I think the former. If you do work at a help desk, don't you have next-level support that could help you? If you are a consultant, I suggest getting together with another consultant who knows more. You might also want to close some of your questions; you have 24 open, and perhaps give points to those that helped you.
Performance issues when creating a Report / Query in Discoverer
Hi forum,
Hope you can help; it involves a performance issue when creating a Report / Query.
I have a Discoverer Report that currently takes less than 5 seconds to run. After I add a condition to bring back only Batch Status = 'Posted', we cancelled the query after 20 minutes, as this is way too long. If I remove the condition, the query time goes back to less than 5 seconds.
Please see attached the SQL Inspector Plan:
Before Condition
SELECT STATEMENT
SORT GROUP BY
VIEW SYS
SORT GROUP BY
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
AND-EQUAL
INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N2
INDEX RANGE SCAN GL.GL_CODE_COMBINATIONS_N1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
INDEX RANGE SCAN GL.GL_JE_LINES_N1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_JE_BATCHES
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
INDEX RANGE SCAN GL.GL_PERIODS_U1
After Condition
SELECT STATEMENT
SORT GROUP BY
VIEW SYS
SORT GROUP BY
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS
TABLE ACCESS FULL GL.GL_JE_BATCHES
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_HEADERS
INDEX RANGE SCAN GL.GL_JE_HEADERS_N1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
INDEX UNIQUE SCAN GL.GL_ENCUMBRANCE_TYPES_U1
INDEX UNIQUE SCAN GL.GL_DAILY_CONVERSION_TYPES_U1
INDEX UNIQUE SCAN GL.GL_BUDGET_VERSIONS_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_SOURCES_TL
INDEX UNIQUE SCAN GL.GL_JE_SOURCES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_CATEGORIES_TL_U1
INDEX UNIQUE SCAN GL.GL_JE_BATCHES_U1
TABLE ACCESS BY INDEX ROWID GL.GL_JE_LINES
INDEX RANGE SCAN GL.GL_JE_LINES_U1
INDEX UNIQUE SCAN GL.GL_SETS_OF_BOOKS_U2
TABLE ACCESS BY INDEX ROWID GL.GL_CODE_COMBINATIONS
INDEX UNIQUE SCAN GL.GL_CODE_COMBINATIONS_U1
TABLE ACCESS BY INDEX ROWID GL.GL_PERIODS
INDEX RANGE SCAN GL.GL_PERIODS_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUES_N1
INDEX RANGE SCAN APPLSYS.FND_FLEX_VALUE_NORM_HIER_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUES_TL
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUES_TL_U1
TABLE ACCESS BY INDEX ROWID APPLSYS.FND_FLEX_VALUE_SETS
INDEX UNIQUE SCAN APPLSYS.FND_FLEX_VALUE_SETS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
INDEX UNIQUE SCAN GL.GL_JE_HEADERS_U1
Is there anything I can do in Discoverer Desktop / Administration to avoid this problem?
Many thanks,
Lance

Hi Rod,
I've tried the condition (Batch Status||'' = 'Posted') as you suggested, but the query time is still over 20 minutes. To test, I changed it to (Batch Status||'' = 'Unposted') and the query returned within seconds again.
I've been doing some more digging and have found the database view that is linked to the Journal Batches folder. See below.
I think the problem is with the column using DECODE. When querying the column in TOAD, the value P is returned, but in Discoverer the condition is applied to the value Posted. I'm not too sure how DECODE works, but I think this could be causing some sort of issue with full table scans. How do we get around this?
Lance
DECODE( JOURNAL_BATCH1.STATUS,
'+', 'Unable to validate or create CTA',
'+*', 'Was unable to validate or create CTA',
'-','Invalid or inactive rounding differences account in journal entry',
'-*', 'Modified invalid or inactive rounding differences account in journal entry',
'<', 'Showing sequence assignment failure',
'<*', 'Was showing sequence assignment failure',
'>', 'Showing cutoff rule violation',
'>*', 'Was showing cutoff rule violation',
'A', 'Journal batch failed funds reservation',
'A*', 'Journal batch previously failed funds reservation',
'AU', 'Showing batch with unopened period',
'B', 'Showing batch control total violation',
'B*', 'Was showing batch control total violation',
'BF', 'Showing batch with frozen or inactive budget',
'BU', 'Showing batch with unopened budget year',
'C', 'Showing unopened reporting period',
'C*', 'Was showing unopened reporting period',
'D', 'Selected for posting to an unopened period',
'D*', 'Was selected for posting to an unopened period',
'E', 'Showing no journal entries for this batch',
'E*', 'Was showing no journal entries for this batch',
'EU', 'Showing batch with unopened encumbrance year',
'F', 'Showing unopened reporting encumbrance year',
'F*', 'Was showing unopened reporting encumbrance year',
'G', 'Showing journal entry with invalid or inactive suspense account',
'G*', 'Was showing journal entry with invalid or inactive suspense account',
'H', 'Showing encumbrance journal entry with invalid or inactive reserve account',
'H*', 'Was showing encumbrance journal entry with invalid or inactive reserve account',
'I', 'In the process of being posted',
'J', 'Showing journal control total violation',
'J*', 'Was showing journal control total violation',
'K', 'Showing unbalanced intercompany journal entry',
'K*', 'Was showing unbalanced intercompany journal entry',
'L', 'Showing unbalanced journal entry by account category',
'L*', 'Was showing unbalanced journal entry by account category',
'M', 'Showing multiple problems preventing posting of batch',
'M*', 'Was showing multiple problems preventing posting of batch',
'N', 'Journal produced error during intercompany balance processing',
'N*', 'Journal produced error during intercompany balance processing',
'O', 'Unable to convert amounts into reporting currency',
'O*', 'Was unable to convert amounts into reporting currency',
'P', 'Posted',
'Q', 'Showing untaxed journal entry',
'Q*', 'Was showing untaxed journal entry',
'R', 'Showing unbalanced encumbrance entry without reserve account',
'R*', 'Was showing unbalanced encumbrance entry without reserve account',
'S', 'Already selected for posting',
'T', 'Showing invalid period and conversion information for this batch',
'T*', 'Was showing invalid period and conversion information for this batch',
'U', 'Unposted',
'V', 'Journal batch is unapproved',
'V*', 'Journal batch was unapproved',
'W', 'Showing an encumbrance journal entry with no encumbrance type',
'W*', 'Was showing an encumbrance journal entry with no encumbrance type',
'X', 'Showing an unbalanced journal entry but suspense not allowed',
'X*', 'Was showing an unbalanced journal entry but suspense not allowed',
'Z', 'Showing invalid journal entry lines or no journal entry lines',
'Z*', 'Was showing invalid journal entry lines or no journal entry lines', NULL ), -
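A side note on why that DECODE matters for performance: a WHERE clause written against the decoded label is a predicate on an expression, so an index on the raw STATUS column cannot be used and the optimizer falls back to a full table scan, while a predicate on the raw one-letter code (here 'P' for Posted) can use the index. The effect is easy to reproduce in any SQL engine; here is a small, self-contained SQLite sketch (the table and index names are made up for illustration, and CASE stands in for Oracle's DECODE):

```python
import sqlite3

def plan(conn, sql):
    # Return the human-readable EXPLAIN QUERY PLAN detail for `sql`
    # (the last column of the first plan row).
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][-1]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE je_batches (batch_id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("CREATE INDEX je_batches_status ON je_batches (status)")
conn.executemany("INSERT INTO je_batches (status) VALUES (?)",
                 [("P",), ("U",), ("P",), ("S",)] * 100)

# Filtering on the decoded label hides the column inside an expression,
# so the index on status cannot be used for the lookup.
decoded = plan(conn, "SELECT COUNT(*) FROM je_batches "
                     "WHERE CASE status WHEN 'P' THEN 'Posted' "
                     "WHEN 'U' THEN 'Unposted' END = 'Posted'")

# Filtering on the raw status code lets the optimizer use the index.
raw = plan(conn, "SELECT COUNT(*) FROM je_batches WHERE status = 'P'")

print(decoded)  # a SCAN of the table
print(raw)      # a SEARCH using the status index
```

The same idea applies in the Discoverer case: if the condition can be pushed onto the raw JOURNAL_BATCH1.STATUS = 'P' rather than onto the decoded 'Posted' string, an index on STATUS becomes usable again.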
JTable text alignment issues when using JPanel as custom TableCellRenderer
Hi there,
I'm having some difficulty with text alignment/border issues when using a custom TableCellRenderer. I'm using a JPanel with GroupLayout (although I've also tried others like FlowLayout), and I can't seem to get label text within the JPanel to align properly with the other cells in the table. The text inside my 'panel' cell is shifted downward. If I use the code from the DefaultTableCellRenderer to set the border when the cell receives focus, the problem gets worse as the text shifts when the new border is applied to the panel upon cell selection. Here's an SSCCE to demonstrate:
import java.awt.Color;
import java.awt.Component;
import java.awt.EventQueue;
import javax.swing.GroupLayout;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JTable;
import javax.swing.border.Border;
import javax.swing.table.TableCellRenderer;
import javax.swing.table.TableColumn;
import sun.swing.DefaultLookup;

public class TableCellPanelTest extends JFrame {

    private class PanelRenderer extends JPanel implements TableCellRenderer {
        private JLabel label = new JLabel();

        public PanelRenderer() {
            GroupLayout layout = new GroupLayout(this);
            layout.setHorizontalGroup(layout.createParallelGroup().addComponent(label));
            layout.setVerticalGroup(layout.createParallelGroup().addComponent(label));
            setLayout(layout);
        }

        public Component getTableCellRendererComponent(JTable table, Object value, boolean isSelected, boolean hasFocus, int row, int column) {
            if (isSelected) {
                setBackground(table.getSelectionBackground());
            } else {
                setBackground(table.getBackground());
            }

            // Border section taken from DefaultTableCellRenderer
            if (hasFocus) {
                Border border = null;
                if (isSelected) {
                    border = DefaultLookup.getBorder(this, ui, "Table.focusSelectedCellHighlightBorder");
                }
                if (border == null) {
                    border = DefaultLookup.getBorder(this, ui, "Table.focusCellHighlightBorder");
                }
                setBorder(border);

                if (!isSelected && table.isCellEditable(row, column)) {
                    Color col;
                    col = DefaultLookup.getColor(this, ui, "Table.focusCellForeground");
                    if (col != null) {
                        super.setForeground(col);
                    }
                    col = DefaultLookup.getColor(this, ui, "Table.focusCellBackground");
                    if (col != null) {
                        super.setBackground(col);
                    }
                }
            } else {
                setBorder(null /*getNoFocusBorder()*/);
            }

            // Set up our label
            label.setText(value.toString());
            label.setFont(table.getFont());
            return this;
        }
    }

    public TableCellPanelTest() {
        JTable table = new JTable(new Integer[][]{{1, 2, 3}, {4, 5, 6}}, new String[]{"A", "B", "C"});
        // set up a custom renderer on the first column
        TableColumn firstColumn = table.getColumnModel().getColumn(0);
        firstColumn.setCellRenderer(new PanelRenderer());
        getContentPane().add(table);
        pack();
    }

    public static void main(String[] args) {
        EventQueue.invokeLater(new Runnable() {
            public void run() {
                new TableCellPanelTest().setVisible(true);
            }
        });
    }
}

There are basically two problems:
1) When first run, the text in the custom renderer column is shifted downward slightly.
2) Once a cell in the column is selected, it shifts down even farther.
I'd really appreciate any help in figuring out what's up!
Thanks!

1) Layout managers need to take the border into account, so the label is placed at (1,1) while plain labels just start at (0,0) of the cell rect. Also, layout managers tend not to shrink components below their minimum size. Setting the label's minimum size to (0,0) seems to get the same effect in your example. Doing the same for the maximum size helps if you set the row height of the JTable larger. Easier might be to use BorderLayout, which ignores min/max for the center component (and min/max height for west/east, etc.).
2) DefaultTableCellRenderer uses a 1px border if the UI no-focus border is null; you don't.
3) Please include a setDefaultCloseOperation in an SSCCE. I think I've got a hundred test programs running now :P. -
Select count from large fact tables with bitmap indexes on them
Hi..
I have several large fact tables with bitmap indexes on them, and when I do a select count from these tables, I get a different result than when I do a select count of column one from the table grouped by column one. I don't have any null values in these columns. Is there a patch or a one-off that can rectify this?
Thx

You may have corruption in the index if the queries ...
Select /*+ full(t) */ count(*) from my_table t
... and ...
Select /*+ index_combine(t my_index) */ count(*) from my_table t;
... give different results.
Look at metalink for patches, and in the meantime drop-and-recreate the indexes or make them unusable then rebuild them. -
Performance hit when using a FirePro GPU?
Hey there!
I'm interested in purchasing a cheap ATI FirePro V4900 or similar GPU in order to take advantage of my 10 bit TFT when using Photoshop. I was wondering if I have to expect a performance hit when using PS 6.0 with such a card as opposed to a normal "gamer" GPU like the NVidia 670 GTX when :
- performing normal file handling, opening PSD files, paning, zooming, brushing, etc.
- applying some of the newer GPU-enhanced filters like Liquify, Oil Paint, Iris Blur, 3d enhancements, etc.
- actually applying/rendering a more demanding fliter such as Iris Blur?
Does anyone know about this? I'm afraid I could not find any benchmarks at all except for the one on Tomshardware regarding OCL, but that one does not include professional GPUs...
Thanks for any info in advance!

There will not be synchronization when the method of A
is being called. The method 1) certainly saves memory
space, but will the performance be hurt since there
will only be one object accessed by multiple threads?
Or maybe it doesn't matter?

If there is no synchronization, it will not matter. Threads execute methods; methods do not run on objects. The object is just data that is implicitly linked to the method.
Just make sure it's safe to keep the method unsynchronized. -
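A small sketch of that last point, with made-up names: a method that touches only its parameters and locals has no shared mutable state, so one shared instance can serve many threads without synchronization:

```python
import threading

class Squarer:
    # Stateless: the method reads only its argument and locals,
    # so a single shared instance is safe for concurrent callers.
    def square(self, x):
        return x * x

shared = Squarer()
results = [None] * 8

def worker(i):
    # Each thread writes only its own slot, so no lock is needed here either.
    results[i] = shared.square(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Synchronization only becomes necessary once the method reads or writes fields of the shared object.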
Performance problem when using CAPS LOCK piano input
Dear reader,
I'm very new to Logic and am running into a performance problem when using the CAPS LOCK-piano-keyboard for input of an instrument: when I'm not recording everything is fine and the program instantly responds on my keystrokes, but as soon as I go into record-mode there is sometimes a delay in the response-time (so I press a key and it takes up to half a second longer before the note is actually played).
Is there anything to do about this to improve performance (for example turning of certain features of the application), or should I never use the CAPS LOCK keyboard anyway and go straight for an external MIDI-keyboard?
Thanks and regards,
Tim MetzDoes your project have Audio tracks and just how heavy it is, how many tracks? Also, what kind of Software Instrument do you use?
-
Does resteasy API have class loader issues when using via OSGi
Does resteasy API have class loader issues when using via OSGi
Hi Scott,
This isn't an answer to your question, but could you tell me which JAR files are needed for the packages:
com.sap.portal.pcm.system.ISystems
com.sap.portal.pcm.system.ISystem
and under which path I could find them.
Thanks
Regards
Meesum. -
Oracle Retail 13 - Performance issues when open, save, approving worksheets
Hi Guys,
Recently we started facing performance issues when we started working with Oracle Retail 13 worksheets from within the java GUI at clients desktops.
We run Oracle Retail 13.1 powered by Oracle Database 11g R1 and AS 10g in latest release.
Issues:
- Opening, saving, and approving worksheets with approximately 9,000 items takes up to 15 minutes.
- The time for smaller worksheets is also around 10 minutes just to open a worksheet.
- Just opening multiple worksheets also takes "ages", up to 10-15 minutes.
Questions:
- Is it expected performance for such worksheets?
- What is your experience with Oracle Retail 13 in terms of performance while working with worksheets? How much time does it normally take to open, edit, and save a worksheet?
- What are the average expected times for such operations?
Any feedback and hints would be much appreciated.
Cheers!!

Hi,
I guess you mean Order/Buyer worksheets?
This is not normal, should be quicker, matter of seconds to at most a minute.
Database side tuning is where I would look for clues.
And the obvious question: do you remember any changes to anything that may have caused the issue? Are the table and index statistics freshly gathered?
Best regards, Erik Ykema