Performance issue about using JDBC?
Since no one has replied, I'm posting again. :(
I've run into a big performance problem lately and have tried everything I can think of, but still can't fix it. Could you help me out or give me some suggestions?
Oracle 8i for Solaris 2.6
A web application whose back end is an Oracle database, developed in Java using a JDBC driver. It also uses Servlets. Reports are generated in the browser using dynamic SQL.
When I click a link to generate a report in the browser, it runs the corresponding SQL, then returns the result to the browser. The problem is that it takes a very long time to get the result: even a simple query takes around 2-3 minutes. But if I run the same SQL in
SQL*Plus, it takes only 4-5 seconds, or even less. So I think the indexing for this query is fine. (I also rebuilt all indexes, with the same result.) All the hit ratios in the SGA are also OK. While the browser generates reports, I don't see high CPU usage or I/O activity.
I really have no idea why this happens. I think the Oracle DB itself is fine, because the query runs normally in SQL*Plus. The problem may be related to the JDBC driver or the JDBC connection. The developers have no clue either. When the Java app runs the query, does it access the tables and indexes the same way SQL*Plus does?
Any idea or suggestions?
Thanks a lot and have a good day!
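A classic cause of "fast in SQL*Plus, slow over JDBC" is the driver's row-fetch size: by default the Oracle JDBC driver retrieves only 10 rows per network round trip, so a result set of tens of thousands of rows costs thousands of round trips between the web tier and the database. This is only a hypothesis for this case, but it is cheap to test with the standard `Statement.setFetchSize` call. The sketch below just quantifies the round-trip arithmetic; the row counts are illustrative, not taken from the poster's system:

```java
// Sketch: how fetch size translates into network round trips.
// Oracle's JDBC default is 10 rows per round trip; SQL*Plus runs
// locally with an arraysize of 15, so it never pays this cost.
public class FetchSizeMath {
    /** Network round trips needed to fetch `rows` rows at a given fetch size. */
    static long roundTrips(long rows, int fetchSize) {
        return (rows + fetchSize - 1) / fetchSize; // ceiling division
    }

    public static void main(String[] args) {
        long def   = roundTrips(50_000, 10);  // driver default fetch size
        long tuned = roundTrips(50_000, 500); // after stmt.setFetchSize(500)
        System.out.println("default: " + def + " round trips");   // 5000
        System.out.println("tuned:   " + tuned + " round trips"); // 100
    }
}
```

If calling `stmt.setFetchSize(500)` before executing the report query collapses the 2-3 minutes to seconds, the database was never the problem, only the row-by-row network traffic.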
Thanks, all.
So, do you have any suggestions on the following code?
DESCRIBE TABLE gt_vbeln LINES l_lines.
IF l_lines = 0.
***>>Links20060411
* ELSEIF l_lines GT c_1000.
* SELECT vbelv posnv vbeln posnn vbtyp_n rfmng
* APPENDING TABLE gt_vbfa_all PACKAGE SIZE c_1000
* FROM vbfa
* FOR ALL ENTRIES IN gt_vbeln
* WHERE vbelv EQ gt_vbeln-vbelv
* AND posnv EQ gt_vbeln-posnv
* AND vbtyp_n IN ('T', 'J', 'R', 'h').
* ENDSELECT.
* ELSE.
* SELECT vbelv posnv vbeln posnn vbtyp_n rfmng
* INTO TABLE gt_vbfa_all FROM vbfa
* FOR ALL ENTRIES IN gt_vbeln
* WHERE vbelv EQ gt_vbeln-vbelv
* AND posnv EQ gt_vbeln-posnv
* AND vbtyp_n IN ('T', 'J', 'R', 'h').
ELSEIF l_lines > c_1000.
SELECT vbelv posnv vbeln posnn vbtyp_n rfmng
APPENDING TABLE gt_vbfa PACKAGE SIZE c_1000
FROM vbfa
FOR ALL ENTRIES IN gt_vbeln
WHERE vbelv = gt_vbeln-vbelv
AND posnv = gt_vbeln-posnv
AND vbtyp_n IN ('T', 'J').
ENDSELECT.
ELSE.
SELECT vbelv posnv vbeln posnn vbtyp_n rfmng
INTO TABLE gt_vbfa FROM vbfa
FOR ALL ENTRIES IN gt_vbeln
WHERE vbelv = gt_vbeln-vbelv
AND posnv = gt_vbeln-posnv
AND vbtyp_n IN ('T', 'J').
ENDIF.
Currently it times out, because l_lines is very large.
I think maybe we can change the package size. But what's the best package size for performance?
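There is no single "best" PACKAGE SIZE: it bounds how many rows each pass of the SELECT loop appends to the internal table, trading memory against loop iterations, and values in the 1,000-10,000 range are common. (FOR ALL ENTRIES itself already splits the driver table into batches governed by the database interface's blocking factor, so PACKAGE SIZE is mainly a memory control.) The mechanical idea, sketched here in Java rather than ABAP and with illustrative numbers, is just chunking:

```java
import java.util.ArrayList;
import java.util.List;

public class PackageSizeSketch {
    /** Split the driver keys into packages of at most `size` entries each. */
    static <T> List<List<T>> packages(List<T> keys, int size) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += size) {
            out.add(keys.subList(i, Math.min(i + size, keys.size())));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> keys = new ArrayList<>();
        for (int i = 0; i < 2_500; i++) keys.add(i);
        // 2500 driver rows with package size 1000 -> 3 packages
        List<List<Integer>> pkgs = packages(keys, 1_000);
        System.out.println(pkgs.size());        // 3
        System.out.println(pkgs.get(2).size()); // last package holds 500
    }
}
```

With a huge gt_vbeln, the usual first fix is to shrink the driver table before the SELECT (e.g. SORT plus DELETE ADJACENT DUPLICATES on vbelv/posnv), since duplicate driver entries only add redundant database work.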
Thanks..
Similar Messages
-
ADF mobile Client App: Issue about using db sequence for populating row_id
Hi,
I'm working on an ADF Mobile Client app POC project. In the mobile app, new records can be created. The column type for row_id is VARCHAR2(15); I used the DB sequence created in the MC DB, converted the sequence number to a string, then set row_id via the initDefaults method.
The new records are created and the row_ids are set with the proper sequence numbers the first time the client app is launched in the BlackBerry simulator. But if I exit the app and re-launch it, I get net.rim.device.api.database.DataTypeException when trying to create a new record.
Could anyone please help me and let me know what could cause this issue? What is the proper way to populate the row_id? Appreciate your response in advance!
Jdev/ADFMobile extension version:
11.1.1.4.0 build 5860
mobile server version:
10.3.0.3
blackberry version:
BlackBerry JDE 5.0.0
BlackBerry Smartphone Simulators 6.0.0.141 (9800)
Code:
public class SOrgExtEOImpl extends EntityImpl {
    protected void initDefaults() {
        super.initDefaults();
        SequenceImpl seq = new SequenceImpl("S_SIEBELMOBILE_S_ORG_EXT", getDBTransaction());
        populateAttributeAsChanged(ROWID1, seq.getSequenceNumber().toString());
    }
}
Log:
First time launching the MC app:
[FINE - adfnmc.bindings - BC4JIteratorBinding - createRow]
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 0 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 1 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 2 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 3 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 4 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 5 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 6 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 7 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 8 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 9 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 10 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 11 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 12 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 1 to 2010-12-20 14:58:13.0
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 3 to 2010-12-20 14:58:13.0
[FINE - adfnmc.model - SequenceImpl - create] Database SQLite doesn't support sequences natively; creating TableSequenceImpl for
S_SIEBELMOBILE_S_ORG_EXT
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 0 to 73501
[FINE - adfnmc.model - EntityImpl - getAttribute] Retrieved from siebel.mobile.SOrgExtEO.CreatedBy at index 2
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 2 to 1-11ZQ
[FINE - adfnmc.model - EntityImpl - getAttribute] Retrieved from siebel.mobile.SOrgExtEO.LastUpdBy at index 4
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 4 to 1-53Y
[FINE - adfnmc.model - EntityImpl - getAttribute] Retrieved from siebel.mobile.SOrgExtEO.BuId at index 5
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 5 to 1-1DG
[INFO - adfnmc.model - MetaObjectManager - findOrLoadMetaObject] MetaObject siebel.mobile.AccountAddressFKAssoc not found in cache, so
loading it from XML
[INFO - adfnmc.model - MetaObjectManager - findOrLoadMetaObject] MetaObject siebel.mobile.ActivityAccountFKAssoc not found in cache, so
loading it from XML
[FINE - adfnmc.bindings - BC4JIteratorBinding - notifyRowInserted]
[FINE - adfnmc.bindings - IteratorExecutableBindingImpl - rowInserted] IterBinding - AccountPageDef:AccountAddressView1Iterator
[FINE - adfnmc.bindings - IteratorExecutableBindingImpl - notifyRowInserted] IterBinding - AccountPageDef:AccountAddressView1Iterator
[FINE - adfnmc.bindings - RangeBindingImpl - rowInserted] AccountAddressView1
[FINE - adfnmc.bindings - RangeBindingImpl - notifyNewElement] AccountAddressView1, index:0
[FINE - adfnmc.ui - BBTable - newElement] relativeIndex = 0
[FINE - adfnmc.bindings - RangeBindingImpl - setVariableIndex] Begin, AccountAddressView1, listener: oracle.adfnmc.component.ui.BBTable$1
[FINE - adfnmc.bindings - SimpleContext$Variables - setVariable] Setting variable "row" to expression #
{AccountPageDef_AccountAddressView1_rowAlias}
[FINE - adfnmc.ui - BBOutputText - endInit]
Re-launching the MC app:
[INFO - adfnmc.bindings - BC4JOperationBinding - execute] Preparing to execute OperationBinding id:'CreateInsert'
[FINE - adfnmc.bindings - BC4JIteratorBinding - createRow]
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 0 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 1 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 2 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 3 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 4 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 5 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 6 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 7 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 8 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 9 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 10 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 11 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 12 to
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 1 to 2010-12-20 15:08:35.0
[FINEST - adfnmc.model - EntityImpl - populateAttribute] Setting value at index 3 to 2010-12-20 15:08:35.0
[FINE - adfnmc.model - SequenceImpl - create] Database SQLite doesn't support sequences natively; creating TableSequenceImpl for
S_SIEBELMOBILE_S_ORG_EXT
[INFO - adfnmc.ui - ErrorHandlerImpl - reportException] BindingContainer: AccountPageDef, exception: oracle.adfnmc.AMCJboException
[WARNING - adfnmc.ui - ErrorHandlerImpl - reportException] [oracle.jbo.server.SequenceImpl$TableSequenceImpl.retrieveSequenceParamsFromDB]
oracle.adfnmc.AMCJboException: ADF-MNC-60109: Error retrieving sequence parameters for sequence S_SIEBELMOBILE_S_ORG_EXT
[WARNING - adfnmc.ui - ErrorHandlerImpl - reportException] oracle.adfnmc.java.sql.SQLException:
net.rim.device.api.database.DataTypeException:Datatype mismatch
[WARNING - adfnmc.ui - ErrorHandlerImpl - reportException] Unable to retrieve String at index 2
[WARNING - adfnmc.ui - ErrorHandlerImpl - reportException]
[FINE - adfnmc.ui - MessageBox - show] message=oracle.adfnmc.AMCJboException: ADF-MNC-60109: Error retrieving sequence parameters for
sequence S_SIEBELMOBILE_S_ORG_EXT
[FINE - adfnmc.ui - MessageBox - show] oracle.adfnmc.java.sql.SQLException: net.rim.device.api.database.DataTypeException:Datatype mismatch
[FINE - adfnmc.ui - MessageBox - show] Unable to retrieve String at index 2
Using 10gR2 on Sun Solaris. For the past few months we have consistently been getting "db file parallel read" averaging over 35 ms per wait. No performance issues as such.
Using RAID 1+0. DB size is 2 TB. The workload is mixed OLTP/batch.
Is this metric high or normal? How do I justify that?
Looking at your results it's not really possible to say.
db file parallel read is a request for a number of randomly distributed blocks, and the time for a read is the time for the last block of the set to be returned.
Without knowing how many blocks are being requested at a time, you can't really determine what constitutes a reasonable time. Given that you say OLTP + batch, and have a large volume of scattered reads, it's quite possible that some queries on the batch side are doing very large index range scans, which would allow for some very large db file parallel reads.
I take it from the use of Statspack that you're not licensed for the Diagnostics and Performance packs; otherwise it would be easy to query v$active_session_history to get some idea of the number of blocks per request, as this is given by the P2 parameter. As it is, you may be able to get a rough idea by messing about with the various "physical read" numbers in the Instance Activity section of Statspack.
Regards
Jonathan Lewis -
Performance issue when using select count on large tables
Hello experts,
I have a requirement where I need to get a count of data from a database table. Later on I need to display the count in ALV format.
As per my requirement, I have to use this SELECT COUNT inside nested loops.
Below is the count snippet:
LOOP at systems assigning <fs_sc_systems>.
LOOP at date assigning <fs_sc_date>.
SELECT COUNT( DISTINCT crmd_orderadm_i~header )
FROM crmd_orderadm_i
INNER JOIN bbp_pdigp
ON crmd_orderadm_i~client EQ bbp_pdigp~client "MANDT is referred to as client
AND crmd_orderadm_i~guid EQ bbp_pdigp~guid
INTO w_sc_count
WHERE crmd_orderadm_i~created_at BETWEEN <fs_sc_date>-start_timestamp
AND <fs_sc_date>-end_timestamp
AND bbp_pdigp~zz_scsys EQ <fs_sc_systems>-sys_name.
endloop.
endloop.
In the above code snippet,
<fs_sc_systems>-sys_name is having the system name,
<fs_sc_date>-start_timestamp is having the start date of month
and <fs_sc_date>-end_timestamp is the end date of month.
Also, the data in tables crmd_orderadm_i and bbp_pdigp is very large and grows every day.
Now, the above select query is taking a lot of time to return the count, so I am facing performance issues.
Can anyone please help me optimize this code?
Thanks,
Suman
Hi Choudhary Suman,
Try this:
SELECT crmd_orderadm_i~header
INTO TABLE it_header " internal table
FROM crmd_orderadm_i
INNER JOIN bbp_pdigp
ON crmd_orderadm_i~client EQ bbp_pdigp~client
AND crmd_orderadm_i~guid EQ bbp_pdigp~guid
FOR ALL ENTRIES IN date
WHERE crmd_orderadm_i~created_at BETWEEN date-start_timestamp
AND date-end_timestamp
AND bbp_pdigp~zz_scsys EQ date-sys_name.
SORT it_header BY header.
DELETE ADJACENT DUPLICATES FROM it_header
COMPARING header.
DESCRIBE TABLE it_header LINES v_lines.
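The shape of this fix — fetch once, deduplicate, then count per group in memory instead of issuing one SELECT COUNT per loop iteration — looks the same outside ABAP. A Java sketch with hypothetical field names:

```java
import java.util.*;

public class GroupCountSketch {
    // Hypothetical row type standing in for the joined crmd_orderadm_i/bbp_pdigp result.
    record Order(String header, String system, String month) {}

    /** Count distinct headers per (system, month) in one pass over the fetched rows. */
    static Map<String, Set<String>> distinctHeaders(List<Order> rows) {
        Map<String, Set<String>> out = new HashMap<>();
        for (Order o : rows) {
            out.computeIfAbsent(o.system() + "|" + o.month(), k -> new HashSet<>())
               .add(o.header()); // Set membership gives the DISTINCT for free
        }
        return out;
    }

    public static void main(String[] args) {
        List<Order> rows = List.of(
            new Order("H1", "CRM", "2024-01"),
            new Order("H1", "CRM", "2024-01"), // duplicate header, counted once
            new Order("H2", "CRM", "2024-01"),
            new Order("H3", "SRM", "2024-02"));
        Map<String, Set<String>> counts = distinctHeaders(rows);
        System.out.println(counts.get("CRM|2024-01").size()); // 2 distinct headers
        System.out.println(counts.get("SRM|2024-02").size()); // 1
    }
}
```

One database pass replaces systems x months round trips; the grouping work then scales with the fetched rows, not with the number of loop iterations.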
Hope this information helps you.
Regards,
José -
Performance issue with using the MAX function in PL/SQL
Hello All,
We are having a performance issue with the logic below, wherein MAX is used to get the latest instance/record for a given input variable (p_in_header_id). The item_key has the format:
p_in_header_id - <number generated from a sequence>
This query takes around 1 minute 30 seconds to fetch even one record. Could someone please help if there is a better way to form this logic and improve performance in this case?
We want to get the latest record for the item_key (this we get using MAX(begin_date)) for a given p_in_header_id value.
Query 1 :
SELECT item_key FROM wf_items WHERE item_type = 'xxxxzzzz'
AND SUBSTR (item_key, 1, INSTR (item_key, '-') - 1) =p_in_header_id
AND root_activity ='START_REQUESTS'
AND begin_date =
(SELECT MAX (begin_date) FROM wf_items WHERE item_type = 'xxxxzzzz'
AND root_activity ='START_REQUESTS'
AND SUBSTR (item_key, 1, INSTR (item_key, '-') - 1) =p_in_header_id);
Could someone please help us with this performance issue? We are really stuck because of it.
Regards
First of all, thanks to all the gentlemen who replied, many thanks.
Tried the ROW_NUMBER() option, but it is still taking time. I have given the output for the query and the tkprof results as well. Even when it doesn't fetch any record (this is a valid case, because the input header id doesn't have any workflow request submitted and hence no entry in the wf_items table), look at the time it has taken.
Looked at the RANK and DENSE_RANK options which were suggested, but they still take time.
Any further suggestions or ideas as to how this could be resolved?
SELECT 'Y', 'Y', ITEM_KEY
FROM
( SELECT ITEM_KEY, ROW_NUMBER() OVER(ORDER BY BEGIN_DATE DESC) RN FROM
WF_ITEMS WHERE ITEM_TYPE = 'xxxxzzzz' AND ROOT_ACTIVITY = 'START_REQUESTS'
AND SUBSTR(ITEM_KEY,1,INSTR(ITEM_KEY,'-') - 1) = :B1
) T WHERE RN <= 1
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 1.57 0 0 0 0
Fetch 1 8700.00 544968.73 8180 8185 0 0
total 2 8700.00 544970.30 8180 8185 0 0
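Two hedged observations. First, the predicate `SUBSTR(item_key, 1, INSTR(item_key, '-') - 1) = :B1` applies functions to the indexed column, which normally prevents an index range scan on item_key; rewriting it as a leading-prefix test such as `item_key LIKE :B1 || '-%'` (assuming header ids never themselves contain '-') may let an index on item_key be used, which could explain the ~8,000-block reads above. Second, once the candidate rows are cheap to fetch, "latest record for a key" is just a max-by-date, as in this illustrative sketch (item keys and dates are invented):

```java
import java.time.LocalDate;
import java.util.*;

public class LatestItemSketch {
    // Hypothetical stand-in for a wf_items row.
    record Item(String itemKey, LocalDate beginDate) {}

    /** Latest item whose key starts with "<headerId>-", mirroring the ROW_NUMBER query. */
    static Optional<Item> latestFor(List<Item> items, String headerId) {
        return items.stream()
                    .filter(i -> i.itemKey().startsWith(headerId + "-"))
                    .max(Comparator.comparing(Item::beginDate));
    }

    public static void main(String[] args) {
        List<Item> items = List.of(
            new Item("42-1001", LocalDate.of(2012, 1, 5)),
            new Item("42-1002", LocalDate.of(2012, 3, 9)),
            new Item("99-2000", LocalDate.of(2012, 2, 1)));
        System.out.println(latestFor(items, "42").get().itemKey()); // 42-1002
    }
}
```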
many thanks -
Performance issue with using buffering in an APPL0 or APPL1 table
Hi,
Can anyone please tell me whether there is any serious performance issue with using buffering for a master or transaction table? I'm asking because when I run the Code Inspector on my transparent table I get these information messages:
Message Code 0011 ==> Buffering is activated but the delivery class is "A", and Message Code 0014 ==> Buffering is activated but the data class is "APPL1".
So what other options are there for improving performance?
Thanks,
Mahesh M.S.
Hi,
have you read the documentation?
Let me paste it here for you:
Buffering is switched on for the examined table and it has data type 'APPL0' or 'APPL1'.
Tables with data type 'APPL0' or 'APPL1' should contain master or transaction data, so these tables either contain a large amount of data or their content changes frequently. Therefore buffering the table is unfavourable. Very large tables suppress other tables in the buffer memory and hence slow down any access to them. Transaction data should not be buffered because the synchronization of the changes on the various application servers is very time consuming.
In exceptional cases, small master data tables ('APPL0', size category 0) can be buffered.
The solution depends on the table content. If it is master or transaction data, the table should not be buffered. If the table content does not consist of master or transaction data, the data type should be corrected accordingly.
This should answer your questions...
Kind regards,
Hermann -
Performance issue with using out parameter sys_refcursor
Hello,
I'm using Oracle 10g, with ODP.NET in a C# application.
I'm using about 10 stored procedures, each having one out parameter of sys_refcursor type. When I use one function in C# and call these 10 SPs, it takes about 78 ms to execute the function.
To improve performance, I created one SP with 10 output parameters of sys_refcursor type, and I just call this one SP. The time taken has increased: now it takes about 95 ms.
Is this the right approach, or how can we improve the performance when using sys_refcursor?
Please suggest; it is urgent and I'm stuck on this issue.
thanks
shruti
With 78ms and 95ms, are you talking about milliseconds or minutes? If it's milliseconds, then what's the problem? Does a difference of 17 milliseconds really matter, given that it could just be caused by network traffic or something similar? If you're talking minutes, then we would need more information about what you are attempting to do, what tables and processing are involved, what indexes you have, etc.
Query optimisation tips can be found on this thread.. When your query takes too long ....
Without more information we can't really tell what's happening. -
Performance issue when using the same query in a different way
Hello,
I have a performance problem with the statement below when running it with an insert or with execute immediate.
N.B.: This statement could be optimized further, but it is a generated statement.
When I run this statement I get one row back within one second, so there is no performance problem.
select sysdate
,5
,'testje'
,count (1)
,'NL' groupby
from (select 'different (target)' compare_type
,t.id_org_addr id_org_addr -- ID_ORG_ADDR
,t.vpd_country vpd_country -- CTL_COUNTRY
,t.addr_type addr_type -- ADDRESSTYP_COD
from (select *
from (select t.*
from ods.ods_org_addr t
left outer join
m_sy_foreign_key m
on m.vpd_country = t.vpd_country
and m.key_type = 'ORGADDR2'
and m.target_value = t.id_org_addr
where coalesce (t.end_date, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where vpd_country = 'NL' /*EGRB*/
) t
where exists
(select null
from (select *
from (select m.target_value id_org_addr
,s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod id_cegedim
,s.*
from okc_mdl_workplace_address s
left outer join
m_sy_foreign_key m
on m.vpd_country = s.ctl_country
and m.key_type = 'ORGADDR2'
and m.source_value = s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod
where coalesce (s.end_val_dat, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where ctl_country = 'NL' /*EGRB*/
) s
where t.id_org_addr = s.id_org_addr)
minus
select 'different (target)' compare_type
,s.id_org_addr id_org_addr -- ID_ORG_ADDR
,s.ctl_country vpd_country -- CTL_COUNTRY
, (select to_number (l.target_value)
from okc_code_foreign l
where l.source_code_type = 'TYS'
and l.target_code_type = 'ADDRLINKTYPE'
and l.source_value = upper (s.addresstyp_cod)
and l.vpd_country = s.ctl_country)
addr_type -- ADDRESSTYP_COD
from (select *
from (select m.target_value id_org_addr
,s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod id_cegedim
,s.*
from okc_mdl_workplace_address s
left outer join
m_sy_foreign_key m
on m.vpd_country = s.ctl_country
and m.key_type = 'ORGADDR2'
and m.source_value = s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod
where coalesce (s.end_val_dat, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where ctl_country = 'NL' /*EGRB*/
) s)
When I run this statement using an insert by placing
insert into okc_compare_results (
datetime
,compare_tables_id
,compare_target
,record_count
,groupby
) before the statement, the statement runs for about 3 to 4 minutes. The same happens when running the select part alone using execute immediate.
Below the execution plans of the insert with the select and the select only.
Could somebody tell me what causes the different behavior of the "same" statement, and what I could do to avoid this behavior?
The database version is: 11.1.0.7.0
Regards,
Fred.
SQL Statement which produced this data:
select * from table(dbms_xplan.display_cursor ('cuk3uwnxx344q',0 /*3431532430 */))
union all
select * from table(dbms_xplan.display_cursor ('862aq599gfd6n',0/*3531428851 */))
plan_table_output
SQL_ID cuk3uwnxx344q, child number 0
select sysdate ,:"SYS_B_00" ,:"SYS_B_01"
,count (:"SYS_B_02") ,:"SYS_B_03" groupby from ( (select
:"SYS_B_04" compare_type ,t.id_org_addr id_org_addr
-- ID_ORG_ADDR ,t.vpd_country vpd_country --
CTL_COUNTRY ,t.addr_type addr_type -- ADDRESSTYP_COD
from (select * from (select t.*
from ods.ods_org_addr t
left outer join
m_sy_foreign_key m on
m.vpd_country = t.vpd_country ; and
m.key_type = :"SYS_B_05" and
m.target_value = t.id_org_addr ; where
coalesce (t.end_date, to_date (:"SYS_B_06", :"SYS_B_07")) >= sysdate)
/*SGRB*/ where vpd_country = :"SYS_B_08" /*EGRB*/
Plan hash value: 3431532430
Id Operation Name Rows Bytes Cost (%CPU) Time Pstart Pstop
0 SELECT STATEMENT 1772 (100)
1 SORT AGGREGATE 1
2 VIEW 3 1772 (1) 00:00:22
3 MINUS
4 SORT UNIQUE 3 492 1146 (1) 00:00:14
* 5 HASH JOIN OUTER 3 492 1145 (1) 00:00:14
6 NESTED LOOPS
7 NESTED LOOPS 3 408 675 (1) 00:00:09
* 8 HASH JOIN 42 4242 625 (1) 00:00:08
9 PARTITION LIST SINGLE 3375 148K 155 (2) 00:00:02 KEY KEY
* 10 TABLE ACCESS FULL OKC_MDL_WORKPLACE_ADDRESS 3375 148K 155 (2) 00:00:02 KEY KEY
* 11 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 49537 2709K 469 (1) 00:00:06
* 12 INDEX UNIQUE SCAN UK_ODS_ORG_ADDR 1 1 (0) 00:00:01
* 13 TABLE ACCESS BY GLOBAL INDEX ROWID ODS_ORG_ADDR 1 35 2 (0) 00:00:01 ROWID ROWID
* 14 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 49537 1354K 469 (1) 00:00:06
15 NESTED LOOPS
16 NESTED LOOPS 1 67 9 (12) 00:00:01
17 NESTED LOOPS 1 48 8 (13) 00:00:01
* 18 HASH JOIN 1 23 6 (17) 00:00:01
* 19 TABLE ACCESS BY GLOBAL INDEX ROWID ODS_COUNTRY_SYSTEM 1 11 2 (0) 00:00:01 ROWID ROWID
* 20 INDEX RANGE SCAN PK_ODS_DIVISION_SYSTEM 1 1 (0) 00:00:01
* 21 TABLE ACCESS FULL SY_SOURCE_CODE 8 96 3 (0) 00:00:01
22 TABLE ACCESS BY INDEX ROWID SY_FOREIGN_CODE 1 25 2 (0) 00:00:01
* 23 INDEX RANGE SCAN PK_SY_FOREIGN_CODE 1 1 (0) 00:00:01
* 24 INDEX UNIQUE SCAN PK_SY_TARGET_CODE 1 0 (0)
* 25 TABLE ACCESS BY INDEX ROWID SY_TARGET_CODE 1 19 1 (0) 00:00:01
26 SORT UNIQUE 3375 332K 626 (1) 00:00:08
* 27 HASH JOIN OUTER 3375 332K 625 (1) 00:00:08
28 PARTITION LIST SINGLE 3375 148K 155 (2) 00:00:02 KEY KEY
* 29 TABLE ACCESS FULL OKC_MDL_WORKPLACE_ADDRESS 3375 148K 155 (2) 00:00:02 KEY KEY
* 30 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 49537 2709K 469 (1) 00:00:06
Predicate Information (identified by operation id):
5 - access("M"."TARGET_VALUE"="T"."ID_ORG_ADDR" AND "M"."VPD_COUNTRY"="T"."VPD_COUNTRY")
8 - access("M"."SOURCE_VALUE"="S"."WKP_ID_CEGEDIM" :SYS_B_12 S."ADR_ID_CEGEDIM" :SYS_B_13 S."ADDRESSTYP_COD" AND
"M"."VPD_COUNTRY"="S"."CTL_COUNTRY")
10 - filter(COALESCE("S"."END_VAL_DAT",TO_DATE(:SYS_B_14,:SYS_B_15))>=SYSDATE@!)
11 - access("M"."KEY_TYPE"=:SYS_B_11 AND "M"."VPD_COUNTRY"=:SYS_B_16)
12 - access("T"."ID_ORG_ADDR"="M"."TARGET_VALUE")
13 - filter(("T"."VPD_COUNTRY"=:SYS_B_08 AND COALESCE("T"."END_DATE",TO_DATE(:SYS_B_06,:SYS_B_07))>=SYSDATE@!))
14 - access("M"."KEY_TYPE"=:SYS_B_05 AND "M"."VPD_COUNTRY"=:SYS_B_08)
18 - access("CS"."ID_SYSTEM"="SK"."ID_SOURCE_SYSTEM")
19 - filter("CS"."SYSTEM_TYPE"=1)
20 - access("CS"."VPD_COUNTRY"=:B1 AND "CS"."EXP_IMP_TYPE"='I')
filter("CS"."EXP_IMP_TYPE"='I')
21 - filter("SK"."CODE_TYPE"=:SYS_B_18)
23 - access("FK"."ID_SOURCE_CODE"="SK"."ID_SOURCE_CODE" AND "FK"."SOURCE_VALUE"=UPPER(:B1) AND
"CS"."VPD_COUNTRY"="FK"."VPD_COUNTRY")
filter(("FK"."VPD_COUNTRY"=:B1 AND "FK"."SOURCE_VALUE"=UPPER(:B2) AND "CS"."VPD_COUNTRY"="FK"."VPD_COUNTRY"))
24 - access("FK"."ID_TARGET_CODE"="TK"."ID_TARGET_CODE")
25 - filter("TK"."CODE_TYPE"=:SYS_B_19)
27 - access("M"."SOURCE_VALUE"="S"."WKP_ID_CEGEDIM" :SYS_B_23 S."ADR_ID_CEGEDIM" :SYS_B_24 S."ADDRESSTYP_COD" AND
"M"."VPD_COUNTRY"="S"."CTL_COUNTRY")
29 - filter(COALESCE("S"."END_VAL_DAT",TO_DATE(:SYS_B_25,:SYS_B_26))>=SYSDATE@!)
30 - access("M"."KEY_TYPE"=:SYS_B_22 AND "M"."VPD_COUNTRY"=:SYS_B_27)
SQL_ID 862aq599gfd6n, child number 0
insert into okc_compare_results ( datetime
,compare_tables_id ,compare_target
,record_count ,groupby )
select sysdate ,:"SYS_B_00" ,:"SYS_B_01"
,count (:"SYS_B_02") ,:"SYS_B_03" groupby from ( (select
:"SYS_B_04" compare_type ,t.id_org_addr id_org_addr
-- ID_ORG_ADDR ,t.vpd_country vpd_country --
CTL_COUNTRY ,t.addr_type addr_type -- ADDRESSTYP_COD
from (select * from (select t.*
from ods.ods_org_addr t
left outer join
m_sy_foreign_key m on
m.vpd_country = t.vpd_country ; and
m.key_type = :"SYS_B_05" and
m.target_value = t.id_org_addr
Plan hash value: 3531428851
Id Operation Name Rows Bytes Cost (%CPU) Time Pstart Pstop
0 INSERT STATEMENT 1646 (100)
1 LOAD TABLE CONVENTIONAL
2 SORT AGGREGATE 1
3 VIEW 1 1646 (1) 00:00:20
4 MINUS
5 SORT UNIQUE 1 163
6 NESTED LOOPS OUTER 1 163 1067 (1) 00:00:13
7 NESTED LOOPS 1 135 599 (1) 00:00:08
* 8 HASH JOIN 19 1919 577 (2) 00:00:07
9 PARTITION LIST SINGLE 1535 69075 107 (4) 00:00:02 KEY KEY
* 10 TABLE ACCESS FULL OKC_MDL_WORKPLACE_ADDRESS 1535 69075 107 (4) 00:00:02 KEY KEY
* 11 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 49537 2709K 469 (1) 00:00:06
* 12 TABLE ACCESS BY GLOBAL INDEX ROWID ODS_ORG_ADDR 1 34 2 (0) 00:00:01 ROWID ROWID
* 13 INDEX UNIQUE SCAN UK_ODS_ORG_ADDR 25 1 (0) 00:00:01
* 14 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 1 28 468 (1) 00:00:06
15 NESTED LOOPS
16 NESTED LOOPS 1 67 8 (0) 00:00:01
17 NESTED LOOPS 1 48 7 (0) 00:00:01
18 NESTED LOOPS 1 23 5 (0) 00:00:01
* 19 TABLE ACCESS BY GLOBAL INDEX ROWID ODS_COUNTRY_SYSTEM 1 11 2 (0) 00:00:01 ROWID ROWID
* 20 INDEX RANGE SCAN PK_ODS_DIVISION_SYSTEM 1 1 (0) 00:00:01
* 21 TABLE ACCESS FULL SY_SOURCE_CODE 1 12 3 (0) 00:00:01
22 TABLE ACCESS BY INDEX ROWID SY_FOREIGN_CODE 1 25 2 (0) 00:00:01
* 23 INDEX RANGE SCAN PK_SY_FOREIGN_CODE 1 1 (0) 00:00:01
* 24 INDEX UNIQUE SCAN PK_SY_TARGET_CODE 1 0 (0)
* 25 TABLE ACCESS BY INDEX ROWID SY_TARGET_CODE 1 19 1 (0) 00:00:01
26 SORT UNIQUE 1535 151K
* 27 HASH JOIN OUTER 1535 151K 577 (2) 00:00:07
28 PARTITION LIST SINGLE 1535 69075 107 (4) 00:00:02 KEY KEY
* 29 TABLE ACCESS FULL OKC_MDL_WORKPLACE_ADDRESS 1535 69075 107 (4) 00:00:02 KEY KEY
* 30 INDEX RANGE SCAN PK_M_SY_FOREIGN_KEY 49537 2709K 469 (1) 00:00:06
Predicate Information (identified by operation id):
8 - access("M"."SOURCE_VALUE"="S"."WKP_ID_CEGEDIM" :SYS_B_12 S."ADR_ID_CEGEDIM" :SYS_B_13 S."ADDRESSTYP_COD" AND
"M"."VPD_COUNTRY"="S"."CTL_COUNTRY")
10 - filter(COALESCE("S"."END_VAL_DAT",TO_DATE(:SYS_B_14,:SYS_B_15))>=SYSDATE@!)
11 - access("M"."KEY_TYPE"=:SYS_B_11 AND "M"."VPD_COUNTRY"=:SYS_B_16)
12 - filter((COALESCE("T"."END_DATE",TO_DATE(:SYS_B_06,:SYS_B_07))>=SYSDATE@! AND "T"."VPD_COUNTRY"=:SYS_B_08))
13 - access("T"."ID_ORG_ADDR"="M"."TARGET_VALUE")
14 - access("M"."KEY_TYPE"=:SYS_B_05 AND "M"."VPD_COUNTRY"=:SYS_B_08 AND "M"."TARGET_VALUE"="T"."ID_ORG_ADDR")
filter("M"."TARGET_VALUE"="T"."ID_ORG_ADDR")
19 - filter("CS"."SYSTEM_TYPE"=1)
20 - access("CS"."VPD_COUNTRY"=:B1 AND "CS"."EXP_IMP_TYPE"='I')
filter("CS"."EXP_IMP_TYPE"='I')
21 - filter(("SK"."CODE_TYPE"=:SYS_B_18 AND "CS"."ID_SYSTEM"="SK"."ID_SOURCE_SYSTEM"))
23 - access("FK"."ID_SOURCE_CODE"="SK"."ID_SOURCE_CODE" AND "FK"."SOURCE_VALUE"=UPPER(:B1) AND
"CS"."VPD_COUNTRY"="FK"."VPD_COUNTRY")
filter(("FK"."VPD_COUNTRY"=:B1 AND "FK"."SOURCE_VALUE"=UPPER(:B2) AND "CS"."VPD_COUNTRY"="FK"."VPD_COUNTRY"))
24 - access("FK"."ID_TARGET_CODE"="TK"."ID_TARGET_CODE")
25 - filter("TK"."CODE_TYPE"=:SYS_B_19)
27 - access("M"."SOURCE_VALUE"="S"."WKP_ID_CEGEDIM" :SYS_B_23 S."ADR_ID_CEGEDIM" :SYS_B_24 S."ADDRESSTYP_COD" AND
"M"."VPD_COUNTRY"="S"."CTL_COUNTRY")
29 - filter(COALESCE("S"."END_VAL_DAT",TO_DATE(:SYS_B_25,:SYS_B_26))>=SYSDATE@!)
30 - access("M"."KEY_TYPE"=:SYS_B_22 AND "M"."VPD_COUNTRY"=:SYS_B_27)
Edited by: BluShadow on 20-Jun-2012 10:30
added {noformat}{noformat} tags for readability. Please read {message:id=9360002} and learn to do this yourself.
Yes, all the used tables are analyzed.
Thanks for pointing to the Metalink bug; I have also searched in Metalink, but didn't find this bug.
I have a little more information about the problem.
I use the following select (now in a readable format)
select count (1)
from ( (select 'different (target)' compare_type
,t.id_org_addr id_org_addr -- ID_ORG_ADDR
,t.vpd_country vpd_country -- CTL_COUNTRY
,t.addr_type addr_type -- ADDRESSTYP_COD
from (select *
from (select t.*
from ods.ods_org_addr t
left outer join
m_sy_foreign_key m
on m.vpd_country = t.vpd_country
and m.key_type = 'ORGADDR2'
and m.target_value = t.id_org_addr
where coalesce (t.end_date, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where vpd_country = 'NL' /*EGRB*/
) t
where exists
(select null
from (select *
from (select m.target_value id_org_addr
,s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod id_cegedim
,s.*
from okc_mdl_workplace_address s
left outer join
m_sy_foreign_key m
on m.vpd_country = s.ctl_country
and m.key_type = 'ORGADDR2'
and m.source_value = s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod
where coalesce (s.end_val_dat, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where ctl_country = 'NL' /*EGRB*/
) s
where t.id_org_addr = s.id_org_addr)
minus
select 'different (target)' compare_type
,s.id_org_addr id_org_addr -- ID_ORG_ADDR
,s.ctl_country vpd_country -- CTL_COUNTRY
, (select to_number (l.target_value)
from okc_code_foreign l
where l.source_code_type = 'TYS'
and l.target_code_type = 'ADDRLINKTYPE'
and l.source_value = upper (s.addresstyp_cod)
and l.vpd_country = s.ctl_country)
addr_type -- ADDRESSTYP_COD
from (select *
from (select m.target_value id_org_addr
,s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod id_cegedim
,s.*
from okc_mdl_workplace_address s
left outer join
m_sy_foreign_key m
on m.vpd_country = s.ctl_country
and m.key_type = 'ORGADDR2'
and m.source_value = s.wkp_id_cegedim || '-' || s.adr_id_cegedim || '-' || s.addresstyp_cod
where coalesce (s.end_val_dat, to_date ('99991231', 'yyyymmdd')) >= sysdate) /*SGRB*/
where ctl_country = 'NL' /*EGRB*/
) s))
The select is executed in 813 msecs.
When I execute the same select using execute immediate like:
declare
ln_count number;
begin
execute immediate q'[<select statement>]' into ln_count;
end;
This takes 3:56 minutes to complete.
When I change the second coalesce part (the one within the exists) in the following way:
the part
coalesce (s.end_val_dat, to_date ('99991231', 'yyyymmdd')) >= sysdate
is replaced by
s.end_val_dat >= sysdate or s.end_val_dat is null
then the execution time is even faster (560 msecs) in both the plain select and the select using execute immediate.
Performance issues when using Smart View and Excel 2010
Hello, we are experiencing very slow retrieval times when using Smart View with Excel 2010. We are currently on v11.1.3.00 and moved over from Excel 2003 in the last quarter. The same spreadsheets in 2010 (recreated) are running much slower than they used to in 2003, and I was wondering if anyone else out there has experienced similar problems?
It looks like there is some background caching going on, because when you copy and paste the contents into a new file and retrieve, it is better... initially. The files are generally less than 2 MB and there isn't an especially large number of subcubes requested, so I am at a loss to explain or alleviate the issue.
Any advice / tips on how to optimise the performance would be greatly appreciated.
Thanks,
Nick

Hi Nick,
Only the 32-bit version of Office 2010 is supported.
Also check these documents:
Refresh in Smart View 11.1.2.1 is Slow with MS Office 2010. (Doc ID 1362557.1)
Smart View Refresh Returns Zeros (Doc ID 758892.1)
Internet Explorer (IE7, IE8 and IE9) Recommended Settings for Oracle Hyperion Products (Doc ID 820892.1)
Thank you,
Charles Babu J
Edited by: CJX on Nov 15, 2011 12:21 PM -
Possible performance issue about DB table kmc_dbrm_contract
Hello,
We've just completed load tests for a large portal.
EP 6.40 SP20
During these tests, DB people have identified some contention on this particular table, which contains only three records.
The "offending" query seems rather fast and is just incrementing a counter. The reason for the contention is that we had a very large number of such queries, resulting in multiple locks that could last up to 1.5 seconds.
This is not related to our project developments, and therefore I assume this is portal standard behavior.
Can you help me finding what this table is all about and if there is anything standard we can do to explain and/or prevent this slight delay?
I'm not a technology expert (just a PM), so I would appreciate a rather detailed response.
Thank you,
Luis C Leme

Hello,
This is known behavior (see http://help.sap.com/saphelp_nw70ehp3/helpdata/en/62/468698a8e611d5993600508b6b8b11/frameset.htm) when FSDB Repository is used and its option "Enable FSDB Content Tracking" is ticked.
The relevant part of the official documentation says:
The database synchronization of content access might have a negative impact on performance. Every read or write content request to an FSDB resource waits to obtain a write lock on the lock record in the database. Therefore, the accumulated waiting time for obtaining the write lock in the database might increase and the waiting threads might consume a considerable amount of the available threads in the thread pool.
Best Regards,
Georgi -
Performance issues in using RBDAPP01 for reprocessing iDocs with Status 64
Hi All,
I am using the Standard ABAP Program 'RBDAPP01' for reprocessing Inbound iDocs with Status 64 (Ready to be posted).
When this is scheduled as a job in background, I find that it opens multiple sessions and occupies all available dialog sessions.
This in turn slows down the entire system.
Also, I find the addition 'Packet Size' on the selection screen for the program.
Is it related in any way to the number of sessions the program creates?
Any pointers in resolving this issue will be extremely helpful.
Thanks in advance.
Regards,
Keerthi

Hi,
When you mention Parallel Processing, it becomes active only if I choose that particular option on the selection screen right?
In my case, I haven't chosen parallel processing, but still the overall system performance seems to have fallen very badly.
Now please correct me if my understanding is wrong.
If I increase my Packet Size, it should improve the system performance, but will increase my runtime for the selected iDocs.
But as I have not selected parallel processing in the current situation, it should not have any impact here.
Have I summarized it rightly?
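On the packet-size point: conceptually, the 'Packet Size' parameter just controls how many IDocs are selected and posted as one unit of work, so a larger packet means fewer units but each unit runs longer. A rough sketch of that batching idea (plain Java, not SAP code, names hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class PacketSplit {
    // Split a worklist into packets of at most packetSize items, the way
    // a "Packet Size" parameter groups IDocs into units of work.
    static <T> List<List<T>> packets(List<T> items, int packetSize) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < items.size(); i += packetSize) {
            out.add(new ArrayList<>(items.subList(i, Math.min(i + packetSize, items.size()))));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> idocs = new ArrayList<>();
        for (int i = 1; i <= 10; i++) idocs.add(i);
        // 10 IDocs with packet size 4 -> 3 units of work (4 + 4 + 2)
        System.out.println(packets(idocs, 4).size());
    }
}
```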
Thanks in advance.
Regards,
Keerthi -
Performance issues when using AQ notification with one consumer
We have developed a system to load data from a reservation database to a reporting database
At a certain point in the process, a message with the identifier of the reservation is enqueued to a queue (multi-consumer) on the same DB and then propagated to a similar queue on the REP database.
This queue (multi-consumer) has AQ notification enabled (with one consumer) which calls the queue_callback procedure which
- dequeues the message
- calls a procedure to load the Resv data into the Reporting schema (through DB link)
We need each message to be processed ONLY ONCE, hence the use of a single subscriber (consumer).
When load testing our application with multiple threads, the number of records created in the Reservation database becomes quite large, meaning a large number of messages going through the first queue and propagating to the second queue very quickly.
But messages are not processed fast enough by the 2nd queue (notification), which falls behind.
I would like to keep using notification as processing is automatic (no need to set up dbms_jobs to dequeue etc..) or something similar
So having read articles, I feel I need to use:
- multiple subscribers to the 2nd queue where each message is processed only by one subscriber (using a rule : say 10 subscribers S0 to S10 with Si processing messages where last number of the identifier is i )
the problem with this is that there is an attempt to process the message for each subscriber, isn't there?
- a different dequeuing method where many processes are used in parallel , with each message is processed only by one subscriber
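The digit-based partitioning in the first option can be sketched as follows (plain Java, names hypothetical; in AQ the condition would live in each subscriber's rule string): each identifier maps to exactly one of the ten subscribers, so no message is processed twice.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class DigitRouting {
    // Subscriber Si handles messages whose identifier ends in digit i,
    // so every message matches exactly one subscriber.
    static int subscriberFor(long reservationId) {
        return (int) (reservationId % 10);
    }

    public static void main(String[] args) {
        Map<Integer, List<Long>> workload = new TreeMap<>();
        long[] ids = {1001L, 1002L, 1013L, 2002L, 3007L};
        for (long id : ids) {
            workload.computeIfAbsent(subscriberFor(id), k -> new ArrayList<>()).add(id);
        }
        // Messages spread disjointly over subscribers S1, S2, S3 and S7
        System.out.println(workload);
    }
}
```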
Does anyone have experience and recommendations to make on how to improve throughput of messages?
Rgds
Philippe

Hi, thanks for your interest
I am working with 10.2.0.4
My objective is to load a subset of the reservation data from the tables in the first DB (Reservation-OLTP-150 tables)
to the tables in the second DB (Reporting - about 15 tables at the moment), without affecting performance on the Reservation DB.
Thus the choice of advanced queueing (asynchronous).
- I have 2 similar queues in 2 separate databases ( AND Reporting)
The message payload is the same on both (the identifier of the reservation)
When a certain event happens on the RESERVATION database, I enqueue a message on the first database
Propagation moves the same message data to the second queue.
And there I have notification sending the message to a single consumer, which:
- calls dequeue
- and the data load procedure, which load this reservation
My performance difficulties start at the notification but I will post all the relevant code before notification, in case it has an impact.
- The 2nd queue was created with a script containing the following (similar script for the first queue)
dbms_aqadm.create_queue_table( queue_table => '&&CQT_QUEUE_TABLE_NAME',
queue_payload_type => 'RESV_DETAIL',
comment => 'Report queue table',
multiple_consumers => TRUE,
message_grouping => DBMS_AQADM.NONE,
compatible => '10.0.0',
sort_list => 'ENQ_TIME',
primary_instance => '0',
secondary_instance => '0');
dbms_aqadm.create_queue (
queue_name => '&&CRQ_QUEUE_NAME',
queue_table => '&&CRQ_QUEUE_TABLE_NAME',
max_retries => 5);
- ENQUEUING on the first queue (snippet of code)
o_resv_detail DLEX_AQ_ADMIN.RESV_DETAIL;
o_resv_detail:= DLEX_AQ_ADMIN.RESV_DETAIL(resvcode, resvhistorysequence);
DLEX_RESVEVENT_AQ.enqueue_one_message (o_resv_detail);
where DLEX_RESVEVENT_AQ.enqueue_one_message is :
PROCEDURE enqueue_one_message (msg IN RESV_DETAIL)
IS
enqopt DBMS_AQ.enqueue_options_t;
mprop DBMS_AQ.message_properties_t;
enq_msgid dlex_resvevent_aq_admin.msgid_t;
BEGIN
DBMS_AQ.enqueue (queue_name => dlex_resvevent_aq_admin.c_resvevent_queue,
enqueue_options => enqopt,
message_properties => mprop,
payload => msg,
msgid => enq_msgid);
END;
- PROPAGATION: The message is dequeued from 1st queue and enqueued automatically by AQ propagation into this 2nd queue.
(using a call to the following 'wrapper' procedure)
PROCEDURE schedule_propagate (
src_queue_name IN VARCHAR2,
destination IN VARCHAR2 DEFAULT NULL)
IS
sprocname dlex_types.procname_t:= 'dlex_resvevent_aq_admin.schedule_propagate';
BEGIN
DBMS_AQADM.SCHEDULE_PROPAGATION(queue_name => src_queue_name,
destination => destination,
latency => 10);
EXCEPTION
WHEN OTHERS
THEN
DBMS_OUTPUT.put_line (SQLERRM || ' occurred in ' || sprocname);
END schedule_propagate;
- For 'NOTIFICATION': ONE subscriber was created using:
EXECUTE DLEX_REPORT_AQ_ADMIN.add_subscriber('&&STQ_QUEUE_NAME','&&STQ_SUBSCRIBER',NULL,NULL, NULL);
this is a wrapper procedure that uses:
DBMS_AQADM.add_subscriber (queue_name => p_queue_name, subscriber => subscriber_agent );
Then notification is registered with:
EXECUTE dlex_report_aq_admin.register_notification_action ('&&AQ_SCHEMA','&&REPORT_QUEUE_NAME','&&REPORT_QUEUE_SUBSCRIBER');
- job_queue_processes is set to 10
- The callback procedure is as follows
CREATE OR REPLACE PROCEDURE DLEX_AQ_ADMIN.queue_callback (
context RAW,
reginfo SYS.AQ$_REG_INFO,
descr SYS.AQ$_DESCRIPTOR,
payload RAW,
payloadl NUMBER)
IS
s_procname CONSTANT VARCHAR2 (40) := UPPER ('queue_callback');
r_dequeue_options DBMS_AQ.DEQUEUE_OPTIONS_T;
r_message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
v_message_handle RAW(16);
o_payload RESV_DETAIL;
BEGIN
r_dequeue_options.msgid := descr.msg_id;
r_dequeue_options.consumer_name := descr.consumer_name;
DBMS_AQ.DEQUEUE(
queue_name => descr.queue_name,
dequeue_options => r_dequeue_options,
message_properties => r_message_properties,
payload => o_payload,
msgid => v_message_handle);
-- Call procedure to load data from reservation database to Reporting DB through the DB link
dlex_report.dlex_data_load.load_reservation
( in_resvcode => o_payload.resv_code,
in_resvHistorySequence => o_payload.resv_history_sequence );
COMMIT;
END queue_callback;
- I noticed that messages are not taken out of the 2nd queue,
I guess I would need to use the REMOVE option to delete messages from the queue?
Would this be a large source of performance degradation after just a few thousand messages?
- The data load through the DB link may be a little intensive, but I feel that doing things in parallel would help.
I would like to understand if Oracle has a way of dequeuing in parallel (with or without the use of notification)
In the case of multiple subscribers with notification, does the 'job_queue_processes' value have an impact on the degree of parallelism? If not, what setting does?
And is there a way supplied by Oracle to set the queue to notify only one subscriber per message?
Your advice would be very much appreciated
Philippe
Edited by: user528100 on Feb 23, 2009 8:14 AM -
Asking help for performance issues about concurrent package
One of my friends is developing a service based on Resin. They use the thread pool of the concurrent package in JDK 1.5. The service will create a lot of threads in the thread pool, and most of the threads are waiting. What they cannot determine is how the large number of waiting threads will affect performance. These threads surely occupy a lot of memory, but how will they affect the CPU?
Some documents on the Internet say that a large number of waiting threads will greatly increase thread-switching overhead, and others say no, because the scheduler is not affected by waiting threads. I'm not sure which is true. Would anyone like to give me some tips? It would be better if you could point out any documents about it.
Thanks!

No, it just depends on Data Structures 101.
You would have a list of ready threads, from which you would allocate one to the processor on some priority and fairness scheme, and another list of non-ready threads which you would only promote to the ready list when something happened to them that made them ready.
And among the ready threads you would most likely use a priority queue, so that operations on it were O(log(N)). And if the ready list also included the unready list for some strange reason, operations on it would still be O(log(N)), i.e. less than linear in the total number of threads.
And if for some strange reason it was implemented in a less efficient way than that, I would complain vociferously to the vendor. Scheduling has been going on for fifty years after all. -
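A minimal sketch of the structure described above: ready tasks sit in a priority queue (O(log N) insert/remove), while non-ready tasks live in a separate set that the picker never scans, so waiting threads cost memory but not scheduling time.

```java
import java.util.Comparator;
import java.util.HashSet;
import java.util.PriorityQueue;
import java.util.Set;

public class ReadyList {
    static class Task {
        final String name;
        final int priority; // lower number = runs first
        Task(String name, int priority) { this.name = name; this.priority = priority; }
    }

    public static void main(String[] args) {
        PriorityQueue<Task> ready =
                new PriorityQueue<>(Comparator.comparingInt((Task t) -> t.priority));
        Set<Task> waiting = new HashSet<>();

        ready.add(new Task("batch", 5));
        ready.add(new Task("ui", 1));
        Task blocked = new Task("io-wait", 3);
        waiting.add(blocked); // blocked tasks never touch the picker's queue

        System.out.println(ready.poll().name); // best-priority ready task: "ui"

        // Something happened to the blocked task: promote it to the ready list.
        waiting.remove(blocked);
        ready.add(blocked);
        System.out.println(ready.poll().name); // now "io-wait" (3 beats 5)
    }
}
```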
Performance issue while using data type 'STRING'.
Hello All,
I have created a table for storing values of different features coming under a particular country. Since the last field, 'Value field', has to hold text of up to 800 characters for some features, I have used the data type 'String' with a character length of 1000. I am able to store values of up to 1000 characters using this. Also, the table has to hold lots and lots of values, and it will increase in future.
Since I have specified the data type as 'String', I have one doubt: whether this will affect performance. The length of most of the values in my value field is less than 75 characters, and only in some cases will it exceed 700 characters. So, my question is whether the 'String' data type will allocate the length I am specifying in the table for each entry, even though the values entered are shorter than the specified length.
For example, if the value of my value field is 'Very High Complexity', which is of length 20 characters, will the space allocation be reduced to 20, or will it still be 1000 characters?
Hope someone can clarify my doubt.
Thanks In Advance,
Shino
Moved to appropriate forum
Edited by: Rob Burbank on Feb 23, 2009 4:27 PM

Hi Shino,
Well, it is possible to store using STRING or LCHR in transparent tables. There are some underlying facts here:-
1. You can only have one such field per table
2. You cannot view them in the se11 / se16 table content browser
3. You will need to maintain an additional field for storing the length of the STRING or LCHR field.
Regarding the performance:
Even though ABAP allows storing STRING or LCHR type fields in transparent tables, as soon as the length of the field crosses 255 chars it is not advisable to store it directly in the transparent tables.
You should store that field in the knowledge repository and keep only a pointer to the knowledge repository in the transparent table field.
Anyway, since you have only one field with such a requirement, I would suggest you use STRING instead of LCHR, as with LCHR you must assign a length (like 1000), so even if you are storing only 20 chars or 300 chars the system will reserve a slot of 1000 chars; this is not so with STRING, where everything is dynamic.
The result is that the reading time increases in the case of LCHR.
I hope this answered your question.
Regards,
Sagar. -
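To make the fixed-slot vs dynamic difference above concrete: a fixed-length field pads every value to the declared length, while a dynamic string stores only the actual characters. A quick illustration (plain Java, not ABAP):

```java
public class FixedVsDynamic {
    // LCHR-style storage: the value occupies the full declared slot,
    // padded with blanks regardless of its real length.
    static String storeFixed(String value, int declaredLength) {
        StringBuilder sb = new StringBuilder(value);
        while (sb.length() < declaredLength) sb.append(' ');
        return sb.toString();
    }

    public static void main(String[] args) {
        String value = "Very High Complexity";        // 20 characters
        String fixedSlot = storeFixed(value, 1000);   // LCHR-style: always 1000
        String dynamic = value;                       // STRING-style: only 20
        System.out.println(fixedSlot.length() + " vs " + dynamic.length());
    }
}
```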
Performance Issue in using Oracle Rules SDK!!!!
Hi,
I am using Oracle rules SDK. I have created a dictionary and declared 9 global variables in it. Now, before testing my ruleset in my code, I populate those variables and update datamodel.
Date startDate = new Date();
try {
    this.dataModel.update();
} catch (Exception e) {
    e.printStackTrace();
}
Date endDate = new Date();
long duration = endDate.getTime() - startDate.getTime();
System.out.println("Time Taken : " + duration);
Now the issue is that time taken by dataModel.update() is freaking 4210 milliseconds, i.e 4.2 seconds. Any idea why this is slow??
oracle.rules.sdk.editor.datamodel.DataModel = new DataModel(ruleDictionary);

When your query takes too long ...
Thanks,
Karthick. -
Performance issue in linux while using set with URL object
Hi,
I am facing performance issue while using Set(HashSet) with URL object on linux. But it is running perfectly on windows.
I am using
set.contains(urlObject)
The above statement takes around 40 seconds on Linux, and only a fraction of a millisecond on Windows.
I have checked the jre version on both OS. It is the same version (jre6)
on both the OS.
Could anyone please tell me the exact reason why the same statement takes more time on Linux than on Windows?
Thanks & Regards
Naveen

jtahlborn wrote:
I believe the URL hashCode/equals implementations have some tricky behavior which involves network access in order to run (doing hostname lookups and the like). You may want to either use simple Strings, or possibly the URI class (I think it fixed some of this behavior, although I could be wrong).

The second new thing I have learned today. I was wrong in reply #1 because, looking at the URL code for 1.6, I see that the hash code is generated from the IP address and this has lazy evaluation. Each URL placed in a HashMap (or other hash-based collection) requires a DNS lookup the first time the hash code is used.
P.S. 40 seconds does seem a long time for a DNS lookup!
Edited by: sabre150 on Feb 13, 2008 3:40 PM
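Following the explanation above, a common workaround is to key the set by java.net.URI (or plain String) instead of URL: URI's hashCode/equals compare syntactically, so HashSet lookups never trigger a DNS lookup. A small sketch:

```java
import java.net.URI;
import java.util.HashSet;
import java.util.Set;

public class UriSetLookup {
    public static void main(String[] args) {
        // URL.hashCode()/equals() may resolve the host, so HashSet<URL>
        // lookups can block on DNS. URI compares by syntax only.
        Set<URI> seen = new HashSet<>();
        seen.add(URI.create("http://example.com/a"));
        seen.add(URI.create("http://example.com/b"));

        // No network access happens in these lookups:
        System.out.println(seen.contains(URI.create("http://example.com/a"))); // true
        System.out.println(seen.contains(URI.create("http://example.com/c"))); // false
    }
}
```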