Oracle JDBC poor performance at high volume
We have a requirement to read through huge ASCII files (120 million records, 9 gigs uncompressed) and do database lookups on 4 values in each record. I have written a multithreaded application to read through one of the files and perform the lookups.
Each Thread has a dedicated Connection object from which I create a CallableStatement object that I reuse throughout the lifecycle of the Thread. I use a CallableStatement object to call a stored procedure with INOUT parameters that effectively wraps the 4 lookups into one database roundtrip.
The issue is that we can't seem to top 240 lookups per second, which is just not fast enough for our needs. Our DBA says that while we are not taxing the Oracle database in the least, we are eating up a TON of processor utilization (20 connections raises the load average on a 4-way Solaris 8 box from roughly 10 to 20). So if we up the number of threads, any performance gain we would hope to get is nullified by the slowdown brought on by the increased load.
I'll be happy to post sample code if anyone shows further interest. And yes, we've considered doing a sql*loader direct load of the file, and then doing the validation on the DB via PL/SQL. I'm just trying to find out if this is a limitation of the Oracle JDBC drivers (we've had similar results with thin and OCI), or if I'm doing something terribly wrong.
For the purposes of readability (and client confidentiality) I removed about 400 lines of processing logic that performance testing showed is not the bottleneck; the time is all spent in cs.executeUpdate(). It should compile.
import java.sql.*;
import java.io.*;
import java.util.*;

public class Sample extends Thread {

    private static final String url = "jdbc:oracle:oci8:@instance";
    private static final String dbUser = "user";
    private static final String dbPass = "pass";
    private static BufferedReader br;

    // instance members for each thread
    private Connection con;
    private CallableStatement cs = null;

    public static void main(String args[]) {
        try {
            // number of threads is the first command-line argument
            int numThreads = Integer.parseInt(args[0]);
            br = new BufferedReader(new FileReader("FileToRead.txt"));
            // create and load a LinkedList to keep track of all threads
            LinkedList threads = new LinkedList();
            Thread curThread;
            for (int i = 0; i < numThreads; i++) {
                curThread = new Sample(String.valueOf(i));
                synchronized (threads) {
                    threads.add(curThread);
                }
                curThread.start();
            }
            System.out.println("Threads started, reading file...");
            // wait for all threads to die before closing the shared resources
            Iterator it = threads.iterator();
            while (it.hasNext()) {
                ((Thread) it.next()).join();
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            // close all resources
            closeResources();
        }
    }

    public Sample(String threadId) throws Exception {
        super(threadId);
        Class.forName("oracle.jdbc.driver.OracleDriver");
        con = DriverManager.getConnection(url, dbUser, dbPass);
        con.setAutoCommit(false);
    }

    public void run() {
        String line;
        int wDimId = -1, sDimId = -1, tDimId = -1, iDimId = -1;
        int abcd;
        String efgh, ijkl, mnop, cFlg = "N";
        try {
            cs = con.prepareCall("{call my_wrapper_prc(?,?,?,?,?,?,?,?,?)}");
            // read first line
            synchronized (br) {
                line = br.readLine();
            }
            while (line != null) {
                abcd = Integer.parseInt(line.substring(0, 4).trim());
                efgh = line.substring(31, 36).trim();
                ijkl = line.substring(36, 42).trim();
                mnop = line.substring(24, 31).trim();
                cs.setInt(1, abcd);
                cs.setString(2, efgh);
                cs.setString(3, ijkl);
                cs.setString(4, mnop);
                cs.setInt(5, wDimId);
                cs.setInt(6, tDimId);
                cs.setInt(7, iDimId);
                cs.setInt(8, sDimId);
                cs.setString(9, cFlg);
                cs.registerOutParameter(5, java.sql.Types.INTEGER);
                cs.registerOutParameter(6, java.sql.Types.INTEGER);
                cs.registerOutParameter(7, java.sql.Types.INTEGER);
                cs.registerOutParameter(8, java.sql.Types.INTEGER);
                cs.registerOutParameter(9, java.sql.Types.VARCHAR);
                cs.executeUpdate();
                wDimId = cs.getInt(5);
                tDimId = cs.getInt(6);
                iDimId = cs.getInt(7);
                sDimId = cs.getInt(8);
                cFlg = cs.getString(9);
                cs.clearParameters();
                /* Logic to deal with DimId's (removed) */
                // read next line
                synchronized (br) {
                    line = br.readLine();
                }
            }
        } catch (Exception ex) {
            // if an exception occurs, let the thread die but don't exit the application
            ex.printStackTrace();
        } finally {
            // close CallableStatement and Connection objects
            try {
                if (cs != null) cs.close();
                if (con != null && !con.isClosed()) con.close();
            } catch (SQLException ex) {
                ex.printStackTrace();
            }
        }
    }

    private static void closeResources() {
        // close IO resources
    }
}
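One thing worth noting about the loop above: standard JDBC cannot batch a CallableStatement that registers OUT parameters, so every record costs a full database roundtrip. An alternative worth weighing is buffering keys per thread and resolving each buffer with a single set-based call (for example via a global temporary table join). The database plumbing is elided below and the batch size is arbitrary; this is only a sketch of the buffering shape, not the application's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: buffer N records per thread and make ONE set-based database call
// per batch, instead of one CallableStatement roundtrip per record.
// The actual bind/execute step is elided; names and sizes are illustrative.
public class BatchPlanner {
    static final int BATCH_SIZE = 50;

    // Roundtrips needed for a record count at a given batch size
    // (one call per full or partial batch).
    static int roundtrips(int records, int batchSize) {
        return (records + batchSize - 1) / batchSize;
    }

    public static void main(String[] args) {
        List<String> buffer = new ArrayList<String>();
        int roundtrips = 0;
        for (int i = 0; i < 1000; i++) {      // stand-in for file lines
            buffer.add("record-" + i);
            if (buffer.size() == BATCH_SIZE) {
                // here: bind the buffered keys and make one set-based call
                roundtrips++;
                buffer.clear();
            }
        }
        if (!buffer.isEmpty()) roundtrips++;  // flush the final partial batch
        System.out.println(roundtrips);       // prints 20, not 1000
    }
}
```

At roughly 240 single-row calls per second, collapsing 50 lookups into one call attacks exactly the per-call CPU overhead the DBA is seeing, without adding more threads.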
Similar Messages
-
Poor performance and high number of gets on seemingly simple insert/select
Versions & config:
Database : 10.2.0.4.0
Application : Oracle E-Business Suite 11.5.10.2
2 node RAC, IBM AIX 5.3
Here's the insert/select which I'm struggling to explain why it's taking 6 seconds, and why it needs to get > 24,000 blocks:
INSERT INTO WF_ITEM_ATTRIBUTE_VALUES ( ITEM_TYPE, ITEM_KEY, NAME, TEXT_VALUE,
NUMBER_VALUE, DATE_VALUE ) SELECT :B1 , :B2 , WIA.NAME, WIA.TEXT_DEFAULT,
WIA.NUMBER_DEFAULT, WIA.DATE_DEFAULT FROM WF_ITEM_ATTRIBUTES WIA WHERE
WIA.ITEM_TYPE = :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 4 0
Execute 2 3.44 6.36 2 24297 198 36
Fetch 0 0.00 0.00 0 0 0 0
total 3 3.44 6.36 2 24297 202 36
Misses in library cache during parse: 1
Misses in library cache during execute: 2
Also from the tkprof output, the explain plan and waits - virtually zero waits:
Rows Execution Plan
0 INSERT STATEMENT MODE: ALL_ROWS
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'WF_ITEM_ATTRIBUTES' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'WF_ITEM_ATTRIBUTES_PK' (INDEX (UNIQUE))
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 12 0.00 0.00
gc current block 2-way 14 0.00 0.00
db file sequential read 2 0.01 0.01
row cache lock 24 0.00 0.01
library cache pin 2 0.00 0.00
rdbms ipc reply 1 0.00 0.00
gc cr block 2-way 4 0.00 0.00
gc current grant busy 1 0.00 0.00
********************************************************************************
The statement was executed 2 times. I know from slicing up the trc file that:
exe #1 : elapsed = 0.02s, query = 25, current = 47, rows = 11
exe #2 : elapsed = 6.34s, query = 24272, current = 151, rows = 25
If I run just the select portion of the statement, using the bind values from exe #2, I get a small number of gets (< 10) and < 0.1 secs elapsed.
If I make the insert into an empty, non-partitioned table, I get :
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.01 0.08 0 137 53 25
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.08 0 137 53 25
and the same explain plan - using an index range scan on WF_Item_Attributes_PK.
This problem is part of testing of a database upgrade and country go-live. On a 10.2.0.3 test system (non-RAC), the same insert/select - using the real WF_Item_Attributes_Value table takes :
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.10 10 27 136 25
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.10 10 27 136 25
So I'm struggling to understand why the performance on the 10.2.0.4 RAC system is so much worse for this query, and why it's doing so many gets. Suggestions, thoughts, ideas welcomed.
I've verified system level things - CPUs weren't/aren't max'd out, no significant paging/swapping activity, run queue not long. AWR report for the time period shows nothing unusual.
further info on the objects concerned:
query source table :
WF_Item_Attributes_PK : unique index on Item_Type, Name. Index has 144 blocks, non-partitioned
WF_Item_Attributes tbl : non-partitioned, 160 blocks
insert destination table:
WF_Item_Attribute_Values:
range partitioned on Item_Type, and hash sub-partitioned on Item_Key
both executions of the insert hit the partition with the most data : 127,691 blocks total ; 8 sub-partitions with 15,896 to 16,055 blocks per sub-partition.
WF_Item_Attribute_Values_PK : unique index on columns Item_Type, Item_Key, Name. Range/hash partitioned as per table.
Bind values:
exe #1 : Item_Type (:B1) = OEOH, Item_Key (:B2) = 1048671
exe #2 : Item_Type (:B1) = OEOL, Item_Key (:B2) = 4253168
number of rows in WF_Item_Attribute_Values for Item_Type = OEOH : 1132587
number of rows in WF_Item_Attribute_Values for Item_Type = OEOL : 18763670
The non-RAC 10.2.0.3 test system (clone of Production from last night) has higher row counts for these 2.
thanks and regards
Ivan
hi Sven,
Thanks for your input.
1) I guess so, but I haven't lifted the lid to delve inside the form as to which one. I don't think it's the cause though, as I got poor performance running the insert statement with my own value (same statement, using my own bind value).
2) In every execution plan I've seen, checked, re-checked, it uses a range scan on the primary key. It is the most efficient I think, but the source table is small in any case - table 160 blocks, PK index 144 blocks. So I think it's the partitioned destination table that's the problem - but we only see this issue on the 10.2.0.4 pre-production (RAC) system. The 10.2.0.3 (RAC) Production system doesn't have it. This is why it's so puzzling to me - the source table read is fast, and does few gets.
3) table storage details below - the Item_Types being used were 'OEOH' (fast execution) and 'OEOL' (slow execution). Both hit partition WF_ITEM49, hence I've only expanded the subpartition info for that one (there are over 600 sub-partitions).
============= From DBA_Part_Tables : Partition Type / Count =============
PARTITI SUBPART PARTITION_COUNT DEF_TABLESPACE_NAME
RANGE HASH 77 APPS_TS_TX_DATA
1 row selected.
============= From DBA_Tab_Partitions : Partition Names / Tablespaces =============
Partition Name TS Name High Value High Val Len
WF_ITEM1 APPS_TS_TX_DATA 'A1' 4
WF_ITEM2 APPS_TS_TX_DATA 'AM' 4
WF_ITEM3 APPS_TS_TX_DATA 'AP' 4
WF_ITEM47 APPS_TS_TX_DATA 'OB' 4
WF_ITEM48 APPS_TS_TX_DATA 'OE' 4
WF_ITEM49 APPS_TS_TX_DATA 'OF' 4
WF_ITEM50 APPS_TS_TX_DATA 'OK' 4
WF_ITEM75 APPS_TS_TX_DATA 'WI' 4
WF_ITEM76 APPS_TS_TX_DATA 'WS' 4
WF_ITEM77 APPS_TS_TX_DATA MAXVALUE 8
77 rows selected.
============= From dba_part_key_columns : Partition Columns =============
NAME OBJEC Column Name COLUMN_POSITION
WF_ITEM_ATTRIBUTE_VALUES TABLE ITEM_TYPE 1
1 row selected.
PPR1 sql> @q_tabsubpart wf_item_attribute_values WF_ITEM49
============= From DBA_Tab_SubPartitions : SubPartition Names / Tablespaces =============
Partition Name SUBPARTITION_NAME TS Name High Value High Val Len
WF_ITEM49 SYS_SUBP3326 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3328 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3332 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3331 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3330 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3329 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3327 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3325 APPS_TS_TX_DATA 0
8 rows selected.
============= From dba_part_key_columns : Partition Columns =============
NAME OBJEC Column Name COLUMN_POSITION
WF_ITEM_ATTRIBUTE_VALUES TABLE ITEM_KEY 1
1 row selected.
from DBA_Segments - just for partition WF_ITEM49 :
Segment Name TSname Partition Name Segment Type BLOCKS Mbytes EXTENTS Next Ext(Mb)
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3332 TblSubPart 16096 125.75 1006 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3331 TblSubPart 16160 126.25 1010 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3330 TblSubPart 16160 126.25 1010 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3329 TblSubPart 16112 125.875 1007 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3328 TblSubPart 16096 125.75 1006 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3327 TblSubPart 16224 126.75 1014 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3326 TblSubPart 16208 126.625 1013 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3325 TblSubPart 16128 126 1008 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3332 IdxSubPart 59424 464.25 3714 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3331 IdxSubPart 59296 463.25 3706 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3330 IdxSubPart 59520 465 3720 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3329 IdxSubPart 59104 461.75 3694 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3328 IdxSubPart 59456 464.5 3716 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3327 IdxSubPart 60016 468.875 3751 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3326 IdxSubPart 59616 465.75 3726 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3325 IdxSubPart 59376 463.875 3711 .125
sum 4726.5
[the @ in the TS Name is my shortcode, as Apps stupidly prefixes every ts with "APPS_TS_"]
The Tablespaces used for all subpartitions are UNIFORM extent mgmt, AUTO segment_space_management; LOCAL extent mgmt.
regards
Ivan -
Oracle reports having issue with high volume
Hi
We are facing problems when generating Oracle reports with 100,000 records to be written into PDF/text format through Oracle Reports.
The error we are getting is
Unexpected Signal : 11 occurred at PC=0xFEDCD524
Function=[Unknown. Nearest: JVM_GetCPFieldClassNameUTF+0x4B30]
Library=/orarep/asuser/product/9.0.4/Reports/jdk/jre/lib/sparc/client/libjvm.so
Dynamic libraries:
0x10000 /orarep/asuser/product/9.0.4/Reports/bin/rwrun
0xfec00000 /orarep/asuser/product/9.0.4/Reports/jdk/jre/lib/sparc/libjvm.so
0xfe000000 /orarep/asuser/product/9.0.4/Reports/lib/librw90.so
0xff100000 /orarep/asuser/product/9.0.4/Reports/lib/libobx90.so.0
0xff0d0000 /orarep/asuser/product/9.0.4/Reports/lib/libnn90.so.0
0xff080000 /orarep/asuser/product/9.0.4/Reports/lib/librws90.so.0
0xfdd80000 /orarep/asuser/product/9.0.4/Reports/lib/libde90.so.0
0xfebc0000 /orarep/asuser/product/9.0.4/Reports/lib/libucol90.so.0
0xfeb90000 /orarep/asuser/product/9.0.4/Reports/lib/libuicc90.so.0
0xfeb30000 /orarep/asuser/product/9.0.4/Reports/lib/libca90.so.0
0xfeb10000 /orarep/asuser/product/9.0.4/Reports/lib/libmma90.so.0
0xfead0000 /orarep/asuser/product/9.0.4/Reports/lib/libmmiw90.so.0
0xff060000 /orarep/asuser/product/9.0.4/Reports/lib/libmmov90.so.0
0xfea90000 /orarep/asuser/product/9.0.4/Reports/lib/libmmos90.so.0
0xfdfc0000 /orarep/asuser/product/9.0.4/Reports/lib/libmmoi90.so.0
0xfdfa0000 /orarep/asuser/product/9.0.4/Reports/lib/libmmia90.so.0
0xfdd60000 /orarep/asuser/product/9.0.4/Reports/lib/libmmft90.so.0
0xfdd20000 /orarep/asuser/product/9.0.4/Reports/lib/libmmcm90.so.0
0xfdc00000 /orarep/asuser/product/9.0.4/Reports/lib/libvgs90.so.0
0xfdd00000 /orarep/asuser/product/9.0.4/Reports/lib/libuihx90.so.0
0xfdb90000 /orarep/asuser/product/9.0.4/Reports/lib/libuc90.so.0
0xfdb20000 /orarep/asuser/product/9.0.4/Reports/lib/libuipr90.so.0
0xfd900000 /orarep/asuser/product/9.0.4/Reports/lib/libuimotif90.so.0
0xfdae0000 /orarep/asuser/product/9.0.4/Reports/lib/libot90.so.0
0xfd8a0000 /orarep/asuser/product/9.0.4/Reports/lib/librem90.so.0
0xfd820000 /orarep/asuser/product/9.0.4/Reports/lib/libree90.so.0
0xfd800000 /orarep/asuser/product/9.0.4/Reports/lib/librec90.so.0
0xfd7d0000 /orarep/asuser/product/9.0.4/Reports/lib/libuiimg90.so.0
0xfd790000 /orarep/asuser/product/9.0.4/Reports/lib/libuia90.so.0
0xfdac0000 /orarep/asuser/product/9.0.4/Reports/lib/libtknqap90.so.0
0xfd750000 /orarep/asuser/product/9.0.4/Reports/lib/libutt90.so.0
0xfd720000 /orarep/asuser/product/9.0.4/Reports/lib/librod90.so.0
0xfd6f0000 /orarep/asuser/product/9.0.4/Reports/lib/libror90.so.0
0xfd6c0000 /orarep/asuser/product/9.0.4/Reports/lib/libros90.so.0
0xfd690000 /orarep/asuser/product/9.0.4/Reports/lib/libuat90.so.0
0xfd670000 /orarep/asuser/product/9.0.4/Reports/lib/libdfc90.so.0
0xfd650000 /orarep/asuser/product/9.0.4/Reports/lib/libutc90.so.0
0xfd630000 /orarep/asuser/product/9.0.4/Reports/lib/libutj90.so.0
0xfd5f0000 /orarep/asuser/product/9.0.4/Reports/lib/libutl90.so.0
0xfd5d0000 /orarep/asuser/product/9.0.4/Reports/lib/libutsl90.so.0
0xfcc00000 /orarep/asuser/product/9.0.4/Reports/lib/libclntsh.so.9.0
0xfd480000 /orarep/asuser/product/9.0.4/Reports/lib/libnnz9.so
0xfd5b0000 /orarep/asuser/product/9.0.4/Reports/lib/libwtc9.so
0xfcb00000 /usr/lib/libnsl.so.1
0xfd460000 /usr/lib/libsocket.so.1
0xfd440000 /usr/lib/libgen.so.1
0xff3fa000 /usr/lib/libdl.so.1
0xfcbe0000 /usr/lib/libsched.so.1
0xfca00000 /usr/lib/libc.so.1
0xfcbc0000 /usr/lib/libaio.so.1
0xfc9b0000 /usr/lib/libm.so.1
0xfc980000 /usr/lib/libthread.so.1
0xfc700000 /usr/lib/libXm.so.4
0xfc690000 /usr/openwin/lib/libXt.so.4
0xfc580000 /usr/openwin/lib/libX11.so.4
0xff3a0000 /usr/lib/libw.so.1
0xfcad0000 /usr/lib/libCrun.so.1
0xfc400000 /orarep/asuser/product/9.0.4/Reports/lib/libix90.so
0xfc960000 /orarep/asuser/product/9.0.4/Reports/lib/libixd90.so
0xfc940000 /usr/lib/librt.so.1
0xfc670000 /usr/lib/libmp.so.2
0xfc640000 /usr/openwin/lib/libXext.so.0
0xfc560000 /usr/openwin/lib/libSM.so.6
0xfc530000 /usr/openwin/lib/libICE.so.6
0xfc3e0000 /usr/lib/libmd5.so.1
0xfdcf0000 /usr/platform/SUNW,Sun-Fire-V440/lib/libc_psr.so.1
0xfc3a0000 /orarep/asuser/product/9.0.4/Reports/jdk/jre/lib/sparc/native_threads/libhpi.so
0xfc370000 /orarep/asuser/product/9.0.4/Reports/jdk/jre/lib/sparc/libverify.so
0xfc330000 /orarep/asuser/product/9.0.4/Reports/jdk/jre/lib/sparc/libjava.so
0xfc310000 /orarep/asuser/product/9.0.4/Reports/jdk/jre/lib/sparc/libzip.so
0xe3420000 /orarep/asuser/product/9.0.4/Reports/jdk/jre/lib/sparc/libnet.so
0xe3550000 /orarep/asuser/product/9.0.4/Reports/jdk/jre/lib/sparc/libioser12.so
0xe24e0000 /usr/lib/nss_files.so.1
0xe2610000 /usr/lib/nss_cluster.so.1
0xe24b0000 /usr/cluster/lib/libclos.so.1
0xe23d0000 /usr/lib/libsecdb.so.1
0xe23b0000 /usr/lib/libdoor.so.1
0xe0b00000 /usr/lib/libCstd.so.1
0xe2260000 /usr/lib/libcmd.so.1
0xe2220000 /usr/lib/cpu/sparcv8plus/libCstd_isa.so.1
0xe2390000 /orarep/asuser/product/9.0.4/Reports/lib/librwu90.so
0xe1f30000 /orarep/asuser/product/9.0.4/Reports/jdk/jre/lib/sparc/libcmm.so
0xe15b0000 /orarep/asuser/product/9.0.4/Reports/jdk/jre/lib/sparc/libjpeg.so
0xe0600000 /orarep/asuser/product/9.0.4/Reports/jdk/jre/lib/sparc/libawt.so
0xe0580000 /orarep/asuser/product/9.0.4/Reports/jdk/jre/lib/sparc/libmlib_image.so
0xe20d0000 /orarep/asuser/product/9.0.4/Reports/jdk/jre/lib/sparc/headless/libmawt.so
Local Time = Thu Feb 2 18:08:09 2006
Elapsed Time = 222
# HotSpot Virtual Machine Error : 11
# Error ID : 4F530E43505002E6 01
# Please report this error at
# http://java.sun.com/cgi-bin/bugreport.cgi
# Java VM: Java HotSpot(TM) Client VM (1.4.1_03-b02 mixed mode)
# An error report file has been saved as /tmp/hs_err_pid31802.log.
# Please refer to the file for further information.
Could anybody help us to find out the solution.
How many pages of the file are generated? Also, was it being written to a file or displayed on the screen? Try also to check the log file for any possible solutions.
-
How to get comparable Oracle JDBC performance using Java 1.4 vs 1.1.7?
Our application makes extensive use of JDBC to access an Oracle database. We wrote it a number of years ago using java 1.1.7 and we have been unable to move to new versions of java because of the performance degradation.
I traced the problem to JDBC calls. I can reproduce the problem using a simple program that simply connects to the database, executes a simple query and then reads the data. The same program running under java 1.4 is about 60% slower than under java 1.1.7. The query is about 80% slower and getting the data is about 150% slower!
The program is attached below. Note, I run the steps twice as the first time the times are much larger which I attribute to java doing some initializations. So the second set of values I think are more representative of the actual performance in our application where there are numerous accesses to the database. Specifically, I focus on step 4 which is the execute query command and step 5 which is the data retrieval step. The table being read has 4 columns with 170 tuples in it.
Here are the timings I get when I run this on a Sparc Ultra 5 running
SunOs 5.8 using an Oracle database running 8.1.7:
java 1.1.7 java 1.4
overall: 2.1s 3.5s
step 1: 30 200
step 2: 886 2009
step 3: 2 2
step 4: 9 17
step 5: 122 187
step 6: 1 1
step 1: 0 0
step 2: 203 161
step 3: 0 1
step 4: 8 15 <- 87% slower
step 5: 48 117 <- 143% slower
step 6: 1 2
I find the same poor performance from java versions 1.2 and 1.3.
I tried using DataDirect's type 4 JDBC driver which gives a little better performance but overall it is still much slower than using java 1.1.7.
Why do the newer versions of java have such poor performance when using JDBC?
What can be done so that we can have performance similar to java 1.1.7
using java 1.4?
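The "run everything twice" trick described above can be generalized: discard a few warmup runs (class loading, JIT compilation) and report the median of the remaining timings, so the comparison across JVMs isn't skewed by one-time initialization. A small helper along those lines - the names are mine, not from the original program:

```java
import java.util.Arrays;

// Warmup-aware micro-timing: run the task a few times first to absorb
// class loading and JIT compilation, then time the remaining runs and
// report the median sample. Mirrors the "second set of values" approach.
public class WarmupTimer {
    interface Task { void run() throws Exception; }

    static long medianMillis(Task task, int warmups, int runs) throws Exception {
        for (int i = 0; i < warmups; i++) task.run();  // discarded warmup runs
        long[] samples = new long[runs];
        for (int i = 0; i < runs; i++) {
            long t0 = System.currentTimeMillis();
            task.run();
            samples[i] = System.currentTimeMillis() - t0;
        }
        Arrays.sort(samples);
        return samples[runs / 2];                       // median sample
    }

    public static void main(String[] args) throws Exception {
        // demo with a dummy 50 ms task instead of a real JDBC call
        long m = medianMillis(new Task() {
            public void run() throws Exception { Thread.sleep(50); }
        }, 2, 5);
        System.out.println("median ms: " + m);
    }
}
```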
========================================================================
import java.util.*;
import java.io.*;
import java.sql.*;

public class test12 {
    public static void main(String args[]) {
        try {
            long time1 = System.currentTimeMillis();
            /* step 1 */ DriverManager.registerDriver(
                    new oracle.jdbc.driver.OracleDriver());
            long time2 = System.currentTimeMillis();
            /* step 2 */ Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@dbserver1:1521:db1", "user1", "passwd1");
            long time3 = System.currentTimeMillis();
            /* step 3 */ Statement stmt = conn.createStatement();
            long time4 = System.currentTimeMillis();
            /* step 4 */ ResultSet rs = stmt.executeQuery("select * from table1");
            long time5 = System.currentTimeMillis();
            /* step 5 */ while (rs.next()) {
                int message_num = rs.getInt(1);
                String message = rs.getString(2);
            }
            long time6 = System.currentTimeMillis();
            /* step 6 */ rs.close(); stmt.close();
            long time7 = System.currentTimeMillis();
            System.out.println("step 1: " + (time2 - time1));
            System.out.println("step 2: " + (time3 - time2));
            System.out.println("step 3: " + (time4 - time3));
            System.out.println("step 4: " + (time5 - time4));
            System.out.println("step 5: " + (time6 - time5));
            System.out.println("step 6: " + (time7 - time6));
            System.out.flush();
        } catch (Exception e) {
            System.out.println("got exception: " + e.getMessage());
        }
        // ... repeat the same 6 steps again ...
    }
}
If I run my sample program with the -server option, it takes a lot longer (6.8s vs 3.5s).
Which has to be expected, as the -server option optimizes for long running programs - so it should go with my second suggestion, more below...
I am not certain what you mean by "just let the jvm running". Our users issue a command (in Unix) which invokes one of our java programs to access or update data in a database. I think what you are suggesting would require that I rewrite our application to have a java program always running on the users' workstation, and to also rewrite our commands (over a hundred) to somehow pass data to and receive data from this new server java program. That does not seem very reasonable just to move to a new version of java. Or are you suggesting something else?
No, I was just suggesting what you described. But if this is not an option, then maybe you should port your java programs to C or another native language. Or you could try the IBM JDK with the -faststart (or similar) option. If the Unix you mention is AIX, then there would be the option of a resettable VM, but I cannot say if that VM would solve your problem. Java is definitely not good for applications which only issue some one-off commands, because the HotSpot compiler cannot be used efficiently there. You can only try to get 1.1.7 performance by experimenting with VM parameters (execute java -X). -
High volumes on receiver JDBC adapter
Hi,
We have a RFC ->JDBC scenario where the RFC pulls huge amounts of data from R/3 and sends to XI.
XI needs to upload this data into 5 different Db tables.Each table contains 3000-8000 records with each record containing 10-15 fields.
When we try to run this scenario, due to the high volume of data the JDBC adapter hangs and messages stay in 'holding/delivering' status for a long time.
Please advise on possibilities of handling this within XI.
Hi,
We changed the design and now we have only 'INSERT' and we don't have concerns with table refresh now.
I am splitting the records in XI mapping as batches of 1000 each. But I found one of the tables has more than 100,000 (1 lakh) records.
The data volume that we received in RFC is 150000 records(45MB). It took 7.5 mins to process this msg in Integration Engine.
But the message delivery into the Db tables (receiver JDBC adapter processing) is very slow. At maximum it can process 250 records per minute.
Please provide your inputs on this design. Is it OK to accept a 45MB message into XI in one shot? Even though the message got processed (split) in IE, the pieces are processing in AE for a long time. I believe this will have an impact on other interfaces that use the JDBC adapter.
Please provide your suggestions on how to improve the design/performance of this interface.
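The splitting described above (batches of 1000 records, so no single message carries the full 150,000 rows) can be sketched as a plain chunking routine; in PI this logic would live in the mapping step, and the names here are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: break a large extracted record set into fixed-size chunks so
// each message handed to the receiver JDBC adapter stays small.
// Generic and DB-free; the PI mapping specifics are elided.
public class RecordChunker {
    static <T> List<List<T>> chunk(List<T> records, int size) {
        List<List<T>> chunks = new ArrayList<List<T>>();
        for (int i = 0; i < records.size(); i += size) {
            // copy the sublist so each chunk is independent of the source list
            chunks.add(new ArrayList<T>(
                    records.subList(i, Math.min(i + size, records.size()))));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> records = new ArrayList<Integer>();
        for (int i = 0; i < 150000; i++) records.add(i);
        List<List<Integer>> chunks = chunk(records, 1000);
        System.out.println(chunks.size());          // 150 messages
        System.out.println(chunks.get(149).size()); // last chunk is full: 1000
    }
}
```

With 150,000 records and a chunk size of 1000 this yields 150 messages of roughly 300 KB each (at ~45 MB total), which keeps each adapter call well inside a manageable size envelope.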
Thanks! -
Oracle database integration with SAP PI for high volume & Complex Structure
Hi
We have a requirement to integrate an Oracle database with SAP PI 7.0 for sending data which is eventually transferred to multiple receivers. The involved data structure is hugely complex (around 18 child tables) with a high-volume processing requirement (100K+ objects need to be processed in 6-7 hours). We need to implement logic for prioritizing the objects, i.e. high-priority objects must be processed first, then objects with normal priority.
We could think of implementing this kind of logic in database procedures (at least that provides flexibility for implementing the data selection logic, and processed data can be marked as success in the same SP), but since the PI sender adapter doesn't currently support calling Oracle stored procedures, this option is ruled out. We can try implementing complex data selection using an Oracle table function, but a table function doesn't allow any SQL that changes data (UPDATE, INSERT, DELETE etc.), so it is impossible to mark selected objects in the table function from the PI communication channel "Update Query" option.
Also, we need to make sure that we are not processing all the objects at once as message size for 20 objects can vary from 100 KB to 15 MB which could really lead to serious performance issues for bigger messages.
Please share any implementation experience for handling issues:
1 - Database Integration involving Oracle at sender side
2 - Complex Data structures
3 - High Volume Processing
4 - Controlled data selection from database to contro the message size in PI
Thanks,
Panchdev
Hi,
We can call the stored procedure using the receiver adapter via ccBPM; we can follow different approaches for reading the data in this case.
a) A ccBPM instance is triggered by some dummy message; after receiving this message the ccBPM can make a sync call to the Oracle stored procedure (this can be done using a specific receiver data type structure), and on getting the response message the ccBPM can proceed with the further steps. The stored procedure needs to be optimized for performance, as the mapping complexity will largely be affected by the structure in which the stored procedure returns the message. Prioritization of the objects can be handled in the stored procedure.
b) A ccBPM instance can first read data from the header-level table, then make subsequent sync calls to the Oracle tables to read data from the child tables. This approach is less suitable for this interface, as the number of child tables is big.
Pravesh. -
Non jdriver poor performance with oracle cluster
Hi,
we decided to implement batch input and went from Weblogic Jdriver to Oracle Thin 9.2.0.6.
Our system is a Weblogic 6.1 cluster and an Oracle 8.1.7 cluster.
Problem is .. with the new Oracle drivers our actions on the webapp take twice as long as with the Jdriver. We also tried OCI .. same problem. We switched to a single Oracle 8.1.7 database .. and it worked again with all thick or thin drivers.
So .. the new Oracle drivers with an Oracle cluster result in bad performance, but with the Jdriver it works perfectly. Does somebody see a connection?
I mean .. it works with the Jdriver .. so it can't be the database, huh? But we really tried every JDBC possibility! In fact .. we need batch input. Advice is very appreciated =].
Thanx for help!!
Message was edited by mindchild at Jan 27, 2005 10:50 AM
Message was edited by mindchild at Jan 27, 2005 10:51 AM
Thanks for the quick replies. I forgot to mention .. we also tried 10g v10.1.0.3 from instantclient yesterday.
I have to agree with Joe. It was really fast on the single-machine database .. but we had the same poor performance with the cluster DB. It is frustrating. Especially if you consider that the Jdriver (which works perfectly in every combination) is 4 years old!
Ok .. we got this scenario, with our app page CustomerOverview (intensive db-loading) (sorry .. no real profiling, time is taken with a stopwatch) (Oracle is 8.1.7 OPS patch level 1) ...
WL6.1_Cluster + Jdriver6.1 + DB_cluster => 4sec
WL6.1_Cluster + Jdriver6.1 + DB_single => 4sec
WL6.1_Cluster + Ora8.1.7 OCI + DB_single => 4sec
WL6.1_Cluster + Ora8.1.7 OCI + DB_cluster => 8-10sec
WL6.1_Cluster + Ora9.2.0.5/6 thin + DB_single => 4sec
WL6.1_Cluster + Ora9.2.0.5/6 thin + DB_cluster => 8sec
WL6.1_Cluster + Ora10.1.0.3 thin + DB_single => 2-4sec (awesome fast!!)
WL6.1_Cluster + Ora10.1.0.3 thin + DB_cluster => 6-8sec
Customers are roughing us up, because they cannot mass-order via batch input. Any suggestion on how to solve this issue is very appreciated.
TIA
>
>
Markus Schaeffer wrote:
Hi,
we decided to implement batch input and went from Weblogic Jdriver to Oracle Thin 9.2.0.6.
Our system is a Weblogic 6.1 cluster and an Oracle 8.1.7 cluster.
Problem is .. with the new Oracle drivers our actions on the webapp take twice as long
as with Jdriver. We also tried OCI .. same problem. We switched to a single Oracle 8.1.7
database .. and it worked again with all thick or thin drivers.
So .. new Oracle drivers with oracle cluster result in bad performance, but with
Jdriver it works perfectly. Does sb. see some connection?
Odd. The jDriver is OCI-based, so it's something
else. I would try the latest
10g driver if it will work with your DBMS version.
It's much faster than any 9.X
thin driver.
Joe
I mean .. it works with Jdriver .. so it can't be the database, huh? But we really
tried with every JDBC possibility!
Thanx for help!! -
Tool to export and import high volume data from/to Oracle and MS Excel
We are using certain reports (developed in XLS and CSV) to extract more than 500K to 1M records in a single report. There are around 1000 reports generated daily. The business users review those reports and apply certain rules to identify exceptions, then they apply those corrections back to the system through XL upload.
The XL reports are developed in TIBCO BW and deployed in AMX platform. The user interface is running on TIBCO GI.
Database Version: Oracle 11.2.0.3.0 (RAC - 2 node)
The inputs around following points will be of great help:
1) Recommendations for handling such high-volume reports, and a mechanism to apply bulk corrections back to the system?
2) Suggestions for any Oracle tool or third party tool
If you were to install Oracle client software on the PC where EXCEL is installed,
then you can utilize ODBC such that Excel can connect directly to the DB & issue SQL. -
URGENT: Migrating from SQL to Oracle results in very poor performance!
*** IMPORTANT, NEED YOUR HELP ***
Dear all, I have migrated a banking business solution from Windows/SQL Server 2000 to Sun Solaris/ORACLE 10g. In the test environment everything was working fine. On the production system we have very poor DB performance, about 100 times slower than SQL Server 2000!
Environment at Customer Server Side:
Hardware: SUN Fire 4 CPU's, OS: Solaris 5.8, DB Oracle 8 and 10
Data Storage: Em2
DB access thru OCCI [Environment:OBJECT, Connection Pool, Create Connection]
Due to older applications it's necessary to run ORACLE 8 as well on the same server. Since we have been running the new solution, which uses ORACLE 10, the listener for ORACLE 8 is frequently gone (or killed by someone?). The performance of the whole ORACLE 10 environment is very poor. As a result of my analysis I figured out that the process of creating a connection in the connection pool takes up to 14 seconds. Now I am wondering if it is a problem to run different ORACLE versions on the same server? The customer has installed/created the new ORACLE 10 DB with the same user account (oracle) as the older version. To run the new solution we have to change the ORACLE environment settings manually. All hints/suggestions to solve this problem are welcome. Thanks in advance.
Anton

On the production system we have very poor DB performance
Have you verified that the cause of the poor performance is not the queries and the plans being generated by the database?
Do you know if some queries appear to take more time than they used to on the old system? Did you analyze such queries to see what the problem might be?
Are you running RBO or CBO?
If stats are generated, how are they generated and how often?
Did you see what autotrace and tkprof have to tell you about the problem queries (if in fact such queries have been identified)?
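Since the application connects through OCCI/JDBC rather than SQL*Plus, the trace can also be switched on per session right before the suspect statements run, and tkprof then formats the resulting trace file. A sketch (the identifier value is arbitrary; executing the statements obviously needs a live connection, shown in comments):

```java
import java.util.Arrays;
import java.util.List;

public class SessionTrace {
    // The ALTER SESSION statements a client would execute to produce
    // a trace file that tkprof can format afterwards.
    static List<String> traceOn(String identifier) {
        return Arrays.asList(
            "ALTER SESSION SET tracefile_identifier = '" + identifier + "'",
            "ALTER SESSION SET sql_trace = TRUE");
    }

    public static void main(String[] args) {
        // With a live connection:
        //   for (String sql : traceOn("perf")) stmt.execute(sql);
        traceOn("perf").forEach(System.out::println);
    }
}
```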
http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10752/sqltrace.htm#1052 -
Poor performance with Oracle Spatial when spatial query invoked remotely
Is anyone aware of any problems with Oracle Spatial (10.2.0.4 with patches 6989483 and 7003151 on Red Hat Linux 4) which might explain why a spatial query (SDO_WITHIN_DISTANCE) would perform 20 times worse when it was invoked remotely from another computer (using SQLplus) vs. invoking the very same query from the database server itself (also using SQLplus)?
Does Oracle Spatial have any known problems with servers which use SAN disk storage? That is the primary difference between a server in which I see this poor performance and another server where the performance is fine.
Thank you in advance for any thoughts you might share.

OK, that's clearer.
Are you sure it is the SQL inside the procedure that is causing the problem? To check, try extracting the SQL from inside the procedure and running it in SQL*Plus with
set autotrace on
set timing on
SELECT ....

If the plans and performance are the same then it may be something inside the procedure itself.
Have you profiled the procedure? Here is an example of how to do it:
Prompt Firstly, create PL/SQL profiler table
@$ORACLE_HOME/rdbms/admin/proftab.sql
Prompt Secondly, use the profiler to gather stats on execution characteristics
DECLARE
l_run_num PLS_INTEGER := 1;
l_max_num PLS_INTEGER := 1;
v_geom mdsys.sdo_geometry := mdsys.sdo_geometry(2002,null,null,sdo_elem_info_array(1,2,1),sdo_ordinate_array(0,0,45,45,90,0,135,45,180,0,180,-45,45,-45,0,0));
BEGIN
dbms_output.put_line('Start Profiler Result = ' || DBMS_PROFILER.START_PROFILER(run_comment => 'PARALLEL PROFILE')); -- The comment name can be anything: here it is related to the Parallel procedure I am testing.
v_geom := Parallel(v_geom,10,0.05,1); -- Put your procedure call here
dbms_output.put_line('Stop Profiler Result = ' || DBMS_PROFILER.STOP_PROFILER );
END;
/
SHOW ERRORS
Prompt Finally, report activity
COLUMN runid FORMAT 99999
COLUMN run_comment FORMAT A40
SELECT runid || ',' || run_date || ',' || run_comment || ',' || run_total_time
FROM plsql_profiler_runs
ORDER BY runid;
COLUMN runid FORMAT 99999
COLUMN unit_number FORMAT 99999
COLUMN unit_type FORMAT A20
COLUMN unit_owner FORMAT A20
COLUMN text FORMAT A100
compute sum label 'Total_Time' of total_time on runid
break on runid skip 1
set linesize 200
SELECT u.runid || ',' ||
u.unit_name,
d.line#,
d.total_occur,
d.total_time,
text
FROM plsql_profiler_units u
JOIN plsql_profiler_data d ON u.runid = d.runid
AND
u.unit_number = d.unit_number
JOIN all_source als ON ( als.owner = 'CODESYS'
AND als.type = u.unit_type
AND als.name = u.unit_name
AND als.line = d.line# )
WHERE u.runid = (SELECT max(runid) FROM plsql_profiler_runs)
ORDER BY d.total_time desc;

Run the profiler in both environments and see if you can see where the slowdown exists.
regards
Simon -
High volume of batches with Split valuation - impact on system performance
Hi!
I have a client that intends to load a new material type from their legacy system, which will be automatically batch managed with split valuation. So, the valuation category will be 'X' and the valuation type will also be the batch number, as automatically created on GR.
The concern of the client is the impact on system performance: up to 80,000 batches per material master record (so 80,000 valuation types will be maintained, each with a unique price on the Accounting 1 tab of the MMR) and overall around 1 million batches a year. I'm not aware of any system performance issues around this myself, but there seems to be anecdotal evidence that SAP has advised against using this functionality with high volumes of batches.
Could you please let me know of any potential problems that having 1 million batches with split valuation may cause? Logically, this would increase to tens of millions of batches over time until archived off via SARA.
Many thanks!
Anthony

I currently have about 1.5 million batches with split valuation in my system (but it is not the X split), and we archive yearly.
Having many batches for one material (let's say 1,000) causes dramatic performance issues during automatic batch determination.
It took about 5 minutes until a batch was returned into a delivery. If the user then wants a different batch and has to carry out batch determination again, he works 10 to 15 minutes on one delivery.
This is mainly caused by the storage location segment of the batches. If one batch gets moved within a plant through 3 different storage locations, then the batch has 3 records in table MCHB. But SAP has a report to reorganize the MCHB records that have zero stock.
The X split has more impact; in this case it is not only the batch table that causes issues. With the X split, SAP adds an MBEW record (material master valuation view) for each new batch.
However, if the design is made to get certain functionality (here, valuation at batch level), then you have to put proper hardware in place that can give you the performance that is needed. -
Very poor performance on oracle linux 6.3 on virtual-box 4.2
Dear all,
I installed Oracle Linux 6.3 on a VirtualBox VM (2 GB RAM, 20 GB disk): the installation took a long time, and afterwards I faced poor performance and big latency.
One bug was reported:
WARNING: at kernel/time/clockevents.c:47 clockevent_delta2ns+0x79/0x90()
Hardware name: VirtualBox
Modules linked in:
Pid: 1, comm: swapper Not tainted 2.6.39-200.24.1.el6uek.x86_64 #1
Call Trace:
[<ffffffff8106ad1f>] warn_slowpath_common+0x7f/0xc0
[<ffffffff8106ad7a>] warn_slowpath_null+0x1a/0x20
[<ffffffff8109d649>] clockevent_delta2ns+0x79/0x90
[<ffffffff819b8427>] calibrate_APIC_clock+0xeb/0x2f6
[<ffffffff819b874c>] setup_boot_APIC_clock+0x59/0x7e
[<ffffffff819b8336>] APIC_init_uniprocessor+0xfc/0x102
[<ffffffff819b60fc>] smp_sanity_check+0x69/0x145
[<ffffffff819b62fd>] native_smp_prepare_cpus+0x125/0x215
[<ffffffff819a776b>] kernel_init+0x1c9/0x2a8
[<ffffffff8150edc4>] kernel_thread_helper+0x4/0x10
[<ffffffff819a75a2>] ? parse_early_options+0x20/0x20
[<ffffffff8150edc0>] ? gs_change+0x13/0x13
I upgraded the memory to 4 GB RAM but it was in vain.
Please, any ideas so far?
Best Regards

I can confirm that there is no issue with VirtualBox 4.2 and Oracle Linux 6.3. I use it all the time. Performance should be near native, and some aspects like disk I/O may even be faster. Your problem could be insufficient hardware, like missing x86_64 or hardware virtualization support. However, as rukbat wrote, this is not the right forum to discuss VirtualBox and your computer hardware. I suggest starting with the VirtualBox documentation to verify your hardware is sufficient.
https://www.virtualbox.org/manual/ch10.html#hwvirt
https://www.virtualbox.org/manual/ch14.html -
Modify and maximize performance of Oracle JDBC driver
Hello all,
due to some annoying errors I'm trying to modify and tune my JDBC driver setup for the Oracle connection... in particular I have two questions:
1. To substitute the Oracle driver with a newer version, do you simply upgrade the JDBC driver in $ODI_HOME/drivers/ORACLE, or do you also have to add/modify the file named DriverRefV3.xml in sunopsis.zip? In particular, I've checked DriverRefV3.xml but there is no reference to the file used for the Oracle connection, so I do not know if ODI is using the updated ojdbc5.jar present in $ODI_HOME/drivers/ORACLE.
2. The Oracle JDBC driver supports some properties such as inactivity-timeout. Usually these are related to the connection pool opened to the database. I want to change these properties... is it possible?
Thanks
Stefano

Hi Stefano,
If you have only one Oracle JDBC driver, i.e. ojdbc5.jar, in $ODI_HOME/drivers, then ODI has to use that driver (provided you are using the correct JDK for that driver).
DriverRefV3.xml is for listing the driver in the JDBC connection URL.
Unless you have the driver in $ODI_HOME/drivers, DriverRefV3.xml will not help you.
I have no idea about the timeout settings in JDBC.
If you find an answer, it would be very helpful if you could share it in this forum.
Thanks,
Sutirtha -
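On Stefano's second question about driver properties: one way this is commonly done is to pass a `java.util.Properties` object to `DriverManager.getConnection(url, props)`. A hedged sketch — `defaultRowPrefetch` is a documented Oracle JDBC property, but pool-related names like inactivity-timeout belong to the pool implementation, not the driver, and would need checking against the ODI/driver version in use:

```java
import java.util.Properties;

public class OracleDriverProps {
    // Build the Properties object that DriverManager.getConnection(url, props)
    // would receive; "user" and "password" are the standard JDBC keys.
    static Properties connectionProps(String user, String password) {
        Properties props = new Properties();
        props.setProperty("user", user);
        props.setProperty("password", password);
        // Oracle-specific tuning knob: rows fetched per round trip.
        props.setProperty("defaultRowPrefetch", "50");
        return props;
    }

    public static void main(String[] args) {
        Properties p = connectionProps("scott", "tiger");
        System.out.println(p.getProperty("defaultRowPrefetch"));
        // A real connection would then be:
        //   Connection con = DriverManager.getConnection(
        //           "jdbc:oracle:thin:@host:1521:SID", p);
    }
}
```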
Differences between Oracle JDBC Thin and Thick Drivers
If any body is looking for this information...
============================================================
I have a question concerning the Oracle JDBC thin vs. thick drivers
and how they might affect operations from an application perspective.
We're in a Solaris 8/Oracle 8.1.7.2 environment. We have several
applications on several servers connecting to the Oracle database.
For redundancy, we're looking into setting up TAF (transparent
application failover). Currently, some of our apps use the Oracle
JDBC thin drivers to talk to the database, with a connection
string that looks like this:
jdbc:oracle:thin:@host:port:ORACLE_SID
In a disaster recovery mode, where we would switch the database
from one server to another, the host name in the above string
would become invalid. That means we have to shut down our application
servers and restart them with an updated string.
Using the Oracle OCI (thick) driver, though, allows us to connect
to a Net8 service instead of a specific server:
jdbc:oracle:oci8:@NET8_SERVICE_NAME
Coupled with the FAILOVER=ON option configured in Net8, it is
then possible to direct a connection from the first server to
the failover database on another server. This is exactly what
we would like to do.
My question is, from an application perspective, how is the Oracle
thick driver different from the thin driver? If everything
else is "equal" (i.e. the thick driver is compatible with the
app servers), would there be something within the thick/OCI
driver that could limit functionality vs. the thin driver?
My understanding, which obviously is sketchy, is that the thick
driver is a superset of the thin driver. If this is the case,
and if, for example, all database connections were handled through
a configuration file with the above OCI connection string, then
theoretically the thick driver should work.
============================================================
In the case of Oracle, they provide a thin driver that is a 100% Java driver for client-side use without the need of an Oracle installation (maybe that's why we need to input the server name and port number of the database server). This is platform independent, and has good performance and some features.
The OCI driver, on the other hand, is not pure Java, requires an Oracle installation, is platform dependent, is faster, and has the complete list of features.
========================================================
I hope this is what you expect.
JDBC OCI client-side driver: This is a JDBC Type 2 driver that uses Java native methods to call entry points in an underlying C library. That C library, called OCI (Oracle Call Interface), interacts with an Oracle database. The JDBC OCI driver requires an Oracle (7.3.4 or above) client installation (including SQL*Net v2.3 or above) and all other dependent files. The use of native methods makes the JDBC OCI driver platform specific. Oracle supports Solaris, Windows, and many other platforms. This means that the Oracle JDBC OCI driver is not appropriate for Java applets, because it depends on a C library being preinstalled.
JDBC Thin client-side driver: This is a JDBC Type 4 driver that uses Java to connect directly to Oracle. It emulates Oracle's SQL*Net/Net8 and TTC adapters using its own TCP/IP-based Java socket implementation. The JDBC Thin driver does not require Oracle client software to be installed, but does require the server to be configured with a TCP/IP listener. Because it is written entirely in Java, this driver is platform-independent. The JDBC Thin driver can be downloaded into any browser as part of a Java application. (Note that if running in a client browser, that browser must allow the applet to open a Java socket connection back to the server.)
JDBC Thin server-side driver: This is another JDBC Type 4 driver that uses Java to connect directly to Oracle. This driver is used internally by the JServer within the Oracle server. This driver offers the same functionality as the client-side JDBC Thin driver (above), but runs inside an Oracle database and is used to access remote databases. Because it is written entirely in Java, this driver is platform-independent. There is no difference in your code between using the Thin driver from a client application or from inside a server.
======================================================
How does one connect with the JDBC Thin Driver?
The JDBC thin driver provides the only way to access Oracle from the Web (applets). It is smaller than the OCI drivers and doesn't require any pre-installed Oracle client software.
import java.sql.*;

class dbAccess {
  public static void main (String args[]) throws SQLException {
    DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
    Connection conn = DriverManager.getConnection
        ("jdbc:oracle:thin:@qit-uq-cbiw:1526:orcl", "scott", "tiger");
        // @machineName:port:SID, userid, password
    Statement stmt = conn.createStatement();
    ResultSet rset = stmt.executeQuery("select BANNER from SYS.V_$VERSION");
    while (rset.next())
      System.out.println(rset.getString(1)); // Print col 1
    rset.close();
    stmt.close();
    conn.close();
  }
}
How does one connect with the JDBC OCI Driver?
One must have Net8 (SQL*Net) installed and working before attempting to use one of the OCI drivers.
import java.sql.*;

class dbAccess {
  public static void main (String args[]) throws SQLException {
    try {
      Class.forName("oracle.jdbc.driver.OracleDriver");
    } catch (ClassNotFoundException e) {
      e.printStackTrace();
      return;
    }
    Connection conn = DriverManager.getConnection
        ("jdbc:oracle:oci8:@qit-uq-cbiw_orcl", "scott", "tiger");
        // or oci7 @TNSNames_Entry, userid, password
    Statement stmt = conn.createStatement();
    ResultSet rset = stmt.executeQuery("select BANNER from SYS.V_$VERSION");
    while (rset.next())
      System.out.println(rset.getString(1)); // Print col 1
    rset.close();
    stmt.close();
    conn.close();
  }
}
=================================================================

Wow, not sure what your question was, but there sure was a lot of information there...
There really is only one case where failover occurs, and it would not normally be a disaster recovery situation, where disaster recovery means the obliteration of your current server farm, network, and conceivably the operational support staff. That would require a rebuild of your servers, network, etc., and isn't something done with software.
Failover is normally used for high availability: a secondary takes over in case of hardware failure on the primary server, or when your support staff wants to do maintenance on it.
Using the thin or the thick driver should have ZERO effect on a failover. Transparent failover will give the secondary server the same IP as the primary, so the hostname will still point to the appropriate server. If you are doing this wrong, then you will have to point all your applications to a new IP address; you should tell your management that this is UNACCEPTABLE in a failover situation, since it is almost sure to fail to fail over.
You point out that you are providing the TNSNAME rather than the HOSTNAME when using the thick driver. That's true within your application, but that name is resolved to either a HOSTNAME or an IP ADDRESS before the request is sent to the appropriate Oracle server/instance. It is resolved using either a names server (like a DNS server, but for Oracle) or a TNSNAMES file. Since TNSNAMES files proliferate like rabbits within an organization, you don't want a failover that makes you find and switch all the entries, so you must come up with a failover that does not require it.
So the application should not be concerned with either the hostname or the IP address changing during failover. That makes either the thin or the thick client acceptable for failover.
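As a side note, the thin driver can also accept a full Net8-style connect descriptor in place of host:port:SID, which lets the URL itself carry an address list with FAILOVER=ON. A hedged sketch (host names, port, and service name are placeholders; support for descriptor URLs should be verified against the driver version in use):

```java
public class FailoverUrl {
    // Build a thin-driver URL carrying a full connect descriptor with
    // a two-node address list and FAILOVER=ON, instead of host:port:SID.
    static String thinFailoverUrl(String host1, String host2,
                                  int port, String service) {
        return "jdbc:oracle:thin:@(DESCRIPTION="
             + "(ADDRESS_LIST=(FAILOVER=ON)"
             + "(ADDRESS=(PROTOCOL=TCP)(HOST=" + host1 + ")(PORT=" + port + "))"
             + "(ADDRESS=(PROTOCOL=TCP)(HOST=" + host2 + ")(PORT=" + port + ")))"
             + "(CONNECT_DATA=(SERVICE_NAME=" + service + ")))";
    }

    public static void main(String[] args) {
        System.out.println(thinFailoverUrl("db1", "db2", 1521, "ORCL"));
    }
}
```

This keeps the failover decision in the connect string rather than in client-side TNSNAMES files, which sidesteps the "find and switch all the entries" problem described above.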
Don't know if this will help, but this shows the communication points.
THIN DRIVER
client --> dns --> server/port --> SID
THICK DRIVER
client --> names server --> dns --> server/port --> SID
client --> tnsnames --> dns --> server/port --> SID -
Sender RFC adapter High volume messaging
Hi,
This question is related to this thread:RFC connection problem
The ERP system is sending 20 requests a minute through 1 RFC destination (program ID), and PI starts to hang. ERP is not able to send the messages, and after a while the requests sent from ERP start to get cancelled. This is a synchronous scenario. How can I handle such a high volume through 1 sender RFC adapter?

Hello
You can monitor the load on the RFC adapter queues/threads in the RWB
-> Component Monitoring
-> Adapter Engine XIP
-> Engine Status
-> Additional Data
See note #791655, Documentation of the XI Messaging System Service Properties, for an explanation of the queues.
To increase the number of threads/queues, see the blog:
1) /people/kenny.scott/blog/2007/08/20/messaging-system-queue-properties-after-xi-30-sp19-xi-70sp11
2) /people/kenny.scott/blog/2008/12/05/xipi-file-and-jdbc-receiver-adapter-performance-and-availability-improvements - this shows how to prevent a problem on one RFC channel from blocking other RFC channels that you may be using.
Also, ensure note #937159 XI Adapter Engine is stuck, has been applied to help overall system performance.
Regards
Mark