Performance problem
Hi All,
I have written a SELECT on table COEP based on the OBJNR and VRGNG fields (both are required).
When I execute the query, it takes too much time even for a single record.
Can you suggest an alternative table, or another way to fetch the data faster?
Thanks in advance for your guidance.
Best Regards
Sudhakar
Hi All,
Please find code below :
SELECT wtgbtr objnr vrgng
  FROM coep
  INTO TABLE itb_coep
  FOR ALL ENTRIES IN itb_aufk
  WHERE objnr = itb_aufk-objnr
    AND vrgng NE 'KOAO'.
The above query is taking too much time even to fetch a single record.
Thank you for your guidance.
Regards
Sudhakar
Similar Messages
-
Hi all
I am supposed to replace an existing parser, which is very slow, with one based on SAX, DOM, or JDOM.
My problem is that my SAX program takes the same time to analyse the document as the other program. My question is: is there a faster way to get all the values of an XML document than this:
public void startElement(String uri, String name, String qname, Attributes atts) throws SAXException {
    if (name.trim().equals("MEMBRE")) {
        a++;
        if (objetMembre.nomMembre != null) {
            objetVPN.membres.addElement(objetMembre);
        }
        objetMembre = new ObjetMembre();
    } else if (name.trim().equals("SITE")) {
        a++;
        if (objetMembre.nomMembre != null) {
            objetVPN.membres.addElement(objetMembre);
        }
        objetMembre = new ObjetMembre();
    } else if (name.trim().equals("UTILISATEUR")) {
        a++;
        if (objetMembre.nomMembre != null) {
            objetVPN.membres.addElement(objetMembre);
        }
        objetMembre = new ObjetMembre();
    } else if (name.trim().equals("VERSION_OUTIL")) {
        balise = VERSION_OUTIL;
    }
}

public void characters(char[] ch, int start, int length) {
    StringBuffer temp = new StringBuffer();
    switch (balise) {
        case VERSION_OUTIL:
            temp.append(ch, start, length);
            if (!temp.toString().trim().equals("")) {
                versionOutil = temp.toString();
            }
            break;
    }
}
Hi Cecile,
The performance problem may be in the code you have hooked into the method
public void characters(char[] ch,int start,int length)
Depending on the parser, this method may be called once per character, once per value, or any arbitrary number of times, particularly if the node contains entity references such as &.
It also gets called for whitespace between nodes.
Anyhow, in your implementation of this method, you're creating new objects, which can get performance-costly.
For best performance, try the following approach:
Create a CharArrayWriter object as a member of your class. Inside the characters method, simply write the characters to the CharArrayWriter instance.
Inside the startElement method, call reset() on the writer.
Inside the endElement method, call toString() on the writer to extract the value of the current node. This is very convenient because you have the name and the complete value of the node available to do your switch.
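A minimal sketch of this approach (the class name, the `values` map, and the sample element names are illustrative, not taken from the original parser):

```java
import java.io.CharArrayWriter;
import java.util.HashMap;
import java.util.Map;

import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Reuses one CharArrayWriter for all nodes instead of allocating a new
// StringBuffer on every characters() callback.
class ValueHandler extends DefaultHandler {
    private final CharArrayWriter contents = new CharArrayWriter();
    final Map<String, String> values = new HashMap<>();

    @Override
    public void startElement(String uri, String localName, String qName, Attributes atts) {
        contents.reset();                  // discard text of the previous node
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        contents.write(ch, start, length); // may be called several times per node
    }

    @Override
    public void endElement(String uri, String localName, String qName) {
        // Both the element name and its complete text are available here.
        values.put(qName, contents.toString().trim());
    }
}
```

With this pattern the per-node switch moves into endElement, where the full value is known.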
Using this in combination with Xerces (or another SAX parser) should give you pretty good results. -
Hi,
I have a problem. One of the reports is taking a long time, and there is a performance problem. To fix it I have to modify the update rule logic. I want to know how much time the report takes on the front end and on the back end. Can anyone tell me the procedure to find that out?
Sridhar
Sridhar,
You can use the technical content.
OR
Go to transaction ST03N.
Go to RSRT, enter the query name, and check the technical properties; you will find the query generation time.
For the time taken for execution, select Execute + Debug mode and, in the options, select Display Statistics.
Execute the query.
How to know the Query Execution time
Query Run time -
Database 10g partitioning performance problem reported
Dear sir,
I have a sales history table that holds 68 million records. As the data architect I have partitioned the table by RANGE on the client ID, which is a sequence.
Currently the table holds 120,000 physical partitions. The performance problem is reported by the developers when they try to insert a bulk of 8 million records into the table. They are trying to do so in a new partition; I provided them with partitioning syntax to insert directly into the table, but it has been taking hours without completing.
Only a PK on the ID is created; the tablespace is OK and server space is OK. I am trying to move some of the data into a backup table, but this is a manual job that can't be done in production.
Is there any restriction on the number of partitions in a table? I did some searching and found nothing specific.
What would be a solution for such an activity? Is there any other technology that I might adopt?
Thank you for your support,
IG
user10651321 wrote:
Hi Gurus,
We have recently introduced partitioning on about 40 tables, each of which grew to more than 5 GB, and I want to know what I can keep looking at in AWR that would indicate the overall performance impact on the database.
In other words, which sections of AWR, apart from the Load Profile, should I look into to be able to say whether the impact on the database server was smaller or larger?
Env: Linux, 10.2.0.4
Regards,
MS
1) First of all, what type of partitioning have you implemented? List, hash, range, or other?
2) What was the idea behind implementing the partition? Manageability or performance?
3) Did you see any impact? I.e., did any user complain about the performance of queries after you implemented the partitioning?
The more appropriate way to check the performance difference is to look at the explain plan and the trace file.
The explain plan will show you whether partition pruning is happening; if not, performance will be nearly the same as without partitioning.
Tracing through tkprof will show you whether you get any benefit in consistent/physical I/Os.
In the case of a local partitioned index you may see more consistent gets than normal. -
Hi.
I am trying to create empty movieClips at runtime and load
images into them. My problem is that this results in some performance
problems.
The thing is that I load images from a digital camera and
each image is about 1MB. First of all, that takes a long time to
load. Then I change the _x/_y scale so that the images become
thumbnails. (Does the image remain 1 MB when I use scale, or does
Flash make the file size smaller?)
The next problem is that I want to use "duplicateMovieClip" on
those thumbnails and start dragging the copy. It doesn't seem to work
with "duplicateMovieClip" on a dynamically created MC, so I need to make a
new empty MC and load the image into it. That takes about a second.
Is it possible to create a movieClip in the library at
runtime and then use attachMovie() for every new instance that I
want to create?
I hope you understand my problem.
/Antewik
Hi,
> (Does the image remain 1MB when i use scale, or does
Flash make the filesize smaller?)
It's the same picture, just scaled down, so it will remain 1
MB. Maybe there are thumbnail versions stored in the camera, which
could be loaded instead the full image. Not sure about that...
> It doesn't seems to work with "duplicateMovieClip" on
dynamic created mc so I need to make a new emptyMC and load >
the image into it. That takes about a second.
Yep, dynamically loaded content cannot be duplicated. Flash
should load it the 2nd time from the cache though, so it should be
faster than the first loading.
> Is it possible to create a movieClip to the library in
runtime and then use attachMovie() for every new instance that i
> want to create?
No, afaik you can't add items to the library at runtime. You
could add an MC in the Flash IDE and use this with attachMovie(),
but I doubt that this will be faster than createEmptyMovieClip().
Could be worth a try though.
hth,
blemmo -
Blending multiple bitmaps, are there limits?
Hello, I've got 50 bitmaps on the stage, lying on top of each other with a blend shader on each image. I perform some additive blending, and I get the desired result. What surprises me a lot is that there are no performance problems, not even online. However, when I go over 24 images I'm starting to get bugs.
Each image has a variable going from 0-1 that sets how much of the RGB I want to use, so I can change the blend amount in my shader. So I have some sliders to alter them in the application, and as I said it works excellently until I put too many images on the stage. When I do, some images just don't affect the blending anymore. I know the values are set in the shader, from tracing when I do it.
My question is whether there is a limit to how many blends Flash can perform per frame.
My understanding is that the pictures are drawn in the same order as they are added to the stage, and that the blend shader blends with whatever is already behind it on the stage. But are 50 images too much to handle per frame?
As I said, the performance is really good and the application runs very smoothly, even with 50 images.
One of the engineers from the player team confirmed that there is indeed a maximum of 24 layers that can be blended per pixel in the player:
he hit the 24 spot on :)
that is the maximum number of blends per pixel...
it's a hard flash limit, unlikely to change. -
Performance issue of ECC 6.0
HI,
We have an ECC 6.0 server (prdserver), Oracle 10g database, OS Windows 2003 Enterprise 32-bit, 12 GB RAM. We are facing a performance problem. How can we improve performance? I am attaching the existing parameters; please help me out.
abap/buffersize 2000000
SAPSYSTEM 00
login/system_client 090
INSTANCE_NAME DVEBMGS00
DIR_CT_RUN $(DIR_EXE_ROOT)\$(OS_UNICODE)\NTI386
DIR_EXECUTABLE $(DIR_INSTANCE)\exe
PHYS_MEMSIZE 512
rdisp/wp_no_dia 20
rdisp/wp_no_btc 4
icm/server_port_0 PROT=HTTP,PORT=80$$
ms/server_port_0 PROT=HTTP,PORT=81$$
rdisp/wp_no_enq 1
rdisp/wp_no_vb 3
rdisp/wp_no_vb2 3
rdisp/wp_no_spo 4
DIR_CLIENT_ORAHOME $(DIR_EXECUTABLE)
rdisp/max_wprun_time 60000
zcsa/system_language E
zcsa/installed_languages DE
rdisp/appc_ca_blk_no 100
rdisp/wp_ca_blk_no 600
login/create_sso2_ticket 2
login/accept_sso2_ticket 1
ms/server_port_0 PROT=HTTP,PORT=81$$
icm/server_port_0 PROT=HTTP,PORT=8000
icm/host_name_full prdserver.growel.com
DIR_ROLL D:\usr\sap\GPS\DVEBMGS0
DIR_PAGING D:\usr\sap\GPS\DVEBMGS00\data
DIR_DATA D:\usr\sap\GPS\DVEBMGS00\data
DIR_REORG D:\usr\sap\GPS\DVEBMGS00\data
DIR_TRANS
prdserver\sapmnt\trans
ztta/parameter_area 16000
icm/host_name_full prdserver.growel.com
DIR_ROLL D:\usr\sap\GPS\DVEBMGS00\data
DIR_PAGING D:\usr\sap\GPS\DVEBMGS00\data
DIR_DATA D:\usr\sap\GPS\DVEBMGS00\data
DIR_REORG D:\usr\sap\GPS\DVEBMGS00\data
DIR_TRANS
prdserver\sapmnt\trans
ztta/parameter_area 16000
DIR_TEMP .
DIR_SORTTMP D:\usr\sap\GPS\DVEBMGS00\data
install/codepage/appl_server 1100
abap/use_paging 0
ztta/roll_first 1024
ztta/roll_area 2550896
rdisp/ROLL_SHM 32768
rdisp/ROLL_MAXFS 32768
rdisp/PG_SHM 16384
rdisp/PG_MAXFS 32768
abap/heap_area_dia 4000000000
abap/heap_area_nondia 4000000000
abap/heap_area_total 4050733008
abap/heaplimit 60894464
abap/swap_reserve 20971520
ztta/roll_extension 2500733008
em/initial_size_MB 1068
em/blocksize_KB 1024
em/stat_log_size_MB 20
em/stat_log_timeout 6000
rdisp/wp_no_spo_Fro_max 3
em/address_space_MB 200
ztta/roll_first 1
em/max_size_MB 20000
rdisp/PG_MAXFS 32768
rdisp/ROLL_SHM 20000
Regards,
madhan
Edited by: Sakhalkar Swamy on Aug 28, 2009 7:25 AM
Edited by: Sakhalkar Swamy on Aug 28, 2009 7:26 AM
Edited by: Sakhalkar Swamy on Aug 28, 2009 7:27 AM
Hi,
To get a clear picture, it would be very useful if you provided more information from transaction ST03N. Therefore it would be a good idea to provide performance data.
Have a look at this link:
Link: [https://wiki.sdn.sap.com/wiki/display/MaxDB/WorkloadMonitor(ST03orST03N)]
Here you can find a table with performance data. It shows how much response time each process type needs. Could you be so kind as to provide such a table from your troubled system?
Kindly,
Andreas
Edited by: Andreas Tacke on Sep 4, 2009 10:57 AM -
[SOLVED] Converting the var partition filesystem from ReiserFS to ext4
Hi,
I have a system that still uses ReiserFS for its /var partition. It gives me some performance problems, which make me want to convert it to the newer ext4. Can this be done without reinstalling the entire system, and if so, how?
Last edited by Greenstuff (2013-10-05 14:01:41)
Boot to a live environment.
Mount the var partition and copy it somewhere, or tar it up.
Unmount the var partition.
Format the old var partition to ext4.
Mount the newly formatted var partition.
Copy back or untar the archive.
Reboot.
EDIT: Make sure you adjust the fstab in the root partition before you reboot!
Last edited by graysky (2013-10-05 15:14:31) -
Hi,
As per the documents, "In general, the addition of wait classes helps direct the DBA more quickly toward the root cause of performance problems."
How can I trace the root cause of performance problems if it is related to a wait class?
Thanks,
userpat wrote:
Hi,
As per documents In general, the addition of wait classes helps direct the DBA more quickly toward the root cause of performance problems.
How could i trace the root cause of performence problems if it is related to wait class?
Thanks,
I am not completely sure that I understand your question. The wait class gives you an approximate idea of where the performance problem will be found. You must then further investigate the wait events in that wait class. There are of course potential problems with starting at the wait class (some wait classes have 2 wait events, while others have many - that could throw off the search for the problem that is impacting performance the most), but at least it provides a starting point. To give you an idea of the wait events in each wait class, here is a SQL statement that was executed on Oracle Database 11.1.0.7:
SQL> DESC V$EVENT_NAME
Name Null? Type
EVENT# NUMBER
EVENT_ID NUMBER
NAME VARCHAR2(64)
PARAMETER1 VARCHAR2(64)
PARAMETER2 VARCHAR2(64)
PARAMETER3 VARCHAR2(64)
WAIT_CLASS_ID NUMBER
WAIT_CLASS# NUMBER
WAIT_CLASS VARCHAR2(64)
SELECT
SUBSTR(NAME,1,30) EVENT_NAME,
SUBSTR(WAIT_CLASS,1,20) WAIT_CLASS
FROM
V$EVENT_NAME
ORDER BY
SUBSTR(WAIT_CLASS,1,20),
SUBSTR(NAME,1,30);
EVENT_NAME WAIT_CLASS
ASM COD rollback operation com Administrative
ASM mount : wait for heartbeat Administrative
Backup: sbtbackup Administrative
Backup: sbtbufinfo Administrative
Backup: sbtclose Administrative
Backup: sbtclose2 Administrative
OLAP DML Sleep Application
SQL*Net break/reset to client Application
SQL*Net break/reset to dblink Application
Streams capture: filter callba Application
Streams: apply reader waiting Application
WCR: replay lock order Application
Wait for Table Lock Application
enq: KO - fast object checkpoi Application
enq: PW - flush prewarm buffer Application
enq: RC - Result Cache: Conten Application
enq: RO - contention Application
enq: RO - fast object reuse Application
enq: TM - contention Application
enq: TX - row lock contention Application
enq: UL - contention Application
ASM PST query : wait for [PM][ Cluster
gc assume Cluster
gc block recovery request Cluster
enq: BB - 2PC across RAC insta Commit
log file sync Commit
Shared IO Pool Memory Concurrency
Streams apply: waiting for dep Concurrency
buffer busy waits Concurrency
cursor: mutex S Concurrency
cursor: mutex X Concurrency
cursor: pin S wait on X Concurrency
Global transaction acquire ins Configuration
Streams apply: waiting to comm Configuration
checkpoint completed Configuration
enq: HW - contention Configuration
enq: SQ - contention Configuration
enq: SS - contention Configuration
enq: ST - contention Configuration
enq: TX - allocate ITL entry Configuration
free buffer waits Configuration
ASM background timer Idle
DIAG idle wait Idle
EMON slave idle wait Idle
HS message to agent Idle
IORM Scheduler Slave Idle Wait Idle
JOX Jit Process Sleep Idle
ARCH wait for flow-control Network
ARCH wait for net re-connect Network
ARCH wait for netserver detach Network
ARCH wait for netserver init 1 Network
ARCH wait for netserver init 2 Network
ARCH wait for netserver start Network
ARCH wait on ATTACH Network
ARCH wait on DETACH Network
ARCH wait on SENDREQ Network
LGWR wait on ATTACH Network
LGWR wait on DETACH Network
LGWR wait on LNS Network
LGWR wait on SENDREQ Network
LNS wait on ATTACH Network
LNS wait on DETACH Network
LNS wait on LGWR Network
LNS wait on SENDREQ Network
SQL*Net message from dblink Network
SQL*Net message to client Network
SQL*Net message to dblink Network
SQL*Net more data from client Network
SQL*Net more data from dblink Network
AQ propagation connection Other
ARCH wait for archivelog lock Other
ARCH wait for process death 1 Other
ARCH wait for process death 2 Other
ARCH wait for process death 3 Other
ARCH wait for process death 4 Other
ARCH wait for process death 5 Other
ARCH wait for process start 1 Other
Streams AQ: enqueue blocked du Queueing
Streams AQ: enqueue blocked on Queueing
Streams capture: waiting for s Queueing
Streams: flow control Queueing
Streams: resolve low memory co Queueing
resmgr:I/O prioritization Scheduler
resmgr:become active Scheduler
resmgr:cpu quantum Scheduler
ARCH random i/o System I/O
ARCH sequential i/o System I/O
Archiver slave I/O System I/O
DBWR slave I/O System I/O
LGWR random i/o System I/O
BFILE read User I/O
DG Broker configuration file I User I/O
Data file init write User I/O
Datapump dump file I/O User I/O
Log file init write User I/O
Shared IO Pool IO Completion User I/O
buffer read retry User I/O
cell multiblock physical read User I/O
cell single block physical rea User I/O
cell smart file creation User I/O
cell smart index scan User I/O
cell smart table scan User I/O
cell statistics gather User I/O
db file parallel read User I/O
db file scattered read User I/O
db file sequential read User I/O
db file single write User I/O
...
So, if the User I/O wait class floats to the top of the wait classes between a known start time and end time, and the Commit wait class is at the bottom of the wait classes when comparing accumulated time, it probably would not make much sense to spend time investigating the wait events in the Commit class... until you realize that there is a single event in the Commit wait class that typically contributes wait time, while there are many in the User I/O wait class.
Charles Hooper
Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
http://hoopercharles.wordpress.com/
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Data request deletion problem from remote system.
Hi experts.
I am now facing one critical problem. The description is as follows:
We designed the EDW level in one physical BW machine (A), which contains only DSO InfoProviders. This machine (A) works as a data source to provide data to another physical BW machine (B).
The DSO in system A is exported as a DataSource; cubes in system B load data through this DataSource.
We first do the initialization with the data contained in system B and then load delta data from system A. But if we want to delete some request in system A, the system indicates the request cannot be deleted unless the initialization in system B is deleted (which would lead to all the data in system B being lost).
Can anyone explain this mechanism to me? Please help by suggesting some method to delete the request in system A without deleting the initialization information in system B.
Thanks in advance.
Waiting online for your kind reply.
1. Ask your Basis team to look into it.
2. Try analysing your background processes in SM51.
3. Does your deletion job start immediately, or is there any delay?
4. If that cube or ODS has only this one request which you are deleting, use the context menu of the cube or ODS (right click) --> Delete Contents.
Delete Contents performs better compared to request deletion.
If you have more requests than this, you should not go for this.
Nagesh Ganisetti.
Assign points if it helps. -
Hi,
Hi, I am getting some errors when I try to run the Apache server. The actual problem is:
I want to create a cluster environment using 2 WebLogic clusters. I read the documentation; using the Apache-WebLogic plug-in, we can improve the cluster performance.
I followed the same rules, installed the Apache server, and copied the *.so file required for Apache (if I comment out the LoadModule weblogic_module libexec/mod_wl.so line, the server runs fine).
But whenever I try to run the Apache server with it,
it gives an error like:
"BUG IN DYNAMIC LINKER ld.so: dynamic-link.h:57: elf_get_dynamic_info: Assertion '! "bad dynamic tag"' failed!
./apachectl start: httpd could not be started
If you can give me some help with this, it would be most welcome.
ThankQ
Krishna Botla
Which version of Apache are you using? Are you on Solaris?
Have you enabled the bootstrap module mod_so for Apache? Do httpd -l to ensure that.
Did you use apxs to deploy mod_wl.so?
Please go through the steps described in the docs again to see if you missed something.
--Vinod.
KrishnaKumar wrote:
> Hi,
> Hi I am getting some Errors when i am trying to Run the Apache Server. Actual Problem is
> I want to create an Cluster Environment using 2 weblogic Cluster. i read that documentation, using apache-weblogic plug-in, we can improve the cluster performence.
> I follow the Same rules and i install ed the Apache server and i copy the *.so file that required for apache(if i commented out the LoadModule weblogic_module libexec/mod_wl.so , serverrunning fine)
> But whenever i am trying to run apache Server with that
> Its given a error like
> "BUG IN DYNAMIC LINKER ld.so:dynamic-link.h:57: elf_get_dynamic_info: Assertion '! "bad dynamic tag" failed!
> /apachectl start: httpd could not be started
>
> If u will give some help about this its most helpful for me.
>
> ThankQ
> Krishna Botla
-
Do Outer Joins as well as Self Joins Affect the Performance of a Query?
4 tables: A, B, C, D.
1 view, V, based on them.
Each of the tables has one and only one primary key and a few NOT NULL columns.
My query selects from A, B, C, D, and V, mapping table A's columns to the primary key columns of all the other tables (B, C, D, V) using left outer joins.
Select A.a1,A.a2,A.a3,A.a4,B1.ba,B2.bb,B3.bc,B4.bd,C.c1,D.d1 from (((((((A left outer join B B1 on A.ba=B1.ba) left outer join B B2 on A.bb=B2.bb) left outer join B B3 on A.bc=B3.bc)left outer join B B4 on A.bd=B4.bd)left outer join C on A.c1=C.c1)left outer join D on A.d1=D.d1) left outer join V on A.v1 = V.v1) order by 1;
In this case, will the query design affect the performance? The query is taking a long time. As far as indexes go, these tables have only the default indexes from the primary, unique, and foreign keys; hence the table structure is very simple and straight. I need a suggestion such that, without making changes to the tables (I am not even allowed to add a single index), the query can be modified to optimise the performance.
Each change to a query can affect the performance.
Your query looks straight and simple. Maybe you could increase the performance by simply removing the ORDER BY criteria.
This requires a sort, and a sort can be slow, especially when the resulting dataset is very large.
If there are indexes on all foreign keys and you join using those FKs then there should be no problem with the index structure.
I can't say whether it would be better to use an index or not, but you can look at the execution plan and check what the CBO wants to do.
Make sure that all statistics are up to date. You could consider running an additional dbms_stats.gather_table_stats with compute in the test environment, just to see if the execution plan changes after this.
For further ideas search this forum for the thread "When your query takes too long". -
Photoshop CS6 have problems with my dual graphics card
I'm currently using Photoshop CS6 on my laptop (HP Pavilion dv4-3114), and this laptop has dual graphics cards (one is the Mobile Intel HD Graphics, the other is the ATI HD 6750). It usually switches to the faster one when games are running, or for software that requires a lot of graphics work. This time I have already set the graphics card profile to let Photoshop use the faster one, but when I launch Photoshop CS6 it tells me that the graphics card is not officially supported, as you can see in the screenshot.
Then I went to Photoshop's preferences (Performance) and found that it only identifies the Mobile Intel HD Graphics. I'm currently using Photoshop for designing large posters, so performance is very important to me. Does anyone know how to solve this problem so that Photoshop can run with the highest performance?
I can't imagine how these computer companies expect to fully integrate GPUs from two different vendors, but they're trying. To me, knowing a bit about how display drivers are implemented, it seems an impossible task.
I have heard (in the context of Macs that have this option) that disabling the low-power options so that only the more powerful GPU is active can help. I don't know if that applies to your system, but it might give you an idea of a place to start looking.
Another thing might be to try to find whatever settings it offers for "affinity" and set it so Photoshop and its cousins use just the ATI GPU exclusively. It might be the "switchover" process that is causing the problems.
Good luck.
-Noel -
Tomcat memory performance issues
Hi all,
I am facing performance issues with Tomcat 5.5.9.
I have developed a web application which is used by 20,000 employees.
The problem is that Tomcat is not releasing memory: memory grows up to some peak, then no more requests are accepted and it goes down.
Please see below the configuration I set for my Tomcat:
server.xml.
<Connector
port="8080" maxHttpHeaderSize="8192" debug="0"
maxThreads="200" minSpareThreads="25" maxSpareThreads="200"
enableLookups="false" redirectPort="8443" acceptCount="20"
connectionTimeout="10000" disableUploadTimeout="true" />
context.xml for connection pooling:
<Resource name="jdbc/Ids" auth="Container" type="javax.sql.DataSource" username="DATASYNC"password="DATASYNC"
driverClassName="oracle.jdbc.driver.OracleDriver" url="jdbc:oracle:thin:@10.3.1.163:1521:ISPC" removeAbandoned="true" logAbandoned="true" removeAbandonedTimeout="300"
maxActive="1000" maxIdle="50" maxWait="10000" />
My server has 2 GB of RAM.
I set the heap size initially to 512 MB.
The database is Oracle 9i, in which max processes is set to 1000.
I closed all database connections in all servlets/JSP pages... still the problem persists.
Sometimes the database throws the exception ORA-00020: maximum number of processes (1000) exceeded. Once I restart Tomcat, everything works.
Is there anything that needs to be done from my end to improve the performance?
Please share your ideas to improve my application's performance. Thanks in advance.
Sorry, but if your connections are not being returned to the pool, you are not closing your connections everywhere. Do you close your connections in finally clauses everywhere? Otherwise a thrown exception might prevent the connection from being closed.
-
Tomcat performance issues.. Urgent
Hi all,
I am facing performance issues with Tomcat 5.5.9.
I have developed a web application which is used by 20,000 employees.
The problem is that Tomcat is not releasing memory: memory grows up to some peak, then no more requests are accepted and it goes down.
Please see below the configuration I set for my Tomcat:
server.xml.
<Connector
port="8080" maxHttpHeaderSize="8192" debug="0"
maxThreads="200" minSpareThreads="25" maxSpareThreads="200"
enableLookups="false" redirectPort="8443" acceptCount="20"
connectionTimeout="10000" disableUploadTimeout="true" />
context.xml for connection pooling:
<Resource name="jdbc/Ids" auth="Container" type="javax.sql.DataSource" username="DATASYNC" password="DATASYNC"
driverClassName="oracle.jdbc.driver.OracleDriver" url="jdbc:oracle:thin:@10.3.1.163:1521:ISPC" removeAbandoned="true" logAbandoned="true" removeAbandonedTimeout="300" maxActive="1000" maxIdle="50" maxWait="10000" />
My server has 2 GB of RAM.
I set the heap size initially to 512 MB.
The database is Oracle 9i, in which max processes is set to 1000.
I closed all database connections in all servlets/JSP pages... still the problem persists.
Sometimes the database throws the exception ORA-00020: maximum number of processes (1000) exceeded. Once I restart Tomcat, everything works.
Is there anything that needs to be done from my end to improve the performance?
Please share your ideas to improve my application's performance. Thanks in advance.
"i closed all database connections in all servlets/jsp pages"
And Statements? PreparedStatements? ResultSets?
You may consider implementing a robust DAO/ORM instead of writing it on your own. Hibernate for example. I bet that your code is not written very well.
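Closing the Statement and ResultSet as well as the Connection is easiest with try-with-resources (Java 7+), which closes each resource automatically and in reverse order even when an exception is thrown. A sketch, with an illustrative class, method, and table name (not from the original application):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

class EmployeeQuery {
    // Connection, PreparedStatement, and ResultSet are all closed
    // automatically, even on exception, so pooled connections are
    // always returned.
    public static String findName(DataSource ds, int id) throws SQLException {
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "SELECT name FROM employees WHERE id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}
```

This removes the boilerplate finally blocks while giving the same guarantee that resources are released on every code path.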