XSLT in client memory
Hello.
I have some server components sending XML to a web browser client. The XML document contains a reference to an XSLT stylesheet, which produces HTML. I assume that every time the browser's processor encounters that reference while parsing the XML, it loads the stylesheet anew.
I would like to know if there is some way of keeping the stylesheet in browser memory so that it can be applied to multiple XML files, perhaps by inducing the browser to cache the stylesheet. In other words, I would like to minimize the amount of traffic between the client and the server. Could the stylesheet even be stored on the client's machine? What would the stylesheet reference look like in that case?
Thanks,
Jan
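For context, the reference being discussed is the xml-stylesheet processing instruction at the top of each XML document. The browser fetches the stylesheet over HTTP like any other resource, so whether it is re-downloaded for every document is governed by ordinary HTTP caching. A minimal sketch (the file name and header value here are invented for illustration, not taken from the thread):

```xml
<?xml version="1.0"?>
<!-- The href is resolved relative to the XML document; a browser will
     normally reuse a cached copy of render.xsl across documents if the
     server sends caching headers, e.g. Cache-Control: max-age=86400. -->
<?xml-stylesheet type="text/xsl" href="render.xsl"?>
<catalog>
  <!-- document content transformed by render.xsl -->
</catalog>
```

Serving the stylesheet with explicit cache headers is usually enough to avoid re-fetching it for each XML file, subject to the browser's cache settings.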
I don't know how to keep the stylesheet in the browser's memory, but the stylesheet will probably stay in the browser's cache.
Don't forget that your solution won't work with all browsers on all operating systems!
And on Windows with IE, be careful with your stylesheet, because the version of the XSLT processor can differ between computers.
Your solution will work on an intranet, but it is very restrictive for an Internet web site.
If you're working on an Internet web site, do the transformation on the server side (with Saxon, for example).
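To complement the server-side suggestion: with JAXP (the API that Saxon also implements) a stylesheet can be compiled once into a thread-safe Templates object and reused for every document, so the stylesheet is parsed exactly once per server lifetime rather than once per request. A minimal sketch; the class name and the inline stylesheet are invented for illustration:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Templates;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class CachedXslt {
    // Compiled once; Templates is immutable and thread-safe,
    // unlike Transformer, which is not.
    private final Templates templates;

    public CachedXslt(String xslt) throws TransformerConfigurationException {
        templates = TransformerFactory.newInstance()
                .newTemplates(new StreamSource(new StringReader(xslt)));
    }

    public String transform(String xml) throws TransformerException {
        StringWriter out = new StringWriter();
        // A fresh, cheap Transformer per call; the expensive
        // stylesheet compilation is not repeated.
        templates.newTransformer().transform(
                new StreamSource(new StringReader(xml)),
                new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String xslt =
            "<xsl:stylesheet version='1.0' "
            + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
            + "<xsl:output method='text'/>"
            + "<xsl:template match='/'><xsl:value-of select='/a'/></xsl:template>"
            + "</xsl:stylesheet>";
        CachedXslt t = new CachedXslt(xslt);
        // The same compiled stylesheet serves many documents.
        System.out.println(t.transform("<a>one</a>"));
        System.out.println(t.transform("<a>two</a>"));
    }
}
```

The key design point is to cache the Templates, not the Transformer: a Transformer holds per-run state and must not be shared between threads.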
Similar Messages
-
XSLT processing and Memory Leak
I have the following code for a simple XSLT transformation from one form of XML to another; the XML and XSLT files are very small (a few KB).
As you can see, I'm explicitly setting everything to null just to make sure the objects get GC'd.
When I run the transformation on my local machine running Windows XP with WSAD 5.0, there are no memory leak issues, but when I deploy the same app on our server running WSAD 5.1 on Solaris, I see memory issues and it finally throws an OutOfMemory exception.
Any ideas would be appreciated.
public String translate(String xml, String xsltFileName) throws Exception {
    String xmlOut = null;
    File fXslt = null;
    ByteArrayOutputStream baos = null;
    javax.xml.transform.Source xmlSource = null;
    javax.xml.transform.Source xsltSource = null;
    javax.xml.transform.Result result = null;
    InputStream isXML = null;
    javax.xml.transform.TransformerFactory transFact = null;
    javax.xml.transform.Transformer trans = null;
    Templates cachedXSLT = null;
    try {
        // String classname = System.setProperty("javax.xml.transform.TransformerFactory", "org.apache.xalan.processor.TransformerFactoryImpl");
        String classname = System.getProperty("javax.xml.transform.TransformerFactory");
        System.out.println("******* TRANSFORMER CLASS ***** = " + classname);
        isXML = new ByteArrayInputStream(xml.getBytes());
        fXslt = new File(xsltFileName);
        baos = new ByteArrayOutputStream();
        xmlSource = new javax.xml.transform.stream.StreamSource(isXML);
        xsltSource = new javax.xml.transform.stream.StreamSource(fXslt);
        result = new javax.xml.transform.stream.StreamResult(baos);
        // create an instance of TransformerFactory
        transFact = javax.xml.transform.TransformerFactory.newInstance();
        //transFact.setAttribute("http://xml.apache.org/xalan/features/incremental", Boolean.TRUE);
        cachedXSLT = transFact.newTemplates(xsltSource);
        trans = cachedXSLT.newTransformer();
        //trans = transFact.newTransformer(xsltSource);
        trans.transform(xmlSource, result);
        xmlOut = baos.toString();
        System.out.println("xmlout=***" + xmlOut);
    } catch (Exception e) {
        System.out.println(e.getMessage());
        throw e;
    } finally {
        trans = null;
        //transFact = null;
        result = null;
        xsltSource = null;
        xmlSource = null;
        if (baos != null) baos.close();
        baos = null;
        fXslt = null;
        if (isXML != null) isXML.close();
        isXML = null;
    }
    return xmlOut;
}
scream3r wrote:
All code works as well, but I have a memory leak when using the structure (by creating a new MyStructure()).
Presumably this really is Java code. As such, the following are the only possibilities:
1. You do not have a memory leak. You are misreading a tool (probably task manager) and assuming a leak exists when it doesn't.
2. You need to call something either on MyStructure or by passing it to another class to free it. See the documentation.
3. The leak is caused by something else. -
Oracle 9i client memory leak?
Hi
Very recently we upgraded our Oracle client from 8i to 9i (9.2.0.4.0). One of our NT services, which was built with the 8i client libraries, was rebuilt with the 9i client libraries and put into production. However, we now see a huge memory increase in that service. Normally its memory usage should be consistent, dropping when the load goes down; in this case it increases to 850 MB within an hour. Our code didn't change.
So we suspect there is a known memory leak issue in the 9i client libraries. If so, what is the fix for it?
Also if I try to find the process info of our service, I found there are lot of opened file handles for following files
D:\oracle\ora92\xdk\mesg\lpxus.msb
sometimes more than 2500 handles
Again, I tried our service compiled with the 8i client and ran it under the 8i client environment; then the maximum number of handles opened to this file was 20-30, and just after the load went away it dropped to 1.
Please help me with this issue.
I'm using the Oracle ODBC driver 8.05.10 with MFC and client version 8.0.5. In my experience you can't prevent memory leaks with that or earlier versions of the ODBC driver. Client patch kits and service packs for NT or Visual Studio don't solve the problem.
The following code will result in a memory leak with the oracle driver. With every expiration of the timer the leak will grow.
void CTestOdbcOracleDriverDlg::OnTimer(UINT nIDEvent)
{
    TCHAR errString[255];
    // open the database with class CDatabase
    // use of CRecordset
    TRY
    {
        // my table name is AL_ALARME_LOG
        pMyRecordset->Open(CRecordset::dynaset, "SELECT * FROM AL_ALARME_LOG", CRecordset::none);
        // do something with the data
        Sleep(0);
        pMyRecordset->Close();
    }
    CATCH_ALL(error)
    {
        error->GetErrorMessage(errString, 255);
        DELETE_EXCEPTION(error);
    }
    END_CATCH_ALL
    CDialog::OnTimer(nIDEvent);
}
The same code with the Microsoft ODBC driver
doesn't cause memory leaks.
Andreas ([email protected]) -
SQL server client memory required
Hi,
I want to ask: I have an MS SQL 2000 Enterprise Edition server. How much memory is required per client? I know the SQL system memory requirement (more than 1 GB), but my question is, if my SQL server gains one more client, how much memory must I add to my blade?
Please help.
Although there are some rules of thumb out there, it is hard to tell, as you did not specify what kind of access the users are doing at the database. For resource-intensive querying you should put more memory in your server; on the other hand, you would not need that much memory for users doing plain and simple (SELECT / DML) operations. How many users do you have, how many are connected on average, and what are they doing on the database?
Jens K. Suessmeyer.
http://www.sqlserver2005.de
--- -
Oracle client memory requirements
Hi everyone,
I read the Oracle 8.1.? for Linux documentation as saying Oracle products won't install on an Intel computer with less than 128MB RAM. Does anyone know if this is true for installing just the client? (If so, it seems like quite a stiff requirement for an operating system which boasts such efficient memory usage.)
Thanks in advance,
Ted Gordon
No, it's nonsense. I've installed the Oracle server on an Intel-compatible machine with only 96 MB of memory. It's not fast, but it works.
If you will be using Oracle in a production environment you will need a better configuration, but for testing it's good enough. -
Does anyone know how to configure memory allocation for a remote client
partition, for example using flag: -fm (n:2000,x:20000) ?
Thanks in advance!
xiang fang
Hi,
We could add the Memory Type of Physical Memory (Win32_PhysicalMemory) to the Hardware Inventory in client settings:
http://msdn.microsoft.com/en-us/library/aa394347(v=vs.85).aspx
Best Regards,
Joyce Li
-
The CUPC.exe and cucsf.exe processes are taking up anywhere from 250 MB to 400 MB of memory. This brings some machines (that only have 1 GB or 2 GB of RAM) to a crawl. The memory usage seems to increase throughout the day (the longer the application is open). TAC has told me that this is normal and nothing can be done to reduce the memory usage. Has anyone run into this issue with the CUPS 8 client?
Here is a link to the release notes outlining the various hardware requirements for running CUPC; depending on video and other needs, the minimums change.
Please be sure to keep in mind that these are minimums:
http://www.cisco.com/en/US/docs/voice_ip_comm/cupc/8_0/english/release/notes/cupc80.html#wp180794
Please always review our release notes, as they contain all the most up-to-date information.
George -
Possible memory leak in Oracle 12.1.0 C client
Dear Oracle Users and Professionals,
I want to report an Oracle 12.1.0 C client memory leak when the reconnect feature is in place. I used the Valgrind/massif tool to diagnose our components, and there was a small memory leak in libclntsh.so.12.1, which calls the libc function getaddrinfo(). This memory seems not to be freed when the connection is closed, while my application keeps running and reconnects when needed.
I searched the internet and the Oracle portals a bit and did not find any information that someone has detected this particular issue.
Attached is the trace back from massif: a comparison of two different time slots.
We are developers and use only the freely available Oracle client versions. Our customer, who will operate the system, has full Oracle Support available.
If you can give me advice on how to reach a state where we have no memory leak, it would be helpful.
Thank you very much
Jan Kianicka
([email protected])
Hi Jan,
This forum is for questions about connecting to non-Oracle databases. For questions about the Oracle client connecting to Oracle databases, try one of these forums - I am not sure which will be best -
ODBC
or
General Database Discussions
Regards,
Mike -
4 GB memory installed, only 3.5 GB available on Win8 64-bit
Model: HP Compaq Presario SR5520NL
Mobo: Leonite 2
OS: Windows 8, 64-bit
BIOS: Phoenix Technologies, LTD 5.21
I upgraded my memory from 3 GB to 4 GB.
Now Windows tells me that 4 GB is installed but only 3.5 GB is available.
I want to be able to use the 514 MB that I am missing in Windows.
What did I do to figure this out:
I checked whether my motherboard supports 4 GB, and it does.
I checked whether my BIOS recognizes the 4 GB installed, and it does.
I unchecked the memory cap box in msconfig's advanced boot settings.
Windows Resource Monitor tells me that 514 MB is reserved for hardware.
I reseated the memory modules, but it didn't take effect.
Now I have found that it may have to do with the memory remapping / memory hole function in the BIOS, but in my BIOS there is no such function.
I tried to update the BIOS to version 5.23 (sp37378.exe), but during the process it tells me that my system does not support the update...
I am wondering if there is a way I can use the complete 4 GB of installed memory, or can I at least update my BIOS to version 5.23 via a workaround?
Or can I enable this memory remapping function somewhere outside the BIOS?
Thanks, any help will be appreciated.
Hi Remko_2013. The extra memory usage is most likely shared graphics memory (this can be altered in the BIOS). The short answer is no, you will not be able to regain all of that memory.
Unfortunately I couldn't find exact info on your computer, as the model number you provided didn't bring up any results. If you could double-check the product number, I might have information more specific to your system:
<How Do I Find My Notebook Model Number>: http://h10025.www1.hp.com/ewfrf/wc/document?lc=en&cc=us&docname=c00033108
<How Do I Find My Desktop Model Number>: http://h10025.www1.hp.com/ewfrf/wc/document?cc=us&lc=en&dlc=en&docname=bph07555
Not for the faint of heart, but very informative: this document is a great explanation which might help you pinpoint the usage.
(Go down to the "Windows Client Memory Limits" section and read through "32-bit Client Effective Memory Limits".)
I know you have 64-bit Windows, but this article does get into that later on, in the 32-bit section.
Mark Russinovich’s Blog - Pushing the Limits of Windows: Physical Memory:
http://blogs.technet.com/b/markrussinovich/archive/2008/07/21/3092070.aspx
TwoPointOh
I work on behalf of HP
-
Database issue? Client issue?
Very similar SQL statements: if one returns fewer than 100 rows, it takes 1 second, which is acceptable. But when it returns 1000+ rows, it takes 10+ seconds, which is not acceptable.
My question is: is it a database issue, or is it the network/client memory that takes too long to show the data in SQL*Plus?
Since the execution of the SQL is fast, I think the bottleneck is in displaying the results... how can I improve it?
Thanks for your help!
Ken
=======
1099 rows selected.
Elapsed: 00:00:21.12
SQL>
10 rows selected.
Elapsed: 00:00:00.31
SQL>
Edited by: user9511515 on May 2, 2013 8:55 AM
user9511515 wrote:
very similar sql statements. if it returns less than 100 rows, it takes 1 second, which is acceptable. But when it returns 1000+ rows, it takes 10+ seconds, which is not acceptable.
Very similar? Not good enough.
This means you cannot simply isolate one aspect (the number of rows returned) and use it as a comparative benchmark. You have not provided any evidence that the number of rows returned is the reason for the difference in performance.
Assuming your sql1 and sql2 are on the same data - another contributing factor to the performance difference could be that sql1 hit the disk (and cached data), and that sql2 conveniently hit the cache and not the disk.
A word from he-who-waves-lead-pipe-and-foams-at-the-mouth: the type of comparison you are attempting is almost always fundamentally flawed. Even an identical SQL statement executed within seconds of another will have different elapsed execution times.
Performance tuning is not about comparing process 1 with process 2 and trying to figure out why one is slow and the other fast. Performance tuning is about examining, in detail, the workload of a process. If you have no idea what the process is doing, how can you determine which parts are slow and which parts can be optimised?
Is it possible to use the Client Result Cache when you use ODP.NET?
With the client-side query cache it should be possible to cache query results in client memory.
Is it possible to use the Client Result Cache when you use ODP.NET?
Yes, absolutely. In fact, my next Oracle Magazine column is on just that subject... though you won't see it until the May/June 2008 issue is published.
- Mark -
Database consuming lot of Physical memory
Hi ,
My database is on version 11.1.0.7.0 and on SUN SOLARIS SPARC.
My server admin just informed me that my database is using a lot of physical memory, which I understand is RAM.
I have been looking on Google as well, but I am not able to find a way to check on this and see how it can be controlled.
Any help/suggestion would be highly appreciated.
Regards
Kk
There are two basic ways in which Oracle uses memory.
Statically: Oracle allocates memory (for the SGA) when it starts. This memory remains fixed in size.
Dynamically: in order to service a client, memory is needed for that client session. Oracle dynamically allocates memory for such sessions (called the PGA).
When Oracle memory consumption grows, it must be dynamically allocated memory. Static memory is just that - static. It does not grow in size.
The usual reason for PGA memory consumption to grow is incorrectly designed and coded bulk processing. A single Oracle server process can easily consume all available free memory on the server, as Oracle dynamically increases the size of the PGA of the process running the flawed PL/SQL code.
However, one should not be looking at o/s command-line commands to determine an Oracle process's memory utilisation. The output of such commands is often incorrectly interpreted, as shared memory can be (and often is) included in a process's reported memory utilisation. There are notes on Metalink (mysupport.oracle.com) on the topic and on how to correctly use CLI commands to view Oracle process memory utilisation.
An easier, and more accurate, view of Oracle memory utilisation can be obtained from Oracle's virtual performance views.
So, a sysadmin e-mailing a ps (Unix/Linux process listing) showing a particular Oracle process "using too much memory" is not really solid evidence that memory is being abused. One needs to look closer at the type of memory used by the process.
JMS Not enough memory to complete the operation. at end of queue
Hi,
Using a custom-written Java JMS client program, I get the following error when retrieving messages from the queue. The error occurs when the last message is read from the queue.
Messages are retrieved correctly, but the program ends with an exception:
Exception occurred: javax.jms.JMSException: Not enough memory to complete the operation.
javax.jms.JMSException: Not enough memory to complete the operation.
at com.sap.jms.client.memory.MemoryManager.allocateMemoryForBigMessage(MemoryManager.java:94)
at com.sap.jms.client.session.Session.processFinalMessage(Session.java:1669)
at com.sap.jms.client.session.Session.provideMessage(Session.java:1591)
at com.sap.jms.client.session.MessageConsumer.receive(MessageConsumer.java:167)
at jmsClientPackage.GetMessageFromJMS.main(GetMessageFromJMS.java:205)
This is a NetWeaver 2004 system.
I have already implemented the optimizations described in the various SAP Notes concerning
"Not enough memory to complete the operation."
but this does not solve the problem.
Did anybody have similar problems?
Thanks for your help
Raf
Hi user_1,
no, you can't hide that error message.
It seems you are dealing with very big data arrays (kind of a duplicate posting). Please read the knowledge base article on memory-efficient programming!
The "not enough memory" error occurs when LabVIEW has to create a data copy and doesn't get the memory it needs. The only (?) way to avoid this is efficient programming!
If you increase the memory available in the PC, the error will only occur later!
Best regards,
GerdW
CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
Kudos are welcome -
Hello,
I am trying to get the client-side result cache working, but I have no luck :-(
I'm using Oracle Enterprise Edition 11.2.0.1.0 with client driver 11.2.0.2.0.
Executing the query select /*+ result_cache */ * from p_item via SQL*Plus or Toad generates a nice execution plan with a RESULT CACHE node, and v$result_cache_objects contains some rows.
So I have checked that the server-side cache works. Now I want to cache on the client side.
My simple Java application looks like this:
private static final String ID = UUID.randomUUID().toString();
private static final String JDBC_URL = "jdbc:oracle:oci:@server:1521:ORCL";
private static final String USER = "user";
private static final String PASSWORD = "password";

public static void main(String[] args) throws SQLException {
    OracleDataSource ds = new OracleDataSource();
    ds.setImplicitCachingEnabled(true);
    ds.setURL(JDBC_URL);
    ds.setUser(USER);
    ds.setPassword(PASSWORD);
    String sql = "select /*+ result_cache */ /* " + ID + " */ * from p_item d " +
                 "where d.i_size = :1";
    for (int i = 0; i < 100; i++) {
        OracleConnection connection = (OracleConnection) ds.getConnection();
        connection.setImplicitCachingEnabled(true);
        connection.setStatementCacheSize(10);
        OraclePreparedStatement stmt = (OraclePreparedStatement) connection.prepareStatement(sql);
        stmt.setLong(1, 176);
        ResultSet rs = stmt.executeQuery();
        int count = 0;
        for (; rs.next(); count++);
        rs.close();
        stmt.close();
        System.out.println("Execution: " + getExecutions(connection) + " Fetched: " + count);
        connection.close();
    }
}

private static int getExecutions(Connection connection) throws SQLException {
    String sql = "select executions from v$sqlarea where sql_text like ?";
    PreparedStatement stmt = connection.prepareStatement(sql);
    stmt.setString(1, "%" + ID + "%");
    ResultSet rs = stmt.executeQuery();
    if (rs.next() == false)
        return 0;
    int result = rs.getInt(1);
    if (rs.next())
        throw new IllegalArgumentException("not unique");
    rs.close();
    stmt.close();
    return result;
}
The same query is executed 100 times, and the statement execution count is incremented every time. I expected just 1 statement execution (one client-database round trip) and 99 hits in the client result set cache. The view CLIENT_RESULT_CACHE_STATS$ is empty :-(
I'm following the Oracle documentation at http://download.oracle.com/docs/cd/E14072_01/java.112/e10589/instclnt.htm#BABEDHFF and I don't know why it doesn't work :-(
I'm thankful for every tip,
André Kullmann
I wanted to post a follow-up to (hopefully) clear up a point of potential confusion. That is, with the OCI Client Result Cache, the results are indeed cached on the client, in memory managed by OCI.
As I mentioned in my previous reply, I am not a JDBC (or Java) expert so there is likely a great deal of improvement that can be made to my little test program. However, it is not intended to be exemplary, didactic code - rather, it's hopefully just enough to illustrate that the caching happens on the client (when things are configured correctly, etc).
My environment for this exercise is Windows 7 64-bit, Java SE 1.6.0_27 32-bit, Oracle Instant Client 11.2.0.2 32-bit, and Oracle Database 11.2.0.2 64-bit.
Apologies if this is a messy post, but I wanted to make it as close to copy/paste/verify as possible.
Here's the test code I used:
import java.sql.ResultSet;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import oracle.jdbc.pool.OracleDataSource;
import oracle.jdbc.OracleConnection;
class OCIResultCache
{
    public static void main(String args[]) throws SQLException
    {
        OracleDataSource ods = null;
        OracleConnection conn = null;
        PreparedStatement stmt = null;
        ResultSet rset = null;
        String sql1 = "select /*+ no_result_cache */ first_name, last_name " +
                      "from hr.employees";
        String sql2 = "select /*+ result_cache */ first_name, last_name " +
                      "from hr.employees";
        int fetchSize = 128;
        long start, end;
        try
        {
            ods = new OracleDataSource();
            ods.setURL("jdbc:oracle:oci:@liverpool:1521:V112");
            ods.setUser("orademo");
            ods.setPassword("orademo");
            conn = (OracleConnection) ods.getConnection();
            conn.setImplicitCachingEnabled(true);
            conn.setStatementCacheSize(20);

            stmt = conn.prepareStatement(sql1);
            stmt.setFetchSize(fetchSize);
            start = System.currentTimeMillis();
            for (int i = 0; i < 10000; i++)
            {
                rset = stmt.executeQuery();
                while (rset.next())
                    ;
                if (rset != null) rset.close();
            }
            end = System.currentTimeMillis();
            if (stmt != null) stmt.close();
            System.out.println();
            System.out.println("Execution time [sql1] = " + (end - start) + " ms.");

            stmt = conn.prepareStatement(sql2);
            stmt.setFetchSize(fetchSize);
            start = System.currentTimeMillis();
            for (int i = 0; i < 10000; i++)
            {
                rset = stmt.executeQuery();
                while (rset.next())
                    ;
                if (rset != null) rset.close();
            }
            end = System.currentTimeMillis();
            if (stmt != null) stmt.close();
            System.out.println();
            System.out.println("Execution time [sql2] = " + (end - start) + " ms.");

            System.out.println();
            System.out.print("Enter to continue...");
            System.console().readLine();
        }
        finally
        {
            if (rset != null) rset.close();
            if (stmt != null) stmt.close();
            if (conn != null) conn.close();
        }
    }
}
In order to show that the results are cached on the client and thus server round-trips are avoided, I generated a 10046 level 12 trace from the database for this session. This was done using the following database logon trigger:
create or replace trigger logon_trigger
after logon on database
begin
if (user = 'ORADEMO') then
execute immediate
'alter session set events ''10046 trace name context forever, level 12''';
end if;
end;
/
With that in place, I then did some environment setup and executed the test:
C:\Projects\Test\Java\OCIResultCache>set ORACLE_HOME=C:\Oracle\instantclient_11_2
C:\Projects\Test\Java\OCIResultCache>set CLASSPATH=.;%ORACLE_HOME%\ojdbc6.jar
C:\Projects\Test\Java\OCIResultCache>set PATH=%ORACLE_HOME%\;%PATH%
C:\Projects\Test\Java\OCIResultCache>java OCIResultCache
Execution time [sql1] = 1654 ms.
Execution time [sql2] = 686 ms.
Enter to continue...
This is all on my laptop, so the results are not stellar in terms of performance; however, you can see that the portion of the test that uses the OCI client result cache executed in approximately half the time of the non-cached portion.
But the more compelling data is in the resulting trace file, which I ran through the tkprof utility to format and summarize it nicely:
SQL ID: cqx6mdvs7mqud Plan Hash: 2228653197
select /*+ no_result_cache */ first_name, last_name
from
hr.employees
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 10000 0.10 0.10 0 0 0 0
Fetch 10001 0.49 0.54 0 10001 0 1070000
total 20002 0.60 0.65 0 10001 0 1070000
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 94
Number of plan statistics captured: 1
Rows (1st) Rows (avg) Rows (max) Row Source Operation
107 107 107 INDEX FULL SCAN EMP_NAME_IX (cr=2 pr=0 pw=0 time=21 us cost=1 size=1605 card=107)(object id 75241)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 10001 0.00 0.00
SQL*Net message from client 10001 0.00 1.10
SQL ID: frzmxy93n71ss Plan Hash: 2228653197
select /*+ result_cache */ first_name, last_name
from
hr.employees
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 11 22 0
Fetch 2 0.00 0.00 0 0 0 107
total 4 0.00 0.01 0 11 22 107
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 94
Number of plan statistics captured: 1
Rows (1st) Rows (avg) Rows (max) Row Source Operation
107 107 107 RESULT CACHE 0rdkpjr5p74cf0n0cs95ntguh7 (cr=0 pr=0 pw=0 time=12 us)
0 0 0 INDEX FULL SCAN EMP_NAME_IX (cr=0 pr=0 pw=0 time=0 us cost=1 size=1605 card=107)(object id 75241)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
log file sync 1 0.00 0.00
SQL*Net message from client 2 1.13 1.13
The key differences here are the execute, fetch, and SQL*Net message values. With the client-side cache, the values drop dramatically because the results come from client memory rather than from round trips to the server.
Of course, corrections, clarifications, etc. welcome and so on...
Regards,
Mark -
Urgent: writeExternal and memory increase
Hi,
I have this method:
writeExternal(ObjectOutput out) {
    for (int i = 0; i < 50000; i++) {
        obj = mysql.get(i);
        obj.writeExternal(out);
    }
}
Even though I am resetting obj in the loop, my memory keeps increasing, and it never stops even if I call garbage collection.
I tried calling out.flush(), but it still does not free the memory.
What do I do?
Please help, I need this urgently!
Mahesh
Kanad,
You are not getting me; let me clarify further.
My object:
class objArr {
    int size;
    Vector objV;

    writeExternal(ObjectOutput out) {
        out.writeInt(50000);
        for (int i = 0; i < 50000; i++) {
            obj = getDifferentObjectFromDB(i); // i is the primary key of that object
            obj.writeExternal(out);
        }
    }

    readExternal(ObjectInput in) {
        size = in.readInt();
        for (int i = 0; i < 50000; i++) {
            obj = in.readExternal();
            objV.add(obj);
        }
    }
}
Do you understand what I am doing?
My aim is that my server memory should not go high; I can afford the client memory to go high.
Somewhere I found that we should use writeSharedobject to clear referenced objects in the stream; I do not know how to do that in the above example.
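For what it's worth, when objects are written with out.writeObject(...), the standard mechanism for this in java.io serialization is ObjectOutputStream.reset(): the stream keeps a handle table referencing every object written (so repeated writes of the same object become cheap back-references), and that table is what pins memory until reset() clears it. A small self-contained sketch under that assumption; the Row class and the counts are invented for illustration:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ResetDemo {
    static class Row implements Serializable {
        final int id;
        Row(int id) { this.id = id; }
    }

    // Serialize n rows, calling reset() every resetEvery writes
    // (0 disables reset). Returns the number of bytes produced.
    static int write(int n, int resetEvery) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buf);
        for (int i = 0; i < n; i++) {
            out.writeObject(new Row(i));
            // The stream's handle table keeps a strong reference to
            // every object written, so heap use grows with n unless
            // reset() is called to clear it.
            if (resetEvery > 0 && (i + 1) % resetEvery == 0) {
                out.reset();
            }
        }
        out.close();
        return buf.size();
    }

    public static void main(String[] args) throws IOException {
        System.out.println("without reset: " + write(50_000, 0) + " bytes");
        System.out.println("with reset:    " + write(50_000, 1_000) + " bytes");
    }
}
```

The output with reset() is slightly larger on the wire (reset markers plus re-written class descriptors), which is the price paid for letting the already-written objects be garbage collected on the writing side.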