Heap dumps on very large heap memory
We are experiencing memory leak issues with one of our applications deployed on JBoss (Sun JVM 1.5, 32-bit Windows). The application is already memory-intensive and consumes the maximum heap (1.5 GB) allowed for a 32-bit JVM on Win32.
This leaves very little memory for a heap dump, and the JVM crashes with a "malloc error" whenever we try adding the heap dump flag (-agentlib:).
Has anyone faced a scenario like this?
Alternatively, for investigation purposes, we are trying to deploy it on Windows x64, but the vendor advises running it only on a 32-bit JVM. Here are my questions:
1) Can we run a 32-bit JVM on 64-bit Windows? And even if we can, can I allocate more than 2 GB of heap memory?
2) I don't see the rationale for why we cannot run on a 64-bit JVM: Java programs are supposed to be 'platform-independent', and the application, in the form of byte code, should run no matter whether the JVM is 32-bit or 64-bit.
3) Do we have any better tools (other than HPROF heap dumps) to analyze memory leaks? We tried using profiling tools, but they too fail because of the little memory available.
Any help is really appreciated! :-)
Anush
anush_tv wrote:
1) Can we run a 32-bit JVM on Windows 64? Even if we run, can I allocate more than 2 GB for heap memory?
Yes, but you're limited to 2 GB like any other 32-bit process.
2) I don't see the rationale why we cannot run on a 64-bit JVM, because Java programs are supposed to be 'platform-independent' and the application in the form of byte code should run no matter if it is a 32-bit or 64-bit JVM?
It's probably related to JBoss itself, which likely uses native code. I don't have experience with JBoss, though.
3) Do we have any other better tools (except HPROF heap dumps) to analyze memory leaks? We tried using profiling tools but they too fail because of the little memory available.
You could try "jmap", which can dump the heap.
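For reference, the usual jmap/jhat workflow looks like this; the pid and file name are placeholders, `jmap -dump` requires JDK 6 or later, and on JDK 5 jmap is only shipped for Solaris/Linux, so treat this as a sketch:

```
# dump the running JVM's heap to a binary .hprof file
jmap -dump:format=b,file=heap.hprof <pid>

# browse the dump; -J-mx raises the heap of the analysis tool itself
jhat -J-mx1500m heap.hprof
```

The advantage over -agentlib:hprof is that nothing extra runs inside the target JVM until you actually request the dump, so it avoids the startup "malloc error" described above.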
Similar Messages
-
Hi,
I have to analyze a huge heap dump file (ca. 1 GB).
I've tried some tools (HAT, YourKit, Profiler4J, etc.).
An OutOfMemoryError always occurs.
My machine has 2 GB physical memory and 3 GB swap space, on a dual-core Intel processor at 2.6 GHz.
Is there a way to load the file on my machine, or is there a tool which is able to load dump files partially?
ThanX, ToX
1 GB isn't very large :-) When you say you tried HAT, did you mean jhat? Just checking, as jhat requires less memory than the original HAT. Also, did you include an option such as -J-mx1500m to increase the maximum heap for the tools that you tried? Another one to try is the HeapWalker tool in VisualVM. That does a better job than HAT/jhat with bigger heaps.
-
JRockit for applications with very large heaps
I am using JRockit for an application that acts as an in-memory database, storing a large amount of data in RAM (50 GB). Out of the box we got about a 25% performance increase compared to the HotSpot JVM (great work, guys). Once the server starts up, almost all of the objects will be stored in the old generation and a smaller number in the nursery. The operation we are trying to optimize needs to visit basically every object in RAM, and we want to optimize for throughput (total time to run this operation, not worrying about GC pauses). Currently we are using huge pages, -XXaggressive and -XX:+UseCallProfiling. We are giving the application 50 GB of RAM for both the max and the min. I tried adjusting the TLA size to be larger, which seemed to degrade performance. I also tried a few other GC schemes, including singlepar, which also had negative effects (currently using the default, which optimizes for throughput).
I used JRMC to profile the operation, and here are the results I thought were interesting:
liveset 30%
heap fragmentation 2.5%
GC Pause time average 600ms
GC Pause time max 2.5 sec
It had to do 4 young-generation collections, which were very fast, and then 2 old-generation collections, which were each about 2.5 s (the entire operation takes 45 s).
For the long old-generation collections, about 50% of the time was spent in mark and 50% in sweep. Drilling down one more level, 1.3 seconds were spent in objects and 1.1 seconds in external compaction.
Heap usage: although 50 GB is committed, usage fluctuates between 20 GB and 32 GB. To give you an idea of what is stored in the heap, about 50% of it is char[] and another 20% is int[] and long[].
My question is are there any other flags that I could try that might help improve performance or is there anything I should be looking at closer in JRMC to help tune this application. Are there any specific tips for applications with large heaps? We can also assume that memory could be doubled or even tripled if that would improve performance but we noticed that larger heaps did not always improve performance.
Thanks in advance for any help you can provide.
Any suggestions for using JRockit with very large heaps?
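For context, the flags described in the question would combine along these lines; the jar name is a stand-in, and exact flag spellings vary between JRockit releases, so treat this as a sketch rather than a verified command line:

```
java -Xms50g -Xmx50g \
     -Xgcprio:throughput \
     -XXaggressive -XX:+UseCallProfiling \
     -jar inmemory-db.jar
```

-Xgcprio:throughput explicitly selects the throughput-optimized collector that the poster says is already the default in their setup; large-page support is configured separately and its flag name differs across JRockit versions.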
-
OAS Heap memory issue: An error "java.lang.OutOfMemoryError: GC overhead
OAS - 10.1.3.4.0
We are running out of Heap memory and seeing lots of full GC and out of memory events
Verbose GC is on.
Users don't know what they are doing to cause this
We have 30-40 users per server and 1.5 GB heap memory allocated
There are no other applications on the machine, only the PRD instance with 1.5 GB allocated to the JVM. We do not have any memory issues on the server itself, and we could increase the heap, but we don't want to go over 1.5 GB, since that is what I understood to be the high end of what is recommended. We only have 30-40 users on each machine. There are 8 servers, and on a typical heavy-usage day we may have one or two machines with the out-of-memory errors or continuous full GCs in the logs. When this occurs, the phones light up with the people on that machine experiencing slowness.
Below is an example of what we see in a file created in the OPMN log folder on the JAS server when this occurs. I think this is the log created when verbose GC is turned on. I can send you the full log or anything else you need. Thanks.
1194751K->1187561K(1365376K), 4.6044738 secs]
java.lang.OutOfMemoryError: GC overhead limit exceeded
Dumping heap to java_pid10644.hprof ...
[Full GC 1194751K->1188321K(1365376K), 4.7488200 secs]
Heap dump file created [1326230812 bytes in 47.602 secs]
[Full GC 1194751K->1177641K(1365376K), 5.6128944 secs]
[Full GC 1194751K->986239K(1365376K), 4.6376179 secs]
[Full GC 1156991K->991906K(1365376K), 4.5989155 secs]
[Full GC 1162658K->1008331K(1365376K), 4.1139016 secs]
[Full GC 1179083K->970476K(1365376K), 4.9670050 secs]
[GC 1141228K->990237K(1365376K), 0.0561096 secs]
[GC 1160989K->1012405K(1365376K), 0.0920553 secs]
[Full GC 1012405K->1012274K(1365376K), 4.1170216 secs]
[Full GC 1183026K->1032000K(1365376K), 4.4166454 secs]
[Full GC 1194739K->1061736K(1365376K), 4.4009954 secs]
[Full GC 1194739K->1056175K(1365376K), 5.1124431 secs]
[Full GC 1194752K->1079807K(1365376K), 4.5160851 secs]
In addition to the 'overhead limit exceeded' errors, we also see:
[Full GC 1194751K->1194751K(1365376K), 4.6785776 secs]
[Full GC 1194751K->1188062K(1365376K), 5.4413659 secs]
[Full GC 1194751K->1194751K(1365376K), 4.5800033 secs]
[Full GC 1194751K->1194751K(1365376K), 4.4951213 secs]
[Full GC 1194751K->1194751K(1365376K), 4.5227857 secs]
[Full GC 1194751K->1171773K(1365376K), 5.5696274 secs]
11/07/25 11:07:04 java.lang.OutOfMemoryError: Java heap space
[Full GC 1194751K->1183306K(1365376K), 4.5841678 secs]
[Full GC 1194751K->1184329K(1365376K), 4.5469164 secs]
[Full GC 1194751K->1184831K(1365376K), 4.6415273 secs]
[Full GC 1194751K->1174738K(1365376K), 5.3647290 secs]
[Full GC 1194751K->1183878K(1365376K), 4.5660217 secs]
[Full GC 1194751K->1184651K(1365376K), 4.5619460 secs]
[Full GC 1194751K->1185795K(1365376K), 4.4341158 secs]
There's an Oracle support note with a very similar MO:
WebLogic Server: Getting "java.lang.OutOfMemoryError: GC overhead limit exceeded" exception with Sun JDK 1.6 [ID 1242994.1]
If I search for "java.lang.OutOfMemoryError: GC overhead" on Oracle Support, it returns at least 12 documents.
It might be bug 6065704. Search Oracle Support for this bug number.
Best Regards
mseberg
-
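Every full-GC line above has the shape [Full GC before->after(total), secs]. A small parser (my own sketch, not part of any Oracle tooling) makes the pattern easy to quantify:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLineParser {
    // Matches e.g. "[Full GC 1194751K->1187561K(1365376K), 4.6044738 secs]"
    private static final Pattern FULL_GC =
        Pattern.compile("\\[Full GC (\\d+)K->(\\d+)K\\((\\d+)K\\), ([0-9.]+) secs\\]");

    /** Returns KB reclaimed by the collection, or -1 if the line doesn't match. */
    public static long reclaimedKb(String line) {
        Matcher m = FULL_GC.matcher(line);
        if (!m.find()) return -1;
        return Long.parseLong(m.group(1)) - Long.parseLong(m.group(2));
    }

    /** Fraction of total heap still occupied after the collection (0..1). */
    public static double occupancyAfter(String line) {
        Matcher m = FULL_GC.matcher(line);
        if (!m.find()) return Double.NaN;
        return Double.parseDouble(m.group(2)) / Double.parseDouble(m.group(3));
    }

    public static void main(String[] args) {
        String line = "[Full GC 1194751K->1187561K(1365376K), 4.6044738 secs]";
        System.out.println("reclaimed KB: " + reclaimedKb(line));    // 7190
        System.out.println("occupancy after: " + occupancyAfter(line));
    }
}
```

Run over the excerpt above, almost every full GC leaves the heap at roughly 87% occupancy while taking 4-5 seconds, which is precisely the pattern behind "GC overhead limit exceeded": the JVM spends most of its time collecting and recovers almost nothing.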
ORA-00385: cannot enable Very Large Memory with new buffer cache 11.2.0.2
[oracle@bnl11237dat01][DWH11]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Mon Jun 20 09:19:49 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup mount pfile=/u01/app/oracle/product/11.2.0/dbhome_1/dbs//initDWH11.ora
ORA-00385: cannot enable Very Large Memory with new buffer cache parameters
DWH12.__large_pool_size=16777216
DWH11.__large_pool_size=16777216
DWH11.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
DWH12.__pga_aggregate_target=2902458368
DWH11.__pga_aggregate_target=2902458368
DWH12.__sga_target=4328521728
DWH11.__sga_target=4328521728
DWH12.__shared_io_pool_size=0
DWH11.__shared_io_pool_size=0
DWH12.__shared_pool_size=956301312
DWH11.__shared_pool_size=956301312
DWH12.__streams_pool_size=0
DWH11.__streams_pool_size=134217728
#*._realfree_heap_pagesize_hint=262144
#*._use_realfree_heap=TRUE
*.audit_file_dest='/u01/app/oracle/admin/DWH/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='/dborafiles/mdm_bn/dwh/oradata01/DWH/control01.ctl','/dborafiles/mdm_bn/dwh/orareco/DWH/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='DWH'
*.db_recovery_file_dest='/dborafiles/mdm_bn/dwh/orareco'
*.db_recovery_file_dest_size=7373586432
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=DWH1XDB)'
DWH12.instance_number=2
DWH11.instance_number=1
DWH11.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat01-vip)(PORT=1521))))'
DWH12.local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=bnl11237dat02-vip)(PORT=1521))))'
*.log_archive_dest_1='LOCATION=/dborafiles/mdm_bn/dwh/oraarch'
*.log_archive_format='DWH_%t_%s_%r.arc'
#*.memory_max_target=7226785792
*.memory_target=7226785792
*.open_cursors=1000
*.processes=500
*.remote_listener='LISTENERS_SCAN'
*.remote_login_passwordfile='exclusive'
*.sessions=555
DWH12.thread=2
DWH11.thread=1
DWH12.undo_tablespace='UNDOTBS2'
DWH11.undo_tablespace='UNDOTBS1'
SPFILE='/dborafiles/mdm_bn/dwh/oradata01/DWH/spfileDWH1.ora' # line added by Agent
[oracle@bnl11237dat01][DWH11]$ cat /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536
# Controls the default maxmimum size of a mesage queue
kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736
# Controls the maximum number of shared memory segments, in pages
#kernel.shmall = 4294967296
kernel.shmall = 8250344
# Oracle kernel parameters
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
kernel.shmmax = 536870912
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
net.ipv4.tcp_wmem = 262144 262144 262144
net.ipv4.tcp_rmem = 4194304 4194304 4194304
Please can I know how to resolve this error?
CAUSE: The user specified one or more of { db_cache_size, db_recycle_cache_size, db_keep_cache_size, db_nk_cache_size (where n is one of 2, 4, 8, 16, 32) } AND use_indirect_data_buffers is set to TRUE. This is illegal.
ACTION: Very Large Memory can only be enabled with the old (pre-Oracle 8.2) parameters.
-
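Concretely, this means sizing the buffer cache with the pre-8.2 parameter whenever indirect data buffers are enabled. A hedged sketch of the two pfile alternatives (values are illustrative, and db_block_buffers counts blocks, so check your db_block_size):

```
# illegal: new-style cache parameter together with indirect buffers (ORA-00385)
*.use_indirect_data_buffers=TRUE
*.db_cache_size=2G

# legal: size the cache in blocks with the old parameter instead
*.use_indirect_data_buffers=TRUE
*.db_block_buffers=262144      # 262144 x 8K blocks = 2G
```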
Roll, extended and heap memory EXTM
hi Dear,
I am getting a performance issue on one new server. The system specs are:
Processor: 2 x Xeon 8C E7-2820 105W 2.00GHz (x3690 X5)
Cache 18MB L3
Memory (Installed) 32 GB PC3L-10600 CL9 ECC DDR3 1333MHz
Instance Profile
Parameter Name Parameter value
em/initial_size_MB 12288
ztta/roll_extension 2000683008
abap/swap_reserve 20971520
abap/heaplimit 40894464
abap/heap_area_total 15204352000
abap/heap_area_nondia 0
abap/heap_area_dia 2000683008
rdisp/PG_MAXFS 32768
rdisp/PG_SHM 16384
rdisp/ROLL_MAXFS 32768
rdisp/ROLL_SHM 32768
ztta/roll_area 3000320
ztta/roll_first 1024
rsdb/ntab/sntabsize 9631
rsdb/ntab/irbdsize 19261
rsdb/ntab/ftabsize 96305
rsdb/ntab/entrycount 64167
zcsa/presentation_buffer_area 14640128
rsdb/cua/buffersize 8000
rtbb/buffer_length 90000
zcsa/table_buffer_area 30000000
abap/buffersize 1500000
PHYS_MEMSIZE 18432
login/no_automatic_user_sapstar 1
login/password_history_size 5
login/fails_to_user_lock 5
rsdb/obj/buffersize 40000
rdisp/wp_no_dia 12
rdisp/wp_no_btc 3
rdisp/wp_no_enq 1
rdisp/wp_no_vb 2
rdisp/wp_no_vb2 1
rdisp/wp_no_spo 2
rsdb/obj/max_objects 2500
rdisp/max_wprun_time 2400
Would you give me your expert suggestion with respect to the system specs (8-core processors and 32 GB RAM)?
Regards,
Hi Gaurav,
1) When does this dump occur (during a specific activity, or at any time)?
The dump is occurring against a specific transaction, when users execute the tcode with big selection criteria.
System: PRDSAP_PRD_00 Tune summary
Date + Time of Snapshot: 21.02.2012 09:37:21 Startup: 19.02.2012 22:05:14
Buffer HitRatio % Alloc. KB Freesp. KB % Free Sp. Dir. Size FreeDirEnt % Free Dir Swaps DB Accs
Nametab (NTAB) 0
Table definition 86.81 21,809 64,167 127,586 192,179
Field definition 77.90 101,318 45,098 46.83 64,167 50,613 78.88 109,591 123,279
Short NTAB 98.44 11,636 8,958 93.01 16,041 13,391 83.48 0 2,650
Initial records 0.99 21,266 14,669 76.16 16,041 2,860 17.83 53,460 66,641
0
program 96.73 1,500,000 795,367 57.04 375,000 305,187 81.38 0 209,493
CUA 99.30 8,000 3,129 47.71 4,000 3,754 93.85 0 252
Screen 99.59 14,297 9,503 67.43 2,000 1,731 86.55 0 275
Calendar 100.00 488 366 76.57 200 48 24.00 0 152
OTR 100.00 4,096 3,375 100.00 2,000 2,000 100.00 0
0
Tables 0
Generic Key 99.73 29,297 2,399 8.65 5,000 1,611 32.22 39 14,181
Single record 88.77 90,000 51,599 57.45 500 403 80.60 0 72,187
0
Export/import 92.43 40,000 34,517 88.28 2,500 975 39.00 0
Exp./ Imp. SHM 53.57 4,096 3,203 94.90 2,000 1,999 99.95 0
SAP Memory Curr.Use % CurUse[KB] MaxUse[KB] In Mem[KB] OnDisk[KB] SAPCurCach HitRatio %
Roll area 0.12 325 3,488 262,144 0 IDs 98.77
Page area 0.25 666 94,624 131,072 131,072 Statement 93.00
Extended memory 4.03 675,840 6,881,280 16,773,120 0 0.00
Heap memory 0 1,084,762 0 0 0.00
Call Stati HitRatio % ABAP/4 Req ABAP Fails DBTotCalls AvTime[ms] DBRowsAff.
Select single 99.84 1,755,375 267,018 9,898 0 1,488,357
Select 1.53 1,471,444 0 561,166 0 5,708,457
Insert 0.00 225,271 5,926 58,267 0 4,146,957
Update 0.00 497 25 572 0 507
Parameters of SWAP entries
Efficiency HITRATIO % 87
HITS 1,264,801
REQUESTS 1,456,982
DB access quality % 87
DB access 192,179
DB access saved 1,264,787
Reorgs 0
Size Allocated KB 21,809
Available KB 18,297
Used KB 18,297
Free KB 0
Free KB 0
Directory entries Available 64,167
Used 64,167
Free 0
Swaps Objects swapped 127,586
Frames swapped 0
Resets Total 0
You can check the startup date of the above server; some users are logged in on this server for data testing.
Would you suggest values for the above parameters?
Regards,
-
UCCX 7 Heap Memory Usage Exceeded Error
UCCX 7.0.(1) SR5
Getting the following error when updating or adding new script applications:
"It is not recommended to update the application as Engine heap memory usage exceeded configured threshold. Click OK to continue and Cancel to exit."
Apparently this is an alert that was built into SR4 and is configurable under the System Parameters.
Does anyone have information on what processes use the heap memory in UCCX, or how to monitor the usage?
As Tom can attest to by now, this is something of an iceberg with big sharp edges below the surface.
The Java heap is fixed at 256 MB on CCX. The Java heap is used by Tomcat as execution memory. In addition to this, applications, scripts, and other repository data are loaded into the heap at runtime. Depending on your environment, you may be approaching the limits of the heap, which cannot be changed. If the heap limit is reached, the heap will be dumped and calls will be impacted.
What have you been doing as of late on your CCX server? How many applications and scripts do you have? Are any of these using XML files extensively?
Note there is also a possible bug where the MIVR engine does not properly release all objects loaded into the heap at the end of a script execution leading to a memory leak of sorts. The discussion [debate] over this behavior is continuing. As of this week, it may be represented under
CSCte49231. If it is, this may qualify as the most poorly described defect ever.
-
Amount of heap memory occupied by a thread
Hi, I am using threads in my application. I know the objects that are created inside a thread are stored in the heap space, and that this applies to all the threads, right?
So the heap space stores all the objects created for the threads.
Is there a way to know the amount of heap memory occupied by the objects created for a single thread?
I know I can't limit the stack memory for a thread, but that memory is for local variables and not for objects, right?
Thanks
Not really. The "size" of an object is a very complicated thing to compute, as it involves not only knowing the size of primitives and the size of references, but also a recursive calculation involving the size of the objects referred to by the original object, taking into account that the object references form a directed graph and not necessarily a tree.
That's probably more work than you want to do for whatever prompted this requirement, so I would suggest finding some simpler proxy calculation.
-
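To make the "directed graph, not a tree" point concrete, here is a small sketch (my own illustration, not a real sizing tool) that counts reachable objects rather than bytes. A real size calculation would add per-object byte estimates on top of exactly this traversal; the identity-based visited set is what keeps shared references and cycles from being counted twice:

```java
import java.lang.reflect.Array;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.IdentityHashMap;

/** Counts objects reachable from a root by walking the reference graph once. */
public class ReachableCounter {
    static class Node { Node next; }   // small demo class

    public static int countReachable(Object root) {
        IdentityHashMap<Object, Boolean> visited = new IdentityHashMap<Object, Boolean>();
        Deque<Object> stack = new ArrayDeque<Object>();
        if (root != null) stack.push(root);
        while (!stack.isEmpty()) {
            Object o = stack.pop();
            if (visited.containsKey(o)) continue;   // already counted: shared ref or cycle
            visited.put(o, Boolean.TRUE);
            Class<?> c = o.getClass();
            if (c.isArray()) {
                if (!c.getComponentType().isPrimitive())
                    for (int i = 0; i < Array.getLength(o); i++) {
                        Object e = Array.get(o, i);
                        if (e != null) stack.push(e);
                    }
            } else {
                for (Class<?> k = c; k != null; k = k.getSuperclass())
                    for (Field f : k.getDeclaredFields()) {
                        if (f.getType().isPrimitive() || Modifier.isStatic(f.getModifiers())) continue;
                        f.setAccessible(true);   // may be blocked for java.* internals on JDK 9+
                        try {
                            Object v = f.get(o);
                            if (v != null) stack.push(v);
                        } catch (IllegalAccessException ignored) { }
                    }
            }
        }
        return visited.size();
    }

    /** Builds a 2-node cycle; each node is visited exactly once. */
    public static int demoCycleCount() {
        Node a = new Node(), b = new Node();
        a.next = b;
        b.next = a;          // cycle back to a
        return countReachable(a);
    }
}
```

Even this object count is only a proxy: attributing those objects to "the thread that created them" would additionally require allocation profiling, which is why per-thread heap usage is not directly observable.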
Find heap memory Size for Web Intelligence processing server in BO 4.0
Hi All ,
We need to gather data for sizing inputs. For the Adaptive Processing Server, we can find this by going to CMC > Servers > APS > Properties and checking the value of the -Xmx parameter. Could you please tell me how to find the maximum heap memory allocated to a Web Intelligence Processing Server in BO 4.0, since for the Webi server this parameter is not maintained?
Regards ,
Abhinav
Hi Abhinav,
The maximum threshold is a value which may be reached at peak usage; the Webi Processing Server cannot occupy memory beyond this value at any time.
In your situation, 9 Webi Processing Servers on 16 GB of server RAM is not recommended. Consider a host with 16 GB of RAM in total:
4 GB should be left for the OS
Tomcat will need a minimum of 2 GB for 200 users
So you are left with 10 GB of RAM for all BO services
Now 9 Webi Processing Servers with a 6 GB threshold each (54 GB in total) will not work here.
For this configuration you should run 2 Webi Processing Servers with the default threshold on a single host.
Regards,
Hrishikesh -
Threaded inner classes & heap memory exhaustion
(_) how can I maximize my threading without running out of heap memory?
push it to the limit, but throttle back before a java.lang.OutOfMemoryError.
(_) within one threaded class, ThreadClass, I have two threaded inner classes. For each instance of ThreadClass I only start one instance of each inner class.
And I start hundreds of ThreadClass instances, but not until the previously running ThreadClass object exits, so only one should be running at any given time.
So, what about threaded inner classes?
Are they good? Bad? Do they cause OutOfMemoryErrors?
Are those inner threads not dying?
What are common causes of:
java.lang.OutOfMemoryError: Java heap space?
My program runs for about 5 minutes, then bails with the memory error.
How can I drill down and see what is eating up all my memory?
thanks.
A Thread class is not the same as a thread of execution. Those inner-class-based threads of execution are not dying.
Maybe. But this is the way I test a thread's life:
public void run() {
    System.out.println("thread start");
    System.out.println("thread dies and releases memory");
}
For each inner thread, and the outer thread, this approach for testing thread life reveals that they die.
Why don't you use a thread pool?
OK, I will think about how to do this.
If not, you need to ensure those inner threads have exited and completed.
What is a 100% sure check to guarantee a thread exits, other than the one I use above?
note:
The outer thread is running on a remote host, and the inner threads are running locally. Here are the details:
public class BB implements Runnable, FinInterface {
    public void run() {
        // do some work on the remote machine
    }

    private void startResultsHandler(OisXoos oisX) {
        ResultsHandler rh = new ResultsHandler(oisX);
        rh.start();
    }

    public void startDataProxy(OisXoos oisX, String query) {
        DataProxy dp = new DataProxy(oisX, query);
        dp.start();
    }

    public class ResultsHandler extends Thread {
        // runs locally; waits for results from servers
        public void run() {
            ObjectInputStream ois = new ObjectInputStream(oisX.input);
            Set result = (Set) ois.readObject();
        }
    } // ____ class :: _ ResultsHandler _ :: class ____

    public class DataProxy extends Thread {
        // runs locally; performs db queries on behalf of servers
        public void run() {
            ObjectOutputStream oos = new ObjectOutputStream(oisX.output);
            while (moreData) {
                .... // sql queries
                oos.writeObject(data);
            }
            startResultsHandler(oisX);
        }
    } // _____ class :: _ DataProxy _ :: class _____
}
Now, the BB class is not started locally.
The inner threads are started locally, both to service data requests by the BB thread and to wait for its results.
(_) so, maybe the inner threads cannot exit (but they sure look like they exit) until their parent BB thread exits.
(_) yet, those inner threads have no knowledge that the BB thread is running.
Externalizing those inner thread classes would put 2 weeks of work in the dust bin. I want to keep them internal.
thanks.
Here is the piece of code that controls everything:
while (moreData) {
    FinObjects finObj = new BB();
    String symb = (String) data_ois.readObject();
    OisXoos oisX = RSAdmin.getServer();
    oisX.xoos.writeObject(finObj);
    finObj.startDataProxy(finObj, oisX, symb);
}
-
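On the "100% sure check" question: a log line at the end of run() only shows the method reached its last statement, while Thread.join() returns only after the target thread has actually terminated. A minimal sketch (the class and names are mine, for illustration):

```java
public class JoinDemo {
    /** Starts a worker, then blocks until it has really terminated. */
    public static boolean runAndWait() {
        final boolean[] done = { false };
        Thread worker = new Thread(new Runnable() {
            public void run() {
                done[0] = true;      // simulate some work
            }
        });
        worker.start();
        try {
            worker.join();           // returns only after run() has completed
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        // join() also establishes happens-before, so reading done[0] here is safe
        return done[0] && !worker.isAlive();
    }

    public static void main(String[] args) {
        System.out.println("worker finished: " + runAndWait());
    }
}
```

A thread pool (java.util.concurrent.ExecutorService, available since JDK 5) gives the same guarantee through Future.get() and avoids creating hundreds of short-lived threads in the first place.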
Profile Performance and Memory shows very large 'VI Time' value
When I run the Profile Performance and Memory tool on my project, I get very large numbers for VI Time (and Sub VIs Time and Total Time) for some VIs, for example 1844674407370752.5. I have selected only 'Timing statistics' and 'Timing details'. Sometimes the numbers start with reasonable values; then, when updating the display with the snapshot button, they might get large and stay large. Other VI Times remain reasonable.
LabVIEW 2011 Version 11.0 (32-bit). Windows 7.
What gives?
- les
les,
The number indicates some kind of rollover... so, do you have a VI where this happens all the time? Can you share it with us?
thanks,
Norbert
Can I increase heap memory without specifying any class or jar file?
Hi,
I tried to increase my heap memory this way:
java -Xms256m -Xmx256m
but I got an error... it seems that I must specify a Java class or a .jar file.
This is the error:
Usage: java [-options] class [args...]
(to execute a class)
or java [-options] -jar jarfile [args...]
(to execute a jar file)
where options include:
-client to select the "client" VM
-server to select the "server" VM
-hotspot is a synonym for the "client" VM [deprecated]
The default VM is client.
-cp <class search path of directories and zip/jar files>
-classpath <class search path of directories and zip/jar files>
A ; separated list of directories, JAR archives,
and ZIP archives to search for class files.
-D<name>=<value>
set a system property
-verbose[:class|gc|jni]
enable verbose output
-version print product version and exit
-version:<value>
require the specified version to run
-showversion print product version and continue
-jre-restrict-search | -jre-no-restrict-search
include/exclude user private JREs in the version search
-? -help print this help message
-X print help on non-standard options
-ea[:<packagename>...|:<classname>]
-enableassertions[:<packagename>...|:<classname>]
enable assertions
-da[:<packagename>...|:<classname>]
-disableassertions[:<packagename>...|:<classname>]
disable assertions
-esa | -enablesystemassertions
enable system assertions
-dsa | -disablesystemassertions
disable system assertions
-agentlib:<libname>[=<options>]
load native agent library <libname>, e.g. -agentlib:hprof
see also, -agentlib:jdwp=help and -agentlib:hprof=help
-agentpath:<pathname>[=<options>]
load native agent library by full pathname
-javaagent:<jarpath>[=<options>]
load Java programming language agent, see java.lang.instrument
-splash:<imagepath>
show splash screen with specified image
Can I increase heap memory without specifying any class or jar file?
thx
chiara wrote:
Hi,
I tried to increase my heap memory this way:
java -Xms256m -Xmx256m
but I got an error... it seems that I must specify a Java class or a .jar file.
This is the error:
Usage: java [-options] class [args...]
(to execute a class)
or java [-options] -jar jarfile [args...]
(to execute a jar file)
can I increase heap memory without specify any class or jar file??The job of java.exe is to execute java bytecode.
What is it supposed to do with your request to use 256m of memory for heap
when you are not giving it a class or a jar to run? -
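To see the flags take effect you have to give java something to run; a tiny class (mine, for illustration) makes the configured limit visible:

```java
public class ShowHeap {
    /** Reports the -Xmx ceiling the JVM is actually running with, in MB. */
    public static long maxHeapMb() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("max heap ~" + maxHeapMb() + " MB");
    }
}
```

Compile it, then run `java -Xms256m -Xmx256m ShowHeap`: with a class after the options the command works, and the printed value confirms the flag took effect (typically a little under 256, since maxMemory() excludes some JVM overhead).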
Can off-JVM-heap memory be used in the near-cache front tier?
I tried to configure a near cache using the NIO manager (off-JVM-heap) in the front tier.
<near-scheme>
<scheme-name>CohApp-near</scheme-name>
<front-scheme>
<external-scheme>
</external-scheme>
</front-scheme>
<back-scheme>
<distributed-scheme>
<scheme-ref>CohApp-distributed</scheme-ref>
</distributed-scheme>
</back-scheme>
<invalidation-strategy>auto</invalidation-strategy>
<autostart>true</autostart>
</near-scheme>
When starting 'com.tangosol.net.DefaultCacheServer' with this config, the error is:
Oracle Coherence Version 3.7.1.0 Build 27797
Enterprise Edition: Development mode
Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
2014-03-30 16:34:17.518/1.201 Oracle Coherence EE 3.7.1.0 &lt;Error&gt; (thread=main, member=n/a): Error org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'external-scheme'. One of '{"http://xmlns.oracle.com/coherence/coherence-cache-config":local-scheme, "http://xmlns.oracle.com/coherence/coherence-cache-config":class-scheme}' is expected. - line 92
Exception in thread "main" (Wrapped: Failed to load the factory) (Wrapped: Missing or inaccessible constructor "com.tangosol.net.DefaultConfigurableCacheFactory(String)"
<configurable-cache-factory-config>
<class-name>com.tangosol.net.DefaultConfigurableCacheFactory</class-name>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>coherence-cache-config.xml</param-value>
</init-param>
</init-params>
</configurable-cache-factory-config>) java.lang.reflect.InvocationTargetException
at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
at com.tangosol.net.ScopedCacheFactoryBuilder.getDefaultFactory(ScopedCacheFactoryBuilder.java:311)
at com.tangosol.net.DefaultCacheFactoryBuilder.getSingletonFactory(DefaultCacheFactoryBuilder.java:48)
at com.tangosol.net.DefaultCacheFactoryBuilder.getFactory(DefaultCacheFactoryBuilder.java:121)
at com.tangosol.net.ScopedCacheFactoryBuilder.getConfigurableCacheFactory(ScopedCacheFactoryBuilder.java:112)
at com.tangosol.net.CacheFactory.getConfigurableCacheFactory(CacheFactory.java:126)
at com.tangosol.net.DefaultCacheServer.getDefaultConfigurableCacheFactory(DefaultCacheServer.java:364)
at com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:197)
Caused by: (Wrapped: Missing or inaccessible constructor "com.tangosol.net.DefaultConfigurableCacheFactory(String)"
<configurable-cache-factory-config>
<class-name>com.tangosol.net.DefaultConfigurableCacheFactory</class-name>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>coherence-cache-config.xml</param-value>
</init-param>
</init-params>
</configurable-cache-factory-config>) java.lang.reflect.InvocationTargetException
at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2652)
at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2536)
at com.tangosol.net.ScopedCacheFactoryBuilder.getDefaultFactory(ScopedCacheFactoryBuilder.java:273)
... 6 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at com.tangosol.util.ClassHelper.newInstance(ClassHelper.java:694)
at com.tangosol.run.xml.XmlHelper.createInstance(XmlHelper.java:2611)
... 8 more
Caused by: (Wrapped: Failed to load cache configuration: coherence-cache-config.xml) (Wrapped) java.io.IOException: Exception occurred during schema validation: cvc-complex-type.2.4.a: Invalid content was found starting with element 'external-scheme'. One of '{"http://xmlns.oracle.com/coherence/coherence-cache-config":local-scheme, "http://xmlns.oracle.com/coherence/coherence-cache-config":class-scheme}' is expected.
at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
at com.tangosol.run.xml.XmlHelper.loadResourceInternal(XmlHelper.java:341)
at com.tangosol.run.xml.XmlHelper.loadFileOrResource(XmlHelper.java:283)
at com.tangosol.net.DefaultConfigurableCacheFactory.loadConfig(DefaultConfigurableCacheFactory.java:439)
at com.tangosol.net.DefaultConfigurableCacheFactory.loadConfig(DefaultConfigurableCacheFactory.java:425)
at com.tangosol.net.DefaultConfigurableCacheFactory.<init>(DefaultConfigurableCacheFactory.java:155)
... 14 more
Caused by: (Wrapped) java.io.IOException: Exception occurred during schema validation: cvc-complex-type.2.4.a: Invalid content was found starting with element 'external-scheme'. One of '{"http://xmlns.oracle.com/coherence/coherence-cache-config":local-scheme, "http://xmlns.oracle.com/coherence/coherence-cache-config":class-scheme}' is expected.
at com.tangosol.run.xml.XmlHelper.loadXml(XmlHelper.java:122)
at com.tangosol.run.xml.XmlHelper.loadXml(XmlHelper.java:157)
at com.tangosol.run.xml.XmlHelper.loadResourceInternal(XmlHelper.java:322)
... 18 more
Caused by: java.io.IOException: Exception occurred during schema validation: cvc-complex-type.2.4.a: Invalid content was found starting with element 'external-scheme'. One of '{"http://xmlns.oracle.com/coherence/coherence-cache-config":local-scheme, "http://xmlns.oracle.com/coherence/coherence-cache-config":class-scheme}' is expected.
at com.tangosol.run.xml.SimpleParser.parseXml(SimpleParser.java:212)
at com.tangosol.run.xml.SimpleParser.parseXml(SimpleParser.java:93)
at com.tangosol.run.xml.SimpleParser.parseXml(SimpleParser.java:162)
at com.tangosol.run.xml.SimpleParser.parseXml(SimpleParser.java:115)
at com.tangosol.run.xml.XmlHelper.loadXml(XmlHelper.java:118)
... 20 more
Caused by: org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element 'external-scheme'. One of '{"http://xmlns.oracle.com/coherence/coherence-cache-config":local-scheme, "http://xmlns.oracle.com/coherence/coherence-cache-config":class-scheme}' is expected.
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:195)
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:131)
at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:384)
at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:318)
at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator$XSIErrorReporter.reportError(XMLSchemaValidator.java:417)
at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.reportSchemaError(XMLSchemaValidator.java:3182)
at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.handleStartElement(XMLSchemaValidator.java:1806)
at com.sun.org.apache.xerces.internal.impl.xs.XMLSchemaValidator.startElement(XMLSchemaValidator.java:705)
at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:400)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2756)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648)
at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:140)
at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:511)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808)
at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
at com.sun.org.apache.xerces.internal.jaxp.validation.StreamValidatorHel
per.validate(StreamValidatorHelper.java:144)
at com.sun.org.apache.xerces.internal.jaxp.validation.ValidatorImpl.vali
date(ValidatorImpl.java:111)
at javax.xml.validation.Validator.validate(Validator.java:127)
at com.tangosol.run.xml.SaxParser.validateXsd(SaxParser.java:236)
at com.tangosol.run.xml.SimpleParser.parseXml(SimpleParser.java:206)
So my question is: can the JVM heap memory used by the near-cache front tier be moved off-heap? Or could someone explain how to configure off-heap memory for the near-cache front tier?
Thanks.

Only local-scheme and class-scheme can be used in the front-scheme of a near cache.
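As the reply says, the front-scheme of a near cache only accepts local-scheme or class-scheme, so an external-scheme (off-heap) front tier fails schema validation as shown above. A minimal sketch of a valid near-scheme, with illustrative scheme names and sizes (not taken from the original poster's configuration):

```xml
<!-- Sketch only: scheme names and high-units are made-up examples.
     The front tier must be local-scheme or class-scheme; off-heap
     storage (external-scheme) can only appear in the back tier. -->
<near-scheme>
  <scheme-name>example-near</scheme-name>
  <front-scheme>
    <local-scheme>
      <high-units>10000</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
    </distributed-scheme>
  </back-scheme>
</near-scheme>
```

This keeps the front tier on-heap (as the schema requires) while the back tier can use whatever storage the distributed scheme defines.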
-
I just noticed a heap memory notification in the alert.log file
Hi,
I notice that whenever a user executes a query, this heap memory notification appears in the alert.log file:
Memory Notification: Library Cache Object loaded into SGA
Heap size 2294K exceeds notification threshold (2048K)
What does it actually mean? Please tell me.

Duplicate post. It seems you don't want to read the replies to your previous, similar question.
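For context: this notification is informational, logged when a library-cache object larger than a warning threshold (2048K by default here) is loaded into the SGA. A hedged sketch of raising the threshold, assuming the hidden parameter _kgl_large_heap_warning_threshold exists on your Oracle release (it is an underscore parameter, so confirm with Oracle Support before changing it in production):

```sql
-- Sketch only: raises the library-cache large-heap warning threshold
-- to 8 MB (value is in bytes). Requires SYSDBA and an instance restart.
ALTER SYSTEM SET "_kgl_large_heap_warning_threshold" = 8388608 SCOPE = SPFILE;
```

After a restart, objects under the new threshold no longer trigger the alert.log notification; the underlying loads still happen, so this only silences the message.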
Getting error in alert log file
Jaffar -
"Fatal error: Allowed memory size of 33554432 bytes exhausted"
I get this error message whenever I try to access very large threads at my favorite debate site using Firefox versions 4 or 5 on my desktop or laptop computers. I do not get the error using IE8.
The only fixes I have been able to find are for servers that have WordPress and php.ini files.

It works, thanks.
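For reference, 33554432 bytes is 32M, PHP's per-script memory limit that the server exhausted; the fix lives on the server side, not in the browser. A minimal sketch of the php.ini change (the chosen value of 128M is an illustrative example, not a recommendation from the thread):

```ini
; Sketch only: raise PHP's per-script memory limit in php.ini.
; 33554432 bytes = 32M, the limit that was exhausted in the error above.
memory_limit = 128M
```

Only the site's administrator can apply this; a browser-side workaround is limited to requesting fewer posts per page, which is why the error appeared with some browsers' page sizes and not others.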