Query running out of temp
Hi,
on my 9.2.0.8 database I've got a query like this:
select DISTINCT T.ID_KONTR as ID_KONTR,
T.PESEL as PESEL,
T.NAZWISKO as NAZWISKO,
T.IMIE as IMIE,
T.DATA_REJ as DATA_REJ
from T,
(SELECT DISTINCT MAX(T.ID_KONTR) as ID_KONTR from T ,
(select DISTINCT T.ID_KONTR as ID_KONTR,T.PESEL as PESEL,
T.NAZWISKO as NAZWISKO,
T.IMIE as IMIE,
T.DATA_REJ as DATA_REJ
from T
where
(T.OK =1)
and (T.PESEL is not null)
and T.data_rej between to_date('2011-10-21' , 'YYYY-MM-DD') and to_date('2011-10-25','YYYY-MM-DD')
) aaa
where
(T.OK =1)
and (T.PESEL is not null)
and T.pesel = aaa.pesel
group by T.PESEL
having count(*) >1
) bbb
where T.ID_KONTR=bbb.ID_KONTR order by T.DATA_REJ desc;
with this plan:
SELECT STATEMENT RULE
15 SORT UNIQUE
14 NESTED LOOPS
11 VIEW INSTALL.
10 SORT UNIQUE
9 FILTER
8 SORT GROUP BY
7 VIEW SYS.
6 SORT UNIQUE
5 TABLE ACCESS BY INDEX ROWID T
4 NESTED LOOPS
2 TABLE ACCESS BY INDEX ROWID T
1 INDEX RANGE SCAN NON-UNIQUE TK_DATA_REJ
3 INDEX RANGE SCAN NON-UNIQUE TK_PESEL
13 TABLE ACCESS BY INDEX ROWID T
12 INDEX UNIQUE SCAN UNIQUE TK_PK
Unfortunately this query fills up the temp space (because of huge sorts). Any ideas how the query can be rewritten to decrease temp usage?
Regards
GregG
Hello
Just to follow on from what Don is saying, I think you can start to rewrite the query like so:
SELECT
id_kontr,
pesel,
nazwisko,
imie,
data_rej
FROM
( SELECT
t.id_kontr AS id_kontr,
t.pesel AS pesel,
t.nazwisko AS nazwisko,
t.imie AS imie,
t.data_rej AS data_rej,
COUNT(CASE
WHEN t.data_rej BETWEEN to_date('2011-10-21', 'YYYY-MM-DD')
AND to_date('2011-10-25', 'YYYY-MM-DD')
THEN 1
END
) OVER (PARTITION BY t.pesel) p_count,
MAX(t.id_kontr) OVER (PARTITION BY t.pesel) max_id_kontr
FROM
t
WHERE
t.ok = 1
AND
t.pesel IS NOT NULL
)
WHERE
p_count > 1
AND
id_kontr = max_id_kontr
ORDER BY
data_rej DESC
Without understanding the need for the DISTINCT, I can't say whether it does exactly what you need, but it should be a start, and it also means you're only accessing "t" once.
HTH
David
Similar Messages
-
ORA-30928: "Connect by filtering phase runs out of temp tablespace"
I have created a query that is used to display data on a label. This particular query is then stored in a program that we use. The query ran just fine until this morning, when it started returning the error ORA-30928: "Connect by filtering phase runs out of temp tablespace". I have Googled and found out that I can do any of the following:
Include a NO FILTERING hint - but it did not work properly
Increase the temp tablespace - not applicable to me, since this runs on a production server that I don't have access to.
Are there other ways to fix this? By the way, below is the query that I use.
SELECT * FROM(
SELECT
gn.wipdatavalue
, gn.containername
, gn.l
, gn.q
, gn.d
, gn.l2
, gn.q2
, gn.d2
, gn.l3
, gn.q3
, gn.d3
, gn.old
, gn.qtyperbox
, gn.productname
, gn.slot
, gn.dt
, gn.ws_green
, gn.ws_pnr
, gn.ws_pcn
, intn.mkt_number dsn
, gn.low_number
, gn.high_number
, gn.msl
, gn.baketime
, gn.exptime
, NVL(gn.q, 0) + NVL(gn.q2, 0) + NVL(gn.q3, 0) AS qtybox
, row_number () over (partition by slot order by low_number) as n
FROM
(
SELECT
tr.*
, TO_NUMBER(SUBSTR(wipdatavalue, 1, INSTR (wipdatavalue || '-', '-') - 1)) AS low_number
, TO_NUMBER(SUBSTR(wipdatavalue, 1 + INSTR ( wipdatavalue, '-'))) AS high_number
, pm.msllevel MSL
, pm.baketime BAKETIME
, pm.expstime EXPTIME
FROM trprinting tr
JOIN CONTAINER c ON tr.containername = c.containername
JOIN a_lotattributes ala ON c.containerid = ala.containerid
JOIN product p ON c.productid = p.productid
LEFT JOIN otherdb.pkg_main pm ON trim(p.brandname) = trim(pm.pcode)
WHERE (c.containername = :lot OR tr.SLOT= :lot)
)gn
LEFT JOIN otherdb.intnr intn ON TRIM(gn.productname) = TRIM(intn.part_number)
connect by level <= HIGH_NUMBER + 1 - LOW_NUMBER and LOW_NUMBER = prior LOW_NUMBER and prior SYS_GUID() is not null
ORDER BY low_number, n
)
WHERE n LIKE :n AND wipdatavalue LIKE :wip AND ROWNUM <= 300 AND wipdatavalue NOT LIKE 0
I am using Oracle 11g too.
Thanks for the help everyone.
Hi,
The documentation implies that the START WITH and CONNECT BY clauses should come before the GROUP BY clause. I've never known it to make a difference before, but you might try putting the GROUP BY clause last.
If you're GROUPing by LEVEL, what's the point of SELECTing MAX (LEVEL)? MAX (LEVEL) will always be the same as LEVEL.
What are you trying to do?
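On the hint you tried: the "NO FILTERING" hint is spelled NO_CONNECT_BY_FILTERING, and it must go in the query block that contains the CONNECT BY. A minimal sketch of the placement (the table and columns here are hypothetical, not from your query):

```sql
-- Sketch: disable the connect-by filtering phase for one statement.
-- "emp_h" and its columns are hypothetical placeholders.
SELECT /*+ NO_CONNECT_BY_FILTERING */
       employee_id, manager_id, LEVEL
FROM   emp_h
START WITH manager_id IS NULL
CONNECT BY PRIOR employee_id = manager_id;
```

If the hint was placed in an outer query block rather than the one with the CONNECT BY, the optimizer would silently ignore it, which could be why it "did not work properly".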
Post some sample data (CREATE TABLE and INSERT statements) and the results you want from that data, and somebody will help you get them. -
Query erroring out with lack of temp space?
I have 2 databases. One was cloned from the other on to another server.
On one server the query takes 5 seconds to run; on the other server the query errors out with lack of disk space.
The init files are the same. The Oracle memory is the same. The temp tablespace size is the same.
I'm stuck!
What should I look for to find out what is causing the error?
On both databases the version is 9.2.0.6.
This is the view that worked in database1 and errored in database2...
CREATE OR REPLACE VIEW SAP_STAGE.MTC_STG_MEASURE_POINT_V
(
EQUIP_NO,
MEASUR_PNTPOS,
CHARACT_NAME,
UNIT,
MEASUR_PNT_DESC,
VALUATION_CODE,
TEXT_FIELD
)
AS
SELECT DISTINCT xref.equnr AS equip_no, rpmp.*
FROM sap_stage.measuring_point mp JOIN xref_sap_equi xref
ON TRIM (mp.equip_no) = TRIM (xref.groes)
JOIN stg_mtc_msf600 msf600
ON TRIM (mp.equip_no) = TRIM (msf600.equip_no)
AND TRIM (msf600.dstrct_code) in ('347','315')
JOIN rec_prodstat_code_measur_pts rpmp ON 1 = 1
If I qualify all the tables with SAP_STAGE, the query runs in 5 seconds in database2.
Why does the query run in database1 without the schema names? -
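Since qualifying the tables changes the behavior, one thing to check is how the unqualified names resolve in each database: a private/public synonym, or an object in the connected user's own schema, can make database2 parse the view against different objects than database1. A hedged sketch of how to compare (schema and table names taken from the post above):

```sql
-- Sketch: see what an unqualified name resolves to in each database.
SELECT owner, object_type
FROM   all_objects
WHERE  object_name = 'MEASURING_POINT';

-- Parse unqualified names as if logged in as SAP_STAGE:
ALTER SESSION SET CURRENT_SCHEMA = SAP_STAGE;
```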
Query runs fine in 9i but results to ORA-01652 unable to extend temp in 10g
Hi,
We are having issues running a SQL query in 10g. In 9i it runs fine with no problems, but when run in 10g it takes forever and the temp tablespace grows very large, up to 60GB, until we get an ORA-01652 error due to lack of disk space. This does not occur in 9i, where the query runs in only 20 mins and does not take up that much temp. The 9i version is 9.2.0.8; 10g is 10.2.0.3. Here's the SQL query:
SELECT
J2.EMPLID,
TO_CHAR(J2.EFFDT,'YYYY-MM-DD'),
J2.EFFSEQ,
J2."ACTION",
J2.ACTION_REASON,
TO_CHAR(J2.GRADE_ENTRY_DT,'YYYY-MM-DD'),
J2.COMPRATE,
J2.CHANGE_AMT,
J2.COMP_FREQUENCY,
J2.STD_HOURS,
J2.JOBCODE,
J2.GRADE,
J2.PAYGROUP,
PN2.NATIONAL_ID,
TO_CHAR(PC.CHECK_DT,'YYYY-MM-DD'),
SUM(PO.OTH_EARNS),
To_CHAR(SUM(PO.OTH_EARNS)),
PO.ERNCD,
'3',
TO_CHAR(PC.PAY_END_DT,'YYYY-MM-DD'),
PC.PAYCHECK_NBR
FROM PS_JOB J2,
PS_PERS_NID PN2,
PS_PAY_OTH_EARNS PO,
PS_PAY_CHECK PC
WHERE J2.EMPL_RCD = 0
AND PN2.EMPLID = J2.EMPLID
AND PN2.COUNTRY = 'USA'
AND PN2.NATIONAL_ID_TYPE = 'PR'
AND J2.COMPANY <> '900'
AND J2.EFFDT <= SYSDATE
AND PC.EMPLID = J2.EMPLID
AND PC.COMPANY = PO.COMPANY
AND PC.PAYGROUP = PO.PAYGROUP
AND PC.PAY_END_DT = PO.PAY_END_DT
AND PC.OFF_CYCLE = PO.OFF_CYCLE
AND PC.PAGE_NUM = PO.PAGE_NUM
AND PC.LINE_NUM = PO.LINE_NUM
AND PC.SEPCHK = PO.SEPCHK
AND EXISTS (SELECT ERNCD
FROM PS_P1_CMP_ERNCD P1_CMP
WHERE P1_CMP.ERNCD = PO.ERNCD AND EFF_STATUS = 'A')
GROUP BY J2.EMPLID,
J2.EFFDT,
J2.EFFSEQ,
J2.ACTION,
J2.ACTION_REASON,
J2.GRADE_ENTRY_DT,
J2.COMPRATE,
J2.CHANGE_AMT,
J2.COMP_FREQUENCY,
J2.STD_HOURS,
J2.JOBCODE,
J2.GRADE,
J2.PAYGROUP,
PN2.NATIONAL_ID,
PC.CHECK_DT,
PO.ERNCD,
'3',
PC.PAY_END_DT,
PC.PAYCHECK_NBR -
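When a query that behaved under 9.2.0.8 blows out temp under 10.2, a cheap first experiment is to ask the 10g optimizer for its 9i behavior; if temp usage drops back, it is a plan regression rather than a data problem, and the two plans can then be compared. A hedged sketch (session-scoped, nothing permanent):

```sql
-- Sketch: make the 10.2 optimizer behave like 9.2.0.8 for this
-- session only, then re-run the query and compare plans/temp usage.
ALTER SESSION SET optimizer_features_enable = '9.2.0.8';

-- The same idea per statement, via a hint:
-- SELECT /*+ OPTIMIZER_FEATURES_ENABLE('9.2.0.8') */ ...
```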
Running out of space/ Temp files???
I'm making my first movie with iMovie 06. In the process of importing clips from my camcorder I got a warning message. The message said that I was running critically low on space and I should delete some files in order to continue. At the same time I noticed that it said I have 27 GB available in the lower right corner of the iMovie window??
What's going on here? I only have 136 MB in the trash.
Are there temp files being created somewhere?
Where?
How can I get rid of them in order to give iMovie the "space" it needs to do its job??
Im running out of space, how do I clear old content temp files
I'm running out of space, how do I clear old content, temp files, old apps?
I have done the browser cache and cleared history. I am clearing items off the MacBook Pro, but are there other areas I should clean?
Thanks
I'm running out of space, how do I clear old content, temp files, old apps?
I'm not sure exactly what you're asking. Temp files get automatically deleted when you restart. Old apps you can either uninstall (if it was originally installed with an installer, there should be an uninstaller somewhere) or drag it to the trash. What's "old content"?
i have done the browser cache and cleared history.
This is not going to give you back much space. -
Hi Friends,
Now here I have a question: from ST22 we can get the list of query timeouts in production, and from the error log (via the FILL_SP function) we can get the ABAP program name, from which we can get the query's details like name, cube name, etc.
But for some of the query timeouts, I get the ABAP program name but cannot get the query info from it. I tried SE38, but even there I didn't find all the query info for a few timeouts. Is there any way I can get info for those query timeouts?
Please reply, as I am new to these forums; last time I asked a question, no one replied!
Waiting for help!
Hi,
If my question is not clear, then let me just ask in general:
from ST22, how can I get the name of a query which has timed out?
Can you please elaborate the process?
Thank you in advance; I will give the points once I get the solution. -
What are the ways to make Query run fast?
Hi Experts,
When a query runs slow, we generally go for creating an aggregate. My doubt is: what other things can be done to make a query run faster before creating an aggregate? What is the rule of thumb for creating an aggregate?
Regards,
ShreeemHi Shreem,
If you keep Query simple not complicate it with runtime calculations , it would be smooth. However as per business requirements we will have to go for it anyways mostly.
regarding aggregates:
Please do not use the standard proposal , it will give you hundreds based on std. rules , which consumes lots of space and adds up to load times. If you have users already using the Query and you are planning to tune it then go for the statistics tables:
1.RSDDSTAT_OLAP find the query with long runtimes get the Stepuid
2. RSDDSTAT_DM
3. RSDDSTATAGGRDEF - use the stepuid above to see which aggregate is necessary for which cube.
Another way to check ; check the users as in 1 to find the highest runtime users and find the last used bookmarks by user thru RSZWBOOKMARK for this query and check if the time matches and create the aggregates as in 3 above.
You can also Use Transaction RSRT > execute & debug (display stats ) - to create generic aggregates to support navigations for New queries and later refine as above.
Hope it helps.
Thanks
Ram -
Running out of disk space in c drive
Hi,
I'm new to OBIEE. I installed OBIEE 10.1.3.4 on my computer and I'm practicing on Sample Sales. Every day I see a "running out of disk space" message; I purge cache entries and delete sessions in the UI daily, and also delete temp files in Oracle BI Data, but I'm still facing this problem.
Could anyone give me more ideas?
Your valuable suggestions are appreciated.
iTunes prefs - Advanced.
Set the *iTunes media folder location* to the external drive.
Make sure *Keep iTunes media folder organized* and *Copy files to iTunes media folder ...* are both checked.
Then go to iTunes menu File > Library > Organize library - Consolidate.
This will copy everything in iTunes to the new location.
After it is complete, quit iTunes then delete \Music\iTunes\iTunes media\ folder. -
Oracle 9i running out of memory
Folks !
I have a simple 3-table schema with a few thousand entries each. After dedicating gigabytes of hard disk space and 50% of my 1+ GB of memory, I do a few simple Oracle Text "contains" searches (see below) on these tables, and Oracle seems to grow by some 25 MB after each query (which typically returns less than a dozen rows) till it eventually runs out of memory and I have to reboot the system (Sun Solaris).
This is on Solaris 9/SPARC with Oracle 9.2. My query is a simple right outer join. I think the memory growth is related to Oracle Text index/caching, since memory utilization seems pretty stable with simple LIKE '%xx%' queries.
"top" shows a dozen or so processes, each with about 400MB RSS/SIZE. It has been a while since I did Oracle DBA work, but I am doing nothing special here. The database has all the default settings that you get when you create an Oracle database.
I have played with SGA sizes, and no matter how large or small the SGA/PGA, Oracle runs out of memory and crashes the system. Pretty stupid for an enterprise database to die like that.
Any clue on how to arrest the fatal growth of memory for Oracle 9i r2?
thanks a lot.
-Sanjay
PS: The query is:
SELECT substr(sdn_name,1,32) as name, substr(alt_name,1,32) as alt_name, sdn.ent_num, alt_num, score(1), score(2)
FROM sdn, alt
where sdn.ent_num = alt.ent_num(+)
and (contains(sdn_name,'$BIN, $LADEN',1) > 0 or
contains(alt_name,'$BIN, $LADEN',2) > 0)
order by ent_num, score(1), score(2) desc;
There are following two indexes on the two tables:
create index sdn_name on sdn(sdn_name) indextype is ctxsys.context;
create index alt_name on alt(alt_name) indextype is ctxsys.context;
I am already using MTS.
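To see whether the growth is really in the Oracle shadow processes (rather than the SGA), the session-level memory statistics can be watched from a second session while the CONTAINS queries repeat. A hedged sketch against the standard 9i dictionary views:

```sql
-- Sketch: per-session PGA/UGA memory, largest consumers first.
SELECT s.sid,
       n.name,
       st.value
FROM   v$sesstat  st,
       v$statname n,
       v$session  s
WHERE  n.statistic# = st.statistic#
AND    s.sid        = st.sid
AND    n.name IN ('session pga memory', 'session uga memory')
ORDER  BY st.value DESC;
```

A session whose 'session pga memory' keeps climbing after each query is the one leaking.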
Attached is the init.ora file below.
Maybe I should repost this with the subject "memory leak in Oracle" to catch developer attention. I posted this a few weeks back in the Oracle Text group and got no response there either.
Thanks for your help.
-Sanjay
# Copyright (c) 1991, 2001, 2002 by Oracle Corporation
# Cache and I/O
db_block_size=8192
db_cache_size=33554432
db_file_multiblock_read_count=16
# Cursors and Library Cache
open_cursors=300
# Database Identification
db_domain=""
db_name=ofac
# Diagnostics and Statistics
background_dump_dest=/space/oracle/admin/ofac/bdump
core_dump_dest=/space/oracle/admin/ofac/cdump
timed_statistics=TRUE
user_dump_dest=/space/oracle/admin/ofac/udump
# File Configuration
control_files=("/space/oracle/oradata/ofac/control01.ctl", "/space/oracle/oradata/ofac/control02.ctl", "/space/oracle/oradata/ofac/control03.ctl")
# Instance Identification
instance_name=ofac
# Job Queues
job_queue_processes=10
# MTS
dispatchers="(PROTOCOL=TCP) (SERVICE=ofacXDB)"
# Miscellaneous
aq_tm_processes=1
compatible=9.2.0.0.0
# Optimizer
hash_join_enabled=TRUE
query_rewrite_enabled=FALSE
star_transformation_enabled=FALSE
# Pools
java_pool_size=117440512
large_pool_size=16777216
shared_pool_size=117440512
# Processes and Sessions
processes=150
# Redo Log and Recovery
fast_start_mttr_target=300
# Security and Auditing
remote_login_passwordfile=EXCLUSIVE
# Sort, Hash Joins, Bitmap Indexes
pga_aggregate_target=25165824
sort_area_size=524288
# System Managed Undo and Rollback Segments
undo_management=AUTO
undo_retention=10800
undo_tablespace=UNDOTBS1 -
Restricting a characteristic & query time out problem
Hi. We have the follwoing problem:
Before our BI upgrade, we have had a number of users querying different info areas, that are now having difficulty restricting on a characteristic in a query. This was performed with no problems before the upgrade.
Many times, the list of records for the particular characteristic is quite small. Either it takes 20-30 minutes for the list to appear, or the query times out (after 6000 seconds).
Any ideas?
Any time you have an existing query that has been running quickly in production and it suddenly starts to run much longer, you should be suspicious of some change to DB statistics and/or indexes.
So the first thing to do is work with your DBA to make sure the DB statistics are current for the tables involved in the query, then get an explain plan for the query, which will show you how the DB is trying to execute it, what indexes it uses, etc. Perhaps stats are not being refreshed after the upgrade, or something happened to an index. -
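If the underlying database is Oracle, the explain-plan step is mechanical; a minimal sketch of the usual sequence (the statement shown is a hypothetical stand-in for the real query):

```sql
-- Sketch: capture and display the execution plan for one statement.
-- "my_table" is a placeholder for the slow query.
EXPLAIN PLAN FOR
SELECT * FROM my_table WHERE id = 42;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Comparing the plan (and the statistics date in DBA_TABLES.LAST_ANALYZED) before and after the upgrade usually shows where the extra work comes from.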
Hello,
I have trouble regarding query performance. Many queries time out ("connection time out 500"). They seem a bit too heavy (for example, when there is a need to drill down to material level from material group).
What can be done in order to decrease the query run-times?
Best regards,
Fredrik
Hi Fredrik,
check if this helps:
Business Intelligence Performance Tuning [original link is broken]
OSS notes
557870 'FAQ BW Query Performance'
and 567746 'Composite note BW 3.x performance Query and Web'.
Prakash's weblog
/people/prakash.darji/blog/2006/01/27/query-creation-checklist
/people/prakash.darji/blog/2006/01/26/query-optimization
service.sap.com/bi -> performance -
Running out of memory building csv file
I'm attempting to write a script that does a query on my database. It will generally be working with about 10,000 - 15,000 records. It then checks to see if a certain file exists. If it does, it will add the record to an array. When it's done looping over all the records, it takes the array that was created and outputs a CSV file (usually with about 5,000 - 10,000 lines).
But... before that ever happens, it runs out of memory. What can I do to make it not run out of memory?
quote:
Originally posted by:
nozavroni
I'm attempting to write a script that does a query on my database. It will generally be working with about 10,000 - 15,000 records. It then checks to see if a certain file exists.
Sounds pretty inefficient to me. Is there no way you can modify the query so that it only selects the records for which the file exists?
Running out of memory with Tomcat !!!!!
Hello gurus and good folk:
How can I ensure that a JSP page that processes a ResultSet doesn't run out of memory? I have set the JVM's -Xmx flag to 1024mb and it still runs out of memory! The size of the data being queried is only 30 MB. One would think the JDBC driver would be optimized for large ResultSets. Any pointers will be very helpful.
Many thanks
Murthy
Hi
As far as I believe, 30 MB of data is pretty big for an online app. If you have too many rows in your ResultSet, you could (or should) consider implementing paging and fetching x records at a time. Or you could just have a max limit for the records to be fetched (typically useful for 'search and list' type apps) using Statement.setMaxRows(). This should ensure that out-of-memory errors do not happen.
If your data chunk per row is large, consider displaying only a summary in the results and fetching the 'BIG' data column only when required (e.g. fetch the column value for a particular row only when that row is clicked).
Hope this helps!
Running out of memory despite having set je.maxMemory to a moderate value
I have set je.maxMemory to 20MB (je.maxMemory=20000000) and allowed a max heap size of 512MB (-Xms256M -Xmx512M).
After two hours of running my web service, I'm running out of memory. After having profiled my service (using Yourkit Java Profiler 1.10.6), I can see the following:
Name Objects ShallowSize RetainedSize
byte[] 16711 124124880 124124880
com.sleepycat.je.tree.BIN 181 24616 116254200
com.sleepycat.je.tree.Node[] 187 98736 115743184
com.sleepycat.je.tree.LN 7092 226944 115253600
java.util.concurrent.ConcurrentHashMap$HashEntry 554 17728 78328944
java.util.concurrent.ConcurrentHashMap$HashEntry[] 1053 34728 77489632
java.util.concurrent.ConcurrentHashMap 117 5616 71812072
java.util.concurrent.ConcurrentHashMap$Segment[] 118 10304 71807912
java.util.concurrent.ConcurrentHashMap$Segment 1052 42080 71798808
com.sleepycat.je.tree.IN 6 672 45592352
java.lang.String 135888 4348416 14152664
The memory profiler claims further that com.sleepycat.je.tree.BIN is responsible for 71% of all heap memory.
In any case, com.sleepycat.je.tree.BIN claims ~ 116MB of heap memory, which, by any goodwill, exceeds the limit of 20MB.
How can this be?
How is JE ensuring that the limit is not exceeded? Is there a timer (thread) running which once in a while checks the memory used and then cleans up, or is memory usage checked when creating a com.sleepycat.je.tree.BIN object?
My environment:
BDB JE 4.0.92 - used as cache loader within Jboss Cache (3.2.7.GA), running on a JBOSS Application Server, Java 1.6 (IBM) on Linux. Further details are listed in the system properties below (except some deleted security items).
System properties:
(java.lang.String, int, java.lang.StringBuffer, int)=contains
DestroyJavaVM helper thread=(java.lang.String, java.security.KeyStore$Entry, java.security.KeyStore$ProtectionParameter)
base.collection.name=CD2JAVA
bind.address=10.12.25.130
catalina.base=/work/ocrgws_test/server0
catalina.ext.dirs=/work/ocrgws_test/server0/lib
catalina.home=/work/ocrgws_test/server0
catalina.useNaming=false
com.arjuna.ats.arjuna.objectstore.objectStoreDir=/work/ocrgws_test/server0/data/tx-object-store
com.arjuna.ats.jta.lastResourceOptimisationInterface=org.jboss.tm.LastResource
com.arjuna.ats.tsmx.agentimpl=com.arjuna.ats.internal.jbossatx.agent.LocalJBossAgentImpl
com.arjuna.common.util.logger=log4j_releveler
com.arjuna.common.util.logging.DebugLevel=0x00000000
com.arjuna.common.util.logging.FacilityLevel=0xffffffff
com.arjuna.common.util.logging.VisibilityLevel=0xffffffff
com.ibm.cpu.endian=little
com.ibm.jcl.checkClassPath=
com.ibm.oti.configuration=scar
com.ibm.oti.jcl.build=20100326_1904
com.ibm.oti.shared.enabled=false
com.ibm.oti.vm.bootstrap.library.path=/opt/ibm/java-x86_64-60/jre/lib/amd64/compressedrefs:/opt/ibm/java-x86_64-60/jre/lib/amd64
com.ibm.oti.vm.library.version=24
com.ibm.util.extralibs.properties=
com.ibm.vm.bitmode=64
common.loader=${catalina.home}/lib,${catalina.home}/lib/*.jar
epo.jboss.deploymentscanner.extradirs=/work/ocrgws_test/app/
external.cert.ldap.* = ***************
file.encoding=UTF-8
file.separator=/
flipflop.activation.time=16:30
hibernate.bytecode.provider=javassist
ibm.signalhandling.rs=false
ibm.signalhandling.sigchain=true
ibm.signalhandling.sigint=true
ibm.system.encoding=UTF-8
jacorb.config.log.verbosity=0
java.assistive=ON
java.awt.fonts=
java.awt.graphicsenv=sun.awt.X11GraphicsEnvironment
java.awt.printerjob=sun.print.PSPrinterJob
java.class.path=/work/ocrgws_test/config:/usr/local/jboss-eap-4.3-cp07/bin/run.jar:/opt/ibm/java-x86_64-60/lib/tools.jar
java.class.version=50.0
java.compiler=j9jit24
java.endorsed.dirs=/usr/local/jboss-eap-4.3-cp07/lib/endorsed
java.ext.dirs=/opt/ibm/java-x86_64-60/jre/lib/ext
java.fullversion=JRE 1.6.0 IBM J9 2.4 Linux amd64-64 jvmxa6460sr8-20100401_55940 (JIT enabled, AOT enabled)
J9VM - 20100401_055940
JIT - r9_20100401_15339
GC - 20100308_AA_CMPRSS
java.home=/opt/ibm/java-x86_64-60/jre
java.io.tmpdir=/tmp
java.jcl.version=20100408_01
java.library.path=/opt/ibm/java-x86_64-60/jre/lib/amd64/compressedrefs:/opt/ibm/java-x86_64-60/jre/lib/amd64:/usr/lib64/mpi/gcc/openmpi/lib64:/usr/lib
java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
java.net.preferIPv4Stack=true
java.protocol.handler.pkgs=org.jboss.net.protocol
java.rmi.server.codebase=http://10.12.25.130:8083/
java.rmi.server.hostname=10.12.25.130
java.rmi.server.randomIDs=true
java.runtime.name=Java(TM) SE Runtime Environment
java.runtime.version=pxa6460sr8-20100409_01 (SR8)
java.security.krb5.conf=/usr/local/jboss/etc/krb5.conf
java.specification.name=Java Platform API Specification
java.specification.vendor=Sun Microsystems Inc.
java.specification.version=1.6
java.util.prefs.PreferencesFactory=java.util.prefs.FileSystemPreferencesFactory
java.vendor.url=http://www.ibm.com/
java.vendor=IBM Corporation
java.version=1.6.0
java.vm.info=JRE 1.6.0 IBM J9 2.4 Linux amd64-64 jvmxa6460sr8-20100401_55940 (JIT enabled, AOT enabled)
J9VM - 20100401_055940
JIT - r9_20100401_15339
GC - 20100308_AA_CMPRSS
java.vm.name=IBM J9 VM
java.vm.specification.name=Java Virtual Machine Specification
java.vm.specification.vendor=Sun Microsystems Inc.
java.vm.specification.version=1.0
java.vm.vendor=IBM Corporation
java.vm.version=2.4
javax.management.builder.initial=org.jboss.mx.server.MBeanServerBuilderImpl
javax.net.ssl.trustStore=/usr/local/jboss/etc/ldap.truststore
javax.net.ssl.trustStorePassword=password
jboss.bind.address=10.12.25.130
jboss.home.dir=/usr/local/jboss-eap-4.3-cp07
jboss.home.url=file:/usr/local/jboss-eap-4.3-cp07/
jboss.identity=30df88bc0a52e350x6e2ff59cx136c17794d5x-8000757
jboss.lib.url=file:/usr/local/jboss-eap-4.3-cp07/lib/
jboss.messaging.controlchanneludpaddress=239.1.200.4
jboss.messaging.datachanneludpaddress=239.1.200.4
jboss.partition.name=ocrgws_test_Partition
jboss.partition.udpGroup=239.1.200.4
jboss.remoting.domain=JBOSS
jboss.remoting.instanceid=30df88bc0a52e350x6e2ff59cx136c17794d5x-8000757
jboss.remoting.jmxid=luu002t.internal.epo.org_1334685694459
jboss.remoting.version=22
jboss.security.disable.secdomain.option=true
jboss.server.config.url=file:/work/ocrgws_test/server0/conf/
jboss.server.data.dir=/work/ocrgws_test/server0/data
jboss.server.home.dir=/work/ocrgws_test/server0
jboss.server.home.url=file:/work/ocrgws_test/server0/
jboss.server.lib.url=file:/work/ocrgws_test/server0/lib/
jboss.server.log.dir=/work/ocrgws_test/server0/log
jboss.server.name=luu002t_ocrgws_test_server0
jboss.server.temp.dir=/work/ocrgws_test/server0/tmp
jboss.tomcat.udpGroup=239.1.200.4
jbossmx.loader.repository.class=org.jboss.mx.loading.UnifiedLoaderRepository3
je.maxMemory=20000000
jgroups.bind_addr=10.12.25.130
jmx.console.bindcredential=3bpwdmpc
jmx.console.binddn=cn=jbossauth-ro,ou=accounts,ou=auth,dc=epo,dc=org
jmx.console.rolesctxdn=ou=roles-test,ou=jboss,ou=applications,ou=internal,dc=epo,dc=org
jndi.datasource.name=java:MainframeDS
jnp.disableDiscovery=true
jxe.current.romimage.version=15
jxe.lowest.romimage.version=15
line.separator=
mainframelogin.password=720652a1e842fc7f
mainframelogin.username=test_t
org.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JLogger
org.apache.tomcat.util.http.ServerCookie.VERSION_SWITCH=true
org.epo.jboss.application.home=/work/ocrgws_test
org.hyperic.sigar.path=/work/ocrgws_test/server0/./deploy/hyperic-hq.war/native-lib
org.jboss.ORBSingletonDelegate=org.jacorb.orb.ORBSingleton
org.omg.CORBA.ORBClass=org.jacorb.orb.ORB
org.omg.CORBA.ORBSingletonClass=org.jboss.system.ORBSingleton
org.w3c.dom.DOMImplementationSourceList=org.apache.xerces.dom.DOMXSImplementationSourceImpl
os.arch=amd64
os.name=Linux
os.version=2.6.32.46-0.3-xen
package.access=sun.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper.,sun.beans.
package.definition=sun.,java.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper.
path.separator=:
poll.interval.milliseconds=300000
program.name=run.sh
server.loader=
shared.loader=
spnego.config=/usr/local/jboss/etc/spnego.properties
sun.arch.data.model=64
sun.boot.class.path=/usr/local/jboss-eap-4.3-cp07/lib/endorsed/xercesImpl.jar:/usr/local/jboss-eap-4.3-cp07/lib/endorsed/xalan.jar:/usr/local/jboss-eap-4.3-cp07/lib/endorsed/serializer.jar:/opt/ibm/java-x86_64-60/jre/lib/amd64/compressedrefs/jclSC160/vm.jar:/opt/ibm/java-x86_64-60/jre/lib/annotation.jar:/opt/ibm/java-x86_64-60/jre/lib/beans.jar:/opt/ibm/java-x86_64-60/jre/lib/java.util.jar:/opt/ibm/java-x86_64-60/jre/lib/jndi.jar:/opt/ibm/java-x86_64-60/jre/lib/logging.jar:/opt/ibm/java-x86_64-60/jre/lib/security.jar:/opt/ibm/java-x86_64-60/jre/lib/sql.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmorb.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmorbapi.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmcfw.jar:/opt/ibm/java-x86_64-60/jre/lib/rt.jar:/opt/ibm/java-x86_64-60/jre/lib/charsets.jar:/opt/ibm/java-x86_64-60/jre/lib/resources.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmpkcs.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmcertpathfw.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmjgssfw.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmjssefw.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmsaslfw.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmjcefw.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmjgssprovider.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmjsseprovider2.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmcertpathprovider.jar:/opt/ibm/java-x86_64-60/jre/lib/ibmxmlcrypto.jar:/opt/ibm/java-x86_64-60/jre/lib/management-agent.jar:/opt/ibm/java-x86_64-60/jre/lib/xml.jar:/opt/ibm/java-x86_64-60/jre/lib/jlm.jar:/opt/ibm/java-x86_64-60/jre/lib/javascript.jar:/tmp/yjp201202191932.jar
sun.boot.library.path=/opt/ibm/java-x86_64-60/jre/lib/amd64/compressedrefs:/opt/ibm/java-x86_64-60/jre/lib/amd64
sun.io.unicode.encoding=UnicodeLittle
sun.java.command=org.jboss.Main -b 10.12.25.130 -Djboss.server.home.dir=/work/ocrgws_test/server0 -Djboss.server.home.url=file:/work/ocrgws_test/server0 -Djboss.server.name=luu002t_ocrgws_test_server0 -Djboss.partition.name=ocrgws_test_Partition -Depo.jboss.deploymentscanner.extradirs=/work/ocrgws_test/app/ -Dorg.epo.jboss.application.home=/work/ocrgws_test
sun.java.launcher.pid=17781
sun.java.launcher=SUN_STANDARD
sun.java2d.fontpath=
sun.jnu.encoding=UTF-8
sun.rmi.dgc.client.gcInterval=3685000
sun.rmi.dgc.server.gcInterval=3685000
system=java.io.ObjectStreamField
tomcat.util.buf.StringCache.byte.enabled=true
user.country=US
user.dir=/work/ocrgws_test
user.home=*****************
user.language=en
user.name=***********
user.timezone=Europe/Berlin
user.variant=
> The memory profiler claims further that com.sleepycat.je.tree.BIN is responsible for 71% of all heap memory. In any case, com.sleepycat.je.tree.BIN claims ~ 116MB of heap memory, which, by any goodwill, exceeds the limit of 20MB.
I'm not sure whether the profiler is reporting live objects only (referenced) or all objects (including those not yet reclaimed). If the latter, it isn't telling you how much memory is actually referenced by the JE cache.
Please look at the JE stats to see what the cache usage is, from JE's point of view.
If you believe there is a bug in JE cache management, you'll need to write a small standalone test to demonstrate it and submit it to us, since we don't know of any such bug. Also note that we'll have difficulty supporting JE 4.0 (without a support contract anyway). Please use JE 5.0, or at least 4.1.
Eviction occurs as objects are allocated, as well as in background threads. Eviction in background threads and concurrent eviction were greatly improved in JE 4.1.
--mark