How to avoid the JVM caching objects?
I'm writing a class that gets loaded into another class. Every time I modify my class, recompile it, kill the loaded object, and reload it, the old version is loaded again rather than the newly compiled version, so I have to shut the whole thing down and start over. Please help. Thanks.
Well, you are right along the lines of what I am working on. I am building a plugin engine that will offer this kind of ability along with many other features. It isn't exactly what you are trying to do, per se, but the class loading part (loading, reloading, and picking up the new object) is what I am working on as well.
First of all, your message is a little vague about what you are doing. Is your application creating an object at runtime, then you modify the source while the application is still running, and when the app creates a new object it still gets the old version of the class, thus not picking up your modification? I am guessing that is what you mean. You then need to say whether you are using a separate class loader instance for the object you wish to reload. In this case an object and a class amount to the same thing, because you are creating only one of them at runtime and then want to reload it with a new version of the class.
In any case, every class you wish to reload at runtime with a new version must have its own class loader instance: EVERY SINGLE CLASS. I don't know how familiar you are with how the JVM loads classes, the classpath, and so forth, but let me share a bit of what I have learned.
First, the JVM has a "system" (application) class loader that usually loads all of an application's classes, unless you create your own class loader instance to load them. There is also a bootstrap loader that loads the core Java API classes; the system loader has the bootstrap loader above it in its parent chain. If you are unfamiliar with the ClassLoader hierarchy and how it works, a good read is the ClassLoader API docs, which explain the delegation model. Then read the URLClassLoader docs, which explain how it uses its own set of URLs as a classpath, to which you can add more URLs, and so forth.
The delegation model works like this. Every class has a ClassLoader, usually the system loader, because most developers never run into class loading issues. They develop an app that is launched by the JVM in the normal way (java -jar someapp.jar, or java MyClass with various parameters), and if their code never creates a ClassLoader instance to load classes dynamically, then every class of their app shares the same system loader instance. This is why your application never has to bundle any of the JDK/JRE runtime classes: they are automatically available via the bootstrap classpath, which the system loader reaches through the delegation model.
Now, on to that model. Delegation is how a ClassLoader instance looks for a class when the JVM asks it to find one. First, the class loader looks in its cache of already loaded classes. If the class is not found there, it asks its parent class loader to find it. Only if the parent chain cannot find the class does it finally call its own findClass method, which is what the API recommends new class loaders override to implement their own loading behavior. This ensures that the very first time a class is loaded, the parent class loaders are searched before your own class loader instance gets a chance to find it however it needs to.
There is one problem I have with this: if a class you need to load lies within the JVM classpath (or, for a deployed J2EE application, within the J2EE or web-app classpath), it is ALWAYS found by a parent class loader before your class loader instance ever gets a chance to load it.
What I mean is, if you create your own class loader (extending, say, URLClassLoader) and try to load a class that you want to eventually reload, you can never really reload the newly modified class if it is on the JVM's classpath (or the J2EE/web-app classpath), because the parent loader finds it before your own implementation gets a chance via the findClass() method you are supposed to override.
As I have had to do in my plugin engine to ensure any class can be reloaded, you have to override the loadClass() method of ClassLoader, not just findClass(). To understand why, you need to understand how the JVM resolves classes (in fact, I posted a message on the JVM forum of this site earlier today to get exact details). When the JVM loads the bytecodes of a class, as far as I know it asks that class's own ClassLoader instance to find any classes the class extends, implements, or imports. So if Class A uses Class B, and each has its own class loader with no parent loader, then Class A's loader must still be able to see Class B's .class file somehow in order to resolve the reference. In other words, Class A and Class B must be within a path visible to Class A's loader so that it can also load Class B for use in Class A. (I am using the word "class" loosely here, since every class becomes an object when instantiated.) Anyway, for every class to be reloadable, every class must have its very own ClassLoader instance. URLClassLoader works well for this because you can declare URL objects as part of the classpath a URLClassLoader instance searches.
Anyway, that is all the time I have for this right now. Let me know if it helped at all, or if you have any more questions. I'll watch the topic and check back soon.
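To make the above concrete, here is a minimal sketch of per-load class reloading with URLClassLoader. The directory and class names are assumptions for illustration, not from the original posts:

```java
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

// Sketch: each call builds a brand-new loader, so a freshly compiled
// .class file in classesDir is picked up instead of a cached version.
// This only works if the class is NOT also on the application classpath;
// otherwise parent delegation finds the old copy first.
public class Reloader {
    public static Object loadFresh(String classesDir, String className) throws Exception {
        URL[] classpath = { new File(classesDir).toURI().toURL() };
        // parent = null: delegate only to the bootstrap loader, so the
        // application classpath never shadows the directory version
        URLClassLoader loader = new URLClassLoader(classpath, null);
        Class<?> cls = loader.loadClass(className);
        return cls.getDeclaredConstructor().newInstance();
    }
}
```

Dropping all references to the old instance and its loader lets the old class version eventually be unloaded by the garbage collector.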
Similar Messages
-
How to avoid printing a PDF object
I wrote on my html page:
I would like to add something that prevents the user from printing the document. Does anyone know how?
Hugs
Fernando
You can lock down a PDF to prevent printing, yes. It's been part of the PDF specification since version 1.1.
Leonard -
How to avoid the PDF file in the browser cache
I am opening PDF files through the browser. When the same PDF's contents later change and I open it again, it is not updated, because the file is in the cache. So how do I keep the cache from storing PDF files?
Please help me!
The code is as below:
File fBlob = new File ("test.pdf");
FileInputStream fIS = new FileInputStream(fBlob);
pstUpdate= con.prepareStatement("UPDATE table set file = ? where id = ?");
pstUpdate.setBinaryStream(1, fIS, (int) fBlob.length());
pstUpdate.setString(2, rs.getString("id"));
pstUpdate.execute();
con.commit(); -
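On the PDF caching question above: the posted code only updates the BLOB; the browser keeps the old copy because nothing tells it the response must not be cached. The usual fix is to send cache-defeating headers with the PDF response. A sketch using the JDK's built-in HTTP server purely for illustration (in a real web app you would set the same headers on the servlet response; all names here are assumptions):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Sketch: serve a PDF with headers that tell browsers and proxies
// not to reuse a cached copy. Context path and class name are hypothetical.
public class NoCachePdf {
    public static HttpServer start(final byte[] pdfBytes) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/doc.pdf", exchange -> {
            exchange.getResponseHeaders().set("Content-Type", "application/pdf");
            // HTTP/1.1 caches
            exchange.getResponseHeaders().set("Cache-Control", "no-cache, no-store, must-revalidate");
            // HTTP/1.0 caches
            exchange.getResponseHeaders().set("Pragma", "no-cache");
            // already-expired fallback
            exchange.getResponseHeaders().set("Expires", "0");
            exchange.sendResponseHeaders(200, pdfBytes.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(pdfBytes);
            }
        });
        server.start();
        return server;
    }
}
```

With these headers the browser re-fetches the PDF on every request instead of serving the stale cached copy.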
How to avoid ViewObject or ApplicationModule cache?
I am developing a web application using JSP, EJB (SessionBean) and BC4J.
How to avoid ViewObject or ApplicationModule cache?
Is there any method in Oracle API (oracle.jbo, oracle.jbo.server, ...) to do this?
Objects returned from queries against your session are the shared/cached instances and should not be changed. Changes applied directly against these instances will not be reflected in the database.
To apply changes against objects within a transaction, and isolate those changes from other threads while they are pending, you must use a UnitOfWork. The UnitOfWork provides a transactionally isolated working copy where changes can be safely made. When the UnitOfWork is committed, the minimal change set is calculated and written to the database before being merged into the shared cache.
Some useful links into the documentation:
http://download-west.oracle.com/docs/cd/B10464_01/web.904/b10313/undrstdg.htm#1110428
http://download-west.oracle.com/docs/cd/B10464_01/web.904/b10313/xactions.htm#1127587
Doug -
How can we avoid cloning an object?
how can we avoid cloning an object?
maddy123 wrote:
I am writing a singleton class, but I want to keep clients from using Cloneable to create extra instances.
Huh? What makes you think that's going to happen? Sounds like you have trust issues amongst developers, which would be better fixed by non-technical means -
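For what it's worth, if the concern really is a caller defeating a singleton via clone(), the standard defense is for the singleton itself to refuse cloning. A small sketch (the class name is illustrative):

```java
// Sketch: a singleton that cannot be duplicated via clone().
// Object.clone() already throws CloneNotSupportedException unless the
// class implements Cloneable; overriding clone() makes the intent
// explicit and guards subclasses too.
public class Registry {
    private static final Registry INSTANCE = new Registry();

    private Registry() {}                 // no outside instantiation

    public static Registry getInstance() {
        return INSTANCE;
    }

    @Override
    protected Object clone() throws CloneNotSupportedException {
        throw new CloneNotSupportedException("singleton: cloning not allowed");
    }
}
```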
How to avoid the objects dependency in the packages by standard settings?
Hi,
How to avoid the objects dependency in the packages by standard settings?
Example scenario -> Our project uses two packages, 'ZZP1' and 'ZZP2', for developments in the system 'SN1'. We created a domain 'ZZ_DO_TEST' in package 'ZZP1'. Now we have to make sure that developers do not use or refer to domain 'ZZ_DO_TEST' for developments in package 'ZZP2'.
…Naddy
Even I felt that in the CTS at least a warning could be given if the included objects refer to any other object(s) which are:
1. Local Objects
2. Locked under other requests,
3. Lastly, able to detect cyclic dependencies, as in a situation we had where a program locked in request A calls an FM locked in request B, while request B refers to a message locked in request A. Since it was a message, it only gave return code 4 in the transport, and the transport ended with warnings. But if it were some other object, it would give a compile error in at least one transport, and neither could be moved without the other.
Anyway, I will check the BAPI he has mentioned and see if any workaround can be done.
Request: please keep the post active until we arrive at a good solution. Thanks. -
How to create a cache for JPA Entities using an EJB
Hello everybody! I have recently gotten started with JPA 2.0 (I use EclipseLink) and EJB 3.1, and I am having trouble figuring out how best to implement a cache for my JPA entities using an EJB.
In the following I try to describe my problem. I know it is a bit verbose, but I hope somebody will help me. (I highlighted in bold the core of my problem, in case you want to first decide if you can/want to help and only then spend another couple of minutes understanding the domain.)
I have the following JPA Entities:
@Entity
class Genre {
    private String name;
    @OneToMany(mappedBy = "genre", cascade = {CascadeType.MERGE, CascadeType.PERSIST})
    private Collection<Novel> novels;
}

@Entity
class Novel {
    @ManyToOne(cascade = {CascadeType.MERGE, CascadeType.PERSIST})
    private Genre genre;
    private String titleUnique;
    @OneToMany(mappedBy = "novel", cascade = {CascadeType.MERGE, CascadeType.PERSIST})
    private Collection<NovelEdition> editions;
}

@Entity
class NovelEdition {
    private String publisherNameUnique;
    private String year;
    @ManyToOne(optional = false, cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    private Novel novel;
    @ManyToOne(optional = false, cascade = {CascadeType.MERGE, CascadeType.PERSIST})
    private Catalog appearsInCatalog;
}

@Entity
class Catalog {
    private String name;
    @OneToMany(mappedBy = "appearsInCatalog", cascade = {CascadeType.MERGE, CascadeType.PERSIST})
    private Collection<NovelEdition> novelsInCatalog;
}
The idea is to have several Novels, each belonging to a specific Genre, for which more than one edition can exist (different publisher, year, etc.). For simplicity, a NovelEdition can belong to just one Catalog, with each Catalog represented by a text file like this:
FILE 1:
Catalog: Name Of Catalog 1
"Title of Novel 1", "Genre1 name","Publisher1 Name", 2009
"Title of Novel 2", "Genre1 name","Publisher2 Name", 2010
FILE 2:
Catalog: Name Of Catalog 2
"Title of Novel 1", "Genre1 name","Publisher2 Name", 2011
"Title of Novel 2", "Genre1 name","Publisher1 Name", 2011
Each entity has an associated stateless EJB that acts as a DAO, using a transaction-scoped EntityManager. For example:
@Stateless
public class NovelDAO extends AbstractDAO<Novel> {
    @PersistenceContext(unitName = "XXX")
    private EntityManager em;

    protected EntityManager getEntityManager() {
        return em;
    }

    public NovelDAO() {
        super(Novel.class);
    }

    // NovelDAO-specific methods
}
I am interested in the point at which the catalog files are parsed and the corresponding entities are built (I usually read a whole batch of catalogs at a time).
Since parsing is a String-driven procedure, I don't want to repeat actions like novelDAO.getByName("Title of Novel 1"), so I would like to use a centralized cache for mappings of type String identifier -> entity object.
Currently I use 3 objects:
1) The file parser, which does something like:
final CatalogBuilder catalogBuilder = //JNDI Lookup
//for each file:
String catalogName = parseCatalogName(file);
catalogBuilder.setCatalogName(catalogName);
//For each novel edition
String title= parseNovelTitle();
String genre= parseGenre();
catalogBuilder.addNovelEdition(title, genre, publisher, year);
//End foreach
catalogBuilder.build();
2) The CatalogBuilder is a stateful EJB which uses the Cache; it is re-initialized every time a new catalog file is parsed and is "removed" after a catalog is persisted.
@Stateful
public class CatalogBuilder {
    @PersistenceContext(unitName = "XXX", type = PersistenceContextType.EXTENDED)
    private EntityManager em;
    @EJB
    private Cache cache;
    private Catalog catalog;

    @PostConstruct
    public void initialize() {
        catalog = new Catalog();
        catalog.setNovelsInCatalog(new ArrayList<NovelEdition>());
    }

    public void addNovelEdition(String title, String genreStr, String publisher, String year) {
        Genre genre = cache.findGenreCreateIfAbsent(genreStr); //##
        Novel novel = cache.findNovelCreateIfAbsent(title, genre); //##
        NovelEdition novEd = new NovelEdition();
        novEd.setNovel(novel);
        // novEd.set publisher, year, catalog ...
        catalog.getNovelsInCatalog().add(novEd);
    }

    public void setCatalogName(String name) {
        catalog.setName(name);
    }

    @Remove
    public void build() {
        em.merge(catalog);
    }
}
3) Finally, the problematic bean: Cache. For CatalogBuilder I used an EXTENDED persistence context (which I need, as the parser executes several successive transactions) together with a stateful EJB; but in this case I am not really sure what I need. In fact, the cache:
Should stay in memory until the parser is finished with its job, but not longer (should not be a singleton) as the parsing is just a very particular activity which happens rarely.
Should keep all of the entities in context, and should return managed entities from the methods marked with ##; otherwise the attempt to persist the catalog will fail (duplicated INSERTs).
Should use the same persistence context as the CatalogBuilder.
What I have now is:
@Stateful
public class Cache {
    @PersistenceContext(unitName = "XXX", type = PersistenceContextType.EXTENDED)
    private EntityManager em;
    @EJB
    private sessionbean.GenreDAO genreDAO;
    // DAOs for other cached entities

    Map<String, Genre> genreName2Object = new TreeMap<String, Genre>();

    @PostConstruct
    public void initialize() {
        for (Genre g : genreDAO.findAll()) {
            genreName2Object.put(g.getName(), em.merge(g));
        }
    }

    public Genre findGenreCreateIfAbsent(String genreName) {
        if (genreName2Object.containsKey(genreName)) {
            return genreName2Object.get(genreName);
        }
        Genre g = new Genre();
        g.setName(genreName);
        g.setNovels(new ArrayList<Novel>());
        genreDAO.persist(g);
        genreName2Object.put(g.getName(), em.merge(g));
        return g;
    }
}
But honestly I couldn't find a solution which satisfies these 3 points at the same time. For example, using another stateful bean with an extended persistence context (PC) would work for the first parsed file, but I have no idea what should happen from the second file on. Indeed, for the first file the PC will be created and propagated from CatalogBuilder to Cache, which will then use the same PC. But after build() returns, the PC of CatalogBuilder should (I guess) be removed and re-created during the subsequent parsing, although the PC of Cache should stay "alive": shouldn't an exception be thrown in this case? Another problem is what to do when the Cache bean is passivated. Currently I get the exception:
"passivateEJB(), Exception caught ->
java.io.IOException: java.io.IOException
at com.sun.ejb.base.io.IOUtils.serializeObject(IOUtils.java:101)
at com.sun.ejb.containers.util.cache.LruSessionCache.saveStateToStore(LruSessionCache.java:501)"
Hence, I have no idea how to implement my cache. Can you please tell me how you would solve the problem?
Many thanks!
Bye
Hi Chris,
thanks for your reply!
I've tried to add the following to persistence.xml (although I've read that EclipseLink uses an L2 cache by default):
<shared-cache-mode>ALL</shared-cache-mode>
Then I replaced the Cache bean with a stateless bean which has methods like
Genre findGenreCreateIfAbsent(String genreName) {
    Genre genre = genreDAO.findByName(genreName);
    if (genre != null) {
        return genre;
    }
    genre = //Build new genre object
    genreDAO.persist(genre);
    return genre;
}
As far as I understood, the shared cache should automatically store the genre and avoid querying the DB multiple times for the same genre, but unfortunately this is not the case: with a FINE logging level I see really a lot of SELECT queries, which I didn't see with my "home made" Cache...
I am really confused.. :(
Thanks again for helping + bye -
How can a JVM terminate with an exit code of 141 and no other diagnostics?
Hello,
We are encountering a JVM process that dies with little explanation other than an exit code of 141: no HotSpot error file (hs_err_*), no crash dump. To date, the process runs anywhere from 30 minutes to 8 days before the problem occurs. The last application log entry is always the report of a lost SSL connection, the result of a thrown SSLException. (The exception itself is unavailable at this time; the JVM dies before it is logged. Working on that.)
How can a JVM produce an exit code of 141, and nothing else? Can anyone suggest ideas for capturing additional diagnostic information? Any help would be greatly appreciated! Environment and efforts to date are described below.
Thanks,
-KK
Host machine: 8x Xeon server with 256GB memory, RHEL 6 (or RHEL 5.5) 64-bit
Java: Oracle Java SE 7u21 (or 6u26)
java version "1.7.0_21"
Java(TM) SE Runtime Environment (build 1.7.0_21-b11)
Java HotSpot(TM) 64-Bit Server VM (build 23.21-b01, mixed mode)
JVM arguments:
-XX:+UseConcMarkSweepGC
-XX:+CMSIncrementalMode
-XX:+CMSClassUnloadingEnabled
-XX:MaxPermSize=256m
-XX:NewSize=64m
-Xms128m
-Xmx1037959168
-Djava.awt.headless=true
-Djava.security.egd=file:///dev/./urandom
Diagnostics attempted to date:
LD_PRELOAD=libjsig.so. A modified version of libjsig.so was created to report all signal handler registrations and any SIGPIPE signals received. (Exit code 141 could be interpreted as 128 + SIGPIPE(13).) No JNI libraries register any signal handlers, and no SIGPIPE signal is reported by the library for the duration of the JVM run. Calls to ::exit() are also intercepted and reported; no call to exit() is reported.
Inspect /var/log/messages for any indication that the OS killed the process, e.g. via the Out Of Memory (OOM) Killer. Nothing found.
Set 'ulimit -c unlimited', in case the default limit of 0 (zero) was preventing a core file from being written. Still no core dump.
'top' reports that the VIRT size of the process can grow to 20GB or more in a matter of hours, which is unusual compared to other JVM processes. The RES (resident set size) does not grow beyond about 375MB, however, which is considered normal.
This JVM process creates many short-lived Thread objects by way of a thread pool, averaging 1 thread every 2 seconds, and these objects end up referenced only by a Weak reference. The CMS collector seems lazy about collecting these, and upwards of 2000 Thread objects have been seen (in heap dumps) held only by Weak references. (The Java heap averages about 100MB, so the collector is not under any pressure.) However, a forced collection (via jconsole) cleans out the Thread objects as expected. Any relationship of this to the VIRT size or the JVM disappearance, however, cannot be established.
The process also uses NIO and direct buffers, and maintains a DirectByteBuffer cache. There is some DirectByteBuffer churn. MBeans report stats like:
Direct buffer pool: allocated=669 (20,824,064 bytes), released=665 (20,725,760), active=4 (98,304) [note: equals 2x 32K buffers and 2x 16K buffers]
java.nio.BufferPool > direct: Count=18, MemoryUsed=1343568, TotalCapacity=1343568
These numbers appear normal and also do not seem to correlate with the VIRT size or the JVM disappearance.
True, but the JNI call would still be reported by the LD_PRELOAD intercept, unless the native code could somehow circumvent that. Using a test similar to GoodbyeWorld (shown below), I verified that a JNI call to exit() is reported. In the failure case, no call to exit() is reported.
Can the OS (or a manual 'kill') specify an exit code? Where could "141" be coming from?
Thanks,
-K2
=== GoodbyeWorldFromJNI.java ===
package com.attachmate.test;
public class GoodbyeWorldFromJNI {
    public static final String LIBRARY_NAME = "goodbye";

    static {
        try {
            System.loadLibrary(LIBRARY_NAME);
        } catch (UnsatisfiedLinkError error) {
            System.err.println("Failed to load " + System.mapLibraryName(LIBRARY_NAME));
        }
    }

    private static native void callExit(int exitCode);

    public static void main(String[] args) {
        callExit(141);
    }
}
=== goodbye.c ===
#include <stdlib.h>
#include "goodbye.h" // javah generated header file
JNIEXPORT void JNICALL Java_com_attachmate_test_GoodbyeWorldFromJNI_callExit
  (JNIEnv *env, jclass theClass, jint exitCode)
{
    exit(exitCode);
}
=== script.sh ===
#!/bin/bash -v
uname -a
export PATH=/opt/jre1.7.0_25/bin:$PATH
java -version
pwd
LD_PRELOAD=./lib/linux-amd64/libjsigdebug.so java -classpath classes -Djava.library.path=lib/linux-amd64 com.attachmate.test.GoodbyeWorldFromJNI > stdout.txt
echo $?
tail stdout.txt
=== script output ===
[keithk@keithk-RHEL5-dev goodbyeJNI]$ ./script.sh
#!/bin/bash -v
uname -a
Linux keithk-RHEL5-dev 2.6.18-164.2.1.el5 #1 SMP Mon Sep 21 04:37:42 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
export PATH=/opt/jre1.7.0_25/bin:$PATH
java -version
java version "1.7.0_25"
Java(TM) SE Runtime Environment (build 1.7.0_25-b15)
Java HotSpot(TM) 64-Bit Server VM (build 23.25-b01, mixed mode)
pwd
/tmp/goodbyeJNI
LD_PRELOAD=./lib/linux-amd64/libjsigdebug.so java -classpath classes -Djava.library.path=lib/linux-amd64 com.attachmate.test.GoodbyeWorldFromJNI > stdout.txt
echo $?
141
tail stdout.txt
JSIG: exit(141) called
JSIG: Call stack has 4 frames:
JSIG: ./lib/linux-amd64/libjsigdebug.so [0x2b07dc1bdc2f]
JSIG: ./lib/linux-amd64/libjsigdebug.so(exit+0x29) [0x2b07dc1bea41]
JSIG: /tmp/goodbyeJNI/lib/linux-amd64/libgoodbye.so [0x2aaab3e82547]
JSIG: [0x2aaaab366d8e]
=== === -
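One note on the "141" question in the thread above: shells (and Process.waitFor() on Linux) report a process killed by a signal as 128 + the signal number, and SIGPIPE is 13, so 141 is consistent with death by SIGPIPE. The arithmetic can be sanity-checked from Java; the availability of /bin sleep and kill commands on the test machine is an assumption:

```java
// Sketch: demonstrate that a process killed by SIGPIPE (signal 13)
// is reported with status 128 + 13 = 141 on Linux. Process.waitFor()
// uses the same encoding the shell shows in $?.
public class Exit141 {
    public static int killWithSigpipe() throws Exception {
        Process victim = new ProcessBuilder("sleep", "10").start();
        // send SIGPIPE to the victim; its default disposition terminates it
        new ProcessBuilder("kill", "-s", "PIPE", Long.toString(victim.pid()))
                .start().waitFor();
        return victim.waitFor();
    }
}
```

This does not explain why the libjsig intercept saw no SIGPIPE in-process, but it does pin down what the exit status itself encodes.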
How can I write new objects to an existing file with already written objects?
Hi,
I've got a problem in my app.
Namely, my app stores data as objects written to files. Everything is OK when I write some data (objects of a class I defined) to a file (using the writeObject method of ObjectOutputStream) and then read it back sequentially with the corresponding readObject method (of ObjectInputStream).
Problems start when I add new objects to an already existing file (at the end of the file). Then, when I try to read the newly written data, I get an exception:
java.io.StreamCorruptedException
at java.io.ObjectInputStream.readObject0(Unknown Source)
at java.io.ObjectInputStream.readObject(Unknown Source)
Is there any way to avoid corrupting the stream? Maybe it is a silly problem, but I really can't cope with it! How can I write new objects to an existing file that already contains written objects?
If anyone of you know something about this issue, please help!
Jai
Here is a piece of sample code. You can save the bytes read from the object by invoking save(byte[] b), and load the last inserted object by invoking load().
/*
 * Created on 2004-12-23
 */
package com.cpic.msgbus.monitor.util.cachequeue;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;

/**
 * @author elgs This is a very high performance implementation of Cache.
 */
public class StackCache implements Cache {
    protected long seed = 0;
    protected RandomAccessFile raf;
    protected int count;
    protected String cacheDeviceName;
    protected Adapter adapter;
    protected long pointer = 0;
    protected File f;

    public StackCache(String name) throws IOException {
        cacheDeviceName = name;
        f = new File(Const.cacheHome + name);
        raf = new RandomAccessFile(f, "rw");
        if (raf.length() == 0) {
            raf.writeLong(0L);
        }
    }

    /**
     * When the cache file is getting large in size and there may be fragments,
     * we should do a shrink.
     */
    public synchronized void shrink() throws IOException {
        int BUF = 8192;
        long pointer = getPointer();
        long size = pointer + 4;
        File temp = new File(Const.cacheHome + getCacheDeviceName() + ".shrink");
        FileInputStream in = new FileInputStream(f);
        FileOutputStream out = new FileOutputStream(temp);
        byte[] buf = new byte[BUF];
        long runs = size / BUF;
        int mode = (int) size % BUF;
        for (long l = 0; l < runs; ++l) {
            in.read(buf);
            out.write(buf);
        }
        in.read(buf, 0, mode);
        out.write(buf, 0, mode);
        out.flush();
        out.close();
        in.close();
        raf.close();
        f.delete();
        temp.renameTo(f);
        raf = new RandomAccessFile(f, "rw");
    }

    private synchronized long getPointer() throws IOException {
        long l = raf.getFilePointer();
        raf.seek(0);
        long pointer = raf.readLong();
        raf.seek(l);
        return pointer < 8 ? 4 : pointer;
    }

    /*
     * (non-Javadoc)
     * @see com.cpic.msgbus.monitor.util.cachequeue.Cache#load()
     */
    public synchronized byte[] load() throws IOException {
        pointer = getPointer();
        if (pointer < 8) {
            return null;
        }
        raf.seek(pointer);
        int length = raf.readInt();
        pointer = pointer - length - 4;
        raf.seek(0);
        raf.writeLong(pointer);
        byte[] b = new byte[length];
        raf.seek(pointer + 4);
        raf.read(b);
        --count;
        return b;
    }

    /*
     * (non-Javadoc)
     * @see com.cpic.msgbus.monitor.util.cachequeue.Cache#save(byte[])
     */
    public synchronized void save(byte[] b) throws IOException {
        pointer = getPointer();
        int length = b.length;
        pointer += 4;
        raf.seek(pointer);
        raf.write(b);
        raf.writeInt(length);
        pointer = raf.getFilePointer() - 4;
        raf.seek(0);
        raf.writeLong(pointer);
        ++count;
    }

    /*
     * (non-Javadoc)
     * @see com.cpic.msgbus.monitor.util.cachequeue.Cache#getCachedObjectsCount()
     */
    public synchronized int getCachedObjectsCount() {
        return count;
    }

    /*
     * (non-Javadoc)
     * @see com.cpic.msgbus.monitor.util.cachequeue.Cache#getCacheDeviceName()
     */
    public String getCacheDeviceName() {
        return cacheDeviceName;
    }
} -
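Back to the original StreamCorruptedException question in this thread: the usual cause is that each new ObjectOutputStream writes a fresh stream header, so appending to an existing file leaves a second header in the middle of the stream, which a single ObjectInputStream then trips over. A commonly used workaround is to suppress the header when appending; a sketch (the subclass name is made up):

```java
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;

// Sketch: an ObjectOutputStream for appending to a file that already
// contains serialized objects. Instead of writing a second stream
// header (which corrupts the stream), it emits a reset marker.
public class AppendingObjectOutputStream extends ObjectOutputStream {
    public AppendingObjectOutputStream(OutputStream out) throws IOException {
        super(out);
    }

    @Override
    protected void writeStreamHeader() throws IOException {
        reset(); // write TC_RESET instead of a duplicate header
    }
}
```

Use a plain ObjectOutputStream the first time the file is created, and this subclass (over a FileOutputStream opened in append mode) for every later session; one ObjectInputStream can then read all the objects back in order.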
Missing destructor, or how to explicitly end an object?!
hi,
for the explicit beginning of an object there is the constructor,
but what if I need a method for the explicit end of an object? How can I control the end of an object manually myself (not automatically via the garbage collector at some undefined time)?
object-var = null + finally() doesn't work
cu
oliver scorp
1.) Implement finalize() in your object (here you can clean up resources etc.; it was intended for this) -
If there are reasons for avoiding finalize, what are they? The fundamental problem with finalize() is that it runs when the object is being gc'ed.
Why is the object being gc'ed? Because the system needs to reclaim resources.
Why does the system need to reclaim resources? Because it is running low.
Hence, finalize methods can very easily be the cause of OutOfMemory crashes.
2.) No way in Java: if you remove all references to an object it will eventually be gc'ed. In run-of-the-mill user-interacting applications that usually means nearly instantly.
On most JVMs, garbage collection occurs periodically, or when there is insufficient heap for an allocation to complete. There is no specification of what the period of gc is, so you cannot say "nearly instantly".
BTW: if you have, e.g., a tree and set all references to the root to null (and there are no references to any nodes left apart from the tree-internal ones), the whole tree is gc'ed. You don't have to think as much about the destruction of your objects as in C++, though you'll have to keep in mind that only those things are gc'ed for which no references exist any more!!
Yes, I'm hoping everyone already knew this :-P
Although, I've always wondered, if you had a really big circular linked list, whether the gc would be able to work out that removing it was possible. Though without knowing how the gc is implemented, there is no way of knowing if such limitations apply ;-/ -
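Tying the thread above together: since Java has no destructor and finalize() runs at an unpredictable time (if at all), the conventional answer to the original question is an explicit lifecycle method the caller invokes deterministically. A sketch (names are illustrative):

```java
// Sketch: deterministic "end of object" via an explicit close() method,
// instead of relying on the garbage collector or finalize().
public class ManagedResource {
    private boolean closed = false;

    public void use() {
        if (closed) {
            throw new IllegalStateException("resource already closed");
        }
        // ... work with the underlying resource here ...
    }

    // The explicit end of the object: callers invoke this themselves,
    // typically from a finally block, so cleanup happens at a known time.
    public void close() {
        if (!closed) {
            closed = true;
            // release file handles, sockets, native memory, etc.
        }
    }
}
```

Typical usage is: create the object, do the work in a try block, and call close() in the matching finally block, so the "object end" happens exactly once at a point you choose.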
How to avoid sleepycat.je.log.ChecksumException
Hello,
We're building an online game application and are evaluating JE. We want to use JE as a persistent store (very similar to a cache, periodically syncing each online user's data to a remote JE server via a network socket). We're testing with thousands of online users, and the data size for each user is about 83 KB.
I ran into an EnvironmentFailureException while doing anomaly testing. I used "kill -TERM pid" to kill the JVM process and encountered the following exception. I had to clean out all the JE log files in order to restart JE. I'm wondering:
1. Is there anything wrong with my approach to anomaly testing? How can I avoid the exception if I want to continue my anomaly testing?
2. Once the exception happens, is there any way of "rescuing" the existing data? As it is, I had to clean out all the JE log files in order to restart JE.
The error message is as follows:
Daemon thread [Cleaner-1] caught exception: com.sleepycat.je.EnvironmentFailureException: (JE 4.1.6) DBServ(1):dbhome com.sleepycat.je.log.ChecksumException: Read invalid log entry type: 0 LOG_CHECKSUM: Checksum invalid on read, log is likely invalid. Environment is invalid and must be closed. fetchTarget of 0x8a/0x59aaac parent IN=59 IN class=com.sleepycat.je.tree.BIN lastFullVersion=0xa4/0x7d1362 parent.getDirty()=true state=0
com.sleepycat.je.EnvironmentFailureException: (JE 4.1.6) DBServ(1):dbhome com.sleepycat.je.log.ChecksumException: Read invalid log entry type: 0 LOG_CHECKSUM: Checksum invalid on read, log is likely invalid. Environment is invalid and must be closed. fetchTarget of 0x8a/0x59aaac parent IN=59 IN class=com.sleepycat.je.tree.BIN lastFullVersion=0xa4/0x7d1362 parent.getDirty()=true state=0
at com.sleepycat.je.log.LogManager.getLogEntry(LogManager.java:784)
at com.sleepycat.je.log.LogManager.getLogEntryAllowInvisibleAtRecovery(LogManager.java:742)
at com.sleepycat.je.tree.IN.fetchTarget(IN.java:1315)
at com.sleepycat.je.tree.BIN.fetchTarget(BIN.java:1367)
at com.sleepycat.je.tree.Tree.getParentBINForChildLN(Tree.java:1017)
at com.sleepycat.je.cleaner.FileProcessor.processLN(FileProcessor.java:678)
at com.sleepycat.je.cleaner.FileProcessor.processFile(FileProcessor.java:553)
at com.sleepycat.je.cleaner.FileProcessor.doClean(FileProcessor.java:241)
at com.sleepycat.je.cleaner.FileProcessor.onWakeup(FileProcessor.java:143)
at com.sleepycat.je.utilint.DaemonThread.run(DaemonThread.java:162)
at java.lang.Thread.run(Thread.java:619)
Caused by: com.sleepycat.je.log.ChecksumException: Read invalid log entry type: 0
at com.sleepycat.je.log.LogEntryHeader.&lt;init&gt;(LogEntryHeader.java:138)
at com.sleepycat.je.log.LogManager.getLogEntryFromLogSource(LogManager.java:861)
at com.sleepycat.je.log.LogManager.getLogEntry(LogManager.java:781)
Thanks,
Leeuan
Do you still have the offending log files? If so, save them away.
With a fresh environment, are you able to reliably reproduce this? i.e. if you kill -9 on a running system, can you reproduce the same problem?
Charles Lamb -
My C application uses the Java Invocation API to call Java classes.
All works right, but when I change and recompile my Java class, the C app continues to see the old compiled class (unless I quit and relaunch the C app).
Is there a way to manage the JVM cache?
How can I avoid or force classes to be cached or not?
Thanks for help
You can use custom java.net.URLClassLoader instances to achieve this. Or just wait for the garbage collector to unload your class, as long as you release all global references to all instances and to the class itself. But relying on GC to unload your class isn't a sure thing.
But exactly what are you trying to do? If this fast reloading is just for development time, then maybe it isn't worth the trouble. Or is this reloading supposed to be a production feature?
-slj- -
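The fresh-URLClassLoader approach from the answer above can be sketched as follows. This is a minimal, self-contained demo, not code from the thread: the class name `Greeter`, the `msg()` method, and the on-the-fly compilation via `javax.tools` are all illustrative assumptions. It compiles a tiny class, loads it, recompiles a changed version into the same directory, and shows that only a brand-new `URLClassLoader` picks up the new bytes, while the already-loaded `Class` object keeps answering with the old version.

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReloadDemo {
    static String firstMsg, secondMsg;
    static boolean sameClass;

    // Load `name` from `dir` with a brand-new URLClassLoader. Because the
    // class is NOT on the application classpath, parent delegation fails and
    // this loader defines the class itself from the current .class bytes.
    static Class<?> freshLoad(Path dir, String name) throws Exception {
        URLClassLoader cl = new URLClassLoader(new URL[]{ dir.toUri().toURL() });
        return cl.loadClass(name);
    }

    // Instantiate the class reflectively and invoke its msg() method.
    static String call(Class<?> c) throws Exception {
        Object o = c.getDeclaredConstructor().newInstance();
        return (String) c.getMethod("msg").invoke(o);
    }

    public static void main(String[] args) throws Exception {
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler(); // needs a JDK, not a bare JRE
        Path dir = Files.createTempDirectory("reload");
        Path src = dir.resolve("Greeter.java");

        // version 1 of the class
        Files.write(src, "public class Greeter { public String msg() { return \"v1\"; } }".getBytes());
        javac.run(null, null, null, src.toString());
        Class<?> v1 = freshLoad(dir, "Greeter");

        // "recompile": overwrite the source and compile again into the same directory
        Files.write(src, "public class Greeter { public String msg() { return \"v2\"; } }".getBytes());
        javac.run(null, null, null, src.toString());

        // the old Class object still answers "v1"; a fresh loader sees "v2"
        Class<?> v2 = freshLoad(dir, "Greeter");
        firstMsg = call(v1);
        secondMsg = call(v2);
        sameClass = (v1 == v2);
        System.out.println(firstMsg + " / " + secondMsg + " / same class: " + sameClass);
    }
}
```

Note that if the host code ever declares a variable typed as the reloaded class itself, it pins that class's loader in memory; the usual design is to keep only an interface (loaded by the host's own loader) as the static type and reach the reloaded implementation through it or through reflection.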
My C application uses Java Invocation API to call java classes.
All works right but when I change and recompile my java class, the C app continues to see the old java class compilation (unless I quit and relaunch the C app).
Is there a way to manage the JVM cache?
How can I avoid or force classes to be cached or not?
Thanks for help

As in a plain Java application, you would have to build a hot-swappable system. This is done using class loaders and supporting classes.
-
After REFRESH the cached object is not consistent with the database table
After REFRESH, the cached object is not consistent with the database table. Why?
I created a JDBC connection to the Oracle database (HR schema) using JDeveloper (10.1.3), and then created an offline database (HR schema) in JDeveloper from the existing database tables. Then I made some updates to the JOBS database table using SQL*Plus.
When I returned to JDeveloper and refreshed the HR connection, I found none of the changes made to the JOBS table reflected in the offline database table in JDeveloper.
How can I make JDeveloper's offline tables synchronize with the underlying database tables?

qkc,
Once you create an offline table, it's just a copy of a table definition as of the point in time you brought it in from the database. Refreshing the connection, as you describe it, only refreshes the database browser, not any offline objects. If you want to synchronize the offline table, right-click it and choose "Generate or Reconcile Objects" to reconcile it with the database. I just tried this in 10.1.3.3 (not the latest 10.1.3, I know), and it works properly.
John -
How can I avoid parameter prompting when submitting?
I am developing a web application in Visual Studio 2008 in C#. How can I avoid the problem of parameter prompting when I send parameters programmatically or dynamically? I am sending the values from a .NET web form to a Crystal Report, but it still asks for parameters, so the report is only generated when I submit a second time. How can I solve this problem? Please help. The code I am using is below.
using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using System.Data.OleDb;
using System.Data.OracleClient;
using CrystalDecisions.Shared;
using CrystalDecisions.CrystalReports.Engine;
using CrystalDecisions.Web;

public partial class OracleReport : System.Web.UI.Page
{
    CrystalReportViewer crViewer = new CrystalReportViewer();
    //CrystalReportSource crsource = new CrystalReportSource();
    int nItemId;

    protected void Page_Load(object sender, EventArgs e)
    {
        // database connection
        ConnectionInfo ConnInfo = new ConnectionInfo();
        ConnInfo.ServerName = "127.0.0.1";
        ConnInfo.DatabaseName = "Xcodf";
        ConnInfo.UserID = "HR777";
        ConnInfo.Password = "zghshshs";

        // apply the logon parameters to each table
        foreach (TableLogOnInfo cnInfo in this.CrystalReportViewer1.LogOnInfo)
        {
            cnInfo.ConnectionInfo = ConnInfo;
        }

        // declaring variables
        nItemId = int.Parse(Request.QueryString.Get("ItemId"));
        //string strStartDate = Request.QueryString.Get("StartDate");
        //int nItemId = 20;
        string strStartDate = "23-JUL-2010";

        // object declarations
        CrystalDecisions.CrystalReports.Engine.Database crDatabase;
        CrystalDecisions.CrystalReports.Engine.Table crTable;

        TableLogOnInfo dbConn = new TableLogOnInfo();

        // new report document object
        ReportDocument oRpt = new ReportDocument();

        // load the ItemReport into the report document
        oRpt.Load(@"C:\Inetpub\wwwroot\cryreport\CrystalReport1.rpt");

        // get the database, the table, and the LogOnInfo object which holds logon information
        crDatabase = oRpt.Database;

        // get the table in an object array of one item
        object[] arrTables = new object[1];
        crDatabase.Tables.CopyTo(arrTables, 0);

        // assign the first item of the array to crTable by downcasting the object to Table
        crTable = (CrystalDecisions.CrystalReports.Engine.Table)arrTables[0];

        dbConn = crTable.LogOnInfo;

        // setting values
        dbConn.ConnectionInfo.DatabaseName = "Xcodf";
        dbConn.ConnectionInfo.ServerName = "127.0.0.1";
        dbConn.ConnectionInfo.UserID = "HR777";
        dbConn.ConnectionInfo.Password = "zghshshs";

        // apply the logon info to the table object
        crTable.ApplyLogOnInfo(dbConn);

        crViewer.RefreshReport();

        // define the report source
        crViewer.ReportSource = oRpt;
        //CrystalReportSource1.Report = oRpt;

        // everything is created up to this point; what remains is to pass
        // parameters to the report so it shows only the selected records,
        // so call a method to set those parameters
        setReportParameters();
    }

    private void setReportParameters()
    {
        // all the parameter fields will be added to this collection
        ParameterFields paramFields = new ParameterFields();
        //ParameterFieldDefinitions ParaLocationContainer = new ParameterFieldDefinitions();
        //ParameterFieldDefinition ParaLocation = new ParameterFieldDefinition();

        // the parameter fields to be sent to the report
        ParameterField pfItemId = new ParameterField();
        //ParameterField pfStartDate = new ParameterField();
        //ParameterField pfEndDate = new ParameterField();

        // set the names under which the parameter fields will be received in the report
        pfItemId.ParameterFieldName = "RegionID";
        //pfStartDate.ParameterFieldName = "StartDate";
        //pfEndDate.ParameterFieldName = "EndDate";

        // the parameter fields declared above accept values as discrete objects,
        // so declare discrete objects
        ParameterDiscreteValue dcItemId = new ParameterDiscreteValue();
        //ParameterDiscreteValue dcStartDate = new ParameterDiscreteValue();
        //ParameterDiscreteValue dcEndDate = new ParameterDiscreteValue();

        // set the values of the discrete objects
        dcItemId.Value = nItemId;
        //dcStartDate.Value = DateTime.Parse(strStartDate);
        //dcEndDate.Value = DateTime.Parse(strEndDate);

        // now add these discrete values to the parameters
        //paramField.HasCurrentValue = true;
        //pfItemId.CurrentValues.Clear();
        int valueIDD = int.Parse(Request.QueryString.Get("ItemId").ToString());
        pfItemId.Name = valueIDD.ToString();

        pfItemId.CurrentValues.Add(dcItemId);
        //ParaLocation.ApplyCurrentValues;
        pfItemId.HasCurrentValue = true;

        //pfStartDate.CurrentValues.Add(dcStartDate);
        //pfEndDate.CurrentValues.Add(dcEndDate);

        // now add all these parameter fields to the parameter collection
        paramFields.Add(pfItemId);
        //paramFields.Add(pfStartDate);
        //paramFields.Add(pfEndDate);

        //Formula from Crystal
        //crViewer.SelectionFormula = "{COUNTRIES.REGION_ID} = " + int.Parse(Request.QueryString.Get("ItemId")) + "";
        crViewer.RefreshReport();

        // finally, add the parameter collection to the crystal report viewer
        crViewer.ParameterFieldInfo = paramFields;
    }
}

Keep your post to under 1200 characters, else you lose the formatting (you can do two posts if need be).
Re: parameters. First, make sure you have SP 1 for CR 10.5:
https://smpdl.sap-ag.de/~sapidp/012002523100009351512008E/crbasic2008sp1.exe
Next, see the following:
[Crystal Reports for Visual Studio 2005 Walkthroughs|https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/2081b4d9-6864-2b10-f49d-918baefc7a23]
CR Dev help file:
http://msdn2.microsoft.com/en-us/library/bb126227.aspx
Samples:
https://wiki.sdn.sap.com/wiki/display/BOBJ/CrystalReportsfor.NETSDK+Samples
Ludek
Follow us on Twitter http://twitter.com/SAPCRNetSup