Best practices about JTables.

Hi,
I've been programming in Java for about five months. I'm now developing an application that uses tables to present information from a database, and this is my first time handling tables in Java. I've read Sun's Swing tutorial on JTable and material on several other websites, but they stick to the table's syntax rather than best practices.
So I settled on what I think is a proper way to handle a table's data, but I'm not sure it's the best one. Let me walk you through the general steps:
1) I query employee data from Java DB (using EclipseLink JPA) and load it into an ArrayList.
2) I use this list to create the JTable, first transforming it into an Object[][] and feeding that into a custom TableModel.
3) From then on, if I need to search for an object in the table, I search for it in the list and then, with the resulting index, get it from the table. This works because I keep the same row order in the table and in the list.
4) If I need to insert an item into the table, I also insert it into the list, and likewise when I remove or modify an element.
Is the technique I'm using a best practice? I'm not sure that keeping the table synchronized with the list is the best way to handle this, but I don't know how I would work with just the table, for instance to efficiently search for an item or sort the table, without doing that on a list first.
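One common way to avoid the double bookkeeping described in steps 3 and 4 is to let the TableModel wrap the list itself, so the list is the table's only storage and there is nothing to keep in sync. A minimal sketch, assuming a hypothetical Employee row type with id and name columns:

```java
import java.util.ArrayList;
import java.util.List;
import javax.swing.table.AbstractTableModel;

// Hypothetical row type for illustration.
class Employee {
    final int id;
    final String name;
    Employee(int id, String name) { this.id = id; this.name = name; }
}

class EmployeeTableModel extends AbstractTableModel {
    private final String[] columns = {"Id", "Name"};
    private final List<Employee> rows = new ArrayList<Employee>();

    public int getRowCount()           { return rows.size(); }
    public int getColumnCount()        { return columns.length; }
    public String getColumnName(int c) { return columns[c]; }

    public Object getValueAt(int r, int c) {
        Employee e = rows.get(r);
        return (c == 0) ? Integer.valueOf(e.id) : e.name;
    }

    // The list IS the model's storage, so a row index found by searching
    // the model needs no second, parallel collection.
    public Employee getRow(int r) { return rows.get(r); }

    public void addRow(Employee e) {
        rows.add(e);
        fireTableRowsInserted(rows.size() - 1, rows.size() - 1);
    }

    public void removeRow(int r) {
        rows.remove(r);
        fireTableRowsDeleted(r, r);
    }
}
```

A JTable built with `new JTable(new EmployeeTableModel())` repaints itself from the fired events whenever addRow or removeRow is called on the model.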
Are there any best practices in dealing with tables?
Thank you!
Francisco.

Hi Joachim,
What I'm doing now is extending DefaultTableModel instead of subclassing AbstractTableModel. This saves me from implementing methods I don't need, and I inherit methods like addRow from DefaultTableModel. Let me paste the private class:
protected class MyTableModel extends DefaultTableModel {

    // Note: super() copies the data into DefaultTableModel's own Vectors,
    // so this reference goes stale if rows are added or removed later.
    private Object[][] datos;

    public MyTableModel(Object[][] datos, Object[] nombreColumnas) {
        super(datos, nombreColumnas);
        this.datos = datos;
    }

    @Override
    public boolean isCellEditable(int fila, int columna) {
        return false;
    }

    @Override
    public Class<?> getColumnClass(int col) {
        // Assumes at least one row; getValueAt(0, col) fails on an empty table.
        return getValueAt(0, col).getClass();
    }
}
If I understood you correctly, you are suggesting that I register MyTableModel as a ListSelectionListener, so that changes to the list will be observed by the table? In that case, if I add, change, or remove an element of the list, I could add, change, or remove that element in the table.
Another question: is it possible to use the list only to create the table, and from then on manage everything with the table alone, without a list?
Thanks.
Francisco.
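On the follow-up question about searching and sorting without a parallel list: since Java 6, a JTable can delegate sorting and filtering to a TableRowSorter attached to its model, so the model's row order never has to change; convertRowIndexToModel maps view rows back to model rows. A minimal sketch with made-up sample data:

```java
import javax.swing.RowFilter;
import javax.swing.table.DefaultTableModel;
import javax.swing.table.TableRowSorter;

public class SorterSketch {
    public static void main(String[] args) {
        DefaultTableModel model = new DefaultTableModel(
                new Object[][] { {"Ana", 30}, {"Bob", 25} },
                new Object[] {"Name", "Age"});

        // In UI code this is attached with table.setRowSorter(sorter);
        // the underlying model's row order is never touched.
        TableRowSorter<DefaultTableModel> sorter =
                new TableRowSorter<DefaultTableModel>(model);

        // Show only rows whose "Name" (column 0) starts with "A".
        sorter.setRowFilter(RowFilter.regexFilter("^A", 0));

        // View indices map back to model indices when the row object is needed.
        System.out.println("visible rows: " + sorter.getViewRowCount());
        System.out.println("model row of view row 0: "
                + sorter.convertRowIndexToModel(0));
    }
}
```

With a sorter in place, clicking a column header sorts the view, and lookups go through the sorter's index conversion instead of a hand-maintained list.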

Similar Messages

  • Best practice about web documents

    Hi
    I need a handbook (best practices) about web documents.
    can you help me?
    thanks.

    Hi Ignacio,
    try this:
    http://www.sapdesignguild.org/resources/htmlb_guidance/
    and
    http://help.sap.com/saphelp_erp2004/helpdata/en/4e/ac0b94c47c11d4ad320000e83539c3/frameset.htm
    Regards,
    Gianluca Barile

  • Is there any best practice about by-products

    Dear all,
    Is there any best practice about handling by-products in production orders?
    Thanks!

    Hi,
    Have you searched the SCN forum, blog and wiki?
    You may check this: http://wiki.sdn.sap.com/wiki/pages/viewpage.action?pageId=23593837
    Thanks,
    Gordon

  • Require official Oracle Best Practices about PSU patches

    A customer complained about the following
    Your company statements are not clear...
    On your web page - http://www.oracle.com/security/critical-patch-update.html
    The following is stated!
    Critical Patch Update
    Fixes for security vulnerabilities are released in quarterly Critical Patch Updates (CPU), on dates announced a year in advance and published on the Oracle Technology Network. The patches address significant security vulnerabilities and include other fixes that are prerequisites for the security fixes included in the CPU.
    The major products patched are Oracle Database Server, Oracle Application Server, Oracle Enterprise Manager, Oracle Collaboration Suite, Oracle E-Business Suite, PeopleSoft Enterprise Tools, PeopleSoft CRM, JD Edwards EnterpriseOne, JD Edwards OneWorld XE, Oracle WebLogic Suite, Oracle Communications and Primavera Product Suite.
    Oracle recommends that CPUs be the primary means of applying security fixes to all affected products as they are released more frequently than patch sets and new product releases.
    BENEFITS
    * Maximum Security—Vulnerabilities are addressed through the CPU in order of severity. This process ensures that the most critical security holes are patched first, resulting in a better security posture for the organization.
    * Lower Administration Costs—Patch updates are cumulative for many Oracle products. This ensures that the application of the latest CPU resolves all previously addressed vulnerabilities.
    * Simplified Patch Management—A fixed CPU schedule takes the guesswork out of patch management. The schedule is also designed to avoid typical "blackout dates" during which customers cannot typically alter their production environments.
    PROGRAM FEATURES
    * Cumulative versus one-off patches—The Oracle Database Server, Oracle Application Server, Oracle Enterprise Manager, Oracle Collaboration Suite, Oracle Communications Suite and Oracle WebLogic Suite patches are cumulative; each Critical Patch Update contains the security fixes from all previous Critical Patch Updates. In practical terms, the latest Critical Patch Update is the only one that needs to be applied if you are solely using these products, as it contains all required fixes. Fixes for other products, including Oracle E-Business Suite, PeopleSoft Enterprise Tools, PeopleSoft CRM, JD Edwards EnterpriseOne, and JD Edwards OneWorld XE are released as one-off patches, so it is necessary to refer to previous Critical Patch Update advisories to find all patches that may need to be applied.
    * Prioritizing security fixes—Oracle fixes significant security vulnerabilities in severity order, regardless of who found the issue—whether the issue was found by a customer, a third party security researcher or by Oracle.
    * Sequence of security fixes—Security vulnerabilities are first fixed in the current code line. This is the code being developed for a future major release of the product. The fixes are scheduled for inclusion in a future Critical Patch Update. However, fixes may be backported for inclusion in future patch sets or product releases that are released before their inclusion in a future Critical Patch Update.
    * Communication policy for security fixes—Each Critical Patch Update includes an advisory. This advisory lists the products affected by the Critical Patch Update and contains a risk matrix for each affected product.
    * Security alerts—Security alerts provide a notification designed to address a single bug or a small number of bugs. Security Alerts have been replaced by scheduled CPUs since January 2005. Unique or dangerous threats can still generate Security Alert email notifications through MetaLink and the Oracle Technology Network.
    Nowhere in that statement is the Patch Set Update even mentioned. If Oracle intends to recommend to all customers that Patch Set Updates are the recommended means of Patching for Security and Functionality then it should be stated so here!
    Please clarify!
    Where can I find the current information, so that I can use the official Oracle statement as a reference for my Enterprise Practices and Standards document? The individual patch package references you are giving me do not state an Oracle-recommended best practice; they only speak to the specific patch package they describe. These do not help me in making an enterprise statement of practices and standards.
    I need to close the process out to capture a window of availability for Practices and Standards approval.
    Do we have any Best Practice document about PSU patches available for customers?

    cnawrati wrote:
    > A customer complained about the following
    > Your company statements are not clear...
    > On your web page - http://www.oracle.com/security/critical-patch-update.html
    Who is the "your" to which you are referring?
    <snip>
    > Nowhere in that statement is the Patch Set Update even mentioned. If Oracle intends to recommend to all customers that Patch Set Updates are the recommended means of Patching for Security and Functionality then it should be stated so here!
    Um. OK.
    > Please clarify!
    Of whom are you asking for a clarification?
    > Where can I find the current information so that I can use the Official Oracle statement as a reference for my Enterprise Practices and Standards document? The individual patch package references you
    Who is the "you" to which you refer?
    > are giving me do not state Oracle recommended Best Practice, they only speak to the specific patch package they describe. These do not help me in making an Enterprise statement of Practices and Standards.
    > I need to close the process out to capture a window of availability for Practices and Standards approval.
    Be our guest.
    > Do we
    What do you mean "we", Kemosabe?
    > have any Best Practice document about PSU patches available for customers?
    This is a very confusing posting, but overall it looks like you are under the impression that this forum is some kind of channel for communicating back to Oracle Corp anything that happens to be on your mind about their corporate web site and/or policies and practices. Please be advised that this forum is simply a platform provided BY Oracle Corp as a peer-operated user support group. No one here is responsible for anything on any Oracle web site, nor for any content anywhere in the oracle.com domain, outside of their own personal postings on this forum. In other words, you can complain all you want about Oracle's policy, practice, and support, but "there's no one here but us chickens."

  • Best Practice about enabling RD

    After running the Best Practices Analyzer, I've been getting the error that the "server should be configured to allow remote desktop connections".
    See http://technet.microsoft.com/nl-nl/library/ff646929(v=ws.10).aspx
    I've got a GPO which enables RDC for me and works fine. If I go to Remote Settings on the specified server, it's clearly enabled.
    Still, the error kept popping up. Only after I disabled the GPO, fell back to the default (local) off setting, and enabled it manually did the error go away. After this I can apply the GPO again without any errors.
    So the error is gone now, but I'm still curious why this happened. I should be able to configure this through GPOs, right? Am I missing something here?
    Thanks in advance,

    Hi,
    Thank you for posting in Windows Server Forum.
    The settings you have configured are all correct. (For information: the GPO setting "Allow logon through Remote Desktop Services" under User Rights Assignment must be enabled.) From your description it seems the GPO setting was not applied or configured properly, and that is how this issue happened. Just be sure to update the GPO once you perform the steps.
    Just for reference you can go through following article.
    Securing Remote Desktop for System Administrators
    Hope it helps!
    Thanks.
    Dharmesh Solanki
    TechNet Community Support

  • Solution Manager best practices about environments

    Hello,
    we intend to use Solution Manager 4.0.
    My question: do we need a single SM instance (production), or multiple instances (one development SM, where development and customizing will be performed, and one production SM, populated with transport requests coming from the development SM)?
    What are the best practices ?
    Thank you.
    Regards,
    Fabrice

    Dear Fabrice,
    In principle you do not need two instances of Solution Manager; one instance is sufficient for monitoring all the satellite systems.
    However, if you intend to have customized ABAP on Solution Manager, it might be a good idea to do that in a different client of the same instance, keeping that client as a development client.
    Most of the customizing in Solution Manager is not transportable, hence it should be done directly in the productive client.
    Hope this answers your queries.
    Regards
    Amit

  • Best practice about dial-peer creating when using analog lines

    Hi,
    I am trying to find out the best practice for creating dial-peers for analog lines on CME: should I use a trunk group, or create a separate dial-peer for each FXO port? If I use a trunk group, is there any advantage (fewer dial-peers) or disadvantage?
    Thanks!

    The advantage of trunk groups is that a single dial-peer can point to, for instance, the PSTN, rather than multiple dial-peers, each with a different preference, pointing to a separate FXO. Functionally I can't see much difference, so I guess it also comes down to personal preference.

  • What is the best practice about IT Organization?

    Hi IT and SAP Specialist,
    I am the SAP operations team lead at my company, and I'd like to draw up an ISP (Information Strategy Plan) that includes ERP.
    Could you let me know what the best IT organization would be? And, if possible, please advise on my company's organization with regard to ERP.
    I'll describe my company's organization very roughly.
    ERP ->
    Local: two factories - SAP ESS 6.0
    Vietnam: two factories - SAP Business One
    China: three factories - a local Chinese ERP
    IT organization ->
    Local: factory: 3 people; head office: 3 people; legacy (web) programmers: 4 people; SAP sustaining team: 4 people
    Vietnam: 2 people
    China: 3 people
    I think GSI (Global Single Instance) is a good solution for our company. For that I would have to merge the IT organizations, and I would implement SAP ESS 6.0 in the other plants.
    But I don't know whether the benefits justify it. I need somebody's help; please give me some comments.
    Thank you so much.

    Really? Honestly?
    You ask that in a coffee corner?
    How can you even think that there is anyone who would even attempt to give you a 'best' IT structure? There is no such thing. Every organization is different. SAP is just one small part of most IT orgs. What about email? File servers? Desktop support? Network infrastructure? Database support? Do you have everything in SAP? Mail? HR? Customer service? Quality control? Product specs? Asset tracking? Do you use SAP to actually run the manufacturing equipment? Who supports those computers?
    There are consulting firms that get paid big dollars to do organizational reviews and make recommendations to CIOs on staffing levels. It's no small undertaking. Every org is different due to business needs, industry (pharma vs. chemical vs. autos), support levels desired (24 hours? 12 hours?), customer base (worldwide? regional? local?), number of employees, government regulations, etc.
    If you have no idea what the benefit would be for you to merge your organizations, how the heck would we? I would recommend you talk to some local consulting firms, or maybe poll your peers at an industry meeting of some sort. Some universities and colleges have professors who are willing to provide expertise in this.
    FF

  • ADF Best Practice : About user session management in ADF BC

    Hi,
    I'm using ADF BC and I want to manage information specific to a particular user.
    In the developer's guide at §9.10 it is advised to use APIs relative to DBTransaction class through the getSession and getUserData methods.
    I have several questions :
    * are Session objects returned by ApplicationModuleImpl.getSession() and DBTransaction.getSession() the same ?
    * is it safe to manage user information by using the map returned by the oracle.adf.share.ADFContext.getSessionScope() method ?
    * finally, which is the recommended approach and what are the caveats of each ?
    Thanks a lot,
    Seb.

    Here is the code fragment:
    /**
     * Return the jhsUser object.
     * For this method to return an object, the user object must have called setUser
     * on the application module.
     */
    public Object getUser() {
        Object user = ADFContext.getCurrent().getSessionScope().get(JhsUser.JHS_USER_KEY);
        if (user != null) {
            return user;
        }
        // code below for backwards compatibility
        if (getSession() == null) {
            return null;
        }
        return getSession().getUserData().get(JhsUser.JHS_USER_KEY);
    }
    /**
     * Stores the user object as AM session user data.
     * @deprecated the user is now retrieved using a pull mechanism with ADFContext,
     * instead of push.
     */
    public void setUser(Object user) {
        getSession().getUserData().put(JhsUser.JHS_USER_KEY, user);
    }
    Notice the deprecated method setUser.
    Seb.

  • Best Practices about context

    Hi SAP developers,
    I need somebody with a lot of Web Dynpro experience. My question is the following: under what conditions is it advisable to map context nodes/context attributes, and when should I use the construct wdThis.wdGetTestComp().wdGetContext()....?
    Greets Ruben

    Hi,
    Suppose you have two views, and you want to display (or use) the content of the first view in the second view. Then you have to map the context of the first view to the context of the component controller, and map the context of the second view to the component controller as well.
    What is happening here is that you are placing the attributes of the view in a global context (i.e., the component controller context), because you cannot access the values of one view from another directly.
    As for the code you have written: you write that code when you want to get or set attribute values in the component controller's context.
    Regards,
    Satya.

  • Best Practice : how 2 fetch tables, views, ... names and schema

    hi,
    I am looking for the best practice for reading a database's catalog.
    I have seen that I can select from system tables (or views) such as DBA_TABLES, DBA_VIEWS, or DBA_CATALOG, but is that the best way to grab this information?
    (I ask this question because it seems strange to me to get the table names using a simple select, but column info using a specialized function, OCIDescribeAny(); this does not look like a coherent API...)
    thanks for your advice
    cd

    In the same vein: why use OCIDescribeAny instead of doing an appropriate select on DBA_TAB_COLUMNS?
    cd

  • Best Practice: Usage of the ABAP Packages Concept?

    Hi SDN folks,
    I've just started on a new project. I have significant ABAP development experience (15+ years), but one thing that I have never seen used correctly, on any of the projects I have worked on, is the package concept in ABAP.
    I would like to define some best practices about when we should create packages and how they should be structured.
    My understanding of the package concept is that packages allow you to bundle all of the related objects of a piece of development work together. On previous projects - and almost every project I have ever worked on - we just had packages ZBASIS, ZMM, ZSD, ZFI and so on. But this to me is a very crude usage of packages; it really seems we have not moved past the 4.6 usage of the old development class concept, and it means that packages do not really add much value.
    I read in the SAP PRESS book Next Generation ABAP Development (Rich Heilman, Thomas Jung; I only have the 1st edition) that we should use packages to define separation of concerns for an application. So it seems they are recommending that for each and every application we write, we define at least three packages: one for model, one for controller, and one for view-based objects. It occurs to me that following this approach will lead to a tremendous number of packages over the life cycle of an implementation, which could potentially lead to confusion and so also add little value. Is this really the best-practice approach? Has anyone tried it across a full-blown implementation?
    As we are starting a new implementation on 7 EHP2, I would really like to get the most out of the functionality that is provided. I wonder what experience others have with the definition of packages.
    One possible usage occurs to me: you could define the packages as a mirror image of the application's business object class hierarchy (see below). But perhaps this is overcomplicating their usage, and it would lead to issues later in terms of transport conflicts etc.:
                                          ZSD
                                            |
                    ZSOrder    ZDelivery   ZBillingDoc
    Does anyone have any good recommendations for the usage of the ABAP Package concept - from real life project experience?
    All contributions are most welcome - although please refrain from sending links on how to create packages in SE80
    Kind Regards,
    Julian

    Hi Julian,
    I have struggled with the same questions you are addressing. On a previous project we tried to model based on packages, but during the course of the project we encountered some problems that grew over time. The main problems were:
    1. It is hard to enforce rules on package assignments.
    2. With multiple developers on the project and limited time, we didn't have time to review package assignments.
    3. Developers would click away warnings that an object was already part of another project and just continue.
    4. After go-live, the maintenance partner didn't care.
    So my experience is that it is a nice feature, but only from a high-level design point of view. In real life it gets messy and, above all, it doesn't add much value to the development. On my new assignment we are just working with packages based on functional area, and that works just fine.
    Roy

  • Slow startup of Java application - best practices for fine-tuning the JVM?

    We are having problems with a java application, which takes a long time to startup.
    In order to understand our question we better start with some background info. You will find the question(s) after that.
    Background:
    The setup is as follows:
    In a client-server solution we have a fat client on Windows XP running Java 1.6.0_18 (Sun JRE). The fat client contains a lot of GUI and connects to a server for DB access. Client machines are typically 1 to 3 years old (there are problems even on brand-new machines). They have the client version of the JRE, Standard Edition, installed (Java SE 6 update 10 or better). Pretty much the usual stuff so far.
    We have done a lot of profiling on the client code, and yes, we have found parts of our own Java code that need improving; we are all over this. The server side seems OK, with good response times. So far we haven't found anything pointing to shaky net connections, endless loops in the Java client code, or the like.
    Still, things are not good. Starting the application takes a long time. too long.
    There are many complicating factors, but here is what we think we have observed:
    There is a problem with cold vs. warm starts of the application. Apparently, after a reboot of the client PC things are really, really bad, and it sometimes takes up to 30-40 seconds to start the application (until we arrive at the start GUI in our app).
    If we run our application, close it down, and then restart without rebooting, things are a lot better. It then usually takes something like 15-20 seconds, which is "acceptable". Not good, but acceptable.
    Any ideas why?
    I have googled it, and some links seem to suggest that the reason could be the disk cache - where vital jars are already in the disk cache on the warm start. Does that make any sense? Virus scanners presumably run in both cases.
    People still think that 15-20 seconds on the warm start is an awfully long time, even though there is a lot, a lot, of functionality in the application.
    We got a suggestion to use IBM's JRE, as it can do some tricks (not sure which) our Sun JRE can't concerning the warm- and cold-start problem. But that is not an option for us. And no one has come up with any really good suggestions for the Sun JRE so far.
    On the Java Quick Starter (JQS), which improves initial startup time for most Java applets and applications: might that be helpful? People on the internet seem more interested in uninstalling the thing than actually installing it, though. And it seems very proprietary - can we even feed our jar files to it?
    We could obviously try to "hide" the problem in some way and make the application "seem" quicker, since perceived performance can be just as good as actual performance. But that does seem like a bad solution. So for the cold start we will probably try reading the jar files beforehand, to have them in the disk cache before our application starts, and see if that helps.
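    The jar pre-reading idea above can be as simple as streaming each jar through a small buffer once, purely for the side effect of warming the OS disk cache before class loading starts; whether it actually helps has to be measured on real cold boots. A minimal sketch (the jar locations passed on the command line are whatever your installer uses):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class JarPrefetcher {

    /** Reads a file sequentially and discards the bytes; returns the byte count.
     *  The only purpose is the side effect of pulling the file into the OS
     *  disk cache before real class loading begins on a cold boot. */
    public static long prefetch(File file) throws IOException {
        byte[] buf = new byte[64 * 1024];
        long total = 0;
        InputStream in = new FileInputStream(file);
        try {
            int n;
            while ((n = in.read(buf)) > 0) {
                total += n;
            }
        } finally {
            in.close();
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        for (String arg : args) {
            System.out.println(arg + ": " + prefetch(new File(arg)) + " bytes");
        }
    }
}
```

    Run over the application's jars from a launcher script before starting the app itself; on a warm cache the loop finishes almost instantly, so it costs little on repeat runs.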
    Still, OK, the cold start is the real killer, but the warm start isn't exactly wonderful either.
    People have suggested that we read more on the JVM and performance.
    java.sun.com/javase/technologies/performance.jsp
    java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html
    the use of JVM flags "-Xms" "-Xmx" etc etc.
    And here comes the question .. da da ...
    Concerning various suggested reading material.
    It is very much appreciated, but we would like to ask people here whether it is possible to get more specific pointers to where the gold might be buried.
    I.e., in an ideal world we would have time to read and understand all of these documents in depth. However, in this less-than-ideal world we are also doing a lot of very time-consuming profiling in our own Java code.
    E.g., Java garbage collection is a huge subject, and JVM settings also. Sure, in the end we will probably have to go through all of this very thoroughly. But for now we are hoping for some heuristics on what other people do when facing a problem like ours.
    Young generation, large memory pages, garbage collection threads etc. all sound interesting - but where would you start?
    If you don't have enough info to decide: what kind of profiling would you run, and which JVM setting would you then adjust in your trials?
    In this pressed-for-time scenario, ignorance is not bliss, but it makes it hard to pinpoint the JVM parameter or parameters to adjust. So some good pointers from experienced JVM "configurators" will be much appreciated!
    Actually, if we can establish that fine-tuning these parameters is a good idea, it will certainly also be much easier to allocate the time for doing so - reading, experimenting etc. - in our project.
    So, all in all, what kind of performance improvement can we hope for? 5 out of 20 seconds on the warm start? Or is it 10% nitpicking? What's the ballpark figure for what we can hope to achieve here, given our setup? What do you think, based on the above?
    Maybe someone out there has fine-tuned JVM parameters in a similar PC environment, with similar fat clients? "Fine-tuning so-and-so gave 5 seconds, so start your work with these one or two parameters."
    Something like that - some best practices? That's what we are hoping for.
    best wishes
    -Simon

    Thanks for helpful answer from both you and kajbj.
    The app doesn't use shared network drives.
    > What are you doing between main starting to execute and the UI being displayed?
    Basically, calculating what to show in the UI. Accessing the server - not so much; there are some reads from a cache, but the profiling doesn't indicate that this should be a problem. Sure, I could shift the startup time to some other slot, but so far I haven't found a place where the end user wouldn't be annoyed.
    > Caching of something would seem most obvious. Normal VM stuff seems unlikely.
    With profiling I basically find that "everything" takes a lot longer in the cold-start scenario. Some of our local Java methods are going to be rewritten following our review. But what else can be tuned? You guys don't think the Java Quick Start approach, with more jars in the disk cache, will give something? And how should that be done - what do people do?
    I.e., for the class loader I read something about:
    1. the bootstrap class loader
    2. the extensions class loader
    3. the system class loader
    and I am wondering whether this has something to do with the cold-start problem.
    The extensions class loader loads the code in the extensions directories (<JAVA_HOME>/lib/ext).
    So, should we move app classes to ext? Put them in one jar file? (We have many.) Any best practice about that?
    Otherwise it seems to me that it must be about fine-tuning the JVM.
    I imagine that it is a question of:
    1. the right heap size
    2. the right garbage collection scheme
    Googling heap size for XP, CHE22 writes:
    "You are right; -Xms1600M works well, but -Xms1700M bombs"
    Is that a best practice, or what?
    On garbage collection there are numerous posts, and much "masters of the Java black art", IMHO. And according to profiling, GC is not really that much of a problem anyway. Still,
    based on my description I was hoping for a short reply like "try setting these two parameters on your XP box, it worked for me", or something like that. With no takers on that one, I fear people are saying that there is nothing to be gained there?
    we read:
    [ -Xmx3800m -Xms3800m
    Configures a large Java heap to take advantage of the large memory system.
    -Xmn2g
    Configures a large heap for the young generation (which can be collected in parallel), again taking advantage of the large memory system. It helps prevent short lived objects from being prematurely promoted to the old generation, where garbage collection is more expensive.
    Unless you have problems with pauses, try granting as much memory as possible to the virtual machine. The default size (64MB) is often too small.
    Setting -Xms and -Xmx to the same value increases predictability by removing the most important sizing decision from the virtual machine. On the other hand, the virtual machine can't compensate if you make a poor choice.
    The -XX:+AggressiveHeap option inspects the machine resources (size of memory and number of processors) and attempts to set various parameters to be optimal for long-running, memory allocation-intensive jobs]
    So is setting -Xms, -Xmx, and -XX:+AggressiveHeap
    best practice? What kind of performance improvement should we expect?
    Concerning JIT:
    I read this one
    [the impact of the JIT compiler is obvious on the graph: at startup the time taken is around 500us for the first few values, then quickly drops to 130us, before falling again to 70us, where it stays for 30 minutes.
    For this specific issue, I greatly improved my performance by configuring another VM argument: I set -XX:CompileThreshold=50]
    The size of the cache can be changed with
    -Xmaxjitcodesize
    This sounds like you should do something with JIT args, but reading
    // We disable the JIT during toolkit initialization. This
    // tends to touch lots of classes that aren't needed again
    // later and therefore JITing is counter-productive.
    java.lang.Compiler.disable();
    However, finding
    the sweet spots for compilation thresholds has been tricky, so we're
    still experimenting with the recompilation policy. Work on it
    continues.
sounds like there is no such straightforward path; it all depends...
Ok, it's good when
    [Small methods that can be more easily analyzed, optimized, and inlined where necessary (and not inlined where not necessary). Clearly delineated uses of data so that usage patterns and lifetimes are apparent. ]
    but when I read this:
    [The virtual machine is responsible for byte code execution, storage allocation, thread synchronization, etc. Running with the virtual machine are native code libraries that handle input and output through the operating system, especially graphics operations through the window system. Programs that spend significant portions of their time in those native code libraries will not see their performance on HotSpot improved as much as programs that spend most of their time executing byte codes.]
I have the feeling that we might not be able to improve performance that way?
    Any comments?
    otherwise i was wondering about
    -XX:CompileThreshold=50 -Xmaxjitcodesize (large, how large?)
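A hedged sketch of how those JIT flags might be combined on the command line. The values and the `MyApp` class are placeholders to experiment with, not recommendations; on HotSpot the code-cache size is usually spelled `-XX:ReservedCodeCacheSize`, with `-Xmaxjitcodesize` as a legacy alias:

```shell
# Hypothetical invocation combining the JIT flags discussed above.
# Sizes and class name are placeholders, not tuned recommendations.
java -server \
     -XX:CompileThreshold=50 \
     -XX:ReservedCodeCacheSize=128m \
     MyApp
```

Measure before and after: a lower compile threshold trades startup compilation cost for earlier peak speed, and whether that helps depends on the workload.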
Somehow, we still feel that someone out there should have experienced similar problems? But obviously there is no guarantee that that someone should surf by here!
In C++ we used to just write everything ourselves. Here it seems to be a question of the right use of other people's stuff?
You are kind of hoping for a shortcut, so you don't have to read an endless number of documents, but can find a short document that actually addresses your problem ... well.
    -Simon
    Edited by: simoncpm on Mar 15, 2010 3:43 PM

  • Best practice for MRP

    Hi,
does someone know where to get best practices about running MRP, or have a good tutorial?
I have set up my material master data (e.g. MRP 1, 2, 3, 4 tabs) but am not really sure how to continue.
I'm a little bit confused about the order in which I have to execute the transactions, e.g. MD20/MDAB, MD01/MDBT, MD15, MD05, etc.
Could someone help me out a little bit?
    Thanks in advance.

    Hi,
The sequence of steps you have written is the correct one.
1) MD20 (maintain a planning file entry manually) / MDAB (background job): this is the first step. During a total planning run, the system considers only those materials for which an entry is maintained here. You do not need to maintain planning file entries manually each time: if your plant is activated for MRP (T.code OMDU), the entries are managed automatically by the system. For safety, you can still use MDAB for scheduled maintenance of the planning file.
2) MD01: total planning run.
3) MD15: convert planned orders to purchase requisitions in mass.
4) MD05: a report only; it shows the result of the last MRP run.
    Regards,
    Dhaval

  • What are the best practices for the RCU's schemas

    Hi,
I was wondering if there are any best practices about the RCU schemas created with BIEE.
I already have Discoverer (and Application Server), so I have a metadata repository for the Application Server. I will upgrade Discoverer 10g to 11g, so I will create a new schema with RCU in the metadata repository (MR) of the Application Server. I'm wondering if I can put the BIEE RCU schemas in the same database.
    Basically,
1. Is there a standard for the PREFIX?
2. If I have multiple Fusion components in the same database, will I have multiple PREFIX_MDS schemas? Can they share the same PREFIX, or do they all need a different prefix?
For example: DISCO_MDS and BIEE_MDS, or can I have a single DEV_MDS schema that is valid for both Discoverer and BIEE?
    Thank you !

    What are the best practices for exception handling in n-tier applications?
The application is a fat client based on the MVVM pattern with the .NET Framework.
One approach would be to catch all exceptions at a single point in the n-tier solution, log them, and display user-friendly messages to the user.
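The question is about .NET/MVVM, but the single-point idea itself is platform-neutral. A minimal sketch in Java terms (class name and message text are mine): one last-resort handler that logs the technical detail and shows the user a friendly message.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of "catch all exceptions at a single point": install one
// last-resort handler for the whole application. Class name and the
// user-facing message are hypothetical.
public class GlobalExceptionHandler {

    private static final Logger LOG =
            Logger.getLogger(GlobalExceptionHandler.class.getName());

    // Install the handler once, early in application startup.
    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, error) -> {
            // Full technical detail goes to the log for developers...
            LOG.log(Level.SEVERE, "Unhandled error on thread " + thread.getName(), error);
            // ...while the user sees a friendly, non-technical message.
            System.err.println("Something went wrong. Please try again or contact support.");
        });
    }

    public static void main(String[] args) {
        install();
        // Any exception that escapes now reaches the single handler above.
        throw new IllegalStateException("demo failure");
    }
}
```

In a GUI client you would route the friendly message to a dialog instead of stderr; the split between "log everything" and "show little" stays the same.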
