Improving persistent class performance with on-demand DB reads?

Hi,
Something that has been troubling me for some time is the data access design of persistent classes. A persistent class loads the ENTIRE data set when it is instantiated.
To me this defeats one of the advantages of get/set methods. What I would like is to be able to read data when it is first accessed.
For example: a purchase order header can be loaded in a few ms, but loading it together with its items can take 10 times as long - even more if the items are implemented as persistent classes themselves. If the items are only needed half the time, a significant performance gain is realized by only reading them the first time the GET_ITEMS method is called. In my own persistence implementations using BI_PERSISTENT (nothing to do with ABAP persistent classes), I implement this using private attributes to buffer the data, along the lines of:
method get_items.
  if m_items[] is initial.
    m_items = <read and/or instantiate items>. "read on first access only
  endif.
  re_items = m_items.
endmethod.
Using ABAP Persistent Objects this does not appear to be possible since they insist on loading everything up front in the DB mapping or IF_OS_STATE~INIT methods. Is some kind of on-demand reading possible using ABAP Persistent Classes?
Any input appreciated
Cheers,
Mike

Hi Mike,
Yes, I am also very interested in this topic. While developing my first (and only moderately complex) persistence application, I expected slightly different behaviour of linked persistent objects than you describe above. According to the book 'Object Services in ABAP' and the lazy loading paradigm mentioned there, referenced persistent objects should be instantiated as representative objects only, containing no data; the data should be physically loaded from the database only once it is really needed (i.e. accessed or explicitly instantiated). This approach should prevent the mass chained instantiation of many linked persistent objects, most of which will probably never be needed.
This is what they say about instantiation of persistent objects that are linked by persistent references.
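For illustration, a minimal sketch of that lazy behaviour (the classes and GET methods here are hypothetical, not taken from the book):

data: ref_order   type ref to zcl_order,   "hypothetical persistent classes
      ref_partner type ref to zcl_partner,
      lv_name     type string.

ref_order   = zca_order=>agent->get_persistent( i_order_id = '4711' ).
ref_partner = ref_order->get_partner( ). "representative object only - no DB read yet
lv_name     = ref_partner->get_name( ).  "first real access loads the data from the DB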
However, since persistent objects do not allow creating 1:N relationships (the classic HEADER->ITEMS) via persistent references automatically, this kind of relationship has to be maintained manually, and therefore I would expect you to have full control over when it is accessed and instantiated for the first time...? What I want to ask here is: is there any suitable approach that allows maintaining a 1:N relationship fully or semi-automatically, and which (as a disadvantage) causes the 'full' instantiation problem for you? Am I missing something? There are also some threads (e.g. Table attributes and persistent classes) regarding persistent mapping of table-like attributes, but no relevant answers (this would solve my problem, as the table of persistent references could then be saved).
So far I have found almost nothing about how the 1:N relationship should be correctly built (only 1:1 or M:1 is supported via persistent references), so I am applying the following approach in my app. Any comments are welcome (excuse some typos, as this is written in Notepad, not on a running system):
ref_header = zca_header=>agent->create_persistent( ).
guid_of_header_class = ref_header->get_cl_guid( ). "cl_guid is set in IF_OS_STATE~INIT
guid_of_header_instance =
  zca_header=>agent->if_os_ca_service~get_oid_by_ref( ref_header ).
ref_header->set_inst_guid(                  "keep header inst. ref. for future lookups
  i_inst_guid = guid_of_header_instance ).
lt_r_items = ref_header->create_items(      "lt_r_items = table of item references
  i_cl_guid   = guid_of_header_class
  i_inst_guid = guid_of_header_instance
  it_items    = lt_items ).
and the method zcl_header->create_items then looks somewhat like this:
loop at it_items assigning <it_items>.
  ref_item = zca_item=>agent->create_persistent( ).
  ref_item->set_cl_guid( i_header_cl_guid = i_cl_guid ).
  ref_item->set_inst_guid( i_header_inst_guid = i_inst_guid ).
  ref_item->set_some_other_attrib(
    i_some_other_attrib = <it_items>-some_other_attrib ).
  append ref_item to rt_r_items.            "rt_r_items = table of item references
endloop.
I have created a secondary index on the items table for header_cl_guid and header_inst_guid. This is used when re-loading already saved items for a particular header entry: the query service is called with the header_cl_guid and header_inst_guid of the actual header entry as filter criteria for the item persistent objects, and a table of item references is returned. Thus the instance attribute lt_r_items of ref_header is filled with references to the relevant items again.
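A minimal sketch of that re-load, assuming the standard Object Services query API (the filter attributes are the ones above; declarations and error handling trimmed):

data: query_manager type ref to if_os_query_manager,
      query         type ref to if_os_query,
      lt_r_items    type osreftab.

query_manager = cl_os_system=>get_query_manager( ).
query = query_manager->create_query(
  i_filter = 'HEADER_CL_GUID = PAR1 AND HEADER_INST_GUID = PAR2' ).

"the secondary index on (header_cl_guid, header_inst_guid) supports this selection
lt_r_items = zca_item=>agent->if_os_ca_persistency~get_persistent_by_query(
  i_query = query
  i_par1  = guid_of_header_class
  i_par2  = guid_of_header_instance ).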
Thank you
Michal

Similar Messages

  • Persist class structure with XMLEncoder

    Hi,
    I am trying to persist the following class structure with an XMLEncoder:
    public class Header {
         private HeaderItem item;
         public HeaderItem createHeaderItem() {
              //do some initialization
              item = new HeaderItem();
              return item;
         }
         public HeaderItem getHeaderItem() {
              return item;
         }
         //no setter here
    }
    public class HeaderItem {
         private String name;
         public void setName(String name) {
              this.name = name;
         }
         public String getName() {
              return name;
         }
    }
    //creation
    Header h = new Header();
    HeaderItem item = h.createHeaderItem();
    item.setName("test");
    I don't want to use setters, so I need custom PersistenceDelegates to deal with the factory method. I have read the Sun tutorial and tried several things
    but I don't succeed in persisting this structure. Can anyone help me?
    kind regards,
    Christiaan

    Does no one know a solution for this?

  • Improving hard drive performance with onboard RAID.

    After completing a recent PPBM5 test and being disappointed with the results, I would like to improve the disk performance of my system and would appreciate the thoughts of the forum.
    System:
    MB: Asus P6X58D-E
    CPU: Intel Core i7 930 2.8GHz
    RAM: 12GB (6 x 2GB) Corsair Dominator DDR
    Graphics: 1.28GB Asus GTX 470 3.3GHz
    My current hard drive set-up is:
    - OS (Win 7 64bit): WD Velociraptor, 300GB, 10,000rpm
    - Storage: Seagate Barracuda, 1.5TB, 7,200rpm, SATA II 3Gb/s
    - Media: Seagate Barracuda, 1.5TB, 7,200rpm, SATA II 3Gb/s
    - Project/Source Video: Samsung Spinpoint F3, 1TB, 7,200rpm, SATA II 3Gb/s
    - Media Cache/Page File/Export: Samsung Spinpoint F3, 1TB, 7,200rpm, SATA II 3Gb/s
    - Drive Bay (back-up and archive): Various bare drives
    My motherboard has 6 x 3Gb/s and 2 x 6Gb/s SATA ports and onboard RAID, using an Intel ICH10R Southbridge controller. The motherboard also has USB 3.0. I have a DVD drive using one of the SATA ports.
    Although I understand the principle of RAID, I've never used it before. So, what would be the best way of improving performance?
    I presume that as all my drives currently have data on them, I'll have to copy this data to another drive before formatting them and then creating the RAID array.
    I look forward to hearing your thoughts and suggestions.
    Many thanks.

    Thanks Harm. I've gone through your suggestions.
    As my computer has to function for several other tasks in addition to video editing, I realise that I will have to compromise somewhat on performance to make sure that it's still usable for other things.
    Using Black Viper's tips I've reduced the number of running processes from 100+ to 89.
    Indexing and compression are turned off for all drives except Storage, which isn't used for video-related tasks - although it wouldn't surprise me to learn that having any indexing turned on drains resources to some degree.
    Discs all defragged.
    Media Cache cleared.
    Any further advice on how to optimise the hard drives would be very much appreciated.

  • How can I improve NFSv3 file performance with MythTV thru a NAS Drive?

    Basically, I have both my MythTV combined frontend/backend machine and a NAS Drive connected via gigabit ethernet connections.
    The NAS drive stores my recordings, shared via NFSv3 (because that's all the NAS drive came with, unfortunately).
    Now, my problem is that I can record or watch multiple *separate* HD files simultaneously, travelling back and forth from the NAS without problem.
    However, when I try to watch the same file *while it's still being recorded*, after around 20 minutes the network connection is full/overloaded (between the mythserver and the NAS drive) and playback grinds to a halt (it doesn't stop the recording, just the playback).
    It used to happen much quicker when I was only using a 100mb connection.
    As I've said, this doesn't happen at ALL with separate files, only if I watch the *same* file that's being recorded at the same time.
    Is there any way (bar just keeping the recordings on a hard drive within my backend) that I can stop this problem? Is it because I use NFSv3 on the NAS drive? Are there any options I can change in the /etc/exports file on the NAS drive that would help? Thanks
    If you want me to post what my NAS drive (Readynas Duo V2) has in its /etc/exports file, I will

    Post the file, but I dunno how many Archers are using NFSv3... ;/

  • Is anyone facing a similar problem of visible lag while opening apps with 3G data on? It seems Apple has made the 5s slow to improve iPhone 4 performance. The 5s was very smooth on 7.0.6 even with 3G data on. Is this a bug? Is Apple going to fix this soon?


    I have the same problem. I tried all the options - Reset All Settings, Erase All Content and Settings, upgrading to iOS 8.1, restoring via iTunes - but the problem persists.
    When I turn cellular data or Wi-Fi on, there is a lag while opening apps. But if I turn data/Wi-Fi off, the phone becomes super fast. I have checked with my friends; they are not facing this issue. Not sure if Apple knows about this.

  • Problems with generating persistent classes

    I've been following the tutorial for generating persistent classes from a
    DB. I'm not having much luck:
    First, with rd-schemagen, how do you tell it to only work on a specific
    schema? I run "rd-schemagen -file schema.xml NBS_ODS_101", but it still
    generates the schema file for all schemas in the DB. Is there a usage
    option for the tool (I haven't been able to find it yet)?
    Second, I have the following tables in my DB: ACT, ACT_ID, ENTITY,
    ENTITY_ID. When rd-reversemappingtool runs on these tables, it creates an
    ID class for ACT (ActId) which conflicts with the class generated for
    ACT_ID (ActId). Since renaming the tables is not an option and I really
    don't want to have to rename classes and change the mapping file every
    time I regenerate, what is a solution for this problem?
    Third, if I do the latter above so I can run the importtool and then I
    run "rd-importtool test\test.mapping", it runs successfully for a bit
    while spitting out information until I get this:
    Exception in thread "main" java.lang.NullPointerException
    at com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.mapForeignKey(ImportTool.java:336)
    at com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.mapField(ImportTool.java:207)
    at com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.importMappings(ImportTool.java:78)
    at com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.run(ImportTool.java:408)
    at com.solarmetric.rd.kodo.impl.jdbc.meta.compat.ImportTool.main(ImportTool.java:385)

    Abe White <[email protected]> wrote in news:[email protected]:
    > > First, with rd-schemagen, how do you tell it to only work on a specific schema? I run "rd-schemagen -file schema.xml NBS_ODS_101", but it still generates the schema file for all schemas in the DB. Is there a usage option for the tool (I haven't been able to find it yet)?
    > Try using -schemas <comma-separated list of schema names>.
    > I apologize for the documentation in this area. We're going to upgrade the tool and the documentation to a more recent version from our internal R&D codebase when our 2.5 release comes out in the next couple of weeks. This release will also include a system for customizing the tool's output in many more ways.
    This works:
    rd-schemagen -file schema.xml -indexes false -schemas NBS_ODS_101
    but this does not:
    rd-schemagen -file schema.xml -indexes false -schemas NBS_ODS_101,NBS_SRT_101
    Exception in thread "main" java.lang.IllegalArgumentException: com.solarmetric.r[email protected] = NBS_ODS_101,NBS_SRT_101: java.lang.ArrayIndexOutOfBoundsException: 1
    at serp.util.Options.setInto(Options.java:206)
    at serp.util.Options.setInto(Options.java:168)
    at com.solarmetric.rd.conf.Configurations.populateConfiguration(Configurations.java:144)
    at com.solarmetric.rd.kodo.impl.jdbc.schema.SchemaGenerator.main(SchemaGenerator.java:690)
    > > Second, I have the following tables in my DB: ACT, ACT_ID, ENTITY, ENTITY_ID. When rd-reversemappingtool runs on these tables, it creates an ID class for ACT (ActId) which conflicts with the class generated for ACT_ID (ActId).
    > This is a bug, and will also be fixed with 2.5. I can't even think of a good way to tell you to work around it for now, unfortunately.
    I renamed the ID classes to ActOid and EntityOid and changed the .jdo file to reflect that. Do you see any problems with this strategy?
    > > Third, if I do the latter above so I can run the importtool and then I run "rd-importtool test\test.mapping", it runs successfully for a bit while spitting out information until I get this:
    > > Exception in thread "main" java.lang.NullPointerException
    > Can you please send the generated .mapping, .jdo, and .java files? Unless you want to wait until the 2.5 improvements to debug.
    I will send you all the files in a zip file by email.

  • Improve Portal performance with BI-Java by adding additional server node.

    Hi SAP Experts,
    I wonder whether anyone has attempted this before.
    In an environment with BI-Java and the Portal Java stack installed on the same server, is it possible to improve system performance by adding an additional portal server node? Or is there any improvement seen by adding the extra server node?
    I am after suggestions and experience on how the portal performance can be improved, with consideration of BI-Java sharing Portal resources.
    Any comment will be most appreciated.

    Hi Jim,
    We've this configuration at our site, Portal and BI running together (not federated).  Recommendations would be to:
    1. Set a sensibly large max heap size on each server node of Portal (at least 2GB if not larger) and implement several nodes at least.  For example we have 4 x physical nodes, each running 3 server nodes of 2GB max heap size apiece so a total of 12 nodes.
    2. Implement the BI Safety Belt for large result sets (Note 1127156)
    3. Implement the latest JVM / JDK you can on your environment.  We found much improved performance after implementing JDK SR10 with the J9/2.3 options enabled e.g.,  -Xjvm:j9vm23
    -Xsoftrefthreshold0
    4. Patch your BI components on the Portal (BIBASES, BIWEBAPP etc) to the latest available patch level for your Portal SP level
    5. Make sure all your RFC connections are load balanced to the backend and tuned for the kind of load you expect on your reports, and the BI is sized appropriately in terms of app servers and dialog work processes, RFC-enabled login groups in SMLG etc.
    6. Consult notes 1048691 (especially useful; options &PROFILING=X&TRACE=X added to reports, for example, for performance tracing), 937697, 1021921, 948158 for information about problem analysis for this scenario
    7. Implement SAP Web Dispatcher for Portal to reduce load on static mime files
    8. Tune the Portal application.  There's plenty of information on SDN related to Portal performance tuning.
    I hope this helps!
    Cheers,
    Marc

  • How to improve Oracle Veridata compare pair performance with tables that have big (30-40MB) CLOB/BLOB fields?


    Can you use insert .. returning .. so you do not have to select the empty_clob back out?
    [I have a similar problem, but I do not know the primary key to select on; I am really looking for an atomic insert-and-fill-CLOB mechanism. Someone said you can create a CLOB, fill it and use that in the insert, but I have not seen an example yet.]

  • Change documents with persistent class

    Hi,
    I have created a persistent class and want to use a set-method to change data in the mapped table - nothing special.
    But: does anybody know if there is a possibility to write change documents for the changed field without calling FMs like CHANGEDOCUMENT_SINGLE_CASE?
    There would be the event if_os_state~changed, but unfortunately no information about the changed field itself is available in this event. Is it possible to redefine events?
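    For reference, a minimal sketch of listening to that event (assuming, as stated above, that if_os_state declares a CHANGED event; since it carries no field information, the handler itself would have to compare old and new attribute values):

    class lcl_change_listener definition.
      public section.
        methods on_changed for event changed of if_os_state
          importing sender.
    endclass.

    class lcl_change_listener implementation.
      method on_changed.
        "sender is the changed persistent object; which attribute changed
        "is not part of the event, so buffered old values would have to be
        "compared against the current ones here before calling the
        "change-document FMs.
      endmethod.
    endclass.

    "registration, e.g. right after instantiating the objects:
    data lo_listener type ref to lcl_change_listener.
    create object lo_listener.
    set handler lo_listener->on_changed for all instances.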
    Thanks a lot for each answer!
    Best regards
    Jörg

    Hi Narin,
    thanks for your answer!
    The first point is clear to me (although I don't really feel happy about it), but I have problems understanding your second point because I don't really know what "business object events" are... I'll try a different phrasing, maybe this is better:
    My persistent class inherits the event if_os_state~changed. Can I redefine this event in my class so that I can add an additional parameter carrying the information about the changed attribute?
    Jörg

  • Improve InDesign performance with a huge number of links?

    Hi all,
    I am working on a poster infographic with a huge number of links, specifically around 4500. I am looking to use InDesign over Illustrator for the object styles (vs. graphic styles in Illustrator) and for the interactive capacity.
    The issue I am having is InDesign's performance with this many links. My computer is not maxed out on resources when InDesign is going full power, but InDesign is still very slow.
    So far, here are the things I have tried:
    Display performance set to Fast
    Switching from link AI files to SVGs
    Turning off preflight
    Turning off live-draw
    Turning off save preview
    Please let me know if you have any suggestions on how to speed up InDesign! See below for system specs.
    Lenovo w520
    8GB DDR3 @1333mhz
    nVidia 2000M, 2GB GDDR
    Intel Core i7 2760QM @ 2.4GHz
    240GB Samsung 840 SSD
    Adobe CS6
    Windows 8
    The only other thing I can think to try is to break up the poster into multiple pages/docs and then combine it later, but this is not ideal. Thank you all for your time.
    Cheers,
    Dylan Halpern

    I am not a systems expert, but I wonder if you were to hide the links and keep InDesign from accessing them, whether that might help. Truly just guessing.
    Package the file so all the graphics are in a single folder. Then set the File Handling preferences so InDesign doesn't yell at you when it can't find the links.
    Then quit InDesign and move the folder to a new place. Then reopen InDesign. The preview of the graphics will suck, but it might help. And one more thing: close the Links panel.

  • Updating linked fields in a persistent class

    Hi all,
    If a field of a persistent class is updated, what is the best way to update a related field?
    Scenario:
    A persistent class with public GET/SET access to a table field STUFF, and public GET access to a field CATEGORY.
    Whenever SET_STUFF is called from the outside, the class should also re-evaluate and update CATEGORY.
    The easiest is to modify the SET_STUFF method to determine the new category and update it if necessary. However, GET/SET methods are always overwritten whenever the class is regenerated, which happens all too easily. Can this be done a bit more elegantly?
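    For illustration, the straightforward but fragile variant described above would look something like this (a minimal sketch; derive_category is a hypothetical helper, and the generated body of set_stuff is omitted):

    method set_stuff.
      "... generated persistence code that stores i_stuff ...
      "manual addition, lost on every regeneration of the class:
      set_category( derive_category( i_stuff ) ).
    endmethod.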
    Most modifications can be done by redefinitions of the agent class and do not get overwritten each time the class is regenerated, e.g. I have done something similar by redefining the DB access methods to update various fields, but for my current requirement the update needs to be immediate, not during save.
    Any input appreciated,
    Mike
    Edited by: Mike Pokraka on Sep 15, 2009 4:12 PM

    Hi Naimesh,
    Naimesh Patel wrote:
    > Updating the field CATEGORY immediately would lead to some problems while working with a persistent class because of the COMMIT WORK. As soon as a COMMIT WORK after the CATEGORY's "save" hits, all pending save requests would execute and may update some fields which you don't want to update at that point of time.
    No COMMITs should be performed, that's part of what I'm trying to accomplish. STUFF may change several times during the lifetime of the process and CATEGORY must change with it. When the application finally does a COMMIT WORK then both are written to the DB. (In fact my real scenario involves both transient and persistent instances and multiple callers).
    My objective is to have a read-only attribute in the persistent class that changes in response to other fields changing. It should behave as though the app were calling SET_STUFF and SET_CATEGORY, but the persistent class must retain control of the CATEGORY field, as several applications can modify STUFF.
    My current workaround is an UPDATE_CATEGORY method that must be implemented by each caller, so it works. Layering another class above it is an option, but that is a bit of overkill, which also makes it a workaround.
    I have found that EXT_HANDLER_WRITE_ACCESS seems to provide something, but the comments in there seem to indicate that I shouldn't modify the object info. Unfortunately I have run out of time to experiment this week, so the workaround will have to stay for now, but I'd still be interested in a more elegant solution.
    Thanks for the input,
    Mike

  • On activating persistent class: There is no mapping for one or more fields

    Hi all,
    I'm using an ECC 6.0 system.
    I've just created a persistent class and defined the persistence. When I try to activate the class, activation fails and I get the message "There is no mapping for one or more fields."
    I did not, in fact, use all the fields of the database table I defined the persistence on. When I do use all the fields activating the class works without a problem.
    However, as far as I know it should be possible to select only some of the fields when defining the persistence (the only fields I have to select are all the key fields of the table and I've done this).
    Has anybody encountered the same problem or has anybody any idea on this?
    Cheers,
    Kathy

    Hi Kathy,
    this is exactly what I meant.
    If you'd like, then you can also take a look at the documentation: http://help.sap.com/saphelp_nw04/helpdata/en/b0/9d0a3ad259cd58e10000000a11402f/frameset.htm
    There, under Mapping, you can find:
    "You must map all columns of a database table to attributes. If you only want to manage some of the columns using Object Services, you must create a database view."
    Making attributes private doesn't change the fact that you still map all fields. If you have a lot of fields which you don't want to map, then I will again suggest that you define a DB view. This will also boost the performance of your implementation.
    In case you need "quality first" performance, I would suggest using an ABAP implementation with internal tables instead of the Persistence Service.
    HTH,
    Hristo

  • Improve web form performance of Hyperion Planning

    Hi all,
    I am not sure whether anyone has tried the method recommended by Oracle to compress the web forms used in Hyperion Planning so as to improve network performance:
    http://download.oracle.com/docs/cd/E12032_01/doc/epm.921/html_hp_admin/frameset.htm?/docs/cd/E12032_01/doc/epm.921/html_hp_admin/hpamin-19-17.htm
    Slow Performance When Opening Data Forms Using a Dial-Up Connection
    Scenario:
    Opening a data form using a slow network connection (for example, with a modem) takes a long time.
    Solution:
    You can significantly increase the network bandwidth when opening data forms by modifying the web.xml file, as described in this section. This solution compresses by approximately 90% the data stream sent from the Planning server to the client.
    Note: If you are using a WebLogic (all supported versions) Web application server, complete the second procedure, which is specific to WebLogic. If you are using any other Web application server, complete the first procedure, which contains general instructions.
    To modify the web.xml file to improve performance, for any Web application server except WebLogic:
    1 With a text editor, open the web.xml file, located in HyperionPlanning.ear or HyperionPlanning.war.
    2 After the tag </description> and before the tag <listener>, insert the following lines:
    <filter>
      <filter-name>HspCompressionFilter</filter-name>
      <filter-class>com.hyperion.planning.HspCompressionFilter</filter-class>
      <init-param>
        <param-name>compressionThreshold</param-name>
        <param-value>2048</param-value>
      </init-param>
      <init-param>
        <param-name>debug</param-name>
        <param-value>1</param-value>
      </init-param>
    </filter>
    <filter-mapping>
      <filter-name>HspCompressionFilter</filter-name>
      <url-pattern>/EnterData.jsp</url-pattern>
    </filter-mapping>
    3 Save the web.xml file.
    If you are using WebLogic, you must manually modify the .ear file and redeploy it for the Web application server.
    To improve performance with a WebLogic application server:
    1 Unzip the HyperionPlanning.ear file to C:\ear, for example.
    2 Unzip HyperionPlanning.war under C:\ear to C:\war.
    3 With a text editor, open the C:\war\WEB-INF\web.xml file and modify it using the instructions in step 2 of the preceding procedure.
    4 Compress the content of C:\war into C:\ear\HyperionPlanning.war.
    5 Compress the content of C:\ear into C:\ear\HyperionPlanning.ear.
    6 Deploy the new HyperionPlanning.ear to the WebLogic Web application server.
    6 Deploy the new HyperionPlanning.ear for the WebLogic Web application server.

    Hi,
    Yes, we did this with a bank whose branches connected to the servers via dial-up. They were complaining about the amount of data being transferred between branches and servers. This worked well there and increased the data transfer performance considerably. However, nowadays performance bottlenecks are generally due not to the amount of data transferred over the network but to the JavaScript components that run on the client and to heavy calculation/data and report creation processes.
    Cheers,
    Alp

  • Improving TableRowSorter Filter Performance

    Hi, I currently have a TableModel with over 17,300 rows of data, which has a TableRowSorter attached to provide sorting and filtering on the JTable. Everything works fine except that filtering the table takes too long, even on a Windows Vista machine with an Intel Core 2 Duo (3.6GHz) processor.
    Is there any known way to improve the filtering performance of TableRowSorter? It should be noted, however, that sorting the same data does not take as long (at times). It is also much slower when all the necessary bells and whistles of rendering are included.
    Here is an example piece of code:
    import java.awt.*;
    import java.awt.event.*;
    import java.util.*;
    import javax.swing.*;
    import javax.swing.event.*;
    import javax.swing.table.*;
    public class Trial extends JFrame {
        public JTable table;
        public DefaultTableModel model;
        public TableRowSorter sorter;
        public JTextField filterField;
        public Trial() {
            super("Filter Test");
            setDefaultCloseOperation(EXIT_ON_CLOSE);
            Vector<String> cols = new Vector<String>();
            cols.addElement("Column 1");
            cols.addElement("Column 2");
            cols.addElement("Column 3");
            cols.addElement("Column 4");
            cols.addElement("Column 5");
            cols.addElement("Column 6");
            Vector<Vector<Object>> rows = new Vector<Vector<Object>>();
            for(int i = 0; i < 20000; i++) {
                Vector<Object> row = new Vector<Object>();
                row.addElement("Column 1 Data " + (i+1) );
                row.addElement("Some Data " + (i+1) );
                row.addElement("Column 3 Data " + (i+1) );
                row.addElement("Even More Data " + (i+1) );
                row.addElement("Column 5 Data " + (i+1) );
                row.addElement("Please No More Data " + (i+1) );
                rows.addElement( row );
            }
            model = new DefaultTableModel(rows, cols);
            table = new JTable( model );
            table.setRowSorter( new TableRowSorter(model) );
            JScrollPane scr = new JScrollPane(table);
            filterField = new JTextField(20);
            filterField.addCaretListener( new CaretListener() {
                public void caretUpdate(CaretEvent e) {
                    if(table.getRowSorter() == null) return;
                    setCursor( Cursor.getPredefinedCursor(Cursor.WAIT_CURSOR) );
                    String text = filterField.getText();
                    ((DefaultRowSorter)table.getRowSorter()).setRowFilter(RowFilter.regexFilter(".*" + text + ".*"));
                    setCursor( Cursor.getDefaultCursor() );
                }
            });
            JPanel filterPanel = new JPanel( new FlowLayout(FlowLayout.LEFT) );
            filterPanel.add( new JLabel("Filter: ") );
            filterPanel.add( filterField );
            getContentPane().add( filterPanel, BorderLayout.NORTH );
            getContentPane().add( scr, BorderLayout.CENTER );
            pack();
            setLocationRelativeTo(null);
            try {
                setExtendedState(MAXIMIZED_BOTH);
            } catch(Exception e) {}
            setVisible(true);
        }
        public static void main(String[] args) {
            new Trial();
        }
    }
    Any ideas as to how performance (or perceived performance) can be improved would be gladly appreciated.
    ICE

    I came to the conclusion that trying to improve the sort performance of TableRowSorter was almost an impossibility. So I changed my approach from improving the actual sorting code to improving the user's perception of the process. And hello, IndicatorRowSorter. This code will display a small window with the string "Searching" and an icon to indicate progress while filtering the table, and disposes of it once filtering is complete.
    import java.awt.*;
    import java.awt.event.*;
    import javax.swing.*;
    import javax.swing.event.*;
    import javax.swing.border.*;
    import javax.swing.table.*;
    /**
     * Provides valuable user feedback for the table filtering operation.
     * Executes the sorting algorithm on a different thread so as not to block the EDT.
     */
    public class IndicatorRowSorter extends TableRowSorter {
        JTable table;
        JWindow sortProgressWin;
        public IndicatorRowSorter() {
            createSortProgressWindow();
        }
        public IndicatorRowSorter(JTable table) {
            super( table.getModel() );
            this.table = table;
            createSortProgressWindow();
        }
        public void createSortProgressWindow() {
            JLabel progressIndicator = new JLabel("Searching...",
                   new ImageIcon("resources/images/loadArrow.gif"), JLabel.LEFT );
            progressIndicator.setBackground(Color.white);
            progressIndicator.setOpaque(true);
            progressIndicator.setBorder( BorderFactory.createEmptyBorder(2,2,2,2) );
            sortProgressWin = new JWindow();
            sortProgressWin.getContentPane().add(progressIndicator);
            sortProgressWin.pack();
            sortProgressWin.setLocationRelativeTo(table);
        }
        public void sort() {
            sortProgressWin.setVisible(true);
            Thread t = new Thread( new Runnable() {
                public void run() {
                    try {
                        IndicatorRowSorter.super.sort();
                        sortProgressWin.setVisible(false);
                    } catch(Exception e) {}
                }
            });
            t.start();
        }
    }
    I got the idea from looking at some of the examples from the JDNC Incubator project over at java.net. At least with this solution, while the crappy performance still remains, the user perceives it as serious data manipulation.
    To get yourself a nice load progress icon, head over to www.ajaxload.info. You can then replace the reference to the image used in the code above.
    To use it, just a single line of code is required:
    table.setRowSorter( new IndicatorRowSorter(table) );
    You can test this out with the Trial example posted above.
    ICE

  • Poor *First Launch* Performance with AppV 5

    I'm having an issue where the first time I launch a rather large AppV 5 application, the performance is very slow. Subsequent launches, however, are near native. We reboot our Citrix servers daily, so this is causing some issues for us.
    I have noticed that the applications that launch more slowly tend to be the larger applications we sequence. Performance is slow regardless of whether the package is fully mounted or streamed.
    I've detailed it here with some animated gifs showing the performance slowness.
    http://trentent.blogspot.ca/2014/11/appv-5-first-launch-application.html
    The short of it:
    When AppV is "launching" the application for the first time after a net stop/net start it starts consuming memory and CPU for the 196 seconds that it's launching, peaking at nearly 600MB RAM and 50% CPU (though most of the time it's peaked at 25%
    CPU).
    The AppV Debug logs do not give a whole lot of info as to what AppVClient.exe is doing during this time.  Most of the logs show the application "start" as they setup their components, and when the application has launched.  Almost all
    the logs show the first second or two of application launch and the last second or two before the GUI.
    When the application is launched after a reboot complete with add-appvpackage/publish-appvpackage (global) the first launch is ~40s.
    Subsequent launches of the application are consistently between 11-15s (near native).
    Does anyone have any thoughts about what I can do to make it so the application will launch at (near) native speed for the first launch?  I am fresh out of ideas.

    Let me share my test results with your provided test package:
    Test Case 1
    Default Package (without any modifications):
    - Add-Package: ~23.5s
    - Publish-Package: ~2.0s
    - First start registry.exe: Nearly instant, but Registry staging takes 35s with high CPU usage (>=30%)
    - Second start registry.exe: Instant
    Test Case 2
    Removed all ActiveX exts., COM object exts. and file type associations in AppxManifest.xml & renamed HKLM\Classes to HKLM\Classes_TEST
    (I developed a tool that allows me to modify the .appv package directly including all files in it):
    - Add-Package: ~3s
    - Publish-Package: ~1.6s
    - First start registry.exe: Instant, but registry staging takes 33s with high CPU usage (30%)
    - Second start registry.exe: Instant
    Test Case 3
    Removed all COM object extensions in AppxManifest.xml:
    - Add-Package: ~2.6s
    - Publish-Package: ~1s
    - First start registry.exe: Not completely instant, takes 6s to start and Registry staging takes 35s with high CPU usage (>=30%)
    - Second start registry.exe: Instant
    These are my findings:
    1) The long time needed to add the package is due to the number of COM object extensions. The size of the manifest file is 71MB, so that's really big. When I remove the COM objects from the manifest, the file size is 209KB.
    2) The high CPU usage is caused by registry staging. The registry file is ~58MB. With the first start, registry staging starts, the .dat hive gets loaded and the content is copied to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AppV\Client\Packages\<Package-GUID>\Versions\<Version-GUID>\REGISTRY.
    In the same tree as the REGISTRY, you will find a key named "RegistryStagingFinished" once the staging has finished. I observed that the high CPU load decreases as soon as registry staging has finished (this is the case when RegistryStagingFinished is there). So the staging took ~35s on my test system for the 58MB registry hive.
    And now the interesting part. If you close the virtualized app / the app you start in the VE (in your case registry.exe) before registry staging is finished (so before RegistryStagingFinished is set), it will restart the registry staging from the beginning the next time you start the app. To me it looks like it doesn't continue from the last point at which it stopped. This might be something that could be improved in a next release, so that registry staging continues from the point where it previously stopped.
    What does it mean for you:
    If you need all integration points like COM, ActiveX, etc., you won't be able to reduce the add-package time. Otherwise if you remove them from the package, the add-package time can be reduced to <5s.
    Your registry is quite big as well (58MB). Loading that into the registry takes time. There is no way around it, but when deploying the package globally, you could start a process within the VE of your application right after publishing it, so that the registry staging executes once and is already finished when the first user starts the app.
    Due to the big registry size and the number of integration points of your package, I believe there won't be many possibilities to improve the experienced performance issues except reducing the size and the number of integration points.
    Regards, Michael - http://blog.notmyfault.ch -
