Best practice for updating a list that is data bound

Hi All,
I have a List component whose data comes from a bindable ArrayCollection. When I make changes to the data in the ArrayCollection, the change is not reflected in the List. I notice that if I resize the browser, the component redraws and then the List updates. But how can I show the update instantly when I change the data in the bindable ArrayCollection?
Thanks,
Ryan

OK, thanks for that. I have it sorted out now and found where the problem was. I got a hint from your statement: "truly [Bindable]".
Yes, the List is using a bindable ArrayCollection, but I'm also using a custom item renderer, and this item renderer takes the data and sets the label fields, which are not bound. I didn't know that I had to carry the "binding" all the way through. I was overriding the "set data" function and setting the label fields inside that function with something like: myLabel.text = _data.nameHere. That's where the problem was.
It works great now that I bind the data directly to the Label fields in my custom item renderer. I'm also using functions to parse certain pieces of data. Is this taxing on the application? I notice that the List updates every time I scroll, re-running all the binding functions on the Labels in my custom item renderer (for example: myDate.text = "{parseDate(_data.date)}"). A simplified sketch of the renderer is below.
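A minimal sketch of such a binding-based item renderer (the HBox container and the parseDate body are placeholders; the field names come from the post):

<?xml version="1.0"?>
<!-- Labels bind to the inherited, bindable data property, so they
     update automatically when the underlying item changes. -->
<mx:HBox xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Script>
        <![CDATA[
            // Placeholder for whatever parsing the real renderer does.
            private function parseDate(raw:String):String {
                return raw;
            }
        ]]>
    </mx:Script>
    <mx:Label text="{data.nameHere}"/>
    <mx:Label text="{parseDate(data.date)}"/>
</mx:HBox>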
Thanks!

Similar Messages

  • Best Practice for Updating children UIComponents in a Container?

    What is the best practice for updating children UIComponents in response to a Container being changed? For instance, when a Canvas is resized, I would like to update the height and width of all the children UIComponents so the content scales properly.
    Right now I am trying to loop over the children calling invalidateProperties(), invalidateSize(), and invalidateDisplayList() on each. I know some of the Containers, such as VBox and HBox, have layout managers; is there a way to leverage something like that?
    Thanks.
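    For reference, a minimal sketch of the invalidation loop described above (assuming a Flex 3 Canvas with id "canvas"; giving the children percentWidth/percentHeight is often the simpler alternative):

    import mx.core.UIComponent;
    import mx.events.ResizeEvent;

    // Hypothetical resize handler: ask each child to revalidate itself.
    private function onCanvasResize(event:ResizeEvent):void {
        for (var i:int = 0; i < canvas.numChildren; i++) {
            var child:UIComponent = canvas.getChildAt(i) as UIComponent;
            if (child != null) {
                child.invalidateProperties();
                child.invalidateSize();
                child.invalidateDisplayList();
            }
        }
    }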

    You would only do that if it makes your job easier; generally speaking, it would not.
    When trying to sync sound and animation, I think most authors find it easiest to use graphic symbols, because you can see their animation when scrubbing the main timeline. With movieclips you only see their animation when testing.
    However, if you're going to use ActionScript to control some of your symbols, those symbols should be movieclips.

  • SQL 2008 R2 Best Practices for Updating Statistics for a 1.5 TB VLDB

    We currently have a ~1.5 TB VLDB (SQL 2008 R2) that services both OLTP and DSS workloads pretty much on a 24x7x365 basis. For many years we have been updating statistics (full scan - 100% sample size) for this VLDB once a week on the weekend, which is currently taking up to 30 hours to complete.
    Somewhat recently we have been experiencing intermittent issues while statistics are being updated, which I doubt is just a coincidence. I'd like to understand exactly why the process of updating statistics can cause these issues (timeouts/errors). My theory is that the optimizer is forced to choose an inferior execution plan while the needed statistics are in "limbo" (stuck between the "old" and the "new"), but that is again just a theory. I'm somewhat surprised that the "old" statistics couldn't continue to be used while the new/current statistics are being generated (like the process for rebuilding indexes online), but I don't know all the facts behind this mechanism yet, so that may not even apply here.
    I understand that we have the option of reducing the sample percentage/size for updating statistics, which is currently set at 100% (full scan). Reducing the sample percentage/size will reduce the total processing time, but it's also my understanding that doing so will leave the optimizer with less-than-optimal statistics for choosing the best execution plans. This seems to be a classic case of not being able to have one's cake and eat it too.
    So in a nutshell, I'm looking to fully understand why the process of updating statistics can cause access issues, and I'm also looking for best practices in general for updating statistics on such a VLDB. Thanks in advance.
    Bill Thacker

    I'm with you. Yikes is exactly right with regard to suspending all index optimizations for so long. I'll probably start a separate forum thread about that in the near future, but for now let's stick to the best practices for updating statistics.
    I'm a little disappointed that multiple people haven't already chimed in about this and offered up some viable solutions. Like I said previously, I can't be the first person in need of such a thing. This database has 552 tables, with far more statistics objects than that associated with those tables. The metadata has to be there for determining which statistics objects can go (those not utilized much, if at all, should be deleted - and an actual script produced to delete the useless ones identified) and what the proper sample percentage/size should be for updating the remaining, utilized statistics (again, also producing a script with the appropriate UPDATE STATISTICS commands for each table based on cardinality).
    The above solution would be much more ideal, IMO, than just issuing a single update statistics command that samples the same percentage/size for every table (e.g. 10%). That's what we're doing today at 100% (full scan); a sketch of the kind of generator I mean is below.
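    Purely for illustration, a generator along those lines might look like the following (the 10-million-row cutoff and the sample sizes are invented; real thresholds would come from testing against your workload):

    -- Emit one UPDATE STATISTICS command per table, choosing the sample
    -- size from the table's cardinality (thresholds are illustrative).
    SELECT 'UPDATE STATISTICS ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
         + CASE WHEN SUM(p.rows) > 10000000 THEN ' WITH SAMPLE 10 PERCENT;'
                ELSE ' WITH FULLSCAN;'
           END
    FROM sys.tables t
    JOIN sys.schemas s ON s.schema_id = t.schema_id
    JOIN sys.partitions p ON p.object_id = t.object_id AND p.index_id IN (0, 1)
    GROUP BY s.name, t.name;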
    Come on SQL Server Community. Show me some love :)
    Bill Thacker

  • Best practices for updating agents

    We're getting ready to do our first system-wide update of agents to fix a critical bug. Our summer vacation is just ending, and teachers and students will be coming back very soon and turning on our Windows 7 computers for the first time in many weeks, although they won't all be turned on the same day. When they are turned on they will be attempting to get various updates, in particular Windows updates, but also Flash Player and Adobe Reader. I need to update the agents as quickly as possible, but I'm concerned about the possibility of the agent update conflicting with another update, especially Windows updates. Isn't it possible that Windows Update could restart a computer while the agent update is happening (or the other way around), leaving the machine in an unstable or unusable state? What are the best practices for dealing with this? I considered the possibility of deploying the agent to a dynamic workstation group whose members all have a certain file or files that indicate that they have already received the latest Windows updates. However, I can't see how to create a dynamic group based on such criteria.
    So far I have only updated a few devices at a time using "Deploy System Updates to Selected Devices in the Management Zone". When those updates are done I cancel that deployment because that's the only option I can find that does anything. If you can offer general advice for a better strategy of updating agents I'd appreciate that. Specifically, how would you push an agent update to several hundred computers that will be turned on sometime over the next two weeks?
    Thanks very much.

    To be honest, you have to work around your other deploys, etc. The ZCM agent isn't "aware" of other deploys going on. For example, ZPM doesn't care that you're doing Bundles at the same time (you'll get errors in the logs about the fact that only one MSI can run at a time, for example). ZPM usually recovers and picks up where it left off.
    Bundles on the other hand, with System Update, are not so forgiving. Especially if you have the agents prior to 11.2.4 MU1 (cache corruption errors).
    We usually:
    a) Halt all software rollouts/patching as best we can
    b) Our software deploys (bundles) are on event: user login. Typically the system update is on device refresh or a scheduled time, and is device-associated.
    If possible, I'd suggest that you use WOL plus the system update, and voila.
    Or, if no WOL is available, tell your users to leave their PCs turned on (they don't have to be logged in) on X night, and set up your system updates for that night with auto-reboot enabled. That worked well for us.
    But otherwise the 3 components of ZCM (Bundles, ZPM, System Update) don't know/care about each other, AFAIK.
    --Kevin

  • Best practice for updating ATWRT (Characteristic Value) in AUSP

    I've noticed that when we change the characteristic value of a classification, it does not update in the MM record. We have to go into MM02 for each material number that references the characteristic value and manually change it for the row in AUSP to get updated.
    Should I just create a report to loop through and update table AUSP directly? Or is there a better way to do this via a function or BAPI, etc.? I want to know what best practice is recommended.

    Hi Scott
    You can use a BAPI to do that.
    Check the following thread:
    BAPI to update characteristics in Material master?
    BR
    Caetano
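    For illustration only, a call to BAPI_OBJCL_CHANGE (the BAPI commonly cited in that thread for changing classification values) might look roughly like this; the class, characteristic, and value names are placeholders, so check the BAPI documentation before relying on the exact parameters:

    " Sketch only: change one characteristic value on a material's
    " classification, then commit. All names are placeholders.
    DATA: lv_objkey TYPE bapi1003_key-object,
          lt_num    TYPE TABLE OF bapi1003_alloc_values_num,
          lt_char   TYPE TABLE OF bapi1003_alloc_values_char,
          lt_curr   TYPE TABLE OF bapi1003_alloc_values_curr,
          ls_char   TYPE bapi1003_alloc_values_char,
          lt_return TYPE TABLE OF bapiret2.

    lv_objkey          = 'MATERIAL123'.   " material number
    ls_char-charact    = 'ZZ_MY_CHAR'.    " characteristic to change
    ls_char-value_char = 'NEW_VALUE'.
    APPEND ls_char TO lt_char.

    CALL FUNCTION 'BAPI_OBJCL_CHANGE'
      EXPORTING
        objectkey          = lv_objkey
        objecttable        = 'MARA'
        classnum           = 'ZZ_MY_CLASS'
        classtype          = '001'
      TABLES
        allocvaluesnumnew  = lt_num
        allocvaluescharnew = lt_char
        allocvaluescurrnew = lt_curr
        return             = lt_return.

    CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'.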

  • Best Practice for Updating Infotype HRP1001 via Class / Methods

    I want to update an existing (custom) relationship between two positions.
    For example I want
    Position 1 S  = '50007200'
    Position 2 S =  '50007202'
    Relationship = 'AZCR'
    effective today through 99991231
    Is there a best practice or generally accepted way for doing this using classes/methods rather than RH_INSERT_INFTY ?
    If so, please supply an example.
    Thanks...
    ....Mike

  • Best practice for updating SL to 10.6.8

    I recently purchased a 2009 iMac with the Snow Leopard upgrade installed (OS 10.6).
    It looks as though there are two updates; should I install both, or can I just install the latest? I'd appreciate being directed to best-practices discussions.
    FYI I will want to install Rosetta for older applications, CS3 & CS4, that I need for old client files. Thanks.
    Ali

    Buy one. Anything you want to keep shouldn't be on only one drive; problems may occur at any time, and are particularly likely to occur during an OS update or upgrade.

  • Best practice for update/insert on existing/non existing rows

    Hello,
    In an application I want to check if a row exists; if it does, I want to update the row, and if not, insert it.
    Currently I have something like this:
    begin
      select *
        into v_ps_abcana
        from ps_abcana
       where typ = p_typ
         and bsw_nr = p_bsw_nr
         and datum = v_akt_date
         for update;
    exception
      when no_data_found then
        v_update := false;
      when others then
        raise e_error_return;
    end;
    if v_update = false
    then
      /* insert new row */
    else
      /* update locked row */
    end if;
    The problem is that the FOR UPDATE lock has no effect for inserts, so if another session executes this part at exactly the same time, two rows will be inserted.
    What is the best way to avoid this?

    For me the 1st solution is the most efficient one.
    In your 2nd solution it seems you're going to create a dummy table that will serve as a traffic cop. That's possible, but it's not the proper, clean approach for your requirement; you're complicating your life when in fact Oracle can do it all for you.
    The first thing you have to consider is your database design. It should correspond to the business rules; don't just enforce them at the program level, leaving the database vulnerable to data integrity issues such as direct data access. In your particular example, there's no way you can assure that there'll be no duplicate records in the table without a unique constraint.
    One piece of advice when designing a solution: don't use a "Mickey Mouse" approach in your design!
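    To make that concrete, a common shape for this is the sketch below, assuming a unique constraint on (typ, bsw_nr, datum) - which is exactly the database-design point made above - with the updated column as a placeholder:

    begin
      insert into ps_abcana (typ, bsw_nr, datum)
      values (p_typ, p_bsw_nr, v_akt_date);
    exception
      when dup_val_on_index then
        -- the row already exists, so update it instead
        update ps_abcana
           set some_col = p_some_value   -- placeholder column/value
         where typ = p_typ
           and bsw_nr = p_bsw_nr
           and datum = v_akt_date;
    end;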

  • Best Practice for SAP PI installation to share Data Base server with other

    Hi All,
    We are going for a PI three-tier installation, but now I need a best-practice document on whether the PI installation should share a database server with other non-SAP applications or not. I have never seen SAP PI installed on a database server that other applications share. I do not know what best practice is, but I am sure that sharing the database server with other non-SAP applications doesn't look good, i.e. it is not a clean architecture, so I need an SAP best-practice document to get this approved by management. If somebody has a document link, please let me know.
    With regards
    Sunil

    You should not mix different apps into one database.
    If you have a standard database license provided by SAP, then this is not allowed. See these SAP notes for details:
    [581312 - Oracle database: licensing restrictions|https://service.sap.com/sap/bc/bsp/spn/sapnotes/index2.htm?numm=581312]
    [105047 - Support for Oracle functions in the SAP environment|https://service.sap.com/sap/bc/bsp/spn/sapnotes/index2.htm?numm=105047] -> number 23
          23. External data in the SAP database
    Must be covered by an acquired database license (Note 581312).
    Permitted for administration tools and monitoring tools.
    In addition, we do not recommend to use an SAP database with non-SAP software, since this constellation has considerable disadvantages.
    Regards, Michael

  • Best Practice for caching global list of objects

    Here's my situation, (I'm guessing this is mostly a question about cache synchronization):
    I have a database with several tables that contain between 10-50 rows of information. The values in these tables CAN be added/edited/deleted, but this happens VERY RARELY. I have to retrieve a list of these objects VERY FREQUENTLY (sometimes all, sometimes with a simple filter) throughout the application.
    What I would like to do is to load these up at startup time and then only query the cache from then on out, managing the cache manually when necessary.
    My questions are:
    What's the best way to guarantee that I can load a list of objects into the cache and always have them there?
    In the above scenario, would I only need to synchronize the cache on add and delete? Would edits be handled automatically?
    Is it better to ditch this approach and to just cache them myself (this doesn't sound great for deploying in a cluster)?
    Ideas?

    The cache synch feature as it exists today is kind of an "all or nothing" thing. You either synch everything in your app, or nothing in your app. There isn't really any mechanism within TopLink cache synch you can exploit for more app specific cache synch.
    Keeping in mind that I haven't spent much time looking at your app and use cases, I still think that the helper class is the way to go, because it sounds like your need for refreshing is rather infrequent and very specific. I would just make use of JMS and have your app send updates.
    I.e., in some node in the cluster:
    Vector changed = new Vector();
    UnitOfWork uow = session.acquireUnitOfWork();
    MyObject mo = (MyObject) uow.registerObject(someObject);
    // user updates mo in a GUI
    changed.addElement(mo);
    uow.commit();
    MoHelper.broadcastChange(changed);
    Then in MoHelper:
    public void broadcastChange(Vector changed) {
        Hashtable classnameAndIds = new Hashtable();
        // Group the ids of the changed objects by class name.
        for (Enumeration e = changed.elements(); e.hasMoreElements();) {
            MyObject mo = (MyObject) e.nextElement();
            Vector ids = (Vector) classnameAndIds.get(mo.getClassname());
            if (ids == null) {
                ids = new Vector();
                classnameAndIds.put(mo.getClassname(), ids);
            }
            ids.addElement(mo.getId());
        }
        // Pseudo-code: in real JMS, wrap the Hashtable in an ObjectMessage.
        jmsTopic.send(classnameAndIds);
    }
    Then in each node in the cluster you have a listener to the topic/queue:
    public void processJMSMessage(Hashtable classnameAndIds) throws Exception {
        for (Enumeration e = classnameAndIds.keys(); e.hasMoreElements();) {
            String classname = (String) e.nextElement();
            Vector idsVector = (Vector) classnameAndIds.get(classname);
            // Re-read the changed objects by id, refreshing the identity map.
            Class c = Class.forName(classname);
            ReadAllQuery raq = new ReadAllQuery(c);
            raq.refreshIdentityMapResult();
            ExpressionBuilder b = new ExpressionBuilder();
            Expression exp = b.get("id").in(idsVector);
            raq.setSelectionCriteria(exp);
            session.executeQuery(raq);
        }
    }
    - Don

  • "Best Practice" for a stored procedure that needs to access two schemas?

    Greetings all,
    When my company's application is deployed, two schema owners are typically created and all database objects divided between the two. I'll call them FIRST and SECOND.
    In a standard, vanilla implementation there is never any reason for the two to "talk to each other". No rights to objects in one schema are ever granted to the other.
    I am currently charged, however, with writing custom code to roll up data from one of the schemas and update tables in the other with the rollups. I have created a user whose job it is to run this process, and this user has the proper permissions to all necessary objects in both schemas. I'll call this user MRBATCH.
    Typically, any custom objects, whether they be additional staging tables, temp tables or stored procedures, are saved in the FIRST schema. I tried to save this new stored procedure in the FIRST schema and compile it, but got "insufficient privileges" errors whenever the code in the stored procedure tried to access any tables in the SECOND schema. This surprised me a little bit because I had no plans to actually EXECUTE the stored procedure as FIRST, but I guess I can understand it from the point of view of: you ought to be able to execute something you own.
    So which would be "better" (assuming there's any difference): grant FIRST all of the rights it needs in SECOND and save the stored procedure in FIRST, or could I just save the stored procedure in the MRBATCH schema? I'm not sure which would be "better practice".
    Is there a third option I'm overlooking perhaps?
    Thanks
    Joe

    In this case I would again put it into a schema THIRD. This is a kind of API schema: there are procedures in it that provide some customized functionality, and since you grant only the right to execute those procedures (they should be packages, of course) you won't get into any conflicts about allowing somebody too much.
    Note that this suggestion is very similar to putting the procedure directly into the executing user MRBATCH. It depends how that schema user is used. I always prefer separating users from schemas.
    By definition the Oracle object that represents a schema is identical to the Oracle object representing a user (exception: externally defined users).
    My definition is:
    Schema => has objects (tables, packages) and uses tablespace.
    User => has privileges (including create session and connect) and uses temp tablespace only. Might have synonyms and views.
    You can mix both, but sometimes it makes much sense to separate one from the other.
    Edited by: Sven W. on Aug 13, 2009 9:51 AM
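    A minimal sketch of that arrangement (all object names are invented for illustration; note the grants must be made directly to THIRD, since roles don't apply inside definer-rights procedures):

    -- Direct grants to the API schema THIRD.
    grant select on second.source_table to third;
    grant update on first.target_table to third;

    create or replace procedure third.rollup_data as
    begin
      update first.target_table t
         set t.total = (select sum(s.amount)
                          from second.source_table s
                         where s.key_col = t.key_col);
    end;
    /

    -- MRBATCH only gets the right to execute the API.
    grant execute on third.rollup_data to mrbatch;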

  • Best Practice for update to iPhone and iTouch

    OK, when 3.0 comes down the pike, what is the best way to get 3.0 as a "clean" install? Currently 2.2.1 is on both. If I do a restore, will the system only pick up 3.0, or will it see 2.2.1, which is currently on the hard drive? With that in mind, how can I delete the 2.2.1 version of the iPhone and iTouch software? Sorry for two questions in one post.
    Steve H

    When firmware update 2.0 was released, the entire iPhone was erased first, including the existing firmware - just as when restoring an iPhone with iTunes - followed by 2.0 being installed, which was followed by the iPhone's backup being transferred to the iPhone.
    The same may apply with firmware update 3.0 with your iPhone's backup being updated immediately before. If not, firmware version 2.2.1 will be updated with 3.0.
    If 2.2.1 is updated and you want a "clean" install of 3.0, you can follow the initial upgrade by restoring your iPhone with iTunes.

  • Best Practice for Updating Administrative Installation Point for Reader 9.4.1 Security Update?

    I deployed Adobe Reader 9.4 using an administrative installation point with Group Policy when it was released. This deployment included a transform file. It's now time to update Reader with the 9.4.1 security MSP.
    My question is, can I simply patch the existing AIP in place with the 9.4.1 security update and redeploy it, or do I need to create a brand new AIP and GPO?
    Any help in answering this would be appreciated.
    Thanks in advance.

    I wouldn't update your AIP in place. I end up keeping multiple AIPs on hand: each time a security update comes out I make a copy and apply the updates to that. One reason is this: when creating the AIPs, you need to apply the MSPs in the correct order; you cannot simply apply a new MSP to the previous AIP.
    Adobe's supported patch order is documented here: http://kb2.adobe.com/cps/498/cpsid_49880.html
    That link covers Adobe Acrobat and Reader, versions 7.x through 9.x. A quarterly update MSP can only be applied to the previous quarterly. Should Adobe Reader 9.4.2 come out tomorrow as a quarterly update, you will not be able to apply it to the 9.4.1 AIP; you must apply it to the previous quarterly AIP - 9.4.0. At a minimum I keep the previous 2 or 3 quarterly AIPs around, as well as the MSPs to update them. The only time I delete my old AIPs is when I am 1000% certain they are no longer needed.
    Also, when Adobe's developers author the MSPs they don't include the correct metadata entries for in-place upgrades of AIPs - any AIP based on the original 9.4.0 MSI will not in-place upgrade any installation that is based on the 9.4.0 MSI and AIP; you must uninstall Adobe Reader, then re-install. This deficiency affects all versions of Adobe Reader 7.x through 9.x. Oddly, Adobe Acrobat AIPs will correctly in-place upgrade.
    Ultimately, the in-place upgrade issue and the patch order requirements are why I say to make a copy, then update and deploy the copy.
    As for creating the AIPs:
    This is what my directory structure looks like for my Reader AIPs:
    F:\Applications\Adobe\Reader\9.3.0
    F:\Applications\Adobe\Reader\9.3.1
    F:\Applications\Adobe\Reader\9.3.2
    F:\Applications\Adobe\Reader\9.3.3
    F:\Applications\Adobe\Reader\9.3.4
    F:\Applications\Adobe\Reader\9.4.0
    F:\Applications\Adobe\Reader\9.4.1
    The 9.4.0 -> 9.4.1 MSP is F:\Applications\Adobe\Reader\AdbeRdrUpd941_all_incr.msp
    When I created my 9.4.1 AIP, I entered these at a cmd.exe prompt (if you don't have robocopy on your machine, you can get it from the Server 2003 Resource Kit):
    F:
    cd \Applications\Adobe\Reader\
    robocopy /s /e 9.4.0 9.4.1
    cd 9.4.1
    rename AdbeRdr940_en_US.msi AdbeRdr941_en_US.msi
    msiexec /a AdbeRdr941_en_US.msi /update F:\Applications\Adobe\Reader\AdbeRdrUpd941_all_incr.msp /qb

  • OIM 10g: Best practice for updating OIM user status from target recon?

    We have a requirement, where we need to trigger one or more updates to the OIM user record (including status) based on values pulled in from a target resource recon from an LDAP. For example, if an LDAP attribute "disable-flag=123456" then we want to disable the OIM user. Other LDAP attributes may trigger other OIM user attribute changes.
    I think I need to write a custom adapter to handle "recon insert received" and "recon update received" events from the target recon, but wanted to check with the community to see if this was the right approach. Would post-insert/post-update event handlers be a better choice?

    Thanks Nishith. That's along the lines of what I was thinking. The only issue in my case is that I might need to update additional custom attributes on the OIM User in addition to enable/disable. Because of that requirement, my thought was to call the API directly from my task adapter to do the attribute updates in addition to the enable/disable. Does this seem like a sound approach?
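    For what it's worth, a rough sketch of that kind of task-adapter logic against the OIM 9.x/10g client API follows; the UDF column name and flag values are hypothetical, and the exact signatures should be verified against your release's Javadoc:

    import java.util.HashMap;
    import java.util.Map;
    import Thor.API.tcResultSet;
    import Thor.API.Operations.tcUserOperationsIntf;

    public class ReconStatusAdapter {
        // userOps would typically come from tcUtilityFactory in the adapter.
        public void applyLdapFlags(tcUserOperationsIntf userOps,
                                   long userKey, String disableFlag)
                throws Exception {
            Map filter = new HashMap();
            filter.put("Users.Key", Long.toString(userKey));
            tcResultSet users = userOps.findUsers(filter);
            users.goToRow(0);

            // Update additional custom attributes pulled from the recon event.
            Map updates = new HashMap();
            updates.put("USR_UDF_DISABLEFLAG", disableFlag); // hypothetical UDF
            userOps.updateUser(users, updates);

            // Then enable/disable based on the LDAP flag.
            if ("123456".equals(disableFlag)) {
                userOps.disableUser(userKey);
            } else {
                userOps.enableUser(userKey);
            }
        }
    }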

  • Biztalk Best practice for update message after transformation

    Hi
    I have a requirement in which I have to update a message. The message has multiple records, all created by a transformation, but I have to update one element in each record from the context. I tried XPath, but since the message has multiple records it didn't work. Can you suggest the best ways?
    Regards,
    Mohit Gupta

    I assume you're using an orchestration (as you said you have tried XPath). Updating an element is part of constructing a message, so you have the following options:
    Message assignment - update the record using XPath.
    External helper - pass the transformed message and the context property value as two parameters to an external C# helper and update the element there, which gives you more control.
    Context accessor functoid (the CodePlex one) - use this functoid to update the relevant element in the map during the transformation.
    The best way among these is certainly message assignment, updating the record using XPath. If you have any issues with it I would try to solve them; I can't comment on your trouble with XPath as I don't know the complexity of the XML record structure, but these are the better options.
    The external helper gives more control: you have the complete flexibility of C#, so you can easily debug, and you have full control over the XML message (which you also have in the orchestration, but it is much easier in C#). The drawback of this approach is that you will have an additional assembly to manage; a sketch of such a helper follows below.
    Least favoured among the options is the context accessor functoid - it still does the job, but with the overhead of managing a custom functoid.
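    A minimal sketch of such a helper (the class name, method, and XPath are placeholders, since the post doesn't show the schema):

    using System.Xml;

    public static class MessageUpdater
    {
        // Load the transformed message, set every matching element to the
        // context value, and return the updated XML for the new message.
        public static string SetElement(string messageXml, string contextValue)
        {
            XmlDocument doc = new XmlDocument();
            doc.LoadXml(messageXml);
            // The XPath is illustrative; real node names come from your schema.
            foreach (XmlNode node in doc.SelectNodes("//*[local-name()='FieldToUpdate']"))
            {
                node.InnerText = contextValue;
            }
            return doc.OuterXml;
        }
    }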
    You can choose the one which suits your need best.
    If this answers your question please mark it accordingly. If this post is helpful, please vote as helpful by clicking the upward arrow mark next to my reply.
