BizTalk: best practice for updating a message after transformation

Hi,
I have a requirement in which I have to update a message.
The message has multiple records, all created by a transformation,
but I have to update one element in each record with a value from the context.
I tried XPath, but since the message has multiple records it didn't work.
Can you suggest the best way?
Regards,
Mohit Gupta

I assume you’re using an orchestration (since you said you tried XPath). Updating an element is part of constructing a message, so you have the following options:
Message assignment: update the record using XPath in a Message Assignment shape.
External helper: pass the transformed message and the context property value as two parameters to an external C# helper and update the element there, which gives you more control.
Context accessor functoid (the CodePlex one): use this functoid to update the relevant element in the map during the transformation itself.
The best of these is certainly message assignment, updating the record using XPath. I can’t comment on the issue you hit with XPath, as I don’t know the complexity of the XML record structure, but if you post it I’ll try to help.
The external helper gives more control: you have the full flexibility of C#, so you can debug easily and have complete control over the XML message (you have that in the orchestration too, but it’s much easier in C#). The drawback of this approach is that you have an additional assembly to manage.
Least favoured among the options is the context accessor functoid; it still does the job, but you carry the overhead of maintaining a custom functoid.
You can choose the one which suits your needs best.
If this answers your question, please mark it accordingly. If this post is helpful, please vote it as helpful by clicking the upward arrow next to my reply.
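To make the message-assignment option concrete, here is the shape of the fix for the multi-record XPath problem. The record and element names below are invented, and the sketch uses Python's standard library purely to illustrate the pattern; in the orchestration you would use xpath() calls, and a C# helper would do the same walk over the DOM:

```python
import xml.etree.ElementTree as ET

# Hypothetical multi-record message; record/element names are invented.
msg = ET.fromstring(
    "<Orders>"
    "<Order><Id>1</Id><Status>NEW</Status></Order>"
    "<Order><Id>2</Id><Status>NEW</Status></Order>"
    "</Orders>"
)

context_value = "PROCESSED"  # stands in for the promoted context property

# The usual multi-record pitfall: an XPath like /Orders/Order/Status selects
# a node *set*, so you must iterate over (or index into) the matches rather
# than assigning to the expression as if it matched a single node.
for status in msg.findall("./Order/Status"):
    status.text = context_value

updated = [s.text for s in msg.findall("./Order/Status")]
```

The same idea in an orchestration means either indexing the XPath to a single record or looping over the records; assigning through an XPath that matches several nodes is what typically fails.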

Similar Messages

  • Best practice for using messaging in medium to large cluster

    What is the best practice for using messaging in a medium-to-large cluster, in a system where all the clients need to receive all the messages and some of the messages can be really big (a few megabytes, maybe more)?
    I will be glad to hear any suggestions or to learn from others' experience.
    Shimi

    Publish/subscribe, right?
    Lots of subscribers plus big messages means lots of network traffic.
    It's a wide-open question, no?

  • Best Practice for Updating children UIComponents in a Container?

    What is the best practice for updating children UIComponents in response to a Container being changed?  For instance, when a Canvas is resized, I would like to update all the children UIComponents height and width so the content scales properly.
    Right now I am trying to loop over the children calling invalidateProperties, invalidateSize, and invalidateDisplayList on each. I know some of the containers, such as VBox and HBox, have layout managers; is there a way to leverage something like that?
    Thanks.

    You would only do that if it makes your job easier; generally speaking, it would not.
    When trying to sync sound and animation, I think most authors find it easiest to use graphic symbols, because you can see their animation when scrubbing the main timeline. With movieclips you only see their animation when testing.
    However, if you're going to use ActionScript to control some of your symbols, those symbols should be movieclips.

  • SQL 2008 R2 Best Practices for Updating Statistics for a 1.5 TB VLDB

    We currently have a ~1.5 TB VLDB (SQL 2008 R2) that services both OLTP and DSS workloads pretty much on a 24x7x365 basis. For many years we have been updating statistics (full scan- 100% sample size) for this VLDB once a week on the weekend, which
    is currently taking up to 30 hours to complete.
    Somewhat recently we have been experiencing intermittent issues while statistics are being updated, which I doubt is just a coincidence. I'd like to understand exactly why the process of updating statistics can cause these issues (timeouts/errors). My theory
    is that the optimizer is forced to choose an inferior execution plan while the needed statistics are in "limbo" (stuck between the "old" and the "new"), but that is again just a theory. I'm somewhat surprised that the "old" statistics couldn't continue to
    get used while the new/current statistics are being generated (like the process for rebuilding indexes online), but I don't know all the facts behind this mechanism yet so that may not even apply here.
    I understand that we have the option of reducing the sample percentage/size for updating statistics, which is currently set at 100% (full scan).  Reducing the sample percentage/size for updating statistics will reduce the total processing time, but
    it's also my understanding that doing so will leave the optimizer with less than optimal statistics for choosing the best execution plans. This seems to be a classic case of not being able to have one’s cake and eat it too.
    So in a nutshell I'm looking to fully understand why the process of updating statistics can cause access issues and I'm also looking for best practices in general for updating statistics of such a VLDB. Thanks in advance.
    Bill Thacker

    I'm with you. Yikes is exactly right with regard to suspending all index optimizations for so long. I'll probably start a separate forum thread about that in the near future, but for now let's stick to the best practices for updating statistics.
    I'm a little disappointed that multiple people haven't already chimed in about this and offered up some viable solutions. Like I said previously, I can't be the first person in need of such a thing. This database has 552 tables, with far more statistics objects than that associated with those tables. The metadata has to be there for determining which statistics objects can go (those not utilized much, if at all, so delete them, and also produce an actual script to delete the useless ones identified) and what the proper sample percentage/size should be for updating the remaining, utilized statistics (again, also producing a script that executes the appropriate UPDATE STATISTICS commands for each table based on cardinality).
    The above solution would be much more ideal IMO than just issuing a single update statistics command that samples the same percentage/size for every table (e.g. 10%). That's what we're doing today at 100% (full scan).
    Come on SQL Server Community. Show me some love :)
    Bill Thacker
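    As a sketch of the cardinality-based approach Bill describes, a sampling rule might look like the function below. The thresholds are entirely made up for illustration, and a real implementation would be T-SQL driven by the tables' actual row counts and statistics metadata, not Python:

```python
def sample_percent(row_count):
    """Illustrative heuristic only: small tables get a full scan, very large
    tables get progressively smaller samples. The thresholds are made up."""
    if row_count < 1_000_000:
        return 100          # a full scan is still cheap at this size
    if row_count < 100_000_000:
        return 30
    return 5                # VLDB-sized tables: keep the runtime bounded
```

The point of the shape, whatever the exact cutoffs, is that one flat sample rate (the 10% vs. 100% dilemma above) is replaced by a per-table decision based on cardinality.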

  • What's best practice for logging messages in pageflow?

    What's best practice for logging messages in pageflow?
    Workshop complains when I try to use a Log4J logger by saying it's not serializable. Is there a context similar to JWSContext that you can get a logger from?
    There seems to be a big hole in the documentation on debug logging in workflows and JSP pages.
    thanks,
    Rodger...

    Make the configuration change in setDomainEnv.cmd. Find where the following variable is set:
    LOG4J_CONFIG_FILE
    and change it to your desired path.
    In your Global.app class, instantiate a static Logger like this:
    transient static Logger logger = Logger.getLogger(Global.class);
    You should be logging now as long as you have the categories and appenders configured properly in your log4j.xml file.
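    For reference, a minimal log4j.xml with a file appender and a debug-level category might look like the following; the package name and log-file path are placeholders to adapt:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <appender name="FILE" class="org.apache.log4j.FileAppender">
    <param name="File" value="pageflow.log"/>
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
    </layout>
  </appender>
  <!-- Category for your pageflow package; change the name to your package -->
  <category name="com.example.pageflow">
    <priority value="debug"/>
  </category>
  <root>
    <priority value="info"/>
    <appender-ref ref="FILE"/>
  </root>
</log4j:configuration>
```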

  • Best practices for updating agents

    We're getting ready to do our first system-wide update of agents to fix a critical bug. Our summer vacation is just ending, and teachers and students will be coming back very soon and turning on our Windows 7 computers for the first time in many weeks, although they won't all be turned on the same day. When they are turned on they will be attempting to get various updates, in particular Windows updates, but also Flash Player and Adobe Reader.
    I need to update the agents as quickly as possible, but I'm concerned about the possibility of the agent update conflicting with another update, especially Windows updates. Isn't it possible that Windows Update could restart a computer while the agent update is happening (or the other way around), leaving the machine in an unstable or unusable state? What are the best practices for dealing with this?
    I considered the possibility of deploying the agent to a dynamic workstation group whose members all have a certain file or files that indicate that they have already received the latest Windows updates. However, I can't see how to create a dynamic group based on such criteria.
    So far I have only updated a few devices at a time using "Deploy System Updates to Selected Devices in the Management Zone". When those updates are done I cancel that deployment because that's the only option I can find that does anything. If you can offer general advice for a better strategy of updating agents I'd appreciate that. Specifically, how would you push an agent update to several hundred computers that will be turned on sometime over the next two weeks?
    Thanks very much.

    Originally Posted by jcw_av
    To be honest, you have to work around your other deploys, etc. The ZCM agent isn't "aware" of other deploys going on. For example, ZPM doesn't care that you're doing Bundles at the same time (you'll get errors in the logs about the fact that only one MSI can run at a time, for example). ZPM usually recovers and picks up where it left off.
    Bundles on the other hand, with System Update, are not so forgiving. Especially if you have the agents prior to 11.2.4 MU1 (cache corruption errors).
    We usually:
    a) Halt all software rollouts/patching as best we can.
    b) Our software deploys (bundles) are on event (user login); the system update is typically on device refresh or a scheduled time, and is device-associated.
    If possible, I'd suggest that you use WOL, the system update, and voila.
    Or, if no WOL is available, tell your users to leave their PCs turned on (they don't have to be logged in) on X night, and set up your system update for that night with auto-reboot enabled. That worked well for us.
    But otherwise the 3 components of ZCM (Bundles, ZPM, System Update) don't know/care about each other, AFAIK.
    --Kevin

  • Best practice for updating ATWRT (Characteristic Value) in AUSP

    I've noticed that when we change the characteristic value of a classification, it does not update in the MM record. We have to go into MM02 for each material number that references the characteristic value and manually change it for the row in AUSP to get it updated.
    Should I just create a report to loop through and update table AUSP directly? Or is there a better way to do this via a function module or BAPI etc.? I want to know what best practice is recommended.

    Hi Scott
    You can use a BAPI to do that.
    Check the following thread:
    BAPI to update characteristics in Material master?
    BR
    Caetano

  • Best Practice for Updating Infotype HRP1001 via Class / Methods

    I want to update an existing (custom) relationship between two positions.
    For example I want
    Position 1 S  = '50007200'
    Position 2 S =  '50007202'
    Relationship = 'AZCR'
    effective today through 99991231
    Is there a best practice or generally accepted way for doing this using classes/methods rather than RH_INSERT_INFTY ?
    If so, please supply an example.
    Thanks...
    ....Mike


  • Best practice for updating SL to 10.6.8

    I recently purchased a 2009 iMac with the Snow Leopard upgrade installed (OS 10.6).
    It looks as though there are two updates; should I install both, or can I just install the last/latest? I'd appreciate being directed to best-practices discussions.
    FYI, I will want to install Rosetta for older applications (CS3 & CS4) that I need for old client files. Thanks.
    Ali

    Buy one. Anything you want to keep shouldn't be on only one drive; problems may occur at any time, and are particularly likely to occur during an OS update or upgrade.

  • Best practice for update/insert on existing/non existing rows

    Hello,
    In an application I want to check if a row exists and if yes I want to update the row and if not I want to insert the row.
    Currently I have something like this:
    begin
      select *
        into v_ps_abcana
        from ps_abcana
       where typ = p_typ
         and bsw_nr = p_bsw_nr
         and datum = v_akt_date
         for update;
    exception
      when no_data_found then
        v_update := false;
      when others then
        raise e_error_return;
    end;

    if v_update = false then
      /* insert new row */
    else
      /* update locked row */
    end if;
    The problem is that the FOR UPDATE lock has no effect for inserts. So if another session executes this part exactly the same time then there will be two rows inserted.
    What is the best way to avoid this?

    For me the 1st solution is the most efficient one.
    In your 2nd solution it seems to me that you're going to create a dummy table that will serve as a traffic cop. That's possible, but it's not a proper, clean approach for your requirement; you're complicating your life where Oracle can do it all for you.
    The first thing you have to consider is your database design. It should correspond to the business rules; don't just enforce them at the program level, leaving the database vulnerable to data-integrity issues such as direct data access. In your particular example, there's no way you can ensure there will be no duplicate records in the table without a constraint.
    One piece of advice when designing a solution: don't use a "Mickey Mouse" approach in your design!
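    The standard race-safe pattern is to let a unique constraint be the arbiter: attempt the INSERT, and on a duplicate-key error fall back to UPDATE (in Oracle, catch DUP_VAL_ON_INDEX, or use a MERGE statement). The sketch below shows the pattern in Python with SQLite purely for illustration; the table and column names echo the post, but the schema is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE ps_abcana (
        typ    TEXT,
        bsw_nr TEXT,
        datum  TEXT,
        value  INTEGER,
        UNIQUE (typ, bsw_nr, datum)  -- the constraint that closes the race
    )
""")

def upsert(conn, typ, bsw_nr, datum, value):
    """Try the INSERT first; on a duplicate-key error fall back to UPDATE.
    Even if two sessions race, the unique constraint guarantees only one
    row is ever inserted; the loser of the race simply updates it."""
    try:
        conn.execute(
            "INSERT INTO ps_abcana (typ, bsw_nr, datum, value) VALUES (?, ?, ?, ?)",
            (typ, bsw_nr, datum, value),
        )
    except sqlite3.IntegrityError:
        conn.execute(
            "UPDATE ps_abcana SET value = ? WHERE typ = ? AND bsw_nr = ? AND datum = ?",
            (value, typ, bsw_nr, datum),
        )

upsert(conn, "A", "1", "20240101", 10)  # first call inserts
upsert(conn, "A", "1", "20240101", 20)  # second call hits the constraint and updates
rows = conn.execute("SELECT value FROM ps_abcana").fetchall()
```

Unlike SELECT ... FOR UPDATE, which only locks rows that already exist, the constraint also covers the case where two sessions both find no row and both try to insert.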

  • Best Practice for Updating Administrative Installation Point for Reader 9.4.1 Security Update?

    I deployed adobe reader 9.4 using an administrative installation point with group policy when it was released. This deployment included a transform file.  It's now time to update reader with the 9.4.1 security msp.
    My question is, can I simply patch the existing AIP in place with the 9.4.1 security update and redeploy it, or do I need to create a brand new AIP and GPO?
    Any help in answering this would be appreciated.
    Thanks in advance.

    I wouldn't update your AIP in place. I end up keeping multiple AIPs on hand: each time a security update comes out I make a copy and apply the update to that. One reason is this: when creating the AIPs, you need to apply the MSPs in the correct order; you cannot simply apply a new MSP to the previous AIP.
    Adobe's supported patch order is documented here: http://kb2.adobe.com/cps/498/cpsid_49880.html.
    That link covers Adobe Acrobat and Reader, versions 7.x through 9.x. A quarterly update MSP can only be applied to the previous quarterly. Should Adobe Reader 9.4.2 come out tomorrow as a quarterly update, you will not be able to apply it to the 9.4.1 AIP; you must apply it to the previous quarterly AIP, 9.4.0. At a minimum I keep the previous 2 or 3 quarterly AIPs around, as well as the MSPs to update them. The only time I delete my old AIPs is when I am 1000% certain they are no longer needed.
    Also, when Adobe's developers author the MSPs they don't include the correct metadata entries for in-place upgrades of AIPs: any AIP based on the original 9.4.0 MSI will not in-place upgrade any installation that is based on the 9.4.0 MSI and AIP; you must uninstall Adobe Reader, then re-install. This deficiency affects all versions of Adobe Reader 7.x through 9.x. Oddly, Adobe Acrobat AIPs will correctly in-place upgrade.
    Ultimately, the in-place upgrade issue and the patch order requirements are why I say to make a copy, then update and deploy the copy.
    As for creating the AIPs:
    This is what my directory structure looks like for my Reader AIPs:
    F:\Applications\Adobe\Reader\9.3.0
    F:\Applications\Adobe\Reader\9.3.1
    F:\Applications\Adobe\Reader\9.3.2
    F:\Applications\Adobe\Reader\9.3.3
    F:\Applications\Adobe\Reader\9.3.4
    F:\Applications\Adobe\Reader\9.4.0
    F:\Applications\Adobe\Reader\9.4.1
    The 9.4.0 -> 9.4.1 MSP is F:\Applications\Adobe\Reader\AdbeRdrUpd941_all_incr.msp
    When I created my 9.4.1 AIP, I entered these at a cmd.exe prompt (if you don't have robocopy on your machine you can get it from the Server 2003 Resource Kit):
    F:
    cd \Applications\Adobe\Reader\
    robocopy /s /e 9.4.0 9.4.1
    cd 9.4.1
    rename AdbeRdr940_en_US.msi AdbeRdr941_en_US.msi
    msiexec /a AdbeRdr941_en_US.msi /update F:\Applications\Adobe\Reader\AdbeRdrUpd941_all_incr.msp /qb

  • Best Practice for update to iPhone and iTouch

    OK, when 3.0 comes down the pike, what is the best way to get 3.0 as a "clean" install? Currently 2.2.1 is on both. If I do a restore, will the system only pick up 3.0, or will it see 2.2.1, which is currently on the hard drive? With that in mind, how can I delete the 2.2.1 version of the iPhone and iTouch software? Sorry for two questions in one post.
    Steve H

    When firmware update 2.0 was released, the entire iPhone was erased first, including the existing firmware (just as when restoring an iPhone with iTunes), followed by 2.0 being installed, which was followed by the iPhone's backup being transferred to the iPhone.
    The same may apply with firmware update 3.0 with your iPhone's backup being updated immediately before. If not, firmware version 2.2.1 will be updated with 3.0.
    If 2.2.1 is updated and you want a "clean" install of 3.0, you can follow the initial upgrade by restoring your iPhone with iTunes.

  • Best practice for updating a list that is data bound

    Hi All,
    I have a List component and the data is coming in from a bindable ArrayCollection. When I make changes to the data in the bindable ArrayCollection, the change is not being reflected in the list. I notice that if I resize the browser the component redraws I suppose and then the list updates. But how can I show the update when I change the data in the bindable ArrayCollection instantly?
    Thanks,
    Ryan

    OK, thanks for that. I have it sorted out now and found out where the problem was; I got a hint from your statement: "truly [Bindable]".
    Yes, the List is using a bindable ArrayCollection, but I'm also using a custom item renderer, and this item renderer takes the data and sets the label fields, which are not bound. I didn't know that I had to carry the binding all the way through. I was overriding the "set data" function and setting the label fields similar to myLabel.text = _data.nameHere inside that function. That's where the problem was.
    It works great now that I bind the data directly to the Label fields in my custom item renderer. I'm also using functions to parse certain pieces of data. Is this taxing on the application? I notice that the List updates every time I scroll, resetting/calling all the functions in my Labels in the custom item renderer (for example: myDate.text = "{parseDate(_data.date)}").
    Thanks!

  • OIM 10g: Best practice for updating OIM user status from target recon?

    We have a requirement, where we need to trigger one or more updates to the OIM user record (including status) based on values pulled in from a target resource recon from an LDAP. For example, if an LDAP attribute "disable-flag=123456" then we want to disable the OIM user. Other LDAP attributes may trigger other OIM user attribute changes.
    I think I need to write a custom adapter to handle "recon insert received" and "recon update received" events from the target recon, but wanted to check with the community to see if this was the right approach. Would post-insert/post-update event handlers be a better choice?

    Thanks Nishith. That's along the lines of what I was thinking. The only issue in my case is that I might need to update additional custom attributes on the OIM User in addition to enable/disable. Because of that requirement, my thought was to call the API directly from my task adapter to do the attribute updates in addition to the enable/disable. Does this seem like a sound approach?

  • Best practices for cleaning up after a Bulk REST API v2 export

    I want to make sure that I am cleaning up after my exports (not leaving anything staging, etc). So far I am
    DELETEing $ENTITY/exports/$ID and $ENTITY/exports/$ID/data as described in the Bulk REST API documentation
    Using a dataRetentionDuration when I create an export (as a safety net in case my code crashes before deleting).
    Is there anything else I should do? Should I/can I DELETE the syncs I create (syncs are not listed in the "Delete an Entity" section of the documentation)? Or are those automatically deleted when I DELETE an export?
    Thanks!
    1086203
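    For what it's worth, the cleanup described above boils down to two DELETE calls per export. A tiny sketch of composing them (in Python; the base URL and entity value are placeholders, not verified endpoints for any particular instance), deleting the staged data before the export definition as one reasonable order:

```python
def export_cleanup_urls(base, entity, export_id):
    """Compose the two DELETE endpoints named in the Bulk API docs:
    the export's staged data and the export definition itself.
    `base` is a placeholder; use your instance's Bulk 2.0 base URL."""
    return [
        f"{base}/{entity}/exports/{export_id}/data",  # staged data
        f"{base}/{entity}/exports/{export_id}",       # export definition
    ]

urls = export_cleanup_urls("https://example.eloqua.test/api/bulk/2.0", "contacts", 42)
```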

    Hi Chris,
    I met the same problem as pod.
    It happens when I try to load all historical activities; one sample is that the same activityId was given to 2 different types (one is EmailOpen, the other is FormSubmit), both generated in 2013.
    Before the full load, I tested my job by extracting activity records from Nov 2014, and there was no unique-ID issue.
    It seems Eloqua fixed this problem before Nov 2014, right?
    So if I start to load activity generated since 2015 there will be no PK problem; otherwise I have to use ActivityId + ActivityType as a compound PK for the historical data.
    Please confirm and advise.
    Waiting for your feedback.
    Thanks~
