Controlling Scope in MDX

Hi,
I have a problem controlling postings to several dimensions. I have the following code:
*SELECT (%MODEL%, ID, MODEL, [MATL_TYPE] = "PARTS")
*FOR %MOD% = %MODEL%
       [MODEL].[#%MOD%] = ([P_ACCT].[P00200],[MODEL].[DUMMY]) * ([P_ACCT].[IKR0000500000],[MODEL].[%MOD%])
*NEXT
*COMMIT
However, I want to direct the posting to a different account (IKR0000616560) than the one it is posting to at the moment. I have tried the following:
      ([P_ACCT].[#IKR0000616560],[MODEL].[#%MOD%]) =  ([P_ACCT].[P00200],[MODEL].[DUMMY]) * ([P_ACCT].[IKR0000500000],[MODEL].[%MOD%])
But I get an error: "Invalid MDX Statement"
How can I get the posting to post to the account I want?
Any help appreciated.
L.

Here is the full code; where should I place the *COMMIT?
//Calculate COS
      [P_ACCT].[#IKR0000608050] =  ( 1 - [P_ACCT].[P00003] ) * [P_ACCT].[IKR0000500000]
      [P_ACCT].[#IKR0000608070] =  ( 1 - [P_ACCT].[P00003] ) * [P_ACCT].[IKR0000500010]
      [P_ACCT].[#IKR0000610510] =  ( 1 - [P_ACCT].[P00003] ) * [P_ACCT].[IKR0000500020]
      [P_ACCT].[#IKR0000608100] =  ( 1 - [P_ACCT].[P00003] ) * [P_ACCT].[IKR0000500040]
*COMMIT
*SELECT (%MODEL%, ID, MODEL, [MATL_TYPE] = "PARTS")
*DIM_MEMBERSET P_ACCT = "IKR0000616560"
*FOR %MOD% = %MODEL%
      [MODEL].[#%MOD%] = ([P_ACCT].[P00200],[MODEL].[DUMMY]) * ([P_ACCT].[IKR0000500000],[MODEL].[%MOD%])
*NEXT
*COMMIT
What I want is a posting to IKR0000616560 whenever there is a change to P00003, P00200 or any other account. At the moment the code posts to the changed accounts, but nothing is posted to IKR0000616560.
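A possible variation to test (this is only a sketch, not verified against a BPC system): keep the MDX-style assignment but flag only the target account with #, leave the MODEL member unflagged, and scope the source accounts with *XDIM_MEMBERSET (assuming that keyword is what *DIM_MEMBERSET above was meant to be). Whether the logic engine accepts a #-flagged account combined with a plain MODEL member in the target tuple is exactly the assumption being tested here:
*SELECT (%MODEL%, ID, MODEL, [MATL_TYPE] = "PARTS")
*XDIM_MEMBERSET P_ACCT = P00200,IKR0000500000
*FOR %MOD% = %MODEL%
      ([P_ACCT].[#IKR0000616560],[MODEL].[%MOD%]) = ([P_ACCT].[P00200],[MODEL].[DUMMY]) * ([P_ACCT].[IKR0000500000],[MODEL].[%MOD%])
*NEXT
*COMMIT
If the engine still rejects a two-member target tuple, another route sometimes suggested is the *WHEN / *REC style of script logic, where the destination account is set inside the *REC statement (e.g. *REC(..., P_ACCT=IKR0000616560)) rather than in the left-hand tuple.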

Similar Messages

  • Clarification?: Frank & Lynn's book - task flow "shared" data control scope

    I'm seeking clarification around shared data control scopes please, regarding a point made in Frank Nimphius and Lynn Munsinger's "Oracle Fusion Developer Guide" McGraw-Hill book.
    On page 229 there is a note that states "The data control scope can be shared only if the transaction is also shared". Presumably this implies that only the transaction options "Always Use Existing Transaction" or "Use Existing Transaction if Possible" are applicable for a shared data control scope.
    However this seems at odds with what the IDE supports, as you can also select the transaction options "<No Controller Transaction>" and "Always Begin New Transaction" when the data control scope is set to shared.
    What's correct? The IDE or the book?
    Your assistance appreciated.
    CM.

    Chris,
    "The data control scope can be shared only if the transaction is also shared"
    At least the book holds correct for what I could test in a simple test case:
    1. No transaction - no sharing
    - no master/detail synchronization; the DCs are not shared
    - a commit in the called BTF does not commit the caller task flow
    2. "Always use existing" transaction selects shared Data Control and automatically disables this field, so there is no other option for this
    3. Shared DataControl and "Always begin new transaction"
    - committing the transaction in the called BTF also commits the transaction in the calling TF
    So the bottom line is that the transaction handling in ADFc appears to be confusing, as it is only a directive for the DataControl to interpret.
    Also see page 14 "Task flow "new transaction" vs. "new db connection"" of : http://www.oracle.com/technetwork/developer-tools/adf/learnmore/march2011-otn-harvest-351896.pdf
    In ADF BC it seems that separated transactions only exist if you use isolated mode. If you use shared and new transaction then basically the transactions are not isolated.
    Frank
    Ps.: I took an action item to follow up with development about what the expected ADF BC behavior for the controller settings is.

  • Data-control-scope=shared is not working

    Steve Muench said in January 2010 in that post (thread id 1012099):
    "In order to share the connection/transaction when data-control-scope=shared, today as an implementation detail we do AM nesting for you at runtime when using task flow calls"
    From that thread I understood that by setting "data-control-scope" to "shared" (this is the default in 11.1.1.3.0) in the task flow definition file (in the Behavior section), it should make the task flow use the same database connection as the parent's application module.
    Is that true?
    If so, it is not working for me.
    - I've created 2 independent ADF Fusion Applications: MyMainApp and MyTFlow
    - Each application has a Model and a ViewController project.
    - Each application has one ViewObject and one Application Module.
    - MyMainApp/Model contains the MyMainService Application Module & the AllEmployeesView ViewObject
    - MyTFlow/Model contains the MyTFlowService Application Module & the AllDepartmentView ViewObject
    - MyMainService and MyTFlowService use the same JDBC datasource (jdbc/GCCDS).
    In the MyTFlow/ViewController project:
    - I've created one page fragment and dragged & dropped AllDepartmentView as an ADF Read-Only Table
    - I've created one task flow (allDept-task-flow-definition) and added the page fragment onto it.
    - I've set the "data-control-scope" to "shared" (it was the default)
    - I've created a new deployment profile (ADF Library Jar) and deployed the jar file
    In the MyMainApp/ViewController project:
    - I've created a JSF page (home.jspx)
    - I've dragged & dropped AllEmployeesView as an ADF Read-Only Form.
    - I've dragged & dropped the "allDept-task-flow-definition" from the Resource Palette onto the JSF page. It automatically put it inside an af:region and imported the jar file into the Libraries (ADF Library).
    I run the home.jspx page. It works fine, but with 2 database connections!
    I think I’ve missed something.
    Best Regards
    Nicolas

    Hi,
    the problem is that the Application Modules are root modules and therefore each opens a new database connection. Only nested application modules reuse the connection of the parent AM. Andrejus Baranovski blogged about this and suggested building reusable projects such that the model and the view controller parts are deployed separately into ADF Libraries and then combined in a super ADF Library. This way you can use nested application modules (a single database connection):
    http://andrejusb.blogspot.com/2010/10/how-to-reduce-database-connections-and.html
    http://andrejusb.blogspot.com/2010/06/adf-regions-and-nested-application.html
    Note that IMO the real power of shared Data Controls comes when you use ADF Regions in an application as a means of modularization.
    Frank

  • Controlling scope of availability check

    Hi,
    The requirement is to have a different scope for the availability check for a Standard Sales Order versus a Replacement Order (replacing the goods for free).
    In the above scenario the item category is different, so we can have a different requirement type and hence a different requirement class.
    Item category determination is based on the sales document type, but the availability check does not have any control at the sales document type level.
    The scope of the availability check can be controlled with the combination of checking group (material master) and checking rule.
    I need clarification here on how we can get a different scope of check for the same material.
    Clarification on this is highly appreciated.
    Regards.

    The checking rule is also assigned per plant, so if the Replacement Order is created from a single plant you can achieve this that way.
    Otherwise, if you don't want to run the availability check at all, you can remove the "Availability" checkbox from the schedule line category.
    Best Regards,
    Ankur

  • How can we assign a different scope of planning in MTO scenario?

    Hi,
    We are using an MTO scenario. In this case, checking control AE is triggered during sales order creation. We want to use a different scope during the rescheduling transaction, but as per my analysis V_V2 is also calling the same AE checking control, even though we assigned a different checking control for backorder processing.
    In this case, can we assign a different checking control (scope of check) for back order processing?
    Regards.

    Let me explain my requirement here....
    This is a kind of trading or sub-contracting scenario. Initially we give a forecast to the vendor, but the vendor starts production of the finished product only after getting a confirmed order from us. Hence we thought to use strategy 50, so we create a PO to the vendor only after getting a confirmed sales order from the customer.
    But in strategy 50 the ATP confirmation is based on PIRs. Hence we created a new requirement class (a copy of 45 with ATP active), so in this new strategy ATP is against actual receipts instead of PIRs.
    But sometimes the vendor cannot supply within the lead time and may postpone the delivery date. In this case we get exception message 10 for the PO, even though the PO is specific to a sales order, and the rescheduling job (V_V2) does not populate the new confirmation date because it refers to the TRLT.
    Hence we want to use ATP based on TRLT during sales order creation, and during rescheduling we want to use a different scope of check ('without TRLT' and with 'no storage location check'). So we created a new checking rule for rescheduling and assigned it to that specific plant.
    As per my observation, the rescheduling job is not picking up the scope of check assigned for rescheduling; it calls the same checking rule AE during rescheduling as well.
    So my question here is... can we call a different scope of check during rescheduling run while using make to order strategy?
    Regards.

  • Intel Hyper-Threading Technology conflicts with LabVIEW utilities (VISA, Scope GUI, IO Trace...)

    I would like to share a pretty hard-to-troubleshoot issue we have been experiencing for the last few months.
    Our company used to get DELL T5500s for our engineers. Those PCs work just fine with all LabVIEW utilities, but DELL has discontinued the T5500 series and replaced it with the T5600. I got one of them a few months ago, and after installing LabVIEW I tried to run the VISA console via MAX. It immediately crashed MAX and destroyed the MAX database. After that I tried to run other utilities like NI IO Trace, VISA Interactive Control, Scope Soft Front Panel, ... All of them crashed. I am running 64-bit Windows 7 + 64-bit LabVIEW, and we know that most NI utilities are 32-bit.
    After a lot of frustration I went down to the computer BIOS level and did a side-by-side comparison with the T5500. The T5600 has a much newer CPU and a lot more performance-enhancement features. I tried turning them off/on one by one to see if any affected the LabVIEW utilities. To my surprise I found that Intel® Hyper-Threading Technology (Intel® HT Technology) was the culprit. After turning it off, all LabVIEW utilities started to work just fine. All T5600s are shipped with this feature enabled by default.
    We know that DELL Precision PCs are almost an industry standard for engineering departments, so I think in the next few years a lot of people will be hit by this issue. I have already notified NI and DELL R&D so they can find a good solution, but I would also like to make this issue Google-searchable so that anybody who sees it may get some help.
    Give me any feedback if you encountered the same problem.
    Thanks,

    This means that you were on a witch hunt and hyperthreading is not the problem. (I always had doubts).
    The original thread was about crashes in the visa console, but your problems seem to be much more generic:
    "- The application stalls unpredictably after some time, sometimes a minute, sometimes hours. After clicking into the GUI it starts working again. This repeats in an unpredictable way. Competitive activities on the computer seemed to increase the stalling-frequency.
    - Sound Input VI stops unpredictably and has to be restarted."
    Are you sure you don't have general code issues such as race conditions or deadlocks? Maybe you should start a new thread and show us a simplified version of your program that still demonstrates the problem. If there are race conditions, moving to a different CPU can cause slight changes in execution order, exposing them.
    Did you repair the LabVIEW and driver installation? What are the power settings of the computer? Did you update other drivers (such as video, power management, etc.)?
    What is the exact CPU you are using? What third party utilities and security software is running on your PC?
    LabVIEW Champion. Do more with less code and in less time.

  • Shared Data Control

    Dear All,
    I often find myself reading documents regarding shared data control and transaction in task flows, but I often scratch my head about what it means.
    I googled it but can't find a good resource that explains the relevance of the topic in ADF programming.
    Can somebody please share a link or a resource where I could read about it?
    Thanks.

    http://download.oracle.com/docs/cd/E12839_01/web.1111/b31974/taskflows_parameters.htm#ADFFD1693
    http://download.oracle.com/docs/cd/E12839_01/web.1111/b31974/bclookups.htm#ADFFD1596
    "Use Existing Transaction if Possible and Shared data control scope options should be used, as, this option will reuse an existing transaction if available from the calling task flow, or, establish a new transaction if one isn't available."

  • How to set the scope for Mxml classes?

    Hi,
    I want to create an MXML class but with internal (package) scope, for example.
    Is it possible? (For AS classes it is.)

    Don't think you can control scope like this in MXML.

  • MDX ParallelPeriod with Multiple Calendars

    Hello,
    I have a bit of a unique situation here. The cube has multiple calendars, in that there is an M2M relationship on the standard date dimension to create the ability to have multiple different calendars (14-ish).
    The problem I now have is trying to do parallel period calculations, e.g. the typical previous year calc:
    (ParallelPeriod([Calendar].[Calendar].[Year],1,[Calendar].[Calendar].CurrentMember),[Measures].[MyTotal])
    is spanning multiple calendars.
    e.g. assume in the calendar dimension I have Calendars A, B and C
    Each Calendar has 3 years of data, 2013, 2014 and 2015.
    You would expect for Calendar C, Year 2013, the Previous year total to be Null or 0, however it is grabbing the total for Calendar B year 2015.
    Is there a way to restrict this, so it only looks at the current calendar name selected?

    Hi BeardyMan,
    According to your description, you want the total measure to show a different value depending on which calendar in the date dimension is selected. Right?
    In this scenario, I suggest using the SCOPE() statement to limit the calculation to each scope. Please refer to the sample below:
    Scope
    (
        [Date].[Fiscal].CurrentMember,
        [Date].[Fiscal].[Month].Members,
        [Measures].[My Total]
    );
        This = 0 ;
    End Scope ;
    Scope
    (
        [Date].[Calendar].CurrentMember,
        [Date].[Calendar].[Month].Members,
        [Measures].[My Total]
    );
        This = ParallelPeriod
        (
            [Date].[Calendar].[Fiscal Year], 1,
            [Date].[Calendar].CurrentMember
        ) ;
    End Scope ;
    Reference:
    SCOPE Statement (MDX)
    Best Regards,
    Simon Hou
    TechNet Community Support

  • Best way to refresh page after returning from task flow?

    Hello -
    (Using jdev 11g release 1)
    What is the best way to refresh data in a page after navigating to, and returning from, a task flow with an isolated data control scope where that data is changed and committed to the database?
    I have 2 bounded task flows: list-records-tf and edit-record-tf
    Both use page fragments
    list-records-tf has a list.jsff fragment and a task flow call to edit-record-tf
    The list.jsff page has a table of records that a user can click on and a button which, when pressed, will pass control to the edit-record-tf call. (There are also set property listeners on the button to set values in the request that are used as parameters to edit-record-tf.)
    The edit-record-tf always begins a new transaction and does not share data controls with the calling task flow. It consists of an application module call to set up the model according to the parameters passed in (edit record X or create new record Y or...etc.), a page fragment with a form to allow users to edit the record, and 2 different task flow returns for saving/cancelling the transaction.
    Back to the question - when I change a record in the edit page, the changes do not show up on the list page until I requery the data set. What is the best way to get the list page to refresh itself automatically upon return from the edit-record-tf?
    (If I ran the edit task flow in a popup dialog I could just use the return listener on the command component that launched the popup. But I don't want to run this in a dialog.)
    Thank you for reading my question.

    What if you pass a bean that has a refresh method as a task flow parameter? Call that method after you save the data, or use a contextual event.

  • Get the Values of the components inside the region onto the Parent Page

    Hi All,
    I am using Jdeveloper 11.1.1.5.
    I just wanted to know whether my use case fits contextual events or whether I should use #{data.pageDef.Attribute.inputValue} to get the values.
    I have a bounded task flow "ChildTF.xml" with a fragment, say "child.jsff". I have dragged ChildTF.xml onto another page, say "Parent.jspx", as a region. The child.jsff page has an input form with 20 fields, all dragged from a View Object, so the field values are not bound to a managed bean. On the parent page "Parent.jspx" I have a button, and on click of the button I need to get the values of those 20 fields from the "child.jsff" page.
    How can I achieve this?
    Thanks
    Shah

    Hi Timo,
    Thanks .
    For sharing the data control I should follow this:
    In the Property Inspector for the called task flow, i.e. ChildTF.xml, select Behavior.
    In the data-control-scope list, select shared.
    Is there anything else I need to do?
    I mean, do I need to add the iterator to the parent page also?
    Kindly suggest!
    Regards,
    Shah
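    For reference, those two Property Inspector steps just set the data-control-scope entry in the called task flow's definition file, so ChildTF.xml should end up containing something roughly like the fragment below (a sketch from memory of the 11g task-flow metadata; the generated id attributes and the rest of the file are omitted):
        <task-flow-definition id="ChildTF">
          <default-activity>child</default-activity>
          <data-control-scope>
            <shared/>
          </data-control-scope>
          <view id="child">
            <page>/child.jsff</page>
          </view>
          <use-page-fragments/>
        </task-flow-definition>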

  • The subtle use of task flow "No Controller Transaction" behavior

    I'm trying to tease out some subtle points about the Task Flow transactional behavior option "<No Controller Transaction>".
    OTN members familiar with task flows in JDev 11g and on wards would know that task flows support options for transactions and data control scope. Some scenarios based on these options:
    a) When we pick options such as "Use Existing Transaction" and shared data control scope, the called Bounded Task Flow (BTF) will join the Data Control Frame of its caller. A commit by the child does essentially nothing, a rollback of the child rolls any data changes back to when the child BTF was called (i.e. an implicit save point), while a commit of the parent commits any changes in both the child and parent, and a rollback of a parent loses changes to the child and parent.
    A key point to realize about this scenario is that the shared data control scope gives both the caller and the called BTF the possibility to share a db connection from the connection pool. However, this is dependent on the configuration of the underlying services layer. If ADF BC Application Modules (AMs) are used and they use separate JNDI datasources, this won't happen.
    b) When we pick options such as "Always Begin New Transaction" and isolated data control scope, the called BTF essentially has its own Data Control Frame separate to that of the caller. A commit or rollback in either the parent/caller or child/called BTF are essentially isolated, or in other words separate transactions.
    Similar to the last point but the exact opposite, regardless how the underlying business services are configured, even if ADF BC AMs are used with the same JNDI data source, essentially separate database connections will be taken out assisting the isolated transactional behavior with the database.
    This brings me back to my question, of the subtle behavior of the <No Controller Transaction> option. Section 16.4.1 of the Fusion Guide (http://download.oracle.com/docs/cd/E17904_01/web.1111/b31974/taskflows_parameters.htm#CIHIDJBJ) says that when this option is set that "A new data control frame is created without an open transaction." So you could argue this mode is the same as isolated data control scope, and by implication, separate connections will be taken out by the caller/called BTF. Is this correct?
    Doesn't this in turn have implications for database read consistency? If we have one BTF participating in a transaction with the database, reading then writing data, and a separate BTF with the <No Controller Transaction> option set, it's possible the latter won't see the data of the first BTF unless that data is committed before the No Controller Transaction BTF is called and queries its own dataset, correct?
    An alternative question which takes a different point of view, is why would you ever want this option, don't the other options cover all the scenarios you could possibly want to use a BTF?
    Finally as a separate question based around the same option, presumably an attempt to commit/rollback the Data Control Frame of the associated No Controller Transaction BTF will fail. However what happens if the said BTF attempts to call the Data Control's (not the Data Control Frame's) commit & rollback options? Presumably this will succeed?
    Your thoughts and assistance appreciated.
    Regards,
    CM.

    For other readers this reply is a continuation of this thread and another thread: Re: Clarification?: Frank & Lynn's book - task flow "shared" data control scope
    Hi Frank
    Thanks for your reply. Okay, I get the idea that we're setting the ADFc options here, which can be overridden by the implementation of the data control, and in my specific case that's the ADF BC AM implementation. I've always known that, but the issue became complicated because it didn't make sense what "No Controller Transaction" actually did and when you should use it, and in turn data control frames and their implementation aren't well documented.
    I think a key point from your summation is that "No Controller Transaction" in the context of ADF BC, with either data control scope option selected, is effectively (as far as we can tell) already covered by the other options. So if our understanding is correct, the recommendation for ADF BC programmers is, I think: don't use this option, as future programmers/maintainers won't understand the subtlety.
    However as you say for users of other data controls, such as those using web services, then it makes sense and possibly should be the only option?
    Also regarding your code harvest pg 14 entry on task flow transactions: http://www.oracle.com/technetwork/developer-tools/adf/learnmore/march2011-otn-harvest-351896.pdf
    ....and the following quote in context of setting the transaction option to Begin New Transaction:
    >
    When a bounded task flow creates a new transaction, does it also mean it creates a new database connection? No.
    >
    ....I think you need to be a little more careful in this answer, as again it depends on the underlying data control implementation, as you point out in this thread. Considering ADF BC, this is correct if you assume only one root AM. However, if the BTFs have separate root AMs, this should result in 2 connections and transactions... at least I assume it does, though I wonder what will happen if both AMs share the same JNDI data source: is the framework smart enough to join the connections/transactions in this case?
    Also, in one of your other code harvests (apologies, I can't find which one at the moment) you point out that sharing data control scopes is only possible if the BTF data controls have the same name. In the context of an ADF BC application with only one root AM used by multiple BTFs, this of course would be the case. Yet the obvious implication for your summary of transaction outcomes in this thread is that if the developers for whatever reason change the DC name across the DataBindings.cpx files sourced from the ADF Libraries containing the BTFs, then no, it won't hold.
    Overall the number of variables in this gets really complicated, creating multiple dimensions to the matrix.
    Going to your last point, how can the documentation be improved? I think as you say the documentation is right in context of the options for ADFc, but, as the same documentation is included in the Fusion Dev Guide which assumes ADF BC is being used, then it isn't clear enough and can be misleading. It would seem to me, that depending on the underlying data control technology used, then there needs to be documentation that talks about the effect of ADFc task flow behavior options in the context of each technology. And God knows how you describe a scenario where BTFs use DCs that span technologies.
    From context of ADF BC, one thing that I've found hard in analyzing all of this is there doesn't seem to be an easy way from the middletier to check how many connections are being taken out from a data source. The FMW Control unfortunately when sampling db connections taken out from a JNDI data source pool, doesn't sample quickly enough to see how many were consumed. Are you aware of some easy method to check the number of the db connections opened/closed?
    Finally, in considering an Unbounded Task Flow as separate to BTFs, do you have any conclusions about how it participates in the transactions? From what I can determine, the UTF lies in its own data control frame and is effectively isolated from the BTF transactions, unless the BTF calls commit/rollback at the ADF BC data control level (as separate to commit/rollback at the data control frame level) and the data control is used both by the UTF and the BTF.
    As always thanks for your time and assistance.
    CM.

  • How to use/sync single data (file) across multiple instances of same application

    Currently we have an application (a diagram editor) that has the ability to save and load (serialize) its state in an XML file.
    Now we want this application to behave like the Microsoft OneNote application, where multiple users have the ability to access the same file.
    Later we may also need to enhance it with other things like (1) what was changed and who changed it, and (2) an option to resolve conflicts, if any.
    I came to know about the Sync Framework to solve this; so far I have not tried it.
    All I want is:
    Virtually, a single file should be edited by multiple instances of the same application.
    We need a dll (Sync Framework) that does the following:
    It takes complete responsibility for file handling.
    Using this dll, each instance of the application will notify its own changes.
    Each instance of the application should have the ability to detect the changes that were recently made (when, who, and what the changes are).
    My question:
    Will sync framework be suitable for this requirement?
    If so, is there a demo application that represents this?
    - Jegan

    Seems like I have found the solution.
    In the taskflow there is a property named data-control-scope and I set it to isolated instead of the default (shared) and this seemed to do the trick.
    I can now have two instances of the same taskflow running with different ApplicationModules
    Cheers,
    Mark

  • How to commit data at the end of a bounded task flow

    Hi all,
    I am using JDev 11.1.1.0.2.
    I have this situation:
    1) A page with a button that goes to a task flow to insert data (property data-control-scope set to shared and property transaction set to requires-transaction, as suggested in http://www.oracle.com/technology/products/jdev/tips/fnimphius/cancelForm/cancelForm_wsp.html?_template=/ocom/print )
    2) At the end of this task flow I have a TaskFlowReturn (property End Transaction set to commit).
    When I click the button associated with this TaskFlowReturn, I return to the first page (described in 1)), but the data I have just inserted is only submitted, not committed.
    What's the problem?
    Any suggestions?
    Thanks
    Andrea

    Hi,
    if you set the return activity to commit the transaction then this is done. I don't see why rollback should work but commit doesn't
    Frank
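    For readers comparing notes: the combination described in this thread (data-control-scope shared, transaction requires-transaction, and a task flow return whose End Transaction property is commit) corresponds roughly to task-flow metadata like the sketch below. The activity names are invented for illustration and the generated id attributes are left out; this is not the actual file from the thread:
        <task-flow-definition id="insert-data-tf">
          <default-activity>insertForm</default-activity>
          <transaction>
            <requires-transaction/>
          </transaction>
          <data-control-scope>
            <shared/>
          </data-control-scope>
          <view id="insertForm">
            <page>/insertForm.jsff</page>
          </view>
          <task-flow-return id="done">
            <outcome>
              <name>done</name>
              <commit/>
            </outcome>
          </task-flow-return>
          <control-flow-rule>
            <from-activity-id>insertForm</from-activity-id>
            <control-flow-case>
              <from-outcome>save</from-outcome>
              <to-activity-id>done</to-activity-id>
            </control-flow-case>
          </control-flow-rule>
        </task-flow-definition>
    If the return activity actually reached at runtime is a different one without <commit/>, the data would only ever be submitted, which would match the symptom described above.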

  • How can I refresh a calling page on return from a TF activity ?

    Hi,
    I have a (sounds easy) case where a view activity invokes a BTF with fragments. The called BTF has been declared with isolated data control scope. Inside the BTF some updates happen, and when the BTF returns, the calling view should display the modified data. The BTF should not be opened as a dialog, and the above takes place inside a region. There are examples on the Internet of how to implement such a case, but only for calling a task flow with pages from a dialog.
    I tried first creating the task flow binding definition (19.10.1 How to Associate a Page Definition File with a Task Flow Activity: http://docs.oracle.com/cd/E35521_01/web.111230/e16182/taskflows_activities.htm#sthref539), then I set the related operations (Execute, setCurrentRowWithKey, ...) in the page definition and invoked the operations inside the "After Listener" (<after-listener>) of the task flow activity, but the binding context is not accessible inside the listener; I get an NPE accessing the binding.
    After some tests I have found that I can call the operations using "#data.taskflowdefinition..", but I am afraid of using this technique because of the many papers stating that this is bad practice.
    Additionally, I tested using an invokeAction in the calling page's page definition and it works, but I would prefer "refreshing" the model from the activity (there are many pages calling the same activity).
    I am wondering if there is a more elegant solution that I haven’t seen yet.
    Thanks for any ideas,
    Yiannis

    Hi Timo and thank you for your reply,
    As far as I know, any method executed in the BTF has no effect on the calling page because of the isolated data control scope.
    Consider the following layout of a very simple TF diagram. The BTF has "Share data controls with calling task flow" unchecked and "Always begin new transaction" set:
    View --> Method Call --> BTF
    Now imagine that a user navigates from the View to the BTF, makes updates and finally commits the transaction from a task flow return activity. The updates are executed inside the BTF's "private" data control scope because of the BTF settings.
    Returning to the calling View, the user sees stale data from the View's Data Control until he re-queries the model.
    ADF supports execution of code during the call of the BTF (through the Method Call) and also sending and returning parameters between the View and the BTF.
    Conversely, on return from the BTF there is no handler for executing code (something like a "Return Method Call") in order to refresh the View (e.g. re-query and set the current row) using the return parameters.
    The "After Listener" of the task flow call activity has no access to the binding context. Using #{data.bindingTFXXX}, I guess there is a risk of an NPE in a high-availability environment, where the HTTP call to the BTF might be processed on a different server than the returning HTTP call, so #{data.bindingTFXXX} might not exist.
    The other solution I found, using invokeAction with a RefreshCondition, depends on the return values of the BTF and bloats the View with bindings that I would prefer to have in a central place.
    Am I missing something in the whole flow above?
    Yiannis
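    As a side note on the invokeAction route mentioned above: that variant lives in the calling page's page definition as an executable, roughly like the snippet below. The binding name (Execute) and the refresh condition are placeholders for illustration, not taken from the thread, and whether this fits depends on how the BTF exposes its return parameters:
        <executables>
          <invokeAction id="requeryAfterReturn" Binds="Execute"
                        Refresh="ifNeeded"
                        RefreshCondition="#{requestScope.btfReturned}"/>
        </executables>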
