Schema qname collision issue

I am trying to build a BPEL orchestration (11g PS2) to receive and process an outbound message from Salesforce. Salesforce provides the WSDL for the outbound message that it sends to my service. On its own, this is not a problem. However, circumstances require that I make additional queries against Salesforce, and the WSDL for those queries is separate. The problem is that the in-line schemas of both WSDLs share the same namespace for some schema definitions, resulting in duplicate qnames. In one case, the duplicate qnames represent distinctly different definitions (one is essentially a subset of the other). I naively hoped I could get away with having both in the same composite, ordering them so that the later import trumps the earlier one for the common definitions while all other definitions are merged. It appears that only one import is applied; I can change which one by reordering the imports in the composite - not quite what I had hoped for. My way around this problem has been to construct a sort of proxy composite to keep the two WSDLs from 'touching'. Can anyone suggest a better way to handle this situation?
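To make the collision concrete, here is a minimal sketch of the situation (the namespace, type name, and fields are made up for illustration, not taken from the actual Salesforce WSDLs): two in-line schemas declaring the same qname differently.

<!-- in-line schema of the outbound-message WSDL: a cut-down definition -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="urn:sobject.example.com">
  <xs:complexType name="Account">
    <xs:sequence>
      <xs:element name="Id" type="xs:string"/>
      <xs:element name="Name" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>

<!-- in-line schema of the query WSDL: same namespace, same name, a superset -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="urn:sobject.example.com">
  <xs:complexType name="Account">
    <xs:sequence>
      <xs:element name="Id" type="xs:string"/>
      <xs:element name="Name" type="xs:string"/>
      <xs:element name="Industry" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>

Both declare the qname {urn:sobject.example.com}Account, so only one of the two definitions survives when both WSDLs are imported into the same composite.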
Thanks,
Cris

Similar Messages

  • Schema level Import issue

    Hi,
    Recently I faced an issue:
    a schema backup from one database was created for <SCHEMA1>, whose default tablespace is <TABS1>, and I am trying to import it into <SCHEMA1> of a different database whose default tablespace is <TABS2>, but the import looks for the <TABS1> tablespace. I used the fromuser/touser clause during import.
    So, how can I perform this task without creating a <TABS1> tablespace and assigning it as the default tablespace for <SCHEMA1>, and without renaming the <TABS2> tablespace to <TABS1>, which is a tedious task in Oracle 9i?

    1. Set up a default tablespace for the target user.
    2. Make sure the target user doesn't have the RESOURCE role and/or the UNLIMITED TABLESPACE privilege.
    3. Make sure the target user has quota on the default tablespace ONLY:
    alter user <user> quota unlimited on <target> quota 0 on <the rest>;
    4. Import without importing indexes; those won't be relocated.
    5. Run imp indexfile=<any file> to spool a file with the create index statements.
    6. Edit this file, adjusting the tablespaces.
    7. Run it.
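    A minimal sketch of steps 1-6, with placeholder names (schema1, tabs2, and the dump file name are assumptions, not from the post):
    alter user schema1 default tablespace tabs2;
    revoke unlimited tablespace from schema1;
    alter user schema1 quota unlimited on tabs2;
    imp system/*** file=exp.dmp fromuser=schema1 touser=schema1 indexes=n
    imp system/*** file=exp.dmp fromuser=schema1 touser=schema1 indexfile=crea_idx.sql
    -- edit crea_idx.sql so the TABLESPACE clauses point at tabs2, then run it in SQL*Plus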
    Tablespaces can't be renamed in 9i.
    Sybrand Bakker
    Senior Oracle DBA

  • Schema filter sorting issue

    After connecting to EBS database I filtered "Other users" to see only 2 schemas.
    The schemas were sorted alphabetically on both sides, which made it easy to select the ones I needed.
    After that I tried to modify the filter, and this time the Available schemas are not sorted (at least not by name). The Displayed schemas are still sorted.
    There is a workaround, however. Just keep pressing the schema name first letter to cycle through all schemas starting with that letter. Or type in full name to jump to it directly.
    Pressing [Help] brings up the error "Cannot find help content...", which is understandable. :)
    Developer version v1215, Windows 2k.

    Another workaround is to use >> to shift all users to Displayed and then << to shift them all back to Available; then they will be alphabetical. That should not be the permanent solution - I did log a bug on this.
    -- Sharon

  • Pricing Schema Header Condition Issue

    Dear gurus,
    how can I do this?
    Item 1: Sales Price: 100 EUR, MWST: 18 EUR
    Item 2: Sales Price: 200 EUR, MWST: 36 EUR
    In the header:
    Sales Price: 300 EUR
    MWST: 54 EUR
    Total: 354 EUR
    There is a header condition, YOFI:
    YOFI: 355 EUR
    YODI = YOFI - TOTAL
    The problem is that SAP distributes YOFI (which is a header condition) across the items.
    As a result, the calculation is wrong.
    How can I fix it?
    Regards,

    You don't want YOFI to be distributed to the items, correct? If so, then in t.code V/06, for condition type YOFI, remove the checks in the group condition, item condition, and quantity boxes, and check by creating a new sales order.
    Regards,

  • Issues faced with XML (Objt-Rel) - Plan to move to Binary XML (schema-less)

    Hi All,
    Our production DB has an Oracle XML DB implementation using 9 XML DB Object-Relational
    tables. These have been in place for almost a year, and we have faced several issues;
    I have listed some of the most important ones below.
    Since it is an Object-Relational implementation, we have 4-5 XSDs to start with that
    support the Object-Relational schema.
    1) copyEvolve issues: Due to constantly changing business requirements, we had to continuously
    modify/upgrade the XSDs and then use "copyEvolve" to apply the new XSDs.
    We encountered several issues with copyEvolve.
    2) "Home-grown" solution to evolve the XSD/schema:
    We came up with our own solution to migrate/evolve the XSD schema:
    a) Back up all data from the 9 XML DB tables to 9 CLOB tables (the data is thus "dereferenced" and "delinked"
    from the underlying XSDs).
    b) With the data backed up to CLOB tables, drop the entire schema, register the
    new XSDs, and recreate the tables (GRANTs and PUBLIC SYNONYMs reapplied, needless to say).
    c) Reload the data from the backed-up CLOB tables into the newly created XML DB (Object-Relational) tables.
    The above approach (without "copyEvolve") has worked fine so far and has helped in every release/migration.
    With our data sets becoming increasingly huge, the downtime is no longer sufficient for this successful home-grown
    approach, and as a result we would like to move away from Object-Relational XML DB tables altogether.
    3) Our application currently uses XPath heavily on all 9 XML DB tables, and we understand XPath is already
    deprecated by Oracle (as of 11.2.0.2).
    We are seriously considering doing the following:
    1) Migrate all 9 XML DB tables, changing the underlying storage from Object-Relational to binary XML (schema-less)
    2) Modify the application code, replacing all XPath with the corresponding XQuery constructs
    3) Replace the existing B-tree indexes on the Object-Relational XML DB tables with a) indexes on virtual DB columns to enforce primary key and unique constraints, and b) XMLIndex on all other non-unique columns instead of
    the corresponding B-tree indexes
    What we hope to achieve with the above:
    1) Eliminate XSD usage completely (and thereby the copyEvolve nightmares)
    2) Eliminate usage of XPath totally
    3) Better performance overall :-)
    We would like to get some advice and feedback on our proposed plan, and mainly on whether we are taking the
    right direction, especially in respect of performance (point 3 above).
    Any feedback or tips would be truly appreciated
    Regards and Thanks
    Auro

    WRT XPath vs XQuery:
    1. XPath is a subset of XQuery. Any XPath expression is, by definition, an example of a simple XQuery expression, so XPath is not deprecated.
    2. What we are deprecating are the older, Oracle-specific XML operators (EXTRACT(), EXTRACTVALUE(), EXISTSNODE()), which ONLY support XPath. We are deprecating these in favour of the new SQL/XML operators (XMLTABLE, XMLQUERY, XMLEXISTS) defined by the SQL standards committee. These operators provide support for the full XQuery standard, and implicitly all of the XPath expressions that were supported by the older operators. This means we strongly recommend that new code developed to work with 11g make use of the newer operators. Personally I cannot see a point where we would ever consider de-supporting the older operators; we are well aware of how much code makes use of them.
    What this means is that any code written using the older operators will continue to work, unmodified. However, should a bug surface in the use of the older operators, we would strongly recommend that the code in question be migrated to the new operators as part of the remediation process.
    Also, once the initial pain of learning the new syntax is overcome, I truly believe that the new operators result in much more efficient and maintainable code, so taking the time to do code renovation when possible will probably pay off in the long term...
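    For illustration, a hypothetical rewrite from the old operators to the new ones (the table and XML element names are made up):
    -- old style, XPath-only operators:
    select extractvalue(p.object_value, '/PurchaseOrder/Reference')
    from   purchaseorder_tab p
    where  existsnode(p.object_value, '/PurchaseOrder[Status="OPEN"]') = 1;
    -- new style, SQL/XML standard operators:
    select x.reference
    from   purchaseorder_tab p,
           xmltable('/PurchaseOrder[Status="OPEN"]' passing p.object_value
                    columns reference varchar2(30) path 'Reference') x;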
    WRT moving away from schema-based O-R storage, I would look at the kind of changes you have made to the XML schema. If they are the kind of changes that would be supported by in-place evolution in 11g, then you might want to reconsider this. If, on the other hand, you are regularly making changes to the XML schema that are not backward compatible with your older XML schema, then schema-based binary XML storage (which is more flexible than schema-based Object-Relational storage) or even non-schema-based binary XML may be a better choice for your application.
    I would experiment by registering the oldest version of your XML schema in 11gR2, and then testing each of the evolutions you have gone through to see if 11gR2 in-place evolution would have managed them. Also, ask yourself whether you expect your XML schema to keep changing so drastically going forward, or whether some of these changes were the growing pains associated with learning how to use XML Schema effectively. BTW, the approach you outline is effectively what copyEvolve is doing under the covers...
    Bear in mind that for the use cases Object-Relational storage addresses, where all of the XPath expressions are correctly rewritten into SQL operations on the underlying tables and the majority of queries end up accessing or updating leaf-level nodes in the XML, it is unlikely that a binary XML / unstructured XMLIndex combination will deliver similar performance. If you are only accessing a small subset of the leaf-level nodes, creating structured XMLIndexes that project out the nodes in question may be able to deliver performance similar to an Object-Relational storage model, but you will need to get the index definitions correct.
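    A minimal sketch of such a structured XMLIndex (the table, paths, and column names are hypothetical):
    create index po_sxi on purchaseorder_tab (object_value)
      indextype is xdb.xmlindex
      parameters ('XMLTABLE po_idx_tab ''/PurchaseOrder''
                   COLUMNS reference VARCHAR2(30) PATH ''Reference''');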
    -Mark

  • [RESOLVED] dbms_metadata.get_ddl() issues

    Hi all,
    I'm having a bit of an issue using the dbms_metadata package. I've never used it before, so possibly I'm unaware of something basic.
    I'm getting different results on my production and dev servers, both of which have this configuration:
    Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.6.0 - Production
    I am trying to replicate the DDL for one particular schema and so tried the following:
    SQL> select dbms_metadata.get_ddl('TABLE',u.table_name)
      2  from user_tables u
      3  where rownum = 1;
    ERROR:
    ORA-06502: PL/SQL: numeric or value error
    LPX-00210: expected '<' instead of 'n'
    ORA-06512: at "SYS.UTL_XML", line 0
    ORA-06512: at "SYS.DBMS_METADATA_INT", line 3698
    ORA-06512: at "SYS.DBMS_METADATA_INT", line 4553
    ORA-06512: at "SYS.DBMS_METADATA", line 458
    ORA-06512: at "SYS.DBMS_METADATA", line 615
    ORA-06512: at "SYS.DBMS_METADATA", line 1221
    ORA-06512: at line 1
    I tried again, hard-coding a table name, and got these very scary results:
    SQL> select dbms_metadata.get_ddl('TABLE','CAMPAIGN_LOOKUP','XMLUSER') FROM DUAL;
    ERROR:
    ORA-06502: PL/SQL: numeric or value error
    ORA-31605: the following was returned from LpxXSLResetAllVars in routine
    kuxslResetParams:
    LPX-1: NULL pointer
    ORA-06512: at "SYS.UTL_XML", line 0
    ORA-06512: at "SYS.DBMS_METADATA_INT", line 3722
    ORA-06512: at "SYS.DBMS_METADATA_INT", line 4553
    ORA-06512: at "SYS.DBMS_METADATA", line 458
    ORA-06512: at "SYS.DBMS_METADATA", line 615
    ORA-06512: at "SYS.DBMS_METADATA", line 1221
    ORA-06512: at line 1
    In the above, I am logged in as the XMLUSER user, which owns the table campaign_lookup.
    Next I tried getting the DDL for a table in another schema while still logged in as xmluser.
    I'm certain that I have read/write access to the tclient table, but I got these results
    (possibly the dbms_metadata package requires you to be logged in as the schema owner, though
    the Oracle documentation link gives me a 404 error at the moment so I can't check):
    SQL> select dbms_metadata.get_ddl('TABLE','TCLIENT','TRAVEL') from dual;
    ERROR:
    ORA-31603: object "TCLIENT" of type TABLE not found in schema "TRAVEL"
    ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
    ORA-06512: at "SYS.DBMS_METADATA", line 628
    ORA-06512: at "SYS.DBMS_METADATA", line 1221
    ORA-06512: at line 1
    So now I log into production:
    SQL> select dbms_metadata.get_ddl('TABLE',u.table_name)
      2      from user_tables u
      3      where rownum = 1;
    DBMS_METADATA.GET_DDL('TABLE',
      CREATE TABLE "XMLUSER"."CAMPAIGN_LOOKUP"
       (    "SCHEME_ID" VARCHAR2(30),
            "S
    etc...
    ...but still can't extract DDL for another schema.
    My main issue is that I can't log into production (or our implementation environment) as the schema
    I want to extract, due to big nasty DBAs locking it all down. However, I can in dev, but I get the above errors.
    thoughts anyone?

    Hi,
    For your table-not-found error it could be that:
    1) The table does not exist
    or
    2) Nonprivileged users can see the metadata of only their own objects;
    SYS and users with SELECT_CATALOG_ROLE can see all objects.
    For the other problem, there is a bug reported for your version; you can search Metalink for a workaround.
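    For the cross-schema case, a minimal sketch (using the names from your post; the grant must be run by a privileged user):
    grant select_catalog_role to xmluser;
    -- then, connected as xmluser:
    select dbms_metadata.get_ddl('TABLE', 'TCLIENT', 'TRAVEL') from dual;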
    Regards

  • Issue with triggers while importing (imp) to another user

    hi all,
    we have two oracle databases (oracle 10g R2)
    1. production
    2. test (clone of production)
    Whenever we need to update the test database for the developers, we take a full export dump of production (hardly 20GB) and import it into test after dropping the tables in the required schema:
    exp system/***@production file=exp.dmp log=log.log full=y statistics=none
    We have a schema named user1 in production;
    the same is the case in test.
    I got a request to refresh the user1 schema in test, but into a different schema, user2 (newly created), as there was some critical dev work going on in the user1 schema.
    Hence I didn't want to touch the user1 schema in the test environment, and after taking an export dump of production I didn't import it into the same schema.
    I imported the dump data into the user2 schema:
    imp system/***@test file=exp.dmp log=implog.log fromuser=user1 touser=user2 statistics=none
    The moment I did this, developers working with the user1 schema started facing issues when their code hit triggers on the user1 schema. All insert/update triggers in the user1 schema started pointing to the corresponding tables in user2.
    Any clues why something like this happened? What was the mistake in the imp command I used?
    thanks for your time

    Hi,
    Check the trigger source in your original schema. If it has the schema name mentioned before the tables on which the trigger is created, it will cause problems during import: while importing, the trigger will still reference the previous schema, even though you are importing it into a new schema.
    If this is the case, remove the schema name from the source, then export and re-import.
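    As an illustration (the trigger and table names are hypothetical): imp does not remap schema-qualified names inside trigger DDL, while unqualified names resolve in the importing schema, which can produce exactly this kind of cross-schema mix-up:
    create or replace trigger orders_ai
    after insert on user1.orders   -- hard-coded schema: not remapped by fromuser/touser
    for each row
    begin
      -- unqualified name: resolves in the schema the trigger is imported into (user2)
      insert into order_audit (order_id) values (:new.order_id);
    end;
    /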
    Regards
    Vinitaa

  • Exception: Unable to parse schema abc.xsd

    Hi,
    I am using abc.xsd in a file adapter, where I browse for the schema file. The issue is that abc.xsd internally refers to other
    schema files, so it gives me the error "Exception: Unable to parse schema abc.xsd". If I manually copy all the XSDs it requires
    into the project BPEL folder, it works. The issue is that in the production environment all these XSDs will come from one shared
    folder, so I am confused about how to achieve this, i.e. refer to them from a shared folder outside the project
    without copying all the XSDs into the project BPEL folder.
    thanks
    Yatan

    You can enable WebDav on the Oracle HTTP Server component.
    cd $ORACLE_HOME/Apache/Apache/htdocs/dav_public
    mkdir Schemas
    cd $ORACLE_HOME/Apache/oradav/conf
    Open the moddav.conf file for editing. Find the entry for the dav_public directory and modify it as follows:
    <Location /dav_public>
    DAV on
    </Location>
    Restart the server
    Now the schema can be accessed using the URL http://soahost:port/dav_public/Schemas/abc.xsd
    This way you can share common XSDs among different BPEL processes.
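    The referring schema or WSDL can then import the shared XSD by absolute URL instead of a local path (the namespace below is a placeholder):
    <xsd:import namespace="http://example.com/ns"
                schemaLocation="http://soahost:port/dav_public/Schemas/abc.xsd"/>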
    --Prasanna

  • Best practice to avoid Firefox cache collision

    Hi everybody.
    Reading through this page of Google Developers
    https://developers.google.com/speed/docs/best-practices/caching#LeverageBrowserCaching
    I found this line, whose exact meaning I can't figure out:
    "avoid the Firefox hash collision issue by ensuring that your application generates URLs that differ on more than 8-character boundaries".
    Can you please explain this with practical examples ?
    For example, are these two strings different enough?
    "hello123hello"
    "hello456hello"
    or these
    "1aaaaaa1"
    "2aaaaaa2"
    Also, which versions of Firefox have this problem?
    Thank you

    Personally, I would work out the application design before considering performance settings. There are so many variables involved that trying to do it all upfront is going to be difficult.
    That being said each developer has their own preferred approach and some will automatically add certain expressions into the Essbase.cfg file, set certain application level settings via EAS (Index Cache, Data Cache, Data File Cache).
    There are many posts discussing these topics in this forum so suggest you do a search and gather some opinions.
    Regards
    Stuart

  • Column int_objname empty after register schema

    Hi all,
    We have come across a very strange feature in our XDB repository (RDBMS 10.2.0.2 EE), and I hope someone can give a hint to solve it.
    In the past we registered XML schemas and worked with them successfully. The schemas are local. Now we're implementing new versions of these XML schemas by deleting the old ones and registering the new versions. But when we try to delete the old XML schemas, the result is ORA-31000:
    begin dbms_xmlschema.deleteschema ('budget-canoniek.xsd', dbms_xmlschema.delete_cascade_force ); end
    ERROR at line 1:
    ORA-31000: Resource 'budget-canoniek.xsd' is not an XDB schema document
    ORA-06512: at "XDB.DBMS_XMLSCHEMA_INT", line 82
    ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 102
    ORA-06512: at line 1
    However, all_xml_schemas shows that the schema does exist:
      1  select owner, schema_url, int_objname
      2    from all_xml_schemas
      3*  where schema_url like '%canoniek%' and local = 'YES'
    puc@PGIO> /
    OWNER      SCHEMA_URL                INT_OBJNAME
    PUC        budget-canoniek2.xsd      XDLTCTsQgnhyHgQPwK+fx/kQ==
    PUC        budget-canoniek.xsd
    To test what is happening, I registered a schema under another name: budget-canoniek2.xsd. This schema can be unregistered without any problems.
    The difference between the two schemas is the missing INT_OBJNAME value.
    Questions:
    - Does anyone have a clue how this int_objname can get nullified? I guess some internal process is doing this?
    - Does anyone have a clue how we can get this column fixed and filled again? Apparently, this column causes the dbms_xmlschema.deleteSchema procedure to fail. It also causes other functions on this schema to fail.
    - In the thread "Error when deleting schema" the same issue was "solved" by deleting a record from the xdb$schema table, but that was not recommended, and an alternative was not given. Before I totally wreck my XDB by executing this delete statement: is there an alternative?
    Please help,
    Regards,

    Harm, could you request the backport via Metalink? An upgrade to 10.2.0.3.0 will not be a solution; I just checked on Metalink.
    I will install it on Thursday (tomorrow I am off to a client) for the project, if it is available for download.
    Grz
    Marco

  • New Travel Schema

    Hi Gurus,
    I have a question regarding travel schemas. Our issue is that we need to create a new travel schema to which not all expense types are assigned, and at the same time remove some expense types from an already created travel schema.
    How do we configure the system to remove and select the particular expense types that should be in the schema?
    Thanks.

    Hi
    Please check the below mentioned path
    http://help.sap.com/erp2005_ehp_04/helpdata/en/2c/efb007c22c11d194cb00a0c92946ae/content.htm
    You can add the missing Travel Management configuration path by using the below mentioned links:
    http://sap.ittoolbox.com/groups/technical-functional/sap-hr/how-to-add-travel-management-node-in-ecc-60-2162708
    http://sap.ittoolbox.com/groups/technical-functional/sap-acct/travel-management-missing-menus-2334641
    Regards
    Praveen PC

  • Give the read only access to user on Apps Schemas!

    Hi,
    How can we create a database user and give it access to the APPS schemas (INV, PO, etc.)?
    Please assist me.
    Thanks,
    Faziarain

    Dear Hussein,
    I followed Anil Passi's forum post for creating an apps read-only schema, but the only issue is that we have multi-org enabled.
    What modification do we need in this step?
    Step 4
    Write an after-logon trigger on the apps_query schema. The main purpose of this trigger is to alter the session to the apps schema, so that CURRENT_SCHEMA will be set to apps for the session (whilst retaining the apps_query restrictions). In doing so, your logon will retain the permissions of the apps_query schema (read-only). However, it will be able to reference the apps objects by exactly the same names as a direct connection to the apps schema.
    conn apps/&1 ;
    PROMPT CREATE OR REPLACE TRIGGER xx_apps_query_logon_trg
    CREATE OR REPLACE TRIGGER xx_apps_query_logon_trg
      --16Jun2006 By Anil Passi
      --Trigger to toggle schema to apps, while retaining apps_query restrictions
      --Also sets the org_id
      AFTER logon ON apps_query.SCHEMA
    DECLARE
    BEGIN
      EXECUTE IMMEDIATE
        'declare begin ' ||
        'dbms_application_info.set_client_info ( 101 ); end;';
      EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = APPS';
    END;
    /
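    A quick way to verify the trigger's effect after connecting as apps_query (plain SQL, nothing EBS-specific assumed):
    select sys_context('USERENV', 'CURRENT_SCHEMA') from dual;  -- should now return APPS
    select sys_context('USERENV', 'SESSION_USER') from dual;    -- still APPS_QUERY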
    Thanks and Regards

  • DCOM 10009 error after renaming local computer name. (No network computers involved in this issue)

    The DCOM 10009 error that is generated references the previous name of the computer. Here is the scenario
    (these aren't the real names of the computer): the computer was named
    computerX-NEW and was renamed to computerX during deployment setup. The DCOM error states "DCOM was unable to communicate with the computer
    computerX-NEW using any of the configured protocols." When I look at the error details, a PID is referenced, which turns out to be the RPCSS process/service. I suspected a piece of software that was installed on the computer, until I performed
    a clean boot with all startup programs disabled and all non-Microsoft services disabled; the error still occurred. The error is occurring about 5 or 6 times per minute. The only thing I can start doing at this point is disabling MS services at boot to find
    the culprit, but I'd like a less brutal approach to correct the issue. Any suggestions?

    Hi,
    The following are possible causes of this error:
    1. Name resolution errors are occurring.
    2. All TCP ports on the server are being used.
    3. TCP port collisions are occurring.
    4. The network connections may not be configured properly.
    To troubleshoot DCOM 10009 errors, use the following methods.
    Method 1: Verify that name resolution is working correctly
    The activation page for a COM+ proxy application contains a Remote Server Name (RSN) property. The RSN property can be an IP address, a Fully Qualified Domain Name (FQDN), or a NetBIOS name. To troubleshoot this issue, use the ping command to test connectivity
    to the remote server by using the IP address, the FQDN, and the NetBIOS name.
    Method 2: Verify TCP port usage
    When a client makes DCOM calls to a COM+ server application, each connection may use a different TCP port. Therefore, all TCP ports on the server may be used. When this condition occurs, the server cannot accept additional connections.
    Method 3: Verify basic network connectivity to troubleshoot TCP collision issues
    Method 4: Check the network connections. Check the registry under HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc\DCOM Protocols for the list of RPC protocol sequences.
    For further troubleshooting information, please also refer to the following Microsoft article:
    Known post installation event errors in SBS 2008 (and how to resolve them)
    http://blogs.technet.com/b/sbs/archive/2008/08/26/known-post-installation-event-errors-in-sbs-2008-and-how-to-resolve-them.aspx
    Hope it helps.
    Regards,
    Blair Deng
    TechNet Community Support

  • Promote Properties from Orchestration without dependency on Property Schema

    I have a property schema deployed in a common application; all properties are of MessageContextPropertyBase type. I want to promote properties on an orchestration's outgoing message, which is generally done by adding a reference to the property schema assembly,
    assigning values to the promoted properties, and then initializing a correlation on the property while sending the message. This approach creates a dependency on the property schema assembly. I don't want to have this dependency. Is there any other way to promote
    properties from an orchestration except correlation initialization?

    A Promoted Property in BizTalk is a method of naming and sharing names between BizTalk artifacts. Say we receive a message and want to route it by looking at some node in this message or some context value of this message. We don't want to use an XPath;
    we just want to use the name of some message property (which internally can be mapped to an XPath or to some context value - an implementation detail). So now we have a name. The next step is to scope my names within some namespace.
    The solution in BizTalk was to implement the names as XML schema root nodes together with XML namespaces. Possible alternatives could have been a DB or some generic service to store and manage the shared names, but in BizTalk we have names which are
    effectively the nodes of a property schema. To manage those shared names we can use the standard BizTalk tooling for XML schemas, which is nice. Now we can use the same name for different message properties in several messages, which gives us a nice correlation
    method.
    But again, the main idea of a Promoted Property in BizTalk is a method of naming and sharing names between BizTalk artifacts. Sorry for this long description; I just want to make sure we are on the same page.
    Back to your question. It is not clear what your issue is.
    Is it the awkward way of promoting properties on new messages in an orchestration?
    IMHO, this is a real issue, but a small one.
    Is it the application dependency created by the shared schema? This issue is a big one, IMHO. It harms development and deployment productivity, but it makes deployment more secure and reliable. Reliability won this
    battle. If we compare the deployment process of .NET applications with BizTalk applications, .NET app deployment is simpler and less reliable; BTS app deployment is more complex and more reliable. Developers now have to pay close attention to the
    deployment aspects of a BTS application up front.
    Leonid Ganeline [BizTalk MVP]
    BizTalk Development Architecture: http://social.technet.microsoft.com/wiki/contents/articles/20258.biztalk-integration-development-architecture.aspx

  • Root Application Module issue

    We use JDev Build JDEVADF_11.1.1.5.0_GENERIC_110409.0025.6013.
    We have a scenario as follows
    Model
    1. We have a nested AM scenario which contains 2 view instances of the same View Object (EmployeeView) in the nested AM
    View
    1. URL Invokable Bounded Taskflow containing just one JSPX with transaction set as "Always Begin New Transaction - Not Sharing Data Controls"
    2. Bounded Taskflow with JSFF with transaction set as "Use Existing Transaction If Possible - Sharing Data Controls"
    3. (2) is dropped as a region in the Page in context of (1)
    4. The JSPX page contains a Form intended to display details for EmpId = 105 along with (3)
    5. The JSFF referenced in (2) contains another form intended to display details for EmpId = 185
    Because there are 2 different view instances of the same view object EmployeeView in the RootAM > NestedAM:
    Expected Behaviour
    1. JSPX page should display details for EmpId = 105
    2. JSFF Region should display details for EmpId = 185
    Noticed Behaviour (Buggy)
    Both display details for EmpId = 100 (First Record in the table)
    What are we missing here?
    Sample demo app. (HR schema) demonstrating the issue
    http://www.box.com/s/lr3ervdhnyg696tyoz6d

    I can see this working in 11.1.2.1.0; however, the same fails for the version you mentioned.
    I have logged bug #13797593 to track this case. Please follow it up through Oracle Support.
