Instance mapping

Hello all,
As we have just started understanding and setting up iSetup, I have a very basic question concerning the instances.
We have 2 instances: one (A) which will be used as the source and one (B) as the target instance. For now we don't want to have a central instance. We want to copy setup A --> B.
How should we create the instance mapping?
A: instance mapping with .dbc file of B
B: instance mapping with .dbc file of A
Will this be sufficient?
Any help would be great!
Thanks.

There is no need to create a dbc file (Instance Mapping) on instance B. It is sufficient to copy A's dbc file onto B.
Setup would look something like this:
A => Create instance mapping for both A & B. $FND_SECURE should have A.dbc & B.dbc
B => No need to create instance mapping. $FND_SECURE should have A.dbc & B.dbc.
Thanks
Mugunthan
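A quick way to sanity-check the layout described above is to verify that $FND_SECURE on each machine actually contains the expected dbc files. A minimal stdlib-only sketch; the directory path and instance names are placeholders for your own environment:

```python
import os

def missing_dbc_files(fnd_secure_dir, instance_names):
    """Return the expected .dbc files that are absent from $FND_SECURE.

    For a simple A -> B copy, both A.dbc and B.dbc should be present
    on A, and both copied onto B as well.
    """
    expected = {name + ".dbc" for name in instance_names}
    present = set(os.listdir(fnd_secure_dir))
    return sorted(expected - present)

# e.g. missing_dbc_files("/u01/app/fnd/secure", ["A", "B"])
```

An empty result means both files are in place on that node.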

Similar Messages

  • ISetup Instance Mapping issue

    Release 12.1.3: Env A is our central environment, Env B is our SOURCE env and Env C is our TARGET env. On env A, under $FND_TOP/secure, I have A.dbc, B.dbc and C.dbc. I have logged on to env A using the iSetup responsibility, and I can only create an Instance Mapping row for env A. When I try to create an Instance Mapping for B or C, I get an error message saying 'File B.dbc or C.dbc does not exist. Please contact the Web Master or System Administrator'. Our sqlnet.ora also has envs B and C in tcp.invited_nodes. Can you share what is not being done correctly?
    Thanks
    Sunil

    I have the same issue, have you solved it?

  • Oracle Isetup Instance Mapping Issue

    Hello -
    We installed the iSetup patches on our production database.
    We are trying to configure an iSetup instance mapping from our Test instance to our PROD instance.
    We are getting the following error:
    Message not found. Application: AZ, Message Name: oracle.apps.fnd.common.PoolException. Exception creating new Poolable Object.
    Appreciate your suggestions on this.
    Regards
    satish

    Hi,
    Post your question in [Technology - LCM: Oracle iSetup |http://forums.oracle.com/forums/forum.jspa?forumID=503] forum, you would probably get a better response.
    Regards,
    Hussein

  • Hyper-v cpu monitoring - instance mapping

    Hi,
    I am trying to monitor the guest processors on a Hyper-V cluster using the Hyper-V Hypervisor Logical Processor %Guest Run Time and %Total Run Time counters.
    The instances have quite confusing names: "Hv LP xx", where xx is the instance number. How can I find out which logical processor maps to which VM?
    Thanks in advance,
    Tim

    Hi Tim,
    "To measure total physical processor utilization of the host operating system and all guest operating systems, use the “\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time” performance monitor counter."
    Based on my understanding, you cannot figure out the CPU usage of a single VM from these counters, because a VM's work is not stably bound to one or more logical processors; it is allocated randomly.
    If you want per-VM CPU usage, please use the performance counter "\Hyper-V Hypervisor Virtual Processor\% Total Run Time" and then select the instance.
    Best Regards
    Elton Ji

  • ISetup - Instance Mapping

    Hi,
    I have installed Oracle EBS R12.1.1 Vision on Windows Server 2003 and Oracle EBS R12.1.1 Fresh on Windows Server 2003.
    Vision is my Source instance and Fresh is my Target instance.
    On the Vision instance I have a DBC file VIS.dbc with Oracle SID "VIS", and on the Fresh instance I have a DBC file PROD.dbc with Oracle SID "PROD".
    Now, when I start Instance Mapping:
    Name: Source
    DBC Filename: VIS.dbc
    Responsibility: iSetup
    it raises an error: "Message not found. Application: AZ, Message Name: oracle.apps.fnd.common.PoolException: Exception creating new Poolable object.."
    I have recreated the dbc files on both instances.
    Regards,
    Waqas Hassan

    Hi,
    I have two cases I want to discuss with this forum's users; please guide me on which case is best.
    Case 1:
    I have two instances, Central and Source, both on Windows 2003 platforms. I sent the request from Central to Source and it was submitted successfully to Source, but after some time it raised the error: *"Extract failed: java.lang.Exception: There are no APIs in the selection set to extract."* When I run the same request directly on the Central instance it completes successfully, but when I send it from Central to Source it fails with this error. What does "There are no APIs in the selection set to extract" mean?
    Application Implementation: Version : 12.0.0
    Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
    AZR12EXTRACTOR module: iSetup R12 Extractor
    Current system time is 03-JAN-2012 09:18:14
    Working Directory= c:\oracle\VIS\inst\apps\VIS_tarebsr12\logs\appl\conc\log\
    CP Request Directory=> c:\oracle\VIS\inst\apps\VIS_tarebsr12\logs\appl\conc\log\lreq394283
    Concurrent Request Parameters
    SELECTION_SET_NAME=Complete SS GF TARGET
    JOB_NAME=Complete EXT SS GF TARGET
    USER_NAME=ISETUP
    DBC_FILE_NAME=VIS.dbc
    IS_REMOTE=Y
    REQUEST_TYPE=E
    Environment Parameters
    TWO_TASK=VIS
    APPLCSF=c:\oracle\VIS\inst\apps\VIS_tarebsr12\logs\appl\conc
    APPLLOG=log
    FNDTOP=c:\oracle\VIS\apps\apps_st\appl\fnd\12.0.0
    FNDSECURETOP=c:\oracle\VIS\inst\apps\VIS_tarebsr12\appl\fnd\12.0.0\secure
    Extract failed:
    java.lang.Exception: There are no APIs in the selection set to extract.
    at oracle.apps.az.r12.api.APICollections.createAPIsFromSelectionSet(APICollections.java:169)
    at oracle.apps.az.r12.extractor.cpserver.APIExtractor.callAPIs(APIExtractor.java:125)
    at oracle.apps.az.r12.extractor.cpserver.ExtractorContextImpl.export(ExtractorContextImpl.java:61)
    at oracle.apps.az.r12.extractor.cpserver.ExtractorCp.runProgram(ExtractorCp.java:62)
    at oracle.apps.fnd.cp.request.Run.main(Run.java:157)
    Start of log messages from FND_FILE
    End of log messages from FND_FILE
    Executing request completion options...
    Finished executing request completion options.
    Concurrent request completed
    Current system time is 03-JAN-2012 09:18:26
    Case 2:
    In this case, I created a Selection Set named *"Complete SS GF"* on the Central instance and the same Selection Set on the Source instance. When I send the request it is submitted successfully and it runs *10 further requests* on its own behalf, so there are 11 requests in total. The 10 child requests complete successfully, but the master request sent from the Central instance fails *(Phase: Completed, Status: Error)* and no link is generated on the Central instance. When I click the "View Log..." button for the errored master request, an interesting log is shown; see below:
    Application Implementation: Version : 12.0.0
    Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
    AZR12EXTRACTOR module: iSetup R12 Extractor
    Current system time is 03-JAN-2012 09:40:33
    Working Directory= c:\oracle\VIS\inst\apps\VIS_tarebsr12\logs\appl\conc\log\
    CP Request Directory=> c:\oracle\VIS\inst\apps\VIS_tarebsr12\logs\appl\conc\log\lreq395200
    Concurrent Request Parameters
    SELECTION_SET_NAME=Complete SS GF TAR
    JOB_NAME=Complete EXT SS GF TAR
    USER_NAME=ISETUP
    DBC_FILE_NAME=VIS.dbc
    IS_REMOTE=Y
    REQUEST_TYPE=E
    Environment Parameters
    TWO_TASK=VIS
    APPLCSF=c:\oracle\VIS\inst\apps\VIS_tarebsr12\logs\appl\conc
    APPLLOG=log
    FNDTOP=c:\oracle\VIS\apps\apps_st\appl\fnd\12.0.0
    FNDSECURETOP=c:\oracle\VIS\inst\apps\VIS_tarebsr12\appl\fnd\12.0.0\secure
    Export Apis...
    1)FND_User
    2)FND_RequestGroup
    3)FND_Menu
    4)FND_KeyFlexfields
    5)FND_DescFlexfields
    6)FND_ConcPrograms
    7)FND_DocumentSequenceAssignments
    8)FND_DocumentSequenceCategories
    9)FND_DocumentSequences
    10)FND_Lookups
    11)FND_Printers
    12)FND_Responsibilities
    13)FND_ValueSetValues
    14)AZ_PreValidator
    15)AZ_SelectionSets
    Entity Name: Selection Sets
    Type: BC4J
    Executable Path: oracle.apps.az.migrator.server.SelectionSetsAM
    Ignore Warnings and Continue: No
    Update Existing Records: Yes
    Time Taken(seconds): 0.0
    Logging parameter criteria xml
    <?xml version="1.0"?><parameters><conjunction>AND</conjunction><language></language><mode type="Import"> </mode> <mode type="Export"><param type="NameValuePair" seq="1" display="DisplayEnabled" editable="Editable"><operator>LIKE</operator><separator> </separator><name>SELECTION_SET_CODE</name><value>1041</value><msgcode></msgcode><appcode></appcode><filtercode>SELECTION_SET_CODE</filtercode><datatype></datatype><sqlforlov></sqlforlov><sqlforlovcol></sqlforlovcol></param></mode> </parameters>
    Status: SUCCESS
    There is no error; it completed successfully, yet (Phase: Completed, Status: Error) is shown in red. What does that mean?
    Please guide me on which case is best: should I create the same Selection Sets on all the instances, or is Case 1 the better approach?
    Regards,
    Waqas Hassan
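    For what it's worth, the "parameter criteria xml" in the log above can be read programmatically to confirm which filters the extractor actually used. A small stdlib-only sketch; the embedded XML is a trimmed copy of the logged parameters:

```python
import xml.etree.ElementTree as ET

CRITERIA = ('<?xml version="1.0"?><parameters><conjunction>AND</conjunction>'
            '<mode type="Export"><param type="NameValuePair" seq="1">'
            '<operator>LIKE</operator><name>SELECTION_SET_CODE</name>'
            '<value>1041</value></param></mode></parameters>')

def export_filters(criteria_xml):
    """Map each Export-mode filter name to its value."""
    root = ET.fromstring(criteria_xml)
    return {p.findtext("name"): p.findtext("value")
            for m in root.findall("mode") if m.get("type") == "Export"
            for p in m.findall("param")}
```

    Here `export_filters(CRITERIA)` shows the extract was filtered on SELECTION_SET_CODE 1041, which can be compared against the selection set you expected.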

  • Multiple ASM Instances on single node

    Hi All,
    After going through some threads, it seems to me that multiple ASM instances on a node are neither supported nor recommended by Oracle, but I couldn't find any Oracle document or support matrix mentioning this. Can anyone give me a pointer to this? Please correct me if I am wrong and multiple ASM instances are fully supported by Oracle.
    Thanks,

    Multiple ASM instances on a single node are supported but not recommended, due to several issues you could encounter with that kind of configuration.
    Possible interaction between those instances (identifying each disk area's usage and permissions, database instance to ASM instance mapping, and so on) could result in unwanted behaviour, as ASM is in some ways just Oracle's representation of an LVM.
    The intention is that any kind of distinction/separation of Oracle-related data under ASM should be done through disk groups.
    So it seems better to apply that kind of logic rather than building suspiciously magical and rare configurations that could bring you similarly strange and unexpected problems.

  • Error : DB Mapping does not exist for the Host in iSetup

    Hi Experts,
    We configured iSetup for migrating data from source to target. The instance mapping was completed successfully, but extracting data from the source shows the error below:
    Error: DB Mapping does not exist for the Host: <host name> with Two Task: TEST
    EBS Version : 12.1.1 with 11.1.0.7 DB
    OS : RHEL 5.3 x86_64
    Any ideas would be much appreciated.
    Thanks in advance,
    Regards,
    900076.

    900076 wrote:
    Hi Experts,
    We configured iSetup for migrating data from source to target, while configuring instance mapping has been completed successfully but while extracting data from source to target is showing below error :
    Error : DB Mapping does not exist for the Host:<host name > with Two Task: TEST
    Is this the complete/exact error message?
    EBS Version : 12.1.1 with 11.1.0.7 DB
    OS : RHEL 5.3 x86_64
    Any idea much Appreciated .
    http://docs.oracle.com/cd/E18727_01/doc.121/e12899/toc.htm
    http://docs.oracle.com/cd/E18727_01/doc.121/e15842/toc.htm
    Thanks,
    Hussein

  • Message mapping strange behaviour

    Hi, all.
    I have designed simple message mapping:
    Source Message
    mt_Object
    -ObjectVersion [1..1]
    --TimeStamp [string][1..1][lenght=13]
    --UID [string][0..1]
    -Name [string][0..1]
    Target Message
    Z_TEST (imported RFC function description)
    -TIMESTAMP [string][0..1][length=13]
    -NAME [string][0..1]
    So mapping looks like that:
    mt_Object/ObjectVersion/TimeStamp -> Z_TEST/TIMESTAMP
    mt_Object/Name -> Z_TEST/NAME
    When I test this mapping in the Integration Repository (IR) manually (entering values in a test XML instance), the mapping works properly.
    But when I load the XML instance from an XML file, after mapping, the target tag Z_TEST/TIMESTAMP is not produced, although the tag Z_TEST/NAME is produced properly.
    Where am I going wrong?
    Best regards.

    Maxim,
    You might also want to check what "Context" Z_TEST/TIMESTAMP has been assigned to. The error could be because when the message is processed in the queue, it is not picking the right context object.
    regards,
    SK

  • Data Maps & Custom Collectors

    Hi Guys,
    I am fairly new to the development side of collectors for Sentinel,
    and have what may be a total beginner's question, but I've tried the docs
    and feel I am getting nowhere with an answer, so hopefully you can help
    out.
    So I am constructing an event from within the collector, I have read
    that its best practice to try to use maps where possible so I am trying
    to do this. If I hard code for example e.TargetUserID = <the parsed
    string that I have>, then that works, but I want to try to make use of
    the Rec2Evt.map as most of the data that I am populating at this point
    is listed in here.
    What I have done is to add the following to initialize():
    Code:
    Collector.prototype.initialize = function() {
        this.MAPS.Rec2Evt = new DataMap(this.CONFIG.collDir + "/Rec2Evt.map");
    };
    Then within parse() I have the following:
    Code:
    rec.testIP = "123.4.5.6";
    rec.convert(this, instance.MAPS.Rec2Evt);
    instance.SEND_EVENT = true;
    return true;
    Within the Rec2Evt.map file it has the default list of Sentinel Event
    Fields and I have appended a record for TargetIP: TargetIP,testIP
    Have I missed any obvious steps? What I was expecting to happen was that
    when the event is received and parsed in Sentinel, the TargetIP field
    should have the value 123.4.5.6. When I look in either the ESM or the
    Sentinel 7 web UI I don't see this field getting set; other fields which I
    set manually are being set correctly.
    This is the first time that I have tried to use the data maps so I
    assume I am doing something wrong and any pointers you guys have would
    be great,
    Thanks
    alanforrest
    alanforrest's Profile: http://forums.novell.com/member.php?userid=90508
    View this thread: http://forums.novell.com/showthread.php?t=453791

    Hi Alan,
    I'm not quite sure what you mean by "3 or 4 attributes", but here are
    some guidelines:
    Part of the Collector development process is to make a best-effort
    attempt to parse out semantically distinct fields from the input and map
    them to the Sentinel schema in a normalized way. Sometimes this is easy
    - there's an IP address that's the target of a connection, extract it
    and map it to TargetIP (TargetIP should already exist in Rec2Evt.map and
    you just need to list the 'rec' attribute into which you parsed that
    target IP). Sometimes this requires a little more work, for example
    timestamps and whatnot that need normalization. Sometimes this is really
    tricky, and you can't find a nice match to a Sentinel schema field.
    Let's break this down into the following categories:
    1) Simple 1:1 matches, like the IP address example above
    2) 1:N matches, where you need to subparse a bit. An example might be a
    path like C:\WINDOWS\system32\etc\hosts; this would map to
    TargetDataName = 'hosts', TargetDataContainer = '/windows/system32',
    TargetDataNamespace = 'c' (note that since Windows is case-insensitive,
    everything has been lowercased and the path separators normalized - we
    provide some utility flags and methods for this in the latest SDK which
    will be out soon).
    3) Mapped matches: in this scenario, you have a field maybe that
    indicates severity using some arbitrary proprietary scale, and you need
    to map this to Sentinel's 0-5 Severity. In this case it's good to use a
    KeyMap, put all your possible input values in the LHC, and then map them
    to Sentinel Severities in the RHC. Then you can use lookup() to look up
    your input and map it to the correct output, put that output in a 'rec'
    attribute, and then list that attribute in Rec2Evt.map (in this example
    on the RHS after 'Severity').
    4) No schema match, doesn't need to be correlated: An example here
    might be "session type", which is something that Windows provides but
    that we don't (yet) have a dedicated schema field for (although we are
    considering it). Let's say you want to record that information in the
    event, but you don't need to correlate on that value. In that case you
    can use the 'add2EI()' method to add a JSON NVP to the
    ExtendedInformation field, something like 'LoginType: interactive'.
    5) No schema match, need to correlate: This is the trickiest case,
    where you can't find a place to put your data but you need it in a
    separate field so you can correlate on it. For this scenario you can use
    one of the many unallocated ReservedVarXX fields. What you need to do is
    pick an unused field, add it to Rec2Evt.map, and map your data to it.
    The trick is that you can't guarantee that some other Collector is not
    using that field for a different purpose, so you have to be a bit more
    careful when writing correlation rules etc to filter for your data
    only.
    In other words, the only attributes you should ever be adding to
    Rec2Evt.map are ReservedVar fields. BTW, the event schema is fully
    documented here:'Sentinel Event Schema'
    (http://www.novell.com/developer/plug...nt_schema.html)
    but note that not all fields are present in all platforms.
    DCorlette
    DCorlette's Profile: http://forums.novell.com/member.php?userid=4437
    View this thread: http://forums.novell.com/showthread.php?t=453791
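    Category 2 above (subparsing a path into TargetDataNamespace / TargetDataContainer / TargetDataName) can be sketched roughly as follows. This is an illustrative stand-in for the SDK utility flags and methods the post mentions, not the SDK itself:

```python
import ntpath

def split_target_data(win_path):
    """Split a Windows path into (namespace, container, name): the drive
    letter, the lowercased parent path with forward slashes, and the
    file name, matching the normalization described above."""
    drive, rest = ntpath.splitdrive(win_path)
    rest = rest.replace("\\", "/").lower()
    container, _, name = rest.rpartition("/")
    return drive.rstrip(":").lower(), container or "/", name
```

    For C:\WINDOWS\system32\etc\hosts this yields namespace "c", container "/windows/system32/etc" and name "hosts"; each piece would then go into its own 'rec' attribute listed in Rec2Evt.map.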

  • Impact in adding VLANs to MST instances dynamically

    I am wondering if it's viable to add and remove VLANs dynamically in MST instances. If so, I can balance traffic nicely. However, I read on the CCO site that each time a VLAN is added the MST database is reinitialized; will that stop traffic for a little while, for example?
    Thanks !
    Cleber

    Hi Cleber,
    MST needs to ensure that there is a consistent vlan to instance mapping, else there would be a possibility of bridging loops (see example in the end). That's why the concept of region was introduced: a region is basically a group of switches with same vlan to instance mapping.
    As a result, if you change the vlan to instance mapping on one switch, MST needs to reconverge, because this switch has just moved to a different region. The simple solution chosen to implement this was to restart the MST process from scratch as soon as you change the configuration. There might have been more complex optimizations, but considering that MST needs to reconverge anyway, they would not have been very useful. This is basically an issue with MST itself, not with the implementation. The best workaround I know of is to pre-provision vlans and instances in the MST configuration so that they are available when you need them.
    Regards,
    Francois
    Loop example with an inconsistent vlan to instance mapping:
    Suppose you have two bridges A and B running two instances 1 and 2. A has two ports p1 and p2 connecting to two ports p1 and p2 on B. Instance 1 is blocking p1 on A; instance 2 is blocking p2 on B. Now suppose that vlan X is mapped to instance 2 on A but mapped to instance 1 on B -> vlan X has a permanent loop between A and B. MST prevents this issue by putting A and B in different regions and forcing all the vlans between regions to be handled by a single instance (the CIST).
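    The consistency requirement Francois describes can be expressed in a few lines: switches whose vlan-to-instance tables disagree on any shared vlan cannot be in the same region. A toy sketch of that check:

```python
def inconsistent_vlans(map_a, map_b):
    """Vlans mapped to different MST instances on two switches.

    Any non-empty result means the switches belong to different regions,
    and inter-region links fall back to a single instance (the CIST).
    """
    return sorted(v for v in map_a.keys() & map_b.keys()
                  if map_a[v] != map_b[v])

# The loop example above: vlan X on instance 2 on A but instance 1 on B.
bridge_a = {"X": 2, "Y": 1}
bridge_b = {"X": 1, "Y": 1}
```

    `inconsistent_vlans(bridge_a, bridge_b)` flags vlan X, which is exactly the case MST resolves by splitting A and B into separate regions.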

  • List of Manual Setup required for iSetup to work

    Hi All,
    This is Mugunthan from iSetup development. Based on my interactions with customers and Oracle functional experts, I have documented the list of manual setups that are required for smooth loading of selection sets. I am sharing the same. Please let me know if anyone has had to enter some other manual setup while using iSetup.
    Understanding iSetup
    iSetup is a tool to migrate and report on your configuration data. Various engineering teams from Oracle develop the APIs/programs which migrate the data across EBS instances, so all your data is validated for all business cases and data consistency is guaranteed. Using this tool requires a good amount of functional setup knowledge and a bit of technical knowledge.
    Prerequisite setup for Instance Mapping to work
    ·     ATG patch set level should be same across all EBS instances.
    ·     Copy DBC files of each other EBS instances participating in migration under $FND_SECURE directory (refer note below for details).
    ·     Edit sqlnet.ora to allow connections between DB instances (tcp.invited_nodes=(<source>,<central>)).
    ·     Make sure that same user name with iSetup responsibility exists in all EBS instances participating in migration.
    Note: iSetup is capable of connecting to multiple EBS instances. To do so, it uses the dbc file information available under the $FND_SECURE directory. Let us consider three instances A, B & C, where A is the central instance, B is the source instance and C is the target instance. After copying the dbc files to all nodes, the $FND_SECURE directory would look like this on each machine:
    A => A.dbc, B.dbc, C.dbc
    B => A.dbc, B.dbc
    C => A.dbc, C.dbc
    Prerequisite for registering Interface and creating Custom Selection Set
    The iSetup super role is mandatory to register and create a custom selection set. It is not sufficient to register the API on the central/source instance alone; you must register the API on all instances participating in migration/reporting.
    Understanding how to access/share extracts across instances
    Sharing iSetup artifacts
    ·     Only the exact same user can access extracts, transforms, or reports across different instances.
    ·     The “Download” capability offers a way to share extracts, transforms, and loads.
    Implications for Extract/Load Management
    ·     Option 1: Same owner across all instances
    ·     Option 2: Same owner in Dev, Test, UAT, etc – but not Production
    o     Extract/Load operations in non-Production instances
    o     Once thoroughly tested and ready to load into Production, download to desktop and upload into Production
    ·     Option 3: Download and upload into each instance
    Security Considerations
    ·     iSetup does not use SSH to connect between instances. It uses the Concurrent Manager framework to launch concurrent programs on the source and target instances.
    ·     iSetup does not write passwords to any files or tables.
    ·     It uses JDBC connectivity obtained through standard AOL security layer
    Common Incorrect Setups
    ·     Failure to complete/verify all of the steps in “Mapping instances”
    ·     DBC file should be copied again if EBS instance has been refreshed or autoconfig is run.
    ·     Custom interfaces should be registered in all EBS instances. Registering it on Central/Source is not sufficient.
    ·     The Standard Concurrent Manager should be up to pick up iSetup concurrent requests.
    ·     iSetup financial and SCM modules are supported from 12.0.4 onwards.
    ·     iSetup is not certified on RAC. However, you may still work with iSetup if you could copy the DBC file on all nodes with the same name as it had been registered through Instance Mapping screen.
    Installed Languages
    iSetup cannot Load or Report if the number and type of installed languages or the DB charset differ between the Central, Source and Target instances. If that is your case, there is a workaround. Download the extract zip file to your desktop and unzip it. Edit AZ_Prevalidator_1.xml to match your target instance's language and DB charset. Zip it back up and upload it to the iSetup repository. You should then be able to load to the target instance. You must ensure that this does not corrupt data in the DB: this is considered a customization, and any data issue coming out of this modification is not supported.
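    The workaround above amounts to a text substitution inside AZ_Prevalidator_1.xml. A rough sketch of the edit step; the tag names passed in are placeholders, since the actual element names depend on your own extract (and, as noted, this modification is an unsupported customization):

```python
import re

def retarget(xml_text, replacements):
    """Replace the text of simple <Tag>value</Tag> elements.

    `replacements` maps tag name -> new value, e.g. the language and
    DB charset elements found in your own AZ_Prevalidator_1.xml.
    """
    for tag, value in replacements.items():
        xml_text = re.sub(r"(<{0}>).*?(</{0}>)".format(tag),
                          r"\g<1>{0}\g<2>".format(value), xml_text)
    return xml_text
```

    After editing, re-zip the extract and upload it; always report on the upload before loading, per the consistency advice above.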
    Custom Applications
    Application data is a prerequisite for most Application Object Library setups such as Menus, Responsibilities, and Concurrent Programs. iSetup does not currently migrate custom applications. So, if you have created any custom application on the source instance, please create it manually on the target instance before moving Application Object Library (AOL) data.
    General Foundation Selection Set
    Setup objects in the General Foundation selection set support filtering, i.e. the ability to extract specific setups. Since most AOL setup data such as Menus, Responsibilities and Request Groups is shipped by Oracle itself, it does not make sense to migrate all of it, since it is already available on the target instance. Hence, it is strongly recommended to extract only those setup objects which you have edited or added. This improves performance. iSetup uses FNDLOAD (the seed data loader) to migrate most AOL setups. The default behavior of FNDLOAD is given below.
    Case 1 – Shipped by Oracle (Seed Data)
    FNDLOAD checks the last_update_date and last_updated_by columns to decide whether to update a record. If a record is shipped by Oracle, its default owner is Oracle, and identical records are skipped, so the last_updated_by and last_update_date columns are not changed.
    Case 2 – Shipped by Oracle and customized by you
    If a record was customized in the source instance, it is updated based on the last_update_date column. If the last_update_date in the target is more recent, FNDLOAD does not update the record, so the last_updated_by column is not changed. Otherwise, it updates the record with the user who customized it in the source instance.
    Case 3 – Created and maintained by customers
    If a record was newly added or edited by you in the source instance, it is updated based on the last_update_date column. If the last_update_date of the record in the target is more recent, FNDLOAD does not update the record, so the last_updated_by column is not changed. Otherwise, it updates the record with the user who customized it in the source instance.
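    The three cases can be condensed into one decision rule. A simplified sketch of the behavior described above (records as dicts, dates as comparable strings, and "ORACLE" standing in for seed-data ownership; the real FNDLOAD logic is more involved):

```python
def fndload_updates(source, target):
    """True if FNDLOAD would overwrite the target record.

    Identical Oracle-owned seed rows are skipped (Case 1); otherwise the
    source row wins only if it is newer than the target row (Cases 2, 3).
    """
    if target["last_updated_by"] == "ORACLE" and source == target:
        return False
    return source["last_update_date"] > target["last_update_date"]
```

    This is why a record edited more recently on the target survives a load from the source.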
    Profiles
    HR: Business Group => Set the name of the Business Group for which you would like to extract data from source instance. After loading Business Group onto the target instance, make sure that this profile option is set appropriately.
    HR: Security Profile => Set the name of the Business Group for which you would like to extract data from source instance. After loading Business Group onto the target instance, make sure that this profile option is set appropriately.
    MO: Operating Unit => Set the Operating Unit name for which you would like to extract data from source instance. After loading Operating Unit onto the target instance, make sure that this profile option is set if required.
    Navigation path to do the above setup:
    System Administrator -> Profile -> System.
    Query for the above profiles and set the values accordingly.
    Descriptive & Key Flex Fields
    You must compile and freeze the flexfield values before extracting with iSetup.
    Otherwise, extraction would result in partial migration of data. Please verify that all the data has been extracted by reporting on your extract before loading, to ensure data consistency.
    You can load KFF/DFF data to the target instance even when the structures in the source and target instances differ, but only in the cases below.
    Case 1:
    Source => Loc1 (Mandate), Loc2 (Mandate), Loc3, and Loc4
    Target=> Loc1, Loc2, Loc3 (Mandate), Loc4, Loc5 and Loc6
    If you provide values for Loc1 (Mandate), Loc2 (Mandate), Loc3 and Loc4, the locations will be loaded to the target instance without any issue. If you do not provide a value for Loc3, the API will fail, as Loc3 is a mandatory field on the target.
    Case 2:
    Source => Loc1 (Mandate), Loc2 (Mandate), Loc3, and Loc4
    Target=> Loc1 (Mandate), Loc2
    If you provide values for Loc1 (Mandate), Loc2 (Mandate), Loc3 and Loc4 and load the data to the target instance, the API will fail, as Loc3 and Loc4 do not exist on the target instance.
    It is always recommended that the KFF/DFF structure be the same in both the source and target instances.
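    Both failure cases can be checked mechanically before a load. A hypothetical sketch (the segment names and mandatory flags mirror the Loc1..Loc6 example above):

```python
def load_errors(provided, target_structure):
    """Compare provided segment values against the target KFF/DFF structure.

    `provided` is the set of segments that carry values in the extract;
    `target_structure` maps target segment name -> is_mandatory.
    """
    errors = ["missing mandatory segment: " + s
              for s, mandatory in target_structure.items()
              if mandatory and s not in provided]
    errors += ["segment not on target: " + s
               for s in sorted(provided) if s not in target_structure]
    return errors
```

    Case 1 produces no errors as long as every mandatory target segment has a value; Case 2 fails because Loc3 and Loc4 are absent from the target structure.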
    Concurrent Programs and Request Groups
    The Concurrent Program API migrates the program definition only (definition + parameters + executable). It does not migrate the physical executable files under APPL_TOP; please use a custom solution to migrate executable files. Load Concurrent Programs prior to loading Request Groups. Otherwise, the associated concurrent program metadata will not be moved, even though the Request Group extract contains the associated Concurrent Program definition.
    Locations - Geographies
    If you have any custom geographies, iSetup does not have an API to migrate that setup. Enter them manually before loading the Locations API.
    Currency Types
    iSetup does not have an API to migrate currency types. Enter them manually on the target instance after loading the Currency API.
    GL Fiscal Super user--> setup--> Currencies --> rates -- > types
    Associating Employee Details with a User
    The extract process does not capture the employee details associated with users. So, after loading the employee data successfully on the target instance, you have to configure the association again on the target instance.
    Accounting Setup
    Make sure that all Accounting Setups that you wish to migrate are in status "Complete". In-progress or incomplete Accounting Setups will not be migrated successfully.
    Note: currently iSetup does not migrate Sub-Ledger Accounting (SLA) methods. Oracle ships some default SLA methods, such as Standard Accrual and Standard Cash, and you may make use of these two. If you want to use your own SLA method, you need to create it manually on the target instance, because iSetup does not have an API to migrate SLA. If a Primary Ledger is associated with Secondary Ledgers using a different Chart of Accounts, then the mapping rules should be defined in the target instance manually. The mapping rule name should match the XML tag "SlCoaMappingName". After that you will be able to load the Accounting Setup to the target instance.
    Organization API - Product Foundation Selection Set
    All organizations defined in the HR module are extracted by this API, except Inventory Organizations and Business Groups. To migrate an Inventory Organization, use the Inventory Organization API under the Discrete Mfg. and Distribution Selection Set. To extract a Business Group, use the Business Group API.
    Inventory Organization API - Discrete Mfg & Distribution Selection Set
    The Inventory Organization API extracts Inventory Organization information only. Use the Inventory Parameters API to move parameters such as accounting information. The Inventory Organization API supports update, which means you can update existing header-level attributes of an Inventory Organization on the target instance. The Inventory Parameters API does not support update; to update inventory parameters, use the Inventory Parameters Update API.
    We have a known issue where the Inventory Organization API migrates non-process-enabled organizations only. If your inventory organization is process enabled, you can still migrate it with a simple workaround. Download the extract zip file to your desktop and unzip it. Navigate to the Organization XML and edit the XML tag <ProcessEnabledFlag>Y</ProcessEnabledFlag> to <ProcessEnabledFlag>N</ProcessEnabledFlag>. Zip the extract back up and upload it to the target instance. You can load the extract now. After successful completion of the load, you can manually enable the flag through the Form UI. We are working on this issue and will update you once the patch is released to Metalink.
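    The tag edit in that workaround is a one-line substitution. A minimal sketch of the step applied to the Organization XML text (remember this bypasses a product limitation, so verify the load and re-enable the flag afterwards):

```python
def disable_process_flag(org_xml):
    """Flip <ProcessEnabledFlag> from Y to N so the extract will load."""
    return org_xml.replace("<ProcessEnabledFlag>Y</ProcessEnabledFlag>",
                           "<ProcessEnabledFlag>N</ProcessEnabledFlag>")
```
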
    Freight Carriers API - Product Foundation Selection Set
    The Freight Carriers API in the Product Foundation Selection Set requires Inventory Organization and Organization Parameters as prerequisite setup. These two APIs are available under the Discrete Mfg. and Distribution Selection Set. The Freight Carriers API is also available under the Discrete Mfg. and Distribution Selection Set under the name Carriers, Methods, Carrier-ModeServ, Carrier-Org, so use the Discrete Mfg. selection set to load Freight Carriers. In the next rollup release, the Freight Carriers API will be removed from the Product Foundation Selection Set.
    Organization Structure Selection Set
    It is highly recommended to set a filter and extract and load data for one Business Group at a time. For example, setup objects such as Locations, Legal Entities, Operating Units, Organizations, and Organization Structure Versions support filtering by Business Group. So set the filter for a specific Business Group and then extract and load the data to the target instance.
    List of mandatory iSetup Fwk patches*
    8352532:R12.AZ.A - 1OFF:12.0.6: Ignore invalid Java identifier or Unicode identifier characters from the extracted data
    8424285:R12.AZ.A - 1OFF:12.0.6:Framework Support to validate records from details to master during load
    7608712:R12.AZ.A - 1OFF:12.0.4:ISETUP DOES NOT MIGRATE SYSTEM PROFILE VALUES
    List of mandatory API/functional patches*
    8441573:R12.FND.A - 1OFF:12.0.4: FNDLOAD DOWNLOAD COMMAND IS INSERTING EXTRA SPACE AFTER A NEWLINE CHARACTER
    7413966:R12.PER.A - MIGRATION ISSUES
    8445446:R12.GL.A - Consolidated Patch for iSetup Fixes
    7502698:R12.GL.A - Not able to Load Accounting Setup API Data to target instance.
    Appendix
    How to read logs
    - Logs are very important for diagnosing and troubleshooting iSetup issues. Logs contain both functional and technical errors.
    - To find the log, navigate to the View Detail screens of Extracts/Transforms/Loads/Standard/Comparison Reports and click the View Log button.
    - Generic Loader (FNDLOAD or Seed Data Loader) logs are not printed as part of the main log. To view the actual log, take the request_id specified in the concurrent log and search for it in the Forms Request Search window on the instance where the request was launched.
    - Functional errors are mainly due to:
      - Missing prerequisite data – you did not load one or more prerequisite APIs before loading the current API. For example, trying to load "Accounting Setup" without loading "Chart of Accounts" results in this kind of error.
      - Business validation failure – the setup is incorrect per a business rule. For example, a start date cannot be later than its end date.
      - The API does not support updating records – if there is a matching record in the target instance and the API does not support update, you get this kind of error.
      - Update Records was not selected when launching the load – if there is a matching record in the target instance and you did not select Update Records, you get this kind of error.
    Example – business validation failure
      - VONAME = Branches PLSQL; KEY = BANKNAME = 'AIBC'
      - BRANCHNAME = 'AIBC'
      - EXCEPTION = Please provide a unique combination of bank number, bank branch number, and country combination. The 020, 26042, KA combination already exists.
    Example – business validation failure
      - Tokens: VONAME = Banks PLSQL
      - BANKNAME = 'OLD_ROYAL BANK OF MY INDIA'
      - EXCEPTION = End date cannot be earlier than the start date
    Example – missing prerequisite data
      - VONAME = Operating Unit; KEY = Name = 'CAN OU'
      - Group Name = 'Setup Business Group'
      - EXCEPTION = Message not found. Application: PER, Message Name: HR_ORG_SOB_NOT_FOUND (Set of books not found for 'Setup Business Group')
    Example – technical or fwk error
      - OAException: System Error: Procedure at Step 40
      - Cause: The procedure has created an error at Step 40.
      - Action: Contact your system administrator quoting the procedure and Step 40.
    Example – technical or fwk error
      - Number of installed languages on source and target does not match.
    Edited by: Mugunthan on Apr 24, 2009 2:45 PM

    Mugunthan,
    Yes, we have applied 11i.AZ.H.2. We are still getting several errors that we are trying to resolve.
    One of them is:
    ===========>>>
    Uploading snapshot to central instance failed, with 3 different messages
    Error: An invalid status '-1' was passed to fnd_concurrent.set_completion_status. The valid statuses are: 'NORMAL', 'WARNING', 'ERROR'FND     at oracle.apps.az.r12.util.XmlTransmorpher.<init>(XmlTransmorpher.java:301)
         at oracle.apps.az.r12.extractor.cpserver.APIExtractor.insertGenericSelectionSet(APIExtractor.java:231)
    Please assist.
    Regards,
    Girish

  • How do I hit the PrtSc Key on my mac book pro 2012?

    Dear Support Community,
    I'm using VNC onto a Windows PC that has an application requiring me to press PrtSc to start a recorder, but I can't find a combination of keys for PrtSc. Can you help, please?
    I'm using a 2012 MacBook Pro and RealVNC onto a Windows 2008 server running a recording package. To start the recorder I need to hit PrtSc. How do I do this on my Mac so that it sends a PrtSc to the remote PC?
    With thanks for an answer.

    Print screen on Macs is Command-Shift-3; I don't think that will work, but you can try it.
    Isn't there a way to invoke key commands in Windows via the command line?
    Or map the PrtScn command to another key the Mac does have?
    For instance, map it to Control-Alt-3 or something.
    http://www.howtogeek.com/howto/windows-vista/map-any-key-to-any-key-on-windows-xp-vista/

  • How to specify the storage access key for a ResourceFile?

    The Azure Batch tutorial shows how to put program files into a public container in a storage account and let Azure Batch download them to TVMs and run them.
    In a real-world scenario, if I don't want to use a public container or a shared access signature, and I want Azure Batch to use an access key to reach the container where my task program files are located, is that possible? How do I do it?

    I see that you are conversant with the issues here, but for other readers let me provide a quick review:
    The properties of a task (ICloudTask/CloudTask) include a collection of ResourceFile instances. ResourceFile instances map blobs in Azure Storage to local files in the container/VM/guest OS. Azure Batch copies the files from storage into the VM before the task runs, and it uses the SAS (and other data) in the ResourceFile to do so.
    The ICloudTask/CloudTask.FilesToStage collection exposes the object model's mechanism for customizable file staging. The collection accepts instances of IFileStagingProvider, which are ultimately invoked to create/augment the ResourceFile collection on the task.
    A default implementation is provided: FileToStage.
    An instance of FileToStage maps a file local to the client library to a file ultimately in the VM (indirecting through blob storage/SAS). When instances of FileToStage are added to CloudTask.FilesToStage, the following occurs on Commit()/AddTask:
    1. A container is created in the given storage account; the name is constructed to avoid collisions.
    2. The container is given a restricted SharedAccessBlobPolicy.
    3. All of the local files specified are uploaded to that container.
    4. An SAS with a 24-hour expiry is created for each blob.
    5. A ResourceFile is constructed for each FileToStage.
    6. The ResourceFile for each FileToStage is added to the CloudTask.ResourceFiles collection.
    FileToStage and the FilesToStage collection are intended to assist customers who either want a shortcut around the issues of blob containers and SAS, or who want to control the file staging process via a custom implementation of IFileStagingProvider.
    When using the default FileToStage implementation to stage local files, care should be taken to monitor the number of containers created and the storage cost implications.
    Your concerns about SAS-based methods are not directly addressed by the default implementation. I would only note that SAS values can be re-used across tasks and jobs, so the existing implementation can be used to get local data into storage and usable SAS values. However, it seems you already have these sorts of features implemented, and as you point out, there is the problem of SAS expiry.
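    As a rough illustration only (not the actual SDK code), the staging steps above can be sketched in Python. The container-naming scheme and the returned structure here are assumptions modeled on the behavior described, not the real FileToStage implementation:

    ```python
    import hashlib
    from datetime import datetime, timedelta, timezone

    def plan_file_staging(local_files, job_id):
        """Sketch of the FileToStage-style workflow: pick a collision-resistant
        container name, stamp a 24-hour SAS expiry, and map each local file to
        a blob name. The real implementation also uploads the files, applies a
        restricted access policy, and signs a per-blob SAS."""
        # Collision-avoiding container name (assumed scheme, not Batch's own).
        digest = hashlib.sha1(job_id.encode("utf-8")).hexdigest()[:12]
        container = f"staging-{digest}"
        expiry = datetime.now(timezone.utc) + timedelta(hours=24)
        resource_files = [
            {"container": container,
             "blob_name": path.rsplit("/", 1)[-1],  # blob named after the file
             "local_path": path,                    # where the VM will place it
             "sas_expiry": expiry.isoformat()}
            for path in local_files
        ]
        return container, resource_files
    ```

    With the real SDK you would instead add FileToStage instances to CloudTask.FilesToStage and let Commit()/AddTask perform the upload and SAS generation.
    
    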
    daryl

  • Help required on XML format for Sourcing system

    Hi All,
    We are integrating PI with SAP Sourcing, and Sourcing expects the XML in the structure below.
    XML required by the target system:
    <?xml version="1.0" encoding="UTF-8"?>
    <sapesourcing defaultlanguage="" xsi:noNamespaceSchemaLocation="Locations.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <objects>
    <object classname="">
                <fields>
    Field1....n
                </fields>
            </object>
          </objects>
    </sapesourcing>
    I have converted the XML into an XSD and imported it into PI, but I have no idea how to populate the tag as required by the Sourcing system.
    The tag required by the Sourcing system is:
    "<sapesourcing defaultlanguage="" xsi:noNamespaceSchemaLocation="Locations.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    The mapping generates the tag below:
    <?xml version="1.0" encoding="UTF-8"?>
    <sapesourcing defaultlanguage="">  ........
    Regards,
    Mani

    Hi Mani,
    Try adding the missing attributes to the root tag using an XSLT mapping. Go through the link below.
    add Namespace in message mapping | SCN
    Thanks,
    Satish.
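    For readers outside PI, the required transformation amounts to adding two attributes to the <sapesourcing> root element. A minimal Python sketch of that logic (in PI itself this would be an XSLT or Java mapping; the element and attribute names are taken from the thread above):

    ```python
    import xml.etree.ElementTree as ET

    XSI = "http://www.w3.org/2001/XMLSchema-instance"

    def add_schema_attributes(xml_text):
        """Add xsi:noNamespaceSchemaLocation (and its xmlns:xsi declaration)
        to the <sapesourcing> root, as required by the Sourcing system."""
        # Register a readable prefix; ElementTree emits the xmlns:xsi
        # declaration automatically when a namespaced attribute is present.
        ET.register_namespace("xsi", XSI)
        root = ET.fromstring(xml_text)
        root.set(f"{{{XSI}}}noNamespaceSchemaLocation", "Locations.xsd")
        return ET.tostring(root, encoding="unicode")
    ```

    Feeding it the mapping output `<sapesourcing defaultlanguage="">...</sapesourcing>` yields a root tag carrying both the xmlns:xsi declaration and the schema-location attribute.
    
    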

  • Updater keeps restarting

    Hi,
    I am getting an issue where the updater instance (part of SAP Convergent Charging) keeps restarting.
    The trace is as follows:
    2011-10-28 06:11:32.742 - INFORM - [LAUNCHER_PROCESS_ID] - The process identifier is 6,964.
    2011-10-28 06:11:32.746 - INFORM - [LAUNCHER_STARTING] - Starting SAP Convergent Charging 2.0 4.3.7.0.
    2011-10-28 06:11:32.746 - INFORM - [LAUNCHER_SYSTEM_INFO] - SAP AG 1.6.0_26 platform on Windows Server 2008 R2 6.1 amd64.
    2011-10-28 06:11:32.831 - INFORM - [tracing] - Starting with dynamic system definition. System ID is 'C07'.
    2011-10-28 06:11:32.845 - INFORM - [tracing] - System Discovery Browser started. Will search for 60 seconds maximum step 2 seconds for system 'C07' on internal interface.
    2011-10-28 06:11:32.845 - INFORM - [tracing] - System Discovery Browser started. Now searching for system 'C07' on internal interface.
    2011-10-28 06:11:34.858 - INFORM - [tracing] - System Discovery Browser stopped. System 'C07' found.
    2011-10-28 06:11:35.002 - INFORM - [tracing] - Heart beat service: connection (dispatcher#[email protected]:2100) is added.
    2011-10-28 06:11:37.174 - INFORM - [LAUNCHER_WAITING_AUTHORIZATION] - Instance waiting authorization from dispatcher.
    2011-10-28 06:11:42.182 - INFORM - [LAUNCHER_AUTHORIZATION_RECEIVED] - Authorization received from dispatcher.
    2011-10-28 06:11:43.215 - FATAL - [LAUNCHER_INVALID_INSTANCE_ID] - No instance info for this instance id 'updater#1'.
    2011-10-28 06:11:43.215 - FATAL - [LAUNCHER_INSTANCE_MAP_INITIALIZATION_FAILURE] - Instance map initialization failure. Reason is 'No instance info for this instance id 'updater#1'.'.
    2011-10-28 06:11:43.215 - WARN - [tracing] - Dump occurred exception ...  Exception: No instance info for this instance id 'updater#1'. Stacktrace: com.highdeal.admin.LaunchingAbortException: No instance info for this instance id 'updater#1'.
         at com.highdeal.launcher.Instance.initializeFromInstanceMap(Instance.java:365)
         at com.highdeal.launcher.Instance.initialize(Instance.java:219)
         at com.highdeal.launcher.Launcher.main(Launcher.java:116)
    2011-10-28 06:11:43.215 - FATAL - [LAUNCHING_ABORTED] - SAP Convergent Charging 2.0 launching is aborted (please read previous messages).
    Please advise.
    Thanks.
    Edited by: Zain Ahmed on Oct 28, 2011 5:21 PM

    Hi. Try repairing the disk and disk permissions by using the install disk.
