Dedicated SolMan required for ...?

Hi,
I have a question that I was not able to answer by reading the guides or searching SDN, so maybe this question has not come up in the forum yet, or I am simply blind.
My question is:
Now that it is finally possible to use one SolMan for totally different landscapes or customers, are there limitations of the kind that a dedicated SolMan is required for new releases? I heard that if you have a Portal at version 7.0, you must have a dedicated SolMan instead of a shared one, otherwise SAP will not provide any support for the Portal landscape.
Is that only a rumor, or is it true? I was not able to validate this information.
Thanks in advance for any information.

Hello,
No, you do not need to dedicate a Solution Manager system to the Portal. We have customers whose portal is managed along with other solutions, and they have had no need to dedicate a Solution Manager server to the Portal.
The whole point of having a Solution Manager system is to manage the customer's system landscape.
Of course, depending on how large the landscape is, you may need to distribute the load over clustered Solution Manager systems, or size up to a larger server. The answer really depends upon your available resources. The Portal is just a managed system. Now, if you are running Service Desk, this can place a load on the Solution Manager system, so if it is not sized large enough, you may elect to dedicate a server to running Service Desk.
But that would only become a requirement if your current server(s) are not sized properly.
I hope this answers your question.

Similar Messages

  • How many SLDs are required for 2 SolMan systems?

    Scenario1:
    One SolMan system for the entire landscape -> one SLD is required.
    Should this SLD be a separate system, or can it be within the SolMan?
    Scenario2:
    One SolMan system for DEV/QAS and another SolMan system for PRD.
    How many SLDs are required? Can we manage with a single SLD?
    If two SLDs, should both be separate systems, i.e. one SLD -> DEV/QAS SolMan, another SLD -> PRD SolMan?
    If only one SLD, should it be a separate system, or can it be within a SolMan?
    We do have EP in the client environment. Can we use the SLD of the EP?

    Hi,
    You can use one SLD for all your systems if you want. You can either use an existing one or the one that comes with SolMan. That's up to you.
    Any scenario is possible.
    /cheers

  • Hardware Requirements for SOLMAN

    Hi everybody,
    I'll appreciate it (with points) if somebody can give me the minimum hardware requirements for installing Solution Manager on a Windows machine.
    We are about to install ECC 6.0 (with IS-U/CCS) to build a demo.
    Thanks,
    Jessica.

    H/W specification for SolMan 4.0, Windows/Oracle standalone:
    RAM - 3 GB minimum
    HDD - 80 GB
    CPU - 2 (1 CPU will also do)
    You can check the installation guides at https://websmp107.sap-ag.de/instguides under SAP Components -> Solution Manager.
    You can check the hardware and software requirements for your operating system (OS) and the SAP instances using the Prerequisite Checker tool.
    The Prerequisite Checker provides information about the requirements that you need to meet before you start the installation; for example, it checks the requirements for the different installation services.

  • Is Solution Manager required for ECC 6.0 upgrade?

    We will upgrade our system from 4.7 to ECC 6.0. Is Solution Manager a required tool for ECC 6.0? My understanding is that Solution Manager has to run on a different server, i.e. we will have to buy a new server. We have a small SAP implementation with only the FI and CO modules, so we are trying to cut our costs where we can. Can anybody share their experience? Thanks for any input.

    In an upgrade, Solution Manager is required for upgrade keys and for downloading the support packages. Nowadays, to download support packages you have to approve them in Solution Manager, otherwise they won't show up in your download basket.
    Maybe SAP can provide you the keys and approve the SPs (they used to do that before); in that case it isn't required.
    You can also manage your upgrade project in SolMan. Sooner or later you will need to install Solution Manager; it's a nice system with many good features like central monitoring, EarlyWatch Alert, etc.
    Thanks
    Prince Jose

  • Minumum hardware requirement for Siebel Document server?

    What is the minimum hardware requirement for the Siebel Document Server?

    To install the Document Server, perform the following steps:
    Obtain the required hardware.
    Install the Siebel Server.
    Install third-party applications.
    After installing the Siebel Server, you may have to follow some of the steps described in Configuring the Document Server. The Document Server is a component of Siebel eDocuments, which can be installed during installation of the Siebel Server. Consult with your system administrator to verify that the Document Server has been installed and configured according to the requirements listed in the following sections.
    Hardware Requirements
    The Document Server is supported on the Windows 2000 platform. To maintain stability and performance, the Document Server must run on a dedicated host machine. The Siebel Server on this machine should host one instance of the Siebel eDocuments component group. No other components should be running other than required system components.
    To support more users, you can run additional instances of the Siebel eDocuments component group on additional dedicated host machines.
    Installing the Siebel Server
    Install the Siebel Server on the dedicated host machine of the Document Server. For more information about installing the Siebel Server, see Siebel Server Installation Guide for Microsoft Windows.
    During installation, you will have the option to enable component groups. Select the Siebel eDocuments component group. Do not enable any other component groups.
    This step can also be performed after installation is complete, as described in the section Enabling the Component Group.
    Installing Third-Party Applications
    You must install Microsoft Word.
    To install Microsoft Office
    Log on to the server machine using the same user account that the Siebel Server NT service uses. This user account must belong to the Administrators group.
    NOTE: The install must use the same account to be used by the NT Service. The installer configures COM security settings so that the installing user account will have the correct permissions to access and launch the application. If the install uses an account other than the Siebel Server NT service account, errors may occur when the Document Server tries to launch the application.
    Install Microsoft Office using a Typical, or complete, installation. Verify that your version of Microsoft Office will install the Web Authoring Tools (HTML) component with the Typical installation. If it does not, you will need to use the Custom installation option, and install all of the typical components as well as the Web Authoring Tools (HTML) component.
    Start the applications that will be used by the Document Server. This forces the applications to register themselves.
    Close the applications.
    Regards,
    Joseph Arul Dass

  • PC requirements for Diadem 9.0

    We want to replace the PC dedicated to DIAdem 9.0 in our lab with a new one with better performance, so I would like to know the PC requirements for DIAdem 9.0 (RAM, processor, motherboard, ...), or what you would recommend.

    Hi there,
    You can check your DIAdem readme file. If you don't have it:
    1. Go to www.ni.com/support
    2. Go to Drivers and Updates and select DIAdem version 9.0
    3. From there you can download the DIAdem software, and every link also includes the readme file. In it you will see the minimum requirements needed.
    For DIAdem 9.0 SP1 (9.0.1) these are the minimum requirements:
    ===================
    System requirements
    ===================
    DIAdem has the following prerequisites for smooth operation:
    Hardware
    - IBM-compatible personal computer with an Intel Pentium (II, III, or IV) or AMD (Athlon, K6, or K7) processor
    - RAM: at least 256 MB; recommended 512 MB
    - Hard disk: at least 190 MB free space; at least 450 MB free space during installation
    - Graphics board: color depth of at least 16 bit (High Color); recommended 24 or 32 bit (True Color) and a screen resolution of 1024 x 768 or higher
    Operating systems
    - Windows 98 SE
    - Windows ME
    - Windows NT 4, Service Pack 6 or later
    - Windows 2000, Service Pack 3 or later
    - Windows XP Pro, Service Pack 1 or later
    Warning: DIAdem cannot run on Windows XP Pro without Service Pack 1 or later. DIAdem cannot run on Windows 95.
    Miscellaneous
    - Internet Explorer version 5 or later
    Of course, a better PC will improve the software's performance.
    Regards,
    Jaime Cabrera
    NI Applications Engineering Spain

  • Hardware required for RAC setup at home

    Hi All,
    Could you please suggest the hardware required for practicing RAC at home?
    Regards,
    VN

    Hi, "user7202581",
    For practicing purposes, you could consider a VirtualBox or OVM based setup at home. We have used those for hands-on labs at Oracle OpenWorld more than once. I recommend you use a machine with at least 8 GB of memory and a dual-core (HT-enabled) processor. Some configuration examples: a Lenovo ThinkPad laptop (8 GB RAM, 320 GB HDD, i5 dual core, 2.5 GHz, Windows 7) or a current Mac Mini (i5 dual core, 2.3 GHz, 8 GB memory) would do the trick for a two-node cluster. Note that I mention the host OS in case you use VirtualBox; for OVM-based setups you either have to use a dual-boot system or a dedicated machine. You will obviously be somewhat restricted in the hardware failures you can impose on this system (due to the VM setup), but for practicing purposes you should find those environments sufficient, I think. Also see the RAC on OVM templates: http://www.oracle.com/technetwork/server-storage/vm/rac-template-11grel2-166623.html
    Hope that helps. Thanks,
    Markus

  • Updated Information Required for KB: 159865

    Dear "dedicated
    team for the development issues who will be able to assist you accordingly."
    Please assist RE: 
    http://answers.microsoft.com/en-us/windows/forum/windows8_1-performance/updated-information-required-for-kb-159865/e74cd617-49f7-4725-b0ac-3c1cc7cbc3f7
    Steve F

    Hi Kevin! I have been unable to reply on this forum for quite a while. Unfortunately, after being advised by a member of a Microsoft support team to delete and re-create my Live account, I tried replying but received IIS errors; then the page simply wouldn't display the "reply box", no matter what browser or version!
    Beyond that, I don't have much of an update. I didn't think you were serious. What would you like me to check "%windir%\inf\setupapi.dev.log" for, specifically? I'm asking where to find a mapping without even knowing how the mapping is done, or what it would be logged as, if it is logged at all!
    Asking a vague question in response to a specific question is by no means helpful. Again, I have various certifications from various companies as a Storage Administrator; unfortunately, I have yet to get a certification in divination with regard to the source code of proprietary software.
    Someone who knows what the error message means in physical reality needs to explain it, because the message itself does not refer to physical reality in any coherent way. It seems to refer to an abstract identifier, although that is, of course, speculation at best.

  • What is Minimum database requirement for EBS R12

    What is the minimum database requirement for EBS R12? For example, does it work only with Enterprise Edition, or with Standard Edition as well?
    Regards,
    Sandeep V

    The question is very interesting and very important. The link does not answer whether Standard Edition can be used. Obviously, Rapid Install installs an EE database, but we can move the database to another dedicated database server. The question then becomes: can we move it to a Standard Edition database server? That has huge ramifications on costs, so this is extremely important, and nowhere could I find a positive answer. I wonder if anyone has found one?
    AFAIK, it is not supported.
    My own research shows that the VIS database (built at home) has partitioned tables in the APPLSYS and APPS schemas. Partitioning is not supported in Standard Edition.
    Correct.
    I wonder if we can migrate the database and manually convert the partitioned tables to non-partitioned ones on Standard Edition. Will EBS R12 break? Will Oracle Support cancel its support?
    Oracle will not support this -- please log an SR to confirm this with Oracle Support.
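    For reference, a dictionary query such as the following is one way to check an instance for the partitioned tables mentioned above. It is an illustration added here, not part of the original exchange, and it assumes you can query the DBA views:
    SELECT owner, table_name
      FROM dba_part_tables
     WHERE owner IN ('APPLSYS', 'APPS');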
    Thanks,
    Hussein

  • ORACLE requirement for 8500 TPS

    Hi All,
    I am designing a database system that must handle 8500 TPS (transactions per second). I want to use HP blade servers with quad processors at 2.0 GHz and 4 GB RAM; the OS will be RHEL 5.0. Can anyone suggest how many nodes are required to design this kind of database?
    Thanx
    Rajesh Bansal

    "30 milliseconds (max)" - is that just the portion of the total response time spent in the DB, or is it end-user time? What kind of client(s) will you have?
    Looking at the proposed amount of server memory, the number of sessions, and the extremely low maximum timings (even if it's just time in the DB), I would say you'll have to have at least 4 such servers and a very small DB size (less than a dozen GB): for 1000 dedicated Oracle processes, at least 4-5 GB of PGA is required. Suppose your SGA were 1.7 GB (since a 64-bit installation is not necessary for that type of server, and 1.7 GB is the default possible maximum for the SGA on 32-bit Linux), out of which 384 MB is the shared pool, 128 MB the large and Java pools plus the redo log buffer, and the rest the buffer cache (around 1100 MB). About 1 GB should be reserved for the OS. Around 1 GB per server is left for the PGA. That's why 4 servers is the lower bound with 4 GB of memory. And my calculations are actually for a single instance, not a RAC node; for RAC there are additional requirements (additional processes, clusterware, maybe ASM, etc.).
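    To restate the arithmetic behind the "at least 4 servers" figure, using only the numbers from the paragraph above, per 4 GB node:
    SGA (32-bit Linux default ceiling)        ~1.7 GB
      - shared pool                            384 MB
      - large + Java pools + redo log buffer   128 MB
      - buffer cache                          ~1100 MB
    reserved for the OS                       ~1.0 GB
    left for PGA                              ~1.0 GB
    With roughly 4-5 GB of PGA needed for 1000 dedicated server processes (about 4-5 MB each), ~1 GB of PGA per node implies at least four such nodes.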
    I wouldn't even try to run an application with that response-time requirement on nodes with such a small amount of physical memory. 8-32 GB is a more appropriate choice, IMO.

  • System Requirements for Hosting an Event - What about Siebel Remote ?

    The Siebel 7.8 Bookshelf, chapter 8 "Hosting an Event", section "System Requirements for Hosting an Event", mentions that a dedicated connection to the Siebel database is required. Does this really mean you cannot use Siebel Remote to manage Events on a local DB if you are on site with no access to the network? Is that your understanding as well? Any experiences or comments?
    Best Regards to All
    Fred

    My guess is you can, unless there is some Server triggered stuff that doesn't get triggered from the Remote Client. Although usually the trigger still happens as soon as you synchronize.
    If you are not doing any "complex" stuff, you should still be able to manage Accounts and Contacts, for example.
    You can test this yourself and see if you need to activate any Views for local use. To be really sure I think it is better to talk to your TAM or open a low prio SR.
    If you really need to be connected, you better investigate the use of UMTS, GPRS, etc.

  • Table for temporary stock/requirement for tcode /afs/md04

    Dear experts,
    I developed a Z-report to display the STO number, production order number, operation, etc.
    Mainly I use the AFPO, AFRU, MSEG, MCHB, and J_3ABDSI tables.
    My problem is, when I compare with tcode /afs/md04, tab "temporary stock/requirement",
    for some MATNR the data shows properly,
    and some MATNR are blank, with the message "Last MRP run on 04.04.2011" or a similar date.
    How can I filter in the Z-report which MATNR are not in tcode /afs/md04, tab "temporary stock/requirement"?
    My code is:
    SELECT j_3abdsi~aufnr j_3abdsi~matnr j_3abdsi~j_4krcat j_3abdsi~mbdat j_3abdsi~menge
      INTO TABLE it_eket
      FROM j_3abdsi
      FOR ALL ENTRIES IN it_final1
      WHERE j_3abdsi~j_4krcat = it_final1-j_4ksca AND
            j_3abdsi~matnr    = it_final1-matnr   AND
            j_3abdsi~werks    = it_final1-werks   AND
            j_3abdsi~bdart    = 'TB' AND
            j_3abdsi~plart    = 'B'  AND
            j_3abdsi~bsart    = 'UB'.
    Please help.
    Rayhan

    " Collect AFS temporary stock/requirement data per material.
    CLEAR i_data1.
    REFRESH i_data1.
    LOOP AT i_mara.
      READ TABLE i_marc WITH KEY matnr = i_mara-matnr BINARY SEARCH.
      IF sy-subrc = 0.
        CALL FUNCTION 'J_3AM_DISPOSITION_DISPL'
          EXPORTING
            i_matnr           = i_mara-matnr
            i_werks           = p_werks
    *       i_dialog          = ' '
    *       i_sperr           = ' '
    *       i_aufruf          = ' '
    *       i_baner           = ' '
            i_todate          = todate
    *       i_header_only     = ' '
          IMPORTING
            ex_dbba           = i_data3
    *       e_mdkp            =
    *       ex_pbbd           =
    *       ex_meld           =
    *       e_cm61m           =
          EXCEPTIONS
            material_gesperrt = 1
            wbz_fehler        = 2
            material_prgr     = 3
            dispo_gesperrt    = 4
            OTHERS            = 5.
        IF sy-subrc <> 0.
          MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
            WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
        ELSE.
          IF i_data3[] IS NOT INITIAL.
            LOOP AT i_data3 INTO i_data4.
              IF ( i_data4-j_3astat = 'A' OR i_data4-j_3astat = 'T' ) AND i_data4-j_3abskz = 'C'.
                READ TABLE i_t001l WITH KEY lgort = i_data4-lgonr BINARY SEARCH.
                IF sy-subrc = 0.
                  CLEAR i_data1str.
                  i_data1str-matnr    = i_data4-matnr.
                  i_data1str-j_3asize = i_data4-j_3asize.
                  i_data1str-lgort    = i_data4-lgonr.
                  i_data1str-menge    = i_data4-menge.
                  " Aggregate quantities per material/size/storage location.
                  COLLECT i_data1str INTO i_data1.
                ENDIF.
              ENDIF.
            ENDLOOP.
          ENDIF.
        ENDIF.
      ENDIF. " material found in i_marc
    ENDLOOP.
    Questions:
    The i_mara recordset contains 500 materials.
    It takes more than 3 hours to finish this report.
    What should I change?
    Can you help me?
    Thanks.
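    A hedged sketch of one common tuning step for the code above, assuming i_marc and i_t001l are filled once before the main loop (the table and field names are taken from the posted code; the SORT statements themselves are an addition, not part of the original thread). READ TABLE ... BINARY SEARCH is only reliable and fast when the internal table is sorted by the key being read, so sorting both lookup tables once up front is usually the first thing to check:
    " Sort the lookup tables once, before LOOP AT i_mara,
    " so that the BINARY SEARCH reads are correct and fast.
    SORT i_marc  BY matnr.
    SORT i_t001l BY lgort.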

  • What are the settings required for QM in procurement

    Hi Team,
    What are the settings required for QM in procurement? I have set the indicator for QM in procurement in the QM view of the material master.
    I am not clear about the following fields to be maintained in the QM view:
    QM Control Key
    Certificate type
    Target QM system
    Tech. delivery terms indicator
    Please suggest in which cases these fields are used. Are they relevant to quality certificates?
    Thanks

    Hi,
    The meanings are as follows:
    QM Control Key:
    If you activate the indicator for QM in procurement in the material master record at the client level, you must also store a control key at the plant level for quality management in procurement.
    Certificate type:
    Certificate types apply to certificate processing in procurement and to certificate creation.
    Target QM system:
    Specifies whether the vendor's verified QM system, according to the vendor master record or quality info record (for a combination of vendor/material), meets the requirements for QM systems as specified in the material master.
    -  If you activate the indicator for QM in procurement in the material master record at the client level, you must also store a control key at the plant level for quality management in procurement. If you want procurement control, define the control key accordingly.
    -  If you want a particular certificate from the vendor for the material, then you have to define the certificate type.
    Also, you have to maintain the material/vendor quality info record at the plant level.
    Thanks,
    JM

  • List of Manual Setup required for iSetup to work

    Hi All,
    This is Mugunthan from iSetup development. Based on my interactions with customers and Oracle functional experts, I have documented the list of manual setups that are required for smooth loading of selection sets, and I am sharing it here. Please let me know if anyone has had to perform additional manual setup while using iSetup.
    Understanding iSetup
    iSetup is a tool to migrate and report on your configuration data. Various engineering teams from Oracle develop the APIs/programs that migrate the data across EBS instances, so all your data is validated for all business cases and data consistency is guaranteed. Using this tool requires a good amount of functional setup knowledge and a bit of technical knowledge.
    Prerequisite setup for Instance Mapping to work
    ·     The ATG patch set level should be the same across all EBS instances.
    ·     Copy the DBC files of the other EBS instances participating in the migration under the $FND_SECURE directory (refer to the note below for details).
    ·     Edit sqlnet.ora to allow connections between the DB instances (tcp.invited_nodes=(<source>,<central>)); a minimal example is shown after the note below.
    ·     Make sure that the same user name, with the iSetup responsibility, exists in all EBS instances participating in the migration.
    Note: The iSetup tool is capable of connecting to multiple EBS instances. To do so, it uses the DBC file information available under the $FND_SECURE directory. Let us consider three instances A, B, and C, where A is the central instance, B is the source instance, and C is the target instance. After copying the DBC files to all nodes, the $FND_SECURE directory would look like this on each machine:
    A => A.dbc, B.dbc, C.dbc
    B => A.dbc, B.dbc
    C => A.dbc, C.dbc
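    A minimal sketch of the sqlnet.ora entry referenced in the bullet above. The host names are placeholders, and the example assumes TCP valid node checking is in use on the database tier (if it is not, tcp.invited_nodes has no effect):
    # sqlnet.ora on the database tier
    tcp.validnode_checking = yes
    tcp.invited_nodes = (sourcehost.example.com, centralhost.example.com)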
    Prerequisite for registering Interface and creating Custom Selection Set
    The iSetup super role is mandatory to register an interface and create a custom selection set. It is not sufficient to register the API on the central/source instance alone; you must register the API on all instances participating in migration/reporting.
    Understanding how to access/share extracts across instances
    Sharing iSetup artifacts
    ·     Only the exact same user can access extracts, transforms, or reports across different instances.
    ·     The “Download” capability offers a way to share extracts, transforms, and loads.
    Implications for Extract/Load Management
    ·     Option 1: Same owner across all instances
    ·     Option 2: Same owner in Dev, Test, UAT, etc – but not Production
    o     Extract/Load operations in non-Production instances
    o     Once thoroughly tested and ready to load into Production, download to desktop and upload into Production
    ·     Option 3: Download and upload into each instance
    Security Considerations
    ·     iSetup does not use SSH to connect between instances. It uses the Concurrent Manager framework to launch concurrent programs on the source and target instances.
    ·     iSetup does not write password to any files or tables.
    ·     It uses JDBC connectivity obtained through standard AOL security layer
    Common Incorrect Setups
    ·     Failure to complete/verify all of the steps in “Mapping instances”
    ·     The DBC file should be copied again if the EBS instance has been refreshed or AutoConfig has been run.
    ·     Custom interfaces should be registered in all EBS instances. Registering them on the central/source instance is not sufficient.
    ·     The standard Concurrent Manager should be up to pick up iSetup concurrent requests.
    ·     The iSetup Financials and SCM modules are supported from 12.0.4 onwards.
    ·     iSetup is not certified on RAC. However, you may still work with iSetup if you copy the DBC file to all nodes with the same name under which it was registered through the Instance Mapping screen.
    Installed Languages
    iSetup has a limitation: it cannot load or report if the number and type of installed languages or the DB character set differ between the central, source, and target instances. If that is your case, there is a workaround. Download the extract zip file to the desktop and unzip it. Edit AZ_Prevalidator_1.xml to match your target instance's languages and DB character set. Zip it back up and upload it to the iSetup repository. Now you will be able to load to the target instance. You must ensure that this does not corrupt data in the DB; this is considered a customization, and any data issue resulting from this modification is not supported.
    Custom Applications
    Application data is the prerequisite for most of the Application Object Library setups, such as Menus, Responsibilities, and Concurrent Programs. iSetup does not migrate custom applications as of now. So, if you have created any custom applications on the source instance, please create them manually on the target instance before moving Application Object Library (AOL) data.
    General Foundation Selection Set
    Setup objects in the General Foundation selection set support filtering, i.e. the ability to extract specific setups. Since most of the AOL setup data such as Menus, Responsibilities, and Request Groups is shipped by Oracle itself, it does not make sense to migrate all of it to the target instance, since it is already available there. Hence, it is strongly recommended to extract only those setup objects that have been edited or added by you; this improves performance. iSetup uses FNDLOAD (the seed data loader) to migrate most of the AOL setups. The default behavior of FNDLOAD is given below.
    Case 1 – Shipped by Oracle (Seed Data)
    FNDLOAD checks the last_update_date and last_updated_by columns to decide whether to update a record. If a record is shipped by Oracle, the default owner of the record is Oracle, and FNDLOAD skips records that are identical, so it won't change the last_updated_by or last_update_date columns.
    Case 2 – Shipped by Oracle and customized by you
    If a record was customized in the source instance, FNDLOAD updates the record in the target based on the last_update_date column. If the last_update_date in the target is more recent, FNDLOAD does not update the record, so it won't change the last_updated_by column. Otherwise, it updates the record with the user who customized it in the source instance.
    Case 3 – Created and maintained by customers
    If a record was newly added or edited in the source instance by you, FNDLOAD updates the record in the target based on the last_update_date column. If the last_update_date of the record in the target is more recent, FNDLOAD does not update the record, so it won't change the last_updated_by column. Otherwise, it updates the record with the user who customized it in the source instance.
    Profiles
    HR: Business Group => Set the name of the Business Group for which you would like to extract data from source instance. After loading Business Group onto the target instance, make sure that this profile option is set appropriately.
    HR: Security Profile => Set the name of the Business Group for which you would like to extract data from source instance. After loading Business Group onto the target instance, make sure that this profile option is set appropriately.
    MO: Operating Unit => Set the Operating Unit name for which you would like to extract data from source instance. After loading Operating Unit onto the target instance, make sure that this profile option is set if required.
    Navigation path to do the above setup:
    System Administrator -> Profile -> System.
    Query for the above profiles and set the values accordingly.
    Descriptive & Key Flex Fields
    You must compile and freeze the flexfield values before extracting with iSetup; otherwise, it would result in a partial migration of data. Please verify that all the data has been extracted by reporting on your extract before loading, to ensure data consistency.
    You can load KFF/DFF data to the target instance even when the structures in the source and target instances are different, but only in the cases below.
    Case 1:
    Source => Loc1 (mandatory), Loc2 (mandatory), Loc3, and Loc4
    Target => Loc1, Loc2, Loc3 (mandatory), Loc4, Loc5, and Loc6
    If you provide values for Loc1 (mandatory), Loc2 (mandatory), Loc3, and Loc4, then the locations will be loaded to the target instance without any issue. If you do not provide a value for Loc3, the API will fail, as Loc3 is a mandatory field in the target instance.
    Case 2:
    Source => Loc1 (mandatory), Loc2 (mandatory), Loc3, and Loc4
    Target => Loc1 (mandatory), Loc2
    If you provide values for Loc1 (mandatory), Loc2 (mandatory), Loc3, and Loc4 and load the data to the target instance, the API will fail, as Loc3 and Loc4 do not exist in the target instance.
    It is always recommended that the KFF/DFF structures be the same in both the source and target instances.
    Concurrent Programs and Request Groups
    The Concurrent Program API migrates the program definition (definition + parameters + executable) only. It does not migrate the physical executable files under APPL_TOP; please use a custom solution to migrate the executable files. Load Concurrent Programs prior to loading Request Groups; otherwise, the associated concurrent program metadata will not be moved, even though the Request Group extract contains the associated Concurrent Program definition.
    Locations - Geographies
    If you have any custom Geographies, iSetup does not have an API to migrate this setup. Enter them manually before loading the Locations API.
    Currency Types
    iSetup does not have an API to migrate currency types. Enter them manually on the target instance after loading the Currency API.
    Navigation: GL Fiscal Super User -> Setup -> Currencies -> Rates -> Types
    Associating Employee Details with a User
    The extract process does not capture the employee details associated with users. So, after loading the employee data successfully on the target instance, you have to configure these associations again on the target instance.
    Accounting Setup
    Make sure that all Accounting Setups that you wish to migrate are in status “Complete”. In-progress or incomplete Accounting Setups will not be migrated successfully.
    Note: Currently iSetup does not migrate Sub-Ledger Accounting (SLA) methods. Oracle provides some default SLA methods such as Standard Accrual and Standard Cash; you may make use of these two. If you want to use your own SLA method, you need to create it manually on the target instances, because iSetup does not have an API to migrate SLA. If a primary ledger is associated with secondary ledgers using a different chart of accounts, then mapping rules should be defined manually in the target instance. The mapping rule name should match the XML tag “SlCoaMappingName”. After that you will be able to load the Accounting Setup to the target instance.
    Organization API - Product Foundation Selection Set
    All organizations that are defined in the HR module will be extracted by this API. This API will not extract Inventory Organizations or Business Groups. To migrate Inventory Organizations, you have to use the Inventory Organization API under the Discrete Mfg. and Distribution selection set. To extract Business Groups, you should use the Business Group API.
    Inventory Organization API - Discrete Mfg & Distribution Selection Set
    The Inventory Organization API extracts Inventory Organization information only. You should use the Inventory Parameters API to move parameters such as accounting information. The Inventory Organization API supports update, which means that you can update existing header-level attributes of an Inventory Organization on the target instance. The Inventory Parameters API does not support update; to update Inventory Parameters, use the Inventory Parameters Update API.
    We have a known issue where the Inventory Organization API migrates non-process-enabled organizations only. If your inventory organization is process enabled, you can migrate it with a simple workaround. Download the extract zip file to the desktop and unzip it. Navigate to the Organization XML and edit the XML tag <ProcessEnabledFlag>Y</ProcessEnabledFlag> to <ProcessEnabledFlag>N</ProcessEnabledFlag>. Zip the extract back up and upload it to the target instance. You can load the extract now. After successful completion of the load, you can manually enable the flag through the Forms UI. We are working on this issue and will update you once a patch is released to Metalink.
    Freight Carriers API - Product Foundation Selection Set
    The Freight Carriers API in the Product Foundation selection set requires Inventory Organization and Organization Parameters as prerequisite setups. These two APIs are available under the Discrete Mfg. and Distribution selection set. Also, the Freight Carriers API is available under the Discrete Mfg. and Distribution selection set under the name Carriers, Methods, Carrier-ModeServ, Carrier-Org. So, use the Discrete Mfg. selection set to load Freight Carriers. In the next rollup release the Freight Carriers API will be removed from the Product Foundation selection set.
    Organization Structure Selection Set
    It is highly recommended to set a filter and to extract and load data related to one Business Group at a time. For example, setup objects such as Locations, Legal Entities, Operating Units, Organizations, and Organization Structure Versions support filtering by Business Group. So, set the filter for a specific Business Group and then extract and load the data to the target instance.
    List of mandatory iSetup Fwk patches*
    8352532:R12.AZ.A - 1OFF:12.0.6: Ignore invalid Java identifier or Unicode identifier characters from the extracted data
    8424285:R12.AZ.A - 1OFF:12.0.6:Framework Support to validate records from details to master during load
    7608712:R12.AZ.A - 1OFF:12.0.4:ISETUP DOES NOT MIGRATE SYSTEM PROFILE VALUES
    List of mandatory API/functional patches*
    8441573:R12.FND.A - 1OFF:12.0.4: FNDLOAD DOWNLOAD COMMAND IS INSERTING EXTRA SPACE AFTER A NEWLINE CHARACTER
    7413966:R12.PER.A - MIGRATION ISSUES
    8445446:R12.GL.A - Consolidated Patch for iSetup Fixes
    7502698:R12.GL.A - Not able to Load Accounting Setup API Data to target instance.
    Appendix_
    How to read logs
    ·     Logs are very important to diagnose and troubleshoot iSetup issues. Logs contain both functional and technical errors.
    ·     To find the log, navigate to the View Detail screens of Extracts/Transforms/Loads/Standard/Comparison Reports and click the View Log button.
    ·     Generic Loader (FNDLOAD, or seed data loader) logs are not printed as part of the main log. To view the actual log, take the request_id specified in the concurrent log and search for it in the Forms request search window in the instance where the request was launched.
    ·     Functional errors are mainly due to
    o     Missing prerequisite data – you did not load one or more prerequisite APIs before loading the current API. For example, trying to load “Accounting Setup” without loading “Chart of Accounts” would result in this kind of error.
    o     Business validation failure – the setup is incorrect as per a business rule. For example, the start date cannot be greater than the end date.
    o     API does not support Update Records – if there is a matching record in the target instance and the API does not support update, you will get this kind of error.
    o     You unselected Update Records while launching the load – if there is a matching record in the target instance and you do not select Update Records, you will get this kind of error.
    Example – business validation failure
    o     VONAME = Branches PLSQL; KEY = BANKNAME = 'AIBC'
    o     BRANCHNAME = 'AIBC'
    o     EXCEPTION = Please provide a unique combination of bank number, bank branch number, and country combination. The 020, 26042, KA combination already exists.
    Example – business validation failure
    o     Tokens: VONAME = Banks PLSQL
    o     BANKNAME = 'OLD_ROYAL BANK OF MY INDIA'
    o     EXCEPTION = End date cannot be earlier than the start date
    Example – missing prerequisite data.
    o     VONAME = Operating Unit; KEY = Name = 'CAN OU'
    o     Group Name = 'Setup Business Group'
    o     ; EXCEPTION = Message not found. Application: PER, Message Name: HR_ORG_SOB_NOT_FOUND (Set of books not found for ‘Setup Business Group’)
    Example – technical or fwk error
    o     OAException: System Error: Procedure at Step 40
    o     Cause: The procedure has created an error at Step 40.
    o     Action: Contact your system administrator quoting the procedure and Step 40.
    Example – technical or fwk error
    o     Number of installed languages on source and target does not match.

    Mugunthan
    Yes, we have applied 11i.AZ.H.2. I am still getting several errors that we are trying to resolve.
    One of them is
    ===========>>>
    Uploading snapshot to central instance failed, with 3 different messages
    Error: An invalid status '-1' was passed to fnd_concurrent.set_completion_status. The valid statuses are: 'NORMAL', 'WARNING', 'ERROR'FND     at oracle.apps.az.r12.util.XmlTransmorpher.<init>(XmlTransmorpher.java:301)
         at oracle.apps.az.r12.extractor.cpserver.APIExtractor.insertGenericSelectionSet(APIExtractor.java:231)
    please assist.
    regards
    girish

  • Questions on SETSPN syntax and what is required for MANUAL AD auth

    I'll preface this by stating that I don't need to do all the extra stuff for Vintela SSO, SSO to the database, etc. I just need to know precisely what is necessary to get AD authentication working. I managed to get it working in XI R2 previously, but it has been so long, and I'm not 100% sure that everything I wound up doing was absolutely necessary, so I want to sort it out for good as we look at going to XI 3.1 SP3.
    In the XI 3.1 SP3 admin guide, page 503, the SETSPN command that is used as part of the setup process to establish a service account for AD authentication is outlined as follows:
    SETSPN.exe -A <ServiceClass>/<DomainName> <Serviceaccount>
    The guide suggests that the <ServiceClass> can be anything you want to assign arbitrarily. If I choose something other than the suggested "BOBJCentralMS" value, is there anywhere else I have to specify this value to allow the service account to function properly?
    The guide suggests that the <DomainName> should be the domain name on which the service account exists; however, I've seen many posts online which seem to indicate that this <DomainName> should actually be the FQDN of the server running the CMS service instead of the general domain name.
    Clarification there would be very helpful if anyone has some insight.

    The CMS account can have an SPN of spaghetti/meatballs; there are no requirements (except, I believe, at least 2 characters on each side of the /). The SPN you create should be the value entered in the CMC > Authentication > Windows AD screen, as illustrated below.
    The account must run the SIA, and it therefore must have AD permissions. Now, if you are using IIS or the client tools, you don't even need an SPN. The SPN is for Kerberos only, which is required for Java application servers.
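    As a concrete illustration of the point above (all names here are placeholders, not values from the guide or this thread): if the service account is svc_boe in the domain MYDOMAIN.COM, the command would look like
    SETSPN.exe -A BOBJCentralMS/MYDOMAIN.COM svc_boe
    and BOBJCentralMS/MYDOMAIN.COM is then the value you would enter in CMC > Authentication > Windows AD, while svc_boe is the AD account that runs the SIA.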
    The Vintela SSO white paper in this forum's sticky post explains the roles of a service account.
    Regards,
    Tim
