ChaRM Configuration for Multiple Landscapes

Dear Gurus,
We have 5 landscapes (ECC, BI, etc.) and one Solution Manager system. Is it possible to configure ChaRM for multiple landscapes and control all 5 of them from a single Solution Manager system?
Expecting your positive reply.
regards,
Guna

Hi,
You may create 5 logical components and assign them to maintenance projects, with a maintenance cycle for each of them.
There should be domain links from Solution Manager to each domain controller in your landscape.
Follow the configuration as in SPRO.
Hope this helps.
Feel free to revert.
-=-Ragu
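For illustration, the mapping might look like this, with one logical component and one maintenance project per landscape (the system IDs, clients, and project names below are assumptions, not from the original post):
Z_ECC: ECD:100 --> ECQ:200 --> ECP:300, assigned to maintenance project ZCHARM_ECC
Z_BI: BID:100 --> BIQ:200 --> BIP:300, assigned to maintenance project ZCHARM_BI
The remaining three landscapes follow the same pattern, all controlled from the single Solution Manager client.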

Similar Messages

  • How to execute an MTS (Master Test Script) in SAP eCATT Test Configuration for multiple variants

    I have an MTS (Master Test Script) that references 4 Test Scripts. As of now, I am able to run this MTS directly by opening it as a Test Script and running it for the default test data.
    Now, I want to run this MTS in SAP eCATT Test Configuration for multiple variants/test data. I am not able to see any parameters when I enter the MTS in the Test Configuration.
    The thread below is similar to my requirement, but I am not able to understand the solution properly.
    eCATT - how to run multiple test scripts with input variables in different test scripts?
    Any help in this case would be highly appreciated.
    Thanks & Regards,
    Vaibhav Gupta

  • One CTS+ for multiple landscapes?

    We have about 15 landscapes (ECC, R3, CRM, BI, PI, etc.)
    Each landscape has its own single SLD.
    We want to configure ONLY one CTS+ on an SM 7.0 system to handle all of the change request management for the 15 landscapes.
    Is this possible?
    Thanks!

    Hi,
    You can have a single CTS+ serving multiple landscapes.
    Check out the PDF, which shows the SAP best practice:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/10456aac-44f7-2a10-1fbe-8b7bcd7bcd58
    Rakesh

  • Best Configuration for Multiple Video Captures

    Hello,
    What would be faster and the most efficient for multiple video captures:
    - 9 video feeds written to 3 separate internal hard drives (3 feeds per hard drive)
    - 9 video feeds written to 3 internal hard drives in a combined RAID configuration
    - 9 video feeds written to 9 separate external hard drives via eSATA
    I have a Mac Pro 3.06GHz 12-core Intel Xeon machine that will be dedicated to a security camera application (SecuritySpy). The Mac OS and SecuritySpy will be on one hard drive and send the captured video from nine network HD cameras to the other hard drives. The computer, eSATA card, and all the hard drives are 6Gb/s capable. I also have access to SoftRAID for the RAID configuration, as well as an 8-bay eSATA hard drive enclosure if I choose. There are pros and cons for each of the configurations, but I am looking for the most reliable setup in terms of efficiency and longevity.
    “Technological change is like an axe in the hands of a pathological criminal.” (Albert Einstein, 1941),
    Dr. Z. 

    Just wanted to post the details of my final setup; maybe this will be useful to others looking to do something similar:
    1. Connected and set up the external hard drive (I bought a Western Digital My Book Pro Edition II 1TB) via Firewire 800 on iMac #1.
    2. Set up iMac #1 to back up via Time Machine. Even though the other computers will see the drive, apparently they will not recognize it in Time Machine unless you set it up first on the computer the drive is connected to (I tried).
    3. On iMac #2, connect to iMac #1 via file sharing; this is even easier now that you can access any computer on your network under "SHARED" in the sidebar of any Finder window.
    4. Double-click the external drive. If you don't see the drive, click "Connect as" in the top right of the window and login. It's not necessary to do this, but if you want to confirm you've connected to the network drive, you can go to Finder -> Preferences -> General and under "Show these items on the Desktop" check the box next to "Connected Servers". This will make an icon for the external drive appear on your desktop.
    5. Turn on Time Machine on iMac #2. Repeat for any other computers you want to back up.
    That's it. So far this has worked great for me. Obviously the backups run a little slower on the computers going through file sharing, but it's a small trade-off compared to either manually moving the drive around or getting three separate drives.

  • ChaRM configuration for a 4-system landscape

    Hi Experts,
    We are planning to implement ChaRM for a 4-system landscape: Development Server, Integration Server, Quality Server, and Production Server.
    We were able to configure the TMS-related steps, activate ChaRM in Solution Manager, and create a maintenance project in SOLAR_PROJECT_ADMIN.
    However, the standard delivered transaction types like Normal Correction and Urgent Correction are not working in a 4-system landscape. They only work for 3 systems (DEV, QAS, and PRD).
    Are there any documents or blogs that specifically explain all the steps required for 4 systems? Any kind of help would be highly appreciated.
    Thanks,
    Ravali

    Development --> Integration --> Quality --> Production.
    First, I will import the transport of copies into Integration.
    Yes, and you test the changes there.
    Second, the whole transport is released and imported to Integration, in the Test phase of the maintenance cycle, with the NC status set to Consolidated.
    Third: now, keeping the maintenance cycle in the Test phase and the NC in Consolidated, will I be able to schedule the transport into the Quality server?
    Nope. You consolidate your normal correction, which releases your original transports, and you import them into Integration and then immediately into Quality (you already proved they worked in Integration) BEFORE you set the phase to Test. Otherwise the original transports are NOT released. And if you set your phase to Test before importing the originals, you'll get a message that you cannot import transports into Quality during this phase.
    If you need to get transports into Quality for break fixes during the Test phase, you need to use a Test Message; these are imported during Go-live as well.
    Fourth - if the above process works, I will then change the MC to Go Live and schedule the import to Production.
    With the corrected process mentioned above, yes, you will then change the phase to Go-live and it will import everything that was brought into Quality. 
    Hope this helps.

  • How to set up multiple log4j configurations for multiple WAR packages?

    Hi all,
    The EAR that I am trying to deploy has multiple WARs.
    I need to configure a separate logger for each of these WARs.
    Could any of the experts help me out?

    Hi
    Depending on your needs, you may find that the Set level offers the best approach:
    Within a Set you have the Playbacks (at Set level, so they work for the entire set), with individual patches in the set "live" for the EWI etc. as needed.
    You need only map one set of controls for all the Playbacks, since only one set can run at a time.
    CCT
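    On the original log4j question, a common approach, assuming the application server gives each WAR its own classloader, is to bundle a separate log4j configuration in each WAR's WEB-INF/classes, each pointing at its own log file. A minimal log4j.properties sketch (paths and file names are illustrative, not from the thread):
    # war1/WEB-INF/classes/log4j.properties -- one copy per WAR, each with its own log file
    log4j.rootLogger=INFO, file
    log4j.appender.file=org.apache.log4j.RollingFileAppender
    log4j.appender.file.File=logs/war1.log
    log4j.appender.file.MaxFileSize=10MB
    log4j.appender.file.MaxBackupIndex=5
    log4j.appender.file.layout=org.apache.log4j.PatternLayout
    log4j.appender.file.layout.ConversionPattern=%d %-5p [%c] %m%n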

  • How to configure for Multiple XML

    Hi,
    We were successful in working with a single XML file as input. Now we are using multiple XML files as input for a single transaction. Following the references and suggestions in the Documaker documentation (Working with XML Files.pdf), we have changed configuration settings in FSISYS.INI and AFGJOB.JDT.
    The changes are listed below.
    The input extract file has been changed to F_Sch.DAT; the following is its content:
    COM_LOB_1111 .\INPUT\F_SCH1.xml
    COM_LOB_1111 .\INPUT\F_SCH2.xml
    COM_LOB_1111 .\INPUT\F_SCH3.xml
    Each of F_SCH1.xml, F_SCH2.xml, and F_SCH3.xml contains:
    <?xml version="1.0"?>
    <global>
         <lm>
              <COM>COM</COM>
              <LOB>LOB</LOB>
              <TRANSACTIONID>121212</TRANSACTIONID>
         </lm>     
         <Foodcard>
              <Accname>Alex</Accname>
              <Email>[email protected]</Email>
              <Accid>1234123412341234</Accid>
              <Accno>1234123412341234</Accno>
              <Accaddr1>address1</Accaddr1>
              <Accaddr2>address2</Accaddr2>
              <Accaddr3>address3</Accaddr3>
         </Foodcard>     
    </global>
    Changes to FSISYS.INI
    < ExtractKeyField >
    Key = 1,3
    SearchMask = 1,COM_LOB_1111
    < TRN_FIELDS >
    COM = 1,3,N
    LOB = 5,3,N
    PolicyNum = 9,3,N
    < DATA >
    ExtrFile = C:\FAP\DLL\Computer\INPUT\F_Sch.DAT
    Changes to AFGJOB.JDT
    ;ImportXMLFile;2;SCH=1,COM_LOB_1111 15,19
    In spite of these changes, we were not able to generate output using the three XML files as input.
    Error details:
    [04:16:29PM] Warning: Company - LOB - Transaction
    [04:16:29PM] Warning: DM17115: : The DownLoadFAP option is set to Yes, this option should be set to No for optimal performance. Check the RunMode control group, DownLoadFAP option.
    [04:16:29PM] Error: Company - LOB - Transaction
    [04:16:29PM] Error: DM12041: : FAP library error: Transaction:<>, area:<DXMLoadXMLRecs>
    code1:<48>, code2:<0>
    msg:<XML Parse Error: The 1 chars before error=<C>, the 20 chars starting at error=< >>.
    [04:16:29PM] Error: Company - LOB - Transaction
    [04:16:29PM] Error: DM12041: : FAP library error: Transaction:<>, area:<DXMLoadXMLRecs>
    code1:<48>, code2:<0>
    msg:<syntax error at line 1 column 0>.
    [04:16:29PM] Error: Company - LOB - Transaction
    [04:16:29PM] Error: DM10292: in <RULXMLExtract()>: Unable to <DXMLoadXMLRecs()>.
    [04:16:29PM] Warning: Company - LOB - Transaction
    [04:16:29PM] Warning: DM13023: in RCBSendToErrBatch(): Unable to assign the transaction to the error batch. The SentToManualBatch field is not defined in the TrnDfdFile.
    [04:16:29PM] Error: Company - LOB - Transaction
    [04:16:29PM] Error: DM10947: in NoGenTrnTransactionProc(): Unable to RULLoadXtrRecs(pRPS).
    [04:16:51PM] Error: Company - LOB - Transaction
    [04:16:51PM] Error: DM12018: in RPDoBaseFormsetRulesForward(): Unable to <WINNOGENTRNTRANSACTIONPROC>().
    [04:16:51PM] Error: Company - LOB - Transaction
    [04:16:51PM] Error: DM12066: in RPProcessOneTransaction(): Unable to RPDoBaseFormsetRulesForward(pRPS, RP_PRE_PROC_A).
    [04:16:51PM] Error: Company - LOB - Transaction
    [04:16:51PM] Error: DM12064: in RPProcessTransactions(): Unable to RPProcessOneTransaction(pRPS). Skipping the rest of the transactions for this Base. See INI group:< GenDataStopOn > option:TransactionErrors.
    [04:16:51PM] Error: Company - LOB - Transaction
    [04:16:51PM] Error: DM12004: in RPProcessOneBase(): Unable to RPProcessTransactions(pRPS).
    [04:16:51PM] Error: Company - LOB - Transaction
    [04:16:51PM] Error: DM12001: in RPProcessBases(): Unable to RPProcessOneBase(pRPS). Skipping the rest of the Bases for this batch run. See INI group:< GenDataStopOn > option:BaseErrors.
    [04:16:51PM] Error: Company - LOB - Transaction
    [04:16:51PM] Error: DM12127: in RPRun(): Unable to RPProcessBases(pRPS).
    [04:16:51PM] Error: An error occurred during processing.
    Kindly let us know if you have a solution for this issue.
    Regards,
    Balaji R.

    It looks like it is unable to load your XML files. Depending on how your configuration is laid out, you may want to list full path names in your extract file - we use the same technique in 11.5, which I imagine isn't terribly different in this respect from 12.0. Our configuration is such that we run GenData at one level, read an extract file in the deflib folder, and keep the input files in an input folder, so each entry in the extract file is C:\fap\mstrres\sampco\input\input1.xml.
    Not sure if that is analogous to your setup or not, but based on the errors, either it can't find the file (a pathing issue) or the file contents are invalid.
    Tony
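    Applying that suggestion to the extract file from the question, each entry would carry a full path rather than a relative one, something like the following (the directory is an assumption based on the ExtrFile setting above; note the offset/length in the ImportXMLFile rule would then need to match the new position and length of the path):
    COM_LOB_1111 C:\FAP\DLL\Computer\INPUT\F_SCH1.xml
    COM_LOB_1111 C:\FAP\DLL\Computer\INPUT\F_SCH2.xml
    COM_LOB_1111 C:\FAP\DLL\Computer\INPUT\F_SCH3.xml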

  • Disk and tempdb configuration for multiple Instances in one SQL cluster

    Hi Everyone ,
    I am planning to build a SQL cluster on blade servers. Two blades are allocated for SQL, and I plan to cluster those two blades. It will be Windows 2012 R2 and SQL 2012/2014. Initially the plan was to put most of the databases on one SQL instance, but due to other requirements I will be installing more SQL instances on the same cluster.
    For now everything will be on RAID 10, but I do not yet have details about how the storage will be configured.
    256 GB RAM on each blade server.
    1) What are the disk configuration recommendations for data and log files as each instance gets added?
    2) What are the disk configuration recommendations for tempdb files as each instance gets added?
    3) Is putting each instance's tempdb files on the same drive good practice?
    Are there any other best practices to follow for this setup?
    Thank you ....
    Please Mark As Answer if it is helpful. \\Aim To Inspire Rather to Teach A.Shah

    1) What are the disk configuration recommendations for data and log files as each instance gets added?
    Keep the SQL instances on different drives. Read the link below for the RAID levels; you can configure them according to your usage:
    http://technet.microsoft.com/en-us/library/ms190764%28v=sql.105%29.aspx
    Storage best practices:
    http://technet.microsoft.com/en-us/library/cc966534.aspx
    2) What are the disk configuration recommendations for tempdb files as each instance gets added?
    It is good to have a separate drive for tempdb for each instance, with correct allocation and configuration. Use RAID 5 for data if you must, but keep a spare drive around for it. RAID 10 everywhere is best; if that is too expensive, then RAID 1 or 10 for logs and RAID 5 for data (and RAID 10 for tempdb). RAID 5 is terrible for logs because it has a high write overhead, and transaction logs are write-heavy, not read-heavy.
    3) Is putting each instance's tempdb files on the same drive good practice?
    Tempdb configuration depends entirely on your server and tempdb usage. Make sure all tempdb data files are equally sized. As a general rule, if the number of logical processors is 8 or fewer, use the same number of data files as logical processors. If the number of logical processors is greater than 8, use 8 data files; then, if contention continues, increase the number of data files in multiples of 4 (up to the number of logical processors) until contention is reduced to acceptable levels, or make changes to the workload/code.
    http://www.confio.com/logicalread/sql-server-tempdb-best-practices-initial-sizing-w01/#.U7SStvFBr2k
    http://msdn.microsoft.com/en-us/library/ms175527(v=sql.105).aspx
    http://www.sqlskills.com/BLOGS/PAUL/post/A-SQL-Server-DBA-myth-a-day-(1230)-tempdb-should-always-have-one-data-file-per-processor-core.aspx
    Raju Rasagounder Sr MSSQL DBA
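    To make the file-count rule above concrete: a box with 8 logical processors would end up with 8 equally sized tempdb data files. A minimal T-SQL sketch for adding one such file (the drive letter, file name, and sizes are assumptions):
    -- Add an equally sized tempdb data file; repeat until the target file count is reached
    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = 'T:\TempDB\tempdev2.ndf',
              SIZE = 1024MB,
              FILEGROWTH = 256MB);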

  • BPC Client tool configuration for multiple appset selection

    Hi Expert,
    Is there any way to configure the BPC client to select multiple applications simultaneously, instead of closing one application to select another - in other words, multiple BPC sessions at a time, like SAP GUI sessions?
    Thanks
    Venkata

    Hi,
    I guess you are talking about BPC Excel. You can't have different sessions in BPC. If you open report1 from application1 and then want to open report2 from application2, you need to change the CV. Once report2 has been opened in application2, the CV of report1 will still show application1; however, if you refresh report1, it will access application2. The CV of the last opened report holds for all previously opened reports.
    Having said this, there is a workaround for the above problem. When you open report1 from application1, go to workbook options and click "Save the active session CV with this workbook". Then you can change your CV to application2 to open report2, and do the same thing for report2 as well. With this, both reports will work in their own region. However, you won't be able to change the CV; even if you do change it, the report will not be refreshed based on the new CV. It will still work with the saved CV.
    Hope this helps.

  • Dynamic configuration for multiple files

    Hi All,
    I am new to PI. I have a proxy-to-file scenario in which 3 files are to be generated for the same receiver, depending on the value in a field (site ID). The site ID can only have 3 different values. All 3 files are to have dynamically defined names.
    How can I configure this scenario? Is there any way in which all 3 files can have dynamically defined names? Please help.
    Regards
    Vipul

    If you have only one record (one site ID) in the proxy at a time, then you can write a UDF for the dynamic file name:
    public String SetFileNameDyn(String siteID, Container container) throws StreamTransformationException {
        // Read the dynamic configuration of the current message
        DynamicConfiguration conf = (DynamicConfiguration) container
                .getTransformationParameters()
                .get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);
        // Key addressing the file adapter's FileName attribute
        DynamicConfigurationKey key = DynamicConfigurationKey.create("http://sap.com/xi/XI/System/File", "FileName");
        String myFileName = "";
        if (siteID.equals("firstValue")) {
            myFileName = "FirstFile1";
        } else if (siteID.equals("secondValue")) {
            myFileName = "SecondFile2";
        } else if (siteID.equals("thirdValue")) {
            myFileName = "ThirdFile3";
        }
        // Set the dynamic file name on the message header
        conf.put(key, myFileName);
        return myFileName;
    }
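    Note that a UDF like this only takes effect if the receiver file adapter channel has Adapter-Specific Message Attributes enabled with the File Name attribute checked; otherwise the dynamic header is ignored. The exact option labels may vary by PI release.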

  • Automating custom software deployment and configuration for multiple nodes

    Hello everyone.
    Essentially, the question I'd like to ask is related to the automation of software package deployments on Solaris 10.
    Specifically, I have a set of software components in tar files that run as daemon processes after being extracted and configured in the host environment. Pretty much like any server-side software package out there, I need to ensure that a list of prerequisites is met before extracting and running the software. For example:
    * Checking that certain users exist, and that they are associated with one or more user groups. If not, then create them and their group associations.
    * Checking that target application folders exist and if not, then create them with pre-configured path values defined when the package was assembled.
    * Checking that such folders have the appropriate access control level and ownership for a certain user. If not, then set them.
    * Checking that a set of environment variables are defined in /etc/profile, pointed to predefined path locations, added to the general $PATH environment variable, and finally exported into the user's environment. Other files include /etc/services and /etc/system.
    Obviously, doing this for many boxes (the goal in question) by hand would be slow and error-prone.
    I believe a better alternative is to somehow automate this process. So far I have considered the following options and discarded them for one reason or another:
    1. Traditional shell scripts. I've only troubleshot these before, and I don't really have much experience writing them. These would be my last resort.
    2. Python scripts using the pexpect library for analyzing system command output. This was my initial choice, since the target Solaris environments have it installed. However, I want to make sure that I'm not reinventing the wheel again :P.
    3. Ant or Gradle scripts. They may be an option since the boxes also have Java 1.5 enabled, and the fileset abstractions can be very useful. However, they may fall short when dealing with user and folder permissions checking/setting.
    It seems obvious to me that I'm not the first person in this situation, but I don't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this.
    I thank you for your time and help.

    Configuration Management is a big topic today with a few noteworthy alternatives for Solaris:
    - CFEngine (http://www.cfengine.org)
    - Chef (http://wiki.opscode.com)
    - Puppet (http://www.puppetlabs.com)

  • How to configure for multiple mixed standbys

    Hi,
    I currently have a primary database with 2 physical standbys set up. It is my intention to convert one of the standbys to a logical standby.
    I am following Oracle Note 738643.1 and Chapter 4 of the 11gR2 Data Guard Concepts and Administration manual.
    However, I am a bit confused as to what the log_archive_dest_1 setting should be on my primary and on each of my standbys when my setup includes both a physical and a logical standby.
    On my Primary
    ==========
    log_archive_dest_1=' location=USE_DB_RECOVERY_FILE_DEST'
    log_archive_dest_2= 'service=STANDBYL async valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=STANDBYL'
    If I ever plan for this primary to become a logical standby (in a switchover etc.), then I also need to add, on the primary side:
    log_archive_dest_3= 'location=USE_DB_RECOVERY_FILE_DEST valid_for=(STANDBY_LOGFILES,STANDBY_ROLE) db_unique_name=PRIMARYP'
    On my first standby (currently physical, and it will remain so)
    ============
    log_archive_dest_1=' location=USE_DB_RECOVERY_FILE_DEST'
    log_archive_dest_2 ='service=PRIMARYP async valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=PRIMARYP'
    On my second standby (currently physical, but which I wish to convert to logical)
    ==============
    log_archive_dest_1= 'location=USE_DB_RECOVERY_FILE_DEST'
    log_archive_dest_2 ='service=PRIMARYP async valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=PRIMARYP'
    To allow this standby to be converted to a logical standby, I also need:
    log_archive_dest_3='location=USE_DB_RECOVERY_FILE_DEST valid_for=(STANDBY_LOGFILES,STANDBY_ROLE) db_unique_name=STANDBYL'
    There appears to be some confusion in the Oracle literature as to what log_archive_dest_1 should be on the primary to facilitate both a physical and a logical standby. For a logical standby I have been quoted the following setting (I think for the primary, though I am not sure):
    log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARYP'
    It seems that my current settings using USE_DB_RECOVERY_FILE_DEST assume the default VALID_FOR=(ALL_LOGFILES,ALL_ROLES), which is fine for physical standbys (as there is no need to separate online redo logs from standby redo logs); however, this setting is not fine for logical standbys. Instead, for logical standbys I have to use the line above, where VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES).
    I am just not precisely sure where exactly in my setup I need to make this change!
    Q1. Is the proposed 'location=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARYP' to be set as log_archive_dest_1 on my primary, and/or as log_archive_dest_1 or 3 on the standby I wish to convert to logical?
    Q2. If I am meant to change log_archive_dest_1 on my primary, how do I ensure the setting is suitable for both physical and logical standbys?
    Any clarity greatly appreciated.
    Jim

    I have never done this, but since you posted 12 hours ago and have no hits, I will add the link to the white paper on it:
    http://www.oracle.com/us/products/database/maa10gr2multiplestandbybp-1-131937.pdf
    Best Regards
    mseberg
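    For what it's worth, combining the settings already quoted in the question, the primary would carry both a local destination for its online logs and one for standby logs (used only after a role change). A sketch, reusing the names from the question rather than anything verified against the white paper:
    log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIMARYP'
    log_archive_dest_3='location=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PRIMARYP'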

  • How to Configure Cisco ASA 5512 for multiple public IP interfaces

    Hi
    I have a new ASA 5512 that I would like to configure for multiple public IP support. My problem may be basic, but I am an occasional router admin and don't touch this stuff enough to retain everything I have learned.
    Here is my concept. We have a very basic network setup using three different ISPs, currently running with cheap routers for internet access. We use these networks to open up access for Sales to demo different products that use a lot of bandwidth (which is why we have three).
    I wanted to use the 5512 to consolidate the ISPs so we are using one router to manage the connections. I have installed an add-on license that allows multiple outside interfaces, along with a number of other features.
    Outside Networks (I've changed the IPs for security purposes)
    Outside1 E0/0: 74.55.55.210 255.255.255.240, gateway 74.55.55.222
    Outside2 E0/2: 50.241.134.220 255.255.255.248, gateway 50.241.134.222
    Inside1 E0/1: 192.168.255.1 255.255.248.0
    Inside2 E0/3: 172.16.255.1 255.255.248.0
    My goal is to have Inside1 route all internet traffic via Outside1, and Inside2 via Outside2. The problem is I can't seem to do this: I can get Inside1 to use Outside1, but Inside2 uses Outside1 as well.
    I tried adding static routes on Outside2 to have all 172.16.248.0/21 traffic use gateway 50.241.134.222, but that doesn't seem to work.
    I can post my config as needed. I am not well versed in the Cisco CLI; I've been using the ASDM 7.1 app. My ASA 5512 is at 9.1.
    Thanks in advance for the suggestions/help

    I have been away for a while and am just getting caught up on some posts, so my apologies for a delayed response.
    I find the earlier response very puzzling. It begins by proclaiming that to achieve the objective we must use Policy Based Routing, but then the suggested configuration contains no PBR. What it gives us is two OSPF processes, one for each of the public address ranges, with a strange distribute list that uses a route map. I am not clear what exactly this is supposed to accomplish, and I do not see how it contributes to having one group of users use one specific ISP and the other group use the other ISP.
    To the original poster:
    It seems to me that you have chosen the wrong device to implement the edge function of your network. The ASA is a good firewall and it does some routing things, but fundamentally it is not a router. To achieve what you want, where one group of users will use a specified ISP and the other group will use the other ISP, you really need a router. You want to control outbound traffic based on the source of the traffic, and that is a classic situation where PBR is the ideal solution. But the ASA does not do PBR.
    HTH
    Rick
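    For reference, on a device that does support PBR (an IOS router, as Rick suggests, not the ASA), the source-based split described in the question would look roughly like this, reusing the addressing from the question (the interface name is an assumption):
    ! Match traffic sourced from the Inside2 network (172.16.248.0/21)
    access-list 101 permit ip 172.16.248.0 0.0.7.255 any
    !
    route-map TO-ISP2 permit 10
     match ip address 101
     set ip next-hop 50.241.134.222
    !
    ! Apply the policy on the router interface facing Inside2
    interface GigabitEthernet0/3
     ip policy route-map TO-ISP2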

  • Can two ChaRM setups co-exist for managing a single landscape?

    Hi Guys
    We are currently migrating and upgrading an existing Solution Manager 7.01 to 7.1.
    Since we are performing tests on this sandbox system, we would like to replicate the ChaRM configuration and execute full integration tests against the managed systems.
    There is the actual productive Solution Manager (SSM), which manages a BW landscape and an ERP landscape.
    If we configure ChaRM on this sandbox Solution Manager upgraded to 7.1 (SMT) to manage the same BW and ERP landscapes, would the configuration on the original systems be affected?
    In other words, can we have two ChaRM clients (test and productive) managing the BW and ERP landscapes without affecting each other's configuration?
    On the sandbox ChaRM, we only want to test the configuration and run a change request cycle purely for testing.
    Any comments or suggestions would be much appreciated.
    Regards!

    Hi Martin,
    Your development Solution Manager might not be connected to the ERP DEV, QA, and PRD systems, so you can just create 3 clients within Solution Manager to do this, or take an ERP sandbox and create 3 clients within it having the roles of DEV, QA, and PRD, as in the blogs below:
    How To - Configure ChaRM on Single Managed System with multiple client for PoC(Part1)
    How To - Configure ChaRM on Single Managed System with multiple client for PoC(Part2)
    Hope the above helps.
    Thanks
    Prakhar

  • ChaRM configuration issue for 2 Clients...

    Dear Friends,
    I have configured ChaRM in a Solution Manager 7.0 EhP1 system. It is working fine without any issues. But now I have come across a new requirement and am having trouble configuring the required scenario.
    Currently there is only a single client in the DEV, QA, and PRD systems for which ChaRM is configured. Now I want to configure ChaRM for 2 clients in each system.
    For example,
    <ZCHARM_1> -- SOL:001 -- > DEV : 100 --> QAS : 200 --> PRD1 : 300
    <ZCHARM_2> -- SOL:001 -- > DEV : 500 --> QAS : 600 --> PRD2 : 700
    (Please note, ZCHARM_X is a project and SOL-001 is a Solution Manager ChaRM client).
    First of all, I would like to understand: is this possible? Is it supported by SAP?
    If yes, how can we configure a client-specific route for the SAP standard transport layer (SAP)? What are the other concerns with using the above system landscape in ChaRM?
    Please note, we do not want to use separate hardware for either the DEV or the QA system.
    Can someone please guide me in this case?
    Rajesh Narkhede

    Rajesh,
    I am not aware whether you have already searched on this issue, but you may want to take a look at this thread:
    Change Request Management for multiple production clients
    This configuration should work fine because, in essence, it is a 3-system landscape, so there should not be major issues along the way.
    From the TMS standpoint, you need to define a transport layer for each development client.
    You can create a single project with the logical components, or you can create multiple projects, each with its own set of systems. Remember to do the TMS configuration right and you should be set.
    Cheers!
    Edited by: Banerjee Vivek on Mar 29, 2011 1:02 PM
