Question regarding Transport of System-Dependent Objects after Refresh

Hi, we are doing a system refresh of our non-productive BW systems (DEV and QA).  Once this is done, all of our landscapes will be "harmonized".  This also means, for example, that all DTPs will have the same technical IDs.  
In the new 7.x releases, when you transport system-dependent objects they are assigned a different technical ID than in the source system.
What will happen in our case if we transport one of our DTPs from BW DEV into QA?
Will we end up with two DTPs in QA, or will the system know which one to replace (and how)? 
Thanks!

Hi,
After your transports are imported, if the object is new it will be added to the target system; if the object already exists in the target system, it will be replaced. I.e. you will not have two objects, just one.
Regards
Vishal.

Similar Messages

  • How to transport source system dependent objects

    Hi everybody,
    I've read several questions about how to transport source-system-dependent objects between systems, and I still don't get the whole picture.
    Say I have BWDEV attached to R3DEV and
               BWQA attached to R3QA
    and I want to transport transfer rules from BWDEV to BWQA.
    1) Should I create, by hand, the source system R3QA in BWQA and replicate its DataSources in the first place?
    2) Or does it have to be transported from BWDEV and "somehow" converted in BWQA?
    3) Where does the source system name mapping have to be done: in BWDEV, BWQA, or both?
    4) In order to do the mapping of names in BWQA, I need to create the corresponding R3QA; otherwise I don't have the entry under TargSrceSy when executing transaction RSLGMP.
    I would really appreciate a step-by-step process for the above scenario.
    Thanks a lot

    Hi Juan,
    Let me try to shed some light. Before you can transport dependent objects on the BW side, you have to make sure that the necessary dependencies on the R/3 side have already been transported. For example, if you have a DataSource in R/3, you have to transport it from dev to QA first (before you transport anything in BW). Then, in the BW QA box, you have to replicate the DataSource before you can transport any BW objects into BW QA. Finally, before transporting anything to BW QA and after replicating the DataSource there, go to RSA1 --> Tools menu --> "Assignment of Source System to Source System ID" and make sure that R3DEV is mapped to R3QA. Only after all the said checks have been done can you transport objects into BWQA.
    Here are my answers to your questions:
    Q1: This depends on your DataSource. Some DataSources can be transported; some can be created in the QA box itself. But in both cases, you have to make sure the DataSource exists in R3QA and then replicate it in BWQA.
    Q2: The conversion follows the setup in "Assignment of Source System to Source System ID" in RSA1.
    Q3: The "Assignment of Source System to Source System ID" in RSA1 has to be done in the BWQA.
    Q4: Yes, you have to set up the R3QA source system in BWQA.
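    The order of operations described above, in short:

```text
1. Transport the DataSource       R3DEV -> R3QA
2. Replicate the DataSource       in BWQA (from R3QA)
3. Maintain the mapping (RSA1)    R3DEV mapped to R3QA, in BWQA
4. Transport the BW objects       BWDEV -> BWQA
```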
    Hope this helps.

  • Virtual Transport Path to move BI source system dependent objects.

    Currently, our BI test system (which contains all objects transported from BI Dev) is connected to ECC client 100. We want to change the BI connection to a different client, i.e. ECC 115 (this client contains better test data). We have removed the BI connection from client 100 and connected to client 115. But all of our source-system-dependent objects in BI still point to ECC client 100. To change these objects to point to the correct client 115, we are executing the following steps:
    1 Remove Source System ECC Client 100 from BI2
    2 Connect New Source System ECC Client 115 to BI2
    3 Ensure that New Source System Connection to Client 115 have been set up
    4 Create a virtual transport path within BI2 for re-import into BI2 queue for the “ ? ” Transport Package
    5 Open BI2 to assign user authorization in order to be able to create a transport in BI2
    6 Copy Source System dependent rules & create Transport
    7 Change the Source System Assignments on the target client
    8 In SE09 - check that only the required source system has been copied
    9 Move the transport in through the import queue
    We are stuck at step 4. The Basis team says it is not possible to create a transport path from the BI2 system to the virtual system (in the same BI system) and move it. But I am pretty sure this can be done, as I did it earlier in a previous project. This is something related to Basis, so I can't help them.
    Do you have any idea, how this can be done?
    Any help will be highly appreciated.

    Hi
    I do not understand a few things:
    - Why did you remove the source system?
    - What do you mean by BI2? It is the same QA system as before... so why do you need to create a virtual path for transport?
    The step you should have followed is to use transaction BDLS to rename ECC100 to ECC115.
    The source-dependent objects would then have been renamed automatically.
    In any case, you can now start over from your dev system and collect the source-system-dependent objects.
    Transport the request into QA, paying attention to change the conversion table to target ECC115 and not ECC100...
    Hope it helps
    PYG

  • Changing Original System and Transport Target System for Objects

    Hi Gurus,
             I need to do a mass transport of all the InfoObjects to a newly defined system. Since these objects have different original systems, it is giving me an error. Also, for other objects the transport target system is QAS, but we want to change it to NBW (a new system to which we want to send transports). I am able to change the package of the InfoObjects. Is there a mechanism to change the original system and the transport target system? Please let me know ASAP.
    Regards,
    Harpal
    Message was edited by:
            Harpalsinh Gohil

    If you simply edit your original post, it will go to the top of the list. There's no need to post duplicates.
    Posts with a subject of "Urgent" will be filtered out by a lot of responders.
    Why not close this thread and bump the other one?
    Rob

  • System Tablespace objects after upgrade (mdsys, outln, ctxsys, etc)

    I have objects in my SYSTEM tablespace under the listed owners. I believe by default with a new 10g install their home is SYSAUX. Is that correct? Oracle changes the default, but doesn't bother to move the objects during the upgrade?!

    The issue was I didn't know what should be in the SYSTEM tablespace, what should be in the SYSAUX tablespace, and what the default tablespace should be for all these IDs. I know I didn't communicate that, but I don't think I realized the full extent of the issue until I did further research, which, unfortunately, I did after posting the message.
    What I found is:
    SYS, some SYSTEM, and OUTLN objects can reside in the SYSTEM tablespace;
    all others should now be in SYSAUX, and SYSAUX should be the default tablespace for them.
    SYS and OUTLN use SYSTEM as their default tablespace; however, I haven't definitively identified the default tablespace for the SYSTEM db userid.
    Facilities owned by the SYSTEM db userid seem to have been diminished in stature, or maybe just determined to be detrimental to the SYSTEM tablespace. I have seen notes during the upgrade process from 8i to 10g that the SYSTEM userid should not use SYSTEM as its default tablespace, but exactly what should be used is not clearly defined.
    I support Financials E-BS, and we have migrated from 10.7 on 7.x to 11.5.10.2 on 10g. I am afraid that between patching and addressing user expectations, I have not caught all the nuances and adjustments that have coincided with the database upgrades. We have reached a relative level of stability (the OCT and JAN CPUs still need to be applied), so I am looking at performance, database standards (through OEM), and tuning.
    Thanks for your response.

  • Issue regarding Transport Management System

    Hello,
    This is with regard to adding a virtual system to the current two-system landscape (DEV and PRD) and automating transports.
    Kindly let me know if it is possible to add a virtual system between DEV and PRD
    and create a route like DEV -> VDE -> PRD, and at the same time automate transports into the PRD system by scheduling report RSTMSIMP or some other way.
    Kindly suggest.
    Thanks,
    Sumit

    Hi Sharma,
    Once you have created the virtual system, go to transport routes and configuration --> change --> standard configuration --> three-system group. Specify the SIDs of the three systems, save, and then distribute and activate the configuration.
    Once you have done the above, add VDE and PRD to the landscape:
    1. Log in to the PRD system, client 000, with the DDIC user.
    2. STMS --> it prompts to create a domain controller --> select "other configuration" and add to the existing domain controller (i.e. DEV, because DEV is the domain controller) --> specify the system details for the PRD system.
    3. Once you have added and saved, log in to the DEV system, client 000, with DDIC --> STMS --> system overview --> select the PRD system --> SAP System --> Approve, and save.
    4. Follow the same steps for VDE as well.
    5. A communication user will be created between the domain controller and the remaining systems in the landscape, i.e. TMSADM.
    6. The entire configuration is stored in the DOMAIN.CFG file in the trans/bin folder.
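    As a sketch of what the resulting route might look like (the transport layer name ZDEV is an assumption; your layer name will differ):

```text
DEV --(consolidation route, transport layer ZDEV)--> VDE --(delivery route)--> PRD
```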
    Regards,
    Suraj

  • Question Regarding A Full System Restore...

    My MacBook is just over a year old and I'm noticing some odd behaviors of late. More beach balls and jumping cursors when typing emails etc. ( The cursor issue is irritating... I'll be typing and suddenly the entire message will delete, or the cursor will jump to another area within the email and I don't realize it... as I'm typing it's inserted in the middle of something I've already typed.)
    I was wondering if I should do a full system restore? This is my first Mac, so my only experience doing a system restore back to the factory settings has been with a Microsoft PC and THAT is a pain. I do regular SuperDuper and Time Machine backups on my MacBook, but if I do a system restore and THEN do a restore from my SuperDuper or Time Machine won't that just put back onto my MacBook whatever is causing the beach balls and cursor issues? I would prefer to not have to do a full system restore and then reinstall individually all of my current applications as well as reconfigure my Airport Extreme/ Express and Apple TV etc.
    Does anyone have any thoughts or suggestions?
    Thanks !
    Chuck

    Hi - there are a few things I would do first, if you haven't done them already. Firstly, check your disk permissions in Disk Utility: go to Spotlight and type in Disk Utility, click on the application, and then in the left-hand pane select your Macintosh HD. Once you've selected that, go to the bottom middle of the pane where it says Repair Disk Permissions. This can sometimes help rectify any cranky issues you have.
    You can also fix your preferences - this site tells you how along with some other useful tips on sorting out system problems:
    http://www.macattorney.com/ts.html
    Another thing to try is to run your routine maintenance scripts. Your machine will run these automatically, but only if it is on in the early hours of the morning! I've heard that under Leopard, if your machine is not on, the scripts will run as soon as they can after missing the scheduled time, but I've no evidence of this in my Console logs. Anyway, manually running them is easy; there are daily, weekly, and monthly scripts. To run them all at once:
    open the Terminal application from Spotlight
    at the command prompt type:
    sudo periodic daily weekly monthly
    press Return, and it may ask you for your system password. Type that in and press Return.
    It will take a few minutes to run these scripts.
    You may also need to clean out your caches as per the detail in the article listed above. You can get software that performs these tasks for you, e.g. MacJanitor, OnyX, MacCleanse, etc.
    And if you do have to reinstall, I'd go for the fresh install. Yes, it is a bit time consuming, but it loads and loads quicker than Windows! Make sure you back up your important data first.
    all the best!

  • Question regarding running parallel systems

    Hi, I didn't know how to state my subject correctly; anyway, here is my question.
    I am going to buy an iMac and a Mac mini, and I really want to install Windows Vista onto my Apple computers. I know you can get the Parallels "thing" so that you can run the Mac OS and Windows at the same time. So my question is: at what spec can both the Mac OS and Windows run smoothly at the SAME TIME? The Mac mini only has 2.0 GHz, so I'm really concerned, because I use a lot of Windows software, much of which will only run on Windows, and I don't want to switch back and forth between the Mac OS and Windows either. So I would really like to run both at the same time.
    please give your thoughts ^^
    thank you
    -JJ

    iMac and Mac mini; the software you are talking about is Parallels, and their competition is VMware. You also have the option of setting up your system dual-boot using Apple's Boot Camp (but it will only run one OS at a time).
    Parallels and VMware will both run well on any current model of Mac. Vista doesn't really run very well on anything, and the Microsoft end-user license agreement prohibits you from using home versions of Vista in a virtual machine.
    Once you've gotten around that and the problems with Vista and Vista's driver support, both Parallels and VMware do a decent job. Microsoft suggests that you have a minimum of 1 GB of RAM to run Vista, 2 GB if you want all the features of the Vista Aero UI. That's on top of the Mac OS X and virtual machine software requirements, so if you go the VM route you'll want 3 GB of RAM to run Vista (on the iMac; Mac minis have a max of 2 GB of RAM). The VM's 3D acceleration is OK on the iMac, but will be slower than direct-to-hardware (which you'd have under Boot Camp). I don't know if 3D hardware acceleration will work on the Mac mini (it uses an Intel GMA 950 GPU).
    Having used Vista, I'd say that at this point you'd probably be better off with XP right now, if possible.

  • Source System Dependent BI Objects

    When working with Source System Dependent Objects like
    1) DataSources (based on source system ABC)
    2) InfoPackage (based on source system ABC)
    3) DTP (source is datasource that is based on source system ABC)
    Question 1)
    Once we create the above objects in the DEV system,
    update the IMG activity in QA (source system conversion after transport),
    and transport the above objects to the QA system:
    all the above objects should be available with the new source system in QA? Is this the right way to do it?
    Question 2)
    Once we create the above objects in the DEV system,
    DO NOT update the IMG activity in QA (source system conversion after transport),
    and transport the above objects to the QA system:
    some of the above objects are not available with the new source system?
    Workaround: go to RSA1 (top menu --> Tools); there are a couple of options where we can 1) enter source and target source system info AND 2) do a check, activate, and replicate source systems.
    After this some more objects are visible, but not all? Is there any other way I can complete the conversion?
    Question 3) I tried the below in another landscape:
    Once we created the above objects in the DEV system,
    DID NOT update the IMG activity in QA (source system conversion after transport),
    and transported the above objects to the QA system.
    FYI: all objects with the source system in QA are available. I am wondering how the source system conversion took place?
    Question 4) When we enter the source systems for DEV and QA in the QA system, do we need to enter even PC file source systems?
    Example:
    1) ABC in DEV = BCD in QA
    2) PC_FILE in DEV = PC_FILE in QA
    Thanks for your help.

    Below are the ways to do the mapping in the target system for source system conversion:
    1) RSA1 --> Tools --> source system mapping, OR
    2) RSA1 --> transport connection --> 'Conversion of log. system' (yellow box with 'x' and 'conversion' icon), OR
    3) SPRO --> Business Information Warehouse --> transport settings --> change source system name after transport (transaction RSLGMP), OR
    4) maintain table RSLOGSYSMAP (SM30).
    Q1:
    Yup.
    Q2:
    If no maintenance is done, the system will transport with the same source system name, and you have to convert everything manually... a cumbersome process.
    Q4:
    Normally for flat files we use the same name; to avoid any warnings, we just maintain the 'conversion' with the same name.
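    As a sketch, using the example names from the question above (illustrative values, not fixed names), the entries maintained in RSLOGSYSMAP in the QA system would pair each original logical source system with its local equivalent:

```text
Original source system    Target source system
ABC                       BCD
PC_FILE                   PC_FILE
```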

  • Question about Best Practices - Redwood Landscape/Object Naming Conventions

    Having reviewed documentation and posts, I find that there is not much information available regarding best practices for the Redwood Scheduler in a SAP environment. We are running the free version.
    1) The job scheduling for SAP reference book (SAP Press) recommends multiple Redwood installations and using export/import to move jobs and other Redwood objects from, say, DEV -> QAS -> PROD. Presentations from the help.sap.com web site show the Redwood Scheduler linked to Solution Manager and handling job submissions for DEV-QAS-PROD. "Point and shoot" (just be careful where you aim!) functionality is described as an advantage of the product. There is a SAP note (#895253) on making Redwood highly available. I am open to comments, inputs, and suggestions on this issue based on SAP client experiences.
    2) Related to 1), I have not seen much documentation on Redwood object naming conventions. I am interested in hearing how SAP clients have dealt with Redwood object naming (i.e. applications, job streams, scripts, events, locks). To date, I have seen a presentation where customer objects are named starting with Z_. I like to include the object type in the name (e.g. EVT - event, CHN - job chain, SCR - script, LCK - lock), keeping in mind the character length limitation of 30 characters. I also have an associated issue with event naming, given that we have 4 environments (DEV, QA, Staging, PROD). Assuming that we are not about to have one installation per environment, we need to include the environment in the event name. The downside here is that we lose transportability for the job stream: we need to modify the job chain to wait for a different event name when running in a different environment. Comments?

    Hi Paul,
    As suggested in the book 'Job Scheduling for SAP' from SAP Press, it is better to have multiple instances of Cronacle (at least 2: one for development and quality, and a separate one for production; this avoids confusion).
    Regarding transporting/replicating the object definitions: it is really easy to import and export objects like events, job chains, scripts, locks, etc. It is also easy and quick to create them afresh in each system; only complicated job chains can be time consuming to create.
    In normal cases, testing for background jobs mostly happens only in the SAP quality instance, with final scheduling in production. So it is very much possible to just export the verified script/job chain from the Cronacle quality instance and import it into the Cronacle production instance (use of the Cronacle shell is really recommended for fast processing).
    Regarding OSS note 895253: yes, it is highly recommended to keep your central repository, processing server, and licensing information in a highly available clustered environment. This is very much required, as Redwood Cronacle acts as the central job scheduler in your SAP landscape (with the OEM version).
    As you have confirmed, you are using OEM and hence you have only one process server.
    Regarding naming conventions, it is recommended to create a centrally accessible naming convention document and then follow it. For example, in my company we use a naming convention for jobs like Z_AAU_MM_ZCHGSTA2_AU01_LSV, where A is for the APAC region, AU is for Australia (country), MM is for Materials Management, and ZCHGSTA2_AU01_LSV is free text as provided by the batch job requester.
    For other Redwood Cronacle-specific objects you can also derive naming conventions based on SAP instances; for example, if you want all related scripts/job chains to be stored in one application, its name can be APPL_<logical name of the instance>.
    So in a nutshell, this is highly recommended.
    Also, the integration of SAP Solution Manager with Redwood is to receive monitoring and alerting data and to pass Redwood Cronacle information to SAP Solution Manager to create a single point of control. You can find information on the purpose of the XAL and XMW interfaces in the Cronacle help (F1).
    Hope this answers your queries. Please write if you need some more information / help in this regard.
    Best regards,
    Vithal

  • Query regarding transport request

    Hey guys,
    I have a question regarding transport requests.
    If only the report source code is included in the transport request, will the text elements of the program not be transported to the next box?
    When you transport the main program or the function pool, does it follow that all the changes in the FMs under the function pool will be transported to the next box?
    Thanks a lot!
    Rgds,
    Mark

    Hello,
    If you create a transport request with just the report, the text elements will not be transported:
    LIMU REPS <Program>
    For the text elements, you need the following transport entry:
    LIMU REPT <Program>
    If you want to transport both, you can use the following entry:
    R3TR PROG <Program>
    If you have the following entry, all the components of the function group will be transported:
    R3TR FUGR <Function Group>
    Thanks,
    Venu

  • REGARDING TRANSPORTATION OF OBJECTS

    hello friends,
    I need some info regarding transports.
    First I transported the DataSources from the source system (dev) to the source system (quality), and then I replicated them into the BW (quality) system. After that I maintained the source system mappings in the BW quality system, and then I started transporting the objects independently in the following manner.
    Transport connections:
    Collection mode: (start manual collection)
    I collected all the InfoObjects, clicked on "gather dependent objects" (the option available for the manual collection mode), set a package, and transported. In the same manner I created transport requests separately, under the same package, for the InfoSources, data targets, and update rules; then I released the requests, and in the quality system we successfully imported the objects (InfoObjects, InfoSources, etc.). But one InfoObject, for example 'X', which is a dependent attribute of another InfoObject 'Y', is inactive in the quality (BW) system, although X is active in BW (dev); when I imported it, the object exists in BW (quality) but in inactive mode. Because of this, all the dependent objects, like the data targets, are not active. I released the InfoObject 'X' (which is active in BW dev) again separately, by creating a separate request, and imported it, but it is still inactive. I have tried several times and failed. Should I install that InfoObject ('X') from Business Content in BW quality? If I do so, does it affect all the imported objects? Please let me know.
    thanks & regards,
    harish

    Dear Bwer, Anand Raj, and friends,
    when I am trying to import the '0CRM_F_CUST' (X) object, I am getting the following error:
      Start of the after-import method for object type R3TR IOBJ (Activation Mode)
      Error/warning in dict. activator, detailed log    > Detail
      Value table /BI0/SCRM_F_CUST is not active
      Search help /BI0/OCRM_F_CUST is not active or does not have parameters
      Enhancement category for table missing
      Enhancement category for table missing
      No active nametab exists for /BI0/SCRM_F_CUST
      Termination due to inconsistencies
      Table /BI0/SCRM_F_CUST (Statements could not be generated)
      Flag: 'Incorrect enhancement category' could not be updated
      Enhancement category for table missing
      Table /BI0/SCRM_F_CUST is not a database table => not suitable as selection method
      Row type /BI0/SCRM_F_CUST is not active or does not exist
      Table /BI0/SCRM_F_CUST is either not active or inconsistent
      Srch Help /BI0/OCRM_F_CUST could not be activated
      Table /BI0/SCRM_F_CUST could not be activated
      Table Type /BI0/WSCRM_F_CUST could not be activated
      View /BI0/RCRM_F_CUST could not be activated
      Domain /BI0/OCRM_F_CUST was activated (error in the dependencies)
      Data Element /BI0/OICRM_F_CUST was activated (error in the dependencies)
      Return code..............: 8
      DDIC Object TABL /BI0/SCRM_F_CUST has not been activated
      Error when activating InfoObject 0CRM_F_CUST
      Start of the after-import method for object type R3TR IOBJ (Delete Mode)
      Errors occurred during post-handling RS_AFTER_IMPORT for IOBJ L
      RS_AFTER_IMPORT belongs to package RS
      The errors affect the following components:
         BW-WHM (Warehouse Management)
      Post-import method RS_AFTER_IMPORT completed for IOBJ L, date and time: 20060620132453
    Please let me know the solution.
    regards,
    harish

  • Is there a way to create a transport to mass-delete objects from another system?

    I am facing a unique issue where I have to generate a transport that will successfully import into an external SAP system and delete over 1,000 objects (reports, data elements, UI elements, transactions, function groups, the whole lot...). All of these objects can be easily looked up because they are in the same namespace.
    Essentially, I need to remove all objects with a given namespace via transport.
    Is there any way to do this other than manually deleting all of the objects in the source system and adding them tediously to a single workbench request? This would take hours if not days to do manually, especially due to dependency issues.
    Thanks for any and all constructive feedback!

    Hi,
    I did a quick test: I created a new package (development class) with one table; one function group using the table in its global data definitions and containing one function module referring to the table in its interface; and one program calling the function module and using the table in its data definitions. I released the transport to Q.
    Then, in our second dev system, I created the same package and the TADIR entries for the three objects manually (SM30), and released a transport to that system's Q. Then I manually created a transport containing all three objects; the export log looked like this:
    The TADIR entries in the second dev system were gone after the export.
    Then I added the transport to the first development system and imported it
    - the objects were gone (including their TADIR entries), leaving the empty package.
    In retrospect, I should have deleted the empty package in the second system so it would be cleaned up by the same transport.
    If you do have access to a "clean" system, doing via ABAP the steps I did manually should not be too much of a challenge.
    I may simply have gotten lucky, and whether this works with all objects, for all kinds of dependencies, remains to be seen... though I was until now under the impression that the import itself doesn't check any dependencies at all... What kind of messages were you getting?
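    As a rough ABAP sketch of the lookup step described above (the namespace /MYNS/ and the report name are placeholders; appending the keys to a workbench request is left out), listing every repository object in a namespace from TADIR could look like this:

```abap
REPORT z_list_namespace_objects.

" Placeholder namespace: replace /MYNS/ with the real one.
" TADIR is the repository object directory; PGMID/OBJECT/OBJ_NAME
" together identify each object.
DATA lt_tadir TYPE STANDARD TABLE OF tadir.

SELECT pgmid object obj_name devclass
  FROM tadir
  INTO CORRESPONDING FIELDS OF TABLE lt_tadir
  WHERE obj_name LIKE '/MYNS/%'.

LOOP AT lt_tadir INTO DATA(ls_tadir).
  " Each of these keys would then be added to a single workbench request.
  WRITE: / ls_tadir-pgmid, ls_tadir-object, ls_tadir-obj_name.
ENDLOOP.
```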
    cheers
    Janis

  • A question regarding Management pack dependency.

    Hi All,
    I am new to SCOM, I have a question regarding management pack dependency.
    My question is: is a dependency required when new alerts are created in an unsealed MP, and the object class selected during alert creation (e.g. Windows Server 2012 Full Operating System) is in a sealed management pack?
    For example, I have a sealed Windows Server 2012 monitoring management pack.
    I have made a custom one for Windows Server 2012. So if the custom pack is not dependent on the sealed Windows Server 2012 monitoring management pack, can I not create any alerts in the custom management pack targeting the class Windows Server 2012 Full Operating System?

    Hi CyrAz,
    Thank you for the reply. If your understanding and mine are the same, then look at what happened below.
    I created an alert monitor targeting a Windows Server 2012 class in my custom management pack, which is not dependent on the Windows Server 2012 management pack. How was I able to create it successfully when the dependency is not there at all? If our understanding is the same, then an error should have been thrown while creating the monitor itself, right? But how was SCOM able to create it?
    Look at the screenshot below.
    I was able to create a monitor targeting Windows Server 2012 Full Operating System, and create an alert, in the custom management pack, which is not at all dependent on the Windows Server 2012 sealed MP.
    Look at the dependencies of the management pack: the Windows Server 2012 management pack is not there, as my custom management pack is not dependent on it.
    Then how come this is possible?

  • IView and system admin objects appear changed or not registered when transported

    Hi,
    Recently we moved a few objects (iViews, pages, and roles as an epa, and system admin objects as a config archive) and imported them manually into the Test system. But when checked in the next landscape, the iView properties appear different. E.g. in the Development system, the visibility option was unchecked for the iView, i.e. the iView should not be visible; but when transported to the Test system, the property was checked and gave an error. On reimporting, the same thing happened, and we had to correct the property manually. This is not expected behaviour; manual changes should not be needed after development changes.
    Similarly, although the config archive import showed the 2 new system objects in the respective location of the Test system, the expected functionality was not working. I checked this by deleting one of them and manually creating the same entry, and it worked fine; so I also had to delete the 2nd one and create a manual entry. The system admin objects we used pertained to System Configuration --> Content Mgmt --> Form Based Publishing --> Forms Availability --> Folder Settings.
    We also moved KMCs, and they are reflected properly with correct folder permissions.
    We don't understand why this is occurring (the iView and system admin entries appearing different or not getting registered) and have never experienced such a case in previous projects. We don't want to face this issue going forward when transporting changes to the Staging and Prod systems.
    Please assist as to what went wrong, or any probable cause and solution for same.
    Thanks,
    Janani

    Hi,
    Has anyone faced such a situation before, and does anyone know the reason for it?
    Regards,
    Janani
