Dev and Prod Environment Sync.
Hello Experts,
I would like to ask for your kind help with the following matter: how to resync the dev and production environments.
For some (inexplicable) reason, the production and development servers are out of sync. Mainly, the SLD content is at different versions; the production one is older. Oh, and there is no QA environment.
Besides that, the business systems were created manually in each server and have inconsistencies. No SLD information was transported between them; only the repository and some directory information were. Other directory information was created directly in the production environment, and I am afraid some development took place directly in the production server too.
So I have been asked to produce a plan to get all this mess into shape. So far, at a macro level, I have these tasks planned:
10.- Backup prod and dev servers.
20.- Load latest CIM model and CR content on both servers (and see what happens).
30.- Connect the SAP systems as SLD data sources to prod SLD.
40.- Redo configuration in prod to point to automatically created business systems (and retest).
50.- Cleanup of manually created business systems in the prod environment.
60.- Configure the prod SLD as a data source for the dev SLD.
70.- Redo configuration in the dev server to make it match the prod SLD.
80.- Cleanup of manually created business systems in the dev environment.
90.- Create in the dev server the repository objects that were directly created in prod.
100.- Test everything for consistency.
Please help me improve the task list/order.
-Sam.
Hi Sam
You are planning a lot of activity on a production environment. Two comments:
1. When you have your dev SLD configured as you require, with the CIM model, PI content and business systems updated etc., perhaps you should do a level of regression testing on this environment before embarking on changing the live environment.
2. Once you are happy with the dev environment, it is possible to manually export and import the SLD content from dev to live to align both systems; refer to this guide:
https://websmp104.sap-ag.de/~sapidb/011000358700000315022005E.PDF
The export is available through the Export link on the SLD Administration page.
Thanks
Damien
Similar Messages
-
How to set up DEV, TEST and PROD environment?
We have used BI Publisher Enterprise (standalone), Oracle BI Publisher 10.1.3.4.1. Our admin set up the DEV, TEST and PROD environments based on folders: there is a DEV folder, a TEST folder and a PROD folder. Developers develop reports under the DEV folder. Under the TEST and PROD folders there are many sub-folders based on the login user's role. Sometimes a report has to be assigned to multiple sub-folders under PROD, so our admin creates symbolic links on the Linux box where the BI server is located. That way, if a report is updated, there is no need to update it in every sub-folder.
The issue I have is that we are not allowed to touch any files under the TEST/PROD folders. Only the admin moves reports from the DEV folder to TEST/PROD, because the links he created might otherwise break. However, as developers, we still have permission to delete/rename/copy reports under those restricted folders. Yesterday a report under PROD was renamed by a developer, and the admin complained because the links he created no longer work. If the admin doesn't want developers to touch the reports under those folders, is there a way to remove the write permission on them? Also, do you think this is a good practice for setting up DEV, TEST and PROD environments? Any input will be greatly appreciated. -
Best method to transport changes from DEV to Prod Environment
Hi SDNers,
I have a requirement here :
I have made some changes in the DEV environment which I want to move to the production environment.
The changes mainly include structural changes, XSDs, import maps, remote systems, export maps, validations, etc. Please note that I DO NOT WANT to transport the data from DEV to the Prod environment.
I can see 2 options available :
1. Export/Import repository schema - but as far as I know, the import and export maps will not be transported automatically through Export/Import Schema.
2. Archive/Unarchive without data.
Here are my questions :
1. Is the 2nd option feasible given the changes I need to transport? If yes, is there any recommended guide available from SAP suggesting the 2nd option as a transport mechanism? (Please send me the links.)
2. Will it affect the data in Production in any way?
3. Is there anything Archiving/Unarchiving without data cannot handle?
4. Will it impact any other normal running of the Production environment?
Please respond to my queries.
Thanks and Regards
Nitin Jain
Hi Nitin,
If you use the Archive/Unarchive option without data then you cannot preserve the PROD environment, since you would need to unarchive the entire (empty) repository and then move the PROD data into it.
1. Is the 2nd option feasible given the changes I need to transport? If yes, is there any recommended guide available from SAP suggesting the 2nd option as a transport mechanism? (Please send me the links.)
Ans. I don't think the 2nd option is feasible, as I mentioned earlier. This option should be used when we need the entire repository without data, which is not the case you are looking for.
2. Will it affect the data in Production in any way?
Ans. If you are using the 2nd option then this question does not apply, because when you unarchive you get a new repository, so this won't affect the existing repository at all.
3. Is there anything Archiving/Unarchiving without data cannot handle?
Ans. Workflows.
4. Will it impact any other normal running of the Production environment?
Ans. If you are using the 1st option then you need to unload the PROD repository first and then import the changes. For option 2 this does not apply.
Regards,
Jitesh Talreja -
Consistency between dev and prod systems
Hi All,
I would like to check the consistency between the Dev and Prod systems for the whole data model and all other existing objects. I would like to do this sanity check to see whether the two systems are in sync. Are there any tools or transaction codes within the SAP framework which can guide me in this direction?
Kind Regards,
Surya Tamada.
Hi Surya,
First build the whole data flow:
1. Create the InfoArea, InfoObjects and master data, InfoProviders, and the InfoObject catalog under the InfoArea, plus metadata for characteristics and key figures.
2. Create DSOs, InfoCubes, InfoSets, MultiProviders and views in the development system, along with DataSources for the BW system, flat-file DataSources and transformations.
Then get the data from the SE11 tables as per your requirement, e.g. RSDVCHA, RSDBCHATR, RSDCUBEIOBJ, RSDICMULTIIOBJ, etc., and build views on them as needed.
Then build flat-file DataSources and transformations.
Download all the data to flat files on your PC, and do the same from the other system.
Make a flow from PC file DataSource -> DSO -> InfoCube and build queries for the different objects; load the downloaded files into them and then run the queries.
Example: a query to compare cubes would take the cube name and system names as input and compare the objects.
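As a rough illustration of Jaya's idea (extract the relevant metadata tables from both systems to flat files, then diff them), here is a minimal sketch; the file names and key columns are hypothetical, not from the post:

```python
import csv

def load_rows(path, key_cols):
    # Index the rows of a flat-file table extract by their key columns.
    with open(path, newline="", encoding="utf-8") as f:
        return {tuple(r[c] for c in key_cols): r for r in csv.DictReader(f)}

def compare_extracts(dev_path, prod_path, key_cols):
    # Return keys only in dev, keys only in prod, and keys whose rows differ.
    dev = load_rows(dev_path, key_cols)
    prod = load_rows(prod_path, key_cols)
    only_dev = dev.keys() - prod.keys()
    only_prod = prod.keys() - dev.keys()
    changed = {k for k in dev.keys() & prod.keys() if dev[k] != prod[k]}
    return only_dev, only_prod, changed
```

For an RSDCUBEIOBJ extract, for instance, the key could be the (InfoCube, InfoObject) columns; every key or row mismatch the function reports is an object to investigate.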
Regards,
Jaya -
SYSTEM LANDSCAPE - DEV and PROD - refresh
Hi All,
We have a situation like this.
1. Our BW QA is a copy of PROD.
2. Our BW DEV is not a copy of either QA or PROD.
3. We have an issue with a particular process chain in PROD and we need to correct it (and follow the landscape; that is, make the correction in DEV, then transport to QA and then to PROD).
4. Since this particular process chain does not exist in the DEV environment, our Basis team is advising us to check the difference between DEV and PROD as far as this process chain is concerned, and then refresh it to the DEV system so that we can carry out the change/modification.
We don't really know what exactly we need to find out about the differences between DEV and PROD for this PC, and what to tell the Basis team...
Is it correct that, since the particular process chain is associated with various queries, data targets, update/transfer structures, etc., these will be overwritten/copied/refreshed in the DEV system from PROD?
Please advise. Please help.
Hi George, SAT, BWer,
Thanks for your messages.
George: answers to your questions:
1. Our Basis team wants to copy all the configuration (related to the particular process chain only) and not the data.
2. How to pack the process chain in a portable file, extract this file and then import it into DEV, and how to avoid errors when doing so (knowing that the objects will be different)? I don't know whether to copy the objects or not.
3. I agree with you. We are even ready to work from DEV to PROD.
SAT: answers to your questions:
1. I do see some process chains in DEV, but none related to this particular PC. Many development PCs are there, and I don't know where the originally developed PCs are.
2. I am new to this client; sorry, I don't know how this was deleted.
BWer: answers to your questions:
1. Yes, the PC is latest in PROD.
2. We don't want to create the process chain all the way from scratch in DEV. Yes, this is one of the process chains used in the metachain. In total there are five chains in PROD related to this; we need to modify only the second process chain, which we are trying to copy to DEV so we can make the changes there.
Hope you are clear about our requirement.
Please advise me what exactly we need to look at in the DEV and PROD systems so that the required PC is copied along with (????) to the DEV system. Please help.
Thank you very much in advance. -
Performance problems between dev and prod
I run the same query with identical data and indexes, but one system takes 0.01 seconds to run while the production system takes 1.0 seconds. The tkprof output for DEV is:
Rows Row Source Operation
1 TABLE ACCESS BY INDEX ROWID VAP_BANDVALUE
3 NESTED LOOPS
1 NESTED LOOPS
41 NESTED LOOPS
41 NESTED LOOPS
1 TABLE ACCESS BY INDEX ROWID VAP_PACKAGE
1 INDEX UNIQUE SCAN SYS_C0032600 (object id 51356)
41 TABLE ACCESS BY INDEX ROWID VAP_BANDELEMENT
41 AND-EQUAL
82 INDEX RANGE SCAN IDX_BE2 (object id 53559)
41 INDEX RANGE SCAN IDX_BE1 (object id 53558)
41 TABLE ACCESS BY INDEX ROWID VAP_BAND
41 INDEX UNIQUE SCAN SYS_C0034599 (object id 53556)
1 INDEX UNIQUE SCAN SYS_C0032549 (object id 51335)
1 INDEX RANGE SCAN IDX_BV1 (object id 53557)
Tkprof for Prod is:
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'VAP_BANDVALUE' (TABLE)
52001 NESTED LOOPS
26000 NESTED LOOPS
26000 NESTED LOOPS
26000 NESTED LOOPS
1 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'VAP_PACKAGE' (TABLE)
1 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'SYS_C0018725' (INDEX (UNIQUE))
26000 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'VAP_BANDELEMENT' (TABLE)
26000 INDEX MODE: ANALYZED (RANGE SCAN) OF 'IDX_BE2' (INDEX)
26000 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'VAP_BAND' (TABLE)
26000 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'SYS_C0030648' (INDEX (UNIQUE))
26000 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'SYS_C0018674' (INDEX (UNIQUE))
26000 INDEX MODE: ANALYZED (RANGE SCAN) OF 'IDX_BV1' (INDEX)
The row count varies greatly. But it shouldn't, as the data is the same.
Any ideas?
From DEV you show the Row Source Operations for the query. The column named "Rows" shows the actual number of rows processed by each step.
From PROD you show the Execution Plan for the query; that is, tkprof was executed with the EXPLAIN option, which generates the execution plan as of the time tkprof was run. The "Rows" column in the Explain Plan output comes from PLAN_TABLE.CARDINALITY, which is the CBO's estimate of the number of rows [expected to be] processed by each step.
So, if by <quote>The row count varies greatly</quote> you meant these "Rows" columns, then you are comparing actuals from one database with estimates from another. Get the Row Source Operations from both.
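To make that comparison concrete once you have both Row Source Operations sections: the leading "Rows" column can be scraped and lined up step by step. A rough sketch (the parsing is deliberately naive and assumes plain tkprof text):

```python
import re

def row_counts(tkprof_section):
    # Pull the leading row count from each line of a Row Source
    # Operations section; lines without a leading number are skipped.
    counts = []
    for line in tkprof_section.splitlines():
        m = re.match(r'\s*(\d+)\s+\S', line)
        if m:
            counts.append(int(m.group(1)))
    return counts

dev = """\
     1  TABLE ACCESS BY INDEX ROWID VAP_BANDVALUE
     3   NESTED LOOPS
     1    NESTED LOOPS
"""
print(row_counts(dev))  # [1, 3, 1]
```

Run it over the sections from both systems; a large actual-vs-actual gap at the same plan step shows exactly where the work diverges.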
"Identical data and indexes":
1. The data may be the same, but it is not necessarily stored physically the same way.
2. The indexes being the same means their definitions are the same; again, physically they are not necessarily identical.
In other words, the data in PROD (the way it is stored on disk) may have evolved as a result of discrete deletes/updates/inserts; in DEV it could, for example, be stored more compactly if you took a copy of PROD and moved it into DEV. So the number of blocks for your segments will likely differ between PROD and DEV, the clustering factors of your indexes are likely different, etc.; things which could [and do] influence the CBO. The statistics may be different.
I guess what I'm saying is that it is quite hard, if not outright impossible, to get two identical databases/instances/loads; hence, don't expect the executions to be 100% identical, even if you have "identical data and indexes". By all means compare between DEV and PROD (make sure you compare the same thing, though) and use the observed differences as an indicator for further investigation; don't chase the goal of 100% identical behavior.
Now, by all means look at that query taking 1 second in PROD. I have only addressed <quote>The row count varies greatly. But it shouldn't as the data is the same.</quote> -
免 Chinese char not accepted in QA system, working fine in Dev and PROD
Hi Experts,
I am stuck with a Chinese char, e.g. 免.
BW QA is not accepting it; in the PSA I can see 免 converted to #, whereas the same records show 免 properly in BW DEV and PROD.
I checked the data in the source system (RSA3); it shows 免 properly.
I rechecked the SM59 RFC settings in QA and compared them with DEV.
All systems are Unicode supported.
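The 免-to-# symptom is the classic signature of a lossy codepage conversion somewhere between the source and the QA PSA: a character with no mapping in the target codepage gets substituted. A minimal Python illustration of the mechanism (Python substitutes '?' where SAP shows '#', but the effect is the same):

```python
text = "免"

# A true Unicode round trip preserves the character ...
assert text.encode("utf-8").decode("utf-8") == "免"

# ... but forcing it through a non-Unicode codepage loses it:
lossy = text.encode("ascii", errors="replace").decode("ascii")
print(lossy)  # "?"  (the substitution character; SAP renders it as '#')
```

So the thing to chase is which hop in QA is still doing a non-Unicode conversion (RFC destination, source system codepage, or the DataSource's transfer settings), not the data itself.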
Please help to resolve.
Thank-You.
Regards,
VB
I created a new flat-file text DataSource and tried to load the data (Chinese char), and that works in QA.
I don't understand what the issue is with the 0CUST_SALES_ATTR extractor.
Any help/experience is welcome.
Thank-You.
Regards,
VB -
Multiple connections to dev and prod
Hi All,
I am trying to connect to multiple environments (Dev and Prod) from Shared Connections in Smart View.
When I right-click on the existing Planning connection I don't see anything (there were suggestions in other posts that by right-clicking you would get an option to add a new server).
Also, I have different URLs for Dev and Prod.
I think from 11.1.2.2 onwards the ability to add servers in Shared Connections was removed, and an enhancement request has been filed to restore that ability.
You can use private connection to connect to different instances
http://DEV:19000/aps/APS for Essbase
http://DEV:19000/HyperionPlanning/SmartView HyperionPlanning
If you are on 11.1.2.3 you can create an XML file and use that as your Shared Connection
Essbase Labs: Smart View News
Regards
Celvin
http://www.orahyplabs.com -
Query Execution time is different in DEV and PROD
Hi,
Query is as follows:
SELECT DISTINCT tds.*
FROM testdev.contract_trans npq, testdev.contract_trans_group tds
WHERE npq.contract_trans_group_id = tds.id
AND tds.contract_id <> :contractid
AND tds.tradingday_date BETWEEN :fromtradingday AND :totradingday
AND tds.category_id = :categoryid
AND tds.state_id = :entitystateid
AND EXISTS
(SELECT *
FROM testdev.contract_trans npq1,
testdev.contract_trans_group tds1
WHERE npq1.contract_trans_group_id = tds1.id
AND tds1.contract_id = :contractid
AND (npq.network_point_id = npq1.network_point_id
OR (npq.network_point_id IS NULL
AND npq1.network_point_id IS NULL))
AND tds.tradingday_date = tds1.tradingday_date
AND tds.category_id = tds1.category_id
AND tds.state_id = tds1.state_id)
This query takes 500 ms to execute in DEV, but in Prod it executes in 1.10 min. I also analyzed the tables as well as using hints on this, but the time does not come down in Prod. I compared the tables of dev and prod but data is on shink.
row execution plan for DEV :
call count cpu elapsed disk query current rows
Parse 1 0.00 0.01 0 0 0 0
Execute 1 1.62 1.57 0 0 0 0
Fetch 2 0.81 0.78 0 1208 0 364
total 4 2.43 2.37 0 1208 0 364
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 5 (SYSTEM)
Rows Row Source Operation
364 HASH UNIQUE (cr=1208 pr=0 pw=0 time=783194 us)
545 FILTER (cr=1208 pr=0 pw=0 time=775342 us)
545 HASH JOIN (cr=1208 pr=0 pw=0 time=774670 us)
99432 TABLE ACCESS FULL CONTRACT_TRANS (cr=382 pr=0 pw=0 time=465 us)
18225 HASH JOIN (cr=826 pr=0 pw=0 time=359469 us)
1000 HASH JOIN (cr=445 pr=0 pw=0 time=108528 us)
46 TABLE ACCESS FULL CONTRACT_TRANS_GROUP (cr=223 pr=0 pw=0 time=48274 us)
675 TABLE ACCESS FULL CONTRACT_TRANS_GROUP (cr=222 pr=0 pw=0 time=51497 us)
99432 TABLE ACCESS FULL CONTRACT_TRANS (cr=381 pr=0 pw=0 time=182 us)
And row execution plan for PROD:
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 80.03 78.40 219 319722 0 364
total 4 80.03 78.41 219 319722 0 364
Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: 5 (SYSTEM)
Rows Row Source Operation
364 HASH UNIQUE (cr=319722 pr=219 pw=0 time=78406184 us)
545 FILTER (cr=319722 pr=219 pw=0 time=78385884 us)
545 HASH JOIN (cr=319722 pr=219 pw=0 time=78384017 us)
675 TABLE ACCESS FULL CONTRACT_TRANS_GROUP (cr=222 pr=219 pw=0 time=328724 us)
2380118 NESTED LOOPS (cr=319500 pr=0 pw=0 time=71521450 us)
837 HASH JOIN (cr=603 pr=0 pw=0 time=315980 us)
46 TABLE ACCESS FULL CONTRACT_TRANS_GROUP (cr=222 pr=0 pw=0 time=48647 us)
99432 TABLE ACCESS FULL CONTRACT_TRANS (cr=381 pr=0 pw=0 time=100990 us)
2380118 TABLE ACCESS FULL CONTRACT_TRANS (cr=318897 pr=0 pw=0 time=70077111 us)
Can any body help me out why this is happening?
Regards,
Saptadip
Welcome to the forum.
Please edit your post and use the {noformat}{noformat} tag, so your examples will stay formatted and indented.
Put the tag before and after all text you want to stay formatted.
{noformat}..{noformat} - Displays everything between the tags as programming code.
The query on PROD is doing far more reads and processing many more rows than on DEV.
For example, take table CONTRACT_TRANS: less than 100,000 rows on DEV vs. more than 2,300,000 on PROD.
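The scale of that gap is worth spelling out with the numbers from the two plans:

```python
# Rows flowing through the hot step of each plan (from the tkprof output above)
dev_rows = 99_432       # full scan of CONTRACT_TRANS on DEV
prod_rows = 2_380_118   # rows produced by the NESTED LOOPS step on PROD

print(f"PROD's nested loop handles ~{prod_rows / dev_rows:.0f}x more rows")
```

Roughly 24 times the rows; with most of the PROD elapsed time sitting in that 2.38M-row step, the plan difference rather than the hardware is the first suspect.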
"but data is on shink." What does that mean? -
Hello All,
I have a situation with a client. The client wants to install BPEL DEV and TEST environments on one server instance and one database instance.
I know we can have multiple App Server Instances on a single Physical Server.
But the question is about having just one database instance. Can multiple BPEL schemas reside in the same database with different schema names, like orabpelDEV and orabpelTEST? Can the default schema names orabpel and oraesb be renamed for different environments? If so, how do I do it?
Any other suggestions are appreciated.
Thanks,
Abhay.
PS: BPEL 10.1.2 is being used with a mid-tier installation.
In the documentation of the App Server, chapter 4 is all about BPEL clustering:
http://download-west.oracle.com/docs/cd/B31017_01/integrate.1013/b28980/clusteringsoa.htm
This can also apply to your environment. It is possible to have multiple BPEL PM servers using one single database and the same dehydration store. This helps you increase throughput, scalability and reliability.
You can also implement multiple BPEL domains on a single BPEL server. In this way you can implement an OT(AP) environment. -
Need help setting up DW CS3 local, dev and prod sites
Hi all,
I'm currently building a site in DW CS3 and wanted to
structure things so I could develop, test, then publish in a more
structured manner. What I'd like to have is the follow setup (sorry
for the botched diagram):
Computer 1                Computer 2
(local dev1)              (local dev2)
      |                        |
      +-----------+------------+
                  |
          Test Environment
              (common)
                  |
          Prod Environment
      (on my web host server)
I assume there's a way to do this in DW, but I can't seem to find a good tutorial on it. Right now I'm working on all the pages on my iMac, previewing in the browser to check, and then pushing them to the web host to update my site. This is happening in bits and pieces, and I'd much rather develop everything locally and then push the site in one (or just fewer) moves.
Also, I'd like another user to be able to develop and hopefully publish to the test environment before pushing to prod. Is there a way to organize and control things so our files stay somewhat in sync?
Thanks in advance,
Mike
I recommend downloading and installing XAMPP.
-
Setup of Customer Dev and Test environment
The customer wants to set up Dev and Test environments for APEX on the same machine. I've been trying to wrap my head around the way to do this. These seem to be the options:
1) Separate databases, Two APEX homes, Two HTTP Servers
2) One database, different Workspaces, One APEX home, One HTTP Server
#1 is the most 'pristine' from the perspective of separating everything (and patching), but #2 seems like a more manageable environment. The only downside I see to #2 is that DEV can never have a newer release of APEX than TEST.
BTW, the customer wants TEST to look like PROD except during code pushes. They also want the TEST env refreshed regularly from PROD between code releases.
Suggestions? Other options?
Thanks,
Dwight
Hello Dwight,
Check your dads.conf file. There is something like this:
<Location /pls/apex>
Order deny,allow
PlsqlDocumentPath docs
AllowOverride None
PlsqlDocumentProcedure wwv_flow_file_mgr.process_download
PlsqlDatabaseConnectString localhost:1521:XE ServiceNameFormat
PlsqlNLSLanguage AMERICAN_AMERICA.AL32UTF8
PlsqlAuthenticationMode Basic
SetHandler pls_handler
PlsqlDocumentTablename wwv_flow_file_objects$
PlsqlDatabaseUsername APEX_PUBLIC_USER
PlsqlDefaultPage apex
PlsqlDatabasePassword apex_public_user
PlsqlRequestValidationFunction wwv_flow_epg_include_modules.authorize
Allow from all
</Location>
You can easily change the Location - /pls/apex - to (e.g.) /dev, add another Location (/test), and provide it with a different connect string.
Then you can access your Dev environment at http://localhost:7778/dev and Test at http://localhost:7778/test.
You can also change the (default) settings of your /i/ (virtual) directory to point to different dev and test directories.
Example:
Alias /dev/ "c:\Oracle\OAS\Apache/dev/images/"
Alias /tst/ "c:\Oracle\OAS\Apache/test/images/"
and change the value of the Image Prefix in your Application Definition accordingly.
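Putting Roel's suggestion together, the two Location blocks might look roughly like this (hosts, SIDs and the elided directives are placeholders; copy the remaining lines from the original /pls/apex block):

<Location /dev>
SetHandler pls_handler
PlsqlDatabaseConnectString devhost:1521:XEDEV ServiceNameFormat
... (other Plsql* directives as above) ...
</Location>
<Location /test>
SetHandler pls_handler
PlsqlDatabaseConnectString testhost:1521:XETEST ServiceNameFormat
... (other Plsql* directives as above) ...
</Location>

Each block points the same HTTP server at a different schema or database, which is what makes the single-server dev/test split work.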
Greetings,
Roel
http://roelhartman.blogspot.com/
You can reward this reply by marking it as either Helpful or Correct ;-) -
Comparing data in dev and prod cubes
We have a development environment and a production environment.
I use the Deployment Wizard to move from dev to prod, but I haven't done this for several months.
I have a finance cube which, in theory, I haven't changed except for some changes to some shared dimensions.
I would like to compare the data and/or structure of the two finance cubes (dev vs prod) and make sure the only changes are the differences in the shared dimensions.
Can anybody tell me how to do that?
Thanks in advance.
Red Gate's free SSAS Compare tool is probably the easiest way to compare two SSAS Multidimensional databases:
http://www.red-gate.com/labs/free-tools/ssas-compare
It will show a few spurious differences (minor property values that aren't important) and it's not supported, but it works pretty well.
If you're using Tabular rather than Multidimensional, you can use BISM Normalizer:
https://visualstudiogallery.msdn.microsoft.com/5be8704f-3412-4048-bfb9-01a78f475c64
Chris
Check out my MS BI blog. I also do SSAS, PowerPivot, MDX and DAX consultancy, and run public SQL Server and BI training courses in the UK. -
Risk Analysis result different in DEV and PROD
Hi Gurus,
I have modified a few functions in development and transported the changes, but after the transport a new SOD violation is produced at the user level, whereas with the same access in development there is no violation. When I checked the function permissions, there are two duplicate entries in the production ruleset compared to development. Do I need to remove the duplicate entries in production and then run risk analysis? Is this going to fix the SOD?
My assumption is that GRC 10 doesn't have the ability to transport deletions made in functions.
Regards,
Salman
Dear Salman,
It looks like the rules have been appended, so they show twice.
I suggest downloading the correct rules from DEV and uploading them in PROD again. You can use program GRAC_UPLOAD_RULES to upload in PROD. Please make sure you set the option to overwrite, not append.
With GRAC_RULE_DELETE you can also delete the rules before you upload (not necessary, but possible).
Hope this helps.
Regards,
Alessandro -
Source System Mapping between BW Dev and Prod system
Hi,
I have a DB Connect Source System in BW Dev system called DBSRCDEV which points to one of our Oracle Dev DB.
In Production, the DB Connect Source System is called DBSRCPROD and it points to the corresponding Oracle Prod DB.
In Development, I created a DataSource on this source system (DBSRCDEV) and tried to transport it to Production, but it failed with an error saying that DBSRCDEV does not exist.
Where and how can I maintain or configure the mapping between the DBSRCDEV and DBSRCPROD source systems so that I can transport my DataSource to Production?
Please help.
Thanks,
CH
Hi,
usually you need to maintain the setting for the conversion of logical system names (source systems) in each target system of a transport. So in your prod system go to RSA1 -> Tools -> Change logical system names...
After maintaining that table, try to import your request again.
regards
Siggi