Best Practices for Data Migration install v1.40 - Error 2732 Directory Manager not initialized

Hi
I'm attempting to install SAP Best Practices for Data Migration 1.40 on Windows Server 2008 R2 (64-bit).
Prerequisite error
The installation program stops with a missing file error:
The following file was not found
... \migration\InstallationWizard\BusinessObjects Data Services\setup.exe
The file is necessary for successful installation. Please connect to internet or refer to Quick Guide (available on SAP note 1527151) for information regarding the above file.
The Windows Installer log displays
Error 2732 Directory Manager not initialized
SAP note 1527151 does not exist or is internal.
Any help on the root cause of this error would be appreciated, as the file does not exist in that folder in the installation zip file.
The other prerequisite, .NET 3.5.1, is already met.
The patch has been available since 20.11.2011, so I presume it is a good installation set.
Thanks,
Alan

Hi Alan,
There are details on Data Migration v1.4 installations on the SAP website and Service Marketplace. The link below should guide you to the right place. It includes a PowerPoint presentation and other useful links as well.
http://help.sap.com/saap/sap_bp/DMS_V140/DMS_US/html/index.htm
Arun

Similar Messages

  • SAP Best Practices for Data Migration :repositories only on MS SQL Server ?

    Hi,
    I'm implementing the "SAP Best Practices for Data Migration" (see https://websmp109.sap-ag.de/bp-datamigration).
    As part of the installation you have to install MS SQL Server Express Edition. The installation guide contains detailed steps to do this. All repositories for Data Services should be running on SQL Server, according to the installation guide.
    The customer I'm working for now does not want to use SQL Server, but DB2, as company standard.
    So I use DB2 for the local and profiler repositories.
    I notice, however, that the web application http://localhost:8080/MigrationServices does not support DB2. The only database type you can select in the configuration area is MS SQL Server.
    Is this a limitation, or is it by design?

    Hans,
    The current release of SAP Best Practices for Data Migration, v1.32, supports only MS SQL Server.  The intent when developing the DM content was to quickly set up a temporary, standardized data migration environment, using tools that are available to everyone.  SQL Server Express was chosen to host the repositories, because it is easy to set up and can be downloaded for free.  Some users have successfully deployed the content on Oracle XE, but as you have found, the MigrationServices web application works only with SQL Server.
    The next release, including the web app, will support SQL Server and Oracle, but not DB2.
    Paul

  • Best practices for data migration

    Hi All,
    This thread is for sharing ideas and knowledge on how to make the best use of data migration with BusinessObjects Data Services.

  • The best practice for data archiving

    Hi
    My client has been using OnDemand for almost 2 years, and there are around 2M records in the system (Activities), so I just want to know what the best practice for data archiving is; we don't care much about data older than 6 months.

    Hi Erik,
    Archival is nothing but deletion.
    Create a backup cube in BW. Copy the data from your planning cube to the backup cube, and then delete that data region from your planning cube.
    Archival will definitely improve the performance of your templates, scripts, etc., since the system will now search a smaller dataset.
    Hope this helps.

  • Best Practices for Data Access

    Good morning!
    I was wondering if someone might give me some advice on some best practices for retrieving data from a SQL server in the cloud via a desktop application?
    I'm curious whether there is anything "wrong" with embedding the server address (IP, domain, or whatever) into my desktop application and letting users provide their own usernames and passwords, wherein my application collects the username and password from the user, connects to the server with those credentials, retrieves the data and uses it in-app.
    I'm petrified of security issues and I would hate to start using a SQL database with this setup only to find out that anyone could download x, y or z and connect to the database and see everything.
    Assuming I secure all of the users with limited permissions, is there anything wrong with exposing a SQL server to the web for my application to use? If so, what and what would be a reasonable alternative?
    I really appreciate any help and feedback!

    There are two options, neither of them very palatable:
    1) Create a domain, and add the VM and your local box to it.
    2) Stick to a workgroup, but have the same user name and password on both machines.
    In practice, a better option is to create a SQL login that is a member of sysadmin, or that has rights to impersonate an account that is a member of sysadmin. For that matter, you could use the built-in sa account, but rename it to something else.
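    As a rough T-SQL sketch of that kind of hardening, plus a least-privilege login for the desktop application (all names and the password here are placeholders, not a definitive setup):
        ALTER LOGIN sa WITH NAME = renamed_admin;      -- rename the built-in sa account
        ALTER LOGIN renamed_admin DISABLE;             -- or disable it outright if unused
        CREATE LOGIN app_reader WITH PASSWORD = 'Use-A-Strong-Password-1';
        CREATE USER app_reader FOR LOGIN app_reader;   -- run this in the application database
        GRANT SELECT ON SCHEMA::dbo TO app_reader;     -- grant only what the desktop app needs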
    The other day I was looking at the error log from a server that apparently had been exposed on the net. The log was full of failed login attempts for sa, with occasional attempts for names like usera and so on. The server is in Sweden; the IP addresses for the login attempts were in China.
    Just so you know what you can expect.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Best practice for database migration in 11g

    Hello,
    Database migration is required due to an OS change. Here, I have two database instances, say A and B, on the old server, where RDBMS_VERSION is 11.1.0.7.0. They need to be migrated to a new OS where Oracle has been installed with version 11.2.0.2.0.
    Since all data and objects need to be migrated to the new server, I want to know what the best practice is and how to do it. Thanks in advance for your guidance.
    Thanks and Regards,
    Prosenjit

    Hi Prosenjit,
    you have some options.
    1. RMAN Restore: you can restore your database via RMAN to the new host and then upgrade it.
        Please follow the instructions in MOS Note: RMAN Restore of Backups as Part of a Database Upgrade (Doc ID 790559.1)
    2. Data Guard: check MOS Note: Mixed Oracle Version support with Data Guard Redo Transport Services (Doc ID 785347.1)
    3. Full Export / Import (Data Pump) - a rough sketch follows below.
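    For option 3, a minimal Data Pump sketch (credentials, directory and file names are placeholders; a full export/import into the freshly created 11.2.0.2 database is assumed). First make sure DATA_PUMP_DIR points to a real OS path on each host (SQL*Plus as SYSDBA):
        CREATE OR REPLACE DIRECTORY data_pump_dir AS '/u01/app/oracle/dpdump';
    Then, from the OS prompt on the source 11.1.0.7 host:
        expdp system FULL=Y DIRECTORY=DATA_PUMP_DIR DUMPFILE=full_db.dmp LOGFILE=full_exp.log
    Copy full_db.dmp to the target host's directory and run on the 11.2.0.2 host:
        impdp system FULL=Y DIRECTORY=DATA_PUMP_DIR DUMPFILE=full_db.dmp LOGFILE=full_imp.log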
    Borys

  • What is the best practice for voicemail migration?

    Hello Tech Gurus,
    I am looking into a way to migrate a customer's voicemail, which currently resides on an NME-CUE module. They want to migrate the voicemail configuration, licenses and related data to an SRE module, and I would like to know what best practice or guidelines I can refer to.
    Thank you very much!
    Regards,
    Alex.

    Hi Alex,
    I was looking at the documentation, which says that:
    Cisco supports transfer of CUE licenses, with some restrictions. Transfer is supported for CUE devices that are of the same type, for an RMA or in cases in which a license was wrongly installed. This process is not intended for transferring licenses from one generation to another (for example, from NM-CUE to NME-CUE, or from NME-CUE to SRE devices). Transferring a license is accomplished using a process called rehosting. The rehosting process transfers a license from one UDI to another by revoking the license from the source device and installing in a new device
    http://www.cisco.com/c/en/us/td/docs/voice_ip_comm/unity_exp/rel7_1/Licensing/CUELicensing_book/csa_overview_CUE.html#wp1101175
    You can still speak to the licensing team, providing the "show license udi" output from the SRE module along with the old license details from the NME-CUE, for rehosting.
    regds,
    aman

  • Best practice for preparation of install of Leopard from Tiger

    Hi all, I am about to upgrade my iMac with an Archive and Install of Leopard 10.5 from Tiger 10.4.11.
    I've done this dozens of times on other machines, but I never really thought about the best way to prepare for an install. I have already backed up this machine using Carbon Copy Cloner; is there anything else I should do before hitting the button?

    Run the Disk Utility Repair function (or at least Verify), and also Repair Permissions, which takes many minutes. This will help assure the integrity of your directory before you begin.
    I am not sure why you are choosing to Archive and Install. An Upgrade install will be offered, is thought to be just as effective, and can be a little more convenient in that your non-Apple applications are not moved into the "Previous System" folder.
    The absolute BEST practice would be (after you make TWO backups) to erase your hard drive with the Write Zeroes/Zero All Data option. This will take many hours, but it forces the drive to substitute spare blocks for any found to be defective during the zeroing. But I must say that in the absence of error messages indicating disk trouble, this is incredible overkill. You asked, so I am answering the question as you asked it.
    Message was composed over a long period of time due to multiple interruptions.

  • ASM BEST PRACTICES FOR 'DATA' DISKGROUP(S)

    In our quest to reduce operating costs we are consolidating databases and eliminating RAC in favor of standalone servers. This is a business decision that is a certainty.  Our SAN has been upgraded, and the new database servers are newer, faster, etc.
    Our database version is 11.2.0.4 with Grid Infrastructure 12.1.0.1. Our data diskgroup is RAID-5 and our fra is RAID-1+0.  ASM has external redundancy.  All disks are of equal size with equal storage performance and availability.
    Previously our databases were on separate clusters by function: OLTP, REPORTING and ENTERPRISE CONTENT MANAGEMENT. Development/Acceptance shared a cluster, while production was separate.
    The new architecture combines different functions onto one server for dev/acc, and another for production.  This means they will all be using the same ASM instance.  Typically we followed Oracle's recommendation to have two disk groups, one for data and the other for FRA.  That worked well when a database was the only one using the data diskgroup.  Now that we are combining databases, is the best practice still to have one data diskgroup and one FRA diskgroup?  For example, production will house 3 databases.  OLTP is 500 GB, Reporting is 1.3 TB, and Enterprise Content Management is 6 TB and growing.
    My concern is that if all 3 databases access the same data diskgroup, the smaller OLTP must traverse through the 6 TB of content management.  Or is this thinking flawed?
    Does this warrant separate diskgroups?  Are there pros and cons to this?
    Any insights are appreciated.
    Best Regards,
    Sherrie

    I have many issues to deal with in this 'consolidation', but budget reduction is happening in state and regional government.  Our SAN storage is for our enterprise infrastructure and not part of my money-savings directive.  We are also migrating to UCS blades for the infrastructure, also not part of my budget reduction contribution. Oracle licensing is our biggest software cost, this is where my directive lies.  We've always been conservative and done more with less, now we will do with less, but different because the storage and hardware are awesome. 
    We've been consolidating databases onto RAC clusters and standalones since we started doing Oracle.  For the last 7 years we've supported ASM, 6 databases and 2 passive standby instances (with Data Guard) on a 2-node cluster totaling 64 GB of memory.  The new UCS blades have 256 GB of memory.  I get that each database must support its background processes.  If I add up the SGAs, allocated PGAs and background processes, they take up about 130 GB of memory, but there is also an overhead to RAC.  In all the years we've had Oracle, most of our failures, outages or downtime were because of RAC.  On the plus side, the seamless failover saved us most times (not all times), but it required administrative time for troubleshooting.
    I would love to go to Oracle 12c and use its multitenant architecture, but I have 3rd party applications that don't yet support it.  11.2 might be our last release unless I can reduce costs.  Consolidation is real and much needed, which I believe is why Oracle responded to the market with multitenancy.
    But back to my first question about how many diskgroups should serve a group of databases.  What I am hearing, and I think I agree, is that one data diskgroup will suffice, because the ASM instance knows where to retrieve the data, and both waste and management effort will be reduced.
    I still need to do some ciphering and by no means have a final plan, but thank you all for your insights and contributions.
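    For that ciphering, something like the following illustrative query (just a starting point, not a recommendation) can keep an eye on how the shared DATA and FRA diskgroups fill up once all three databases are on them:
        SELECT name, type, total_mb, free_mb,
               ROUND(100 * free_mb / total_mb, 1) AS pct_free
        FROM   v$asm_diskgroup
        ORDER  BY name;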

  • Best practice for data persistance for monitoring without BAM

    Greetings,
    We are modeling a business process in a large organization using BPEL Process Manager. The key point is that business people need to monitor the execution of the business process at several key points of the process execution, and they also need reporting information about the process.
    To model this in our project, we decided to create a new Oracle Database Schema that is going to hold the information about the business process execution (we decided that because for this initial offering the customer is not buying BAM). In this context, the BPEL process is going to be sending this key information to the repository so business people can then view real time information about the process execution as well as historical information in form of reports.
    The important issue here is whether there is a best practice for sending the information to the database schema. Could it be done just using database adapters? Or maybe using sensors that send the data over topic connections?
    Any help will be highly appreciated.
    Thanks in advance.

    Hi, yes, this suggestion is nice. First configure the sensors (activity or variable), then configure the sensor action as a JMS topic, which will in turn insert the data into a database. Alternatively, when you configure the sensor action as a database, the data goes to the Oracle Reports schema. Is there any chance of altering that, i.e. changing config files so that the data does not go to the Reports schema but to a custom schema created by a user? I don't know if it can be done. My problem is that when I configure the JMS topic for sensor actions, I see blank data coming through; for some reason the data is not getting posted. I have used an ESB routing service based on the schema which I am monitoring. Can anyone help?

  • The best practice for data mart to different BW System.

    Hi All,
    Could you suggest what I should do in this case?
    I have 2 SAP BW systems, say BW A and BW B.
    I want to transfer data from an InfoCube in BW A into an InfoCube in BW B.
    The things I did:
    1. 'Generate Export Data Sources' for the InfoCube in BW A.
    2. Replicate the source system in BW B. BW B then shows the DataSource from the InfoCube in BW A.
    3. In BW B, I create an InfoPackage, and then I can fetch data from BW A.
    What I want to ask is:
    1. Can I make this automatic? Every time I want to fetch data from the InfoCube in BW A, I must run the InfoPackage in BW B. What is the best practice?
    2. Could RDA make it automatic? Automatic in my case means that every time there is new or updated data in the InfoCube in BW A, I don't have to run the InfoPackage in BW B;
    BW B will automatically fetch the data from the InfoCube in A.
    If yes, could you please give me step-by-step instructions on how to use RDA to solve my case?
    I really need your guidance.
    Thanks,
    Best regards,
    Daniel N.

    Hi Daniel,
    You can create a process chain to load your cube in BW A. Similarly, create a process chain in your BW B system to load its cube.
    Then, in the BW A process chain, after the cube load you can automatically trigger the process chain in BW B by using the Remote Chain option.
    This will trigger the chain automatically in the remote system.
    Regards,
    Mansi

  • Best practices for data representation

    I'm curious about the best data representation for a constant or variable when there is an obvious choice of two.
    For example, take the Timeout terminal of the Event structure. This terminal takes a Long (I32) data type, but I'm wiring to it a constant value of 100 and therefore could use an Unsigned Byte (U8). Setting the constant to be I32 prevents an automatic conversion step from happening, but setting it to be U8 saves a little bit of unnecessary allocated space.
    Which is better?

    Practically speaking, it more than likely will not matter until the data sets get large. However, as best practices go, it is best to keep the data consistent and in the type that the control, property node, etc. expects. Directly from the NI user manual (LV 7.1):
    "Coercion dots appear on block diagram nodes to alert you that you wired two different numeric data types together. The dot means that LabVIEW converted the value passed into the node to a different representation. Coercion dots can cause a VI to use more memory and increase its run time. Try to keep data types consistent in VIs."
    Cheers,
    --Russ

  • Best practices for data entry online system

    Hi all
    I am (with a team of 4 members) going to build an online data entry system that may have approximately 30 screens. I am going to use Spring BlazeDS remoting to connect to the middleware.
    Could anyone please suggest some good practices to follow on the Flex side for such a "DATA ENTRY" application?
    The points below are some common best practices we need to follow while coding, but I am not sure how to achieve them on the Flex side:
    User experience (I can probably get some information regarding this from my client)
    Code maintainability
    Code extensibility
    Memory and CPU optimization
    Ability to work with team members (multiple checkouts)
    Best framework
    So I am looking for valuable suggestions from great minds.

  • Table Owners and Users Best Practice for Data Marts

    2 Questions:
    (1) We are developing multiple data marts that share the same instance. We want to deny access to users while tables are being updated. We have one generic user (BI_USER) with read access through one of the popular BI tools. For the current (first) data mart we denied access by revoking the privilege from BI_USER; however, going forward, the tables of the other data marts will be updated on different schedules, and we do not want to deny access to all the data marts at once. What is the best approach?
    (2) What is the best methodology for ownership of tables in different data marts that are shared across marts? Can we create one generic ETL_USER to update tables with different owners?
    Thanx,
    Jim Masterson

    If you have to go with generic logins, I would at least have separate generic logins for each data mart.
    Ideally, data loads should be transactional (or nearly transactional), so you don't have to revoke access ever. One of the easier tricks to accomplish this is to load data into a shadow table and then rename the existing table and the shadow table. If you can move the data from the shadow table to the real table in a single transaction, though, that's even better from an availability standpoint.
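    A minimal sketch of that shadow-table swap (all table names here are placeholders):
        -- load the refreshed data into a shadow table while users keep querying SALES_FACT
        INSERT INTO sales_fact_shadow SELECT * FROM staging_sales_fact;
        COMMIT;
        -- swap the tables with two quick renames; the outage is only a moment
        ALTER TABLE sales_fact RENAME TO sales_fact_old;
        ALTER TABLE sales_fact_shadow RENAME TO sales_fact;
        -- note: object grants stay with the renamed tables, so re-grant SELECT etc. on the new SALES_FACT if needed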
    If you do have to revoke table access, you would generally want to revoke SELECT access to the particular object from a role while the object is being modified. If this role is then assigned to all the Oracle user accounts, everyone will be prevented from viewing the table. Of course, in this scenario, you would have to teach your users that "table not found" means that the table is being refreshed, which is why the zero downtime approach makes sense.
    You can have generic users that have UPDATE access on a large variety of tables. I would suggest, though, that you have individual user logins to the database and use roles to grant whatever ad-hoc privileges users need. I would then create one account per data mart, with perhaps one additional account for the truly generic tables, to own each data mart's objects. Those owners would then grant database privileges to different roles, and you would then grant those roles to different users. That way, Sue in accounting can have SELECT access to portions of one data mart and UPDATE access to another data mart without granting her every privilege under the sun. My hunch is that most users should not be logging in to, let alone modifying, all the data marts, so their privileges should reflect that.
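    A hypothetical layout along those lines (every name below is invented purely for illustration):
        CREATE ROLE sales_mart_read;
        CREATE ROLE finance_mart_update;
        GRANT SELECT ON sales_owner.sales_fact TO sales_mart_read;
        GRANT INSERT, UPDATE ON finance_owner.gl_detail TO finance_mart_update;
        GRANT sales_mart_read TO sue;        -- Sue can read the sales mart...
        GRANT finance_mart_update TO sue;    -- ...and update the finance mart, nothing more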
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Best Practice for data pump or import process?

    We are trying to copy existing schema to another newly created schema. Used export data pump to successfully export schema.
    However, we encountered some errors when importing dump file to new schema. Remapped schema and tablespaces, etc.
    Most errors occur in PL/SQL... For example, we have views like the one below in the original schema:
    CREATE VIEW oldschema.myview AS
    SELECT col1, col2, col3
    FROM oldschema.mytable
    WHERE col1 = 10
    Quite a few functions, procedures, packages and triggers contain "oldschema.mytable" in DML (insert, select, update) statements, for example.
    Getting the following errors in import log:
    ORA-39082: Object type ALTER_FUNCTION:"TEST"."MYFUNCTION" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE:"TEST"."MYPROCEDURE" created with compilation warnings
    ORA-39082: Object type VIEW:"TEST"."MYVIEW" created with compilation warnings
    ORA-39082: Object type PACKAGE_BODY:"TEST"."MYPACKAGE" created with compilation warnings
    ORA-39082: Object type TRIGGER:"TEST"."MYTRIGGER" created with compilation warnings
    A lot of actual errors/invalid objects in new schema are due to:
    ORA-00942: table or view does not exist
    My question is:
    1. What can we do to fix those errors?
    2. Is there a better way to do the import under these conditions?
    3. Update PL/SQL and recompile in new schema? Or update in original schema first and export?
    Your help will be greatly appreciated!
    Thank you!

    I routinely get many (MANY) errors as follows and they always compile when I recompile using utlrp.
    ORA-39082: Object type ALTER_FUNCTION:"TKCSOWNER"."RPTSF_WR_LASTOUTPUNCH" created with compilation warnings
    ORA-39082: Object type ALTER_FUNCTION:"TKCSOWNER"."RPTSF_WR_REFPERIODENDFOREMP" created with compilation warnings
    ORA-39082: Object type ALTER_FUNCTION:"TKCSOWNER"."RPTSF_WR_TAILOFFSECS" created with compilation warnings
    ORA-39082: Object type ALTER_FUNCTION:"TKCSOWNER"."FN_GDAPREPORTGATHERER" created with compilation warnings
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
    ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ABSENT_EXCEPTION" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ACCRUAL_BAL_PROJ" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ACCRUAL_DETAILS" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ACCRUAL_SUMMARY" created with compilation warnings
    ORA-39082: Object type ALTER_PROCEDURE:"TKCSOWNER"."ACTUAL_SCHEDULE" created with compilation warnings
    It works. In all my databases: peoplesoft, kronos, and others...
    I should qualify that it may still be necessary to debug specific problems, but the most common ones are easily resolved by running utlrp.sql. The usual problems I run into are caused by database links that point to another database, such as a production database that we firewall our test and development databases from linking to (for obvious reasons).
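    In case it is useful, the recompile step is just this (run as SYSDBA in SQL*Plus); the dictionary query afterwards shows anything that still genuinely needs fixing by hand:
        @?/rdbms/admin/utlrp.sql
        SELECT owner, object_type, object_name
        FROM   dba_objects
        WHERE  status = 'INVALID'
        ORDER  BY owner, object_type, object_name;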
