46B Parallel Export and Import

Dear All,
We would like to perform an SAP migration from AIX to Windows 2003, but first we would like to know more about this scenario.
Server A: AIX
Server B: Windows 2003
DB: Oracle 10g
1. Can we do a parallel export and import in 46B?
2. If yes, could you provide a procedure?
Many thanks.

> 1. Can we do a parallel export and import in 46B?
> 2. If yes, could you provide a procedure?
To migrate a 46B system you need the migration tool CD, which you can get only if you have an extended maintenance contract.
You can split tables, but this functionality is not integrated into the setup tools (R3SETUP) and must be done "manually". Since 46B has long been out of support, the necessary documentation is either scattered across individual SAP Notes or not available at all.
I would not do a production migration without a certified migration consultant (see http://service.sap.com/osdbmigration).
Markus

Similar Messages

  • Export and Import of mappings/process flows etc

    Hi,
    We have a single repository with multiple projects for DEV/UAT and PROD of the same logical project. This is a nightmare for controlling releases to PROD, and in fact I suspect we have a corrupt repository as a result. I plan to split the repository into 3 separate databases so that we have a design repository each for DEV, UAT and PROD.
    To control code migrations between these, I plan to use metadata export and subsequent import into UAT and then PROD once tested. I have used this successfully before on a project, but I am worried about inherent bugs with metadata export/import (I have been bitten before with Oracle Portal).
    So can anyone advise what pitfalls there may be with this approach, and in particular whether anyone has experienced loss of metadata between export and import? We have a complex warehouse with hundreds of mappings, process flows, sqlldr flat-file loads, etc. I have experienced process flow imports that seem to lose their links to the mappings they encapsulate.
    Thanks for any comments,
    Brandon

    This should do the trick for you. It matches on "PARALLEL", so it only removes the APPEND PARALLEL hint and leaves the other hints as they are...
    # Set current location
    set path "C:/TMP"
    # Project parameters
    set root "/MY_PROJECT"
    set one_Module "MY_MODULE"
    set object "MAPPINGS"
    # OMBPlus and Tcl related parameters
    set action "remove_parallel"
    set datetime [clock format [clock seconds] -format %Y%m%d_%H%M%S]
    set timestamp [clock format [clock seconds] -format %Y%m%d-%H:%M:%S]
    set ext ".log"
    set sep "_"
    set ombplus "OMBplus"
    set omblogname $path/$one_Module$sep$object$sep$datetime$sep$ombplus$ext
    set OMBLOG $omblogname
    set logname $path/$one_Module$sep$object$sep$datetime$ext
    set log_file [open $logname w]
    set word "PARALLEL"
    set i 0
    # Connect to the OWB repository
    OMBCONNECT .... your connect string
    # Ignore errors that occur in any command that is part of the script and move on to the next command
    set OMBCONTINUE_ON_ERROR ON
    OMBCC "'$root/$one_Module'"
    # Search the mappings for loading/extraction operators whose hints contain PARALLEL
    puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Searching for Loading/Extraction Operators set at Parallel"
    puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping"
    puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Searching for Loading/Extraction Operators set at Parallel"
    puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping"
    foreach mapName [OMBLIST MAPPINGS] {
        # The same check applies to table, dimension, cube and view operators
        foreach opType {TABLE DIMENSION CUBE VIEW} {
            foreach opName [OMBRETRIEVE MAPPING '$mapName' GET $opType OPERATORS] {
                # Check both the loading hint and the extraction hint of each operator
                foreach hint {LOADING_HINT EXTRACTION_HINT} {
                    foreach prop [OMBRETRIEVE MAPPING '$mapName' OPERATOR '$opName' GET PROPERTIES ($hint)] {
                        if { [regexp $word $prop] == 1 } {
                            incr i
                            puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Operator: $opName, $hint: $prop"
                            puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Mapping: $mapName, Operator: $opName, $hint: $prop"
                            # Clear the hint and commit the change
                            OMBALTER MAPPING '$mapName' MODIFY OPERATOR '$opName' SET PROPERTIES ($hint) VALUES ('')
                            OMBCOMMIT
                        }
                    }
                }
            }
        }
    }
    if { $i == 0 } {
        puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping"
        puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Did not find any Loading/Extraction Operators set at Parallel"
        puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping"
        puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Did not find any Loading/Extraction Operators set at Parallel"
    } else {
        puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping"
        puts "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Fixed $i Loading/Extraction Operators set at Parallel"
        puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Module: $one_Module, Object_type: Mapping"
        puts $log_file "[clock format [clock seconds] -format "%Y%m%d %H:%M:%S"] --> Fixed $i Loading/Extraction Operators set at Parallel"
    }
    close $log_file
    Enjoy!
    Michel

  • How to do fast export and import

    I have a Windows 2003 server with Oracle 10.2.0.3 installed on it.
    Here I want to ask how I can speed up my export and import using expdp.

    Hi User,
    user11798002 wrote:
    > I have a Windows 2003 server with Oracle 10.2.0.3 installed on it. Here I want to ask how I can speed up my export and import using expdp.
    You can parallelize the export when using Data Pump, but for the traditional export utility use the following.
    An export of a database reads the data by running a select statement against it and generates the DDL needed to perform the import later.
    Fast Exports
    1) Use direct=y.
    2) As long as the disk is not fully utilized, try running exports in parallel.
    3) Keep the export file on a different disk than the datafiles.
    4) Run the export in two parts rather than one, i.e. the first with rows=n and the second with indexes=n rows=y constraints=n.
    Fast Imports
    1) Use the data-only export file (the one taken with indexes=n rows=y constraints=n): it will insert all the data but will not create any indexes or constraints. Once the data insertion is done, run imp with the indexfile option to extract the index-creation script into a text file. Edit the file to include a parallel clause, set db_file_multiblock_read_count to 128 with sort_area_size at a higher value, and set workarea_size_policy=AUTO.
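    Since the question was about expdp specifically: a minimal sketch of a parallel Data Pump export and import could look like the following (the DPUMP_DIR directory object and the file names are placeholders to adapt to your environment):
    c:\>expdp system/manager directory=DPUMP_DIR dumpfile=full%U.dmp logfile=exp.log full=y parallel=4
    c:\>impdp system/manager directory=DPUMP_DIR dumpfile=full%U.dmp logfile=imp.log full=y parallel=4
    The %U in the dump file name lets Data Pump create one file per parallel worker; without multiple dump files the parallel setting cannot be fully exploited.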
    A R P I T S I N H A
    [oracledba.in]

  • Query on Database Export and Import

    Hi Techies,
    Currently we are running our database on Oracle 10g and SAP 4.x, and our OS is HP-UX 11.11.
    We plan to migrate our hardware from PA-RISC to Itanium, and for the production migration we are planning to use the export and import method in order to reclaim free space.
    Our plan is as below:
    We will not touch the original production system; we will just restore the DB onto the new server.
    After the restore we will allocate space on the new server equivalent to our DB size ("null space").
    Then we will perform a DB export from the new system into that null space,
    and then import the DB back into the same system.
    Here are my queries:
    1) Is it possible to export and import the database from/to the null space?
    2) We have a 2 TB DB and good resources (32 GB RAM, 12 CPUs, etc.). How much time should we expect the export and import to take?
    3) What challenges can we expect?
    4) How much free space can we expect to gain, at minimum, with this option?
    Regards,
    Nick Loy

    So with test runs I can expect rapid speed in DB export and import (1 TB/h)... If I have a good system, the database export and import should complete within 2 hours (database size is 2 TB).
    Well, 1 TB/h is at the very top of expectations here; you should be careful. I did an export/import of a 1.5 TB database of an ERP system lately. We did a parallel export (40 processes) / import (20 processes) using distmon; the source was HP-UX IA64, the target Linux x86_64. The disks were midrange SAN storage systems on both sides. After tuning we managed to do it in 6-7 hours.
    But in your case, if you only have one system, this could mean you have to drop the source DB first and then recreate the target DB on the same disks. The creation of the 1-2 TB of database files alone can take more than an hour; besides that, you don't have an easy fallback.
    If you have a test system that is comparable from a size and hardware perspective, then I suggest you try a test export to get a feeling for it.
    What about an online reorg of the database? What would be the best way to get free space with minimum downtime?
    Theoretically you should be able to gain more or less the same amount of space doing online reorgs. The advantage is less downtime; the downside is that the reorgs will run over a longer time period and put additional load on the system.
    Cheers, Michael

  • Separate Distribution Monitor Export and Import Processes on Multiple Machines

    Hi,
    Would you kindly let me know whether it is possible (meaning: an officially supported way) to run the Distribution Monitor export and import processes on different machines?
    As per SAP Note 0001595840 "Using DISTMON and MIGMON on source and target systems", it says:
    > 1. DISTMON expects the export and import to be carried out on the same server
    I think this means that the export and import processes for the same tables must run on the same machine; is that correct? If yes, then exporting on machine A and importing the exported data on another machine B is not the officially supported way... (However, I know it is technically possible.)
    Kind regards,
    Yutaka

    Hi Yutaka,
    Points no. 2 and 3 clarify the confusion, but let me explain briefly:
    Distribution Monitor is basically used for migrations of large SAP systems (databases). It increases the parallelism of export and import by distributing the processes across the available systems.
    You have to prepare the system for using DistMon. A common directory needs to be created as "commDir", and if you use multiple systems to run a larger number of export and import processes, that "commDir" must be shared across all those systems. This is what point no. 1 in KBA 1595840 refers to. Distribution Monitor runs both the export and the import process from the machine that was prepared for DistMon, and DistMon itself controls the other processes, i.e. MigMon; there is no need to start a separate MigMon.
    For example: you are migrating an SAP system from OS AIX / DB DB2 to OS HP-UX / DB Oracle. You need to perform the export using DistMon, and you have 4 Windows servers that can be used for parallel export/import. Once you have prepared the system hosting the "commDir" for DistMon, you provide the information about the involved host machines in the "distribution_monitor_cmd.properties" file. When DistMon is executed, it automatically distributes the export and import processes across the systems defined in that file.
    Best regards,
    SUJIT

  • Question regarding export and import of Hyperion Security during upgrade

    Hi Guys,
    We are upgrading Essbase and Integration Services from 7x to 9x, which utilize Hyperion Hub, and we are going to follow the method of uninstalling 7x and reinstalling the 9x components.
    Now my question is: what is the best way of transferring security from 7x to 9x? I heard that Advanced Security Manager can be used to export and import the security. Or is there any
    other way of doing it?
    Can someone please enlighten me on this.
    Thanks in advance
    K

    Ihatelightroom wrote:
    First let me say that any software that comes without a save button should be sold with a warning label.
    Why?
    Question 1: I am having an issue comprehending how to save a photo. In my case I select the photo, zoom in on the subject, and export it to my desktop. The picture on my desktop does not incorporate the change. Am I missing a step? What do I need to do to export it with this change? I actually watched a YouTube video on this and could not see what I was not doing.
    You must have selected the wrong option in the Export dialog box. Under "File Settings", you need to select JPG and not "Original". Of course, you probably need to do some additional viewing of videos (or some reading) to learn that most people's workflow does not automatically include a "Save" or "Export" after editing the photo. It's not a necessary part of Lightroom's workflow, unless you need the photo for some non-Lightroom activity.
    Question 2: I just installed Lightroom and am trying to import my 12k-strong photo collection. The Import button pulls in about 2k and then cannot find any more. The photos are stored in folders by date within a master folder. I am selecting the master folder. I can go in and import the sub-folders individually; however, I do not want to do that 200 times. There is no apparent way to go into the subfolder level and select more than one folder.
    In the Import dialog box, on the left, under "Source", there is a checkbox that says "Include SubFolders". Make sure this is checked.
    Seriously, you need to spend some time reading introductory material about LR, because Lightroom does not work like any other photographic software you might have used in the past. You are handling it as if it were no different from standard photo editing software, and you are going to be frustrated if that is your mindset. See the videos at adobe.tv and read this: http://www.flickr.com/groups/adobe_lightroom/discuss/72157603590978170/

  • I am using iPhoto '11 (ver 9.4.3) on a Mac running OS X 10.8.5 and I want to export calendar projects to an external hard drive. What is the easiest way to do this? I have tried export and import but it didn't seem to work.

    I am using iPhoto '11 (ver 9.4.3) on a Mac running OS X 10.8.5 and I want to export calendar projects to an external hard drive. My goal is to store them on an external hard drive so they don't use up space on the Mac's hard drive. Is it possible to copy the specific projects without copying the entire library? What is the easiest way to do this? I have tried export and import but it didn't seem to work.

    What do you not understand?
    You can duplicate the iPhoto library (Command-D), delete everything except the project and its photos from the copy, and move that.
    However, the calendar takes very little space; it is simply database entries. It is the photos in the calendar that take space, and for most people you would want to keep those photos in your library.
    You can use a photo in 50 calendars and it still is only one photo in your library. As I explained, calendars do not exist as such; they are simply database entries telling iPhoto how to display the calendar, and they take almost no space at all.
    LN

  • SQL Developer 2.1: Problem exporting and importing unit tests

    Hi,
    I have created several unit tests on functions that are within packages. I wanted to export these from one unit test repository into another repository on a different database. The export and import work fine, but when running the tests on the imported version, there are lots of ORA-06550 errors. When debugging this, the function name is missing in the call, i.e. it is attempting <SCHEMA>.<PACKAGE> (parameters) instead of <SCHEMA>.<PACKAGE>.<FUNCTION> (parameters).
    Looking in the unit test repository itself, it appears that the OBJECT_CALL column in the UT_TEST table is null - if I populate this with the name of the function, then everything works fine. Therefore, this seems to be a bug with export and import, and it is not including this in the XML. The same problem happens whether I export a single unit test or a suite of tests. Can you please confirm whether this is a bug or whether I am doing something wrong?
    Thanks,
    Pierre.

    Hi Pierre,
    Thanks for pointing this out. Unfortunately, it is a bug on our side and you have found the (ugly) "work-around".
    Bug 9236694 - 2.1: OTN: UT_TEST.OBJECT_CALL COLUMN NOT EXPORTED/IMPORTED
    Brian Jeffries
    SQL Developer Team

  • Require help in understanding exporting and importing statistics.

    Hi all,
    I am a bit new to statistics.
    Can anyone please explain these commands to me in detail?
    1) exec DBMS_STATS.GATHER_TABLE_STATS (ownname => 'MRP' , tabname => 'MRP_ATP_DETAILS_TEMP', estimate_percent => 100 ,cascade => TRUE);
    2) exec DBMS_STATS.CREATE_STAT_TABLE ( ownname => 'MRP', stattab => 'MRP_ATP_3');
    3) exec DBMS_STATS.EXPORT_TABLE_STATS ( ownname => 'MRP', stattab => 'MRP_ATP_3', tabname => 'MRP_ATP_DETAILS_TEMP',statid => 'MRP27jan14');
    4) exec DBMS_STATS.IMPORT_TABLE_STATS ( ownname => 'MRP', stattab => 'MRP_ATP_3', tabname => 'MRP_ATP_DETAILS_TEMP');
    I understand that these commands are used to export and import table statistics,
    but can anyone please help me understand them in detail?
    Thanks in advance.
    Regards,
    Shiva.

    Shiva,
    Please post the details of the application release, database version and OS.
    Please see (FAQ: Statistics Gathering Frequently Asked Questions (Doc ID 1501712.1) -- What is the difference between DBMS_STATS and FND_STATS).
    For exporting/importing statistics, a summary of the FND_STATS subprograms can be found in (Doc ID 122371.1):
    http://etrm.oracle.com/pls/et1211d9/etrm_pnav.show_details?c_name=FND_STATS&c_owner=APPS&c_type=PACKAGE&c_detail_type=source
    http://etrm.oracle.com/pls/et1211d9/etrm_pnav.show_details?c_name=FND_STATS&c_owner=APPS&c_type=PACKAGE%20BODY&c_detail_type=source
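    In brief, using the names from the post: GATHER_TABLE_STATS computes fresh optimizer statistics for MRP.MRP_ATP_DETAILS_TEMP with a 100% sample and cascades to its indexes; CREATE_STAT_TABLE creates a regular user table (MRP_ATP_3) that can hold statistics outside the data dictionary; EXPORT_TABLE_STATS copies the table's current dictionary statistics into that holding table under the label MRP27jan14; and IMPORT_TABLE_STATS copies them back into the dictionary. A common pattern this enables is backing up statistics before a re-gather so they can be restored if plans regress; a sketch (the statid label PRE_GATHER is just an example):
    exec DBMS_STATS.EXPORT_TABLE_STATS (ownname => 'MRP', stattab => 'MRP_ATP_3', tabname => 'MRP_ATP_DETAILS_TEMP', statid => 'PRE_GATHER');
    exec DBMS_STATS.GATHER_TABLE_STATS (ownname => 'MRP', tabname => 'MRP_ATP_DETAILS_TEMP', estimate_percent => 100, cascade => TRUE);
    exec DBMS_STATS.IMPORT_TABLE_STATS (ownname => 'MRP', stattab => 'MRP_ATP_3', tabname => 'MRP_ATP_DETAILS_TEMP', statid => 'PRE_GATHER');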
    Thanks,
    Hussein

  • Language Problem while exporting and importing data

    hi,
    I have Oracle version 8.1.7.0.0 installed on one server and 9.2.0.1.0 installed on a new server.
    I'm copying and pasting my version info from SQL*Plus:
    SQL*Plus: Release 8.1.7.0.0 - Production on Mon Aug 22 10:46:31 2005
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    Connected to:
    Oracle8i Enterprise Edition Release 8.1.7.0.0 - Production
    With the Partitioning option
    JServer Release 8.1.7.0.0 - Production
    SQL>
    SQL*Plus: Release 9.2.0.1.0 - Production on Mon Aug 22 12:30:06 2005
    Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
    Connected to:
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    With the Partitioning, OLAP and Oracle Data Mining options
    JServer Release 9.2.0.1.0 - Production
    SQL>
    I created a new user on my new server from Enterprise Manager,
    exported the user from the old server and imported it on the new server.
    I.e., from Oracle8i Enterprise Edition Release 8.1.7.0.0 I did
    c:\>exp system/manager file=abc.dmp owner=abc
    Then on the new server, Release 9.2.0.1.0, I did
    c:\>imp system/manager file=abc.dmp fromuser=abc touser=abc
    I'm using the Arabic language on both servers. The NLS_LANG parameter on both servers is AMERICAN_AMERICA.WE8MSWIN1252.
    On both servers I'm able to insert and select data in Arabic.
    However, after I export the data from the old server to the new server, the Arabic data comes out as question marks.
    If I create a new table and insert Arabic data in the new server's user abc, it displays well. Only the data which I exported and imported is not showing Arabic.
    On both the old and new servers the operating system is Windows XP.
    I'm stuck with this problem. If anybody has any idea about how to solve it, please help.
    Thank you all in advance.
    Regards

    Let me be clear here. Storing Arabic data in a WE8MSWIN1252 database is not supported by Oracle and will lead to problems. You are incorrectly using the NLS_LANG to prevent proper conversion, and your data appears to be okay when you use utilities like SQL*Plus to view it. When you write applications that don't rely on the NLS_LANG, like the JDBC thin driver for instance, you will realize your data is in fact invalid. To learn more about the NLS_LANG you can take a look at this FAQ: http://www.oracle.com/technology/tech/globalization/htdocs/nls_lang%20faq.htm
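    To see what is actually stored, independent of any NLS_LANG conversion on the client, you can dump the raw bytes of a few rows (table and column names here are placeholders):
    SQL> SELECT DUMP(your_column, 1016) FROM abc.your_table WHERE ROWNUM <= 5;
    The output shows the character set tagged on the column and the stored hex codes, which makes it obvious whether the bytes are valid for the declared character set or pass-through Arabic bytes.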
    To migrate your database to a proper character set you can refer to this paper:
    http://www.oracle.com/technology/tech/globalization/pdf/mwp.pdf
    But please do not ask for help in supporting your current configuration in this forum.

  • Exporting and Importing of Schema

    The export and import of schemas is not working consistently in SP4.
    When I create a new repository from the schema, all the sub-tables get created, but the main table has only one field in it; it reads Category.
    The other fields in the main table are missing and are not imported during the Import Schema operation.
    Has anyone encountered the same issue?
    If so, please let me know how you overcame this erroneous behaviour.

    Hi Adhappan,
    I have tried importing without success. The import is not consistent: the same schema creates different sets of tables and fields each time. Sometimes, if you re-import, some more tables that were not set up the first time get selected for import.
    I did a simple exercise to test the import/export functionality:
    1. I unarchived a repository from standard content provided by SAP.
    2. I exported the schema.
    3. I tried importing the schema without any modifications to it.
    4. The results were highly inconsistent.
    Arvind
    Please reward useful answers.

  • Exporting and importing applications/workspaces

    I am currently in a testing workspace and want to move the application/workspace to another workspace. I know the export and import commands, but my concern is the application import. The application is linked to a sample table called "Samp", and I would now need to link it to a table called "test". This table is almost exactly the same as "Samp", with the exception of 2 or 3 new columns. I am wondering what I need to do to change the link to the new table and whether the application will still work.
    Thanks

    "419008",
    In cases like this, where I know the table name is going to be different across HTML DB instances, what I usually do is create a view on top of the table, and then build my HTML DB applications against the view. Then, assuming the definitions of the tables are the same, it should work without further modification.
    So in your testing workspace you could have:
    create view foo as select * from Samp
    and in your other workspace you could have:
    create view foo as select * from test
    Joel

  • Exporting and Importing Destination Controls Error

    When I export the table within Destination Controls and import it into another IronPort, again within Destination Controls, I receive the error: "Wrong format of the destination config file: ip_sort_pref is required for the global settings."
    We are updating AsyncOS on each of our de-clustered IronPorts, so when one is not in use, we need the other to handle all the traffic, including the destination controls.
    The output displays:
    [ABCDE.com]
    max_host_concurrency = 500
    limit_apply = system
    limit_type = host
    max_messages_per_connection = 50 ......
    How can I import this without losing or altering the data within?
    Solution
    We upgraded one of our SMTP appliances to AsyncOS 8.0.1, which creates an additional line within the default settings of the exported destination control table, named... wait for it... ip_sort_pref. Just add it.

    The applications and drivers have been imported into the Dev SCCM server; however, I cannot find the actual content, so I am not sure this has worked properly.
    When you exported, did you choose "Export all content for selected task sequence and dependencies"?
    80070003 = The system cannot find the path specified.
    See here for exporting and importing task sequences:
    http://blogs.technet.com/b/inside_osd/archive/2012/01/19/configuration-manager-2012-task-sequence-export-and-import.aspx
    Gerry Hampson | Blog: www.gerryhampsoncm.blogspot.ie | LinkedIn: Gerry Hampson | Twitter: @gerryhampson

  • Exporting and importing table using R3trans program between 2 clients

    Hi,
    How do I export and import a table between two clients in the same system using the R3trans program?
    I need to copy a table from client 020 to client 040 of the same system using R3trans. I need to know the procedure.
    Can any one advice
    Regards,
    Suresh

    This is how you do an export and import of table entries.
    Export:
    Open Notepad and type the following:
    export
    client = 020
    file = 'clone.export.<sid>.<client no>.data'
    select * from <client_dependent_tablename1>
    select * from <client_dependent_tablename2>
    select * from <client_dependent_tablenamen>
    Save the file as export.ctl
    Run: R3trans export.ctl
    The data of these tables will be stored in a file called clone.export.<sid>.<client no>.data in the directory from which you called R3trans.
    Import:
    Open Notepad and type:
    import
    client = 040
    file = 'clone.export.<sid>.<client no>.data'
    buffersync = yes
    Save the file as import.ctl
    Run: R3trans import.ctl
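    By default, R3trans writes its log to a file called trans.log in the current directory; a different log file can be chosen with the -w option, for example:
    R3trans -w import.log import.ctl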
    Cheers!
    Bidwan

  • Exporting and importing just table definitions

    Hi,
    I have a production database with a huge amount of data in it. I was asked to set up a test database based on the exact same schema as the live database. When I tried to do an export (from live) and import (to test) with the parameters rows=N and compress=y, the data files of the target (test) database still grew enormously, presumably because of the huge number of extents already allocated to the tables in the live database. My test database, of course, has limited hard-disk space.
    Is there a way to export and import the table definitions without the target database experiencing huge growth in tablespace size?
    Thanks,
    Chris.

    If an export with compress=n still creates initial extents that are too large, you can still build from the import file, but it will take a little work:
    Run imp with indexfile=somefile.sql.
    When imp is finished, edit somefile.sql as follows:
    1. Remove all the REM statements.
    2. Remove all the storage clauses (tables and indexes).
    Make sure your tablespaces have a small (say 1K) default initial extent.
    Run imp again with rows=n.
    All your tables and indexes will be created with the default tablespace initial extent.
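    Condensed into commands, the sequence above might look like this (a sketch only; user and file names are placeholders, and ignore=y is added so the final imp run skips the objects already created by the edited script):
    c:\>exp system/manager file=abc.dmp owner=abc rows=n compress=n
    c:\>imp system/manager file=abc.dmp fromuser=abc touser=abc indexfile=somefile.sql
    (edit somefile.sql: remove the REM prefixes and the storage clauses, then run it)
    c:\>sqlplus abc/abc @somefile.sql
    c:\>imp system/manager file=abc.dmp fromuser=abc touser=abc rows=n ignore=y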
