Parallel export/import
For reasons that I won't go into, I need to export ALL data when employing the MaxL import & export for the nightly defrag. I am using parallel export, which in theory should allow the export to be larger than 2 GB, assuming that the output is spread over 4 files. Things were fine until this week. It looks like Essbase on Solaris doesn't split the output as evenly as I had hoped. Here are the output files:
-rw-r--r-- 1 aphynspr hynspr 246K Nov 23 03:17 3.txt
-rw-r--r-- 1 aphynspr hynspr 283K Nov 23 03:17 4.txt
-rw-r--r-- 1 aphynspr hynspr 304K Nov 23 03:17 2.txt
-rw-r--r-- 1 aphynspr hynspr 2.0G Nov 23 03:50 1.txt
-rw-r--r-- 1 aphynspr hynspr 15M Nov 23 03:50 1_1.txt
All of the data is in the first file.
Does anyone know how Essbase slices the data across the export threads? Other than exporting level zero or adding a preprocessor to determine the number of files, does anyone have a suggestion so that I can import the appropriate files?
Thanks,
Dave
Glenn, you nailed it! The anchor dimension seems to control how the data is allocated to the threads. This database was organized as a modified hourglass with the non-aggregating sparse dimensions last. The anchor dimension has a dozen members, but 99% of the data is in one member. Coincidentally, one of the output files contained 99% of the data. When we made the largest sparse dimension the anchor, data was allocated to the threaded output files evenly. Good call.
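For anyone else hitting the 2 GB limit, here is a minimal sketch of the parallel MaxL export described above, run through essmsh (the app/db name, credentials and file paths are placeholders, not from this system):

```shell
# Illustrative only: app/db name, credentials and paths are placeholders.
# Listing several data_file targets makes Essbase write the export with
# one thread per file; how evenly the data spreads depends on the anchor
# dimension, as discussed above.
essmsh <<'EOF'
login admin identified by password on localhost;
export database sample.basic all data
    to data_file '/export/exp1.txt',
                 '/export/exp2.txt',
                 '/export/exp3.txt',
                 '/export/exp4.txt';
logout;
EOF
```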
Gary, you’ve convinced me to look at DATAEXPORT anyway. Back in the summer I ran a test extract from Essbase. I tested a report script, DATAEXPORT to a text file, and DATAEXPORT to Oracle 11i. The times were 7 seconds, 1 minute, and 40 minutes respectively. I’ve never used the binary format with DATAEXPORT, but I’m guessing that I can find a use for it.
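For reference, DATAEXPORT lives in a calc script rather than in MaxL; a minimal sketch of a text-file export follows (the FIX member, options and path are illustrative, not the actual test above):

```
/* Illustrative calc-script sketch; options and path are placeholders */
SET DATAEXPORTOPTIONS
    {
    DataExportLevel "ALL";        /* or "LEVEL0" / "INPUT" */
    DataExportColFormat ON;       /* columnar text output */
    };
FIX (@DESCENDANTS("Year"))
    DATAEXPORT "File" "," "/tmp/expall.txt" "#MI";
ENDFIX
```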
Thanks for all of the suggestions
Dave
Similar Messages
-
Parallel export/Import for exchange directory
Hi,
We have executed a homogeneous system copy (Win08/Ora 11) on ECC6 EHP4, and in order to speed up the process I selected the parallel export/import option on the target system, as the export on the source system is being executed by another vendor.
During the import ABAP phase it displays "waiting -- mins for package data in exchange directory", as I provided the path during the input phase of "communication parameters for Import" in the Exchange directory as <export directory>/Data.
Is this the correct path? I have also read somewhere that <TABART>.SQL files will be generated if the same parallel export/import option was enabled on the source system, and I couldn't find any *.SQL files in the export directory.
Does that mean the person executing the export on the source system did not consider this option / did not generate the SQL files with SMIGR_CREATE_DDL?
NB: I am executing the import with SAPINST.
Regards,
Dheeraj
Dear Dheeraj,
Kindly check in the export directory path whether <package>.SGN files are being generated or not.
Also, we can check in exportmonitor.log: after the export of an individual package, the log says the package was transferred to the network share directory. If that is not mentioned in the log, parallel export/import is not possible.
If the .SGN files are generated, parallel import will happen; if not, it will not.
It has nothing to do with the SQL files.
Regards
Ram -
Multiple servers in parallel export/import question
Hello all,
We plan to use parallel export/import using distribution monitor, and want to use several servers in export and also in import.
The export_monitor.cmd.properties file has an FTP option, ftphost, which is the hostname of the import server. It looks like it will only use one application server (AS) from the source system and one AS in the target system:
- is this true, or can we use more application servers in both export and import?
- in the export, how do we specify which AS to use?
- in the export, how do we specify more than one AS?
- in the import, how do we specify more than one AS?
Thanks,
Terry
You can use the Distribution Monitor for that purpose. Check note 855772 - Distribution Monitor
Markus -
46B Parallel Export and Import
Dear All,
We would like to have a SAP Migration from AIX to Windows 2003 but we would like to know more information in this case.
Server A: AIX
Server B: Windows 2003
DB: Oracle 10g
1. Can we have a Parallel Export and Import in 46B?
2. If yes, would you mind to provide a procedure to me?
many many thanks.
> 1. Can we have a Parallel Export and Import in 46B?
> 2. If yes, would you mind to provide a procedure to me?
To migrate a 46B system you need to get the migration tool CD which you get only if you have an extended maintenance contract.
You can split tables but this functionality is not integrated in the setup tools (R3SETUP) and must be done "manually". Since 46B is long out of support the necessary documentation is either spread around in notes or is not available at all.
I'd not do a production migration without a certified migration consultant (see http://service.sap.com/osdbmigration).
Markus -
Regarding Distribution Monitor for export/import
Hi,
We are planning to migrate a 1.2 TB database from Oracle 10.2g to MaxDB 7.7, and we are currently testing the migration on a test system. First we tried a simple export/import, i.e. without the Distribution Monitor: we were able to export the database in 16 hrs, but the import had been running for more than 88 hrs, so we aborted the import process. Later we found that we can use the Distribution Monitor to spread the export/import load over multiple systems so that the import completes within a reasonable time. We used 2 application servers for the export/import; the export completed within 14 hrs, but here again the import had been running for more than 80 hrs, so we aborted it. We also did table splitting for the big tables, but no luck. 8 parallel processes were running on each server, i.e. one CI and 2 app servers. We followed the DistributionMonitorUserGuide document from SAP. I observed that on the central system the CPU and memory utilization was above 94%, but on the 2 application servers we added, the CPU and memory utilization was very low, i.e. 10%. Please find the system configuration below:
Central Instance - 8CPU (550Mhz) 32GB RAM
App Server1 - 8CPU (550Mhz) 16GB RAM
App Server2 - 8CPU (550Mhz) 16GB RAM
Also, when I used the top Unix command on the app servers, I could see only one R3load process in the run state while the other 7 R3load processes were sleeping, whereas on the central instance all 8 R3load processes were running. I think the fact that not all 8 R3load processes were running at a time on the app servers could be the reason for the very slow import.
Please can someone let me know how to improve the import time? Also, if someone has done a database migration from Oracle 10.2g to MaxDB, it would be helpful if they could tell how they did it, and any specific document available for migrating from Oracle to MaxDB would also be helpful.
Thanks,
Narendra
> And also when i used top unix command on APP servers i was able to see only one R3load process to be in run state and all other 7 R3load process was in sleep state. But on central instance all 8 R3load process was in run state. I think as on APP servers all the 8 R3load process was not running add a time that could be the reason for very slow import.
> Please can someone let me know how to improve the import time.
R3load connects directly to the database and loads the data. The question here is: how is your database configured (in terms of caches and memory)?
> And also if someone has done the database migration from Oracle 10.2g to MaxDB if they can tell how they had done the database migration will be helpful. And also if any specific document availble for database migration from Oracle to MaxDB will be helpful.
There are no such documents available, since the process of migrating to another database is called a "heterogeneous system copy". This process requires a certified migration consultant to be on-site to do/assist the migration. Those consultants are trained specifically for certain databases and know tips and tricks to improve the migration time.
See
http://service.sap.com/osdbmigration
--> FAQ
For MaxDB there's a special service available, see
Note 715701 - Migration to SAP DB/MaxDB
Markus -
Materialized View with "error in exporting/importing data"
My system is 10g R2 on AIX (dev). When I impdp a dump from another box, also 10g R2, the import log file contains an error about the materialized view:
ORA-31693: Table data object "BAANDB"."MV_EMPHORA" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
Desc mv_emphora
Name        Null?     Type
C_RID                 ROWID
P_RID                 ROWID
T$CWOC      NOT NULL  CHAR(6)
T$EMNO      NOT NULL  CHAR(6)
T$NAMA      NOT NULL  CHAR(35)
T$EDTE      NOT NULL  DATE
T$PERI                NUMBER
T$QUAN                NUMBER
T$YEAR                NUMBER
T$RGDT                DATE
As I checked here and on Metalink, I found the information has little to do with the MV itself, so what was the cause? The log has 25074 lines in total, so I used grep from the OS to pull out the lines involving the MV. Here they are:
grep -n -i "TTPPPC235201" impBaanFull.log
5220:ORA-39153: Table "BAANDB"."TTPPPC235201" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
5845:ORA-39153: Table "BAANDB"."MLOG$_TTPPPC235201" exists and has been truncated. Data will be loaded but all dependent meta data will be skipped due to table_exists_action of truncate
8503:. . imported "BAANDB"."TTPPPC235201" 36.22 MB 107912 rows
8910:. . imported "BAANDB"."MLOG$_TTPPPC235201" 413.0 KB 6848 rows
grep -n -i "TTCCOM001201" impBaanFull.log
4018:ORA-39153: Table "BAANDB"."TTCCOM001201" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
5844:ORA-39153: Table "BAANDB"."MLOG$_TTCCOM001201" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
9129:. . imported "BAANDB"."MLOG$_TTCCOM001201" 9.718 KB 38 rows
9136:. . imported "BAANDB"."TTCCOM001201" 85.91 KB 239 rows
grep -n -i "MV_EMPHORA" impBaanFull.log
8469:ORA-39153: Table "BAANDB"."MV_EMPHORA" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
8558:ORA-31693: Table data object "BAANDB"."MV_EMPHORA" failed to load/unload and is being skipped due to error:
8560:ORA-12081: update operation not allowed on table "BAANDB"."MV_EMPHORA"
25066:ORA-31684: Object type MATERIALIZED_VIEW:"BAANDB"."MV_EMPHORA" already exists
25072: BEGIN dbms_refresh.make('"BAANDB"."MV_EMPHORA"',list=>null,next_date=>null,interval=>null,implicit_destroy=>TRUE,lax=>
FALSE,job=>44,rollback_seg=>NULL,push_deferred_rpc=>TRUE,refresh_after_errors=>FALSE,purge_option => 1,parallelism => 0,heap_size => 0);
25073:dbms_refresh.add(name=>'"BAANDB"."MV_EMPHORA"',list=>'"BAANDB"."MV_EMPHORA"',siteid=>0,export_db=>'BAAN'); END;the number in front of each line is the line number of the import log.
Here is my syntax for the import pump:
impdp user/pw SCHEMAS=baandb DIRECTORY=baanbk_data_pump DUMPFILE=impBaanAll.dmp LOGFILE=impBaanAll.log TABLE_EXISTS_ACTION=TRUNCATE
Yes, I can create the MV manually, and I have no problem refreshing it manually after the import. -
Database 10.2.0
While doing an export/import operation, what are the factors affecting the speed of the operation?
Below are the points I collected from different links in my Oracle notes:
Faster Export:
1. direct=y
2. subsets by query option
3. proper buffer size by rows in array * max row size
4. place the dump file to another disk (which is not having any datafile; to avoid I/O contention)
5. when database activity is low (in the morning or late eve)
6. PARALLEL
Faster Import:
1. indexes=n
2. constraints=n
3. use a large buffer parameter
4. increase recordlength
5. commit=n
6. statistics=none
7. enough undo size
8. enough pga/sort_area_size
9. enough / Increase db_cache_size/db_block_buffers size and share_pool_size
10. when database activity is low (in the morning or late eve)
11. PARALLEL
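As a rough illustration, several of the points above map onto classic exp/imp command lines like this (user, password and file names are made-up examples, not a tuning recommendation; note that PARALLEL is a Data Pump option -- with classic exp/imp you parallelize by running multiple jobs yourself):

```shell
# Illustrative placeholders only.
# Export: direct path, dump file placed on a separate disk.
exp scott/tiger FILE=/disk2/scott.dmp LOG=exp.log DIRECT=Y RECORDLENGTH=65535

# Import: skip indexes/constraints, large buffer, no interim commits.
imp scott/tiger FILE=/disk2/scott.dmp LOG=imp.log \
    INDEXES=N CONSTRAINTS=N COMMIT=N BUFFER=10485760 STATISTICS=NONE
```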
Now check how your import is going on:
select
substr(sql_text,instr(sql_text,'into "'),30) table_name,
rows_processed, round((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60,1) minutes,
trunc(rows_processed/((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60)) rows_per_minute
from
sys.v_$sqlarea
where
sql_text like 'insert %into "%' and command_type = 2 and open_versions > 0;
Regards
Girish Sharma -
System copy using R3load ( Export / Import )
Hi,
We are testing System copy using R3load ( Export / Import ) using our production data.
Environment : 46C / Oracle.
While executing the export, it takes more than 20 hours for the data to get exported. We want to reduce the export time drastically; hence, we want to achieve this by scrutinizing the input parameters.
During export, there is a parameter named "splitting the *.STR and *.EXT files for R3load".
The default input for this parameter is "No, do not split STR and EXT files".
My question 1 : If we input yes instead of No and split the *.STR and *.EXT files, will the export time get reduced ?
My question 2: If the answer to question 1 is yes, will the time saved be significant? What percentage reduction can we expect compared to not splitting?
Best Regards
L Raghunahth
Hi
The time of the export depends on the size of your database (and the size of your biggest tables) and your hardware capacity.
My question 1 : If we input yes instead of No and split the *.STR and *.EXT files, will the export time get reduced ?
In case you have a few very large tables, and you have multiple cpu's and a decent disk storage, then splitting might significantly reduce the export time.
My question 2: If the answer to question 1 is yes, will the time saved be significant? What percentage reduction can we expect compared to not splitting?
As you did not tell us about your database size and hardware details, there is no way to give you anything but very basic metrics. Did you specify a parallel degree at all? Was your hardware idling for the 20 hours, or already fully loaded?
20 hours for a 100 GB database is very slow. It is reasonable (rather fast, in my opinion) for a 2 TB database.
Best regards, Michael -
Export/import vs. sqlloader performance
Does anyone have any benchmarks or observations regarding which is faster execution wise?
We have some very large tables and I need to know if sqlloader would work faster than export/import in those situations.
Thanks.
It would help to know what you are trying to accomplish.
Import / Export are designed to provide a transfer mechanism to move database objects from one database to another. In order to use the import utility, you must already have a file which was created with the export utility.
SQL*Loader is designed to provide a mechanism to load data from external files into Oracle.
There are various optimizations available for each technique (including parallel processing, direct load, etc). It really depends on the problem you are trying to solve.
Also, other techniques might be useful, including:
- external tables (available starting with 9i, allows querying text files with SQL - no SQL*Loader necessary)
- database links (can be used as an alternative to import/export if your goal is simply to move data from one Oracle database to another)
- Data Pump, starting with Oracle 10g -
Hi All,
I want to reduce the total time spent during export-import.
Restrictions:
=========
1)Cross Platform
2)exp and imp from lower version to upper version
3)No 10G database
Basically I want to do exp and imp in parallel so as to reduce the total time spent on this activity. I thought of doing schema-level exp-imp in parallel, but I am afraid of the dependencies.
Is there any other way to achieve the same or if i go with the above specified approach, can anyone provide some valuable inputs to that.
I am trying to automate the above so that it becomes a one-time effort, and from then on the script should do everything on its own.
Thanks and regards
Neeraj
Hi All,
Data volume is not less than 60GB and not more than 150GB.
If I use a pipe on Unix between exp and imp, what happens if my exp is slower at any point in time and the import is faster (for whatever reason)?
Will the import wait for the contents coming into the pipe through the export, or will the import fail?
I can consider creating indexes using the flat file.
Is there any way to get only the indexes in the flat file? I mean, if I use the INDEXFILE option for import, it gives me "CREATE TABLE..." statements too, which means the import utility is reading the full dumpfile and giving me the output; I want only the "CREATE INDEX..." statements in the flat file.
What about the schema-level export and import?
Any valuable inputs or proper steps from anyone out there?
Any restrictions while importing the schemas?
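On the INDEXFILE question above: once you have the indexfile, a small awk filter can strip everything except the CREATE INDEX statements. The sample input below is a synthetic stand-in for an imp indexfile (real ones wrap the table DDL in REM comments, but the filtering idea is the same):

```shell
# Synthetic stand-in for an imp INDEXFILE output.
cat > toy_indexfile.sql <<'EOF'
CREATE TABLE "HR"."EMP" ("ID" NUMBER,
"NAME" VARCHAR2(30)) ;
CREATE INDEX "HR"."EMP_IDX" ON "EMP" ("ID")
TABLESPACE "USERS" ;
CREATE TABLE "HR"."DEPT" ("ID" NUMBER) ;
CREATE INDEX "HR"."DEPT_IDX" ON "DEPT" ("ID") ;
EOF

# Print from each CREATE INDEX line up to the first statement-ending ";"
awk '/^CREATE INDEX/{p=1} p{print} p && /;[[:space:]]*$/{p=0}' \
    toy_indexfile.sql > indexes.sql
cat indexes.sql
```

Running this leaves only the two multi-line CREATE INDEX statements in indexes.sql; the CREATE TABLE statements never match the start condition, so they are dropped.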
Thanks and Regards -
Migrating cubes using export/import
Dear All, we need to migrate a cube (approx. 6 GB) to another server. We are running Essbase 6.1p3. I have done a "save as" to copy the outline to the other server and performed an export of "all data" from the source DB (the export file is approx. 6 GB). With the 2 GB limitation on import, I am having problems migrating the data. The option of exporting at input level / level 0 and then reaggregating the DB will not work, as all aggregations were performed using dynamic calc scripts. Please help.
You might try the parallel export. We use the command:
PAREXPORT "-threads 8" -IN "D:\HYPERION\ESSINTEGRATION\APP\INTEXP.TXT" "1" "0";
In the file INTEXP.TXT, we have 8 filenames for it to write the exports to. We are also running 6.1 patch 3a. We use this to get around the 2 GB limit. It breaks the export into 8 files; however, it doesn't break them evenly, so you may still run into the 2 GB limit.
-
SAP Export/Import via R3load
Hello,
SAP recommends a sorted export/import for Oracle databases. An unsorted export/import is much quicker than a sorted one.
"Much quicker" means sorted = e.g. 56 hours and unsorted = e.g. 12 hours.
Is there any problems if i take the unsorted export?
regards,
stefan
Hello Stefan,
>> SAP recommended a sorted export/import for oracle databases
That fact relies on database-internal factors like the clustering factor of the indexes on the table. You will benefit from a sorted export in cases of primary index access and so on... but this topic is much bigger than your question.
>> very quicker means sorted = eg. 56 hours and unsorted eg. 12 hours.
Where did you get these values from? You can split big tables or packages ... or define parallel access paths and so on... and it depends on your database configuration (sort areas, PGA, etc.).
I have done some migrations/exports with DistMon... and optimized them manually... so I went from roughly 72 hours (normal export with some split packages) down to only 12 hours. I have done only sorted exports.
>> Is there any problems if i take the unsorted export?
Some tables must be exported sorted (for example, cluster tables) - have a look at SAP note 954268 ... but generally there are no "problems".
If you have a BW system... some rules are changing...
@ Markus:
>> You mean the usage of "R3load -loadprocedure fast"? Or how do you do the "unsorted unload"?
The option "loadprocedure fast" has nothing to do with the unload process. It speeds up the import (insert) by bypassing the buffer cache. Refer to SAP note 1045847 and the following link:
http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96524/c21dlins.htm
Regards
Stefan -
Export ,import efficiency
Hi,
I have a basic question here. Which is more efficient exp/imp or the expdp/impdp and why? I would like to read about it. good documents are welcomed.
Thanks
Kris
Hi,
Definitely expdp/impdp (Data Pump export/import) is much better than the original exp/imp, which was more commonly used in Oracle 9i databases.
The top 10 differences between exp/imp (export/import) and expdp/impdp (Data Pump export and import) are:
1)Data Pump Export and Import operate on a group of files called a dump file set
rather than on a single sequential dump file.
2)Data Pump Export and Import access files on the server rather than on the client.
This results in improved performance. It also means that directory objects are
required when you specify file locations.
3)The Data Pump Export and Import modes operate symmetrically, whereas original
export and import did not always exhibit this behavior.
For example, suppose you perform an export with FULL=Y, followed by an import using SCHEMAS=HR. This will produce the same results as if you performed an
export with SCHEMAS=HR, followed by an import with FULL=Y.
4)Data Pump Export and Import use parallel execution rather than a single stream of
execution, for improved performance. This means that the order of data within
dump file sets and the information in the log files is more variable.
5)Data Pump Export and Import represent metadata in the dump file set as XML
documents rather than as DDL commands. This provides improved flexibility for
transforming the metadata at import time.
6)Data Pump Export and Import are self-tuning utilities. Tuning parameters that
were used in original Export and Import, such as BUFFER and RECORDLENGTH,
are neither required nor supported by Data Pump Export and Import.
7)At import time there is no option to perform interim commits during the
restoration of a partition. This was provided by the COMMIT parameter in original
Import.
8)There is no option to merge extents when you re-create tables. In original Import,
this was provided by the COMPRESS parameter. Instead, extents are reallocated
according to storage parameters for the target table.
9)Sequential media, such as tapes and pipes, are not supported.
10)The Data Pump method for moving data between different database versions is
different than the method used by original Export/Import. With original Export,
you had to run an older version of Export (exp) to produce a dump file that was
compatible with an older database version. With Data Pump, you can use the
current Export (expdp) version and simply use the VERSION parameter to specify the target database version
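As a quick sketch of points 1, 4 and 10 on the command line (the schema, directory object and version are made-up examples):

```shell
# %U expands to 01, 02, ... giving a dump file SET; PARALLEL starts
# multiple worker processes; VERSION targets an older database release.
expdp hr/hr SCHEMAS=hr DIRECTORY=dp_dir DUMPFILE=hr_%U.dmp \
      PARALLEL=4 VERSION=10.2 LOGFILE=hr_exp.log
```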
For more details and options:
exp help=y
imp help=y
expdp help=y
impdp help=y
Fine manuals for referring:
http://www.oracle-base.com/articles/10g/OracleDataPump10g.php
Hope it helps.
Best regards,
Rafi.
http://rafioracledba.blogspot.com -
Memory Limitation on EXPORT & IMPORT Internal Tables?
Hi All,
I have a need to export and import internal tables to memory; I do not want to export them to any database tables. Is there a limitation on the amount of memory that can be used for EXPORT & IMPORT? I will free the memory once I import it. The maximum I expect would be 13,000,000 lines.
Thanks,
Alex (Arthur Samson)
You don't have limitations, but try to keep your table as small as possible.
Otherwise, if you are familiar with the ABAP OO context, try using Shared Objects instead of IMPORT/EXPORT.
<a href="http://help.sap.com/saphelp_erp2004/helpdata/en/13/dc853f11ed0617e10000000a114084/frameset.htm">SAP Help On Shared Objects</a>
Hope this helps,
Roby. -
Using set/get parameters or export/import in BSP.
Hi All,
Is it possible to use set/get or export/import in BSP?
We need to set/export some variables from a BADI and get/ import them in the BSP application.
Code snippet will be of great help..
Thanks,
Anubhav
Hi Anubhav,
You can use the Export / Import statements for your requirement,
from the BADI use EXPORT to send the variable data to a unique memory location
with IDs
e.g.
*data declaration required for background processing
DATA: WA_INDX TYPE INDX.
**here CNAME is the variable you want to export
EXPORT PNAME = CNAME TO DATABASE INDX(XY) FROM WA_INDX CLIENT
SY-MANDT ID 'ZVAR1'.
and in the BSP application use the IMPORT statement to fetch back the values
set with the IDs above.
IMPORT PNAME = LV_CNAME
FROM DATABASE INDX(XY) TO WA_INDX CLIENT
SY-MANDT ID 'ZVAR1'.
Delete the data afterwards to avoid wasting memory:
DELETE FROM DATABASE INDX(XY)
CLIENT SY-MANDT
ID 'ZVAR1'.
Regards,
Samson Rodrigues