R3load process

Dear All,
I need to do a database export and import. We are using Windows 2003 and Oracle 9i with ECC 5.0 on IA64.
What is the process for R3load, and what precautions do I have to take when exporting the data from the PRD system?
Regards,

Hi,
I have some queries; we have a cluster environment.
I am not able to find the option Export Preparation in SAPinst (System Copy > Source System > ABAP System > Oracle > Unicode;
there are only two options: Table Splitting Preparation and ABAP Database Content Export).
Where will the export data go (into the Windows temp directory?), and how can we change the path of the export data?
The steps we are performing for the database export/import using SAPinst:
1. Log in as the administrator (e.g. PRDADMIN).
2. Stop SAP (the SAP services should be stopped).
3. Run SAPinst and choose the installation service Export Preparation.
4. Split (what is the best option for a fast export?)
Also, the system copy guide mentions deleting the QCM tables
in the source system.
Can we do a system copy with a 64-bit source and a 32-bit target?
Please suggest.
Regards,

Similar Messages

  • Dynamically increase R3load processes during import phase

    I am currently importing a system export, and SAPinst didn't give me the option to choose how many R3load jobs to run the import with. Currently, there are only 3 R3load processes running. I have 8 CPUs on this system and this load isn't even fazing it. I'd like to dynamically (as in, I don't want to restart SAPinst) switch this to use 8 processes (maybe even as high as 12). How would I go about doing this without impacting the jobs that are currently running?
    Thanks for any help you folks can provide.

    Hello,
    There is no standard way to do it.
    If you have just started the export and you feel it is really going to take a long time, then it is better to cancel the current execution and restart with a new export.
    Thanks
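For completeness: when the import is driven by the Migration Monitor rather than plain SAPinst, the degree of parallelism is set via the jobNum parameter in import_monitor_cmd.properties. A minimal sketch (the value 8 is illustrative, not taken from this thread; check the MigrationMonitor.pdf guide in MIGMON.SAR for your version's exact behavior):

```properties
# import_monitor_cmd.properties (excerpt, illustrative value)
# number of parallel R3load import jobs
jobNum=8
```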

  • Errors in R3load processes during Copy Source System ERP 2005

    Hello,
    I'm executing the source-system copy of an ERP 2005 SR2 non-Unicode system on Windows 2003 32-bit.
    During the R3load processes I get the following error for all packages.
    I used both the "administrator" and <SID>adm users, and the error is the same for both:
    F:\usr\sap\QAS\SYS\exe\run\R3load.exe: START OF LOG: 20091214170332
    F:\usr\sap\QAS\SYS\exe\run\R3load.exe: sccsid @(#) $Id: //bas/700_REL/src/R3ld/R3load/R3ldmain.c#20 $ SAP
    F:\usr\sap\QAS\SYS\exe\run\R3load.exe: version R7.00/V1.4
    Compiled Aug 24 2009 02:22:21
    F:\usr\sap\QAS\SYS\exe\run\R3load.exe -ctf E X:\EXP_QAS\ABAP\DATA\SAPAPPL1_77.STR X:\EXP_QAS\ABAP\DB\DDLORA.TPL SAPAPPL1_77.TSK ORA -l SAPAPPL1_77.log
    F:\usr\sap\QAS\SYS\exe\run\R3load.exe: job completed
    F:\usr\sap\QAS\SYS\exe\run\R3load.exe: END OF LOG: 20091214170332
    F:\usr\sap\QAS\SYS\exe\run\R3load.exe: START OF LOG: 20091214170332
    F:\usr\sap\QAS\SYS\exe\run\R3load.exe: sccsid @(#) $Id: //bas/700_REL/src/R3ld/R3load/R3ldmain.c#20 $ SAP
    F:\usr\sap\QAS\SYS\exe\run\R3load.exe: version R7.00/V1.4
    Compiled Aug 24 2009 02:22:21
    F:\usr\sap\QAS\SYS\exe\run\R3load.exe -e SAPAPPL1_77.cmd -l SAPAPPL1_77.log -stop_on_error
    DbSl Trace: ORA-1403 when accessing table SAPUSER
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): US7ASCII
    (DB) INFO: Export without hintfile
    Alternate NameTab for type "BAPIPAREX_HELP" was missing and has been simulated.
    Tab field CUEX-[9] has type DDic:168 ABAP:9
    (SYSLOG) INFO: k CQ9 : CUEX&DDic:168&ABAP:9&field:9&                        rscpgdio 41
    Tab field CUKN-[7] has type DDic:168 ABAP:9
    (SYSLOG) INFO: k CQ9 : CUKN&DDic:168&ABAP:9&field:7&                        rscpgdio 41
    Tab field CURSCOD-[5] has type DDic:168 ABAP:9
    (SYSLOG) INFO: k CQ9 : CURSCOD&DDic:168&ABAP:9&field:5&                     rscpgdio 41
    Tab field EDID2-[7] has type DDic:168 ABAP:9
    (SYSLOG) INFO: k CQ9 : EDID2&DDic:168&ABAP:9&field:7&                       rscpgdio 41
    Tab field EDID4-[7] has type DDic:168 ABAP:9
    (SYSLOG) INFO: k CQ9 : EDID4&DDic:168&ABAP:9&field:7&                       rscpgdio 41
    Tab field EDIDD_OLD-[6] has type DDic:168 ABAP:9
    (SYSLOG) INFO: k CQ9 : EDIDD_OLD&DDic:168&ABAP:9&field:6&                   rscpgdio 41
    Alternate NameTab for type "T52C5" was missing and has been simulated.
    Tab field TPRI_PAR-[13] has type DDic:172 ABAP:8
    (SYSLOG) INFO: k CQ9 : TPRI_PAR&DDic:172&ABAP:8&field:13&                   rscpgdio 41
    (GSI) INFO: dbname   = "QAS20061101081107                                                                                "
    (GSI) INFO: vname    = "ORACLE                          "
    (GSI) INFO: hostname = "LDCTES15                                                        "
    (GSI) INFO: sysname  = "Windows NT"
    (GSI) INFO: nodename = "LDCTES15"
    (GSI) INFO: release  = "5.2"
    (GSI) INFO: version  = "3790 Service Pack 1"
    (GSI) INFO: machine  = "4x Intel 801586 (Mod 33 Step 2)"
    (BEK) ERROR: SAPSYSTEMNAME not in environment
    F:\usr\sap\QAS\SYS\exe\run\R3load.exe: job finished with 1 error(s)
    F:\usr\sap\QAS\SYS\exe\run\R3load.exe: END OF LOG: 20091214170332
    Best regards, Alberto.

    > (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): US7ASCII
    First I'd suggest you switch the database codepage to a supported one:
    Note 102402 - Changing the database character set
    > Tab field CUEX-[9] has type DDic:168 ABAP:9
    > (SYSLOG) INFO: k CQ9 : CUEX&DDic:168&ABAP:9&field:9&                        rscpgdio 41
    > Tab field CUKN-[7] has type DDic:168 ABAP:9
    > (SYSLOG) INFO: k CQ9 : CUKN&DDic:168&ABAP:9&field:7&                        rscpgdio 41
    > Tab field CURSCOD-[5] has type DDic:168 ABAP:9
    > (SYSLOG) INFO: k CQ9 : CURSCOD&DDic:168&ABAP:9&field:5&                     rscpgdio 41
    > Tab field EDID2-[7] has type DDic:168 ABAP:9
    > (SYSLOG) INFO: k CQ9 : EDID2&DDic:168&ABAP:9&field:7&                       rscpgdio 41
    > Tab field EDID4-[7] has type DDic:168 ABAP:9
    > (SYSLOG) INFO: k CQ9 : EDID4&DDic:168&ABAP:9&field:7&                       rscpgdio 41
    > Tab field EDIDD_OLD-[6] has type DDic:168 ABAP:9
    > (SYSLOG) INFO: k CQ9 : EDIDD_OLD&DDic:168&ABAP:9&field:6&                   rscpgdio 41
    > Alternate NameTab for type "T52C5" was missing and has been simulated.
    > Tab field TPRI_PAR-[13] has type DDic:172 ABAP:8
    > (SYSLOG) INFO: k CQ9 : TPRI_PAR&DDic:172&ABAP:8&field:13&                   rscpgdio 41
    I'd execute a DDIC vs. database check in DB02OLD.
    > (BEK) ERROR: SAPSYSTEMNAME not in environment
    Is that variable set for <sid>adm?
    Markus
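As a quick check for the (BEK) error above, a minimal sketch follows. The SID QAS is taken from the log; on Windows (as in this thread) the equivalent is `set SAPSYSTEMNAME=QAS` in the cmd session or the system environment settings of the <sid>adm user:

```shell
# Make SAPSYSTEMNAME visible to R3load before starting the export
# (QAS is the SID from the log above; adjust for your system).
export SAPSYSTEMNAME=QAS
# Verify it is really set in the environment R3load will inherit.
echo "SAPSYSTEMNAME=$SAPSYSTEMNAME"
```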

  • OS DB migration R3SETUP -f DBEXPORT.R3S export : slow R3load process

    Hi,
    we're dealing with a 4.0B system for an EHP5 upgrade, but before that
    we have to migrate the OS from SunOS (Solaris 5.9), 4092 MB, 2 CPUs, to Linux (SUSE SLES 9).
    The system is: Solaris 9, SAP release 4.0B, database 650 GB, Oracle 9.2.0.7.
    The last 2 export processes have been running for 24 hours; it is very slow
    (the first 13 finished in 12 hours).
    We have checked Note 936441 (Oracle settings for R3load); we don't use splitting (MigMon).
    How could we improve it?
    It seems to be taking hours just to export a 3 GB table.
    Thanks for the help.
    Phetsada

    Hi,
    In the MIGMON.SAR archive you can find the Users Guide - MigrationMonitor.pdf.
    Read the section:
    Versions before WebAS 6.40 (R3E 4.7 SR1 and Previous)
    Best Regards,
    Yuri

  • Is R3load used in Upgrade process

    Hi all,
    During the upgrade, in the configuration phase, when we select
    Manual Selection of Parameters and provide values, R3load is by default 3.
    I am pretty confused about where R3load is used in the upgrade process.
    Why is R3load not mentioned in the upgrade guide if it is effectively used?
    SAP ERP 6.0 Including Enhancement Package 4 Support Release 1 ABAP,
    based on SAP NetWeaver 7.0 Including Enhancement Package 1.
    Please advise on the above.
    -Rahul

    Hi,
    As already mentioned, R3load is what loads the database tables during the upgrade process. If you have enough resources you can speed up the downtime phase of the upgrade by using more parallel R3load processes, but this will depend on how many CPUs and how much memory you have. Usually each R3load process takes around 512 MB of RAM, and you can use 1.5 R3load processes per CPU.
    I have used these numbers, given to us by SAP, and did not have any problems at all.
    Good luck,
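The rule of thumb above can be turned into a quick back-of-the-envelope calculation. A sketch (the CPU and RAM figures are examples; the 512 MB-per-process and 1.5-per-CPU numbers are the rough guidance from this reply, not an official formula):

```shell
cpus=8          # example: 8 CPUs
ram_mb=16384    # example: 16 GB RAM available for R3load
by_cpu=$(( cpus * 3 / 2 ))   # ~1.5 R3load processes per CPU
by_ram=$(( ram_mb / 512 ))   # ~512 MB of RAM per R3load process
# The lower of the two limits is the sensible maximum.
if [ "$by_cpu" -lt "$by_ram" ]; then procs=$by_cpu; else procs=$by_ram; fi
echo "max parallel R3load processes: $procs"
```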

  • R3load EXPORT tips for a 1.5 TB MaxDB database are welcome!

    Hello to all,
    this post is addressed to all MaxDB gurus!
    I have a few questions for the MaxDB performance experts about an R3load export on MaxDB.
    (Remark about myself:
    I'm a certified OS/DB migration consultant and have already done over 500 OS/DB migrations successfully since 1995.)
    BUT now I'm faced with the following setup for the export:
    HP IA64 BL870c, 4 CPUs, dual-core + hyperthreading active (i.e. 16 "CPUs" shown via top or glance),
    64 GB RAM, HP-UX 11.31
    MaxDB 7.7.07
    ECC 5.0; Basis/ABA 640.22
    SAP kernel 640 Unicode, patch 381
    8 sapdata volumes are configured on unique 200 GB LUNs and VGs in HP-UX; on the storage side the 8 220 GB LUNs are located on (only) 9 300 GB disks at RAID 5 level, and within HP-UX each VG/LVOL is mirrored via LVM to a second disaster data center (200 m distance).
    Log files:
    2 x 4 GB LUNs, RAID 1 on the storage, and via LVM also mirrored to the failover center.
    MaxDB data size: 1600 GB overall, of which 1350 GB used; TEMP usage about 25 GB!
    MaxDB parameter settings:
    MAXCPU 10 (4 x IA64 quad-core + hyperthreading shows 16 cores in top and glance)
    CACHE_SIZE = I/O buffer cache = 16 GB (2,000,000 pages)
    Data cache hit ratio: 99.61%
    Undo cache: 99.98%
    The following SAP Notes for MaxDB performance and migrations are well known and already processed:
    928037, 1077887, 1385089, 1327874, 954268, 1464560, 869267
    My major problem is the export runtime: over 4 days on the first dry run (6 R3loads running on a Linux application server), and 46 h on the second run (6 R3loads running on the IA64 DB server).
    The third trial run was aborted by me after 48 hours, with 50% of the export dump space written. In all 3 dry runs, no more than approx. 3.5 GB of dump space was written per hour!
    My first question to all MaxDB gurus: how can I influence/optimize the TEMP area in MaxDB? I didn't find any hints in SDN, SAP Notes, the MaxDB wiki or Google. As far as I know, this TEMP area resides within the MaxDB data files, so it is spread over my 48 data files across 8 LUNs/VGs/disks. But I see little throughput on the disks, and the MaxDB kernel uses only one of ten CPU cores (approx. 80-120% out of 1000%).
    The throughput and CPU usage don't change whether I use 2, 4, 10 or 16 R3load processes in parallel. The "result" is always the same: approx. 3.5 GB of export dump per hour, and 1 CPU used by the MaxDB kernel process!
    So the BIG question for me: WHERE is the bottleneck? (The RAID 5 disk LUNs mirrored with LVM?)
    On HP-UX I increased scsi_queue_depth_length from the default of 8 to 32, 64 and 128 --> no impact.
    2nd question:
    I've read Note 1327874 (FAQ: SAP MaxDB Read Ahead/Prefetch), and we are running MaxDB 7.7.07.19 (MaxDB 7.8 is not supported via the PAM for the IA64 640 Unicode kernel). As far as I understood, this parameter will not help me on an export in primary-key sequence; is this correct? THUS: which parameter helps to speed up the export runtime?
    MAXCPU is set to 10, but ONLY 1 of them is used??
    So this post is for ALL MaxDB gurus who would be interested in contributing to this "highly sophisticated" migration project with a 1.5 TB MaxDB database and ONLY 24 h downtime!
    All tips and hints are welcome, and I will post continued updates on this running project until WE have done a successful migration job.
    PS: The import has not yet started, but should be done within vSphere 5 and SLES 11 SP1 on MaxDB 7.8, and of course in parallel to the export with the Migration Monitor. BUT again a challenge: 200 km distance from source to target system!
    NICE PROJECT;
    best regards Alfred

    Hi Alfred,
    nice project ... just some simple questions:
    Did you open a message at SAP? Maybe you could buy some upgrade support; this could be useful to get direct access to SAP support.
    Which byte order do you use? I know Itanium can use both. It should be different between source and target, otherwise you would simply use a backup for the migration.
    And the worst question, which I do not even want to ask: what about your MAXCPU parameter? Is it set to more than 1? This could be why only one CPU is used.
    Best regards
    Christian

  • Unicode Migration on MaxDB - create indices - R3Load

    Hi Folks!
    Currently we are performing a Unicode migration (CU&UC) of ECC 6.0 on Windows 2003 / MaxDB 7.6.
    We are using the Migration Monitor for parallel export and import with 12 R3load processes on each side.
    The export finished successfully, but the import is taking very long for index creation.
    There is one strange behaviour:
    import_state.properties -> all packages are marked with "+" except one:
    S006=?
    When I take a look at S006.STR there is only one table (S006) and two indices (S006~ADQ, S006~Z00) listed.
    Taking a look at the import installDir:
    S006.TSK shows:
    T S006 C ok
    P S006~0 C ok
    D S006 I ok
    I S006~ADQ C ok
    S006.TSK.bck shows:
    T S006 C xeq
    P S006~0 C xeq
    D S006 I xeq
    I S006~ADQ C xeq
    I S006~Z00 C xeq
    S006.log shows:
    (DB) INFO: S006 created#20081203192452
    (DB) INFO: Savepoint on database executed#20081203192453
    (DB) INFO: S006~0 created#20081203192453
    (DB) INFO: Savepoint on database executed#20081203193256
    (IMP) INFO: import of S006 completed (4858941 rows) #20081203193256
    (DB) INFO: S006~ADQ created#20081203193351
    (DB) INFO: S006~ADQ created#20081203193351
    (DB) INFO: COSP~1 created#20081203193504
    (DB) INFO: COSP~2 created#20081203193920
    (DB) INFO: RESB~M created#20081203194032
    (and many many more)
    There is only one R3load running at the moment (the others are finished).
    I am not a MaxDB pro, but this is what it looks like to me:
    all packages have been imported successfully, and import_monitor is creating the MaxDB indices using just one R3load, with S006.log logging all the (secondary) indices which have been created.
    This procedure is taking very long (more than 12 hours now). Is this normal behaviour, or is there any way to speed up index creation after the data has been imported?
    If I recall correctly, with Oracle each R3load creates the table, imports the data and creates the indices right afterwards.
    On MaxDB it looks like index creation is performed by just one R3load after all the tables have been imported.
    I have read a note about MaxDB parallel index creation which might be the reason for this behaviour (only one index creation at a time, using parallel server tasks). But looking at the import runtime, this doesn't seem efficient.
    Any ideas or suggestions?
    Thanks a lot.
    /cheers

    You're right with what you see/saw. That behaviour is caused by the way MaxDB creates indexes. It uses "server tasks" to get all the blocks/pages from disk and create the index. If more than one index creation ran in parallel, it would take much more time, because they would "fight" for the maximum number of server tasks.
    You can see what the database is doing by executing
    x_cons <SID> sh ac 1
    Unfortunately there is no way of changing that behaviour....
    Markus

  • Database migration to MAXDB and Performance problem during R3load import

    Hi All Experts,
    We want to migrate our SAP landscape from Oracle to MaxDB (SAP DB). We have exported a database of 1.2 TB in 16 hrs using the package and table-level splitting method.
    Now I am importing into MaxDB, but the import is running very slowly (more than 72 hrs).
    Details of the import process below.
    We have been using the Distribution Monitor to import into the target system with MaxDB release 7.7. We are using three parallel application servers for the import, with distributed R3load processes on each application server with 8 CPUs.
    The database system is configured with 8 CPUs (single core) and 32 GB physical RAM. The MaxDB cache size for the DB instance is 24 GB. As per the SAP recommendation we are running 16 parallel R3load processes. Still the import is going too slowly, at more than 72 hrs (not acceptable).
    We have split 12 big tables into small units using table splitting, and we have also split packages into smaller ones to run in parallel. We maintained the load order in descending order of table and package size. Still we are not able to improve the import performance.
    MAXDB parameters are set as per below.
    CACHE_SIZE 3407872
    MAXUSERTASKS 60
    MAXCPU 8
    MAXLOCKS 300000
    CAT_CACHE_SUPPLY 262144
    MaxTempFilesPerIndexCreation 131072
    We are using all required SAP kernel utilities with recent releases during this process, i.e. R3load etc.
    So now I request all SAP and MaxDB experts to suggest all possible inputs to improve R3load import performance on the MaxDB database.
    Every input will be highly appreciated.
    Please let me know if I need to provide more details about import.
    Regards
    Santosh

    Hello,
    description of the parameter:
    MaxTempFilesPerIndexCreation (from version 7.7.0.3):
    the number of temporary result files in the case of parallel indexing.
    The database system indexes large tables using multiple server tasks. These server tasks write their results to temporary files. When the number of these files reaches the value of this parameter, the database system has to merge the files before it can generate the actual index. This results in a decline in performance.
    As for the maximum value, I wouldn't exceed it; for a 26 GB cache the value 131072 should be sufficient. I used the same value for a 36 GB CACHE_SIZE.
    On the other side, do you know which task is time-consuming? Is it the table import? Index creation?
    Maybe you can run migtime on the import directory to find out.
    Stanislav

  • Split a large table into multiple packages - R3load/MIGMON

    Hello,
    We are in the process of reducing the export and import downtime for the Unicode migration/conversion.
    In this process, we identified a couple of large tables which were taking a long time to export and import with a single R3load process.
    Step 1:> We ran the System Copy --> Export Preparation
    Step 2:> System Copy --> Table Splitting Preparation
    We created a file with the large tables which need to be split into multiple packages, and were able to create a total of 3 WHR files for the following tables under the DATA directory of the main EXPORT directory.
    SplitTables.txt (Name of the file used in the SAPINST)
    CATF%2
    E071%2
    This means we would like each of the above large tables to be exported using 2 R3load processes.
    Step 3:> System Copy --> Database and Central Instance Export
    During the SAPinst process, at the Split STR Files screen, we selected the option 'Split Predefined Tables' and selected the file which has the predefined tables.
    Filename: SplitTable.txt
    CATF
    E071
    When we started the export process, we didn't see the above tables being processed by multiple R3load processes.
    They were exported by a single R3load process.
    In the order_by.txt file, we found the following entries:
    # generated by SAPinst at: Sat Feb 24 08:33:39 GMT-0700 (Mountain Standard Time) 2007
    default package order: by name
    CATF
    D010TAB
    DD03L
    DOKCLU
    E071
    GLOSSARY
    REPOSRC
    SAP0000
    SAPAPPL0_1
    SAPAPPL0_2
    We have selected a total of 20 parallel jobs.
    Here my questions are:
    a> What are we doing wrong here?
    b> Is there a different way to specify/define a large table in multiple packages, so that they get exported by multiple R3load processes?
    I really appreciate your response.
    Thank you,
    Nikee

    Hi Haleem,
    As far as your queries are concerned:
    1. With R3ta, you split large tables using a WHERE clause, and WHR files get generated. If you have mentioned CDCLS%2 in the input file for table splitting, then it generates 2-3 WHR files (CDCLS-1, CDCLS-2 & CDCLS-3, depending upon the WHERE conditions).
    2. While using MigMon (for the sequential/parallel export-import process), you have the choice of the package order in the properties file.
      E.g. for the import, specify in import_monitor_cmd.properties:
    # package order: name | size | file with package names
        orderBy=/upgexp/SOURCE/pkg_imp_order.txt
       And in pkg_imp_order.txt, I have specified the import package order as:
      BSIS-7
      CDCLS-3
      SAPAPPL1_184
      SAPAPPL1_72
      CDCLS-2
      SAPAPPL2_2
      CDCLS-1
    Similarly, you can specify the export package order in the export properties file.
    I hope this clarifies your doubt.
    Warm Regards,
    SANUP.V

  • Systemcopy using R3load - Index creation VERY slow

    We exported a BW 7.0 system using R3load (newest tools and SMIGR_CREATE_DDL) and are now importing it into the target system.
    Source database size is ~ 800 GB.
    The export ran a bit more than 20 hours using 16 parallel processes. The import is still running with the last R3load process. Checking the logs, I found that it's creating indexes on various tables:
    (DB) INFO: /BI0/F0TCT_C02~150 created#20100423052851
    (DB) INFO: /BIC/B0000530000KE created#20100423071501
    (DB) INFO: /BI0/F0COPC_C08~01 created#20100423072742
    (DB) INFO: /BI0/F0COPC_C08~04 created#20100423073954
    (DB) INFO: /BI0/F0COPC_C08~05 created#20100423075156
    (DB) INFO: /BI0/F0COPC_C08~06 created#20100423080436
    (DB) INFO: /BI0/F0COPC_C08~07 created#20100423081948
    (DB) INFO: /BI0/F0COPC_C08~08 created#20100423083258
    (DB) INFO: /BIC/B0000533000KE created#20100423101009
    (DB) INFO: /BIC/AODS_FA00~010 created#20100423121754
    As one can see from the timestamps, the creation of one index can take an hour or more.
    x_cons shows constant CrIndex reads in parallel; however, the throughput is no more than 1-2 MB/sec. Those index creation processes have now been running for over two days (> 48 hours), and since the .TSK files no longer mention those indexes, I wonder how many of them are still to be created and how long this will take.
    The whole import was started at "2010-04-20 12:19:08" (according to import_monitor.log), so it has been running for more than three days now with four parallel processes. The target machine has 4 CPUs and 16 GB RAM (CACHE_SIZE is 10 GB). The machine is idling, though, at 98-99%.
    I have three questions:
    - Why does index creation take such a long time? I'm aware that the cache may not be big enough to hold all the data, but that speed is far from acceptable. Doing a Unicode migration, even in parallel, will lead to a downtime that may not be acceptable to the business.
    - Why are the indexes not created first and then filled with the data? Each savepoint may take longer, but I don't think it would take that long.
    - How can I find out which indexes are still to be created, and how can I estimate their average runtime?
    Markus

    Hi Peter,
    I would suggest creating an SAP ticket for this, because these kind of problems are quite difficult to analyze.
    But let me describe the index creation within MaxDB. If only one index creation process is active, MaxDB can use multiple Server Tasks (one for each Data Volume) to possibly increase the I/O throughput. This means the more Data Volumes you have configured, the faster the parallel index creation process should be. However, this hugely depends on your I/O system being able to handle an increasing amount of read/write requests in parallel. If one index creation process is running using parallel Server tasks, all further indexes to be created at that time can only utilize one single User Task for the I/O.
    The R3load import process assumes that the indexes can be created fast if all the necessary base table data is still present in the Data Cache. This mostly applies to small tables, up to table sizes that take up a certain amount of the Data Cache. All indexes for these tables are created right after the table has been imported, to make use of the fact that all the data needed for index creation is still in the cache. Many indexes may be created simultaneously here, but only one index at a time can use parallel Server Tasks.
    If a table is too large in relation to the total database size, then its indexes are being queued for serial index creation to be started when all tables were imported. The idea is that the needed base table data would likely have been flushed out of the Data Cache already and so there is additional I/O necessary rereading that table for index creation. And this additional I/O would greatly benefit from parallel Server Tasks accessing the Data Volumes. For this reason, the indexes that are to be created at the end are queued and serialized to ensure that only one index creation process is active at a time.
    Now you mentioned that the index creation process takes a lot of time. I would suggest (besides opening an OSS ticket) starting the MaxDB tool 'Database Analyzer' with an interval of 60 seconds configured during the whole import. In addition, you should activate the 'time measurement' to get a reading on the I/O times. Also, ensure that you have many Data Volumes configured and that your I/O system can handle the additional load. E.g. it would make no sense to have 25 Server Tasks all writing to a single local disk; I would assume that the disk would become a bottleneck...
    Hope my reply was not too confusing,
    Thorsten

  • R3load - what's the meaning of parameter "-para_cnt X"?

    During system copies/shell creations I always come across the parameter
    -para_cnt <count>    count of parallel R3load processes (MaxDB only)
    I wonder what's the usage of that parameter.
    Is that something like "if only one R3load is running use that to create the indexes"?
    Markus

    Hello Markus,
    The answer to the question "What is the meaning of the R3load option -para_cnt <count>?" can be found in SAP Note 1464560, p. 8.
    The note will be released to SAP customers by MaxDB development soon.
    Thank you and best regards, Natalia Khlopina

  • Very very slow r3load

    hi,
    I am performing the import using R3load on an Oracle 10.2.0.2.0 database which is on 4.5B before the upgrade.
    The init.ora parameters are maintained as per OSS Note 936441.
    For a 205 GB database the import is taking almost 13 hours, which looks very high.
    The DB is on 12 GB RAM, 64-bit hardware.
    Please provide any tips.
    regards,

    How many parallel R3load processes are you running?
    Markus

  • Very very slow r3load import

    hi,
    I am performing the import using R3load on an Oracle 10.2.0.2.0 database which is on 4.5B before the upgrade.
    The init.ora parameters are maintained as per OSS Note 936441.
    For a 205 GB database the import is taking almost 13 hours, which looks very high.
    The DB is on 12 GB RAM, 64-bit hardware.
    Please provide any tips.
    regards,

    I'm not sure how long it took to export your database, but for the import,
    the following things should be analysed:
    1. See how many R3load processes are running for the import. The formula is that 2 R3load processes per CPU can be configured. Make sure you leave at least 20% of resources free so things run smoothly. Don't overload the server; this may result in more errors.
    2. Also see if you can bump up the DB cache and DB writers.
    3. I'm not sure whether you can use "PRIMARY KEY" in the .TPL file or "-loadprocedure fast" while importing, because this feature is used in the 700 version for fast import; check with SAP. This really makes a difference in the import process.
    Regards,
    Vamshi.
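Point 3 above refers to the R3load command line. A hedged sketch of what such an invocation could look like (the package name SAPAPPL1 is illustrative, and whether the -loadprocedure option is available depends on your kernel release, so verify with `R3load -h` before relying on it):

```text
R3load -i SAPAPPL1.cmd -l SAPAPPL1.log -loadprocedure fast
```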

  • Urgent help required in migration

    Hello,
    I am stuck at 77% of the R3load process. 15 of my 16 processes finished successfully; only one has failed. I have pasted the log (SAPAPPL1.log) of that process below. Requesting your urgent help.
    DbSl Trace: Connecting via TNS_ADMIN=/oracle/F01/920_64/network/admin, tnsname=F01
    DbSl Trace: Got NLS_LANG=AMERICAN_AMERICA.US7ASCII from environment
    DbSl Trace: Now I'm connected to ORACLE
    DbSl Trace: Database instance F01 is running on e027pae0 with ORACLE version 9.2.0.7.0
    trying to restart import ###
    (RIM) INFO: table "BKPF" truncated
    restart finished ###
    #START: 20070530101004
    TAB: BKPF
    (RD) ERROR: missing last block number for table "BKPF" in directory file "/sapcd/EXPDIR/DATA/SAPAPPL1.TOC"
    #STOP: 20070530101004
    Thanks & Regards,
    Bhushan

    Did you split the files?
    It seems that the export file for this table got corrupted.
    Unfortunately, you have to start again. I had a similar problem and had to do it again.

  • Regarding Distribution Monitor for export/import

    Hi,
    We are planning to migrate a 1.2 TB database from Oracle 10.2g to MaxDB 7.7, and are currently testing the migration on a test system with 1.2 TB of data.
    First we tried a simple export/import, i.e. without the Distribution Monitor: we were able to export the database in 16 hrs, but the import ran for more than 88 hrs, so we aborted it. Later we found that we can use the Distribution Monitor and distribute the export/import load across multiple systems so that the import completes within a reasonable time. We used 2 application servers for the export/import; the export completed within 14 hrs, but here again the import ran for more than 80 hrs, so we aborted it. We also did table splitting for the big tables, but no luck. 8 parallel processes were running on each server, i.e. one CI and 2 app servers. We followed the DistributionMonitorUserGuide document from SAP.
    I observed that on the central system CPU and memory utilization was above 94%, but on the 2 application servers we added, CPU and memory utilization was very low, i.e. 10%. Please find the system configuration below:
    Central instance: 8 CPUs (550 MHz), 32 GB RAM
    App server 1: 8 CPUs (550 MHz), 16 GB RAM
    App server 2: 8 CPUs (550 MHz), 16 GB RAM
    Also, when I used the top Unix command on the app servers I was able to see only one R3load process in the run state; all the other 7 R3load processes were in the sleep state. But on the central instance all 8 R3load processes were in the run state. I think that since not all 8 R3load processes were running at a time on the app servers, that could be the reason for the very slow import.
    Can someone please let me know how to improve the import time? Also, if someone has done a database migration from Oracle 10.2g to MaxDB, it would be helpful if they could tell how they did it, and any specific document available for a database migration from Oracle to MaxDB would also be helpful.
    Thanks,
    Narendra

    > Also, when I used the top Unix command on the app servers I was able to see only one R3load process in the run state; all the other 7 R3load processes were in the sleep state. But on the central instance all 8 R3load processes were in the run state. I think that since not all 8 R3load processes were running at a time on the app servers, that could be the reason for the very slow import.
    > Can someone please let me know how to improve the import time?
    R3load connects directly to the database and loads the data. The question here is: how is your database configured (in terms of caches and memory)?
    > Also, if someone has done a database migration from Oracle 10.2g to MaxDB, it would be helpful if they could tell how they did it, and any specific document available for a database migration from Oracle to MaxDB would also be helpful.
    There are no such documents available, since the process of migrating to another database is called a "heterogeneous system copy". This process requires a certified migration consultant to be on-site to do/assist the migration. Those consultants are trained specially for certain databases and know tips and tricks for improving the migration time.
    See
    http://service.sap.com/osdbmigration
    --> FAQ
    For MaxDB there's a special service available, see
    Note 715701 - Migration to SAP DB/MaxDB
    Markus
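The run-state observation from the question can be checked quickly on each application server. A sketch of counting R3load processes per scheduler state; the `ps` output is simulated here with the 1-running/7-sleeping pattern described in the post, and on a live system you would pipe `ps -eo state=,comm=` (options may vary by Unix flavor) into the same awk:

```shell
# Simulated "ps -eo state=,comm=" output matching the post's observation:
# 1 R3load runnable (R), 7 sleeping (S).
ps_sample='R R3load
S R3load
S R3load
S R3load
S R3load
S R3load
S R3load
S R3load'
# Count R3load processes per state; sort for stable output order.
printf '%s\n' "$ps_sample" |
  awk '$2 == "R3load" { n[$1]++ } END { for (s in n) print s, n[s] }' |
  sort
```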
