R3load question
Hi Gurus,
I'd like your opinion on this Unicode conversion situation. The background is that we cannot (for many reasons) upgrade our SAP kernel 620 (level 1773) NUC. Our SAP system is an R/3 4.7 Ext 1.00 NUC system, and we want to convert it to Unicode with a two-server strategy. So we have taken the following steps:
1) Preconversion checks on the source system with kernel 620 (patch level 1773)
2) Export and conversion with the R3load of SAP kernel 620 (1773): export completed successfully
3) Import with R3load 640 into the new target system (R/3 4.7 Ext 1.00, 64-bit, Unicode) with kernel 640 UC: import completed successfully.
4) In the new target system everything works fine.
What are your thoughts on this non-standard procedure?
We came up with this procedure after reading OSS notes 857734, 738858 and 659509.
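For readers who want to picture the mechanics, the R3load calls behind steps 2) and 3) look roughly like the sketch below. This is only an illustration, not the exact commands we ran: the paths, the package name SAPSSEXC and the codepage value are placeholders (4102 is the Unicode codepage for big-endian targets, 4103 for little-endian).

```
# step 2: export on the source host with the 620 NUC R3load
/sapmnt/<SID>/exe/R3load -e SAPSSEXC.cmd -datacodepage 4102 -l SAPSSEXC.log -stop_on_error

# step 3: import on the target host with the 640 UC R3load
/sapmnt/<SID>/exe/R3load -i SAPSSEXC.cmd -dbcodepage 4102 -l SAPSSEXC.log -stop_on_error
```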
Many thanks
Bob
No integrity check is done - R3load just unloads the data as it is.
Homogeneous copies are better made using consistent online backups; that will work.
Markus
Similar Messages
-
R3load unsorted export questions
Hi,
To speed up a heterogeneous system copy & UC conversion (OS change, database Oracle), I am looking into unsorted unloading/loading. I have read note 954268, but I still have some questions about it.
Are there any drawbacks to performing unsorted unloads? Or is it advisable to always perform unsorted unloads (except for cluster tables)? If I perform an unsorted unload, will the load take longer if I perform it sorted? Is it possible to perform unsorted loads (and does this make sense)?
What are the experiences with this?
Thanks,
Regards,
Bart
B. Groot wrote:
> However, I am wondering if there are any drawbacks in using the unsorted unload. E.g. when the data is reloaded in the target database or on the performance of the target database if the records are not sorted.
> Regards,
> Bart
Hi,
In general, 80% of the benefit comes from cleaning up fragmentation, which gives
you a better read ratio because of better cache quality.
There are only rare cases where it helps to sort by the primary key
(which is what R3load does). The most obvious benefit of the unsorted export is
that you save PSAPTEMP resources on the SOURCE machine.
So there is less need to buy new disks for the old box!
If you really want a sorted table, do a reorg right after the migration
on the new machine. It will be faster, and you'll have the resources there anyway.
(And let me know which table that might be - I haven't found one yet.)
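As a pointer for anyone setting this up: the sorted/unsorted behaviour is controlled by the 'prikey' line in the DDL template file that R3ldctl generates (note 954268 describes the details). A hedged sketch of the change - verify the exact syntax for your release against the note:

```
# DDL<DB>.TPL, e.g. DDLORA.TPL - sorted export (default):
prikey: AFTER_LOAD ORDER_BY_PKEY
# for an unsorted export, remove the ORDER_BY_PKEY keyword:
prikey: AFTER_LOAD
```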
Best regards
Volker -
R3load EXPORT tips for a 1.5 TB MaxDB database are welcome!
Hello to ALL;
this post is addressed to ALL MAXDB-GURU's !!
I have a few questions for the MaxDB performance guys about R3load export on MaxDB!
(A remark about myself:
I'm a certified OS/DB migration consultant and have successfully done over 500 OS/DB migrations since 1995.)
BUT now I'm faced with the following setup for the export:
HP IA64 BL870c, 4 dual-core CPUs with hyperthreading active (meaning 16 "CPUs" shown via top or glance)
64 GB RAM, HP-UX 11.31
MaxDB 7.7.07
ECC 5.0; Basis/ABA 640.22
SAP kernel 640 Unicode, patch 381
8 sapdata volumes are configured on separate 200 GB LUNs and VGs in HP-UX; on the storage side the 8 220 GB LUNs are located on (only) 9 300 GB disks at RAID-5 level, and within HP-UX each VG/LVOL is mirrored via LVM to a second disaster-recovery datacenter (200 m distance)
LOGFILES:
2 x 4 GB LUN, RAID-1 on the storage, and via LVM also mirrored to the failover center
MaxDB data size: 1600 GB overall, of which 1350 GB used; TEMP usage about 25 GB!
MaxDB parameter settings:
MAXCPU 10 (4 x IA64 quad-core + hyperthreading shows 16 cores in top and glance)
CACHE_SIZE = I/O buffer cache = 16 GB (2,000,000 pages)
Data cache hit ratio: 99.61%
Undo cache: 99.98%
The following SAP Notes on MaxDB performance and migrations are well known and have already been processed:
928037, 1077887, 1385089, 1327874, 954268, 1464560, 869267,
My major problem is the export runtime: over 4 days on the first dry run (6 R3loads running on a Linux application server), and 46 hours on the second run (6 R3loads running on the IA64 DB server).
I aborted the third trial run after 48 hours, when only 50% of the export dump space had been written. In all 3 dry runs, no more than approx. 3.5 GB of dump space was written per hour!
My first question to all MaxDB gurus: how can I influence/optimize the TEMP area in MaxDB? I didn't find any hints in SDN, SAP Notes, the MaxDB wiki or Google. As far as I know, this TEMP area resides within the MaxDB data files, so it is spread over my 48 data files across the 8 LUNs/VGs/disks. But I see LITTLE throughput on the disks, and the MaxDB kernel uses only ONE of its ten CPU cores (approx. 80-120% of 1000%).
The throughput and CPU usage do not change whether I use 2, 4, 10 or 16 R3load processes in parallel. The "result" is always the same: approx. 3.5 GB of export dump per hour, and 1 CPU used by the MaxDB kernel process!
So the BIG question for me: WHERE is the bottleneck? (The RAID-5 disk LUNs mirrored with LVM???)
On HP-UX I have increased scsi_queue_depth_length from the default of 8 to 32, 64 and 128 --> no impact.
2nd question:
I have read note 1327874 - FAQ: SAP MaxDB Read Ahead/Prefetch, and we are running MaxDB 7.7.07.19 (MaxDB 7.8 is not supported via PAM for the IA64 640 UC kernel). As far as I understood, this parameter will NOT help me on an export in primary-key sequence - is this correct? THUS: which parameter helps to speed up the export runtime?
MAXCPU is set to 10, but ONLY 1 of them is used??
So this post is for ALL MaxDB GURUS
who would be interested in contributing to this "highly sophisticated" migration project with a 1.5 TB MaxDB database and ONLY 24h of downtime!
All tips and hints are welcome, and I will give you continued updates on this running project until WE have done a successful migration job.
PS: The import has not started yet, but it should be done within vSphere 5 and SLES 11 SP1 on MaxDB 7.8... and of course in parallel to the export with the migration monitor. BUT again a challenge: 200 km distance from source to target system!
NICE PROJECT;
best regards, Alfred
Hi Alfred,
nice project ... just some simple questions:
Did you open a message with SAP? Maybe you could buy some upgrade support; this could be useful to get direct access to SAP support...
Which byte order do you use? I know Itanium can use both. But it should be different; otherwise you could use a backup for the migration.
And the worst question, which I don't even want to ask: what about your MAXCPU parameter? Is it set to more than 1? This could be why only one CPU is used.
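For reference, the current MAXCPU value can be read without opening the parameter file, e.g. with dbmcli (a sketch; <SID> and the password are placeholders, and command names can vary slightly between MaxDB versions):

```
dbmcli -d <SID> -u control,<password> param_directget MAXCPU
# typical reply:
# OK
# MAXCPU  10
```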
Best regards
Christian -
R3load export of table REPOSRC with LOB column - errors ORA-1555 and ORA-22924
Hello,
I tried to export data from our production system for a system copy and subsequent upgrade test. During the export, the R3load job reported an error for table REPOSRC, which has the LOB column DATA. I have pasted below the conversation in which I requested help from SAP; they said it falls under consulting support. The problem is in 2 rows of the table.
But I would like to know: if I delete these 2 rows and then copy them over from our development system at the Oracle level, will there be any problem with upgrades or with the operation of these programs, and will it have any license implications if I do it?
Regards
Ramakrishna Reddy
__________________________ SAP SUPPORT COnveration_____________________________________________________
Hello,
We are performing the data export for a system copy of our production
system. During the export, the R3load job gave the following error:
R3LOAD Log----
Compiled Aug 16 2008 04:47:59
/sapmnt/DB1/exe/R3load -datacodepage 1100 -
e /dataexport/syscopy/SAPSSEXC.cmd -l /dataexport/syscopy/SAPSSEXC.log -stop_on_error
(DB) INFO: connected to DB
(DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): WE8DEC
(DB) INFO: Export without hintfile
(NT) Error: TPRI_PAR: normal NameTab from 20090828184449 younger than
alternate NameTab from 20030211191957!
(SYSLOG) INFO: k CQF :
TPRI_PAR&20030211191957&20090828184449& rscpgdio 47
(CNV) WARNING: conversion from 8600 to 1100 not possible
(GSI) INFO: dbname = "DB120050205010209
(GSI) INFO: vname = "ORACLE "
(GSI) INFO: hostname
= "dbttsap "
(GSI) INFO: sysname = "AIX"
(GSI) INFO: nodename = "dbttsap"
(GSI) INFO: release = "2"
(GSI) INFO: version = "5"
(GSI) INFO: machine = "00C8793E4C00"
(GSI) INFO: instno = "0020111547"
(DBC) Info: No commits during lob export
DbSl Trace: OCI-call 'OCILobRead' failed: rc = 1555
DbSl Trace: ORA-1555 occurred when reading from a LOB
(EXP) ERROR: DbSlLobGetPiece failed
rc = 99, table "REPOSRC"
(SQL error 1555)
error message returned by DbSl:
ORA-01555: snapshot too old: rollback segment number with name "" too
small
ORA-22924: snapshot too old
(DB) INFO: disconnected from DB
/sapmnt/DB1/exe/R3load: job finished with 1 error(s)
/sapmnt/DB1/exe/R3load: END OF LOG: 20100816104734
END of R3LOAD Log----
Then, as per note 500340, I changed the PCTVERSION of the LOB column DATA
of table REPOSRC to 30, but I still get the error.
I have also added more space to PSAPUNDO and PSAPTEMP; still the same
error.
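For anyone retracing this, the PCTVERSION change from note 500340 is an ALTER TABLE on the LOB column, along these lines (the schema owner SAPDB1 is taken from the exp call further down; adjust it to your system):

```
SQL> ALTER TABLE SAPDB1.REPOSRC MODIFY LOB (DATA) (PCTVERSION 30);
```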
Then I ran the export as:
exp SAPDB1/sap file=REPOSRC.dmp log=REPOSRC.log tables=REPOSRC
exp log----
dbttsap:oradb1 5> exp SAPDB1/sap file=REPOSRC.dmp log=REPOSRC.log
tables=REPOSRC
Export: Release 9.2.0.8.0 - Production on Mon Aug 16 13:40:27 2010
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit
Production
With the Partitioning option
JServer Release 9.2.0.8.0 - Production
Export done in WE8DEC character set and UTF8 NCHAR character set
About to export specified tables via Conventional Path ...
. . exporting table REPOSRC
EXP-00056: ORACLE error 1555 encountered
ORA-01555: snapshot too old: rollback segment number with name "" too
small
ORA-22924: snapshot too old
Export terminated successfully with warnings.
SQL> select table_name, segment_name, cache,
       nvl(to_char(pctversion),'NULL') pctversion,
       nvl(to_char(retention),'NULL') retention
     from dba_lobs
     where table_name = 'REPOSRC';
TABLE_NAME | SEGMENT_NAME              | CACHE | PCTVERSION | RETENTION
REPOSRC    | SYS_LOB0000014507C00034$$ | NO    | 30         | 21600
please help to solve this problem.
Regards
Ramakrishna Reddy
Dear customer,
Thank you very much for contacting us at SAP global support.
Regarding your issue would you please attach your ORACLE alert log and
trace file to this message?
Thanks and regards.
Hello,
Thanks for helping,
I attached the alert log file. I have gone through it, but I could
not find the corresponding ORA-01555 entry for table REPOSRC.
Regards
Ramakrishna Reddy
Dear customer,
I have found some previous issues with a similar symptom to your
system's. I think this symptom is described in note 983230.
As you can see, this symptom is mainly caused by ORACLE bug 5212539, and
it should be fixed in 9.2.0.8, which is exactly your version. But although
the fix for 5212539 is implemented, only the occurrence of new corruptions is
avoided; already existing ones will stay in the system regardless of the patch.
The reason why metalink note 452341.1 was created is bug 5212539, since this
is the most common software-caused LOB corruption in recent times.
Basically, any system that was running without a patch for bug 5212539 at some time in the past could potentially be affected by the problem.
In order to be sure about bug 5212539, can you please verify whether the
affected LOB really is a NOCACHE LOB? You can do this as described in
the mentioned note #983230. If yes, then there are basically only two
options left:
-> You apply a backup to the system that does not contain these
corruptions.
-> In case a good backup is not available, it would be possible to
rebuild the table including the LOB segment, with possible data loss. Since this is beyond the scope of support, this would have to be
done via remote consulting.
If you have any further questions, please feel free to contact us.
Thanks and regards.
Hello,
Thanks for the Help and support,
I have gone through note 983230 and metalink 452341.1,
and I have run the script and found that there are 2 corrupted rows in
the table REPOSRC. These rows belong to the standard SAP programs
MABADRFENTRIES & SAPFGARC.
To reconfirm, I tried to display them in our development system
and in the production system. The development system shows the source code in
SE38, but in the production system it goes to the short dump DBIF_REPO_SQL_ERROR.
So is it possible to delete these 2 rows and update them ourselves from our
development system at the Oracle level? Will it have any impact on SAP
operation or upgrades in the future?
Regards
Ramakrishna Reddy
Hello, we have solved the problem.
To help someone with the same error, what we have done is:
1.- Wait until all the processes have finished and the export is stopped.
2.- Start SAP.
3.- In SE14, look up the tables. Create the tables in the database.
4.- Stop SAP.
5.- Retry the export (if you did all the steps with sapinst still running and the dialogue window left open on the screen) or start sapinst again with the option "continue with the old options".
Regards to all. -
My R3load gives me an error, what can I do?
I have the following error; could you please help me (I can't find the question mark on this keyboard, sorry).
I'm performing an export from MaxDB to Oracle on the same OS (AIX, 64-bit):
./R3SETUP -f DBEXPORT.R3S
It works fine until it fails in R3load with 15 processes terminated with errors.
The logs say:
DBEXPORT.LOG
INFO 2007-12-19 14:22:25 DBEXPR3LOADEXEC_IND_ADA R3loadPrepare:0
Total number of processes: 15
0 process(es) finished successfully:
15 process(es) finished with errors: 0 1 2 3 4 5 6 7 8 9 10 11 12
13 14
0 process(es) still running:
ERROR 2007-12-19 14:22:25 DBEXPR3LOADEXEC_IND_ADA R3loadPrepare:0
Processes started: 15
Ended with error: 15
load tool ended with error.
See above SAP*.log errors.
ERROR 2007-12-19 14:22:25 DBEXPR3LOADEXEC_IND_ADA R3loadFork:0
RC code form SyChildFuncWait = 255 .
ERROR 2007-12-19 14:22:25 DBEXPR3LOADEXEC_IND_ADA R3loadFork:0
See logfiles SAPlog and EXlog for further information.
ERROR 2007-12-19 14:22:25 DBEXPR3LOADEXEC_IND_ADA InternalInstallationDo:0
R3loadFork erred
ERROR 2007-12-19 14:22:25 DBEXPR3LOADEXEC_IND_ADA InstallationDo:0
Phase failed.
ERROR 2007-12-19 14:22:25 InstController Action:0
Step DBEXPR3LOADEXEC_IND_ADA could not be performed.
ERROR 2007-12-19 14:22:25 Main
Installation failed.
ERROR 2007-12-19 14:22:25 Main
Installation aborted.
I don't have any EX""".log files, and the SAP""".log files are all identical, saying the following:
#START OF LOG: 20071219142222
R3load version @(#) $Id: //bas/46D/src/ins/R3load/R3load.c#16 $ SAP
R3load -e SAP""".cmd -p SAP""".log
start of syntax check ###
end of syntax check ###
(EXP) ERROR: DbSlControl failed
rc = 20
#STOP: 20071219142225
ONLY THAT.
The """ stand for the different processes.
I'm trying to change R3load or dbadaslib.o; maybe that works.
But I need information about what I can do.
Regards,
Roberto
Indeed, look at the path: it contains two // after DATA and before the file.
Could that be the problem, and if it is, how can I solve it?
INFO 2007-12-20 16:32:15 R3SZCHK_IND_ADA R3ldctlDo:0
/sapmnt/PRD/exe/R3szchk -s DD -g ora 8 -p /sapinst/migkit -f
/sapinst/EXPORT/DATA//SAPSDIC.STR
/sapinst/EXPORT/DATA//SAPUSER.STR
/sapinst/EXPORT/DATA//SAPAPPL1.STR
/sapinst/EXPORT/DATA//SAPAPPL2.STR
/sapinst/EXPORT/DATA//SAPAPPL0.STR
/sapinst/EXPORT/DATA//SAPCLUST.STR
/sapinst/EXPORT/DATA//SAPPOOL.STR
/sapinst/EXPORT/DATA//SAPSSRC.STR
/sapinst/EXPORT/DATA//SAPSPROT.STR
/sapinst/EXPORT/DATA//SAPSDOCU.STR
/sapinst/EXPORT/DATA//SAPSSEXC.STR
/sapinst/EXPORT/DATA//SAP0000.STR
/sapinst/EXPORT/DATA//SAPSSDEF.STR
/sapinst/EXPORT/DATA//SAPSLDEF.STR
/sapinst/EXPORT/DATA//SAPSLEXC.STR
/sapinst/EXPORT/DATA//SAPSLOAD.STR
INFO 2007-12-20 16:32:15 R3SZCHK_IND_ADA SyCoprocessCreateAsUser:300
Creating coprocess /sapmnt/PRD/exe/R3szchk -s DD -g ora 8 -p
/sapinst/migkit -f /sapinst/EXPORT/DATA//SAPSDIC.STR
/sapinst/EXPORT/DATA//SAPUSER.STR
/sapinst/EXPORT/DATA//SAPAPPL1.STR
/sapinst/EXPORT/DATA//SAPAPPL2.STR
/sapinst/EXPORT/DATA//SAPAPPL0.STR
/sapinst/EXPORT/DATA//SAPCLUST.STR
/sapinst/EXPORT/DATA//SAPPOOL.STR
/sapinst/EXPORT/DATA//SAPSSRC.STR
/sapinst/EXPORT/DATA//SAPSPROT.STR
/sapinst/EXPORT/DATA//SAPSDOCU.STR
/sapinst/EXPORT/DATA//SAPSSEXC.STR
/sapinst/EXPORT/DATA//SAP0000.STR
/sapinst/EXPORT/DATA//SAPSSDEF.STR
/sapinst/EXPORT/DATA//SAPSLDEF.STR
/sapinst/EXPORT/DATA//SAPSLEXC.STR
/sapinst/EXPORT/DATA//SAPSLOAD.STR as user prdadm and group
sapsys ...
Actually, now I'm getting another error: when R3szchk runs, it aborts, and the log file says
INFO 2007-12-20 16:32:15 R3SZCHK_IND_ADA R3ldctlDo:0
Error: Can not close lock
I need both solutions if you have them.
Thanks in advance,
Roberto
7.8.01.18: lots of "Semaphore Wait" during R3load export
We're exporting a 1 TB database using R3load with 32 processes.
I see in x_cons:
ID   UKT  UNIX   TASK  APPL  Current         Timeout/ Region  Wait
tid  type pid    state       Priority        cnt      try     item
T82 12 20443 User 29455* Running 10 67 15624900(r)
T83 10 20441 User 27169* Running 1 66 17362372(r)
T85 9 20440 User 27432* Running 13 18 22136285(r)
T86 8 20439 User 29601* Semaphore Wait 0 19929257(d)
T87 13 20444 User 29765* Semaphore Wait 0 18315751(d)
T92 9 20440 User 26841* IO Wait (R) 0 21 22136285(r)
T93 13 20444 User 29737* Semaphore Wait 0 18315751(d)
T95 16 20447 User 29461* Running 70 54 13705379(r)
T96 11 20442 User 27767* Semaphore Wait 0 15100349(r)
T100 7 20438 User 29619* Semaphore Wait 0 19535885(s)
T104 11 20442 User 27090* IOWait(R)(041) 0 15100349(r)
T107 13 20444 User 26750**Enter ExclLock 1 0 18 18315751(d)
T110 14 20445 User 29476**Enter ExclLock 1 0 66 17785567(d)
T113 16 20447 User 29097* IO Wait (R) 0 9 13705379(r)
T114 8 20439 User 29435**Enter ExclLock 1 0 18 19929257(d)
T121 12 20443 User 29125* IO Wait (R) 0 33 15624900(r)
T122 10 20441 User 27823* IO Wait (R) 0 15 17362372(r)
T123 11 20442 User 26630* Running 5 22 15100349(r)
T126 10 20441 User 28056* IOWait(R)(041) 0 17362372(r)
T127 16 20447 User 29139* IO Wait (R) 0 40 13705379(r)
T131 14 20445 User 28558* Semaphore Wait 0 17785567(d)
T135 16 20447 User 20574* IO Wait (R) 0 19 13705379(r)
T136 7 20438 User 27795* IOWait(R)(041) 0 19535885(s)
T146 9 20440 User 29062* IO Wait (R) 0 21 22136285(r)
T151 15 20446 User 21045* IO Wait (R) 0 41 17173861(s)
T154 9 20440 User 29068* IO Wait (R) 0 28 22136285(r)
T164 12 20443 User 23832* Semaphore Wait 0 15624900(r)
T166 16 20447 User 21231* IO Wait (R) 0 21 13705379(r)
Tables are not split.
Some question arise here for me:
- Why do we get so many "Semaphore Waits" although we are not accessing tables in parallel? Why do read operations have to be done atomically?
- Why do we get IOWait(R)(041)? Can it be so common that the same two pages are trying to be read twice? Those (041) occur, if I keep x_cons running, in half of the UKTs.
Markus
Hi,
Nearly all tasks with the status "Semaphore Wait" run in threads that are active for other tasks:
- T86 8 20439 User 29601* Semaphore Wait 0 19929257(d),
Active task in thread 8: T114 8 20439 User 29435**Enter ExclLock 1 0 18 19929257(d)
- T93 13 20444 User 29737* Semaphore Wait 0 18315751(d)
T87 13 20444 User 29765* Semaphore Wait 0 18315751(d)
Active task in thread 13: T107 13 20444 User 26750**Enter ExclLock 1 0 18 18315751(d)
- T96 11 20442 User 27767* Semaphore Wait 0 15100349(r)
Active task in thread 11: T123 11 20442 User 26630* Running 5 22 15100349(r)
- T131 14 20445 User 28558* Semaphore Wait 0 17785567(d)
Active task in thread 14: T110 14 20445 User 29476**Enter ExclLock 1 0 66 17785567(d)
- T164 12 20443 User 23832* Semaphore Wait 0 15624900(r)
Active task in thread 12: T82 12 20443 User 29455* Running 10 67 15624900(r)
Only thread 7 is not scheduled on CPU.
Nevertheless, there was a bottleneck on regions 18 and 66 at this particular time of the x_cons run. The database might need more data cache stripes (parameter DataCacheStripes).
The status "IOWait(R)(041)" can mean the tasks are waiting for I/O pre-fetching. The system views IOJOBS and the x_cons command SHOW IOPENDING give information about the running I/O orders.
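Werner's per-thread matching can also be done mechanically. The sketch below groups the task IDs from an x_cons listing by their UKT thread number (column 2), so that a "Semaphore Wait" task sharing its thread with a lock-holding task stands out; the sample lines are copied from the listing above.

```shell
# Write a few sample x_cons task lines to a file (normally you would
# redirect real x_cons output here).
cat > xcons.out <<'EOF'
T86 8 20439 User 29601* Semaphore Wait 0 19929257(d)
T114 8 20439 User 29435**Enter ExclLock 1 0 18 19929257(d)
T87 13 20444 User 29765* Semaphore Wait 0 18315751(d)
T107 13 20444 User 26750**Enter ExclLock 1 0 18 18315751(d)
EOF

# Group task IDs (column 1) by UKT thread number (column 2).
awk '$1 ~ /^T[0-9]+$/ { tasks[$2] = tasks[$2] " " $1 }
     END { for (u in tasks) print "UKT " u ":" tasks[u] }' xcons.out
```

Each output line then lists all tasks of one UKT, e.g. "UKT 8: T86 T114", which is exactly the pairing spelled out by hand above.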
Best regards,
Werner Thesing -
System Copy Questions R/3 4.6c + Oracle + Solaris
Dear gurus,
Goal: a system copy of R/3 4.6C + Oracle + Solaris.
I am doing the DB export using R3SETUP (quite slow; I have optimized it using the split STR files method).
I have some questions here.
For the export I downloaded the latest R3szchk, R3load, R3SETUP, R3check and R3LDCTL.
I download from
Support Packages and Patches - Archive - Additional Components - SAP Kernel -SAP KERNEL 64-BIT - SAP KERNEL 4.6D 64-BIT
R3szchk, R3load, R3check, R3LDCTL using SAPEXEDB_2364-10001132.SAR (Kernel Part II for Stack Q1-2008)
R3SETUP using R3SETUP_29-10001132.CAR
source system : Oracle 8.1.7 - R/3 4.6C - Solaris 9
My question is:
target system: Oracle 10.0.2 with patches - R/3 4.6C - Solaris 10
Later on, when I do the import into the target system, should I use the same kernel as for the export, or should I use the one from
Support Packages and Patches -> Additional Components -> SAP Kernel -> SAP KERNEL 64-BIT -> SAP KERNEL 4.6D_EX2 64-BIT
Another question:
how can I speed up the export? I am stuck with a bottleneck in I/O; CPU usage is very low. Please don't recommend anything related to hardware or disk architecture; if possible, recommend something related to Oracle parameters.
Disk I/O (from iostat) is showing 100% busy and service times of 800-1500.
Thanks!
> later on when i do import to target system, should I use the same kernel when I do the export, or should I use the one from
> Support Packages and Patches -> Additional Components -> SAP Kernel -> SAP KERNEL 64-BIT -> SAP KERNEL 4.6D_EX2 64-BIT
Yes - use that one. It uses the Oracle instant client instead of the old 9.2 client. Make sure you add "-loadprocedure fast" to the load options; this will enable the fast loader.
Also make sure you install all the patchsets and interim patches BEFORE you start the data load.
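When the import runs through the migration monitor rather than plain sapinst, the same option goes into the R3load argument line of the properties file. A hedged fragment (the key name loadArgs is as seen in common MIGMON versions; check your migration monitor documentation):

```
# import_monitor_cmd.properties (fragment)
# additional arguments passed to every R3load import process:
loadArgs=-loadprocedure fast
```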
Another question:
> how to speed up the export ? i am stucked with bottle neck in IO, CPU usage is very low, please dont recommend me something related to hardware or disk architecture, if possible recommend me something related to Oracle parameter.
If your disk I/O subsystem can't deliver more, then you are lost with Oracle parameters.
Markus -
Split a large table into multiple packages - R3load/MIGMON
Hello,
We are in the process of reducing the export and import downtime for the Unicode migration/conversion.
In this process, we identified a couple of large tables which were taking a long time to export and import with a single R3load process.
Step 1:> We ran the System Copy --> Export Preparation
Step 2:> System Copy --> Table Splitting Preparation
We created a file listing the large tables which need to be split into multiple packages, and were able to create a total of 3 WHR files for the following tables under the DATA directory of the main EXPORT directory.
SplitTables.txt (name of the file used in SAPINST):
CATF%2
E071%2
This means we would like each of the above large tables to be exported using 2 R3load processes.
Step 3:> System Copy --> Database and Central Instance Export
During the SAPinst process, at the 'Split STR files' screen, we selected the option 'Split Predefined Tables' and selected the file which has the predefined tables.
Filename: SplitTable.txt
CATF
E071
When we started the export process, we didn't see the above tables being processed by multiple R3load processes.
They were exported by a single R3load process each.
In the order_by.txt file, we have found the following entries...
order_by.txt----
# generated by SAPinst at: Sat Feb 24 08:33:39 GMT-0700 (Mountain
Standard Time) 2007
default package order: by name
CATF
D010TAB
DD03L
DOKCLU
E071
GLOSSARY
REPOSRC
SAP0000
SAPAPPL0_1
SAPAPPL0_2
We have selected a total of 20 parallel jobs.
Here my questions are:
a> what are we doing wrong here?
b> Is there a different way to specify/define a large table into multiple packages, so that they get exported by multiple R3load processes?
I really appreciate your response.
Thank you,
Nikee
Hi Haleem,
As far as your queries are concerned:
1. With R3ta, you split large tables using a WHERE clause, and WHR files get generated. If you mention CDCLS%2 in the input file for table splitting, then it generates 2-3 WHR files - CDCLS-1, CDCLS-2 & CDCLS-3 (depending on the WHERE conditions).
2. When using MIGMON (for the sequential/parallel export-import process), you have the choice of package order in the properties file.
E.g. for the import - in import_monitor_cmd.properties, specify:
# Package order: name | size | file with package names
orderBy=/upgexp/SOURCE/pkg_imp_order.txt
And in pkg_imp_order.txt, I have specified the import package order as
BSIS-7
CDCLS-3
SAPAPPL1_184
SAPAPPL1_72
CDCLS-2
SAPAPPL2_2
CDCLS-1
Similarly, you can specify the export package order in the export properties file.
I hope this clarifies your doubt.
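For completeness, the export side takes the same kind of setting; a hedged fragment (the file name pkg_exp_order.txt is made up for this example, and the package list file simply contains one package name per line, as in the import example above):

```
# export_monitor_cmd.properties (fragment)
# Package order: name | size | file with package names
orderBy=/upgexp/SOURCE/pkg_exp_order.txt
```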
Warm Regards,
SANUP.V -
Can R3load skip some tables during import?
We use the DB-independent method to export/import an SAP DB.
The export is done by R3load.
We want to exclude some tables from the import.
I know that R3load has an option "-o T" to skip tables.
However, we need to know:
1) the exact syntax to put into export_monitor_cmd.properties
2) the table structures still need to be imported even though no data is wanted.
Thanks!
Lee and Rob,
Thank you both for your responses.
Rob, thank you sir... I greatly appreciate some possible solutions to implement. The CollectionPreseter sounds very interesting. I am relatively new to Lightroom 3 (working through Kelby's book at the moment, trying to learn), so please excuse any lightly thought-out questions.
You mentioned the idea of setting up multiple collections where each would have its own preset. So let's talk through a possible workflow scenario to help me understand whether I comprehend the functionality of what this plugin could do.
Let's say I have 3 collections, each having one preset assigned.
Is it possible ->
Workflow A
That after I import photos and assign them to Collection 1, CollectionPreseter will apply the defined preset for Collection 1.
Once applied, does the ability exist to then move the pictures from Collection 1 into Collection 2 (while keeping the Collection 1 preset) to apply its preset, and then lastly move the pictures from Collection 2 (while keeping presets 1 and 2) into Collection 3 to apply its preset, with a final export?
OR
Workflow B
Would the flow have to be something like this, based on the above:
Import and place into Collection 1 (preset 1 is applied). Export and save.
Reimport and place into Collection 2 (preset 2 is applied). Export and save.
Reimport and place into Collection 3 (preset 3 is applied). Export and save the final version?
The other question I have not raised is: what about droplets (actions) with Photoshop CS? Can multiple droplets be applied and run in a batch if I integrate with CS that way?
Thank you...
Steven -
SRM table CFF_CONT question : huge table, what is that table for ?
Hello
We are looking for more details about table CFF_CONT on an SRM system.
The total size of the DB is 344 GB,
and table CFF_CONT alone is 284 GB; compared to the total size this is really huge.
The rest of the DB is only about 60 GB.
What is that table for?
Would it be normal to see that table grow much more than the others on SRM?
Is that table something that can be cleaned up?
These questions are popping up because we are doing a system copy with sapinst and R3load,
and we see that this table alone requires a huge amount of time to be exported.
Regards
By the way ...
That CFF_CONT table belongs to the cFolders component -
System copy using R3load ( Export / Import )
Hi,
We are testing System copy using R3load ( Export / Import ) using our production data.
Environment : 46C / Oracle.
While executing the export, it takes more than 20 hours for the data to be exported. We want to reduce the export time drastically, and we want to achieve this by scrutinizing the input parameters.
During the export, there is a parameter for splitting the *.STR and *.EXT files for R3load.
The default input for this parameter is 'No, do not split STR and EXT files'.
My question 1 : If we input yes instead of No and split the *.STR and *.EXT files, will the export time get reduced ?
My Question 2: If the answer is yes to Question 1, will the time reduced will be significant enough ? or how much percentage of time we can expect it to reduce, compare to a No Input.
Best Regards
L Raghunahth
Hi
The time of the export depends on the size of your database (and the size of your biggest tables) and your hardware capacity.
My question 1 : If we input yes instead of No and split the *.STR and *.EXT files, will the export time get reduced ?
In case you have a few very large tables, and you have multiple CPUs and decent disk storage, then splitting might significantly reduce the export time.
My Question 2: If the answer is yes to Question 1, will the time reduced will be significant enough ? or how much percentage of time we can expect it to reduce, compare to a No Input.
As you did not tell us your database size and hardware details, there is no way to give you anything but very basic metrics. Did you specify a parallel degree at all? Was your hardware idling for 20 hrs or fully loaded already?
20 hrs for a 100 GB database is very slow. It is reasonable (rather fast, in my opinion) for a 2 TB database.
Best regards, Michael -
Systemcopy using R3load - Index creation VERY slow
We exported a BW 7.0 system using R3load (newest tools and SMIGR_CREATE_DDL) and now importing it into the target system.
Source database size is ~ 800 GB.
The export was running a bit more than 20 hours using 16 parallel processes. The import is still running with the last R3load process. Checking the logs I found out that it's creating indexes on various tables:
(DB) INFO: /BI0/F0TCT_C02~150 created#20100423052851
(DB) INFO: /BIC/B0000530000KE created#20100423071501
(DB) INFO: /BI0/F0COPC_C08~01 created#20100423072742
(DB) INFO: /BI0/F0COPC_C08~04 created#20100423073954
(DB) INFO: /BI0/F0COPC_C08~05 created#20100423075156
(DB) INFO: /BI0/F0COPC_C08~06 created#20100423080436
(DB) INFO: /BI0/F0COPC_C08~07 created#20100423081948
(DB) INFO: /BI0/F0COPC_C08~08 created#20100423083258
(DB) INFO: /BIC/B0000533000KE created#20100423101009
(DB) INFO: /BIC/AODS_FA00~010 created#20100423121754
As one can see on the timestamps the creation of one index can take an hour or more.
x_cons is showing constant CrIndex reads in parallel; however, the throughput is no more than 1-2 MB/sec. Those index creation processes have now been running for over two days (> 48 hours), and since the .TSK files don't mention those indexes any more, I wonder how many of them are still to be created and how long this will take.
The whole import was started at "2010-04-20 12:19:08" (according to import_monitor.log), so it has been running for more than three days with four parallel processes. The target machine has 4 CPUs and 16 GB RAM (CACHE_SIZE is 10 GB). The machine is idling, though, at 98-99%.
I have three questions:
- Why does index creation take such a long time? I'm aware that the cache may not be big enough to hold all the data, but that speed is far from acceptable. Doing a Unicode migration, even in parallel, will lead to a downtime that may not be acceptable to the business.
- Why are the indexes not created first and then filled along with the data? Each savepoint may take longer, but I don't think it would take that long.
- How can I find out which indexes are still to be created, and how can I estimate their average runtime?
Markus
Hi Peter,
I would suggest creating an SAP ticket for this, because these kinds of problems are quite difficult to analyze.
But let me describe index creation within MaxDB. If only one index creation process is active, MaxDB can use multiple Server Tasks (one for each Data Volume) to increase the I/O throughput. This means the more Data Volumes you have configured, the faster the parallel index creation should be. However, this hugely depends on whether your I/O system can handle an increasing number of parallel read/write requests. While one index creation process is running with parallel Server Tasks, any further index created at the same time can only use a single User Task for its I/O.
The R3load import process assumes that indexes can be created quickly if all the necessary base table data is still present in the Data Cache. This mostly applies to small tables, up to sizes that take up a certain fraction of the Data Cache. All indexes for such tables are created right after the table has been imported, to exploit the fact that the data needed for index creation is still in the cache. Many indexes may be created simultaneously here, but only one index at a time can use parallel Server Tasks.
If a table is too large in relation to the total database size, its indexes are queued for serial creation, started only after all tables have been imported. The idea is that by then the needed base table data would likely have been flushed out of the Data Cache, so additional I/O is necessary to reread the table for index creation, and that I/O greatly benefits from parallel Server Tasks accessing the Data Volumes. For this reason, the indexes created at the end are queued and serialized to ensure that only one index creation process is active at a time.
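The scheduling policy described above can be sketched roughly as follows (the 10 % cache threshold and all names are illustrative assumptions, not MaxDB internals):

```python
# Sketch of the index-scheduling policy described above (illustrative only;
# the 10% cache threshold and all names are assumptions, not MaxDB internals).
def schedule_indexes(tables, data_cache_mb, threshold=0.10):
    """Split index creation into 'immediate' (base data likely still cached)
    and 'deferred' (queued for serial creation after all imports)."""
    immediate, deferred = [], []
    for name, size_mb in tables:
        if size_mb <= data_cache_mb * threshold:
            immediate.append(name)   # build right after import, data in cache
        else:
            deferred.append(name)    # queue; built serially at the end
    return immediate, deferred

# Hypothetical tables against a 10 GB Data Cache:
imm, dfd = schedule_indexes([("MARA", 300), ("GLPCA", 5000)], data_cache_mb=10240)
print(imm, dfd)
```

The point of the split is that a deferred index build can then have all parallel Server Tasks to itself for the rereads it inevitably causes.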
Now, you mentioned that the index creation process takes a lot of time. I would suggest (besides opening an OSS ticket) running the MaxDB tool 'Database Analyzer' with an interval of 60 seconds during the whole import. In addition, you should activate 'time measurement' to get a reading on the I/O times. Also make sure that you have many Data Volumes configured and that your I/O system can handle the additional load; e.g. it would make no sense to have 25 Server Tasks all writing to a single local disk, as that disk would become a bottleneck.
Hope my reply was not too confusing,
Thorsten -
R3load - what's the meaning of parameter "-para_cnt X"?
During system copies/shell creations I always come across the parameter
-para_cnt <count> count of parallel R3load processes (MaxDB only)
I wonder what's the usage of that parameter.
Is that something like "if only one R3load is running use that to create the indexes"?
Markus
Hello Markus,
The answer to the question
"What is the meaning of the R3load option '-para_cnt <count>'?"
can be found in SAP Note 1464560, p. 8.
The note will soon be released to SAP customers by MaxDB development.
Thank you and best regards, Natalia Khlopina -
Question about Unicode and migration 32 -- 64 bit
Hello
We are using ECC 5.0 on Windows 32-bit and SQL Server 2000.
We want to migrate to the Windows 64-bit platform. We know the procedure described in Note 960769 (Windows: Migration from 32-bit to 64-bit).
We know we can use R3load for the export/import, and we are going to install SQL Server 2005 on the target server.
So with this procedure we will be on ECC 5.0, Windows 64-bit, and SQL Server 2005, but the main question is: how does Unicode conversion play into this?
Can we take advantage of this migration to perform the Unicode conversion too? How? Is a Unicode conversion obligatory here?
…
Thanks and regards
Javier
Hi Javier,
First of all, your upgrade to SQL Server 2005 64-bit can be done without a migration. You just have to upgrade your database and you're done.
If you would like to migrate to Unicode, you can do it while migrating from 32-bit to 64-bit with R3load. In general, you should do a Unicode migration with a small and simple system. Simple means here: with a single code page or only a few code pages. If you plan to install new languages that would introduce new code pages, the Unicode migration should be done before you add those languages. Small means: if your system is growing significantly, you should migrate soon; but if you have started an archiving project, you are probably better off waiting until that project has led to a smaller system. In other words: the fewer code pages you have and the smaller your system is, the better for your migration. The downtime will be shorter and the project complexity lower.
Best regards
Ralph Ganszky -
Homogeneous copy: R3load procedure generated more tables on target system
Hi All
My source system is 46C SR2 / Oracle 8.1.7 / HP-UX PA-RISC and
my target system is 46C SR2 / Oracle 10.2.0.2 / HP-UX IA64.
I used the R3load procedure to export the database from the source system and completed the import into the target system without any problem. The new system is up and running.
My source system has 23193 tables in total, but the target system has 23771; the source system has 27011 indexes, while the target system has 27701.
Can someone tell me why there are additional tables and indexes in my target system? Is it due to Oracle 10g?
Your help will be appreciated.
Thanks
Amit
Markus,
I got this information from the DB02 initial screen, where it shows the number of tables and indexes.
I used the R3load procedure to export the data from the source and also prepared the target system with R3load.
The source system had fewer tables, but the target got more. I have since upgraded the target system to ECC 6
and it is working fine. UAT is completed and we have gone live.
The auditors have raised the question.
Let me know if there is any way to find the cause of this difference in tables, even though the target is now ECC 6.
Thanks
Amit