Unicode export: Table-splitting and package splitting

Hi SAP experts,
I know there are a lot of threads related to this topic, but I have some new questions and hence am posting a new thread.
We are in the process of doing a Unicode conversion in our landscape (a CRM 7.0 system based on NW 7.01), running on AIX 6.1 and DB2 9.5. The database size is around 1.5 TB, so we want to optimize the export and import in order to reduce the downtime. As part of the process, we have tried table-splitting and parallel export/import.
However, we have some doubts about whether the table-splitting has actually worked in our scenario, as the export ran for nearly 28 hours.
The steps we followed:
1.) Doing the export preparation using SAPINST
2.) Doing the table-splitting preparation, by creating a table input file with entries in the format <tablename>%<No. of splits> (see the sample input file right after these steps). We also used the latest R3ta and dbdb6slib.o (belonging to version 7.20, even though our system is on 7.01) with SAPINST.
3.) Starting the export using SAPINST.
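For illustration, such a table input file contains one <tablename>%<number of splits> entry per line; with the values discussed below (PRCD_CLUST, 36 splits) it would contain the line:
PRCD_CLUST%36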
Some observations and questions:
1.) After the completion of the table-splitting preparation, .WHR files were generated for each of the tables in the DATA directory of the export location. How many .WHR files should be created, and on what basis are they created?
2.) Take the example of the table PRCD_CLUST (a cluster table) in our environment, which we split. 29 *.WHR files were created for this particular table, although the number of splits given for it was 36; the table size is around 72 GB. We also noticed that the first 28 .WHR files for this table had lots of records, but the last (29th) .WHR file had only 1 record. The packages/splits for the first 28 splits were created quite fast, but the 29th one took a long time (several hours) to complete. Lots of packages (around 56) of 1 GB each were generated for this 29th split, and there was only one R3load running for it, generating the packages one by one.
3.) Is there any thumb rule for deciding on the number of splits for a table? Also, during the export, is there anything that needs to be specified in the inputs on the table-splitting screen?
4.) What exactly is the difference between table-splitting and package-splitting? Are they both effective together?
If you have any questions or need any clarifications or further inputs, please let me know.
It would be great if we could get some insights on this whole procedure. We know a lot of things are taken care of by SAPINST itself in the background, but we just want to be certain that we have done the right thing and that this is the way it should work.
Regards,
Santosh Bhat

Hi,
First of all, please ignore my very first response... I accidentally posted a response to some other thread. Sorry for that.
Now coming to your questions...
> 1.) Can package splitting and table-splitting be used together? If yes or no, what exactly is the procedure to be followed. As I observed that, the packages also have entries of the tables that we decided to split. So, does package splitting or table-splitting override the other, and only one of the two can be effective at a time?
Package splitting and table splitting work together, because they serve different purposes.
My way of doing it is...
When I do the package split I choose packageLimit 1000 and also split out the tables (which I selected for table split) into separate packages (one package per table). I do it because that helps me track those tables.
Once the above is done, I follow it up with R3ta and the where-splitter for those tables.
This is followed by the manual migration monitor to do the export/import. As mentioned in the previous reply above, you need to ensure you sequence your packages properly: large tables are exported first, use sections in the package list file, etc.
> 2.) If you are well versed with table splitting procedure, could you describe maybe in brief the exact procedure?
Well, I would say run R3ta (it will create multiple select queries), followed by the where-splitter (which will just split each of the selects into multiple WHR files)...
Best would be to go through some documentation on table splitting and let me know if you have a specific query. Don't miss the role of the hints file.
> 3.) Also, I have mentioned the version of R3ta and the library file in my original post. Is this likely to be an issue? Also, is there a thumb rule to decide on the no. of splits for a table?
The rule is to use the executables of the kernel version supported by your system version. I am not well versed with 7.01 and 7.20 support... to give you an example, I should not use a 700 R3ta on a 640 system, although it works.
> 1.) After the completion of the table-splitting preparation, .WHR files were generated for each of the tables in the DATA directory of the export location. However, how many .WHR files should be created and on what basis are they created?
If you ask for 10 splits... you will get 10 splits, or in some cases 11, the reason probably being the field that is used to split the table (the WHERE clause). But I am not 100% sure about it.
> 2.) Take the example of the table PRCD_CLUST (a cluster table) in our environment, which we split. 29 *.WHR files were created for this particular table, although the number of splits given for it was 36; the table size is around 72 GB. The first 28 .WHR files had lots of records, but the last (29th) .WHR file had only 1 record. The packages/splits for the first 28 splits were created quite fast, but the 29th one took a long time (several hours) to complete, generating around 56 packages of 1 GB each through a single R3load process.
I am not sure why you got 29 splits when you asked for 36; one reason might be that the field (key) used for the split didn't have more than 28 distinct values. I don't know how PRCD_CLUST is split; you need to check the hints file for the "key". One example: suppose my table is split using company code and I have 10 company codes, then even if I ask for 20 splits I will get only 10 splits (WHRs).
Yes, the 29th file will always have fewer records. If you open the 29th WHR file you will see that it has the "greater than" clause. The first and the last WHR files have the "less than" and "greater than" clauses, a kind of safety net which allows you to prepare the splits even before the downtime has started. These two WHR files ensure that no record gets missed, even though you might have prepared your WHR files a week before the actual migration.
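To illustrate the idea with the company-code example above (the column and boundary values are hypothetical; a real WHR file produced by R3ta/where-splitter uses the actual split key and values it determined):
first split:   WHERE ("BUKRS" < '1000')
middle splits: WHERE ("BUKRS" >= '1000' AND "BUKRS" < '2000'), and so on
last split:    WHERE ("BUKRS" >= '9000')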
> 3.) Is there any thumb rule for deciding on the number of splits for a table? Also, during the export, is there anything that needs to be specified in the inputs on the table-splitting screen?
I am not aware of any thumb rule. For the first iteration you might choose something like 10 splits for 50 GB, 20 for 100 GB. If any of the tables overshoots the window, you can then try increasing or decreasing the number of splits for that table. For me, a couple of times the total export/import time improved by reducing the splits of some tables (I suppose I was over-splitting those tables).
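To make the sizing concrete with the numbers already in this thread: 10 splits for a 50 GB table aims at roughly 50 / 10 = 5 GB per R3load slice, and the 36 splits requested for the 72 GB PRCD_CLUST aim at about 72 / 36 = 2 GB per slice. If one slice still dominates the export runtime, that table is the candidate for adjusting the number of splits in the next iteration.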
Regards,
Neel

Similar Messages

  • Export tables, sequence, and package question

    Hi all,
    I've exported a table like this: exp username/password file=export.dmp log=export.log tables=A statistics=none.
    The export statement above exports only the table "A" structure, not the table "A" data. So, how can I move all the data from table "A" to table "B"?
    How do I also export the sequences and packages along with their data?
    Thank you very much
    Kevin

    You can create a DB link in order to do the insert into B select * from A.
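    A minimal sketch of that first suggestion (the link name, credentials and TNS alias are placeholders, and it assumes table B already exists with a compatible structure):
    CREATE DATABASE LINK source_link
      CONNECT TO username IDENTIFIED BY password
      USING 'source_tns_alias';
    INSERT INTO b SELECT * FROM a@source_link;
    COMMIT;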
    Also, you can use the SQL*Plus Copy command.
    Here are some links for the COPY command:
    Copying Data
    copy command
    copy command vs sql*loader
    Here is a link for moving code:
    exporting packages, functions etc. from one user to another.

  • Urgent: Unicode conversion - table splitting

    Hi all,
    I am having a problem when trying to perform the export step of the Unicode conversion on ERP2005, MS SQL Server 2005. Due to a previously very long runtime, I am trying to use the table-splitting option. I have performed the "Table Splitting Preparation" step with what seems to be success.
    The problem is when I run the actual database instance export.
    First of all:
    Which file path should I provide to SAPinst when it asks for the "Table input file"?
    Second:
    How can I actually determine the package unload order? I tried selecting this option, but I was not given the opportunity to change the order in the subsequent screen. (The F1 help in SAPinst says that I would be able to...)
    Does anyone of you have experience with this?
    Best Regards,
    Thomas

    Hi Thomas,
    As I can imagine from the date of your last posting, you probably have your answers already. But just so other people who are searching on this topic can find the answer here, I will fill in the blanks.
    First, when using table splitting, you must export using the migration monitor (MIGMON). That means, when you run SAPINST, on the ABAP System -> Database Export screen, select the Export Method: Export using Migration Monitor.
    At this point, as you said, you have already run the Table Splitting option from SAPINST, so the WHR files are in the export DATA directory. The whr.txt file is also there.
    The file input screen you are mentioning does not appear when you select the export method mentioned above. But you can create such a file for MIGMON. You have to create it on your own, and you can only do it after SAPINST has split the STR files.
    When finished with SAPINST, create the .txt file (I usually call it table_order.txt) and add the names of the STR files which you want to make sure are exported first. After MIGMON completes that list, all other STR files are exported in alphabetical order.
    Create your table_order.txt file by inserting the filenames of the packages that were created, without the STR/WHR extensions (including the large tables which the STR splitter broke out into their own STR files during my last export). Look in DB02 and sort descending by used space to determine the order in which these tables should be exported and listed in the .txt file:
    <SPLIT Table>-1
    <SPLIT Table>-2
    <SPLIT Table>-n
    BSIS
    RFBLG
    CE1VVOC
    CE1WWOC
    DBTABLOG
    CE3VVOC
    COEP
    ARFCSDATA
    GLPCA
    KONV
    SWW_CONT
    SOC3
    CDCLS
    CE3WWOC
    BSE_CLR
    STXL
    EDIDS
    COSP
    BSAD
    EDI40
    ACCTIT
    BSIM
    VBFS
    BSAS
    ACCTCR
    CDHDR
    CE4VVOC_ACCT
    SGOHIST
    MSEG
    In the export_monitor_cmd.properties file, you specify this file name for the 'orderBy=' parameter. Create and keep the table_order.txt file in the same directory as the export_monitor.sh/.bat files.
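    As a sketch, the relevant line in export_monitor_cmd.properties would then simply be (everything else in the properties file stays as it is):
    orderBy=table_order.txt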
    For importing, again you will need to use the Migration Monitor. SAPINST will automatically stop and prompt you to start the import using MIGMON. Here, you can specify the same table_order.txt file. You might want to amend it to control when the rest of the packages are imported, if you found one or more tables holding up the completion of the export.
    I hope this helps someone.
    Best Regards,
    Warren Chirhart

  • How to export tables, procedures, and triggers?

    Hi,
    I've finished my application on Oracle XE and exported the workspace and the schema. How do I then export the tables, procedures, triggers, etc. from Oracle XE to install them on another computer?
    I appreciate your help and attention...
    Reynel Salazar Martinez

    Under the Utilities section you can generate the DDL...
    and that's it.

  • Export Table Output and Print Output differing

    I am trying to:
    - export a table to Excel,
    - print the table data.
    The output of the export and the print differ; the print does not capture the commandLink. The segment that I am using is:
    <af:group>
      <af:commandToolbarButton text="Export" immediate="true"
                               shortDesc="Export All Rows" icon="/images/table.png">
        <af:exportCollectionActionListener type="excelHTML" exportedId="t2"
                                           filename="export-tasks.xls" title="Export"
                                           exportedRows="all"/>
      </af:commandToolbarButton>
      <af:commandToolbarButton text="Print" shortDesc="Print" icon="/images/print.png">
        <af:showPrintablePageBehavior/>
      </af:commandToolbarButton>
    </af:group>
    I tried including rendered="#{adfFacesContext.outputMode eq 'printable'}" in the af:outputText for the rows and columns of the table; however, as a result the data in the table itself was displayed incorrectly.
    I got to know about rendered from the following thread: Regarding printing a command link using <af:showPrintablePageBehavior>

    I've also exported the document to IDML - and I've unzipped that folder. I've read the information for the colour - there's only 1 reference to Rubine Red and that's giving the same percentages as the InDesign file.
    I don't know where the 2nd shade of pink is coming from.
    But I suspect that it's from the print driver itself.
    Also - it just occurred to me, I lost my PDF settings a while ago; InDesign crashed and the PDF defaults were missing - I downloaded a copy of a set from someone on these forums through a Google search.
    But I no longer have the PDF/X-4 export option - where can I retrieve these?
    How can I restore the original PDF settings for InDesign?

  • InDesign CS6 ePub Export : Tables with header and footer in HTML

    Hey there,
    does anyone know whether InDesign CS6 also exports table headers and footers correctly into the XHTML file of the ePub?
    What I mean is whether the elements <thead> and <tfoot> are created.
    Or is it only possible to steer this via the CSS class names which can be given in the table formats?
    Generally I think it would be better if the user had the chance to map other export tags to the elements than just p, em, strong, h1-h6.
    It would be useful to also be able to put in other elements by hand.
    Best regards.

    Magnolee2 wrote:
    does anyone know whether InDesign CS6 also exports table headers and footers correctly into the XHTML file of the ePub?
    What I mean is whether the elements <thead> and <tfoot> are created.
    By "also", do you mean the behavior has changed with respect to CS5/CS5.5? In those, thead and tfoot are created correctly (although, quite disconcertingly, in the order "thead / tfoot / tbody". ePub renderers based on WebKit display them correctly nevertheless, but others do not. An extremely annoying free interpretation of the W3C rules.)

  • Exp/imp procedures, functions and packages question

    Hi
    I've a 9i R2 version Oracle database. I would like to export procedures, functions and packages from a schema. How do I do that?
    Is there any script or command line you can provide?
    Thanks

    Hello user12259190.
    You can do an export of the user itself, excluding table data, as in:
    H:\>exp
    Export: Release 10.2.0.1.0 - Production on Tue Dec 22 11:22:52 2009
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Username: db_user@db_sid
    Password:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, Data Mining and Real Application Testing options
    Enter array fetch buffer size: 4096 >
    Export file: EXPDAT.DMP >
    (2)U(sers), or (3)T(ables): (2)U > 2
    Export grants (yes/no): yes > no
    Export table data (yes/no): yes > no
    Compress extents (yes/no): yes > no
    Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
    server uses UTF8 character set (possible charset conversion)
    Note: table data (rows) will not be exported
    Note: grants on tables/views/sequences/roles will not be exported
    . exporting pre-schema procedural objects and actions
    . exporting foreign function library names for user DB_USER
    . exporting PUBLIC type synonyms
    . exporting private type synonyms
    . exporting object type definitions for user DB_USER
    About to export DB_USER's objects ...
    . exporting database links
    . exporting sequence numbers
    . exporting cluster definitions
    . about to export DB_USER's tables via Conventional Path ...
    . . exporting table  TABLE_NAMEs
    EXP-00091: Exporting questionable statistics.
    . exporting synonyms
    . exporting views
    . exporting stored procedures
    . exporting operators
    . exporting referential integrity constraints
    . exporting triggers
    . exporting indextypes
    . exporting bitmap, functional and extensible indexes
    . exporting posttables actions
    . exporting materialized views
    . exporting snapshot logs
    . exporting job queues
    . exporting refresh groups and children
    . exporting dimensions
    . exporting post-schema procedural objects and actions
    . exporting statistics
    Export terminated successfully with warnings.
    Unfortunately, you can't export just the objects you want to unless they are tables.
    Using import (imp) you can list the content of your packages, procedures, functions, views, etc. and perhaps that will give you what you need.
    Another choice would be to use SELECT * FROM user_source ORDER BY 2, 1, 3; to list the code.
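    A related option I would add here (not part of the reply above; the object and schema names are placeholders) is DBMS_METADATA, which can extract the DDL of an individual package or procedure:
    SET LONG 100000
    SELECT DBMS_METADATA.GET_DDL('PACKAGE', 'MY_PKG', 'DB_USER') FROM dual;
    SELECT DBMS_METADATA.GET_DDL('PROCEDURE', 'MY_PROC', 'DB_USER') FROM dual;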
    Hope this helps,
    Luke

  • EXP 11.2.0.1 exports only tables with at least 1 row

    I have an Oracle 11.2.0.1 database and use the export utility to export
    tables and row data from the database to a dmp file.
    I have noticed that it now only extracts tables with at least 1 row
    and does not export empty tables,
    so when I execute an import I lose a lot of tables...
    Very dangerous...
    Is there an explanation?

    You may get this effect in release 11.2.0.1 if the table has no segment. You probably have deferred_segment_creation=true. This query may help identify the tables without segments:
    select table_name from user_tables
    minus
    select segment_name from user_segments where segment_type='TABLE';
    (you will of course have to modify the query to handle more complex table structures).
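    A minimal sketch of the usual workaround (the table name is a placeholder): ALTER TABLE ... ALLOCATE EXTENT forces segment creation so that classic exp picks the empty table up, and the ALTER SYSTEM line only affects tables created afterwards.
    ALTER TABLE some_empty_table ALLOCATE EXTENT;
    ALTER SYSTEM SET deferred_segment_creation = FALSE;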
    Data Pump does not have this problem.
    John Watson
    Oracle Certified Master DBA

  • Split a large table into multiple packages - R3load/MIGMON

    Hello,
    We are in the process of reducing the export and import downtime for the UNICODE migration/Conversion.
    In this process, we have identified a couple of large tables which were taking a long time to export and import with a single R3load process.
    Step 1:> We ran the System Copy --> Export Preparation
    Step 2:> System Copy --> Table Splitting Preparation
    We created a file with the large tables which are required to be split into multiple packages, and were able to create a total of 3 WHR files for the following tables under the DATA directory of the main EXPORT directory.
    SplitTables.txt (Name of the file used in the SAPINST)
    CATF%2
    E071%2
    This means we would like each of the above large tables to be exported using 2 R3load processes.
    Step 3:> System Copy --> Database and Central Instance Export
    During the SAPinst process, at the Split STR Files screen, we selected the option 'Split Predefined Tables' and selected the file which has the predefined tables.
    Filename: SplitTable.txt
    CATF
    E071
    When we started the export process, we didn't see the above tables being processed by multiple R3load processes.
    They were exported by a single R3load process.
    In the order_by.txt file, we have found the following entries...
    order_by.txt:
    # generated by SAPinst at: Sat Feb 24 08:33:39 GMT-0700 (Mountain
    Standard Time) 2007
    default package order: by name
    CATF
    D010TAB
    DD03L
    DOKCLU
    E071
    GLOSSARY
    REPOSRC
    SAP0000
    SAPAPPL0_1
    SAPAPPL0_2
    We have selected a total of 20 parallel jobs.
    Here my questions are:
    a> what are we doing wrong here?
    b> Is there a different way to specify/define a large table into multiple packages, so that they get exported by multiple R3load processes?
    I really appreciate your response.
    Thank you,
    Nikee

    Hi Haleem,
    As far as your queries are concerned -
    1. With R3ta, you will split large tables using WHERE clauses; WHR files get generated. If you have mentioned CDCLS%2 in the input file for table splitting, then it generates 2-3 WHR files - CDCLS-1, CDCLS-2 & CDCLS-3 (depending upon the WHERE conditions).
    2. While using MIGMON (for the sequential/parallel export-import process), you have the choice of the package order in the properties file.
      E.g : For Import - In the import_monitor_cmd.properties, specify
    Package order: name | size | file with package names
        orderBy=/upgexp/SOURCE/pkg_imp_order.txt
       And in pkg_imp_order.txt, I have specified the import package order as
      BSIS-7
      CDCLS-3
      SAPAPPL1_184
      SAPAPPL1_72
      CDCLS-2
      SAPAPPL2_2
      CDCLS-1
    Similarly, you can specify the export package order as well in the export properties file...
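    As a sketch, the corresponding line in export_monitor_cmd.properties would look like this (the file path is hypothetical, simply mirroring the import example above):
    orderBy=/upgexp/SOURCE/pkg_exp_order.txt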
    I hope this clarifies your doubt
    Warm Regards,
    SANUP.V

  • Export with Table Splitting : ORA-01115: IO error reading block from file

    Hello,
    We are performing the last dry run of our CU&UC conversion.
    We are now in the process of exporting the ECC6 system (Oracle 10.2.0.4.0, HP-UX ia64) using the sapinst "table splitting preparation" feature.
    When doing so, we are facing critical errors:
    Creating file /export_uni/sapinst_splitting/ora_query3_tmp3_1.sql.
    ERROR 2010-08-11 10:27:28.881
    CJS-00084  SQL statement or script failed. DIAGNOSIS: Error message: ORA-12801: error signaled in parallel query server P002
    ORA-01115: IO error reading block from file 90 (block # 16640)
    ORA-27072: File I/O error
    HPUX-ia64 Error: 22: Invalid argument
    Additional information: 4
    Additional information: 16640
    Additional information: -1
    ORA-01115: IO error reading block from file 90 (block # 16640)
    ORA-27072: File I/O error
    HPUX-ia64 Error: 22: Invalid argument
    ORA-06512: at "SAPR3.TABLE_SPLITTER", line 775
    ORA-06512: at line 1
    I have therefore performed a dbverify; no corruption has been recorded.
    When trying to perform the export without table splitting, it works fine... but the processing time is extremely long, as you can imagine. Any help would be highly appreciated. Regards.
    Raoul

    Thank you Stefan,
    Our HPUX Release seems to be indeed 11v3,
    [root@:/root]# uname -a
    HP-UX B.11.31 U ia64 2566039091 unlimited-user license
    I'll check the installation of the  patch and keep you informed
    Thank you
    Raoul

  • How to Export only some tables with procedures and packages

    Hi...
    I want to export only some tables, packages and procedures. Can anybody please guide me on how to do this?
    Thanks in advance.......
    pal

    Could you please elaborate a bit more on your question? Do you want to export the data from the tables, or do you want to get the table structure and the source of the procedures and packages?
    Thanks
    M Thiyagarajan
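    For the table-data part of the question, a minimal classic-export sketch (the table names are placeholders; procedures and packages need a different approach, such as a user-level export without table data or extracting the source, as discussed in the other threads on this page):
    exp username/password file=some_tables.dmp log=some_tables.log tables=TAB1,TAB2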

  • Smooth way to restart failed R3loads with table splitting

    Hi,
    We are running a Unicode conversion with the largest tables split into 10 or more export/import packages.
    Now some of the packages fail, as they normally do.
    We use sockets technique.
    Do you know a smooth and reliable way to
    - delete the already imported rows, using the WHERE clause from the TSK files (on both the export and import servers),
    - replace "err" with "xeq" (or something else?) in the TSK file (on both the export and import servers),
    - replace "=-" with "=0" in export_state.properties and import_state.properties?
    We have done this manually with the vi editor (see the illustration below), but to me it sounds a bit too risky, as the migration monitor itself might be trying to modify the contents of the .properties files at the same time.
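    For illustration, the TSK status edit described above amounts to changing a failed task line like this (the table name is only an example, using the TSK line format quoted elsewhere on this page):
    D VBFA I err    (failed task as found in the TSK file)
    D VBFA I xeq    (the same line after the edit, so R3load re-executes that task)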
    We are looking for best practices on this one. All you techies, share your experience!
    Thanks a lot!
    BR.
    Samuli Kaisanlahti
    Certified Netweaver Basis consultant

    Dear Samuli,
    My experience with R3load is that editing and modifying TSK files can lead to problems and inconsistencies that will probably appear weeks after the import.
    I'd recommend that, in case of problems, you start the complete load from scratch.
    Regards
    Francisco

  • Using Table Splitting

    Hello,
    I am doing experiments in preparation for a Unicode conversion of a 1 TB MaxDB system. Therefore I have exported the database with sapinst, with a duration of 44 hours.
    To reduce the downtime I made another attempt using the table-splitting option with the export monitor. Because I had not found detailed information, I made a mistake in the configuration of the file import_monitor_cmd.properties, so the export monitor did not use table splitting but still needed 44 hours for the export (in addition to the calculation of the table splitting of around 20 hours, which can be done online).
    At the third try I configured the export monitor correctly, and the duration increased to 7 days.
    Technical details:
    R/3 Enterprise EXT 1.1, kernel 640, MaxDB 7.5, current R3ta
    10 CPU, 30 GB RAM, SAN
    8 parallel processes for export,
    8 tables to split to 16 parts each (COEP, COSS, COSP, COSB, GLPCA, CDCLS, AUSP, IBINVALUES)
    largest table: COSS with 85 GB
    DB-Size filled: around 1 TB, 50 Datafiles
    For MaxDB in particular, it would be best to use the primary key for table splitting. But I found no documentation of the file R3ta_hints.txt with which I could influence the splitting decision of R3ta. (I added IBINVALUES IN_RECNO as a selective column.)
    And now the questions:
    Do you know of detailed documentation of the file import_monitor_cmd.properties?
    Do you know of documentation of the R3ta table-splitting options and the input possibilities of R3ta_hints.txt?
    Can you share some experiences from similar tests or migrations?
    Best regards
    Andreas

    I'm actually trying to do a similar thing with a 2.1 TB MaxDB 7.6.
    (in addition to the calculation of the table splitting of around 20 hours, which can be done online).
    I'd be careful doing that online. If, either at the time R3ta starts to evaluate the distinct values or after that table is finished, someone does an insert into one of those tables, you may miss that entry.
    I used DB50 - Statistics to find out the biggest tables. After I had those, I created my own R3ta_hints.txt, checking each table in SE11 and DB50 to find out which field has the most distinct values. I have 85 tables in the R3ta_hints.txt.
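    For reference, such a hints file simply lists one table and its split column per line. A sketch (only the IBINVALUES entry is taken from this page; the second line is a placeholder and the exact layout is my assumption, so check it against your R3ta documentation):
    IBINVALUES IN_RECNO
    <TABLE_NAME> <COLUMN_NAME>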
    The R3ta run itself took almost three days (even though database prefetching was activated).
    Since I had good experiences with packages of 250 MB in former migrations, I split the rest of the tables with that value. At the end there were 1,114 packages.
    I started the export last Friday; it's still running.
    From my experience, MaxDB can't leverage the parallel unload as one would expect. You may see slightly better speed if you set the parameter LRU_FOR_SCAN to YES, which will then use the full available data cache for table scans (as R3load does) instead of just 10% of the cache when it is set to NO.
    What you see if you use parallel unload is (using x_cons sh ac 1):
    T204   7     -1 User      21781 IO Wait (R)           0 0     45        98030278(s)
    T206   7     -1 User      21911 IOWait(R)(041)        0 0               98030278(s)
    T222   8     -1 User      21887 IOWait(R)(041)        0 0               132941587(s)
    T228   8     -1 User      21775 IOWait(R)(041)        0 0               132941587(s)
    T234   9     -1 User      21899 IOWait(R)(041)        0 0               105696431(s)
    T236   9     -1 User      21893 IOWait(R)(041)        0 0               105696431(s)
    T248  10     -1 User      21905 IO Wait (R)           0 0     40        101974717(s)
    T249  10     -1 User      21311 IO Wait (R)           0 0     41        101974717(s)
    The IOWaits with 041 are not real reads but in fact waits for another task reading the same tree/leaves.
    Markus

  • Error in import of tables splitted with SQL splitter of note 1043380

    I'm doing an import on AIX for an SAP system, while the source system was on Windows, and during the export I split some tables using the splitter from note 1043380.
    The export went fine, and I can see from the .TOC files in the export directory that a huge number of rows was exported for these split tables.
    Nevertheless, during the import I discovered that sapinst marks these tables as completed, but the data is not imported.
    In the .log file for one of them I see this:
    RFF) ERROR: no entry for table "VBFA" in "/export/ABAP/DATA/VBFA-1.TOC"
    (IMP) INFO: import of VBFA completed (0 rows)
    On SDN I found the post:
    Problem with import of split tables.
    But it's not clear how to apply it. I tried to modify the .TSK files for the VBFA pieces, but I'm not able to find the two space characters responsible for this behaviour.
    Below are some rows of one of the .TSK files for the VBFA table:
    VBFA-1__TPI.TSK
    D VBFA I ok
    WHERE (ROWID >= CHARTOROWID('AAAaZsAAEAAAqYJAAA') and ROWID <= CHARTOROWID('AAAaZsAAEAAArYIEI/'))
    /*+ ROWID ("VBFA") */
    D VBFA I ok
    WHERE (ROWID >= CHARTOROWID('AAAaZsAAEAACGUJAAA') and ROWID <= CHARTOROWID('AAAaZsAAEAACIUIEI/'))
    /*+ ROWID ("VBFA") */
    Besides, the post states this is a well-known bug of the splitter from note 1043380, but I'm not able to find any OSS note for it.
    I used the splitter already several times and never encountered this error.
    Any help is really appreciated.
    Best regards

    Sorry I have posted the wrong link, it should be:
    Re: Import error 1647, CLOB fields problem, need help!
    Regards
    Rob

  • Unicode Export - unable to retrieve nametab info for logic table BSEG

    Hi
    We are performing a Unicode export (CU&UC from a 4.6C upgrade to ECC 6.0) and we have encountered this error:
            Without ORDER BY PRIMARY KEY the exported data may be unusable for some databases
    Our OS is HP-UX 11.31 and the database is 10.2.0.2.
    myCluster (63.21.Exp): 1610: inconsistent settings for table position validity detected.
    myCluster (63.21.Exp): 1611: nametab says table positions are valid.
    myCluster (63.21.Exp): 1614: alternate nametab says table positions are not valid.
    myCluster (63.21.Exp): 1617: for field 310 of nametab displacement is 1877, yet dbtabpos shows 1885.
    myCluster (63.21.Exp): 1621: character length is 1 (in) resp. 2 (out).
    myCluster (63.21.Exp): 1257: unable to retrieve nametab info for logic table BSEG      .
    myCluster (63.21.Exp): 8358: unable to acquire nametab info for logic table BSEG      .
    myCluster (63.21.Exp): 2949: failed to convert cluster data of cluster item.
    myCluster: RFBLG      *400**AT10**0000100000**2004*
    myCluster (63.21.Exp): 322: error during conversion of cluster item.
    myCluster (63.21.Exp): 323: affected physical table is RFBLG.
    (CNV) ERROR: data conversion failed.  rc = 2
    (DB) INFO: disconnected from DB
    /usr/sap/SBX/SYS/exe/run/R3load: job finished with 1 error(s)
    /usr/sap/SBX/SYS/exe/run/R3load: END OF LOG: 20081102104452
    We checked note 913783 as per the CU&UC guide, but the correction is only for packages SAPKB70004 to 6, while we are on package SAPKB70011.
    We had found two notes:
    1. Note 1238351 - Hom./Het.System Copy SAP NW 7.0 incl. Enhancement Package 1
    Solution:
    There are two possible workarounds:
    1. Modify DDL<dbs>.TPL (<dbs> = ADA, DB2, DB4, DB6, IND, MSS, ORA) BEFORE the R3load TSK files are generated;
       search for the keyword "negdat:" and add "CLU4" and "VER_CLUSTR" to this line.
    2. Modify the TSK file (most probably SAPCLUST.TSK) BEFORE the R3load import is (re-)started;
       search for the lines starting with "D CLU4 I" and "D VER_CLUSTR I" and change the status (i.e. "err" or "xeq") to "ign" or remove the lines.
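    For illustration, the second workaround amounts to changing the SAPCLUST.TSK lines like this (a sketch based on the note text quoted above):
    D CLU4 I err          ->  D CLU4 I ign
    D VER_CLUSTR I err    ->  D VER_CLUSTR I ign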
    I tried the above solution by editing the DDL*.TPL file, but it just skips the table and marks it as completed. It is not a good solution, as we would miss the data from the table RFBLG.
    2. Note 991401 - SYSCOPY EXPORT FAILS:SAPCLUST:ERROR: Code page conversion:
    Solution
    Activate the table.
    Then call the RADCUCNT report. Do not change the selected parameters, but ensure that 'Overwrite Entries' is selected.  Set the 'Unicode Length' to 2 and fill the last two fields 'Type' and 'Name' with TABL and TACOPAB respectively. Then select 'No Log' or specify a log name.
    Execute the RADCUCNT report and restart the export.
    We have not tried this solution, because SAP is still down and the CDCLS job is still running.
    We would like to know whether you have faced any issues like the one above, and what your suggested approach and solution would be.
    Is it safe to start SAP now (while the CDCLS job is running) and then try to activate the table RFBLG?
    Regards
    Senthil

    Hi Senthil,
    If you have done your pre-conversion steps successfully before and after the upgrade, then you should not see the errors below. However, changes to SPDD tables may sometimes also have an impact during the conversion and throw nametab errors. The program RADCUCNT runs at the end of the upgrade to update the nametab tables if any new changes happened during the upgrade.
    You can run the export any number of times until the jobs complete successfully. And yes, while the export is running you shouldn't bring SAP up.
    The tables you have mentioned are all cluster tables, and CDCLS being the biggest of them, it will take hours to complete depending on the size of your database.
    Do not play around with the .TSK files unless you are sure you want to re-execute them. Your first possibility may not work as expected (the task gets skipped), because there may be multiple copies of the same .TSK file present locally (where you are running the distribution monitor or sapinst) and in the common directory. You may also look at the .TSK.bkp files, because R3load gets its information from them and creates a new .TSK. This is not complicated, but tricky.
    The second possibility is to update the changed tables (e.g. RFBLG...) in the conversion tables. Follow the note, but make sure no R3load processes are running before you start SAP. If you don't want to wait long and are sure about restarting the processes which are currently running, you can kill them and start SAP. Specify your error tables only and follow the instructions given in the note. Once done, bring down the SAP application server and restart the export process using your sapinst or distribution monitor.
    Regards,
    Vamshi.
