Split table

Hi
I have a transactional table that I create fresh each time, with a different
number of records. What I want is to split it each time into two other tables, so that
each of the new tables contains half of the original records,
but without splitting records that belong to the same transaction across the two tables.
e.g.
Transaction_ID,     Attributes
1     ,     a
1     ,     b
1     ,     c
2     ,     a
3     ,     r
3     ,     r
Any ideas?
Thank you very much
K

After commit, the only way to tell which transaction a row belongs to is to use ROWSCNs, which adds 6 more bytes to each row. Could you define the problem in more detail? We may be able to come up with an alternative to the proposed solution.
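If the goal is simply two equal halves that never separate a transaction, one approach is to walk the rows grouped by Transaction_ID and assign whole groups to the first half until it holds half the row count. A minimal sketch in Python (outside the database; the rows could come from any query ordered by Transaction_ID):

```python
from itertools import groupby

def split_half(rows):
    """Split (transaction_id, attribute) rows into two lists of roughly
    equal size without separating rows of the same transaction.
    Assumes 'rows' is sorted by transaction id."""
    total = len(rows)
    first, second = [], []
    for _, grp in groupby(rows, key=lambda r: r[0]):
        grp = list(grp)
        # assign whole transactions to the first half until it is full,
        # then everything else goes to the second half
        if not second and len(first) < total // 2:
            first.extend(grp)
        else:
            second.extend(grp)
    return first, second

rows = [(1, 'a'), (1, 'b'), (1, 'c'), (2, 'a'), (3, 'r'), (3, 'r')]
half1, half2 = split_half(rows)
```

A large transaction straddling the midpoint will make the halves unequal, since only whole transactions move. In SQL the same idea can be expressed by ranking distinct Transaction_IDs (e.g. with analytic functions such as NTILE) and joining back to the detail rows.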

Similar Messages

  • How to split table in the report so it shows on the next page?

    Hi,
    How do I split a table in the report so that it continues on the next page? I'm trying to fit a long (many-column) table onto my report page, but it is too wide. I want it to wrap and show the remaining columns on the next page. What I get now is the table cut off at the page edge, with the rest not visible.

    Yes, it might be that the amount of data causes the table to grow and exceed 1, 2, 3... pages. In that case I would probably want to have, let's say, one half of the columns on the first page, the other half on the second page, and then the same sequence repeated down across all pages. Is there a way to achieve this?

  • Slow split table export (R3load and WHERE clause)

    For our split table exports, we used custom coded WHERE clauses. (Basically adding additional columns to the R3ta default column to take advantage of existing indexes).
    The results have been good so far. Full tablescans have been eliminated and export times have gone down, in some cases, tables export times have improved by 50%.
    However, our biggest table, CE1OC01 (120 GB), continues to be a bottleneck. Initially, after using the new WHERE clause, it looked like performance gains were dramatic, with export times for the first 5 packages dropping from 25-30 hours down to 1 1/2 hours.
    However, after 2 hours, the remaining CE1OC01 split packages have shown no improvement. This is very odd, and we are trying to determine why part of the table exports very fast while other parts run very slowly.
    Before the custom WHERE clauses, the export server had run into issues with SORTHEAP being exhausted, so we thought that might be the culprit. But that does not seem to be an issue now, since the improved WHERE clauses have reduced or eliminated excessive sorting.
    I checked the access path of all the CE1OC01 packages, through EXPLAIN, and they all access the same index to return results. The execution time in EXPLAIN returns similar times for each of the packages:
    CE1OC01-11: select * from CE1OC01  WHERE MANDT='212'
    AND ("BELNR" > '0124727994') AND ("BELNR" <= '0131810250')
    CE1OC01-19: select * from CE1OC01 WHERE MANDT='212'
    AND ("BELNR" > '0181387534') AND ("BELNR" <= '0188469413')
    0 SELECT STATEMENT ( Estimated Costs = 8.448E+06 [timerons] )
    |
    --- 1 RETURN
        |
        --- 2 FETCH CE1OC01
            |
            --- 3 IXSCAN CE1OC01~4 #key columns: 2
    query execution time [millisec]    |       333
    uow elapsed time [microsec]        |   429,907
    total user CPU time [microsec]     |         0
    total system CPU time [microsec]   |         0
    Both queries utilize an index that has fields MANDT and BELNR. However, during R3load, CE1OC01-19 finishes in an hour and a half, whereas CE1OC01-11 can take 25-30 hours.
    I am wondering if there is anything else to check on the DB2 access-path side of things, or if I need to start digging deeper into other aggregate load/infrastructure issues. Other tables don't seem to exhibit this behavior; there is some discrepancy between other tables' run times (for example, 2-4 hours), but nothing as dramatic as with this particular table.
    Another idea to test is to export only 5 parts of the table at a time; perhaps there is a throughput or logical limitation when all 20 of the exports run at the same time. Or create a single-column index on BELNR (the default R3ta column) and see if that shows any improvement.
    Does anyone have any ideas on why part of the table moves fast while the rest moves slowly?
    We also notice that the "fast" parts of the table are at the very end of the table. We are wondering if perhaps the index is less fragmented in that range, a REORG or recreation of the index may do this table some good. We were hoping to squeeze as many improvements out of our export process as possible before running a full REORG on the database. This particular index (there are 5 indexes on this table) has a Cluster Ratio of 54%, so, perhaps for purposes of the export, it may make sense to REORG the table and cluster it around this particular index. By contrast, the primary key index has a Cluster Ratio of 86%.
    Here is the output from our current run. The "slow" parts of the table have not completed, but they average a throughput of 0.18 MB/min, versus the "fast" parts, which average 5 MB/min, a pretty dramatic difference.
    package     time      start date        end date          size MB  MB/min
    CE1OC01-16  10:20:37  2008-11-25 20:47  2008-11-26 07:08   417.62    0.67
    CE1OC01-18   1:26:58  2008-11-25 20:47  2008-11-25 22:14   429.41    4.94
    CE1OC01-17   1:26:04  2008-11-25 20:47  2008-11-25 22:13   416.38    4.84
    CE1OC01-19   1:24:46  2008-11-25 20:47  2008-11-25 22:12   437.98    5.17
    CE1OC01-20   1:20:51  2008-11-25 20:48  2008-11-25 22:09   435.87    5.39
    CE1OC01-1    0:00:00  2008-11-25 20:48                       0.00
    CE1OC01-10   0:00:00  2008-11-25 20:48                     152.25
    CE1OC01-11   0:00:00  2008-11-25 20:48                     143.55
    CE1OC01-12   0:00:00  2008-11-25 20:48                     145.11
    CE1OC01-13   0:00:00  2008-11-25 20:48                     146.92
    CE1OC01-14   0:00:00  2008-11-25 20:48                     140.00
    CE1OC01-15   0:00:00  2008-11-25 20:48                     145.52
    CE1OC01-2    0:00:00  2008-11-25 20:48                     184.33
    CE1OC01-3    0:00:00  2008-11-25 20:48                     183.34
    CE1OC01-4    0:00:00  2008-11-25 20:48                     158.62
    CE1OC01-5    0:00:00  2008-11-25 20:48                     157.09
    CE1OC01-6    0:00:00  2008-11-25 20:48                     150.41
    CE1OC01-7    0:00:00  2008-11-25 20:48                     175.29
    CE1OC01-8    0:00:00  2008-11-25 20:48                     150.55
    CE1OC01-9    0:00:00  2008-11-25 20:48                     154.84

    Hi all, thanks for the quick and extremely helpful answers.
    Beck,
    Thanks for the health check. We are exporting the entire table in parallel, so all the exports begin at the same time. Regarding the SORTHEAP, we initially thought that might be our problem, because we were running out of SORTHEAP on the source database server. Looks like for this run, and the previous run, SORTHEAP has remained available and has not overrun. That's what was so confusing, because this looked like a buffer overrun.
    Ralph,
    The WHERE technique you provided worked perfectly. Our export times have improved dramatically by switching to the forced full tablescan. Having always been trained to eliminate full tablescans, it seems counterintuitive at first, but given the nature of the export query, combined with the unsorted export, it now makes total sense why the tablescan works so much better.
    Looks like you were right: in this case the index adds too much additional overhead, especially since our Cluster Ratio was terrible (in the 50% range), so the index was definitely working against us, bouncing all over the place to pull the data out.
    We're going to look at some of our other long running tables and see if this technique improves runtimes on them as well.
    Thanks so much, that helped us out tremendously. We will verify the data from source to target matches up 1 for 1 by running a consistency check.
    Look at the throughput difference between the previous run and the current run:
    package     time       start date        end date          size MB  MB/min
    CE1OC01-11   40:14:47  2008-11-20 19:43  2008-11-22 11:58   437.27    0.18
    CE1OC01-14   39:59:51  2008-11-20 19:43  2008-11-22 11:43   427.60    0.18
    CE1OC01-12   39:58:37  2008-11-20 19:43  2008-11-22 11:42   430.66    0.18
    CE1OC01-13   39:51:27  2008-11-20 19:43  2008-11-22 11:35   421.09    0.18
    CE1OC01-15   39:49:50  2008-11-20 19:43  2008-11-22 11:33   426.54    0.18
    CE1OC01-10   39:33:57  2008-11-20 19:43  2008-11-22 11:17   429.44    0.18
    CE1OC01-8    39:27:58  2008-11-20 19:43  2008-11-22 11:11   417.62    0.18
    CE1OC01-6    39:02:18  2008-11-20 19:43  2008-11-22 10:45   416.35    0.18
    CE1OC01-5    38:53:09  2008-11-20 19:43  2008-11-22 10:36   413.29    0.18
    CE1OC01-4    38:52:34  2008-11-20 19:43  2008-11-22 10:36   424.06    0.18
    CE1OC01-9    38:48:09  2008-11-20 19:43  2008-11-22 10:31   416.89    0.18
    CE1OC01-3    38:21:51  2008-11-20 19:43  2008-11-22 10:05   428.16    0.19
    CE1OC01-2    36:02:27  2008-11-20 19:43  2008-11-22 07:46   409.05    0.19
    CE1OC01-7    33:35:42  2008-11-20 19:43  2008-11-22 05:19   414.24    0.21
    CE1OC01-16    9:33:14  2008-11-20 19:43  2008-11-21 05:16   417.62    0.73
    CE1OC01-17    1:20:01  2008-11-20 19:43  2008-11-20 21:03   416.38    5.20
    CE1OC01-18    1:19:29  2008-11-20 19:43  2008-11-20 21:03   429.41    5.40
    CE1OC01-19    1:16:13  2008-11-20 19:44  2008-11-20 21:00   437.98    5.75
    CE1OC01-20    1:14:06  2008-11-20 19:49  2008-11-20 21:03   435.87    5.88
    PLPO          0:52:14  2008-11-20 19:43  2008-11-20 20:35    92.70    1.77
    BCST_SR       0:05:12  2008-11-20 19:43  2008-11-20 19:48    29.39    5.65
    CE1OC01-1     0:00:00  2008-11-20 19:43                       0.00
                558:13:06  2008-11-20 19:43  2008-11-22 11:58  8171.62
    package     time      start date        end date          size MB   MB/min
    CE1OC01-9    9:11:58  2008-12-01 20:14  2008-12-02 05:26   1172.12    2.12
    CE1OC01-5    9:11:48  2008-12-01 20:14  2008-12-02 05:25   1174.64    2.13
    CE1OC01-4    9:11:32  2008-12-01 20:14  2008-12-02 05:25   1174.51    2.13
    CE1OC01-8    9:09:24  2008-12-01 20:14  2008-12-02 05:23   1172.49    2.13
    CE1OC01-1    9:05:55  2008-12-01 20:14  2008-12-02 05:20   1188.43    2.18
    CE1OC01-2    9:00:47  2008-12-01 20:14  2008-12-02 05:14   1184.52    2.19
    CE1OC01-7    8:54:06  2008-12-01 20:14  2008-12-02 05:08   1173.23    2.20
    CE1OC01-3    8:52:22  2008-12-01 20:14  2008-12-02 05:06   1179.91    2.22
    CE1OC01-10   8:45:09  2008-12-01 20:14  2008-12-02 04:59   1171.90    2.23
    CE1OC01-6    8:28:10  2008-12-01 20:14  2008-12-02 04:42   1172.46    2.31
    PLPO         0:25:16  2008-12-01 20:14  2008-12-01 20:39     92.70    3.67
                90:16:27  2008-12-01 20:14  2008-12-02 05:26  11856.91
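    As a side note, the MB/min column in these tables is just size divided by elapsed wall-clock time; a small sketch for reproducing the figures (the 'H:MM:SS' duration format follows the run tables above):

```python
def throughput_mb_per_min(size_mb, elapsed):
    """Throughput in MB/min from a size in MB and an 'H:MM:SS' duration,
    as listed in the R3load run tables above."""
    h, m, s = (int(x) for x in elapsed.split(':'))
    return size_mb / (h * 60 + m + s / 60.0)

# CE1OC01-19 in the fast run: 437.98 MB in 1:16:13
rate = throughput_mb_per_min(437.98, '1:16:13')  # about 5.75 MB/min
```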

  • Problem with import of split tables.

    We are creating a copy of our production system. The system copy involves a change of operating system from HP-UX to Windows x86_64 and also includes a Unicode conversion. We followed these steps:
    1) Performed post copy steps and pre-unicode conversion steps
    2) Split big tables using OSS note 1043380 - Efficient Table Splitting
    for Oracle Databases
    3) Copied *.WHR files to export directory and performed export using SAPinst
    4) Copied files export dump to new server.
    5) Created whr.txt manually and copied it to the DATA directory of
    export dump
    6) Started import using SAPinst
    Surprisingly, the import does not consider the split tables but marks
    them as complete in the TSK files. The number of rows imported is also 0 for
    those tables.
    The only error is:
    (RFF) ERROR: no entry for table "CDCLS" in "F:\export\ABAP\DATA\CDCLS-1.TOC"
    (IMP) INFO: import of CDCLS completed (0 rows) #20110122022015
    Why is SAPinst not importing the tables that were split? All tables that were not split import successfully, and R3load is patched.

    Resolved.
    This is a known issue caused by the SQL splitter tool, which
    generates space characters at the end of the 'where' clause within the
    TSK file.
    During the import, those entries are consequently not recognized and
    nothing is imported, so in the import log you get:
    no entry for table "BALDAT" in "F:\ccpexport\CCP\ABAP\DATA\BALDAT-1.TOC"
    As a workaround, please apply the following:
    1. Remove all blank characters at the end of the WHR clause in the TSK
    file, i.e.:
    D BALDAT I ok
    WHERE ("LOG_HANDLE" < '4iLKIlCIjLpX0000p5bqa0')__ <-- I have marked the
    two blank characters at the end of the line with '_'
    Remove these characters so that there are no blanks at the end of
    the line.
    2. Set the status in the corresponding TSK file from 'ok' to 'err'.
    3. Repeat the import of the corresponding packages.
    I would also like to mention that, as per note 1043380, you should use
    an R3load 7.00 version built after June 18th, 2008 for both the export
    and the import; this issue should be fixed there.
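    The manual workaround above (trim the trailing blanks after the WHERE clause, then reset the status from 'ok' to 'err') is mechanical enough to script; a minimal sketch based on the TSK fragments quoted above (real TSK files may contain other record types, which pass through unchanged):

```python
def fix_tsk_lines(lines):
    """Strip the trailing blanks that break the import and flip the
    task status from 'ok' to 'err' so the package is re-imported."""
    fixed = []
    for line in lines:
        line = line.rstrip('\n')
        if line.startswith('WHERE '):
            line = line.rstrip(' ')   # the invisible trailing blanks
        elif line.endswith(' ok'):
            line = line[:-2] + 'err'  # force re-import of this entry
        fixed.append(line)
    return fixed

sample = ['D BALDAT I ok\n',
          'WHERE ("LOG_HANDLE" < \'4iLKIlCIjLpX0000p5bqa0\')  \n']
cleaned = fix_tsk_lines(sample)
```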

  • Split Table Procedure in Database Migration

    Hello,
    We are doing a database migration from Informix to Oracle.
    We have a big table, PCL4, in our system, which is around 100 GB.
    I want to split the table and would like to know the procedure for doing so:
    what changes are to be done at the database level, in the STR files, in the TOC files, and (if any) in the .cmd files?
    Thanks & Regards,
    RaHuL...

    RaHul,
    In my last experience with an Informix migration (HeSC), I didn't use table splitting, because of the following caution:
    CAUTION: This may not be possible on databases that need exclusive access on the table when creating an index (for example, Informix).
    I used the following Informix features instead:
    1. We fragmented the big table (in our case CDCLS) into four pieces (four DBSPACEs) and moved CDCLS~0 (the index) to another DBSPACE.
    2. We increased the CPU VPs and used the following Informix parameters:
    PDQPRIORITY 100
    PSORT_NPROCS 32
    The result of the test: about 5 h 50 min less export time.
    Cheers,
    Jairo

  • Split tables for DMO

    I am preparing to use DMO to upgrade/convert an Oracle/Unix-based BW 7.1 system to BW 7.4 on HANA/SUSE.
    Some tables are very big, so I plan to split them.
    However, the DMO documentation does not show the step where the table split can be configured.
    Could you please share your experience on how to do this?
    Thanks!!

    Hi, Boris,
    We have many large tables, all greater than 100 GB, but DMO only chose 3 large and 9 small tables to split. This leaves us waiting for the large tables still running in the final phase.
    Could you share with us what logic DMO uses to choose tables to split?
    3 ETQ399 Looking for tables to be split:                                                                       
    4 ETQ399 /BIC/AZI_BLS0100               size   358470/  358470 MB split with segment size 0.500000             
    4 ETQ399 /BIC/B0004785000               size   349106/  349106 MB split with segment size 0.500000             
    4 ETQ399 /BIC/AZFIGLO0200               size   300177/  300177 MB split with segment size 0.500000             
    4 ETQ399 RSBATCHDATA                    size    50614/   25307 MB split with segment size 0.267057 (has 1 blobs)
    4 ETQ399 RSZWOBJ                        size     9406/    4703 MB split with segment size 0.500000 (has 1 blobs)
    4 ETQ399 ARFCSDATA                      size    25138/   25138 MB split with segment size 0.268852             
    4 ETQ399 RSBMREQ_DTP                    size    25147/   25147 MB split with segment size 0.268756             
    4 ETQ399 RSBMONMESS_DTP                 size    22417/   22417 MB split with segment size 0.301485             
    4 ETQ399 RSBMLOGPAR_DTP                 size    14525/   14525 MB split with segment size 0.465294             
    4 ETQ399 RSODSACTDATA                   size     7244/    3622 MB split with segment size 0.500000 (has 1 blobs)
    4 ETQ399 RSSELDONE                      size     6798/    6798 MB split with segment size 0.500000             
    4 ETQ399 BDLDATCOL                      size     9882/    4941 MB split with segment size 0.500000 (has 1 blobs)
    3 ETQ399 Identified 12 large tables out of 52282 entries.            
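    For what it's worth, the fractions in this log are consistent with a simple rule. This is an inference from the output above, not documented DMO behaviour: tables with LOBs count at half their size, and the segment size is the fraction of that effective size that yields segments of roughly 6.76 GB, capped at 0.5:

```python
def dmo_segment_size(size_mb, has_blobs, target_mb=6758.4):
    """Reproduce the 'split with segment size' fractions in the log above.
    Reverse-engineered, not documented behaviour: the 6758.4 MB target
    is fitted to the log output."""
    effective = size_mb / 2 if has_blobs else size_mb  # LOB tables count half
    return min(0.5, target_mb / effective)

# e.g. RSBATCHDATA (50614 MB, 1 blob) comes out near 0.267057
frac = dmo_segment_size(50614, has_blobs=True)
```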

  • How to split table records

    Hi Gurus,
    Need your help in this.
    I am trying to split records in a table through an ABAP routine. The following is the source table, with 2 records.
    To split the records, I wrote the following code in the start routine:
    DATA: ITAB_SOURCE TYPE _ty_t_SC_1,
          WA_SOURCE   TYPE _ty_s_SC_1,
          TMP_COUNTER TYPE I VALUE 1.
    LOOP AT SOURCE_PACKAGE ASSIGNING <SOURCE_FIELDS>.
      CLEAR WA_SOURCE.
      MOVE <SOURCE_FIELDS> TO WA_SOURCE.
      TMP_COUNTER = 1.
      DO 12 TIMES.
        WA_SOURCE-CALMONTH2 = TMP_COUNTER.
        APPEND WA_SOURCE TO ITAB_SOURCE.
        TMP_COUNTER = TMP_COUNTER + 1.
      ENDDO.
      "TMP_COUNTER = 1.
    ENDLOOP.
    "REFRESH SOURCE_PACKAGE.
    MOVE ITAB_SOURCE TO SOURCE_PACKAGE.
    However, I can see only 1 record being split. Kindly point out the flaw in my routine. The following is the result of the routine.
    Please help

    It's already defined as a table, hence the update of 12 records.
    The note (1258089) is really straightforward; I'm not sure what I could add to make it clearer. You're obviously in scenario 1: instead of "changing" only the value of WA_SOURCE-CALMONTH2, you also have to "update" the value of WA_SOURCE-RECORD.
    Your initial records have RECORD values 1 (for employee 9000125) and 2 (for employee 9001673).
    Since you don't correctly update the values for RECORD, they're just being overwritten in your second loop.
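    To make the fix concrete, here is the corrected logic sketched in Python (field names are taken from the routine above; the ABAP fix follows the same shape): each generated row gets its own running RECORD value instead of keeping the source record's.

```python
def expand_months(source_package):
    """Expand each source record into 12 monthly rows, giving every
    generated row a unique, running RECORD value -- the fix pointed out
    above: changing CALMONTH2 alone is not enough."""
    result = []
    record = 0
    for row in source_package:
        for month in range(1, 13):
            new_row = dict(row)
            new_row['CALMONTH2'] = month
            record += 1
            new_row['RECORD'] = record  # unique per generated row
            result.append(new_row)
    return result

src = [{'EMPLOYEE': '9000125', 'RECORD': 1},
       {'EMPLOYEE': '9001673', 'RECORD': 2}]
expanded = expand_months(src)  # 24 rows, RECORD running 1..24
```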

  • Splitting table partition

    Hi All,
    I have a table with several partitions. The latest partition (the key is a date column) holds values less than MAXVALUE. The table has several local indexes on it. It's too large, and I'd like to split the latest partition. I have the following questions:
    1. Can I simply split the partition and expect everything to work?
    2. What about the indexes: do I have to re-create them, or will they be split along with the table? I'd like to specify a new tablespace for the newly partitioned index. How can I do that?
    3. How can I specify a tablespace for the newly partitioned data? Should I move the rows between tablespaces after the split?
    4. Can anyone share any experience with me?
    The database is Oracle 9i. Any help is highly appreciated.
    Thanks a lot
    Vissu

    1. Yes, it works fine.
    2. Yes, you have to rebuild the index:
    alter index my_index rebuild partition my_partition tablespace new_tablespace ;
    3. Check the syntax of the split partition command :-)
    4. Yes :-) it all works really fine. I have done split partition and exchange partition before; everything works really fine.
    Fred

  • Split table record into several lines - pdf forms

    Hello experts,
    I'm trying to split a table record into several lines in order to present the whole record on the form.
    For example:
    table T has 4 fields: F1, F2, F3, F4.
    If the table has 2 records, R1 and R2 (every record contains 4 data fields), then I want to present my table in the following way:
    R1-F1 R1-F2
    R1-F3 R1-F4
    R2-F1 R2-F2
    R2-F3 R2-F4
    Please do not refer me to links; I really need a specific procedure.
    Thanks ahead to all,
    Eyal
    P.S. I am using Adobe LiveCycle (transaction SFP).

    Hey everyone,
    it has been solved. For the record:
    a subform "table" (flowed content) contains 2 positioned subforms:
    a subform "header", and
    a subform "lines" that contains 2 flowed subforms, one for the first two fields and one for the last two fields.
    Thanks anyway,
    Eyal

  • Split table outsite design form

    Hello,
    Maybe there is a simple answer for it, but my problem is the following:
    I have created a table with a couple of rows and columns on my form. If I split a row, the table gets wider; the table width is increased.
    Sometimes the table goes over the right edge of the form, and at that moment I can't see the right side of the table on my form to decrease the table size. How can I decrease the table size when the table goes over the right side of the form after inserting a table column?
    Kind regards,
    Richard Meijn

    The table is created inside a subform, but if you make a table with 1 column and 1 row with the width of the page, and then split the table so you get 2 columns, the second column goes outside the page. Now I'm no longer able to pick up the right edge to move it back to the page size.
    kind regards,
    Richard Meijn

  • Oracle BI answers how to split table column?

    I have one issue, and I don't know whether it is possible. If it is possible, please tell me.
    In Answers I select the cube's columns.
    The result in the Answers table:
    --------------------------------|
    CUBE1 |
    --------------------------------|
    COL1 | COL2 | COL3 |
    --------------------------------|
    data1 | data2 | data3 |
    --------------------------------|
    I want to change like
    ---------------------------------------|
    NAME1 |
    ---------------------------------------|
    NAME2 | NAME3 |
    ---------------------------|
    NAME2 | NAME5 | NAME6 |
    ---------------------------------------|
    COL1 | COL2 | COL3 |
    ---------------------------------------|
    data1 | data2 | data3 |
    ----------------------------------------|
    help me

    Can somebody help with this, please?

  • Error in import of tables splitted with SQL splitter of note 1043380

    I'm doing an import on AIX for an SAP system; the source system was on Windows, and during the export I split some tables using the splitter from note 1043380.
    The export went fine, and I can see from the .TOC files in the export directory that a huge number of rows was exported for these split tables.
    Nevertheless, during the import I discovered that SAPinst marks these tables as completed, but the data is not imported.
    In the .log file for one of them I see this:
    (RFF) ERROR: no entry for table "VBFA" in "/export/ABAP/DATA/VBFA-1.TOC"
    (IMP) INFO: import of VBFA completed (0 rows)
    On SDN I found the post:
    Problem with import of split tables.
    But it's not clear how to apply it. I tried to modify the .TSK files for the VBFA pieces, but I'm not able to find the two space characters responsible for this behaviour.
    Below some row of one of the .TSK for the VBFA table:
    VBFA-1__TPI.TSK
    D VBFA I ok
    WHERE (ROWID >= CHARTOROWID('AAAaZsAAEAAAqYJAAA') and ROWID <= CHARTOROWID('AAAaZsAAEAAArYIEI/'))
    /*+ ROWID ("VBFA") */
    D VBFA I ok
    WHERE (ROWID >= CHARTOROWID('AAAaZsAAEAACGUJAAA') and ROWID <= CHARTOROWID('AAAaZsAAEAACIUIEI/'))
    /*+ ROWID ("VBFA") */
    Besides, that post states this is a well-known bug of the splitter from note 1043380, but I'm not able to find any OSS note for it.
    I have used the splitter several times before and never encountered this error.
    Any help is really appreciated.
    Best regards

    Sorry, I posted the wrong link; it should be:
    Re: Import error 1647, CLOB fields problem, need help!
    Regards
    Rob

  • Table split across pages

    Is it possible to set up InDesign to automatically split a table across frames, like Microsoft Word does across pages?

    ID splits tables only between cell rows, not mid-row, but it will certainly split a table between frames.

  • Urgent: Unicode conversion - table splitting

    Hi all,
    I am having a problem when trying to perform the export step of the Unicode conversion on ERP2005, MS SQL Server 2005. Due to a previously very long runtime, I am trying to use the table-splitting option. I have performed the "Table Splitting Preparation" step with what seems to be success.
    The problem is when I run the actual database instance export.
    First of all:
    Which filepath should I provide SAPinst when it is asking for "Table input file"?
    Second:
    How can I actually determine the package unload order? I tried selecting this option, but I was not given the opportunity to change the order on the subsequent screen. (The F1 help in SAPinst says that I should be able to.)
    Does anyone here have experience with this?
    Best Regards,
    Thomas

    Hi Thomas,
    As I can imagine from the date of your last posting, you probably have your answers already. But just so other people searching on this topic can find the answer here, I will fill in the blanks.
    First, when using table splitting, you must export using the Migration Monitor (MIGMON). That means, when you run SAPINST, on the ABAP System -> Database Export screen, select the export method "Export using Migration Monitor".
    At this point, as you said, you have already run the table-splitting option from SAPINST, so the WHR files are in the export DATA directory; the whr.txt file is also there.
    The file-input screen you mention does not appear when you select the export method above, but you can create such a file for MIGMON. You have to create it on your own, and you can only do so after SAPINST has split the STR files.
    When finished with SAPINST, create the .txt file (I usually call it table_order.txt) and add the names of the STR files which you want exported first. After MIGMON completes that list, all other STR files are exported in alphabetical order.
    Create your table_order.txt file by inserting the file names of the packages that were created, without the STR/WHR extensions (the list below uses some large tables which the STR splitter broke out into their own STR files during my last export). Look in DB02 and sort descending by used space to determine the order in which these tables should be exported and listed in the .txt file:
    <SPLIT Table>-1
    <SPLIT Table>-2
    <SPLIT Table>-n
    BSIS
    RFBLG
    CE1VVOC
    CE1WWOC
    DBTABLOG
    CE3VVOC
    COEP
    ARFCSDATA
    GLPCA
    KONV
    SWW_CONT
    SOC3
    CDCLS
    CE3WWOC
    BSE_CLR
    STXL
    EDIDS
    COSP
    BSAD
    EDI40
    ACCTIT
    BSIM
    VBFS
    BSAS
    ACCTCR
    CDHDR
    CE4VVOC_ACCT
    SGOHIST
    MSEG
    In the export_monitor_cmd.properties file, you specify this file name for the 'orderBy=' parameter. Create and keep the table_order.txt file in the same directory as the export_monitor.sh/.bat files.
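    Assembling table_order.txt from the DB02 size listing can also be scripted; a minimal sketch (the sizes and split counts below are hypothetical; the naming follows the <SPLIT Table>-n pattern above):

```python
def build_table_order(sizes_mb, split_parts):
    """Return package names ordered by used space, descending; split
    tables are expanded into their -1..-n packages.
    sizes_mb: {table: used MB from DB02}; split_parts: {table: n pieces}."""
    lines = []
    for table in sorted(sizes_mb, key=sizes_mb.get, reverse=True):
        if table in split_parts:
            lines += ['%s-%d' % (table, i)
                      for i in range(1, split_parts[table] + 1)]
        else:
            lines.append(table)
    return lines

# hypothetical sizes; write the result to table_order.txt next to
# export_monitor.sh/.bat and point the 'orderBy=' parameter at it
order = build_table_order({'CDCLS': 90000, 'BSIS': 120000, 'KONV': 40000},
                          split_parts={'CDCLS': 3})
```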
    For importing, again you will need to use the Migration Monitor. SAPINST will automatically stop and prompt you to start the import using MIGMON. Here, you can specify the same table_order.txt file. You might want to amend it to control when the rest of the packages are imported, if you found one or more tables holding up the completion of the export.
    I hope this helps someone.
    Best Regards,
    Warren Chirhart

  • Better Practice - split dimension table or split hierarchy?

    Hi All,
    I have an Org dimension table that contains both geography and organizational structure (columns state, city, etc. and columns division, department, etc.). Is it better to create one dimensional hierarchy with 2 trees (one being a geography sub-hierarchy from ALL, the other being ORG), or is it better to split the table at the BMM layer and create separate logical geo and org tables? What do you think? Thank you

    Great help, thank you. I was also leaning towards the same decision, but now I'm sure. Venkat, when you say that OBIEE cannot handle 2 hierarchies, do you mean that a fiscal year / calendar year model wouldn't work? I think that's how it is done in the vanilla OBI marketing app. I just want to make sure I understood you correctly. If yes, then are those 2 expected to have the same number of levels (i.e. I might have a Week level in Calendar, but skip it for Fiscal)? Thanks
