Reg PLD Table Export

Hi All,
        I have designed the PLD for one company, and I need to use the same format for most of the reports, like Purchase Order, Sales Invoice, etc. For that, I would like to export the RDOC and RITM tables to the new company DB. I am exporting with the overwrite option.
Is that OK, or will it cause any other problems?
Please suggest how to proceed.
Regards
Suresh R

Hi,
I think the official way is to use the ReportLayoutsService object.
Have a look at it - you can create XML files from your report and then load the XML file in another company.
regards
David

Similar Messages

  • Slow split table export (R3load and WHERE clause)

    For our split table exports, we used custom-coded WHERE clauses (basically adding additional columns to the default R3ta column to take advantage of existing indexes).
    The results have been good so far. Full tablescans have been eliminated and export times have gone down; in some cases, table export times have improved by 50%.
    However, our biggest table, CE1OC01 (120 GB), continues to be a bottleneck. Initially, after using the new WHERE clause, it looked like the performance gains were dramatic, with export times for the first 5 packages dropping from 25-30 hours down to 1 1/2 hours.
    However, after 2 hours, the remaining CE1OC01 split packages have shown no improvement. This is very odd, and we are trying to determine why part of the table exports very fast while other parts run very slow.
    Before the custom WHERE clauses, the export server had run into issues with SORTHEAP being exhausted, so we thought that might be the culprit. But that does not seem to be an issue now, since the improved WHERE clauses have reduced or eliminated the excessive sorting.
    I checked the access path of all the CE1OC01 packages through EXPLAIN, and they all use the same index to return results. EXPLAIN also returns similar execution times for each of the packages:
    CE1OC01-11: select * from CE1OC01  WHERE MANDT='212'
    AND ("BELNR" > '0124727994') AND ("BELNR" <= '0131810250')
    CE1OC01-19: select * from CE1OC01 WHERE MANDT='212'
    AND ("BELNR" > '0181387534') AND ("BELNR" <= '0188469413')
          0 SELECT STATEMENT ( Estimated Costs = 8.448E+06 [timerons] )
          |
          ---  1 RETURN
               |
               ---  2 FETCH CE1OC01
                    |
                    ---  3 IXSCAN CE1OC01~4  #key columns: 2
    query execution time [millisec]            |       333
    uow elapsed time [microsec]                |   429,907
    total user CPU time [microsec]             |         0
    total system cpu time [microsec]           |         0
    Both queries utilize an index that has fields MANDT and BELNR. However, during R3load, CE1OC01-19 finishes in an hour and a half, whereas CE1OC01-11 can take 25-30 hours.
    I am wondering if there is anything else to check on the DB2 access path side of things or if I need to start digging deeper into other aggregate load/infrastructure issues. Other tables don't seem to exhibit this behavior. There is some discrepancy between other tables' run times (for example, 2-4 hours), but those are not as dramatic as this particular table.
    Another idea to test is to export only 5 parts of the table at a time; perhaps there is a throughput or logical limitation when all 20 of the exports are running at the same time. Or create a single-column index on BELNR (the default R3ta column) and see if that shows any improvement.
    Anyone have any ideas on why some of the table moves fast but the rest of it moves slow?
    We also notice that the "fast" parts of the table are at the very end of the table. We are wondering if perhaps the index is less fragmented in that range; a REORG or recreation of the index may do this table some good. We were hoping to squeeze as many improvements out of our export process as possible before running a full REORG on the database. This particular index (there are 5 indexes on this table) has a cluster ratio of 54%, so, for purposes of the export, it may make sense to REORG the table and cluster it around this particular index. By contrast, the primary-key index has a cluster ratio of 86%.
    Here is the output from our current run. The "slow" parts of the table have not completed, but they average a throughput of 0.18 MB/min, versus the "fast" parts, which average 5 MB/min, a pretty dramatic difference.
    package     time      start date        end date          size MB  MB/min
    CE1OC01-16  10:20:37  2008-11-25 20:47  2008-11-26 07:08   417.62    0.67
    CE1OC01-18   1:26:58  2008-11-25 20:47  2008-11-25 22:14   429.41    4.94
    CE1OC01-17   1:26:04  2008-11-25 20:47  2008-11-25 22:13   416.38    4.84
    CE1OC01-19   1:24:46  2008-11-25 20:47  2008-11-25 22:12   437.98    5.17
    CE1OC01-20   1:20:51  2008-11-25 20:48  2008-11-25 22:09   435.87    5.39
    CE1OC01-1    0:00:00  2008-11-25 20:48                       0.00
    CE1OC01-10   0:00:00  2008-11-25 20:48                     152.25
    CE1OC01-11   0:00:00  2008-11-25 20:48                     143.55
    CE1OC01-12   0:00:00  2008-11-25 20:48                     145.11
    CE1OC01-13   0:00:00  2008-11-25 20:48                     146.92
    CE1OC01-14   0:00:00  2008-11-25 20:48                     140.00
    CE1OC01-15   0:00:00  2008-11-25 20:48                     145.52
    CE1OC01-2    0:00:00  2008-11-25 20:48                     184.33
    CE1OC01-3    0:00:00  2008-11-25 20:48                     183.34
    CE1OC01-4    0:00:00  2008-11-25 20:48                     158.62
    CE1OC01-5    0:00:00  2008-11-25 20:48                     157.09
    CE1OC01-6    0:00:00  2008-11-25 20:48                     150.41
    CE1OC01-7    0:00:00  2008-11-25 20:48                     175.29
    CE1OC01-8    0:00:00  2008-11-25 20:48                     150.55
    CE1OC01-9    0:00:00  2008-11-25 20:48                     154.84

    Hi all, thanks for the quick and extremely helpful answers.
    Beck,
    Thanks for the health check. We are exporting the entire table in parallel, so all the exports begin at the same time. Regarding SORTHEAP, we initially thought that might be our problem, because we were running out of SORTHEAP on the source database server. For this run and the previous run, though, SORTHEAP has remained available and has not been overrun. That's what was so confusing, because this looked like a buffer overrun.
    Ralph,
    The WHERE technique you provided worked perfectly. Our export times have improved dramatically by switching to the forced full tablescan. Having always been trained to eliminate full tablescans, we found it counterintuitive at first, but given the nature of the export query, combined with the unsorted export, it now makes total sense why the tablescan works so much better.
    Looks like you were right: in this case the index adds too much additional overhead, especially since our cluster ratio was terrible (in the 50% range), so the index was definitely working against us by bouncing all over the place to pull the data out.
    We're going to look at some of our other long-running tables and see if this technique improves their runtimes as well.
    Thanks so much, that helped us out tremendously. We will verify that the data matches 1-for-1 from source to target by running a consistency check.
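    For readers landing here: the exact clause isn't quoted in the thread, but one common way to force such a tablescan (a sketch, not necessarily what was used) is to wrap the indexed columns in an expression, e.g. an empty-string concatenation, so the optimizer can no longer match the predicates to the index:
    SELECT * FROM CE1OC01
     WHERE MANDT || '' = '212'
       AND ("BELNR" || '' > '0124727994') AND ("BELNR" || '' <= '0131810250')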
    Look at the throughput difference between the previous run and the current run:
    package     time       start date        end date          size MB  MB/min
    CE1OC01-11   40:14:47  2008-11-20 19:43  2008-11-22 11:58   437.27    0.18
    CE1OC01-14   39:59:51  2008-11-20 19:43  2008-11-22 11:43   427.60    0.18
    CE1OC01-12   39:58:37  2008-11-20 19:43  2008-11-22 11:42   430.66    0.18
    CE1OC01-13   39:51:27  2008-11-20 19:43  2008-11-22 11:35   421.09    0.18
    CE1OC01-15   39:49:50  2008-11-20 19:43  2008-11-22 11:33   426.54    0.18
    CE1OC01-10   39:33:57  2008-11-20 19:43  2008-11-22 11:17   429.44    0.18
    CE1OC01-8    39:27:58  2008-11-20 19:43  2008-11-22 11:11   417.62    0.18
    CE1OC01-6    39:02:18  2008-11-20 19:43  2008-11-22 10:45   416.35    0.18
    CE1OC01-5    38:53:09  2008-11-20 19:43  2008-11-22 10:36   413.29    0.18
    CE1OC01-4    38:52:34  2008-11-20 19:43  2008-11-22 10:36   424.06    0.18
    CE1OC01-9    38:48:09  2008-11-20 19:43  2008-11-22 10:31   416.89    0.18
    CE1OC01-3    38:21:51  2008-11-20 19:43  2008-11-22 10:05   428.16    0.19
    CE1OC01-2    36:02:27  2008-11-20 19:43  2008-11-22 07:46   409.05    0.19
    CE1OC01-7    33:35:42  2008-11-20 19:43  2008-11-22 05:19   414.24    0.21
    CE1OC01-16    9:33:14  2008-11-20 19:43  2008-11-21 05:16   417.62    0.73
    CE1OC01-17    1:20:01  2008-11-20 19:43  2008-11-20 21:03   416.38    5.20
    CE1OC01-18    1:19:29  2008-11-20 19:43  2008-11-20 21:03   429.41    5.40
    CE1OC01-19    1:16:13  2008-11-20 19:44  2008-11-20 21:00   437.98    5.75
    CE1OC01-20    1:14:06  2008-11-20 19:49  2008-11-20 21:03   435.87    5.88
    PLPO          0:52:14  2008-11-20 19:43  2008-11-20 20:35    92.70    1.77
    BCST_SR       0:05:12  2008-11-20 19:43  2008-11-20 19:48    29.39    5.65
    CE1OC01-1     0:00:00  2008-11-20 19:43                       0.00
                558:13:06  2008-11-20 19:43  2008-11-22 11:58  8171.62
    package     time      start date        end date          size MB   MB/min
    CE1OC01-9    9:11:58  2008-12-01 20:14  2008-12-02 05:26   1172.12    2.12
    CE1OC01-5    9:11:48  2008-12-01 20:14  2008-12-02 05:25   1174.64    2.13
    CE1OC01-4    9:11:32  2008-12-01 20:14  2008-12-02 05:25   1174.51    2.13
    CE1OC01-8    9:09:24  2008-12-01 20:14  2008-12-02 05:23   1172.49    2.13
    CE1OC01-1    9:05:55  2008-12-01 20:14  2008-12-02 05:20   1188.43    2.18
    CE1OC01-2    9:00:47  2008-12-01 20:14  2008-12-02 05:14   1184.52    2.19
    CE1OC01-7    8:54:06  2008-12-01 20:14  2008-12-02 05:08   1173.23    2.20
    CE1OC01-3    8:52:22  2008-12-01 20:14  2008-12-02 05:06   1179.91    2.22
    CE1OC01-10   8:45:09  2008-12-01 20:14  2008-12-02 04:59   1171.90    2.23
    CE1OC01-6    8:28:10  2008-12-01 20:14  2008-12-02 04:42   1172.46    2.31
    PLPO         0:25:16  2008-12-01 20:14  2008-12-01 20:39     92.70    3.67
                90:16:27  2008-12-01 20:14  2008-12-02 05:26  11856.91

  • [SQL SERVER 2000] Generic table exporter

    Hello everybody.
    First of all, sorry for my bad English, but I'm French ;-)
    My internship consists of building a generic table exporter (driven by a table list) that exports to CSV files.
    I have tried 2 solutions:
    1 - Create a DTS package with a Dynamic Properties task. The problem is that I can easily change the destination file, but when I change the source table I don't know how to remap the transformations between source and destination (do you see what I mean?). Any idea?
    2 - Use the bcp command. Very simple, but what do you do when tables contain the separator character? For example: if a table row is "toto" | "I am , very happy", the csv file will look like this: toto, I am , very happy - a problem for reading the data back (too many commas).
    Does someone have a solution?
    Last point: how do I export the table structure? For the moment, using the table structure, I generate a SQL query which creates the table (I write this query to a file). Isn't there any "cleaner" solution?
    Thanks in advance and have a nice day all

    Answers,
    1. Use ActiveX script to transform. Refer
    http://technet.microsoft.com/en-us/library/aa933459(v=sql.80).aspx
    2. Replace the pipe delimiter first with comma if it is a single column and use bcp command. Refer
    http://technet.microsoft.com/en-us/library/aa174646(v=sql.80).aspx
    3. Regarding generating script refer
    http://stackoverflow.com/questions/4058977/exporting-tables-and-indexes-from-one-sql-server-2000-database-to-another
    Regards, RSingh
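    On point 2, bcp can also write a non-comma field terminator directly via its -t option, which sidesteps the comma collision entirely; a minimal sketch (server, database, and table names are placeholders):
    bcp MyDb.dbo.MyTable out mytable.csv -c -t"|" -r"\n" -S myserver -U myuser -P mypassword
    Whatever reads the file back then splits on the pipe character instead of the comma.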

  • During the Unicode conversion, cluster table export takes too much time

    Dear All
    during the Unicode conversion, the cluster table export took too much time: approximately 24 hours for 6 tables. Could you please advise how we can reduce the time?
    thanks
    Jainnedra

    Hello,
    Use the latest R3load from the SAP Service Marketplace.
    Also refer to SAP Note 1019362 - Very long runtimes during SPUMG scans.
    Regards,
    Nitin Salunkhe

  • How to export a large number of tables individually in one run

    Product: ORACLE SERVER
    Date written: 2002-04-12
    How to export a large number of tables individually in one run
    =====================================================
    Purpose
    When you need to export table by table, there may be so many tables that it is
    impossible to list them all in the tables option. Here is an easier way to
    handle that case.
    Explanation
    1. Log in with sqlplus scott/tiger:
    SQL> set heading off
    SQL> set pagesize 5000 (at least the number of tables the user owns)
    SQL> spool scott.out
    SQL> select tname from tab;
    SQL> exit
    2. The above stores all of the scott user's tables in scott.out:
    $ vi scott.out
         SQL> select tname from tab;
         BONUS
         DEPT
         DUMMY
         EMP
         SALGRADE
         SQL> exit
    In the vi editor, delete the unneeded first and last two lines, then remove the
    trailing blank characters after the table names.
    < Preparatory work: removing the trailing blanks and building the export file >
    After opening the file:
    1) :g/ /s///g      <--- removes the trailing blanks after the table names
    2) :1
    3) append a comma after the BONUS table
    4) type :map @ j$. and press Enter <--- macro that repeats step 3 on the next line
    5) hold down Shift+2 (@) <--- appends a comma to the end of each following line
    6) the very last line needs no comma
    Split the out file into chunks of 100 lines (fewer if the table names are long)
    and save each chunk under its own file name.
    e.g. lines 1-100 go to scott1.out, lines 101-200 to scott2.out, and so on;
    remove the comma from the last line of each file.
    Compile the script4exp.c below to generate the shell scripts for the export.
    (If necessary, modify the export options inside the script and recompile.)
    After compiling:
    $ script4exp scott1.out scott1.sh scott tiger scott1.dmp scott1.log
    $ script4exp scott2.out scott2.sh scott tiger scott2.dmp scott2.log
    This produces scott1.sh, scott2.sh, ..., which you make executable and run as
    background jobs.
    Caution) 1. Check the file size of the *.sh files after the jobs finish.
    2. Where possible, take large tables out of the out file and export them separately.
    ====script4exp.c=================
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #define EXPCMD "exp %s/%s buffer=52428800 file=%s log=%s tables="
    int main(int argc, char **argv)
    {
        FILE *ifp, *ofp;
        char buff[256], *pt;
        if (argc != 7) {
            printf("\nUSAGE :\n");
            printf("$ script4exp infile.out outfile.sh username passwd dmpfile.dmp logfile.log\n\n");
            exit(0);
        }
        if ((ifp = fopen(argv[1], "r")) == NULL) {
            printf("%s file open fail !!\n", argv[1]);
            exit(0);
        }
        if ((ofp = fopen(argv[2], "w")) == NULL) {
            printf("%s file open fail !!\n", argv[2]);
            exit(0);
        }
        /* write the exp command, then append the comma-separated table list */
        fprintf(ofp, EXPCMD, argv[3], argv[4], argv[5], argv[6]);
        while (fgets(buff, 80, ifp) != NULL) {
            if ((pt = strchr(buff, '\n')) != NULL)
                *pt = '\0';                  /* strip the newline */
            fprintf(ofp, "%s", buff);
            memset(buff, 0, sizeof(buff));
        }
        fprintf(ofp, "\n");
        fclose(ifp);
        fclose(ofp);
        return 0;
    }
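    Note: a simpler way to build the comma-separated list, avoiding the vi macro steps above, is to let SQL*Plus append the commas itself; a minimal sketch for the same scott schema (the echoed SQL> lines still have to be deleted, as in step 2 above, and the trailing comma on the last line removed):
    SQL> set heading off pagesize 0 feedback off
    SQL> spool scott.out
    SQL> select tname || ',' from tab;
    SQL> spool off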

  • Reg. Particular table export and import of the same table

    Dear Sir
    I am an MM consultant. I would like to export only one table and import the same table after some request is released. How do I do this? Please help me.
    I am working on Oracle release 10.2.0.2.0
    Thanks in advance
    Rajakumar.K

    Hello Raja,
    you want to export some table, perform some changes on the system (releasing a transport) and then reimport the old state of the table? This sounds like a very bad idea. You are inviting disaster and compromising the consistency of your system.
    Go to http://help.sap.com, choose your release, and enter the search term brspace to find out about supported ways to reorganize a table.
    Regards,
    Mark

  • Reg: FM TABLES parameters

    Hi All,
             I heard that TABLES parameters are going to become obsolete in function modules, as we can achieve the same functionality using importing and exporting parameters.
             Can anyone give me some more details regarding this?
    Thanks in advance.
    Regards
    Abhilash.

    Hi Abhilash,
    To some extent what you said is correct.
    We are gradually moving from conventional ABAP to object-oriented ABAP. In the OO concepts we have EXPORTING, IMPORTING and CHANGING parameters (the TABLES parameter will be replaced with CHANGING), but not IMPORT and EXPORT.
    All new technologies in ABAP, like Web Dynpro and NetWeaver, are mostly based on OO concepts and linked with Java concepts.
    Hope I gave you some idea.
    Thanks,
    Vinod.
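    For illustration, a minimal sketch (hypothetical names, MARA only as an example table) of how a classic TABLES parameter maps to a CHANGING table parameter on a method:
    CLASS lcl_material_reader DEFINITION.
      PUBLIC SECTION.
        TYPES ty_t_mara TYPE STANDARD TABLE OF mara WITH DEFAULT KEY.
        " replaces: FUNCTION ... TABLES t_mara STRUCTURE mara.
        CLASS-METHODS read_materials
          IMPORTING iv_matkl TYPE mara-matkl
          CHANGING  ct_mara  TYPE ty_t_mara.
    ENDCLASS.
    CLASS lcl_material_reader IMPLEMENTATION.
      METHOD read_materials.
        " the caller's table is filled in place, as a TABLES parameter used to be
        SELECT * FROM mara INTO TABLE ct_mara WHERE matkl = iv_matkl.
      ENDMETHOD.
    ENDCLASS.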

  • Reg: Tax Table in Purchase Order

    Hi All,
         In the Purchase Order, at line item level we enter the tax, i.e. in the Invoice tab we have our required tax codes and the Taxes button becomes enabled; when we click Taxes, all the taxes appear. My question is: can anybody say in which table all the tax details for a Purchase Order are stored? I have checked table KONV, but only the header conditions are stored in that table. Could you please help me?

    Hi Vijetasap,
       I have tried that function module, but it doesn't return the values; that's why I asked for the table name. I have attached the code for reference - please correct me where I am wrong.
    LOOP AT I_EKPO.
        clear taxcom.
        CLEAR : I_KOMV.
        REFRESH : I_KOMV.
        SELECT SINGLE *
           INTO t001
           FROM t001
          WHERE bukrs = ekko-bukrs .
        taxcom-bukrs = i_ekpo-bukrs.
        taxcom-budat = ekko-bedat.
        taxcom-waers = ekko-waers.
        taxcom-kposn = i_ekpo-ebelp.
        taxcom-mwskz = i_ekpo-mwskz.
        taxcom-txjcd = i_ekpo-txjcd.
        taxcom-shkzg = 'H'.
        taxcom-xmwst = 'X'.
          IF I_EKPO-MTART = 'HAWA'.
            taxcom-wrbtr = i_ekpo-kzwi6.
          ELSEIF I_EKPO-MTART = 'ROH'.
            taxcom-wrbtr = i_ekpo-kzwi4.
          ELSE.
            taxcom-wrbtr = i_ekpo-zwert.
          ENDIF.
        taxcom-lifnr = ekko-lifnr.
        taxcom-land1 = ekko-lands.
        taxcom-ekorg = ekko-ekorg.
        taxcom-hwaer = t001-waers.
        taxcom-llief = ekko-llief.
        taxcom-bldat = ekko-bedat.
        taxcom-matnr = i_ekpo-ematn.
        taxcom-werks = i_ekpo-werks.
        taxcom-bwtar = i_ekpo-bwtar.
        taxcom-matkl = i_ekpo-matkl.
        taxcom-meins = i_ekpo-meins.
        taxcom-ebeln = i_ekpo-ebeln.
        taxcom-ebelp = i_ekpo-ebelp.
        IF ekko-bstyp EQ bstyp-best.
          taxcom-mglme = i_ekpo-menge.
        ELSE.
          IF ekko-bstyp EQ bstyp-kont AND i_ekpo-abmng GT 0.
            taxcom-mglme = i_ekpo-abmng.
          ELSE.
            taxcom-mglme = i_ekpo-ktmng.
          ENDIF.
        ENDIF.
        IF taxcom-mglme EQ 0.
          taxcom-mglme = 1000.
        ENDIF.
        taxcom-mtart = i_ekpo-mtart.
        IF NOT taxcom-mwskz IS INITIAL.
          CALL FUNCTION 'CALCULATE_TAX_ITEM'
            EXPORTING
*             anzahlung    = ' '
*             dialog       = ' '
*             display_only = ' '
*             inklusive    = ' '
*             i_anwtyp     = ' '
*             i_dmbtr      = '0'
*             i_mwsts      = '0'
              i_taxcom     = taxcom
*             pruefen      = ' '
*             reset        = ' '
            IMPORTING
*             e_navfw      =
              e_taxcom     = taxcom
*             e_xstvr      =
*             nav_anteil   =
            TABLES
              t_xkomv      = i_komv
            EXCEPTIONS
              mwskz_not_defined   = 1
              mwskz_not_found     = 2
              mwskz_not_valid     = 3
              steuerbetrag_falsch = 4
              country_not_found   = 5
              OTHERS              = 6.
    Note: I have not posted the full code, but this is the part where I am getting the tax values.

  • Internal table export and import in ECC 5.0 version

    Hi friends,
    I am trying to export an internal table from one program and import it in another program.
    The below export and import commands do not work when I run the program in the background (using SUBMIT zxxxx VIA JOB name NUMBER number…):
    EXPORT itab TO MEMORY ID 'ZMATERIAL_CREATE'.
    IMPORT itab FROM MEMORY ID 'ZMATERIAL_CREATE'.
    Normally it should work. Since it's not working, I am trying another alternative,
    i.e. EXPORT (ptab) INTERNAL TABLE itab.
    My SAP version is ECC 5.0.
    For your information, I am forwarding the SAP help here. Please have a look and explain how to declare the ptab internal table.
    +Extract from SAP help+
    In the dynamic case the parameter list is specified in an index table ptab with two columns. These columns can have any name and have to be of the type "character". In the first column of ptab, you have to specify the names of the parameters and in the second column the data objects. If the second column is initial, then the name of the parameter in the first column has to match the name of a data object. The data object is then stored under its name in the cluster. If the first column of ptab is initial, an uncatchable exception will be raised.
    Outside of classes you can also use a single-column internal table for parameter_list for the dynamic form. In doing so, all data objects are implicitly stored under their name in the data cluster.
    My internal table has around 45 columns.
    Please help me.
    Thanks in advance
    raghunath

    The export/import should work the way you are using it. Just make sure you are using the same memory ID, and make sure it is unique - meaning you use it only for this itab and do not overwrite it with other values. Check that itab is not initial before you export it in program 1, then import it in program 2 with the same memory ID. Also check the case; I am not sure if it is case-sensitive.
    Here is how you use the second variant:
    Two fields with two different identifications, "P1" and "P2", are written to ABAP memory with the dynamic variant of the cluster definition. After execution of the IMPORT statement, the contents of the fields text1 and text2 are interchanged.
    TYPES:
      BEGIN OF tab_type,
        para TYPE string,
        dobj TYPE string,
      END OF tab_type.
    DATA:
      id    TYPE c LENGTH 10 VALUE 'TEXTS',
      text1 TYPE string VALUE `IKE`,
      text2 TYPE string VALUE `TINA`,
      line  TYPE tab_type,
      itab  TYPE STANDARD TABLE OF tab_type.
    line-para = 'P1'.
    line-dobj = 'TEXT1'.
    APPEND line TO itab.
    line-para = 'P2'.
    line-dobj = 'TEXT2'.
    APPEND line TO itab.
    EXPORT (itab)     TO MEMORY ID id.
    IMPORT p1 = text2
           p2 = text1 FROM MEMORY ID id.

  • Timestamp datatype not output correctly in table export

    When using the data export from the table view, the timestamp datatype is not handled correctly. It shows up as oracle.sql.TIMESTAMP@14c0761.
    It works fine from the SQL Explorer view, though.

    I'm using the same build, 1.0.0.15.27.
    You can try any export option; I tried SQL Insert.
    If you right-click from the data grid (SQL Worksheet or table view), it works fine.
    In the table view, if you go to Actions -> Export -> SQL Insert, then it doesn't.

  • Partition table export and import

    I am using Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production, and I have 133 partitions in a single table. I want to take a dump of each partition separately and restore it into a new schema. How do I go about it?

    You can specify the partition you want to export by using the tables= parameter:
    expdp user/password tables=your_table:your_partition ...
    If you omit your_partition, you will export the complete table. Do you want each partition to become a non-partitioned table? So if you have 133 partitions, do you want 133 tables? If so, just export the complete table and then on import use:
    impdp user/password partition_options=departition ...
    The result will be one table for each partition. Each table will be called
    tablename_partitionname
    If the name is too long, it gets truncated. The departition support was added in 11.1.0.6.
    Dean
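    As a minimal sketch of the per-partition route (assuming a scott/tiger login, an existing DATA_PUMP_DIR directory object, and YOUR_TABLE as a placeholder), first list the partitions, then run one expdp per partition:
    $ printf "set heading off pagesize 0 feedback off\nselect partition_name from user_tab_partitions where table_name = 'YOUR_TABLE';\n" | sqlplus -s scott/tiger > partitions.txt
    $ while read p; do
    >   expdp scott/tiger tables=YOUR_TABLE:"$p" directory=DATA_PUMP_DIR dumpfile=YOUR_TABLE_"$p".dmp logfile=YOUR_TABLE_"$p".log
    > done < partitions.txt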

  • User Master Records/Tables - Exporting User tables for Recovery

    Hi there,
    I work in the Security area. Outside of the client export via SCC8 for user master records/roles, is there another recommended method for saving user tables so that they can be re-imported in case users and roles are deleted, and selectively re-imported if needed? For instance, if a particular user group with X number of users was deleted, could it be re-imported by selective means? SU10 is handled delicately in production when used, but unwanted results still occur. Approaches appreciated.

    Hi,
    Try the following..
    BAPI_USER_CREATE         Create a User
    BAPI_USER_CREATE1       Create a User 
    Hope that helps! 
    Regards,
    Tanveer
    Please mark helpful answers

  • FAGLFLEXA table export and import.

    Hello Gurus,
    I want to export the FAGLFLEXA table from quality client 200 and import it into development client 110.
    I want to know different ways to copy it between two different SAP systems. If anybody knows, please help me.
    Thanks & Warm Regards,
    Prashant

    Thanks for the reply.
    Actually, I have done a client copy from QAS 200 to DEV 110, but in that copy not all records in the FAGLFLEXA table were copied, so I want to copy those approx. 2,000,000 records from QAS to DEV. I already tried with R3trans, but it is not exporting records from this table.
    Thanks & Warm Regards,
    Prashant
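    For reference, a minimal R3trans sketch for this kind of copy, using the clients and table from the post (the file path is a placeholder, and the control-file form shown is the standard R3trans export/import syntax - to be adapted and tested on a sandbox first). Export control file, run on the source system with R3trans -w export.log <controlfile>:
    export
    client = 200
    file = '/usr/sap/trans/data/faglflexa.dat'
    select * from FAGLFLEXA
    Import control file for the target system:
    import
    client = 110
    file = '/usr/sap/trans/data/faglflexa.dat'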

  • Total number of records of partition tables exported using datapump

    Hi All,
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    OS: RHEL
    I exported a table with partitions using Data Pump, and I would like to verify the total number of records exported across all these partitions. The export has a query on it: WHERE ITEMDATE < TO_DATE('1-JAN-2010'). I need to compare it with the exact number of records in the actual table to confirm they are the same.
    Below is the log file of the exported table. It does not show the total number of rows exported, only the count per partition.
    Starting "SYS"."SYS_EXPORT_TABLE_05": '/******** AS SYSDBA' dumpfile=data_pump_dir:GSDBA_APPROVED_TL.dmp nologfile=y tables=GSDBA.APPROVED_TL query=GSDBA.APPROVED_TL:"
    WHERE ITEMDATE< TO_DATE(\'1-JAN-2010\'\)"
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 517.6 MB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2008_Q3" 35.02 MB 1361311 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2008_Q4" 33.23 MB 1292051 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2010_Q4" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2011_Q1" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2009_Q3" 30.53 MB 1186974 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2010_Q3" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2009_Q1" 30.44 MB 1183811 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2009_Q2" 30.29 MB 1177468 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2009_Q4" 30.09 MB 1170470 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2010_Q2" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2011_Q2" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2010_Q1" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2011_Q3" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2011_Q4" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2012_Q1" 5.875 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2006_Q3" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2006_Q4" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2007_Q1" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2007_Q2" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2007_Q3" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2007_Q4" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2008_Q1" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2008_Q2" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2012_Q2" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2012_Q3" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_2012_Q4" 0 KB 0 rows
    . . exported "GSDBA"."APPROVED_TL":"APPROVED_TL_MAXVALUE" 0 KB 0 rows
    Master table "SYS"."SYS_EXPORT_TABLE_05" successfully loaded/unloaded
    Dump file set for SYS.SYS_EXPORT_TABLE_05 is:
    /u01/export/GSDBA_APPROVED_TL.dmp
    Job "SYS"."SYS_EXPORT_TABLE_05" successfully completed at 12:00:36

    I assume you want this so you can run a script to check the count? If not, and this is being done manually, then just add up the individual row counts. I'm not very good at writing scripts, but I would think that someone here could come up with a script that sums up the row counts for the partitions of the table from your log file. This is not something that Data Pump writes to the log file.
    Dean
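    For what it's worth, the summing is a one-liner if the screen output above is saved to a file such as export.log: every per-partition line ends in "rows" with the count just before it, so
    $ awk '/ rows$/ { total += $(NF-1) } END { print total, "rows" }' export.log
    prints the grand total.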

  • Include Row/Column Headers in Pivot Table Export to Excel

    I am using JDeveloper version 11.1.2.3
    I am trying to export my pivot table to Excel using dvt:exportPivotTableData. I'd like to include column/row headers in the export, but I can't seem to find a way to do that. Is there a way to do this in my JDeveloper version?

