Insert and IMPDP on the same table at the same time

Hi,
We have a table, tableA, which is around 20 GB.
This table is subject to INSERTs only.
Now I want to IMPDP 200 GB of data into the same table with the option TABLE_EXISTS_ACTION=APPEND (duplicate rows are allowed on the column).
The environment is OLTP, so new transactions (INSERTs only) will be happening against this table while the IMPDP is in progress.
So kindly let me know whether it will affect performance (because of the two simultaneous insert streams), and whether the import will take longer.
Kindly revert.

It is such a pity most people here
- don't know how to use Google, or refuse to use Google
- don't know how to search the forums
- refuse to specify their four-digit version number and platform info
In general, they treat a forum, an asynchronous communication mechanism, as a chatroom.
Also, I don't know what 'more time' means. Surely the English classes in your locale are not that bad. 'More time' is meaningless.
You are wasting more time by asking here than by trying it in a test database.
Sybrand Bakker
Senior Oracle DBA
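For reference, a minimal sketch of the Data Pump command being discussed; the directory object, dump file and schema/table names below are placeholders, not values from the original post:
impdp system DIRECTORY=dp_dir DUMPFILE=tablea.dmp LOGFILE=tablea_imp.log TABLES=app_owner.tableA TABLE_EXISTS_ACTION=APPEND
Running this against a copy of the table in a test database, once with the application insert load stopped and once with it running, is the most direct way to see how much the concurrent inserts slow the import down.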

Similar Messages

  • How to insert into more than one table at a time also..

    hi,
    I am a newbie.
    How do I insert into more than one table at a time?
    Also, how do I get the last auto-incremented value of an ID, say transactionid, for a particular accountid?
    Please assume the table is:
    transactionid accountid
    101 50
    102 30
    103 50
    104 35
    I want 102 for accountid 30 and 103 for accountid 50.
    Thank you

    @blushadow,
    You can only insert into one table at a time. Take a look here:
    Re: insert into 2 tables
    @Raja,
    I want to know how to extract the last incremented value, not how to insert it.
    Also, I don't understand your thread title, which was "how to insert into more than one table at a time also.. "
    Insert, extract...? Can you clarify your job?
    Nicolas.
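    For the second part of the question (getting the latest transactionid for a particular accountid), a small self-contained sketch; the table name "transactions" is just a stand-in built from the sample data above:
    WITH transactions AS (
      SELECT 101 AS transactionid, 50 AS accountid FROM dual UNION ALL
      SELECT 102, 30 FROM dual UNION ALL
      SELECT 103, 50 FROM dual UNION ALL
      SELECT 104, 35 FROM dual
    )
    SELECT accountid, MAX(transactionid) AS last_transactionid
    FROM transactions
    GROUP BY accountid;
    This returns 102 for accountid 30 and 103 for accountid 50, exactly the values asked for; against a real table you would drop the WITH block and query the table directly.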

  • How to insert multiple rows in same table at once

    hi,
    How can I insert more than one row into the same database table on a single submit button?
    (I am using ADF, EJB and TopLink for this example.)
    The EMPLOYEE and DEPARTMENT tables hold a common column, deptno.
    The method I have tried is as follows.
    I have created the UI that holds the EMPLOYEE (detail) details and a DEPARTMENT (master) table, and I have created two separate bean classes which hold the getters and setters for the corresponding tables.
    I have created a method in the Department bean which will be called when we add the employee details:
    public String addEmpdetails() {
        this.employeedetailslist.add(empdetails);
        return null;
    }
    where employeedetailslist is an ArrayList and I want to pass the reference of the employee bean into the array list.
    But this method will fail, as I need to create a new employee bean object every time I pass one.
    How can I store the values of multiple rows in a bean?
    In the EJB session bean, how can I commit multiple EMPLOYEE rows and DEPARTMENT values at once?

    The use of &variable in a script is actually syntax for a "substitution variable" in the SQL*Plus tool (other tools may also do the same), not an inherent part of SQL or PL/SQL itself.
    Whenever SQL*Plus is given a script, it parses through it and, if it encounters one of these, it prompts for a value. This value is then substituted into the script before the script actually gets sent to the SQL or PL/SQL engine (process) on the database server. Once the script has gone to the database server it executes there and the results are passed back for SQL*Plus to display. However, the SQL and PL/SQL processes on the database server have no way to interface with the client machine, so they themselves cannot prompt for input from the client, and you can't expect to prompt inside a loop as you are doing.
    What you need is a user interface on the client that can prompt repeatedly for values and then re-send the script, or call a procedure on the database each time. This can be done using shell scripts or dos batch files (depending on your client being unix/dos based) or using a front end application tool such as Java, .NET, Powerbuilder, PHP, Application Express (APEX) etc.
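    To make the substitution-variable behaviour concrete, here is a minimal SQL*Plus sketch; the employees table and emp_seq sequence are hypothetical names, not objects from the original post:
    -- add_emp.sql: SQL*Plus prompts once for the substitution variable below,
    -- splices the value into the text, and only then sends the INSERT to the server
    INSERT INTO employees (emp_id, emp_name)
    VALUES (emp_seq.NEXTVAL, '&emp_name');
    COMMIT;
    Because the substitution happens on the client before the statement is sent, inserting several rows means re-running the script (for example from a shell loop that calls sqlplus each time) or moving the prompting into a proper front end, exactly as described above.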

  • Using Insert and Delete icons in table control wizard.

    Can anyone tell me how to perform a new row insertion or deletion in a table created using the table control wizard?
    I see there are form routines fcode_insert_row and fcode_delete_row, but I don't know how to call them or what parameters to pass.
    Since I am new to SAP ABAP, some code samples would be a great help.
    Thanks to all in advance.

    Hi Lavanya,
    You have to add the icons to the table control yourself. Set the fcode for the addition button to INSE and for the delete button to DELE. The coding is already there in the wizard, so there is no need to write anything; just add the icons to the table control by selecting them from the F4 help on the icons option of the screen.
    Thanks,
    Vishnu.

  • Reg. Particular table export and import of the same table

    Dear Sir,
    I am an MM consultant. I would like to export only one table and import the same table after some request is released. How do I do this? Please help me.
    I am working on Oracle release 10.2.0.2.0.
    Thanks in advance
    Rajakumar.K

    Hello Raja,
    you want to export some table, perform some changes on the system (releasing a transport) and then reimport the old state of the table? This sounds like a very bad idea. You are inviting disaster and compromising the consistency of your system.
    Go to http://help.sap.com, choose your release and enter the search term brspace to find out the supported ways to reorganize a table.
    Regards,
    Mark
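    If a one-off single-table export is still wanted despite the warning above, on 10.2 a Data Pump sketch would look roughly like this; the directory object, dump file and schema/table names are placeholders:
    expdp system DIRECTORY=dp_dir DUMPFILE=ztable.dmp LOGFILE=ztable_exp.log TABLES=SAPSR3.ZTABLE
    The matching impdp with TABLE_EXISTS_ACTION=TRUNCATE (or REPLACE) would bring the old contents back, but as noted above, reimporting an old table state after a transport has changed the system risks inconsistency.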

  • Same table, Oracle 5 times slower than MySQL

    Hi
    I have several sites with the same application using a database as a log device and to later retrieve reports from. Some tables are for setup and one holds all the log data. The log data table has the following columns: LINEID, TAG, DATE_, HOUR_, VALUE, TIME_ and CHANGED. Typical data is: 122345, PA01_FT1_ACC, 2008-08-01, 10, 985642, "", 0.
    Index (TAG, DATE_)
    When calling a report, the software typically issues 3-5 SELECT queries like the following, differing only in TAG: SELECT * FROM table WHERE TAG='PA01_FT1_ACC' AND DATE_ BETWEEN '2008-08-01' AND '2008-08-31' AND HOUR_=24
    Since our customers have different preferences, some sites run Oracle and some run MySQL. I have observed that the sites running Oracle take 24-30 seconds on the report, while MySQL takes 3-6 seconds on a similar report with the same tables and the same querying software.
    Why is this?
    Is there anything I can do to make Oracle work faster?
    Should HOUR_ also be in the index?
    Since I assume this slowness is not inherent to Oracle, there must be something that can be done.
    Thanks for any help.

    Histograms on VARCHAR2 columns are based on the first 6 bytes of the column. If the database is using a character set that uses 1 byte per character, every entry in the DATE_ column since the beginning of the year looks like '2008-0' to the optimizer when determining cardinality to produce the "best" execution plan. For character sets that require multiple bytes per character, the situation is worse: every entry in the column representing this century appears to be the same value to the optimizer when determining cardinality.
    That's a very good point and I didn't know about it before, about the first 6 bytes being used. Can you point me to where it is listed in the docs, if it is there, or to some other document(s) with this detail?
    Aman,
    I am having a bit of trouble finding the information in the documentation about the number of bytes used by a histogram on a VARCHAR2 column.
    References:
    http://www.freelists.org/archives/oracle-l/08-2006/msg00199.html
    "Cost-Based Oracle Fundamentals" page 117 shows a demonstration, and describes the use of ENDPOINT_ACTUAL_VALUE starting on Oracle 9i.
    "Cost-Based Oracle Fundamentals" page 118-120 describes selectivity problems when histograms are not used and a date is placed into a VARCHAR2 column.
    "Troubleshooting Oracle Performance", likely around page 130-140 also indicates that histograms only use the first 6 bytes.
    See section "Followup November 12, 2005 - 4pm US/Eastern"
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:707586567563
    An interesting test setup that almost shows what I intended - but Oracle 10.2.0.2 was a little smarter than I expected, even though it selected to use an index to retrieve more than 50% of a table... Take a look at the TO_CHAR representation of the ENDPOINT_VALUE from DBA_TAB_HISTOGRAMS to understand what I was trying to describe in my original post in this thread.
    CREATE TABLE T1 (DATE_ VARCHAR2(10));
    INSERT INTO T1
    SELECT
      TO_CHAR(TO_DATE('2008-01-01','YYYY-MM-DD')+ROWNUM-1,'YYYY-MM-DD')
    FROM
      DUAL
    CONNECT BY
      LEVEL<=250;
    250 rows created.
    COMMIT;
    CREATE INDEX IND_T1 ON T1(DATE_);
    SELECT
      MIN(DATE_),
      MAX(DATE_)
    FROM
      T1;
    MIN(DATE_) MAX(DATE_)
    2008-01-01 2008-09-06
    SELECT
      COLUMN_NAME,
      NUM_DISTINCT,
      NUM_BUCKETS,
      HISTOGRAM
    FROM
      DBA_TAB_COL_STATISTICS
    WHERE
      OWNER=USER
      AND TABLE_NAME='T1';
    no rows selected
    SELECT
      SUBSTR(COLUMN_NAME,1,10) COLUMN_NAME,
      ENDPOINT_NUMBER,
      ENDPOINT_VALUE,
      SUBSTR(ENDPOINT_ACTUAL_VALUE,1,10) ENDPOINT_ACTUAL_VALUE
    FROM
      DBA_TAB_HISTOGRAMS
    WHERE
      OWNER=USER
      AND TABLE_NAME='T1';
    no rows selected
    EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',METHOD_OPT=>'FOR COLUMNS SIZE 254 DATE_',CASCADE=>TRUE);
    PL/SQL procedure successfully completed.
    SELECT
      COLUMN_NAME,
      NUM_DISTINCT,
      NUM_BUCKETS,
      HISTOGRAM
    FROM
      DBA_TAB_COL_STATISTICS
    WHERE
      OWNER=USER
      AND TABLE_NAME='T1';
    COLUMN_NAME                    NUM_DISTINCT NUM_BUCKETS HISTOGRAM
    DATE_                                   250         250 HEIGHT BALANCED
    SELECT
      SUBSTR(COLUMN_NAME,1,10) COLUMN_NAME,
      ENDPOINT_NUMBER,
      ENDPOINT_VALUE,
      SUBSTR(ENDPOINT_ACTUAL_VALUE,1,10) ENDPOINT_ACTUAL_VALUE
    FROM
      DBA_TAB_HISTOGRAMS
    WHERE
      OWNER=USER
      AND TABLE_NAME='T1'
    ORDER BY
      ENDPOINT_NUMBER;
    COLUMN_NAM ENDPOINT_NUMBER ENDPOINT_VALUE ENDPOINT_A
    DATE_                    1     2.6059E+35 2008-01-01
    DATE_                    2     2.6059E+35 2008-01-02
    DATE_                    3     2.6059E+35 2008-01-03
    DATE_                    4     2.6059E+35 2008-01-04
    DATE_                    5     2.6059E+35 2008-01-05
    DATE_                    6     2.6059E+35 2008-01-06
    DATE_                    7     2.6059E+35 2008-01-07
    DATE_                    8     2.6059E+35 2008-01-08
    DATE_                    9     2.6059E+35 2008-01-09
    DATE_                   10     2.6059E+35 2008-01-10
    DATE_                  243     2.6059E+35 2008-08-30
    DATE_                  244     2.6059E+35 2008-08-31
    DATE_                  245     2.6059E+35 2008-09-01
    DATE_                  246     2.6059E+35 2008-09-02
    DATE_                  247     2.6059E+35 2008-09-03
    DATE_                  248     2.6059E+35 2008-09-04
    DATE_                  249     2.6059E+35 2008-09-05
    DATE_                  250     2.6059E+35 2008-09-06
    ALTER SESSION SET EVENTS '10053 TRACE NAME CONTEXT FOREVER, LEVEL 1';
    SELECT
      DATE_
    FROM
      T1
    WHERE
      DATE_<='2008-01-15';
    15 rows selected.
    From the 10053 trace:
    BASE STATISTICAL INFORMATION
    Table Stats::
      Table: T1  Alias: T1
        #Rows: 250  #Blks:  5  AvgRowLen:  11.00
    Index Stats::
      Index: IND_T1  Col#: 1
        LVLS: 0  #LB: 1  #DK: 250  LB/K: 1.00  DB/K: 1.00  CLUF: 1.00
    SINGLE TABLE ACCESS PATH
      Column (#1): DATE_(VARCHAR2)
        AvgLen: 11.00 NDV: 250 Nulls: 0 Density: 0.002
        Histogram: HtBal  #Bkts: 250  UncompBkts: 250  EndPtVals: 250
      Table: T1  Alias: T1    
        Card: Original: 250  Rounded: 15  Computed: 15.00  Non Adjusted: 15.00
      Access Path: TableScan
        Cost:  3.01  Resp: 3.01  Degree: 0
          Cost_io: 3.00  Cost_cpu: 85607
          Resp_io: 3.00  Resp_cpu: 85607
      Access Path: index (index (FFS))
        Index: IND_T1
        resc_io: 2.00  resc_cpu: 49621
        ix_sel: 0.0000e+000  ix_sel_with_filters: 1
      Access Path: index (FFS)
        Cost:  2.00  Resp: 2.00  Degree: 1
          Cost_io: 2.00  Cost_cpu: 49621
          Resp_io: 2.00  Resp_cpu: 49621
      Access Path: index (IndexOnly)
        Index: IND_T1
        resc_io: 1.00  resc_cpu: 10121
        ix_sel: 0.06  ix_sel_with_filters: 0.06
        Cost: 1.00  Resp: 1.00  Degree: 1
      Best:: AccessPath: IndexRange  Index: IND_T1
             Cost: 1.00  Degree: 1  Resp: 1.00  Card: 15.00  Bytes: 0
    ============
    Plan Table
    ============
    | Id  | Operation         | Name    | Rows  | Bytes | Cost  | Time      |
    | 0   | SELECT STATEMENT  |         |       |       |     1 |           |
    | 1   |  INDEX RANGE SCAN | IND_T1  |    15 |   165 |     1 |  00:00:01 |
    Predicate Information:
    1 - access("DATE_"<='2008-01-15')
    INSERT INTO T1
    SELECT
      TO_CHAR(TO_DATE('2008-09-07','YYYY-MM-DD')+ROWNUM-1,'YYYY-MM-DD')
    FROM
      DUAL
    CONNECT BY
      LEVEL<=250;
    COMMIT;
    EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',METHOD_OPT=>'FOR COLUMNS SIZE 254 DATE_',CASCADE=>TRUE);
    PL/SQL procedure successfully completed.
    SELECT
      COLUMN_NAME,
      NUM_DISTINCT,
      NUM_BUCKETS,
      HISTOGRAM
    FROM
      DBA_TAB_COL_STATISTICS
    WHERE
      OWNER=USER
      AND TABLE_NAME='T1';
    COLUMN_NAME                    NUM_DISTINCT NUM_BUCKETS HISTOGRAM
    DATE_                                   500         254 HEIGHT BALANCED
    SELECT
      SUBSTR(COLUMN_NAME,1,10) COLUMN_NAME,
      ENDPOINT_NUMBER,
      TO_CHAR(ENDPOINT_VALUE) ENDPOINT_VALUE,
      SUBSTR(ENDPOINT_ACTUAL_VALUE,1,10) ENDPOINT_ACTUAL_VALUE
    FROM
      DBA_TAB_HISTOGRAMS
    WHERE
      OWNER=USER
      AND TABLE_NAME='T1'
    ORDER BY
      ENDPOINT_NUMBER;
    COLUMN_NAM ENDPOINT_NUMBER ENDPOINT_VALUE                           ENDPOINT_A
    DATE_                    0 260592218925307000000000000000000000     2008-01-01
    DATE_                    1 260592218925307000000000000000000000     2008-01-02
    DATE_                    2 260592218925307000000000000000000000     2008-01-04
    DATE_                    3 260592218925307000000000000000000000     2008-01-06
    DATE_                    4 260592218925307000000000000000000000     2008-01-08
    DATE_                    5 260592218925307000000000000000000000     2008-01-10
    DATE_                    6 260592218925307000000000000000000000     2008-01-12
    DATE_                    7 260592218925307000000000000000000000     2008-01-14
    DATE_                    8 260592218925307000000000000000000000     2008-01-16
    DATE_                    9 260592218925307000000000000000000000     2008-01-18
    DATE_                   10 260592218925307000000000000000000000     2008-01-20
    DATE_                  242 260592219234792000000000000000000000     2009-04-26
    DATE_                  243 260592219234792000000000000000000000     2009-04-28
    DATE_                  244 260592219234792000000000000000000000     2009-04-29
    DATE_                  245 260592219234792000000000000000000000     2009-05-01
    DATE_                  246 260592219234792000000000000000000000     2009-05-02
    DATE_                  247 260592219234792000000000000000000000     2009-05-04
    DATE_                  248 260592219234792000000000000000000000     2009-05-05
    DATE_                  249 260592219234792000000000000000000000     2009-05-07
    DATE_                  250 260592219234792000000000000000000000     2009-05-08
    DATE_                  251 260592219234792000000000000000000000     2009-05-10
    DATE_                  252 260592219234792000000000000000000000     2009-05-11
    DATE_                  253 260592219234792000000000000000000000     2009-05-13
    DATE_                  254 260592219234792000000000000000000000     2009-05-14
    SELECT
      DATE_
    FROM
      T1
    WHERE
      DATE_ BETWEEN '2008-01-15' AND '2008-09-15';
    245 rows selected.
    From the 10053 trace:
    BASE STATISTICAL INFORMATION
    Table Stats::
      Table: T1  Alias: T1
        #Rows: 500  #Blks:  5  AvgRowLen:  11.00
    Index Stats::
      Index: IND_T1  Col#: 1
        LVLS: 1  #LB: 2  #DK: 500  LB/K: 1.00  DB/K: 1.00  CLUF: 2.00
    SINGLE TABLE ACCESS PATH
      Column (#1): DATE_(VARCHAR2)
        AvgLen: 11.00 NDV: 500 Nulls: 0 Density: 0.002
        Histogram: HtBal  #Bkts: 254  UncompBkts: 254  EndPtVals: 255
      Table: T1  Alias: T1    
        Card: Original: 500  Rounded: 240  Computed: 240.16  Non Adjusted: 240.16
      Access Path: TableScan
        Cost:  3.01  Resp: 3.01  Degree: 0
          Cost_io: 3.00  Cost_cpu: 148353
          Resp_io: 3.00  Resp_cpu: 148353
      Access Path: index (index (FFS))
        Index: IND_T1
        resc_io: 2.00  resc_cpu: 111989
        ix_sel: 0.0000e+000  ix_sel_with_filters: 1
      Access Path: index (FFS)
        Cost:  2.01  Resp: 2.01  Degree: 1
          Cost_io: 2.00  Cost_cpu: 111989
          Resp_io: 2.00  Resp_cpu: 111989
      Access Path: index (IndexOnly)
        Index: IND_T1
        resc_io: 2.00  resc_cpu: 62443
        ix_sel: 0.48031  ix_sel_with_filters: 0.48031
        Cost: 2.00  Resp: 2.00  Degree: 1
      Best:: AccessPath: IndexRange  Index: IND_T1
             Cost: 2.00  Degree: 1  Resp: 2.00  Card: 240.16  Bytes: 0
    ============
    Plan Table
    ============
    | Id  | Operation         | Name    | Rows  | Bytes | Cost  | Time      |
    | 0   | SELECT STATEMENT  |         |       |       |     2 |           |
    | 1   |  INDEX RANGE SCAN | IND_T1  |   240 |  2640 |     2 |  00:00:01 |
    Predicate Information:
    1 - access("DATE_">='2008-01-15' AND "DATE_"<='2008-09-15')
    I am sure that there are much better examples than the above, as it generates a very small data set and is still an incomplete test setup.
    Charles Hooper
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.
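    Coming back to the original questions ("Should HOUR_ also be in the index?"), a hedged sketch; log_data and the index name are placeholders for the poster's log table:
    CREATE INDEX log_tag_date_hour_ix ON log_data (tag, date_, hour_);
    With TAG, DATE_ and HOUR_ all in the index, rows failing the HOUR_=24 filter are discarded during the index range scan instead of after a visit to the table block. Independently of the index, storing the date in a real DATE column (rather than VARCHAR2) avoids the histogram/cardinality problem described above for the BETWEEN predicate.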

  • How to insert records dynamically in a table at run time

    hi, all
    please help me out.
    My problem is: how can I insert records from one table into another table at run time, dynamically? Initially, the records are coming from the R/3 backend.
    regards

    Hi,
    One way is to first create a value node (NewNode) with a structure binding matching that of the model node, then iterate through the model node, create NewNode elements, and set the values from the model node elements into them:
    IPrivate<view>.I<model node> mele;
    IPrivate<view>.I<NewNode> nele;
    for (int i = 0; i < wdContext.node<output>().node<record>().size(); i++) {
        mele = wdContext.node<output>().node<record>().get<record>ElementAt(i);
        nele = wdContext.node<NewNode>().create<NewNode>Element();
        wdContext.node<NewNode>().addElement(nele);
        nele.set<attr>(mele.get<attr>());
    }
    The second way is to create that NewNode inside the model node and create a supply function.
    Regards,
    Piyush.

  • Insert the data into two tables at a time.

    Hi,
    I have these two tables:
    create table [dbo].[test1](
    [test1_id] [int] identity(1,1) primary key,
    [test2_id] [int] not null
    )
    create table [dbo].[test2](
    [test2_id] [int] identity(1,1) primary key,
    [test1_id] [int] not null
    )
    alter table [dbo].[test1]
    add constraint [fk_test1_test2_id] foreign key([test2_id])
    references [dbo].[test2] ([test2_id])
    alter table [dbo].[test2] add constraint [fk_test2_test2_id] foreign key([test1_id])
    references [dbo].[test1] ([test1_id])
    I want to insert the data into the two tables in one insert statement. How can I do this using T-SQL?
    Thanks in advance.

    You can INSERT into both tables within one transaction, but not in one statement. By the way, you would need to alter your dbo.test1 table to allow NULL in the test2_id column for the first INSERT.
    See sample code below:
    CREATE TABLE #test1(test1_ID INT IDENTITY(1,1),test2_id INT NULL)
    CREATE TABLE #test2(test2_ID INT IDENTITY(1,1),test1_ID INT)
    DECLARE @Test1dentity INT
    DECLARE @Test2dentity INT
    BEGIN TRAN
    -- Insert NULL as test2_ID value is unknown
    INSERT INTO #test1(test2_ID)
    SELECT NULL;
    -- get inserted identity value
    SET @Test1dentity = SCOPE_IDENTITY();
    INSERT INTO #test2(test1_ID)
    SELECT @Test1dentity;
    -- get inserted identity value
    SET @Test2dentity = SCOPE_IDENTITY();
    -- Update test1 table
    UPDATE #test1
    SET test2_ID = @Test2dentity
    WHERE test1_ID = @Test1dentity;
    COMMIT
    SELECT * FROM #test1;
    SELECT * FROM #test2;
    -- Drop temp tables
    IF OBJECT_ID('tempdb..#test1') IS NOT NULL
    BEGIN
    DROP TABLE #test1
    END
    IF OBJECT_ID('tempdb..#test2') IS NOT NULL
    BEGIN
    DROP TABLE #test2
    END
    web: www.ronnierahman.com

  • 8.0.2 Insert and Update on same page?

    With 8.0.2 is it possible to have an update and insert on the same page?

    On Fri, 19 Jan 2007 08:34:35 -0600, Lee <[email protected]> wrote:
    >Did you have to tweak the code to get them to work together (changing function names) or did they work automatically?
    >
    >Perhaps it's just what I am trying to do with the page. I have two forms and depending on which they need, either the update is used or the insert is used.
    >
    >Does this still sound like it should work?
    Reply in the App Dev forum.
    Steve
    steve at flyingtigerwebdesign dot com

  • Dynamic Table - Add rows and columns in same table

    Hi there,
    I wonder if someone could help please? I'm trying to create a table where a user can add both rows and columns (preferably with separate buttons) but am having trouble figuring out how to do this. Is it possible? If so, how? I'm not familiar with script, but have found examples of separate tables where you can add a row, and another table where you can add columns, and I essentially want to merge the two but cannot make it work.
    Any help much appreciated!
    Thanks,
    Ken

    It is a great example... you can learn the concepts there and apply them. However, you may have to think twice before you implement adding columns dynamically, because the technique here is to make a copy of what you already have and reproduce it as a new item. This works great for rows, as they all have everything in common. But columns may each have a unique visible identity as a column head, and displaying the same column head repeatedly may not look good. Of course, you can add a few extra lines of code and change the column appearance based on the user's input each time. Situations where users need to add an additional column are very unlikely (though your requirement might be an exception).
    The key to allowing adding/removing instances is managing the design settings under Object >> Binding: select the checkbox "Repeat <subform/row/...> for Each Data Item" and then set the Min, Max and Initial count values.
    You also need to combine this with simple script on button clicks.
    For the example referred to in the URL you posted, the following is what I did to make the first table allow adding/removing rows:
    1. Opened the form in LC Designer.
    2. Added two buttons, AddRow & RemoveRow, right next to RemoveColumn.
    3. For AddRow I used the following JS code:
          Table1._Row1.addInstance(1); // any time this button is clicked a new instance of Row1 is added; use _Row2 or _Row3 based on your needs
          var fVersion = new Number(xfa.host.version); // this will be a floating point number like "7.05"
          if (fVersion < 8.0) // do this for Acrobat versions earlier than 8.0
           // make sure that the new instance is properly rendered
           xfa.layout.relayout();
    4. For RemoveRow I used the following JS code:
          Table1._Row1.removeInstance(1); // syntax is <objectReference>.removeInstance(<index of the repeating object to remove>); since we used 1, the second object from the top always gets deleted
          var fVersion = new Number(xfa.host.version); // this will be a floating point number like "7.05"
          if (fVersion < 8.0) // do this for Acrobat versions earlier than 8.0
           // make sure that the new instance is properly rendered
           xfa.layout.relayout();
    5. Now update the settings on the Object >> Binding tab: set "Repeat..." and also set Min, Max and Initial count as explained above.
         Those settings need to be updated for Row1 (or your choice of row) of the table.
    6. Set the Height to Expand for the subform where the table is housed; this is done under the Layout palette.
    7. Save the PDF as a dynamic template and verify the results.
    If you still run into issues I can send you a copy that works on my machine, but you need to send me an email at n_varma(AT)lycos.com
    Good luck,

  • Filter and Join on same table

    Hi All,
    I am having a bit of a hard time implementing the following.
    All suggestions welcome.
    (1) I have a file being mapped into an initial table with, say, 10 fields (field1...field10).
    (2) I want to execute the following logic in the mapping, for all records in the table, in a cursor:
    If field1 = 10 Then
    update field10 = x
    Else If field2 = 20 Then
    update field10 = y
    End if
    If field10 = x then
    update field3 = 222
    End if
    I was thinking of using the filter, but am grappling with the problem that after I define the filter, how do I merge its output with the original table data and execute the last conditional update (based on field10) on all records in the table?
    I am thinking of doing it as below:
    Join the output from the mother table with the output from the filter into a temp table (using a non-equal row id as the join condition), but this creates duplicate column names, and I am wondering how to collapse them back into one column set again.
    Is there an alternative? This approach is very kludgy (if it works at all).
    Question 2
    (1) I have a SQL expression defined, which I want to use in the filter bifurcation and also after the join. The SQL expression's input column and output column are the same; only the targets are to be different.
    Is it possible to do this, or do I have to duplicate the SQL expression?
    Question 3
    My current load/stage is in a PL/SQL procedure, which I am trying to model with OWB. Is there a guideline or recommended best practice for doing this kind of activity?
    Appreciate your help.
    Deepak

    1. I think I already gave an answer to this question (and there is one more like it in the forum), but here it is again:
    You can use an expression with a CASE statement:
    CASE field1
    WHEN 10 THEN 'x'
    WHEN 20 THEN 'y'
    WHEN ... THEN ...
    ELSE field10 END
    The inputs to this expression are fields field1 and field10, and the output goes to field10. So field10 will be updated with the value coming out of the expression: if field1 is 10 it will be updated with 'x', when field1 is 20 it will be updated with 'y', etc.; when none of the CASE conditions are true (the ELSE case) it will be updated with field10 itself (pass-through).
    2. The best solution would be to create a transformation (a function, for example) that contains your expression, then use it throughout the project without having to retype it.
    3. You should:
    - Import the source object structures and (where possible) the target ones
    - Design the new objects in OWB
    - Import your custom transformation library, if there is one
    - Design the extraction processes as mappings in OWB (you will not be able to reuse much of your old code if you want to take advantage of OWB's metadata management, runtime management, etc., and if you want to maintain the system through OWB)
    - Run the two systems side by side for some time until you are comfortable that the process logic you designed in OWB gives the same results as the old process
    - Move the OWB system to production and switch the old system off
    Regards:
    Igor
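    Putting Igor's point 1 together with the second condition from the original post, the post-load logic could also be written as two plain updates against the staging table; staging_table is a placeholder name and the values 'x', 'y' and 222 are the ones from the question:
    UPDATE staging_table
    SET field10 = CASE
                    WHEN field1 = 10 THEN 'x'
                    WHEN field2 = 20 THEN 'y'
                    ELSE field10
                  END;
    UPDATE staging_table
    SET field3 = 222
    WHERE field10 = 'x';
    In an OWB mapping the CASE would sit in an expression operator feeding field10, and the second update can run as a follow-on step, which avoids the filter-plus-join construction described in the question.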

  • Update/Insert/Delete into the same table

    Hi,
    I'm writing PL/SQL code. An invoice_id will be passed as a parameter, and based on the invoice_id I need to check whether any records exist in a history table. If none exist, I insert a record into the table. If more than one record exists, I need to delete all except the most recent one, based on the hist_seq column in that table.
    Then I need to update that record's description column. How do I run these DML operations one after another in an efficient manner? As one statement depends on another, do I need to use an autonomous transaction?
    Thanks,
    Kiran

    Basic idea:
    create or replace procedure p_deal_with_invoice
    (
       p_invoice_id number
    )
    is
       l_max_hist_seq number;
    begin
       select max(hist_seq)
       into l_max_hist_seq
       from your_table
       where invoice_id = p_invoice_id;
       if l_max_hist_seq is not null
       then
          delete from your_table
          where invoice_id = p_invoice_id
          and hist_seq != l_max_hist_seq;
          update your_table set some_columns = some_new_value
          where invoice_id = p_invoice_id
          and hist_seq = l_max_hist_seq;
       else
          insert into your_table () values ();
       end if;
    end p_deal_with_invoice;
    /
    An autonomous transaction would probably be the last thing you'd want here. Just a simple PL/SQL procedure that accepts an INVOICE_ID as an input.
    Cheers,
    Edited by: Tubby on Jun 18, 2012 2:53 PM

  • How to copy and paste the same answer choices multiple times

    I am making testing forms and each question has the same multiple choice answer selection.  I have been having to type each one of them individually instead of being able to duplicate it for each question.  How can I do what in Word would just be a "copy and paste" function?

    So you want the user/person to do the "cut & paste" action (i.e. Ctrl+C, Ctrl+V) and have the data paste itself across rows correctly, much like Excel does. I don't think this "intelligence" is built into Adobe table controls as it is in Excel (and Excel is smart enough to handle it in different ways depending on how the user selects cells).
    The only thing I can think of is that you could "capture" the user trying to paste into a cell/row in your table (maybe in the onFocus or onClick events), then grab the data, parse it out (by looking for some delimiter character) and then have your code distribute the data across the rows. However, I personally think that would be a lot of overkill and prone to other issues and errors.

  • Namedquery using same table field multiple times with the use of a label

    Hi all,
    I'm having some trouble with a named query. I'm trying to use the following named query in TopLink to retrieve some data out of a database.
    select proj.id
    , proj.code
    , proj.name
    , proj.budget
    , proj.status
    , proj.startdate
    , proj.enddate
    , proj.mdr_id projleader_id
    , med_leader.name projleader
    , proj.mdr_id_valt_onder promanager_id
    , med_promanager.name promanager
    , proj.mdr_id_is_account_from accmanager_id
    , med_accmanager.name accmanager
    from uur_projecten proj
    , uur_medewerkers med_leader
    , uur_medewerkers med_promanager
    , uur_medewerkers med_accmanager
    where ( #p_name is not null or #p_search_string is not null )
    and med_leader.id = proj.mdr_id
    and ( proj.mdr_id = nvl( #p_name, proj.mdr_id )
    or proj.mdr_id_valt_onder = nvl( #p_name, proj.mdr_id )
    or proj.mdr_id_is_account_van = nvl( #p_name, proj.mdr_id ))
    and (( #p_status is not null
    and substr( proj.status, 1, 1 ) = upper( #p_status ))
    or ( #p_status is null ))
    and ( upper( proj.code ) like upper( '%' || #p_search_string || '%' )
    or upper( proj.name ) like upper( '%' || #p_search_string || '%' ))
    and med_promanager.id = proj.mdr_id_valt_onder
    and med_accmanager.id = proj.mdr_id_is_account_van
    order by decode( substr( proj.status, 1, 1 )
    , 'A', 2, 'T', 3, 'F', 4, 1 ), proj.code desc
    As you can all see, the table 'uur_medewerkers' is used three times to determine the name for the corresponding ID. I have a Java class with the fields for the results and created a TopLink descriptor to map the fields to the database fields.
    The problem is that for the 'projleader', 'promanager' and 'accmanager' fields the results are null. The reason is probably that TopLink doesn't recognize the fields because of the aliases (labels) used for the tables.
    Is there a way to make this work?
    Greets, René

    Post Author: quafto
    CA Forum: .NET
    Your query is not too clear so I'll do my best to answer it broadly.
    You mentioned that you have a .NET web application where your users enter data on one screen and then may retrieve it on another. If the data is written in real time to a database then you can create a standard Crystal Report by adding multiple tables. The tables should be linked together using the primary and foreign keys in order to optimize the database query and give you a speedy report. Using unlinked tables is not recommended and requires the report engine to index the tables (it is quite slow).
    You also mentioned you have a "PropID" to be used in a WHERE clause. This is a great place to use a parameter in your report. This parameter can then be used in your record selection formula inside Crystal Reports. The report engine will actually create the WHERE clause for you based on the parameter value. This is helpful because it allows you to simply concentrate on your code rather than keeping track of SQL queries.
    Now, what Crystal does not do well with is uncertainty. When you design a report with X number of tables the report engine expects X number of tables to be available at processing time. You should not surprise the print engine with more or less tables because you could end up with processing errors or incorrect data. You may need to design multiple reports for specific circumstances.
    Regarding the group expert question. I'm not sure how you would/could use the group expert to group a table? A table is a collection of fields and cannot be compared to another table without a complex algorithm. The group expert is used to group and sort records based on a field in the report. Have a look at the group expert section of the help file for more information.
    Hopefully my comments have given you a few ideas.

  • After some days I cannot find mail I received; I have to "rebuild" the mailbox, I receive the same messages repeated several times, and I am never sure whether I recovered all of them

    How can I recover lost received mail?

    Hi Muriel,
    Not certain, but this can fix myriad Mail problems...
    Safe Boot from the HD (holding the Shift key down at bootup); it will try to repair your disk directory while the spinning indicator is showing, so let it go. Then run Disk Utility in Applications > Utilities, highlight your drive, and click Repair Permissions.
    Move this file to the Desktop...
    /Users/YourUserName/Library/Mail/Envelope Index
    Reboot.
    If it happens again I'd suspect possibly bad RAM.
