UPDATE TABLE - ORACLE performance

Hi,
I have a table with many columns and I want to update it. Which is better:
1)
update T
set col1 = val1,
    col2 = val2
where c1;
2)
update T
set col1 = val1
where c1;
update T
set col2 = val2
where c1;
Does anyone know which is better for Oracle performance, the single statement or the pair of statements, and why?
My table is huge, with many columns; I need to know the best form so the update does not hurt performance.
Thanks

Yes, your guess is right: the single UPDATE that sets both columns is better. The two-statement version has to locate and modify each row twice, generating undo and redo for each pass, while the combined statement does all the work in one pass over the table. You can verify it with a small test case in your test environment; I have tested it in mine - see the results below. Also wait for other experts to comment.
SQL> create table test_emp (ename varchar2(50),empno number primary key, sal number);
Table created.
SQL> insert into test_emp select object_name,object_id,dbms_random.value(1000,100000) from all_objects;
42502 rows created.
SQL> commit;
Commit complete.
SQL> exec dbms_stats.gather_table_stats(USER,'TEST_EMP',cascade=>true);
PL/SQL procedure successfully completed.
SQL> explain plan for
  2  UPDATE test_emp
  3  SET ename=INITCAP(ename), sal=sal+20
  4  WHERE empno>1500;
Explained.
SQL> select * from xplan;
PLAN_TABLE_OUTPUT
Plan hash value: 1067865627
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | UPDATE STATEMENT   |          | 41551 |  2110K|    85   (3)| 00:00:02 |
|   1 |  UPDATE            | TEST_EMP |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_EMP | 41551 |  2110K|    85   (3)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
PLAN_TABLE_OUTPUT
   1 - UPD$1
   2 - UPD$1 / TEST_EMP@UPD$1
Predicate Information (identified by operation id):
   2 - filter("EMPNO">1500)
Column Projection Information (identified by operation id):
PLAN_TABLE_OUTPUT
   2 - (upd=2,4; cmp=3) "TEST_EMP".ROWID[ROWID,10],
       "ENAME"[VARCHAR2,50], "EMPNO"[NUMBER,22], "SAL"[NUMBER,22]
26 rows selected.
SQL> explain plan for
  2  UPDATE test_emp
  3  SET ename=INITCAP(ename)
  4  WHERE empno>1500;
Explained.
SQL> select * from xplan;
PLAN_TABLE_OUTPUT
Plan hash value: 1067865627
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | UPDATE STATEMENT   |          | 41551 |  1257K|    85   (3)| 00:00:02 |
|   1 |  UPDATE            | TEST_EMP |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_EMP | 41551 |  1257K|    85   (3)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
PLAN_TABLE_OUTPUT
   1 - UPD$1
   2 - UPD$1 / TEST_EMP@UPD$1
Predicate Information (identified by operation id):
   2 - filter("EMPNO">1500)
Column Projection Information (identified by operation id):
PLAN_TABLE_OUTPUT
   2 - (upd=2; cmp=3) "TEST_EMP".ROWID[ROWID,10], "ENAME"[VARCHAR2,50],
       "EMPNO"[NUMBER,22]
26 rows selected.
SQL> explain plan for
  2  UPDATE test_emp
  3  SET sal=sal+20
  4  WHERE empno>1500;
Explained.
SQL> select * from xplan;
PLAN_TABLE_OUTPUT
Plan hash value: 1067865627
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | UPDATE STATEMENT   |          | 41551 |  1095K|    85   (3)| 00:00:02 |
|   1 |  UPDATE            | TEST_EMP |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| TEST_EMP | 41551 |  1095K|    85   (3)| 00:00:02 |
Query Block Name / Object Alias (identified by operation id):
PLAN_TABLE_OUTPUT
   1 - UPD$1
   2 - UPD$1 / TEST_EMP@UPD$1
Predicate Information (identified by operation id):
   2 - filter("EMPNO">1500)
Column Projection Information (identified by operation id):
PLAN_TABLE_OUTPUT
   2 - (upd=3; cmp=2) "TEST_EMP".ROWID[ROWID,10], "EMPNO"[NUMBER,22],
       "SAL"[NUMBER,22]
26 rows selected.
SQL>
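The plans above show the same plan hash value and the same cost (85) for all three statements: the optimizer's cost reflects finding the rows, not how many columns are changed per row. What differs is the total work - running two UPDATEs means two full scans of the table and two sets of row changes, each generating its own undo and redo. A quick way to see the difference is to time both variants against the same data; a minimal sketch using the test_emp table above (rollback between runs so both start from the same state):
SQL> set timing on
SQL> -- variant 1: one pass, both columns changed per row
SQL> UPDATE test_emp SET ename=INITCAP(ename), sal=sal+20 WHERE empno>1500;
SQL> rollback;
SQL> -- variant 2: two passes; each row is located and changed twice
SQL> UPDATE test_emp SET ename=INITCAP(ename) WHERE empno>1500;
SQL> UPDATE test_emp SET sal=sal+20 WHERE empno>1500;
SQL> rollback;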

Similar Messages

  • Jython error while updating an Oracle table based on file count

    Hi,
    I have a Jython procedure for counting records in a flat file.
    Here is the code (taken from odiexperts and modified), and I am getting errors. Could somebody take a look and tell me what the SQL exception in this code is?
    Command on target: Jython
    Command on source: Oracle -- and specified the logical schema
    Without connecting to the database through the JDBC connection I can see the output successfully, but I want to update the Oracle table with the count. Any help is greatly appreciated.
    ---------------------------------Error-----------------------------
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 45, in ?
    java.sql.SQLException: ORA-00936: missing expression
    ---------------------------------------Code--------------------------------------------------
    import java.sql.Connection
    import java.sql.Statement
    import java.sql.DriverManager
    import java.sql.ResultSet
    import java.sql.ResultSetMetaData
    import os
    import string
    import java.sql as sql
    import java.lang as lang
    import re

    filesrc = open('c:\mm\xyz.csv','r')
    first = filesrc.readline()
    lines = 0
    while first:
        # get the no of lines in the file
        lines += 1
        first = filesrc.readline()
    # print lines
    ## THE ABOVE PART OF THE PROGRAM IS TO COUNT THE NUMBER OF LINES
    ## AND STORE IT INTO THE VARIABLE `LINES`

    def intWithCommas(x):
        if type(x) not in [type(0), type(0L)]:
            raise TypeError("Parameter must be an integer.")
        if x < 0:
            return '-' + intWithCommas(-x)
        result = ''
        while x >= 1000:
            x, r = divmod(x, 1000)
            result = ",%03d%s" % (r, result)
        return "%d%s" % (x, result)
    ## THE ABOVE PROGRAM IS TO DISPLAY THE NUMBERS

    sourceConnection = odiRef.getJDBCConnection("SRC")
    sqlstring = sourceConnection.createStatement()
    sqlstmt="update tab1 set tot_coll_amt = to_number( "#lines ") where load_audit_key=418507"
    sqlstring.executeQuery(sqlstmt)
    sourceConnection.close()

    s0 = ' \n\nThe Number of Lines in the File are ->> '
    s1 = str(intWithCommas(lines))
    s2 = ' \n\nand the First Line of the File is ->> '
    filesrc.seek(0)
    s3 = str(filesrc.readline())
    final = s0 + s1 + s2 + s3
    filesrc.close()
    raise final

    I changed it as you advised, Ankit, and am now getting the following error:
    org.apache.bsf.BSFException: exception from Jython:
    Traceback (innermost last):
    File "<string>", line 37, in ?
    java.sql.SQLException: ORA-00911: invalid character
    Here is the modified code:
    sourceConnection = odiRef.getJDBCConnection("SRC")
    sqlstring = sourceConnection.createStatement()
    sqlstmt="update tab1 set tot_coll_amt = to_number('#lines') where load_audit_key=418507;"
    result=sqlstring.executeUpdate(sqlstmt)
    sourceConnection.close()
    Any ideas?
    Edited by: Sunny on Dec 3, 2010 1:04 PM
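    For what it's worth, both errors look explainable from the string handling alone. In the first version the # inside the concatenation starts a Jython comment, so the statement Oracle receives is truncated right after "to_number( " - hence ORA-00936: missing expression. In the second version '#lines' is just a literal (not the variable), and the trailing semicolon is not allowed in a statement sent through JDBC - hence ORA-00911: invalid character. A minimal sketch of the likely fix, reusing the names from the post above:
    # concatenate the count into the SQL text: no '#' inside the string,
    # and no trailing ';' in a statement sent through JDBC
    sqlstmt = "update tab1 set tot_coll_amt = " + str(lines) + " where load_audit_key = 418507"
    result = sqlstring.executeUpdate(sqlstmt)
    sourceConnection.commit()   # assumption: autocommit may be off on this connection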

  • Insert and update tables from SQL Server to Oracle database tables

    Hi,
    I am having a problem updating data from SQL Server into Oracle database tables.
    I am doing a one-way insert + update, that is, from SQL Server tables ==> Oracle database tables.
    I am using SQL Server Integration Services. I can insert data from SQL Server into Oracle, but I can't update. Please help me: how can I do insert + update from SQL Server into Oracle database tables easily?
    Thanks in advance.

    Hi,
    What about using Oracle SQL Developer for migration?
    http://www.oracle.com/technetwork/database/migration/sqlserver-095136.html
    HTH
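    If SSIS can land the incoming rows in an Oracle staging table first (the inserts already work), a single MERGE can then apply the insert-or-update in one statement on the Oracle side. A minimal sketch - the table and column names here are hypothetical:
    merge into target_tab t
    using staging_tab s
    on (t.id = s.id)
    when matched then
      update set t.col1 = s.col1,
                 t.col2 = s.col2
    when not matched then
      insert (id, col1, col2)
      values (s.id, s.col1, s.col2);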

  • Update Query is Performing Full table Scan of 1 Millions Records

    Hello everybody, I have one update query:
    UPDATE tablea SET
              task_status = 12
              WHERE tablea.link_id >0
              AND tablea.task_status <> 0
              AND tablea.event_class='eventexception'
              AND Exists(SELECT 1 from tablea ltask where ltask.task_id=tablea.link_id
              AND ltask.task_status = 0)
    When I do an explain plan it shows the following result:
    Execution Plan
    0 UPDATE STATEMENT Optimizer=CHOOSE
    1 0 UPDATE OF 'tablea'
    2 1 FILTER
    3 2 TABLE ACCESS (FULL) OF 'tablea'
    4 2 TABLE ACCESS (BY INDEX ROWID) OF 'tablea'
    5 4 INDEX (UNIQUE SCAN) OF 'PK_tablea' (UNIQUE)
    Now, tablea may have more than 10 million records. This would take a lot of time even if it only has to
    update 2 records. Please suggest some optimal solutions.
    Regards
    Mahesh

    I see your point, but my logic says: I have an index on every column used in the WHERE clause, so I see no reason for Oracle to do a full table scan.
    UPDATE tablea SET
    task_status = 12
    WHERE tablea.link_id >0
    AND tablea.task_status <> 0
    AND tablea.event_class='eventexception'
    AND Exists(SELECT 1 from tablea ltask where ltask.task_id=tablea.link_id
    AND ltask.task_status = 0)
    I am clearly stating where task_status <> 0 and event_class = 'eventexception' and tablea.link_id > 0,
    so the ideal case for the optimizer should be:
    Step 1) Select all the rowids matching this condition.
    Step 2) For each rowid, get all the rows where task_status = 0 and where task_id = the link_id of the row selected above.
    Step 3) While looping over each rowid, if the EXISTS condition from step 2 is satisfied, update that record.
    I want this kind of plan. Does anyone know how to make Oracle produce it?
    Is it true that a FULL TABLE SCAN is harmful, or at least no better than an index scan?
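    One thing working against an index-driven plan here: a predicate like task_status <> 0 cannot drive an index range scan by itself, and separate single-column indexes have to be combined. A hedged alternative - assuming event_class = 'eventexception' is selective - is a composite index covering the filter columns, so candidate rows can be found without reading every table block (the index name below is made up):
    create index tablea_evt_link_idx
      on tablea (event_class, link_id, task_status);
    With all three filter columns in one index, Oracle can apply the link_id > 0 and task_status <> 0 filters inside the index before touching the table rows at all.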

  • Real time update in Oracle Table

    Hi Friends,
    I have a new (challenging) requirement and I need your inputs to proceed further.
    The user is expecting a real-time update in an Oracle table.
    Example:
    I have an Oracle (10g) table, DDSX-CALENDAR. Whenever a new record gets inserted into this table, I need to take that record and update another Oracle (10g) table existing in a different environment (schema and server).
    Please give me your inputs on handling this requirement.
    Thanks for your time.
    Regards,
    Diwakar Dayalan

    Thanks Prasath.
    I believe that for setting up the DB link the user needs to have the DBA role.
    How do we know the data has been inserted into the source - do I need to use a trigger for that?
    In which way can I use the DB link inside a trigger to update the values in the target table?
    Example:
    My source table is
    DDSX_STR_BANK
    Store Number  Bank Name  Bank Account Number
    0001          BOA        111111111111 (assume the previous value was 222222222222)
    0002          BOA        222222222222 (assume the previous value was 111111111111)
    Now two store numbers have had their bank account numbers updated in the source and the values are inserted. My requirement is that once a value is inserted in the source, I need to update the bank account number in the SSDX_STR_BANK table in a different server and schema.
    SSDX_STR_BANK
    Store Number  Bank Account Number
    0001          222222222222
    0002          111111111111
    Update the bank account numbers in sync with DDSX_STR_BANK.
    Please guide me on how to proceed with this requirement.
    Thanks,
    Divakar
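    One common way to wire this up is an AFTER INSERT OR UPDATE row trigger on the source table that pushes the change through the DB link. A sketch only - the link name is hypothetical and the column names are guessed from the example above. Note that the remote update becomes part of the local transaction, so if the remote database is unreachable, inserts into the source table will fail:
    create or replace trigger trg_ddsx_str_bank_sync
    after insert or update of bank_account_number on ddsx_str_bank
    for each row
    begin
      -- push the new account number to the remote schema over the DB link
      update ssdx_str_bank@remote_db_link
         set bank_account_number = :new.bank_account_number
       where store_number = :new.store_number;
    end;
    /
    If that coupling is too tight, a materialized view refreshed over the link (or Streams/AQ) decouples the two databases at the cost of some latency.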

  • How to update the Oracle DB table column value by using SSRS 2008

    Hi Team,
    How can I update an Oracle DB table column value by using SSRS 2008?
    Can anyone help me on this?
    Thanks,
    Manasa.
    Thank You, Manasa.V

    Hi veerapaneni,
    According to your description, you want to use SSRS to update data in a database table. Right?
    Though Reporting Services is mostly used for rendering data, your requirement can still be achieved technically. You need to create a fairly complicated stored procedure: pass insert/delete/update, and the columns to insert/delete/update, as parameters into the stored procedure. When we click "View Report", the stored procedure executes, so the insert/delete/update runs inside it. Please take a reference to two related articles below:
    Update Tables with Reporting Services – T-SQL Tuesday #005
    SQL Server: Using SQL Server Reporting Services to Manage Data
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou
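    A minimal sketch of the idea on the database side - all names here are hypothetical - a procedure the report's dataset can call, taking the action and the values as report parameters:
    create or replace procedure manage_row (
      p_action in varchar2,   -- 'INSERT' | 'UPDATE' | 'DELETE'
      p_id     in number,
      p_value  in varchar2
    ) as
    begin
      if p_action = 'UPDATE' then
        update some_tab set some_col = p_value where id = p_id;
      elsif p_action = 'INSERT' then
        insert into some_tab (id, some_col) values (p_id, p_value);
      elsif p_action = 'DELETE' then
        delete from some_tab where id = p_id;
      end if;
      commit;  -- the report only renders; the procedure owns the transaction
    end;
    /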

  • Error Updating Table with "Get Active UnitOfWork" Checked

    Hi everyone.
    I invoke the DB Adapter twice to update a table in one BPEL process, and I have checked "Get Active UnitOfWork" so that each update is persisted. I am seeing inconsistent results. What could be causing this?

    Hi, have you checked the .jca source to make sure this value is indeed checked? Sometimes, even when you check this option in the wizard, it doesn't reflect in the source when you also perform a SELECT operation: http://docs.oracle.com/cd/E12839_01/relnotes.1111/e10133/adapter.htm. Also, are you using the XA driver in the data source and the xADataSourceName ConnectionFactory property?
    21.1.5.1 The Value Of the Active Unit Of Work Property Is Not Saved for Outbound SELECT Operation
    While configuring an outbound Oracle Database Adapter to perform a SELECT operation, if you select Get Active Unit of Work in the Adapter Configuration Wizard - Advanced Option page, then the value of the GetActiveUnitofWork property is not saved in the .jca file.
    The workaround for this issue is to manually add this property in the .jca file of the Oracle Database Adapter, as shown in the following example:
    <property name="GetActiveUnitOfWork" value="true"/>

  • Information in the case folder is not getting updated in Oracle Credit Mgt

    Hi,
    I have created an SO and an invoice for this SO for a particular customer. Now, when I create a credit application and a case folder in OCM for this customer, fields like Receivables Balance, Credit Exposure, Days Sales Outstanding, etc. in the case folder are not getting updated, even after refreshing the case folder. It means that the information from the 'Receivables' responsibility is not getting updated in the case folder.
    I have already run the 'Initialize Credit Summaries' concurrent program after creating the sales order and invoice.
    PS: The case folder gets updated only when I set the profile option "AR: Allow summary table refresh" to Yes and then run the "Refresh AR Transactions Summary Tables" concurrent program. But once this program runs, the profile option becomes 'No' again (which is standard functionality).
    So this is a manual way to update the case folder. My requirement is that the case folder should get updated automatically, with no manual intervention required.

    Sumit Malik wrote the same question as quoted above.
    Duplicate post -- Information in the case folder is not getting updated in Oracle Credit Mgt

  • Oracle Performance Issue

    Regarding an Oracle performance issue.
    Hardware Configuration:
    Configuration 1
    ================
    Sun Fire V880
    32 GB RAM
    14 x 36 GB hard disks
    8 CPUs
    CPU speed 750 MHz.
    Software Configuration:
    Oracle 8i
    OS version - Solaris 8
    Customized our own application - Namex
    Configuration 2
    ================
    Intel PIII - 750 MHz
    2 GB RAM
    2 CPUS
    Software configuration
    Oracle 8i
    OS version linux 6.2
    Customized our own application - Namex (multi threaded application)
    We installed the Oracle application across all the hard disks. The layout is:
    OS installed on 1 hard disk,
    the Namex application installed on 1 hard disk,
    Oracle installed on 1 hard disk,
    and all tables split across the other hard disks.
    We are trying to insert user records into an Oracle table. We
    achieved up to 150 records/second on the Sun server, but on the lower
    configuration (configuration 2) our application inserts up to 100 records/second.
    We want to improve our insert rate (records per second)
    on the Sun server.
    How should we tune our Oracle parameter values in the init.ora
    file? Our application tries to insert up to 500 records per second,
    but I am not able to achieve this value.
    init.ora file
    =============
    db_name = "namex"
    instance_name = namex64
    service_names = namex64
    control_files = ("/disk1/oracle64/OraHome1/oradata/Namex64/control01.ctl", "/disk1/oracle64/OraHome1/oradata/namex64/control02.ctl", "/disk1/oracle64/OraHome1/oradata/namex64/control03.ctl")
    open_cursors = 300
    max_enabled_roles = 145
    #db_block_buffers = 20480
    db_block_buffers = 604800
    #shared_pool_size = 419430400
    shared_pool_size = 8000000000
    #log_buffer = 163840000
    log_buffer = 2147467264
    #large_pool_size = 614400
    java_pool_size = 0
    log_checkpoint_interval = 10000
    log_checkpoint_timeout = 1800
    processes = 1014
    # audit_trail = false # if you want auditing
    # timed_statistics = false # if you want timed statistics
    timed_statistics = true # if you want timed statistics
    # max_dump_file_size = 10000 # limit trace file size to 5M each
    # Uncommenting the lines below will cause automatic archiving if archiving has
    # been enabled using ALTER DATABASE ARCHIVELOG.
    # log_archive_start = true
    # log_archive_dest_1 = "location=/disk1/oracle64/OraHome1/admin/namex64/arch"
    # log_archive_format = arch_%t_%s.arc
    #DBCA uses the default database value (30) for max_rollback_segments
    #100 rollback segments (or more) may be required in the future
    #Uncomment the following entry when additional rollback segments are created and made online
    #max_rollback_segments = 500
    # If using private rollback segments, place lines of the following
    # form in each of your instance-specific init.ora files:
    #rollback_segments = ( RBS0, RBS1, RBS2, RBS3, RBS4, RBS5, RBS6, RBS7, RBS8, RBS9, RBS10, RBS11, RBS12, RBS13, RBS14, RBS15, RBS16, RBS17, RBS18, RBS19, RBS20, RBS21, RBS22, RBS23, RBS24, RBS25, RBS26, RBS27, RBS28 )
    # Global Naming -- enforce that a dblink has same name as the db it connects to
    # global_names = false
    # Uncomment the following line if you wish to enable the Oracle Trace product
    # to trace server activity. This enables scheduling of server collections
    # from the Oracle Enterprise Manager Console.
    # Also, if the oracle_trace_collection_name parameter is non-null,
    # every session will write to the named collection, as well as enabling you
    # to schedule future collections from the console.
    # oracle_trace_enable = true
    # define directories to store trace and alert files
    background_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/bdump
    core_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/cdump
    #Uncomment this parameter to enable resource management for your database.
    #The SYSTEM_PLAN is provided by default with the database.
    #Change the plan name if you have created your own resource plan.# resource_manager_plan = system_plan
    user_dump_dest = /disk1/oracle64/OraHome1/admin/Namex64/udump
    db_block_size = 16384
    remote_login_passwordfile = exclusive
    os_authent_prefix = ""
    compatible = "8.0.5"
    #sort_area_size = 65536
    sort_area_size = 1024000000
    sort_area_retained_size = 65536
    DB_WRITER_PROCESSES=4
    How can I improve performance on the Oracle server?
    Please guide me on this issue.
    If anyone wants more info, please let me know.
    Best regards,
    Senthilkumar

    Are you sure that it is not an application constraint? I.e., the application can't handle that much data per second (application locks, threads)?
    Have you tried writing a simple test program which inserts predefined data (the same data your application inserts, only changing keys),
    and then comparing the values from the 1st and the 2nd configuration?
    Did you check the way your application is communicating with Oracle? If it is TCP/IP (even on the local machine), then this is your main problem.
    And one more thing: do you know if your application is able to run the load (inserts) on different threads (i.e., in parallel)? If it is not, you won't be able to push the speed higher, because your constraint is the speed of a single CPU. Consider running several processes which load the data.
    We had the same problem on AIX machines with 4 CPUs. Monitoring the machine, we found that only 25% (1 CPU) was in use. We had to run 4 processes to push the speed up. Check your system's overall load while running the 'load' (inserts).
    log_checkpoint_interval = 10000
    Check if this value is appropriate. Maybe you should set it to 0 (infinite). This will disable checkpoints on a 'number of redo blocks' basis; checkpoints will occur only on log switch.
    How many redo log files per redo group do you have? What is their size? Are they on different disks? How much redo data is generated by a single 'record' inserted?
    Hope I helped at least a little.
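    To answer that last question empirically, snapshot the session's 'redo size' statistic before and after a test insert; the difference is the redo generated by that statement. A sketch (run from the inserting session; tab and the values are placeholders):
    select n.name, s.value
      from v$mystat s, v$statname n
     where s.statistic# = n.statistic#
       and n.name = 'redo size';
    insert into tab values (1, 'test');
    -- run the first query again: the delta in VALUE is the redo
    -- (in bytes) generated by the insert
    select n.name, s.value
      from v$mystat s, v$statname n
     where s.statistic# = n.statistic#
       and n.name = 'redo size';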

  • Update table problem

    Hello,
    I want to update table building_test from table buildings if name is null, or if county_id or region_id is null or -1.
    The query below works fine if my building_test table contains null values. If I add a test value with region or county -1, I get the error below.
    Note: table buildings is in a remote DB; I simplified the case to post it to the forum.
    The datatypes of the tables are OK.
    Oracle is 9i.
    BEGIN
      FOR i IN (
        SELECT building_id, name, county_id, region_id
        FROM buildings
        WHERE building_id IN (SELECT building_id
                              FROM building_test
                              WHERE name IS NULL
                              OR county_id IS NULL
                              OR region_id IS NULL
                              OR county_id = '-1'
                              OR region_id = '-1'))
      LOOP
        UPDATE building_test
        SET name = i.name
        WHERE building_id = i.building_id
        AND name IS NULL;
        UPDATE building_test
        SET county_id = i.county_id
        WHERE building_id = i.building_id
        AND county_id IS NULL
        OR county_id = -1;
        UPDATE building_test
        SET region_id = i.region_id
        WHERE building_id = i.building_id
        AND region_id IS NULL
        OR region_id = -1;
      END LOOP;
    END;
    Error starting at line 5 in command:
    Error report:
    ORA-01401: inserted value too large for column
    ORA-06512: at line 17
    01401. 00000 - "inserted value too large for column"
    *Cause:   
    *Action:
    Any help will be much appreciated.
    Thank you.

    It's not always clear what AND and OR combinations really do (AND binds more tightly than OR):
    WHERE building_id = i.building_id
    AND county_id IS NULL
    OR county_id = -1;
    can mean
    WHERE (building_id = i.building_id AND county_id IS NULL) OR county_id = -1
    *OR*
    WHERE building_id = i.building_id AND (county_id IS NULL OR county_id = -1)
    Try it with brackets and post the result.
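    For the intended meaning (the key match is required, and either NULL or -1 triggers the refresh), the bracketed updates would look like this - a sketch based on the posted code:
    UPDATE building_test
    SET county_id = i.county_id
    WHERE building_id = i.building_id
    AND (county_id IS NULL OR county_id = -1);
    UPDATE building_test
    SET region_id = i.region_id
    WHERE building_id = i.building_id
    AND (region_id IS NULL OR region_id = -1);
    Without the brackets, the OR branch matches rows for every building_id, so far more rows are updated than intended.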

  • Oracle performance query

    Hi folks,
    A question about Oracle performance:
    which query would be faster - the join of the tables, or the subquery?
    ex: select a.* from A a, B b where a.col1 = b.col1;
    (OR)
    select * from A where col1 in (select col1 from B);
    Thanks,
    Shekar.

    The queries are not equivalent!
    SQL> select * from dept where deptno in (select deptno from emp)
        DEPTNO DNAME          LOC
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            10 ACCOUNTING     NEW YORK
    SQL> select dept.* from dept,emp where dept.deptno=emp.deptno;
        DEPTNO DNAME          LOC
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            30 SALES          CHICAGO
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            30 SALES          CHICAGO
            10 ACCOUNTING     NEW YORK
            20 RESEARCH       DALLAS
            10 ACCOUNTING     NEW YORK
            30 SALES          CHICAGO
            20 RESEARCH       DALLAS
            30 SALES          CHICAGO
            20 RESEARCH       DALLAS
            10 ACCOUNTING     NEW YORK
    14 rows selected.
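    So before comparing speed, decide which result you actually want. The IN form is a semi-join - each DEPT row comes back at most once - and the optimizer normally transforms it into a semi-join internally, so the equivalent EXISTS form usually performs the same. A sketch of the EXISTS version:
    select d.*
    from dept d
    where exists (select 1
                  from emp e
                  where e.deptno = d.deptno);
    To get the one-row-per-department result from join syntax you would need SELECT DISTINCT dept.*, which adds a deduplication step that the semi-join forms avoid.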

  • Update table using merge or Update statement

    Hi All,
    We have Oracle 10gR2 on Windows.
    We have tables BROK_DEALER_MAP and DTRMIS_REPORT.
    create table BROK_DEALER_MAP (
    SL_NO NUMBER,
    BROK_DLR_CODE VARCHAR2(30),
    EMP_TAG VARCHAR2(30),
    REMARKS VARCHAR2(60),
    CONS_CODE VARCHAR2(30),
    BROK_DLR_NAME VARCHAR2(50),
    BROKER_TYPE VARCHAR2(30),
    BROK_DLR_0 VARCHAR2(30),
    CATG_DESC VARCHAR2(60),
    CATEGORY VARCHAR2(30));
    desc DTRMIS_REPORT
    SL_NO
    POSTED_DATE
    ZONE
    AMC_REGION
    CITY
    BROK_DLR_CODE
    BROK_DLR_NAME
    SUB_BROKER
    B_TYPE
    FOLIO_NO
    INVESTOR_NAME
    TAX_NO
    INV_TAG
    SCHEME_CODE
    SCHEME_NAME
    SCH_CLASS
    TRXN_MODE
    CHN_TAG
    FP_COUNT
    FP_AMOUNT
    AP_COUNT
    AP_AMOUNT
    PUR_COUNT
    PUR_AMOUNT
    SIP_COUNT
    SIP_AMOUNT
    SI_COUNT
    SI_AMOUNT
    RED_COUNT
    RED_AMOUNT
    SO_COUNT
    SO_AMOUNT
    DR_COUNT
    DR_AMOUNT
    STP_COUNT
    STP_AMOUNT
    NET_SALES
    DISTRIBUTOR_TYPE
    SCHEME_TYPE
    FOCUS_PRODUCT
    RM_CODE
    RM_NAME
    Table BROK_DEALER_MAP doesn't have any duplicate records.
    Table DTRMIS_REPORT has more than 200,000 duplicate records.
    Now I want to update table DTRMIS_REPORT (DISTRIBUTOR_TYPE column) with the values of BROK_DEALER_MAP (CATEGORY column).
    For that I have written a merge statement like the one below:
    merge into dtrmis_report a
    using brok_dealer_map b
    on (a.brok_dlr_code=b.cons_code)
    when matched then
    update set a.Distributor_type=b.category
    where a.brok_dlr_code=b.cons_code;
    It's giving an error: ORA-30926: unable to get a stable set of rows in the source tables.
    How can I update the table?
    Please help.

    Chanchal Wankhade wrote:
    It's giving an error: ORA-30926: unable to get a stable set of rows in the source tables.
    That means there are duplicate records in your source table. Please post the output of the below:
    select cons_code
    from brok_dealer_map
    group by cons_code
    having count(*) > 1;
    In case of duplicate CONS_CODE values, you need to decide which one the target table should be updated with.
    And are you seriously giving a WHERE condition in the MERGE like the one you posted?
    Edited by: jeneesh on Dec 19, 2012 9:56 AM
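    If the duplicates in the source can't be cleaned up, one workaround is to collapse the source to one row per CONS_CODE inside the USING clause. A sketch that arbitrarily keeps the maximum CATEGORY - substitute whatever rule actually decides which value wins:
    merge into dtrmis_report a
    using (select cons_code, max(category) as category
           from brok_dealer_map
           group by cons_code) b
    on (a.brok_dlr_code = b.cons_code)
    when matched then
      update set a.distributor_type = b.category;
    The redundant WHERE clause from the original is dropped; the ON clause already restricts which rows are matched.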

  • Function module for Production order update (Table AFKO)

    Hello All,
    I know a similar subject has been posted, but please read the following.
    SAP 4.6c doesn't provide a BAPI for production order update.
    We developed an ABAP program that updates production orders, scheduled as a job.
    We are looking to avoid the direct update of table AFKO which our program currently does.
    We looked (with SE37) at BAPIs and FMs to pass parameters to a function that would perform that update and ensure data integrity.
    CO_ZV_ORDER_POST seems interesting, but it uses a lot of parameters, and we have difficulties testing and understanding it, our dev team being fairly new.
    Can somebody tell us how to use this function, or tell us another FM that could be used (i.e., passing internal table content (new field values) and a key value (AUFNR)) to update table AFKO and related objects/tables?
    Regards
    Marc

    Hi,
    the table is AFKO.
    Regards,
    Anver
    If this helped, mark points.

  • Oracle performance, slow for larger and more complex results.

    Hello Oracle forum,
    At the moment I have an Oracle database running, and I'm specifically interested in the efficiency of the Spatial extension for web maps and GIS.
    I've been testing the database with large shapefiles (400 MB - 1 GB), loading them into the database with shp2sdo -> SQL*Loader.
    Using Benchmark Factory I've tested the speed of transactions, and it drops relatively quickly. I started with a simple query:
    SELECT id FROM map WHERE id = 3. When I increase the number of ids to 3-10000 the performance decreases drastically,
    so:
    SELECT id FROM map WHERE id >= 3 and id <= 10000
    The explain plan for the second query is shown below; both queries use the index.
    | Id  | Operation         | Name        | Rows | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |             | 9828 | 49140 |     22  (0)| 00:00:01 |
    |*  1 |  INDEX RANGE SCAN | SYS_C009650 | 9828 | 49140 |     22  (0)| 00:00:01 |
    Statistics
    0 recursive calls
    0 db block gets
    675 consistent gets
    0 physical reads
    0 redo size
    134248 bytes sent via SQL*Net to client
    7599 bytes received via SQL*Net from client
    655 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    9796 rows processed
    The statistics do not show anything very weird, but maybe I'm wrong. Nothing changed in the explain plan except for the range scan instead of a unique scan.
    The query returns lots of results, and this I think is why the measured time of the query is large. The time it takes to return a large number of rows increases quickly with more rows.
    Can this be solved? The table was analyzed before starting the query.
    The parameters of the database are not really changed from the standard; I increased the amount of memory used by Oracle 11g to 1+ GB
    and let the database itself decide how it uses this memory.
    The system specs and db parameters are:
    Oracle 11g
    Memory: 1.99 GB; Processor: Intel(R) Core(TM)2 CPU 6600 @ 2.40GHz; # of CPUs: 2; OS: Microsoft Windows XP 5.2600
    0=Oracle decides which value will be given
    cursor_sharing EXACT
    cursor_space_for_time FALSE
    db_block_size 8192
    db_recovery_file_dest_size 2147483648
    diagnostic_dest C:\DBBENCHMARK\ORACLE
    dispatchers (PROTOCOL=TCP) (SERVICE=gistestXDB)
    hash_area_size 131072
    log_buffer 5656576
    memory_max_target 1115684864
    memory_target 1048576000
    open_cursors 300
    parallel_max_servers 20
    pga_aggregate_target 0
    processes 150
    resumable_timeout 2162688
    sort_area_size 65536
    Sga=632mb
    PGA=368mb
    javapool=16mb
    largepool=8mb
    other=8mb
    So I indexed and analyzed the data; what did I forget? I can speed it up with soft parsing, but the problem remains. Hopefully this is enough information for some analysis; has anyone experienced the same problems? I tested the speed with SQL Developer and it shows the same speed as Benchmark Factory. What could be wrong with the parameters?
    Thanks,
    Jan Martijn
    Edited by: user12227964 on 25-jan-2010 4:53
    Edited by: user12227964 on 26-jan-2010 2:20

    Sand wrote:
    select count(id) resulted in 3669015 counted ids. The database counted 18,345,075 rows per second without bind variables, which is ten times slower than your result. This can be possible because of hardware, but my question is specifically about the number of rows returned, thus large result sets.
    The idea was not to compare the speed of "select count(*)" statements - but to illustrate that even when dealing with a huge number of rows, one can decrease the amount of I/O that needs to be performed to deal with that number of rows.
    Sand wrote:
    Select id from map where id <= 1: 4000 rows per second are selected.
    Rows/sec is a meaningless measurement - due to physical I/O (PIO) versus logical I/O (LIO). You can select 100 rows that require PIO, resulting in an elapsed time of 1 sec. You can select 1000 rows that require only LIO, with an elapsed time of 0.5 sec.
    Is the 2nd method better or faster? No. It simply needed less time to be spent on I/O, as the data blocks were in the buffer cache (memory) and did not require very slow and expensive disk access.
    Sand wrote:
    Another database I tested returns 6 times 25425 rows per second for the same query (100 ids). What could be a parameter that limits the output speed of multiple rows in a query?
    Every single row that needs to be read/processed by a SQL statement has a cost associated with it. This cost is not consistent! It differs depending on how that row can be reached - what I/O paths are available to find that row? Does the full table need to be scanned? Does an index need to be scanned? Is there a unique index that can be used? Is the table partitioned, and can partition pruning be applied and local partition indexes used? Are there user functions that need to be applied to the row's data? Etc., etc.
    All these together determine how fast the client gets a row from the cursor executing that SQL.
    The more rows you want to process, the bigger the increase in cost - specifically more I/O, as I/O is the biggest expense (slowest in terms of elapsed time).
    So you want to do as little I/O as possible and read as little data as possible. For example, instead of a full table scan, a fast full index scan; instead of reading the complete contents of a 10GB table, reading the complete contents of a 12MB index for that table.
    I suggest that you read the Oracle Performance Guide to familiarise yourself with basic performance concepts. Use http://tahiti.oracle.com for finding the guide for your applicable Oracle version.
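    To make that last point concrete with the poster's own table: a count driven through the primary-key index never has to touch the table blocks at all. A sketch - the INDEX_FFS hint names the PK index from the plan above, and id is assumed NOT NULL so the index contains every row:
    select /*+ index_ffs(m SYS_C009650) */ count(id)
    from map m;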

  • Updating tables in external database

    Is there a way to update tables in an external database using an HTML-DB process?
    I want to make my "Apply Changes" button process script update an external table in database x.
    Example:
    update x.tracking_locations@x
    set location = :P7_LOCATION,
    description = :P7_DESCRIPTION,
    IN_USE = :P7_IN_USE
    where tracking_location_num = :P7_TRACKING_LOCATION_NUM;
    I get the following error:
    ORA-01461: can bind a LONG value only for insert into a LONG column ORA-02063: preceding line from x
    None of these columns is type long! (they're varchar2)
    Both databases are 9iR2.
    I had a similar problem INSERTING the data (CREATE button), which I got around by posting to a table in my HTML-DB schema, inserting into the database x table as a select from the HTML-DB table, then deleting from the HTML-DB table, all within the "on submit" process script.
    example:
    insert into tracking_locations
    (DESCRIPTION, LOCATION,IN_USE)
    values(:P7_DESCRIPTION,:P7_LOCATION,:P7_IN_USE);
    insert into x.tracking_locations@x
    (DESCRIPTION, LOCATION,IN_USE)
    select description, location, 'Y' from tracking_locations;
    delete from tracking_locations;
    This INSERT trick works, but seems like it should be easier! Nevertheless, I can't get updates to work.
    Any ideas?

    hey tony--
    neat issue. i just reproduced it on our development box and will log it in a minute. for now, though, an easier way to get around it would be to assign the values from your page items to local variables declared in your update procedure and use those local vars for your update. so instead of...
    update x.tracking_locations@x
    set location = :P7_LOCATION,
    description = :P7_DESCRIPTION,
    IN_USE = :P7_IN_USE
    where tracking_location_num = :P7_TRACKING_LOCATION_NUM;
    ...you could set it up like so...
    declare
    l_location varchar2(200) default :P7_LOCATION;
    l_description varchar2(200) default :P7_DESCRIPTION;
    l_in_use varchar2(200) default :P7_IN_USE;
    begin
    update x.tracking_locations@x
    set location = l_location,
    description = l_description,
    in_use = l_in_use
    where tracking_location_num = :P7_TRACKING_LOCATION_NUM;
    end;
    ...and things should work fine. i'm also pretty sure that you could avoid having to declare the local variables if you referred to your items in your update statement using the v('ITEM_NAME') syntax, but you'd have to have global_names set to true and it might not be as performant a workaround.
    hope this helps,
    raj
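    For reference, the v('ITEM_NAME') variant raj mentions would look roughly like this (a sketch; as he notes, it depends on the global_names setup and may be a less performant workaround):
    update x.tracking_locations@x
    set location = v('P7_LOCATION'),
        description = v('P7_DESCRIPTION'),
        in_use = v('P7_IN_USE')
    where tracking_location_num = v('P7_TRACKING_LOCATION_NUM');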
