Urgent: Performance Issue with DELETE, INSERT INTO ... SELECT, UPDATE

Hi,
We need assistance optimizing the INSERT statement (INSERT INTO ... SELECT):
=================================================
We have a report.
As per the current design, the following steps are used to populate the custom table which is used for reporting purposes (sketched below):
1) DELETE all the records from the custom table XXX_TEMP_REP.
2) INSERT records into the custom table XXX_TEMP_REP (assume all the records relate to type A) using an INSERT INTO ... SELECT statement.
3) UPDATE records in XXX_TEMP_REP using some custom logic for the records populated.
4) INSERT records into the custom table XXX_TEMP_REP (records related to type B) using an INSERT INTO ... SELECT statement.
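A minimal sketch of one commonly suggested reshaping of steps 1-4 (the source table, columns, and step-3 logic are hypothetical placeholders; it assumes no other session needs the old rows, since TRUNCATE is DDL and cannot be rolled back). TRUNCATE avoids the row-by-row undo of the DELETE, and the APPEND hint requests direct-path inserts, which generate less undo:

TRUNCATE TABLE xxx_temp_rep;                          -- step 1: cheaper than DELETE
INSERT /*+ APPEND */ INTO xxx_temp_rep (col1, col2)   -- step 2: type A rows, direct path
SELECT col1, col2 FROM source_table WHERE rec_type = 'A';
COMMIT;                                               -- required before further DML on a direct-path-loaded table
UPDATE xxx_temp_rep SET col2 = UPPER(col2);           -- step 3: placeholder for the custom logic
INSERT /*+ APPEND */ INTO xxx_temp_rep (col1, col2)   -- step 4: type B rows
SELECT col1, col2 FROM source_table WHERE rec_type = 'B';
COMMIT;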
Statistics gathered for the INSERT statement are:
Event Wait Information
SID 460 is waiting on event : db file sequential read
P1 Text : file#
P1 Value : 20
P2 Text : block#
P2 Value : 435039
P3 Text : blocks
P3 Value : 1
Session Statistics
redo size : 293.84 M
parse count (hard) : 34
parse count (total) : 1217
user commits : 3
Transaction and Rollback Information
Rollback Used : 35.1796875 M
Rollback Records : 355886
Rollback Segment Number : 12
Rollback Segment Name : _SYSSMU12$
Logical IOs : 1627182
Physical IOs : 136409
RBS Starting Extent ID : 14
Transaction Start Time : 09/29/10 04:22:11
Transaction_Status : ACTIVE
Please suggest how this can be optimized.
Regards,
Narender

Hello,
Is there any relation to the Oracle Forms tool?
Francois

Similar Messages

  • Performance issue when inserting into a spatially indexed table with JDBC

    We have a table named 'feature' which has an "sdo_geometry" column, and we created a spatial index on that column:
    CREATE TABLE feature ( id number, desc varchar, oshape sdo_geometry)
    CREATE INDEX feature_sp_idx ON feature(oshape) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    Then we executed the following SQL to insert about 800 records into that table (we tried this using DB visualizer and
    our Java application; both used the JDBC driver to connect to the Oracle 11gR2 database):
    insert into feature (id, desc, oshape) values (1001, xxx, xxxxx);
    insert into feature (id, desc, oshape) values (1002, xxx, xxxxx);
    ...
    insert into feature (id, desc, oshape) values (1800, xxx, xxxxx);
    We encountered the same problem as this topic:
    Performance of insert with spatial index
    It takes nearly 1 second to insert one record, compared to 50 records inserted per second without the spatial index,
    which is a 50x drop in performance when inserting with the spatial index.
    However, when we copied and pasted those insert scripts into Oracle Client (same test and same table with spatial index), we got a totally different performance result:
    more than 50 records inserted per second, just as fast as insertion without the spatial index.
    Is it because Oracle Client is not using JDBC? Perhaps JDBC got something wrong when updating those spatially indexed tables.

    Normally JDBC uses auto-commit, so each insert can cause a commit.
    I don't know about Oracle Client. In SQL*Plus, an insert is just an insert,
    and you execute "commit" to explicitly commit your changes.
    So maybe this is the reason.
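    If the auto-commit theory is right, batching the work into one transaction should show it. A minimal sketch in PL/SQL (hypothetical: uses a simple point geometry as a stand-in for the real values, and omits the desc column):
    begin
      for i in 1001 .. 1800 loop
        insert into feature (id, oshape)
        values (i, sdo_geometry(2001, null, sdo_point_type(i, i, null), null, null));
      end loop;
      commit;  -- a single commit for all 800 rows
    end;
    /
    In JDBC the equivalent is calling connection.setAutoCommit(false) before the inserts and a single commit() at the end.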

  • How to convert an UPDATE or DELETE statement into a SELECT statement

    Hi all,
     I have a field called dml_stmt; I am getting the DML statement as input from the user.
     My requirement is: if the user gives "update table_name set col_name = 'xyz' where condition = 'aa'", then before updating the table I need to get the old values from the table and put them in an audit table.
     For that, I need to convert the update statement into a select statement and execute that query to get the data, which I will then put in the audit table.
     Can anyone guide me on how to convert an update or delete statement into a select (this needs to be written in PL/SQL)?
     Please do the needful.
    Regards,
    Jame

    Maybe I'm missing something, but why would auditing help here? It sounds like the user wants to know the prior values of the data, not the SQL UPDATE statement that was issued. Auditing would tell you that a table was updated, fine-grained auditing would tell you what the UPDATE statement was, but you'd need something else to capture the state of the data prior to the update.
    Depending on why putting triggers on every table was discounted, you may also want to take a look at using Workspace Manager or Total Recall (in 11g) to track a history of data changes. But triggers would be the common solution to this sort of problem.
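    For illustration, a minimal row-level trigger along those lines might look like this (the table, column, and audit-table names are hypothetical):
    create or replace trigger table_name_aud
      before update or delete on table_name
      for each row
    begin
      insert into audit_table (old_col_name, changed_on)
      values (:old.col_name, sysdate);
    end;
    /
    The :old values give you the state of the row before the UPDATE or DELETE, without having to parse the user's DML at all.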
    Justin

  • Commit for every 1000 records in an INSERT INTO ... SELECT statement

    Hi, I have the following INSERT INTO ... SELECT statement.
    The SELECT statement (which has joins) returns around 6 crore (60 million) rows of data. I need to insert that data into another table.
    Please suggest the best way to do that.
    I'm using the INSERT INTO ... SELECT statement, but I want to issue a commit for every 1000 records.
    How can I achieve this?
    insert into emp_dept_master
    select e.ename, d.dname, e.empno, e.empno, e.sal
       from emp e, dept d
      where e.deptno = d.deptno       ------ how to use commit for every 1000 records. Thanks

    Smile wrote:
    Hi, I have the following INSERT INTO ... SELECT statement. The SELECT statement (which has joins) returns around 6 crore rows of data. I need to insert that data into another table.
    Does the other table already have records, or is it empty?
    If it is empty then you can drop it and create it as:
    create table your_another_table
    as
    <your select statement that returns 60000000 records>
    Smile wrote:
    Please suggest me the best way to do that. I'm using the INSERT INTO ... SELECT statement, but I want to use a commit statement for every 1000 records.
    That is not the best way. Frequent commits may lead to the ORA-01555 error.
    [url http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:275215756923]A nice article from AskTom on this one
    Smile wrote:
    How can I achieve this?
    insert into emp_dept_master
    select e.ename, d.dname, e.empno, e.empno, e.sal
    from emp e, dept d
    where e.deptno = d.deptno       ------ how to use commit for every 1000 records.
    It depends on the reason behind your wanting to split the transaction into small chunks. Most of the time there is no good reason for that.
    If you are trying to improve performance by doing so, then you are wrong; it will only degrade the performance.
    To improve performance you can use the APPEND hint in the insert, you can try PARALLEL DML, and if you are on 11g or above you can use [url http://docs.oracle.com/cd/E11882_01/appdev.112/e25788/d_parallel_ex.htm#CHDIJACH]DBMS_PARALLEL_EXECUTE to break your insert into chunks and run them in parallel.
    So if you can tell us the actual objective, we could offer some help.
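    For illustration, a direct-path parallel insert along those lines might look like the following (the degree of 4 is just an example value):
    alter session enable parallel dml;
    insert /*+ append parallel(t, 4) */ into emp_dept_master t
    select /*+ parallel(e, 4) */ e.ename, d.dname, e.empno, e.empno, e.sal
      from emp e, dept d
     where e.deptno = d.deptno;
    commit;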

  • A trigger that changes an insert into an update?

    This is probably quite an unusual question, but is it possible to create a trigger that changes an insert into an update?
    So, if someone tries to do something like this:
    INSERT INTO SOME_TABLE (column1, column2, column3) VALUES (value1, value2, value3);
    ...the trigger is able to change it into:
    UPDATE SOME_TABLE SET column1=value1, column2=value2, column3=value3 WHERE ID=1;
    Can it be done?

    Hi,
    You can do things like that in an INSTEAD OF INSERT trigger.
    See the PL/SQL manual for details:
    http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28370/create_trigger.htm#sthref2864
    INSTEAD OF triggers only work on views. Of course, you can create a view as "SELECT * FROM some_table" just so you can use an INSTEAD OF trigger.
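    A rough sketch of that approach (hypothetical: assumes SOME_TABLE has an ID column and names the view SOME_TABLE_V):
    create or replace view some_table_v as
    select * from some_table;
    create or replace trigger some_table_ioi
      instead of insert on some_table_v
      for each row
    begin
      update some_table
         set column1 = :new.column1,
             column2 = :new.column2,
             column3 = :new.column3
       where id = 1;
    end;
    /
    Applications would then INSERT into SOME_TABLE_V rather than SOME_TABLE.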

  • How to use "insert into & select" in format search

    Hello,
    I am just wondering whether you can help me solve this issue.
    I want to change the value of a field in the title area of a sales quotation; however, this field is not shown on the interface.
    For example, there is a field in the title area, "OQUT.Ref1". You cannot actually see this field on the quotation interface or any other document. Now I want to update the value of this field.
    What I am now trying to do is to create a field named "update" in the title area and use a formatted search to update it. The code for the command will be something like:
    Insert into $[OQUT.Ref1.0]
    select $[OQUT.U_MFG#]
    Here, U_MFG# is a UDF. As you may understand, I want to copy the value in U_MFG# to the "Ref1" field.
    However, when I run "Execute", it gives me an error. I believe there is something wrong with the code "Insert into $[OQUT.Ref1.0]".
    Does anyone know how to write the code?
    many thanks
    Stanley
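    For what it's worth, a sketch based on the assumption that this is a SAP Business One formatted search: a formatted search assigned to the target field is normally just a SELECT, with no INSERT clause, for example:
    SELECT $[OQUT.U_MFG#]
    The query would be attached to the Ref1 field itself (via user-defined values), so that executing it writes the result into that field.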

    Thanks to both Suda and Sagar. The reason I wanted to do this is that I wanted the UDF info to be shown on the MS Word-templated quotation document.
    As you know, when you click the Word symbol, a Word-templated doc is generated. The client needs two completely different quotation printout formats, so I planned to print one type from PLD and the other type by clicking the Word symbol. But later I found out that UDF fields cannot be selected in the MS Word template, only system fields.
    Thus, the only way I can do this is to copy the value from the UDF to some unused system field and then show that system field on the MS Word template.
    Do you have any ideas?
    I wanted to tell SAP that it is not useful if UDF fields cannot be inserted into the Word template.
    thanks
    Stanley

  • Performance issue with insert query!

    Hi,
    I am using dbxml-2.4.16; my node-storage container is loaded with a large document (a 54 MB XML file).
    My document basically contains around 65k records under the same table (65k child nodes for one parent node). I need to insert more records into my DB; my insert XQuery is consuming a lot of time (~23 sec) to insert one entry through the command line and around 50 sec through code.
    My container is indexed with "node-attribute-equality-string". The insert query I used:
    insert nodes <NS:sampleEntry mySSIAddress='70011' modifier = 'create'><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable
    If I modify my query with
    into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:sampleTable/NS:sampleEntry[@mySSIAddress='1']
    instead of
    into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable
    the time taken reduces by only 8 secs.
    I have also tried using insert "after", "before", "as first", and "as last", but there is no difference in performance.
    Is anything wrong with my query? What should be the expected time to insert one record into a DB of 65k records?
    Does anybody have any idea regarding this performance issue?
    Kindly help me out.
    Thanks,
    Kapil.

    Hi George,
    Thanks for your reply.
    Here is the info you requested,
    dbxml> listIndexes
    Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
    Index: node-attribute-equality-string for node {}:mySSIAddress
    2 indexes found.
    dbxml> info
    Version: Oracle: Berkeley DB XML 2.4.16: (October 21, 2008)
    Berkeley DB 4.6.21: (September 27, 2007)
    Default container name: n_b_i_f_c_a_z.dbxml
    Type of default container: NodeContainer
    Index Nodes: on
    Shell and XmlManager state:
    Not transactional
    Verbose: on
    Query context state: LiveValues,Eager
    The insert query with the update takes ~32 sec (shown below):
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS';insert nodes <NS:sampleEntry mySSIAddress='70000' modifier = 'create' ><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
    Time in seconds for command 'query': 32.5002
    and the query without the update part takes ~14 sec (shown below):
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
    Time in seconds for command 'query': 13.7289
    The query :
    time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//PMB:sampleTable/PMB:sampleEntry[@mySSIAddress='1000']"
    Time in seconds for command 'query': 0.005375
    is very fast.
    The update of the document seems to consume most of the time.
    Regards,
    Kapil.

  • Issue with INSERT INTO, throws primary key violation error even if the target table is empty

    Hi,
    I am running a simple
    INSERT INTO Table 1 (column 1, column 2, ....., column n)
    SELECT column 1, column 2, ....., column n FROM Table 2
    Table 1 and Table 2 have the same definition (schema).
    Table 1 is empty and Table 2 has all the data. Column 1 is the primary key and there is NO identity column.
    This statement still throws a primary key violation error. I am clueless about this.
    How can this happen when the target table is totally empty?
    Chintu

    Nope, that's not true.
    Either you're not inserting into the right table, or in the background some other trigger code is getting fired which is inserting into some table and causing the PK violation.
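    One more possibility worth checking (an assumption about a common cause, not something confirmed in this thread): duplicate values of Column 1 inside Table 2 itself would also raise a primary key violation even though Table 1 starts out empty. A quick check:
    SELECT column1, COUNT(*)
      FROM Table2
     GROUP BY column1
    HAVING COUNT(*) > 1;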
    Visakh

  • Mechanism of an INSERT INTO ... SELECT

    Hi
    I am using an insert with a select in a batch program which is supposed to run just a little before midnight.
    My question is: when an insert into a table occurs, does Oracle select all the rows before the insert starts, OR does the insert occur simultaneously with the select?
    e.g. table B has 4 rows
    and data is being inserted into A from B.
    Does Oracle select row 1 from B, insert row 1 into table A, and then move to row 2 in B and insert row 2 into A,
    OR
    does Oracle select rows 1 through 4 from B before the first insert into A starts?
    If anybody could point me to a document/reference about that, it would be awesome.
    --vj

    There is no need to know this kind of internal detail. A statement is atomic, so it either completes in total or not at all. Other sessions cannot see intermediate results; even your own session is not able to. If you tried, for example by using database triggers, you would get a mutating table error to prevent that.
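    A quick way to see the statement-level snapshot in action (a sketch; any small table will do, here reusing table A from the example):
    insert into a select * from a;
    This doubles the rows rather than looping forever, because the SELECT sees the table as it was when the statement began, not the rows the same statement is inserting.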
    Regards,
    Rob.

  • Urgent Performance Issue

    Hi,
    Can someone please tell me, if I have a performance issue on the portal side, what cache settings I should choose in RSRT and then at the reporting agent level?
    I would appreciate it if you could also explain this in detail.
    Thanks
    Sakshi

    Hi Sakshi,
    For your question, please read the following document carefully; it contains the correct information regarding your question.
    Read Mode
    The read mode determines how the OLAP processor gets data during navigation. Three alternatives are supported:
    1. Read when navigating/expanding the hierarchy
    In this method, the system transports the smallest amount of data from the database to the OLAP processor but the number of read processes is the largest.
    In the "Read when navigating" mode below, data is requested in a hierarchy drilldown for the fully expanded hierarchy. In the "Read when navigating/expanding the hierarchy" mode, data in the hierarchy is aggregated by the database and transferred to the OLAP processor from the lowest hierarchy level displayed in the start list. When expanding a hierarchy node, the system intentionally reads this node's children.
    You can improve the performance of queries with large presentation hierarchies by creating aggregates in a middle hierarchy level that is greater than or equal to the start level.
    2. Read when navigating
    The OLAP processor only requests the data required for each query navigation status in the Business Explorer. The data required is read for each navigation step.
    In contrast to the "Read when navigating/expanding the hierarchy" mode, the system always fully reads presentation hierarchies at tree level.
    When expanding nodes, the OLAP processor can read the data from the main memory.
    When accessing the database, the system uses the most suitable aggregate table and, if possible, aggregates in the database itself.
    3. Read everything at once
    There is only one read process in this mode. When executing the query in the Business Explorer, the data required for all possible navigation steps for this query is read to the OLAP processor's main memory area. When navigating, all new navigation statuses are aggregated and calculated from the main memory data.
    The "Read when navigating/expanding the hierarchy" mode has a markedly better performance in almost all cases than the other two modes. This is because the system only requests the data that the user wants to see in this mode.
    The "Read when navigating" setting, in contrast to "Read when navigating/expanding the hierarchy", only has a better performance for queries with presentation hierarchies.
    In contrast to the two previous modes, the "Read everything at once" setting also has a better performance with queries with free characteristics. The idea behind aggregates, that is working with pre-aggregated data, is least supported in the "Read everything at once" mode. This is because the OLAP processor carries out aggregation in each query view.
    We recommend you choose the "Read when navigating/ expanding the hierarchy" mode.
    Only use different mode to "Read when navigating/ expanding the hierarchy" in exceptional circumstances.
    The "Read everything at once" mode can be useful in the following cases:
    The InfoProvider does not support selection, meaning the OLAP processor reads significantly more data than the query needs anyway.
    A user exit is active in the query that prevents the system from having already aggregated in the database.
    Have a nice day.
    ANR

  • Procedure or function for insert into select ...

    Hi!
    We need to know if we can create a procedure or function that will run the following script:
    insert into agent.train_schedule
    select 4000, '06:25' ,trunc(sysdate, 'year')+t.n
    from agent.trains, (select rownum -1 n from dual
    connect by level <= 365) t
    where agent.trains.id = 4000 and
    agent.trains.weekday like '%'||to_char(trunc(sysdate, 'year')+t.n, 'd',
    'nls_date_language=AMERICAN')||'%';
    The script inserts the train schedule dates into the train schedule table in accordance to the trains table.
    Any help would be appreciated.
    Thanks!

    Try the below:
    Create Or Replace procedure test_proc(p_train_no number, p_train_time varchar2) as
    begin
      insert into agent.train_schedule
      select p_train_no, p_train_time, trunc(sysdate, 'year') + t.n
        from agent.trains, (select rownum - 1 n from dual
                            connect by level <= 365) t
       where agent.trains.id = p_train_no
         and agent.trains.weekday like '%'||to_char(trunc(sysdate, 'year')+t.n, 'd',
             'nls_date_language=AMERICAN')||'%';
    end;
    Call the procedure from the GUI, passing the train number and the train time.
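    For example, a quick anonymous-block call (values taken from the original script):
    begin
      test_proc(4000, '06:25');
    end;
    /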
    Regards,
    Samujjwal Basu

  • Performance issue: CREATE TABLE AS SELECT with BLOBs

    Hi!
    I have a performance issue when moving BLOBs between tables (the sizes of the image files are from 2 MB to 10 MB).
    I'm using the following statement, for example:
    "Create table tmp_blob as select * from table_blob
    where blob_id = 333;"
    Are there any hints that I can give when moving data like this, or is Oracle 10g better with BLOBs?

    Did you find a resolution to this issue?
    We are also having the same issue and are wondering if there is a faster mechanism to copy LOBs between two tables.
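    One knob that sometimes helps (a sketch only, with img_blob as a hypothetical name for the LOB column; not verified against this poster's schema): declaring the target's LOB storage NOCACHE NOLOGGING, so the LOB data bypasses the buffer cache and generates minimal redo during the copy:
    Create table tmp_blob
      lob (img_blob) store as (nocache nologging)
    as
    select * from table_blob
    where blob_id = 333;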

  • Urgent! How to insert into and query video from the database in Forms?

    On the Forms 6i demos CD there is a demo form, ocxvideo.fmb,
    but it is just for video in the file system.
    I want to read *.avi files from the file system, insert them into
    the database, and query them from my forms.
    I created a table with a LONG RAW column; the default Forms wizard uses
    LONG RAW for an [image] item in Forms.
    I changed the item type to ActiveX, and right-clicked
    ==> [Insert object] ==> Oracle Video control.
    I still cannot insert AVI data into the database and query it from my forms.
    Please give me some advice to solve this problem.
    Thank you very much!
    Ming-An
    [email protected]


  • Performance issue and functional question regarding updates on tables

    A person at my site wrote some code to update a custom field on the MARC table that was being copied from the MARA table.  Here is what I would have expected to see as the code.  Assume that both sets of code have a parameter called p_werks, which is the plant in question.
    data : commit_count type i.
    select matnr zfield from mara into (wa_marc-matnr, wa_marc-zfield).
      update marc set zfield = wa_marc-zfield
         where werks = p_werks and matnr = wa_marc-matnr.
      commit work and wait.
    endselect.
    I would have committed every 200 rows instead of every row, but here's the actual code, and my question isn't about the commits but something else.  In this case an internal table was built with two elements, MATNR and WERKS; that could have been done above too, but that's not my question.
                DO.
                  " Lock the record that needs to be update with material creation date
                  CALL FUNCTION 'ENQUEUE_EMMARCS'
                    EXPORTING
                      mode_marc      = 'S'
                      mandt          = sy-mandt
                      matnr          = wa_marc-matnr
                      werks          = wa_marc-werks
                    EXCEPTIONS
                      foreign_lock   = 1
                      system_failure = 2
                      OTHERS         = 3.
                  IF sy-subrc <> 0.
                    " Wait, if the records not able to perform as lock
                    CALL FUNCTION 'RZL_SLEEP'.
                  ELSE.
                    EXIT.
                  ENDIF.
                ENDDO.
                " Update the record in the table MARC with material creation date
                UPDATE marc SET zzdate = wa_mara-zzdate
                           WHERE matnr = wa_mara-matnr AND
                                 werks = wa_marc-werks.    " IN s_werks.
                IF sy-subrc EQ 0.
                  " Save record in the database table MARC
                  CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
                    EXPORTING
                      wait   = 'X'
                    IMPORTING
                      return = wa_return.
                  wa_log-matnr   = wa_marc-matnr.
                  wa_log-werks   = wa_marc-werks.
                  wa_log-type    = 'S'.
                  " text-010 - 'Material creation date has updated'.
                  wa_log-message = text-010.
                  wa_log-zzdate  = wa_mara-zzdate.
                  APPEND wa_log TO tb_log.
                  CLEAR: wa_return,wa_log.
                ELSE.
                  " Roll back the record(un save), if there is any issue occurs
                  CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'
                    IMPORTING
                      return = wa_return.
                  wa_log-matnr   = wa_marc-matnr.
                  wa_log-werks   = wa_marc-werks.
                  wa_log-type    = 'E'.
                  " 'Material creation date does not updated'.
                  wa_log-message = text-011.
              wa_log-zzdate  = wa_mara-zzdate.
                  APPEND wa_log TO tb_log.
                  CLEAR: wa_return, wa_log.
                ENDIF.
                " Unlock the record from data base
                CALL FUNCTION 'DEQUEUE_EMMARCS'
                  EXPORTING
                    mode_marc = 'S'
                    mandt     = sy-mandt
                    matnr     = wa_marc-matnr
                    werks     = wa_marc-werks.
              ENDIF.
    Here's the question - why did this person enqueue and dequeue explicit locks like this?  They claimed it was to prevent issues - what issues?  Is there something special about updating tables that we don't know about?  We've actually seen the system run out of these ENQUEUE locks.
    Before you all go off the deep end and ask why not just do the update, keep in mind that you don't want to update a million+ rows and then do a commit either - that locks up the entire table!

    The ENQUEUE lock ensures that another program called by another user will not update the data at the same time, preventing database coherence from being lost. Without it, another user in a normal SAP transaction may have read and locked the record, so your update could be lost when that user saves, and you could likewise override modifications made by another user in another LUW.
    You cannot use a COMMIT WORK inside a SELECT - ENDSELECT, because COMMIT WORK closes each and every open database cursor, so your first idea would dump after the first update (which is why the internal table is mandatory).
    Go through some documentation like [Updates in the R/3 System (BC-CST-UP)|http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCCSTUP/BCCSTUP_PT.pdf]
    Regards

  • FRM-40602: Cannot insert into or update data in a view

    Hi all!
    I have a form based on a view and I want to get rid of this message.
    I set the block's "query only" property, but it still doesn't work.
    Has anyone come across this situation?
    Many thanks!

    Hello
    I've just been messing about with a similar problem. Basically, I have a view that involves a join across two tables, I have a data block in my form that's based on the view, and I've written an INSTEAD OF trigger to insert/update/delete from the two tables.
    I was getting the error message 'Cannot insert or update data in a view', and it turned out that the error was happening because the join column between the two tables was the primary key on one of the tables, but the corresponding column on the join table had no unique key on it. This meant that Oracle couldn't establish a one-to-one relationship between rows in the view and rows in the underlying tables. The column on the join table was in fact unique on that table, and adding a unique constraint on that column in the database cured the problem.
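    A sketch of that fix, with hypothetical table and column names:
    alter table child_table
      add constraint child_table_uk unique (join_col);
    With the unique constraint in place, the join column is key-preserved and the view becomes updatable.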
    Hope that's of use.
    regards
    Andrew
    UK
