Performance degradation when using foreign keys

Hi,
I am facing drastic performance degradation when I add foreign keys to a table and then perform inserts/updates on that table.
I have a row store table into which I need to insert around 150,000 records.
If the table has no foreign key references, the insert takes at most 5 seconds, but if the same table references other tables (in my case there are 3 references), the processing time increases drastically to 2 minutes.
Is there any solution or best practice that can help me regain performance (processing speed) in this situation?
Thanks
S.Srivatsan

Hi Sri,
When you perform an insert into any database table that has foreign key relationships, the database checks the corresponding parent tables to verify that the master data is available. If your table has two foreign key relationships, this check happens twice per insert, so performance degrades accordingly. This is one of the reasons why ECC doesn't establish foreign key relationships in the backend database. The same applies not only to INSERT but also to UPDATE and DELETE.
Sreehari
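A common mitigation when the 150,000 rows arrive as a one-off batch load (a generic sketch, not HANA-specific advice from this thread; the table and constraint names are placeholders, and the exact ALTER TABLE syntax should be checked for your database) is to drop the foreign key constraints before the load and re-create them afterwards, so each constraint is validated once over the whole set instead of once per inserted row:
-- Placeholder names; verify ALTER TABLE syntax for your database.
ALTER TABLE child_tab DROP CONSTRAINT fk_child_parent;

-- ... perform the 150,000-row bulk INSERT here ...

-- Re-creating the constraint validates all rows in one pass.
ALTER TABLE child_tab ADD CONSTRAINT fk_child_parent
  FOREIGN KEY (parent_id) REFERENCES parent_tab (parent_id);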

Similar Messages

  • Performance degradation when using proxy.pac file with FF ESR 31

    With Bug 923458 many people complained about a performance issue compared to other browsers when a proxy.pac file is used.
    The issue initially reported with the bug was resolved for ESR25 according to the statistics, but the general performance issue remained.
    I had the same issue with ESR24 and ESR31.3.
    I was testing with www.bild.de.
    It took about 40 seconds to load the content completely. Without the proxy.pac file it took about 10 seconds.
    I added a few alerts to the pac-File in order to get logs within the console for some analyses.
    I found the following:
    1. The pac file is executed for every request, no matter whether the host has changed or not.
    In our case the pac file checks IP addresses and host names only.
    It is not necessary to execute the pac file for each and every request to the same remote host.
    So the question is whether we are able to disable this behaviour via about:config.
    2. The content referenced by www.bild.de seems to be loaded sequentially and with a delay.
    The overall time consumed by the proxy.pac file executions was about 4 Seconds compared to the 40 seconds of overall load time.
    So I checked the delay between executions of the pac-file and found an overall delay of 40 seconds. I expect that the delay between the calls to the pac-file is caused by the retrieval of contents from the remote host.
    So why are the requests executed sequentially?
    Hint: Due to the times necessary for executing the pac-file and downloading the contents from the remote host, I would expect the logs generated by my alerts to be mixed (especially if myIpAddress took 1 Second). But the log is cleanly ordered by URL. (see attachment)

    Hi guigs2,
    Thanks for your response. As we only use myIpAddress once within our pac file, otherwise rely only on dnsDomainIs(), == comparisons and shExpMatch(), and the sum of all pac executions was about 4 seconds compared to 40 seconds of overall load time, I do not think that DNS resolving is our issue.
    I checked the settings of the configuration you mentioned above. It is set to "false", so the client would try to resolve the DNS names. Our admin told me that we do not use SOCKS proxies, only HTTP proxies.
    Regarding the sequential load of the contents included on www.bild.de from other web sites, I attached a screenshot.
    Please note the red highlights. These show the start time in milliseconds of the pac execution. I added this as a kind of ID which, together with the URL, would act as a unique identifier if the log items were mixed. But they are not; instead they are cleanly ordered by URL (for all 360 pac-file calls).
    Moreover, in the picture you can see the delay between the end of the last pac-file execution and the next one (blue timestamp in milliseconds compared to the red timestamp of the next row saying "entered proxy.pac"). The delays sum up exactly to the 40 seconds Firefox took to load the page completely.
    The fragment shown alone represents a delay of 630 ms between pac-file executions. If the contents were loaded in parallel, there should be no such delay.

  • Performance problem when using CAPS LOCK piano input

    Dear reader,
    I'm very new to Logic and am running into a performance problem when using the Caps Lock piano keyboard to play an instrument: when I'm not recording, everything is fine and the program responds instantly to my keystrokes, but as soon as I go into record mode there is sometimes a delay in the response time (so I press a key and it takes up to half a second longer before the note is actually played).
    Is there anything I can do about this to improve performance (for example, turning off certain features of the application), or should I skip the Caps Lock keyboard anyway and go straight for an external MIDI keyboard?
    Thanks and regards,
    Tim Metz

    Does your project have audio tracks, and how heavy is it (how many tracks)? Also, what kind of software instrument do you use?

  • How to avoid shared locks when validating foreign keys?

    I have a table with a FK and I want to update a row in that table without being blocked by another transaction which is updating the parent row at the same time. Here is an example:
    CREATE TABLE dbo.ParentTable
    (
        PARENT_ID int NOT NULL,
        VALUE varchar(128) NULL,
        CONSTRAINT PK_ParentTable PRIMARY KEY (PARENT_ID)
    );
    GO
    CREATE TABLE dbo.ChildTable
    (
        CHILD_ID int NOT NULL,
        PARENT_ID INT NULL,
        VALUE varchar(128) NULL,
        CONSTRAINT PK_ChildTable PRIMARY KEY (CHILD_ID),
        CONSTRAINT FK_ChildTable__ParentTable FOREIGN KEY (PARENT_ID)
            REFERENCES dbo.ParentTable (PARENT_ID)
    );
    GO
    INSERT INTO ParentTable (PARENT_ID, VALUE)
    VALUES (1, 'Some value');
    INSERT INTO ChildTable (CHILD_ID, PARENT_ID, VALUE)
    VALUES (1, 1, 'Some value');
    GO
    Now I have 2 transactions running at the same time:
    The first transaction:
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    BEGIN TRAN;
    UPDATE ParentTable
    SET VALUE = 'Test'
    WHERE PARENT_ID = 1;
    The second transaction:
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    BEGIN TRAN;
    UPDATE ChildTable
    SET VALUE = 'Test',
    PARENT_ID = 1
    WHERE CHILD_ID = 1;
    If the 'UPDATE ParentTable' statement runs a bit earlier, the 'UPDATE ChildTable' statement is blocked until the first transaction is committed or rolled back. This happens because SQL Server acquires shared locks when validating foreign keys, even if the transaction is using the read uncommitted, read committed snapshot (read committed using row versioning) or snapshot isolation level. I cannot see why a change to ParentTable.VALUE should prevent me from updating ChildTable. Please note that ParentTable.PARENT_ID is not changed by the first transaction, which means that from the FK's point of view whatever is set in ParentTable.VALUE is never a problem for referential integrity. So this blocking behaviour seems illogical to me. Furthermore, it contradicts MSDN:
    Transactions running at the READ UNCOMMITTED level do not issue shared locks to prevent other transactions from modifying data read by the current transaction. READ UNCOMMITTED transactions are also not blocked by exclusive locks that would prevent the current transaction from reading rows that have been modified but not committed by other transactions.
    Does anybody know how to work around the issue? In other words, are there any tricks to avoid shared locks when validating foreign keys? (Disabling the FK is not an option.) Thank you.
    Alexey

    If you change the primary key of the parent table to be nonclustered, there is no blocking.
    Indeed, when I update ParentTable.VALUE:
    - if PK_ParentTable is clustered, a particular row in the clustered index is locked (request_mode: X, resource_type: KEY);
    - if PK_ParentTable is non-clustered, a particular physical row in the heap is locked (request_mode: X, resource_type: RID).
    And when I update ChildTable.PARENT_ID:
    - if PK_ParentTable is clustered, this index is used to verify referential integrity;
    - if PK_ParentTable is non-clustered, again this index is used to verify referential integrity, but this time it is not locked.
    It is important to note that in both cases SQL Server acquires shared locks when validating foreign keys. The principal difference is that with a clustered PK_ParentTable the request is blocked, while with a non-clustered index it is granted.
    Thank you, Erland, for the idea and the explanations.
    The only thing that upsets me is that this solution cannot be applied, because I don't want to convert PK_ParentTable from clustered to non-clustered just to avoid blocking issues. It is a pity that SQL Server is not smart enough to realize that:
    - ParentTable.PARENT_ID is not changed and, as a result, should not be locked;
    - ChildTable.PARENT_ID is not actually changed either (the old value in my example is 1 and the new value is also 1) and, as a result, there is no need at all to validate the foreign key.
    In fact, the problem I described is just the tip of the iceberg. The real challenge is that I have deadlocks because of the FK validation. In reality, the first transaction has an additional statement which updates ChildTable:
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    BEGIN TRAN
    UPDATE ParentTable
    SET VALUE = 'Test'
    WHERE PARENT_ID = 1;
    UPDATE ChildTable
    SET VALUE = 'Test'
    WHERE PARENT_ID = 1;
    The result is the famous message:
    Msg 1205, Level 13, State 51, Line xx
    Transaction (Process ID xx) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
    I know that if I change the order of the two statements, it will solve the deadlock issue. But let's imagine I cannot do it. What are the other options?
    Alexey
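    One further option, consistent with the observation above that PARENT_ID is not really changing: only assign the foreign-key column when its value actually changes, since SQL Server validates the FK (and takes the shared lock on the parent) whenever the column appears in the SET list, even if the new value equals the old one. A minimal sketch, assuming the application can build the UPDATE conditionally:
    -- Sketch: PARENT_ID is omitted from the SET list, so no FK validation
    -- (and no shared lock on ParentTable) is needed for this update.
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    BEGIN TRAN;
    UPDATE ChildTable
    SET    VALUE = 'Test'
    WHERE  CHILD_ID = 1;
    COMMIT;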

  • Dbms_crypto - avoid error when using different key in lower environment

    Hello Experts,
    We are using Oracle 11.2.0.2. We are planning to implement dbms_crypto to encrypt a few columns. We clone the data from production to the lower environments (DEV, QC).
    For the lower environments, we do not want to get the sensitive data from production and do not plan to use the same key. Rather than getting an error when using a different key, is it possible to get a different result set back?
    In other words, we want the implementation to be the same across environments but want to use a different key in the lower environments and get a different result (or garbage).
    Any suggestions would be greatly appreciated.
    While testing this logic, I am getting the following error when using a different key to decrypt. It works fine if I use the same key.
    Error at line 1
    ORA-28817: PL/SQL function returned an error.
    ORA-06512: at "SYS.DBMS_CRYPTO_FFI", line 67
    ORA-06512: at "SYS.DBMS_CRYPTO", line 44
    ORA-06512: at line 19
    DECLARE
      l_credit_card_no  VARCHAR2(19) := '1234 5678 9012 3456';
      l_ccn_raw         RAW(128) := UTL_RAW.cast_to_raw(l_credit_card_no);
      l_key             RAW(128) := UTL_RAW.cast_to_raw('abcdefgh');
      l2_key            RAW(128) := UTL_RAW.cast_to_raw('12345678');
      l_encrypted_raw   RAW(2048);
      l_decrypted_raw   RAW(2048);
    BEGIN
      DBMS_OUTPUT.put_line('Original  : ' || l_credit_card_no);
      l_encrypted_raw := DBMS_CRYPTO.encrypt(src => l_ccn_raw,
                                             typ => DBMS_CRYPTO.des_cbc_pkcs5,
                                             key => l_key);
      DBMS_OUTPUT.put_line('Encrypted : ' || RAWTOHEX(UTL_RAW.cast_to_raw(l_encrypted_raw)));
      l_decrypted_raw := DBMS_CRYPTO.decrypt(src => l_encrypted_raw,
                                             typ => DBMS_CRYPTO.des_cbc_pkcs5,
                                             key => l2_key); -- using a different key to decrypt
      DBMS_OUTPUT.put_line('Decrypted : ' || UTL_RAW.cast_to_varchar2(l_decrypted_raw));
    END;
    Thank you.

    If I understand what you are trying to do ... and I may not ... it is not going to work.
    SQL> DECLARE
      2   l_credit_card_no VARCHAR2(19) := '1612-1791-1809-2605';
      3   l_ccn_raw RAW(128) := utl_raw.cast_to_raw(l_credit_card_no);
      4   l_key1     RAW(128) := utl_raw.cast_to_raw('abcdefgh');
      5   l_key2     RAW(128) := utl_raw.cast_to_raw('zyxwvuts');  -- alternate key used to attempt a different decryption
      6 
      7   l_encrypted_raw RAW(2048);
      8   l_decrypted_raw RAW(2048);
      9  BEGIN
    10    dbms_output.put_line('Original : ' || l_credit_card_no);
    11 
    12    l_encrypted_raw := dbms_crypto.encrypt(l_ccn_raw, dbms_crypto.des_cbc_pkcs5, l_key1);
    13 
    14    dbms_output.put_line('Encrypted : ' || RAWTOHEX(utl_raw.cast_to_raw(l_encrypted_raw)));
    15 
    16    l_decrypted_raw := dbms_crypto.decrypt(src => l_encrypted_raw, typ => dbms_crypto.des_cbc_pkcs5, key => l_key1);
    17 
    18    dbms_output.put_line('Key1 : ' || utl_raw.cast_to_varchar2(l_decrypted_raw));
    19 
    20    l_decrypted_raw := dbms_crypto.decrypt(src => l_encrypted_raw, typ => dbms_crypto.des_cbc_pkcs5, key => l_key2);
    21 
    22    dbms_output.put_line('Key2 : ' || utl_raw.cast_to_varchar2(l_decrypted_raw));
    23  END;
    24  /
    Original : 1612-1791-1809-2605
    Encrypted : 3534443342333642353141363846384237463732384636373943374630364234323243334539383042323135
    Key1 : 1612-1791-1809-2605
    DECLARE
    ERROR at line 1:
    ORA-28817: PL/SQL function returned an error.
    ORA-06512: at "SYS.DBMS_CRYPTO_FFI", line 67
    ORA-06512: at "SYS.DBMS_CRYPTO", line 44
    ORA-06512: at line 20
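    A possible direction for the original requirement (garbage or a placeholder instead of an error in DEV/QC), offered here only as a hedged sketch: with DES in CBC/PKCS5 mode a wrong key normally fails the padding check and raises ORA-28817, so the decrypt call can be wrapped and the failure turned into a mask. The function name and the mask value below are hypothetical, not part of the thread's implementation.
    CREATE OR REPLACE FUNCTION decrypt_or_mask (p_encrypted IN RAW, p_key IN RAW)
      RETURN VARCHAR2
    IS
      l_out RAW(2048);
    BEGIN
      l_out := DBMS_CRYPTO.decrypt(src => p_encrypted,
                                   typ => DBMS_CRYPTO.des_cbc_pkcs5,
                                   key => p_key);
      RETURN UTL_RAW.cast_to_varchar2(l_out);
    EXCEPTION
      WHEN OTHERS THEN
        -- Wrong key in a lower environment: return a mask instead of raising ORA-28817.
        RETURN '*** MASKED ***';
    END decrypt_or_mask;
    /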

  • Performance hit when using a FirePro GPU?

    Hey there!
    I'm interested in purchasing a cheap ATI FirePro V4900 or similar GPU in order to take advantage of my 10-bit TFT when using Photoshop. I was wondering if I have to expect a performance hit when using PS 6.0 with such a card, as opposed to a normal "gamer" GPU like the NVidia 670 GTX, when:
    - performing normal file handling, opening PSD files, panning, zooming, brushing, etc.
    - applying some of the newer GPU-enhanced filters like Liquify, Oil Paint, Iris Blur, 3D enhancements, etc.
    - actually applying/rendering a more demanding filter such as Iris Blur?
    Does anyone know about this? I'm afraid I could not find any benchmarks at all except for the one on Tomshardware regarding OCL, but that one does not include professional GPUs...
    Thanks for any info in advance!

    There will not be synchronization when the method of A
    is being called. The method 1) certainly saves memory
    space, but will the performance be hurt since there
    will only be one object accessed by multiple threads?
    Or maybe it doesn't matter?
    If there is no synchronization, it will not matter. Threads execute methods. Methods do not run on Objects. The Object is just data that is implicitly linked to the method.
    Just make sure it's safe to keep the method unsynchronized.

  • Not use foreign keys.

    I would like to know: is it bad if I don't want to use foreign keys? Is it just a constraint? Is it useful? If I won't use ON DELETE CASCADE, for example, is it mandatory to use foreign keys? Please help me with this doubt.
    Thanks.

    A Small example:
    DROP TABLE T_TABLES ;
    CREATE TABLE T_TABLES AS
    SELECT ROWNUM AS ID, OWNER, TABLE_NAME
    FROM   ALL_TABLES;
    ALTER TABLE T_TABLES ADD CONSTRAINT PK_TABLES PRIMARY KEY(ID);
    DROP TABLE T_TABLE_COLUMNS ;
    CREATE TABLE T_TABLE_COLUMNS AS
      SELECT T.ID, TC.OWNER, TC.TABLE_NAME, TC.COLUMN_NAME, TC.DATA_TYPE
      FROM   ALL_TAB_COLS TC
               JOIN T_TABLES T ON (T.OWNER = TC.OWNER AND T.TABLE_NAME = TC.TABLE_NAME)
      WHERE  T.TABLE_NAME NOT IN ('T_TABLES', 'T_TABLE_COLUMNS'); -- EXCEPT THESE TWO TABLES
    EXPLAIN PLAN FOR
    SELECT *
    FROM   T_TABLE_COLUMNS
    WHERE  ID IN (SELECT ID FROM T_TABLES);
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    | Id  | Operation          | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |                 | 27823 |  3586K|    63   (7)| 00:00:01 |
    |   1 |  NESTED LOOPS      |                 | 27823 |  3586K|    63   (7)| 00:00:01 |
    |   2 |   TABLE ACCESS FULL| T_TABLE_COLUMNS | 27823 |  3233K|    60   (2)| 00:00:01 |
    |*  3 |   INDEX UNIQUE SCAN| PK_TABLES       |     1 |    13 |     0   (0)| 00:00:01 |
    --ADD FOREIGN KEY
    ALTER TABLE T_TABLE_COLUMNS ADD CONSTRAINT FK_TABLES FOREIGN KEY(ID) REFERENCES T_TABLES(ID);
    -- SAME QUERY AGAIN
    EXPLAIN PLAN FOR
    SELECT *
    FROM   T_TABLE_COLUMNS
    WHERE  ID IN (SELECT ID FROM T_TABLES);
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- NO JOIN ANYMORE
    | Id  | Operation         | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |                 | 27823 |  3233K|    60   (2)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| T_TABLE_COLUMNS | 27823 |  3233K|    60   (2)| 00:00:01 |
    -------------------------------------------------------------------------------------
    FKs can even save you from joins.

  • How to retrieve the character '#' when using Shift key + 3 instead of Option key + 3?

    Hi everyone,
    Kindly advise how to retrieve the character '#' when using Shift key + 3 instead of Option key + 3.
    Thank you in advance.

    Hi Tom,
    Thanks for your proposed solution. The problem has been solved.

  • Binding for table produces list for other tables using foreign key and crea

    Using
    software Jdev 11G, WLS 11G, Oracle DB 11G, Windows Vista platform
    technology EJB 3.0, jspx, backing beans, session bean
    I cannot create a namedquery on my secondary table. The method for the column uses the entity object rather than the name and value of the column.
    For instance,
    A (Coketruck) table has inventory records in a (Products) table.
    Coketruck has a one-to-many relationship to the Products table.
    Products has a many-to-one relationship to the Coketruck.
    I need to return the products from the Products table based on the Coketruck, but I cannot create a named query because the field in the Products entity is of the entity object type instead of a Long that I could use to look up all the products by the column TRUCK_ID.
    This is what I was expecting:
    private Long truckId;
    public Long getTruckId() {
        return truckId;
    }
    public void setTruckId(Long truckId) {
        this.truckId = truckId;
    }
    Instead this is what I have:
    @ManyToOne
    @JoinColumn(name = "TRUCK_ID")
    private Coketruck coketruck;
    public Coketruck getCoketruck() {
        return coketruck;
    }
    public void setCoketruck(Coketruck coketruck) {
        this.coketruck = coketruck;
    }
    How do I do a query on the Product table to return all the products that are in the coketruck?
    If I do the following, it expects me to pass the entity object, which I cannot use as search criteria for my find method.
    @NamedQuery(name = "Products.findById", query = "select o from Products o where o.truckId = :truckId")
    On a different note but the same song…
    I noticed that when I look at my Session Bean Data Controls, the coketruck already has a list of the products. I have created a jsp page with a backing bean and have been able to use the named query on the Coketruck entity to retrieve the productList. Unfortunately I need to sort the products by type and was not able to find where to perform the work to iterate through the productList to get my desired display. Therefore I started looking at another named query that would only retrieve the product_type, ordering by the truckId.
    Seems I have come full circle… I don’t care what method I have to use to get the info back.
    Any help is greatly appreciated!

    user9005175 wrote:
    Hi!
    I work on an application wich uses a shopping cart stored in a database. The shopping cart uses two tables:
    CART: Holds information common for one shopping cart: the user it is connected to etc.
    - Primary key: CART_ID
    CART_ROW: One row in the cart, e.g. one new product to buy.
    - Primary key: ROW_ID
    - Foreign key: CART_ROW.CART_ID references CART.CART_ID
    From the code the rows in the cart are collected per cart, as is modelled by the foreign key. There exists one more relationship, which we use in the code, but which is not modelled by a foreign key in the database. One row can be dependent on another row, which makes the other row a parent.
    CART_ROW has a column PARENT_ID which references CART_ROW.ROW_ID.
    Should we add a foreign key for PARENT_ID? Or are there any questions to consider when it is a foreign key to the same table?
    I suggest adding the foreign key; it won't harm performance (except on insert, when there would be validation of the foreign key). But it would prevent users from inserting wrong/corrupt data, either through code or directly by logging in to the database.
    A while ago we added indexes, both on ROW_ID and on PARENT_ID. Could the index on PARENT_ID have been harmful, since there is no foreign key?
    An index on PARENT_ID would only be harmful if you do not make use of the index after creating it (i.e. there is no query which makes use of this index).
    And if you decide to have a foreign key on PARENT_ID, then I suggest having an index on PARENT_ID too, as it would be helpful at least when you delete any record in this table.
    Best regards!
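    A minimal sketch of what that advice could look like in DDL, using the table and column names from the post (illustrative only, not the poster's actual schema):
    -- Self-referencing foreign key: each row's PARENT_ID must point at an existing ROW_ID.
    ALTER TABLE cart_row
      ADD CONSTRAINT cart_row_parent_fk FOREIGN KEY (parent_id)
      REFERENCES cart_row (row_id);
    -- Index on the referencing column, useful for lookups and for deletes of parent rows.
    CREATE INDEX cart_row_parent_idx ON cart_row (parent_id);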

  • Error when adding Foreign Key

    Hi,
    Problem:
    When trying to add a Foreign Key via Edit Table | Foreign Key tab an error is thrown
    Error:
    "ORA-00942: table or view does not exit", Vendor code 942
    SQL Developer Generated DDL:
    ALTER TABLE "CNArticlesCODES"
    DROP CONSTRAINT "CNArticlesCODES_FK1"
    ALTER TABLE "CNArticlesCODES"
    ADD CONSTRAINT "CNArticlesCODES_FK1" FOREIGN KEY
    "ID"
    ) REFERENCES CNArticlesDATA
    "ID"
    ) ENABLE
    My work around:
    table name next to REFERENCES command in double quotes
    ALTER TABLE "CNArticlesCODES"
    ADD CONSTRAINT "CNArticlesCODES_FK1" FOREIGN KEY
    "UID"
    ) REFERENCES "CNArticlesCODES"
    "UID"
    ) ENABLE
    SQL Developer 1.0.0.15.57 on Windows XP SP2
    Oracle Database 10g Enterprise Edition Release 10.2.0.10 (Eval Lic) on Windows 2003 Web Edition 32 bit
    The 2 tables in this example were generated by the MS DTS tool while importing data. This issue is not present when performing the above action on 2 tables generated using SQL Developer.
    I'm no database expert and I'm new to Oracle (R&D stage of a SQL Server to Oracle migration). So can somebody tell me, or point me at docs that explain, what's going on here with the table names?
    Thanks

    Hopefully someone from oracle will see this and log it as a bug. If that doesn't happen and you have a support license, you can open up a ticket with them directly.
    As an aside: if you are daring enough, the template for that is stored in an XML file inside one of the jars. You can always edit it and add the quotes. Or, and you don't want to hear this, you may want to rename everything to uppercase. It would hurt now, but could end up saving you time down the road (adding quotes to every query is a pain).
    Eric
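    For background on why the quotes matter (an editorial note, not part of the original exchange): Oracle folds unquoted identifiers to uppercase, while identifiers created inside double quotes, as the DTS import did, keep their exact case. A small illustrative sketch:
    -- Created with quotes, so the stored name is the mixed-case CNArticlesDATA.
    CREATE TABLE "CNArticlesDATA" ("ID" NUMBER PRIMARY KEY);
    -- Fails with ORA-00942: the unquoted name is folded to CNARTICLESDATA, which does not exist.
    SELECT * FROM CNArticlesDATA;
    -- Works: the quoted name matches the stored mixed-case identifier.
    SELECT * FROM "CNArticlesDATA";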

  • Error when sorting foreign key fields in a view

    Hello all
    I created a view with links to other views, and then created a table. In the table I put different attributes from the view and from the views it is linked to. When I click the sort ascending or descending button on a column attribute that is in the original view, everything works fine. However, if I click the same arrow button for a different column, one that is in a linked view, I get this error message.
    Definition OnlineTerminalId of type Attribute is not found in ObjektiView1.
    I do not understand why this does not work. It shows the values inside these linked views just fine. I did change the names of some attributes in the database, but I had the problem before then, and afterwards I changed them all back, so I doubt that is the issue.
    If anyone can help I would be much obliged

    Dino2dy wrote:
    Hmm, what I did is slightly different, but what my way does is give me the ability to create forms that change automatically when I click a row in a table, which I thought would be useful. I could do it your way too, but I would have to do everything again from the beginning, which is something I am trying to avoid. There should be a way to fix this problem this way, as it would be kind of stupid to be able to show the data in the table but not be able to sort or filter by it.
    You can still accomplish this by relating the combined VO with your children VOs using View Links, same as what you are doing now.
    Also that view would be HUGE as it is a single table that is connected to a lot of other tables via foreign keys, so my single view would cover 10 tables.
    Not necessarily. You don't have to include all Entities in your combined VO - just those whose attributes you want displayed on your table.
    Nick

  • Using FOreign key constraints on tables in database.

    I am a student and a novice in the field of Oracle, PL/SQL and database creation. I created a database consisting of tables and ran into a problem while applying foreign key constraints.
    -- CUST_MSTR
    CREATE TABLE "DBA_BANKSYS"."CUST_MSTR"("CUST_NO" VARCHAR2(10),
    "FNAME" VARCHAR2(25), "MNAME" VARCHAR2(25), "LNAME" VARCHAR2(25),
    "DOB_INC" DATE NOT NULL, "OCCUP" VARCHAR2(25), "PHOTOGRAPH" VARCHAR2(25),
    "SIGNATURE" VARCHAR2(25), "PANCOPY" VARCHAR2(1), "FORM60" VARCHAR2(1));
    (CUST_NO is the primary key)
    -- EMP_MSTR
    CREATE TABLE "DBA_BANKSYS"."EMP_MSTR"("EMP_NO" VARCHAR2(10),
    "BRANCH_NO" VARCHAR2(10), "FNAME" VARCHAR2(25), "MNAME" VARCHAR2(25),
    "LNAME" VARCHAR2(25), "DEPT" VARCHAR2(30), "DESIG" VARCHAR2(30));
    (EMP_NO is primary key )
    --NOMINEE_MSTR
    CREATE TABLE "DBA_BANKSYS"."NOMINEE_MSTR"("NOMINEE_NO" VARCHAR2(10),
    "ACCT_FD_NO" VARCHAR2(10), "NAME" VARCHAR2(75), "DOB" DATE,
    "RELATIONSHIP" VARCHAR2(25));
    (NOMINEE_NO is the primary key)
    --ADDR_DTLS
    CREATE TABLE "DBA_BANKSYS"."ADDR_DTLS"("ADDR_NO" NUMBER(6),
    "CODE_NO" VARCHAR2(10),      "ADDR_TYPE" VARCHAR2(1), "ADDR1" VARCHAR2(50),
    "ADDR2" VARCHAR2(50), "CITY" VARCHAR2(25), "STATE" VARCHAR2(25),
    "PINCODE" VARCHAR2(6));
    ( ADDR_NO is primary key )
    Problem: I want to apply foreign key constraints on the ADDR_DTLS table so that, before a value is inserted into ADDR_DTLS, it is checked that the value in ADDR_DTLS.CODE_NO is present either in CUST_MSTR, EMP_MSTR or NOMINEE_MSTR.
    I applied the foreign key constraints using this syntax
    CREATE TABLE "DBA_BANKSYS"."ADDR_DTLS"("ADDR_NO" NUMBER(6),
    "CODE_NO" VARCHAR2(10),      "ADDR_TYPE" VARCHAR2(1), "ADDR1" VARCHAR2(50),
    "ADDR2" VARCHAR2(50), "CITY" VARCHAR2(25), "STATE" VARCHAR2(25),
    "PINCODE" VARCHAR2(6),
    constraints fk_add foreign key CODE_NO references CUST_MSTR. CODE_NO,
    constraints fk_add1 foreign key CODE_NO references EMP_MSTR. CODE_NO,
    constraints fk_add2 foreign key CODE_NO references NOMINEE_MSTR.CODE_NO);
    (foreign key)
    ADDR_DTLS.CODE_NO ->CUST_MSTR.CUST_NO
    ADDR_DTLS.CODE_NO ->NOMINEE_MSTR.NOMINEE_NO
    ADDR_DTLS.CODE_NO ->BRANCH_MSTR.BRANCH_NO
    ADDR_DTLS.CODE_NO ->EMP_MSTR.EMP_NO
    When I applied the foreign key constraints this way, it gives a foreign key constraint violation error. (I understand that it requires the value of ADDR_DTLS.CODE_NO to be present in all three tables before the row can be inserted. But I want it to insert the value if it is present in any one of the three tables, and otherwise raise an error.)
    Please help me out. I would like to know how to apply the foreign key in this way, and whether there is any other option if a foreign key implementation is not possible.

    If you are on 11g you can use ON DELETE SET NULL:
    CREATE TABLE addr_dtls
    ( addr_no          NUMBER(6)  CONSTRAINT addr_pk PRIMARY KEY
    , addr_cust_no     CONSTRAINT addr_cust_fk    REFERENCES cust_mstr    ON DELETE SET NULL
    , addr_emp_no      CONSTRAINT addr_emp_fk     REFERENCES emp_mstr     ON DELETE SET NULL
    , addr_nominee_no  CONSTRAINT addr_nominee_fk REFERENCES nominee_mstr ON DELETE SET NULL
    , addr_type        VARCHAR2(1)
    , addr1            VARCHAR2(50)
    , addr2            VARCHAR2(50)
    , city             VARCHAR2(25)
    , state            VARCHAR2(25)
    , pincode          VARCHAR2(6) );
    In earlier versions you'll need to code some application logic to do something similar when a parent row is deleted, as otherwise the only options are to delete the dependent rows or raise an error.
    By the way, table names can be up to 30 characters and don't need to end with MSTR or DTLS, so for example CUSTOMERS and ADDRESSES might be more readable than CUST_MSTR and ADDR_DTLS. Also, if the Customer/Employee/Nominee PKs are generated from a sequence, they should be numeric.
    Edited by: William Robertson on Aug 15, 2010 6:47 PM
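    One optional addition to the separate-parent-columns design above (my sketch, not from the original reply): a check constraint enforcing that each address row points at exactly one of the three parents.
    ALTER TABLE addr_dtls ADD CONSTRAINT addr_one_parent_ck CHECK (
      (CASE WHEN addr_cust_no    IS NOT NULL THEN 1 ELSE 0 END) +
      (CASE WHEN addr_emp_no     IS NOT NULL THEN 1 ELSE 0 END) +
      (CASE WHEN addr_nominee_no IS NOT NULL THEN 1 ELSE 0 END) = 1 );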

  • Performance issues when using AQ notification with one consumer

    We have developed a system to load data from a reservation database into a reporting database.
    At a certain point in the process, a message with the identifier of the reservation is enqueued to a multi-consumer queue on the same DB and then propagated to a similar queue on the reporting database.
    This queue (multi-consumer) has AQ notification enabled (with one consumer), which calls the queue_callback procedure that:
    - dequeues the message
    - calls a procedure to load the Resv data into the Reporting schema (through DB link)
    We need each message to be processed ONLY ONCE, hence the use of a single subscriber (consumer).
    When load testing our application with multiple threads, however, the number of records created in the reservation database becomes quite large, meaning a large number of messages going through the first queue and propagating (very quickly) to the second queue.
    Messages are not processed fast enough by the 2nd queue (notification), which falls behind.
    I would like to keep using notification, as processing is automatic (no need to set up dbms_job to dequeue, etc.), or something similar.
    So, having read articles, I feel I need to use either:
    - multiple subscribers to the 2nd queue, where each message is processed by only one subscriber (using a rule: say 10 subscribers S0 to S9, with Si processing messages whose identifier ends in i);
    the problem with this is that there would be an attempt to process the message for each subscriber, wouldn't there?
    - or a different dequeuing method where many processes are used in parallel, with each message processed by only one of them.
    Does anyone have experience and recommendations to share on how to improve the throughput of messages?
    Rgds
    Philippe

    Hi, thanks for your interest
    I am working with 10.2.0.4
    My objective is to load a subset of the reservation data from the tables in the first DB (Reservation-OLTP-150 tables)
    to the tables in the second DB (Reporting - about 15 tables at the moment), without affecting performance on the Reservation DB.
    Thus the choice of Advanced Queuing (asynchronous).
    - I have 2 similar queues in 2 separate databases (Reservation and Reporting).
    The message payload is the same on both (the identifier of the reservation)
    When a certain event happens on the RESERVATION database, I enqueue a message on the first database
    Propagation moves the same message data to the second queue.
    And there I have notification sending the message to a single consumer, which:
    - calls dequeue
    - and the data load procedure, which loads this reservation.
    My performance difficulties start at the notification but I will post all the relevant code before notification, in case it has an impact.
    - The 2nd queue was created with a script containing the following (a similar script was used for the first queue):
    dbms_aqadm.create_queue_table( queue_table => '&&CQT_QUEUE_TABLE_NAME',
    queue_payload_type => 'RESV_DETAIL',
    comment => 'Report queue table',
    multiple_consumers => TRUE,
    message_grouping => DBMS_AQADM.NONE,
    compatible => '10.0.0',
    sort_list => 'ENQ_TIME',
    primary_instance => '0',
    secondary_instance => '0');
    dbms_aqadm.create_queue (
    queue_name => '&&CRQ_QUEUE_NAME',
    queue_table => '&&CRQ_QUEUE_TABLE_NAME',
    max_retries => 5);
    - ENQUEUING on the first queue (snippet of code)
    o_resv_detail DLEX_AQ_ADMIN.RESV_DETAIL;
    o_resv_detail:= DLEX_AQ_ADMIN.RESV_DETAIL(resvcode, resvhistorysequence);
    DLEX_RESVEVENT_AQ.enqueue_one_message (o_resv_detail);
    where DLEX_RESVEVENT_AQ.enqueue_one_message is :
    PROCEDURE enqueue_one_message (msg IN RESV_DETAIL)
    IS
      enqopt     DBMS_AQ.enqueue_options_t;
      mprop      DBMS_AQ.message_properties_t;
      enq_msgid  dlex_resvevent_aq_admin.msgid_t;
    BEGIN
      DBMS_AQ.enqueue (queue_name         => dlex_resvevent_aq_admin.c_resvevent_queue,
                       enqueue_options    => enqopt,
                       message_properties => mprop,
                       payload            => msg,
                       msgid              => enq_msgid);
    END;
    - PROPAGATION: The message is dequeued from 1st queue and enqueued automatically by AQ propagation into this 2nd queue.
    (using a call to the following 'wrapper' procedure)
    PROCEDURE schedule_propagate (
      src_queue_name IN VARCHAR2,
      destination    IN VARCHAR2 DEFAULT NULL)
    IS
      sprocname dlex_types.procname_t := 'dlex_resvevent_aq_admin.schedule_propagate';
    BEGIN
      DBMS_AQADM.SCHEDULE_PROPAGATION(queue_name  => src_queue_name,
                                      destination => destination,
                                      latency     => 10);
    EXCEPTION
      WHEN OTHERS
      THEN
        DBMS_OUTPUT.put_line (SQLERRM || ' occurred in ' || sprocname);
    END schedule_propagate;
    - For 'NOTIFICATION': ONE subscriber was created using:
    EXECUTE DLEX_REPORT_AQ_ADMIN.add_subscriber('&&STQ_QUEUE_NAME','&&STQ_SUBSCRIBER',NULL,NULL, NULL);
    this is a wrapper procedure that uses:
    DBMS_AQADM.add_subscriber (queue_name => p_queue_name, subscriber => subscriber_agent );
    Then notification is registered with:
    EXECUTE dlex_report_aq_admin.register_notification_action ('&&AQ_SCHEMA','&&REPORT_QUEUE_NAME','&&REPORT_QUEUE_SUBSCRIBER');
    - job_queue_processes is set to 10
    - The callback procedure is as follows
    CREATE OR REPLACE PROCEDURE DLEX_AQ_ADMIN.queue_callback (
      context  RAW,
      reginfo  SYS.AQ$_REG_INFO,
      descr    SYS.AQ$_DESCRIPTOR,
      payload  RAW,
      payloadl NUMBER)
    IS
      s_procname           CONSTANT VARCHAR2 (40) := UPPER ('queue_callback');
      r_dequeue_options    DBMS_AQ.DEQUEUE_OPTIONS_T;
      r_message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
      v_message_handle     RAW(16);
      o_payload            RESV_DETAIL;
    BEGIN
      r_dequeue_options.msgid := descr.msg_id;
      r_dequeue_options.consumer_name := descr.consumer_name;
      DBMS_AQ.DEQUEUE(
        queue_name         => descr.queue_name,
        dequeue_options    => r_dequeue_options,
        message_properties => r_message_properties,
        payload            => o_payload,
        msgid              => v_message_handle);
      -- Call procedure to load data from the reservation database to the Reporting DB through the DB link
      dlex_report.dlex_data_load.load_reservation
        ( in_resvcode            => o_payload.resv_code,
          in_resvHistorySequence => o_payload.resv_history_sequence );
      COMMIT;
    END queue_callback;
    - I noticed that messages are not taken out of the 2nd queue,
    I guess I would need to use the REMOVE option to delete messages from the queue?
    Would this be a large source of performance degradation after just a few thousand messages?
    - The data load through the DB link may be a little intensive, but I feel that doing things in parallel would help.
    I would like to understand whether Oracle has a way of dequeuing in parallel (with or without the use of notification).
    In the case of multiple subscribers with notification, does the 'job_queue_processes' value have an impact on the degree of parallelism? If not, what setting does?
    And is there a way supplied by Oracle to set the queue to notify only one subscriber per message?
    Your advice would be very much appreciated
    Philippe
    Edited by: user528100 on Feb 23, 2009 8:14 AM
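    One direction worth prototyping for the "each message processed only once, but in parallel" requirement (a hedged sketch, not something confirmed in this thread): register several subscribers whose rules partition messages by the last digit of the reservation code, so each message matches exactly one subscriber. The rule assumes the payload attribute can be referenced as tab.user_data.resv_code; the queue and subscriber names are placeholders.
    BEGIN
      FOR i IN 0 .. 9 LOOP
        DBMS_AQADM.add_subscriber(
          queue_name => 'REPORT_QUEUE',
          subscriber => SYS.AQ$_AGENT('SUBSCRIBER_' || i, NULL, NULL),
          rule       => 'SUBSTR(tab.user_data.resv_code, -1) = ''' || i || '''');
      END LOOP;
    END;
    /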

  • Sy-tabix when using secondary key

    Hi,
    I have an internal table with records that contain a field with the line index of another table entry that they depend on.
    I can process this table recursively, by passing the parent index inside and using a
    LOOP AT ... USING KEY secondary_key WHERE index = iv_index.
    Unfortunately, sy-tabix is not correct afterwards; it seems to contain the position in the secondary key index rather than the position in the internal table.
    Is there a way how I could find out the current table index, when using a secondary key to read a record?
    Regards,
    Bruno

    I rewrote it to include the row index as well, not only the dependency; I was thinking too generically.

  • Subquery in IF statement in trigger, without using foreign keys

    Hello,
    I'm investigating ways of writing a subquery in an IF statement, which is placed inside a trigger.
    I want to write something like IF (:new.jazz NOT IN (SELECT goldies FROM T WHERE ...)) etc. I don't know whether the fact that the IF is in a trigger adds additional restrictions. (Does it?)
    So far I found the solution described here: SubQuery Comparison in If Statement, which I find a bit tacky; I could have 'cooleststarinthegalaxy' instead of 1, and it seems you still need to do some extra lifting, even if light.
    I also read about the possibility of using MERGE, which I'm currently researching.
    Is there any other way?
    Thanks
    Edited by: BluShadow on 14-Nov-2012 13:37
    fixed link
    Edited by me: the question is how (if possible) to do this without a foreign key.
    Edited by: questioningq12 on Nov 14, 2012 6:11 AM
    Edited by: questioningq12 on Nov 14, 2012 6:13 AM

    Hi,
    questioningq12 wrote:
    Say I have tables A(namea varchar(10)), B(nameb varchar(10)), and B contains tuples ('1stname','2ndname').
    I wrote a trigger before insertion, for each row, on table A. For a tuple t to be inserted, it should check whether t.namea is in the set of values nameb from B.
    E.g., INSERT INTO A VALUES ('1stname') should work. But INSERT INTO A VALUES ('3rdname') should fail.
    You can use a foreign key constraint for that.
    If the tables already exist, and b.nameb is declared as UNIQUE (or PRIMARY KEY), then you can say:
    ALTER TABLE  a
        ADD CONSTRAINT     a_namea_fk
        FOREIGN KEY  (namea)
        REFERENCES b (nameb)
    ;
    If you had a situation where you really needed to query a table in PL/SQL, and you weren't sure if the query would find anything, you could put the query in its own BEGIN ... EXCEPTION block, and test for NO_DATA_FOUND.
    If you're just checking to see if a row exists or not, you can always write a query that is guaranteed to return exactly 1 row, like this:
    BEGIN
        SELECT  COUNT (*)
        INTO    x
        FROM    b
        WHERE   nameb = :NEW.namea
        AND         ROWNUM  = 1;
        IF x = 0
        THEN  
            ...        -- print msgs, raise exceptions etc
        END IF;
    END;
    Edited by: Frank Kulash on Nov 14, 2012 9:22 AM
    Added example
