Problem in Foreign key creation

Hi,
I have a requirement to create a foreign key in a Z-table that points to the SPRAS field of table T002. This requires that the domain of the custom field be the same as SPRAS, but domain SPRAS has a conversion exit, and in my case I need a different conversion exit for the customer field.
So can you please suggest how I can achieve both of the following:
1. Creating a PK-FK relationship with the table field T002-SPRAS
2. Creating a conversion exit at the domain level of the custom field
Thanks,
Samrat

Hi,
  I am a bit confused by your requirement.
Let's look at why we have conversion exits.
Conversion exit 1
   Example: input 'EN' - the conversion exit changes it to 'E' and stores 'E' in the DB.
Conversion exit 2
   Example: input 'EL' - the conversion exit changes it to 'E' and stores 'E' in the DB.
So the two conversion exits perform two separate tasks, right?
Now let's assume that your Z-field uses conversion exit 2 and T002 uses conversion exit 1, and you create a foreign key from the Z-field to SPRAS of T002.
Input 'EL' - the conversion exit converts it to 'E', and it then refers to 'E' in T002 (but 'E' in T002 actually stands for 'EN').
Isn't that wrong?
I hope I did not confuse you.
Cheers,
KD

Similar Messages

  • EMIGALL : problem of foreign key

    Hi everybody
    I am working on an EMIGALL migration and I have run into a problem with a foreign key.
    My field VKONT is defined (in the custom table ZR006SAT) as a foreign key with check table FKKVK (Contract Account Header). That means an entry can be inserted into ZR006SAT only if the VKONT value exists in table FKKVK.
    But during the migration of my migration object ZR006SAT (same name as the table), I get no error when I put an arbitrary value in the VKONT field of the input file. EMIGALL does not check the foreign key constraint and I don't understand why.
    However,
    1. when I try to insert an entry into ZR006SAT manually (via SE11), SAP forces me to enter a valid value (one that exists in FKKVK) for the VKONT field;
    2. for standard migration objects there is no problem - the foreign key check works fine.
    I have been stuck on this problem for 3 days.
    Please help me

    Hi
    You can find detailed documentation of EMIGALL in SAP itself; use transaction EQ81 to display it. It provides all the concepts and procedures for working with EMIGALL. I will also prepare a document and send it to you later. Meanwhile, just for your information, here are some points about EMIGALL:
    1. It migrates data business object by business object
    2. It uses the direct input technique
    3. It has more than 100 IS-U objects
    The implementation steps go like this:
    1) You have to create a user specifically for migration, with all the authorizations related to the migration workbench, BASIS, and IS-U.
    2) You have to create your own company in EMIGALL. There is a default company called SAP.
    3) Company SAP contains all the business objects.
    4) You have to figure out which business objects you need and then copy those business objects from the standard company SAP to your company.
    5) Each object contains more than one structure, and each structure can contain more than one field. The relation goes like this:
    Object ---> Structure ---> Field
    6) You have to define field rules for each required field of the object. You have to mark fields you don't need as "Not required".
    7) After the field rules for a given object are set, you have to generate the load report, i.e. the actual direct input program that will migrate the data. This program is generated on the basis of the field rules you set.
    8) After the load report is generated, you have to prepare an input file (import file) for the migration. The import file must follow the structure provided by SAP and must be in binary format. SAP provides the structure of the file according to your configuration. You have to write your own data conversion program (in any language) for this task.
    9) You take the import file as input and migrate the data using the generated load program.
    10) Finally, you can check the migration statistics and the error log.
    Regards
    Sreeni

  • Foreign key creation is very slow

    Hello,
    We are about to migrate a database from 9iR2 to 10g by doing an export/import.
    One of the last steps of the migration, after the import, is to create the constraints and indexes. The foreign key creation alone takes about 7 hours, which is something we cannot afford. How can we speed up this part of the process?
    Thanks in advance,
    Eva

    Look at the statement (using Enterprise Manager, SQL*Plus, SQL trace or whatever) while the foreign keys are being enabled/created. Basically, for the following tables:
    SQL> create table big as select * from dba_source;
    Table created.
    SQL> create table big1 as select * from dba_source;
    Table created.
    SQL> alter table big add constraint big_pk primary key (owner, name, type, line, text);
    Table altered.
    and the following foreign key:
    SQL> alter table big1 add constraint big1_big_fk
      2  foreign key (owner, name, type, line, text)
      3  references big (owner, name, type, line, text);
    the statement is:
    select /*+ all_rows ordered */ A.rowid, :1, :2, :3
    from
    "GINTS"."BIG1" A , "GINTS"."BIG" B where( "A"."OWNER" is not null and
      "A"."NAME" is not null and "A"."TYPE" is not null and "A"."LINE" is not
      null and "A"."TEXT" is not null) and( "B"."OWNER" (+)= "A"."OWNER" and
      "B"."NAME" (+)= "A"."NAME" and "B"."TYPE" (+)= "A"."TYPE" and "B"."LINE" (+)
      = "A"."LINE" and "B"."TEXT" (+)= "A"."TEXT") and( "B"."OWNER" is null or
      "B"."NAME" is null or "B"."TYPE" is null or "B"."LINE" is null or
      "B"."TEXT" is null)As usually such statement can be executed using various execution plans. And as usual some are better than other for example I could manage 2 variants:
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          3          0           0
    Fetch        1     59.48      74.29      77125   10217945          0           0
    total        3     59.48      74.30      77125   10217948          0           0
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          0  CONCATENATION  (cr=10217945 pr=77125 pw=0 time=74294598 us)
          0   FILTER  (cr=2043589 pr=15557 pw=0 time=16167973 us)
    1014081    NESTED LOOPS OUTER (cr=2043589 pr=15557 pw=0 time=16234254 us)
    1014081     TABLE ACCESS FULL BIG1 (cr=15425 pr=15265 pw=0 time=4065256 us)
    1014081     INDEX UNIQUE SCAN BIG_PK (cr=2028164 pr=292 pw=0 time=10314886 us)(object id 125287)
          0   FILTER  (cr=2043589 pr=15331 pw=0 time=14522620 us)
    1014081    NESTED LOOPS OUTER (cr=2043589 pr=15331 pw=0 time=16225336 us)
    1014081     TABLE ACCESS FULL BIG1 (cr=15425 pr=15331 pw=0 time=4056358 us)
    1014081     INDEX UNIQUE SCAN BIG_PK (cr=2028164 pr=0 pw=0 time=8736590 us)(object id 125287)
          0   FILTER  (cr=2043589 pr=15413 pw=0 time=15081218 us)
    1014081    NESTED LOOPS OUTER (cr=2043589 pr=15413 pw=0 time=16257085 us)
    1014081     TABLE ACCESS FULL BIG1 (cr=15425 pr=15413 pw=0 time=4087642 us)
    1014081     INDEX UNIQUE SCAN BIG_PK (cr=2028164 pr=0 pw=0 time=9114281 us)(object id 125287)
          0   FILTER  (cr=2043589 pr=15412 pw=0 time=14470176 us)
    1014081    NESTED LOOPS OUTER (cr=2043589 pr=15412 pw=0 time=16260121 us)
    1014081     TABLE ACCESS FULL BIG1 (cr=15425 pr=15412 pw=0 time=4091125 us)
    1014081     INDEX UNIQUE SCAN BIG_PK (cr=2028164 pr=0 pw=0 time=8833532 us)(object id 125287)
          0   FILTER  (cr=2043589 pr=15412 pw=0 time=14052583 us)
    1014081    NESTED LOOPS OUTER (cr=2043589 pr=15412 pw=0 time=15224595 us)
    1014081     TABLE ACCESS FULL BIG1 (cr=15425 pr=15412 pw=0 time=4069696 us)
    1014081     INDEX UNIQUE SCAN BIG_PK (cr=2028164 pr=0 pw=0 time=8607611 us)(object id 125287)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                       4906        0.12         13.62
      db file sequential read                       362        0.11          1.78
    and
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.01       0.00          0          0          0           0
    Execute      1      0.00       0.00          0          0          0           0
    Fetch        1      4.57      23.10      22563      30850          0           0
    total        3      4.59      23.11      22563      30850          0           0
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: SYS   (recursive depth: 1)
    Rows     Row Source Operation
          0  FILTER  (cr=30850 pr=22563 pw=0 time=23106976 us)
    1014081   HASH JOIN OUTER (cr=30850 pr=22563 pw=0 time=20989057 us)
    1014081    TABLE ACCESS FULL BIG1 (cr=15425 pr=10342 pw=0 time=6122826 us)
    1014081    TABLE ACCESS FULL BIG (cr=15425 pr=12221 pw=0 time=4079771 us)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      db file scattered read                       4114        0.21         13.20
      db file sequential read                      1849        0.22          5.55
    Obviously the second one is about 2 times better than the first one.
    I managed to get it using:
    1) dbms_stats to compute stats on these table so that Oracle understand that these are really large tables
    2) altering session to use manual workarea size policy and give more resources for hash joins and sorts:
    SQL> alter session set workarea_size_policy= manual;
    Session altered.
    Elapsed: 00:00:00.01
    SQL> alter session set sort_area_size = 1000000000;
    Session altered.
    Elapsed: 00:00:00.01
    SQL> alter session set hash_area_size = 1000000000;
    So the trick is to understand the bottleneck and try to do something about it. In my case I cannot do much more, because everything now depends on the disks and I cannot make my PC much faster :)))
    Another option is to create the FKs either disabled and enable them later, or create them enabled but not validated. This is almost instantaneous, as you can see in the following code snippet. Of course that might have implications later, especially if you create them disabled :)
    SQL> alter table big1 add constraint big1_big_fk
      2  foreign key (owner, name, type, line, text)
      3  references big (owner, name, type, line, text)
      4  disable;
    Table altered.
    Elapsed: 00:00:00.03
    SQL> alter table big1 drop constraint big1_big_fk;
    Table altered.
    Elapsed: 00:00:00.01
    SQL> alter table big1 add constraint big1_big_fk
      2  foreign key (owner, name, type, line, text)
      3  references big (owner, name, type, line, text)
      4  enable novalidate;
    Table altered.
    Elapsed: 00:00:00.03
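    If you go with ENABLE NOVALIDATE, the existing rows can still be checked later, for example in a quieter maintenance window; a minimal sketch against the same demo tables would be:
    SQL> alter table big1 modify constraint big1_big_fk validate;
    The validation then runs the same full referential check, just outside the rollout window.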
    SQL>
    Gints Plivna
    http://www.gplivna.eu

  • Problem with Foreign Key check in an editable ALV

    Hi,
    I've implemented an editable ALV.
    The underlying context node references a dictionary structure, and the foreign keys are defined within that structure.
    In my example, I have two editable columns with different foreign key checks.
    My problem is that the foreign key check works for only one column.
    So if I enter incorrect values in both columns, only a message for the first column is raised,
    but not for the second column!
    Only if I enter two errors in one(!) column (in two rows) do I get two error messages.
    Examples:
    does not work:
    COL1 | COL2
    err1  | err2   -> only one error message is displayed (for err1)
    It works in this case:
    COL1 | COL2
    err1  |  ok
    err2  |  ok
    => two messages for err1 and err2
    and in this case
    COL1 | COL2
    err1  |  ok
    ok     |  err2
    => two messages for err1 and err2
    I've found nothing in OSS. My system is 7.00 SP18, so OSS note 1153492 is already implemented.
    Am I doing something wrong, or is this an error in SAP?
    Thanks,
    Andreas

    Hi Lekha,
    thank you very much for your support!
    I try to give you an example.
    In general, you need an editable ALV with at least two columns.
    The node for the ALV table in the component controller has to be assigned to a dictionary structure!
    That is very important, otherwise the foreign key check will not work!
    And the two fields in this dictionary structure have to be assigned to a "check table".
    Prerequisite: NW 7.00 SP16 or higher! See OSS note 1153492.
    Maybe an easy way to reproduce it is using the WD component WDT_FLIGHTLIST_EDIT.
    So copy this component to a Z-component.
    Then create a dictionary structure for the node "NODE_FLIGHTTAB" with the same 10 fields as the node attributes.
    In your new dictionary structure, assign the check tables SCARR and SPFLI to the fields CARRID and CONNID (see table SFLIGHT).
    Then make both columns (CARRID and CONNID) editable.
    This has to be done in the "RESULTVIEW" in the method "INIT".
    You can use the following code:
      lr_column = lr_column_settings->get_column( 'CARRID' ).
      create object lr_input_field
        exporting
          value_fieldname = 'CARRID'.
      lr_column->set_cell_editor( lr_input_field ). 
      lr_column = lr_column_settings->get_column( 'CONNID' ).
      create object lr_input_field
        exporting
          value_fieldname = 'CONNID'.
      lr_column->set_cell_editor( lr_input_field ).
    Copy this code below this code:
      lr_column_settings ?= l_value.
      lr_column = lr_column_settings->get_column( 'PRICE' ).
      create object lr_input_field
        exporting
          value_fieldname = 'PRICE'.
      lr_column->set_cell_editor( lr_input_field ).
    Then just activate everything, create a Web Dynpro application, and you are ready to test it.
    To test it, do the following:
    Append/Insert an empty row.
    Enter a CARRID and CONNID that does not exist and press ENTER.
    My result: only one error message is displayed for the wrong CARRID, but no error message for CONNID!
    Insert a new row.
    Enter an invalid value in the first row in column CARRID and in the second(!) row in column CONNID.
    Then you get two(!) error messages. That is the behavior I expect.
    So thank you very much in advance for your help!
    Regards,
    Andreas

  • Constraint Problem in Foreign key --- Very Urgent - Help Needed

    Hello All,
    There are 2 tables and their associated fields
    EmpProj
    Emp_id(pk)
    Proj_id (pk)
    eff_from_dt(pk)
    ProjDesc
    Proj_id(fk)
    eff_from_dt(fk)
    Proj_name
    I have created the 2 tables as shown below:
    CREATE TABLE EMPPROJ
    (EMP_ID NUMBER,
    PROJ_ID NUMBER,
    EFF_FROM_DT DATE,
    PRIMARY KEY(EMP_ID, PROJ_ID, EFF_FROM_DT));
    CREATE TABLE PROJDESC
    (PROJ_ID NUMBER,
    PROJ_NAME VARCHAR2(20),
    EFF_FROM_DT DATE,
    CONSTRAINT S2 FOREIGN KEY(PROJ_ID, EFF_FROM_DT) REFERENCES EMPPROJ(PROJ_ID, EFF_FROM_DT));
    Now whenever I try to create the foreign key table, it gives an error message like "No matching parent key found."
    The columns in the foreign key must match the referenced key in number, datatype, and size; you cannot create a foreign key on only some columns of a composite primary key.
    What do I need to do so that a foreign key can reference only two of the columns in the primary key?
    Please suggest a way to resolve this problem.
    Thanks in advance.
    Captain

    My question is:
    by the rules of the RDBMS, a foreign key cannot reference only part of a primary key.
    How can I achieve this by some other means?
    Please suggest any method.
    Thanks in advance.
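    A foreign key can only reference a primary key or a unique constraint, so one common workaround - assuming the combination (PROJ_ID, EFF_FROM_DT) really is unique in EMPPROJ, which it will not be if several employees share the same project - is to give those two columns a unique constraint of their own and point the foreign key at that (a sketch only):
    ALTER TABLE EMPPROJ
      ADD CONSTRAINT EMPPROJ_PROJ_UK UNIQUE (PROJ_ID, EFF_FROM_DT);
    ALTER TABLE PROJDESC
      ADD CONSTRAINT S2 FOREIGN KEY (PROJ_ID, EFF_FROM_DT)
      REFERENCES EMPPROJ (PROJ_ID, EFF_FROM_DT);
    If that combination is not unique in EMPPROJ, the relationship usually has to be modelled the other way round, with PROJDESC (or a separate project table) acting as the parent.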

  • Problem with Foreign Key relationships in SAP R/3 4.7

    Hi Experts,
    I am trying to create a foreign key relationship between 2 transparent tables in SAP R/3 4.7
    Table 1:ZAAVNDR (MANDT (pk), VENDORNO (pk), NAME, REGION, COUNTRY (fk)) Foreign Key Table
    Table 2: ZAAVNDRREF(MANDT(pk), COUNTRY (pk)) ---Check table
    I have added a few valid countries to the check table, but when I add records with invalid countries to the foreign key table, these records are not rejected and still go into the table successfully.
    Could any one please help in this.
    Thanks in anticipation.
    -Amit

    Hi Sandra,
    Many thanks for your response and for your time.
    Now I have done exactly the same thing, but the result is still the same.
    I have created two new tables as below:
    ZAAVREF (Check table)
    MANDT (PK)
    COUNTRY (PK) Domain:ZAACOUNT (CHAR 10)
    ZAAV1 (Foreign key table)
    MANDT (PK)
    COUNTRY (PK) Domain:ZAACOUNT (CHAR 10)
    Then I created the FK on COUNTRY of the foreign key table ZAAV1, and then in SE16 (for table ZAAVREF) -> Create Entries -> entered values for Country only -> Save... records with valid Country values were saved.
    After that, in SE16 (for table ZAAV1) -> Create Entries -> entered an invalid country -> Save -> the record was still written to the database successfully...
    Could you please let me know where I am going wrong?
    I am using SAP R/3 4.7 and creating the tables via Tools -> ABAP Workbench -> Development -> ABAP Dictionary.

  • Problem with foreign key in entity bean in WSED - please help!

    hi all,
    I am very new to EJB. I am creating a container-managed entity bean in WSED. The steps I have followed are:
    i) created 2 tables in the database: PARENT with fields ID (primary key) and NAME, and CHILD with fields ID1 (foreign key) and NAME.
    ii) created 2 entity beans: the first is Parent with fields id (key field) and name (getter & setter methods promoted to the local interface); the second is Child
    (with Parent chosen as the bean supertype) with fields id1 (getter & setter promoted to the local interface) and name (getter & setter promoted to the local interface).
    iii) generated the EJB-to-RDB mapping (chose create new backend folder -> meet in the middle -> use existing connection -> chose the tables PARENT and CHILD in the database -> match by name -> finish).
    Now I am getting an error in Map.mapxmi: "The table PARENT does not have a discriminator column", and a warning: "A primary key does not exist for table: CHILD in file: platform:/resource/NewFK/ejbModule/META-INF/backends/DB2UDBNT_V8_1/Map.mapxmi.     Map.mapxmi     NewFK/ejbModule/META-INF/backends/DB2UDBNT_V8_1     L/NewFK/ejbModule/META-INF/backends/DB2UDBNT_V8_1/Map.mapxmi"


  • Problem with foreign key in target

    I have 2 interfaces with the same source; one of the targets is a table with a foreign key to the other table (in the other interface).
    For example, table1 and table2 - table2 has a column FK_table1.
    What do I put in the interface mapping for this field?
    Thanks

    If I have understood it correctly:
    Interface 1 - SrcTable -> Table1
    Interface 2 - SrcTable -> Table2 (FK_Table1)
    In this case, you will execute the interfaces in order: Interface 1 and then Interface 2.
    In Interface 2, on the source datastore side, drag and drop Table1 and create a join between SrcTable and Table1. Then, in the FK column of Table2, map the corresponding column from Table1.
    Hope that helps
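    Conceptually, the mapping generated for Interface 2 then behaves like the following lookup (table and column names here are purely illustrative):
    SELECT s.some_col, t1.table1_id AS fk_table1
    FROM   src_table s
    JOIN   table1 t1
           ON t1.natural_key = s.natural_key;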

  • In foreign key creation

    Hi,
    I created a foreign key for a table, but the foreign key table accepts all values, not only the contents of the check table. Why?

    Creating Foreign Keys - Procedure
    In the field maintenance screen of the table, select the check field and choose the key symbol.
    If the domain of the check field has a value table, you can have the system create a proposal with the value table as check table. In this case a proposal is also made for the field assignment in the foreign key.
    If the domain does not have a value table, or if you reject the proposal, the screen for foreign key maintenance appears without proposals. In this case, enter the check table and save your entries. The check table must have a key field to which the domain of the check field is assigned.
    You can then let the system make a proposal for assigning the foreign key fields to the key fields of the check table. The system attempts to assign the key fields of the check table to fields of the table with the same domain. If you do not want a proposal, the key fields of the check table are listed and you must assign them to suitable fields of the foreign key table.
    Enter an explanatory short text in the field Short text.
    The short text provides technical documentation of the meaning of the foreign key.
    Choose Copy. The foreign key is saved and you return to the maintenance screen of the table.

  • Urgent explanation required about foreign keys

    We are doing a data migration project and we receive data from different regions in flat files, but we have a problem like the one below:
    I am facing a problem declaring unique constraints: in some files the column should be of NUMBER data type, while the same file from another region has it as VARCHAR2, yet the two columns are unique - they do not contain any duplicate values.
    Can I make this column VARCHAR2?
    I ask because if I make it NUMBER, it cannot accept the VARCHAR2 data.
    Any suggestions...
    Regards,
    sh

    Not really sure I understand your question. If you have 2 data sources and 2 different data types for the same kind of data, then you would want to do a TO_NUMBER on the character representation of the data and make sure you strip out any non-printing characters like tabs or spaces.
    If that's not your problem, please try to be a little clearer. Your subject line mentions an urgent problem about foreign keys, but I find nothing in the body of your question to support either the urgency or the foreign keys.
    Cheers,
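    As a rough sketch of that kind of normalisation (the table and column names here are made up): load both feeds into a VARCHAR2 staging column and convert it while trimming stray blanks before relying on uniqueness:
    -- hypothetical staging table for the flat-file loads
    CREATE TABLE stg_region_feed (cust_ref VARCHAR2(30));
    -- character data converted to a number, leading/trailing blanks removed
    SELECT TO_NUMBER(TRIM(cust_ref)) AS cust_ref_n
    FROM   stg_region_feed;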

  • Caching problem w/ primary-foreign key mapping

    I have seen this a couple of times now. It is not consistent enough to
    create a simple reproducible test case, so I will have to describe it to you
    with an example and hope you can track it down. It only occurs when caching
    is enabled.
    Here are the classes:
    class C1 { int id; C2 c2; }
    class C2 { int id; C1 c1; }
    Each class uses application identity using static nested Id classes: C1.Id
    and C2.Id. What is unusual is that the same value is used for both
    instances:
    int id = nextId();
    C1 c1 = new C1(id);
    C2 c2 = new C2(id);
    c1.c2 = c2;
    c2.c1 = c1;
    This all works fine using optimistic transactions with caching disabled.
    Although the integer values are the same, the oids are unique because each
    class defines its own unique oid class.
    Here is the schema and mapping (this works with caching disabled but fails
    with caching enabled):
    table t1: column id integer, column revision integer, primary key (id)
    table t2: column id integer, column revision integer, primary key (id)
    <jdo>
    <package name="test">
    <class name="C1" objectid-class="C1$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t1"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c2">
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    <class name="C2" objectid-class="C2$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t2"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c1">
    <extension vendor-name="kodo" key="dependent" value="true"/>
    <extension vendor-name="kodo" key="inverse-owner" value="c2"/>
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="table" value="t1"/>
    <extension vendor-name="kodo" key="ref-column.id" value="id"/>
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    </package>
    </jdo>
    Because the ids are known to be the same, the primary key values are also
    used as foreign key values. Accessing C2.c1 is always non-null when caching
    is disabled. With caching enabled, C2.c1 is usually non-null but sometimes
    null. When it is null we get warnings about dangling references to deleted
    instances with id values of 0 and other similar warnings.
    The workaround is to add a redundant column with the same value. For some
    reason this works around the caching problem (this is unnecessary with
    caching disabled):
    table t1: column id integer, column id2 integer, column revision integer,
    primary key (id), unique index (id2)
    table t2: column id integer, column revision integer, primary key (id)
    <jdo>
    <package name="test">
    <class name="C1" objectid-class="C1$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t1"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c2">
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="column.id" value="id2"/>
    </extension>
    </field>
    </class>
    <class name="C2" objectid-class="C2$Id">
    <extension vendor-name="kodo" key="jdbc-class-map" value="base">
    <extension vendor-name="kodo" key="table" value="t2"/>
    </extension>
    <extension vendor-name="kodo" key="jdbc-version-ind"
    value="version-number">
    <extension vendor-name="kodo" key="column" value="revision"/>
    </extension>
    <field name="id" primary-key="true">
    <extension vendor-name="kodo" key="jdbc-field-map" value="value">
    <extension vendor-name="kodo" key="column" value="id"/>
    </extension>
    </field>
    <field name="c1">
    <extension vendor-name="kodo" key="dependent" value="true"/>
    <extension vendor-name="kodo" key="inverse-owner" value="c2"/>
    <extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
    <extension vendor-name="kodo" key="table" value="t1"/>
    <extension vendor-name="kodo" key="ref-column.id" value="id2"/>
    <extension vendor-name="kodo" key="column.id" value="id"/>
    </extension>
    </field>
    </class>
    </package>
    </jdo>
    Needless to say, the extra column adds a lot of overhead, including the
    addition of a second unique index, for no value other than working around
    the caching defect.

    Tom-
    The first thing that I think of whenever I see a problem like this is
    that the equals() and hashCode() methods of your application identity
    classes are not correct. Can you check them to ensure that they are
    written in accordance to the guidelines at:
    http://docs.solarmetric.com/manual.html#jdo_overview_pc_identity_application
    If that doesn't help address the problem, can you post the code for your
    application identity classes so we can double-check, and we will try to
    determine what might be causing the problem.
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

  • Legacy Mapping Problem: Compound PK is Foreign Key

    My job is to map beans using JPA onto a legacy database schema that must not be modified. I am stuck on mapping the relation between two tables.
    The schema is:
    CREATE TABLE A (
         a1 INTEGER NOT NULL,
         a2 INTEGER NOT NULL,
         PRIMARY KEY (a1, a2)
    );
    CREATE TABLE B (
         b1 INTEGER NOT NULL,
         b2 INTEGER NOT NULL,
         b3 INTEGER NOT NULL,
         FOREIGN KEY f (b1, b2) REFERENCES A (a1, a2) ON UPDATE CASCADE ON DELETE CASCADE,
         PRIMARY KEY (b1, b2, b3)
    );
    As you can see, both tables have compound primary keys, and the primary key of table B contains a foreign key to table A. This is typical in our DB because B in fact acts like an "inner table" (in the sense of "inner class") of A.
    So far I defined both classes and provided a primary key, but the question now is, how to provide the relation between both? The relation shall be unidirectional, so that I can query all Bs belonging to one A by "myA.getBs(): Collection".
    The code I have written so far is:
    @Entity
    public class A {
         @EmbeddedId
         private APK primaryKey;
         @OneToMany
         private Collection<B> characteristics;
    }
    @Embeddable
    public class APK {
         private String a1;
         private String a2;
    }
    @Entity
    public class B {
         @EmbeddedId
         private BPK primaryKey;
    }
    @Embeddable
    public class BPK {
         private String b1;
         private String b2;
         private String b3;
    }
    Unfortunately that does not work, because the JPA provider (here: TopLink Essentials) tries to map the relationship through a third table named "A_B" that does not actually exist.
    So how do I tell TopLink to find the mapping information in table B (just as the foreign key f does)?
    Please help, I am going nuts!

    OK,
    so I found out the real problem.
    The problem is that my CustomerHistory has a Text field, as in an MS SQL Server TEXT field.
    When I changed it to a VARCHAR, it started working.
    The strangest thing is that Customer also has a TEXT field (mapped to a String) and I can do finds without any problems.
    So my question now is: how can I work with TEXT in CMPs?
    Thanks

  • Problem inserting an id into a foreign key - PHP/MySQL

    Hello all,
    I'm having trouble understanding the process and there is no tutorial about my problem anywhere.
    I have two tables:
    Table 1 (member) with id, name, phone, etc.
    Table 2 (post) with add_id, title, description, price, member_id
    I have a form to post the ad, and I need to insert the id from table 1 into the member_id column of table 2.
    First I built the recordset to get the user id:
    $colname_rsMember = "-1";
    if (isset($_SESSION['MM_Username'])) {
      $colname_rsMember = $_SESSION['MM_Username'];
    }
    mysql_select_db($database_connect, $connect);
    $query_rsMember = sprintf("SELECT * FROM member WHERE username='".$_SESSION['MM_Username']."'")or die(mysql_error());
    $rsMember = mysql_query($query_rsMember, $connect) or die(mysql_error());
    $row_rsMember = mysql_fetch_assoc($rsMember);
    $totalRows_rsMember = mysql_num_rows($rsMember);
    This part is working and I'm able to retrieve the info via echo, just for testing.
    After this code I have my insert code:
    if ((isset($_POST["MM_insert"])) && ($_POST["MM_insert"] == "form2")) {
      $insertSQL = sprintf("INSERT INTO add (title, `description`, price, member_id) VALUES (%s, %s, %s, %s)",
                           GetSQLValueString($_POST['title'], "text"),
                           GetSQLValueString($_POST['description'], "text"),
                           GetSQLValueString($_POST['price'], "text"),
                           GetSQLValueString($_POST['member_id'], "int"));
      mysql_select_db($database_connect, $connect);
      $Result1 = mysql_query($insertSQL, $connect) or die(mysql_error());
      $insertGoTo = "ok.php";
      if (isset($_SERVER['QUERY_STRING'])) {
        $insertGoTo .= (strpos($insertGoTo, '?')) ? "&" : "?";
        $insertGoTo .= $_SERVER['QUERY_STRING'];
      }
      header(sprintf("Location: %s", $insertGoTo));
    }
    Do I need to include a hidden field in my form?
    I keep getting the same error message: column member_id cannot be null.
    Any idea what I'm doing wrong?
    Thank you!

    When someone logs in, Dreamweaver creates a session variable called $_SESSION['MM_Username']. Use that session variable to create a recordset to get the user's ID, which can then be entered into the foreign key field of the child table.
    Dreamweaver automatically puts the code for recordsets immediately above the DOCTYPE declaration, so you will need to move it above the code for the Insert Record server behavior. So, it needs to be in this order:
    Recordset to get user ID
    Insert Record for child table
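    If you prefer to resolve the member on the database side instead, a minimal SQL-only sketch (table and column names are taken from the post above, so treat them as assumptions) lets the INSERT look the id up itself:
    -- `add` is backtick-quoted here because ADD is a reserved word in MySQL
    INSERT INTO `add` (title, `description`, price, member_id)
    SELECT 'some title', 'some description', '100', m.id
    FROM member AS m
    WHERE m.username = 'the_logged_in_user';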

  • Primary key foreign key remove problem

    Hi experts,
    I created 5 tables in the DDIC:
    1. zpr_cmp Company Master
    2. zpr_dpt Department Master
    3. zpr_dsg Designation Master
    4. zpr_emp Employee master.
    5. zpr_slm Salary Master.
    Foreign key references from table zpr_emp to tables 1, 2, and 3 were created, and
    table zpr_emp has
    cmpcd, Company Code
    empcd, Employee Code
    dptcd, Department Code
    dsgcd, Designation Code
    as key fields.
    I have uploaded data and created a module pool and reports.
    My problem is that the HR people say they want to change the deptcd/dsgcd of an employee (zpr_emp), and
    dptcd and dsgcd are key fields in zpr_emp; when I change table zpr_emp, remove the
    two key fields dptcd/dsgcd, and activate, an error is displayed.
    I also tried SE14 (Activate and Adjust Database), but the error remains.
    How can I remove the key fields from zpr_emp and activate the table without losing data and without any change to the
    module pool/reports?
    Please help.

    Diagnosis
    ZPR_EMP table is defined as a check table. For reasons of consistency, changes to the primary key of the table are not allowed.
    Procedure
    If it is essential that you change the primary key, you must delete the relevant foreign keys. Refer to the where-used list to find all tables containing a field that is checked against this table. Delete the foreign keys for these fields.
    If necessary, maintain the deleted foreign keys again.
    Value table - it is specified at the domain level and helps with domain-level data validation.
    Check table - unlike a value table, it provides field-level data validation.
    The relational data model contains not only tables, but also relationships between tables. These relationships are defined in the ABAP/4 Dictionary by foreign keys. An important function of foreign keys is to support data integrity in the relational data model. Foreign key fields may assume only those values allowed by the check table, in other words, values occurring in the primary key of the check table.
    A foreign key provides a link between two tables, for example T1 and T2, by including a reference in table T1 to the primary key of table T2. For this purpose, foreign key fields assigned to the primary key fields of T2 are included in T1. Table T1, the one being checked, is called the foreign key table, and table T2 is called the check table. The terms dependent (foreign key) table and referenced (check) table are also used.
    VALUE TABLE: If the domain of the check field has a value table, this is proposed by the system as the check table in the foreign key maintenance. The key fields of the value table are in this case assigned to fields of the foreign key table with the same domain. These fields may assume only those values allowed by the value table.
    The value range of a domain can be defined by specifying a value table. All table fields referring to this domain can then be checked against the corresponding field of this value table. In order for this check to be executed, a foreign key must be defined for the value table.

  • Tracking down which unindexed foreign keys are the biggest problem

    I joined a new project recently. I checked prod and our biggest bottleneck is unindexed foreign keys. It is high enough that I can see it is causing problems. So I ran a query and got a list of all the unindexed foreign keys; unfortunately there are about 80 of them. This application was inherited - the last team lost the project - and I think part of the issue is an off-the-shelf application (which we can't get rid of).
    I really don't like the idea of adding 80 indexes in one big rollout. It is too big a change to make at once, and it is also hard to measure whether those indexes might cause other problems. So what I would like to do is take my enqueue waits for unindexed foreign keys and somehow figure out which unindexed foreign keys are causing us the biggest problem. With this many, there is a strong possibility that some of these tables have their parents hit more than others and that some of them block other sessions more than others.
    Any suggestions on how to do this? It doesn't need to be exact; I'd just like to propose adding the indexes that will give us the biggest bang for the buck.
    I am not sure how to go from my system-wide enqueue waits down to the particular tables being hit with DML that causes locking on child tables, which in turn causes other sessions to be blocked.
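    For reference, a simplified sketch of the kind of query used to list the unindexed foreign keys (it only checks the leading column of each constraint, so treat it as a starting point rather than a definitive check):
    -- add an owner filter for your application schema as needed
    SELECT c.owner, c.table_name, c.constraint_name, cc.column_name
    FROM   dba_constraints c
    JOIN   dba_cons_columns cc
           ON  cc.owner = c.owner
           AND cc.constraint_name = c.constraint_name
           AND cc.position = 1
    WHERE  c.constraint_type = 'R'
    AND    NOT EXISTS (
             SELECT NULL
             FROM   dba_ind_columns ic
             WHERE  ic.table_owner     = c.owner
             AND    ic.table_name      = c.table_name
             AND    ic.column_name     = cc.column_name
             AND    ic.column_position = 1
           );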

    Guess2 wrote:
    I am not sure how to go from my system-wide enqueue waits down to the particular tables being hit with DML that causes locking on child tables, which in turn causes other sessions to be blocked.
    Depending on your version of Oracle, and whether or not you are licensed to run the performance pack and diagnostic pack, you could query v$active_session_history (and its repository dba_hist_active_sess_history).
    The type of query you need would be something like:
    select
            blocking_session, current_obj#, substr(to_char(p1,'xxxxxxxx'),-1), count(*)
    from
            v$active_session_history
    where
            event like 'enq: TM - contention'
    and     session_state = 'WAITING'
    and     sample_time between sysdate - 1/24 and sysdate
    group by
            blocking_session, current_obj#, substr(to_char(p1,'xxxxxxxx'),-1)
    /
    The counts would give you the relative time spent blocking due to each "current_obj#" - which you'd have to look up against object_id.
    I've also broken this down by blocking_session_id and the lock mode (which ought to be 4 or 5) - 4 would SUGGEST simple parent/child collisions, 5 would SUGGEST that the problem could be exacerbated by "on delete cascade" constraints.
    Regards
    Jonathan Lewis
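    As a possible follow-up (the object id, owner, and column names below are purely illustrative): once a particular current_obj# dominates the counts, it can be resolved to a table and the offending foreign key column indexed:
    -- resolve the object id reported by ASH
    SELECT owner, object_name, object_type
    FROM   dba_objects
    WHERE  object_id = 12345;
    -- then index the child table's foreign key column(s); ONLINE generally requires Enterprise Edition
    CREATE INDEX app_owner.child_table_fk_i
        ON app_owner.child_table (parent_id) ONLINE;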
