Header Condition Records - Transactional Table Details

Hi Folks,
My client has a requirement to generate a report which fetches the condition value details (freight, tax, etc.) for a particular sales order.
We have noted certain cases where the condition records at the header level are entered manually by the user at the time of creating the sales order. My question is: where exactly is the transactional data pertaining to the condition records of a sales order stored?
How can I advise my technical colleague to fetch the above-mentioned data? Can you please advise?
Thanks,
Safeer Rahman

Dear Safeer Rahman,
Please find the solution for your requirement below.
First take the sales order number (VBELN) and go to VBAK to fetch the document condition number (KNUMV); then go to KONV, enter that document condition number, and execute.
You can find the condition value (KWERT_K) for conditions whose condition origin (KHERK) is D.
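A minimal sketch of that lookup in plain SQL (the order number is an example value; in ABAP the same two reads can be done with SELECTs or a single join):
SELECT KNUMV
  FROM VBAK
 WHERE VBELN = '0000012345';   -- step 1: document condition number of the order

SELECT KSCHL,                  -- condition type (freight, tax, ...)
       KWERT_K                 -- condition value
  FROM KONV
 WHERE KNUMV = :knumv          -- value fetched in step 1
   AND KHERK = 'D';            -- condition origin D, as described above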
I have just tried this in my system.
Hope this fixes your issue; please let me know the status.
Thanks & Regards
Raghu.k

Similar Messages

  • Duplicate records returned when details come from 2 different tables

    I believe this should be a trivial case, but after days and days of experimenting, I can't get it. I'm creating a report of customers' transactions. The customer table links to a transaction header table (default join type). There are 2 tables which link to the transaction header, one for normal transaction line items, one for refunds. There is no correlation between the line items in the transaction table and those in the refund table, and they may or may not share a header. I have linked these two tables to the transaction header with left-outer-joins. What I want in the dataset, for a given transaction header, are all normal line items followed by all the refund line items (or vice versa). What I get, however, is a record for each combination of transaction details and refund details, whenever they share a common header. So, for example, if I have transaction line items 1, 2, 3, and 4 for a given header, plus refund lines A and B for the same header (as might happen if a customer returned 2 products at the same time he bought 4), what I want is a data set of 1,2,3,4,A,B. Instead, what I get is 1A,1B,2A,2B,3A,3B,4A,4B.
    How can I get the result I need? Is this simply a join issue, or must I weed out the duplication using complex formula scripts? Or a subreport? Or something else?
    thanks.
    John

    A subreport would solve this problem, but subreports take a long time to export. Does anyone know how to integrate a subreport without suffering the performance loss?
    My report has two subreports, and every time I try to export it takes ages. My database has only about 20k items, yet it takes almost 10 minutes to export.
    Does anyone know how to make this process faster?
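    One way to avoid both the row multiplication and the subreports altogether (a hedged sketch with hypothetical table and column names, usable as a SQL-command data source) is to stack the two child tables with UNION ALL before joining, so each line item appears exactly once:
    SELECT h.header_id, i.line_no, i.amount, 'SALE' AS line_kind
      FROM trans_header h
      JOIN trans_items  i ON i.header_id = h.header_id
    UNION ALL
    SELECT h.header_id, r.line_no, r.amount, 'REFUND' AS line_kind
      FROM trans_header h
      JOIN trans_refunds r ON r.header_id = h.header_id
    ORDER BY header_id, line_kind, line_no;
    -- for a header with items 1,2,3,4 and refunds A,B this returns
    -- 1,2,3,4,A,B instead of the 1A,1B,...,4B cross product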

  • Committed SCN in Undo header slot (Transaction table)

    Hi,
    Whenever a transaction is committed, the commit SCN is recorded with the change vector in the redo logs and in the slot reserved for this transaction in the undo segment header.
    At some later point in time another session may read one of those blocks and discover that the ITL includes a transaction that has committed but not been cleaned out. (It can work this out by cross-checking the ITL entry with the relevant transaction table slot in the undo segment header block.)
    My question is: by the time the next session visits this block, what happens if the undo slot has been overwritten (which may happen due to lack of undo space, or because the time since the commit exceeded the undo retention)?
    Please clarify my doubts.
    -thanks
    Vijay

    Vijay.Cherukuri wrote:
    Thanks for your reply.
    "Possibly: ORA-01554 transaction concurrency limit reached reason:%s params:%s, %s (ORA-01574 is another candidate, but I think that one really is about just the parameter setting.)"
    In this case we may also get ORA-30036 (unable to extend segment by string in undo tablespace 'string'), since the undo tablespace was full and not extendable?
    And ORA-01554 (transaction concurrency limit reached) would also be valid, since the maximum number of concurrent transactions was reached?
    Which error code is more likely to occur in this case?

    Why do you think you need to know the answer to that question?
    Regards
    Jonathan Lewis

  • Process records in a transaction table in real time

    We are currently designing a new system which basically needs to process records from a transaction table. We envisage approximately 100,000 records per hour. There is no need to process each transaction independently, as the external process will process all records residing in the table which have not been processed yet.
    We are basically looking at various options:
    1) have the external process run continuously, select all records in the table, process them and delete them and then start the process again
    2) have the external process run continuously, select all records in the table, process them, update a status flag and then start the process again processing only those records with their status not yet updated
    3) fire a trigger for each record launching the external process (if it is not running yet)
    4) have a separate table containing a timestamp which is updated via trigger for every transaction that is inserted in the transaction table. Have the external process run continuously and only process those records which have exceeded the previous timestamp.
    We would appreciate any ideas on how to tune this process, and your views on the options mentioned above (or others you might suggest).
    Thanks a lot.

    user9511474 wrote:
    We are currently designing a new system which basically needs to process records from a transaction table. We are envisaging to have approximately 100000 records per hour. There is no need to process each transaction independently as the external process will process all records residing in the table which have not been processed as yet.My busiest table collects up to 50 million rows per hour (peak periods in the day) that also needs to be processed immediately (as a batch) after the hour. I use partitioning. It is a very flexible and there are very few (if any) performance knocks.
    The entire data set has to be processed. With a partition that means a full scan of the table partition - and the ability to do it using parallel query. No additional predicates are needed, except the to have the CBO apply partition pruning. In other words, the predicate enables the CBO to narrow down the SQL to the previous hour's partition only.
    No additional predicates needed like a STATUS flag to differentiate between processed and unprocessed rows - as the entire data set in the partition is unprocessed at that time. (such a flag approach will not be very scalable in any case)
    Also, I do not use external processes. Too expensive performance wise to ship data all the way from the Oracle buffer cache to some external process. And parallel query is also not an option with an external process as the OCI does not provide the external process with a threading interface in order to hook into each of the data output streams provided by the parallel query clients.
    I stay inside PL/SQL to perform the data processing. PL/SQL is even more capable than ProC/C++ and Java and .Net in this regard.
    The execution interface to drive the scheduling of processing is DBMS_JOB. Straight forward and simple to use.
    The basic principles of processing large data volumes in Oracle is to effectively use I/O. Why use indexes when an entire data set needs to be processed? Why perform updates (e.g. updating a status flag) when the data model and physical implementation of that can eliminate it?
    I/O is the most expensive operation. And when dealing with a large volume, you need to make sure that every single I/O is actually required to achieve the end result. There's no room to waste I/O as the performance penalties are hefty.
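    A minimal sketch of that pattern (hypothetical table and procedure names, assuming Oracle 11g+ interval partitioning, with DBMS_JOB as the scheduler per the reply above):
    CREATE TABLE txn_staging (
      txn_time  DATE NOT NULL,
      payload   VARCHAR2(4000)
    )
    PARTITION BY RANGE (txn_time)
    INTERVAL (NUMTODSINTERVAL(1, 'HOUR'))
    (PARTITION p0 VALUES LESS THAN (DATE '2024-01-01'));
    -- The predicate exists only so the CBO prunes to the previous hour's
    -- partition; everything in that partition is by definition unprocessed.
    SELECT /*+ PARALLEL(t) */ *
      FROM txn_staging t
     WHERE t.txn_time >= TRUNC(SYSDATE, 'HH24') - 1/24
       AND t.txn_time <  TRUNC(SYSDATE, 'HH24');
    -- Schedule a (hypothetical) processing procedure at the top of each hour.
    DECLARE
      l_job BINARY_INTEGER;
    BEGIN
      DBMS_JOB.SUBMIT(
        job       => l_job,
        what      => 'process_last_hour;',
        next_date => TRUNC(SYSDATE, 'HH24') + 1/24,
        interval  => 'TRUNC(SYSDATE, ''HH24'') + 1/24');
      COMMIT;
    END;
    /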

  • Moving the 80 Million records from Conversion database to System Test database (Just for one transaction table) taking too long.

    Hello Friends,
    The background: I am working as conversion manager; we move the data from Oracle to SQL Server using SSMA, then apply the conversion logic, and then move the data to System Test, UAT and Production.
    Scenario:
    Moving the 80 Million records from Conversion database to System Test database (Just for one transaction table) taking too long. Both the databases are in the same server.
    Questions are…
    What is best option?
    If we use SSIS it is very slow, taking 17 hours (sometimes it gets stuck and won't allow us to do anything).
    I am using my own script (a stored procedure) and it takes only 1 hour 40 minutes. I would like to know whether there is a better process to speed this up, and why SSIS takes so long.
    When we move the data using SSIS, does it commit after a particular row count, or does it commit all the records together after writing to the transaction log?
    Thanks
    Karthikeyan Jothi
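    For what it's worth, the stored-procedure route is usually fastest when it commits in batches rather than in one giant transaction; a hedged sketch (hypothetical database, table and key names, assuming a reasonably dense numeric key Id):
    DECLARE @batch BIGINT = 1000000,
            @from  BIGINT = 0,
            @max   BIGINT = (SELECT MAX(Id) FROM Conversion.dbo.BigTable);
    WHILE @from < @max
    BEGIN
        -- each iteration commits on its own, so the transaction log never
        -- has to hold all 80 million rows; TABLOCK allows minimal logging
        -- under the simple or bulk-logged recovery models
        INSERT INTO SystemTest.dbo.BigTable WITH (TABLOCK)
        SELECT *
          FROM Conversion.dbo.BigTable
         WHERE Id >  @from
           AND Id <= @from + @batch;
        SET @from = @from + @batch;
    END;
    As far as I recall, in SSIS the OLE DB Destination's "Maximum insert commit size" setting controls the equivalent behaviour; left at its default, rows are committed as one large batch at the end, which may explain the long runtimes.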

    http://www.dfarber.com/computer-consulting-blog.aspx?filterby=Copy%20hundreds%20of%20millions%20records%20in%20ms%20sql
    Processing hundreds of millions of records can be done in less than an hour.
    Best Regards, Uri Dimant SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Insert record via data tab of table detail display

    I am using SQL Developer 3.0.04
    When I try to insert more than 200 rows into a table via the Data tab of the table detail display, it takes more than 8 minutes; it takes more than 19 minutes for 400+ rows. It is slow, and it was not this slow in previous versions (e.g. 1.5.4).
    Has anyone experienced the same issue, and is there any workaround (other than loading the data via SQL*Loader and no longer using SQL Developer)?
    [Timer of SQL Developer 3.0|http://i.imgur.com/KVAdI.png]

    I've seen this sometimes too, but I rarely load large amounts of data using SQL Developer anyway.
    Please make sure you have patch 1 installed, and provide information on the database used.
    You can check whether patch 1 is installed under Help -> About -> Extensions; you should find a "SQL Developer Patch" entry there.

  • How to suppress the header if there is no detail record in the report

    Hi,
    I am trying to create a report using the TEXT_IO utility in Oracle Forms. The header part is created in one program unit and the detail part in another. How can I suppress the header if there is no detail part? I came to know that I can use the HOST command; how can I use it? Can anyone help me resolve this?
    My report should appear in the format specified below.
    case number : 1
    Name Address1 Address2 State ZIp
    XXXXX XXXXXXX XXXXXXXx XXXXX XXXxx
    case number: 2
    Name Address1 Address2 State Zip
    YYYYY YYYYYY YYYYYYY YYYYY YYYY

    Hi
    I think it's a better idea to paste your code here, but logically I suppose you have two functions, header_function and detail_function, which both return 1 if there is data and 0 otherwise.
    When calling these functions, e.g. in WHEN-BUTTON-PRESSED, you need to validate as follows:
    DECLARE
    v_header_output  NUMBER;
    v_detail_output  NUMBER;
    BEGIN
    v_header_output :=  header_function ( PARAMETERS ... , ,);
    v_detail_output :=  detail_function ( PARAMETERS ... , ,);
    IF  v_detail_output = 0 THEN
    v_header_output := 0;   -- or change the return of header_function to null or 0
    END IF;
    END;
    Hope it works...
    Regards,
    Abdetu...
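    Alternatively, if the header and detail can be driven from one query, a hedged TEXT_IO sketch (hypothetical table, column, block-item and file names) that simply defers writing the header until the first detail row arrives:
    DECLARE
      v_file       TEXT_IO.FILE_TYPE;
      v_has_detail BOOLEAN := FALSE;
    BEGIN
      v_file := TEXT_IO.FOPEN('report.txt', 'w');
      FOR r IN (SELECT name, address1, address2, state, zip
                  FROM detail_table
                 WHERE case_number = :blk.case_number) LOOP
        IF NOT v_has_detail THEN
          -- the header is written only once a detail row actually exists
          TEXT_IO.PUT_LINE(v_file, 'case number : ' || :blk.case_number);
          TEXT_IO.PUT_LINE(v_file, 'Name Address1 Address2 State Zip');
          v_has_detail := TRUE;
        END IF;
        TEXT_IO.PUT_LINE(v_file, r.name || ' ' || r.address1 || ' ' ||
                         r.address2 || ' ' || r.state || ' ' || r.zip);
      END LOOP;
      TEXT_IO.FCLOSE(v_file);
    END;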

  • PO header condition value

    Hi Gurus
    How do I fetch the value from the PO header conditions for layout purposes?
    And how do I fetch the subtotal values (e.g. assessable value) from the header Conditions tab?
    Thanks with anticipation
    Raju

    Thank you.
    Please let me know the details.
    I got the condition record number by passing the PO number to EKKO, then entered that condition record number in KONP and KONH, but got the message "no table entries found".
    I would like to capture the value for a particular condition type from the header.
    Raju
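    KONP and KONH hold master-data condition records, which would explain the "no table entries found" message; for document conditions the lookup mirrors the sales-order case at the top of this page (a hedged sketch, with an example PO number):
    SELECT KNUMV
      FROM EKKO
     WHERE EBELN = '4500000123';   -- step 1: document condition number of the PO

    SELECT KSCHL, KWERT            -- condition type and value
      FROM KONV
     WHERE KNUMV = :knumv;         -- filter on KSCHL for the condition type you need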

  • Kick off SSIS master package once new data gets into a transactional table

    Hello,
    I have an SSIS master package. I also have a UI which the user uses to configure some date ranges (which I need in order to load the data from the package). After the date configuration, when the user clicks the RUN SSIS button in the UI, that configured data gets loaded into a transactional DB table. Once new data is entered into that table, I need to kick off this master SSIS package.
    The question is: is there any method other than table triggers?
    Please advise.
    Thanks

    Another method is to have an automated job created in SQL Agent which keeps polling the transactional table. You can configure it to run periodically (maybe every 30 minutes, etc.). Each time it starts, it needs to check the status of a table which captures whether the user has pressed the RUN SSIS button (you need to include logic in the UI to set a bit in a status table when the button is pressed). If the status is set, it will check whether the transactional table has data (IF EXISTS (SELECT 1 FROM Table)). Once these two are true, it will go to the next step, where it calls the master package. At the end it needs to log somewhere that the data transfer is completed, and also reset the bit field.
    So the job will have three steps:
    Step 1: check that the bit is set and the table has data, i.e.
    IF (SELECT bitField FROM StatusTable) = 1 AND EXISTS (SELECT 1 FROM TTable)
    SELECT 1
    ELSE
    RAISERROR('Table not ready', 16, 1)
    Set the job's advanced options to quit the job reporting success on the failure condition, and to go to the next step on the success condition.
    Step 2: execute the master package.
    Step 3: reset the status bit and update the data transfer status in a log table with the date of the run, record count, etc.
    Visakh
    http://visakhm.blogspot.com/

  • Oracle Lite conflict data table details needed

    Hi,
    I have a new requirement from a client: we have to email users all the conflict information between Oracle Lite and the main database after synchronisation. We have taken the publication and conflict table details from mobileadmin.C$EQ, and we want to know which database table should be used to get the field-wise details.
    Regards, Palani.R.

    What do you mean by conflict?
    The publication items will by default have a conflict type of 'server wins' or 'client wins'. This comes into play where updates have happened on both the server and the client; in these cases the rule will be followed, the data will be updated accordingly, and nothing is stored.
    If there is an ERROR in the upload processing, then the entire transaction is put into the error queue (C$EQ), which sounds like what you are looking at.
    C$EQ will have a record for each record in the upload transaction, but the first one to cause an error will have a non-null message text (there may be other errors behind it, but only the first one is updated).
    If what you are trying to do is send an email when a transaction errors, then I would suggest you
    select the user (clientid), transaction (tranid$$) and message_text (this is the actual error) from c$eq
    To get the related table, then run:
    select pub.owner||'.'||pub.store
    from c$all_publications pub
    ,c$eq eq
    where pub.name=eq.publication
    This is the schema and object name of the server table the data is going to. The actual error queue table is:
    select pub.owner||'.CEQ$'||pub.store
    from c$all_publications pub
    ,c$eq eq
    where pub.name=eq.publication
    i.e. the object name prefixed with CEQ$, sitting in the main schema.
    If you want the actual column data, then just use all_tab_columns inside a bit of dynamic SQL.
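    For that last step, a hedged sketch (binding the owner and store values retrieved above) of listing the error-queue table's columns to drive the dynamic SELECT:
    select column_name
      from all_tab_columns
     where owner = :owner                 -- pub.owner from the query above
       and table_name = 'CEQ$' || :store  -- pub.store prefixed as described
     order by column_id;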

  • Advice needed on designing a schema to accommodate multiple transaction tables

    Hi,
    The attached images shows my current schema. It consists of three transaction tables, a product table and a calendar table.
    - Background -
    The product table 'Q1 Data Set' contains all unique sales. It also contains a number of columns by which I will later filter my pivot tables (e.g. whether the customer of the order is new/returning). This table also contains a column named 'DateOrdered', the date the order was originally placed (but not paid).
    Each sale that is paid can be settled either in a single transaction, or across multiple transactions of different transaction types.
    An example of a sale paid in multiple parts would be an order that has three transactions: one online (table 'trans_sagepay'), one over the phone (table 'trans_epdq') and another by card (table 'trans_manual'). Furthermore, there can be more than one transaction of each type for a sale.
    I have created measures which total the sales in each transaction table.
    Each transaction has a 'transaction_date', which is the date of that individual transaction.
    The calendar is simply a date table that has some friendly formatted columns for laying out pivot tables. An example column is FiscalMonthAbbrv, which displays months like '(04) - January' to accommodate our fiscal year.
    - Problem -
    My problem is that I need the ability to create some tables that have the Date Ordered as the rows (listed by Year > Month), and I need to produce other tables that have Transaction Date as the rows.
    Date Ordered works fine; the problem comes when I try to create a table based on the transaction date.
    With the current model seen in the attached image I cannot do it, because the transactions have a relationship to Q1 Data Set, and that table has the relationship with the Cal_Trans table. What happens in this scenario is that whenever I set the rows to FiscalMonthAbbrv, the values displayed are based not on the transaction date but on the date ordered. To explain further: if I have an order A with a DateOrdered of 01/01/2014, but the transaction of £100 for that order was made later, on 05/01/2014, that £100 is incorrectly attributed to 01/01/2014.
    To clarify the type of table I am aiming for, see the mock-up below; I however NEED the ability to filter this table using columns found in Q1 Data Set.
    How can I make a schema so that I can use both DateOrdered and TransactionDate? I cannot combine all three transaction tables into one, because each transaction type has columns unique to that specific type.

    Thanks for your suggestions. At the moment I don't have time to prepare a non-confidential copy of the data model; however, I've taken one step forward and one step back!
    First, to clarify: to calculate the sales of each transaction type I have created the following measures (I've given them friendly names):
    rev_cash
    rev_online
    rev_phone
    I then have a measure called rev_total which sums the above measures together. This allows me to calculate total revenue, but also to break it down by transaction type.
    With this in mind I revised the schema, based on Visakh's original suggestion, to look like this:
    Using this I was able to produce a table like the one below.
    There were two issues with it:
    1. If I add the individual measures for each transaction type I get no errors, but as soon as I add the 'Total Sales' measure at the end of the table I get the error "Relationship between tables may be needed". Seemingly, however, the numbers still calculate as expected. What is causing this error and how do I remove it?
    2. I CAN, in this scenario, filter by 'phd', which is a column in the Q1 Data Set table, and it works as expected. I cannot, however, filter by all columns in this table; an example would be 'word count'. 'Word Count' is an integer column, and each record in the Q1 Data Set table has a value set for it.
    I would like to take the column above and add a new measure called 'Total Word Count' (which I have created) which will calculate the total number of words in that monthly period. When I add it, however, I get the same relationship error as above, and it displays the word count total for the entire source table on every row of the pivot table.
    How can I get this schema working so that I can filter by word count and other columns from the product table? It confuses me that I can filter by one column, but not by another in the same table.
    Also, I don't fully understand how I would add a second date table or how it would help my issues.
    Thanks very much for your help.

  • Master table, detail table: detail table not getting refreshed on selection

    I have a page where I am displaying data as a master table and a detail table. Both tables' VOs are based on SQL queries which use bind variables.
    I have a view link between the VOs of type 1:M.
    I created the master table - detail table page by dropping the detail iterator from the Data Control panel under the master and selecting "master table detail table".
    On my page I see the detail table records getting populated only for the first record of the parent table.
    On changing the parent record, the child table shows the same records and does not refresh.
    I am using partial triggers on both tables so that they are populated on a button click, as I need to pass the VOs some bind variables which are taken as input from users.
    How can I show the corresponding rows in the detail table when the parent record in the table changes?
    Will I have to use a table selection listener?
    Is it possible declaratively to have a master-detail table view when both VOs have bind variables?
    JDev 11.1.1.5

    These are the SQLs used:
    Parent SQL Based VO Query
    SELECT to_char(d.status_date,'yyyymmddhh24') TIME123, count(DISTINCT d.c4)
    FROM t1 d,
    t2 w
    WHERE w.c1 = nvl(:ou, w.c1)
    AND UPPER(w.c2) = UPPER(nvl(:tt, w.c2))
    AND d.c3 >= :startTime AND :startTime IS NOT NULL
    AND d.c3 <= :endTime AND :endTime IS NOT NULL
    AND d.c4 = w.c4
    AND UPPER(d.status) = 'CLOSED'
    GROUP BY to_char(status_date,'yyyymmddhh24') ORDER BY to_char(status_date,'yyyymmddhh24') DESC
    Child SQL Based VO Query
    SELECT w.c1,
    w.c5 - w.c6 processing_time,
    w.c3,
    w.c6,
    w.c7,
    w.c8,
    to_char(d.status_date,'yyyymmddhh24') TIME123 FROM t1 d,
    t2 w
    WHERE w.c2 = nvl(:ou, w.c2)
    AND UPPER(w.c3) = UPPER(nvl(:tt, w.c3))
    AND d.c4 >= :startTime AND :startTime IS NOT NULL
    AND d.c4 <= :endTime AND :endTime IS NOT NULL
    AND d.c1 = w.c1
    AND UPPER(d.status) = 'CLOSED' ORDER BY to_char(status_date,'yyyymmddhh24') DESC
    The view link is based on column TIME123.

  • Master table detail table with SQL based read only VO with bind variables


    Instead of doing the master-detail layout by dragging the details over, can you try a new page where you first drag the master VO over, then drag the detail VO over, and then set partialTriggers on the detail to point to the master?

  • How to create transaction table in R/3

    Hi Gurus,
    I have created a master data table in R/3 with zempid as the primary key. I need to create transaction tables for employees and visitors. How can I relate the two transaction tables to the employee master table while creating the tables in R/3 SE11?
    Note: both the employee and visitor transaction tables have zempid.
    Please let me know how to create the tables with relations.
    Thanks & Regards,
    Bharath

    Hi,
    In a normal database (say Oracle), this referential integrity is taken care of by PK-FK (primary key - foreign key) relationships. As far as OLTP - SAP R/3 is concerned, these already exist for the tables defined by SAP.
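    In relational terms the requirement looks like the sketch below (hypothetical names; in SE11 the equivalent is defining zempid in each transaction table as a foreign key with the master table as its check table):
    CREATE TABLE zempl_master (
      zempid  INTEGER PRIMARY KEY,
      name    VARCHAR(40)
    );
    CREATE TABLE zempl_txn (
      txn_id  INTEGER PRIMARY KEY,
      zempid  INTEGER NOT NULL REFERENCES zempl_master (zempid)
    );
    CREATE TABLE zvisitor_txn (
      txn_id  INTEGER PRIMARY KEY,
      zempid  INTEGER NOT NULL REFERENCES zempl_master (zempid)
    );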
    Speaking about BW:
    1. You will bring the master data from R/3 to BW first (it's always a good practice), which means that emp ids 1000 - 1007 will flow to BW. Your OLTP system also ensures that there can be no transaction data (due to the PK-FK relationships) for any emp id other than 1000 - 1007.
    2. There could be another case: after the master data has been transferred to BW, due to a time lag, new master data and transaction data are created in R/3, meaning that R/3 now has both master data for, say, emp id 1008 and transaction data as well.
    3. Now the transaction data is extracted from R/3, which will bring the transaction data for 1008, even though the master data for 1008 has not come down to BW (it was not part of the load in step 1).
    4. Such a situation can be controlled in BW; there are options with which you can control this:
    - Load data even if there is no master data (this will create a master data record with the key values and nulls in the other attributes) - the record for 1008 will get loaded in such a case.
    - Do not load data when there is no master data - the record with emp id 1008 will not be loaded in such a case.
    Hope this helps.
    Cheers,
    Sumit

  • How to insert a new record into a table with a foreign key

    I have 3 tables like this :
    CREATE TABLE PERSON (
    PK INTEGER NOT NULL,
    NAME VARCHAR(10),
    SSNUM INTEGER,
    MGR INTEGER);
    ALTER TABLE PERSON ADD CONSTRAINT PK_PERSON PRIMARY KEY (PK);
    ALTER TABLE PERSON ADD CONSTRAINT FK_PERSON FOREIGN KEY (MGR) REFERENCES
    PERSON (PK);
    CREATE TABLE PROJECT (
    PK INTEGER NOT NULL,
    CODE_NAME INTEGER);
    ALTER TABLE PROJECT ADD CONSTRAINT PK_PROJECT PRIMARY KEY (PK);
    CREATE TABLE XREF (
    PERSON INTEGER NOT NULL,
    PROJECT INTEGER NOT NULL);
    ALTER TABLE XREF ADD CONSTRAINT PK_XREF PRIMARY KEY (PERSON, PROJECT);
    ALTER TABLE XREF ADD CONSTRAINT FK_XREF1 FOREIGN KEY (PERSON) REFERENCES
    PERSON (PK);
    ALTER TABLE XREF ADD CONSTRAINT FK_XREF2 FOREIGN KEY (PROJECT) REFERENCES
    PROJECT (PK);
    I follow the approach of the "ReverseTutorial", and the .jdo file is here:
    <?xml version="1.0" encoding="UTF-8"?>
    <jdo>
    <package name="reversetutorial">
    <class name="Person" objectid-class="PersonId">
    <extension vendor-name="kodo" key="class-column" value="none"/>
    <extension vendor-name="kodo" key="lock-column" value="none"/>
    <extension vendor-name="kodo" key="table" value="PERSON"/>
    <field name="name">
    <extension vendor-name="kodo" key="data-column"
    value="NAME"/>
    </field>
    <field name="person">
    <extension vendor-name="kodo" key="pk-data-column"
    value="MGR"/>
    </field>
    <field name="persons">
    <collection element-type="Person"/>
    <extension vendor-name="kodo" key="inverse"
    value="person"/>
    <extension vendor-name="kodo" key="inverse-owner"
    value="person"/>
    </field>
    <field name="pk" primary-key="true">
    <extension vendor-name="kodo" key="data-column"
    value="PK"/>
    </field>
    <field name="ssnum">
    <extension vendor-name="kodo" key="data-column"
    value="SSNUM"/>
    </field>
    <field name="xrefs">
    <collection element-type="Xref"/>
    <extension vendor-name="kodo" key="inverse"
    value="person"/>
    <extension vendor-name="kodo" key="inverse-owner"
    value="person"/>
    </field>
    </class>
    <class name="Project" objectid-class="ProjectId">
    <extension vendor-name="kodo" key="class-column" value="none"/>
    <extension vendor-name="kodo" key="lock-column" value="none"/>
    <extension vendor-name="kodo" key="table" value="PROJECT"/>
    <field name="codeName">
    <extension vendor-name="kodo" key="data-column"
    value="CODE_NAME"/>
    </field>
    <field name="pk" primary-key="true">
    <extension vendor-name="kodo" key="data-column"
    value="PK"/>
    </field>
    <field name="xrefs">
    <collection element-type="Xref"/>
    <extension vendor-name="kodo" key="inverse"
    value="project"/>
    <extension vendor-name="kodo" key="inverse-owner"
    value="project"/>
    </field>
    </class>
    <class name="Xref" objectid-class="XrefId">
    <extension vendor-name="kodo" key="class-column" value="none"/>
    <extension vendor-name="kodo" key="lock-column" value="none"/>
    <extension vendor-name="kodo" key="table" value="XREF"/>
    <field name="person">
    <extension vendor-name="kodo" key="pk-data-column"
    value="PERSON"/>
    </field>
    <field name="person2" primary-key="true">
    <extension vendor-name="kodo" key="data-column"
    value="PERSON"/>
    </field>
    <field name="project">
    <extension vendor-name="kodo" key="pk-data-column"
    value="PROJECT"/>
    </field>
    <field name="project2" primary-key="true">
    <extension vendor-name="kodo" key="data-column"
    value="PROJECT"/>
    </field>
    </class>
    </package>
    </jdo>
    The data in those tables is:
    PERSON :
    | PK | NAME | SSNUM | MGR |
    | 1 | ABC | 1 | 1 |
    | 2 | DEF | 5 | 1 |
    PROJECT
    | PK | CODE_NAME |
    | 1 | 12 |
    | 2 | 13 |
    And now I want to add a new record into table XREF : insert into XREF
    values (1,1);
    public void createData() {
    Xref xref = new Xref();
    Person person = new Person(1);
    Project project = new Project(1);
    xref.setPerson(person);
    xref.setProject(project);
    person.getXrefs().add(xref);
    project.getXrefs().add(xref);   // was adding to person twice by mistake
    pm.currentTransaction().begin();
    pm.makePersistent(xref);
    pm.currentTransaction().commit();
    }
    I don't know why Kodo automatically inserts a new record into table PERSON, causing a primary key conflict. The errors are:
    0 [main] INFO kodo.Runtime - Starting Kodo JDO version 2.4.1
    (kodojdo-2.4.1-20030126-1556) with capabilities: [Enterprise Edition
    Features, Standard Edition Features, Lite Edition Features, Evaluation
    License, Query Extensions, Datacache Plug-in, Statement Batching, Global
    Transactions, Developer Tools, Custom Database Dictionaries, Enterprise
    Databases, Custom ClassMappings, Custom ResultObjectProviders]
    41 [main] WARN kodo.Runtime - WARNING: Kodo JDO Evaluation expires in 29
    days. Please contact [email protected] for information on extending
    your evaluation period or purchasing a license.
    1627 [main] INFO kodo.MetaData -
    com.solarmetric.kodo.meta.JDOMetaDataParser@e28b9: parsing source:
    file:/D:/AN/Test/classes/reversetutorial/reversetutorial.jdo
    3092 [main] INFO jdbc.JDBC - [ C:23387093; T:19356985; D:10268916 ] open:
    jdbc:firebirdsql:localhost/3050:D:/An/test/temp.gdb (sysdba)
    3325 [main] INFO jdbc.JDBC - [ C:23387093; T:19356985; D:10268916 ]
    close:
    com.solarmetric.datasource.PoolConnection@164dbd5[[requests=0;size=0;max=70;hits=0;created=0;redundant=0;overflow=0;new=0;leaked=0;unavailable=0]]
    3335 [main] INFO jdbc.JDBC - [ C:23387093; T:19356985; D:10268916 ] close
    connection
    3648 [main] INFO jdbc.JDBC - Using dictionary class
    "com.solarmetric.kodo.impl.jdbc.schema.dict.InterbaseDictionary" to
    connect to "Firebird" (version "__WI-V6.2.972 Firebird 1.0.3)WI-V6.2.972
    Firebird 1.0.3/tcp (annm)/P10") with JDBC driver "firebirdsql jca/jdbc
    resource adapter" (version "0.1")
    4032 [main] INFO jdbc.JDBC - [ C:25657668; T:19356985; D:10268916 ] open:
    jdbc:firebirdsql:localhost/3050:D:/An/test/temp.gdb (sysdba)
    4143 [main] INFO jdbc.SQL - [ C:25657668; T:19356985; D:10268916 ]
    preparing statement <3098834>: INSERT INTO XREF(PERSON, PROJECT) VALUES
    4224 [main] INFO jdbc.SQL - [ C:25657668; T:19356985; D:10268916 ]
    executing statement <3098834>: [reused=1;params={(int)1,(int)1}]
    4244 [main] INFO jdbc.SQL - [ C:25657668; T:19356985; D:10268916 ]
    preparing statement <9090824>: INSERT INTO PERSON(MGR, NAME, PK, SSNUM)
    VALUES (?, ?, ?, ?)
    4315 [main] INFO jdbc.SQL - [ C:25657668; T:19356985; D:10268916 ]
    executing statement <9090824>: [reused=1;params={null,null,(int)1,(int)0}]
    4598 [main] WARN jdbc.JDBC - java.sql.SQLWarning: java.sql.SQLWarning:
    resultSetType or resultSetConcurrency changed
    4598 [main] WARN jdbc.JDBC - java.sql.SQLWarning: java.sql.SQLWarning:
    resultSetType or resultSetConcurrency changed
    4598 [main] INFO jdbc.JDBC - [ C:25657668; T:19356985; D:10268916 ] begin
    rollback
    4608 [main] INFO jdbc.JDBC - [ C:25657668; T:19356985; D:10268916 ] end
    rollback 10ms
    4628 [main] INFO jdbc.JDBC - [ C:25657668; T:19356985; D:10268916 ]
    close:
    com.solarmetric.datasource.PoolConnection@1878144[[requests=2;size=2;max=70;hits=0;created=2;redundant=0;overflow=0;new=2;leaked=0;unavailable=0]]
    4628 [main] INFO jdbc.JDBC - [ C:25657668; T:19356985; D:10268916 ] close
    connection
    javax.jdo.JDOFatalDataStoreException:
    com.solarmetric.kodo.impl.jdbc.sql.SQLExceptionWrapper:
    [SQL=INSERT INTO PERSON(MGR, NAME, PK, SSNUM) VALUES (null, null, 1, 0)]
    [PRE=INSERT INTO PERSON(MGR, NAME, PK, SSNUM) VALUES (?, ?, ?, ?)]
    GDS Exception. violation of PRIMARY or UNIQUE KEY constraint "PK_PERSON"
    on table "PERSON" [code=335544665;state=null]
    NestedThrowables:
    com.solarmetric.kodo.impl.jdbc.sql.SQLExceptionWrapper:
    [SQL=INSERT INTO PERSON(MGR, NAME, PK, SSNUM) VALUES (null, null, 1, 0)]
    [PRE=INSERT INTO PERSON(MGR, NAME, PK, SSNUM) VALUES (?, ?, ?, ?)]
    GDS Exception. violation of PRIMARY or UNIQUE KEY constraint "PK_PERSON"
    on table "PERSON"
    at
    com.solarmetric.kodo.impl.jdbc.runtime.SQLExceptions.throwFatal(SQLExceptions.java:17)
    at
    com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.flush(JDBCStoreManager.java:416)
    at
    com.solarmetric.kodo.runtime.PersistenceManagerImpl.flush(PersistenceManagerImpl.java:575)
    at
    com.solarmetric.kodo.runtime.PersistenceManagerImpl.commit(PersistenceManagerImpl.java:438)
    at reversetutorial.Finder.createData(Finder.java:74)
    at reversetutorial.Finder.main(Finder.java:141)
    NestedThrowablesStackTrace:
    org.firebirdsql.jdbc.FBSQLException: GDS Exception. violation of PRIMARY
    or UNIQUE KEY constraint "PK_PERSON" on table "PERSON"
    at
    org.firebirdsql.jdbc.FBPreparedStatement.internalExecute(FBPreparedStatement.java:425)
    at
    org.firebirdsql.jdbc.FBPreparedStatement.executeUpdate(FBPreparedStatement.java:136)
    at
    com.solarmetric.datasource.PreparedStatementWrapper.executeUpdate(PreparedStatementWrapper.java:111)
    at
    com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executePreparedStatementNonBatch(SQLExecutionManagerImpl.java:542)
    at
    com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executePreparedStatement(SQLExecutionManagerImpl.java:511
    at
    com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executeInternal(SQLExecutionManagerImpl.java:405)
    at
    com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.flush(SQLExecutionManagerImpl.java:272
    at
    com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.flush(JDBCStoreManager.java:411)
    at
    com.solarmetric.kodo.runtime.PersistenceManagerImpl.flush(PersistenceManagerImpl.java:575)
    at
    com.solarmetric.kodo.runtime.PersistenceManagerImpl.commit(PersistenceManagerImpl.java:438)
    at reversetutorial.Finder.createData(Finder.java:74)
    at reversetutorial.Finder.main(Finder.java:141)
    at org.firebirdsql.gds.GDSException: violation of PRIMARY or UNIQUE KEY
    constraint "PK_PERSON" on table "PERSON
    at org.firebirdsql.jgds.GDS_Impl.readStatusVector(GDS_Impl.java:1683)
    at org.firebirdsql.jgds.GDS_Impl.receiveResponse(GDS_Impl.java:1636)
    at org.firebirdsql.jgds.GDS_Impl.isc_dsql_execute2(GDS_Impl.java:865)
    at
    org.firebirdsql.jca.FBManagedConnection.executeStatement(FBManagedConnection.java:782)
    at
    org.firebirdsql.jdbc.FBConnection.executeStatement(FBConnection.java:1072)
    at
    org.firebirdsql.jdbc.FBPreparedStatement.internalExecute(FBPreparedStatement.java:420)
    at
    org.firebirdsql.jdbc.FBPreparedStatement.executeUpdate(FBPreparedStatement.java:136)
    at
    com.solarmetric.datasource.PreparedStatementWrapper.executeUpdate(PreparedStatementWrapper.java:111)
    at
    com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executePreparedStatementNonBatch(SQLExecutionManagerImpl.java:542)
    at
    com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executePreparedStatement(SQLExecutionManagerImpl.java:511)
    at
    com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.executeInternal(SQLExecutionManagerImpl.java:405)
    at
    com.solarmetric.kodo.impl.jdbc.SQLExecutionManagerImpl.flush(SQLExecutionManagerImpl.java:272)
    at
    com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.flush(JDBCStoreManager.java:411)
    at
    com.solarmetric.kodo.runtime.PersistenceManagerImpl.flush(PersistenceManagerImpl.java:575)
    at
    com.solarmetric.kodo.runtime.PersistenceManagerImpl.commit(PersistenceManagerImpl.java:438)
    at reversetutorial.Finder.createData(Finder.java:74)
    at reversetutorial.Finder.main(Finder.java:141)
    Exception in thread "main"

    First off, use the '-primaryKeyOnJoin true' flag when running the reverse
    mapping tool so that you can get rid of that useless Xref class and have
    a direct relation between Person and Project. See the documentation on
    reverse mapping tool options here:
    http://www.solarmetric.com/Software/Documentation/latest/docs/ref_guide_pc_reverse.html
    But your real problem is that you are creating new objects, assigning
    primary key values, and expecting them to represent existing objects.
    That's not the way JDO works. If you want to set relations to existing
    objects in JDO, you use the PM to look up those objects. If you try to
    create new objects, JDO will assume you want to insert new records into
    the DB, and you'll get PK conflicts like you see here.
    There are several good books out on JDO; if you're just starting out with
    it, they might save you a lot of time and help you master JDO quickly.
