Concurrency with sequence tables

Hi,
Is there any concurrency problem when using sequence tables instead of a real sequence?
[]s

In general there is no concurrency problem; however, when using a sequence table, TopLink allocates the sequence values in a separate transaction/connection (with a server session). There may be an issue when using external connection pools with JTS and no non-JTS pool, as the sequence table access will then take place inside the JTS transaction, which could reduce concurrency in this specific case.
If you are using Oracle, then native sequencing with sequence objects and preallocation through the sequence increment is suggested; it is a little more efficient and concurrent than table sequencing.
If you are using any other database, the sequence table with preallocation is suggested, as most other databases' native sequencing does not allow preallocation.
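For context, here is a minimal sketch of how table sequencing with preallocation behaves. The table and column names SEQUENCE, SEQ_NAME and SEQ_COUNT are TopLink's defaults, and the preallocation size of 50 is the default; all of these are configurable:

-- TopLink's default sequence table layout
CREATE TABLE SEQUENCE (
  SEQ_NAME  VARCHAR2(50) PRIMARY KEY,
  SEQ_COUNT NUMBER
);

-- Once per preallocation cycle, on the separate sequencing connection:
UPDATE SEQUENCE SET SEQ_COUNT = SEQ_COUNT + 50 WHERE SEQ_NAME = 'EMP_SEQ';
SELECT SEQ_COUNT FROM SEQUENCE WHERE SEQ_NAME = 'EMP_SEQ';
-- The 50 reserved ids are then handed out in memory with no further table access.

-- The Oracle native equivalent, matching the preallocation size:
CREATE SEQUENCE EMP_SEQ INCREMENT BY 50;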

Similar Messages

  • Update Row into Run Table Task is not executing in correct sequence in DAC

    Update Row into Run Table Task is not executing in the correct sequence in DAC.
    The task phase for this task is "Post Load". The depth in the execution plan is 19, but this task sometimes runs at depth 12, sometimes at 14, and sometimes at depth 16. I would like to know whether this sequence of execution is in the correct order or not. Out of the box, this task is executed at the end of the entire load. No errors were reported in the DAC log.
    Please let me know if there are any documents that highlight this issue.
    rm

    Update into Run Table is a task that's required to update a table called W_ETL_RUN_S. The whole intention of this table is to keep a poor man's run history on the warehouse itself. The actual run history is stored in the DAC runtime tables; however, the DAC repository could be on some other database/schema than the warehouse. It's mostly a legacy table that's being carried around. If one pays close attention to this task, it has phase dependencies defined that dictate when it should run.
    Apologies in advance for a lengthy post... but it sure might help in understanding how DAC behaves! And it is going to be essential for you to find the issues at hand.
    Dependency generation in DAC follows these rules of thumb:
    - It considers the source table and target table definitions of the tasks. With this information, tasks that write to a table take precedence over tasks that read from it.
    - It considers the phase information. With this information it is able to resolve some of the conflicts: should multiple tasks write to the same table, the phase is used to stagger them appropriately.
    - It considers the truncate table option. Should there be multiple tasks that write to the same table with the same phase information, the task that truncates the table takes precedence.
    - When more than one task that writes to the same table has similar properties, DAC will stagger them. However, if one feels that they can all go in parallel, or that a common truncate is desired prior to any of the tasks' execution, one could use a task group.
    - A task group is also handy when you suspect the application logic dictates cyclical reads and writes. For example, task 1 reads from A and writes to B, and task 2 reads from B and writes back to A. If these two tasks have different phases, DAC is able to figure that out and order them accordingly. If not (for example, if those tasks need to be of the same phase for some reason), one could create a task group as well.
    Now that I have described how dependency generation works: there may be some tasks that have no relevance to other tasks, either as source tables or as target tables. The update into run history is a classic example. The purpose of this task is to update the run information in W_ETL_RUN_S with status 'Completed' and an end timestamp. Because this needs to run at the end, it has a phase dependency defined on it. With this information DAC is able to stagger the position of execution either before (Block) or after (Wait) all the tasks belonging to a particular phase have completed.
    Now a description of depth. While depth gives an indication of the order of execution, it is only an indication of how the tasks may be executed; it is a reflection of how the dependencies have been discovered. Let me explain with an example. Tasks that have no dependencies get a depth of 0. Tasks that depend on one or more tasks of depth 0 get a depth of 1; tasks that depend on one or more tasks of depth 1 get a depth of 2, and so on. It also means that, implicitly, a task of depth 2 indirectly depends on a task of depth 0 through other tasks at depth 1. In essence, the dependencies translate to an execution graph, which is different from the batch structures one usually thinks of when it comes to ETL execution.
    Because DAC does runtime optimization of the order in which tasks are executed, it may pick a task of depth 1 over something else with a depth of 0. The factors considered for picking the next best task to run include:
    - The number of dependent tasks. For example, a task which has 10 dependents gets more priority than one that has a single dependent.
    - If all else is equal, it considers the number of source tables. For example, a task having 10 source tables gets more priority than one that has only two source tables.
    - If all else is equal, it considers the average time taken by each of the tasks. The longer-running ones get more preference than the quick-running ones.
    - And many other factors!
    And of course the dependencies are honored throughout the execution: unless all the predecessors of a task are in completed state, a task does not get picked for execution.
    Another way to think of this depth concept: if one were to execute one task at a time, this is probably the order in which the tasks would be executed.
    The depth can change depending on the number of tasks identified for the execution plan.
    The immediate predecessors and successors can be very valuable information to look at and should be used to validate the design; all predecessors and successors provide information to corroborate it even further. This can be accessed by clicking on any task and choosing the Details button; you will see all this information there. As an alternate method, you could also use the 'All/Immediate Predecessors' and 'All/Immediate Successors' tabs, which provide a flat view of the dependencies. Note that these tabs may have to retrieve a large amount of data, and hence will open in query mode.
    SUMMARY: Irrespective of the depth, validate
    - that this task has phase dependencies that span all the ETL phases and has a 'Wait' option;
    - by clicking on the particular task, that the task does not have any successors, and that its predecessors include all the tasks from all the phases it is supposed to wait for.
    Once you have inspected the above two, you should be good to go, no matter what the depth says!
    Hope this helps!

  • Oracle 10g performance degrades during concurrent inserts into a table

    Hello Team,
    I am trying to insert into a single table via multiple threads. Some of these threads perform reasonably well, but some take a really long time. As time goes on, performance drastically degrades (even down by 500 to 600 times). From the AWR report I see that there is quite a huge number of buffer gets, and I am not sure how I can reduce those. If I run this operation on a single thread, it is consistent.
    I tried quite a few options, like:
    1. Increasing SGA memory
    2. Moving redo logs to another disk drive
    3. Trying it on an empty table
    4. Trying it on a table which has huge data (4 million rows)
    5. I have even tried partitioning the table with the HASH algorithm
    Note: To each thread I am pumping an equal amount of data (say 25K rows).
    Can anybody suggest a clue as to what could be the culprit here?
    Thanks in Advance
    Satish Kumar Ballepu

    user11150696 wrote:
    Can you please guide me how do I do that, I am not aware of how to generate the explain plan for that query.

    Since you have the trace file already (and I don't mean the tkprof output), you could do the following:
    Read the trace file to find the statement you're interested in; the line above it will be a "PARSING" line, and will include a reference to the statement hash_value, looking like 'hv=3838377475845'.
    Use the hash_value to query v$sql to get the sql_id and child_number.
    Use the sql_id and child_number in a call to dbms_xplan.display_cursor:
    PARSING IN CURSOR #7 len=68 dep=0 uid=55 oct=3 lid=55 tim=448839952404 hv=3413100263 ad='2f6ede48'
    select ... etc.  (the statement I want the plan for)
    SQL> select sql_id, child_number from v$sql where hash_value = 3413100263;
    SQL_ID        CHILD_NUMBER
    053tyaz5qzjr7            0
    SQL> select * from table(dbms_xplan.display_cursor('053tyaz5qzjr7', 0));
    PLAN_TABLE_OUTPUT
    SQL_ID  053tyaz5qzjr7, child number 0
    select  /*+ use_concat */  small_vc from  t1 where  n1 = 1 or n2 = 2
    Plan hash value: 82564388
    | Id  | Operation                    | Name  | Rows  | Bytes | Cost  |
    |   0 | SELECT STATEMENT             |       |       |       |     4 |
    |   1 |  CONCATENATION               |       |       |       |       |
    |   2 |   TABLE ACCESS BY INDEX ROWID| T1    |    10 |   190 |     2 |
    |*  3 |    INDEX RANGE SCAN          | T1_N2 |    10 |       |     1 |
    |*  4 |   TABLE ACCESS BY INDEX ROWID| T1    |    10 |   190 |     2 |
    |*  5 |    INDEX RANGE SCAN          | T1_N1 |    10 |       |     1 |
    Predicate Information (identified by operation id):
       3 - access("N2"=2)
       4 - filter(LNNVL("N2"=2))
       5 - access("N1"=1)
    Note
       - cpu costing is off (consider enabling it)

    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "For every expert there is an equal and opposite expert."
    Arthur C. Clarke
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.

  • Inserting data into one table using another table

    Hi, I have 2 tables:
    CREATE TABLE N_SET (
      N_ST_ID  NUMBER(38) NOT NULL,    -- PK
      N_ST_NM  VARCHAR2(50 BYTE) NOT NULL,
      N_ST_DSC VARCHAR2(200 BYTE),
      DFTID    NUMBER                  -- FK
    );
    CREATE TABLE RZ (
      NST_ID NUMBER(38) NOT NULL,      -- FK
      RID    NUMBER(38) NOT NULL,      -- PK
      RNM    VARCHAR2(30 BYTE) NOT NULL
    );
    I entered the data into the N_SET table using a sequence for column N_ST_ID (using a procedure).
    Now I need to enter the data into the RZ table, where NST_ID should contain the value of N_SET.N_ST_ID.
    So for this I've written another procedure, but I am confused about how to write the select statement for the above condition.
    Could you help me with this please...

    Hi,
    I have a table Target whose structure is:
    create table employee (
      id           VARCHAR2(20),
      name         VARCHAR2(20),
      employee_seq NUMBER not null
    );
    -- Create sequence
    create sequence test_seq
    minvalue 1
    maxvalue 999999999999999999999999999
    start with 5
    increment by 1
    nocache
    cycle;
    create table emp (
      id   VARCHAR2(20),
      name VARCHAR2(20)
    );
    INSERT INTO emp
    ( id, name )
    VALUES ( '100','test1');
    commit;
    INSERT INTO emp
    ( id, name )
    VALUES ( '100','test2');
    commit;
    I have to insert into the target table (employee) the values from the source table (emp), along with the sequence number, using test_seq.nextval:
    INSERT INTO employee
    ( id, name, employee_seq )
    SELECT id, name, ( select test_seq.nextval from dual )
    FROM emp;
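    Applying the same idea to the original question, a minimal sketch (rz_seq is a hypothetical sequence for RZ's primary key, and using N_ST_NM for RNM is just an assumption):

    -- One RZ row per N_SET row; NST_ID carries the generated N_SET.N_ST_ID
    INSERT INTO rz (nst_id, rid, rnm)
    SELECT n.n_st_id, rz_seq.NEXTVAL, n.n_st_nm
    FROM n_set n;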

  • Difference between backend insert and front-end insert into a table

    I am developing a conversion program for tax exemptions. For this program only the ZX_EXEMPTIONS table is used to populate the data, and we got confirmation from Oracle regarding this. For inserting the data into this table we are taking the max of tax_exemption_id, which is the PK for this table, adding one to it, and inserting into the table. But the problem is that after inserting from the back end we are not able to insert from the front end.
    It seems the backend data is holding a tax_exemption_id which is supposed to be reserved by the front-end data. Please explain the different behavior of populating tax_exemption_id from the front end and the back end.

    Hi,
    I think the problem is that you are using max-value + 1 for tax_exemption_id. But as ZX_EXEMPTIONS is using the sequence ZX_EXEMPTIONS_S
    for primary key generation, you encounter a situation where you are increasing the PK id for this table without increasing the sequence value.
    When trying to insert rows from the front end, which probably uses the sequence value, it tries to use a sequence value already used by your backend
    process (which generated it by max-value + 1) and then encounters a primary key violation.
    I think you should use the sequence mentioned above to generate your PK ids in the backend process as well. And before doing so, check the current value
    of ZX_EXEMPTIONS_S, as you might need to rebuild the sequence in order to select nextval successfully for both front end and back end.
    Regards
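    To illustrate, a minimal sketch of checking and advancing the sequence (the gap of 1000 is an arbitrary assumption; size it to how far the sequence lags behind MAX(tax_exemption_id)):

    -- Compare the sequence with the highest key already used
    SELECT zx_exemptions_s.NEXTVAL FROM dual;
    SELECT MAX(tax_exemption_id) FROM zx_exemptions;
    -- If the sequence lags behind, bump it past the maximum, then restore the increment
    ALTER SEQUENCE zx_exemptions_s INCREMENT BY 1000;
    SELECT zx_exemptions_s.NEXTVAL FROM dual;
    ALTER SEQUENCE zx_exemptions_s INCREMENT BY 1;
    -- From then on, the backend process should use zx_exemptions_s.NEXTVAL instead of MAX + 1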

  • Error while Inserting data into flow table

    Hi All,
    I am very new to ODI and I am facing a lot of problems with my first interface. So I have many questions here; please forgive me if this irritates you.
    ========================
    I am developing a simple Project to load a data from an input source file (csv) file into a staging table.
    My plan is to achieve this in 3 interfaces:
    1. Interface-1 : Load the data from an input source (csv) file into a staging table (say Stg_1)
    2. Interface-2 : Read the data from the staging table (stg_1) apply the business rules to it and copy the processed records into another staging table (say stg_2)
    3. Interface-3 : Copy the data from staging table (stg_2) into the target table (say Target) in the target database.
    Question-1 : Is this approach correct?
    ========================
    I don't have any key columns in the staging table (stg_1). When I tried to execute the Flow Control of this I got an error:
    Flow Control not possible if no Key is declared in your Target Datastore
    Based on one of the responses in this forum (the response was "FLOW control requires a KEY in the target table"), I introduced a column called "Record_ID", made it a primary key column in my staging table (stg_1), and my problem was resolved.
    Question-2 : Is a key column compulsory in the target table? I am working with BO Data Integrator, where there is no such compulsion... I am a little confused.
    ========================
    Next, I defined one project-level sequence and mapped the newly introduced key column Record_ID (primary key) to it. Then I got another error: "CKM not selected".
    For this, I inserted the "Insert Check (CKM)" knowledge module into my project. With this, the above problem of "CKM not selected" was resolved.
    Question-3 : When is this CKM knowledge module required?
    ========================
    After this, the flow/interface fails while loading data into the intermediate, ODI-created flow table (I$):
    1 - Loading - SS_0 - Drop work table
    2 - Loading - SS_0 - Create work table
    3 - Loading - SS_0 - Load data
    5 - Integration - FTE Actual data to Staging table - Drop flow table
    6 - Integration - FTE Actual data to Staging table - Create flow table I$
    7 - Integration - FTE Actual data to Staging table - Delete target table
    8 - Integration - FTE Actual data to Staging table - Insert flow into I$ table
    The error is at step 8 above. When I opened the "Execution" tab for this step I found the message "Missing parameter Project_1.FTE_Actual_Data_seq_NEXTVAL RECORD_ID".
    Question-4 : What/why is this error? Did I make any mistake while creating the sequence?

    Everyone is new and starts somewhere. And the community is there to help you.
    1.) What is the idea of moving data to stg_1 and then to stg_2? Do you really need it for any purpose other than moving data from the source file to the target DB?
    Otherwise, it is simpler to move data from SourceFile -> Target Table.
    2.) Does your target table have a key?
    3.) CKM (Check KM) is required when you want to do constraint validation (checking) on your data. You can define constraints (business rules) on the target table, and Flow Control will check the data that is flowing from the source file to the target table using the CKM. All the records that do not satisfy the constraints will be added to E$ (the error table) and will not be added to the target table.
    4.) Try to avoid ODI sequences. They are slow and aren't scalable. Try to use a database sequence wherever possible, and use the DB sequence in the target mapping as
    <%=odiRef.getObjectName( "L" , "MY_DB_Sequence_Row" , "D" )%>.nextval
    where MY_DB_Sequence_Row is the Oracle sequence in the target schema.
    HTH

  • How to insert data from APEX form into two tables

    Hi,
    I'm running APEX 4.1 with Oracle XE 11g, with two tables, CERTIFICATES and USER_FILES. Some of the irrelevant fields are cut to reduce the information:
    CREATE TABLE CERTIFICATES (
      CERT_ID     NUMBER NOT NULL,
      CERT_OWNER  NUMBER NOT NULL,
      CERT_VENDOR NUMBER NOT NULL,
      CERT_NAME   VARCHAR2 (128),
      CERT_FILE   NUMBER NOT NULL
    ) TABLESPACE CP_DATA
    LOGGING;
    ALTER TABLE CERTIFICATES
    ADD CONSTRAINT CERTIFICATES_PK PRIMARY KEY ( CERT_ID );
    CREATE TABLE USER_FILES (
      FILE_ID          NUMBER NOT NULL,
      FILENAME         VARCHAR2 (128),
      BLOB_CONTENT     BLOB,
      MIMETYPE         VARCHAR2 (32),
      LAST_UPDATE_DATE DATE
    ) TABLESPACE CP_FILES
    LOGGING
    LOB ( BLOB_CONTENT ) STORE AS SECUREFILE (
      TABLESPACE CP_FILES
      STORAGE (
        PCTINCREASE 0
        MINEXTENTS 1
        MAXEXTENTS UNLIMITED
        FREELISTS 1
        BUFFER_POOL DEFAULT
      )
      RETENTION
      ENABLE STORAGE IN ROW
      NOCACHE
    );
    ALTER TABLE USER_FILES
    ADD CONSTRAINT CERT_FILES_PK PRIMARY KEY ( FILE_ID );
    ALTER TABLE CERTIFICATES
    ADD CONSTRAINT CERTIFICATES_USER_FILES_FK FOREIGN KEY ( CERT_FILE )
    REFERENCES USER_FILES ( FILE_ID )
    NOT DEFERRABLE;
    What I'm trying to do is to allow users to fill out all the certificate data and upload a file in an APEX form. Once submitted, the file should be uploaded into the USER_FILES table, and all the fields, along with CERT_FILE (the foreign key pointing to the file in the USER_FILES table), should be populated into the CERTIFICATES table. APEX wizard forms are based on one table, and I'm unable to build a form on both tables.
    That's why I've created a view (V_CERT_FILES) over both tables and am using an INSTEAD OF trigger to insert into/update both tables. I've done this before, and updating this kind of view works perfectly. Here is where the problem comes: if I'm updating the view, all the data is updated correctly; but if I'm inserting into the view, all the fields are populated in the CERTIFICATES table, while in USER_FILES only the fields FILE_ID and LAST_UPDATE_DATE are populated. The other three, regarding the LOB, are missing: BLOB_CONTENT, FILENAME, MIMETYPE. There are no errors when running this from APEX, but if I try to insert into the view from SQL Developer, I get this error:
    ORA-22816: unsupported feature with RETURNING clause
    ORA-06512: at line 1
    As far as I know the RETURNING clause is not supported with INSTEAD OF triggers, although I don't have any RETURNING clause in my trigger (body is below).
    Now the interesting stuff; after long tracing I found out why this is happening:
    First, the insert is executed, and the BLOB along with all its properties is uploaded to wwv_flow_file_objects$.
    Then the following insert is executed to populate all the fields except the BLOB and its properties; the rowid is RETURNED, but as we know the RETURNING clause is not supported with INSTEAD OF triggers, which is why I get the error:
    PARSE ERROR #1918608720:len=266 dep=3 uid=48 oct=2 lid=48 tim=1324569863593494 err=22816
    INSERT INTO "SVE". "V_CERT_FILES" ( "CERT_ID", "CERT_OWNER", "CERT_VENDOR", "CERT_NAME", "BLOB_CONTENT") VALUES (:B1 ,:B2 ,:B3 ,:B4, ,EMPTY_BLOB()) RETURNING ROWID INTO :O0
    CLOSE #1918608720:c=0,e=11,dep=3,type=0,tim=1324569863593909
    EXEC #1820672032:c=3000,e=3168,p=0,cr=2,cu=4,mis=0,r=0,dep=2,og=1,plh=0,tim=1324569863593969
    ERROR #43:err=22816 tim=1324569863593993
    CLOSE #1820672032:c=0,e=43,dep=2,type=1,tim=1324569863594167
    Next my trigger gets into action: sequences are generated, the CERTIFICATES table is populated, and then USER_FILES, but only with FILE_ID and LAST_UPDATE_DATE.
    Finally, an update is fired against my view (V_CERT_FILES): reading the data from wwv_flow_files, it populates the BLOB_CONTENT, MIMETYPE and FILENAME fields at the specific rowid in V_CERT_FILES, the one returned from the insert at the beginning. Last, the file is deleted from wwv_flow_files.
    I'm using sequences for the primary keys; this is only the body of the INSTEAD OF trigger:
    select user_files_seq.nextval into l_file_id from dual;
    select certificates_seq.nextval into l_cert_id from dual;
    insert into user_files (file_id, filename, blob_content, mimetype, last_update_date) values (l_file_id, :n.filename, :n.blob_content, :n.mimetype, sysdate);
    insert into certificates (cert_id, cert_owner, cert_vendor, cert_name, cert_file) values (l_cert_id, :n.cert_owner, :n.cert_vendor, :n.cert_name, l_file_id);
    I'm surprised that I wasn't able to find a valuable source of information regarding this problem, only a MOS note about running SQL*Loader against a view with a CLOB column and an INSTEAD OF trigger; the solution there was to run it against the base table instead (MOS ID 795956.1).
    Maybe I'm missing something, and that's why I decided to share my problem here. So my question is: how do you build this kind of architecture, an insert into two related tables, in APEX? I have read a lot on the Internet; some advice was to create a custom form with the APEX API, create a custom ARP, create two ARPs, or create a PL/SQL procedure for handling the DML.
    Thanks in advance.
    Regards,
    Sve

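    A common alternative to the view-plus-trigger approach is a single PL/SQL page process that performs both inserts itself, so no RETURNING clause against a view is ever needed. A minimal sketch, reusing the sequences from the trigger above; the page item names P1_FILE (the file browse item), P1_CERT_OWNER, P1_CERT_VENDOR and P1_CERT_NAME are hypothetical:

    DECLARE
      l_file_id NUMBER;
    BEGIN
      l_file_id := user_files_seq.NEXTVAL;  -- 11g allows NEXTVAL directly in PL/SQL
      -- copy the uploaded file out of wwv_flow_files into USER_FILES
      INSERT INTO user_files (file_id, filename, blob_content, mimetype, last_update_date)
      SELECT l_file_id, filename, blob_content, mime_type, SYSDATE
      FROM wwv_flow_files
      WHERE name = :P1_FILE;
      -- then the certificate row, pointing at the file just created
      INSERT INTO certificates (cert_id, cert_owner, cert_vendor, cert_name, cert_file)
      VALUES (certificates_seq.NEXTVAL, :P1_CERT_OWNER, :P1_CERT_VENDOR, :P1_CERT_NAME, l_file_id);
      -- clean up the APEX upload table
      DELETE FROM wwv_flow_files WHERE name = :P1_FILE;
    END;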

  • R12 - Approach to Insert into custom table after payment is done

    Hi,
    I have a requirement to insert into a custom table after invoice payment in R12. Code units are already in place to load data from the AP tables into the custom one.
    I have identified 2 ways in which this can be done:
    1. Call from the business event oracle.apps.ap.payment
    2. Call after the program 'Send Separate Remittance Advice' completes, again as a business event after the concurrent program completes, i.e. using the 'Request Completed' business event.
    My questions are as below:
    1. Is there any other way in which I can write to a custom table after a payment is done?
    2. How else can an after-report trigger be fired for SRA in R12 (without customizing the standard SRA Java concurrent program)?
    3. When exactly will the business event 'oracle.apps.ap.payment' be fired? Will it be fired for all payments irrespective of how the payment is made, e.g. through PPR, the payment workbench, etc.?
    Please share your thoughts on this.
    Thanks,
    Kavipriya

    In our case, we created a database trigger on IBY_PAY_INSTRUCTIONS_ALL to insert the requisite data into the custom tables whenever the payment status is "Ready for Printing".
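    A rough sketch of such a trigger (the status column/value and the custom table xx_payment_extract are assumptions; check the actual names and status codes in your instance):

    CREATE OR REPLACE TRIGGER xx_iby_pay_instr_aru
    AFTER UPDATE OF payment_instruction_status ON iby_pay_instructions_all
    FOR EACH ROW
    WHEN (NEW.payment_instruction_status = 'READY_FOR_PRINTING')  -- assumed status code
    BEGIN
      -- xx_payment_extract is a hypothetical custom table
      INSERT INTO xx_payment_extract (payment_instruction_id, creation_date)
      VALUES (:NEW.payment_instruction_id, SYSDATE);
    END;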

  • Help in inserting rows into a table

    I have a table called acct_fact,
    I need to insert rows into the table using a script, but the problem is there's a column called seq_nbr which holds random sequence numbers of 14-character length, like 'ZWX98MGD9MVAD6J', 'ZWX98MG67RVAD6J', etc.
    While inserting rows I need to generate such a seq_nbr for that column; is there any mechanism I can use in my insert query to generate such random numbers while inserting rows into the table?
    If so, please advise.

    Hi Peter,
    Thank you for the quick reply :)
    Can you suggest how to implement it here in my script snippet:
    while read var_acct_nbr
    do
    echo "update acct_attr set acct_attr_exp_dt ='$ExpDate' where Acct_Attr_Value_Text='15' and acct_attr_exp_dt is null and person_id='LDCarrBillAgrm' and acct_nbr='$var_acct_nbr' ;" >> ./$DirectoryName/SQLQuery_$TimeStamp.sql
    echo "insert into acct_fact values ('$var_acct_nbr','$ExpDate','$ExpTime','*seq_nbr*','N','ProjTereza','Remoção de acordo d; data de expiração: $ExpDate',null,'1','LDE',null);" >> ./$DirectoryName/SQLQuery_$TimeStamp.sql
    done < ./$DirectoryName/ExpireAccts_$TimeStamp.LOG
    The script takes each acct_nbr from an input file and fires an insert statement.
    The one in bold (*seq_nbr*) is the column where such a sequence needs to be inserted. Can you help me implement what you suggested in my script, i.e. in the insert statement?
    Thanks in advance :)
    Edited by: rkrish on Jun 27, 2012 3:04 AM
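    For reference, Oracle can generate such values with DBMS_RANDOM; a sketch (DBMS_RANDOM does not guarantee uniqueness, so a unique constraint on seq_nbr plus a retry on collision is advisable):

    -- 14-character random string of uppercase letters and digits
    SELECT DBMS_RANDOM.STRING('X', 14) FROM dual;
    -- so in the generated insert statement, the bold placeholder could become:
    --   ..., DBMS_RANDOM.STRING('X', 14), ...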

  • Insertion into a Table

    Hi,
    I was asked this question and I didn't know the answer. Please help me understand this situation if anybody knows.
    I have written a procedure to do some calculations on the data in a table which has a million records. I know that the procedure will take an hour or so to complete.
    In the meantime, somebody else inserts new data into the table (while my procedure is executing), say another million records, and commits too.
    What will happen then? What will the number of records in the output be: 1 million, 2 million, or an error? I don't know what happens in this scenario.
    Please explain if anybody knows.
    Indira

    Hi,
    This is called "Data Concurrency and Consistency"; it is the mechanism Oracle uses to take care of the issue you describe. Because of Oracle's multiversion read consistency, each query sees the data as it existed when that query began, so your procedure's query will see the original 1 million rows and none of the rows committed after it started.
    The Oracle doc below will help you understand it:
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96524/c21cnsis.htm
    Thanks,
    JD
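    A small two-session sketch of this behavior (big_table is a hypothetical table; run the snippets in SQL*Plus):

    -- Session 1: open a cursor; its result set is frozen as of this moment
    VARIABLE c REFCURSOR
    BEGIN
      OPEN :c FOR SELECT COUNT(*) FROM big_table;
    END;
    /
    -- Session 2: insert and commit more rows while session 1's cursor is open
    INSERT INTO big_table SELECT * FROM big_table;
    COMMIT;
    -- Session 1: still sees the original row count, not the newly committed rows
    PRINT c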

  • Upload .txt file into Database Table

    Hi,
    I was wondering if someone could please point me in the right direction. I've been looking around the forum but can't find anything to help me achieve the following.
    I would like to be able to upload a .txt file using a webpage and then store the information inside this file in database tables.
    eg. contents of mytextfile.txt:
    richard
    10 anywhere street, anytown, somewhere
    111 222 333 444
    joe
    9 somestreet, elsewhere
    999 888 777 666
    peter
    214 nearby lane, overhere
    555 555 555 555
    I would like to insert this data into a table.
    eg. table name = CONTACTS
    userid = primary key (using sequence)
    username = (line 1 - richard, joe, peter)
    address = (line 2)
    phone_no = (line 3)
    As you can see, the records appear one at a time with a blank line between records. Is there any way for me to upload a file like this and have it placed into tables?
    I have seen http://otn.oracle.com/products/database/htmldb/howtos/howto_file_upload.html, but this seems to be about uploading a whole file and downloading the same file, rather than extracting data from the file.
    I hope I have managed to explain my problem.
    Many thanks,
    Richard.

    Richard,
    HTML DB allows you to upload CSV files via the Data Workshop. The data is then parsed and inserted into a specific table. Alternatively, if you have your data in an Excel spreadsheet and it is less than 32k, you can copy & paste the data directly into HTML DB's Data Workshop, which will then parse and import it into the Oracle database.
    The one obstacle you may have to overcome is converting your data from the format you outlined into CSV format. Specifically, you would have to make this:
    richard
    10 anywhere street, anytown, somewhere
    111 222 333 444
    Look something like this:
    "richard","10 anywhere street, anytown, somewhere","111 222 333 444"
    Hope this helps,
    - Scott -
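    Alternatively, if you load the raw lines into a staging table first (one row per line, with a line number), the pivot to one record per contact can be done in SQL itself. A sketch, assuming a hypothetical staging table raw_lines(line_no, txt) and a sequence contacts_seq:

    INSERT INTO contacts (userid, username, address, phone_no)
    SELECT contacts_seq.NEXTVAL, username, address, phone_no
    FROM (
      SELECT MAX(CASE WHEN rn = 1 THEN txt END) AS username,
             MAX(CASE WHEN rn = 2 THEN txt END) AS address,
             MAX(CASE WHEN rn = 3 THEN txt END) AS phone_no
      FROM (
        -- blank lines separate records: count the blanks so far to form a group key
        SELECT txt,
               ROW_NUMBER() OVER (PARTITION BY grp ORDER BY line_no) AS rn,
               grp
        FROM (SELECT line_no, txt,
                     SUM(CASE WHEN TRIM(txt) IS NULL THEN 1 ELSE 0 END)
                       OVER (ORDER BY line_no) AS grp
              FROM raw_lines)
        WHERE TRIM(txt) IS NOT NULL
      )
      GROUP BY grp
    );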

  • WebLogic/EclipseLink: sequence table connection pool using a separate transaction while JTA manages the main transaction

    Hi,
    And thanks in advance for your support.
    In WebLogic 12, getting the EclipseLink connection sequencing mechanism to work when one uses tables for sequencing entity ids seems to be complicated.
    QUICK REFERENCE:
    http://www.eclipse.org/eclipselink/api/2.5/org/eclipse/persistence/config/PersistenceUnitProperties.html
    The concept:
    While having EJBs, MDBs, etc. run on a JEE container, be it GlassFish or WebLogic, it should be possible to have the main thread's transaction be managed as part of JTA global transactions by the container.
    Namely, pumping messages to JMS queues, persisting entities, etc.
    Meanwhile, it should also be possible, while that transaction is ongoing, to read and update entity ids from sequencing tables.
    For this very purpose, EclipseLink provides persistence.xml properties, such as the now deprecated "eclipselink.jdbc.sequence-connection-pool"="true".
    This option greatly helps to avoid deadlocks, by allowing EclipseLink to fetch a non-JTA-managed connection and run its pseudo two-phase "lock, read table, update table" sequence allocation against the database on that connection.
    The same mechanism under JTA is a disaster: a transaction that creates ten different entities might do ten reads and updates on this table, while meanwhile a competing transaction might be trying to do the same. It is a guaranteed deadlock with minimal stress on the environment.
    Under glassfish, for example, tagging a persistence.xml with :
    <persistence-unit name="MY_PU" transaction-type="JTA">
            <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
            <jta-data-source>jdbc/DERBY_DS</jta-data-source>
            <non-jta-data-source>jdbc/DERBY_DS</non-jta-data-source>       
            <properties>           
                <property name="eclipselink.jdbc.sequence-connection-pool" value="true" />
            </properties>
    </persistence-unit>
    does miracles, when entities are using TABLE sequencing.
    Under WebLogic, say you are using the Derby embedded XA driver with two-phase commit; deploying the application immediately leads to:
    Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.3.v20120629-r11760): org.eclipse.persistence.exceptions.DatabaseException
    Internal Exception: java.sql.SQLException: Cannot call commit when using distributed transactions
    Error Code: 0
      at org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:324)
      at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicCommitTransaction(DatabaseAccessor.java:426)
      at org.eclipse.persistence.internal.databaseaccess.DatasourceAccessor.commitTransaction(DatasourceAccessor.java:389)
      at org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.commitTransaction(DatabaseAccessor.java:409)
      at org.eclipse.persistence.internal.sequencing.SequencingManager$Preallocation_Transaction_Accessor_State.getNextValue(SequencingManager.java:579)
      at org.eclipse.persistence.internal.sequencing.SequencingManager.getNextValue(SequencingManager.java:1067)
      at org.eclipse.persistence.internal.sequencing.ClientSessionSequencing.getNextValue(ClientSessionSequencing.java:70)
      at org.eclipse.persi
    While WebLogic is right that there might be a distributed transaction ongoing, it is mistaken in assuming that the connection requested by EclipseLink for generating the id should be part of the global transaction.
    EclipseLink provides other ways to attempt to configure the sequencing mechanism, for example by specifying a non-JTA data source.
    I have also attempted to use these properties, both with the original DERBY_DS data source that uses the XA driver, and later with a new data source I created on purpose to try to work around the sequencing contingency.
    For example:
    <!--property name="eclipselink.jdbc.sequence-connection-pool.nonJtaDataSource" value="jdbc/DERBY_SEQUENCING_NON_JTA" /-->
                <!--property name="eclipselink.connection-pool.sequence.nonJtaDataSource" value="jdbc/DERBY_SEQUENCING_NON_JTA" /-->
    This new DERBY_SEQUENCING_NON_JTA is explicitly configured to use a non-XA driver with the global transactions flag disabled.
    Regardless, the only thing I get out of this is that the application is deployed and super fast, up to the point where I stress it with a system test that introduces some degree of concurrency, and then I see the deadlocks on the sequencing table.
    Meaning that the ongoing transactions are holding tight to their locks on the sequencing table.
    Is this a known issue?
    Is there something I am missing in the configuration?
    It really should not be this difficult to get EclipseLink to run its sequence reads and updates in a transaction separate from the main JTA transaction, but so far it looks impossible.
    Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.3.3.v20120629-r11760): org.eclipse.persistence.exceptions.DatabaseException
    Internal Exception: java.sql.SQLTransactionRollbackException: A lock could not be obtained within the time requested
    Error Code: 30000
    Call: UPDATE ID_GEN SET SEQ_VALUE = SEQ_VALUE + ? WHERE SEQ_NAME = ?
      bind => [2 parameters bound]
    Query: DataModifyQuery(name="MyEntity_Gen" sql="UPDATE ID_GEN SET SEQ_VALUE = SEQ_VALUE + ? WHERE SEQ_NAME = ?")
    Many thanks for your help.

    Are you calling the CMP bean code and your new SQL code under the same transactional context?
    The following setting
    "rollbackLocalTxUponConnClose=true"
    will make the connection pool call the rollback method on the connection object before putting it back in the pool. In your SQL code, if you are calling connection.close(), then your entire transaction will be rolled back.
    A CMP bean requires a transactional connection while communicating with the database.
    What is the sequence of code execution?
    I think you must be calling the SQL code first and the CMP bean code later.
    You may avoid this problem in this way (this is my guess, based on my understanding of your code execution):
    1. Set rollbackLocalTxUponConnClose=false
    2. Execute the SQL code and the CMP code in a single transaction (in a single session bean method with a CMT or BMT transaction). Specify tx.rollback if it is BMT, or call tx.setRollbackOnly() if it is CMT. In this way you will have control over rolling back the transactions.
    Hope this helps you.
    BMT -> bean-managed transaction
    CMT -> container-managed transaction
    Regards,
    Seshi.

  • Link and insert into 2 tables

    Hi Everyone,
    I am building an application that contains information about helpdesk calls. I am using 2 tables:
    Table 1 contains tracking info- TRK_CALLS
    ID -primary key
    USER_
    ASSIGNED_TO
    PROBLEM
    SOLUTION
    STATUS
    EDIT
    Table 2 contain date and time info - TRKCALLS_TIME
    ID - primary key
    CALL_ID - same number as ID in table 1
    DATE_
    TIME
    I have taken the advice that Denes Kubicek gave another poster, created a workspace at apex.oracle.com, and placed my app there for others to look at:
    workspace: kjwebb
    username: [email protected]
    password: gtmuc
    application: calltracking2
    I have a report called create/edit call tracking in there, from which I can either edit or create an entry in TRK_CALLS. Clicking create takes me to a form; after the info is entered, I have a create button that assigns the PK and inserts the info into TRK_CALLS. I then have to click the Edit Call Time button to input info on a form that inserts into the TRKCALLS_TIME table. I would like to link these tables somehow, so that when I go to the (Edit Call Time) form, the Call ID is populated with the PK ID from the TRK_CALLS table.
    It would be easier to insert this info all on one page, but I worked on that for a long time before giving up, because I could not get anything to insert into the tables, so I have taken this route.
    The basic desired outcome is to tie the tables together with a PK, ID in table 1 and CALL_ID in table 2, so that they correspond and are displayed on the report page.
    Please help in any way you can and make changes to my app.
    I would not be asking for help unless I had reached the end of my APEX knowledge.
    Thanks and please let me know if there are any questions,
    Kirk

    I can imagine it is pretty obscure when your knowledge of PL/SQL is not (yet) so big.
    The statements I wrote are meant exactly for your situation.
    OK, here we go:
    First you have created a view in the Object Browser. Suppose it is called trkcalls_view.
    Then you go to SQL Workshop > SQL Commands.
    You cut the next statement and paste it in the upper white part of the screen, just under the autocommit checkbox. Replace the sequence references sequence1 and sequence2 by the real names of the sequences that are used to populate the IDs of the two tables.
    You say Run and the trigger is created.
    A trigger on the view is created. Creation of such a trigger is not possible in the Object Browser, so I understand your confusion. This trigger fires when an insert into the view is performed. As you can see in the code, it creates separate insert statements for both tables.
    CREATE OR REPLACE TRIGGER bi_trkcalls_view
    INSTEAD OF INSERT ON trkcalls_view
    REFERENCING NEW AS NEW OLD AS OLD
    FOR EACH ROW
    DECLARE
      v_id  NUMBER;
      v_id2 NUMBER;
    BEGIN
      SELECT sequence1.NEXTVAL INTO v_id FROM dual;
      SELECT sequence2.NEXTVAL INTO v_id2 FROM dual;
      -- assumes the view exposes the columns of both base tables,
      -- with DATE_ and TIME combined into a single date_time column
      INSERT INTO trk_calls
        ( id
        , user_
        , assigned_to
        , problem
        , solution
        , status
        , edit
        )
      VALUES
        ( v_id
        , :new.user_
        , :new.assigned_to
        , :new.problem
        , :new.solution
        , :new.status
        , :new.edit
        );
      INSERT INTO trkcalls_time
        ( id
        , call_id
        , date_time
        )
      VALUES
        ( v_id2
        , v_id
        , :new.date_time
        );
    END;

    good luck,
    DickDral

  • Selective load into setup table

    Hello,
    Is there any way we can load multiple selections into the setup table in OLI7BW? Currently I am only able to load ranges of values, but I have a list of almost 2000 records and they are not in sequence. Loading them one by one requires too much effort, and loading ranges would load millions of records.
    Please advice.
    Regards,
    Tints

    Hi Tints,
    Pick all 2000 orders and check the CREATED ON field in the VBAK table.
    Download all 2000 orders into Excel and put a filter on the created-on value, then note down all the created-on dates.
    Now you can load the orders based on the created-on date. Make sure that you are loading the data into a DSO, as that avoids duplication.
    Regards
    SujanR

  • Trying to insert CLOB data into Remote Table..

    Hi everyone,
    I think this question has already been posted, but I was not able to figure out this problem.
    What I am trying to do: I have a table in the remote database with a CLOB column, like this:
    REMOTE_TABLE
    ============
    REMOTE_TABLE_ID (populated with a sequence)
    REMOTE_CLOB CLOB
    In my local database I have to write a procedure to gather some information on a particular record (my requirement) and save that CLOB in the REMOTE_TABLE.
    I built that procedure like this:
    declare
      var_clob CLOB; /* I need to process several records and keep all the data in a clob */
    begin
      /* processed several records in the local database and stored the result in var_clob,
         which I need to insert into the remote database */
      insert into remote_table@remote values (remote_table_seq.nextval, var_clob);
      /* when I try to execute the above command I get the following error:
         ORA-06550: line 6, column 105:
         PL/SQL: ORA-22992: cannot use LOB locators selected from remote tables
         ORA-06550: line 6, column 1:
         PL/SQL: SQL Statement ignored */
      /* For a test I created the same table in the local db: */
      insert into local_table values (local_table_seq.nextval, var_clob);
    end;
    This works fine and I am able to see the entire CLOB I want.
    Surprisingly, if I pass a literal value instead of a variable to the remote table, like the following:
    insert into remote_table@remote values (remote_table_seq.nextval, 'Hiiiiiiiii');
    it works fine.
    I tried the following too:
    declare
      var_clob clob;
    begin
      var_clob := 'Hiiiiiiiiiiiiiii';
      insert into remote_table@remote (remote_table_id) values (1);
      commit;
      update remote_table@remote set remote_clob = var_clob where remote_table_id = 1;
      commit;
    end;
    I am getting the following error:
    ORA-22922: nonexistent LOB value
    ORA-02063: preceding line from CARDIO
    ORA-06512: at line 6
    Could someone please help me fix this issue? I need to gather all the data into a variable like var_clob and insert that CLOB into the remote table.
    Thanks in advance..
    phani

    Go to http://asktom.oracle.com and search for "clob remote table".
    The docs also contain quite a lot of info:
    http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14249/adlob_working.htm#sthref97
    Gints Plivna
    http://www.gplivna.eu
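    One common workaround, since selecting a LOB from a remote table is supported while binding a local LOB locator into a remote insert is not, is to stage the CLOB locally and run the insert on the remote database, pulling the data across a link pointing back. A sketch (clob_staging and the link_back database link are hypothetical, and remote_table_seq is assumed to live on the remote database):

    -- On the LOCAL database: stage the generated CLOB
    CREATE TABLE clob_staging (id NUMBER PRIMARY KEY, payload CLOB);

    -- On the REMOTE database: pull the staged CLOB across the link
    INSERT INTO remote_table (remote_table_id, remote_clob)
    SELECT remote_table_seq.NEXTVAL, s.payload
    FROM clob_staging@link_back s;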
