BAPI_RE_CN_CHANGE doesn't commit
Hi,
When I use BAPI_RE_CN_CHANGE it doesn't work and no change is made to the contract.
Can anyone help me?
Thanks
BR,
Ahmed
Hi
I hope you are calling 'BAPI_TRANSACTION_COMMIT' in your program after the BAPI. If not, you should.
Regards
Vinay
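For reference, a minimal ABAP sketch of the pattern Vinay describes. The BAPI's importing parameters are elided here (they depend on the contract being changed); only the standard RETURN table and the commit/rollback BAPIs are shown:

```abap
DATA: lt_return TYPE TABLE OF bapiret2.

CALL FUNCTION 'BAPI_RE_CN_CHANGE'
  EXPORTING
*   ... contract key and change data ...
  TABLES
    return = lt_return.

" Commit only if the BAPI reported no errors
READ TABLE lt_return TRANSPORTING NO FIELDS WITH KEY type = 'E'.
IF sy-subrc <> 0.
  CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
    EXPORTING
      wait = 'X'.   " wait until the update task has finished
ELSE.
  CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
ENDIF.
```

Without the explicit BAPI_TRANSACTION_COMMIT, the BAPI's changes stay in the SAP update buffer and are discarded when the program ends.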
Similar Messages
-
Submit button doesn't commit field data to database
Hello,
I am working to fix a problem in APEX. The submit button doesn't work properly: when the form is running, I select a row in the tabular form, change the value in the independent field to Y, and click the submit button. I get the message "the database has been updated", but when I refresh the report I still don't see the field updated with the Y value. It seems the submit button doesn't commit the value to the database. Can someone help? What do I do?
Hi,
I think it is an IE issue. -
Hi,
What exactly does Rollback do? I know that COMMIT applies the changes to the database system. The thing is, I've read that until commit is applied, the database in permanent storage is unchanged. So what exactly does a ROLLBACK
"roll back"? Does it rollback the changes made to the data tables that have been loaded into RAM?
Thanks
Joe
So what you're saying is that even though I'm in transaction mode and haven't done a commit or rollback, the database on the hard disk can still be affected by the transaction.
So when I do a COMMIT, then all remaining changes in memory are applied to the database and when I ROLLBACK, any changes done both to the database and the tables in RAM are undone?
Do I understand this right?
Thanks!
Yes, when you are in a transaction and you make a change to a database page, before SQL makes that change, it, of course, must have that page in memory. The change is then made in memory. But if there is a lot of memory pressure, that page in
memory can be written to disk even though the transaction is still open.
When you do a commit, the rows are unlocked and available to other processes, but if the changed rows are in memory and not yet written to disk, they are not necessarily immediately written to disk. They may stay in memory and be written to disk at
a later time. They will be written back to disk either when SQL needs that memory for something else, or SQL has some unused I/O time that is not needed for other processes, or a CHECKPOINT occurs. In essence, a checkpoint forces all committed
changes (CORRECTION, thanks Shanky_621: all changes, whether committed or not; the rules for tempdb are a little different) to be written to disk. You may want to see
http://technet.microsoft.com/en-us/library/ms189573.aspx for a description of checkpoints.
What is guaranteed to be written to disk when you do a commit is the log records needed to reproduce the change. And if an uncommitted change is written to disk, that change is also logged before it is written to the database.
The reason for that is when your system unexpectedly crashes, there can be uncommitted changes that have been written to disk and committed changes that were in memory and not on disk. If this happens, the next time you bring up SQL Server it examines
the log. Any uncommitted change that got written to disk is reversed and any committed change which was never written to disk is applied to the disk. That allows SQL Server to insure that every committed change is on the disk and only committed
changes are on disk.
All of this is, of course, complex. But there is a reason for it. Usually, the bottleneck in database systems is the disk I/O system, so SQL Server has a lot of complex algorithms to make disk I/O as efficient as possible. The good
news is that in most cases you can ignore all of this. Most of it you have no control over. There are some things you can tweak. The primary one is the amount of memory you let SQL Server use. Also, you can change the CHECKPOINT interval
(which essentially lets you make tradeoffs between how efficient normal processing is vs. the length of time it can take to recover after a system failure). But the default value for the CHECKPOINT interval works fine in most cases.
Tom -
Calling nested function to perform DML and wait for commit
Hi everybody,
I would like to do some DML -- an insert statement, to be specific -- in a function and have the function then return the numeric key of the newly added row. I call this function from another context and would then be able to use the newly added data to do some stuff.
More specifically, what I am going to do is this: I have a graph consisting of source, destination and distance triplets in a table. A user should now be able to
1.) add a node 'A' to the graph,
2.) add a node 'B' to the graph
3.) get the shortest path from A to B through the graph.
I have an inner function:
function INSERT_NEW_NODE(node_in in sdo_geometry, graph_in in integer) return integer
is
pragma autonomous_transaction;
cursor node_cur is
select
source,
source_geom
from graph;
cursor edge_cur is
select
source,
destination,
distance,
edge_geom
from
graph
where
sdo_geom.relate(edge_geom, 'anyinteract', node_in, .005) = 'TRUE';
begin
-- check if identical with any existing node
for node_rec in node_cur loop
if sdo_geom.relate(node_rec.source_geom, 'EQUAL', node_in, .005) = 'EQUAL' then
return node_rec.source;
end if;
end loop;
-- get edges
for edge_rec in edge_cur loop
-- new_node-->edge.destination and vice versa
insert into graph (
ID,
GRAPH,
SOURCE,
DESTINATION,
DISTANCE,
SOURCE_GEOM,
DESTINATION_GEOM,
EDGE_GEOM
) values (
graph_id_seq.nextval, -- id
graph_in, -- graph
morton(node_in.sdo_point.x, node_in.sdo_point.y), -- source morton key
edge_rec.source, -- destination morton key
sdo_geom.sdo_distance(edge_rec.source_geom, node_in, .005, 'unit=M'), -- distance
node_in, -- source geom
edge_rec.source_geom, -- dest geom
split_line(edge_rec.edge_geom, node_in).segment1 -- edge geom
);
commit;
--new_node-->edge.source and vice versa
insert into gl_graph (
ID,
GRAPH,
SOURCE,
DESTINATION,
DISTANCE,
SOURCE_GEOM,
DESTINATION_GEOM,
EDGE_GEOM
) values (
graph_id_seq.nextval, -- id
graph_in, -- graph
edge_rec.source, -- source morton key
morton(node_in.sdo_point.x, node_in.sdo_point.y), -- destination morton key
sdo_geom.sdo_distance(edge_rec.source_geom, node_in, .005, 'unit=M'), -- distance
edge_rec.source_geom, -- source geom
node_in, -- dest geom
split_line(edge_rec.edge_geom, node_in).segment2 -- edge geom
);
commit;
end loop;
return(morton(node_in.sdo_point.x, node_in.sdo_point.y));
end insert_new_node;
This function adds the new nodes to the graph, connects them, calculates distances, etc., and returns a handle to the newly added node. I call this function twice from another, outer function:
function get_path (line_in in sdo_geometry, graph_in in integer) return sdo_geometry
is
source number;
destination number;
source_geom mdsys.sdo_geometry;
destination_geom mdsys.sdo_geometry;
begin
source := insert_new_node(get_firstvertex(line_in), graph_in);
destination := insert_new_node(get_lastvertex(line_in), graph_in);
-- source := insert_new_node(get_firstvertex(line_in), graph_in);
-- destination := insert_new_node(get_lastvertex(line_in), graph_in);
return(get_path_geom(source, destination)); --returns a geometry which is the shortest path between source and destination
end get_path;
I think I have to use an autonomous transaction in the inner function so that the outer function can see any change performed by the inner one. However, this only works when I call the inner function twice (i.e. remove the comment signs in front of the last two lines of code right before the return statement in the outer function).
So here are my questions: 1.) Why do I have to call the function twice to see the transaction complete? 2.) How can I avoid that? Is there a way to wait with the execution of the return statement in the inner function until the insert is committed and can be seen by the outer function?
Cheers!
Thanks, everybody, for your replies! Let me go through them one by one.
smon asked: if you remove the pragma statement, does it work then? No, it does not; at least not if I call the function from the outer function. In that case the insert statements in the inner function are not committed.
If I call the inner function like this:
DECLARE
NODE_IN SDO_GEOMETRY;
GRAPH_IN NUMBER;
v_Return NUMBER;
BEGIN
NODE_IN := MDSYS.SDO_GEOMETRY(2001,<srid>,MDSYS.SDO_POINT_TYPE(<x>,<y>,<z>),NULL,NULL);
GRAPH_IN := 3;
v_Return := INSERT_NEW_NODE(
NODE_IN => NODE_IN,
GRAPH_IN => GRAPH_IN
);
DBMS_OUTPUT.PUT_LINE('v_Return = ' || v_Return);
:v_Return := v_Return;
END;
it works without an autonomous transaction. But then again, called like this I do not use the handle to access the newly inserted data immediately to perform some other task with it.
sb92075 said: COMMIT inside a LOOP is a sub-optimal implementation (it increases elapsed time) and can result in an ORA-01555 error. Thanks, that was very helpful; I changed my code to commit outside of the loop, just before the return statement, and it performs a lot faster now.
user1983440, regarding my statement "I think I have to use an autonomous transaction in the inner function so that the outer function can see any change performed by the inner one", asked: Are you certain that this is true? No, anything but certain. I should have said "it *seems* I have to use an autonomous transaction". I wish it would work without one; I think it actually should, and I wonder why it does not. However, if I do not use an autonomous transaction, the outer function seems to try to access the data that I have inserted in the inner function before it is committed, throws a no-data-found exception, and hence a rollback is performed.
davidp 2 said: The outer function will see whatever the inner function has done, without commit or autonomous transaction [...] In PL/SQL, the default commit is COMMIT WRITE NOWAIT, which I think does mean the transaction might not be instantly visible to the outside transaction, because the call returns before the commit really finishes. Yes, that is my notion, too. However, without an autonomous transaction the inner function completes without error; then the outer function uses the handles returned by the inner function to call get_path_geom(), which cannot find the handles in the graph table, raises an exception, and causes a rollback.
Let me summarize: The inner function completes fine, with and without an autonomous transaction, and returns the handle. The inner function commits if called directly, with and (of course) without an autonomous transaction. The outer function does not see the data inserted by the inner function immediately, with or without an autonomous transaction. If I let the outer function call the inner function twice (4 times, to be specific, but twice for each newly inserted row) and do not use an autonomous transaction, I get a no-data-found exception. If I let the outer function call the inner function twice and do use an autonomous transaction, it works.
I agree with everything that was said about not using autonomous transaction in this case and I still want to solve this the right way. Any ideas are welcome! -
How to use COMMIT and ROLLBACK in BAPIs
Hi experts,
Can we use COMMIT or ROLLBACK in a BAPI just as we do in ABAP programming? If yes,
where exactly do we use a normal COMMIT WORK and where do we use BAPI_TRANSACTION_COMMIT when implementing BAPIs?
Please clarify this. Any reply is really appreciated !!
Thank you in advance.
Hi,
COMMIT is what saves the changes you made to the database. If you have not done the commit work, the changes are discarded once your program's lifetime ends; I think you see why we do a commit work.
BAPIs are methods through which we can input data; they are an interface technique, a direct input method. For example, if you have inserted some data into a table using a BAPI but have not called the commit BAPI, the changes you made cannot be seen in the table; they only take effect once you have called the commit BAPI (BAPI_TRANSACTION_COMMIT).
Rollback
Taking the above example, rollback is like the Undo option in MS Office: until you save, you can step back with Undo. Similarly, until you commit, you can discard the modified changes and return to the previous state.
Note that once you have committed, you can't roll back, just as once you have saved a document you can't undo the changes.
Thanks and regards.
Please contact me if you want any further clarification. -
I have heard that it is considered best practices not to put Commits in your packages unless you really have to. Instead the Front End should usually do the commits. What is the reason for this?
Prohan wrote:
Thanks hoek.
Would you consider this summary of your response to be accurate:
Complete transactions should be committed, not procedures/functions. Yes, complete transactions, with *expected* outcomes.
You wrote:
If something goes wrong, and transactions get committed without 'caller/invoker' knowing about it then your data is to be considered as corrupted.
I think you meant "If something goes wrong, and PROCEDURES/FUNCTIONS get committed without 'caller/invoker' knowing about it ..."
The idea is that only part of a transaction will have been completed if something goes wrong, thus ALL changes need to be rolled back. Is that an accurate description of what you meant? Yes. Think about it: procedure X succeeds and commits. Procedure Y (called after X) fails and commits anyway. Tadah: data corrupted.
You only commit at the end of a complete transaction as in: all business rules/checks etc have been met.
Never ever commit 'in between things'.
Meet the business rule, tell the caller: 'Hey, we're the data and we're still fine after all those checks in the same session, so you can commit safely now'.
And don't you dare to use a WHEN OTHERS.
What exactly did you mean there? Did you mean don't commit under that handler? Never ever commit under a WHEN OTHERS.
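A minimal PL/SQL sketch of the anti-pattern being warned against (procedure names are hypothetical):

```sql
begin
  debit_account(p_from, p_amount);
  credit_account(p_to, p_amount);
  commit;   -- fine: the complete business transaction has succeeded
exception
  when others then
    commit; -- disaster: silently commits a half-finished transfer
end;
```

In the exception branch the correct action is a ROLLBACK followed by RAISE, so the caller learns that the transaction did not complete.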
Don't use it.
Forget about its existence and stay out of trouble. -
Commit when exception occurs?
public static void main(String args[]) {
    method1(); // transaction 1
    method2(); // transaction 2
    // catching exception here
}
Say we have a main method, main. In that main method we call method1(), which creates a customer using connection con1. Then we call method2(), which creates the account for the customer using the same connection con1. I have two questions on the above:
1) Assume some exception occurs during method2 and we catch the exception and do con1.commit(); will transaction1 still be committed?
2) Assume no exception occurs, but when we do con1.commit the JVM crashes and the transaction is not committed yet. Now when we restart the application,
will that transaction be committed?
JavaFunda wrote: (the question quoted above)
1) Assume some exception occurs during method2 and we catch the exception and do con1.commit(); will transaction1 still be committed?
Yes, t1 will be committed. Additionally, any changes that were part of t2 that occurred before the exception will be committed, because in reality, if you don't commit until {t1; part of t2; catch}, t1 and t2 are the same: there's only one transaction.
Committing in a catch block is a bad idea, because you don't know how much was done before the error occurred.
It defeats the purpose of a transaction, which is that either all happens, or none happens.
2) Assume no exception occurs, but when we do con1.commit the JVM crashes and the transaction is not committed yet. Now, when we restart the application, will that transaction be committed?
Impossible to say, but then, you don't actually care, and it shouldn't matter.
Since you're talking about the JVM crashing at an inopportune moment, there's always the chance of the commit either succeeding or failing, and then the JVM crashes before you can find out. So no matter what, you have to assume that when restarting after a crash, you'll have to figure out which transactions failed and which succeeded, if that matters to you.
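jverd's first point can be made concrete with a small simulation. The class below is not real JDBC; it is a hypothetical in-memory stand-in for a connection with auto-commit off, used only to show that calling commit() in a catch block persists everything done since the last commit, including the partial work of the step that failed.

```java
import java.util.ArrayList;
import java.util.List;

public class CommitInCatchDemo {

    // Hypothetical stand-in for a JDBC connection with autoCommit disabled.
    static class FakeConnection {
        final List<String> committed = new ArrayList<>();
        private final List<String> pending = new ArrayList<>();

        void execute(String stmt) { pending.add(stmt); }
        void commit()   { committed.addAll(pending); pending.clear(); }
        void rollback() { pending.clear(); }
    }

    static List<String> runScenario() {
        FakeConnection con = new FakeConnection();
        try {
            con.execute("INSERT customer");       // "transaction 1"
            con.execute("INSERT account row 1");  // "transaction 2" begins
            throw new RuntimeException("boom");   // failure mid-way through t2
        } catch (RuntimeException e) {
            con.commit(); // commits t1 AND the partial t2 work together
        }
        return con.committed;
    }

    public static void main(String[] args) {
        // Both statements end up committed, even though t2 never finished.
        System.out.println(runScenario());
    }
}
```

Swapping the commit() in the catch block for rollback() would discard both statements, which illustrates why there is really only one transaction until the first commit.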
Edited by: jverd on Jul 21, 2011 9:54 AM -
Method persist() doesn't persist the data.
Hello Experts
I have a problem when I persist an entity.
Often the entity instance is not persisted when I call the persist() method.
I don't know what is happening. The method doesn't cause any error, but there is no record in the database after the persist() call.
Att.
Marcos
Hi Adrian,
Thanks for your quick reply.
I don't use the flush method. But with another entity I do the same process and the data is committed to the database. I've compared these two entities and they are equal; the only difference is the number of attributes in each.
I tried using the flush method on both entities. With the first it is OK, the data is committed. With the second it causes an error.
When I don't use the flush method the same thing happens: the first commits, the second doesn't, but causes no error.
Waiting for an answer.
Thanks.
Marcos. -
What is the Account posting for Commissions
Hi All,
If there is any account posting done for commissions during the transactions like A/R Invoice?
If yes, then please tell me about the journal in detail and accounts in which it is posted.
Thanx & Regards
Sibasish
Commissions can be defined for a sales employee, an item, or a customer. The commission is determined when a sales document is entered and saved, for all the rows in the document. Commission Groups define the commissions that are given internally to the sales employees. The commissions are calculated in a report (Tools -> Queries -> System Queries -> SP Commission by Invoices in Posting Date Cross-Section) and are not posted to any accounts.
-
When is SQL insert done?
I have a default form with the default Create button (which I'm assuming is tied to the after-submit process "Process Row of <TABLENAME>", which actually does the insert?). I want to execute a procedure after submit because I need to pass it the value of the primary key of this new record (I can see in the session variables that this ID is already bound). Because of an FK constraint, my procedure process is bombing: the new record isn't actually in the database yet, even though my process runs AFTER the process-row step. Does anyone know how to make this work?
After the insert process, put your next process (ensure that this process fires only when the INSERT process has completed successfully).
In my example I'll use the ID (number) field as the PK of table1, which is referenced in table2. This works only if a single insertion is done; if there are many of them, a trigger is a better solution:
SELECT MAX(ID) INTO NEW_ID FROM table1;
INSERT INTO TABLE2 (...., FK_ID, ....) VALUES (...., NEW_ID, ....);
This code works because the new ID exists only in your transaction and no other session sees it until the commit is done.
Commit is done automatically at end of page, end of session, or end of story (as I once got as an answer on this forum!).
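As a sketch of an alternative that avoids the MAX(ID) lookup entirely, Oracle's RETURNING clause hands the generated key straight back to a PL/SQL variable (table, column, and sequence names here are hypothetical; it assumes the ID is populated from a sequence in the same statement):

```sql
INSERT INTO table1 (id, some_col)
VALUES (table1_seq.NEXTVAL, :p_value)
RETURNING id INTO new_id;          -- new_id is a PL/SQL variable

INSERT INTO table2 (fk_id /* , ... */)
VALUES (new_id /* , ... */);
```

Unlike SELECT MAX(ID), this stays correct even when several sessions insert concurrently.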
Hope this helps! -
CRIO: Sending data over WiFi (TCP protocol)
Hello,
I am a LabVIEW beginner and I am developing a LabVIEW application to send data acquired by a cRIO acquisition module to my PC over WiFi. I used the TCP protocol as shown in the attached program, and I have an application on my PC that listens to the send/receive traffic, but unfortunately I cannot receive the data on my PC.
Your help please, and thanks in advance.
Best regards,
Rafayello
Attachments:
Envoie TCP.vi 50 KB
Good evening Eric,
As you can see in the attached image, I have a WiFi modem and I want to send the data collected by the cRIO target to my PC remotely (without an Ethernet cable). I read the TCP/IP communication tutorial and understood that I must use the "TCP Open Connection" function, giving as input the destination client's IP address (my PC) and the port number, wire it to the "TCP Write" function with the data as input, and finally close the connection with the "TCP Close" function. But I don't know on which port exactly I should send the custom TCP frames, and whether the approach I took in the VI for sending the TCP frames is correct or not.
Best regards,
Rafayello
Attachments:
DSC_0002.JPG 798 KB -
NavLists don't work with modifying records - BUG
Ok, using JDev 10.1.3.4.
HELP.
Nav Lists don't seem to work when you modify. You can change records, but if you type anything into a writable field and don't commit, the value carries over to the next record. ADF doesn't refresh from the database, the value you just typed on the previous record is still displayed, AND if you then commit, the new value gets put on the second record. This is most certainly a bug.
In addition, if you are using CLOBs, it doesn't refresh writable inputText components AT ALL. If it's read-only, no problem; everything works fine. If you make it read/write: BOOM, nothing gets refreshed.
If I add an Execute button and press that after making a change, JDev will work correctly with the navigation and text boxes (I don't think that will help the CLOBs).
So, I guess the answer to fix this BUG is to make the selectOneChoices in the Navigation Lists (I have three) do an execute. Can anyone tell me how to do this? They are already set to AutoSubmit = TRUE.
Thanks in advance,
Jet
Ok, so I turned autosubmit off. This forces the Execute button to be pushed before navigation will occur. BUT, this still doesn't work. It seems that ADF doesn't want to see what I typed until after I navigate.
IF, I press the Execute button first, then navigate (whether by pressing the execute button or if I have autosubmit on) then it will work properly.
Why do you have to press a button before navigating to get this to work??? -
How can I make a pre-commit in JSF ADF
Hi!
Can anyone help me on how to create a pre-commit jsf adf page.
thanks in advance..
alvin
Hi,
not sure what a pre-commit page is in your mind. All pages that don't commit to the database are pre-commit.
With ADF Business Components, for example, all data is first submitted to the ADF BC cache and persisted only if commit is explicitly called.
Frank -
Why there is implicit commit before and after executing DDL Statements
Hi Guys,
Please let me know why there is implicit commit before and after executing DDL Statements ?
Regards,
sushmita
Helyos wrote:
This is because Oracle has designed it like this. Come on Helyos, that's a bit of a weak answer. :)
The reason is that it makes no sense to update the structure of the database whilst there is outstanding data updates that have not been committed.
Imagine having a column that is VARCHAR2(50) that currently only has data that is up to 20 characters in size.
Someone (person A) decides that it would make sense to alter the table and reduce the size of the column to varchar2(20) instead.
Before they do that, someone else (person B) has inserted data that is 30 characters in size, but not yet committed it.
As far as person B is concerned that insert statement has been successful as they received no error, and they are continuing on with their process until they reach a suitable point to commit.
Person A then attempts to alter the database to make it varchar2(20).
If the database allowed that to happen then the column would be varchar2(20) and the uncommitted data would no longer fit, even though the insert was successful. When is Person B going to find out about this? It would be wrong to tell them when they try and commit, because all their transactions were successful, so why should a commit fail.
In this case, because it's two different people, then the database will recognise there is uncommitted transactions on that table and not let person B alter it.
If it was just one person doing both things in the same session, then the data would be automatically committed, the alter statement executed and the person informed that they can't alter the database because there is (now) data exceeding the size they want to set it to.
It makes perfect sense to have the database in a data consistent state before any alterations are made to it, hence why a commit is issued beforehand.
Here's something I wrote the other day on the subject...
DDLs issue a commit before carrying out the actual action.
As long as the DDL is syntactically ok (i.e. the parser is happy with it) then the commit is issued, even if the actual DDL cannot be executed for another reason.
Example...
We have a table with some data in it...
SQL> create table xtest as select rownum rn from dual;
Table created.
SQL> select * from xtest;
RN
1
We then delete the data but don't commit (demonstrated by the fact that we can roll it back):
SQL> delete from xtest;
1 row deleted.
SQL> select * from xtest;
no rows selected
SQL> rollback;
Rollback complete.
SQL> select * from xtest;
RN
1
SQL> delete from xtest;
1 row deleted.
SQL> select * from xtest;
no rows selected
So now our data is deleted, but not committed. What if we issue a DDL that is syntactically incorrect...
SQL> alter tab xtest blah;
alter tab xtest blah
ERROR at line 1:
ORA-00940: invalid ALTER command
SQL> rollback;
Rollback complete.
SQL> select * from xtest;
RN
1
... the data can still be rolled back. This is because the parser was not happy with the syntax of the DDL statement.
So let's delete the data again, without committing it, and issue a DDL that is syntactically correct, but cannot execute for another reason (i.e. the database object it refers to doesn't exist)...
SQL> delete from xtest;
1 row deleted.
SQL> select * from xtest;
no rows selected
SQL> truncate table bob;
truncate table bob
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> rollback;
Rollback complete.
SQL> select * from xtest;
no rows selected
So, there we have it. Just because the statement was syntactically correct, the deletion of the data was committed, even though the DDL couldn't be performed.
This makes sense really, because if we are planning on altering the definition of the database where the data is stored, it can only really take place if the database is in a state where the data is where it should be, rather than being in limbo. For example, imagine the confusion if you updated some data in a column and then altered that column's datatype to a different size, e.g. reducing a varchar2 column from 50 characters down to 20 characters. If you had data that you'd just updated to larger than 20 characters where previously there wasn't any, the alter table command would not know about it, would alter the column size, and then the data wouldn't fit, even though the update statement at the time didn't fail.
Example...
We have a table that only allows 20 characters in a column. If we try and insert more into that column we get an error for our insert statement as expected...
SQL> create table xtest (x varchar2(20));
Table created.
SQL> insert into xtest values ('012345678901234567890123456789');
insert into xtest values ('012345678901234567890123456789')
ERROR at line 1:
ORA-12899: value too large for column "SCOTT"."XTEST"."X" (actual: 30, maximum: 20)
Now if our table allowed more characters our insert statement would be successful. As far as our "application" goes we believe, nay, we have been told by the database, that we have successfully inserted our data...
SQL> alter table xtest modify (x varchar2(50));
Table altered.
SQL> insert into xtest values ('012345678901234567890123456789');
1 row created.
Now if we tried to alter our database column back to 20 characters and it didn't automatically commit the data beforehand, it would be happy to alter the column, but then when the data was committed it wouldn't fit. However, the database has already told us that the data was inserted, so it can't go back on that now.
Instead we can see that the data is committed first because the alter command returns an error telling us that the data in the table is too big, and also we cannot rollback the insert after the attempted alter statement...
SQL> alter table xtest modify (x varchar2(20));
alter table xtest modify (x varchar2(20))
ERROR at line 1:
ORA-01441: cannot decrease column length because some value is too big
SQL> rollback;
Rollback complete.
SQL> select * from xtest;
X
012345678901234567890123456789
SQL>
Obviously, because a commit statement is for the existing session, if we had tried to alter the table column from another session we would have got:
SQL> alter table xtest modify (x varchar2(20));
alter table xtest modify (x varchar2(20))
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified
SQL>... which is basically saying that we can't alter the table because someone else is using it and they haven't committed their data yet.
Once the other session has committed the data we get the expected error...
ORA-01441: cannot decrease column length because some value is too big
Hope that explains it -
How can I load my data faster? Is there a SQL solution instead of PL/SQL?
11.2.0.2
Solaris 10 sparc
I need to backfill invoices from a customer. The raw data has 3.1 million records. I have used pl/sql to load these invoices into our system (dev), however, our issue is the amount of time it's taking to run the load - effectively running at approx 4 hours. (Raw data has been loaded into a staging table)
My research keeps coming back to one concept: sql is faster than pl/sql. Where I'm stuck is the need to programmatically load the data. The invoice table has a sequence on it (primary key = invoice_id)...the invoice_header and invoice_address tables use the invoice_id as a foreign key. So my script takes advantage of knowing the primary key and uses that on the subsequent inserts to the subordinate invoice_header and invoice_address tables, respectively.
My script is below. What I'm asking is if there are other ideas on the quickest way to load this data...what am I not considering? I have to load the data in dev, qa, then production so the sequences and such change between the environments. I've dummied down the code to protect the customer; syntax and correctness of the code posted here (on the forum) is moot...it's only posted to give the framework for what I currently have.
Any advice would be greatly appreciated; how can I load the data faster knowing that I need to know sequence values for inserts into other tables?
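(For comparison, one set-based approach often suggested for this parent/child pattern is a single multi-table INSERT ALL driven by the staging table. The sketch below uses abbreviated, hypothetical column lists; the real tables have many more columns. It relies on the documented behaviour that, in a multitable insert, Oracle increments a sequence once per source row, so every seq_invoice_id.NEXTVAL reference yields the same value for a given row, which is exactly what the FK relationships need.)

```sql
INSERT ALL
  INTO invoice         (invoice_id, bill_date /* , ... */)
    VALUES (seq_invoice_id.NEXTVAL, bill_date /* , ... */)
  INTO invoice_header  (invoice_id, invoice_num /* , ... */)
    VALUES (seq_invoice_id.NEXTVAL, invoice_num /* , ... */)
  INTO invoice_address (invoice_address_id, invoice_id /* , ... */)
    VALUES (seq_invoice_address_id.NEXTVAL, seq_invoice_id.NEXTVAL /* , ... */)
SELECT billing_document AS invoice_num,
       bill_date
       /* , ... */
FROM   backfill_invoices;
```

A single statement like this avoids the per-row context switches of the PL/SQL loop, which is usually where the four hours go.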
DECLARE
v_inv_id invoice.invoice_id%TYPE;
v_inv_addr_id invoice_address.invoice_address_id%TYPE;
errString invoice_errors.sqlerrmsg%TYPE;
v_guid VARCHAR2 (128);
v_str VARCHAR2 (256);
v_err_loc NUMBER;
v_count NUMBER := 0;
l_start_time NUMBER;
TYPE rec IS RECORD (
BILLING_TYPE VARCHAR2 (256),
CURRENCY VARCHAR2 (256),
BILLING_DOCUMENT VARCHAR2 (256),
DROP_SHIP_IND VARCHAR2 (256),
TO_PO_NUMBER VARCHAR2 (256),
TO_PURCHASE_ORDER VARCHAR2 (256),
DUE_DATE DATE,
BILL_DATE DATE,
TAX_AMT VARCHAR2 (256),
PAYER_CUSTOMER VARCHAR2 (256),
TO_ACCT_NO VARCHAR2 (256),
BILL_TO_ACCT_NO VARCHAR2 (256),
NET_AMOUNT VARCHAR2 (256),
NET_AMOUNT_CURRENCY VARCHAR2 (256),
ORDER_DT DATE,
TO_CUSTOMER VARCHAR2 (256),
TO_NAME VARCHAR2 (256),
FRANCHISES VARCHAR2 (4000),
UPDT_DT DATE
);
TYPE tab IS TABLE OF rec
INDEX BY BINARY_INTEGER;
pltab tab;
CURSOR c
IS
SELECT billing_type,
currency,
billing_document,
drop_ship_ind,
to_po_number,
to_purchase_order,
due_date,
bill_date,
tax_amt,
payer_customer,
to_acct_no,
bill_to_acct_no,
net_amount,
net_amount_currency,
order_dt,
to_customer,
to_name,
franchises,
updt_dt
FROM BACKFILL_INVOICES;
BEGIN
l_start_time := DBMS_UTILITY.get_time;
OPEN c;
LOOP
FETCH c
BULK COLLECT INTO pltab
LIMIT 1000;
v_err_loc := 1;
FOR i IN 1 .. pltab.COUNT
LOOP
BEGIN
v_inv_id := SEQ_INVOICE_ID.NEXTVAL;
v_guid := 'import' || TO_CHAR (CURRENT_TIMESTAMP, 'hhmissff');
v_str := str_parser (pltab (i).FRANCHISES); --function to string parse - this could be done in advance, yes.
v_err_loc := 2;
v_count := v_count + 1;
INSERT INTO invoice nologging
VALUES (v_inv_id,
pltab (i).BILL_DATE,
v_guid,
'111111',
'NONE',
TO_TIMESTAMP (pltab (i).BILL_DATE),
TO_TIMESTAMP (pltab (i).UPDT_DT),
'READ',
'PAPER',
pltab (i).payer_customer,
v_str,
'111111');
v_err_loc := 3;
INSERT INTO invoice_header nologging
VALUES (v_inv_id,
TRIM (LEADING 0 FROM pltab (i).billing_document), --invoice_num
NULL,
pltab (i).BILL_DATE, --invoice_date
pltab (i).TO_PO_NUMBER,
NULL,
pltab (i).net_amount,
NULL,
pltab (i).tax_amt,
NULL,
NULL,
pltab (i).due_date,
NULL,
NULL,
NULL,
NULL,
NULL,
TO_TIMESTAMP (SYSDATE),
TO_TIMESTAMP (SYSDATE),
PLTAB (I).NET_AMOUNT_CURRENCY,
(SELECT i.bc_value
FROM invsvc_owner.billing_codes i
WHERE i.bc_name = PLTAB (I).BILLING_TYPE),
PLTAB (I).BILL_DATE);
v_err_loc := 4;
INSERT INTO invoice_address nologging
VALUES (invsvc_owner.SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH INITIAL',
pltab (i).BILL_DATE,
NULL,
pltab (i).to_acct_no,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 5;
INSERT INTO invoice_address nologging
VALUES ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH',
pltab (i).BILL_DATE,
NULL,
pltab (i).TO_ACCT_NO,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 6;
INSERT INTO invoice_address nologging
VALUES ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH2',
pltab (i).BILL_DATE,
NULL,
pltab (i).TO_CUSTOMER,
pltab (i).to_name,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 7;
INSERT INTO invoice_address nologging
VALUES ( SEQ_INVOICE_ADDRESS_ID.NEXTVAL,
v_inv_id,
'BLAH3',
pltab (i).BILL_DATE,
NULL,
'SOME PROPRIETARY DATA',
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
NULL);
v_err_loc := 8;
INSERT
INTO invoice_event nologging (id,
eid,
root_eid,
invoice_number,
event_type,
event_email_address,
event_ts)
VALUES ( SEQ_INVOICE_EVENT_ID.NEXTVAL,
'111111',
'222222',
TRIM (LEADING 0 FROM pltab (i).billing_document),
'READ',
'some_user@some_company.com',
SYSTIMESTAMP);
v_err_loc := 9;
INSERT INTO backfill_invoice_mapping
VALUES (v_inv_id,
v_guid,
pltab (i).billing_document,
pltab (i).payer_customer,
pltab (i).net_amount);
IF v_count = 10000
THEN
COMMIT;
END IF;
EXCEPTION
WHEN OTHERS
THEN
errString := SQLERRM;
INSERT INTO backfill_invoice_errors
VALUES (
pltab (i).billing_document,
pltab (i).payer_customer,
errString || ' ' || v_err_loc);
COMMIT;
END;
END LOOP;
v_err_loc := 10;
INSERT INTO backfill_invoice_timing
VALUES (
ROUND ( (DBMS_UTILITY.get_time - l_start_time) / 100,
2)
|| ' seconds.',
(SELECT COUNT (1)
FROM backfill_invoice_mapping),
(SELECT COUNT (1)
FROM backfill_invoice_errors),
SYSDATE);
COMMIT;
EXIT WHEN c%NOTFOUND;
END LOOP;
COMMIT;
EXCEPTION
WHEN OTHERS
THEN
errString := SQLERRM;
INSERT INTO backfill_invoice_errors
VALUES (NULL, NULL, errString || ' ' || v_err_loc);
COMMIT;
END;
Hello
You could use insert all in your case and make use of sequence.NEXTVAL and sequence.CURRVAL like so (excuse any typos - I can't test without table definitions). I've done the first 2 tables, so it's just a matter of adding the rest in...
INSERT ALL
INTO invoice nologging
VALUES ( SEQ_INVOICE_ID.NEXTVAL,
BILL_DATE,
my_guid,
'111111',
'NONE',
CAST(BILL_DATE AS TIMESTAMP),
CAST(UPDT_DT AS TIMESTAMP),
'READ',
'PAPER',
payer_customer,
parsed_franchises,
'111111')
INTO invoice_header
VALUES ( SEQ_INVOICE_ID.CURRVAL,
TRIM (LEADING 0 FROM billing_document), --invoice_num
NULL,
BILL_DATE, --invoice_date
TO_PO_NUMBER,
NULL,
net_amount,
NULL,
tax_amt,
NULL,
NULL,
due_date,
NULL,
NULL,
NULL,
NULL,
NULL,
SYSTIMESTAMP,
SYSTIMESTAMP,
NET_AMOUNT_CURRENCY,
bc_value,
BILL_DATE)
SELECT
src.billing_type,
src.currency,
src.billing_document,
src.drop_ship_ind,
src.to_po_number,
src.to_purchase_order,
src.due_date,
src.bill_date,
src.tax_amt,
src.payer_customer,
src.to_acct_no,
src.bill_to_acct_no,
src.net_amount,
src.net_amount_currency,
src.order_dt,
src.to_customer,
src.to_name,
src.franchises,
src.updt_dt,
str_parser (src.FRANCHISES) parsed_franchises,
'import' || TO_CHAR (CURRENT_TIMESTAMP, 'hhmissff') my_guid,
i.bc_value
FROM BACKFILL_INVOICES src,
invsvc_owner.billing_codes i
WHERE i.bc_name = src.BILLING_TYPE;
Some things to note
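A related option, if some source rows are expected to fail validation: Oracle's DML error logging can divert the bad rows into an error table instead of aborting the whole statement, which avoids falling back to a row-by-row loop just for error handling. A sketch only - the log table is created up front with DBMS_ERRLOG.CREATE_ERROR_LOG, the tag string is illustrative, and you should check the docs for your version on combining LOG ERRORS with a multitable insert:

```sql
-- One-time setup: create an error-log table shadowing INVOICE
-- (the name defaults to ERR$_INVOICE):
-- EXEC DBMS_ERRLOG.CREATE_ERROR_LOG ('INVOICE');

INSERT INTO invoice /* column list omitted here for brevity - always list it */
   SELECT seq_invoice_id.NEXTVAL,
          src.bill_date,
          'import' || TO_CHAR (CURRENT_TIMESTAMP, 'hhmissff'),
          ...                                   -- remaining columns as above
     FROM backfill_invoices src
      LOG ERRORS INTO err$_invoice ('backfill run 1')
   REJECT LIMIT UNLIMITED;
```

Rows that fail land in ERR$_INVOICE along with the Oracle error message (ORA_ERR_MESG$), and the rest of the load carries on.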
1. Don't commit in a loop - you only add to the run time and the load on the box, ultimately reducing scalability and destroying transactional integrity. Commit once at the end of the job.
2. Always specify the list of columns you are inserting into, as well as the values or columns you are selecting. This is good practice because it protects your code from compilation issues when new columns are added to the tables, and it makes it very clear what you are inserting where.
3. If you use WHEN OTHERS THEN ... to log something, make sure you either roll back or re-raise the exception. What your code currently says is: I don't care what the problem is, just commit whatever has been done. That is not good practice.
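To expand on point 3, a common pattern is to write the log row from an autonomous transaction and then re-raise, so the error is recorded even though the main transaction gets rolled back. A sketch only - log_backfill_error is a hypothetical procedure name, and the column list of backfill_invoice_errors is assumed from the script above:

```sql
CREATE OR REPLACE PROCEDURE log_backfill_error (
   p_document IN VARCHAR2,
   p_customer IN VARCHAR2,
   p_message  IN VARCHAR2)
IS
   PRAGMA AUTONOMOUS_TRANSACTION;   -- this COMMIT affects only the log row
BEGIN
   INSERT INTO backfill_invoice_errors
        VALUES (p_document, p_customer, p_message);
   COMMIT;
END log_backfill_error;
/

-- Then in the loop's handler:
-- EXCEPTION
--    WHEN OTHERS
--    THEN
--       log_backfill_error (pltab (i).billing_document,
--                           pltab (i).payer_customer,
--                           SQLERRM || ' ' || v_err_loc);
--       RAISE;   -- let the caller decide what to roll back
```

The autonomous transaction commits only its own insert, so the main load keeps its transactional integrity and the RAISE surfaces the real problem instead of silently committing partial work.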
HTH
David
Edited by: Bravid on Oct 13, 2011 4:35 PM