COMMIT in trigger
Hello everybody.
I'm working with RedHat ES/AS 4, Oracle Database 10g R2, and Developer Suite Forms 10.
My problem
I have a tabular form where the user fills in a record; the form then jumps to the next record so the user can fill in the next one.
I have been looking for a trigger where I can do the commit to save the filled record in the database, but I can't find one.
After searching this forum I tried the STANDARD.COMMIT instruction, but it doesn't work: it does nothing, gives no error, but also doesn't perform the commit.
Does anybody know which trigger this is?
Thanks in advance for your time.
Regards everybody.
POST_UPDATE ???
first: POST-UPDATE is a trigger which fires automatically when you commit updated data.
A trigger named POST_UPDATE does not exist; if you created one, it is a user-defined trigger with your own code. Don't do that, because the name is nearly identical to the built-in POST-UPDATE.
Create functions and procedures instead.
Similar Messages
-
LOGOFF trigger does not execute when clients terminate abnormally
Here is a trigger for LOGOFF:
create or replace TRIGGER scott.before_logoff_trg
before logoff ON database
declare
begin
  -- body elided in the post; it just deletes some rows from a table
  null;
end;
If clients exit normally, it is executed (it just deletes some rows from a table). But if a client crashes, this trigger is not executed.
Is there any way I can execute some PL/SQL code when a client logs off normally or abnormally?
Antares wrote:
My application wants to do this:
1. Client A can insert rows into a table, and client B can read them at the same time, so I think client A must commit his changes.
2. Later, client A can decide to keep or discard the changes made since he connected to Oracle. If he wants to discard them, all inserted rows should be deleted; if he wants to keep them, all rows should be inserted into another table. He must choose to keep or discard the changes, so in the end there should be no rows left for him in that table.
3. If client A crashes, the rows he inserted cannot be deleted; there is no opportunity to do this.
PS: Many clients can log in to Oracle using the same Oracle user name.
These inserted rows are a "server-side resource" owned by one client. They should be cleared when the client logs off normally or abnormally.
I want something just like DBMS_LOCK. I think locks owned by one client can be released when the client logs off normally or abnormally.
What a very odd requirement. Why would you require client "B" to be able to see what client "A" has inserted if client "A" is not yet ready to commit, business-wise, to that inserted data?
1. You're right client A would have to commit inserted rows for client B to see them.
2. Why is client A choosing to discard rows that he was already happy for client B to see? What if client B is acting upon those rows? That would suggest that client B can take action upon something that is not yet truly committed in the business process. This sounds like a serious flaw in the business process and hence the database design.
3. If client A crashes, how is anything supposed to know about that? Client A can't send a signal/message to say "I've crashed", and just because client A hasn't performed activity doesn't necessarily mean that it's crashed, so how does anything know client A has crashed? As previously mentioned, the database server doesn't call out to all clients to check they are alive; that's not the way client-server architecture works. Servers don't talk out to clients... it's clients that talk in to servers.
If there were any logical reason for trying to implement something as bizarre as the above, the closest thing you could have would probably be some sort of "keep alive" signal that client A sends periodically, whether that's through the client application automatically sending something to the database on a timer, or whether it's an interactive thing with the user, so that they have a countdown timer or something on the screen and have to perform some activity on the client to indicate that they are still alive.
Then you'd have something running periodically on the database as a scheduled job, to check for any "clients" that haven't sent their "keep alive" signal in time, and remove any data from the intermediate table related to that client.
Of course there's a potential delay (determined by your implementation) between a client crashing (or a user being inactive) and the associated data getting deleted, but that's the price you pay for implementing something so unusual. -
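The "keep alive" design sketched above can be illustrated in miniature. A hedged sketch in Python with an in-memory SQLite table standing in for the intermediate table (all table, column, and function names here are invented for the illustration):

```python
import sqlite3

# Stand-in schema: each staged row carries the owning session id and a
# last_seen heartbeat timestamp (names are made up for this sketch).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (session_id TEXT, payload TEXT, last_seen REAL)")

def heartbeat(conn, session_id, now):
    # The client calls this periodically to prove it is still alive.
    conn.execute("UPDATE staging SET last_seen = ? WHERE session_id = ?",
                 (now, session_id))
    conn.commit()

def reap_stale(conn, now, timeout):
    # The scheduled job: delete rows whose owner missed the keep-alive window.
    conn.execute("DELETE FROM staging WHERE last_seen < ?", (now - timeout,))
    conn.commit()

conn.execute("INSERT INTO staging VALUES ('A', 'row from client A', 100.0)")
conn.execute("INSERT INTO staging VALUES ('B', 'row from client B', 100.0)")
conn.commit()

heartbeat(conn, "A", 200.0)    # client A keeps signalling; client B "crashed"
reap_stale(conn, 200.0, 60.0)  # anything silent for more than 60s is removed

survivors = [r[0] for r in conn.execute("SELECT session_id FROM staging")]
print(survivors)  # only client A's rows remain
```

In Oracle the reaper would be a DBMS_SCHEDULER job and the timeout a business decision, but the shape is the same.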
How can we remove the commas from the Formula value in SAP BW BEx query
Hi All,
How can we remove the commas from the Formula value in SAP BW BEx query
We are using a formula, replacing it with a characteristic. The characteristic value needs to be displayed as a number without commas.
Regards
Venkat.
Do you want to remove the commas when you run the query on BEx Web or in RSRT?
Regards -
In import, commit every 100000 records
Dear Friends,
Please let me know whether there is any method in import to commit records at an interval, say every 100000 records. If we use commit=y and each table has 5M records, it takes multiple days to complete.
And if we don't give commit=y, it requires large memory buffers. Please let me know a good solution.
Thanks
VK
Please use the BUFFER parameter for this.
For more details:- http://www.oracleutilities.com/OSUtil/import.html
Thanks
Dattatraya Walgude -
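The "commit every N rows" idea the poster asks about can be sketched generically. To my knowledge imp has no commit-interval parameter: with commit=y the BUFFER parameter controls how many rows go into each array insert before the commit. A minimal batched-commit sketch in Python with SQLite as a stand-in database (the table and batch size are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")

BATCH = 1000      # commit interval: one commit per BATCH rows, not one per row
pending = 0
for i in range(5000):
    conn.execute("INSERT INTO t VALUES (?)", (i,))
    pending += 1
    if pending == BATCH:
        conn.commit()  # amortizes commit cost across the batch
        pending = 0
if pending:
    conn.commit()      # flush the final partial batch

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 5000
```

Larger batches mean fewer commits (faster) but more redo/undo held per transaction, which is exactly the trade-off between commit=y and a large BUFFER.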
Comma-delimited SQL query with DECODE function errors out
Hi All,
DB: 11.2.0.3.0
I am using the query below to generate comma-delimited output in a spool file, but it errors out with the message below:
SQL> set lines 100 pages 50
SQL> col "USER_CONCURRENT_QUEUE_NAME" format a40;
SQL> set head off
SQL> spool /home/xyz/cmrequests.csv
SQL> SELECT
2 a.USER_CONCURRENT_QUEUE_NAME || ','
3 || a.MAX_PROCESSES || ','
4 || sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'Q',1,0),0)) Pending_Standby ||','
5 ||sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'I',1,0),0)) Pending_Normal ||','
6 ||sum(decode(b.PHASE_CODE,'R',decode(b.STATUS_CODE,'R',1,0),0)) Running_Normal
7 from FND_CONCURRENT_QUEUES_VL a, FND_CONCURRENT_WORKER_REQUESTS b
where a.concurrent_queue_id = b.concurrent_queue_id AND b.Requested_Start_Date <= SYSDATE
8 9 GROUP BY a.USER_CONCURRENT_QUEUE_NAME,a.MAX_PROCESSES;
|| sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'Q',1,0),0)) Pending_Standby ||','
ERROR at line 4:
ORA-00923: FROM keyword not found where expected
SQL> spool off;
SQL>
Expected output in the spool /home/xyz/cmrequests.csv
Standard Manager,10,0,1,0
Thanks for your time!
Regards,
Get to work immediately on marking your previous questions ANSWERED if they have been!
>
I am using the below query to generate the comma delimited output in a spool file but it errors out with the message below:
SQL> set lines 100 pages 50
SQL> col "USER_CONCURRENT_QUEUE_NAME" format a40;
SQL> set head off
SQL> spool /home/xyz/cmrequests.csv
SQL> SELECT
2 a.USER_CONCURRENT_QUEUE_NAME || ','
3 || a.MAX_PROCESSES || ','
4 || sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'Q',1,0),0)) Pending_Standby ||','
5 ||sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'I',1,0),0)) Pending_Normal ||','
6 ||sum(decode(b.PHASE_CODE,'R',decode(b.STATUS_CODE,'R',1,0),0)) Running_Normal
7 from FND_CONCURRENT_QUEUES_VL a, FND_CONCURRENT_WORKER_REQUESTS b
where a.concurrent_queue_id = b.concurrent_queue_id AND b.Requested_Start_Date <= SYSDATE
8 9 GROUP BY a.USER_CONCURRENT_QUEUE_NAME,a.MAX_PROCESSES;
|| sum(decode(b.PHASE_CODE,'P',decode(b.STATUS_CODE,'Q',1,0),0)) Pending_Standby ||','
>
Well, if you want to spool query results to a file, the first thing you need to do is write a query that actually works.
Why do you think a query like this is valid?
SELECT 'this, is, my, giant, string, of, columns, with, commas, in, between, each, word'
GROUP BY this, is, my, giant, string
You only have one column in the result set, but you are trying to group by columns that are not even in the result set. What's up with that?
Note the rule is actually the reverse: you may group by columns that are not in the result set, but every non-aggregate expression in the SELECT list must appear in the GROUP BY clause.
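For what it's worth, the ORA-00923 in the original post comes from the column aliases (Pending_Standby, Pending_Normal) sitting in the middle of a concatenation: an alias may only follow a complete select-list expression. A small sketch using SQLite from Python as a stand-in (SQLite rejects the same construct with a syntax error; Oracle reports it as ORA-00923; the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE q (name TEXT, n INTEGER)")
conn.execute("INSERT INTO q VALUES ('Standard Manager', 1)")
conn.commit()

# An alias in the middle of a concatenation is a syntax error...
try:
    conn.execute("SELECT name || ',' || sum(n) total || ',' FROM q GROUP BY name")
    failed = False
except sqlite3.OperationalError:
    failed = True

# ...but aliasing the whole concatenated expression at the end is fine.
row = conn.execute(
    "SELECT name || ',' || sum(n) AS line FROM q GROUP BY name"
).fetchone()
print(failed, row[0])
```

So the fix for the spool query is to drop the mid-expression aliases (or move them to the end of each complete select-list item).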
Splitting comma-separated column data into multiple rows
Hi Gurus,
Please help me with the scenario below. I have multiple values in a single column as comma-separated values, and my requirement is to load that data into multiple rows.
Below is the example:
Source Data:
Product Size Stock
ABC X,XL,XXL,M,L,S 1,2,3,4,5,6
Target Data:
Product Size Stock
ABC X 1
ABC XL 2
ABC XXL 3
ABC M 4
ABC L 5
ABC S 6
Which transformation do we need to use to get this output?
Thanks in advance!
Hello,
Do you need to do this transformation through an OWB mapping only? And can you please tell what type of source you are using? Is it a flat file or a table?
Thanks -
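Whatever the OWB mapping answer turns out to be, the row expansion itself is easy to sketch outside the tool. A minimal Python sketch that pairs the i-th size with the i-th stock value (the source tuple shape is made up to match the example; the real source could be a flat file or a table):

```python
# Hypothetical rows shaped like the example: product, comma-separated list
# of sizes, comma-separated list of stock values.
source = [("ABC", "X,XL,XXL,M,L,S", "1,2,3,4,5,6")]

target = []
for product, sizes, stocks in source:
    # zip pairs positionally: size #1 with stock #1, size #2 with stock #2...
    for size, stock in zip(sizes.split(","), stocks.split(",")):
        target.append((product, size, int(stock)))

for row in target:
    print(row)  # ('ABC', 'X', 1), ('ABC', 'XL', 2), ...
```

Note the two lists must have the same length for the pairing to be meaningful; a length mismatch in the source data should probably be rejected rather than silently truncated.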
DS sends updates to DB only on commit (can't find modified data in same TX)
Hello experts!
We have a physical data service mapping a simple Oracle database table. When we update one record in the database (invoking submit on the DS) and then use a function in that same data service to get the refreshed record, the updated column comes back with the OLD value. But after the transaction ends (we isolated this in a simple JWS), the database gets updated.
The strangest fact: we also did a test using another Data Service to do the update, this time mapping a stored procedure to do the updates (without commits in the body). Then the test works fine.
The conclusion I can reach is that DSP is holding the instance somewhere after the submit() and is not sending it to the database connection. I understand that the commit is not performed, but if I do a query in the same TX, I should see my changes, shouldn't I?
We use WL 8.1.6 with DSP 2.5.
We'd appreciate it very much if you guys could help.
Thanks,
Zica.
Let me get your scenario straight:
client starts a transaction
client calls submit to update a data services
client calls read to re-read the update value (does not see update)
client ends the transaction
If you read now, you will see the update
And you're wondering why the first read doesn't see the updated values?
By default, ALDSP 2.5 reads go through an EJB that has trans-attrib=NotSupported, which means that if you do a read within a transaction, that transaction is suspended, the call is made without any transaction, and then the transaction is resumed. So this is one reason you don't see the update. To fix it, do the read via an EJB method with trans-attribute=Required. See TRANSACTION SUPPORT at http://e-docs.bea.com/aldsp/docs25/javadoc/com/bea/dsp/dsmediator/client/DataServiceFactory.html
If you are using the control, the control will need to specify
@jc:LiquidData ReadTransactionAttribute="Required"
That is only part of the solution. You will also need to configure your connection pool with the property KeepXAConnTillTxComplete="true" to ensure that your read is on the same connection as your write.
<JDBCConnectionPool CapacityIncrement="2"
KeepXAConnTillTxComplete="true" />
Then I have to ask: if your client already has the modified DataObject in memory, what's the purpose of re-reading it? If all you need is a clean ChangeSummary so you can do more changes (the ChangeSummary is not cleared when you call submit), you can simply call myDataObject.getDataGraph().getChangeSummary().beginLogging() -
SQL Developer can't commit edited data in Table Data pane
When I try to commit changes in "Data" pane for selected table SQL Developer gives me a strange error:
One error saving changes to table "TABLENAME".:
Row XXX: Data got commited in another/same session, cannot update row.
I can see in the log that SQL Developer tries to do something like:
UPDATE "TABLENAME" set "COLUMN"="value1" where ROWNUM="xxxx1" and ROW_SCN=nnn1;
UPDATE "TABLENAME" set "COLUMN"="value2" where ROWNUM="xxxx2" and ROW_SCN=nnn1;
UPDATE "TABLENAME" set "COLUMN"="value3" where ROWNUM="xxxx3" and ROW_SCN=nnn2;
If I update the same rows in the SQL window using another condition and commit, all is OK. Why such strange behaviour?
My table has no primary key and no other users are trying to change it. SQL Developer version 3.0.04 and Oracle 10.2.0.4 on Linux.
Best regards,
Sergey Logichev
That's because of the inaccuracy of ROW_SCN.
I suggest you turn off Preferences - Database - ObjectViewer - Use ORA_ROWSCN (as I did the very moment we got the option).
Have fun,
K. -
Materialized view - REFRESH FAST ON COMMIT - UNION ALL
In a materialized view which uses REFRESH FAST ON COMMIT to update the data, how many UNION ALL can be used.
I am asking this question because my materialized view works fine when there are only 2 SELECT statements (1 UNION ALL).
Thank you.
In a materialized view which uses REFRESH FAST ON COMMIT to update the data, how many UNION ALL can be used.
As far as I remember you can have 64K UNIONized selects.
I am asking this question because my materialized view works fine when there are only 2 SELECT statements (1 UNION ALL).
Post the SQL that does not work. -
After calling commit in SQL, data is not flushed to disk
I use Berkeley DB, which supports SQL. Its version is db-5.1.19.
1, Open a database.
2. Create a table.
3. exec "begin;" sql
4. exec sql which is insert record into table
5. exec "commit;" sql
6. Copy the database files (SourceDB_912_1.db and SourceDB_912_1.db-journal) to local disk D:, then use the dbsql tool to open the database.
7. Use a SELECT to check the data: there are no records in the table.
1
sqlite3 * m_pDB;
int nRet = sqlite3_open_v2(strDBName.c_str(), & m_pDB,SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,NULL);
2
string strSQL="CREATE TABLE [TBLClientAccount] ( [ClientId] CHAR (36), [AccountId] CHAR (36) );";
char * errors;
nRet = sqlite3_exec(m_pDB, strSQL.c_str(), NULL, NULL, &errors);
3
nRet = sqlite3_exec(m_pDB, "begin;", NULL, NULL, &errors);
4
nRet = sqlite3_exec(m_pDB, "INSERT INTO TBLClientAccount (ClientId,AccountId) VALUES('dd','ddd'); ", NULL, NULL, &errors);
5
nRet = sqlite3_exec(m_pDB, "commit;", NULL, NULL, &errors);
Edited by: 887973 on Sep 27, 2011 11:15 PM
Hi,
Here is a simple test case program I used based on your description:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "sqlite3.h"
int error_handler(sqlite3*);
int main()
{
    sqlite3 *m_pDB;
    const char *strDBName = "C:/SRs/OTN Core 2290838 - after call commit sql , data can not flush to disk/SourceDB_912_1.db";
    char *errors;
    sqlite3_open_v2(strDBName, &m_pDB, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, NULL);
    error_handler(m_pDB);
    sqlite3_exec(m_pDB, "CREATE TABLE [TBLClientAccount] ( [ClientId] CHAR (36), [AccountId] CHAR (36) );", NULL, NULL, &errors);
    error_handler(m_pDB);
    sqlite3_exec(m_pDB, "begin;", NULL, NULL, &errors);
    error_handler(m_pDB);
    sqlite3_exec(m_pDB, "INSERT INTO TBLClientAccount (ClientId,AccountId) VALUES('dd','ddd'); ", NULL, NULL, &errors);
    error_handler(m_pDB);
    sqlite3_exec(m_pDB, "commit;", NULL, NULL, &errors);
    error_handler(m_pDB);
    //sqlite3_close(m_pDB);
    //error_handler(m_pDB);
    return 0;
}
int error_handler(sqlite3 *db)
{
    int err_code = sqlite3_errcode(db);
    switch(err_code) {
    case SQLITE_OK:
    case SQLITE_DONE:
    case SQLITE_ROW:
        break;
    default:
        fprintf(stderr, "ERROR: %s. ERRCODE: %d.\n", sqlite3_errmsg(db), err_code);
        exit(err_code);
    }
    return err_code;
}
Then I copied the SourceDB_912_1.db database and the SourceDB_912_1.db-journal directory containing the environment files (region files, log files) to D:\, opened the database using the "dbsql" command line tool, and queried the table; the data is there:
D:\bdbsql-dir>ls -al
-rw-rw-rw- 1 acostach 0 32768 2011-10-12 12:51 SourceDB_912_1.db
drw-rw-rw- 2 acostach 0 0 2011-10-12 12:51 SourceDB_912_1.db-journal
D:\bdbsql-dir>C:\BerkeleyDB\db-5.1.19\build_windows\Win32\Debug\dbsql SourceDB_912_1.db
Berkeley DB 11g Release 2, library version 11.2.5.1.19: (August 27, 2010)
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
dbsql> .tables
TBLClientAccount
dbsql> .schema TBLClientAccount
CREATE TABLE [TBLClientAccount] ( [ClientId] CHAR (36), [AccountId] CHAR (36) );
dbsql> select * from TBLClientAccount;
dd|ddd
I do not see where the issue is. The data can be successfully retrieved; it is present in the database.
Could you try putting in the sqlite3_close() call and see if you still get the error?
Did you remove the __db.* files from the SourceDB_912_1.db-journal directory?
Did you use PRAGMA synchronous, and if so, what is the value you set?
If this is still an issue for you, please describe in more detail the exact steps needed to get this reproduced and provide a simple stand-alone test case program that reproduces it.
Regards,
Andrei -
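Andrei's finding (the committed row is there once the file is opened correctly) can be reproduced with plain SQLite from Python. This is only an analogy for the Berkeley DB SQL API, not the same library, but the begin/insert/commit/copy/query sequence is the same (the file name here is a throwaway):

```python
import sqlite3, tempfile, os

# A throwaway on-disk database, mirroring the poster's steps 1-5.
path = os.path.join(tempfile.mkdtemp(), "source.db")

# isolation_level=None: BEGIN/COMMIT pass straight through, like sqlite3_exec.
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("CREATE TABLE TBLClientAccount (ClientId CHAR(36), AccountId CHAR(36))")
conn.execute("BEGIN")
conn.execute("INSERT INTO TBLClientAccount VALUES ('dd', 'ddd')")
conn.execute("COMMIT")
conn.close()  # the close the original program left commented out

# Steps 6-7: open the file again (as the dbsql tool would) and query it.
check = sqlite3.connect(path)
rows = check.execute("SELECT ClientId, AccountId FROM TBLClientAccount").fetchall()
check.close()
print(rows)  # [('dd', 'ddd')]
```

If the committed row is visible after reopening the file, the commit did reach disk; a missing row usually points at copying the file without its journal/environment, exactly as Andrei asked about.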
How to Capture Commit Point in Forms
Dear Members,
We are on E-Business Suite 11.5.10.2.
We are trying to change the behavior of AP Invoice Work Bench form through CUSTOM.pll.
When you try to reverse an existing distribution line, Oracle does a lot of validations and many triggers are fired until the commit occurs.
I turned on custom events and found that the WHEN-VALIDATE-RECORD trigger fires 14 times before the commit occurs at the end.
My question is: how can we know that the commit occurred? In CUSTOM.pll I need to write some logic to capture some field values in a block at the very end, when the commit occurs. The field values I am talking about keep changing from start to end, and I want to capture them at the point the commit occurs.
Is there any means of knowing that the commit occurred, so that I can retrieve the values at that moment?
Thanks
Sandeep
Try debugging the block status for the line. It will change from CHANGED mode to QUERY mode on commit.
-
CO54: Message is not sent to any destination: Commit Work is getting failed
Hello,
While processing the messages from XFP to SAP, the update is terminated due to the use of COMMIT WORK. SAP Note 147467 (Update termination when sending process messages) has been referred to, but it is for 4.6C and we are on ECC 6.0.
When the message is processed a second time, COMMIT WORK triggers and the update happens in the database.
The following issue only concerns nested HUs.
T-Code used: CO54
The analysis so far:
It looks like "Create and Post a Physical Inventory Doc" is failing in the initial processing because the inventory doc is created when the Transfer Order to the PSA (or the TO from the PSA) is not yet completed (the quant is still locked somehow).
CO54, process message category=ZHU_CONS: the HU to be consumed is a nested pallet (unpacking/repacking and TOs to/from) and the quantity to consume is greater than the HU quantity in SAP (creation of a physical inventory doc is required).
1. The error occurs while clearing the inventory posting for the physical inventory document.
2. The surprising factor is that the "Process Message" is not processed correctly the first time.
3. Indeed, it is successfully processed without any error if you process it a second time.
Error Log from C054 T-Code.
02.08.2010 Dynamic List Display 1
Type
Message text
LTxt
Message category: ZHU_CONS --- Process message: 100000000000000621 "Send to All Destinations" Is Active
Message to be sent to destination:
ZHGI ZPP_0285_XFP_GOODSISSUE Individual Processing Is Active
=> Message will be sent to destination (check log for destination)
Message category: ZHU_CONS --- Process message: 100000000000000621 "Send to All Destinations" Is Active
Message destination ZPP_0285_XFP_GOODSISSUE triggered COMMIT WORK
Input parameters OK, passed to source field structure
Step 0: Now checking if scenario with HU
Step 2: Scenario with HU, Now checking if HU nested
Step 3: HU nested, checking if HU in repack area
Step 4: HU not in repack area, moving it to repack area
HU moved to repack area, TO number 0000000873
Step 5: Depacking nested HU...
Nested HU depacked, HU pallet n°: 00176127111000461994
Steps 6-7: Moving back HUs to supply storage type / bin...
HU 00376127111000462001 moved back to original area with TO number 0000000875
Steps 6-7: Moving back HUs to supply storage type / bin...
No need to move back HU 00176127111000461994 (not in table LEIN)
Step 8: Checking if HU fully used and quantity matches HU system quantity
HU not fully used but picked quantity > HU quantity: inventory necessary
Step 9: Inventory needed, creating physical inventory document...
Physical inventory document 0000000126 created
Step 10: Adding weighted quantity on inventory document...
Weighted quantity entered in document 0000000126
Step 11: Posting rectification in inventory document 0000000126...
Physical inventory document 0000000126 rectified
Error: rectification for doc 0000000126 not updated in DB!
=> Destination ZHGI ZPP_0285_XFP_GOODSISSUE can currently not process the message
=> Message is not sent to any destination
The errors in SM13 for this contain the program SAPLZPP_0285_HUINV_ENH (creating/changing the HUM physical inventory doc). WM Function module L_LK01_VERARBEITEN is also involved.
From SM13, it displays the following piece of code in include LL03TF2M (read the LEIN table=Storage Units table):
WHEN CON_LK01_NACH.
IF (LEIN-LGTYP = LK01-NLTYP AND
LEIN-LGPLA = LK01-NLPLA) OR
NOT P_LEDUM IS INITIAL.
ELSE.
Das ist der Fall einer TA-Quittierung wo Von-HU = Nach-HU ist und sofort die WA-Buchung erfolgt. Dann steht die HU noch auf dem Von-Platz, daher darf hier kein Fehler kommen, sondern es wird ein Flag gesetzt, daß verhindert, daß die LE fortgeschrieben wird.
FLG_NO_LE_UPDATE = CON_X.
MESSAGE A558 WITH P_LENUM.
ENDIF.
Translation of the German text:
"This is the case of a TO confirmation where source HU = destination HU and the goods issue posting happens immediately. The HU is then still in the source bin, so no error may be raised here; instead a flag is set that prevents the SU (LE) from being updated."
Thanks and Regards,
Prabhjot Singh
Edited by: Prabhjot Singh on Aug 2, 2010 4:39 PM
I hope you have carried out the following things in Production (please refer to SAP Help before actually doing it in production):
1. Transport the predefined characteristics from the SAP reference client (000) to your logon client.
2. Adopt Predefined Message Categories - In this step, you copy the process message categories supplied by SAP from internal tables as Customizing settings in your plants. -
Failed to commit objects to server. Error while publishing reports from BW
Hi,
I am getting below error while publishing reports from BW to BO.
"0000000001 Unable to commit the changes to Enterprise. Reason: Failed to commit objects to server : #Duplicate object name in the same folder."
Anyone having any solution for this? Thanks in advance.
Hi Amit,
It would be great if you could add a little info about how you solved this issue. Others might run into similar situations - I just did:-(
Thank you:-) -
Need to Convert Comma separated data in a column into individual rows from
Hi,
I need to Convert Comma separated data in a column into individual rows from a table.
Eg: JOB1 SMITH,ALLEN,WARD,JONES
OUTPUT required ;-
JOB1 SMITH
JOB1 ALLEN
JOB1 WARD
JOB1 JONES
I got a solution using the Oracle-provided regexp_substr function, which comes in handy for this scenario.
But I need a database-independent solution.
Thanks in advance for your valuable inputs.
George
Go for an ETL solution. There are a couple of ways to implement it.
If this helps, mark the question as answered. -
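Since George asked for a database-independent alternative to regexp_substr, one portable option is to split in application code rather than in SQL. A minimal Python sketch (the JOB1 row shape is taken from the example above):

```python
# Source rows as fetched: (key, comma-separated list of values).
rows = [("JOB1", "SMITH,ALLEN,WARD,JONES")]

# Expand each comma-separated list into one (key, value) row per element.
expanded = [(job, name) for job, names in rows for name in names.split(",")]

for job, name in expanded:
    print(job, name)  # JOB1 SMITH, JOB1 ALLEN, JOB1 WARD, JOB1 JONES
```

This moves the splitting out of the database entirely, so it works unchanged against Oracle, SQL Server, or anything else the application connects to; the trade-off is pulling the unsplit rows across the wire first.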
Difference between Implicit and Explicit Commit
Hi All ,
Can someone please let me know the difference between an implicit commit and an explicit commit?
Thanks in advance.
>
Kalyan wrote:
> Hi,
>
> An explicit commit happens when we execute an SQL "commit" command.
>
> Implicit commits occur without running a commit command and occur only when certain SQL (DDL) statements are executed
>
> (Ie, INSERT,UPDATE OR DELETE Statements)
> >
> Hope this is clear & helpful. Short and simple.
>
> Thanks
> Kalyan
The highlighted bit is incorrect: INSERT, UPDATE, and DELETE are DML statements and do not trigger implicit commits. Implicit commits occur for DDL statements such as CREATE, ALTER, DROP, and TRUNCATE.