R12 - Approach to Insert into custom table after payment is done
Hi,
I have a requirement to insert into a custom table after an invoice payment is made in R12. The code units that load data from the AP tables into the custom table are already in place.
I have identified 2 ways in which this can be done:
1. Call from the business event oracle.apps.ap.payment
2. Call it after the 'Send Separate Remittance Advice' concurrent program completes, again as a business event, i.e. using the 'Request Completed' business event.
My questions are as below:
1. Is there any other way I can write to a custom table after a payment is made?
2. How else can an after-report trigger be fired for SRA in R12 (without customizing the standard SRA Java concurrent program)?
3. When exactly is the business event 'oracle.apps.ap.payment' raised? Is it raised for all payments regardless of how the payment is made, e.g. through PPR, the payment workbench, etc.?
Please share your thoughts on this.
Thanks,
Kavipriya
In our case, we created a database trigger on IBY_PAY_INSTRUCTIONS_ALL to insert the requisite data into the custom tables whenever the payment status is "Ready for Printing".
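A minimal sketch of such a trigger follows. The custom table name is hypothetical, and the status column and status value are assumptions; verify both against your instance before using anything like this.

```sql
-- Sketch only: xx_custom_payments is a hypothetical custom table, and the
-- status column/value below must be checked against your R12 instance.
CREATE OR REPLACE TRIGGER xx_pay_instr_aiu
AFTER INSERT OR UPDATE ON iby_pay_instructions_all
FOR EACH ROW
BEGIN
  -- 'READY_FOR_PRINTING' is illustrative; confirm the actual status value
  IF :new.payment_instruction_status = 'READY_FOR_PRINTING' THEN
    INSERT INTO xx_custom_payments (payment_instruction_id, creation_date)
    VALUES (:new.payment_instruction_id, SYSDATE);
  END IF;
END;
/
```

Note that a trigger on a seeded table is itself a customization, so weigh it against the business-event options above.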
Similar Messages
-
Constantly inserting into large table with unique index... Guidance?
Hello all;
So here is my world. Central to our data monitoring system we have an Oracle database running Standard Edition One licensing (please don't laugh... I understand it is comical).
This DB is about 1.7 TB of small record data.
One table in particular (the raw incoming data, 350gb, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors" or what have you. Data must be available in this table "as fast as possible" once it is received.
This table has 6 columns (one varchar usually empty, a few numerics including a source id, a timestamp and a create time).
The data is collected in chronological order (increasing timestamp) 90% of the time (though sometimes the timestamp may be very old and catch up to current). The other 10% of the time the data can be out of order according to the timestamp.
This table has two indexes, unique (sourceid, timestamp), and a non unique (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point a secondary index on create time slowed the IOT to a crawl)
About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long term data (customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds since we are standard one, this is just not possible.
Now, what we are observing about inserts into this table:
- Inserts are much slower with a "wider" cardinality of the sourceid being inserted. What I mean is that 10,000 inserts across 10,000 sourceids (regardless of timestamp) is MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me, as I understand that Oracle must inspect more branches of the index for uniqueness, and more distinct physical blocks will be used to store the new index data. There are about 2 million unique sourceids across our system.
- Over time, Oracle is requesting more and more RAM to satisfy these inserts in a timely manner. My understanding here is that Oracle is attempting to hold the leaves of these indexes perpetually in the buffer cache. Our system does have a 99% cache hit rate. However, we are seeing Oracle require roughly 10 GB of extra RAM every quarter to six months; we're already at about 50 GB of RAM just for Oracle.
- If I emulate our production load on a brand new, empty table / indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of data.
We have the following assumption: Partitioning this table based on good logical grouping of sourceid, and then timestamp, will help reduce the work required by oracle to verify uniqueness of data, reducing the amount of data that must be cached by oracle, and allow us to handle our "older than 3 month" at a partition level, greatly reducing table and index fragmentation.
Based on our hardware, it's going to be about a million dollar hit to upgrade to Enterprise (with partitioning), plus a couple hundred thousand a year in support. Currently I think we pay a whopping 5 grand a year in support, if that, in total Oracle costs. This is going to be a huge pill for our company to swallow.
What I am looking for guidance on: should we really expect partitioning to make a difference here? I want to get back the 10x performance difference we see between a fresh empty system and our current production system. I also want to limit Oracle's growing need for roughly 10 GB more buffer cache per quarter (the cardinality of sourceid does NOT grow by that much per quarter... maybe thousands per quarter, out of 2 million).
Also, I'd appreciate it if there were no mocking comments about using Standard One up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make do with what we have. And all the credit in the world to Oracle that their "entry" level system has been able to handle everything we've thrown at it so far! :)
Alright all, thank you very much for listening, and I look forward to hearing the opinions of the experts.
Hello,
Here is a link to a blog article that will give you the right questions and answers which apply to your case:
http://jonathanlewis.wordpress.com/?s=delete+90%25
Since you are deleting 80% of your data (old data) based on a timestamp, don't consider the direct path insert (/*+ append */) suggested by one of the contributors to this thread. A direct path load will not re-use any free space left by the deletes. You have two indexes:
(a) unique index (sourceid, timestamp)
(b) index(create time)
Your delete logic (based on arrival time) will smash your indexes, since you are always deleting from the left-hand side of the index; you will end up with what we call a right-hand index. In other words, the scattering of index keys per leaf block is likely catastrophic (there is an Oracle internal function, sys_op_lbid, that lets you inspect this index information). There is a fair chance that your two indexes will benefit from a coalesce, as already suggested:
ALTER INDEX indexname COALESCE;
This coalesce should be considered on a regular basis (perhaps after each 80% delete). You also seem to have many rows per sourceid; if that is the case, you should think about compressing this index:
create index indexname ON tablename (sourceid, timestamp) COMPRESS;
or
alter index indexname REBUILD COMPRESS;
You will do it only once. Your index will be smaller and may be more efficient than it currently is. Index compression adds some CPU work during an insert, but it might help improve the overall insert process.
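Before committing to a 20-hour rebuild, you can ask Oracle how much compression would actually save. A sketch (indexname is a placeholder for your unique index):

```sql
-- Caution: VALIDATE STRUCTURE takes a lock on the underlying table while it
-- runs, and INDEX_STATS holds only one row, for the last index analyzed.
ANALYZE INDEX indexname VALIDATE STRUCTURE;

SELECT opt_cmpr_count,    -- optimal number of leading columns to compress
       opt_cmpr_pctsave   -- estimated % space saving at that setting
FROM   index_stats;
```

If opt_cmpr_pctsave comes back near zero, the rebuild with COMPRESS is probably not worth the outage on your edition.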
Best Regards
Mohamed Houri -
How can I insert into a table other than the default table in a form
Hi,
I want to insert into a table using some field values from a form that is based on another table. I have written the insert code to run on successful submission of the form, but after submitting I get the following error:
An unexpected error occurred: ORA-06502: PL/SQL: numeric or value error (WWV-16016)
My code is like this
declare
l_trn_id number;
l_provider_role varchar2(3);
l_provider_id varchar2(10);
begin
l_trn_id := p_session.get_value_as_number(p_block_name=>'DEFAULT',p_attribute_name=>'A_TRANSACTION_ID');
l_provider_id := p_session.get_value_as_varchar2(p_block_name=>'DEFAULT',p_attribute_name=>'A_PROVIDER1');
l_PROVIDER_ROLE := p_session.get_value_as_varchar2(p_block_name=>'DEFAULT',p_attribute_name=>'A_PROVIDER_ROLE1');
if (l_provider_role is not null) and (l_provider_id is not null) then
insert into service_provider_trans_records(service_provider_id,transaction_id,role_type_id)
values(l_provider_id, l_trn_id, l_provider_role);
commit;
end if;
end;
Where 'PROVIDER1' and 'PROVIDER_ROLE1' are not table fields.
How can I do that, or why does this error occur? Any ideas?
Thanks
Sumita
Hi,
When do you get this error? Is it while running the form or while creating it?
Here is a sample code which inserts a non-database column dummy into a table called dummy. This is done in successful procedure.
declare
l_dummy varchar2(1000);
begin
l_dummy := p_session.get_value_as_varchar2(p_block_name=>'DEFAULT',
p_attribute_name=>'A_DUMMY');
insert into sjayaram903_1g.dummy values(l_dummy);
commit;
end;
Please check in your case if the size of the local variable is enough to hold the values being returned.
Thanks,
Sharmila -
Apex Application inserting into two tables from two regions in a page
Hey Guys,
Fairly new to Apex, only two days in, so please bear with me. I am using Apex 4.0.2.00.07.
I have a page with two regions. The first region has only one row; I have no problem accessing it and inserting its values into the table. This is what I used to access one column from that one row: APEX_APPLICATION.G_F03(1).
The second region is an interactive report, to which I have managed to add a checkbox and a textbox.
1)So if a checkbox has been checked off the row should be inserted
2) If the textbox has been filled in, its value should replace one of the values.
My question is how to access the second region? I started a loop based on the rows in the collection but that is as far as I got.
FOR idx IN 1..l_collectionTable_name
loop
end loop
Any help would be greatly appreciated?
Thank you
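For reference, the usual way to walk the checked rows of a tabular region is through the APEX_APPLICATION.G_F01..G_F50 arrays. A sketch; the array number and target table below are assumptions and depend on the p_idx value used in the apex_item call that rendered the checkbox:

```sql
-- Assumes the checkbox was rendered with apex_item.checkbox(1, id_column):
-- G_F01 then contains one entry per *checked* row, holding that row's id.
BEGIN
  FOR i IN 1 .. APEX_APPLICATION.G_F01.COUNT LOOP
    INSERT INTO xx_target_table (src_id)   -- xx_target_table is hypothetical
    VALUES (APEX_APPLICATION.G_F01(i));
  END LOOP;
END;
```

To pair the textbox values with the checked rows, the common approach is to store the row id in the checkbox value (as above) and render the textbox so it can be looked up by that same id, since unchecked checkboxes are not submitted but every textbox is.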
Edited by: Aj05 on Aug 2, 2012 2:10 PM
Hi Phil, I used the following code:
<cfquery name="qArrivalDates" datasource="rayannesql">
SET NOCOUNT ON
INSERT INTO booking (book_made, book_checkin_date,
book_checkout_date, book_adults, book_children)
VALUES('#FORM.book_made#','#FORM.book_checkin_date#','#FORM.book_checkout_date#','#FORM.book_adults#','#FORM.book_children#')
SELECT SCOPE_IDENTITY() AS theNewId;
SET NOCOUNT OFF
</cfquery>
<cfquery name="qArrivalDates" datasource="rayannesql">
INSERT INTO Customer( firstname, lastname, address, address2,
city, state, postalcode, country, phone, mobile, email, notes)
Values (#qArrivalDates.theNewId# '#FORM.firstname#',
'#FORM.lastname#', '#FORM.address#', '#FORM.address2#',
'#FORM.city#', '#FORM.state#', '#FORM.postalcode#',
'#FORM.country#', '#FORM.phone#', '#FORM.mobile#', '#FORM.email#',
'#FORM.notes#' )
</cfquery>
When I tried to complete the form, I got the following error
[Macromedia][SequeLink JDBC Driver][ODBC
Socket][Microsoft][SQL Native Client][SQL Server]Incorrect syntax
near 'Fred'.
The error occurred in
C:\Inetpub\wwwroot\rayanne\customerinsertsql.cfm: line 16
14 : <cfquery name="qArrivalDates"
datasource="rayannesql">
15 : INSERT INTO Customer( firstname, lastname, address,
address2, city, state, postalcode, country, phone, mobile, email,
notes)
16 : Values (#qArrivalDates.theNewId# '#FORM.firstname#',
'#FORM.lastname#', '#FORM.address#', '#FORM.address2#',
'#FORM.city#', '#FORM.state#', '#FORM.postalcode#',
'#FORM.country#', '#FORM.phone#', '#FORM.mobile#', '#FORM.email#',
'#FORM.notes#' )
17 : </cfquery> -
Difference between a back-end insert and a front-end insert into a table
I am developing a conversion program for tax exemptions. For this program only the ZX_EXEMPTIONS table is used to populate the data, and we got confirmation from Oracle about this as well. To insert data into this table we take the max of tax_exemption_id (the PK for this table), add one to it, and insert into the table. The problem is that after inserting from the back end, we are no longer able to insert from the front end.
It seems the back-end data is holding a tax_exemption_id that was supposed to be reserved by the front end. Please explain the different behavior of populating tax_exemption_id from the front end versus the back end.
Hi,
I think the problem is that you are using max value + 1 for tax_exemption_id. ZX_EXEMPTIONS uses the sequence ZX_EXEMPTIONS_S for primary key generation, so you are increasing the PK id for this table without increasing the sequence value.
When inserting rows from the front end, which presumably uses the sequence, Oracle tries to use a sequence value already consumed by your back-end process (which generated it via max value + 1) and then hits a primary key violation.
I think you should use the sequence mentioned above to generate your PK ids in the back-end process as well. And before doing so, check the current value of ZX_EXEMPTIONS_S, as you might need to rebuild the sequence so that nextval works successfully for both front end and back end.
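A sketch of the back-end insert using the seeded sequence instead of max + 1 (the column list is illustrative, not the full ZX_EXEMPTIONS definition):

```sql
-- Use the seeded sequence so front-end and back-end inserts share one id source
INSERT INTO zx_exemptions (tax_exemption_id /* , other required columns */)
VALUES (zx_exemptions_s.NEXTVAL /* , other values */);

-- One-off check if max+1 inserts have already outrun the sequence:
-- compare these two values, and advance or recreate the sequence if needed.
SELECT MAX(tax_exemption_id) FROM zx_exemptions;
SELECT zx_exemptions_s.NEXTVAL FROM dual;
```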
Regards -
How to insert into 2 tables from the same page (with one button link)
Hi,
I have the following 2 tables....
Employees
emp_id number not null
name varchar2(30) not null
email varchar2(50)
hire_date date
dept_id number
PK = emp_id
FK = dept_id
Notes
note_id number not null
added_on date not null
added_by varchar2(30) not null
note varchar2(4000)
emp_id number not null
PK = note_id
FK = emp_id
I want to do an insert into both tables via the application and also via the same page (with one button link). I have made a form to add an employee with an add button - adding an employee is no problem.
Now, on the same page, I have added a html text area in another region, where the user can write a note. But how do I get the note to insert into the Notes table when the user clicks the add button?
In other words, when the user clicks 'add', the employee information should be inserted into the Employees table and the note should be inserted into the Notes table.
How do I go about doing this?
Thanks.
Hi,
These are my After Submit Processes...
After Submit
30 Process Row of NOTES Automatic Row Processing (DML) Unconditional
30 Process Row of EMPLOYEES Automatic Row Processing (DML) Unconditional
40 reset page Clear Cache for all Items on Pages (PageID,PageID,PageID) Unconditional
40 reset page Clear Cache for all Items on Pages (PageID,PageID,PageID) Unconditional
40 reset page Clear Cache for all Items on Pages (PageID,PageID,PageID) Unconditional
40 reset page Clear Cache for all Items on Pages (PageID,PageID,PageID) Unconditional
50 Insert into Tables PL/SQL anonymous block Conditional
My pl/sql code is the same as posted earlier.
Upon inserting data into the forms and clicking the add button, I get this error...
ORA-06550: line 1, column 102: PL/SQL: ORA-00904: "NOTES": invalid identifier ORA-06550: line 1, column 7: PL/SQL: SQL Statement ignored
Error Unable to process row of table EMPLOYEES.
Is there something wrong with the pl/sql code or is it something else? -
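One way to avoid mixing Automatic Row Processing with a manual block is to do both inserts in a single PL/SQL process. A hedged sketch, based on the table definitions above; the sequence names and page-item names are assumptions:

```sql
DECLARE
  l_emp_id employees.emp_id%TYPE;
BEGIN
  -- Insert the employee and capture the generated key
  INSERT INTO employees (emp_id, name, email, hire_date, dept_id)
  VALUES (employees_seq.NEXTVAL, :P1_NAME, :P1_EMAIL, :P1_HIRE_DATE, :P1_DEPT_ID)
  RETURNING emp_id INTO l_emp_id;

  -- Only create a note row if the user actually typed a note
  IF :P1_NOTE IS NOT NULL THEN
    INSERT INTO notes (note_id, added_on, added_by, note, emp_id)
    VALUES (notes_seq.NEXTVAL, SYSDATE, :APP_USER, :P1_NOTE, l_emp_id);
  END IF;
END;
```

With this in one process, the two Automatic Row Processing (DML) processes would be removed so the same rows are not inserted twice.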
How to store the flat file data into custom table?
Hi,
I am working on an inbound interface. Can anyone tell me how to store flat file data into a custom table? What is the procedure?
Regards,
Sujan
Hi,
You can use the function module F4_FILENAME to pick the file on the front end, and then the function module WS_UPLOAD to load it into an internal table:
AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_file.
  CALL FUNCTION 'F4_FILENAME'     " function to pick the file
    EXPORTING
      field_name = 'P_FILE'
    IMPORTING
      file_name  = p_file.

  CALL FUNCTION 'WS_UPLOAD'
    EXPORTING
      filename = p_file
    TABLES
      data_tab = it_line.

* Then loop at it_line, splitting each line into the fields of your custom table.
  LOOP AT it_line.
    SPLIT it_line AT ',' INTO itab-name itab-surname.
    APPEND itab.
  ENDLOOP.

Then you can insert the values into your custom table from the itab internal table.
regards
Isaac Prince -
Insert into two tables, how to insert multiple slave records
Hi, I have a problem with insert into two tables wizard.
The wizard works fine and I can add my records, but I need to enter multiple slave table records.
My database:
table: paper
`id_paper` INTEGER(11) NOT NULL AUTO_INCREMENT,
`make` VARCHAR(20) COLLATE utf8_general_ci NOT NULL DEFAULT '',
`model` VARCHAR(20) COLLATE utf8_general_ci NOT NULL DEFAULT '',
`gsm` INTEGER(11) NOT NULL,
PRIMARY KEY (`id_paper`)
table: paper_data
`id_paper_data` INTEGER(11) NOT NULL AUTO_INCREMENT,
`id_paper` INTEGER(11) NOT NULL,
`value` DOUBLE(15,3) NOT NULL,
`nanometer` INTEGER(11) NOT NULL,
PRIMARY KEY (`id_paper_data`)
I need to add multiple fields "value" and "nanometer"
Current form looks like this:
Make:
Model:
Gsm:
Value:
nanometer:
I need it to look like this:
Make:
Model:
Gsm:
Value:
nanometer:
Value:
nanometer:
Value:
nanometer:
Value:
nanometer:
and so on.
The field "id_paper" in table paper_data needs to get same id for entire transaction. Also how do I set default values for each field "nanometer" on my form the must be different (370,380,390 etc)?
Thanks.
You can find an answer here: http://209.85.129.132/search?q=cache:PzQj57dsWmQJ:www.experts-exchange.com/Web_Development/Software/Macromedia_Dreamweaver/Q_23713792.html+Insert+Into+Two+Tables+Wizard&cd=3&hl=lt&ct=clnk&gl=lt
This is a copy of the post:
Hi experts,
I'm using ADDT to design a page that needs to insert one record into a master ALBUMS table, along with three records into a GENRES table, all linked by the primary, auto-incremented ALBUMS.ALBUM_ID.
I've tried many different ways of combining the Insert into Multiple Tables wizard and the Insert Record wizard with Link Transactions, all with no luck. Either only the album info gets inserted, or I get an 'ALBUM_ID cannot be null' error from MySQL. Here is the structure of the tables:
ALBUMS
ALBUM_ID, INT(11), Primary, Auto_Increment
alb_name, varchar
alb_release, YEAR
USER_ID, int
alb_image, varchar
GENRES
ALBUM_ID, int, NOT NULL
GENRE_ID, int, NOT NULL
ID, int, primary, auto-increment
Many thanks in advance...
==========================================================================================
//remove this line if you want to edit the code by hand
function Trigger_LinkTransactions(&$tNG) {
  global $ins_genres;
  $linkObj = new tNG_LinkedTrans($tNG, $ins_genres);
  $linkObj->setLink("ALBUM_ID");
  return $linkObj->Execute();
}
function Trigger_LinkTransactions2(&$tNG) {
  global $ins_genres2;
  $linkObj = new tNG_LinkedTrans($tNG, $ins_genres2);
  $linkObj->setLink("ALBUM_ID");
  return $linkObj->Execute();
}
function Trigger_LinkTransactions3(&$tNG) {
  global $ins_genres3;
  $linkObj = new tNG_LinkedTrans($tNG, $ins_genres3);
  $linkObj->setLink("ALBUM_ID");
  return $linkObj->Execute();
}
//end Trigger_LinkTransactions trigger
//-----------------------Different Section---------------------//
// Make an insert transaction instance
//Add Record Genre 1
$ins_genres = new tNG_insert($conn_MySQL);
$tNGs->addTransaction($ins_genres);
$ins_genres->registerTrigger("STARTER", "Trigger_Default_Starter", 1, "VALUE", null);
$ins_genres->registerTrigger("BEFORE", "Trigger_Default_FormValidation", 10, $detailValidation);
$ins_genres->setTable("genres");
$ins_genres->addColumn("GENRE_ID", "NUMERIC_TYPE", "POST", "GENRE_ID");
$ins_genres->addColumn("ALBUM_ID", "NUMERIC_TYPE", "VALUE", "");
$ins_genres->setPrimaryKey("ID", "NUMERIC_TYPE");
// Add Record Genre 2
$ins_genres2 = new tNG_insert($conn_MySQL);
$tNGs->addTransaction($ins_genres2);
$ins_genres2->registerTrigger("STARTER", "Trigger_Default_Starter", 1, "VALUE", null);
$ins_genres2->setTable("genres");
$ins_genres2->addColumn("GENRE_ID", "NUMERIC_TYPE", "POST", "GENRE_ID2");
$ins_genres2->addColumn("ALBUM_ID", "NUMERIC_TYPE", "VALUE", "");
$ins_genres2->setPrimaryKey("ID", "NUMERIC_TYPE");
// Add Record Genre 3
$ins_genres3 = new tNG_insert($conn_MySQL);
$tNGs->addTransaction($ins_genres3);
$ins_genres3->registerTrigger("STARTER", "Trigger_Default_Starter", 1, "VALUE", null);
$ins_genres3->setTable("genres");
$ins_genres3->addColumn("GENRE_ID", "NUMERIC_TYPE", "POST", "GENRE_ID3");
$ins_genres3->addColumn("ALBUM_ID", "NUMERIC_TYPE", "VALUE", "");
$ins_genres3->setPrimaryKey("ID", "NUMERIC_TYPE");
=========================================================================================
Hi Aaron,
Nice job!!
$ins_albums->registerTrigger("AFTER", "Trigger_LinkTransactions2", 98);
$ins_albums->registerTrigger("AFTER", "Trigger_LinkTransactions3", 98);
These lines, right? :-( Sorry, I forgot to mention that.
Thanks a lot for the grading! -
Insert into other table from form
Hi All,
I have created a data block with view as a data source. I need to save the data from a form to some other table.
I have tried using an INSTEAD OF trigger; however, when I try to save, I get the error: ORA-01445: cannot select ROWID from a join view without a key-preserved table.
I appreciate any suggestions on it.
Thanks and Regards
Sai
Sorry, it looked at first as if you wanted help with the problem you encountered. After reading your response to Zaibiman again, I see that you were just tricking us with the detailed explanation of the error.
To insert into a table other than the one you queried, use the on-insert trigger.
However, an Instead Of trigger on the view is usually the best method. If you have set the key mode, defined a primary key item and removed any references to rowid, as per Zaibiman, then it should work. You'll need your own locking method if the view uses Group By or Distinct. -
Insert into local table as select from remote tables
Hi all,
In Oracle DB version 11g i have the following issue:
I want to insert into a table in the current schema, selecting data from two remote tables. I execute the insert in batches. At first, when the target table is empty, the INSERT ... SELECT runs very fast, but after every batch the performance gets worse than the one before. I have no FKs or indexes on the local/target table I am inserting into. I tried the /*+ append */ hint but with no success. What could be the reason for this?
Thanks in advance,
Alexander.a.stoyanov wrote:
Hi all,
In Oracle DB version 11g i have the following issue:
I want to insert into a table in the current schema, selecting data from two remote tables. I execute the insert in batches. At first, when the target table is empty, the INSERT ... SELECT runs very fast, but after every batch the performance gets worse than the one before. I have no FKs or indexes on the local/target table I am inserting into. I tried the /*+ append */ hint but with no success. What could be the reason for this?
Thanks in advance,
Alexander.
How should we know? You don't give enough information to be able to tell. Not even the SQL involved.
Please read {message:id=9360002} and {message:id=9360003}
and follow the advice given. -
Hi Everyone,
I am building an application that contains information about helpdesk calls. I am using two tables:
Table 1 contains tracking info- TRK_CALLS
ID -primary key
USER_
ASSIGNED_TO
PROBLEM
SOLUTION
STATUS
EDIT
Table 2 contains date and time info - TRKCALLS_TIME
ID - primary key
CALL_ID - same number as ID in table 1
DATE_
TIME
I have taken the advice that Denes Kubicek gave another poster and created a workspace at apex.oracle.com, placing my app there for others to look at:
workspace: kjwebb
username: [email protected]
password: gtmuc
application: calltracking2
I have a report called 'create/edit call tracking' from which I can either edit or create an entry in TRK_CALLS. Clicking Create takes me to a form; after the info is entered, a Create button assigns the PK and inserts the info into TRK_CALLS. I then have to click the 'Edit Call Time' button to enter info on a form that inserts into the TRKCALLS_TIME table. I would like to link these tables somehow so that when I go to the 'Edit Call Time' form, the Call ID is populated with the PK ID from the TRK_CALLS table.
It would be easier to insert this info all on one page, but I worked on that for a long time before giving up because I could not get anything to insert into the tables, so I have taken this route.
The basic desired outcome is to tie the tables with a PK, ID in table 1 and CALL_ID in table 2. So that they correspond and displayed on the report page.
Please help in any way you can and make changes to my app.
I would not be asking for help unless I have reached the ends of my apex knowledge.
Thanks and please let me know if there are any questions,
Kirk
I can imagine it is pretty obscure when your knowledge of PL/SQL is not (yet) so big.
The statements I wrote are meant exactly for your situation.
OK, here we go:
First you have created a view in the Object Browser. Suppose it is called trkcalls_view.
Then you go to SQL Workshop > SQL Commands.
You cut the next statement and paste it in the upper white part of the screen, just under the autocommit checkbox. Replace the bold sequence references by the real name of the sequences that are used to populate the ID's of the two tables.
You say Run and the trigger is created.
A trigger on the view is created. Creating such a trigger is not possible in the Object Browser, so I understand your confusion. This trigger fires when an insert into the view is performed. As you can see in the code, it issues separate insert statements for both tables.
CREATE OR REPLACE TRIGGER bi_trkcalls_view
INSTEAD OF INSERT ON trkcalls_view
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
DECLARE
v_id number;
v_id2 number;
BEGIN
select sequence1.nextval into v_id from dual;
select sequence2.nextval into v_id2 from dual;
INSERT INTO trk_calls
( id
, user_assigned_to
, problem
, solution
, status
, edit
)
VALUES
( v_id
, :new.user_assigned_to
, :new.problem
, :new.solution
, :new.status
, :new.edit
);
INSERT INTO trkcalls_time
( id
, call_id
, date_time
)
VALUES
( v_id2
, v_id
, :new.date_time
);
END;
Good luck,
DickDral -
What is the substitution for this apparent macro? The solution will not build if it is removed.
Using the 'Adding Connection Points to an Object' directions.
Hi Shawn,
Since you have posted this issue to the VC++ forum, I think you would get better support there:
http://social.msdn.microsoft.com/Forums/vstudio/en-US/8494410e-9578-4b67-a08d-3380aac10fcf/include-filenameih-inserted-into-c-source-after-adding-connection-points-to-atl-object?forum=vcgeneral#503cbc06-40a2-4073-a56e-1b384b11b56e
So I will move this thread to the Off-topic forum. Thanks for your understanding.
Sincerely,
How to capture insert into wf_notifications table
We are building a custom app which needs to know as soon as a row is inserted into the WF_NOTIFICATIONS table, and we want to avoid creating an on-insert trigger on this table.
Is there a way to find out that a row is being inserted into this table without having to write a trigger?
Thanks
Tapash
Oracle Workflow raises a business event, oracle.apps.wf.notification.send, as soon as a notification record is created in the WF_NOTIFICATIONS table. This business event has parameters such as NOTIFICATION_ID, RECIPIENT_ROLE, etc. If you have access to the notification id you can look up all other information in the WF_NOTIFICATIONS table.
You may create a subscription to this business event with the On Error -> Skip property, so that in case of an error your subscription does not impact the normal processing of the seeded subscriptions. You may attach a PL/SQL rule function to the subscription to run the logic required when the notification is created.
Please refer to Oracle Workflow Developer Guide for more information on using Business Event System.
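A minimal sketch of such a rule function follows. The function name and the xx_notif_log table are hypothetical; the signature is the standard Event Manager rule-function signature:

```sql
CREATE OR REPLACE FUNCTION xx_notif_rule (
  p_subscription_guid IN RAW,
  p_event             IN OUT NOCOPY wf_event_t
) RETURN VARCHAR2
IS
  l_nid NUMBER;
BEGIN
  -- NOTIFICATION_ID arrives as an event parameter
  l_nid := TO_NUMBER(p_event.GetValueForParameter('NOTIFICATION_ID'));

  INSERT INTO xx_notif_log (notification_id, logged_on)
  VALUES (l_nid, SYSDATE);

  RETURN 'SUCCESS';
EXCEPTION
  WHEN OTHERS THEN
    -- With On Error = Skip on the subscription, a failure here
    -- will not block the seeded subscription processing
    RETURN 'ERROR';
END;
/
```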
Hope this helps.
Vijay -
Selecting Records from 125 million record table to insert into smaller table
Oracle 11g
I have a large table of 125 million records - t3_universe. This table never gets updated or altered once loaded, but holds data that we receive from a lead company.
I need to select records from this large table that fit certain demographic criteria and insert those into a smaller table - T3_Leads - that will be updated with regard to when the lead is mailed and for other relevant information.
My question is what is the best (fastest) approach to select records from this 125 million record table to insert into the smaller table. I have tried a variety of things - views, materialized views, direct insert into smaller table...I think I am probably missing other approaches.
My current attempt has been to create a View using the query that selects the records as shown below. Then use a second query that inserts into T3_Leads from this View V_Market. This is very slow. Can I just use an Insert Into T3_Leads with this query - it did not seem to work with the WITH clause? My Index on the large table is t3_universe_composite and includes zip_code, address_key, household_key.
CREATE VIEW v_market AS
WITH got_pairs AS
(
SELECT /*+ INDEX_FFS(t3_universe t3_universe_composite) */ l.zip_code, l.zip_plus_4, l.p1_givenname, l.surname, l.address, l.city, l.state, l.household_key, l.hh_type AS l_hh_type, l.address_key, l.narrowband_income, l.p1_ms, l.p1_gender, l.p1_exact_age, l.p1_personkey, e.hh_type AS filler_data, l.p1_seq_no, l.p2_seq_no
, ROW_NUMBER() OVER ( PARTITION BY l.address_key
ORDER BY l.hh_verification_date DESC
) AS r_num
FROM t3_universe e
JOIN t3_universe l ON
l.address_key = e.address_key
AND l.zip_code = e.zip_code
AND l.p1_gender != e.p1_gender
AND l.household_key != e.household_key
AND l.hh_verification_date >= e.hh_verification_date
)
SELECT *
FROM got_pairs
WHERE l_hh_type != 1 AND l_hh_type != 2 AND filler_data != 1 AND filler_data != 2 AND zip_code IN (select * from M_mansfield_02048) AND p1_exact_age BETWEEN 25 AND 70 AND narrowband_income >= '8' AND r_num = 1
Then
INSERT INTO T3_leads(zip, zip4, firstname, lastname, address, city, state, household_key, hh_type, address_key, income, relationship_status, gender, age, person_key, filler_data, p1_seq_no, p2_seq_no)
select zip_code, zip_plus_4, p1_givenname, surname, address, city, state, household_key, l_hh_type, address_key, narrowband_income, p1_ms, p1_gender, p1_exact_age, p1_personkey, filler_data, p1_seq_no, p2_seq_no
from V_Market;
I had no trouble creating the view exactly as you posted it. However, be careful here:
and zip_code in (select * from M_mansfield_02048)
You should name the column explicitly rather than select *. (do you really have separate tables for different zip codes?)
About the performance, it's hard to tell because you haven't posted anything we can use, like explain plans or traces but simply encapsulating your query into a view is not likely to make it any faster.
Depending on the size of the subset of rows you're selecting, the /*+ INDEX_FFS */ hint may be doing you more harm than good.
Inserting into a table which is created "on the fly" from a trigger
Hello all,
I am trying to insert into a table from a trigger in Oracle Forms. The table name, however, is entered by the user in a form item.
here is what the insert looks like:
insert into :table_name
values (:value1, :value2);
The problem is that Forms does not recognize :table_name as a table name. If I replace :table_name with an actual database table, it works fine. However, I need to insert into a table whose name comes from the Oracle Forms item.
By the way, the table_name is built on the fly by a procedure before I try to insert into it.
Any suggestion on how can I do that? My code in the trigger is:
declare
begin
dm_drop_tbl(:table_name,'table');   -- a call to an external procedure to drop the table
dm_create_tbl(:table_name,'att1','att2');
insert into :table_name
values (:value1, :value2);
end;
this give me an error:
encounter "" when the symbol expecting one.....
Hi,
You should use the FORMS_DDL built-in procedure. Read the online Forms documentation...
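A hedged sketch of the Forms_DDL approach. Note that Forms_DDL executes a plain string, so item values must be concatenated into the statement; binds like :value1 cannot appear inside it. The block/item names are hypothetical, and statements run this way sit outside normal Forms commit processing:

```sql
DECLARE
  v_stmt VARCHAR2(2000);
BEGIN
  -- Build the statement text; value2 is wrapped in quotes as a string literal
  v_stmt := 'insert into ' || :blk.table_name ||
            ' values (' || :blk.value1 || ', ''' || :blk.value2 || ''')';
  Forms_DDL(v_stmt);

  IF NOT Form_Success THEN
    Message('Insert failed: ' || Dbms_Error_Text);
  END IF;
END;
```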
Simon