Performance improvement in mtl material transactions query
Hello All,
We have several million records in mtl_material_transactions and we need to extract transactions for a particular OU (currently one OU) with some filtering conditions. This program will run every 5 minutes in the production environment, so only a handful of records (1 to 50) are eligible to be picked up in each run. Currently the query below takes 2 to 4 minutes depending on server load, even when there are no eligible records to pick up. If there are any improvement points, please suggest them.
Below are the query details and the explain plan.
WITH mtst AS (
SELECT mtt.transaction_source_type_id, mtt.transaction_type_id,
mtt.transaction_action_id
FROM mtl_transaction_types mtt
WHERE EXISTS (
SELECT 1
FROM apps.sga_denpyo_comm_maint sdcm
WHERE sdcm.unso_event = 'M'
AND sdcm.unso_extract = 'Y'
AND mtt.transaction_type_id = sdcm.mtl_type_id)
AND mtt.transaction_source_type_id '2'),
mp AS
(SELECT organization_id
FROM mtl_parameters
WHERE master_organization_id = 1773)
SELECT mt.transaction_id, NVL (mt.attribute3, '00') attribute3
FROM mtl_material_transactions mt, mtst, mp
WHERE mt.transaction_source_type_id = mtst.transaction_source_type_id
AND mt.transaction_type_id = mtst.transaction_type_id
AND mt.transaction_action_id = mtst.transaction_action_id
AND mt.transaction_date > '01-JAN-2000'
AND mt.organization_id = mp.organization_id
AND SUBSTR (NVL (mt.attribute3, '00'), 1, 1) = '0'
AND mt.transaction_quantity < 0
AND NVL (mt.transaction_reference, 'XXX') NOT LIKE 'NO%'
AND NVL (mt.transaction_reference, 'YYY') NOT LIKE 'KR%'
AND NVL (mt.transaction_reference, 'ZZZ') NOT LIKE 'UNSO_CANCEL%'
ORDER BY mt.transaction_id
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=CHOOSE 1 162
SORT ORDER BY 1 73 162
FILTER
TABLE ACCESS BY INDEX ROWID INV.MTL_MATERIAL_TRANSACTIONS 1 43 51
NESTED LOOPS 1 73 116
MERGE JOIN CARTESIAN 1 30 65
NESTED LOOPS 1 22 59
SORT UNIQUE
TABLE ACCESS FULL SGA.SGA_DENPYO_COMM_MAINT_ALL 1 11 11
TABLE ACCESS BY INDEX ROWID INV.MTL_TRANSACTION_TYPES 1 11 1
INDEX UNIQUE SCAN INV.MTL_TRANSACTION_TYPES_U1 1
BUFFER SORT 33 264 64
TABLE ACCESS FULL INV.MTL_PARAMETERS 33 264 6
INDEX RANGE SCAN INV.MTL_MATERIAL_TRANSACTIONS_N9 202 4
Kindly avoid joining the CTEs with plain WHERE conditions; rather use WHERE EXISTS. Also create an index on the filtered columns of mtl_material_transactions, which will help increase performance significantly.
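As an illustration of that advice, here is a hedged sketch. The index name and column order are assumptions of mine (they depend on the real selectivity of the predicates), and the filter on transaction_source_type_id is omitted because its comparison operator was lost in the post above; none of this comes from the original thread.
-- Hypothetical composite index; column order assumes organization_id and
-- transaction_type_id are the most selective predicates for this extract.
CREATE INDEX xx_mmt_unso_extract_n1
  ON mtl_material_transactions (organization_id, transaction_type_id, transaction_date);
-- The same extract rewritten with EXISTS instead of joining the CTEs.
SELECT mt.transaction_id, NVL (mt.attribute3, '00') attribute3
  FROM mtl_material_transactions mt
 WHERE mt.transaction_date > TO_DATE ('01-JAN-2000', 'DD-MON-YYYY')
   AND mt.transaction_quantity < 0
   AND SUBSTR (NVL (mt.attribute3, '00'), 1, 1) = '0'
   AND NVL (mt.transaction_reference, 'XXX') NOT LIKE 'NO%'
   AND NVL (mt.transaction_reference, 'YYY') NOT LIKE 'KR%'
   AND NVL (mt.transaction_reference, 'ZZZ') NOT LIKE 'UNSO_CANCEL%'
   AND EXISTS (SELECT 1
                 FROM mtl_parameters mp
                WHERE mp.master_organization_id = 1773
                  AND mp.organization_id = mt.organization_id)
   AND EXISTS (SELECT 1
                 FROM mtl_transaction_types mtt,
                      apps.sga_denpyo_comm_maint sdcm
                WHERE mtt.transaction_type_id = mt.transaction_type_id
                  AND mtt.transaction_action_id = mt.transaction_action_id
                  AND mtt.transaction_source_type_id = mt.transaction_source_type_id
                  AND sdcm.mtl_type_id = mtt.transaction_type_id
                  AND sdcm.unso_event = 'M'
                  AND sdcm.unso_extract = 'Y')
 ORDER BY mt.transaction_id;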
Similar Messages
-
Important Validations for MTL Material Transaction
Hi All,
I am creating an OAF page for material transaction -- for material issue and receipt.
For this I am inserting a record into the MTL transactions interface table and then running the transaction manager API in Oracle, which populates the mtl_material_transactions table.
But after the insertion I found that the record errors out due to Oracle validation. The item may be lot or serial controlled.
Please let me know the validations performed by this program, so that I can handle these before inserting into interface tables.
Any help/document will be appreciated.
You can mail me to [email protected] as well.
Regards
Riyas
Riyas,
You have to insert the serial number if it is a serial-controlled item, or it will always fail. Before inserting into the interface table you can validate whether the serial number exists in wsh_serial_numbers or mtl_serial_numbers.
You can compare the values of inventory_item_id to get the serial numbers.
You can use the below query to validate the records. It might be helpful.
1st -> select * from oe_order_headers_all where header_id = 4838351
2nd -> select * from wsh_delivery_details where source_header_id = 4839902
3rd -> select * from wsh_serial_numbers where delivery_detail_id = 5088694
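Before inserting, a quick existence check against the base serial table can also help; here is a minimal sketch (the bind names are placeholders, not from the thread):
SELECT msn.serial_number, msn.current_status
  FROM mtl_serial_numbers msn
 WHERE msn.inventory_item_id = :p_inventory_item_id   -- placeholder bind
   AND msn.serial_number     = :p_serial_number;      -- placeholder bind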
INSERT INTO mtl_serial_numbers_interface
(source_code, source_line_id,
transaction_interface_id,
last_update_date, last_updated_by,
creation_date, created_by,
last_update_login, fm_serial_number,
to_serial_number, product_code
--product_transaction_id
)
VALUES ('Miscellaneous issue', 7730351,
71737725,
--mtl_material_transactions_s.NEXTVAL, --transaction_interface_id
SYSDATE, --LAST_UPDATE_DATE
fnd_global.user_id, --LAST_UPDATED_BY
SYSDATE, --CREATION_DATE
fnd_global.user_id, --CREATED_BY
fnd_global.login_id, --LAST_UPDATE_LOGIN
'168-154-701',
--FM_SERIAL_NUMBER
'168-154-701', --TO_SERIAL_NUMBER
'RCV'
--PRODUCT_CODE
--l_rcv_transactions_interface_s
--v_txn_interface_id --product_transaction_id
);
-
MTL MATERIAL TRANSACTION error 'CALL_TO_CONTRACTS'. Please help.
I am getting the following error from the MTL_TRANSACTIONS_INTERFACE table while running a material transaction.
You have encountered an unexpected error in 'CALL_TO_CONTRACTS': (No Data Found).
I know that there is a query that is not being properly fulfilled and it is giving me back this error.
Does anybody know the specific reason that this error happens?
I don't know how to track down where this query is. I'm trying to find the package that it is in.
Any help would be met with much thanks.
I get this error too when trying to display a content area with a single URL element (displaying the CA as a portlet).
When the URL is set to open in a NEW window no error.
But if I select the URL item to open inside the folder (CA) I get the error.
Pls let me know if you find a solution since I would like to have my CA shown ONLY as a portlet and not as a new window.... -
MTL MATERIAL TRANSACTIONS having zero unit cost
Hi All,
For some of the POs we are getting zero unit cost in the Receiving account, but the rest of the POs are working fine and correct data is populated.
This is not tied to any particular org either. Any idea what the reason could be for the cost being zero there?
I am new to costing/Finance module, so please let me know if I am not making sense or missing any information that will be important to get the solution.
Thanks,
Amit
What is your costing method in that organization? Can you please check the price in the PO for that item which is giving you this issue?
This can happen. If the price in the PO is zero, then whichever costing method you are using, this is what happens when you receive:
Cr Accrual Account @Po Price
Dr Receiving Account @Po Price (could be Zero)
When you deliver to inventory, the value will change depending on the costing method:
In std costing
Cr Receiving @ po price (zero)
Cr Purchase Price Variance @std cost
Dr Inventory @ std cost
In avg costing
Cr Receiving cost @ Po Price
Dr Inventory @ po price
But the cost will be recalculated as
(onhand_qty * current_cost + po_qty * po_price) / (onhand_qty + po_qty). So if the cost was 100, on-hand quantity was 1, PO quantity is 1 and PO price is zero, the cost will become 50.
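As a quick way to check the PO price mentioned above, here is a hedged query sketch (the bind for the PO number is a placeholder):
SELECT poh.segment1 po_number, pol.line_num, pol.item_id, pol.unit_price
  FROM po_headers_all poh, po_lines_all pol
 WHERE pol.po_header_id = poh.po_header_id
   AND poh.segment1 = :p_po_number;   -- placeholder bind for the PO number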
Thanks
Nagamohan -
Pls help me to modify the query for performance improvement
Hi,
I have the below initialization
DECLARE @Active bit =1 ;
Declare @id int
SELECT @Active=CASE WHEN id=@id and [Rank] ='Good' then 0 else 1 END FROM dbo.Students
I have to change this query in such a way that the conditions id=@id and [Rank]='Good' go into the WHERE clause of the query. In that case, how can I use a CASE statement to retrieve 1 or 0? Can you please help me to modify this initialization?
I don't understand your query... Maybe the below? Or provide us sample data and your expected output...
SELECT * FROM dbo.students
where @Active=CASE
WHEN id=@id and rank ='Good' then 0 else 1 END
But I doubt you will see a performance improvement here.
Do you have an index on id? If not, a sketch follows below.
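A possible covering index (the index name is hypothetical; it simply matches the id and [Rank] lookup used in this thread):
-- Hypothetical index supporting the id = @id AND [Rank] = 'Good' lookup.
CREATE NONCLUSTERED INDEX IX_Students_Id_Rank
    ON dbo.Students (id, [Rank]);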
If you are looking to get the data for @id with rank = 'Good', then use the below. Make sure you have an index on the (id, rank) combination.
SELECT * FROM dbo.students
where id=@id
and rank ='Good' -
Spatial vs. materialized views/query rewrite
Dear all,
we are trying to use Spatial (Locator) functionality together with performance optimization using materialized views and query rewrite, and it does not seem to work. Does anybody have experience with this?
The problem in more detail:
* There is a spatial attribute (of type GEOMETRY) in our table;
* we define a materialized view on that table;
* we run a query that could be better answered using the materialized view with query rewrite;
* the optimizer does not choose the plan using the materialized view; query rewrite does not take place.
This happens even if neither the materialized view nor the query contains the spatial attribute.
The explanation given by the procedure DBMS_MVIEW.Explain_Rewrite is:
"QSM-01064 query has a fixed table or view Cause: Query
rewrite is not allowed if query references any fixed tables or views"
We are using Oracle 9iR2, Enterprise Edition, with Locator. Nevertheless, it would also be interesting to know if there is any improvement in 10g.
A more complicated task, using materialized views to optimize spatial operations (e.g., sdo_relate) would also be very interesting, as spatial joins are very expensive operations.
Thanks in advance for any comments, ideas!
Cheers,
Gergely Lukacs
Hi Dan,
thanks for your rapid response!
A simple example is:
alter session set query_rewrite_integrity=trusted;
alter session set query_rewrite_enabled=true;
set serveroutput on;
/* Creating testtable */
CREATE TABLE TESTTABLE (
KEY1 NUMBER (4) NOT NULL,
KEY2 NUMBER (8) NOT NULL,
KEY3 NUMBER (14) NOT NULL,
NAME VARCHAR2 (255),
X NUMBER (9,2),
Y NUMBER (9,2),
ATTR1 VARCHAR2 (2),
ATTR2 VARCHAR2 (30),
ATTR3 VARCHAR2 (80),
ATTR4 NUMBER (7),
ATTR5 NUMBER (4),
ATTR6 NUMBER (5),
ATTR7 VARCHAR2 (40),
ATTR8 VARCHAR2 (40),
CONSTRAINT TESTTABLE_PK
PRIMARY KEY ( KEY1, KEY2, KEY3 ));
/* Creating materialized view */
CREATE MATERIALIZED VIEW TESTTABLE_MV
REFRESH COMPLETE
ENABLE QUERY REWRITE
AS SELECT DISTINCT ATTR7, ATTR8
FROM TESTTABLE;
/* Creating statistics, just to make sure */
execute dbms_stats.gather_table_stats(ownname=> 'TESTSCHEMA', tabname=> 'TESTTABLE', cascade=>TRUE);
execute dbms_stats.gather_table_stats(ownname=> 'TESTSCHEMA', tabname=> 'TESTTABLE_MV', cascade=>TRUE);
/* Explain rewrite procedure */
DECLARE
Rewrite_Array SYS.RewriteArrayType := SYS.RewriteArrayType();
querytxt VARCHAR2(1500) :=
'SELECT COUNT(*) FROM (
SELECT DISTINCT
ATTR8 FROM
TESTTABLE)';
i NUMBER;
BEGIN
DBMS_MVIEW.Explain_Rewrite(querytxt, 'TESTTABLE_MV', Rewrite_Array);
FOR i IN 1..Rewrite_Array.count
LOOP
DBMS_OUTPUT.PUT_LINE(Rewrite_Array(i).message);
END LOOP;
END;
The message you get is:
QSM-01009 materialized view, string, matched query text
Cause: The query was rewritten using a materialized view, because query text matched the materialized view text.
Action: No action required.
i.e. query rewrite works!
/* Adding geometry column to the testtable -- not to the materialized view, and not to the query! */
ALTER TABLE TESTTABLE
ADD GEOMETRYATTR mdsys.sdo_geometry;
/* Explain rewrite procedure */
DECLARE
Rewrite_Array SYS.RewriteArrayType := SYS.RewriteArrayType();
querytxt VARCHAR2(1500) :=
'SELECT COUNT(*) FROM (
SELECT DISTINCT
ATTR8 FROM
TESTTABLE)';
i NUMBER;
BEGIN
DBMS_MVIEW.Explain_Rewrite(querytxt, 'TESTTABLE_MV', Rewrite_Array);
FOR i IN 1..Rewrite_Array.count
LOOP
DBMS_OUTPUT.PUT_LINE(Rewrite_Array(i).message);
END LOOP;
END;
The messages you get are:
QSM-01064 query has a fixed table or view
Cause: Query rewrite is not allowed if query references any fixed tables or views.
Action: No action required.
QSM-01019 no suitable materialized view found to rewrite this query
Cause: There doesn't exist any materialized view that can be used to rewrite this query.
Action: Consider creating a new materialized view.
i.e. query rewrite does not work!
If this works, the next issue is to use materialized views for optimizing spatial operations, e.g., a spatial join. I can supply you with an example, if necessary (only makes sense, I think, after the first problem is solved).
Thanks in advance for any ideas, comments!
Cheers,
Gergely -
Flashback Transaction Query very SLOWWWW
We are planning to make numerous changes to data in our database soon and we
want to be able to use flashback_transaction to rollback these changes if we
need to. I have been able to use flashback_transaction_query to capture and
create the undo sql but it is a VERY slow process. I have lowered the
db_flashback_retention_target from 1140 to 360 in an attempt to reduce the
amount of data we have to read to capture the undo sql but that didn't seem to
help. Even with the db_flashback_retention_target set to 360 I am seeing
statements over 6 hours old. Is there any way to speed up the process of
capturing the undo sql? Here is the sql I use:
select undo_sql
from flashback_transaction_query
where logon_user = 'ROLLOUT';
This information is from the documentation -> http://download-uk.oracle.com/docs/cd/B19306_01/appdev.102/b14251/adfns_flashback.htm#sthref1493
Flashback Tips – Performance
* For better performance, generate statistics on all tables involved in a Flashback Query by using the DBMS_STATS package, and keep the statistics current. Flashback Query always uses the cost-based optimizer, which relies on these statistics.
* The performance of a query into the past depends on how much undo data must be accessed. For better performance, use queries to select small sets of past data using indexes, not to scan entire tables. If you must do a full table scan, consider adding a parallel hint to the query.
* The performance cost in I/O is the cost of paging in data and undo blocks that are not already in the buffer cache. The performance cost in CPU use is the cost of applying undo information to affected data blocks. When operating on changes in the recent past, flashback features are essentially CPU bound.
* Use index structures for Flashback Version Query: the database keeps undo data for index changes as well as data changes. Performance of index lookup-based Flashback Version Query is an order of magnitude faster than the full table scans that are otherwise needed.
* In a Flashback Transaction Query, the type of the xid column is RAW(8). To take advantage of the index built on the xid column, use the HEXTORAW conversion function: HEXTORAW(xid). A sketch follows after this list.
* Flashback Query against a materialized view does not take advantage of query rewrite optimizations.
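A minimal sketch of the HEXTORAW usage from the tip above (the XID value here is a made-up placeholder, not a real transaction id):
SELECT operation, undo_sql
  FROM flashback_transaction_query
 WHERE xid = HEXTORAW('0600030021000000');  -- placeholder XID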
See Also:
Oracle Database Performance Tuning Guide
Also taking sql trace and analysing would help -> http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14211/sqltrace.htm#PFGRF01020 -
Performance improvement in OBIEE 11.1.1.5
Hi all,
In OBIEE 11.1.1.5, reports take a long time to load. Kindly provide me with some performance improvement guides.
Thanks,
Haree.
Hi Haree,
Steps to improve the performance.
1. implement caching mechanism
2. use aggregates
3. use aggregate navigation
4. limit the number of initialisation blocks
5. turn off logging
6. carry out calculations in database
7. use materialized views if possible
8. use database hints
9. alter the NQSConfig.INI parameters
Note: calculate all the aggregates in the repository itself and create a fast refresh for the MVs (materialized views).
You can also schedule an iBot to run the report every hour or so, so that the report data is cached and the BI Server serves it from the cache when the user runs the report.
This is the latest version for OBIEE11g.
http://blogs.oracle.com/pa/resource/Oracle_OBIEE_Tuning_Guide.pdf
Report level:
1. Enable cache -- in NQSConfig.INI, change the cache setting from NO to YES.
2. Go to the Physical layer --> right-click the table --> Properties --> check Cacheable.
3. Try to implement Aggregate mechanism.
4.Create Index/Partition in Database level.
There are multiple other ways to fine tune reports from OBIEE side itself:
1) You can check for your measures granularity in reports and have level base measures created in RPD using OBIEE utility.
http://www.rittmanmead.com/2007/10/using-the-obiee-aggregate-persistence-wizard/
This will pick your aggr tables and not detailed tables.
2) You can use Caching Seeding options. Using ibot or Using NQCMD command utility
http://www.artofbi.com/index.php/2010/03/obiee-ibots-obi-caching-strategy-with-seeding-cache/
http://satyaobieesolutions.blogspot.in/2012/07/different-to-manage-cache-in-obiee-one.html
OR
http://hiteshbiblog.blogspot.com/2010/08/obiee-schedule-purge-and-re-build-of.html
Using one of the above 2 methods, you can fine tune your reports and reduce the query time.
Also, on a safer side, just take the physical SQL from log and run it directly on DB to see the time taken and check for the explain plan with the help of a DBA.
Hope this helps.
Thanks,
Satya
Edited by: Satya Ranki Reddy on Aug 12, 2012 7:39 PM
Edited by: Satya Ranki Reddy on Aug 12, 2012 8:12 PM
Edited by: Satya Ranki Reddy on Aug 12, 2012 8:20 PM -
HKONG: Definition of the processes that handle Material Transaction Interface data
PURPOSE
To define the processes that handle Material Transaction Interface data.
Explanation
The related information is as follows.
To process Material Transaction Interface data, the following two processes are run.
- INCTCM (Process transaction Interface)
- INCTCW (Inventory transactions worker)
(1)
Records are processed into this table by the INCTCM - Process Transactions Interface from the Interface tables :
MTL_TRANSACTIONS_INTERFACE to MTL_MATERIAL_TRANSACTIONS_TEMP
MTL_TRANSACTION_LOTS_INTERFACE to MTL_TRANSACTION_LOTS_TEMP (when lots are used)
MTL_SERIAL_NUMBERS_INTERFACE to MTL_SERIAL_NUMBERS_TEMP (when serials are used)
==> INCTCM validates the data in the interface tables and then moves it to the temp tables.
(2)
After the records are processed from the MTL_TRANSACTIONS_INTERFACE into the MTL_MATERIAL_TRANSACTIONS_TEMP
by the INCTCM - Process Transactions Interface,
a worker will be launched to process the record from MTL_MATERIAL_TRANSACTIONS_TEMP into MTL_MATERIAL_TRANSACTIONS.
The worker is called INCTCW - Inventory Transaction Worker.
The INCTCM - Process Transactions Interface will launch a single INCTCW - Inventory Transaction Worker for all rows
that meet the criteria in MTL_MATERIAL_TRANSACTIONS_TEMP :
TRANSACTION_MODE = 3
LOCK_FLAG = N
PROCESS_FLAG = Y
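A hedged monitoring sketch against those criteria (the error columns are included only for convenience; this query is mine, not part of the original note):
SELECT transaction_temp_id, organization_id, inventory_item_id,
       transaction_mode, lock_flag, process_flag,
       error_code, error_explanation
  FROM mtl_material_transactions_temp
 WHERE transaction_mode = 3
   AND lock_flag = 'N'
   AND process_flag = 'Y';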
Once the process is complete the records will be moved into the corresponding
transaction table :
MTL_MATERIAL_TRANSACTIONS_TEMP to MTL_MATERIAL_TRANSACTIONS
MTL_TRANSACTION_LOTS_TEMP to MTL_TRANSACTION_LOT_NUMBERS
MTL_SERIAL_NUMBERS_TEMP to MTL_UNIT_TRANSACTIONS
==> INCTCM calls INCTCW, and this process inserts the data from the TEMP tables into the MMT and inventory tables.
The rows in mtl_transactions_interface are processed in 5 phases.
1. Derives dependent columns, e.g. acct_period_id, primary_quantity, etc.
2. Detailed validation performed on the records
3. On-hand quantity checks, for negative quantities, etc.
4. Reservations Relieved if demand was created in order entry
5. Rows are moved to mtl_material_transactions_temp where the
transaction processor is called to process these rows and update the inventory levels etc..
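If rows stall in the interface table, a hedged check like the following can show why; treating process_flag = 3 as the error state is an assumption on my part, not something stated in the note:
SELECT transaction_interface_id, process_flag, lock_flag,
       error_code, error_explanation
  FROM mtl_transactions_interface
 WHERE process_flag = 3;   -- assumed to mark errored rows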
Reference Documents
------------------- -
IB:How to update a serial Number which has Inventory Material Transactions?
Dear friends
first of all thanks for your time and valuable solutions
Install base: How to update a serial Number which has Inventory Material Transactions
problem description:
Install base > quick search
Here is the Install Base record when I query from quick search:
Rec# Item Item Instance Serial Number Status
1 300-7000-01 3000000 1000XXX-0538JQ0003 Return for Adv Exchange
2 300-7000-01 8000000 1000XXX-0538JQ0003- Return for Adv Exchange
3 300-7000-01 5000000 1000XXX-0538JQ0003-A Return for Adv Exchange
Looking at the above data, the first and third records are the legitimate serial numbers (correct according to the client specs); the second record is not legitimate since it has a dash as a suffix. We found that many illegitimate serial numbers exist and they need to be updated with the right serial numbers, which I analyzed in Excel after pulling data from mtl_material_transactions, oe_order_lines_all, mtl_serial_numbers and mtl_system_items_b.
basically these are all RMAs
I need to update the second record to 1094SUZ-0538JQ0003-B as per the guidelines. While updating, I need to keep all the existing contracts, warranty and whatever material transactions it has unchanged.
We have a package that updates serial numbers using the IB API (csi_Item_Instance_Pub.update_item_instance), but it only updates records that have no serial number for the instance; if a serial number already exists, it does not work.
The error message is "Serial Number 1094SUZ-0538JQ0003- has Inventory Material Transactions. This serial number cannot be used to update an existing Item Instance", but I need to update this anyway!! Or am I missing something here? Please advise.
The post below looks like a similar issue and talks about a hard update. I have no clue; by doing that, the updated serial number should keep the same transactions, contracts, dates, etc. attached to it as the previous serial number.
IB UPDATE_ITEM_INSTANCE ERROR - doesn't allow ACTIVE_START_DATE to change
It would be great if you guys could help me out; really appreciated!!
Unfortunately I couldn't find any solution in Metalink for updating an existing serial number.
code for updating the serial number using IB API
x_msg_count := 0;
x_msg_data := '';
p_instance_rec.instance_id := rec.child_instance_id;
p_instance_rec.serial_number := rec.child_serial_number;
p_instance_rec.object_version_number := rec.child_object_number;
p_txn_rec.transaction_id := Fnd_Api.g_miss_num;
p_txn_rec.transaction_date := SYSDATE;
p_txn_rec.source_transaction_date := SYSDATE;
p_txn_rec.transaction_type_id := 1;
csi_Item_Instance_Pub.update_item_instance (
p_api_version => 1.0,
p_commit => Fnd_Api.g_false,
p_init_msg_list => Fnd_Api.g_false,
p_validation_level => 1,
p_instance_rec => p_instance_rec,
p_ext_attrib_values_tbl => p_ext_attrib_values_tbl,
p_party_tbl => p_party_tbl,
p_account_tbl => p_account_tbl,
p_pricing_attrib_tbl => p_pricing_attrib_tbl,
p_org_assignments_tbl => p_org_assignments_tbl,
p_asset_assignment_tbl => p_asset_assignment_tbl,
p_txn_rec => p_txn_rec,
x_instance_id_lst => x_instance_id_lst,
x_return_status => x_return_status,
x_msg_count => x_msg_count,
x_msg_data => x_msg_data
);
Thanks
Suri
Suri,
I used this. It may not be perfect but should get you there. This will only work if the table is registered (all the seeded tables should be registered).
select distinct a.table_name,b.column_name from fnd_tables a, fnd_columns b
where a.table_id=b.table_id
and upper(b.column_name) like '%SERIAL%'
Also, this is a very old one, but if you need history for this change, add the history-insert logic as well.
DECLARE
l_return_err VARCHAR2 (80);
PROCEDURE debug (p_message IN VARCHAR2)
IS
BEGIN
dbms_output.put_line (SUBSTR (p_message, 1, 255));
END debug;
BEGIN
debug('======================================================================');
debug('Switching from serial number XDT07406. to XDT07406 ');
debug('======================================================================');
UPDATE fa_additions_b
SET serial_number = 'XDT07406'
WHERE serial_number = 'XDT07406.';
debug('No of rows in fa_additions_b updated :'||sql%rowcount);
UPDATE fa_mass_additions
SET serial_number = 'XDT07406'
WHERE serial_number = 'XDT07406.';
debug('No of rows in fa_mass_additions updated :'||sql%rowcount);
UPDATE rcv_serial_transactions
SET serial_num = 'XDT07406'
WHERE serial_num = 'XDT07406.';
debug('No of rows in rcv_serial_transactions updated :'||sql%rowcount);
UPDATE mtl_serial_numbers
SET serial_number = 'XDT07406'
WHERE serial_number = 'XDT07406.';
debug('No of rows in mtl_serial_numbers updated :'||sql%rowcount);
UPDATE mtl_unit_transactions
SET serial_number = 'XDT07406'
WHERE serial_number = 'XDT07406.';
debug('No of rows in mtl_unit_transactions updated :'||sql%rowcount);
UPDATE csi_item_instances_h
SET new_serial_number = 'XDT07406'
WHERE new_serial_number = 'XDT07406.';
debug('No of rows in csi_item_instances_h updated :'||sql%rowcount);
UPDATE csi_t_txn_line_details
SET serial_number = 'XDT07406'
WHERE serial_number = 'XDT07406.';
debug('No of rows in csi_t_txn_line_details updated :'||sql%rowcount);
UPDATE csi_item_instances
SET serial_number = 'XDT07406'
WHERE serial_number = 'XDT07406.';
debug('No of rows in csi_item_instances updated :'||sql%rowcount);
UPDATE wsh_delivery_details
SET serial_number = 'XDT07406'
WHERE serial_number = 'XDT07406.';
debug('No of rows in wsh_delivery_details updated :'||sql%rowcount);
debug('======================================================================');
debug('Switching from serial number jct20591 to JCT20591 ');
debug('======================================================================');
UPDATE fa_additions_b
SET serial_number = 'JCT20591'
WHERE serial_number = 'jct20591';
debug('No of rows in fa_additions_b updated :'||sql%rowcount);
UPDATE fa_mass_additions
SET serial_number = 'JCT20591'
WHERE serial_number = 'jct20591';
debug('No of rows in fa_mass_additions updated :'||sql%rowcount);
UPDATE rcv_serial_transactions
SET serial_num = 'JCT20591'
WHERE serial_num = 'jct20591';
debug('No of rows in rcv_serial_transactions updated :'||sql%rowcount);
UPDATE mtl_serial_numbers
SET serial_number = 'JCT20591'
WHERE serial_number = 'jct20591';
debug('No of rows in mtl_serial_numbers updated :'||sql%rowcount);
UPDATE mtl_unit_transactions
SET serial_number = 'JCT20591'
WHERE serial_number = 'jct20591';
debug('No of rows in mtl_unit_transactions updated :'||sql%rowcount);
UPDATE csi_item_instances_h
SET new_serial_number = 'JCT20591'
WHERE new_serial_number = 'jct20591';
debug('No of rows in csi_item_instances_h updated :'||sql%rowcount);
UPDATE csi_t_txn_line_details
SET serial_number = 'JCT20591'
WHERE serial_number = 'jct20591';
debug('No of rows in csi_t_txn_line_details updated :'||sql%rowcount);
UPDATE csi_item_instances
SET serial_number = 'JCT20591'
WHERE serial_number = 'jct20591';
debug('No of rows in csi_item_instances updated :'||sql%rowcount);
COMMIT;
EXCEPTION
WHEN OTHERS
THEN
l_return_err :='Updating in one of the script has this error:'|| substrb(sqlerrm, 1, 55);
debug('Value of l_return_err='||l_return_err);
END;
Thanks
Nagamohan -
"cannot perform a DML operation inside a query" error when using table func
Hello, please help me.
I created the following table function. When I use it with "select * from table(customerRequest_list);"
I receive this error: "cannot perform a DML operation inside a query".
Can you solve this problem?
CREATE OR REPLACE FUNCTION customerRequest_list(
p_sendingDate varchar2:=NULL,
p_requestNumber varchar2:=NULL,
p_branchCode varchar2:=NULL,
p_bankCode varchar2:=NULL,
p_numberOfchekbook varchar2:=NULL,
p_customerAccountNumber varchar2:=NULL,
p_customerName varchar2:=NULL,
p_checkbookCode varchar2:=NULL,
p_sendingBranchCode varchar2:=NULL,
p_branchRequestNumber varchar2:=NULL)
RETURN customerRequest_nt
PIPELINED
IS
ob customerRequest_object:=customerRequest_object(
NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL);
condition varchar2(2000 char):=' WHERE 1=1 ';
TYPE rectype IS RECORD(
requestNumber VARCHAR2(32 char),
branchRequestNumber VARCHAR2(32 char),
branchCode VARCHAR2(50 char),
bankCode VARCHAR2(50 char),
sendingDate VARCHAR2(32 char),
customerAccountNumber VARCHAR2(50 char),
customerName VARCHAR2(200 char),
checkbookCode VARCHAR2(50 char),
numberOfchekbook NUMBER(2),
sendingBranchCode VARCHAR2(50 char),
numberOfIssued NUMBER(2));
rec rectype;
dDate date;
sDate varchar2(25 char);
TYPE curtype IS REF CURSOR; --RETURN customerRequest%rowtype;
cur curtype;
my_branchRequestNumber VARCHAR2(32 char);
my_branchCode VARCHAR2(50 char);
my_bankCode VARCHAR2(50 char);
my_sendingDate date;
my_customerAccountNumber VARCHAR2(50 char);
my_checkbookCode VARCHAR2(50 char);
my_sendingBranchCode VARCHAR2(50 char);
BEGIN
IF NOT (regexp_like(p_sendingDate,'^[[:digit:]]{4}/[[:digit:]]{2}/[[:digit:]]{2}$')
OR regexp_like(p_sendingDate,'^[[:digit:]]{4}/[[:digit:]]{2}/[[:digit:]]{2}[[:space:]]{1}[[:digit:]]{2}:[[:digit:]]{2}:[[:digit:]]{2}$')) THEN
RAISE_APPLICATION_ERROR(-20000,cbdpkg.get_e_m(-1,5));
ELSIF (p_sendingDate IS NOT NULL) THEN
dDate:=TO_DATE(p_sendingDate,'YYYY/MM/DD hh24:mi:ss','nls_calendar=persian');
dDate:=trunc(dDate);
sDate:=TO_CHAR(dDate,'YYYY/MM/DD hh24:mi:ss');
condition:=condition|| ' AND ' || 'sendingDate='||'TO_DATE('''||sDate||''',''YYYY/MM/DD hh24:mi:ss'''||')';
END IF;
IF (p_requestNumber IS NOT NULL) AND (cbdpkg.isspace(p_requestNumber)=0) THEN
condition:=condition|| ' AND ' || ' requestNumber='||p_requestNumber;
END IF;
IF (p_bankCode IS NOT NULL) AND (cbdpkg.isspace(p_bankCode)=0) THEN
condition:=condition|| ' AND ' || ' bankCode='''||p_bankCode||'''';
END IF;
IF (p_branchCode IS NOT NULL) AND (cbdpkg.isspace(p_branchCode)=0) THEN
condition:=condition|| ' AND ' || ' branchCode='''||p_branchCode||'''';
END IF;
IF (p_numberOfchekbook IS NOT NULL) AND (cbdpkg.isspace(p_numberOfchekbook)=0) THEN
condition:=condition|| ' AND ' || ' numberOfchekbook='''||p_numberOfchekbook||'''';
END IF;
IF (p_customerAccountNumber IS NOT NULL) AND (cbdpkg.isspace(p_customerAccountNumber)=0) THEN
condition:=condition|| ' AND ' || ' customerAccountNumber='''||p_customerAccountNumber||'''';
END IF;
IF (p_customerName IS NOT NULL) AND (cbdpkg.isspace(p_customerName)=0) THEN
condition:=condition|| ' AND ' || ' customerName like '''||'%'||p_customerName||'%'||'''';
END IF;
IF (p_checkbookCode IS NOT NULL) AND (cbdpkg.isspace(p_checkbookCode)=0) THEN
condition:=condition|| ' AND ' || ' checkbookCode='''||p_checkbookCode||'''';
END IF;
IF (p_sendingBranchCode IS NOT NULL) AND (cbdpkg.isspace(p_sendingBranchCode)=0) THEN
condition:=condition|| ' AND ' || ' sendingBranchCode='''||p_sendingBranchCode||'''';
END IF;
IF (p_branchRequestNumber IS NOT NULL) AND (cbdpkg.isspace(p_branchRequestNumber)=0) THEN
condition:=condition|| ' AND ' || ' branchRequestNumber='''||p_branchRequestNumber||'''';
END IF;
dbms_output.put_line(condition);
OPEN cur FOR 'SELECT branchRequestNumber,
branchCode,
bankCode,
sendingDate,
customerAccountNumber ,
checkbookCode ,
sendingBranchCode
FROM customerRequest '|| condition ;
LOOP
FETCH cur INTO my_branchRequestNumber,
my_branchCode,
my_bankCode,
my_sendingDate,
my_customerAccountNumber ,
my_checkbookCode ,
my_sendingBranchCode;
EXIT WHEN (cur%NOTFOUND) OR (cur%NOTFOUND IS NULL);
BEGIN
SELECT requestNumber,
branchRequestNumber,
branchCode,
bankCode,
TO_CHAR(sendingDate,'yyyy/mm/dd','nls_calendar=persian'),
customerAccountNumber ,
customerName,
checkbookCode ,
numberOfchekbook ,
sendingBranchCode ,
numberOfIssued INTO rec FROM customerRequest FOR UPDATE NOWAIT;
--problem point is this
EXCEPTION
when no_data_found then
null;
END ;
ob.requestNumber:=rec.requestNumber ;
ob.branchRequestNumber:=rec.branchRequestNumber ;
ob.branchCode:=rec.branchCode ;
ob.bankCode:=rec.bankCode ;
ob.sendingDate :=rec.sendingDate;
ob.customerAccountNumber:=rec.customerAccountNumber ;
ob.customerName :=rec.customerName;
ob.checkbookCode :=rec.checkbookCode;
ob.numberOfchekbook:=rec.numberOfchekbook ;
ob.sendingBranchCode:=rec.sendingBranchCode ;
ob.numberOfIssued:=rec.numberOfIssued ;
PIPE ROW(ob);
IF (cur%ROWCOUNT>500) THEN
CLOSE cur;
RAISE_APPLICATION_ERROR(-20000,cbdpkg.get_e_m(-1,4));
EXIT;
END IF;
END LOOP;
CLOSE cur;
RETURN;
END;
Now what exactly would be the point of putting a SELECT FOR UPDATE in an autonomous transaction?
I think OP should start by considering why he has a function with an undesirable side effect in the first place. -
Flashback transaction query advice
I need to be able to see all transactions on a table during a specific period of time (about every 15 minutes). I have a choice between creating a DB trigger on the table that captures the transactions and writes them to a new table, or using the flashback transaction query feature. I haven't used flashback transaction query before in a production DB and was wondering if there are any performance issues I need to worry about (since there will be continuous changes to the table). I think my query would be something like the one below and I would be executing it every 15 minutes. Any advice/tips would be appreciated.
SELECT last_name, versions_starttime, versions_operation
FROM emp versions BETWEEN TIMESTAMP
SYSTIMESTAMP - INTERVAL '15' MINUTE AND SYSTIMESTAMP
WHERE versions_starttime is not null
ORDER BY versions_endtime asc
Edited by: bobmagan on Feb 7, 2013 5:32 AM
Actually, this is a Flashback Versions query ( http://docs.oracle.com/cd/E11882_01/appdev.112/e17125/adfns_flashback.htm#i1019938 ), not a Flashback Transaction query (http://docs.oracle.com/cd/E11882_01/appdev.112/e17125/adfns_flashback.htm#i1007455 ). I found that the Flashback Version Queries were more reliable than the Flashback Transaction Queries.
Be aware that truncates and other DDL will cause a 'break' and you will not be able to query before that.
Make sure your undo_retention is set to keep undo long enough for your purposes.
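For example, a quick check-and-adjust sketch (the 3600-second value is only an illustration, not a recommendation from this thread):
-- Check the current setting (in seconds).
SELECT value FROM v$parameter WHERE name = 'undo_retention';
-- Raise it if needed; 3600 here is just an example value.
ALTER SYSTEM SET undo_retention = 3600;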
Test well. You may or may not see versions that were created within the same transaction. -
MV Refresh Performance Improvements in 11g
Hi there,
the 11g new features guide, says in section "1.4.1.8 Refresh Performance Improvements":
"Refresh operations on materialized views are now faster with the following improvements:
1. Refresh statement combinations (merge and delete)
2. Removal of unnecessary refresh hint
3. Index creation for UNION ALL MV
4. PCT refresh possible for UNION ALL MV
While I understand (3.) and (4.) I don't quite understand (1.) and (2.). Has there been a change in the internal implementation of the refresh (from a single MERGE statement)? If yes, then which? Is there a Note or something in the knowledge base, about these enhancements in 11g? I couldn't find any.
Considerations are necessary for the decision whether to migrate to 11g or not...
Thanks in advance.
I am not quite sure what you mean. Do you mean perhaps that the MV logs work correctly when you perform MERGE statements with DELETE on the detail tables of the MV?
And where are the performance improvements? What is the refresh hint?
Though I am using MVs and MV logs at the moment, our app performs deletes and inserts in the background (no merges). The MV-log-based fast refresh scales very badly, meaning that performance drops very quickly as the changed data set grows. -
Why GN_INVOICE_CREATE has no performance improvement even in HANA landscape?
Hi All,
We have a pricing update program which is used to update the price for a Material-Customer combination (CMC). This update is done using the FM 'GN_INVOICE_CREATE'.
The logic is designed to loop on customers, wherein this FM will be called passing all the materials valid for that customer.
This process takes days (approx. 5 days) to execute and update the CMC for 100 million records.
Hence we are planning to move towards HANA for a better improvement in performance.
We designed the same programs in the HANA landscape and executed it in both systems for 1 customer and 1000 material combination.
Unfortunately, both the systems gave same runtimes around 27 seconds for execution.
This is very disappointing considering the performance improvement we expected on the HANA landscape.
Could anyone throw light on any areas where we are missing out and why no performance improvement was obtained ?
Also, are there any configuration-related changes to be done on the HANA landscape for better performance?
The details regarding both the systems are as below.
Suite on HANA:
SAP_BASIS : 740
SAP_APPL : 617
ECC
SAP_BASIS : 731
SAP_APPL : 606
Also see the screenshots of the system details (HANA and ECC) in the original post.
Thanks & regards,
Naseem
Hi,
just to fill in on Lars' already exhaustive comments:
Migrating to HANA gives you lots of options to replace your own functionality (custom ABAP code) with HANA artifacts - views or SQLScript procedures. This is where you can really gain on performance. Expecting ABAP code to automatically run faster on HANA may be unrealistic, since it depends on the functionality of the code and how well it "translates" to a HANA environment. The key to really minimizing run time is to replace DB calls with specific HANA views or procedures, then call these from your code.
I wrote a blog on this; you might find it useful as a general introduction:
A practical example of ABAP on HANA optimization
When it comes to SAP standard code, like your mentioned FM, it is true that SAP is migrating some of this functionality to HANA-optimized versions, but this doesn't mean everything will be optimized in one go. This particular FM is probably not among those being initially selected for "HANAification", so you basically have to either create your own functionality (which might not be advisable due to the fact that this might violate data integrity) or just be patient.
But again, the beauty of HANA lies in the brand new options for developers to utilize the new ways of pushing code down to the DB server. Check out the recommendations from Lars and you'll find yourself embarking on a new and exciting journey!
Also - as a good starting point - check out the HANA developer course on open.sap.com.
Regards,
Trond -
Performance improve using TEZ/HIVE
Hi,
I'm a newbie in HDInsight. Sorry for asking simple questions. I have queries around performance improvement of my Hive query on file data of 90 GB (15 GB * 6).
We have set the execution engine to Tez. I heard the Avro format improves execution speed. Is the Avro SerDe enabled for Tez queries, or do I need to upload *.jar files to WASB? I'm using the latest version. Any sample query?
In Tez, will the ORC column format and Avro compression work together when we set the ORC compression level in Hive to Snappy or LZO? Is there any limitation on the number of columns for ORC tables?
Is there a best compression technique for uploading a data file to Blob, i.e. compress and then upload? I used *.gz, which compressed the file to about 1/4th of its size, and uploaded it to Blob, but the problem is that *.gz is not splittable and always uses a single mapper. Should I use Avro with Snappy compression instead? Does the Microsoft Avro Library perform Snappy compression, or is there a compression format that is both splittable and compressible?
If the data structure of the file changes over time, will there be a need to reload older data? Can the existing query work without a change in code?
It has been said that Tez has real-time reporting capability, but when I query the 90 GB file (including GROUP BY and ORDER BY clauses) it takes almost 8 minutes on 20 nodes. Are there any pointers to improve performance further and get the query result in seconds?
Mahender
-- Tez is an execution engine; I don't think you need any additional jar file to get the Avro SerDe working on Hive when Tez is used. You can use AvroSerDe, AvroContainerInputFormat & AvroContainerOutputFormat to get Avro working when Tez is used.
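A minimal sketch of such an Avro-backed table (the table name, columns and location are placeholders; depending on the Hive version you may instead need to supply the schema through TBLPROPERTIES 'avro.schema.literal' or 'avro.schema.url'):
CREATE EXTERNAL TABLE events_avro (
  event_id   string,
  event_date string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS
  INPUTFORMAT  'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/data/events_avro';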
-- I tried creating a table with about 220 columns; although the table was empty, I was able to query from it. How many columns does your table hold?
CREATE EXTERNAL TABLE LargColumnTable02(t1 string,.... t220 string)
PARTITIONED BY(EventDate string) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS ORC LOCATION '/data'
tblproperties("orc.compress"="SNAPPY");
-- You can refer to
http://dennyglee.com/2013/03/12/using-avro-with-hdinsight-on-azure-at-343-industries/
(the "Getting Avro data into Azure Blob Storage" section).
-- It depends on what data has changed and whether you are using Hadoop, HBase, etc.
-- You will have to monitor your application and check the node manager logs if there is any pause in execution again. It depends on what you are doing; I would suggest opening a support case to investigate further.