Performance tuning for String operation in Field routine
Dear Experts,
I am writing an ABAP field-level routine to capture an occurrence of a string.
The string may occur in any of the values in the ITAB. The ITAB has around 100 thousand records and the source package 400 thousand records.
My coding goes as follows:
LOOP AT INV_ITAB INTO INV_WA.
  IF SOURCE_FIELDS-/BIC/X_ASSIGNM CS INV_WA-XFIELD+3(7).
    RESULT = INV_WA-PO_NO.
    EXIT.
  ELSE.
    RESULT = 'NA'.
  ENDIF.
ENDLOOP.
Now the coding takes more than 15 hours and is still running. (It works fine for a small number of records.)
I believe the problem is in the LOOP statement (it runs around 400000 * 100000 times). Is it possible for me to somehow improve this code so as to decrease the load time?
Kindly help on this.
Thanks,
Rajarathnam.S
Hi,
I think you can use this code in your start routine instead of the field-level routine; it will speed up your process, as it checks at data-package level instead of checking records one by one.
I am just putting down some logic:
LOOP AT INV_ITAB INTO INV_WA.
  IF Datapackage-/BIC/X_ASSIGNM CS INV_WA-XFIELD+3(7).
    Datapackage-targetfield = INV_WA-PO_NO.
    EXIT.
  ELSE.
    Datapackage-targetfield = 'NA'.
  ENDIF.
ENDLOOP.
Regards,
Ravi
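Whichever routine it lands in, the nested scan is the real cost (up to 400000 * 100000 CS comparisons). Below is a minimal sketch of a faster lookup, assuming the 7-character key always starts at a fixed offset of /BIC/X_ASSIGNM (offset 0 is assumed here; if the key can occur anywhere in the field, CS remains necessary). Field names come from the posts above; the PO_NO length is an assumption.
* Sketch only: the key offset in X_ASSIGNM and the PO_NO length are assumptions.
TYPES: BEGIN OF ty_inv,
         key7(7)   TYPE c,
         po_no(10) TYPE c,
       END OF ty_inv.
DATA: lt_inv TYPE STANDARD TABLE OF ty_inv,
      ls_inv TYPE ty_inv.
* Build and sort the lookup table once per data package.
LOOP AT INV_ITAB INTO INV_WA.
  ls_inv-key7  = INV_WA-XFIELD+3(7).
  ls_inv-po_no = INV_WA-PO_NO.
  APPEND ls_inv TO lt_inv.
ENDLOOP.
SORT lt_inv BY key7.
* Then one binary-search read per source record instead of a full scan.
READ TABLE lt_inv INTO ls_inv
     WITH KEY key7 = SOURCE_FIELDS-/BIC/X_ASSIGNM(7)
     BINARY SEARCH.
IF sy-subrc = 0.
  RESULT = ls_inv-po_no.
ELSE.
  RESULT = 'NA'.
ENDIF.
This replaces the O(records * itab) scan with an O(records * log itab) lookup.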
Similar Messages
-
Performance Tuning for BAM 11G
Hi All
Can anyone guide me to any documents or tips related to performance tuning for BAM 11G on Linux?
It would help to know if you have any specific issue. There are a number of tweaks all the way from the DB to the browser.
Few key things to follow:
1. Make sure you create indexes on the DO. If there is too much old, unused data in the DO, delete it periodically. Similar to relational database indexes, defining indexes in Oracle BAM creates and maintains an ordered list of data object elements for fast retrieval.
2. Ensure that IE is set up to do automatic caching. This will help reduce server round trips.
3. Tune DB performance. This would typically require a DBA. Identify the SQL statements most likely to be causing the waits by looking at
the drilldown Top SQL Statements Ordered by Wait Time. Use SQL Analyze, EXPLAIN PLAN, or the tkprof utility to tune the queries that were identified.
Check the data object tables involved in the query for missing indexes.
4. Use batching (this is on by default for most cases)
5. Fast network
6. Use profilers to look at machine load/cpu usage and distribute components on different boxes if needed.
7. Use better server AND client hardware. BAM dashboards are heavy users of ajax/javascript logic on the client. -
What are the steps for doing performance tuning for a particular program?
Check this link:
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
checkout these links:
www.sapgenie.com/abap/performance.htm
www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm
www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_Introduction.asp
Message was edited by: Chandrasekhar Jagarlamudi -
Performance Tuning for Concurrent Reports
Hi,
Can you help me with Performance Tuning for Concurrent Reports/Requests ?
It was running fine but is suddenly running slow.
Request Name: Participation Process: Compensation program
What is your application release?
Please see if (Performance Issues With Participation Process: Compensation Workbench [ID 389979.1]) is applicable.
To enable trace/debug, please see (FAQ: Common Tracing Techniques within the Oracle Applications 11i/R12 [ID 296559.1] -- 5. How does one enable trace for a concurrent program INCLUDING bind variables and waits?).
Thanks,
Hussein -
Performance tuning for siebel CRM application on oracle database
Hi,
Please send me links for performance tuning for the Siebel CRM application on an Oracle database. If there are any white papers, please send me the links.
Thanks,
Rajesh
Hi,
This metalink document is very useful, if you have any other documents or links please inform me.
Thanks once again
Rajesh -
Performance Tuning for ECC 6.0
Hi All,
I have an ECC 6.0 / EP 7.0 (ABAP+JAVA) system. It is very slow. I have Oracle 10.2.0.1.
Can you please guide me on how to do these steps in the system:
1) Reorganization should be done at least for the top 10 huge tables
and their indexes
2) Unaccessed data can be taken out by SAP Archiving
3) Apply the relevant corrections for the top SAP standard objects
4) CBO update statistics must be up to date for all SAP and customer objects
I have never done performance tuning and want to do it on this system.
Regards,
Jitender
Hi,
Below are the details from ST06. Please suggest what I should do; the system performance is very bad.
I require your inputs for performance tuning
CPU
Utilization user % 3 Count 2
system % 3 Load average 1min 0.11
idle % 1 5 min 0.21
io wait % 93 15 min 0.22
System calls/s 982 Context switches/s 1752
Interrupts/s 4528
Memory
Physical mem avail Kb 6291456 Physical mem free Kb 93992
Pages in/s 473 Kb paged in/s 3784
Pages out/s 211 Kb paged out/s 1688
Pool
Configured swap Kb 26869896 Maximum swap-space Kb 26869896
Free in swap-space Kb 21631032 Actual swap-space Kb 26869896
Disk with highest response time
Name md3 Response time ms 51
Utilization 2 Queue 0
Avg wait time ms 0 Avg service time ms 51
Kb transfered/s 2 Operations/s 0
Current parameters in the system
System: sapretail_RET_01 Profile Parameters for SAP Buffers
Date and Time: 08.01.2009 13:27:54
Buffer Name Comment
Profile Parameter Value Unit Comment
Program buffer PXA
abap/buffersize 450000 kB Size of program buffer
abap/pxa shared Program buffer mode
|
CUA buffer CUA
rsdb/cua/buffersize 3000 kB Size of CUA buffer
The number of max. buffered CUA objects is always: size / (2 kB)
|
Screen buffer PRES
zcsa/presentation_buffer_area 4400000 Byte Size of screen buffer
sap/bufdir_entries 2000 Max. number of buffered screens
|
Generic key table buffer TABL
zcsa/table_buffer_area 30000000 Byte Size of generic key table buffer
zcsa/db_max_buftab 5000 Max. number of buffered objects
|
Single record table buffer TABLP
rtbb/buffer_length 10000 kB Size of single record table buffer
rtbb/max_tables 500 Max. number of buffered tables
|
Export/import buffer EIBUF
rsdb/obj/buffersize 4096 kB Size of export/import buffer
rsdb/obj/max_objects 2000 Max. number of objects in the buffer
rsdb/obj/large_object_size 8192 Bytes Estimation for the size of the largest object
rsdb/obj/mutex_n 0 Number of mutexes in Export/Import buffer
|
OTR buffer OTR
rsdb/otr/buffersize_kb 4096 kB Size of OTR buffer
rsdb/otr/max_objects 2000 Max. number of objects in the buffer
rsdb/otr/mutex_n 0 Number of mutexes in OTR buffer
|
Exp/Imp SHM buffer ESM
rsdb/esm/buffersize_kb 4096 kB Size of exp/imp SHM buffer
rsdb/esm/max_objects 2000 Max. number of objects in the buffer
rsdb/esm/large_object_size 8192 Bytes Estimation for the size of the largest object
rsdb/esm/mutex_n 0 Number of mutexes in Exp/Imp SHM buffer
|
Table definition buffer TTAB
rsdb/ntab/entrycount 20000 Max. number of table definitions buffered
The size of the TTAB is nearly 100 bytes * rsdb/ntab/entrycount
|
Field description buffer FTAB
rsdb/ntab/ftabsize 30000 kB Size of field description buffer
rsdb/ntab/entrycount 20000 Max. number / 2 of table descriptions buffered
FTAB needs about 700 bytes per used entry
|
Initial record buffer IRBD
rsdb/ntab/irbdsize 6000 kB Size of initial record buffer
rsdb/ntab/entrycount 20000 Max. number / 2 of initial records buffered
IRBD needs about 300 bytes per used entry
|
Short nametab (NTAB) SNTAB
rsdb/ntab/sntabsize 3000 kB Size of short nametab
rsdb/ntab/entrycount 20000 Max. number / 2 of entries buffered
SNTAB needs about 150 bytes per used entry
|
Calendar buffer CALE
zcsa/calendar_area 500000 Byte Size of calendar buffer
zcsa/calendar_ids 200 Max. number of directory entries
|
Roll, extended and heap memory EXTM
ztta/roll_area 3000000 Byte Roll area per workprocess (total)
ztta/roll_first 1 Byte First amount of roll area used in a dialog WP
ztta/short_area 3200000 Byte Short area per workprocess
rdisp/ROLL_SHM 16384 8 kB Part of roll file in shared memory
rdisp/PG_SHM 8192 8 kB Part of paging file in shared memory
rdisp/PG_LOCAL 150 8 kB Paging buffer per workprocess
em/initial_size_MB 4092 MB Initial size of extended memory
em/blocksize_KB 4096 kB Size of one extended memory block
em/address_space_MB 4092 MB Address space reserved for ext. mem. (NT only)
ztta/roll_extension 2000000000 Byte Max. extended mem. per session (external mode)
abap/heap_area_dia 2000000000 Byte Max. heap memory for dialog workprocesses
abap/heap_area_nondia 2000000000 Byte Max. heap memory for non-dialog workprocesses
abap/heap_area_total 2000000000 Byte Max. usable heap memory
abap/heaplimit 40000000 Byte Workprocess restart limit of heap memory
abap/use_paging 0 Paging for flat tables used (1) or not (0)
|
Statistic parameters
rsdb/staton 1 Statistic turned on (1) or off (0)
rsdb/stattime 0 Times for statistic turned on (1) or off (0)
Regards,
Jitender -
Performance tuning for Sales Order and its configuration data extraction
Here is the data-fetching subroutine of an extract report.
This report takes 2.5 hours to extract 36,000 records on the quality server.
Kindly provide me some suggestions for performance tuning it.
SELECT auart vkorg vtweg spart vkbur augru
kunnr yxinsto bstdk vbeln kvgr1 kvgr2 vdatu
gwldt audat knumv
FROM vbak
INTO TABLE it_vbak
WHERE vbeln IN s_vbeln
AND erdat IN s_erdat
AND auart IN s_auart
AND vkorg = p_vkorg
AND spart IN s_spart
AND vkbur IN s_vkbur
AND vtweg IN s_vtweg.
IF NOT it_vbak[] IS INITIAL.
SELECT mvgr1 mvgr2 mvgr3 mvgr4 mvgr5
yyequnr vbeln cuobj
FROM vbap
INTO TABLE it_vbap
FOR ALL ENTRIES IN it_vbak
WHERE vbeln = it_vbak-vbeln
AND posnr = '000010'.
SELECT bstkd inco1 zterm vbeln
prsdt
FROM vbkd
INTO TABLE it_vbkd
FOR ALL ENTRIES IN it_vbak
WHERE vbeln = it_vbak-vbeln.
SELECT kbetr kschl knumv
FROM konv
INTO TABLE it_konv
FOR ALL ENTRIES IN it_vbak
WHERE knumv = it_vbak-knumv
AND kschl = 'PN00'.
SELECT vbeln parvw kunnr
FROM vbpa
INTO TABLE it_vbpa
FOR ALL ENTRIES IN it_vbak
WHERE vbeln = it_vbak-vbeln
AND parvw IN ('PE', 'YU', 'RE').
ENDIF.
LOOP AT it_vbap INTO wa_vbap.
IF NOT wa_vbap-cuobj IS INITIAL.
CALL FUNCTION 'VC_I_GET_CONFIGURATION'
EXPORTING
instance = wa_vbap-cuobj
language = sy-langu
TABLES
configuration = it_config
EXCEPTIONS
instance_not_found = 1
internal_error = 2
no_class_allocation = 3
instance_not_valid = 4
OTHERS = 5.
IF sy-subrc = 0.
READ TABLE it_config WITH KEY atnam = 'IND_PRODUCT_LINES'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
READ TABLE it_config WITH KEY atnam = 'IND_GQ'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
READ TABLE it_config WITH KEY atnam = 'IND_VKN'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
READ TABLE it_config WITH KEY atnam = 'IND_ZE'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
READ TABLE it_config WITH KEY atnam = 'IND_HQ'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
READ TABLE it_config WITH KEY atnam = 'IND_CALCULATED_INST_HOURS'.
IF sy-subrc = 0.
wa_char-obj = wa_vbap-cuobj.
wa_char-atnam = it_config-atnam.
wa_char-atwrt = it_config-atwrt.
APPEND wa_char TO it_char.
CLEAR wa_char.
ENDIF.
ENDIF.
ENDIF.
ENDLOOP. " End of loop on it_vbap
Edited by: jaya rangwani on May 11, 2010 12:50 PM
Edited by: jaya rangwani on May 11, 2010 12:52 PM
Hello Jaya,
Here are some points which will increase the performance of the program:
1. VBAK & VBAP are header & item tables, so the relation is 1 to many. In this case, you can use an inner join instead of multiple select statements.
2. If you are confident in handling inner joins, you can use a single statement to get the data from VBAK, VBAP & VBKD.
3. Before using FOR ALL ENTRIES, check whether the internal table is not initial,
and sort the internal table and delete adjacent duplicates.
4. Sort all the resulting internal tables by the required key fields and always read using BINARY SEARCH.
You will find a number of documents that give a fair idea of what should and should not be done in a program with performance issues.
There are also a number of function modules and BAPIs that return sales order details. You can try 'BAPISDORDER_GETDETAILEDLIST'. A sketch of the join from points 1 and 2 follows.
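A minimal sketch of that join, reusing the field lists and conditions from the original selects (shortened for illustration; IT_JOIN is an assumed internal table matching the field list, and note that joining VBKD on VBELN alone can multiply rows if VBKD holds item-level entries):
* Sketch only: one join instead of separate selects on VBAK/VBAP/VBKD.
SELECT k~vbeln k~auart k~knumv
       p~cuobj p~mvgr1
       d~bstkd d~inco1 d~zterm
  INTO TABLE it_join
  FROM vbak AS k
  INNER JOIN vbap AS p ON p~vbeln = k~vbeln
  INNER JOIN vbkd AS d ON d~vbeln = k~vbeln
  WHERE k~vbeln IN s_vbeln
    AND k~erdat IN s_erdat
    AND k~vkorg = p_vkorg
    AND p~posnr = '000010'.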
Regards,
Selva K. -
Performance tuning for new OFMW environment
Hello
We have just set up a new UAT environment entirely on the latest Oracle FMW products, which includes:
1) OS -RHEL V5
2) WebCenter
3) SOA Suite
4) Database
5) IDM
6) HTTP Server
7) WebLogic Server
As a post-infrastructure-setup plan we need to do performance tuning as well. Shall we do only WLS performance tuning, or do we need to tune each and every product in order to get optimum results?
Any guide or resource to help?
Pls advise.
Regards
Dev
When you install the product SOA Suite (and probably WebCenter too) a prerequisite check is performed
that gives information about packages and parameters that need to be adjusted on your operating system.
For products that run on WebLogic Server you probably have to tune the JVM, examples can be found here:
- http://middlewaremagic.com/weblogic/?p=6930 (discusses JRockit)
- http://middlewaremagic.com/weblogic/?p=7083 (focuses on Coherence, but the considerations can be applied to other environments as well)
Examples that set up the SOA Suite can be found here:
- http://middlewaremagic.com/weblogic/?p=6040 (discusses considerations when installing the SOA suite)
- http://middlewaremagic.com/weblogic/?p=6872 (discusses considerations when adding the web-tier to the SOA suite)
The enterprise deployment guides can help too:
- SOA Suite: http://download.oracle.com/docs/cd/E21764_01/core.1111/e12036/toc.htm
- WebCenter: http://download.oracle.com/docs/cd/E21764_01/core.1111/e12037/toc.htm
These also contain sections on integration with Oracle Identity Management and the Oracle HTTP Server -
ABAP Logic/Structure for a Start and Field Routine in Transformations
My requirement is to export data from a data target to the application server.
For that purpose I built an APD.
In the transformations, to read data from the master data table, I wrote the global declaration and the start and field routines below.
Start Routine:
Global Declaration:
DATA: it_dep TYPE STANDARD TABLE OF /BI0/MDEPT,
      is_dep TYPE /BI0/MDEPT.
* Select once per data package (not once per source row) and sort,
* so that the binary search in the field routine works correctly.
IF NOT SOURCE_PACKAGE[] IS INITIAL.
  SELECT * FROM /BI0/MDEPT INTO TABLE it_dep
    FOR ALL ENTRIES IN SOURCE_PACKAGE
    WHERE depLOYEE = SOURCE_PACKAGE-dep
      AND OBJVERS  = 'A'
      AND DATETO   GE SY-DATUM.
  SORT it_dep BY depLOYEE.
ENDIF.
Field Routine:
CLEAR is_dep.
READ TABLE it_dep INTO is_dep
     WITH KEY depLOYEE = SOURCE_FIELDS-deployee
     BINARY SEARCH.
IF sy-subrc = 0.
  RESULT = is_dep-USERNAME.
ENDIF.
Now for another field, the 'Manager' name.
My requirement:
Start Routine:
(Sub Dept is an attribute of Dept, and Sub Dept is referenced to Dept.)
First it should copy all the sub-depts for the corresponding depts in the source field to a temporary table (TEMP1).
For all sub-depts in the TEMP1 table it should copy manager names from the dept master data table to a TEMP2 table.
In the start routine I need to first read TEMP1; the result from TEMP1 should be passed to TEMP2, and the result from TEMP2 can be passed to the result field (a sketch follows below).
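A rough sketch of that TEMP1/TEMP2 flow, staying close to the start routine above; SUBDEPT, MANAGER, and the DEPT key field are hypothetical names of /BI0/MDEPT attributes used only for illustration:
* Sketch only: DEPT, SUBDEPT and MANAGER are hypothetical field names.
DATA: lt_temp1 TYPE STANDARD TABLE OF /BI0/MDEPT,  " TEMP1: sub-depts
      lt_temp2 TYPE STANDARD TABLE OF /BI0/MDEPT.  " TEMP2: managers
* TEMP1: sub-departments of the departments in the source package.
IF NOT SOURCE_PACKAGE[] IS INITIAL.
  SELECT * FROM /BI0/MDEPT INTO TABLE lt_temp1
    FOR ALL ENTRIES IN SOURCE_PACKAGE
    WHERE dept    = SOURCE_PACKAGE-dep     " hypothetical key field
      AND objvers = 'A'.
ENDIF.
* TEMP2: manager names for the sub-departments collected in TEMP1.
IF NOT lt_temp1[] IS INITIAL.
  SELECT * FROM /BI0/MDEPT INTO TABLE lt_temp2
    FOR ALL ENTRIES IN lt_temp1
    WHERE dept    = lt_temp1-subdept       " hypothetical attribute
      AND objvers = 'A'.
  SORT lt_temp2 BY dept.
ENDIF.
* The field routine would then READ lt_temp2 ... BINARY SEARCH and
* pass the MANAGER attribute to RESULT.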
Please update.
Hi,
I am providing you sample code; please modify it (field and table names as per your requirement).
Please write the code in the transformation rule of field Emp_TDate.
Map field Emp_SDATE to the target field Emp_TDATE.
* The original SELECT had no target area; SELECT SINGLE INTO added here.
DATA ls_emp TYPE /BIC/AEMPPED00.
SELECT SINGLE * FROM /BIC/AEMPPED00 INTO ls_emp
  WHERE emp_sdate NE ' '.
IF sy-subrc IS INITIAL.
  RESULT = SOURCE_FIELDS-emp_sdate.
ELSE.
  RESULT = ' '.
ENDIF.
Please replace the emp_SDATE field with the source field name.
But I still have some questions...
1. On what basis do you decide the latest record?
Can you please explain the scenario a bit more clearly?
Thanks
Dipika
Edited by: Dipika Tyagi on Jun 24, 2008 8:47 AM -
Performance Tuning for a report
Hi,
We have developed a program which updates 2 fields, namely Reorder Point and Rounding Value, on the MRP1 tab in TCode MM03.
To update the fields, we are using the BAPI BAPI_MATERIAL_SAVEDATA.
The problem is that when we upload the data using a txt file, the program takes a very long time. Recently when we uploaded a file containing 2,00,000 records, it took 27 hours. Below is the main portion of the code (I have omitted the OPEN DATASET etc.). Please help us fine-tune this, so that we can upload these 2,00,000 records in 2-3 hours.
select matnr from mara into table t_mara.
select werks from t001w into corresponding fields of table t_t001w .
select matnr werks from marc into corresponding fields of table t_marc.
loop at str_table into wa_table.
if not wa_table-partnumber is initial.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
EXPORTING
INPUT = wa_table-partnumber
IMPORTING
OUTPUT = wa_table-partnumber.
endif.
clear wa_message.
read table t_mara into wa_mara with key matnr = wa_table-partnumber.
if sy-subrc is not initial.
concatenate 'material ' wa_table-partnumber ' doesnot exists'
into wa_message.
append wa_message to t_message.
endif.
read table t_t001w into wa_t001w with key werks = wa_table-HostLocID.
if sy-subrc is not initial.
concatenate 'plant ' wa_table-HostLocID ' doesnot exists' into
wa_message.
append wa_message to t_message.
else.
case wa_t001w-werks.
when 'DE40'
or 'DE42'
or 'DE44'
or 'CN61'
or 'US62'
or 'SG70'
or 'FI40'.
read table t_marc into wa_marc with key matnr = wa_table-partnumber
werks = wa_table-HostLocID.
if sy-subrc is not initial.
concatenate 'material' wa_table-partnumber ' not extended to plant'
wa_table-HostLocID into wa_message.
append wa_message to t_message.
endif.
when others.
concatenate 'plant ' wa_table-HostLocID ' not allowed'
into wa_message.
append wa_message to t_message.
endcase.
endif.
if wa_message is initial.
data: wa_headdata type BAPIMATHEAD,
wa_PLANTDATA type BAPI_MARC,
wa_PLANTDATAx type BAPI_MARCX.
wa_headdata-MATERIAL = wa_table-PartNumber.
wa_PLANTDATA-plant = wa_table-HostLocID.
wa_PLANTDATAX-plant = wa_table-HostLocID.
wa_PLANTDATA-REORDER_PT = wa_table-ROP.
wa_PLANTDATAX-REORDER_PT = 'X'.
wa_plantdata-ROUND_VAL = wa_table-EOQ.
wa_plantdatax-round_val = 'X'.
CALL FUNCTION 'BAPI_MATERIAL_SAVEDATA'
EXPORTING
HEADDATA = wa_headdata
PLANTDATA = wa_PLANTDATA
PLANTDATAX = wa_PLANTDATAX
IMPORTING
RETURN = t_bapiret.
CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'.
write t_bapiret-message.
endif.
clear: wa_mara, wa_t001w, wa_marc.
endloop.
loop at t_message into wa_message.
write wa_message.
endloop.
Thanks in advance.
Peter
Edited by: kishan P on Sep 17, 2010 4:50 PM
Hi Peter,
I would suggest a few changes in your code. Please refer to the procedure below to optimize it.
Steps:
Please run an SE30 runtime analysis and find out whether the ABAP code or the database fetch is taking the time.
Please run the extended program check or Code Inspector to remove any errors and warnings.
A few code changes that I would suggest in your code:
For the select queries from t001w & marc, remove the CORRESPONDING FIELDS clause, as this also reduces performance. (For this you can define an internal table with only the required fields, in the order they are specified in the table, and execute a select query to fetch those fields.)
Also put an initial check that str_table[] is not initial before you execute the loop.
Wherever you have used READ TABLE, please sort those tables and use BINARY SEARCH.
Please clear the work areas after every APPEND statement.
As I don't have an SAP system handy, I would also check whether the BAPI's importing parameter structure is a table. In case it is a table, I would pass all the records to this table directly and then pass it to the BAPI, rather than looping over every record and updating one by one. (A sketch of the sorted-read change follows.)
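A minimal sketch of the narrow select plus sorted binary-search read (wa_marc is assumed to be retyped to match the reduced field list):
* Sketch only: narrow field list, then sorted binary-search reads.
TYPES: BEGIN OF ty_marc,
         matnr TYPE marc-matnr,
         werks TYPE marc-werks,
       END OF ty_marc.
DATA: t_marc  TYPE STANDARD TABLE OF ty_marc,
      wa_marc TYPE ty_marc.
SELECT matnr werks FROM marc INTO TABLE t_marc.  " no CORRESPONDING
SORT t_marc BY matnr werks.
READ TABLE t_marc INTO wa_marc
     WITH KEY matnr = wa_table-partnumber
              werks = wa_table-HostLocID
     BINARY SEARCH.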
Hope this helps to resolve your problem.
Have a nice day
Thanks -
Need help in Performance tuning for function...
Hi all,
I am using the algorithm below (the Luhn algorithm) to calculate the 15th Luhn digit for an IMEI (phone SIM card).
But the function below takes about 6 minutes for 5 million records. I have 170 million records in a table and want to calculate the Luhn digit for all of them, which might take 4-5 hours. Please help me with performance tuning (a better way or better logic for the Luhn calculation) of the function below.
A Wikipedia link for the Luhn algorithm is provided below.
Create or Replace FUNCTION AddLuhnToIMEI (LuhnPrimitive VARCHAR2)
RETURN VARCHAR2
AS
Index_no NUMBER (2) := LENGTH (LuhnPrimitive);
Multiplier NUMBER (1) := 2;
Total_Sum NUMBER (4) := 0;
Plus NUMBER (2);
ReturnLuhn VARCHAR2 (25);
BEGIN
WHILE Index_no >= 1
LOOP
Plus := Multiplier * (TO_NUMBER (SUBSTR (LuhnPrimitive, Index_no, 1)));
Multiplier := (3 - Multiplier);
Total_Sum := Total_Sum + TO_NUMBER (TRUNC ( (Plus / 10))) + MOD (Plus, 10);
Index_no := Index_no - 1;
END LOOP;
ReturnLuhn := LuhnPrimitive || CASE
WHEN MOD (Total_Sum, 10) = 0 THEN '0'
ELSE TO_CHAR (10 - MOD (Total_Sum, 10))
END;
RETURN ReturnLuhn;
EXCEPTION
WHEN OTHERS
THEN
RETURN (LuhnPrimitive);
END AddLuhnToIMEI;
http://en.wikipedia.org/wiki/Luhn_algorithm
Any sort of help is much appreciated....
Thanks
Rede
There is an unneeded TO_NUMBER in it: TRUNC will already return a number.
Also the MOD function can be avoided in some steps. Since a digit multiplied by 2 will never be higher than 18, you can speed up the calculation with this.
create or replace
FUNCTION AddLuhnToIMEI_fast (LuhnPrimitive VARCHAR2)
RETURN VARCHAR2
AS
Index_no pls_Integer;
Multiplier pls_Integer := 2;
Total_Sum pls_Integer := 0;
Plus pls_Integer;
rest pls_integer;
ReturnLuhn VARCHAR2 (25);
BEGIN
for Index_no in reverse 1..LENGTH (LuhnPrimitive) LOOP
Plus := Multiplier * TO_NUMBER (SUBSTR (LuhnPrimitive, Index_no, 1));
Multiplier := 3 - Multiplier;
if Plus < 10 then
Total_Sum := Total_Sum + Plus ;
else
Total_Sum := Total_Sum + Plus - 9;
end if;
END LOOP;
rest := MOD (Total_Sum, 10);
ReturnLuhn := LuhnPrimitive || CASE WHEN rest = 0 THEN '0' ELSE TO_CHAR (10 - rest) END;
RETURN ReturnLuhn;
END AddLuhnToIMEI_fast;
/
My tests gave an improvement of about 40%.
The next step to try could be native compilation of this function. This can give an additional big boost.
Edited by: Sven W. on Mar 9, 2011 8:11 PM -
Need Performance tuning in delete operation
Hi Gurus,
I am performing a delete operation with the following SQL query:
delete from gl_account where bu_id = -99
but it takes a long time to execute. The table has 1 trigger and 5 indexes. I have disabled the trigger and rebuilt the indexes, but it still does not finish.
Here is my explain plan.
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
DELETE STATEMENT Optimizer Mode=ALL_ROWS 561 19
DELETE OFFLINETESTDB.GL_ACCOUNT
INDEX RANGE SCAN OFFLINETESTDB.BU_ID 561 27 K 2
Pls help me out to solve this.
Hi All,
I am still facing the same performance problem when deleting rows in this table.
I have attached my TKPROF output below for your consideration.
TKPROF: Release 10.2.0.1.0 - Production on Tue Oct 12 14:01:13 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Trace file: rubikon_s002_3952.trc
Sort options: exeela exerow
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
DELETE FROM GL_ACCOUNT
WHERE
GL_ACCT_ID IN (16908,16909,16456)
call count cpu elapsed disk query current rows
Parse 1 0.01 0.13 0 0 0 0
Execute 1 0.03 0.26 0 6 221 3
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.04 0.40 0 6 221 3
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 40 (OFFLINETESTDB)
Rows Row Source Operation
0 DELETE GL_ACCOUNT (cr=177742 pr=160538 pw=0 time=31518664 us)
3 INLIST ITERATOR (cr=6 pr=0 pw=0 time=103 us)
3 INDEX RANGE SCAN GL_ACCOUNT_PK (cr=6 pr=0 pw=0 time=86 us)(object id 65637)
Rows Execution Plan
0 DELETE STATEMENT MODE: ALL_ROWS
0 DELETE OF 'GL_ACCOUNT'
3 INLIST ITERATOR
3 INDEX MODE: ANALYZED (RANGE SCAN) OF 'GL_ACCOUNT_PK'
(INDEX (UNIQUE))
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."GL_ACCOUNT_SUMMARY" where "GL_ACCT_ID" = :1 and
"GL_ACCT_NO" = :2
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.00 0.06 0 0 0 0
Fetch 3 0.00 0.00 0 6 0 3
total 7 0.00 0.06 0 6 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=6 pr=0 pw=0 time=236 us)
0 VIEW index$_join$_001 (cr=6 pr=0 pw=0 time=185 us)
0 HASH JOIN (cr=6 pr=0 pw=0 time=172 us)
0 INDEX RANGE SCAN GL_ACCOUNT_SUMMARY_IX2 (cr=6 pr=0 pw=0 time=82 us)(object id 65648)
0 INDEX RANGE SCAN GL_ACCOUNT_SUMMARY_IX1 (cr=0 pr=0 pw=0 time=0 us)(object id 65647)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."GL_ACCOUNT_QUARTERLY_STAT" where "GL_ACCT_ID" = :1 and
"GL_ACCT_NO" = :2
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.00 0.06 0 0 0 0
Fetch 3 1.64 20.79 108398 109500 0 3
total 7 1.64 20.86 108398 109500 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=109500 pr=108398 pw=0 time=20797344 us)
0 TABLE ACCESS FULL GL_ACCOUNT_QUARTERLY_STAT (cr=109500 pr=108398 pw=0 time=20797279 us)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."GL_ACCOUNT_MONTHLY_STAT" where "GL_ACCT_ID" = :1 and
"GL_ACCT_NO" = :2
call count cpu elapsed disk query current rows
Parse 2 0.00 0.00 0 0 0 0
Execute 4 0.00 0.06 0 0 1 0
Fetch 3 0.75 10.11 52140 59532 0 3
total 9 0.75 10.18 52140 59532 1 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Parsing user id: SYS
Rows Row Source Operation
3 SORT AGGREGATE (cr=59532 pr=52140 pw=0 time=10116280 us)
0 TABLE ACCESS FULL GL_ACCOUNT_MONTHLY_STAT (cr=59532 pr=52140 pw=0 time=10116221 us)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."GL_ACCOUNT_RECON_TXN_JOURNAL" where "GL_ACCT_ID" = :1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.00 0.02 0 0 0 0
Fetch 3 0.00 0.00 0 9 0 3
total 7 0.00 0.02 0 9 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=9 pr=0 pw=0 time=138 us)
0 TABLE ACCESS FULL GL_ACCOUNT_RECON_TXN_JOURNAL (cr=9 pr=0 pw=0 time=97 us)
select text
from
view$ where rowid=:1
call count cpu elapsed disk query current rows
Parse 3 0.01 0.00 0 0 0 0
Execute 3 0.01 0.00 0 0 2 0
Fetch 3 0.00 0.00 0 6 0 3
total 9 0.03 0.00 0 6 2 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 TABLE ACCESS BY USER ROWID VIEW$ (cr=1 pr=0 pw=0 time=34 us)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."GL_BULK_CRITERIA" where "GL_ACCT_ID" = :1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.00 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 9 0 3
total 7 0.00 0.00 0 9 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=9 pr=0 pw=0 time=109 us)
0 TABLE ACCESS FULL GL_BULK_CRITERIA (cr=9 pr=0 pw=0 time=71 us)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."GL_ACCOUNT_HISTORY" where "GL_ACCT_ID" = :1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.00 0.00 0 0 0 0
Fetch 3 0.01 0.02 0 5070 0 3
total 7 0.01 0.02 0 5070 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=5070 pr=0 pw=0 time=22519 us)
0 TABLE ACCESS FULL GL_ACCOUNT_HISTORY (cr=5070 pr=0 pw=0 time=22472 us)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."GL_ACCOUNT_BULK_HISTORY" where "GL_ACCT_ID" = :1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.00 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 9 0 3
total 7 0.00 0.00 0 9 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=9 pr=0 pw=0 time=106 us)
0 TABLE ACCESS FULL GL_ACCOUNT_BULK_HISTORY (cr=9 pr=0 pw=0 time=69 us)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."GL_ALLOTMENT" where "POOL_ACCT_ID" = :1 and "POOL_ACCT_NO" =
:2
call count cpu elapsed disk query current rows
Parse 1 0.00 0.02 0 0 0 0
Execute 3 0.00 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 3 0 3
total 7 0.00 0.02 0 3 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=3 pr=0 pw=0 time=113 us)
0 TABLE ACCESS BY INDEX ROWID GL_ALLOTMENT (cr=3 pr=0 pw=0 time=68 us)
0 INDEX RANGE SCAN GL_ALLOTMENT_IX1 (cr=3 pr=0 pw=0 time=50 us)(object id 65651)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."GL_ACCOUNT_YEARLY_STAT" where "GL_ACCT_ID" = :1 and
"GL_ACCT_NO" = :2
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.00 0.00 0 0 0 0
Fetch 3 0.01 0.01 0 3453 0 3
total 7 0.01 0.01 0 3453 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=3453 pr=0 pw=0 time=10485 us)
0 TABLE ACCESS FULL GL_ACCOUNT_YEARLY_STAT (cr=3453 pr=0 pw=0 time=10440 us)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."BU_GL_INTERFACE_ACCOUNT" where "CASH_GL_ACCT_ID" = :1 and
"CASH_GL_ACCT_NO" = :2
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.00 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 9 0 3
total 7 0.00 0.00 0 9 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=9 pr=0 pw=0 time=110 us)
0 TABLE ACCESS FULL BU_GL_INTERFACE_ACCOUNT (cr=9 pr=0 pw=0 time=71 us)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."BU_GL_INTERFACE_ACCOUNT" where "DEPOT_GL_ACCT_ID" = :1 and
"DEPOT_GL_ACCT_NO" = :2
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.00 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 9 0 3
total 7 0.00 0.00 0 9 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=9 pr=0 pw=0 time=107 us)
0 TABLE ACCESS FULL BU_GL_INTERFACE_ACCOUNT (cr=9 pr=0 pw=0 time=71 us)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."GL_TXN_ALLOTTEE" where "GL_ALLOTTEE_ACCT_ID" = :1 and
"GL_ALLOTTEE_ACCT_NO" = :2
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.00 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 9 0 3
total 7 0.00 0.00 0 9 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=9 pr=0 pw=0 time=122 us)
0 TABLE ACCESS FULL GL_TXN_ALLOTTEE (cr=9 pr=0 pw=0 time=84 us)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."BU_GL_INTERFACE_ACCOUNT" where "POSN_GL_ACCT_ID" = :1 and
"POSN_GL_ACCT_NO" = :2
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.00 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 9 0 3
total 7 0.00 0.00 0 9 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=9 pr=0 pw=0 time=110 us)
0 TABLE ACCESS FULL BU_GL_INTERFACE_ACCOUNT (cr=9 pr=0 pw=0 time=69 us)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."GL_ALLOTTEE" where "RECIPIENT_ACCT_ID" = :1 and
"RECIPIENT_ACCT_NO" = :2
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.00 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 9 0 3
total 7 0.00 0.00 0 9 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=9 pr=0 pw=0 time=155 us)
0 TABLE ACCESS FULL GL_ALLOTTEE (cr=9 pr=0 pw=0 time=119 us)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."BU_GL_INTERFACE_ACCOUNT" where "INTER_BU_GL_ACCT_ID" = :1
and "INTER_BU_GL_ACCT_NO" = :2
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.00 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 9 0 3
total 7 0.00 0.00 0 9 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=9 pr=0 pw=0 time=102 us)
0 TABLE ACCESS FULL BU_GL_INTERFACE_ACCOUNT (cr=9 pr=0 pw=0 time=67 us)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."GL_BUDGET_ITEM_DATA" where "GL_ACCT_ID" = :1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.00 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 9 0 3
total 7 0.00 0.00 0 9 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=9 pr=0 pw=0 time=151 us)
0 TABLE ACCESS FULL GL_BUDGET_ITEM_DATA (cr=9 pr=0 pw=0 time=108 us)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."GL_TOTALLING_ACCOUNT_LINE" where "GL_ACCT_ID" = :1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.02 0 0 0 0
Execute 3 0.00 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 9 0 3
total 7 0.00 0.02 0 9 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=9 pr=0 pw=0 time=119 us)
0 TABLE ACCESS FULL GL_TOTALLING_ACCOUNT_LINE (cr=9 pr=0 pw=0 time=82 us)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."SWEEP_FUNDS_XFER" where "TO_GL_ACCT_ID" = :1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.01 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 9 0 3
total 7 0.01 0.00 0 9 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=9 pr=0 pw=0 time=123 us)
0 TABLE ACCESS FULL SWEEP_FUNDS_XFER (cr=9 pr=0 pw=0 time=84 us)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."GL_ACCESS_ACCOUNT_LIST" where "GL_ACCT_ID" = :1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.00 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 9 0 3
total 7 0.00 0.00 0 9 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=9 pr=0 pw=0 time=117 us)
0 TABLE ACCESS FULL GL_ACCESS_ACCOUNT_LIST (cr=9 pr=0 pw=0 time=79 us)
select /*+ all_rows */ count(1)
from
"OFFLINETESTDB"."SETTLEMENT_BANK_ACCOUNT" where "MIRROR_GL_ACCT_ID" = :1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 3 0.00 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 9 0 3
total 7 0.00 0.00 0 9 0 3
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
3 SORT AGGREGATE (cr=9 pr=0 pw=0 time=121 us)
0 TABLE ACCESS FULL SETTLEMENT_BANK_ACCOUNT (cr=9 pr=0 pw=0 time=86 us)
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 8 0.01 0.29 0 0 8 0
Execute 11 0.03 0.32 0 6 222 3
Fetch 7 0.75 10.11 52140 59532 0 7
total 26 0.79 10.73 52140 59538 230 10
Misses in library cache during parse: 7
Misses in library cache during execute: 2
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 47 0.01 0.06 0 0 0 0
Execute 85 0.03 0.18 0 0 2 0
Fetch 85 1.67 20.83 108398 118214 0 85
total 217 1.71 21.08 108398 118214 2 85
Misses in library cache during parse: 21
Misses in library cache during execute: 20
7 user SQL statements in session.
48 internal SQL statements in session.
55 SQL statements in session.
5 statements EXPLAINed in this session.
Trace file: rubikon_s002_3952.trc
Trace file compatibility: 10.01.00
Sort options: exeela exerow
1 session in tracefile.
7 user SQL statements in trace file.
48 internal SQL statements in trace file.
55 SQL statements in trace file.
29 unique SQL statements in trace file.
5 SQL statements EXPLAINed using schema:
OFFLINETESTDB.prof$plan_table
Default table was used.
Table was created.
Table was dropped.
548 lines in trace file.
32 elapsed seconds in trace file.
Thanks & Regards
Sami -
Performance tuning for Oracle SOA 11g
Hi,
Ours is SOA 11g Environment ver 11.1.1.4.2 on Windows Box.
What are the performance tuning steps / guidelines that need to be followed for SOA 11g in a production environment?
Thanks in advance.
Thanks & Regards,
anvv sharma
http://download.oracle.com/docs/cd/E17904_01/core.1111/e10108/toc.htm
Regards,
Anuj -
Performance Tuning for OBIEE Reports
Hi Experts,
I had a requirement for which I ended up building a snowflake model in the physical layer, i.e. one dimension table with three snowflake tables (materialized views).
The key point is that the dimension table is used in most of the OOTB reports,
so all the reports use the other three snowflake tables in the join conditions, due to which the reports take longer than ever, around 10 minutes.
Can anyone suggest good performance tuning tips to tune the reports?
I created some indexes on the materialized view columns and on the dimension table columns.
I created the materialized views with cache enabled, refreshing only once in 24 hours, etc.
Is there anything else I can do to improve performance, or should I consider re-designing the physical layer without the snowflake?
Please provide valuable suggestions and comments.
Thank You
Kumar
Kumar,
Most of the performance tuning should be done at the back end. So calculate all the aggregates in the repository itself and create a fast refresh for the MV. You can also schedule an iBot to run the report every hour or so, so that the report data is cached; when a user runs the report, the BI Server then serves the data from the cache.
Hope that helps
~Srix -
Hi,
Can anyone guide me on how to do performance tuning on an ECC5, MSSQL 2000, Windows 2003 system?
The RAM size here is 3.5 GB.
The page file size was 9 GB; I increased it to 11.5 GB (3*RAM + 1 GB).
I also increased the abap/buffersize to 400000.
Regards,
Arun
Hi Arun,
Go through the following links, which are very useful in understanding the parameter values.
Zero administration memory management
Regarding Virtual Memory?
Regards,
Hari.
PS: Points are welcome.
Maybe you are looking for
-
MacPro Raid Card w/ SAS Drives is battery required?
We have a MacPro (early 2008) dual 2.8 Quad with Mac Raid Card and 4 300g SAS drives. The battery became an issue early on it was always showing low on start-ups. Now it shows failed. I am now running system with 3 drives striped raid 0. Is the batte
-
I have the same problem on all my computers, one running Windows 7 and two with Vista. The three buttons in the upper right corner of the FF windows are not shown correctly. They are soft of cut off in the middle when the FF window is maximized.
-
Handling service-callouts in bpm which doesnt use faults to signal errors
I need to call a bunch of services from my bpm process. Normally i have boundary events on my service call activities, but now i have a whole bunch of services that i need to call that doesnt doesnt respond with faults for error situations. Instead,
-
How can i get a query to count a number of strings?
I want the query to count for me how many holidays there are to the USA in my table. But im not sure how to get it to retreive the result of a string or varchar2 datatype. This is what i have tried so far. select count(COUNTRYVIS) <<this column conta
-
File Content Conversion(SenderFileadapter) fields parameters
Hi all I am doing aFile to File scenario, Using content Conversion at SENDER FILE adapter my source message type is as : <?xml version="1.0" encoding="UTF-8"?> <ns0:MT_Cnet_Source xmlns:ns0="http://abc.com/Cnet"> <<b>HeaderPayment</b>