Performance problem with query on bkpf table
hi, good morning all,
I have a performance problem with the query below on the bkpf table.
SELECT bukrs
belnr
gjahr
FROM bkpf
INTO TABLE ist_bkpf_temp
WHERE budat IN s_budat.
Is there any possibility to improve the performance by using an index?
Please help.
Thanks in advance,
Regards,
srinivas
hi,
If you can add bukrs as an input field, or if you have bukrs in another internal table you can use to filter the data, try one of the following.
For example:
SELECT bukrs
belnr
gjahr
FROM bkpf
INTO TABLE ist_bkpf_temp
WHERE budat IN s_budat
and bukrs in s_bukrs.
or
SELECT bukrs
belnr
gjahr
FROM bkpf
INTO TABLE ist_bkpf_temp
for all entries in itab
WHERE budat IN s_budat
and bukrs = itab-bukrs.
Just see if either of the above is possible; it has to be verified against your requirement.
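One caveat worth adding to the FOR ALL ENTRIES variant: if the driver table is empty, the condition on it is dropped and the whole table is read. A defensive sketch (assuming itab is the internal table from the example above):

```abap
" FOR ALL ENTRIES with an empty driver table selects ALL rows from bkpf,
" so guard the SELECT explicitly.
IF itab IS NOT INITIAL.
  SELECT bukrs belnr gjahr
    FROM bkpf
    INTO TABLE ist_bkpf_temp
    FOR ALL ENTRIES IN itab
    WHERE budat IN s_budat
      AND bukrs = itab-bukrs.
ENDIF.
```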
Similar Messages
-
Performance Problem with Query load
Hello,
after the upgrade to SPS 23, we have some problems with loading a Query. Before the Upgrade, the Query runs 1-3 minutes, and now more then 40 minutes.
Does anyone have an idea?
Regards
Marco

Hi,
Suggest executing the query in transaction RSRT with the option 'Execute + Debugger' to analyze where exactly the query is taking time.
Make sure to choose the appropriate 'Query Display' option (List / BEx Analyzer / HTML) before executing the query in debugger mode, since the display option also affects the query runtime.
Hope this info helps!
Bala Koppuravuri -
Performance problem with query
Hi,
I have a query.
SELECT
CDW_DIM_DATUM_VW.DATUM
,sum(CDW_FT031_DISCO_VW.RATE_CHARGEABLE_DURATION/60)
FROM
CDW_FT031_DISCO_VW,
CDW_DIM_DATUM_VW
WHERE CDW_DIM_DATUM_VW.DATUM >= to_date('20110901','yyyymmdd')
AND CDW_DIM_DATUM_VW.DATUM < to_date('20110911','yyyymmdd')
AND CDW_DIM_DATUM_VW.DATUM_KEY=CDW_FT031_DISCO_VW.DATUM_KEY
GROUP BY
CDW_DIM_DATUM_VW.DATUM
This query normally takes 2 hours on production when it should take about 2 minutes.
On checking the explain plan I understood that it is doing a full table scan on cdw_ft031, the table used in the view (cdw_ft031_disco_vw), which has 30 lakh (3 million) records.
I analyzed the tables and indexes, rebuilt all indexes, and used hints as well, but even then it is doing a full table scan.
There are indexes created on the columns referred to in the where clause.
Kindly help me.

As the sources are VIEWS, your code is not even showing the full picture (besides not showing the tables, indexes, etc.).

> which has got 30 lakh records

From Wikipedia: a lakh is a unit in the Indian numbering system equal to one hundred thousand. So if you want us to understand what your problem is, give us as much information as you have, in terms the whole world understands.
I guess "CDW_DIM_DATUM_VW" is a "time dimension", so it has 1 row per date? Why is it a view?
Then the only join is to CDW_FT031_DISCO_VW, where "FT" stands for "Fact"? (Why is that a view?)
What you want is for it to pick the index (which it has on DATUM_KEY??) instead of doing a full table scan?
Is cdw_ft031 partitioned? (You often partition a fact table by date, and then you see "full table scan" where it really means "full partition scan".)
So your fact table has 3 million rows? And the query takes 2 hours? My laptop would do that in less without any index... something is completely wrong.
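A concrete first step, sketched with the view names from the post (everything else, including the join rewrite, is illustrative): capture the optimizer's plan so the full-scan-vs-index choice can be discussed with numbers.

```sql
-- Rewrite with ANSI join syntax and capture the execution plan.
EXPLAIN PLAN FOR
SELECT d.datum,
       SUM(f.rate_chargeable_duration / 60)
FROM   cdw_dim_datum_vw   d
JOIN   cdw_ft031_disco_vw f ON f.datum_key = d.datum_key
WHERE  d.datum >= DATE '2011-09-01'
AND    d.datum <  DATE '2011-09-11'
GROUP  BY d.datum;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```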
I discourage joining views, but based on what you gave us it is hard to give advice. -
CONNECT BY, Performance problems with hierarchical SQL
Hello,
I have a performance problem with the following SQL:
table name: testtable
columns: colA, colB, colC, colD, colE, colF
Following hierarchical SQL:
SELECT colA||colB||colC AS A1, colD||colE||colF AS B1, colA, colB, colC, level
FROM testtable
CONNECT BY PRIOR A1 = B1
START WITH A1 = 'aaa||bbbb||cccc'
With big tables the performance of this construct is very bad. I can't use function-based indexes to create an index on "colA||colB||colC" and "colD||colE||colF".
Has anyone an idea how I can get really better performance for this hierarchical construct when I have to combine multiple columns? Or is there a better way (with PL/SQL, or a view trigger) to solve something like this?
Thanks in advance for your investigation :)
Carmen

Why not
CONNECT BY PRIOR colA = colD
and PRIOR colB = colE
and ...
? It is not the same thing, but I suspect my version is correct :-) -
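The pairwise suggestion above, written out as a sketch (column and table names from the thread; the START WITH values are placeholders): comparing the columns individually lets ordinary B-tree indexes on colD, colE, and colF be used, which the concatenated form prevents.

```sql
SELECT colA || colB || colC AS a1,
       colD || colE || colF AS b1,
       colA, colB, colC, LEVEL
FROM   testtable
START WITH colA = 'aaa' AND colB = 'bbbb' AND colC = 'cccc'
CONNECT BY PRIOR colA = colD
       AND PRIOR colB = colE
       AND PRIOR colC = colF;
```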
Performance Problems with "For all Entries" and a big internal table
We have big Performance Problems with following Statement:
SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
FOR ALL ENTRIES IN gt_zmon_help
WHERE
status = 'IAI200' AND
logdat IN gs_dat AND
ztrack = gt_zmon_help-ztrack.
In the internal table gt_zmon_help are over 1000000 entries.
Anyone an Idea how to improve the Performance?
Thank you!

> Matthias Weisensel wrote:
> We have big Performance Problems with following Statement:
>
>
SELECT * FROM zeedmt_zmon INTO TABLE gt_zmon_help
> FOR ALL ENTRIES IN gt_zmon_help
> WHERE
> status = 'IAI200' AND
> logdat IN gs_dat AND
> ztrack = gt_zmon_help-ztrack.
>
> In the internal table gt_zmon_help are over 1000000 entries.
> Anyone an Idea how to improve the Performance?
>
> Thank you!
You can't expect miracles. With over a million entries in your itab, any select is going to take a while. Do you really need all these records in the itab? How many records is the select bringing back? I'm assuming that you have, and are using, indexes on your ZEEDMT_ZMON table.
In this situation, I'd first try to think of another way of running the query and restricting the amount of data; if that were not possible, I'd just run it in the background and accept that it is going to take a long time. -
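Two mechanical mitigations for the statement in this thread, sketched with the names from the post. Note that the original selects into the same table that drives the FOR ALL ENTRIES; a separate result table (gt_result here, an assumed name) avoids that, and deduplicating the driver values shrinks the number of generated selects:

```abap
DATA: gt_driver LIKE gt_zmon_help,
      gt_result LIKE gt_zmon_help.

" Shrink the driver table to the distinct ztrack values.
gt_driver = gt_zmon_help.
SORT gt_driver BY ztrack.
DELETE ADJACENT DUPLICATES FROM gt_driver COMPARING ztrack.

" Guard against an empty driver table (FOR ALL ENTRIES would read everything).
IF gt_driver IS NOT INITIAL.
  SELECT * FROM zeedmt_zmon INTO TABLE gt_result
    FOR ALL ENTRIES IN gt_driver
    WHERE status = 'IAI200'
      AND logdat IN gs_dat
      AND ztrack = gt_driver-ztrack.
ENDIF.
```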
Performance problem with table COSS...
Hi
Anyone encountered performance problem with these table : COSS, COSB, COEP
Best Regards

> gsana sana wrote:
> Hi Guru's
>
> this is the select query which is taking much time in production. So please help me to improve the performance with BSEG.
>
> this is my select query:
>
> select bukrs
> belnr
> gjahr
> bschl
> koart
> umskz
> shkzg
> dmbtr
> ktosl
> zuonr
> sgtxt
> kunnr
> from bseg
> into table gt_bseg1
> for all entries in gt_bkpf
> where bukrs eq p_bukrs
> and belnr eq gt_bkpf-belnr
> and gjahr eq p_gjahr
> and buzei in gr_buzei
> and bschl eq '40'
> and ktosl ne 'BSP'.
>
> UR's
> GSANA
Hi,
This is what I know; if any expert thinks it's incorrect, please do correct me.
BSEG is a cluster table with BUKRS, BELNR, GJAHR and BUZEI as its key; the remaining fields are stored in the database as raw (packed) data, so the application has to unpack candidate rows before conditions on those fields can be evaluated. Hence, I suggest using only conditions up to BUZEI in the WHERE clause, and filtering the other conditions at internal table level, e.g. with a DELETE statement. Hope this helps.
Regards,
Abraham -
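Abraham's suggestion, sketched with the names from the quoted post: keep only the key-field conditions in the SELECT on the cluster table, then apply the non-key filters in memory.

```abap
" Select on BSEG restricted to key fields (BUKRS, BELNR, GJAHR, BUZEI).
IF gt_bkpf IS NOT INITIAL.
  SELECT bukrs belnr gjahr bschl koart umskz shkzg
         dmbtr ktosl zuonr sgtxt kunnr
    FROM bseg
    INTO TABLE gt_bseg1
    FOR ALL ENTRIES IN gt_bkpf
    WHERE bukrs = p_bukrs
      AND belnr = gt_bkpf-belnr
      AND gjahr = p_gjahr
      AND buzei IN gr_buzei.
ENDIF.

" Non-key conditions applied afterwards on the internal table.
DELETE gt_bseg1 WHERE bschl <> '40' OR ktosl = 'BSP'.
```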
Performance problem with sdn_nn - new 10g install
I am having a performance problem with sdo_nn after migrating to a new machine. The old Oracle version was 9.0.1.4; the new is 10g. The new machine is faster in general: most (non-spatial) batch processes run in half the time. However, the statement below is radically slower. It ran in 45 minutes before; on the new machine it hasn't finished after 10 hours. I am able to get a 5% sample of the customers to finish in 45 minutes.
Does anyone have any ideas on how to approach this problem? Any chance something isn't installed correctly on the new machine (the nth version of the query finished, albeit 20 times slower)?
Appreciate any help. Thanks.
- Jack
create table nearest_store
as
select /*+ ordered */
a.customer_id,
b.store_id nearest_store,
round(mdsys.sdo_nn_distance(1),4) distance
from customers a,
stores b
where mdsys.sdo_nn(
b.geometry,
a.geometry,
'sdo_num_res=1, unit=mile',
1
) = 'TRUE'
;

Dan,
customers 110,000 (involved in this query)
stores 28,000
Here is the execution plan on the current machine:
CREATE TABLE STATEMENT cost = 81947
LOAD AS SELECT
PX COORDINATOR
PX SEND QC (RANDOM) :TQ10000
ROW NESTED LOOPS
1 1 PX BLOCK ITERATOR
1 1ROW TABLE ACCESS FULL CUSTOMERS
1 3 PARTITION RANGE ALL
1 3 TABLE ACCESS BY LOCAL INDEX ROWID STORES
DOMAIN INDEX STORES_SIDX
I can't capture the execution plan on the old database. It is gone. I don't remember it being any different from the above (full scan customers, probe stores index once for each row in customers).
I am trying the query without the create table (just doing a count). I'll let you know on that one.
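The count-only variant mentioned above, as a sketch (same tables and hint as the CTAS): it isolates the cost of the spatial join from the cost of writing the new table.

```sql
SELECT /*+ ordered */ COUNT(*)
FROM   customers a,
       stores b
WHERE  mdsys.sdo_nn(b.geometry, a.geometry,
                    'sdo_num_res=1, unit=mile', 1) = 'TRUE';
```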
I am at 10.0.1.3.
Here is how I created the index:
create index stores_sidx
on stores(geometry)
indextype is mdsys.spatial_index LOCAL
Note that the stores table is partitioned by range on store type. There are three store types (each in its own partition). The query returns the nearest store of each type (three rows per customer). This is by design (based on the documented behavior of sdo_nn).
In addition to running the query without the CTAS, I am also going try running it on a different machine tonight. I let you know how that turns out.
The reason I ask about the install, is that the Database Quick Installation Guide for Solaris says this:
"If you intend to use Oracle JVM or Oracle interMedia, you must install the Oracle Database 10g Products installation type from the Companion CD. This installation optimizes the performance of those products on your system."
And the Database Installation Guide says:
"If you plan to use Oracle JVM or Oracle interMedia, Oracle strongly recommends that you install the natively compiled Java libraries (NCOMPs) used by those products from the Oracle Database 10g Companion CD. These libraries are required to improve the performance of the products on your platform."
Based on that, I am suspicious that maybe I have the product installed on the new machine, but not correctly (forgot to set fast=true).
Let me know if you think of anything else you'd like to see. Thanks.
- Jack -
View with columns based on function - problem with query
Hi,
I'm using Oracle 9i;
I've created a view which has columns based on a table columns (multiple columns from 1 table) and funtion (multiple columns based on 1 function).
The function takes ID as the first argument and name of the column to determine which value to return as the second one.
Here is a sample of such function (simplified):
FUNCTION my_function
(in_id IN NUMBER, in_col_name IN VARCHAR2)
RETURN VARCHAR2
IS
c_name VARCHAR2(100);
c_last_name VARCHAR2(100);
BEGIN
SELECT T.NAME, T.LAST_NAME
INTO c_name, c_last_name
FROM TABLE_1 T, TABLE_2 Z
WHERE T.PK = Z.FK
AND Z.ID = in_id;
IF in_col_name = 'NAME' THEN
RETURN c_name;
ELSIF in_col_name = 'LAST_NAME' THEN
RETURN c_last_name;
END IF;
END;
For simplicty I've restricted the number of columns.
CREATE OR REPLACE VIEW my_view
(ID, NAME, LAST_NAME)
AS
SELECT
T.ID ID
,CAST(my_function(T.ID,'NAME') AS VARCHAR2(100)) NAME
,CAST(my_function(T.ID,'LAST_NAME') AS VARCHAR2(100)) LAST_NAME
FROM TABLE T;
There is no problem with query:
SELECT * FROM my_view;
The problem arises when I query the view (regardless of '=' or 'LIKE'):
SELECT * FROM my_view
WHERE name LIKE '%some_part_of_name%'
The query returns rows for some names; for others it doesn't. If I use '=' and the whole name, the query returns nothing, but with 'LIKE' and just the first letter it returns rows in some cases.
I've tried to debug this situation and I've discovered that the function receives the IDs neither in the proper order nor the same number of times. Explicitly:
for each ID in (1, 2, 3, 4, 5, 6, ..., 100) the function should be called twice per ID and in order, but it is not.
I get 1, 1, 2, 3, 3, 6, 20, 20 and so on.
Help needed.
Greetings.

The problem is more complicated than the solutions provided here.
The reason why I'm using the function is this:
the original view was constructed using multiple UNION ALL selects and the speed was terrible. I've created an index on the base table to obtain a proper sort. For retrieving all records at once the view works perfectly, but if one queries by the function-based columns the results are surprising: sometimes there are rows, sometimes there are none; with "LIKE" and only part of the string there are results, but with "=" there are none.
Here are real DDLs:
View:
CREATE OR REPLACE VIEW V_DOK_ARCH
(ID_ZDAR, TYP, STAN, DATE_CREATED, CREATED_BY,
DATE_MODIFIED, MODIFIED_BY, SPRA_ID_SPRA, PODM_ID_PODM, PODM_UMOW_ID_UMOW,
NR_WFS, WFS_NR_INTER, UWAGI_OPER, FUNDUSZ, NUMER,
DATA_PODPISANIA, RODZAJ, TYP_PRZY, TYP_UBEZ, NAZWISKO,
IMIE, IMIE_OJCA, NAZWA_FIRMY, NAZWA_FIRMY_SKR, DANE_KLIE)
AS
SELECT /*+ INDEX(Z ZDAR_DATE_CREATED_DESC_I) */
Z.ID_ZDAR ID_ZDAR
, Z.TYP TYP
, Z.STAN STAN
, Z.DATE_CREATED DATE_CREATED
, Z.CREATED_BY CREATED_BY
, Z.DATE_MODIFIED DATE_MODIFIED
, Z.MODIFIED_BY MODIFIED_BY
, Z.SPRA_ID_SPRA SPRA_ID_SPRA
, Z.PODM_ID_PODM PODM_ID_PODM
, Z.PODM_UMOW_ID_UMOW PODM_UMOW_ID_UMOW
, Z.NR_WFS NR_WFS
, Z.WFS_NR_INTER WFS_NR_INTER
, Z.UWAGI_OPER UWAGI_OPER
, Z.FUNDUSZ FUNDUSZ
, CAST(F_Rej_Zdar_Char(Z.ID_ZDAR, 'NUMER') AS VARCHAR2(30)) NUMER
, F_Rej_Zdar_Date(Z.ID_ZDAR, 'DATA_PODPISANIA') DATA_PODPISANIA
, CAST(F_Rej_Zdar_Char(Z.ID_ZDAR, 'RODZAJ') AS VARCHAR2(4)) RODZAJ
, CAST(F_Rej_Zdar_Char(Z.ID_ZDAR, 'TYP_PRZY') AS VARCHAR2(4)) TYP_PRZY
, CAST(F_Rej_Zdar_Char(Z.ID_ZDAR, 'TYP_UBEZ') AS VARCHAR2(3)) TYP_UBEZ
, CAST(F_Rej_Zdar_Char(Z.ID_ZDAR, 'NAZWISKO') AS VARCHAR2(30)) NAZWISKO
, CAST(F_Rej_Zdar_Char(Z.ID_ZDAR, 'IMIE') AS VARCHAR2(30)) IMIE
, CAST(F_Rej_Zdar_Char(Z.ID_ZDAR, 'IMIE_OJCA') AS VARCHAR2(30)) IMIE_OJCA
, CAST(F_Rej_Zdar_Char(Z.ID_ZDAR, 'NAZWA_FIRMY') AS VARCHAR2(300)) NAZWA_FIRMY
, CAST(F_Rej_Zdar_Char(Z.ID_ZDAR, 'NAZWA_FIRMY_SKR') AS VARCHAR2(100)) NAZWA_FIRMY_SKR
, CAST(LTRIM(F_Rej_Zdar_Char(Z.ID_ZDAR, 'NAZWISKO')||' '||F_Rej_Zdar_Char(Z.ID_ZDAR, 'IMIE')||' '||F_Rej_Zdar_Char(Z.ID_ZDAR, 'IMIE_OJCA')||F_Rej_Zdar_Char(Z.ID_ZDAR, 'NAZWA_FIRMY')||DECODE(F_Rej_Zdar_Char(Z.ID_ZDAR, 'NAZWA_FIRMY'),NULL,F_Rej_Zdar_Char(Z.ID_ZDAR, 'NAZWA_FIRMY_SKR'),NULL)) AS VARCHAR2(492)) DANE_KLIE
FROM T_ZDARZENIA Z
WHERE F_Rej_Zdar_Char(Z.ID_ZDAR, 'JEST') = 'T';
and functions:
CREATE OR REPLACE FUNCTION F_Rej_Zdar_Char
(WE_ID_ZDAR IN NUMBER
,WE_KOLUMNA IN VARCHAR2)
RETURN VARCHAR2
IS
c_numer T_PRZYSTAPIENIA.NUMER%TYPE;--VARCHAR2(30);
c_rodzaj T_KLIENCI.RODZAJ%TYPE;--VARCHAR2(1);
c_typ_przy T_PRZYSTAPIENIA.TYP_PRZY%TYPE;--VARCHAR2(1);
c_typ_ubez T_PRZYSTAPIENIA.TYP_UBEZ%TYPE;--VARCHAR2(3);
c_nazwisko T_KLIENCI.NAZWISKO%TYPE;--VARCHAR2(30);
c_imie T_KLIENCI.IMIE%TYPE;--VARCHAR2(30);
c_imie_ojca T_KLIENCI.IMIE_OJCA%TYPE;--VARCHAR2(30);
c_nazwa_firmy T_KLIENCI.NAZWA_FIRMY%TYPE;--VARCHAR2(300);
c_nazwa_firmy_skr T_KLIENCI.NAZWA_FIRMY%TYPE;--VARCHAR2(100);
c_jest VARCHAR2(1) := 'T';
c EXCEPTION;
BEGIN
--dbms_output.put_line('id zdar wykonania '||WE_ID_ZDAR);
BEGIN
SELECT p.NUMER, k.RODZAJ,p.TYP_PRZY,p.TYP_UBEZ,k.nazwisko, k.imie, k.imie_ojca, k.nazwa_firmy, k.nazwa_firmy_skr
INTO c_numer, c_rodzaj, c_typ_przy, c_typ_ubez, c_nazwisko, c_imie, c_imie_ojca, c_nazwa_firmy, c_nazwa_firmy_skr
FROM T_KLIENCI k, T_PRZYSTAPIENIA p, T_ZDARZENIA z, T_PODMIOTY D1, T_PODMIOTY D2
WHERE p.KLIE_ID_KLIE = k.ID_KLIE
AND z.PODM_ID_PODM = D1.ID_PODM
AND D1.KLIE_ID_KLIE = p.KLIE_ID_KLIE
AND Z.PODM_UMOW_ID_UMOW = D2.ID_PODM
AND D2.PRZY_ID_PRZY = P.ID_PRZY
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT p.NUMER, k.RODZAJ,p.TYP_PRZY,p.TYP_UBEZ,k.nazwisko, k.imie, k.imie_ojca, k.nazwa_firmy, k.nazwa_firmy_skr
INTO c_numer, c_rodzaj, c_typ_przy, c_typ_ubez, c_nazwisko, c_imie, c_imie_ojca, c_nazwa_firmy, c_nazwa_firmy_skr
FROM T_KLIENCI k, T_PRZYSTAPIENIA p, T_ZDARZENIA z, T_PODMIOTY D
WHERE z.PODM_UMOW_ID_UMOW IS NULL
AND z.PODM_ID_PODM = D.ID_PODM
AND D.KLIE_ID_KLIE = k.ID_KLIE
AND p.KLIE_ID_KLIE = k.ID_KLIE
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT NULL NUMER, NULL RODZAJ,NULL TYP_PRZY,NULL TYP_UBEZ, I.nazwisko, I.imie, I.imie_ojca, I.NAZWA NAZWA_FIRMY, I.NAZWA_SKR nazwa_firmy_skr
INTO c_numer, c_rodzaj, c_typ_przy, c_typ_ubez, c_nazwisko, c_imie, c_imie_ojca, c_nazwa_firmy, c_nazwa_firmy_skr
FROM T_ZDARZENIA z, T_INSTYTUCJE I
WHERE Z.TYP IN ('WFS526','WFS542','WFS553','WFS609','WFS611','WYP_KS','WYP_PO','WYP_SB','DI_ZAT')
AND z.PODM_UMOW_ID_UMOW IS NULL
AND Z.PODM_ID_PODM = I.ID_INST
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT p.NUMER NUMER, DECODE(a.TYP_AGENTA,'A','F','P') RODZAJ, DECODE(a.TYP_AGENTA,'P','R',a.TYP_AGENTA) TYP_PRZY,NULL TYP_UBEZ,a.nazwisko, a.imie, a.imie_ojca, a.nazwa_firmy, a.nazwa_firmy_skr
INTO c_numer, c_rodzaj, c_typ_przy, c_typ_ubez, c_nazwisko, c_imie, c_imie_ojca, c_nazwa_firmy, c_nazwa_firmy_skr
FROM T_AG_AGENCI a, T_AG_UMOWY p, T_ZDARZENIA z
WHERE a.ID_AGAG = p.AGAG_ID_AGAG
AND z.PODM_UMOW_ID_UMOW = p.ID_AGUM
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT p.NUMER NUMER, DECODE(a.TYP_AGENTA,'A','F','P') RODZAJ, DECODE(a.TYP_AGENTA,'P','R',a.TYP_AGENTA) TYP_PRZY,NULL TYP_UBEZ,a.nazwisko, a.imie, a.imie_ojca, a.nazwa_firmy, a.nazwa_firmy_skr
INTO c_numer, c_rodzaj, c_typ_przy, c_typ_ubez, c_nazwisko, c_imie, c_imie_ojca, c_nazwa_firmy, c_nazwa_firmy_skr
FROM T_AG_AGENCI a, T_AG_UMOWY p, T_ZDARZENIA z
WHERE a.ID_AGAG = p.AGAG_ID_AGAG
AND z.PODM_ID_PODM = a.ID_AGAG
AND z.PODM_UMOW_ID_UMOW IS NULL
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT p.NUMER_UMOWY NUMER, DECODE(p.TYP_AGENTA,'A','F','P') RODZAJ, DECODE(p.TYP_AGENTA,'P','R',p.TYP_AGENTA) TYP_PRZY,NULL TYP_UBEZ,p.nazwisko, p.imie_pierwsze, p.imie_ojca, p.nazwa_firmy, p.nazwa_firmy_skr
INTO c_numer, c_rodzaj, c_typ_przy, c_typ_ubez, c_nazwisko, c_imie, c_imie_ojca, c_nazwa_firmy, c_nazwa_firmy_skr
FROM T_AG_KANDYDACI a, T_AG_UMOWY_TAB p, T_ZDARZENIA z
WHERE a.ID_AGKAN = p.TECH_PODM_ID_PODM
AND z.PODM_UMOW_ID_UMOW = p.TECH_ID_AGUMT
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT p.NUMER_UMOWY NUMER, DECODE(p.TYP_AGENTA,'A','F','P') RODZAJ, DECODE(p.TYP_AGENTA,'P','R',p.TYP_AGENTA) TYP_PRZY,NULL TYP_UBEZ,p.nazwisko, p.imie_pierwsze, p.imie_ojca, p.nazwa_firmy, p.nazwa_firmy_skr
INTO c_numer, c_rodzaj, c_typ_przy, c_typ_ubez, c_nazwisko, c_imie, c_imie_ojca, c_nazwa_firmy, c_nazwa_firmy_skr
FROM T_AG_KANDYDACI a, T_AG_UMOWY_TAB p, T_ZDARZENIA z
WHERE a.ID_AGKAN = p.TECH_PODM_ID_PODM
AND z.PODM_ID_PODM = a.ID_AGKAN
AND z.PODM_UMOW_ID_UMOW IS NULL
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT k.NUMER_UMOWY NUMER, DECODE(k.TYP_PRZYSTAPIENIA,'P','F','P') RODZAJ,k.TYP_PRZYSTAPIENIA TYP_PRZY,'NPO' TYP_UBEZ, k.nazwisko, k.imie_pierwsze, k.imie_ojca, k.nazwa_firmy nazwa_firmy, k.nazwa_firmy_skr nazwa_firmy_skr
INTO c_numer, c_rodzaj, c_typ_przy, c_typ_ubez, c_nazwisko, c_imie, c_imie_ojca, c_nazwa_firmy, c_nazwa_firmy_skr
FROM T_WE_UM_NPO_TAB k, T_ZDARZENIA z
WHERE z.ID_ZDAR = k.TECH_ZDAR_ID_ZDAR
AND k.TYP_PRZYSTAPIENIA IN ('P','W')
AND z.PODM_ID_PODM IS NULL
AND z.PODM_UMOW_ID_UMOW IS NULL
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT k.NUMER_UMOWY NUMER, 'F' RODZAJ,'-' TYP_PRZY,'OPS' TYP_UBEZ, k.nazwisko, k.imie_pierwsze, k.imie_ojca, NULL nazwa_firmy, NULL nazwa_firmy_skr
INTO c_numer, c_rodzaj, c_typ_przy, c_typ_ubez, c_nazwisko, c_imie, c_imie_ojca, c_nazwa_firmy, c_nazwa_firmy_skr
FROM T_WE_UM_OPS_TAB k,T_ZDARZENIA z
WHERE z.ID_ZDAR = k.TECH_ZDAR_ID_ZDAR
AND z.PODM_ID_PODM IS NULL
AND z.PODM_UMOW_ID_UMOW IS NULL
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT NULL NUMER, NULL RODZAJ,NULL TYP_PRZY,NULL TYP_UBEZ, NULL nazwisko, NULL imie_pierwsze, NULL imie_ojca, NULL nazwa_firmy, NULL nazwa_firmy_skr
INTO c_numer, c_rodzaj, c_typ_przy, c_typ_ubez, c_nazwisko, c_imie, c_imie_ojca, c_nazwa_firmy, c_nazwa_firmy_skr
FROM T_ZDARZENIA z
WHERE z.TYP NOT IN ('UM_OPS','UM_NPO','NPO_OP','UZUP_U')
AND z.PODM_ID_PODM IS NULL
AND z.PODM_UMOW_ID_UMOW IS NULL
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
--dbms_output.put_line('id zdar wykonania '||WE_ID_ZDAR||' ostatni wyjatek');
NULL;
END;
END;
END;
END;
END;
END;
END;
END;
END;
END;
--raise c;
IF WE_KOLUMNA = 'NUMER' THEN
RETURN c_numer;
ELSIF WE_KOLUMNA = 'RODZAJ' THEN
RETURN c_rodzaj;
ELSIF WE_KOLUMNA = 'TYP_PRZY' THEN
RETURN c_typ_przy;
ELSIF WE_KOLUMNA = 'TYP_UBEZ' THEN
RETURN c_typ_ubez;
ELSIF WE_KOLUMNA = 'NAZWISKO' THEN
RETURN c_nazwisko;
ELSIF WE_KOLUMNA = 'IMIE' THEN
RETURN c_imie;
ELSIF WE_KOLUMNA = 'IMIE_OJCA' THEN
RETURN c_imie_ojca;
ELSIF WE_KOLUMNA = 'NAZWA_FIRMY' THEN
RETURN c_nazwa_firmy;
ELSIF WE_KOLUMNA = 'NAZWA_FIRMY_SKR' THEN
RETURN c_nazwa_firmy_skr;
ELSIF WE_KOLUMNA = 'JEST' THEN
RETURN c_jest;
END IF;
END;
CREATE OR REPLACE FUNCTION F_Rej_Zdar_Date
(WE_ID_ZDAR IN NUMBER
,WE_KOLUMNA IN VARCHAR2)
RETURN DATE
IS
d_data DATE;
BEGIN
BEGIN
SELECT p.DATA_PODPISANIA
INTO d_data
FROM T_KLIENCI k, T_PRZYSTAPIENIA p, T_ZDARZENIA z, T_PODMIOTY D1, T_PODMIOTY D2
WHERE p.KLIE_ID_KLIE = k.ID_KLIE
AND z.PODM_ID_PODM = D1.ID_PODM
AND D1.KLIE_ID_KLIE = p.KLIE_ID_KLIE
AND Z.PODM_UMOW_ID_UMOW = D2.ID_PODM
AND D2.PRZY_ID_PRZY = P.ID_PRZY
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT p.DATA_PODPISANIA
INTO d_data
FROM T_KLIENCI k, T_PRZYSTAPIENIA p, T_ZDARZENIA z, T_PODMIOTY D
WHERE z.PODM_UMOW_ID_UMOW IS NULL
AND z.PODM_ID_PODM = D.ID_PODM
AND D.KLIE_ID_KLIE = k.ID_KLIE
AND p.KLIE_ID_KLIE = k.ID_KLIE
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT NULL DATA_PODPISANIA
INTO d_data
FROM T_ZDARZENIA z, T_INSTYTUCJE I
WHERE Z.TYP IN ('WFS526','WFS542','WFS553','WFS609','WFS611','WYP_KS','WYP_PO','WYP_SB','DI_ZAT')
AND z.PODM_UMOW_ID_UMOW IS NULL
AND Z.PODM_ID_PODM = I.ID_INST
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT p.DATA_PODPISANIA DATA_PODPISANIA
INTO d_data
FROM T_AG_AGENCI a, T_AG_UMOWY p, T_ZDARZENIA z
WHERE a.ID_AGAG = p.AGAG_ID_AGAG
AND z.PODM_UMOW_ID_UMOW = p.ID_AGUM
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT p.DATA_PODPISANIA DATA_PODPISANIA
INTO d_data
FROM T_AG_AGENCI a, T_AG_UMOWY p, T_ZDARZENIA z
WHERE a.ID_AGAG = p.AGAG_ID_AGAG
AND z.PODM_ID_PODM = a.ID_AGAG
AND z.PODM_UMOW_ID_UMOW IS NULL
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT p.DATA_PODPISU_AGENTA DATA_PODPISANIA
INTO d_data
FROM T_AG_KANDYDACI a, T_AG_UMOWY_TAB p, T_ZDARZENIA z
WHERE a.ID_AGKAN = p.TECH_PODM_ID_PODM
AND z.PODM_UMOW_ID_UMOW = p.TECH_ID_AGUMT
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT p.DATA_PODPISU_AGENTA DATA_PODPISANIA
INTO d_data
FROM T_AG_KANDYDACI a, T_AG_UMOWY_TAB p, T_ZDARZENIA z
WHERE a.ID_AGKAN = p.TECH_PODM_ID_PODM
AND z.PODM_ID_PODM = a.ID_AGKAN
AND z.PODM_UMOW_ID_UMOW IS NULL
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT k.DATA_PODPISANIA_UM DATA_PODPISANIA
INTO d_data
FROM T_WE_UM_NPO_TAB k, T_ZDARZENIA z
WHERE z.ID_ZDAR = k.TECH_ZDAR_ID_ZDAR
AND k.TYP_PRZYSTAPIENIA IN ('P','W')
AND z.PODM_ID_PODM IS NULL
AND z.PODM_UMOW_ID_UMOW IS NULL
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT k.DATA_PODPISANIA_UM DATA_PODPISANIA
INTO d_data
FROM T_WE_UM_OPS_TAB k,T_ZDARZENIA z
WHERE z.ID_ZDAR = k.TECH_ZDAR_ID_ZDAR
AND z.PODM_ID_PODM IS NULL
AND z.PODM_UMOW_ID_UMOW IS NULL
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
BEGIN
SELECT NULL DATA_PODPISANIA
INTO d_data
FROM T_ZDARZENIA z
WHERE z.TYP NOT IN ('UM_OPS','UM_NPO','NPO_OP','UZUP_U')
AND z.PODM_ID_PODM IS NULL
AND z.PODM_UMOW_ID_UMOW IS NULL
AND z.ID_ZDAR = WE_ID_ZDAR;
EXCEPTION
WHEN NO_DATA_FOUND THEN
d_data := NULL;
END;
END;
END;
END;
END;
END;
END;
END;
END;
END;
IF WE_KOLUMNA = 'DATA_PODPISANIA' THEN
RETURN d_data;
END IF;
END; -
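One hedged direction for the function-based view above (not suggested in the thread itself): when predicates on such view columns return inconsistent results, a common workaround is to block view merging so the WHERE clause is applied to the already-computed column values instead of being pushed into fresh function calls. The ROWNUM predicate below is the classic merge blocker; the view and column names come from the DDL above.

```sql
SELECT *
FROM  (SELECT v.*
       FROM   V_DOK_ARCH v
       WHERE  ROWNUM >= 1)  -- prevents view merging / predicate pushdown
WHERE  NAZWISKO LIKE '%some_part_of_name%';
```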
Performance problem with ojdbc14.jar
Hi,
We are having performance problem with ojdbc14.jar in selecting and updating (batch updates) entries in a table. The queries are taking minutes to execute. The same java code works fine with classes12.zip ans queries taking sub seconds to execute.
We have Oracle 9.2.0.5 Database Server and I have downloaded the ojdbc14.jar from Oracle site for the same. Tried executing the java code from windows 2000, Sun Solaris and Opteron machines and having the same problem.
Does any one know a solution to this problem? I also tried ojdbc14.jar meant for Oracle 10g, that did not help.
Please help.
Thanks
Yuva

My code is doing something which might work well with classes12.zip but does not work well with ojdbc14.jar? Any general suggestions to make the code better, especially for batch updates.
But for selecting a row from the table, I am using index columns in the where clause. In the code I am using a PreparedStatement and setting all the required fields. Here is the code. We have a huge index with 14 fields!! All the parameters are for the where clause.
if(longCallPStmt == null) {
longCallPStmt = conn.prepareStatement(longCallQuery);
log(Level.FINE, "CdrAggLoader: Loading tcdragg entry for "
+GeneralUtility.formatDate(cdrAgg.time_hour, "MM/dd/yy HH"));
longCallPStmt.clearParameters();
longCallPStmt.setInt(1, cdrAgg.iintrunkgroupid);
longCallPStmt.setInt(2, cdrAgg.iouttrunkgroupid);
longCallPStmt.setInt(3, cdrAgg.iintrunkgroupnumber);
longCallPStmt.setInt(4, cdrAgg.iouttrunkgroupnumber);
longCallPStmt.setInt(5, cdrAgg.istateregionid);
longCallPStmt.setTimestamp(6, cdrAgg.time_hour);
longCallPStmt.setInt(7, cdrAgg.icalltreatmentcode);
longCallPStmt.setInt(8, cdrAgg.icompletioncode);
longCallPStmt.setInt(9, cdrAgg.bcallcompleted);
longCallPStmt.setInt(10, cdrAgg.itodid);
longCallPStmt.setInt(11, cdrAgg.iasktodid);
longCallPStmt.setInt(12, cdrAgg.ibidtodid);
longCallPStmt.setInt(13, cdrAgg.iaskzoneid);
longCallPStmt.setInt(14, cdrAgg.ibidzoneid);
rs = longCallPStmt.executeQuery();
if(rs.next()) {
cdr_agg = new CdrAgg(
rs.getInt(1),
rs.getInt(2),
rs.getInt(3),
rs.getInt(4),
rs.getInt(5),
rs.getTimestamp(6),
rs.getInt(7),
rs.getInt(8),
rs.getInt(9),
rs.getInt(10),
rs.getInt(11),
rs.getInt(12),
rs.getInt(13),
rs.getInt(14),
rs.getInt(15),
rs.getInt(16)
);
}//if
end_time = System.currentTimeMillis();
log(Level.INFO, "CdrAggLoader: Loaded "+((cdr_agg==null)?0:1) + " "
+ GeneralUtility.formatDate(cdrAgg.time_hour, "MM/dd/yy HH")
+" tcdragg entry in "+(end_time - start_time)+" msecs");
} finally {
GeneralUtility.closeResultSet(rs);
GeneralUtility.closeStatement(longCallPStmt);
}
Why does that code work well with classes12.zip (coming back in around 10 msec) but not with ojdbc14.jar (coming back in 6-7 minutes)?
Please advise. -
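Not an answer from the thread, but two things worth checking. First, the 9i-to-10g drivers changed how java.sql.Timestamp binds (TIMESTAMP instead of DATE), which can stop a DATE-typed index column from being used; the documented oracle.jdbc.V8Compatible=true system property restores the old binding and is worth a try given the setTimestamp call on the 14-field index above. Second, isolating the driver call with a small timing harness tells you whether the minutes are really spent inside executeQuery(). The class below is a self-contained sketch of such a harness (no database required; all names are mine):

```java
public class TimingHarness {
    // Runs a task and returns the elapsed wall-clock time in milliseconds.
    public static long timeMillis(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // In the real code, the task would wrap longCallPStmt.executeQuery().
        long elapsed = timeMillis(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;
        });
        System.out.println("elapsed ms: " + elapsed);
    }
}
```

Timing the prepare, the executeQuery, and the fetch loop separately narrows the problem down far faster than end-to-end numbers.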
Performance problem with Oracle
We are currently getting a system developed in Unix/Weblogic/Tomcat/Oracle environment. We have developed a screen that contains 5 or 6 different parameters to select from. We could select multiple parameters in each of these selections. The idea behind the subsequent screens is to attach information to already existing data/ possible future data that matches the selection criteria.
Based on these selections, existing data located within the system in a table is searched and the matches are selected. New rows are also created in the table for combinations that do not currently have a match. Frequently multiple parameters are selected, and 2000 different combinations need to be searched in the table. Of these, only about 100 or 200 combinations will be present in existing data, so the system has to insert around 1800 rows. The user meanwhile waits for the system to come up with data based on their selections, and is not willing to wait more than 30 seconds for the next screen. In the scenario above, the system takes more than an hour to insert the new records and bring the information up. We need suggestions on whether performance can be improved this drastically; if not, what are the alternatives? Thanks

The #1 cause of performance problems with Oracle is not using it correctly.
I find it hard to believe that, with the small data volumes mentioned, you can have performance problems.
You need to perform a sanity check. Are you using Oracle correctly? Do you know what bind variables are? Are you using indexes correctly? Are you using PL/SQL correctly? Is the instance setup correctly? What about storage, are you using SAME (RAID10) or something else? Etc.
Fact: Oracle performs exceptionally well.
A simple example from a benchmark I did on this exact subject: app-tier developers not understanding and not using Oracle correctly. Incorrect usage of Oracle doing 100,000 SQL statements: 24+ minutes elapsed time. Doing those exact same 100,000 SQL statements correctly (using bind variables): 8 seconds elapsed time. (Benchmark using Oracle 10.1.0.3 on a Sunfire V20z server.)
But then you need to use Oracle correctly. Are you familiar with the Oracle Concepts Guide? Have you read the Oracle Application Developer Fundamentals Guide? -
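The bind-variable point above, sketched (table and column names are placeholders, not from the thread): the literal form forces a new hard parse per distinct value, while the bind form is parsed once and re-executed.

```sql
-- Literal: each distinct value is a brand-new statement to hard parse.
SELECT * FROM orders WHERE order_id = 12345;

-- Bind variable (SQL*Plus syntax): parsed once, reused for every value.
VARIABLE oid NUMBER
EXEC :oid := 12345
SELECT * FROM orders WHERE order_id = :oid;
```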
Performance problem with CR SDK
Hi,
I am currently at a customer site and I have the following problem:
The client has a performance problem with a J2EE application which calls a Crystal report through the CR SDK. To reproduce the problem on the local machine (the CR server), I developed a little JSP page which uses the Crystal SDK to open a Crystal report on the server (the report is based on an XML data source), set a new data source (a new XML data flow) and refresh the report in PDF format.
The problem is that the first two steps take about 5 seconds each (5 s to open the report and 5 s to set the data source). The total process takes about 15 seconds to open and refresh the document, which is very long for a small document.
The document is a 600 KB file; the XML source is an 80 KB file.
My JSP page is deployed directly on the Tomcat of the Crystal Reports Server (CR XI R2 without service pack).
The Filestore and the MySQL database are on the CR server.
The server has 16 processors (4 quad CPUs) and 16 GB of RAM and is totally dedicated to Crystal Reports. For the moment there is no other activity on the server (it is also used for this test).
The main JSP calls are the following:
IEnterpriseSession es = CrystalEnterprise.getSessionMgr().logon("administrator", "", "EDITBI:6400", "secEnterprise");
IInfoStore infoStore = (IInfoStore) es.getService("", "InfoStore");
IInfoObjects infoObjects = infoStore.query("SELECT * FROM CI_INFOOBJECTS WHERE SI_NAME='CPA_EV' AND SI_INSTANCE=0 ");
IInfoObject report = (IInfoObject) infoObjects.get(0);
IReportAppFactory reportAppFactory = (IReportAppFactory)es.getService("RASReportFactory");
ReportClientDocument reportClientDoc = reportAppFactory.openDocument(report.getID(), 0, null);
IXMLDataSet xmlDataSet = new XMLDataSet();
xmlDataSet.setXMLData(new ByteArray(ligne_data_xml));
xmlDataSet.setXMLSchema(new ByteArray(ligne_schema_xml));
DatabaseController db = reportClientDoc.getDatabaseController();
db.setDataSource(xmlDataSet, "", "");
ByteArrayInputStream bt = (ByteArrayInputStream)reportClientDoc.getPrintOutputController().export(ReportExportFormat.PDF);
My question is: is this method the right one for this?
Thanks in advance for your help.
Best regards
Emmanuel

Hi,
My problem is not resolved and I have no news from the support.
If you have any idea or info, don't forget me.
Thanks in advance
Emmanuel -
Problems with retrieving data from tables with 240 and more records
Hi,
I've been connecting to an Oracle 11g server (not sure of the exact version) using the Oracle 10.1.0 client and the O10 Oracle 10g driver. Everything was OK.
I installed the Oracle 11.2.0 client and started to have problems retrieving data from tables.
First I used the same connection string, driver and so on (O10 Oracle 10g); then I tried ORA Oracle, but with no luck. The result is like this:
I'm able to connect to the database. I'm able to retrieve data, but only from small tables (e.g. with 110 records it works perfectly using both the O10 and ORA drivers). When I try to retrieve data from tables with about 240 or more records, the retrieval simply hangs (nothing happens at all: no error, no timeout). The application seems to hang forever.
I'm using PowerBuilder to connect to the database (either PB 10.5 using the O10 driver or PB 12 using the ORA driver). I used DBTrace, so I can see that the query hangs on the first FETCH.
So for the retrievals that hang I have something like:
(3260008): BIND SELECT OUTPUT BUFFER (DataWindow):(DBI_SELBIND) (0.186 MS / 18978.709 MS)
(3260008): ,len=160,type=DECIMAL,pbt=4,dbt=0,ct=0,prec=0,scale=0
(3260008): ,len=160,type=DECIMAL,pbt=4,dbt=0,ct=0,prec=0,scale=1
(3260008): ,len=160,type=DECIMAL,pbt=4,dbt=0,ct=0,prec=0,scale=0
(3260008): EXECUTE:(DBI_DW_EXECUTE) (192.982 MS / 19171.691 MS)
(3260008): FETCH NEXT:(DBI_FETCHNEXT)
and this is the last line,
while for retrievals that end, I have FETCH producing time, data in buffer and moving to the next Fetch until all data is retrieved
On the side note, I have no problems with retrieving data either by SQL Developer or DbVisualizer.
Problems started when I installed 11.2.0 Client. Even if I want to use 10.0.1 Client, the same problem occurs. So I guess something from 11.2.0 overrides 10.0.1 settings.
I will appreciate any comments/hints/help.
Thank you very much.

pgoel wrote:
I've been connecting to Oracle 11g Server (not sure exact version) using Oracle 10.1.0 Client and O10 Oracle 10g driver. Everything was ok.
Earlier (before installing the new stuff), did you ever try retrieving data from big tables (like 240 and more records)? If yes, was it working?
Yes, with the Oracle 10g client (before installing 11g) I was able to retrieve any data, whether it was 10k+ records or 100 records. Installing the 11g client changed something, so that even using the old 10g client (which I still have installed) fails to work. The same problem occurs whether I'm using the 10g or 11g client now. PowerBuilder hangs on retrieving tables with more than about 240 records.
Thanks. -
Performance problem with Integration with COGNOS and Bex
Hi Gems
I have a performance problem with some of my queries when integrating with COGNOS.
My query is a simple one that gets the data for the date interval:
From Date: 20070101
To date:20070829
When executing the query in BEx it takes 2 minutes, but when it is executed in COGNOS it takes almost 10 minutes or more.
Is there anywhere we can debug how the report data is sent to Cognos, like debugging the OLEDB?
And how can we increase the performance of the query in Cognos?
Thanks in Advance
Regards
AK

Hi,
Please check the following CA Unicenter config files on the SunMC server:
- is the Event Adapter (ea-start) running ?, without these daemon no event forwarding is done the CA Unicenter nor discover from Ca unicenter is working.
How to debug:
- run ea-start in debug mode:
# /opt/SUNWsymon/SunMC-TNG/sbin/ea-start -d9
- check if the Event Adaptor is been setup,
# /var/opt/SUNWsymon/SunMC-TNG/cfg_sunmctotng
- check the CA log file
# /var/opt/SUNWsymon/SunMC-TNG/SunMCToTngAdaptorMain.log
After that is all fine check this side it explains how to discover an SunMC agent from CA Unicenter.
http://docs.sun.com/app/docs/doc/817-1101/6mgrtmkao?a=view#tngtrouble-6
Kind Regards -
Performance issues with version-enabled partitioned tables?
Hi all,
Are there any known performance issues with version-enabled partitioned tables?
I've been doing some performance tests with a large version-enabled partitioned table, and it seems that the OCB optimizer is choosing very expensive plans during merge operations.
Thanks in advance,
Vitor
Example:
Object Name Rows Bytes Cost Object Node In/Out PStart PStop
UPDATE STATEMENT Optimizer Mode=CHOOSE 1 249
UPDATE SIG.SIG_QUA_IMG_LT
NESTED LOOPS SEMI 1 266 249
PARTITION RANGE ALL 1 9
TABLE ACCESS FULL SIG.SIG_QUA_IMG_LT 1 259 2 1 9
VIEW SYS.VW_NSO_1 1 7 247
NESTED LOOPS 1 739 247
NESTED LOOPS 1 677 247
NESTED LOOPS 1 412 246
NESTED LOOPS 1 114 244
INDEX RANGE SCAN WMSYS.MODIFIED_TABLES_PK 1 62 2
INDEX RANGE SCAN SIG.QIM_PK 1 52 243
TABLE ACCESS BY GLOBAL INDEX ROWID SIG.SIG_QUA_IMG_LT 1 298 2 ROWID ROW L
INDEX RANGE SCAN SIG.SIG_QUA_IMG_PKI$ 1 1
INDEX RANGE SCAN WMSYS.WM$NEXTVER_TABLE_NV_INDX 1 265 1
INDEX UNIQUE SCAN WMSYS.MODIFIED_TABLES_PK 1 62
/* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */
UPDATE /*+ USE_NL(Z1) ROWID(Z1) */sig.sig_qua_img_lt z1
SET z1.nextver =
SYS.ltutil.subsversion
(z1.nextver,
SYS.ltutil.getcontainedverinrange (z1.nextver,
'SIG.SIG_QUA_IMG',
'NpCyPCX3dkOAHSuBMjGioQ==',
4574,
4575),
4574)
WHERE z1.ROWID IN (
(SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
t2.ROWID
FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
UNIQUE VERSION
FROM wmsys.wm$modified_tables
WHERE table_name = 'SIG.SIG_QUA_IMG'
AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
AND VERSION > 4574
AND VERSION <= 4575) j1,
sig.sig_qua_img_lt t1,
sig.sig_qua_img_lt t2,
wmsys.wm$nextver_table j2,
(SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
UNIQUE VERSION
FROM wmsys.wm$modified_tables
WHERE table_name = 'SIG.SIG_QUA_IMG'
AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
AND VERSION > 4574
AND VERSION <= 4575) j3
WHERE t1.VERSION = j1.VERSION
AND t1.ima_id = t2.ima_id
AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
AND t2.nextver != '-1'
AND t2.nextver = j2.next_vers
AND j2.VERSION = j3.VERSION))

Hello Vitor,
There are currently no known issues with version enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
One suggestion would be to make sure that the table has been recently analyzed, so that the optimizer has the most current data about the table.
Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
Thank You,
Ben -
Performance problems with jdk 1.5 on Linux platform
Performance problems with jdk 1.5 on the Linux platform
(not tested on Windows; it might be the same there)
After refactoring using the new features from Java 1.5, I lost
significant performance:
public Vector<unit> units;
The new code:
for (unit u: units) u.accumulate();
runs more than 30% slower than the old code:
for (int i = 0; i < units.size(); i++) units.elementAt(i).accumulate();
I expected the opposite.
Is there any information available that helps?

Here's the complete benchmark code I used:

package test;

import java.text.NumberFormat;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Vector;

public class IterationPerformanceTest {

    private int m_size;

    public IterationPerformanceTest(int size) {
        m_size = size;
    }

    public long getArrayForLoopDuration() {
        Integer[] testArray = new Integer[m_size];
        for (int item = 0; item < m_size; item++) {
            testArray[item] = new Integer(item);
        }
        StringBuilder builder = new StringBuilder();
        long start = System.nanoTime();
        for (int index = 0; index < m_size; index++) {
            builder.append(testArray[index]);
        }
        long end = System.nanoTime();
        System.out.println(builder.length());
        return end - start;
    }

    public long getArrayForEachDuration() {
        Integer[] testArray = new Integer[m_size];
        for (int item = 0; item < m_size; item++) {
            testArray[item] = new Integer(item);
        }
        StringBuilder builder = new StringBuilder();
        long start = System.nanoTime();
        for (Integer item : testArray) {
            builder.append(item);
        }
        long end = System.nanoTime();
        System.out.println(builder.length());
        return end - start;
    }

    public long getArrayListForLoopDuration() {
        ArrayList<Integer> testList = new ArrayList<Integer>();
        for (int item = 0; item < m_size; item++) {
            testList.add(item);
        }
        StringBuilder builder = new StringBuilder();
        long start = System.nanoTime();
        for (int index = 0; index < m_size; index++) {
            builder.append(testList.get(index));
        }
        long end = System.nanoTime();
        System.out.println(builder.length());
        return end - start;
    }

    public long getArrayListForEachDuration() {
        ArrayList<Integer> testList = new ArrayList<Integer>();
        for (int item = 0; item < m_size; item++) {
            testList.add(item);
        }
        StringBuilder builder = new StringBuilder();
        long start = System.nanoTime();
        for (Integer item : testList) {
            builder.append(item);
        }
        long end = System.nanoTime();
        System.out.println(builder.length());
        return end - start;
    }

    public long getArrayListIteratorDuration() {
        ArrayList<Integer> testList = new ArrayList<Integer>();
        for (int item = 0; item < m_size; item++) {
            testList.add(item);
        }
        StringBuilder builder = new StringBuilder();
        long start = System.nanoTime();
        Iterator<Integer> iterator = testList.iterator();
        while (iterator.hasNext()) {
            builder.append(iterator.next());
        }
        long end = System.nanoTime();
        System.out.println(builder.length());
        return end - start;
    }

    public long getLinkedListForLoopDuration() {
        LinkedList<Integer> testList = new LinkedList<Integer>();
        for (int item = 0; item < m_size; item++) {
            testList.add(item);
        }
        StringBuilder builder = new StringBuilder();
        long start = System.nanoTime();
        for (int index = 0; index < m_size; index++) {
            builder.append(testList.get(index));
        }
        long end = System.nanoTime();
        System.out.println(builder.length());
        return end - start;
    }

    public long getLinkedListForEachDuration() {
        LinkedList<Integer> testList = new LinkedList<Integer>();
        for (int item = 0; item < m_size; item++) {
            testList.add(item);
        }
        StringBuilder builder = new StringBuilder();
        long start = System.nanoTime();
        for (Integer item : testList) {
            builder.append(item);
        }
        long end = System.nanoTime();
        System.out.println(builder.length());
        return end - start;
    }

    public long getLinkedListIteratorDuration() {
        LinkedList<Integer> testList = new LinkedList<Integer>();
        for (int item = 0; item < m_size; item++) {
            testList.add(item);
        }
        StringBuilder builder = new StringBuilder();
        long start = System.nanoTime();
        Iterator<Integer> iterator = testList.iterator();
        while (iterator.hasNext()) {
            builder.append(iterator.next());
        }
        long end = System.nanoTime();
        System.out.println(builder.length());
        return end - start;
    }

    public long getVectorForLoopDuration() {
        Vector<Integer> testVector = new Vector<Integer>();
        for (int item = 0; item < m_size; item++) {
            testVector.add(item);
        }
        StringBuilder builder = new StringBuilder();
        long start = System.nanoTime();
        for (int index = 0; index < m_size; index++) {
            builder.append(testVector.get(index));
        }
        long end = System.nanoTime();
        System.out.println(builder.length());
        return end - start;
    }

    public long getVectorForEachDuration() {
        Vector<Integer> testVector = new Vector<Integer>();
        for (int item = 0; item < m_size; item++) {
            testVector.add(item);
        }
        StringBuilder builder = new StringBuilder();
        long start = System.nanoTime();
        for (Integer item : testVector) {
            builder.append(item);
        }
        long end = System.nanoTime();
        System.out.println(builder.length());
        return end - start;
    }

    public long getVectorIteratorDuration() {
        Vector<Integer> testVector = new Vector<Integer>();
        for (int item = 0; item < m_size; item++) {
            testVector.add(item);
        }
        StringBuilder builder = new StringBuilder();
        long start = System.nanoTime();
        Iterator<Integer> iterator = testVector.iterator();
        while (iterator.hasNext()) {
            builder.append(iterator.next());
        }
        long end = System.nanoTime();
        System.out.println(builder.length());
        return end - start;
    }

    /**
     * @param args
     */
    public static void main(String[] args) {
        IterationPerformanceTest test = new IterationPerformanceTest(1000000);
        System.out.println("\n\nRESULTS:");
        long arrayForLoop = test.getArrayForLoopDuration();
        long arrayForEach = test.getArrayForEachDuration();
        long arrayListForLoop = test.getArrayListForLoopDuration();
        long arrayListForEach = test.getArrayListForEachDuration();
        long arrayListIterator = test.getArrayListIteratorDuration();
        // long linkedListForLoop = test.getLinkedListForLoopDuration();
        long linkedListForEach = test.getLinkedListForEachDuration();
        long linkedListIterator = test.getLinkedListIteratorDuration();
        long vectorForLoop = test.getVectorForLoopDuration();
        long vectorForEach = test.getVectorForEachDuration();
        long vectorIterator = test.getVectorIteratorDuration();
        System.out.println("Array for-loop: " + getPercentage(arrayForLoop, arrayForLoop) + "% (" + getDuration(arrayForLoop) + " sec)");
        System.out.println("Array for-each: " + getPercentage(arrayForLoop, arrayForEach) + "% (" + getDuration(arrayForEach) + " sec)");
        System.out.println("ArrayList for-loop: " + getPercentage(arrayForLoop, arrayListForLoop) + "% (" + getDuration(arrayListForLoop) + " sec)");
        System.out.println("ArrayList for-each: " + getPercentage(arrayForLoop, arrayListForEach) + "% (" + getDuration(arrayListForEach) + " sec)");
        System.out.println("ArrayList iterator: " + getPercentage(arrayForLoop, arrayListIterator) + "% (" + getDuration(arrayListIterator) + " sec)");
        // System.out.println("LinkedList for-loop: " + getPercentage(arrayForLoop, linkedListForLoop) + "% (" + getDuration(linkedListForLoop) + " sec)");
        System.out.println("LinkedList for-each: " + getPercentage(arrayForLoop, linkedListForEach) + "% (" + getDuration(linkedListForEach) + " sec)");
        System.out.println("LinkedList iterator: " + getPercentage(arrayForLoop, linkedListIterator) + "% (" + getDuration(linkedListIterator) + " sec)");
        System.out.println("Vector for-loop: " + getPercentage(arrayForLoop, vectorForLoop) + "% (" + getDuration(vectorForLoop) + " sec)");
        System.out.println("Vector for-each: " + getPercentage(arrayForLoop, vectorForEach) + "% (" + getDuration(vectorForEach) + " sec)");
        System.out.println("Vector iterator: " + getPercentage(arrayForLoop, vectorIterator) + "% (" + getDuration(vectorIterator) + " sec)");
    }

    private static NumberFormat percentageFormat = NumberFormat.getInstance();
    static {
        percentageFormat.setMinimumIntegerDigits(3);
        percentageFormat.setMaximumIntegerDigits(3);
        percentageFormat.setMinimumFractionDigits(2);
        percentageFormat.setMaximumFractionDigits(2);
    }

    private static String getPercentage(long base, long value) {
        double result = (double) value / (double) base;
        return percentageFormat.format(result * 100.0);
    }

    private static NumberFormat durationFormat = NumberFormat.getInstance();
    static {
        durationFormat.setMinimumIntegerDigits(1);
        durationFormat.setMaximumIntegerDigits(1);
        durationFormat.setMinimumFractionDigits(4);
        durationFormat.setMaximumFractionDigits(4);
    }

    private static String getDuration(long nanos) {
        double result = (double) nanos / (double) 1000000000;
        return durationFormat.format(result);
    }
}
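Microbenchmarks like the one above are sensitive to JIT warm-up: the first timed pass often measures interpretation and compilation rather than steady-state loop cost. A minimal sketch of the idea, with warm-up iterations before timing; the class name WarmupBench and its method names are our own, and the two summing loops are simplified stand-ins for the benchmark's append loops.

```java
import java.util.ArrayList;
import java.util.List;

public class WarmupBench {

    // Sum with an indexed for loop.
    static long sumIndexed(List<Integer> list) {
        long total = 0;
        for (int i = 0; i < list.size(); i++) {
            total += list.get(i);
        }
        return total;
    }

    // Sum with the Java 1.5 for-each loop (compiled to an Iterator for Lists).
    static long sumForEach(List<Integer> list) {
        long total = 0;
        for (int v : list) {
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<Integer>();
        for (int i = 0; i < 100000; i++) {
            data.add(i);
        }

        // Warm up both code paths so the JIT compiles them before timing.
        for (int w = 0; w < 10; w++) {
            sumIndexed(data);
            sumForEach(data);
        }

        long t0 = System.nanoTime();
        long a = sumIndexed(data);
        long t1 = System.nanoTime();
        long b = sumForEach(data);
        long t2 = System.nanoTime();

        System.out.println("indexed:  " + (t1 - t0) + " ns");
        System.out.println("for-each: " + (t2 - t1) + " ns");
        System.out.println("sums equal: " + (a == b));
    }
}
```

Without the warm-up loop, whichever variant runs first tends to look slower, which can easily exaggerate or invert a "30% difference" between the two iteration styles.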