Help with performance of a subroutine
Hi experts,
Can anyone help me tweak this code so that its performance improves? I will appreciate your response.
Thanks in advance.
*&---------------------------------------------------------------------*
*&      Form  GET_PHY_QUANTITY
*&---------------------------------------------------------------------*
*       text
*----------------------------------------------------------------------*
*      -->P_LT_LAGP      text
*      <--P_LS_LAGP      text
*      <--P_LT_PHYSICAL  text
*----------------------------------------------------------------------*
form get_phy_quantity using p_lt_aqua type tt_aqua
p_lt_lagp type tt_lagp
changing p_ls_aqua type gs_aqua
p_ls_lagp type gs_lagp
p_lt_physical.
types: begin of ty_hu,
lgnum_hu type /scwm/lgnum,
huident type /scwm/de_huident,
guid_hu type guid,
end of ty_hu.
types: begin of ty_ordim,
lgnum type /scwm/lgnum,
lgpla type /scwm/lgpla,
guid_stock type /lime/guid_stock,
end of ty_ordim.
types: begin of ty_nquan,
guid_stock type /lime/guid_stock,
guid_parent type /lime/guid_parent,
punit type /lime/unit,
pquan type /lime/quantity,
coo type /scwm/de_coo,
lgpla type /scwm/lgpla, "RJ
end of ty_nquan.
data: ls_hu type ty_hu,
ls_ordim type ty_ordim,
ls_nquan type ty_nquan,
lv_tabix type sy-tabix.
data: lt_guid type standard table of ty_hu,
ls_guid type ty_hu,
lt_hu type standard table of ty_hu,
lt_ordim like standard table of ls_ordim,
lt_nquan like standard table of ls_nquan .
data: ls_data type zspm_binquan.
data: lt_data type standard table of zspm_binquan.
* lgpla in p_lt_lagp and huident in /scwm/hu_iw01 have different data types,
* so we move the lgpla field to a huident field before selecting from the table.
loop at p_lt_lagp into p_ls_lagp.
move p_ls_lagp-lgnum to ls_hu-lgnum_hu.
move p_ls_lagp-lgpla to ls_hu-huident.
append ls_hu to lt_hu.
read table p_lt_aqua into p_ls_aqua with key lgnum = p_ls_lagp-lgnum
lgpla = p_ls_lagp-lgpla
matid = '00000000000000000000000000000000'.
if sy-subrc = 0.
" If AQUA table has blank Product, we have to get it from WT table.
IF p_ls_aqua-matid IS INITIAL.
move p_ls_aqua-guid_stock to ls_ordim-guid_stock.
move p_ls_aqua-lgnum to ls_ordim-lgnum.
move p_ls_aqua-lgpla to ls_ordim-lgpla.
append ls_ordim to lt_ordim.
ENDIF.
endif.
endloop.
clear: ls_ordim, ls_hu, p_ls_aqua, p_ls_lagp.
* FOR ALL ENTRIES with an empty driver table selects every row, so guard it.
if lt_hu is not initial.
  select lgnum_hu huident guid_hu
    from /scwm/hu_iw01
    into table lt_guid
    for all entries in lt_hu
    where lgnum_hu = lt_hu-lgnum_hu and huident = lt_hu-huident.
  if sy-subrc = 0.
    sort lt_guid by guid_hu.
    delete adjacent duplicates from lt_guid comparing all fields.
  endif.
endif.
if lt_guid is not initial.
  select n~guid_stock
         n~guid_parent
         n~unit
         n~quan
         q~coo
    into table lt_nquan
    from ( /lime/nquan as n
           left outer join /scwm/quan as q
             on q~guid_stock = n~guid_stock
            and q~guid_parent = n~guid_parent )
    for all entries in lt_guid
    where n~guid_parent = lt_guid-guid_hu
      and n~guid_stock gt '0000000000000000'
      and n~vsi eq space
      and n~quan gt 0.
endif.
sort lt_nquan by guid_parent guid_stock.
loop at lt_nquan into ls_nquan. "RJ
lv_tabix = sy-tabix.
read table lt_guid into ls_guid with key guid_hu = ls_nquan-guid_parent binary search. "lt_guid is sorted by guid_hu
if sy-subrc eq 0.
ls_nquan-lgpla = ls_guid-huident.
modify lt_nquan from ls_nquan index lv_tabix.
endif.
endloop.
loop at lt_nquan into ls_nquan.
* If there is physical qty but no available quantity, move it to the lt_ordim table.
read table p_lt_aqua into p_ls_aqua with key guid_stock = ls_nquan-guid_stock
lgpla = ls_nquan-lgpla.
if sy-subrc ne 0.
read table lt_guid into ls_hu with key guid_hu = ls_nquan-guid_parent binary search.
if sy-subrc = 0.
move ls_hu-lgnum_hu to ls_ordim-lgnum.
move ls_hu-huident to ls_ordim-lgpla.
move ls_nquan-guid_stock to ls_ordim-guid_stock.
append ls_ordim to lt_ordim.
endif.
endif.
endloop.
sort lt_ordim by lgpla guid_stock.
clear: ls_nquan, ls_ordim.
loop at lt_nquan into ls_nquan.
move-corresponding ls_nquan to ls_data.
* If not in the AQUA table, get the batch and stock type from the WT tables, ORDIM_O or ORDIM_C.
read table lt_ordim into ls_ordim with key lgpla = ls_nquan-lgpla
guid_stock = ls_nquan-guid_stock binary search.
if sy-subrc = 0.
select single matid cat charg wdatu
from /scwm/ordim_o
into (ls_data-matid, ls_data-cat, ls_data-charg, ls_data-wdatu)
where lgnum eq ls_ordim-lgnum and guid_stock eq ls_ordim-guid_stock.
if sy-subrc ne 0.
select single matid cat charg wdatu
from /scwm/ordim_c
into (ls_data-matid, ls_data-cat, ls_data-charg, ls_data-wdatu)
where lgnum eq ls_ordim-lgnum and guid_stock eq ls_ordim-guid_stock.
endif.
ls_data-lgnum = ls_ordim-lgnum.
ls_data-huident = ls_ordim-lgpla.
else. " if there is avail. qty., get batch and other info from AQUA table.
read table p_lt_aqua into p_ls_aqua with key guid_stock = ls_nquan-guid_stock
lgpla = ls_nquan-lgpla.
if sy-subrc = 0.
ls_data-lgnum = p_ls_aqua-lgnum.
ls_data-huident = p_ls_aqua-lgpla.
ls_data-matid = p_ls_aqua-matid.
ls_data-cat = p_ls_aqua-cat.
ls_data-charg = p_ls_aqua-charg.
ls_data-wdatu = p_ls_aqua-wdatu.
endif.
endif.
append ls_data to lt_data.
endloop.
p_lt_physical = lt_data.
refresh: lt_data, lt_ordim, lt_guid, lt_hu, lt_nquan.
endform. " GET_PHY_QUANTITY
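The dominant cost in the form above is likely the `READ TABLE ... WITH KEY` lookups executed inside loops over standard tables: each probe is a linear scan, so the total work is O(n*m). Declaring the lookup tables as HASHED (or SORTED with BINARY SEARCH, as the code already does in places) makes each probe constant-time. A language-agnostic sketch of that idea in Python, with made-up toy data shaped like lt_guid and lt_nquan:

```python
import random

def linear_join(nquan, guid_index):
    # Mimics: LOOP AT lt_nquan ... READ TABLE lt_guid WITH KEY guid_hu = ...
    # on a STANDARD TABLE: every probe scans the table from the top.
    out = []
    for row in nquan:
        for g in guid_index:                      # linear key search
            if g["guid_hu"] == row["guid_parent"]:
                out.append((row["guid_stock"], g["huident"]))
                break
    return out

def hashed_join(nquan, guid_index):
    # Mimics the same loop with lt_guid as a HASHED TABLE WITH UNIQUE KEY
    # guid_hu: the index is built once, and every probe is O(1).
    by_guid = {g["guid_hu"]: g for g in guid_index}
    out = []
    for row in nquan:
        g = by_guid.get(row["guid_parent"])
        if g is not None:
            out.append((row["guid_stock"], g["huident"]))
    return out

# Toy data (illustrative values only).
guids = [{"guid_hu": i, "huident": "BIN%05d" % i} for i in range(2000)]
nquan = [{"guid_parent": random.randrange(2000), "guid_stock": s}
         for s in range(5000)]

assert linear_join(nquan, guids) == hashed_join(nquan, guids)
```

The same result, but the hashed version touches each row of each table only once instead of rescanning the lookup table per row.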
Hi friends,
I have a function module which does array insertions and is causing a slowdown. Can you please recommend any changes, or any ideas as to what can be done? Thanks in advance.
Thanks,
Kathy.
<code>
function Zwmsid_ship_id_insert .
""Update Function Module:
""Local Interface:
*" IMPORTING
*" VALUE(IT_SHIP_ID_HD) TYPE Z_TT_SID_HD
*" VALUE(IT_SHIP_ID_ITM) TYPE Z_TT_SID_IT
*" VALUE(IT_SHIP_ID_PR) TYPE Z_TT_SID_PR
*" VALUE(IT_SHIP_ID_HU) TYPE Z_TT_SID_HU
*" VALUE(IT_SHIP_ID_REFDOC) TYPE Z_TT_SID_RFD
*" EXCEPTIONS
*" INSERTION_FAILED
* This is for inserting data from the IMPORT parameter tables into
* the DB tables.
* To avoid a run-time dump, the ACCEPTING DUPLICATE KEYS addition is used.
* If the exception is raised, ROLLBACK WORK is done in the calling module.
* Local Data Objects
* Variables
data: lv_initial_sid type c,
* Work Area
ls_ship_id_hd type Z_tsid_hd, " Ship-Id Header
ls_sid_ht type Z_tsid_ht, " Ship-ID History
* Internal Tables
lt_sid_ht type Z_tt_sid_ht. " Ship-ID History Table
data: lv_timestamp(15) type c,
lv_systimstp type /scdl/dl_cretst,
lv_tz type tznzone value 'UTC'.
* Build the Ship-ID data for the History table
loop at it_ship_id_hd into ls_ship_id_hd.
if ls_ship_id_hd-ship_id is initial.
lv_initial_sid = gc_x.
endif.
ls_sid_ht-ship_id = ls_ship_id_hd-ship_id . " Ship-ID
ls_sid_ht-status = gc_created . " CR
ls_sid_ht-direction = ls_ship_id_hd-direction. " Direction
ls_sid_ht-username = sy-uname . " User Name
append ls_sid_ht to lt_sid_ht.
clear: ls_sid_ht, ls_ship_id_hd.
endloop.
if not lv_initial_sid is initial.
clear: lv_initial_sid.
message e014 raising insertion_failed.
endif.
* Inserting Ship-ID Header Data
insert Z_tsid_hd from table it_ship_id_hd
accepting duplicate keys.
if sy-subrc ne 0.
message e011 raising insertion_failed.
endif.
* Inserting Ship-ID Item Data
insert Z_tsid_it from table it_ship_id_itm
accepting duplicate keys.
if sy-subrc ne 0.
message e011 raising insertion_failed.
endif.
* Inserting Ship-ID Partner Data
insert Z_tsid_pr from table it_ship_id_pr
accepting duplicate keys.
if sy-subrc ne 0.
message e011 raising insertion_failed.
endif.
* Inserting Ship-ID HU (Handling Unit) Data
insert Z_tsid_hu from table it_ship_id_hu
accepting duplicate keys.
if sy-subrc ne 0.
message e011 raising insertion_failed.
endif.
* Inserting Ship-ID Reference Document Table Data
insert Z_tsid_rfd from table it_ship_id_refdoc
accepting duplicate keys.
if sy-subrc ne 0.
message e011 raising insertion_failed.
endif.
* Inserting into Ship-ID History Table
insert Z_tsid_ht from table lt_sid_ht
accepting duplicate keys.
if sy-subrc ne 0.
message e011 raising insertion_failed.
endif.
endfunction.
</code>
Similar Messages
-
Need help with performance & memory tuning in a data warehousing environment
Dear All,
Good Day.
We had successfully migrated from a 4-node half-rack V2 Exadata to a 2-node quarter-rack X4-2 Exadata. However, we are facing some performance issues with only a few loads, while others have in fact shown good improvement.
1. The total memory on the OS is 250GB for each node (two compute nodes for a quarter rack).
2. Would be grateful if someone could help me with the best equation to calculate the SGA and PGA (and also the allocation of shared_pool, large_pool, etc.), or say whether Automatic Memory Management is advisable.
3. We had run exachk report which suggested us to configure huge pages.
4. When we tried to increase the SGA to more than 30GB the system doesn't allow us to do so. We had however set the PGA to 85GB.
5. Also, we had observed that some of the queries involving joins and indexes are taking longer time.
Any advice would be greatly appreciated.
Warm Regards,
Vikram.
Hi Vikram,
There is no formula for SGA and PGA, but the best practice for OLTP environments is that, for a given amount of memory (which should not exceed about 80% of the server's total RAM), you allocate 80% to SGA and 20% to PGA. For data warehouse environments, the values are more like 60% SGA and 40% PGA, or it can be up to 50%-50%. Also, some docs discourage you from keeping the database in Automatic Memory Management when you are using a big SGA (> 10G).
As you are using a RAC environment, you should configure Huge Pages. And if the system is not allowing you to increase memory, take a look at the semaphore parameters; they are probably set to lower values. As for the poorly performing queries, we would need to see the explain plans and table structures, and you should also analyze whether Smart Scan is coming into play.
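As a quick worked example of those rules of thumb (the 250 GB per node comes from the post above; the percentages are guidelines, not a formula):

```python
# Illustrative arithmetic only: apply the ~80%-of-RAM cap, then the
# 60/40 SGA/PGA split suggested above for data-warehouse workloads.
total_ram_gb = 250.0               # per compute node, from the post
db_share = 0.80 * total_ram_gb     # upper bound on instance memory
sga_dw = 0.60 * db_share           # data warehouse: ~60% SGA
pga_dw = 0.40 * db_share           # data warehouse: ~40% PGA
assert (db_share, sga_dw, pga_dw) == (200.0, 120.0, 80.0)
```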
Regards. -
Hi peeps, looking for some pointers on performance after installing a fresh copy of Windows 7 Starter. I have upgraded the RAM to 2 GB, but booting still takes 15 minutes, and running the machine is incredibly slow. I have an S10-3 model with an Intel Atom, which I don't expect to be a powerhouse, but surely it should be usable. Any help would be great, thanks.
Hello
I would suggest you start by looking at the startup programs and deselect the ones you think you may not need during boot.
Cheers and regards,
I am a volunteer here. I don't work for Lenovo -
Help with Performing a Clean Install of Windows 8.1
Hey. I just bought a laptop today and it came with Windows 8.1. However, my issue is that my OS resides in a huge partition (900+ GB), and I don't want this. I want to re-allocate space and take a portion from my OS's drive in order to create more partitions.
However, to do that, I have to format the drive, which I can't do because it contains the OS.
Bottom line: I don't have a CD installer for Windows 8 or Windows 8.1. I am trying to perform a clean install of Windows 8.1 in my laptop. Can I get the installer online? Can I use the product key here in my already activated Windows 8.1 OS? Do I need to
install Windows 8 before installing Windows 8.1?
you may be able to shrink the existing volume(s):
https://technet.microsoft.com/en-us/library/cc731894(v=ws.10).aspx
Don
-
Help with Performance issue with a view
Hi
We developed a custom view to get the data from the gl_je_lines table with source as Payables. We are bringing in the data for the last year and the current year to date, i.e., from 01-JAN-2012 to SYSDATE. This view is in a package body, which is called from a concurrent program to write the data to an outbound file.
The problem I am facing is that this view fetches around 72 lakh (7.2 million) records for the above date range, and the program runs for a long time and completes abruptly without any result. Can anyone please let me know if there is an alternative to this? I checked the view query and there does not seem to be much scope to improve its performance.
Would inserting all this data into a Global Temporary Table help? Please revert at the earliest, as this solution is very urgent for our clients.
Message was edited by: 988490e8-2268-414d-b867-9d9a911c0053
This is the view query:
select GCC.SEGMENT1 "EMPRESA",
GCC.SEGMENT2 "CCUSTO",
GCC.SEGMENT3 "CONTA",
GCC.SEGMENT4 "PRODUTO",
GCC.SEGMENT5 "SERVICO",
GCC.SEGMENT6 "CANAL",
GCC.SEGMENT7 "PROJECT",
GCC.SEGMENT8 "FORWARD1",
GCC.SEGMENT9 "FORWARD2",
FFVT.DESCRIPTION "CONTA_DESCR",
ltrim(substr(XRX_CONSOLIDATION_MAPPING('XDMO_LOCAL_USGAAP_LEGAL_ENTITY',
'XDMO_REPORT_USGAAP_LEGAL_COMPANY',
GCC.SEGMENT1,
GCC.SEGMENT3),
1,
80)) "LEGAL_COMPANY",
ltrim(substr(XRX_CONSOLIDATION_MAPPING('XDMO_LOCAL_USGAAP_ACCOUNT',
'XDMO_REPORT_USGAAP_FIN_ACCOUNT',
GCC.SEGMENT3,
GCC.SEGMENT3),
1,
80)) "GRA",
ltrim(substr(XRX_CONSOLIDATION_MAPPING('XDMO_LOCAL_USGAAP_BUDGET_CENTER',
'XDMO_REPORT_USGAAP_RESPONSIBILITY',
GCC.SEGMENT2,
GCC.SEGMENT3),
1,
80)) "RESP",
ltrim(substr(XRX_CONSOLIDATION_MAPPING('XDMO_LOCAL_USGAAP_PRODUCT',
'XDMO_REPORT_USGAAP_TEAM',
GCC.SEGMENT4,
GCC.SEGMENT3),
1,
80)) "TEAM",
ltrim(substr(XRX_CONSOLIDATION_MAPPING('XDMO_LOCAL_USGAAP_ACCOUNT',
'XDMO_REPORT_USGAAP_FIN_ACCOUNT',
GCC.SEGMENT3,
GCC.SEGMENT3),
164,
80)) "GRA_DESCR",
GJH.NAME "IDLANC",
GJS.USER_JE_SOURCE_NAME "ORIGEM",
GJC.USER_JE_CATEGORY_NAME "CATEGORIA",
GJL.DESCRIPTION "DESCRICAO",
decode(GJH.JE_SOURCE, 'Payables', GJL.REFERENCE_2, '') "INVOICE_ID",
decode(GJH.JE_SOURCE, 'Payables', GJL.REFERENCE_5, '') "NOTA",
decode(GJH.JE_SOURCE, 'Payables', GJL.REFERENCE_1, '') "FORNECEDOR",
GJH.DEFAULT_EFFECTIVE_DATE "DTEFET",
to_char(GJB.POSTED_DATE, 'DD-MON-YYYY HH24:MI:SS') "DTPOSTED",
GJH.CURRENCY_CONVERSION_TYPE "TPTAX",
substr(GCC.SEGMENT9, 8, 1) "TAXA",
GJH.CURRENCY_CONVERSION_DATE "DTCONV",
-- nvl(GJL.ACCOUNTED_DR,0)-nvl(GJL.ACCOUNTED_CR,0) "VALOR",
-- added as per ITT #517830
nvl(GJL.ENTERED_DR, 0) - nvl(GJL.ENTERED_CR, 0) "VALOR",
GJH.CURRENCY_CODE "MOEDA",
-- decode(gcc.segment9, '00000000', 0, '00000001', nvl(GJL.ACCOUNTED_DR,0)-nvl(GJL.ACCOUNTED_CR,0)) "VALOR_FUNCIONAL",
-- added as per ITT #517830
(nvl(GJL.ACCOUNTED_DR, 0) - nvl(GJL.ACCOUNTED_CR, 0)) "VALOR_FUNCIONAL",
GSOB.CURRENCY_CODE "FUNCIONAL",
GJH.PERIOD_NAME "PERIODO",
GJB.STATUS "STATUS",
GSOB.SHORT_NAME "LIVRO",
GJL.LAST_UPDATE_DATE "JL_LAST_UPDATE_DATE",
GJH.LAST_UPDATE_DATE "JH_LAST_UPDATE_DATE",
GJB.LAST_UPDATE_DATE "JB_LAST_UPDATE_DATE",
GJL.JE_HEADER_ID "JE_HEADER_ID",
GJL.JE_LINE_NUM "JE_LINE_NUM"
from GL.GL_JE_LINES GJL,
GL.GL_JE_HEADERS GJH,
GL.GL_JE_BATCHES GJB,
--GL.GL_SETS_OF_BOOKS GSOB, ---As GL_SETS_OF_BOOKS table dropped in R12 so replaced with GL_LEDGERS table,Commented as part of DMO R12 Upgrade-RFC#411290.
GL.GL_LEDGERS GSOB, ---Added as part of DMO R12 Upgrade-RFC#411290.
GL.GL_JE_SOURCES_TL GJS,
GL.GL_JE_CATEGORIES_TL GJC,
GL.GL_CODE_COMBINATIONS GCC,
APPLSYS.FND_FLEX_VALUES_TL FFVT,
APPLSYS.FND_FLEX_VALUES FFV,
APPLSYS.FND_FLEX_VALUE_SETS FFVS
where GJL.CODE_COMBINATION_ID = GCC.CODE_COMBINATION_ID
and GJL.JE_HEADER_ID = GJH.JE_HEADER_ID
and GJH.JE_BATCH_ID = GJB.JE_BATCH_ID
--and GJB.SET_OF_BOOKS_ID = GSOB.SET_OF_BOOKS_ID ---Changing the mappings between the tables GL_JE_HEADERS and GL_JE_BATCHES As column SET_OF_BOOKS_ID of table GL_JE_BATCHES dropped in R12,Commented as part of DMO R12 Upgrade-RFC#411290.
and GJH.LEDGER_ID = GSOB.LEDGER_ID ---Added as part of DMO R12 Upgrade-RFC#411290.
and GJH.JE_SOURCE = GJS.JE_SOURCE_NAME
and GJH.JE_CATEGORY = GJC.JE_CATEGORY_NAME
and GCC.SEGMENT3 = FFV.FLEX_VALUE
and FFV.FLEX_VALUE_ID = FFVT.FLEX_VALUE_ID
and FFV.FLEX_VALUE_SET_ID = FFVS.FLEX_VALUE_SET_ID
and FFVS.FLEX_VALUE_SET_NAME = 'XDMO_LOCAL_USGAAP_ACCOUNT'
and GSOB.SHORT_NAME in ('XBRA BRL LOCAL GAAP', 'XBRA BRL USGAAP')
and gcc.chart_of_accounts_id = gsob.chart_of_accounts_id
and gjh.actual_flag = 'A'
DB VErsion: 11.2.0.3.0
The problem I am facing is that the above query fetches a huge amount of data, and I want to know if there is any way to improve its performance. You are right that the view is stored in the DB. I am using this view query in a cursor to fetch the records -
Help with Performance tunning PL/SQL
Hi All,
I have a PL/SQL procedure, and it works fine: no errors and no bugs. However, it's taking forever to finish. I am using the concatenation operator (||), and I know it's expensive. How can I improve the performance of the procedure?
Here is the code
create or replace
PROCEDURE POST_ADDRESS_CLEANSE AS
CURSOR C1 IS
SELECT Z.ROW_ID,
Z.NAME
FROM STGDATA.ACCOUNT_SOURCE Z;
CURSOR C2 IS
SELECT DISTINCT CLEANSED_NAME || CLEANSED_STREET_ADDRESS ||
CLEANSED_STREET_ADDRESS_2 || CLEANSED_CITY || CLEANSED_STATE ||
CLEANSED_POSTAL_CODE AS FULLRECORD
FROM STGDATA.ACCOUNT_SOURCE_CLEANSED;
V_ROWID Number := 1;
V_FLAG VARCHAR2(30);
TEMP_ROW_ID VARCHAR2(10) := NULL;
BEGIN
-- This loop will update CLEANSED_NAME column in ACCOUNT_SOURCE_CLEANSED table.
FOR X IN C1 LOOP
TEMP_ROW_ID := TO_CHAR(X.ROW_ID);
UPDATE STGDATA.ACCOUNT_SOURCE_CLEANSED A
SET A.CLEANSED_NAME = X.NAME
WHERE A.ROW_ID = TEMP_ROW_ID;
COMMIT;
END LOOP;
-- This loop will update columns EM_PRIMARY_FLAG, EM_GROUP_ID in ACCOUNT_SOURCE_CLEANSED table
FOR Y IN C2 LOOP
UPDATE STGDATA.ACCOUNT_SOURCE_CLEANSED
SET EM_GROUP_ID = V_ROWID
WHERE CLEANSED_NAME || CLEANSED_STREET_ADDRESS || CLEANSED_STREET_ADDRESS_2 ||
CLEANSED_CITY || CLEANSED_STATE || CLEANSED_POSTAL_CODE = Y.FULLRECORD;
UPDATE STGDATA.ACCOUNT_SOURCE_CLEANSED
SET EM_PRIMARY_FLAG = 'Y'
WHERE CLEANSED_NAME || CLEANSED_STREET_ADDRESS || CLEANSED_STREET_ADDRESS_2 ||
CLEANSED_CITY || CLEANSED_STATE || CLEANSED_POSTAL_CODE = Y.FULLRECORD
AND ROWNUM = 1;
V_ROWID := V_ROWID + 1;
COMMIT;
END LOOP;
UPDATE STGDATA.ACCOUNT_SOURCE_CLEANSED
SET EM_PRIMARY_FLAG = 'N'
WHERE EM_PRIMARY_FLAG IS NULL;
COMMIT;
--dbms_output.put_line('V_ROW:'||V_ROWID);
--dbms_output.put_line('CLEANSED_NAME:'||Y.FULLRECORD);
END POST_ADDRESS_CLEANSE;
Thanks in advance.
Message was edited by: Rooney -- added code using syntax highlight
I was able to modify my code a bit; however, I don't see a way of not using loops.
In the loop, I am updating 2 columns: EM_PRIMARY_FLAG and EM_GROUP_ID. The data I am working with has duplicate records, and that is why I am using DISTINCT in the cursor. The requirement is to make one record the primary record, and the rest reference records. What makes a record primary is updating column EM_PRIMARY_FLAG with a 'Y', and updating EM_GROUP_ID with a number that combines all duplicate records into a group.
In the procedure, I am getting the distinct records, looping through each one, and then doing 2 updates:
1 - Update EM_PRIMARY_FLAG to 'Y' where ROWNUM = 1; this sets one record as primary.
2 - Update EM_GROUP_ID to a number (V_ROWID := V_ROWID + 1), where V_ROWID starts from 1, to group all records into a set.
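The two updates per distinct key described above can, in principle, be collapsed into a single pass over the data. A minimal sketch of that grouping logic in Python (field names here are illustrative; in Oracle, the same effect can be achieved set-based, e.g. with DENSE_RANK for the group id and ROW_NUMBER for the primary flag over the cleansed-key columns, fed into a single MERGE):

```python
# One pass over the cleansed rows: hand out a new EM_GROUP_ID the first
# time a cleansed key is seen (that row becomes the primary, 'Y'); every
# later row with the same key joins the group as a reference row ('N').
def assign_groups(rows):
    group_ids = {}
    for row in rows:
        key = (row["name"], row["addr"], row["addr2"],
               row["city"], row["state"], row["zip"])
        if key not in group_ids:
            group_ids[key] = len(group_ids) + 1   # like V_ROWID := V_ROWID + 1
            row["em_primary_flag"] = "Y"          # first row of the group
        else:
            row["em_primary_flag"] = "N"          # reference row
        row["em_group_id"] = group_ids[key]
    return rows
```

This replaces two full-table UPDATE scans per distinct key with a single scan of the whole table.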
Here is my latest code after modifying it:
create or replace
PROCEDURE POST_ADDRESS_CLEANSE AS
CURSOR C1 IS
SELECT DISTINCT NVL(CLEANSED_NAME, '') AS NAME_CLEANSED,
NVL(CLEANSED_STREET_ADDRESS, '') AS ADDRESS_CLEANSED,
NVL(CLEANSED_STREET_ADDRESS_2, '') AS ADDRESS2_CLEANSED,
NVL(CLEANSED_CITY, '') AS CITY_CLEANSED,
NVL(CLEANSED_STATE, '') AS STATE_CLEANSED,
NVL(CLEANSED_POSTAL_CODE, '') AS POSTAL_CODE_CLEANSED
FROM STGDATA.ACCOUNT_SOURCE_CLEANSED;
V_ROWID Number := 1;
V_FLAG VARCHAR2(30);
TEMP_ROW_ID VARCHAR2(10) := NULL;
BEGIN
UPDATE STGDATA.ACCOUNT_SOURCE_CLEANSED A
SET A.CLEANSED_NAME = (SELECT Z.NAME
FROM STGDATA.ACCOUNT_SOURCE Z
WHERE Z.ROW_ID = (SELECT TO_NUMBER(B.ROW_ID)
FROM STGDATA.ACCOUNT_SOURCE_CLEANSED B
WHERE B.ROW_ID = A.ROW_ID));
COMMIT;
-- This loop will update columns EM_PRIMARY_FLAG, EM_GROUP_ID in ACCOUNT_SOURCE_CLEANSED table
FOR Y IN C1 LOOP
UPDATE STGDATA.ACCOUNT_SOURCE_CLEANSED
SET EM_GROUP_ID = V_ROWID
WHERE CLEANSED_NAME = Y.NAME_CLEANSED
AND CLEANSED_STREET_ADDRESS = Y.ADDRESS_CLEANSED
AND CLEANSED_STREET_ADDRESS_2 = Y.ADDRESS2_CLEANSED
AND CLEANSED_CITY = Y.CITY_CLEANSED
AND CLEANSED_STATE = Y.STATE_CLEANSED
AND CLEANSED_POSTAL_CODE = Y.POSTAL_CODE_CLEANSED;
UPDATE STGDATA.ACCOUNT_SOURCE_CLEANSED
SET EM_PRIMARY_FLAG = 'Y'
WHERE CLEANSED_NAME = Y.NAME_CLEANSED
AND CLEANSED_STREET_ADDRESS = Y.ADDRESS_CLEANSED
AND CLEANSED_STREET_ADDRESS_2 = Y.ADDRESS2_CLEANSED
AND CLEANSED_CITY = Y.CITY_CLEANSED
AND CLEANSED_STATE = Y.STATE_CLEANSED
AND CLEANSED_POSTAL_CODE = Y.POSTAL_CODE_CLEANSED
AND ROWNUM = 1;
V_ROWID := V_ROWID + 1;
END LOOP;
COMMIT;
UPDATE STGDATA.ACCOUNT_SOURCE_CLEANSED
SET EM_PRIMARY_FLAG = 'N'
WHERE EM_PRIMARY_FLAG IS NULL;
COMMIT;
END POST_ADDRESS_CLEANSE;
Thanks
Message was edited by: Rooney - Just added the code in SQL block using syntax highlight. -
Help with performance SQL tuning - Rewriting the query
Hi
I have serious performance issues with some 8 update queries.
These were earlier taking 5 minutes; now they are taking 2.5 hours.
This is one of the culprit UPDATE statements (there are 7 other such update statements on different tables, but with the same logic).
We have changed the UPDATE to a MERGE and used PARALLEL hints but have not got the desired results.
There are appropriate indexes on the tables.
Is there a way to rewrite the UPDATE statement in a better way to improve the performance?
update TABLE_dob
set key_act =
(select skey from table_subs
where sub_act = sub_num)
where exists
(select 1 from table_subs
where sub_act = sub_num);
Table_DOB has 37 million records
Table_subs has 20 million records
aashoo_5 wrote:
Hi
I have serious performance issues with some 8 update queries
These were earlier taking 5 mins . Now taking 2.5 hours
This is one of the culprit UPDATE statement (These are 7 such other update statements on different tables but same logic)
We have change the update to MERGE and used PARALLEL hints but have not got desired results
There are appropriate indexes on the tables
Is there a way to rewrite the UPDATE statement in a better way to improve the performance
update TABLE_dob
set key_act =
(select skey from table_subs
where sub_act = sub_num)
where exists
(select 1 from table_subs
where sub_act = sub_num);
Table_DOB has 37 million records
Table_subs has 20 million records
Thread: HOW TO: Post a SQL statement tuning request - template posting -
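For reference, the semantics of the UPDATE in question are easy to pin down on a toy dataset (SQLite here purely for illustration; the real tables are Oracle, and the values are made up). Rows of TABLE_dob with no match in table_subs are left untouched by the EXISTS guard, which is exactly what a MERGE rewrite must preserve:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE table_dob  (sub_num INTEGER, key_act INTEGER);
CREATE TABLE table_subs (sub_act INTEGER, skey INTEGER);
INSERT INTO table_dob  VALUES (1, NULL), (2, NULL), (3, NULL);
INSERT INTO table_subs VALUES (1, 100), (3, 300);
""")

# Same shape as the posted statement: correlated scalar subquery for the
# new value, EXISTS guard so unmatched rows keep their old key_act.
con.execute("""
UPDATE table_dob
SET key_act = (SELECT skey FROM table_subs WHERE sub_act = sub_num)
WHERE EXISTS (SELECT 1 FROM table_subs WHERE sub_act = sub_num)
""")

rows = con.execute(
    "SELECT sub_num, key_act FROM table_dob ORDER BY sub_num").fetchall()
assert rows == [(1, 100), (2, None), (3, 300)]  # row 2 untouched
```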
Need help with performance for very very huge tables...
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production.
My DB has many tables and out of which I am interested in getting data from product and sales.
select /*+ parallel(32) */ count(1) from (
select /*+ parallel(32) */ distinct prod_code from product pd, sales s
where pd.prod_opt_cd is NULL
and s.sales_id = pd.sales_id
and s.creation_dts between to_date ('2012-07-01','YYYY-MM-DD') and
to_date ('2012-07-31','YYYY-MM-DD')
More information -
Total Rows in sales table - 18001217
Total rows in product table - 411800392
creation_dts doesn't have an index on it.
I started the query in the background, but after 30 hours I saw an error saying -
ORA-01555: snapshot too old: rollback segment number 153 with name
Is there any other way to get the above data in an optimized way?
Formatting your query a bit (and removing the hints), it evaluates to:
SELECT COUNT(1)
FROM (SELECT DISTINCT prod_code
FROM product pd
INNER JOIN sales s
ON s.sales_id = pd.sales_id
WHERE pd.prod_opt_cd is NULL
AND s.creation_dts BETWEEN TO_DATE('2012-07-01','YYYY-MM-DD')
AND TO_DATE('2012-07-31','YYYY-MM-DD')
);
This should be equivalent to
SELECT COUNT(DISTINCT prod_code)
FROM product pd
INNER JOIN sales s
ON s.sales_id = pd.sales_id
WHERE pd.prod_opt_cd is NULL
AND s.creation_dts BETWEEN TO_DATE('2012-07-01','YYYY-MM-DD')
AND TO_DATE('2012-07-31','YYYY-MM-DD');
On the face of it, that's a ridiculously simple query. If s.sales_id and pd.sales_id are both indexed, then I don't see why it would take a huge amount of time. Even having to perform a FTS on the sales table because creation_dts isn't indexed shouldn't make it a 30-hour query. If either of those two is not indexed, then joining the two tables is a much uglier prospect. However, if you often join the product and sales tables (which seems likely), then not having those fields indexed would be contraindicated. -
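The equivalence claimed above (wrapping a DISTINCT in a COUNT(1) subquery versus a flat COUNT(DISTINCT ...)) is easy to sanity-check on a toy dataset. SQLite is used here purely for illustration, with tiny made-up tables mirroring the product/sales shape:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE product (prod_code TEXT, prod_opt_cd TEXT, sales_id INTEGER);
CREATE TABLE sales   (sales_id INTEGER, creation_dts TEXT);
INSERT INTO product VALUES ('A', NULL, 1), ('A', NULL, 2),
                           ('B', NULL, 1), ('C', 'X', 1);
INSERT INTO sales   VALUES (1, '2012-07-10'), (2, '2012-07-20'),
                           (3, '2012-08-01');
""")

# The original wrapped form: count rows of a DISTINCT subquery.
wrapped = con.execute("""
    SELECT COUNT(1) FROM (
        SELECT DISTINCT prod_code
        FROM product pd JOIN sales s ON s.sales_id = pd.sales_id
        WHERE pd.prod_opt_cd IS NULL
          AND s.creation_dts BETWEEN '2012-07-01' AND '2012-07-31')
""").fetchone()[0]

# The flattened form: COUNT(DISTINCT ...) directly.
flat = con.execute("""
    SELECT COUNT(DISTINCT prod_code)
    FROM product pd JOIN sales s ON s.sales_id = pd.sales_id
    WHERE pd.prod_opt_cd IS NULL
      AND s.creation_dts BETWEEN '2012-07-01' AND '2012-07-31'
""").fetchone()[0]

assert wrapped == flat == 2  # only 'A' and 'B' qualify in July
```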
Need help with Performance Tuning
The following query takes 8 seconds. Any help will be appreciated.
SELECT SUM(FuturesMarketVal) FuturesMarketVal
FROM (SELECT CASE WHEN null IS NULL THEN FuturesMarketVal
ELSE DECODE(FUTURES_NAME, null, FuturesMarketVal, 0)
END FuturesMarketVal
FROM (SELECT SUM( (a.FUTURES_ALLOC * (NVL(b.Futures_EOD_Price,0)/100 - NVL(c.Futures_EOD_Price,0)/100) * a.CONTRACT_SIZE) / DECODE(a.CONTRACT_SIZE,100000,1,1000000,4,3000000,12,1) ) FuturesMarketVal,
a.FUTURES_NAME
FROM cms_futures_trans a,
cms_futures_price b,
cms_futures_price c
Where c.history_date (+) = TO_DATE(fas_pas_pkg.get_weekday(to_date('12/30/2005') - 1),'mm/dd/yyyy')
and a.FUTURES_NAME = b.FUTURES_NAME (+)
AND a.trade_date < TO_DATE('12/30/2005','mm/dd/yyyy')
AND b.history_date (+) = TO_DATE('12/30/2005','mm/dd/yyyy')
AND a.FUTURES_NAME = c.FUTURES_NAME (+)
GROUP BY a.FUTURES_NAME
/
Eric:
But there are only 5 records in cms_futures_price and 10 in cms_futures_trans :-)
OP:
I'm not sure what you are trying to do here, but a couple of comments.
Since NULL IS NULL will always be true, you don't really need the CASE statement. As it stands, your query will always return FuturesMarketVal. If the results are correct, then you can do without the DECODE as well.
Why are you calling fas_pas_pkg.get_weekday with a constant value? Can you not just use whatever it returns as a constant instead of calling the function?
Are you sure you need all those outer joins? They almost guarantee full scans of the outer joined tables.
Perhaps if you post some representative data from the two tables and an explanation of what you are trying to accomplish someone may have a better idea.
John -
I have a query that is taking too long when it should take less than 5 seconds. How can I solve this problem? I think U_FINALRESULT_USER is the problem, but I am not sure and don't know how I could fix it.
Explain
SQL Statement which produced this data:
select * from table(dbms_xplan.display)
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost |
| 0 | SELECT STATEMENT | | 584 | 3522K| | 4484 |
|* 1 | FILTER | | | | | |
|* 2 | CONNECT BY WITH FILTERING | | | | | |
|* 3 | FILTER | | | | | |
| 4 | COUNT | | | | | |
|* 5 | HASH JOIN | | 584 | 3522K| | 4484 |
| 6 | VIEW | | 74 | 80660 | | 428 |
| 7 | WINDOW SORT | | 74 | 740 | | 428 |
| 8 | SORT GROUP BY | | 74 | 740 | | 428 |
|* 9 | TABLE ACCESS FULL | U_FINALRESULT_USER | 74 | 740 | | 425 |
|* 10 | HASH JOIN OUTER | | 789 | 3918K| 3120K| 4038 |
| 11 | VIEW | | 789 | 3106K| | 3530 |
| 12 | FILTER | | | | | |
| 13 | TABLE ACCESS BY INDEX ROWID | TEST | 772K| 10M| | 3 |
| 14 | NESTED LOOPS | | 789 | 33927 | | 3530 |
| 15 | NESTED LOOPS | | 789 | 22881 | | 1163 |
| 16 | NESTED LOOPS | | 383 | 6894 | | 14 |
| 17 | NESTED LOOPS | | 1 | 10 | | 3 |
| 18 | TABLE ACCESS BY INDEX ROWID| SDG | 1 | 4 | | 2 |
|* 19 | INDEX UNIQUE SCAN | PK_SDG | 865 | | | 1 |
| 20 | TABLE ACCESS BY INDEX ROWID| SDG_USER | 1 | 6 | | 1 |
|* 21 | INDEX UNIQUE SCAN | PK_SDG_USER | 1 | | | |
| 22 | TABLE ACCESS BY INDEX ROWID | SAMPLE | 1 | 8 | | 11 |
|* 23 | INDEX RANGE SCAN | FK_SAMPLE_SDG | 383 | | | 2 |
|* 24 | TABLE ACCESS BY INDEX ROWID | ALIQUOT | 1 | 11 | | 3 |
|* 25 | INDEX RANGE SCAN | FK_ALIQUOT_SAMPLE | 2 | | | 2 |
|* 26 | INDEX RANGE SCAN | FK_TEST_ALIQUOT | 1 | | | 2 |
| 27 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 28 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 29 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 30 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 31 | VIEW | | 37 | 38998 | | 428 |
| 32 | SORT UNIQUE | | 37 | 555 | | 428 |
| 33 | WINDOW SORT | | 37 | 555 | | 428 |
|* 34 | TABLE ACCESS FULL | U_FINALRESULT_USER | 37 | 555 | | 425 |
| 35 | HASH JOIN | | | | | |
| 36 | CONNECT BY PUMP | | | | | |
| 37 | COUNT | | | | | |
|* 38 | HASH JOIN | | 584 | 3522K| | 4484 |
| 39 | VIEW | | 74 | 80660 | | 428 |
| 40 | WINDOW SORT | | 74 | 740 | | 428 |
| 41 | SORT GROUP BY | | 74 | 740 | | 428 |
|* 42 | TABLE ACCESS FULL | U_FINALRESULT_USER | 74 | 740 | | 425 |
|* 43 | HASH JOIN OUTER | | 789 | 3918K| 3120K| 4038 |
| 44 | VIEW | | 789 | 3106K| | 3530 |
| 45 | FILTER | | | | | |
| 46 | TABLE ACCESS BY INDEX ROWID | TEST | 772K| 10M| | 3 |
| 47 | NESTED LOOPS | | 789 | 33927 | | 3530 |
| 48 | NESTED LOOPS | | 789 | 22881 | | 1163 |
| 49 | NESTED LOOPS | | 383 | 6894 | | 14 |
| 50 | NESTED LOOPS | | 1 | 10 | | 3 |
| 51 | TABLE ACCESS BY INDEX ROWID| SDG | 1 | 4 | | 2 |
|* 52 | INDEX UNIQUE SCAN | PK_SDG | 865 | | | 1 |
| 53 | TABLE ACCESS BY INDEX ROWID| SDG_USER | 1 | 6 | | 1 |
|* 54 | INDEX UNIQUE SCAN | PK_SDG_USER | 1 | | | |
| 55 | TABLE ACCESS BY INDEX ROWID | SAMPLE | 1 | 8 | | 11 |
|* 56 | INDEX RANGE SCAN | FK_SAMPLE_SDG | 383 | | | 2 |
|* 57 | TABLE ACCESS BY INDEX ROWID | ALIQUOT | 1 | 11 | | 3 |
|* 58 | INDEX RANGE SCAN | FK_ALIQUOT_SAMPLE | 2 | | | 2 |
|* 59 | INDEX RANGE SCAN | FK_TEST_ALIQUOT | 1 | | | 2 |
| 60 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 61 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 62 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 63 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 64 | VIEW | | 37 | 38998 | | 428 |
| 65 | SORT UNIQUE | | 37 | 555 | | 428 |
| 66 | WINDOW SORT | | 37 | 555 | | 428 |
|* 67 | TABLE ACCESS FULL | U_FINALRESULT_USER | 37 | 555 | | 425 |
Predicate Information (identified by operation id):
1 - filter("FR_PIVOT"."MAXLEVEL"=LEVEL)
2 - filter("FR_PIVOT"."RANK"=1)
3 - filter("FR_PIVOT"."RANK"=1)
5 - access("FR_PIVOT"."REF"=TO_CHAR("FR"."U_SDG_ID")||TO_CHAR("FR"."U_TEST_TEMPLATE_ID"))
9 - filter(NVL("U_FINALRESULT_USER"."U_OVERRULED_RESULT","U_FINALRESULT_USER"."U_CALCULATED_RESULT
")<>'X' AND "U_FINALRESULT_USER"."U_SDG_ID"=TO_NUMBER(:Z))
10 - access("SD"."SDG_ID"="FR"."U_SDG_ID"(+) AND "SD"."TEST_TEMPLATE_ID"="FR"."U_TEST_TEMPLATE_ID"(
+))
19 - access("SYS_ALIAS_4"."SDG_ID"=TO_NUMBER(:Z))
21 - access("SYS_ALIAS_4"."SDG_ID"="SDG_USER"."SDG_ID")
23 - access("SYS_ALIAS_4"."SDG_ID"="SYS_ALIAS_3"."SDG_ID")
24 - filter("SYS_ALIAS_2"."STATUS"='C' OR "SYS_ALIAS_2"."STATUS"='P' OR "SYS_ALIAS_2"."STATUS"='V')
25 - access("SYS_ALIAS_2"."SAMPLE_ID"="SYS_ALIAS_3"."SAMPLE_ID")
26 - access("SYS_ALIAS_1"."ALIQUOT_ID"="SYS_ALIAS_2"."ALIQUOT_ID")
34 - filter("U_FINALRESULT_USER"."U_REQUESTED"='T' AND NVL("U_FINALRESULT_USER"."U_OVERRULED_RESULT
","U_FINALRESULT_USER"."U_CALCULATED_RESULT")<>'X' AND "U_FINALRESULT_USER"."U_SDG_ID"=
TO_NUMBER(:Z))
38 - access("FR_PIVOT"."REF"=TO_CHAR("FR"."U_SDG_ID")||TO_CHAR("FR"."U_TEST_TEMPLATE_ID"))
42 - filter(NVL("U_FINALRESULT_USER"."U_OVERRULED_RESULT","U_FINALRESULT_USER"."U_CALCULATED_RESULT
")<>'X' AND "U_FINALRESULT_USER"."U_SDG_ID"=TO_NUMBER(:Z))
43 - access("SD"."SDG_ID"="FR"."U_SDG_ID"(+) AND "SD"."TEST_TEMPLATE_ID"="FR"."U_TEST_TEMPLATE_ID"(
+))
52 - access("SYS_ALIAS_4"."SDG_ID"=TO_NUMBER(:Z))
54 - access("SYS_ALIAS_4"."SDG_ID"="SDG_USER"."SDG_ID")
56 - access("SYS_ALIAS_4"."SDG_ID"="SYS_ALIAS_3"."SDG_ID")
57 - filter("SYS_ALIAS_2"."STATUS"='C' OR "SYS_ALIAS_2"."STATUS"='P' OR "SYS_ALIAS_2"."STATUS"='V')
58 - access("SYS_ALIAS_2"."SAMPLE_ID"="SYS_ALIAS_3"."SAMPLE_ID")
59 - access("SYS_ALIAS_1"."ALIQUOT_ID"="SYS_ALIAS_2"."ALIQUOT_ID")
67 - filter("U_FINALRESULT_USER"."U_REQUESTED"='T' AND NVL("U_FINALRESULT_USER"."U_OVERRULED_RESULT
","U_FINALRESULT_USER"."U_CALCULATED_RESULT")<>'X' AND "U_FINALRESULT_USER"."U_SDG_ID"=
TO_NUMBER(:Z))
Note: cpu costing is off
Tkprof
TKPROF: Release 9.2.0.1.0 - Production on Fri Jul 13 15:03:47 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Trace file: d:\oracle\admin\nautdev\udump\nautdev_ora_13020.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
alter session set sql_trace true
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 1 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
select VALUE
from
nls_session_parameters where PARAMETER='NLS_NUMERIC_CHARACTERS'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 1
total 3 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 FIXED TABLE FULL X$NLS_PARAMETERS
select VALUE
from
nls_session_parameters where PARAMETER='NLS_DATE_FORMAT'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 1
total 3 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 FIXED TABLE FULL X$NLS_PARAMETERS
select VALUE
from
nls_session_parameters where PARAMETER='NLS_CURRENCY'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 1
total 3 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 FIXED TABLE FULL X$NLS_PARAMETERS
select to_char(9,'9C')
from
dual
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 3 0 1
total 3 0.00 0.00 0 3 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 TABLE ACCESS FULL DUAL
SELECT sd.u_bas_stockseed_code,
sd.u_bas_storage_code,
sd.description as test,
case when fr.resultcount < 8
then null
else case when fr.resultdistinct > 1
then 'spl'
else fr.resultfinal
end
end as result,
case when level >=2
then substr(sys_connect_by_path(valcount,','),2)
end as spl
FROM
(
SELECT sd.sdg_id,
sa.sample_id,
t.test_template_id,
sdu.u_bas_stockseed_code,
sdu.u_bas_storage_code,
t.description
FROM lims_sys.sdg sd, lims_sys.sdg_user sdu, lims_sys.sample sa, lims_sys.aliquot a,lims_sys.test t
WHERE sd.sdg_id = sdu.sdg_id
AND sd.sdg_id = sa.sdg_id
AND a.sample_id = sa.sample_id
AND t.aliquot_id = a.aliquot_id
AND a.status IN ('V','P','C')
AND sd.sdg_id IN (:SDGID)
) sd,
(
SELECT distinct fr.u_sdg_id,
fr.u_sample_id,
fr.u_test_template_id,
nvl(fr.u_overruled_result, fr.u_calculated_result) as Resultfinal,
count(distinct nvl(fr.u_overruled_result, fr.u_calculated_result)) over (partition by concat(fr.u_sdg_id, fr.u_test_template_id)) as resultdistinct,
count(nvl(fr.u_overruled_result, fr.u_calculated_result)) over (partition by concat(fr.u_sdg_id, fr.u_test_template_id)) as resultcount
FROM lims_sys.u_finalresult_user fr
WHERE fr.u_requested = 'T'
AND nvl(fr.u_overruled_result,fr.u_calculated_result) != 'X'
AND fr.u_sdg_id IN (:SDGID)
) fr,
(
SELECT concat(fr.u_sdg_id, fr.u_test_template_id) as ref,
nvl( fr.u_overruled_result, fr.u_calculated_result),
to_char(count(*)) || 'x' || nvl(fr.u_overruled_result, fr.u_calculated_result) as valcount,
row_number() over (partition by concat(fr.u_sdg_id, fr.u_test_template_id) order by count(*) desc, nvl(fr.u_overruled_result, fr.u_calculated_result)) as rank,
count(*) over (partition by concat(fr.u_sdg_id, fr.u_test_template_id)) AS MaxLevel
FROM lims_sys.u_finalresult_user fr
WHERE nvl(fr.u_overruled_result,fr.u_calculated_result) != 'X'
AND fr.u_sdg_id IN (:SDGID)
GROUP BY concat(fr.u_sdg_id, fr.u_test_template_id), nvl(fr.u_overruled_result,fr.u_calculated_result)
) fr_pivot
WHERE sd.sdg_id = fr.u_sdg_id (+)
AND sd.test_template_id = fr.u_test_template_id (+)
AND fr_pivot.ref = concat(fr.u_sdg_id,fr.u_test_template_id)
AND level = maxlevel
start with rank = 1 connect by
prior fr.u_sdg_id = fr.u_sdg_id
and prior fr.u_test_template_id = fr.u_test_template_id
and prior rank = rank - 1
call count cpu elapsed disk query current rows
Parse 1 0.03 0.58 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 1 8344.42 4154501457.66 15955 140391 178371 500
total 4 8344.45 4154501458.25 15955 140391 178371 500
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
500 FILTER
507 CONNECT BY WITH FILTERING
24731 FILTER
169667 COUNT
169667 HASH JOIN
34 VIEW
34 WINDOW SORT
34 SORT GROUP BY
312 TABLE ACCESS FULL U_FINALRESULT_USER
24731 HASH JOIN OUTER
546 VIEW
546 FILTER
546 TABLE ACCESS BY INDEX ROWID TEST
1093 NESTED LOOPS
546 NESTED LOOPS
123 NESTED LOOPS
1 NESTED LOOPS
1 TABLE ACCESS BY INDEX ROWID SDG
1 INDEX UNIQUE SCAN PK_SDG (object id 54343)
1 TABLE ACCESS BY INDEX ROWID SDG_USER
1 INDEX UNIQUE SCAN PK_SDG_USER (object id 54368)
123 TABLE ACCESS BY INDEX ROWID SAMPLE
123 INDEX RANGE SCAN FK_SAMPLE_SDG (object id 54262)
546 TABLE ACCESS BY INDEX ROWID ALIQUOT
546 INDEX RANGE SCAN FK_ALIQUOT_SAMPLE (object id 53620)
546 INDEX RANGE SCAN FK_TEST_ALIQUOT (object id 54493)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
291 VIEW
291 SORT UNIQUE
291 WINDOW SORT
291 TABLE ACCESS FULL U_FINALRESULT_USER
1312330604 HASH JOIN
169667 CONNECT BY PUMP
2036004 COUNT
2036004 HASH JOIN
408 VIEW
408 WINDOW SORT
408 SORT GROUP BY
3744 TABLE ACCESS FULL U_FINALRESULT_USER
296772 HASH JOIN OUTER
6552 VIEW
6552 FILTER
6552 TABLE ACCESS BY INDEX ROWID TEST
13116 NESTED LOOPS
6552 NESTED LOOPS
1476 NESTED LOOPS
12 NESTED LOOPS
12 TABLE ACCESS BY INDEX ROWID SDG
12 INDEX UNIQUE SCAN PK_SDG (object id 54343)
12 TABLE ACCESS BY INDEX ROWID SDG_USER
12 INDEX UNIQUE SCAN PK_SDG_USER (object id 54368)
1476 TABLE ACCESS BY INDEX ROWID SAMPLE
1476 INDEX RANGE SCAN FK_SAMPLE_SDG (object id 54262)
6552 TABLE ACCESS BY INDEX ROWID ALIQUOT
6552 INDEX RANGE SCAN FK_ALIQUOT_SAMPLE (object id 53620)
6552 INDEX RANGE SCAN FK_TEST_ALIQUOT (object id 54493)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
3492 VIEW
3492 SORT UNIQUE
3492 WINDOW SORT
3492 TABLE ACCESS FULL U_FINALRESULT_USER
select 'x'
from
dual
call count cpu elapsed disk query current rows
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 6 0 2
total 6 0.00 0.00 0 6 0 2
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 TABLE ACCESS FULL DUAL
begin :id := sys.dbms_transaction.local_transaction_id; end;
call count cpu elapsed disk query current rows
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 2
Fetch 0 0.00 0.00 0 0 0 0
total 4 0.00 0.00 0 0 0 2
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 9 0.03 0.59 0 0 0 0
Execute 11 0.00 0.00 0 0 0 2
Fetch 7 8344.42 4154501457.66 15955 140400 178371 506
total 27 8344.45 4154501458.25 15955 140400 178371 508
Misses in library cache during parse: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 40 0.00 0.00 0 0 0 0
Execute 40 0.00 0.00 0 0 0 0
Fetch 40 0.00 0.00 0 81 0 40
total 120 0.00 0.00 0 81 0 40
Misses in library cache during parse: 0
10 user SQL statements in session.
40 internal SQL statements in session.
50 SQL statements in session.
Trace file: d:\oracle\admin\nautdev\udump\nautdev_ora_13020.trc
Trace file compatibility: 9.00.01
Sort options: default
1 session in tracefile.
10 user SQL statements in trace file.
40 internal SQL statements in trace file.
50 SQL statements in trace file.
10 unique SQL statements in trace file.
544 lines in trace file.
I altered the query as you suggested and also added an ORDERED hint. It seems that this solved my problem. Thank you.
Explain
SQL Statement which produced this data:
select * from table(dbms_xplan.display)
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost |
| 0 | SELECT STATEMENT | | 366 | 759K| | 4050 |
| 1 | SORT UNIQUE | | 366 | 759K| 1960K| 4050 |
|* 2 | FILTER | | | | | |
|* 3 | CONNECT BY WITH FILTERING | | | | | |
|* 4 | FILTER | | | | | |
| 5 | COUNT | | | | | |
| 6 | FILTER | | | | | |
|* 7 | HASH JOIN | | 366 | 759K| | 3918 |
| 8 | NESTED LOOPS | | 766 | 32938 | | 3461 |
| 9 | NESTED LOOPS | | 766 | 22214 | | 1163 |
| 10 | NESTED LOOPS | | 383 | 6894 | | 14 |
| 11 | NESTED LOOPS | | 1 | 10 | | 3 |
| 12 | TABLE ACCESS BY INDEX ROWID| SDG | 1 | 4 | | 2 |
|* 13 | INDEX UNIQUE SCAN | PK_SDG | 865 | | | 1 |
| 14 | TABLE ACCESS BY INDEX ROWID| SDG_USER | 1 | 6 | | 1 |
|* 15 | INDEX UNIQUE SCAN | PK_SDG_USER | 1 | | | |
| 16 | TABLE ACCESS BY INDEX ROWID | SAMPLE | 383 | 3064 | | 11 |
|* 17 | INDEX RANGE SCAN | FK_SAMPLE_SDG | 383 | | | 2 |
|* 18 | TABLE ACCESS BY INDEX ROWID | ALIQUOT | 2 | 22 | | 3 |
|* 19 | INDEX RANGE SCAN | FK_ALIQUOT_SAMPLE | 2 | | | 2 |
|* 20 | TABLE ACCESS BY INDEX ROWID | TEST | 1 | 14 | | 3 |
|* 21 | INDEX RANGE SCAN | FK_TEST_ALIQUOT | 1 | | | 2 |
| 22 | VIEW | | 74 | 150K| | 455 |
| 23 | WINDOW SORT | | 74 | 150K| 408K| 455 |
| 24 | VIEW | | 74 | 150K| | 428 |
| 25 | SORT UNIQUE | | 74 | 740 | | 428 |
| 26 | WINDOW SORT | | 74 | 740 | | 428 |
|* 27 | TABLE ACCESS FULL | U_FINALRESULT_USER | 74 | 740 | | 425 |
| 28 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 29 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 30 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 31 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 32 | HASH JOIN | | | | | |
| 33 | CONNECT BY PUMP | | | | | |
| 34 | COUNT | | | | | |
| 35 | FILTER | | | | | |
|* 36 | HASH JOIN | | 366 | 759K| | 3918 |
| 37 | NESTED LOOPS | | 766 | 32938 | | 3461 |
| 38 | NESTED LOOPS | | 766 | 22214 | | 1163 |
| 39 | NESTED LOOPS | | 383 | 6894 | | 14 |
| 40 | NESTED LOOPS | | 1 | 10 | | 3 |
| 41 | TABLE ACCESS BY INDEX ROWID| SDG | 1 | 4 | | 2 |
|* 42 | INDEX UNIQUE SCAN | PK_SDG | 865 | | | 1 |
| 43 | TABLE ACCESS BY INDEX ROWID| SDG_USER | 1 | 6 | | 1 |
|* 44 | INDEX UNIQUE SCAN | PK_SDG_USER | 1 | | | |
| 45 | TABLE ACCESS BY INDEX ROWID | SAMPLE | 383 | 3064 | | 11 |
|* 46 | INDEX RANGE SCAN | FK_SAMPLE_SDG | 383 | | | 2 |
|* 47 | TABLE ACCESS BY INDEX ROWID | ALIQUOT | 2 | 22 | | 3 |
|* 48 | INDEX RANGE SCAN | FK_ALIQUOT_SAMPLE | 2 | | | 2 |
|* 49 | TABLE ACCESS BY INDEX ROWID | TEST | 1 | 14 | | 3 |
|* 50 | INDEX RANGE SCAN | FK_TEST_ALIQUOT | 1 | | | 2 |
| 51 | VIEW | | 74 | 150K| | 455 |
| 52 | WINDOW SORT | | 74 | 150K| 408K| 455 |
| 53 | VIEW | | 74 | 150K| | 428 |
| 54 | SORT UNIQUE | | 74 | 740 | | 428 |
| 55 | WINDOW SORT | | 74 | 740 | | 428 |
|* 56 | TABLE ACCESS FULL | U_FINALRESULT_USER | 74 | 740 | | 425 |
| 57 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 58 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 59 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
| 60 | INDEX UNIQUE SCAN | PK_OPERATOR_GROUP | 1 | 4 | | |
Predicate Information (identified by operation id):
2 - filter("FR"."MAXLEVEL"=LEVEL)
3 - filter("FR"."RANK"=1)
4 - filter("FR"."RANK"=1)
7 - access("SYS_ALIAS_4"."SDG_ID"="FR"."U_SDG_ID" AND "SYS_ALIAS_1"."TEST_TEMPLATE_ID"="FR"."U_T
EST_TEMPLATE_ID")
13 - access("SYS_ALIAS_4"."SDG_ID"=TO_NUMBER(:Z))
15 - access("SYS_ALIAS_4"."SDG_ID"="SDG_USER"."SDG_ID")
17 - access("SYS_ALIAS_4"."SDG_ID"="SYS_ALIAS_3"."SDG_ID")
18 - filter("SYS_ALIAS_2"."STATUS"='V' OR "SYS_ALIAS_2"."STATUS"='P' OR "SYS_ALIAS_2"."STATUS"='C')
19 - access("SYS_ALIAS_2"."SAMPLE_ID"="SYS_ALIAS_3"."SAMPLE_ID")
20 - filter("SYS_ALIAS_1"."DESCRIPTION" IS NOT NULL)
21 - access("SYS_ALIAS_1"."ALIQUOT_ID"="SYS_ALIAS_2"."ALIQUOT_ID")
27 - filter(NVL("U_FINALRESULT_USER"."U_OVERRULED_RESULT","U_FINALRESULT_USER"."U_CALCULATED_RESU
LT")<>'X' AND "U_FINALRESULT_USER"."U_SDG_ID"=TO_NUMBER(:Z))
36 - access("SYS_ALIAS_4"."SDG_ID"="FR"."U_SDG_ID" AND "SYS_ALIAS_1"."TEST_TEMPLATE_ID"="FR"."U_T
EST_TEMPLATE_ID")
42 - access("SYS_ALIAS_4"."SDG_ID"=TO_NUMBER(:Z))
44 - access("SYS_ALIAS_4"."SDG_ID"="SDG_USER"."SDG_ID")
46 - access("SYS_ALIAS_4"."SDG_ID"="SYS_ALIAS_3"."SDG_ID")
47 - filter("SYS_ALIAS_2"."STATUS"='V' OR "SYS_ALIAS_2"."STATUS"='P' OR "SYS_ALIAS_2"."STATUS"='C')
48 - access("SYS_ALIAS_2"."SAMPLE_ID"="SYS_ALIAS_3"."SAMPLE_ID")
49 - filter("SYS_ALIAS_1"."DESCRIPTION" IS NOT NULL)
50 - access("SYS_ALIAS_1"."ALIQUOT_ID"="SYS_ALIAS_2"."ALIQUOT_ID")
56 - filter(NVL("U_FINALRESULT_USER"."U_OVERRULED_RESULT","U_FINALRESULT_USER"."U_CALCULATED_RESU
LT")<>'X' AND "U_FINALRESULT_USER"."U_SDG_ID"=TO_NUMBER(:Z))
Note: cpu costing is off
Tkprof
TKPROF: Release 9.2.0.1.0 - Production on Mon Jul 16 11:28:18 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Trace file: d:\oracle\admin\nautdev\udump\nautdev_ora_13144.trc
Sort options: default
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
alter session set sql_trace true
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 1 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
select VALUE
from
nls_session_parameters where PARAMETER='NLS_NUMERIC_CHARACTERS'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 1
total 3 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 FIXED TABLE FULL X$NLS_PARAMETERS
select VALUE
from
nls_session_parameters where PARAMETER='NLS_DATE_FORMAT'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 1
total 3 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 FIXED TABLE FULL X$NLS_PARAMETERS
select VALUE
from
nls_session_parameters where PARAMETER='NLS_CURRENCY'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 1
total 3 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 FIXED TABLE FULL X$NLS_PARAMETERS
select to_char(9,'9C')
from
dual
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 3 0 1
total 3 0.00 0.00 0 3 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 TABLE ACCESS FULL DUAL
SELECT distinct sd.u_bas_stockseed_code,
sd.u_bas_storage_code,
sd.description as test,
case when fr.resultcount < 8
then null
else case when fr.resultdistinct > 1
then 'spl'
else fr.resultfinal
end
end as result,
case when level >=2 and fr.resultcount > 7
then substr(sys_connect_by_path(valcount,','),2)
end as spl
FROM
(
SELECT /*+ ORDERED */ sd.sdg_id,
sa.sample_id,
t.test_template_id,
sdu.u_bas_stockseed_code,
sdu.u_bas_storage_code,
t.description
FROM lims_sys.sdg sd, lims_sys.sdg_user sdu, lims_sys.sample sa, lims_sys.aliquot a,lims_sys.test t
WHERE sd.sdg_id = sdu.sdg_id
AND sd.sdg_id = sa.sdg_id
AND a.sample_id = sa.sample_id
AND t.aliquot_id = a.aliquot_id
AND a.status IN ('V','P','C')
AND t.description is not null
AND sd.sdg_id IN (:SDGID)
) sd,
(
SELECT u_sdg_id,
u_test_template_id,
resultfinal,
valcount,
resultcount,
resultdistinct,
row_number() over (partition by u_sdg_id, u_test_template_id order by resultfinal) as rank,
count(*) over (partition by u_sdg_id, u_test_template_id) AS MaxLevel
FROM
(
SELECT distinct u_sdg_id, u_test_template_id,
nvl( u_overruled_result, u_calculated_result) as resultfinal,
to_char(count(*) over (partition by u_sdg_id, u_test_template_id,nvl(u_overruled_result, u_calculated_result))) || 'x' || nvl(u_overruled_result, u_calculated_result) as valcount,
count(nvl(u_overruled_result, u_calculated_result)) over (partition by u_sdg_id, u_test_template_id) as resultcount,
count(distinct nvl(u_overruled_result, u_calculated_result)) over (partition by u_sdg_id, u_test_template_id) as resultdistinct
FROM lims_sys.u_finalresult_user
WHERE nvl(u_overruled_result,u_calculated_result) != 'X'
AND u_sdg_id IN (:SDGID)
)
) fr
WHERE sd.sdg_id = fr.u_sdg_id (+)
AND sd.test_template_id = fr.u_test_template_id (+)
AND level = maxlevel
start with rank = 1 connect by
prior fr.u_sdg_id = fr.u_sdg_id
and prior fr.u_test_template_id = fr.u_test_template_id
and prior rank = rank - 1
call count cpu elapsed disk query current rows
Parse 1 0.06 0.64 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 2.26 2.79 2180 29539 0 38
total 3 2.32 3.44 2180 29539 0 38
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
38 SORT UNIQUE
25381 FILTER
27648 CONNECT BY WITH FILTERING
455 FILTER
610 COUNT
610 FILTER
610 HASH JOIN
455 NESTED LOOPS
459 NESTED LOOPS
12 NESTED LOOPS
1 NESTED LOOPS
1 TABLE ACCESS BY INDEX ROWID SDG
1 INDEX UNIQUE SCAN PK_SDG (object id 54343)
1 TABLE ACCESS BY INDEX ROWID SDG_USER
1 INDEX UNIQUE SCAN PK_SDG_USER (object id 54368)
12 TABLE ACCESS BY INDEX ROWID SAMPLE
12 INDEX RANGE SCAN FK_SAMPLE_SDG (object id 54262)
459 TABLE ACCESS BY INDEX ROWID ALIQUOT
460 INDEX RANGE SCAN FK_ALIQUOT_SAMPLE (object id 53620)
455 TABLE ACCESS BY INDEX ROWID TEST
459 INDEX RANGE SCAN FK_TEST_ALIQUOT (object id 54493)
51 VIEW
51 WINDOW SORT
51 VIEW
51 SORT UNIQUE
251 WINDOW SORT
251 TABLE ACCESS FULL U_FINALRESULT_USER
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
1849 HASH JOIN
610 CONNECT BY PUMP
2440 COUNT
2440 FILTER
2440 HASH JOIN
1820 NESTED LOOPS
1836 NESTED LOOPS
48 NESTED LOOPS
4 NESTED LOOPS
4 TABLE ACCESS BY INDEX ROWID SDG
4 INDEX UNIQUE SCAN PK_SDG (object id 54343)
4 TABLE ACCESS BY INDEX ROWID SDG_USER
4 INDEX UNIQUE SCAN PK_SDG_USER (object id 54368)
48 TABLE ACCESS BY INDEX ROWID SAMPLE
48 INDEX RANGE SCAN FK_SAMPLE_SDG (object id 54262)
1836 TABLE ACCESS BY INDEX ROWID ALIQUOT
1840 INDEX RANGE SCAN FK_ALIQUOT_SAMPLE (object id 53620)
1820 TABLE ACCESS BY INDEX ROWID TEST
1836 INDEX RANGE SCAN FK_TEST_ALIQUOT (object id 54493)
204 VIEW
204 WINDOW SORT
204 VIEW
204 SORT UNIQUE
1004 WINDOW SORT
1004 TABLE ACCESS FULL U_FINALRESULT_USER
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
0 INDEX UNIQUE SCAN PK_OPERATOR_GROUP (object id 54041)
select 'x'
from
dual
call count cpu elapsed disk query current rows
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 6 0 2
total 6 0.00 0.00 0 6 0 2
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
Rows Row Source Operation
1 TABLE ACCESS FULL DUAL
begin :id := sys.dbms_transaction.local_transaction_id; end;
call count cpu elapsed disk query current rows
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 2
Fetch 0 0.00 0.00 0 0 0 0
total 4 0.00 0.00 0 0 0 2
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 44
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 9 0.06 0.65 0 0 0 0
Execute 10 0.00 0.00 0 0 0 2
Fetch 7 2.26 2.79 2180 29548 0 44
total 26 2.32 3.45 2180 29548 0 46
Misses in library cache during parse: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 61 0.00 0.00 0 0 0 0
Execute 61 0.00 0.00 0 0 0 0
Fetch 61 0.00 0.00 0 124 0 61
total 183 0.00 0.00 0 124 0 61
Misses in library cache during parse: 0
10 user SQL statements in session.
61 internal SQL statements in session.
71 SQL statements in session.
Trace file: d:\oracle\admin\nautdev\udump\nautdev_ora_13144.trc
Trace file compatibility: 9.00.01
Sort options: default
1 session in tracefile.
10 user SQL statements in trace file.
61 internal SQL statements in trace file.
71 SQL statements in trace file.
10 unique SQL statements in trace file.
701 lines in trace file.
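For readers following the pivot logic: both versions of the query build, for each (sdg_id, test_template_id) group, a comma-separated list of "NxRESULT" entries ordered by frequency, using ROW_NUMBER plus a CONNECT BY walk with SYS_CONNECT_BY_PATH. The same aggregation can be sketched in plain Java (illustrative only; the class and method names are made up, not part of the application):

```java
import java.util.*;
import java.util.stream.*;

public class ValCountPivot {
    // Count each distinct result within one (sdg_id, test_template_id) group,
    // order by count descending then by value, and join as "NxVALUE" entries --
    // the same string the query assembles via SYS_CONNECT_BY_PATH(valcount, ',').
    static String pivot(List<String> results) {
        Map<String, Long> counts = results.stream()
                .collect(Collectors.groupingBy(r -> r, Collectors.counting()));
        return counts.entrySet().stream()
                .sorted(Comparator.comparingLong((Map.Entry<String, Long> e) -> -e.getValue())
                        .thenComparing(Map.Entry::getKey))
                .map(e -> e.getValue() + "x" + e.getKey())
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        // five results for one test: A three times, B once, C once
        System.out.println(pivot(Arrays.asList("A", "A", "B", "A", "C")));
        // prints: 3xA,1xB,1xC
    }
}
```

The CONNECT BY recursion in the SQL plays the role of this loop: each level appends the next-ranked value, and the `level = maxlevel` filter keeps only the fully concatenated row.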
-
Help with Performance of jQuery in IE8
Hi,
I'm new to jQuery and not sure if this is caused by jQuery or by CSS. I have this demo that runs well everywhere except IE8.
Any help appreciated.
http://www.lab2x.com/testdrive/hc/20-mapa/demo3/
Today I've learned that using CSS display is better than using opacity for IE, but I'm not sure what else I should do.
In IE9, hit F12 (developer tools). You can test your page in various modes.
I'm on DSL & your page doesn't run slower or break down for me in either FF 3.6, IE7, IE8 or IE9.
I was told that opacity could be a problem in the CSS, so I changed it and
used display:none and display:block instead. I'm not sure what else could be
affecting it or what I should do.
I don't see how any of this is relevant to jQuery being slow or breaking down.
Perhaps the few people experiencing problems are on slow internet connections. I don't know...
Nancy O.
Alt-Web Design & Publishing
Web | Graphics | Print | Media Specialists
http://alt-web.com/
http://twitter.com/altweb -
Need help with Recurring Kernel Panic- [ WITH LOGS ]
For the past two weeks or so I have been getting a lot of kernel panics. Most of them happen when I select Restart or Shut Down. There have also been cases of Final Cut Pro crashing regularly, but I think the kernel panics are far more important to diagnose. I have not changed any hardware for at least a year. The machine has 4.5 GB of RAM and 40% of the system drive free. I have run Disk Utility to repair the disk as well as the permissions, all coming back with no errors. The ONLY thing that I can think of that changed is that I purchased and ran DiskWarrior 4. It seemed to help tremendously with performance; however, it is the only thing that I can think of that has changed. I am also suffering from some performance loss as well. I get at least one kernel panic a day. I have no idea what the panics mean, but they all look similar. I have posted the two latest ones below. Any help as to what might be causing this would be much appreciated. Thanks in advance.
Thu Oct 25 22:00:24 2007
Unresolved kernel trap(cpu 0): 0x400 - Inst access DAR=0x0000000001960014 PC=0x000000005AD15550
Latest crash info for cpu 0:
Exception state (sv=0x5E98B500)
PC=0x5AD15550; MSR=0x10009030; DAR=0x01960014; DSISR=0x10000000; LR=0x00D4B1CC; R1=0x4527B7F0; XCP=0x00000010 (0x400 - Inst access)
Backtrace:
0x00D4B1B4 0x00D4CF48 0x003015AC 0x00301420 0x002BA498 0x002BA4EC
0x0008A2FC 0x0008A36C 0x0002922C 0x0001ABBC 0x000295D0 0x00048F3C 0x0001DA04 0x0001FDB4
0x00021604 0x00038180 0x002667B0 0x0027322C 0x000252AC 0x000ABDB8
Kernel loadable modules in backtrace (with dependencies):
com.apple.iokit.IOATABlockStorage(1.4.3)@0xd49000
dependency: com.apple.iokit.IOStorageFamily(1.5)@0x442000
dependency: com.apple.iokit.IOATAFamily(1.6.0f2)@0x803000
Proceeding back via exception chain:
Exception state (sv=0x5E98B500)
previously dumped as "Latest" state. skipping...
Exception state (sv=0x5D9F5780)
PC=0x9000B348; MSR=0x0200F030; DAR=0xE1523000; DSISR=0x42000000; LR=0x9000B29C; R1=0xBFFFEB40; XCP=0x00000030 (0xC00 - System call)
Kernel version:
Darwin Kernel Version 8.10.0: Wed May 23 16:50:59 PDT 2007; root:xnu-792.21.3~1/RELEASE_PPC
panic(cpu 0 caller 0xFFFF0004): 0x400 - Inst access
Latest stack backtrace for cpu 0:
Backtrace:
0x000952D8 0x000957F0 0x00026898 0x000A8004 0x000AB980
Proceeding back via exception chain:
Exception state (sv=0x5E98B500)
PC=0x5AD15550; MSR=0x10009030; DAR=0x01960014; DSISR=0x10000000; LR=0x00D4B1CC; R1=0x4527B7F0; XCP=0x00000010 (0x400 - Inst access)
Backtrace:
0x00D4B1B4 0x00D4CF48 0x003015AC 0x00301420 0x002BA498 0x002BA4EC
0x0008A2FC 0x0008A36C 0x0002922C 0x0001ABBC 0x000295D0 0x00048F3C 0x0001DA04 0x0001FDB4
0x00021604 0x00038180 0x002667B0 0x0027322C 0x000252AC 0x000ABDB8
Kernel loadable modules in backtrace (with dependencies):
com.apple.iokit.IOATABlockStorage(1.4.3)@0xd49000
dependency: com.apple.iokit.IOStorageFamily(1.5)@0x442000
dependency: com.apple.iokit.IOATAFamily(1.6.0f2)@0x803000
Exception state (sv=0x5D9F5780)
PC=0x9000B348; MSR=0x0200F030; DAR=0xE1523000; DSISR=0x42000000; LR=0x9000B29C; R1=0xBFFFEB40; XCP=0x00000030 (0xC00 - System call)
Kernel version:
Darwin Kerne`
Sun Oct 28 10:35:10 2007
Unresolved kernel trap(cpu 0): 0x400 - Inst access DAR=0x00000000E0F65000 PC=0x0000000000000000
Latest crash info for cpu 0:
Exception state (sv=0x5E155C80)
PC=0x00000000; MSR=0x40009030; DAR=0xE0F65000; DSISR=0x40000000; LR=0x00D4B1CC; R1=0x452F3890; XCP=0x00000010 (0x400 - Inst access)
Backtrace:
0x00D4B1B4 0x00D4CF48 0x003015AC 0x00301420 0x002BA498 0x002BA4EC
0x0008A2FC 0x0008A36C 0x0002922C 0x0001ABBC 0x000295D0 0x00048F3C 0x0001DA04 0x0001FDB4
0x00021604 0x00038180 0x002667B0 0x0026661C 0x002AB548 0x000ABB30 0x00000000
backtrace terminated - frame not mapped or invalid: 0xBFFFDC50
Kernel loadable modules in backtrace (with dependencies):
com.apple.iokit.IOATABlockStorage(1.4.3)@0xd49000
dependency: com.apple.iokit.IOStorageFamily(1.5)@0x442000
dependency: com.apple.iokit.IOATAFamily(1.6.0f2)@0x803000
Proceeding back via exception chain:
Exception state (sv=0x5E155C80)
previously dumped as "Latest" state. skipping...
Exception state (sv=0x5D502A00)
PC=0x90014E0C; MSR=0x0000D030; DAR=0xE0F65000; DSISR=0x42000000; LR=0x90014C68; R1=0xBFFFDC50; XCP=0x00000030 (0xC00 - System call)
Kernel version:
Darwin Kernel Version 8.10.0: Wed May 23 16:50:59 PDT 2007; root:xnu-792.21.3~1/RELEASE_PPC
panic(cpu 0 caller 0xFFFF0004): 0x400 - Inst access
Latest stack backtrace for cpu 0:
Backtrace:
0x000952D8 0x000957F0 0x00026898 0x000A8004 0x000AB980
Proceeding back via exception chain:
Exception state (sv=0x5E155C80)
PC=0x00000000; MSR=0x40009030; DAR=0xE0F65000; DSISR=0x40000000; LR=0x00D4B1CC; R1=0x452F3890; XCP=0x00000010 (0x400 - Inst access)
Backtrace:
0x00D4B1B4 0x00D4CF48 0x003015AC 0x00301420 0x002BA498 0x002BA4EC
0x0008A2FC 0x0008A36C 0x0002922C 0x0001ABBC 0x000295D0 0x00048F3C 0x0001DA04 0x0001FDB4
0x00021604 0x00038180 0x002667B0 0x0026661C 0x002AB548 0x000ABB30 0x00000000
backtrace terminated - frame not mapped or invalid: 0xBFFFDC50
Kernel loadable modules in backtrace (with dependencies):
com.apple.iokit.IOATABlockStorage(1.4.3)@0xd49000
dependency: com.apple.iokit.IOStorageFamily(1.5)@0x442000
dependency: com.apple.iokit.IOATAFamily(1.6.0f2)@0x803000
Exception state (sv=0x5D502A00)
Hello! I would be inclined to think you probably should reinstall. The fact that you said DiskWarrior helped with performance indicates it found problems. My experience with OS X has been that mine has run pretty much perfectly for over three years, even though I tinker a lot, try out apps, and have several dozen apps installed along with several haxies. It runs just as fast as it did when I first installed it as a fresh install. DiskWarrior helps fix directory errors, but it's not designed to do anything else. You could have bad blocks on the hard drive, which could be causing the problem. I would erase using the zero-data option and reinstall fresh. Ideally you would do this on a second partition or on another hard drive. If you only have one hard drive, now is a good time to invest in another one so you can have a bootable backup for problem solving. Tom
-
Need help with Berkeley XML DB Performance
We need help with maximizing performance of our use of Berkeley DB XML. I am filling in most of the 29-part questionnaire as listed by Oracle's BDB team.
Berkeley DB XML Performance Questionnaire
1. Describe the Performance area that you are measuring? What is the
current performance? What are your performance goals you hope to
achieve?
We are measuring the performance while loading a document during
web application startup. It is currently taking 10-12 seconds when
only one user is on the system. We are trying to do some testing to
get the load time when several users are on the system.
We would like the load time to be 5 seconds or less.
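To make the 10-12 second figure reproducible outside the application server, it may help to time the load step in isolation. A minimal harness, assuming nothing about the dbxml API (the Runnable passed in is a placeholder for the real load code, which is not shown here):

```java
public class LoadTimer {
    // Times one run of the supplied action in milliseconds. The action stands
    // in for the actual load step (fetch the XML from Oracle, put it into the
    // Berkeley DB XML container) -- that application code is not shown here.
    static long timeMillis(Runnable load) {
        long start = System.nanoTime();
        load.run();
        return (System.nanoTime() - start) / 1_000_000L;
    }

    public static void main(String[] args) {
        long ms = timeMillis(() -> {
            // placeholder: the application's document-load call goes here
        });
        System.out.println("document load took " + ms + " ms");
    }
}
```

Running the load this way several times (warm vs. cold cache) would show how much of the 10-12 seconds is dbxml work versus app-server startup overhead.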
2. What Berkeley DB XML Version? Any optional configuration flags
specified? Are you running with any special patches? Please specify?
dbxml 2.4.13. No special patches.
3. What Berkeley DB Version? Any optional configuration flags
specified? Are you running with any special patches? Please Specify.
bdb 4.6.21. No special patches.
4. Processor name, speed and chipset?
Intel Xeon CPU 5150 2.66GHz
5. Operating System and Version?
Red Hat Enterprise Linux Release 4 Update 6
6. Disk Drive Type and speed?
Don't have that information
7. File System Type? (such as EXT2, NTFS, Reiser)
EXT3
8. Physical Memory Available?
4GB
9. Are you using Replication (HA) with Berkeley DB XML? If so, please
describe the network you are using, and the number of replicas.
No
10. Are you using a Remote Filesystem (NFS) ? If so, for which
Berkeley DB XML/DB files?
No
11. What type of mutexes do you have configured? Did you specify
--with-mutex=? Specify what you find in your config.log; search
for db_cv_mutex.
None. We did not specify --with-mutex during bdb compilation.
12. Which API are you using (C++, Java, Perl, PHP, Python, other) ?
Which compiler and version?
Java 1.5
13. If you are using an Application Server or Web Server, please
provide the name and version?
Oracle Application Server 10.1.3.4.0
14. Please provide your exact Environment Configuration Flags (include
anything specified in you DB_CONFIG file)
Default.
15. Please provide your Container Configuration Flags?
final EnvironmentConfig envConf = new EnvironmentConfig();
envConf.setAllowCreate(true); // If the environment does not
// exist, create it.
envConf.setInitializeCache(true); // Turn on the shared memory
// region.
envConf.setInitializeLocking(true); // Turn on the locking subsystem.
envConf.setInitializeLogging(true); // Turn on the logging subsystem.
envConf.setTransactional(true); // Turn on the transactional
// subsystem.
envConf.setLockDetectMode(LockDetectMode.MINWRITE);
envConf.setThreaded(true);
envConf.setErrorStream(System.err);
envConf.setCacheSize(1024*1024*64);
envConf.setMaxLockers(2000);
envConf.setMaxLocks(2000);
envConf.setMaxLockObjects(2000);
envConf.setTxnMaxActive(200);
envConf.setTxnWriteNoSync(true);
envConf.setMaxMutexes(40000);
16. How many XML Containers do you have? For each one please specify:
One.
1. The Container Configuration Flags
XmlContainerConfig xmlContainerConfig = new XmlContainerConfig();
xmlContainerConfig.setTransactional(true);
xmlContainerConfig.setIndexNodes(true);
xmlContainerConfig.setReadUncommitted(true);
2. How many documents?
Every time the user logs in, the current XML document is loaded from
an Oracle database table and put into the Berkeley XML DB.
The documents get deleted from XML DB when the Oracle application
server container is stopped.
The number of documents starts at zero and grows with every login.
3. What type (node or wholedoc)?
Node
4. Please indicate the minimum, maximum and average size of
documents?
The minimum is about 2 MB and the maximum could be 20 MB. The average
is mostly about 5 MB.
5. Are you using document data? If so please describe how?
We are using document data only to save changes made
to the application data in a web application. The final save goes
to the relational database. Berkeley XML DB is just used to store
temporary data since going to the relational database for each change
will cause severe performance issues.
17. Please describe the shape of one of your typical documents? Please
do this by sending us a skeleton XML document.
Due to the sensitive nature of the data, I can provide XML schema instead.
18. What is the rate of document insertion/update required or
expected? Are you doing partial node updates (via XmlModify) or
replacing the document?
The document is inserted during user login. Any change made to the
application data grid or other data components is saved in Berkeley DB,
and we also have an automatic save every two minutes. The final save
from the application goes to the relational database.
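For reference, the login-time insert and the periodic re-save described above might look roughly like this with the BDB XML Java API. This is a sketch, not the poster's actual code: the class and method names are hypothetical, and the exact putDocument/updateDocument overloads should be checked against the javadoc for your release.

```java
import com.sleepycat.dbxml.XmlContainer;
import com.sleepycat.dbxml.XmlDocument;
import com.sleepycat.dbxml.XmlManager;
import com.sleepycat.dbxml.XmlTransaction;
import com.sleepycat.dbxml.XmlUpdateContext;

public class SessionStore {
    // At login: put the XML fetched from Oracle into the container.
    static void insertOnLogin(XmlManager mgr, XmlContainer cont,
                              String docName, String xmlFromOracle)
            throws Exception {
        XmlTransaction txn = mgr.createTransaction();
        try {
            XmlUpdateContext uc = mgr.createUpdateContext();
            cont.putDocument(txn, docName, xmlFromOracle, uc, null);
            txn.commit();
        } catch (Exception e) {
            txn.abort();
            throw e;
        }
    }

    // On each change / two-minute auto-save: replace the whole document.
    static void saveChanges(XmlManager mgr, XmlContainer cont,
                            String docName, String newXml) throws Exception {
        XmlTransaction txn = mgr.createTransaction();
        try {
            XmlDocument doc = cont.getDocument(txn, docName);
            doc.setContent(newXml);
            cont.updateDocument(txn, doc, mgr.createUpdateContext());
            txn.commit();
        } catch (Exception e) {
            txn.abort();
            throw e;
        }
    }
}
```

Replacing the whole document on every save is the simplest approach, but for 2-20 MB documents a partial update via XmlModify may be substantially cheaper.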
19. What is the query rate required/expected?
Users will not be entering data rapidly; there will be a lot of think
time before they enter or modify data in the web application. This is
a pilot project, but when we go live we expect 25 concurrent users.
20. XQuery -- supply some sample queries
1. Please provide the Query Plan
2. Are you using DBXML_INDEX_NODES?
Yes.
3. Display the indices you have defined for the specific query.
XmlIndexSpecification spec = container.getIndexSpecification();
// ids
spec.addIndex("", "id", XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
spec.addIndex("", "idref", XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
// index to cover AttributeValue/Description
spec.addIndex("", "Description", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ELEMENT | XmlIndexSpecification.KEY_SUBSTRING, XmlValue.STRING);
// cover AttributeValue/@value
spec.addIndex("", "value", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
// item attribute values
spec.addIndex("", "type", XmlIndexSpecification.PATH_EDGE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
// default index
spec.addDefaultIndex(XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ELEMENT | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
spec.addDefaultIndex(XmlIndexSpecification.PATH_NODE | XmlIndexSpecification.NODE_ATTRIBUTE | XmlIndexSpecification.KEY_EQUALITY, XmlValue.STRING);
// save the spec to the container
XmlUpdateContext uc = xmlManager.createUpdateContext();
container.setIndexSpecification(spec, uc);
4. If this is a large query, please consider sending a smaller
query (and query plan) that demonstrates the problem.
21. Are you running with Transactions? If so please provide any
transactions flags you specify with any API calls.
Yes. READ_UNCOMMITTED in some transactions and READ_COMMITTED in others.
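A sketch of how per-transaction isolation can be selected through the underlying com.sleepycat.db API (assuming the Environment handle is available; the helper name is hypothetical and the overloads are worth verifying against your release's javadoc):

```java
import com.sleepycat.db.Environment;
import com.sleepycat.db.Transaction;
import com.sleepycat.db.TransactionConfig;
import com.sleepycat.dbxml.XmlManager;
import com.sleepycat.dbxml.XmlTransaction;

public class TxnHelper {
    // Begin an XmlTransaction with degree-2 isolation (read committed).
    static XmlTransaction beginReadCommitted(Environment env, XmlManager mgr)
            throws Exception {
        TransactionConfig tc = new TransactionConfig();
        tc.setReadCommitted(true);  // or tc.setReadUncommitted(true) for degree 1
        Transaction dbTxn = env.beginTransaction(null, tc);
        return mgr.createTransaction(dbTxn);
    }
}
```

Read-committed avoids the dirty reads that READ_UNCOMMITTED allows, at the cost of holding read locks slightly longer; with 25 concurrent users that trade-off is worth measuring.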
22. If your application is transactional, are your log files stored on
the same disk as your containers/databases?
Yes.
23. Do you use AUTO_COMMIT?
No.
24. Please list any non-transactional operations performed?
None.
25. How many threads of control are running? How many threads in read
only mode? How many threads are updating?
We use Berkeley DB XML within the context of a Struts web application.
Each user logged into the web application runs a BDB transaction
within the context of a Struts action thread.
26. Please include a paragraph describing the performance measurements
you have made. Please specifically list any Berkeley DB operations
where the performance is currently insufficient.
We are clocking 10-12 seconds to load a document from BDB when
five users are on the system.
getContainer().getDocument(documentName);
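For documents in the 2-20 MB range, one thing worth trying is lazy document materialization, so content is pulled as needed rather than fully at getDocument() time. A hypothetical sketch (setLazyDocs is in the 2.4-era Java API, but verify the getDocument overload against your release):

```java
import com.sleepycat.dbxml.XmlContainer;
import com.sleepycat.dbxml.XmlDocument;
import com.sleepycat.dbxml.XmlDocumentConfig;

public class LazyLoad {
    static XmlDocument fetch(XmlContainer cont, String name) throws Exception {
        XmlDocumentConfig cfg = new XmlDocumentConfig();
        cfg.setLazyDocs(true);              // defer content materialization
        return cont.getDocument(name, cfg); // can return much faster for big docs
    }
}
```

Whether this helps depends on what the application does with the document afterward: serializing the whole document still pays the full cost, but queries that touch only part of it should benefit.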
27. What performance level do you hope to achieve?
We would like to get less than 5 seconds when 25 users are on the system.
28. Please send us the output of the following db_stat utility commands
after your application has been running under "normal" load for some
period of time:
% db_stat -h database environment -c
% db_stat -h database environment -l
% db_stat -h database environment -m
% db_stat -h database environment -r
% db_stat -h database environment -t
(These commands require that the db_stat utility access a shared database
environment. If your application has a private environment, please
remove the DB_PRIVATE flag used when the environment is created, so
you can obtain these measurements. If removing the DB_PRIVATE flag
is not possible, let us know and we can discuss alternatives with
you.)
If your application has periods of "good" and "bad" performance,
please run the above list of commands several times, during both
good and bad periods, and additionally specify the -Z flags (so
the output of each command isn't cumulative).
When possible, please run basic system performance reporting tools
during the time you are measuring the application's performance.
For example, on UNIX systems, the vmstat and iostat utilities are
good choices.
Will give this information soon.
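A minimal sampling loop along the lines requested above, for collecting non-cumulative statistics during both good and bad periods (the environment path and intervals are placeholders to adjust):

```shell
#!/bin/sh
# Sample cache (-m) and lock (-c) statistics every minute, resetting
# counters after each print (-Z), alongside basic system activity.
ENV_DIR=/path/to/dbxml/environment   # placeholder: your environment home
i=0
while [ $i -lt 10 ]; do
    date
    db_stat -h "$ENV_DIR" -m -Z
    db_stat -h "$ENV_DIR" -c -Z
    vmstat 1 5
    i=$((i + 1))
    sleep 60
done
```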
29. Are there any other significant applications running on this
system? Are you using Berkeley DB outside of Berkeley DB XML?
Please describe the application?
No to the first two questions.
The web application is an online review of test questions. The users
login and then review the items one by one. The relational database
holds the data in xml. During application load, the application
retrieves the xml and then saves it to bdb. While the user
is making changes to the data in the application, it writes those
changes to bdb. Finally when the user hits the SAVE button, the data
gets saved to the relational database. We also have an automatic save
every two minutes, which takes the BDB XML data and writes it to the
relational database.
Thanks,
Madhav
[email protected]
Could it be that you simply do not have indexes set up to support your query? If so, you could do some basic testing using the dbxml shell:
milu@colinux:~/xpg > dbxml -h ~/dbenv
Joined existing environment
dbxml> setverbose 7 2
dbxml> open tv.dbxml
dbxml> listIndexes
dbxml> query { collection()[//@date-tip]/*[@chID = ('ard','zdf')] (: example :) }
dbxml> queryplan { collection()[//@date-tip]/*[@chID = ('ard','zdf')] (: example :) }
Verbosity will make the engine display some (rather cryptic) information on index usage. I can't remember where the output is explained; my feeling is that "V(...)" means the index is being used (which is good), but that observation may not be accurate. Note that some details in the setVerbose command could differ, as I'm using 2.4.16 while you're using 2.4.13.
Also, take a look at the query plan. You can post it here and some people will be able to diagnose it.
Michael Ludwig -
What init parameters will help with the OBIEE performance..?
Hi All
By any chance does any one have info on What init parameters will help with the OBIEE performance..?
Thanks
fast=true ;-)
What performance is causing you a problem? Data retrieval, or general UI navigation? It's a massive area to cover. Can you be more specific? -
Need help with premiere pro cs6 having performance issues please help
Welcome to the forum.
First thing that I would do would be to look at this Adobe KB Article to see if it helps.
Next, I would try the tips in this ARTICLE.
If that does not help, a Repair Install would definitely be in order.
Good luck,
Hunt