Performance problem while aggregating
These are my dimensions and cube. I wrote a customized aggregation map, and I am aggregating all dimensions (every level except the last, unique one, i.e. the PK level) plus the cube.
My system configuration is good,
but aggregation deployment (calculation) is really very slow compared to other vendors' products:
it took 3 hours to aggregate all dimensions (all levels except the last) and the cube, even though I had reduced the cube to only 1,000 rows for this test and deleted all the other rows.
Dimension          Number of rows
dim_product        156,0
t_time             730
dim_promotion      186,4
dim_store          25
dim_distributor    102,81

Cube               Number of rows
Cube_SalesFact     300,000
Please solve my problem, because if aggregation takes this much time then I must say the performance of the software is not as good as it should be,
and I must suggest that Oracle do something about this serious problem.
Thanks
A well-wisher of Oracle Corporation
BEGIN
cwm2_olap_manager.set_echo_on;
CWM2_OLAP_MANAGER.BEGIN_LOG('D:\', 'AggMap_CubeSalesfact.log');
DBMS_AW.EXECUTE('aw attach RTTARGET.AW_WH_SALES RW' );
BEGIN
DBMS_AWM.DELETE_AWDIMLOAD_SPEC('DIM_DISTRIBUTOR', 'RTTARGET', 'DIM_DISTRIBUTOR');
DBMS_AWM.DELETE_AWDIMLOAD_SPEC('DIM_PRODUCT', 'RTTARGET', 'DIM_PRODUCT');
DBMS_AWM.DELETE_AWDIMLOAD_SPEC('DIM_PROMOTION', 'RTTARGET', 'DIM_PROMOTION');
DBMS_AWM.DELETE_AWDIMLOAD_SPEC('DIM_STORE', 'RTTARGET', 'DIM_STORE');
DBMS_AWM.DELETE_AWDIMLOAD_SPEC('T_TIME', 'RTTARGET', 'T_TIME');
--Deleting AW_CubeLoad_Spec
DBMS_AWM.DELETE_AWCUBELOAD_SPEC('CUBESALESFACT', 'RTTARGET', 'CUBE_SALESFACT');
DBMS_AW.EXECUTE('upd RTTARGET.AW_WH_SALES; commit');
Commit;
--Deleting AggMap
DBMS_AWM.Delete_AWCUBEAGG_SPEC('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT');
DBMS_AW.EXECUTE('upd RTTARGET.AW_WH_SALES; commit');
Commit;
EXCEPTION WHEN OTHERS THEN NULL; -- ignore errors when the specs do not exist yet
END;
--Creating Agg Map for cube cube_salesfact
-- DBMS_AWM.CREATE_AWCUBEAGG_SPEC(AggMap_Name , USER , AW_NAME, CUBE_NAME);
DBMS_AWM.CREATE_AWCUBEAGG_SPEC('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT');
--Specifying aggregation for measures of the cube
DBMS_AWM.ADD_AWCUBEAGG_SPEC_MEASURE('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'STORECOST');
DBMS_AWM.ADD_AWCUBEAGG_SPEC_MEASURE('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'STORESALES');
DBMS_AWM.ADD_AWCUBEAGG_SPEC_MEASURE('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'UNITSALES');
--Specifying aggregation for the different levels of the dimensions
DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_T_TIME', 'L_ALLYEARS');
DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_T_TIME', 'L_YEAR');
DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_T_TIME', 'L_QUARTER');
DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_T_TIME', 'L_MONTH');
DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_STORE', 'L_ALLCOUNTRIES');
DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_STORE', 'L_COUNTRY');
DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_STORE', 'L_PROVINCE');
DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_STORE', 'L_CITY');
DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PRODUCT', 'L_ALLPRODUCTS');
DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PRODUCT', 'L_BRANDCLASS');
DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PRODUCT', 'L_BRANDCATEGORY');
DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PRODUCT', 'L_BRAND');
DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_DISTRIBUTOR', 'L_ALLDIST');
DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_DISTRIBUTOR', 'L_DISTINCOME');
DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PROMOTION', 'L_ALLPROM');
DBMS_AWM.ADD_AWCUBEAGG_SPEC_LEVEL('AGG_CUBESALESFACT', 'RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'WH_DIM_PROMOTION', 'L_PROMOTIONMEDIA');
BEGIN
--************************ CODE **********************************
--aw_dim.sql
DBMS_AWM.CREATE_AWDIMLOAD_SPEC('DIM_DISTRIBUTOR', 'RTTARGET', 'DIM_DISTRIBUTOR', 'FULL_LOAD_ADDITIONS_ONLY');
DBMS_AWM.REFRESH_AWDIMENSION('RTTARGET', 'AW_WH_SALES', 'WH_DIM_DISTRIBUTOR', 'DIM_DISTRIBUTOR');
commit;
DBMS_AWM.CREATE_AWDIMLOAD_SPEC('DIM_PRODUCT', 'RTTARGET', 'DIM_PRODUCT', 'FULL_LOAD_ADDITIONS_ONLY');
DBMS_AWM.REFRESH_AWDIMENSION('RTTARGET', 'AW_WH_SALES', 'WH_DIM_PRODUCT', 'DIM_PRODUCT');
commit;
DBMS_AWM.CREATE_AWDIMLOAD_SPEC('DIM_PROMOTION', 'RTTARGET', 'DIM_PROMOTION', 'FULL_LOAD_ADDITIONS_ONLY');
DBMS_AWM.REFRESH_AWDIMENSION('RTTARGET', 'AW_WH_SALES', 'WH_DIM_PROMOTION', 'DIM_PROMOTION');
commit;
DBMS_AWM.CREATE_AWDIMLOAD_SPEC('DIM_STORE', 'RTTARGET', 'DIM_STORE', 'FULL_LOAD_ADDITIONS_ONLY');
DBMS_AWM.REFRESH_AWDIMENSION('RTTARGET', 'AW_WH_SALES', 'WH_DIM_STORE', 'DIM_STORE');
commit;
DBMS_AWM.CREATE_AWDIMLOAD_SPEC('T_TIME', 'RTTARGET', 'T_TIME', 'FULL_LOAD_ADDITIONS_ONLY');
DBMS_AWM.REFRESH_AWDIMENSION('RTTARGET', 'AW_WH_SALES', 'WH_T_TIME', 'T_TIME');
commit;
--aw_cube.sql
DBMS_AWM.CREATE_AWCUBELOAD_SPEC('CUBE_SALESFACT', 'RTTARGET', 'CUBE_SALESFACT', 'LOAD_DATA');
dbms_awm.add_awcubeload_spec_measure('CUBE_SALESFACT', 'RTTARGET', 'CUBE_SALESFACT', 'STORECOST', 'STORECOST', 'STORECOST');
dbms_awm.add_awcubeload_spec_measure('CUBE_SALESFACT', 'RTTARGET', 'CUBE_SALESFACT', 'STORESALES', 'STORESALES', 'STORESALES');
dbms_awm.add_awcubeload_spec_measure('CUBE_SALESFACT', 'RTTARGET', 'CUBE_SALESFACT', 'UNITSALES', 'UNITSALES', 'UNITSALES');
DBMS_AWM.REFRESH_AWCUBE('RTTARGET', 'AW_WH_SALES', 'WH_CUBE_SALESFACT', 'CUBE_SALESFACT');
EXCEPTION WHEN OTHERS THEN NULL; -- WARNING: silently swallows any load error
END;
-- Now build the cube. This may take some time on large cubes.
-- DBMS_AWM.aggregate_awcube(USER, AW_NAME, CUBE_NAME, aggspec);
DBMS_AWM.aggregate_awcube('RTTARGET','AW_WH_SALES', 'WH_CUBE_SALESFACT','AGG_CUBESALESFACT');
DBMS_AW.EXECUTE('upd RTTARGET.AW_WH_SALES; commit');
Commit;
CWM2_OLAP_METADATA_REFRESH.MR_REFRESH();
CWM2_OLAP_METADATA_REFRESH.MR_AC_REFRESH();
DBMS_AW.Execute('aw detach RTTARGET.AW_WH_Sales');
CWM2_OLAP_MANAGER.END_LOG;
cwm2_olap_manager.set_echo_off;
EXCEPTION WHEN OTHERS THEN NULL; -- NOTE: hides all errors
-- EXCEPTION WHEN OTHERS THEN RAISE; -- safer alternative for debugging
END;
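A first step in diagnosing the three-hour build is to time each phase of the script separately, so you can see whether the dimension loads, the cube load, or AGGREGATE_AWCUBE itself dominates. A minimal sketch, using the same names as the script above (DBMS_OUTPUT must be enabled in your session):

```sql
-- Sketch only: time the aggregation call in isolation.
SET SERVEROUTPUT ON
DECLARE
  t0 PLS_INTEGER;
BEGIN
  t0 := DBMS_UTILITY.GET_TIME;  -- elapsed time in centiseconds
  DBMS_AWM.AGGREGATE_AWCUBE('RTTARGET', 'AW_WH_SALES',
                            'WH_CUBE_SALESFACT', 'AGG_CUBESALESFACT');
  DBMS_OUTPUT.PUT_LINE('aggregate_awcube: '
    || (DBMS_UTILITY.GET_TIME - t0) / 100 || ' seconds');
END;
/
```

The same wrapper can be placed around each REFRESH_AWDIMENSION and REFRESH_AWCUBE call to break the 3 hours down phase by phase.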
Similar Messages
-
Performance problem while updating data in a customized TMS system
Dear guys,
I have developed a customized time management system, but there is a performance problem while updating machine data in the monthly roster table. There is an if/else condition which checks whether the record status is late, present on time, etc.
Does anyone have a clue to improve performance?
Thanks in advance
Regards,
furqan

Furqan wrote:
Dear guys
I have developed a customized time management system, but there is a performance problem while updating machine data in the monthly roster table. There is an if/else condition which checks whether the record status is late, present on time, etc. Does anyone have a clue to improve performance?

From that description, and without any database version, code, explain plans, execution traces or anything that would be remotely useful.... erm... no.
Hint:-
How to Post an SQL statement tuning request...
HOW TO: Post a SQL statement tuning request - template posting -
Performance problem while CPU is 80% idle?
Hi,
My end users are complaining of a performance problem during execution of a batch process.
As you can see, there are 1,745 statements executing each second.
The AWR report shows 98.1% of DB time as waits on CPU.
The AWR report also shows that the host CPU is 79.9% idle.
The second wait event shows only 212 seconds of waits on db file sequential read.
So 4 minutes in a 1-hour period does not seem to be an issue.
Please advise.
DB Name   DB Id   Instance   Inst Num   Startup Time      Release      RAC
QERP      xxx     erp        1          21-Jan-13 15:40   11.2.0.2.0   NO

Host Name   Platform            CPUs   Cores   Sockets   Memory (GB)
erptst      HP-UX IA (64-bit)   16     16      4         127.83

              Snap Id   Snap Time             Sessions   Curs/Sess
Begin Snap:   40066     22-Jan-13 20:00:52    207        9.6
End Snap:     40067     22-Jan-13 21:00:05    210        9.6
Elapsed:       59.21 (mins)
DB Time:      189.24 (mins)

Cache Sizes              Begin       End
~~~~~~~~~~~         ----------  ----------
Buffer Cache:           8,800M      8,800M   Std Block Size:   8K
Shared Pool Size:       1,056M      1,056M   Log Buffer:       49,344K
Load Profile         Per Second   Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~        -----------   ---------------   --------   --------
DB Time(s):                 3.2               0.1       0.00       0.05
DB CPU(s):                  3.1               0.1       0.00       0.05
Redo size:            604,285.1          27,271.3
Logical reads:        364,792.3          16,463.0
Block changes:          3,629.5             163.8
Physical reads:            21.5               1.0
Physical writes:           95.3               4.3
User calls:                68.7               3.1
Parses:                   212.9               9.6
Hard parses:                0.3               0.0
W/A MB processed:           1.2               0.1
Logons:                     0.3               0.0
Executes:               1,745.2              78.8
Rollbacks:                  1.2               0.1
Transactions:              22.2
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %:               100.00   Redo NoWait %:      100.00
Buffer Hit %:                   99.99   In-memory Sort %:   100.00
Library Hit %:                  99.95   Soft Parse %:        99.85
Execute to Parse %:             87.80   Latch Hit %:         99.99
Parse CPU to Parse Elapsd %:    74.76   % Non-Parse CPU:     99.89

Shared Pool Statistics        Begin     End
Memory Usage %:               75.37   76.85
% SQL with executions>1:      95.31   85.98
% Memory for SQL w/exec>1:    90.33   82.84
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                          Avg
                                                         wait   % DB
Event                           Waits     Time(s)        (ms)   time   Wait Class
DB CPU                                     11,144               98.1
db file sequential read         52,714        214           4    1.9   User I/O
SQL*Net break/reset to client   29,050          6           0     .1   Application
log file sync                    2,536          6           2     .0   Commit
buffer busy waits                4,338          2           1     .0   Concurrency

Host CPU (CPUs: 16 Cores: 16 Sockets: 4)
~~~~~~~~ Load Average
Begin   End    %User   %System   %WIO   %Idle
0.34    0.33   19.7    0.4       1.8    79.9

Nikolay Savvinov wrote:
if the users are complaining about performance of the batch process, then that's what you should be looking at, not the entire system.

I find it strange to see "end users" and "the batch process" in the same sentence (as they were in the first post). "End users" gives me the feeling of a significant number of concurrent sessions with people waiting for results in real time at the far end, while "batch process" carries the image of a small number of large-scale processes running overnight to prepare the data for the following morning.
I mention this because my first view of the AWR output was: you've got 16 CPUs, only three in use, virtually no users, and very little work being done, so how can the users complain? (One answer, of course, is that the other 13 CPUs could be locked out of use as far as Oracle is concerned.) On the second read I decided that the "users" had gone home, and the complaint was simply that the batch process wasn't completing in time.
In this case I think "the entire system" IS "the batch process".
Determine which stored procedures and/or SQL statements took longer than usual and then find out why. Most likely you'll be able to find everything you need in the AWR views (DBA_HIST_SQL%) and the ASH archive (DBA_HIST_ACTIVE_SESS_HISTORY).
If the batch process has changed dramatically and recently, then a simple first step might be to look at the current AWR report, find the few most time-consuming SQL statements, and use the awrsqrpt.sql script to find their history of execution plans.
But I'd also just look at the expensive SQL - bearing in mind, particularly, that there are very few user calls per second yet many hundreds of executions per second. It strikes me that there could be quite a lot of PL/SQL going on, doing something a little bit expensive many times, or some PL/SQL function calling some SQL that used to be called rarely from an SQL statement but is now (due, perhaps, to a change in plan) being called much more frequently - so check "SQL Ordered by Executions".
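A hedged sketch of the kind of DBA_HIST_SQL% query this suggests - finding the most expensive SQL between the two snapshots shown in the report header (40066/40067). The columns are the standard AWR view columns, but verify them against your own data dictionary:

```sql
-- Top 10 SQL by elapsed time between the report's begin and end snapshots.
SELECT *
FROM  (SELECT sql_id,
              SUM(executions_delta)                    execs,
              ROUND(SUM(elapsed_time_delta) / 1e6, 1)  elapsed_s,
              ROUND(SUM(cpu_time_delta)     / 1e6, 1)  cpu_s
       FROM   dba_hist_sqlstat
       WHERE  snap_id >  40066
       AND    snap_id <= 40067
       GROUP  BY sql_id
       ORDER  BY 3 DESC)
WHERE ROWNUM <= 10;
```

The sql_id values this returns can then be fed to awrsqrpt.sql to see the history of execution plans, as suggested above.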
Regards
Jonathan Lewis -
Performance Problem while signing into Application
Hello
Could someone please throw some light on which area I can look into for my performance problem. It's an E-Business Suite version 11.5.10.2, which was upgraded from 11.5.8.
The problem: when the sign-in page is displayed and the username/password is entered, it takes forever for the system to actually log the user in. Sometimes I have to click twice on the Sign In button.
I have run Purge Sign-On Audit, purged the concurrent request/manager logs, and gathered schema stats, but it is still slow. Is there any way to check whether the middle tier is the bottleneck?
Thanks
Nini

Can you check whether the profile option FND%diagnostic% is enabled?
fadi -
Performance problem while executing a query
Hi Users
I have a performance problem with an application
where we are using a D2K (Forms) front end and an Oracle 9i back end.
We have a validation button that takes a lot of time to check the data (around 500 records).
I would like some links about the performance tuning process; please help.
Here are some of the queries taking more than 1 second:
1)
SELECT
/*+ INDEX(A IND1_CHKSHTCARD) */
COUNT(DISTINCT A.KBCK_CHKSHEET_NO)
FROM
KBS_CHKSHTCARDTB A ,KBS_CHKSHTHDRTB B WHERE A.KBCK_CHKSHEET_NO=B.KBCH_CHKSHEET_NO
AND KBCK_E_DATE =TRUNC(SYSDATE)
AND KBCH_PRINT_STATUS='P'
Output: 206
Time: 1 sec
2)
UPDATE KBS_CARDMASTERTB
SET KBCM_LOCK_FROM = KBCM_LOCK_FROM_CONTROL,
KBCM_LOCK_STATUS= NULL
WHERE KBCM_LOCK_FROM_CONTROL is not null
and KBCM_LOCK_FROM IS NULL
and KBCM_LOCK_FROM_CONTROL <=trunc(sysdate)
AND KBCM_LOCK_STATUS = 'Y'
and KBCM_UNIQUE_IDNO IN(SELECT DISTINCT KBSA_UNIQUE_IDNO
FROM KBS_SCANTB WHERE TRUNC(KBSA_E_DATE) = TRUNC(:KANBAN_CTRL_BLK.DAT)
AND KBSA_TRUCK_SQ_NO = :Kanban_ctrl_blk.cycl
AND KBSA_ERROR_CODE IS NULL)
AND (KBCM_VENDOR_NO,KBCM_PLANT_CODE)in (SELECT DISTINCT kbsa_vendor_no,KBSA_PLANT_CODE
FROM KBS_SCANTB WHERE TRUNC(KBSA_E_DATE) = TRUNC(:KANBAN_CTRL_BLK.DAT) AND KBSA_TRUCK_SQ_NO = :Kanban_ctrl_blk.cycl AND KBSA_ERROR_CODE IS NULL)
AND KBCM_PROCESS_CODE IN (SELECT DISTINCT KBSA_PROCESS_CODE
FROM KBS_SCANTB WHERE TRUNC(KBSA_E_DATE) = TRUNC(:KANBAN_CTRL_BLK.DAT) AND KBSA_TRUCK_SQ_NO = :Kanban_ctrl_blk.cycl
AND KBSA_ERROR_CODE IS NULL);
Output: total number of rows in the table: 29288
Time: more than 5 sec
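One hedged rewrite worth testing for statement 2: the three DISTINCT subqueries scan KBS_SCANTB three times with the same filters, and collapsing them into a single multi-column IN scans it once. Note that this is slightly stricter than the original (a single KBS_SCANTB row must now match all four columns at once), which is often, but not always, the intended semantics - verify the affected row counts before adopting it:

```sql
-- Sketch only: one scan of KBS_SCANTB instead of three.
UPDATE kbs_cardmastertb
SET    kbcm_lock_from   = kbcm_lock_from_control,
       kbcm_lock_status = NULL
WHERE  kbcm_lock_from_control IS NOT NULL
AND    kbcm_lock_from IS NULL
AND    kbcm_lock_from_control <= TRUNC(SYSDATE)
AND    kbcm_lock_status = 'Y'
AND    (kbcm_unique_idno, kbcm_vendor_no, kbcm_plant_code, kbcm_process_code) IN
       (SELECT kbsa_unique_idno, kbsa_vendor_no,
               kbsa_plant_code,  kbsa_process_code
        FROM   kbs_scantb
        WHERE  TRUNC(kbsa_e_date) = TRUNC(:kanban_ctrl_blk.dat)
        AND    kbsa_truck_sq_no   = :kanban_ctrl_blk.cycl
        AND    kbsa_error_code IS NULL);
```

Separately, the TRUNC(kbsa_e_date) predicate prevents a plain index on KBSA_E_DATE from being used; a function-based index on TRUNC(kbsa_e_date) may help here.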
3)
CURSOR GET_TEMP_CARDS_SWIPED_CUR IS
SELECT KBCM_VENDOR_NO
,KBCM_PLANT_CODE
,KBCM_FAMILY --ADDED BY SUJITH.C TO SUPPORT PSMS2
,KBCM_BACK_NO
,KBCM_UNIQUE_IDNO
,KBCM_KANBAN_TYPE
FROM KBS_CARDMASTERTB
WHERE KBCM_KANBAN_TYPE IN ('T','B')
AND KBCM_UNIQUE_IDNO IN
(SELECT KBSA_UNIQUE_IDNO
FROM KBS_SCANTB
WHERE KBSA_E_DATE = :DAT
AND KBSA_TRUCK_SQ_NO = :CYCL
AND KBSA_ERROR_CODE IS NULL
AND KBSA_TYPE IS NULL);
Thanks in advance.

[When your query takes too long...|http://forums.oracle.com/forums/thread.jspa?messageID=1812597#1812597]
[How to post a SQL statement tuning request|http://forums.oracle.com/forums/thread.jspa?threadID=863295&tstart=0] -
Performance problem while creating a table at runtime
I have a PL/SQL block like:
BEGIN
  EXECUTE IMMEDIATE 'drop table x';
  EXECUTE IMMEDIATE 'create table x as select ...';  -- complex SELECT joining 7-8 tables
  EXECUTE IMMEDIATE 'create index ind1 on x (...)';  -- I am not writing the full syntax
END;
The SELECT statement used in the CREATE TABLE fetches about 10 million rows.
The above PL/SQL block takes 30-45 minutes.
Without going into the depth of the query used in the SELECT (as I would otherwise have to explain the whole functionality),
could anyone please suggest the fastest way to create the table - e.g. NOLOGGING, a separate tablespace with a bigger block size, or a change to some DB parameter?
The DB server has an excellent hardware configuration: 32 GB RAM, 16 CPUs, and a huge hard disk.
Thanks

CREATE OR REPLACE VIEW VW_CUST_ACCT_BUS_REQ AS
SELECT FC.V_CUST_NUMBER,
FC.V_ACCT_NUMBER,
FC.V_ACCT_CUST_ROLE_CODE
from Fct_Acct_Cust_Roles FC --current schema table
join dim_jurisdiction DC on DC.V_JURISDICTION_CODE = FC.V_SRC_CNTRY_CODE
JOIN VW_APPLN_PARAMS APP ON APP.V_PARAM_CATEGORY = 'KYC'
AND APP.V_PARAM_IDENTIFIER =
'KYC_PROCESSING_DATE'
AND APP.N_CNTRY_KEY = DC.N_JURISDICTION_KEY
AND FC.FIC_MIS_DATE = APP.d_Param_Date
UNION
SELECT BUS_CUST_ACCT.CUST_INTRL_ID,
BUS_CUST_ACCT.ACCT_INTRL_ID,
BUS_CUST_ACCT.CUST_ACCT_ROLE_CD
FROM BUS_CUST_ACCT --another schema's table, containing millions of rows
--Can you tell me any other method to achieve the above select?
CREATE TABLE vw_kyc_dr_ip as
SELECT FCU.V_SRC_CNTRY_CODE JRSDCN_CD,
FCU.V_CUST_NUMBER v_cust_id,
FACRS.V_CUST_NUMBER v_ip_cust_id,
ROLS.F_CONTROLLING_ROLE f_cntrl_role
FROM VW_CUST_BUS_REQ FCU -- This is another Mview it contains data approx 50,000
JOIN VW_CUST_ACCT_BUS_REQ FACR/* see above view definition, contains rows in millions */ ON FCU.V_CUST_NUMBER = FACR.V_CUST_NUMBER
JOIN VW_CUST_ACCT_BUS_REQ FACRS ON FACR.V_ACCT_NUMBER = FACRS.V_ACCT_NUMBER
JOIN DIM_ACCT_CUST_ROLE_TYPE ROLS ON ROLS.V_ACCT_CUST_ROLE_CODE =FACRS.V_ACCT_CUST_ROLE_CODE
UNION
(SELECT FCU.V_SRC_CNTRY_CODE JRSDCN_CD,
FCU.V_CUST_NUMBER v_cust_id,
FCR.V_RELATED_CUST_NUMBER v_ip_cust_id,
'N' f_cntrl_role
FROM VW_CUST_BUS_REQ FCU
JOIN VW_CUST_CUST_BUS_REQ FCR ON FCU.V_CUST_NUMBER =
FCR.V_CUST_NUMBER
JOIN VW_APPLN_PARAMS P ON P.V_PARAM_IDENTIFIER = 'KYC_PROCESSING_DATE'
AND P.V_PARAM_CATEGORY = 'KYC'
AND FCR.D_RELATIONSHIP_EXPIRY_DATE >=
P.D_PARAM_DATE
JOIN DIM_JURISDICTION ON DIM_JURISDICTION.N_JURISDICTION_KEY =
P.N_CNTRY_KEY
AND DIM_JURISDICTION.V_JURISDICTION_CODE =
FCU.V_SRC_CNTRY_CODE
MINUS
SELECT DISTINCT FCU.V_SRC_CNTRY_CODE JRSDCN_CD,
FCU.V_CUST_NUMBER v_cust_id,
FACRS.V_CUST_NUMBER v_ip_cust_id,
'N'
FROM VW_CUST_BUS_REQ FCU
JOIN VW_CUST_ACCT_BUS_REQ FACR ON FCU.V_CUST_NUMBER =
FACR.V_CUST_NUMBER
JOIN VW_CUST_ACCT_BUS_REQ FACRS ON FACR.V_ACCT_NUMBER =
FACRS.V_ACCT_NUMBER
JOIN DIM_ACCT_CUST_ROLE_TYPE ROLS ON ROLS.V_ACCT_CUST_ROLE_CODE =
FACRS.V_ACCT_CUST_ROLE_CODE
AND ROLS.F_CONTROLLING_ROLE = 'Y'
UNION
(SELECT FCU.V_SRC_CNTRY_CODE JRSDCN_CD,
FCU.V_CUST_NUMBER v_cust_id,
FCR.V_CUST_NUMBER v_ip_cust_id,
'N' f_cntrl_role
FROM VW_CUST_BUS_REQ FCU
JOIN VW_CUST_CUST_BUS_REQ FCR ON FCU.V_CUST_NUMBER =
FCR.V_RELATED_CUST_NUMBER
JOIN VW_APPLN_PARAMS P ON P.V_PARAM_IDENTIFIER = 'KYC_PROCESSING_DATE'
AND P.V_PARAM_CATEGORY = 'KYC'
AND FCR.D_RELATIONSHIP_EXPIRY_DATE >=
P.D_PARAM_DATE
JOIN DIM_JURISDICTION ON DIM_JURISDICTION.V_JURISDICTION_CODE =
FCU.V_SRC_CNTRY_CODE
AND DIM_JURISDICTION.N_JURISDICTION_KEY =
P.N_CNTRY_KEY
MINUS
SELECT DISTINCT FCU.V_SRC_CNTRY_CODE JRSDCN_CD,
FCU.V_CUST_NUMBER v_cust_id,
FACRS.V_CUST_NUMBER v_ip_cust_id,
'N'
FROM VW_CUST_BUS_REQ FCU
JOIN VW_CUST_ACCT_BUS_REQ FACR ON FCU.V_CUST_NUMBER =
FACR.V_CUST_NUMBER
JOIN VW_CUST_ACCT_BUS_REQ FACRS ON FACR.V_ACCT_NUMBER =
FACRS.V_ACCT_NUMBER
JOIN DIM_ACCT_CUST_ROLE_TYPE ROLS ON ROLS.V_ACCT_CUST_ROLE_CODE =
FACRS.V_ACCT_CUST_ROLE_CODE
AND ROLS.F_CONTROLLING_ROLE = 'Y'
Kindly advise me on the technical side; I think it would be difficult to make you understand the full functionality here.
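On the original question (the fastest way to run the CTAS): a hedged sketch of the usual direct-path approach the poster alludes to, assuming the recovery policy tolerates unrecoverable loads and the server's 16 CPUs leave parallel capacity to spare (the degree of 8 is only illustrative, and the elided SELECT is the poster's own 7-8 table join):

```sql
-- Sketch only: direct-path, parallel, minimal redo for the CTAS and index build.
-- NOLOGGING segments are not recoverable until the next backup.
CREATE TABLE x NOLOGGING PARALLEL 8 AS
SELECT /*+ PARALLEL(8) */ ...   -- the original complex join goes here
FROM   ... ;

CREATE INDEX ind1 ON x (...) NOLOGGING PARALLEL 8;

-- Optionally revert to serial, logged behaviour for normal use:
ALTER TABLE x NOPARALLEL;
ALTER TABLE x LOGGING;
ALTER INDEX ind1 NOPARALLEL;
ALTER INDEX ind1 LOGGING;
```

Dropping and recreating the indexes after the load (rather than before it) is what avoids maintaining them row by row during the insert.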
Problem while selecting BELNR from BSEG
Hi Experts,
I have a report performance problem while fetching BELNR from the BSEG table.
I have to print the latest BELNR from BSEG where BUZID = 'M', but at execution time the report takes too much time (more than an hour, and sometimes it hangs).
I have also gone through the comments provided by experts for previous problems asked in this forum, e.g. that BSEG is a cluster table, which is why data retrieval takes a long time, etc.
Does anyone have any other idea or suggestion, or any other way to solve this problem?
Regards,
Neeraj

Hi,
1) Try to create an index on BUZID field
2) Don't use SELECT/ENDSELECT statement. Instead of that extract all the concerned entries from BSEG into an internal table :
select belnr from bseg appending table itab where buzid = 'M'.
then do this :
sort itab by belnr.
describe itab lines n.
read table itab index n.
Please reward if helpful.
Regards,
Nicolas. -
Performance problems with 0EC_PCA_3 datasource
Hi experts,
We have recently upgraded the Business Content in our BW system, as well as the plug-in on R/3 side. Now we have BC3.3 and PI2004.1. Given the opportunity, we decided to apply the new 0EC_PCA_3 and 0EC_PCA_4 datasources that provide more detailed data from table GLPCA.
The new datasources have been activated and transported to the QA system, where we experience serious performance problems while extracting data from R/3. All other data extractions work as before so there should not be any problem with the hardware.
Do you use 0EC_PCA_3? Have you experienced any problem with the speed of data extraction/transfer? We already have applied the changes suggested in note 597909 (Performance of FI-SL line item extractors: Creating indexes) and created secondary indexes on GLPCA table but it did not help.
thanks and regards,
Csaba

It seems the problem was caused by a custom development - a quantity conversion in a user exit. However, when we tried loading earlier, after removing the exit, it did not help (loading took even longer...). Now it did.
-
Performance problem with selecting records from BSEG and KONV
Hi,
I am having a performance problem while selecting records from the BSEG and KONV tables. As these two tables hold a large amount of data, the selects take a lot of time. Can anyone help me improve the performance? Thanks in advance.
Regards,
Prashant

Hi,
Some steps to improve performance:
1. Avoid using SELECT...ENDSELECT... construct and use SELECT ... INTO TABLE.
2. Use WHERE clause in your SELECT statement to restrict the volume of data retrieved.
3. Design your Query to Use as much index fields as possible from left to right in your WHERE statement
4. Use FOR ALL ENTRIES in your SELECT statement to retrieve the matching records at one shot.
5. Avoid using nested SELECT statement SELECT within LOOPs.
6. Avoid using INTO CORRESPONDING FIELDS OF TABLE. Instead use INTO TABLE.
7. Avoid using SELECT * and Select only the required fields from the table.
8. Avoid nested loops when working with large internal tables.
9. Use assign instead of into in LOOPs for table types with large work areas
10. When in doubt call transaction SE30 and use the examples and check your code
11. Whenever using READ TABLE use BINARY SEARCH addition to speed up the search. Be sure to sort the internal table before binary search. This is a general thumb rule but typically if you are sure that the data in internal table is less than 200 entries you need not do SORT and use BINARY SEARCH since this is an overhead in performance.
12. Use "CHECK" instead of IF/ENDIF whenever possible.
13. Use "CASE" instead of IF/ENDIF whenever possible.
14. Use "MOVE" with individual variable/field moves instead of "MOVE-CORRESPONDING"; it creates more coding but is more efficient. -
JDBC performance problem with Blob
Hi,
I have a performance problem while inserting BLOBs into an Oracle 8i database. I am using JVM 1.17 on HP-UX 11.0 with the Oracle thin client.
The table I use contains only two columns: an integer (the primary key) and a BLOB.
First I insert a row in the table with an empty_blob(),
then I select the row back to get the BLOB locator,
and finally I fill in the BLOB with the data.
But it takes an average of 4.5 seconds to insert a BLOB of 47 KB.
Am I doing something wrong?
Any suggestion or hint will be welcome.
Thanks in advance
Didier
Don S. (guest) wrote, quoting Didier's original question above:
In our testing, if you use blob.putBytes() you will get better performance. The drawback we found was the 32K limit we ran into: we had to chunk anything larger than that and make calls to the append method. I was disappointed in Oracle's phone support on what causes the 32K limit. In addition, getBytes() for retrieval doesn't seem to work; you'll have to use the InputStream for that. Oh, and FYI: we ran into a 2K limit on putChars() for CLOBs.
The thin drivers currently use the package dbms_lob behind the scenes, while the JDBC OCI 8.1.5 and higher drivers use native OCI calls, which makes them much faster.
There is also a 32K limit on parameters to PL/SQL stored procedures/functions;
you may have run into that.
Performance issues while opening business rule
Hi,
we're working with Hyperion version 9.2.1 and we're having some performance problems while opening business rules. I analyzed the issue and found out that it has something to do with assigning access privileges to the rule.
The authorization plan looks as followed:
User A is assigned to group G1
User B is assigned to group G2
Group G1 is assigned to group XYZ
Group G2 is assigned to group XYZ
Group XYZ holds the provision "basic user" for the Planning application.
Without assigning any access privilege, the business rule opens immediately.
When the access privilege (validate or launch) is assigned to group G1/G2, the business rule opens immediately.
When the access privilege is assigned to group XYZ, the business rule opens only after 2-5 minutes.
Has anyone an idea why this happens and how to solve this?
Kind regards
Uli
Edited by: user13110201 on 12.05.2010 04:31

This has been an issue with Business Rules for quite a while. Oracle has made steps both forward and backward in releases later than yours, and they've issued patches addressing, if not completely resolving, the problem. Things finally seem to be much better in 11.1.1.3, although YMMV.
-
Hi all
I am facing a performance problem: while selecting field DATAB (the validity start date of the condition record) from pooled table A017, I am getting a timed-out error (slong). Is there any efficient way to retrieve the data?
Regards
Sreenivasa Reddy

Hi,
follow the steps below to improve performance:
1) Remove * from SELECT; select only the required fields.
2) Select fields in the sequence in which they are defined in the database.
3) Avoid unnecessary selects, i.e. check that the driving internal table is not initial.
4) Use FOR ALL ENTRIES, and sort the table by its key fields.
5) Remove selects from loops, and use binary search on internal tables.
6) Try to use a secondary index when you don't have the full key.
7) Modify internal tables using the TRANSPORTING option.
8) Avoid nested loops; use READ TABLE and then LOOP AT itab FROM sy-tabix.
9) Free internal table memory when a table is not required for further processing.
10) Follow the logic below:
Follow below logic.
FORM SUB_SELECTION_AUFKTAB.
if not it_plant[] is initial.
it_plant1[] = it_plant[].
sort it_plant1 by werks.
delete adjacent duplicates from it_plant1 comparing werks.
SELECT AUFNR KTEXT USER4 OBJNR INTO CORRESPONDING FIELDS OF TABLE
I_AUFKTAB
FROM AUFK
FOR ALL ENTRIES IN it_plant1
WHERE AUFNR IN S_AUFNR AND
KTEXT IN S_KTEXT AND
WERKS IN S_WERKS AND
AUART IN S_AUART AND
USER4 IN S_USER4 AND
werks eq it_plant1-werks.
free it_plant1.
Endif.
ENDFORM. "SUB_SELECTION_AUFKTAB
Regards
Amole -
Facing a major problem while performing restoration of my mssql DB
Dear Experts,
I am facing a major problem while performing a restoration of my MSSQL DB. The situation is as follows:
1. I successfully took a full and a transaction log backup to an external device using MSSQL Server Management Studio.
The backup completed successfully according to the MSSQL Server Management Studio message.
2. When I try to restore it using MSSQL Server Management Studio, it shows the following error:
System.Data.SqlClient.SqlError: RESTORE cannot process database <DB_SID> because
it is in use by this session. It is recommended that the master database be used when
performing this operation.
I have followed the guidelines specified in the link
[SAP Help Link for Restoring the <SAPSID> Backup from a Device |http://help.sap.com/saphelp_nw70/helpdata/en/f2/31ad56810c11d288ec0000e8200722/frameset.htm]
But every time I get the same error message. I have checked all the options but
found no resolution. Kindly advise me in this regard.
Thanks and Regards,
Partha

http://social.msdn.microsoft.com/Forums/en-CA/sqltools/thread/37ee8e24-7aaa-472b-861a-fc0cc513338a
hope it helps -
Hi to all ,
I have a problem while fetching from BSAD for a report; in transaction SE30 I have seen that it takes 89.1% of the overall runtime.
SELECT belnr buzei dmbtr blart budat augdt augbl sgtxt
into table odemelerg
FROM bsad
WHERE bukrs EQ bukrs
AND kunnr EQ kunnr
AND ( umsks EQ space OR umsks IS NULL )
AND ( umskz EQ space OR umskz IS NULL )
AND augbl EQ i_augbl
AND augdt GE i_budat
AND gjahr EQ gjahr
AND belnr NE i_belnr
AND bsad~belnr NE bsad~augbl
AND ( blart EQ blart_bt OR blart EQ blart_hf
OR blart EQ blart_mi ).
Here blart_bt is declared as a constant with the value 'BT'; similarly for blart_hf and blart_mi.
How can I make this SELECT query perform better?
Kind regards,
Caglar

Hi
If you know the bill number:
-1) Search FI document:
Get header data
select * from bkpf where AWTYP = 'VBRK'
and AWKEY = BILL NUMBER.
EXIT.
ENDSELECT.
Get items data
SELECT * FROM BSEG INTO TABLE T_BSEG
WHERE BUKRS = BKPF-BUKRS
AND BELNR = BKPF-BELNR
AND GJAHR = BKPF-GJAHR
AND KOART = 'D'.
Payment:
LOOP AT T_BSEG WHERE AUGDT <> '00000000'.
IF T_BSEG-AUGBL <> _BKPF-BELNR
OR T_BSEG-AUGDT <> _BKPF-BUDAT.
SELECT * FROM BKPF INTO _BKPF
WHERE BUKRS = T_BSEG-BUKRS
AND BELNR = T_BSEG-AUGBL
AND BUDAT = T_BSEG-AUGDT.
EXIT.
ENDSELECT.
SELECT * FROM BSEG APPENDING TABLE T_PAYMENT
WHERE BUKRS = _BKPF-BUKRS
AND BELNR = _BKPF-BELNR
AND GJAHR = _BKPF-GJAHR
AND KOART = 'D'.
ENDIF.
ENDLOOP.
Partial payment
SELECT * FROM BSAD INTO TABLE WHERE BUKRS = BKPF-BUKRS
AND KUNNR = T_BSEG-KUNNR
AND REBZG = BKPF-BELNR
AND REBZJ = BKPF-GJAHR.
Max -
Performance problem in Zstick report...
Hi Experts,
I am facing a performance problem in a custom stock report in Materials Management.
In this report I fetch all materials with their batches to get the desired output; a single run processes 36,000+ unique combinations of material and batch.
The report takes around 30 minutes to execute, and it has to be viewed regularly, every 2 hours.
To read the batch characteristic values I am using FM '/SAPMP/CE1_BATCH_GET_DETAIL'.
Is there any way to increase the performance of this report? The output of the report is in ALV.
Could I have a refresh button in the report so that the data is refreshed automatically without executing it again, or is there any cache memory concept?
Note: I have declared all the internal tables as sorted tables, and all the select queries fetch with key and index.
Thanks
Rohit Gharwar

Hello,
SE30 is old. Switch on a trace in ST12 while running this program and identify where exactly most of the time is being spent. If you see high CPU time, the problem is in the ABAP code; you can pinpoint the exact program/function module from the ST12 trace. If you see high database time in ST12, the problem is database-related, so you have to analyze the SQL statements from the performance traces in ST12. This should resolve your issue.
Yours Sincerely
Dileep