Improving performance for SM35
Hi all,
Are there any ways to improve the performance (time taken to load data) of SM35?
We are aware of executing the session in the background, but due to the high data volume (more than 10,000 records per file), loading is still slow (about 3 hours per file).
Hi Raj,
The previous posters have already given you all the information you need, but since the question is still open, let me try to summarize it.
You're getting almost 1 transaction processed per second, which might be OK depending on the application area and the complexity of the executed transaction. So, as Hermann initially pointed out, you should first profile the transaction you're running and check for any inefficiencies (custom coding in exits/BAdIs is often a source of slow-downs). If you find any problems, tune your transaction/application (not SM35).
If your application is fast enough (i.e. you cannot find any easy measures for making your transaction faster), you can compare application/transaction processing time versus total time taken in SM35. I personally doubt that you'll find any worthwhile discrepancy there (i.e. processing time taken up by SM35 itself, beyond the called transaction). Thus you should be left with Hermann's initial point of running several BDC sessions in parallel, meaning that you'll have to split your input file (you can automate that if you run such loads regularly). Without parallel processing you will always encounter unacceptable processing times on huge data loads, even with optimal coding throughout the application.
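The file-splitting step can be sketched in plain Python (assuming, hypothetically, that your input is a flat file with one transaction record per line; `n_sessions` would match the number of parallel batch-input sessions you plan to run):

```python
def split_records(lines, n_sessions):
    """Split a list of input records into n_sessions roughly equal
    chunks, one chunk per parallel batch-input session."""
    size, rem = divmod(len(lines), n_sessions)
    chunks, start = [], 0
    for i in range(n_sessions):
        # the first `rem` chunks get one extra record each
        end = start + size + (1 if i < rem else 0)
        chunks.append(lines[start:end])
        start = end
    return chunks
```

Each chunk would then be written to its own file and loaded by its own background session.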
Kind regards, harald
Similar Messages
-
To improve performance for report
Hi Expert,
I have generated an open sales orders report that fetches data from VBAK, and it takes a long time to execute even in the foreground.
It goes into a dump in the foreground, and I have also executed it in the background, but it dumps there as well.
SELECT vbeln
       auart
       submi
       vkorg
       vtweg
       spart
       knumv
       vdatu
       vprgr
       ihrez
       bname
       kunnr
  FROM vbak
  APPENDING TABLE itab_vbak_vbap
  FOR ALL ENTRIES IN l_itab_temp
*BEGIN OF change 17/Oct/2008.
  WHERE erdat IN s_erdat AND
        submi = l_itab_temp-submi AND
*End of Changes 17/Oct/2008.
        auart = l_itab_temp-auart AND
        vkorg = l_itab_temp-vkorg AND
        vtweg = l_itab_temp-vtweg AND
        spart = l_itab_temp-spart AND
        vdatu = l_itab_temp-vdatu AND
        vprgr = l_itab_temp-vprgr AND
        ihrez = l_itab_temp-ihrez AND
        bname = l_itab_temp-bname AND
        kunnr = l_itab_temp-sap_kunnr.
DELETE itab_temp FROM l_v_from_rec TO l_v_to_rec.
ENDDO.
Please give me suggestions for improving the performance of this program.
Hi,
try something like this:
DATA: BEGIN OF itab1 OCCURS 0,
        vbeln LIKE vbak-vbeln,
      END OF itab1.
DATA: BEGIN OF itab2 OCCURS 0,
        vbeln LIKE vbap-vbeln,
        posnr LIKE vbap-posnr,
        matnr LIKE vbap-matnr,
      END OF itab2.
DATA: BEGIN OF itab3 OCCURS 0,
        vbeln TYPE vbeln_va,
        posnr TYPE posnr_va,
        matnr TYPE matnr,
      END OF itab3.
SELECT-OPTIONS: s_vbeln FOR vbak-vbeln.
START-OF-SELECTION.
SELECT vbeln FROM vbak INTO TABLE itab1
  WHERE vbeln IN s_vbeln.
IF itab1[] IS NOT INITIAL.
  SELECT vbeln posnr matnr FROM vbap INTO TABLE itab2
    FOR ALL ENTRIES IN itab1
    WHERE vbeln = itab1-vbeln.
ENDIF.
-
TimesTen to improve performance for search results in Oracle eBS
Hi ,
We have various search scenarios in our ERP implementation using Oracle Apps eBS, for example searching for an item. Oracle Apps does provide item search, but performance is not great. We have about 30 million items, and to improve search performance we thought TimesTen might help.
Can anyone please clarify whether TimesTen can be used to improve performance on the eBS database, and if yes, how?
Vikash,
We were thinking along the same lines (using TimesTen for massive item search in E-Business Suite), in our case massive item / parametric search leveraging the Product Information Management application. We were thinking about setting up a POC on a Linux server with a Vision instance. Perhaps we should compare notes?
SParker -
Need to improve performance for bex queries
Dear Experts,
Here we have BEx queries built on a BW InfoSet; the InfoSet in turn is built on 2 DSOs and 4 InfoObjects.
We have built secondary indexes on the two DSOs assuming this would improve performance, but the query execution time is still very long.
Could you advise me on this?
Thanks in advance,
Mannu
Hi,
Thanks for the response.
But as I have mentioned, the InfoSet is based on DSOs and InfoObjects, so we cannot use aggregates.
In RSRT I have tried setting the read mode of the query to 'X' (read everything at once), which is also valid as the query needs to fetch a huge amount of data.
Could you please look into other possible areas to improve this?
Thanks in advance,
Mannu -
BIA to improve performance for BPS Applications
Hi All,
Is it possible to improve the performance of BPS applications using BIA? Currently we are running applications on BI-BPS which, because of a huge range of periods, have a performance issue.
Could you please share whether BIA would help with the read and write operations of BPS, and to what extent performance can be increased?
I'd appreciate an early reply, as the system is in really bad shape and users are grappling with poor performance.
Rgds,
Rajeev
Hi Rajeev,
If the performance issue you are facing is while running queries on the real-time (transactional) InfoCube used in BPS, then BIA can help. The closed requests from the real-time cube can be indexed in BIA. At query runtime, the analytic engine reads data from the database for the open request and from BIA for the closed, indexed requests. It combines this data with the plan buffer cache and produces the result.
Hence, if the issue is query response time, BIA will definitely help.
Regards,
Praveen -
How to improve performance for bulk data load in Dynamics CRM 2013 Online
Hi all,
We need to bulk update (or create) contacts in Dynamics CRM 2013 Online every night because data is updated from an external data source. The data size is around 100,000 records, and the load currently takes around 6 hours.
We are already using the ExecuteMultiple web service to handle the integration; however, the 6-hour duration is still not acceptable, and we are seeking advice for further improvement.
Any help is highly appreciated. Many thanks.
Gary
I think Andrii's referring to running multiple threads in parallel (see
http://www.mscrmuk.blogspot.co.uk/2012/02/data-migration-performance-to-crm.html - it's a bit dated, but should still be relevant).
Microsoft does apply throttling limits in CRM Online, and it is worth contacting them to see if you can get those raised.
100,000 records per night seems a large number. Are all of these new or updated records, or are some unchanged, in which case you could filter them out before uploading? Or are there useful ways to summarize the data before loading?
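The parallel-thread idea from the linked post can be sketched like this (Python here for brevity; `send_batch` is a hypothetical stand-in for whatever wrapper you have around the ExecuteMultiple request):

```python
from concurrent.futures import ThreadPoolExecutor

def upload_in_parallel(records, send_batch, batch_size=100, workers=4):
    """Cut the records into batches and push them through a small
    thread pool, so several ExecuteMultiple-style calls are in
    flight at once instead of one sequential request stream."""
    batches = [records[i:i + batch_size]
               for i in range(0, len(records), batch_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # results come back in batch order
        return list(pool.map(send_batch, batches))
```

The worker count is something to tune against the service's throttling limits rather than maximize.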
Microsoft CRM MVP - http://mscrmuk.blogspot.com/ http://www.excitation.co.uk -
How to improve performance for Custom Extractor in BI..
HI all,
I am new to BI and started working on it a couple of weeks ago. I created a custom extractor (data view) in the source system, and pulling data takes a lot of time. Can anyone suggest how to improve the performance of my custom extractor?
Thanks and Regards,
Venugopal
Dear Venugopal,
Use transaction ST05 to check that your SQL statements are optimal and that you have no redundant database calls. Use "bulking" as much as possible, i.e. fetch the required data with one request to the database rather than with many separate requests.
Use transaction SE30 to check whether you are wasting time in loops, and if so, optimize the algorithm.
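The "bulking" advice, sketched in Python terms (`fetch_rows_for_keys` is a hypothetical set-based database call; the point is one round trip per chunk of keys instead of one round trip per key):

```python
def load_bulked(keys, fetch_rows_for_keys, chunk_size=500):
    """Fetch rows for many keys with a few set-based calls
    instead of issuing one database call per key."""
    rows = {}
    for i in range(0, len(keys), chunk_size):
        # one round trip covers up to chunk_size keys
        rows.update(fetch_rows_for_keys(keys[i:i + chunk_size]))
    return rows
```

With 1,200 keys and a chunk size of 500 this issues 3 database calls instead of 1,200.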
Best Regards,
Sylvia -
How can I improve performance for BC4J/JSP-application
Hi,
I have developed a JSP application with master-detail views. This application has been implemented as an SSO-enabled web portlet in Portal. When I click on a row retrieved from the master view (Master.jsp), it takes 15 seconds
until the associated data in the detail view is displayed in the other window (Detail.jsp). The master table has about 2,500 records and the detail table about 7,000. (In another case it takes 2 seconds for 162 master rows and 228 detail rows.)
In Master.jsp I set rangesize="10" to reduce data loading time.
======================== master.jsp =================================
<jbo:DataSource id="dsMaster" appid="testAppId" viewobject="MaterView" rangesize="10">
<a href="detail.jsp?RowKeyValue=<jbo:ShowValue datasource="dsMaster" dataitem="RowKey"/>">
Here Click
</a>
==================================================================
Because all records from the master view first have to be retrieved to locate the right row, I set rangesize="-1" in Detail.jsp. Consequently this leads to lower performance.
When rangesize="20" is set instead of rangesize="-1", performance is good, but the desired data from the detail view is not displayed if the master row clicked on is not within that range.
======================== detail.jsp ======================================
<jbo:DataSource id="dsMaster" appid="testAppId" viewobject="MaterView" rangesize="-1">
<jbo:RefreshDataSource datasource="dsMaster">
<jbo:Row id="msRow" datasource="dsMaster" action="Find" rowkeyparam="RowkeyValue">
<jbo:DataSource id="dsDetail" appid="testAppId" viewobject="DetailView">
======================================================================
Is my programming logic not suited for high performance?
How can I improve the performance, if it is so?
Many thanks for your help.
regards,
Yoo
http://forums.adobe.com/thread/1369260?tstart=0
-
How to improve Performance for Select statement
Hi Friends,
Can you please help me in improving the performance of the following query:
SELECT SINGLE MAX( policyterm ) startterm INTO (lv_term, lv_cal_date) FROM zu1cd_policyterm WHERE gpart = gv_part GROUP BY startterm.
Thanks and Regards,
Johny
Long lists cannot be produced with a SELECT SINGLE, and there is also nothing to group.
But I guess the SINGLE is a bug:
SELECT MAX( policyterm ) startterm
INTO (lv_term, lv_cal_date)
FROM zu1cd_policyterm
WHERE gpart = gv_part
GROUP BY startterm.
How many records are in zu1cd_policyterm ?
Is there an index starting with gpart?
If the first answer is 'large' and the second 'no', it will be slow.
What is the meaning of gpart? How many different values can it assume?
If there are many different values, then an index makes sense, provided you are allowed to create one.
Otherwise you must be patient.
Siegfried -
Improving performance for java
I'm new to this, so please bear with me. I have 2 basic questions.
I just upgraded my server to SunOS 5.10 Generic_139555-08 sun4u sparc SUNW,Sun-Fire-V440
I also upgraded Java to version "1.6.0_14".
This is a 4-processor box. top gives me:
last pid: 26233; load averages: 2.79, 2.99, 3.12 13:23:57
174 processes: 172 sleeping, 2 on cpu
CPU states: 40.2% idle, 54.2% user, 5.6% kernel, 0.0% iowait, 0.0% swap
Memory: 8192M real, 3059M free, 6156M swap in use, 4105M swap free
PID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND
17294 prodslic 270 0 0 654M 641M cpu/1 527:36 50.02% java
*1st Question:*
*1. Why is java using so much cpu time?*
When I run ps -ef | grep java:
root 15666 1 0 Aug 10 ? 4:52 /usr/java/bin/java -server -Xmx128m -XX:+UseParallelGC -XX:ParallelGCThreads=4
prodslic 17294 1 25 18:07:14 ? 530:07 /usr/jdk/instances/jdk1.6.0/bin/java -Xmx1024m -Djava.awt.headless=true -Djava.
*2nd Question:*
*2. Why are there 2 java version running?*
/usr/java/bin/java -versionjava version "1.5.0_18"
/usr/jdk/instances/jdk1.6.0/bin/java -versionjava version "1.6.0_14"
This is confusing to me. I'd also like to know what the different command-line options mean.
Thanks
Edited by: BH80477 on Aug 14, 2009 8:19 AM
Claudia,
We have not yet released a "production" version of OHJ 4.2 because 1) we'd like to add a few more features before calling it production, and 2) OHJ 4.2 has not yet shipped with an Oracle product and therefore hasn't undergone the rigorous testing that takes place on larger Oracle products. If you're concerned about the label of "production" versus "beta," I think you'd be pretty safe going with the latest release of OHJ, or we wouldn't have put it OTN for you to use. The developers have tested thoroughly. :) I'd recommend trying out 4.2.1 since it's got some substantial improvements over the 4.1 branch, and on the off-chance you run into problems, please let us know.
- Ryan -
How to improve performance for Azure Table Storage bulk loads
Hello all,
Would appreciate your help as we are facing a challenge.
We are trying to bulk load Azure Table storage. We have a file that contains nearly 2 million rows.
We need to reach a point where we can bulk load 100,000-150,000 entries per minute. Currently, it takes more than 10 hours to process the file.
We have tried Parallel.ForEach, but it doesn't help. Today I discovered partitioning in PLINQ. Would that be the way to go?
Any ideas? I have spent nearly two days in trying to optimize it using PLINQ, but still I am not sure what is the best thing to do.
Kindly note that we shouldn't be using SQL / Azure SQL for this.
I would really appreciate your help.
Thanks
I'd think you're just pooling the parallel connections to Azure if you do it on one system. You'd also have the bottleneck of the round-trip time from you, through the internet, to Azure and back again.
You could speed it up by moving the data file to the cloud and processing it with a cloud Worker Role. That way you'd be inside the datacenter (a much faster, more optimized network).
Or, if that's not fast enough: if you can split the data so that multiple Worker Roles each process part of the file, you can use the VMs' scale to put enough machines on the job that it gets done quickly.
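Whichever runtime ends up doing the work, Azure Table storage rewards grouping writes by PartitionKey: an entity-group transaction (batch) can hold at most 100 entities, and all of them must share one partition key. A sketch of that batching step (Python; the dict shape of the entities is an assumption):

```python
from collections import defaultdict

def build_batches(entities, max_batch=100):
    """Group entities by PartitionKey, then cut each group into
    batches of at most max_batch entities (the entity-group
    transaction limit), so each batch is a single server call."""
    by_partition = defaultdict(list)
    for entity in entities:
        by_partition[entity["PartitionKey"]].append(entity)
    batches = []
    for group in by_partition.values():
        for i in range(0, len(group), max_batch):
            batches.append(group[i:i + max_batch])
    return batches
```

The resulting batches are independent, so they are natural units to distribute across worker roles or threads.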
Darin R. -
How to improve performance for this code
Hi,
LOOP AT lt_element INTO ls_element.
READ TABLE lt_element_ident INTO ls_element_ident
WITH KEY element_id = ls_element-element_id BINARY SEARCH.
IF sy-subrc EQ 0.
MOVE ls_element_ident-value TO lv_guid.
SELECT * FROM zcm_valuation_at
APPENDING CORRESPONDING FIELDS OF TABLE lt_caseattributes
WHERE case_guid = lv_guid.
ENDIF.
ENDLOOP.
LOOP AT lt_caseattributes INTO ls_caseattributes.
IF ls_caseattributes-ext_key IS INITIAL.
SELECT SINGLE ext_key
INTO CORRESPONDING FIELDS OF ls_caseattributes
FROM scmg_t_case_attr
WHERE case_guid = ls_caseattributes-case_guid.
ENDIF.
*To get the Status description of the Case
SELECT SINGLE stat_ordno_descr
  INTO ls_caseattributes-status
  FROM scmgstatprofst AS a
  INNER JOIN scmg_t_case_attr AS b
    ON a~profile_id = b~profile_id
   AND a~stat_orderno = b~stat_orderno
  WHERE case_guid = ls_caseattributes-case_guid.
MODIFY lt_caseattributes FROM ls_caseattributes INDEX sy-tabix TRANSPORTING status ext_key.
ENDLOOP.
READ TABLE lt_caseattributes INTO ls_caseattributes INDEX 1.
Regards,
Maruti
Hi,
try this kind of code:
==================================
* start new
DATA: lt_scmgstatprofst LIKE scmgstatprofst OCCURS 0 WITH HEADER LINE,
      wa_scmg_t_case_attr LIKE scmg_t_case_attr,
      lv_tabix LIKE sy-tabix.
SELECT * FROM scmgstatprofst INTO TABLE lt_scmgstatprofst.
SORT lt_scmgstatprofst BY profile_id stat_orderno.
* end new
LOOP AT lt_element INTO ls_element.
  READ TABLE lt_element_ident INTO ls_element_ident
       WITH KEY element_id = ls_element-element_id BINARY SEARCH.
  IF sy-subrc EQ 0.
    MOVE ls_element_ident-value TO lv_guid.
    SELECT * FROM zcm_valuation_at
      APPENDING CORRESPONDING FIELDS OF TABLE lt_caseattributes
      WHERE case_guid = lv_guid.
  ENDIF.
ENDLOOP.
LOOP AT lt_caseattributes INTO ls_caseattributes.
  lv_tabix = sy-tabix.  "remember the loop index before READ TABLE overwrites sy-tabix
  IF ls_caseattributes-ext_key IS INITIAL.
    SELECT SINGLE ext_key
      INTO CORRESPONDING FIELDS OF ls_caseattributes
      FROM scmg_t_case_attr
      WHERE case_guid = ls_caseattributes-case_guid.
  ENDIF.
*To get the Status description of the Case
* start deletion
*  SELECT SINGLE stat_ordno_descr
*    INTO ls_caseattributes-status
*    FROM scmgstatprofst AS a
*    INNER JOIN scmg_t_case_attr AS b
*      ON a~profile_id = b~profile_id
*     AND a~stat_orderno = b~stat_orderno
*    WHERE case_guid = ls_caseattributes-case_guid.
* end deletion
* start new
  CLEAR wa_scmg_t_case_attr.
  SELECT SINGLE * FROM scmg_t_case_attr INTO wa_scmg_t_case_attr
    WHERE case_guid = ls_caseattributes-case_guid.
  READ TABLE lt_scmgstatprofst WITH KEY
       profile_id = wa_scmg_t_case_attr-profile_id
       stat_orderno = wa_scmg_t_case_attr-stat_orderno
       BINARY SEARCH.
  IF sy-subrc IS INITIAL.
    ls_caseattributes-status = lt_scmgstatprofst-stat_ordno_descr.
  ENDIF.
* end new
  MODIFY lt_caseattributes FROM ls_caseattributes INDEX lv_tabix
         TRANSPORTING status ext_key.
ENDLOOP.
READ TABLE lt_caseattributes INTO ls_caseattributes INDEX 1.
==================================
Regards
Walter Habich
Edited by: Walter Habich on Jun 17, 2008 8:41 AM -
How to improve performance for this program
Hi,
I want to improve the performance of this program. Please help me. Here is the code.
Thanks
kumar
--# SCCS: @(#)nscdprpt_rpt_fam_fweq_detail_aggr_dml.sql 2.1 08/17/05 12:16:11
--# Location: /appl/dpdm/src/sql/SCCS/s.nscdprpt_rpt_fam_fweq_detail_aggr_dml.sql
-- =============================================================================
-- Purpose:
-- DML Script to calculate aggregate rows from table NSCDPRPT.RPT_FAM_FWEQ_DETAIL
-- inserts one (1) monthly total row for each comparison type (6 of them)
-- Modification History
-- Date Version Who What
-- 28-Jun-05 V1.00 J. Myers Initial Version
-- 17-Aug-05 gzuerc Replaced UNION ALL code with INSERTs to avoid
-- Oracle running out of undo tablespace.
-- ============================================================================
-- NOTES:
-- This process pulls data from RPT_FAM_FWEQ_DETAIL table.
-- The target table RPT_FAM_FWEQ_DETAIL_AGGR is truncated by plan_ctry and loaded daily;
-- it is the data used in the Forecast Accuracy Measurements Report.
-- The input data is 'wide' (6 quantities), and this script outputs a single
-- row for each quantity as a 'thin' table (single column F1_QTY and F2_QTY shared by all quantities), and
-- with a ROW_TYPE = 'TOTAL'
-- Two (2) other scripts add additional rows to this table as follows:
-- 1) nscdprpt_rpt_fam_FWEQ_detail_aggr_sesn_avg_dml.sql => calculates the
-- total quantities F1_QTY, F2_QTY and AE for each LEVEL_VALUE
-- 2) nscdprpt_rpt_fam_FWEQ_detail_aggr_pct_swg_dml.sql => calculates the
-- percent swing in quantities between DISP_EVENTS for each grouping
-- of PLAN_CTRY, DIVISION, SEASON and TRG_EVENT_NUM for only LEVEL_TYPE = 'SUB_CAT'
-- =============================================================================
-- The result set from joining RPT_FAM_FWEQ_DETAIL rows with comparison types
-- is inserted into the table RPT_FAM_FWEQ_DETAIL_AGGR
INSERT INTO rpt_fam_FWEQ_detail_aggr
SELECT plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
disp_event_num,
max_event_num,
compare_name,
first_display_event_num,
level_type,
level_value,
'TOTAL' AS row_type,
material,
key_material,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
sty_colr_sdesc,
sty_grp_nbr,
sty_grp_desc,
matl_typ,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc, -- 'Type Group' in Brio
curr_prod_i2_life_cyc_cd,
NULL AS promo_ind, -- for future use
-- Each RPT_FAM_FWEQ_DETAIL row's eight (8) quantity columns are broken down
-- into F1_QTY from DISP_aaaa_QTY, F2_QTY from TRG_aaaa_QTY and
-- AE from aaaa_AE where 'aaaa' is equal to one of the COMPARE_TYPEs below:
-- F1_QTY
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
disp_dlvry_plan_qty
WHEN compare_type = 'NET' THEN
disp_net_qty
WHEN compare_type = 'NET_AO' THEN
disp_ao_qty
WHEN compare_type = 'NET_FTRS' THEN
disp_futr_qty
WHEN compare_type = 'NET_REPLENS' THEN
disp_replen_qty
WHEN compare_type = 'AUTH_FTRS' THEN
disp_auth_futr_qty
END as f1_qty,
-- F2_QTY
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
trg_dlvry_plan_qty
WHEN compare_type = 'NET' THEN
trg_net_qty
WHEN compare_type = 'NET_AO' THEN
trg_ao_qty
WHEN compare_type = 'NET_FTRS' THEN
trg_futr_qty
WHEN compare_type = 'NET_REPLENS' THEN
trg_replen_qty
WHEN compare_type = 'AUTH_FTRS' THEN
trg_auth_futures_qty
END as f2_qty,
-- AE
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
abs(disp_dlvry_plan_qty - trg_dlvry_plan_qty)
WHEN compare_type = 'NET' THEN
abs(disp_net_qty - trg_net_qty)
WHEN compare_type = 'NET_AO' THEN
abs(disp_ao_qty - trg_ao_qty)
WHEN compare_type = 'NET_FTRS' THEN
abs(disp_futr_qty - trg_futr_qty)
WHEN compare_type = 'NET_REPLENS' THEN
abs(disp_replen_qty - trg_replen_qty)
WHEN compare_type = 'AUTH_FTRS' THEN
abs(disp_auth_futr_qty - trg_auth_futures_qty)
END as ae,
SYSDATE AS zz_insert_tmst
FROM
-- The following in-line view provides three (3) result sets from the RPT_FAM_FWEQ_DETAIL table
-- This in-line view returns only LEVEL_NUM = 1 or LEVEL_TYPE = 'SUB_CAT' data
(SELECT plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
trg_event_type,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value,
-- The following NULL'd columns' values cannot be saved due to aggregation
NULL AS material,
NULL AS key_material,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
NULL AS sty_colr_sdesc,
NULL AS sty_grp_nbr,
NULL AS sty_grp_desc,
NULL AS matl_typ,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc, -- 'Type Group' in Brio
NULL AS curr_prod_i2_life_cyc_cd,
sum(disp_net_qty) AS disp_net_qty,
sum(trg_net_qty) AS trg_net_qty,
-- ABS(sum(trg_net_qty) - sum(disp_net_qty)) AS net_AE,
sum(disp_dlvry_plan_qty) AS disp_dlvry_plan_qty,
sum(trg_dlvry_plan_qty) AS trg_dlvry_plan_qty,
-- ABS(sum(trg_dlvry_plan_qty) - sum(disp_dlvry_plan_qty)) AS dlvry_plan_AE,
sum(disp_futr_qty) AS disp_futr_qty,
sum(trg_futr_qty) AS trg_futr_qty,
-- ABS(sum(trg_futr_qty) - sum(disp_futr_qty)) AS futr_AE,
sum(disp_ao_qty) AS disp_ao_qty,
sum(trg_ao_qty) AS trg_ao_qty,
-- ABS(sum(trg_ao_qty) - sum(disp_ao_qty)) AS ao_AE,
sum(disp_replen_qty) AS disp_replen_qty,
sum(trg_replen_qty) AS trg_replen_qty,
-- ABS(sum(trg_replen_qty) - sum(disp_replen_qty)) AS replen_AE,
sum(disp_futr_qty) AS disp_auth_futr_qty,
sum(trg_auth_futures_qty) AS trg_auth_futures_qty --,
-- ABS(sum(trg_auth_futures_qty) - sum(disp_futr_qty)) AS auth_futures_AE
FROM rpt_fam_FWEQ_detail
WHERE plan_ctry &where_plan_ctry
and level_num = 1 -- 'SUB-CAT'
-- AND (promo_ind <> 'Y' OR promo_ind IS null)
GROUP BY plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
trg_event_type,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc -- 'Type Group' in Brio
) dtl,
-- The following in-line view returns all of the different combinations
-- of comparison types in the RPT_FAM_FWEQ_DETAIL table
-- This select returns the pairing of all forecast types
(SELECT event_type, compare_type, compare_name
from
(SELECT event_type, compare_type, compare_type_description || ' to ' || compare_type_description AS compare_name
FROM rpt_fam_compare_types
WHERE event_type = 'FCST'
UNION
-- This select returns the pairing of all bookings types with forecast types
SELECT bkng.event_type, fcst.compare_type, bkng.compare_type_description || ' to ' || fcst.compare_type_description AS compare_name
FROM rpt_fam_compare_types fcst,
rpt_fam_compare_types bkng
WHERE fcst.event_type = 'FCST'
AND bkng.event_type = 'BKNG'
AND fcst.compare_type = bkng.compare_type
)
WHERE event_type || ' ' || compare_type <> 'FCST AUTH_FTRS'
AND compare_type NOT IN ('NET_SHIP', 'SHIP_NET_FTRS', 'NET_ETS', 'GROSS_FTRS')
) cmpr
-- The two (2) in-line views are joined by EVENT_TYPE (i.e. 'FCST' and 'BKNG')
-- to form a product of all RPT_FAM_FWEQ_DETAIL rows with comparison types
WHERE dtl.trg_event_type = cmpr.event_type
ORDER BY plan_ctry,
division,
season,
monthly_seq,
tier_num,
level_num,
tier_volume,
trg_event_num,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value;
COMMIT;
INSERT INTO rpt_fam_FWEQ_detail_aggr
SELECT plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
disp_event_num,
max_event_num,
compare_name,
first_display_event_num,
level_type,
level_value,
'TOTAL' AS row_type,
material,
key_material,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
sty_colr_sdesc,
sty_grp_nbr,
sty_grp_desc,
matl_typ,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc, -- 'Type Group' in Brio
curr_prod_i2_life_cyc_cd,
NULL AS promo_ind, -- for future use
-- Each RPT_FAM_FWEQ_DETAIL row's eight (8) quantity columns are broken down
-- into F1_QTY from DISP_aaaa_QTY, F2_QTY from TRG_aaaa_QTY and
-- AE from aaaa_AE where 'aaaa' is equal to one of the COMPARE_TYPEs below:
-- F1_QTY
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
disp_dlvry_plan_qty
WHEN compare_type = 'NET' THEN
disp_net_qty
WHEN compare_type = 'NET_AO' THEN
disp_ao_qty
WHEN compare_type = 'NET_FTRS' THEN
disp_futr_qty
WHEN compare_type = 'NET_REPLENS' THEN
disp_replen_qty
WHEN compare_type = 'AUTH_FTRS' THEN
disp_auth_futr_qty
END as f1_qty,
-- F2_QTY
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
trg_dlvry_plan_qty
WHEN compare_type = 'NET' THEN
trg_net_qty
WHEN compare_type = 'NET_AO' THEN
trg_ao_qty
WHEN compare_type = 'NET_FTRS' THEN
trg_futr_qty
WHEN compare_type = 'NET_REPLENS' THEN
trg_replen_qty
WHEN compare_type = 'AUTH_FTRS' THEN
trg_auth_futures_qty
END as f2_qty,
-- AE
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
abs(disp_dlvry_plan_qty - trg_dlvry_plan_qty)
WHEN compare_type = 'NET' THEN
abs(disp_net_qty - trg_net_qty)
WHEN compare_type = 'NET_AO' THEN
abs(disp_ao_qty - trg_ao_qty)
WHEN compare_type = 'NET_FTRS' THEN
abs(disp_futr_qty - trg_futr_qty)
WHEN compare_type = 'NET_REPLENS' THEN
abs(disp_replen_qty - trg_replen_qty)
WHEN compare_type = 'AUTH_FTRS' THEN
abs(disp_auth_futr_qty - trg_auth_futures_qty)
END as ae,
SYSDATE AS zz_insert_tmst
FROM
-- This in-line view returns only LEVEL_NUM = 2 or LEVEL_TYPE = 'STYLE_GROUP' data
(SELECT plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
trg_event_type,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value,
-- The following NULL'd columns' values cannot be saved due to aggregation
NULL AS material,
NULL AS key_material,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
NULL AS sty_colr_sdesc,
sty_grp_nbr,
sty_grp_desc,
NULL AS matl_typ,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc, -- 'Type Group' in Brio
NULL AS curr_prod_i2_life_cyc_cd,
sum(disp_net_qty) AS disp_net_qty,
sum(trg_net_qty) AS trg_net_qty,
-- ABS(sum(trg_net_qty) - sum(disp_net_qty)) AS net_AE,
sum(disp_dlvry_plan_qty) AS disp_dlvry_plan_qty,
sum(trg_dlvry_plan_qty) AS trg_dlvry_plan_qty,
-- ABS(sum(trg_dlvry_plan_qty) - sum(disp_dlvry_plan_qty)) AS dlvry_plan_AE,
sum(disp_futr_qty) AS disp_futr_qty,
sum(trg_futr_qty) AS trg_futr_qty,
-- ABS(sum(trg_futr_qty) - sum(disp_futr_qty)) AS futr_AE,
sum(disp_ao_qty) AS disp_ao_qty,
sum(trg_ao_qty) AS trg_ao_qty,
-- ABS(sum(trg_ao_qty) - sum(disp_ao_qty)) AS ao_AE,
sum(disp_replen_qty) AS disp_replen_qty,
sum(trg_replen_qty) AS trg_replen_qty,
-- ABS(sum(trg_replen_qty) - sum(disp_replen_qty)) AS replen_AE,
sum(disp_futr_qty) AS disp_auth_futr_qty,
sum(trg_auth_futures_qty) AS trg_auth_futures_qty --,
-- ABS(sum(trg_auth_futures_qty) - sum(disp_futr_qty)) AS auth_futures_AE
FROM rpt_fam_FWEQ_detail
WHERE plan_ctry &where_plan_ctry
and level_num = 2 -- 'STYLE-GRP' or 'STYLE'
-- AND (promo_ind <> 'Y' OR promo_ind IS null)
GROUP BY plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
trg_event_type,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
sty_grp_nbr,
sty_grp_desc,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc -- 'Type Group' in Brio
) dtl,
-- The following in-line view returns all of the different combinations
-- of comparison types in the RPT_FAM_FWEQ_DETAIL table
-- This select returns the pairing of all forecast types
(SELECT event_type, compare_type, compare_name
from
(SELECT event_type, compare_type, compare_type_description || ' to ' || compare_type_description AS compare_name
FROM rpt_fam_compare_types
WHERE event_type = 'FCST'
UNION
-- This select returns the pairing of all bookings types with forecast types
SELECT bkng.event_type, fcst.compare_type, bkng.compare_type_description || ' to ' || fcst.compare_type_description AS compare_name
FROM rpt_fam_compare_types fcst,
rpt_fam_compare_types bkng
WHERE fcst.event_type = 'FCST'
AND bkng.event_type = 'BKNG'
AND fcst.compare_type = bkng.compare_type
)
WHERE event_type || ' ' || compare_type <> 'FCST AUTH_FTRS'
AND compare_type NOT IN ('NET_SHIP', 'SHIP_NET_FTRS', 'NET_ETS', 'GROSS_FTRS')
) cmpr
-- The two (2) in-line views are joined by EVENT_TYPE (i.e. 'FCST' and 'BKNG')
-- to form a product of all RPT_FAM_FWEQ_DETAIL rows with comparison types
WHERE dtl.trg_event_type = cmpr.event_type
ORDER BY plan_ctry,
division,
season,
monthly_seq,
tier_num,
level_num,
tier_volume,
trg_event_num,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value;
COMMIT;
INSERT INTO rpt_fam_FWEQ_detail_aggr
SELECT plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
disp_event_num,
max_event_num,
compare_name,
first_display_event_num,
level_type,
level_value,
'TOTAL' AS row_type,
material,
key_material,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
sty_colr_sdesc,
sty_grp_nbr,
sty_grp_desc,
matl_typ,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc, -- 'Type Group' in Brio
curr_prod_i2_life_cyc_cd,
NULL AS promo_ind, -- for future use
-- Each RPT_FAM_FWEQ_DETAIL row's eight (8) quantity columns are broken down
-- into F1_QTY from DISP_aaaa_QTY, F2_QTY from TRG_aaaa_QTY and
-- AE from aaaa_AE where 'aaaa' is equal to one of the COMPARE_TYPEs below:
-- F1_QTY
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
disp_dlvry_plan_qty
WHEN compare_type = 'NET' THEN
disp_net_qty
WHEN compare_type = 'NET_AO' THEN
disp_ao_qty
WHEN compare_type = 'NET_FTRS' THEN
disp_futr_qty
WHEN compare_type = 'NET_REPLENS' THEN
disp_replen_qty
WHEN compare_type = 'AUTH_FTRS' THEN
disp_auth_futr_qty
END as f1_qty,
-- F2_QTY
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
trg_dlvry_plan_qty
WHEN compare_type = 'NET' THEN
trg_net_qty
WHEN compare_type = 'NET_AO' THEN
trg_ao_qty
WHEN compare_type = 'NET_FTRS' THEN
trg_futr_qty
WHEN compare_type = 'NET_REPLENS' THEN
trg_replen_qty
WHEN compare_type = 'AUTH_FTRS' THEN
trg_auth_futures_qty
END as f2_qty,
-- AE
CASE WHEN compare_type = 'DELIVERY_PLAN' THEN
abs(disp_dlvry_plan_qty - trg_dlvry_plan_qty)
WHEN compare_type = 'NET' THEN
abs(disp_net_qty - trg_net_qty)
WHEN compare_type = 'NET_AO' THEN
abs(disp_ao_qty - trg_ao_qty)
WHEN compare_type = 'NET_FTRS' THEN
abs(disp_futr_qty - trg_futr_qty)
WHEN compare_type = 'NET_REPLENS' THEN
abs(disp_replen_qty - trg_replen_qty)
WHEN compare_type = 'AUTH_FTRS' THEN
abs(disp_auth_futr_qty - trg_auth_futures_qty)
END as ae,
SYSDATE AS zz_insert_tmst
FROM
-- This in-line view returns only LEVEL_NUM = 3 or LEVEL_TYPE = 'MATL' data
(SELECT plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
trg_event_type,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value,
material,
key_material,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
sty_colr_sdesc,
sty_grp_nbr,
sty_grp_desc,
matl_typ,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc, -- 'Type Group' in Brio
curr_prod_i2_life_cyc_cd,
sum(disp_net_qty) AS disp_net_qty,
sum(trg_net_qty) AS trg_net_qty,
-- ABS(sum(trg_net_qty) - sum(disp_net_qty)) AS net_AE,
sum(disp_dlvry_plan_qty) AS disp_dlvry_plan_qty,
sum(trg_dlvry_plan_qty) AS trg_dlvry_plan_qty,
-- ABS(sum(trg_dlvry_plan_qty) - sum(disp_dlvry_plan_qty)) AS dlvry_plan_AE,
sum(disp_futr_qty) AS disp_futr_qty,
sum(trg_futr_qty) AS trg_futr_qty,
-- ABS(sum(trg_futr_qty) - sum(disp_futr_qty)) AS futr_AE,
sum(disp_ao_qty) AS disp_ao_qty,
sum(trg_ao_qty) AS trg_ao_qty,
-- ABS(sum(trg_ao_qty) - sum(disp_ao_qty)) AS ao_AE,
sum(disp_replen_qty) AS disp_replen_qty,
sum(trg_replen_qty) AS trg_replen_qty,
-- ABS(sum(trg_replen_qty) - sum(disp_replen_qty)) AS replen_AE,
sum(disp_futr_qty) AS disp_auth_futr_qty,
sum(trg_auth_futures_qty) AS trg_auth_futures_qty --,
-- ABS(sum(trg_auth_futures_qty) - sum(disp_futr_qty)) AS auth_futures_AE
FROM rpt_fam_FWEQ_detail
WHERE plan_ctry &where_plan_ctry
and level_num = 3 -- 'MATERIAL'
-- AND promo_ind <> 'Y'
GROUP BY plan_ctry,
division,
season,
monthly_seq,
bucket_month_date,
tier_num,
level_num,
tier_volume,
trg_event_num,
trg_event_type,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value,
material,
key_material,
sap_cat_cd,
sap_cat_desc,
sap_sub_cat_cd,
sap_sub_cat_desc,
sty_colr_sdesc,
sty_grp_nbr,
sty_grp_desc,
matl_typ,
sap_prod_typ_grp, -- 'Type Group' in Brio
typ_grp_desc, -- 'Type Group' in Brio
curr_prod_i2_life_cyc_cd
) dtl,
-- The following in-line view returns all of the different combinations
-- of comparison types in the RPT_FAM_FWEQ_DETAIL table
-- This select returns the pairing of all forecast types
(SELECT event_type, compare_type, compare_name
 FROM
 (SELECT event_type, compare_type, compare_type_description || ' to ' || compare_type_description AS compare_name
 FROM rpt_fam_compare_types
 WHERE event_type = 'FCST'
 UNION
 -- This select returns the pairing of all bookings types with forecast types
 SELECT bkng.event_type, fcst.compare_type, bkng.compare_type_description || ' to ' || fcst.compare_type_description AS compare_name
 FROM rpt_fam_compare_types fcst,
 rpt_fam_compare_types bkng
 WHERE fcst.event_type = 'FCST'
 AND bkng.event_type = 'BKNG'
 AND fcst.compare_type = bkng.compare_type
 )
 WHERE event_type || ' ' || compare_type <> 'FCST AUTH_FTRS'
 AND compare_type NOT IN ('NET_SHIP', 'SHIP_NET_FTRS', 'NET_ETS', 'GROSS_FTRS')
 ) cmpr
-- The two (2) in-line views are joined by EVENT_TYPE (i.e. 'FCST' and 'BKNG')
-- to form a product of all RPT_FAM_FWEQ_DETAIL rows with comparison types
WHERE dtl.trg_event_type = cmpr.event_type
ORDER BY plan_ctry,
division,
season,
monthly_seq,
tier_num,
level_num,
tier_volume,
trg_event_num,
disp_event_num,
max_event_num,
first_display_event_num,
level_type,
level_value
COMMIT
/
I agree.
SELECT ticket_no,name_of_the_passenger..
FROM ticket_master
WHERE ticket_no NOT IN
(SELECT Ticket_no
FROM ticket_cancellations
WHERE trip_id = my_trip_id);
This effectively builds a little temp table from the subquery and full-scans it for every record of ticket_master.
Change it to
SELECT ticket_no,name_of_the_passenger..
FROM ticket_master B
WHERE NOT EXISTS
(SELECT null
FROM ticket_cancellations A
WHERE A.ticket_no = B.ticket_no
AND trip_id = my_trip_id);
Then you get an index hit in both cases. -
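The NOT IN to NOT EXISTS rewrite above can be sketched in miniature with SQLite (the tables and data here are invented purely for illustration; column names follow the post):

```python
import sqlite3

# Miniature, hypothetical versions of the tables from the post.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ticket_master (ticket_no INTEGER NOT NULL, passenger TEXT);
    CREATE TABLE ticket_cancellations (ticket_no INTEGER NOT NULL, trip_id INTEGER);
    CREATE INDEX idx_cancel ON ticket_cancellations (ticket_no, trip_id);
    INSERT INTO ticket_master VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cid');
    INSERT INTO ticket_cancellations VALUES (2, 77);
""")
my_trip_id = 77

# NOT IN form: the subquery result is built and then probed per outer row.
not_in = conn.execute("""
    SELECT ticket_no, passenger FROM ticket_master
    WHERE ticket_no NOT IN
          (SELECT ticket_no FROM ticket_cancellations WHERE trip_id = ?)
    ORDER BY ticket_no
""", (my_trip_id,)).fetchall()

# Rewritten NOT EXISTS form: the correlated subquery can hit the index
# on (ticket_no, trip_id) for every ticket_master row.
not_exists = conn.execute("""
    SELECT ticket_no, passenger FROM ticket_master b
    WHERE NOT EXISTS
          (SELECT NULL FROM ticket_cancellations a
           WHERE a.ticket_no = b.ticket_no AND a.trip_id = ?)
    ORDER BY ticket_no
""", (my_trip_id,)).fetchall()

print(not_in)      # [(1, 'Ann'), (3, 'Cid')]
print(not_exists)  # same result for NOT NULL columns
```

One caveat: NOT IN and NOT EXISTS are only equivalent when the subquery column cannot return NULL; if it can, NOT IN yields no rows at all, so the rewrite implicitly assumes ticket_no is NOT NULL.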
How to improve performance when there are many TextBlocks in ItemsControl items?
Hi,
I'm trying to find a way to improve performance in a situation where an ItemsControl uses UI and data virtualization and each item in that control has 36 TextBlocks. Basically each item is a single string; there are so many TextBlocks in order to assign different brushes to different parts of the string. The performance of this construction is terrible: with 37 items visible on the screen, scrolling up or down first shows black space and then takes a second or two to render the items.
I tried different things. The most successful performance-wise was to replace the TextBlocks with Borders and draw bitmaps instead: I prepared 127 bitmaps, one per character (I need ASCII only), and used them to set Border.Background. That improved performance about 1.5 - 2 times but consumed much more memory (not surprising, of course). The required amount of memory is so big that it throws an OutOfMemoryException on the 512MB emulator, although it works on 1GB. As a result,
I don't think it is a good solution.
Another thing that worked perfectly was to replace the 36 TextBlocks with only 6 TextBlocks. In this case the performance improvement is about 5 - 10 times, but I lose the ability to assign different colors to different parts of the string. It seems that
performance degrades dramatically as the number of TextBlocks grows. Is there another technique to draw strings, where literally each character can have a different color, with decent performance?
Thank you
Alex
Using Runs inside TextBlocks gives approximately the same improvement as using bitmaps (1.5 - 2 times faster), but it is not even close to the case with just a couple of TextBlocks in the ItemsControl item. Any other ideas?
Alex -
Rewrite SQL query to improve performance
Hello,
The queries below are very time-consuming. Could you please suggest how to improve performance for these two queries?
QUERY1:
SELECT a~vbeln a~posnr a~auart a~vkorg a~vtweg a~spart
       a~vkbur a~kunnr b~matnr b~kwmeng b~vrkme
       b~netwr b~werks b~lgort b~vstel b~abgru b~erdat b~ernam
       c~faksk c~ktext c~vdatu c~zzbrsch c~kvgr1 c~augru
  INTO CORRESPONDING FIELDS OF TABLE g_t_sodata
  FROM vapma AS a INNER JOIN vbap AS b ON
       a~vbeln = b~vbeln AND a~posnr = b~posnr
       INNER JOIN vbak AS c ON a~vbeln = c~vbeln
  WHERE a~vkorg IN so_vkorg
    AND a~vtweg IN so_vtweg
    AND a~vkbur IN so_vkbur
    AND a~auart IN so_auart
    AND a~vbeln IN so_vbeln
    AND b~abgru IN so_abgru
    AND c~faksk IN so_faksk
    AND ( b~erdat GT g_f_zenkai_date OR
        ( b~erdat EQ g_f_zenkai_date AND b~erzet GE g_f_zenkai_time ) )
    AND ( b~erdat LT g_f_kaisi_date OR
        ( b~erdat EQ g_f_kaisi_date AND b~erzet LT g_f_kaisi_time ) ).
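A side note on the date/time window in QUERY1: each (erdat, erzet) pair is compared as a composite timestamp, and the two OR conditions per bound are exactly lexicographic tuple comparison. A minimal sketch with invented bounds (the names mirror the post's g_f_zenkai_* and g_f_kaisi_* variables):

```python
from datetime import date, time

# Hypothetical window bounds mirroring g_f_zenkai_* (inclusive lower)
# and g_f_kaisi_* (exclusive upper) from the post.
zenkai = (date(2024, 1, 1), time(8, 0, 0))
kaisi = (date(2024, 1, 31), time(18, 0, 0))

def in_window(erdat, erzet):
    # (erdat > d1 OR (erdat = d1 AND erzet >= t1)) AND
    # (erdat < d2 OR (erdat = d2 AND erzet < t2))
    # collapses to one lexicographic tuple comparison:
    return zenkai <= (erdat, erzet) < kaisi

print(in_window(date(2024, 1, 1), time(8, 0, 0)))    # True: lower bound inclusive
print(in_window(date(2024, 1, 31), time(18, 0, 0)))  # False: upper bound exclusive
```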
QUERY2:
SELECT a~vbeln a~posnr a~auart a~vkorg a~vtweg a~spart
       a~vkbur a~kunnr b~matnr b~kwmeng b~vrkme
       b~netwr
       b~werks b~lgort b~vstel b~abgru b~erdat b~ernam
       c~faksk c~ktext c~vdatu c~zzbrsch c~kvgr1 c~augru
  INTO CORRESPONDING FIELDS OF TABLE g_t_sodata
  FROM vapma AS a INNER JOIN vbap AS b ON
       a~vbeln = b~vbeln AND a~posnr = b~posnr
       INNER JOIN vbak AS c ON a~vbeln = c~vbeln
  WHERE a~vkorg IN so_vkorg
    AND a~vtweg IN so_vtweg
    AND a~vkbur IN so_vkbur
    AND a~auart IN so_auart
    AND a~vbeln IN so_vbeln
    AND b~abgru IN so_abgru
    AND c~faksk IN so_faksk.
Questions like this are among the favorites in this forum.
I guess the statements themselves are o.k.
The problem is the usage. There are many ranges; if they are filled, an index should support them. If no range is filled, the query is usually slow and there is little you can do.
You must find out which ranges are actually used, and under which conditions it is slow and when it is o.k.
Ask again when you can provide the actual cases.
Use SQL trace to check the performance:
The SQL Trace (ST05) Quick and Easy
I know the probability is high that you will ignore this recommendation and point to the 'Use FOR ALL ENTRIES' advice instead ... but then I cannot help you.
Siegfried
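For readers outside ABAP: the point of the ST05 trace is to see whether each statement actually used an index or fell back to a full scan. The same kind of check can be sketched with SQLite's EXPLAIN QUERY PLAN (table, index, and column names here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical miniature table with an index on a single selective column.
conn.executescript("""
    CREATE TABLE orders_mini (vbeln TEXT, abgru TEXT, erdat TEXT);
    CREATE INDEX idx_vbeln ON orders_mini (vbeln);
""")

# Condition on the indexed column: the plan reports an index search.
plan_indexed = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders_mini WHERE vbeln = '0000004711'"
).fetchall()
print(plan_indexed[0][3])  # e.g. SEARCH ... USING INDEX idx_vbeln (vbeln=?)

# Condition only on an unindexed column: a full table scan, the slow case.
plan_scan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders_mini WHERE abgru = ''"
).fetchall()
print(plan_scan[0][3])  # e.g. SCAN orders_mini
```

The same reasoning applies to the range conditions above: a filled, selective range that matches an index gives the SEARCH case; empty ranges leave only the SCAN case, which no amount of statement tuning will fix.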