BW BCS cube (0BCS_VC10) report huge performance issue
Hi Masters,
I am working on a solution for a BW report developed on the 0BCS_VC10 virtual cube.
Some of the queries are taking 15 to 20 minutes to execute.
This is a huge performance issue. We are using BW 3.5, and the reports are developed in BEx and published through the portal. Has anyone faced a similar problem? Please advise how you tackled it, and describe in detail the analysis approach you used to resolve it.
The service pack levels we are using are:
SAP_BW 350 0016 SAPKW35016
FINBASIS 300 0012 SAPK-30012INFINBASIS
BI_CONT 353 0008 SAPKIBIFP8
SEM-BW 400 0012 SAPKGS4012
Best of Luck
Chris
Ravi,
I already did that; it is not helping the performance much. Reports are still taking 15 to 20 minutes. I wanted to know whether anybody in this forum has had the same issue and how they resolved it.
Regards,
Chris
Similar Messages
-
How should I report forum performance issues?
The forums rely heavily on the caching features of browsers to improve the speed of page rendering. Performance of these forums should greatly improve after a few pages because more and more of the images, CSS and JavaScript are cached in the browser. As a consequence, when reporting forum performance issues the report should include some information on the state of the browser cache, to determine whether the issue is a browser issue or a server issue. Such detailed information is generally not available from just watching the browser screen, but needs to come from specialized tools such as performance monitor plugins and recording proxies.
The preferred method for reporting performance issues is to use the speed-reporting features built into, or available as a plugin for, a browser, for both the page you want to report a problem with and several reference pages in the site. Detailed instructions are listed below, separated out for different browsers. If possible, please use Firefox for submitting the report because it provides an export format that can be read back electronically.
Known performance issues
The performance issues with any screen with a Rich Text Editor, such as the Reply window and the compose Private Message window, have been acknowledged and improvements are being implemented.
Mozilla Firefox (preferred)
Warning: it is currently not recommended to generate a speed report when logged in. The speed report has enough detail for somebody else to hijack your session and impersonate you on the forums. If you really must report while logged in, make sure you log out your browser after generating the speed report and wait at least 4 hours before posting.
Install the Firebug plugin
Install the NetExport 0.6 extension for Firebug
Enable all Firebug panels
Switch to the "Net" panel in Firebug
Click on this link
Export the data from the Firebug Net panel
Click on this link
Export the data from the Firebug Net panel
Browse to the page where you are experiencing the performance problem.
Export the data from the Firebug Net panel
Click on this link
Export the data from the Firebug Net panel
Click on this link
Export the data from the Firebug Net panel
Browse to the page where you are experiencing the performance problem.
Export the data from the Firebug Net panel
When you report a performance problem, please attach the 6 exports from the Firebug Net panel and an explanation of how you are experiencing the issues (for instance how much slower it is than normal), and include a description of your internet connection (dial-up, dsl, cable etc.) and the country from which you are connecting. If you have non-standard tweaks to your Firefox configuration (such as pipelining enabled) or are running any plugins, please include that information in your report as well.
Google Chrome
Open the Developer Tools (Ctrl-Shift-J)
Navigate to the resources tab
Enable resource tracking.
Click on this link
Export the resource loading data.
Reset the data by disabling and enabling resource tracking
Click on this link
Export the data
Reset the data by disabling and enabling resource tracking
Navigate to the page where you experience the performance problem
Export the data
Reset the data by disabling and enabling resource tracking
Click on this link
Export the data
Reset the data by disabling and enabling resource tracking
Click on this link
Export the data
Reset the data by disabling and enabling resource tracking
Navigate to the page where you experience the performance problem
Export the data
Since Google Chrome does not have an export format for the Resource Tracking information, the best current practice is to take a screenshot and note the hover details for any resource with a tail that is longer than 25% of the total load time. When you report a performance problem, please attach the screenshots and an explanation of how you are experiencing the issues (for instance how much slower it is than normal), and include a description of your internet connection (dial-up, dsl, cable etc.) and the country from which you are connecting.
Apple Safari
The Apple Safari Web Inspector has a Resources panel similar to the Resources panel in the Google Chrome developer tools. To get there, follow these steps:
Show the menu bar.
Go to preferences
Go to the Advanced Tab
Check “Show Develop menu in menu bar”.
From the Develop menu select “Show Web Inspector”.
Collecting the performance information and exporting works exactly the same as in Google Chrome. Please refer to the instructions for Google Chrome.
Microsoft Internet Explorer
IE does not have native features to analyze web traffic. No plugins have been found that produce the required information (please let us know if we missed any). For now, please reproduce the issue with Firefox, Chrome or Safari.
Please note that due to the reliance on Javascript for the interactive effects, the performance of these forums will be much better on MS IE 8 than on previous versions of MS IE.
Hi
It works, check once again...
regards
Swami -
Huge performances issues after EHP5 upgrade
Hi All,
we are running on DB2 9.7 FP4 and recently upgraded from EHP4 to EHP5. Since then we have huge performance issues; for example, see the DB time difference below for the LP12 transaction.
I have done online reorgs of the tables and indexes, but no luck.
23-29 Jan:
steps TRT total TDBT DBT
LP12 3,131 6,449 2,059.8 427 136.4 52 16.5 5,962 1,904.3
30 Jan - 5 Feb:
LP12 2,727 127,961 46,923.7 4,489 1,646.2 50 18.3 123,432 45,263.0
Regards
Hi tthumma,
are the statistics on the DB updated regularly and completed without errors?
You can schedule them easily using transaction DB13.
Regards,
Valerio -
Huge Performance issue and RSRT
Hi BW Gurus,
We are using the BCS cube for our consolidation queries and reports. There is a huge performance problem.
I need to know what the appropriate size of the global cache should be compared to the local cache. My local cache size is 100 MB and my global cache size is 200 MB.
Also, when I go to the RSRT properties:
Read Mode is H (query to read when you navigate or expand hierarchies),
Cache Mode is 4 (persistent cache across each application server),
Persistence Mode is 3 (transparent table (BLOB)).
Do I have to change these settings? Please give your suggestions.
Points will be awarded.
Thanks
Hi Folks,
Could you please tell me where exactly we put the break point? I will paste my code. I ran SE30 and the ListCube extraction simultaneously, and it gave me the message "error generating the test frame".
FUNCTION RSSEM_CONSOLIDATION_INFOPROV3.
""Lokale Schnittstelle:
*" IMPORTING
*" REFERENCE(I_INFOPROV) TYPE RSINFOPROV
*" REFERENCE(I_KEYDATE) TYPE RSDRC_SRDATE
*" REFERENCE(I_TH_SFC) TYPE RSDD_TH_SFC
*" REFERENCE(I_TH_SFK) TYPE RSDD_TH_SFK
*" REFERENCE(I_TSX_SELDR) TYPE RSDD_TSX_SELDR
*" REFERENCE(I_FIRST_CALL) TYPE RS_BOOL
*" REFERENCE(I_PACKAGESIZE) TYPE I
*" EXPORTING
*" REFERENCE(E_T_DATA) TYPE STANDARD TABLE
*" REFERENCE(E_END_OF_DATA) TYPE RS_BOOL
*" REFERENCE(E_T_MSG) TYPE RS_T_MSG
*" EXCEPTIONS
*" ERROR_IN_BCS
statics:
* UT begin:
* this flag is switched in order to record data returned by the current query in UT
* it can only be switched on/off in debug mode.
s_record_mode type rs_bool,
s_qry_memo type char256, " at the moment, for query name
* package no. and UUID, for unit testing
s_packageno type i,
s_guid type guid_22,
* UT end.
s_first_call like i_first_call,
s_destination type rfcdest,
s_basiccube type rsinfoprov,
s_dest_back type rfcdest,
s_report type programm,
s_bw_local type rs_bool,
sr_data type ref to data,
sr_data_p type ref to data,
st_sfc type t_sfc,
st_sfk type t_sfk,
st_range type t_seqnr_range,
st_hienode type t_seqnr_hienode,
st_hienodename type t_seqnr_hienodename,
st_seltype type t_seqnr_seltype,
st_datadescr type T_DATADESCR,
s_end_of_data type rs_bool.
data:
l_ucr_data_read_3 type funcname value 'UCR_DATA_READ_3',
l_packagesize like i_packagesize,
lt_message type t_message,
ls_message like line of e_t_msg,
l_xstring type xstring,
l_nr type i.
field-symbols:
<ls_message> type s_message,
<lt_data> type standard table,
<ls_data> type any,"nos100804
<lt_data_p> type hashed table."nos100804
clear: e_t_data, e_end_of_data, e_t_msg.
* react on packagesize -1
if i_packagesize le 0. "nos050705
l_packagesize = rssem_cs_integer-max.
else.
l_packagesize = i_packagesize.
endif.
if i_first_call = rs_c_true.
s_first_call = rs_c_true.
clear s_end_of_data.
begin "nos100804
data:
lo_structdescr type ref to cl_abap_structdescr
,lo_tabledescr type ref to cl_abap_tabledescr
,lo_typedescr type ref to cl_abap_typedescr
data:
lt_key type table of abap_compname.
field-symbols <ls_component> type abap_compdescr.
create data sr_data_p like line of e_t_data.
assign sr_data_p->* to <ls_data>.
CALL METHOD CL_ABAP_STRUCTDESCR=>DESCRIBE_BY_DATA
EXPORTING
P_DATA = <ls_data>
RECEIVING
P_DESCR_REF = lo_typedescr.
lo_structdescr ?= lo_typedescr.
* collect all key components to lt_key
loop at lo_structdescr->components assigning <ls_component>.
insert <ls_component>-name into table lt_key.
if <ls_component>-name = '&KEYEND'.
exit.
endif.
endloop.
data ls_sfk like line of i_th_sfk.
data l_key type abap_compname.
loop at i_th_sfk into ls_sfk.
l_key = ls_sfk-kyfnm.
if l_key is not initial.
delete table lt_key from l_key.
endif.
l_key = ls_sfk-value_returnnm.
if l_key is not initial.
delete table lt_key from l_key.
endif.
endloop.
create data sr_data_p like hashed table of <ls_data>
with unique key (lt_key).
* create data sr_data_p like e_t_data.
create data sr_data like e_t_data.
end "nos100804
perform determine_destinations using i_infoprov
changing s_destination
s_dest_back
s_report
s_basiccube.
perform is_bw_local changing s_bw_local.
* --> convert the selection, enhance non-SID values.
* --> handle FISCPER7
data:
lt_SFC TYPE RSDRI_TH_SFC
,lt_sfk TYPE RSDRI_TH_SFK
,lt_range TYPE RSDRI_T_RANGE
,lt_RANGETAB TYPE RSDRI_TX_RANGETAB
,lt_HIER TYPE RSDRI_TSX_HIER
,lt_adj_hier type t_sfc. "nos290704
statics: so_convert type ref to lcl_sid_no_sid
, sx_seldr_fp34 type xstring
, s_fieldname_fp7 type RSALIAS
, st_sfc_fp34 TYPE RSDD_TH_SFC.
create object so_convert type lcl_sid_no_sid
exporting i_infoprov = i_infoprov.
* transform SIDs...
perform convert_importing_parameter
using i_th_sfc
i_th_sfk
i_tsx_seldr
so_convert
e_t_data
changing lt_sfc
lt_sfk
lt_range
lt_rangetab
lt_hier
sx_seldr_fp34
"Complete SELDR as XSTRING
st_sfc_fp34
"SFC of a selection with
"FISCPER3/FISCYEAR
s_fieldname_fp7.
"Name of field for 0FISCPER
"(if requested)
* This is the old routine, but ST_HIENODE and ST_HIENODENAME can
* be neglected, since they are not used at all.
perform prepare_selections
using lt_sfc
lt_sfk
lt_range
lt_rangetab
lt_hier
changing st_sfc
st_sfk
st_range
st_hienode
st_hienodename
st_seltype.
endif.
assign sr_data->* to <lt_data>.
assign sr_data_p->* to <lt_data_p>.
describe table <lt_data_p> lines l_nr.
while l_nr < l_packagesize and s_end_of_data is initial.
if s_dest_back is initial and s_bw_local = rs_c_true.
* local call
call function l_UCR_DATA_READ_3
EXPORTING
IT_SELTYPE = sT_SELTYPE
IT_HIENODE = sT_HIENODE "not used
IT_HIENODENAME = sT_HIENODENAME "not used
IT_RANGE = sT_RANGE
I_PACKAGESIZE = i_packagesize
I_KEYDATE = i_Keydate
IT_SFC = sT_SFC
IT_SFK = sT_SFK
i_infoprov = i_infoprov
i_rfcdest = s_destination
ix_seldr = sx_seldr_fp34
it_bw_sfc = st_sfc_fp34
it_bw_sfk = i_th_sfk
i_fieldname_fp7 = s_fieldname_fp7
IMPORTING
ET_DATA = <lT_DATA>
E_END_OF_DATA = s_END_OF_DATA
ET_MESSAGE = lT_MESSAGE
et_adj_hier = lt_adj_hier "nos290704
CHANGING
c_first_call = s_first_call.
elseif s_dest_back is initial and s_bw_local = rs_c_false.
* !!! Error !!! No SEM-BCS destination registered for InfoProvider!
if 1 = 2.
message e151(rssem) with i_infoprov.
endif.
ls_message-msgty = 'E'.
ls_message-msgid = 'RSSEM'.
ls_message-msgno = '151'.
ls_message-msgv1 = i_infoprov.
insert ls_message into table e_t_msg.
else.
* remote call to SEM-BCS
* call UCR_DATA_READ_3 ...
if s_first_call is not initial.
* get the data description to create the requested return structure
* in the RFC system.
perform get_datadescr
using <lt_data>
changing st_datadescr.
endif.
call function 'UCR_DATA_READ_4'
destination s_dest_back
exporting i_infoprov = i_infoprov
i_rfcdest = s_destination
i_first_call = s_first_call
i_packagesize = i_packagesize
i_keydate = i_keydate
ix_seldr = sx_seldr_fp34
it_bw_sfc = st_sfc_fp34
it_bw_sfk = i_th_sfk
it_datadescr = st_datadescr
i_fieldname_fp7 = s_fieldname_fp7
importing c_first_call = s_first_call
e_end_of_data = s_end_of_data
e_xstring = l_xstring
tables it_seltype = st_seltype
it_range = st_range
it_hienode = st_hienode "not used
it_hienodename = st_hienodename "not used
it_sfc = st_sfc
it_sfk = st_sfk
et_message = lt_message
et_adj_hier = lt_adj_hier. "nos290704.
clear <lt_data>.
if lt_message is initial.
call function 'RSSEM_UCR_DATA_UNWRAP'
EXPORTING
i_xstring = l_xstring
CHANGING
ct_data = <lt_data>.
endif.
endif.
* convert the returned data (SID & hierarchy).
call method so_convert->convert_nosid2sid
exporting it_adj_hier = lt_adj_hier[] "nos290704
CHANGING
ct_data = <lt_data>.
e_t_data = <lt_data>.
Begin "nos100804
data l_collect type sy-subrc.
l_collect = 1.
if <lt_data_p> is initial and
<lt_data> is not initial.
call function 'ABL_TABLE_HASH_STATE'
exporting
itab = <lt_data>
IMPORTING
HASH_RC = l_collect "returns 0 if hash key exist.
endif.
if l_collect is initial.
<lt_data_p> = <lt_data>.
else.
loop at <lt_data> assigning <ls_data>.
collect <ls_data> into <lt_data_p>.
endloop.
endif.
* append lines of <lt_data> to <lt_data_p>.
End "nos100804
messages
loop at lt_message assigning <ls_message>.
move-corresponding <ls_message> to ls_message.
insert ls_message into table e_t_msg.
endloop.
if e_t_msg is not initial.
raise error_in_bcs.
endif.
describe table <lt_data_p> lines l_nr.
endwhile.
if l_nr <= l_packagesize.
e_t_data = <lt_data_p>.
clear <lt_data_p>.
e_end_of_data = s_end_of_data.
else.
Begin "nos100804
<lt_data> = <lt_data_p>.
append lines of <lt_data> to l_packagesize to e_t_data.
data l_from type i.
l_from = l_packagesize + 1.
clear <lt_data_p>.
insert lines of <lt_data> from l_from into table <lt_data_p>.
clear <lt_data>.
End "nos100804
endif.
* UT begin: start to record data
if s_record_mode = rs_c_true.
if i_first_call = rs_c_true.
clear: s_guid, s_packageno.
perform prepare_unit_test_rec_param
using
e_end_of_data
i_infoprov
i_keydate
i_th_sfc
i_th_sfk
i_tsx_seldr
i_packagesize
lt_key
e_t_data
s_qry_memo
changing
s_guid.
endif.
add 1 to s_packageno.
perform prepare_unit_test_rec_data
using
s_guid
s_packageno
e_t_data
i_infoprov
e_end_of_data.
endif. "s_record_mode = rs_c_true
* UT end.
if not e_end_of_data is initial.
* clean-up
clear: s_first_call, s_destination, s_report, s_bw_local,
st_sfc, st_sfk, st_range, st_hienode, s_basiccube,
st_hienodename, st_seltype, s_dest_back, sr_data,
so_convert , s_end_of_data, sr_data_p."nos100804
free: <lt_data> , <lt_data_p>.
endif.
endfunction.
* stores query parameters into the cluster table
form prepare_unit_test_rec_param using i_end_of_data type rs_bool
i_infoprov type rsinfoprov
i_keydate type rrsrdate
i_th_sfc type RSDD_TH_SFC
i_th_sfk type RSDD_TH_SFk
i_tsx_seldr type rsdd_tsx_seldr
i_packagesize type i
it_key type standard table
it_retdata type standard table
i_s_memo type char256
changing c_guid type guid_22.
data:
ls_key type g_rssem_typ_key,
ls_cluster type rssem_rfcpack,
l_timestamp type timestampl.
* get GUID, ret component type
call function 'GUID_CREATE'
importing
ev_guid_22 = c_guid.
ls_key-idxrid = c_guid.
clear ls_key-packno.
* cluster record
get time stamp field l_timestamp.
ls_cluster-infoprov = i_infoprov.
ls_cluster-end_of_data = i_end_of_data.
ls_cluster-system_time = l_timestamp.
ls_cluster-username = sy-uname.
* return data type
data:
lo_tabtype type ref to cl_abap_tabledescr,
lo_linetype type ref to cl_abap_structdescr,
lt_datadescr type t_datadescr,
ls_datadescr like line of lt_datadescr,
lt_retcomptab type abap_compdescr_tab,
ls_retcomptab like line of lt_retcomptab,
lt_rangetab type t_seqnr_range.
lo_tabtype ?= cl_abap_typedescr=>describe_by_data( it_retdata ).
lo_linetype ?= lo_tabtype->get_table_line_type( ).
lt_retcomptab = lo_linetype->components.
* call the subroutine to use the external format of C, instead of the internal format (Unicode);
* otherwise, when creating the data type from the internal format, it won't have the same length as stored in the cluster.
PERFORM get_datadescr USING it_retdata
CHANGING lt_datadescr.
loop at lt_datadescr into ls_datadescr.
move-corresponding ls_datadescr to ls_retcomptab.
append ls_retcomptab to lt_retcomptab.
endloop.
* range, excluding
* record param
export p_infoprov from i_infoprov
p_keydate from i_keydate
p_th_sfc from i_th_sfc
p_th_sfk from i_th_sfk
p_txs_seldr from i_tsx_seldr
p_packagesize from i_packagesize
p_t_retcomptab from lt_retcomptab
p_t_key from it_key
p_memo from i_s_memo
to database rssem_rfcpack(ut)
from ls_cluster
client sy-mandt
id ls_key.
endform.
* stores return data into the cluster table
form prepare_unit_test_rec_data using
i_guid type guid_22
i_packageno type i
it_retdata type standard table
i_infoprov type rsinfoprov
i_end_of_data type rs_bool.
data:
l_lines type i,
ls_key type g_rssem_typ_key,
ls_cluster type rssem_rfcpack,
l_timestamp type timestampl.
ls_key-idxrid = i_guid.
ls_key-packno = i_packageno.
describe table it_retdata lines l_lines.
if l_lines = 0.
clear it_retdata.
endif.
* cluster record
get time stamp field l_timestamp.
ls_cluster-infoprov = i_infoprov.
ls_cluster-end_of_data = i_end_of_data.
ls_cluster-system_time = l_timestamp.
ls_cluster-username = sy-uname.
export p_t_retdata from it_retdata
to database rssem_rfcpack(ut)
from ls_cluster
client sy-mandt
id ls_key.
endform.
form convert_importing_parameter
using i_th_sfc TYPE RSDD_TH_SFC
i_th_sfk TYPE RSDD_TH_SFK
i_tsx_seldr TYPE RSDD_TSX_SELDR
io_convert type ref to lcl_sid_no_sid
i_t_data type any table
changing et_sfc TYPE RSDRI_TH_SFC
et_sfk TYPE RSDRI_TH_SFK
et_range TYPE RSDRI_T_RANGE
et_rangetab TYPE RSDRI_TX_RANGETAB
et_hier TYPE RSDRI_TSX_HIER
ex_seldr type xstring
e_th_sfc TYPE RSDD_TH_SFC
e_fieldname_fp7 type rsalias.
data lt_seldr TYPE RSDD_TSX_SELDR.
data ls_th_sfc type RRSFC01.
* 0) rename 0BCS_REQUID -> 0REQUID
data l_tsx_seldr like i_tsx_seldr.
data l_th_sfc like i_th_sfc.
data l_th_sfc2 like i_th_sfc. "nos070605
l_tsx_seldr = i_tsx_seldr.
l_th_sfc = i_th_sfc.
data ls_sfc_requid type RRSFC01.
data ls_seldr_requid type RSDD_SX_SELDR.
ls_sfc_requid-chanm = '0BCS_REQUID'.
read table l_th_sfc from ls_sfc_requid into ls_sfc_requid.
if sy-subrc = 0.
delete table l_th_sfc from ls_sfc_requid.
ls_sfc_requid-chanm = '0REQUID'.
insert ls_sfc_requid into table l_th_sfc.
endif.
ls_seldr_requid-chanm = '0BCS_REQUID'.
read table l_tsx_seldr from ls_seldr_requid into ls_seldr_requid.
if sy-subrc = 0.
delete table l_tsx_seldr from ls_seldr_requid.
ls_seldr_requid-chanm = '0REQUID'.
field-symbols: <ls_range> like line of ls_seldr_requid-range-range.
loop at ls_seldr_requid-range-range assigning <ls_range>.
check <ls_range>-keyfl is not initial. "jhn190106
if <ls_range>-sidlow is initial and <ls_range>-low is not initial.
<ls_range>-sidlow = <ls_range>-low.
clear <ls_range>-low.
endif.
if <ls_range>-sidhigh is initial and <ls_range>-high is not initial.
<ls_range>-sidhigh = <ls_range>-high.
clear <ls_range>-high.
endif.
clear <ls_range>-keyfl. "jhn190106
endloop.
insert ls_seldr_requid into table l_tsx_seldr.
endif.
* 1) convert SIDs..., so that all parameters look like the old ones.
call method io_convert->convert_sid2nosid
EXPORTING
it_sfc = l_th_sfc
it_sfk = i_th_sfk
it_seldr = l_tsx_seldr
it_data = i_t_data
IMPORTING
et_sfc = et_sfc
et_sfk = et_sfk
et_range = et_range
et_rangetab = et_rangetab
e_th_sfc = l_th_sfc2. "nos070605
* ignore the old hierarchy information:
clear et_hier.
delete et_range where chanm = '0REQUID'.
delete table et_sfc with table key chanm = '0REQUID'.
* 2) eliminate FISCPER7 from the new structures:
lt_seldr = i_tsx_seldr. "nos131004
e_th_sfc = l_th_sfc.
* the fiscper7 can be deleted completely from the SID selection, because
* it is also treated within et_range...
clear e_fieldname_fp7.
delete lt_seldr where chanm = cs_iobj_time-fiscper7."nos131004
Begin "nos131004
Ensure that there is no gap in the seldr.
data:
ls_seldr like line of lt_seldr
,l_fems_act like ls_seldr-fems
,l_fems_new like ls_seldr-fems.
loop at l_tsx_seldr into ls_seldr
where chanm ne cs_iobj_time-fiscper7.
if ls_seldr-fems ne l_fems_act.
l_fems_act = ls_seldr-fems.
add 1 to l_fems_new.
endif.
ls_seldr-fems = l_fems_new.
insert ls_seldr into table lt_seldr.
endloop.
end "nos131004
e_th_sfc = l_th_sfc2. "nos070605
* is fiscper7 in the query? (BCS always requires two fields)
read table e_th_sfc with key chanm = cs_iobj_time-fiscper7
into ls_th_sfc.
if sy-subrc = 0.
* ==> YES
* --> change the SFC, so that FISCPER3 and FISCYEAR are requested.
* The table ET_RANGE also contains the selection for
* FISCPER3/FISCYEAR.
* But since E_FIELDNAME_FP7 is also transferred to BCS, the
* transformation of the data back to FISCPER7 is done on the BCS side.
e_fieldname_fp7 = ls_th_sfc-KEYRETURNNM.
"begin nos17060
if e_fieldname_fp7 is initial.
e_fieldname_fp7 = ls_th_sfc-sidRETURNNM.
translate e_fieldname_fp7 using 'SK'.
endif.
"end nos17060
delete table e_th_sfc from ls_th_sfc.
ls_th_sfc-chanm = cs_iobj_time-fiscper3.
ls_th_sfc-keyreturnnm = ls_th_sfc-chanm.
insert ls_th_sfc into table e_th_sfc.
ls_th_sfc-chanm = cs_iobj_time-fiscyear.
ls_th_sfc-keyreturnnm = ls_th_sfc-chanm.
insert ls_th_sfc into table e_th_sfc.
endif.
* Store the SELDR in an XSTRING and unpack it just before selecting data
* in BW. It is not interpreted in BCS!
export t_seldr = lt_seldr
* Store also the SFC, because the BW systems might be on different releases/SPs.
t_bw_sfc = e_th_sfc to data buffer ex_seldr compression on.
endform. "convert_importing_parameter
*&---------------------------------------------------------------------*
*&      Form  get_datadescr
*&---------------------------------------------------------------------*
*      -->IT_DATA       data table whose line type is described
*      <--ET_DATADESCR  resulting data description
*&---------------------------------------------------------------------*
form get_datadescr
using it_data type any table
changing et_datadescr type t_datadescr.
data: lr_data type ref to data
, lo_descr TYPE REF TO CL_ABAP_TYPEDESCR
, lo_elemdescr TYPE REF TO CL_ABAP_elemDESCR
, lo_structdescr TYPE REF TO CL_ABAP_structDESCR
, lt_components type abap_component_tab
, ls_components type abap_componentdescr
, ls_datadescr type s_datadescr.
field-symbols: <ls_data> type any
, <ls_components> type abap_compdescr.
clear et_datadescr.
create data lr_data like line of it_data.
assign lr_data->* to <ls_data>.
CALL METHOD CL_ABAP_STRUCTDESCR=>DESCRIBE_BY_DATA
EXPORTING
P_DATA = <ls_data>
RECEIVING
P_DESCR_REF = lo_descr.
lo_structdescr ?= lo_descr.
CALL METHOD lo_structdescr->GET_COMPONENTS
RECEIVING
P_RESULT = lt_components.
loop at lo_structdescr->components assigning <ls_components>.
move-corresponding <ls_components> to ls_datadescr.
if ls_datadescr-type_kind = cl_abap_elemdescr=>typekind_char
or ls_datadescr-type_kind = cl_abap_elemdescr=>typekind_num.
read table lt_components with key name = <ls_components>-name
into ls_components.
if sy-subrc = 0.
lo_elemdescr ?= ls_components-type.
ls_datadescr-length = lo_elemdescr->output_length.
endif.
endif.
append ls_datadescr to et_datadescr.
endloop.
endform. "get_datadescr
Please give your inputs; I will appreciate that.
Thanks -
SQL Server 2012 Reporting Services Performance Issue - Power View
Power View reports are loading very slowly when opened in SharePoint 2013; it is taking more than 15 seconds. It is a development environment with at most 10 users and no traffic at all, but we are still not sure why it is taking such a long time.
We have 2 servers in the SharePoint farm: one for SharePoint and the other for SQL Server. I have gone through the logs in the reporting database and attached them below. Can you please help us determine from the attached sheet whether it is slow or fast, or where we are having the issue?
SQL server version is SQL 2012 SP2.
SharePoint 2013 is RTM.
Gone through the below blogs but no luck.
http://blogs.msdn.com/b/psssql/archive/2013/07/29/tracking-down-power-view-performance-problems.aspx
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/4ed01ff4-139a-4eb3-9e2e-df12a9c316ff/ssrs-2008-r2-and-sharepoint-2010-performance-problems
Thanks.
Thanks, Ram Ch
Hi Ram Ch,
According to your description, you have a performance issue when running your Power View report. Right?
In this scenario, based on your screenshot, it takes a long time on data retrieval. How is the performance when executing the query in SQL Server Management Studio? Since you mention there is no traffic at all and 15 seconds will not cause a query timeout, we suggest you optimize the query for retrieving data. Please refer to the links below:
Troubleshooting Reports: Report Performance
Please share some detail information about the data query if possible. Thanks.
Best Regards,
Simon Hou -
Crystal Reports 11g Performance Issue
We just upgraded our database to Oracle 11g from Oracle 10g and we are having significant performance issues with Crystal Reports.
Our DEV and TEST environments are on 11g and are very slow to connect to the database and attach to specific tables. It is unusable. Our PROD environment is still 10g and works fine.
We have tested with both the latest version, Crystal Reports 2008 V1, and Crystal Reports XI R2 SP6. We have also tested on several different machines.
We are using Oracle 10g ODBC drivers.
Does anyone have any recommendations?
You could also try our DataDirect drivers, available on our web site via this [link|https://smpdl.sap-ag.de/~sapidp/012002523100008666562008E/cr_datadirect53_win32.zip].
Those drivers are the most recent and do support 11g. They also include a wire protocol driver that doesn't require the Oracle client to be installed.
Also, it is highly recommended that when you update the Oracle client you uninstall 10g first. There have been issues with CR mixing the Oracle dependencies and causing problems.
Thank you
Don -
Crystal Reports 9 performance issue when we move old DB to new DB server
We are using Crystal Reports 9. We recently moved to a new database server, but the report data sources still point to the old database server, which is no longer on the network.
Now we have a performance issue: when we call the reports from the web application, it takes at least 1 minute to retrieve a report. If we manually update a report's data source to point to the new server, the report comes back in about 1 second. Is there any easy way to update all the reports to point to the new server? We have 3,000 reports, so we cannot use a manual process; it would take too long, and it will also be an issue in the future if we change the server again.
Thanks!
Ram
The 1 minute delay is due to the ODBC driver trying to connect; it is the default timeout you are seeing. There is no tool CR has to migrate/update database connection info. You can develop your own, though, or search the web for a third-party tool; I'm sure there are lots out there.
-
How to solve the report generating performance issue in BI server
-> I have used only one pivot in my report query, and the report query depends entirely on one fact table.
-> In the report I am showing data in more than 150 columns.
-> I have used report functions for 3 fields in the report. All three report functions just add two column fields and combine them into one column field.
-> The report query individually runs in 2.40 seconds. When I use it to generate a report, it takes more than 8 minutes to run. Why this delay? Can you explain?
Edited by: 873091 on Jul 18, 2011 3:50 AM
Hello Dude,
So from your post I understand that there is a report that basically takes 8 minutes to retrieve the data, but when you get the logical sql from the cache, and run it against the database, it only takes less than 20 seconds to retrieve the same data?
This should never happen. There might be a delay of more 20-40 seconds more than the time taken against the database to load the page elements etc. Is this happening for only this particular report or all reports ? Are you running the query against the same instance's database or a different one?
Try to re-boot all the BI services and run the report again, if the issue still exists, enable caching and try.
Assign points if helpful.
Thanks,
-Amith. -
Report Painter Performance Issue
Hello experts,
I have a Report Painter report which pulls data from a custom table (which has 32 million records). It is taking a lot of time.
Is there any method to improve its performance?
Hi,
Please check whether the Report Painter report selects records using the primary key or a secondary index. If not, please create a secondary index for the custom table (after consultation with Basis, because creating a secondary index will increase the load on the server for a custom table with 32 million records).
If that is not possible, extract the result set from the custom table into another table and run the Report Painter query on that, updating the new table on a daily or hourly basis.
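To illustrate the extraction option, here is a minimal sketch of such a job. Everything in it is an assumption for illustration: the table names ZBIGTAB and ZBIGTAB_SUM, the field names, and the aggregation keys stand in for whatever the real custom table and its summary table would be.

```abap
* Hypothetical extraction job: condense the large custom table ZBIGTAB
* into the smaller summary table ZBIGTAB_SUM, which the Report Painter
* report then reads instead. All table and field names are assumptions.
REPORT zfill_report_painter_summary.

DATA lt_sum TYPE STANDARD TABLE OF zbigtab_sum.

* aggregate on the database server so only condensed rows are transferred
SELECT bukrs gjahr SUM( amount )
  FROM zbigtab
  INTO CORRESPONDING FIELDS OF TABLE lt_sum
  GROUP BY bukrs gjahr.

* full refresh of the summary table
DELETE FROM zbigtab_sum.
INSERT zbigtab_sum FROM TABLE lt_sum.
COMMIT WORK.
```

Scheduled as a periodic background job (SM36), a sketch like this keeps the Report Painter query off the 32-million-row table entirely, at the cost of the summary data being only as fresh as the last run.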
aRs -
Crystal Report / Subreport - Performance Issue
I am having problems with a Crystal Report / Subreport with performance. I am using Crystal Version 11, directly in Crystal Developer. There are no other programs involved. I am linking to a SQL Server database using ODBC.
I have narrowed the problem down to this:
The main report has 4 tables. See diagram using the following link. http://screencast.com/t/TA9YYlwwl7
The subreport has 1 table, this table has > 2 Million records in it.
The main report links to the subreport with one field:
Main report field SAMPLE.PATIENT = subreport field Z_RESULT_HISTORY.PHIN
When I set the subreport linking within Crystal it automatically generates the following in the record selection for the subreport:
{Z_RESULT_HISTORY.PHIN} = {?Pm-SAMPLE.PATIENT}
The problem is that the report execution time is dependent on the field that I am using for the record selection in the main report.
Case I works lightening fast:
There are 16 records returned on the Main Report each one of these has about 1-5 records returned on the subreport.
{SAMPLE.PATIENT} = "MOUSEMICKEY" and {SAMPLE.SAMPLE_NUMBER} < 200
Case II brutally slow there are 51 records in the main report that qualify with a few records each in the subreport. By brutally slow I mean a few minutes:
{BATCH.NAME} = "HEP_ARCH-20090420-1"
In this case, I can see in the bottom right of the Crystal Preview window, that it is reading through all 2M records in the Z_RESULT_HISTORY table
Case III brutally slow - a couple minutes
{BATCH_OBJECTS.OBJECT_ID} = 111
This returns 1 record on the main report and 0 records on the subreport. Yet I can see it reading through ALL 2 Million records before the report is displayed.
What I can't understand is why the field used for record selection on the MAIN report is affecting the speed of the execution of the subreport. I need the main report to be selected by BATCH.NAME, yet I can't figure out what I can change to make the report run fast. When I select records on the main report by SAMPLE.PATIENT, I do NOT see the subreport reading all 2M records; the report preview is returned in less than 1 second.
Any help would be much appreciated.
Lindell - your response was very helpful. I was able to create a SQL Command on the subreport and change the subreport links on the main report to use the parameters in the SQL Command. The report is now very fast, even when there are lots of detail records on the main report. It is properly executing the query and not reading all 2M records into memory for each subreport.
I'm still totally confused as to why Crystal was misbehaving so poorly and changing how the subreport queried the database when the only change was the fields used for record selection on the main report. It really looks like a bug to me, but maybe someone can still enlighten me.
thanks again so much. We are in Parallel testing for a production rollout - and the users are MUCH happier now! (which of course makes me much happier!)
Susan
PS: I was not aware of the SQL Command and had never used it before. It took me a little while to figure out how to do the linking, but it is very powerful. Thanks again. -
Powerbook G4 1.67GHz 2GB ram ... HUGE performance issues
Hardware:
17" Powerbook G4 1.67GHz with 2GB ram
System:
10.5.4 Leopard
*Ever since I upgraded to Leopard, I have noticed tremendous latency / pinwheel issues in many areas, such as:*
1. Switching between running applications hangs up sometimes as long as 15 seconds
2. Video playback in quicktime is choppy
3. Various programs take a tremendous amount of time to open
4. Opening relatively small illustrator and or photoshop files pinwheels for unusually long time
5. Opening finder windows unusually slow
One of the MOST frustrating of all issues:
6. The Powerbook heats up rather quickly... gets very hot, and the fans have NEVER come on... VERY frustrating, since I would have no problem with noise if it meant better performance...
_*Overall, the entire computer feels VERY sluggish*_
I have tried clean installs as well as regular maintenance... and upgraded my RAM to the max of 2GB.
Is there anything else I can do? Is anyone else having similar problems, and/or does anyone know the best way to repair this?
DESPERATE for a solution here... while I realize the G4 can only handle so much, I just feel like it can't possibly be this slow under Leopard... or can it?!
*ANY HELP AT ALL greatly appreciated...*
Best,
Mike
+Mac user since 1989+
Hi,
Just to chime in, I am running a PB G4 1.67, and I am running Leopard (10.5.4) on it just fine. In fact, I am *more than* pleased with the fantastic performance. I am a very very heavy user (enterprise software developer permanently running 101 things simultaneously on his machine) and I have no problems.
First of all, try to figure out what is chomping performance. Is it CPU? Fire up Activity Monitor, go to the CPU tab, list "All Processes", and order by CPU usage. See if a process is swamping your CPU. Until I let it run overnight to index my 250GB drive, the Spotlight indexing engine (mds) was eating all my CPU. Thereafter, it's fine. Or you can disable it.
So yeah - let's first figure out what is causing your CPU to over-work (and hence, your machine in general to heat up, they do get hot, especially in hot climates) and work from there. No reason to suspect hardware failure yet... -
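The Activity Monitor check described above can also be done from the Terminal; this is a generic Unix sketch (not specific to this poster's machine) that lists the heaviest CPU consumers:

```shell
# Show the five processes using the most CPU (%CPU is column 3 of `ps aux`).
# If mds or mdworker dominates, Spotlight is still busy indexing the drive.
ps aux | sort -rnk 3,3 | head -n 5
```

Running this a few times over a minute gives a rough picture of whether one process is pinned or the load is spread out.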
Report Script Performance Issues
Essbase Nation,
We have a report script that extracts a full 12 months worth of history in 7 minutes. The script that is used to extract the period dimension is as follows:
<Link (<Descendants("Dec YTD") And <Lev("Period",0))
The line above is then changed to pull just one month of data, and now the report script runs for 8 hours.
Please advise as to why the difference in performance.
Thank you.
ID 581459.1:
Goal
How to optimize Hyperion Essbase Report Scripts?
Solution
To optimize your Report follow the suggested guidelines below:
1. Decrease the amount of Dynamic Calcs in your outline. If you have to, make it dynamic calc and store.
2. Use the <Sparse command at the beginning of the report script.
3. Use the <Column command for the dense dimensions instead of using the Page command. The order of the dense dimensions in the Column command should
be the same as the order of the dense dimension in the outline. (Ex. <Column (D1, D2)).
4. Use the <Row command for the sparse dimensions. The order of the sparse dimensions in the Row command should be in the opposite order of the sparse
dimension in the outline. (Ex. <Row (S3, S2, S1)). This is commonly called sparse bottom up method.
5. If the user does not want to use the <Column command for the dense dimensions, then the dense dimensions should be placed at the end of the <Row command.
(Ex. <Row (S3, S2, S1, D1, D2)).
6. Do not use the Page command, use the Column command instead. -
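Putting guidelines 2-4 together, a skeleton report script might look like this (the dimension names and dense/sparse assignments are hypothetical, for illustration only):

```
// Assumed outline: dense = Period, Measures; sparse = Product, Market, Scenario
<Sparse
<Column ("Period", "Measures")
<Row ("Scenario", "Market", "Product")
<Link (<Descendants("Dec YTD") AND <Lev("Period", 0))
!
```

Note that the <Row list reverses the outline order of the sparse dimensions (sparse bottom-up), the <Column list keeps the dense outline order, and the terminating ! executes the report.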
Sap Crystal Reports 2013 Performance Issues
Hi,
We are planning to migrate our reports to SAP Crystal Reports 2013. We currently have a licensed Crystal Reports XI Developer edition, and we have downloaded the 30-day evaluation (trial) version of SAP Crystal Reports 2013 for testing only. We have one report, created in Crystal Reports XI, that runs within 5 minutes for 2 lakh (200,000) records; the same report on the SAP Crystal Reports 2013 trial, with the same connection and the same report parameters, took 10 minutes for the same records. Every report takes more time in the SAP Crystal Reports 2013 trial than in the licensed Crystal Reports XI Developer version.
Why is this happening? Is it an issue with the evaluation (trial) version of SAP Crystal Reports 2013?
Kind Regards,
Ganesh
Hi Jamie,
what database are you using?
Oracle
what specific db version?
Oracle 11g
what connection method?
Oracle Server
We are using the same setup for both the licensed Crystal Reports XI version and the SAP Crystal Reports 2013 evaluation (trial) version, but the issue now is that the report takes 45 minutes in the SAP Crystal Reports 2013 trial for 2 lakh records, while the same report takes only 5 minutes in the licensed Crystal Reports XI version. Is this a licensing issue, since we are now running the report on the trial version of SAP Crystal Reports 2013? -
Report Script- Performance Issue
Hi,
I ran this report script and it is taking around 2 hours to complete. Is there any possibility of tuning this script further? Please advise where else we can tune it.
Thanks,
UB.
Performance issue in Webi rep when using custom object from SAP BW univ
Hi All,
I had to design a report that runs for the previous day, so we created a custom object that ranks the dates and a pre-defined filter that picks the date with the highest rank.
the definition for the rank variable(in universe) is as follows:
<expression>Rank([0CALDAY].Currentmember, Order([0CALDAY].Currentmember.Level.Members ,Rank([0CALDAY].Currentmember,[0CALDAY].Currentmember.Level.Members), BDESC))</expression>
Now to the issue I am currently facing,
The report works fine when we run it in a test environment, i.e. with a small amount of data.
Our production environment has millions of rows of data, and when I run the report with the filter it just hangs. I think this is because it tries to rank all the dates (to find the max date), resulting in a huge performance issue.
Can someone suggest how this performance issue can be overcome?
I work on BO XI3.1 with SAP BW.
Thanks and Regards,
Smitha.
Hi,
Using a variable on the BW side is not feasible since we want to use the same BW query for a few other reports as well.
Could you please explain what you mean by 'use the LAG function'? How can it be used in this scenario?
Thanks and Regards,
Smitha Mohan.
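On the 'previous day' question: one way to sidestep ranking every [0CALDAY] member is to navigate to the wanted member directly. An untested MDX sketch, reusing the object names from this thread (verify against your MDX provider before relying on it):

```
-- Take the last member of the calendar-day level and step back one
-- member with Lag(1), instead of computing Rank() across the whole level.
Tail([0CALDAY].Currentmember.Level.Members, 1).Item(0).Lag(1)
```

Because this addresses two members instead of ordering millions, it avoids the full-level scan that the Rank/Order expression triggers.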