Powerbook G4 1.67GHz 2GB ram ... HUGE performance issues
Hardware:
17" Powerbook G4 1.67GHz with 2GB ram
System:
10.5.4 Leopard
*Ever since I upgraded to Leopard, I have noticed tremendous latency / pinwheel issues in many areas, such as:*
1. Switching between running applications sometimes hangs for as long as 15 seconds
2. Video playback in QuickTime is choppy
3. Various programs take a tremendous amount of time to open
4. Opening relatively small Illustrator and/or Photoshop files pinwheels for an unusually long time
5. Opening Finder windows is unusually slow
One of the MOST frustrating of all issues:
6. The Powerbook heats up rather quickly... gets very hot, and the fans have NEVER come on... VERY frustrating, since I would have no problem with the noise if it meant better performance...
_*Overall, the entire computer feels VERY sluggish*_
I have tried clean installs as well as regular maintenance... and upgraded my RAM to the max of 2GB
Is there anything else I can do? Anyone else having similar problems and/or know the best way to repair?
DESPERATE for a solution here... while I realize the G4 can only handle so much, I just feel like it can't possibly be this slow under Leopard... or can it?!
*ANY HELP AT ALL greatly appreciated...*
Best,
Mike
+Mac user since 1989+
Hi,
Just to chime in, I am running a PB G4 1.67, and I am running Leopard (10.5.4) on it just fine. In fact, I am *more than* pleased with the fantastic performance. I am a very very heavy user (enterprise software developer permanently running 101 things simultaneously on his machine) and I have no problems.
First of all, try to figure out what is chomping performance. Is it CPU? Fire up Activity Monitor, go to the CPU tab, list "All Processes" and order by CPU usage. See if a process is swamping your CPU. Until I let it run overnight to index my 250 GB drive, the Spotlight indexing engine (mds) was eating all my CPU. Thereafter, it was fine. Or you can disable it.
So yeah - let's first figure out what is causing your CPU to over-work (and hence, your machine in general to heat up, they do get hot, especially in hot climates) and work from there. No reason to suspect hardware failure yet...
Similar Messages
-
BW BCS cube(0bcs_vc10 ) Report huge performance issue
Hi Masters,
I am working on a solution for a BW report developed on the 0bcs_vc10 virtual cube.
Some of the queries take 15 to 20 minutes or more to execute the report.
This is a huge performance issue. We are using BW 3.5, and the report is developed in BEx and published through the portal. If anyone has faced a similar problem, please advise how you tackled it, and please give the detailed analysis approach you used to resolve it.
Current service pack we are using is
SAP_BW 350 0016 SAPKW35016
FINBASIS 300 0012 SAPK-30012INFINBASIS
BI_CONT 353 0008 SAPKIBIFP8
SEM-BW 400 0012 SAPKGS4012
Best of Luck
Chris
BW BCS cube(0bcs_vc10 ) Report huge performance issue

Ravi,
I already did that; it is not helping the performance much. Reports are taking 15 to 20 minutes. I wanted to know if anybody in this forum has had the same issue and how they resolved it.
Regards,
Chris -
Huge performances issues after EHP5 upgrade
Hi All,
we are running on DB2 9.7 FP4 and recently upgraded from EHP4 to EHP5. We have huge performance issues; for example, see the DB time difference below for LP12 transactions.
I have done an online reorg of the tables and indexes, but no luck.
from 23-29/jan
steps TRT total TDBT DBT
LP12 3,131 6,449 2,059.8 427 136.4 52 16.5 5,962 1,904.3
30/jan-5feb
LP12 2,727 127,961 46,923.7 4,489 1,646.2 50 18.3 123,432 45,263.0
Regards

Hi tthumma,
are the statistics on the DB updated regularly and completed without errors?
You can schedule easily using transaction DB13
Regards,
Valerio -
Satellite A110-103 - 2GB RAM - gaming performance tuning - SOLUTION
Hello,
For those of you who have 2Gb RAM installed on laptop and want to play games like FEAR2-PROJECT ORIGIN this is a performance tuning that I tested on my Toshiba Satellite A110-103!
If you want to add 2Gb of RAM or more to your laptop please read [this post and my experience with RAM upgrades|http://forums.computers.toshiba-europe.com/forums/message.jspa?messageID=153005#153005]! It might help you save money!
Before having 2 GB of RAM installed I used to have a 4096 MB paging file set in Windows for the C drive.
After I installed the 2 GB of RAM I wanted to play FEAR2-Project Origin at low settings.
However, the game played horribly, with low framerates. So I wanted to know if there is a way to disable virtual memory (the paging file) and only use real RAM.
Most threads regarding disabling the paging file stated that there is no way, or no point, in doing that!
Well I decided to give it a try!
I disabled the paging file from *Start->Control Panel->System->Advanced->Performance->Settings->Advanced->Virtual memory->Change->No paging file*. I rebooted my laptop, and after the reboot, in my case, I could see a big difference in speed!
TEST: I opened FEAR2-Project Origin, started to play, and my character's movements were much more responsive and without lag! :D
Conclusion: In my case, disabling virtual memory (the paging file) really boosted my gaming experience! And I play on an ATI X200M card with 256 MB of shared memory!
Test this, and if you get good results, please post them here!
Best regards,
Marius Gigi

Hi Marius
Thanks for this message
Usually the paging file should help to increase the effective RAM capacity.
Using the page file, data that would otherwise be stored in RAM can be moved to the HDD, and therefore parts of RAM are freed.
However, thanks for this feedback and your observations! -
Having difficulty; message says select volume destination to install iMovie 09 6.0.3? How is this done? Thanx
There is a red exclamation mark in the HD icon, and following the directions leads to continuing instructions requiring an earlier version: 6.0.3, 6.0.2, 6.0.1, etc. Thanx
-
Huge Performance issue and RSRT
Hi BW Gurus,
We are using the BCS cube for our consolidation queries and reports. There is a huge performance problem.
I need to know what the appropriate size of the global cache should be compared to the local cache. My global cache size is 100 MB and my local cache size is 200 MB.
Also, when I go to RSRT properties:
Read Mode: H (query to read when you navigate or expand hierarchies)
Cache Mode: 4 (persistent cache across each application server)
Persistence Mode: 3 (transparent table (BLOB))
Do I have to change these settings? Please give your suggestions.
Will appreciate with lots of points.
Thanks

Hi Folks,
Could you all please tell me where exactly we put the breakpoint? I will paste my code below. I ran SE30 and the list cube extraction simultaneously, and it gave me the message "error generating the test frame".
FUNCTION RSSEM_CONSOLIDATION_INFOPROV3.
*" Lokale Schnittstelle:
*" IMPORTING
*" REFERENCE(I_INFOPROV) TYPE RSINFOPROV
*" REFERENCE(I_KEYDATE) TYPE RSDRC_SRDATE
*" REFERENCE(I_TH_SFC) TYPE RSDD_TH_SFC
*" REFERENCE(I_TH_SFK) TYPE RSDD_TH_SFK
*" REFERENCE(I_TSX_SELDR) TYPE RSDD_TSX_SELDR
*" REFERENCE(I_FIRST_CALL) TYPE RS_BOOL
*" REFERENCE(I_PACKAGESIZE) TYPE I
*" EXPORTING
*" REFERENCE(E_T_DATA) TYPE STANDARD TABLE
*" REFERENCE(E_END_OF_DATA) TYPE RS_BOOL
*" REFERENCE(E_T_MSG) TYPE RS_T_MSG
*" EXCEPTIONS
*" ERROR_IN_BCS
statics:
* UT begin:
* this flag is switched in order to record data returned by the current query in UT
* it can only be switched on/off in debug mode.
s_record_mode type rs_bool,
s_qry_memo type char256, " at the moment, for query name
* package No, UUID, for unit testing
s_packageno type i,
s_guid type guid_22,
* UT end.
s_first_call like i_first_call,
s_destination type rfcdest,
s_basiccube type rsinfoprov,
s_dest_back type rfcdest,
s_report type programm,
s_bw_local type rs_bool,
sr_data type ref to data,
sr_data_p type ref to data,
st_sfc type t_sfc,
st_sfk type t_sfk,
st_range type t_seqnr_range,
st_hienode type t_seqnr_hienode,
st_hienodename type t_seqnr_hienodename,
st_seltype type t_seqnr_seltype,
st_datadescr type T_DATADESCR,
s_end_of_data type rs_bool.
data:
l_ucr_data_read_3 type funcname value 'UCR_DATA_READ_3',
l_packagesize like i_packagesize,
lt_message type t_message,
ls_message like line of e_t_msg,
l_xstring type xstring,
l_nr type i.
field-symbols:
<ls_message> type s_message,
<lt_data> type standard table,
<ls_data> type any,"nos100804
<lt_data_p> type hashed table."nos100804
clear: e_t_data, e_end_of_data, e_t_msg.
* react on packagesize -1
if i_packagesize le 0. "nos050705
l_packagesize = rssem_cs_integer-max.
else.
l_packagesize = i_packagesize.
endif.
if i_first_call = rs_c_true.
s_first_call = rs_c_true.
clear s_end_of_data.
* begin "nos100804
data:
lo_structdescr type ref to cl_abap_structdescr
,lo_tabledescr type ref to cl_abap_tabledescr
,lo_typedescr type ref to cl_abap_typedescr.
data:
lt_key type table of abap_compname.
field-symbols <ls_component> type abap_compdescr.
create data sr_data_p like line of e_t_data.
assign sr_data_p->* to <ls_data>.
CALL METHOD CL_ABAP_STRUCTDESCR=>DESCRIBE_BY_DATA
EXPORTING
P_DATA = <ls_data>
RECEIVING
P_DESCR_REF = lo_typedescr.
lo_structdescr ?= lo_typedescr.
* collect all key components to lt_key
loop at lo_structdescr->components assigning <ls_component>.
insert <ls_component>-name into table lt_key.
if <ls_component>-name = '&KEYEND'.
exit.
endif.
endloop.
data ls_sfk like line of i_th_sfk.
data l_key type abap_compname.
loop at i_th_sfk into ls_sfk.
l_key = ls_sfk-kyfnm.
if l_key is not initial.
delete table lt_key from l_key.
endif.
l_key = ls_sfk-value_returnnm.
if l_key is not initial.
delete table lt_key from l_key.
endif.
endloop.
create data sr_data_p like hashed table of <ls_data>
with unique key (lt_key).
* create data sr_data_p like e_t_data.
create data sr_data like e_t_data.
* end "nos100804
perform determine_destinations using i_infoprov
changing s_destination
s_dest_back
s_report
s_basiccube.
perform is_bw_local changing s_bw_local.
***--> convert the selection, enhance non-Sid-values.
* --> Handle fiscper7
data:
lt_SFC TYPE RSDRI_TH_SFC
,lt_sfk TYPE RSDRI_TH_SFK
,lt_range TYPE RSDRI_T_RANGE
,lt_RANGETAB TYPE RSDRI_TX_RANGETAB
,lt_HIER TYPE RSDRI_TSX_HIER
,lt_adj_hier type t_sfc "nos290704
statics: so_convert type ref to lcl_sid_no_sid
, sx_seldr_fp34 type xstring
, s_fieldname_fp7 type RSALIAS
, st_sfc_fp34 TYPE RSDD_TH_SFC.
create object so_convert type lcl_sid_no_sid
exporting i_infoprov = i_infoprov.
* Transform SIDs...
perform convert_importing_parameter
using i_th_sfc
i_th_sfk
i_tsx_seldr
so_convert
e_t_data
changing lt_sfc
lt_sfk
lt_range
lt_rangetab
lt_hier
sx_seldr_fp34
"Complete SELDR as XSTRING
st_sfc_fp34
"SFC of a selection with
"FISCPER3/FISCYEAR
s_fieldname_fp7.
"Name of Field for 0FISCPER
"(if requested)
* This is the old routine, but ST_HIENODE and ST_HIENODENAME can
* be neglected, since they are not used at all.
perform prepare_selections
using lt_sfc
lt_sfk
lt_range
lt_rangetab
lt_hier
changing st_sfc
st_sfk
st_range
st_hienode
st_hienodename
st_seltype.
endif.
assign sr_data->* to <lt_data>.
assign sr_data_p->* to <lt_data_p>.
describe table <lt_data_p> lines l_nr.
while l_nr < l_packagesize and s_end_of_data is initial.
if s_dest_back is initial and s_bw_local = rs_c_true.
* Local call
call function l_UCR_DATA_READ_3
EXPORTING
IT_SELTYPE = sT_SELTYPE
IT_HIENODE = sT_HIENODE "not used
IT_HIENODENAME = sT_HIENODENAME "not used
IT_RANGE = sT_RANGE
I_PACKAGESIZE = i_packagesize
I_KEYDATE = i_Keydate
IT_SFC = sT_SFC
IT_SFK = sT_SFK
i_infoprov = i_infoprov
i_rfcdest = s_destination
ix_seldr = sx_seldr_fp34
it_bw_sfc = st_sfc_fp34
it_bw_sfk = i_th_sfk
i_fieldname_fp7 = s_fieldname_fp7
IMPORTING
ET_DATA = <lT_DATA>
E_END_OF_DATA = s_END_OF_DATA
ET_MESSAGE = lT_MESSAGE
et_adj_hier = lt_adj_hier "nos290704
CHANGING
c_first_call = s_first_call.
elseif s_dest_back is initial and s_bw_local = rs_c_false.
* !!! Error !!! No SEM-BCS destination registered for infoprovider!
if 1 = 2.
message e151(rssem) with i_infoprov.
endif.
ls_message-msgty = 'E'.
ls_message-msgid = 'RSSEM'.
ls_message-msgno = '151'.
ls_message-msgv1 = i_infoprov.
insert ls_message into table e_t_msg.
else.
* remote call to SEM-BCS
** Call UCR_DATA_READ_3 ...
if s_first_call is not initial.
* get the datadescription to create the requested return-structure
* in the RFC-System.
perform get_datadescr
using <lt_data>
changing st_datadescr.
endif.
call function 'UCR_DATA_READ_4'
destination s_dest_back
exporting i_infoprov = i_infoprov
i_rfcdest = s_destination
i_first_call = s_first_call
i_packagesize = i_packagesize
i_keydate = i_keydate
ix_seldr = sx_seldr_fp34
it_bw_sfc = st_sfc_fp34
it_bw_sfk = i_th_sfk
it_datadescr = st_datadescr
i_fieldname_fp7 = s_fieldname_fp7
importing c_first_call = s_first_call
e_end_of_data = s_end_of_data
e_xstring = l_xstring
tables it_seltype = st_seltype
it_range = st_range
it_hienode = st_hienode "not used
it_hienodename = st_hienodename "not used
it_sfc = st_sfc
it_sfk = st_sfk
et_message = lt_message
et_adj_hier = lt_adj_hier. "nos290704.
clear <lt_data>.
if lt_message is initial.
call function 'RSSEM_UCR_DATA_UNWRAP'
EXPORTING
i_xstring = l_xstring
CHANGING
ct_data = <lt_data>.
endif.
endif.
* convert the returned data (SID & Hierarchy).
call method so_convert->convert_nosid2sid
exporting it_adj_hier = lt_adj_hier[] "nos290704
CHANGING
ct_data = <lt_data>.
e_t_data = <lt_data>.
* Begin "nos100804
data l_collect type sy-subrc.
l_collect = 1.
if <lt_data_p> is initial and
<lt_data> is not initial.
call function 'ABL_TABLE_HASH_STATE'
exporting
itab = <lt_data>
IMPORTING
HASH_RC = l_collect. "returns 0 if hash key exists.
endif.
if l_collect is initial.
<lt_data_p> = <lt_data>.
else.
loop at <lt_data> assigning <ls_data>.
collect <ls_data> into <lt_data_p>.
endloop.
endif.
* append lines of <lt_data> to <lt_data_p>.
* End "nos100804
* messages
loop at lt_message assigning <ls_message>.
move-corresponding <ls_message> to ls_message.
insert ls_message into table e_t_msg.
endloop.
if e_t_msg is not initial.
raise error_in_bcs.
endif.
describe table <lt_data_p> lines l_nr.
endwhile.
if l_nr <= l_packagesize.
e_t_data = <lt_data_p>.
clear <lt_data_p>.
e_end_of_data = s_end_of_data.
else.
* Begin "nos100804
<lt_data> = <lt_data_p>.
append lines of <lt_data> to l_packagesize to e_t_data.
data l_from type i.
l_from = l_packagesize + 1.
clear <lt_data_p>.
insert lines of <lt_data> from l_from into table <lt_data_p>.
clear <lt_data>.
* End "nos100804
endif.
* UT begin: start to record data
if s_record_mode = rs_c_true.
if i_first_call = rs_c_true.
clear: s_guid, s_packageno.
perform prepare_unit_test_rec_param
using
e_end_of_data
i_infoprov
i_keydate
i_th_sfc
i_th_sfk
i_tsx_seldr
i_packagesize
lt_key
e_t_data
s_qry_memo
changing
s_guid.
endif.
add 1 to s_packageno.
perform prepare_unit_test_rec_data
using
s_guid
s_packageno
e_t_data
i_infoprov
e_end_of_data.
endif. "s_record_mode = rs_c_true
* UT end.
if not e_end_of_data is initial.
* clean-up
clear: s_first_call, s_destination, s_report, s_bw_local,
st_sfc, st_sfk, st_range, st_hienode, s_basiccube,
st_hienodename, st_seltype, s_dest_back, sr_data,
so_convert , s_end_of_data, sr_data_p."nos100804
free: <lt_data> , <lt_data_p>.
endif.
endfunction.
* It stores query parameters into cluster table
form prepare_unit_test_rec_param using i_end_of_data type rs_bool
i_infoprov type rsinfoprov
i_keydate type rrsrdate
i_th_sfc type RSDD_TH_SFC
i_th_sfk type RSDD_TH_SFk
i_tsx_seldr type rsdd_tsx_seldr
i_packagesize type i
it_key type standard table
it_retdata type standard table
i_s_memo type char256
changing c_guid type guid_22.
data:
ls_key type g_rssem_typ_key,
ls_cluster type rssem_rfcpack,
l_timestamp type timestampl.
* get GUID, ret component type
call function 'GUID_CREATE'
importing
ev_guid_22 = c_guid.
ls_key-idxrid = c_guid.
clear ls_key-packno.
* cluster record
get time stamp field l_timestamp.
ls_cluster-infoprov = i_infoprov.
ls_cluster-end_of_data = i_end_of_data.
ls_cluster-system_time = l_timestamp.
ls_cluster-username = sy-uname.
* return data type
data:
lo_tabtype type ref to cl_abap_tabledescr,
lo_linetype type ref to cl_abap_structdescr,
lt_datadescr type t_datadescr,
ls_datadescr like line of lt_datadescr,
lt_retcomptab type abap_compdescr_tab,
ls_retcomptab like line of lt_retcomptab,
lt_rangetab type t_seqnr_range.
lo_tabtype ?= cl_abap_typedescr=>describe_by_data( it_retdata ).
lo_linetype ?= lo_tabtype->get_table_line_type( ).
lt_retcomptab = lo_linetype->components.
* call the sub procedure to use the external format of C, instead of the internal format (unicode).
* otherwise, when creating a data type from the internal format, it won't be the same length as stored in the cluster.
PERFORM get_datadescr USING it_retdata
CHANGING lt_datadescr.
loop at lt_datadescr into ls_datadescr.
move-corresponding ls_datadescr to ls_retcomptab.
append ls_retcomptab to lt_retcomptab.
endloop.
* range, excluding
* record param
export p_infoprov from i_infoprov
p_keydate from i_keydate
p_th_sfc from i_th_sfc
p_th_sfk from i_th_sfk
p_txs_seldr from i_tsx_seldr
p_packagesize from i_packagesize
p_t_retcomptab from lt_retcomptab
p_t_key from it_key
p_memo from i_s_memo
to database rssem_rfcpack(ut)
from ls_cluster
client sy-mandt
id ls_key.
endform.
* It stores return data to cluster table
form prepare_unit_test_rec_data using
i_guid type guid_22
i_packageno type i
it_retdata type standard table
i_infoprov type rsinfoprov
i_end_of_data type rs_bool.
data:
l_lines type i,
ls_key type g_rssem_typ_key,
ls_cluster type rssem_rfcpack,
l_timestamp type timestampl.
ls_key-idxrid = i_guid.
ls_key-packno = i_packageno.
describe table it_retdata lines l_lines.
if l_lines = 0.
clear it_retdata.
endif.
* cluster record
get time stamp field l_timestamp.
ls_cluster-infoprov = i_infoprov.
ls_cluster-end_of_data = i_end_of_data.
ls_cluster-system_time = l_timestamp.
ls_cluster-username = sy-uname.
export p_t_retdata from it_retdata
to database rssem_rfcpack(ut)
from ls_cluster
client sy-mandt
id ls_key.
endform.
form convert_importing_parameter
using i_th_sfc TYPE RSDD_TH_SFC
i_th_sfk TYPE RSDD_TH_SFK
i_tsx_seldr TYPE RSDD_TSX_SELDR
io_convert type ref to lcl_sid_no_sid
i_t_data type any table
changing et_sfc TYPE RSDRI_TH_SFC
et_sfk TYPE RSDRI_TH_SFK
et_range TYPE RSDRI_T_RANGE
et_rangetab TYPE RSDRI_TX_RANGETAB
et_hier TYPE RSDRI_TSX_HIER
ex_seldr type xstring
e_th_sfc TYPE RSDD_TH_SFC
e_fieldname_fp7 type rsalias.
data lt_seldr TYPE RSDD_TSX_SELDR.
data ls_th_sfc type RRSFC01.
* 0) rename 0BCS_REQUID -> 0REQUID
data l_tsx_seldr like i_tsx_seldr.
data l_th_sfc like i_th_sfc.
data l_th_sfc2 like i_th_sfc. "nos070605
l_tsx_seldr = i_tsx_seldr.
l_th_sfc = i_th_sfc.
data ls_sfc_requid type RRSFC01.
data ls_seldr_requid type RSDD_SX_SELDR.
ls_sfc_requid-chanm = '0BCS_REQUID'.
read table l_th_sfc from ls_sfc_requid into ls_sfc_requid.
if sy-subrc = 0.
delete table l_th_sfc from ls_sfc_requid.
ls_sfc_requid-chanm = '0REQUID'.
insert ls_sfc_requid into table l_th_sfc.
endif.
ls_seldr_requid-chanm = '0BCS_REQUID'.
read table l_tsx_seldr from ls_seldr_requid into ls_seldr_requid.
if sy-subrc = 0.
delete table l_tsx_seldr from ls_seldr_requid.
ls_seldr_requid-chanm = '0REQUID'.
field-symbols: <ls_range> like line of ls_seldr_requid-range-range.
loop at ls_seldr_requid-range-range assigning <ls_range>.
check <ls_range>-keyfl is not initial. "jhn190106
if <ls_range>-sidlow is initial and <ls_range>-low is not initial.
<ls_range>-sidlow = <ls_range>-low.
clear <ls_range>-low.
endif.
if <ls_range>-sidhigh is initial and <ls_range>-high is not initial.
<ls_range>-sidhigh = <ls_range>-high.
clear <ls_range>-high.
endif.
clear <ls_range>-keyfl. "jhn190106
endloop.
insert ls_seldr_requid into table l_tsx_seldr.
endif.
*1) Convert SIDs..., so that all parameter look like the old ones.
call method io_convert->convert_sid2nosid
EXPORTING
it_sfc = l_th_sfc
it_sfk = i_th_sfk
it_seldr = l_tsx_seldr
it_data = i_t_data
IMPORTING
et_sfc = et_sfc
et_sfk = et_sfk
et_range = et_range
et_rangetab = et_rangetab
e_th_sfc = l_th_sfc2. "nos070605
* Ignore the old hierarchy information:
clear et_hier.
delete et_range where chanm = '0REQUID'.
delete table et_sfc with table key chanm = '0REQUID'.
*2) Eliminate FISCPER7 from the new structures:
lt_seldr = i_tsx_seldr. "nos131004
e_th_sfc = l_th_sfc.
* the fiscper7 can be deleted completely from the SID-selection, because
* it is also treated within et_range...
clear e_fieldname_fp7.
delete lt_seldr where chanm = cs_iobj_time-fiscper7."nos131004
* Begin "nos131004
* Ensure that there is no gap in the seldr.
data:
ls_seldr like line of lt_seldr
,l_fems_act like ls_seldr-fems
,l_fems_new like ls_seldr-fems.
loop at l_tsx_seldr into ls_seldr
where chanm ne cs_iobj_time-fiscper7.
if ls_seldr-fems ne l_fems_act.
l_fems_act = ls_seldr-fems.
add 1 to l_fems_new.
endif.
ls_seldr-fems = l_fems_new.
insert ls_seldr into table lt_seldr.
endloop.
* end "nos131004
e_th_sfc = l_th_sfc2. "nos070605
* Is fiscper7 in the query? (BCS always requires two fields)
read table e_th_sfc with key chanm = cs_iobj_time-fiscper7
into ls_th_sfc.
if sy-subrc = 0.
* ==> YES
* --> change the SFC, so that FISCPER3 and FISCYEAR are requested.
* The table ET_RANGE also contains the selection for
* FISCPER3/FISCYEAR.
* But since E_FIELDNAME_FP7 is also transferred to BCS, the
* transformation of the data back to FISCPER7 is done on the BCS side.
e_fieldname_fp7 = ls_th_sfc-KEYRETURNNM.
"begin nos17060
if e_fieldname_fp7 is initial.
e_fieldname_fp7 = ls_th_sfc-sidRETURNNM.
translate e_fieldname_fp7 using 'SK'.
endif.
"end nos17060
delete table e_th_sfc from ls_th_sfc.
ls_th_sfc-chanm = cs_iobj_time-fiscper3.
ls_th_sfc-keyreturnnm = ls_th_sfc-chanm.
insert ls_th_sfc into table e_th_sfc.
ls_th_sfc-chanm = cs_iobj_time-fiscyear.
ls_th_sfc-keyreturnnm = ls_th_sfc-chanm.
insert ls_th_sfc into table e_th_sfc.
endif.
* Store the SELDR in an XSTRING and unpack it just before selecting data
* in BW. It is not interpreted in BCS!
export t_seldr = lt_seldr
* Store also the SFC, because the BW systems might be on different rel./SP.
t_bw_sfc = e_th_sfc to data buffer ex_seldr compression on.
endform. "convert_importing_parameter
*&      Form  get_datadescr
*       text
*      -->IT_DATA       text
*      -->ET_DATADESCR  text
form get_datadescr
using it_data type any table
changing et_datadescr type t_datadescr.
data: lr_data type ref to data
, lo_descr TYPE REF TO CL_ABAP_TYPEDESCR
, lo_elemdescr TYPE REF TO CL_ABAP_elemDESCR
, lo_structdescr TYPE REF TO CL_ABAP_structDESCR
, lt_components type abap_component_tab
, ls_components type abap_componentdescr
, ls_datadescr type s_datadescr.
field-symbols: <ls_data> type any
, <ls_components> type abap_compdescr.
clear et_datadescr.
create data lr_data like line of it_data.
assign lr_data->* to <ls_data>.
CALL METHOD CL_ABAP_STRUCTDESCR=>DESCRIBE_BY_DATA
EXPORTING
P_DATA = <ls_data>
RECEIVING
P_DESCR_REF = lo_descr.
lo_structdescr ?= lo_descr.
CALL METHOD lo_structdescr->GET_COMPONENTS
RECEIVING
P_RESULT = lt_components.
loop at lo_structdescr->components assigning <ls_components>.
move-corresponding <ls_components> to ls_datadescr.
if ls_datadescr-type_kind = cl_abap_elemdescr=>typekind_char
or ls_datadescr-type_kind = cl_abap_elemdescr=>typekind_num.
read table lt_components with key name = <ls_components>-name
into ls_components.
if sy-subrc = 0.
lo_elemdescr ?= ls_components-type.
ls_datadescr-length = lo_elemdescr->output_length.
endif.
endif.
append ls_datadescr to et_datadescr.
endloop.
endform. "get_datadescr
Try to give your inputs; will appreciate that.
Thanks -
Performance issue in APO Module
Hi All,
While running the Demand Planning Book in the APO module (transaction code <b>/nsdp94</b>) with 150 users, we faced a huge performance issue in both screen navigation and saving data.
The planning book covered about 250 products, and a large number of products across India. We are running it on HP Superdome, where the SEM server had 8 CPUs and 32 GB RAM.
The liveCache server had 8 CPUs and 120 GB RAM.
Would like to know if somebody has faced a similar problem and, if yes, how the problem was solved?

Have you checked SAP notes for this? There are a number of them. Start with 966490.
Rob -
C30/C300 performance issues
Hello,
I've got 2 C30s + 1 C300 on an ISP network, and these are being used for both incoming and outgoing mail.
Recently, we started having performance issues where the workqueue was paused several times daily (reason: paused on antivirus, antispam, etc.). This eventually causes the workqueue to back up to around 10k-20k messages, and the units don't process mail rapidly.
I also noted some viruses (e.g. MyTob) being detected and was wondering whether the IronPort/Sophos engine is not able to scan the messages properly, thus resulting in this huge performance issue.
We also get lots of Sophos timeouts daily; the timeout is set to 120 seconds.
RAM usage comes up to 60%, even when traffic is not that heavy.
Has anyone experienced a similar problem?
Thanks,
Vinesh

Hi,
Here's a sample of the mail logs.
I did increase/decrease the antivirus timeouts, but no changes.
It seems that it has difficulty scanning the files.
Thu Nov 29 15:43:39 2007 Info: Start MID 233665168 ICID 702663276
Thu Nov 29 15:43:39 2007 Info: MID 233665168 ICID 702663276 From:
Thu Nov 29 15:43:39 2007 Info: MID 233665168 ICID 702663276 RID 0 To:
Thu Nov 29 15:43:47 2007 Info: MID 233665168 Message-ID '<6d9jas>'
Thu Nov 29 15:43:47 2007 Info: MID 233665168 Subject 'Error'
Thu Nov 29 15:43:47 2007 Info: MID 233665168 ready 64728 bytes from
Thu Nov 29 15:44:49 2007 Warning: MID 233665168: scanning error (name=u'doc.scr', type=executable/exe): viewer bailed out
Thu Nov 29 15:44:49 2007 Info: MID 233665168 matched all recipients for per-recipient policy DEFAULT in the outbound table
Thu Nov 29 15:45:03 2007 Info: MID 233665168 interim AV verdict using Sophos VIRAL
Thu Nov 29 15:45:03 2007 Info: MID 233665168 antivirus positive 'W32/Mytob-C'
Thu Nov 29 15:45:03 2007 Info: Message aborted MID 233665168 Dropped by antivirus
Thu Nov 29 15:45:03 2007 Info: Message finished MID 233665168 done -
Performance issues with 0CO_OM_WBS_1
We use BW 3.5 & R/3 4.7 and encounter huge performance issues with 0CO_OM_WBS_1. We always have to do a full load involving approx. 15M records, even though there are on average only 100k new records since the previous load. This takes a long time.
Is there a way to delta-enable this datasource?

Hi,
This DS is not delta enabled and you can only do a full load. For a delta enabled one, you need to use 0CO_OM_WBS_6. This works as other Financials extractors, as it has a safety delta (configurable, default 2 hours, in table BWOM_SETTINGS).
What you should do is maybe, use the WBS_6 as a delta and only extract full loads for WBS_1 for shorter durations.
As you must have an ODS for WBS_1 at the first stage, I would suggest do a full load only for posting periods that are open. This will reduce the data load.
You may also look at creating your own generic data source with delta; if you are clear on the tables and logic used.
cheers... -
Performance issue in Webi rep when using custom object from SAP BW univ
Hi All,
I had to design a report that runs for the previous day and hence we had created a custom object which ranks the dates and then a pre-defined filter which picks the date with highest rank.
the definition for the rank variable(in universe) is as follows:
<expression>Rank([0CALDAY].Currentmember, Order([0CALDAY].Currentmember.Level.Members ,Rank([0CALDAY].Currentmember,[0CALDAY].Currentmember.Level.Members), BDESC))</expression>
Now to the issue I am currently facing,
The report works fine when we ran it on a test environment ie :with small amount of data.
Our production environment has millions of rows of data, and when I run the report with the filter it just hangs. I think this is because it tries to rank all the dates (to find the max date), resulting in a huge performance issue.
Can someone suggest how this performance issue can be overcome?
I work on BO XI3.1 with SAP BW.
Thanks and Regards,
Smitha.

Hi,
Using a variable on the BW side is not feasible since we want to use the same BW query for a few other reports as well.
Could you please explain what you mean by 'use LAG function'? How can it be used in this scenario?
Thanks and Regards,
Smitha Mohan. -
Query Performance Issue (help)
I'm running into huge performance issues with the following. The sub-intersect query lists duplicates in table1 and table2, and deletes those results from table2. But the duplicate criteria do not look at all fields, only those in the subquery...
DELETE FROM isw.accounts2
WHERE id_user||''||SYSTEM_ID||''||NM_DATABASE IN (
SELECT id_user||''||SYSTEM_ID||''||NM_DATABASE
FROM (
SELECT id_user, domain_name, system_name, user_description,
user_dn, fl_system_user, dt_user_created,
dt_user_modified, pw_changed, user_disabled,
user_locked, pw_neverexpired, pw_expired,
pw_locked, cd_geid, user_type, nm_database,
cd_altname, fl_lob, cd_account_sid, system_id
FROM isw.accounts -- accounts
WHERE SYSTEM_ID IN (SELECT SYSTEM_ID FROM SYSTEMS
WHERE FL_LOB = 'type' AND
FL_SYSTEM_TYPE = 'Syst')
INTERSECT
SELECT id_user, domain_name, system_name, user_description,
user_dn, fl_system_user, dt_user_created,
dt_user_modified, pw_changed, user_disabled,
user_locked, pw_neverexpired, pw_expired,
pw_locked, cd_geid, user_type, nm_database,
cd_altname, fl_lob, cd_account_sid, system_id
FROM isw.accounts2 --accounts_temp
WHERE SYSTEM_ID IN (SELECT SYSTEM_ID FROM SYSTEMS
WHERE FL_LOB = 'type'
AND FL_SYSTEM_TYPE = 'syst')
)

PLAN_TABLE_OUTPUT
Plan hash value: 2030965500
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 623 | 2269 (7)| 00:00:28 |
|* 1 | FILTER | | | | | |
|* 2 | HASH JOIN SEMI | | 1 | 623 | 236 (2)| 00:00:03 |
| 3 | TABLE ACCESS FULL | ACCOUNTS_BAX2 | 1 | 603 | 222 (1)| 00:00:03 |
|* 4 | TABLE ACCESS FULL | SYSTEMS | 15 | 300 | 14 (8)| 00:00:01 |
| 5 | VIEW | | 1 | 117 | 2032 (7)| 00:00:25 |
| 6 | INTERSECTION | | | | | |
| 7 | SORT UNIQUE | | 2145 | 418K| | |
|* 8 | HASH JOIN | | 2145 | 418K| 1793 (8)| 00:00:22 |
|* 9 | TABLE ACCESS FULL| SYSTEMS | 15 | 300 | 14 (8)| 00:00:01 |
|* 10 | TABLE ACCESS FULL| ACCOUNTS_BAX | 2269 | 398K| 1779 (8)| 00:00:22 |
| 11 | SORT UNIQUE | | 1 | 588 | | |
|* 12 | HASH JOIN | | 1 | 588 | 236 (2)| 00:00:03 |
|* 13 | TABLE ACCESS FULL| ACCOUNTS_BAX2 | 1 | 568 | 222 (1)| 00:00:03 |
|* 14 | TABLE ACCESS FULL| SYSTEMS | 15 | 300 | 14 (8)| 00:00:01 |
Performance Issue in NetworkInterface.getNetworkInterface in windows JRE 7
NetworkInterface.getNetworkInterface() call takes 10 times more time when run in Windows JRE 7. The same call runs much faster in JRE 6.
Sample Program,
I wrote a small program which just fetches the network interfaces using java.net.NetworkInterface.getNetworkInterfaces() as below,
------------------------------------ Program Start ------------------------------------
import java.net.*;
import java.util.*;
public class PerfNetTest {
public static void main(String args[]) throws Exception {
long startTime = System.currentTimeMillis();
Enumeration niEnum = NetworkInterface.getNetworkInterfaces();
long endTime = System.currentTimeMillis();
System.out.println ( "Total Time Taken For One Call: " + (endTime-startTime));
startTime = System.currentTimeMillis();
for (int i = 0; i < 10; i++) {
niEnum = NetworkInterface.getNetworkInterfaces();
}
endTime = System.currentTimeMillis();
System.out.println ( "Total Time Taken For Ten Call: " + (endTime-startTime));
}
}
------------------------------------ Program End ------------------------------------
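As a side note when interpreting the timings: `System.currentTimeMillis()` has coarse granularity on Windows, so for micro-benchmarks like this `System.nanoTime()` is usually preferred. A hedged variant of the test (the class name and the warm-up call are my additions, not part of the original program):

```java
import java.net.NetworkInterface;

public class PerfNetTestNano {
    public static void main(String[] args) throws Exception {
        // Warm up once so class loading does not skew the first sample.
        NetworkInterface.getNetworkInterfaces();

        long start = System.nanoTime();
        for (int i = 0; i < 10; i++) {
            NetworkInterface.getNetworkInterfaces();
        }
        // nanoTime() measures elapsed time; convert to milliseconds for printing.
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Ten calls took " + elapsedMs + " ms");
    }
}
```

The relative JRE 6 vs JRE 7 gap should show up the same way; the finer clock just makes the per-call cost easier to trust.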
I compiled the above code with Java 6 and ran the program under JRE 6 and JRE 7. JRE 7 takes approximately 10 times longer than JRE 6. This leads to a huge performance issue in our project.
I ran it 5 times on each JRE version, and below are the test results.
When run in JRE 6
c:\test\net>java PerfNetTest
Total Time Taken For One Call: 18
Total Time Taken For Ten Call: 81
c:\test\net>java PerfNetTest
Total Time Taken For One Call: 17
Total Time Taken For Ten Call: 80
c:\test\net>java PerfNetTest
Total Time Taken For One Call: 19
Total Time Taken For Ten Call: 80
c:\test\net>java PerfNetTest
Total Time Taken For One Call: 18
Total Time Taken For Ten Call: 79
c:\test\net>java PerfNetTest
Total Time Taken For One Call: 18
Total Time Taken For Ten Call: 80
When run in JRE 7
c:\test\net>java PerfNetTest
Total Time Taken For One Call: 98
Total Time Taken For Ten Call: 891
c:\test\net>java PerfNetTest
Total Time Taken For One Call: 100
Total Time Taken For Ten Call: 869
c:\test\net>java PerfNetTest
Total Time Taken For One Call: 98
Total Time Taken For Ten Call: 859
c:\test\net>java PerfNetTest
Total Time Taken For One Call: 99
Total Time Taken For Ten Call: 871
c:\test\net>java PerfNetTest
Total Time Taken For One Call: 99
Total Time Taken For Ten Call: 888
Is there any other way to optimize this internally in JRE 7? Can the above be considered a bug in the Windows JRE 7, and can I go ahead and submit a bug report? I have simplified the program to point out the exact issue. In actual usage we need to know immediately whenever there is an IP change, whether because the IP bound to the same NIC changes or because the machine connects to a network through another NIC (Wi-Fi enabled, etc.).
Overall this is particularly important because we fetch the NetworkInterfaces for every feature. As a result, 60 features that executed in roughly 0.5 milliseconds under Java 6 now consume almost 6 to 10 seconds under Java 7.
Also, in our client-server application the client queries for NetworkInterfaces and then sends requests to the server, where we have to support at least 100 transactions per second (TPS for complete client-server throughput). With hundreds of client systems each querying its own NetworkInterfaces, the call consumes time and certainly reduces the overall throughput.
Currently I am more inclined to cache the result and refresh it internally on a separate thread, but I wanted to avoid that because throughput would be lower for the first request, and especially because it works perfectly fine with Java 6.
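The caching-with-background-refresh idea mentioned above could be sketched like this (the class name NetworkInterfaceCache and the refresh interval are assumptions, not anything from the JDK; callers read a volatile snapshot while a daemon thread pays the enumeration cost):

```java
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical helper: caches the interface list and refreshes it on a
// background daemon thread so callers never pay the enumeration cost directly.
public class NetworkInterfaceCache {
    private volatile List<NetworkInterface> cached;
    private final ScheduledExecutorService refresher =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "nic-cache-refresher");
                t.setDaemon(true);
                return t;
            });

    public NetworkInterfaceCache(long refreshSeconds) throws SocketException {
        refresh();  // populate once up front so the first read is never empty
        refresher.scheduleWithFixedDelay(() -> {
            try {
                refresh();
            } catch (SocketException ignored) {
                // keep the last good snapshot on transient failures
            }
        }, refreshSeconds, refreshSeconds, TimeUnit.SECONDS);
    }

    private void refresh() throws SocketException {
        cached = Collections.list(NetworkInterface.getNetworkInterfaces());
    }

    /** Returns the most recent snapshot; never triggers a native enumeration. */
    public List<NetworkInterface> get() {
        return cached;
    }
}
```

This does not remove the first-request cost the poster wants to avoid (the constructor still enumerates once), but every subsequent caller reads a cached snapshot, which addresses the throughput concern for the client-server case. IP changes are only observed with up to one refresh interval of delay, which is the trade-off of any polling cache.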
Treeview datawindow with icons - performance issue
Hi all,
I have a treeview DataWindow and have set the "use tree node icon" property with 2 different icons (.bmp) at the first level (one for when the level is expanded, one for collapsed).
A different icon (.ico) file is then used for the second level.
Unfortunately, this causes a huge performance issue when I run the application.
If I disable the "use tree node icon" property, everything works fine.
Are there any known issues with treeview DataWindows and icons? Any workaround?
I am using PB 11.2.
Thanking you in advance,
-a
First question: which JDev version are you using?
I made a quick test and did not see a long busy state.
Run your app with -Djbo.debugoutput=console as a Java option. You get more output in the log window, and there may be some hint about what's going on.
Timo -
Performance issue with forms after 10g upgrade
Hi Team,
Last week we upgraded our systems to a 10g database.
Ever since the upgrade there have been huge performance issues with the custom forms, and this is causing a major setback to our business. Before the upgrade the forms ran without any performance issues.
Can anyone please help me find the reason behind the performance issue (maybe a TAR or performance tuning)?
Many thanks in advance.
Regards
Kumar
Like Jan said,
you must supply more information so we can help you: where does the degradation happen? In processing? In navigation? In form loading?
You may also do a little test: create a one-button form, just a canvas and a button; you can include a message in the WHEN-BUTTON-PRESSED trigger.
Run it and see what happens.
Tony -
Hardware:
17" Powerbook G4 1.67GHz with 2GB ram
System:
10.5.4 Leopard
*Ever since I upgraded to Leopard, I have noticed tremendous latency / pinwheel issues in many areas such as:*
1. Switching between running applications hangs up, sometimes for as long as 15 seconds
2. Video playback in QuickTime is choppy
3. Various programs take a tremendous amount of time to open
4. Opening relatively small Illustrator and/or Photoshop files pinwheels for an unusually long time
5. Opening Finder windows is unusually slow
One of the MOST frustrating of all issues:
6. *Powerbook heats up* rather quickly... gets very hot and the fans have NEVER come on... VERY frustrating, since I would have no problem with noise if it meant better performance...
*Overall, the entire computer feels VERY sluggish*
I have tried clean installs as well as regular maintenance, and upgraded my RAM to the max of 2GB.
Is there anything else I can do? Anyone else having similar problems and/or know the best way to repair this?
DESPERATE for a solution here... while I realize the G4 can only handle so much, I just feel like it can't possibly be this slow under Leopard... or can it?!
ANY HELP AT ALL greatly appreciated...
Best,
Mike
+Mac user since 1989+
6. *Powerbook heats up* rather quickly... gets very hot and the fans have NEVER come on... VERY frustrating, since i would have no problem with noise if it meant better performance...
Have you run the Apple hardware test? One of the tests is to specifically test the fans.
*Overall, the entire computer feels VERY sluggish*
How much free disk space do you have? You need 10% to 15% free for system use. With less than that, performance can really slow down.
If you bring up the activity monitor, under system memory, what does it say for page ins and outs?