Constantly growing
How can I see which files are growing in size?
We want to monitor files which are growing rapidly.
Currently we are using the following logic (from a cron job) to check for large files:
45 6,14,22 * * * ( date ; find /opt -size +10000 ) >> /home/dn/LargeFile.log 2>&1
Once our /opt file system goes over 90% full, we receive LargeFile.log and the messages via email.
This is done by code.
But I would like to see the files which are constantly growing.
I would also like to see the top 10 largest files in the filesystem.
thx
D
To see size differences over time, you need lists of files and their sizes at different points in time.
To find the top 10 largest files in the /opt filesystem:
find /opt -type f -exec du -sm {} \; | sort -n | tail -10
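If you keep those size listings from run to run, a small shell sketch like this can diff them (the /tmp/opt-sizes.* paths are just example names, and paths containing spaces would need more careful handling):
#!/bin/sh
# Snapshot the current file sizes (in MB) under /opt, sorted by path.
find /opt -type f -exec du -sm {} \; | sort -k2 > /tmp/opt-sizes.new

# If a previous snapshot exists, report files whose size increased,
# biggest growth first.
if [ -f /tmp/opt-sizes.old ]; then
    join -j 2 /tmp/opt-sizes.old /tmp/opt-sizes.new |
        awk '$3 > $2 { print $3 - $2, "MB grown:", $1 }' | sort -rn
fi

# Keep this snapshot for the next run.
mv /tmp/opt-sizes.new /tmp/opt-sizes.old
Run it from the same cron job and the output is exactly the "which files grew since last time" list you are after.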
Similar Messages
-
Swap constantly grows - but why?
Hello.
I've noticed that my system constantly makes the swapfile bigger but I don't quite understand why…
MacBook-Pro:~ alex$ du -sch /private/var/vm/swapfile*
64M /private/var/vm/swapfile0
64M /private/var/vm/swapfile1
128M /private/var/vm/swapfile2
256M /private/var/vm/swapfile3
512M /private/var/vm/swapfile4
512M /private/var/vm/swapfile5
512M /private/var/vm/swapfile6
512M /private/var/vm/swapfile7
512M /private/var/vm/swapfile8
512M /private/var/vm/swapfile9
3,5G total
If I sum up the amount of the physical memory shown by activity monitor, I end up with about 2.4 GB. I've got 4 GB RAM. Why is any Swap used at all in this case? And even more important: Why does it keep on growing?
http://i.pgu.me/Vsava6Fi_original.png
Thanks,
Alexander
Because between the time you booted your Mac and when you look at /var/vm, you ran applications that wanted more memory than you have physical RAM.
And looking at your resident RAM usage is meaningless with respect to swap usage.
There is no easy way to figure out what is in swap.
Virtual memory is filled with empty holes, so that cannot be used
to measure swap usage.
And both virtual and resident RAM may be counting shared library code
so there may be double (or greater) counting of the same physical RAM page.
If you have 3.5GB of swap, it is because you peaked out at needing 3.5GB
of swap.
The more important question is, are you seeing a performance issue due
to swapping?
I like to start a Terminal session and run the following command:
sar -g 60 100
which will show pageout activity once per minute for 100 minutes
(adjust values to suit your tastes).
When you see jumps in pageout activity, you can cross reference
that against what you are currently doing.
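If you want a record you can cross-reference afterwards, the same redirect idiom as the cron line in the first message works here too (the log path is only an example):
( date ; sar -g 60 100 ) >> /tmp/pageouts.log 2>&1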
Web browsers can be memory intensive depending on what web sites
you visit. Also photo and movie editors are very memory intensive. -
System tablespace constantly growing.
Dear all,
My system tablespace has grown from 1G to 2G in one day.
AQ_SRVNTFN_TABLE in the system tablespace has 900M of space occupied,
and it is increasing constantly.
What is the reason behind it, and how do I stop the growth of this table?
Please help.
Regards.
Ideally, no user should be assigned to the SYSTEM tablespace. To know if users are assigned to the SYSTEM tablespace, fire the query below:
sql > select username , default_tablespace
from dba_users ;
If this query shows that some of your users are assigned to the SYSTEM tablespace, then this query will make you smile:
sql > alter user <username> default tablespace <name> ;
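To see which segments are actually eating the space, a query along these lines also helps (dba_segments is a standard dictionary view; the ROWNUM wrapper just keeps the list to the ten biggest):
sql > select *
      from ( select owner , segment_name , segment_type ,
                    round( bytes / 1024 / 1024 ) mb
             from dba_segments
             where tablespace_name = 'SYSTEM'
             order by bytes desc )
      where rownum <= 10 ;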
hare krishna
Alok -
A constantly growing LR catalog?
I usually remove folders from my catalog after a few months of their primary use, keeping them as files on my backup external drive. My view has always been that having fewer folders/photos in my catalog improves Lightroom's overall performance, keeping it always clean and fast.
Recently, I’ve been exploring the idea of keyword management and using smart collections to automatically group photos. This sounds great, but seems only worth the time if one were to keep ALL their photos in Lightroom, never removing folders from the catalog.
I am now wondering about amassing thousands of photos across hundreds of folders over the course of years... with proper keywording and management, this could indeed produce a powerful tool for finding and referencing any photo and/or collection. But does this come at the cost of a slower, bogged-down performance?
In short, is it a general practice of professional photographers to continually “clean out” their catalogs (i.e. remove folders) like I do once jobs are finished and ready to be archived; or is it better to just let the catalog grow to an incredible size over the years in order to make the best use of keywords and collections? Is there a recommended limit to how many photos/folders LR can handle before its performance is affected?
I'm wondering how others are thinking of this issue.
Some points to note:
-Assume each job/folder contains roughly 1,000 edited raw images
-Assume roughly 20-30 jobs/folders added to the catalog each year
-I have a new and very fast PC.
-Disk space is not an issue.
-I am aware of the “optimize catalog” option.
Thanks.
Thanks for your reply.
I have been poking around trying to find more on this issue. In this post [http://digital-photography-school.com/10-tips-to-improve-lightrooms-speed-and-performance-without-additional-hardware] the author suggests (tip #8) that having more than 10,000 photos will slow down Lightroom.
Is this outdated advice, or would you simply disagree with it?
Keeping each catalog less than 10,000 images, as the author of the article suggests, would not make keywording and collection management worth the time and effort, in my opinion. -
Managing projects or how much is too much? 40Gb+ library and growing
Hi all Aperture users. First let me tell that I'm quite happy with 1.0, 1.5 and new 2.0 versions.
One question still bugs me: how big can an Aperture library be; is there a limit? My library is now 40Gb. I started using referenced images when it was 10, but every time I plug my card in and import images to my Library, it becomes bigger and bigger.
Backing up those files is not a big issue. My referenced files have 4 copies (2 copies on DVDs, one in DMG file on computer and DMG file backed up with Time Machine). Same goes to my Library, Time Machine takes care of that, and every now and then I open Aperture.library package and manually drag and burn those projects and folders.
Here's a question:
With the library file constantly growing, is there a limit on the size, or is it just the size of my HD? Every time I add Gbs to my library, I notice Aperture performance decreases. What is the size of your Aperture library?
And the main thing is, would Apple ever consider changing Vaulting feature in Aperture so you 'extract' project or folder of projects completely out of the library, back it up somewhere and load it back when you need it without replacing whole library? There is a way of doing it manually, Show Package Contents, remove folders, project files, burn them and rebuild library next time Aperture loads. But it's not really 'Apple way', it's not cool nor pretty.
What's your take on this?
"if it sounds good it is good"
I do not disagree with what has been said but personally I do not use this above statement as the "final say" in my work. By definition it is only as true as your monitoring - and your ears! Your ears can get tired and mislead you. What sounds great in the studio can be a nasty surprise elsewhere (or just the following morning!). You obviously cannot eq anything without listening and your ears obviously lead what you do with a mix.
All I'm saying is that as you get experienced you will also do what seems right to your head as well as to your ears. If I have to eq a vocal by 15dB I know it will likely sound "eq'd" and unnatural. That might be what I want; that might be what "sounds good"; but if I'm trying to achieve a natural sound I might wonder why I need to apply such gain, and consider alternative solutions.
I think the fact you have asked this question demonstrates you are on the right lines with how you are working. I'm an advocate of cautious eq. Never twiddle knobs (virtual or otherwise) randomly until "it sounds good". Sit back, listen, think about what you need to do and then try it.
Don't forget about cutting as well as boosting. Indeed most folk find that reducing frequency bands rather than boosting tends towards a more natural result - so if you want something to have more "middle" pull back the "top" and "bottom".
If I'm struggling with a mix it is nearly always due to bad or excessive eq. It is rarely the other way round. -
Is there a limit to how much cache an app can store before it starts deleting old garbage?
I find that my apps are constantly growing, and soon I'm out of storage, even though I don't have that many apps.
My sister's iPhone is even worse; she uses it every day and has to do a complete restore every other month or so because the apps are taking every last MB on the device. The programs/apps are at over 5GB when in reality they are just about 1GB; the rest is just caches and other downloaded junk inside the apps.
It can't really be that necessary to hook it up to the computer every other month to do a complete restore just to get rid of the cache??
I have the 32GB iPhone 4 and it's working somewhat all right. But I am a musician and have about half the storage filled with music. Then I want some free space for photos and so on, and normally I have about 5GB of apps, but as the top 30 I use the most are growing from an average of 14MB each to +100MB and up, it's actually starting to bug me. None of these have trashable caches. Spotify is at 500MB without any offline lists, Facebook 120MB, and so on.
If it was data I actually saved myself, like music, news, mail and so on, it would be fine, but this I don't choose, and it's totally against Apple's "PC free" advertising. Using the phone a lot = forcing "powerusers" to restore without any actual need....
Closing a connection in "finally {...}"
Hello!
I have a connection pool constantly growing in an OC4J in an Oracle 9iAS. My question is kind of "basic", but I need to find the "leak":
In the Handler-classes I close the resultsets, statements and connections in a finally-block, but return-values (if any) are returned in the try-block. Will the finally-block ever be executed if no exceptions are caught? Obviously that is my assumption, but I'd appreciate a confirmation or correction.
Connection con = null;
Statement stmt = null;
ResultSet rs = null;
try {
    con = ...; // declared outside try so the finally block can see it
    stmt = con.createStatement();
    rs = stmt.executeQuery("...");
    ArrayList sampleStrings = new ArrayList();
    while (rs.next()) {
        String example = rs.getString(1);
        sampleStrings.add(example);
    }
    return sampleStrings;
} catch (...) {...}
finally {
    // null-check before dereferencing; note that each close() can itself
    // throw SQLException, which would skip the remaining closes
    if (rs != null) rs.close();
    if (stmt != null) stmt.close();
    if (con != null && !con.isClosed()) con.close();
}
BR,
Herman Svensen
Hi Herman,
"Will the finally-block ever be executed if no exceptions are caught?"
Of course it will. The "finally" block is always executed.
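A tiny self-contained check (the class and method names are made up for illustration) shows finally running even when the try block returns:
public class FinallyDemo {
    static String demo() {
        try {
            return "returned from try"; // the return value is computed here...
        } finally {
            // ...but this block still runs before the method actually returns
            System.out.println("finally executed");
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "finally executed", then "returned from try"
    }
}
About the only time finally does not run is when the JVM itself goes away first (System.exit(), a crash, or a killed process).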
The OTN forums don't recognize HTML tags. Like a lot of other forum sites, the OTN forums use special tags known as UBBCode. Unfortunately, the OTN forums seem to recognize a very limited set of these tags. The only two tags which always seem to work (regardless of which Web browser I use) are:
[ url ]
[ code ]
Note: The spaces are added intentionally, so that they won't be recognized. When you use them in your post, remove the spaces.
The JavaRanch forum has a Web page that explains UBBCode.
Good Luck,
Avi. -
How to improve the performance of this code
Hi gurus
The code is given below, with the LDB. The code looks big, but most of the lines are commented out.
Please help, it's urgent.
Thanks in advance.
*& Report ZSALES_RECON
REPORT ZSALES_RECON.
TYPE-POOLS : SLIS.
nodes: bseg , bkpf.
data : begin of zbseg occurs 0,
kunnr like bseg-kunnr,
*lifnr like bseg-lifnr,
dmbtr like bseg-dmbtr,
*shkzg like bseg-shkzg,
*gsber like bseg-gsber,
bschl like bseg-bschl,
*sgtxt like bseg-sgtxt,
total like bseg-dmbtr,
hkont like bseg-hkont,
BUDAT LIKE Bkpf-BUDAT,
belnr LIKE BSEG-belnr,
cash like bseg-dmbtr,
credit like bseg-dmbtr,
abn_voucher like bseg-dmbtr,
barista_voucher like bseg-dmbtr,
accor like bseg-dmbtr,
sodexho like bseg-dmbtr,
gift like bseg-dmbtr,
corp like bseg-dmbtr,
card like bseg-dmbtr,
miscellaneous like bseg-dmbtr,
werks like bseg-werks,
gjahr like bseg-gjahr,
SR_NO TYPE I,
shkzg like bseg-shkzg,
end of zbseg,
TP_TBL_DATA like ZBSEG.
DATA : idx TYPE sy-tabix.
* Report data to be shown.
data: it_data like ZBSEG.
* Heading of the report.
data: t_heading type slis_t_listheader.
AT SELECTION-SCREEN.
get bkpf.
START-OF-SELECTION.
data : sum_mis like bseg-dmbtr,
sum_abn like bseg-dmbtr,
sum_cash like bseg-dmbtr,
sum_credit like bseg-dmbtr,
sum_card like bseg-dmbtr,
sum_barista_voucher like bseg-dmbtr,
sum_accor like bseg-dmbtr,
sum_sodexho like bseg-dmbtr,
sum_gift like bseg-dmbtr,
sum_corp like bseg-dmbtr.
data : wa1_total like bseg-dmbtr.
data : wa_belnr like bseg-belnr,
wa_kunnr like bseg-kunnr,
wa_werks like bseg-werks,
belnr1 like bseg-belnr,
wa_sr_no type i.
GET BSEG.
data : wa like line of zbseg.
data : count type i,
count1 type i.
move-corresponding bseg to zbseg.
*idx = sy-tabix.
on change of zbseg-belnr.
wa_kunnr = zbseg-kunnr.
wa_kunnr = wa_kunnr+6(4).
select single werks into wa_werks from bseg where belnr = zbseg-belnr
and kunnr = '' and gjahr = zbseg-gjahr.
if wa_kunnr = wa_werks.
if zbseg-bschl <> '01'.
clear: sum_mis,wa1_total,sum_abn,sum_cash,sum_credit,sum_card,
sum_barista_voucher,sum_accor,sum_sodexho,sum_gift,sum_corp.
wa-BUDAT = BKPF-BUDAT.
wa-bschl = zbseg-bschl.
wa-hkont = zbseg-hkont.
wa-belnr = zbseg-belnr.
wa_belnr = wa-belnr.
wa-shkzg = zbseg-shkzg.
wa-kunnr = zbseg-kunnr.
count = wa-sr_no.
*wa-sr_no = count + 1.
idx = idx + 1.
append wa to zbseg.
**count = wa-sr_no.
*wa-sr_no = wa-sr_no + 1.
clear wa-total.
endif.
endif.
endon.
*clear : wa1_total.
if wa_belnr = zbseg-belnr.
loop at zbseg into wa.
wa-total = wa1_total.
wa-bschl = zbseg-bschl.
wa-hkont = zbseg-hkont.
count = sy-tabix.
wa-sr_no = count.
count1 = count.
*wa_sr_no = count.
modify zbseg from wa transporting sr_no.
IF wa-bschl eq '40' and wa-hkont eq '0024013020'.
if sy-tabix = 1.
wa-cash = zbseg-dmbtr.
sum_cash = sum_cash + wa-cash.
wa-cash = sum_cash.
modify zbseg index idx from wa transporting cash.
endif.
endif.
IF wa-bschl eq '40' and wa-hkont eq '0026060010'.
if sy-tabix = 1.
wa-credit = zbseg-dmbtr.
sum_credit = sum_credit + wa-credit.
wa-credit = sum_credit.
modify zbseg index idx from wa transporting credit.
endif.
endif.
IF wa-bschl eq '40' and wa-hkont eq '0026060015'.
if sy-tabix = 1.
wa-abn_voucher = zbseg-dmbtr.
sum_abn = sum_abn + wa-abn_voucher.
wa-abn_voucher = sum_abn.
modify zbseg index idx from wa transporting abn_voucher.
endif.
endif.
IF wa-bschl eq '40' and wa-hkont eq '0026060017'.
if sy-tabix = 1.
wa-barista_voucher = zbseg-dmbtr.
sum_barista_voucher = sum_barista_voucher + wa-barista_voucher.
wa-barista_voucher = sum_barista_voucher.
modify zbseg index idx from wa transporting barista_voucher.
endif.
endif.
IF wa-bschl eq '40' and wa-hkont eq '0026060020'.
if sy-tabix = 1.
wa-sodexho = zbseg-dmbtr.
sum_sodexho = sum_sodexho + wa-sodexho.
wa-sodexho = sum_sodexho.
modify zbseg index idx from wa transporting sodexho.
endif.
endif.
IF wa-bschl eq '40' AND wa-hkont eq '0026060030'.
if sy-tabix = 1.
wa-accor = zbseg-dmbtr.
sum_accor = sum_accor + wa-accor.
wa-accor = sum_accor.
modify zbseg index idx from wa transporting accor.
endif.
endif.
IF wa-bschl eq '40' AND wa-hkont eq '0026070040'.
if sy-tabix = 1.
wa-gift = zbseg-dmbtr.
sum_gift = sum_gift + wa-gift.
wa-gift = sum_gift.
modify zbseg index idx from wa transporting gift.
endif.
endif.
IF wa-bschl eq '40' AND wa-hkont eq '0026060070'.
if sy-tabix = 1.
wa-card = zbseg-dmbtr.
sum_card = sum_card + wa-card.
wa-card = sum_card.
modify zbseg index idx from wa transporting card.
endif.
endif.
IF wa-bschl eq '40' AND wa-hkont eq '0026060018'.
if sy-tabix = 1.
wa-corp = zbseg-dmbtr.
sum_corp = sum_corp + wa-corp.
wa-corp = sum_corp.
modify zbseg index idx from wa transporting corp.
endif.
endif.
*IF wa-bschl eq '11' .
*wa-total = zbseg-dmbtr.
*modify zbseg index idx from wa transporting total.
*endif.
IF wa-bschl EQ '40' or wa-bschl = '01' .
if sy-tabix = 1.
wa-total = zbseg-dmbtr.
wa1_total = wa1_total + wa-total.
wa-total = wa1_total.
*if idx = 2.
*modify zbseg index 1 from wa transporting total.
*else.
modify zbseg index idx from wa transporting total.
*endif.
endif.
endif.
*IF zbseg-TOTAL NE zbseg-DMBTR.
IF wa-BSCHL NE '11' AND wa-BSCHL NE '40'. "AND wa-BSCHL NE '01'.
if sy-tabix = 1.
if wa-shkzg = 'S'.
wa-miscellaneous = - wa-miscellaneous.
endif.
wa-miscellaneous = ZBSEG-DMBTR.
sum_mis = sum_mis + wa-miscellaneous.
wa-miscellaneous = sum_mis.
modify zbseg index idx from wa transporting miscellaneous.
endif.
ENDIF.
*wa1-miscellaneous = wa-miscellaneous.
*modify zbseg index idx from wa.
*ENDIF.
*append wa to zbseg.
*clear:zbseg-dmbtr.
endloop.
endif.
*****endif.
*****endon.
*ENDFORM.
*append zbseg.
*endloop.
End-of-selection.
perform build_alv using zbseg t_heading.
*& Form build_alv
* Builds and displays the ALV Grid.
form build_alv using t_data
*tp_tbl_data
t_heading type slis_t_listheader.
* ALV required data objects.
data: w_title type lvc_title,
w_repid type syrepid,
w_comm type slis_formname,
w_status type slis_formname,
x_layout type slis_layout_alv,
t_event type slis_t_event,
t_fieldcat type slis_t_fieldcat_alv,
t_sort type slis_t_sortinfo_alv.
refresh t_fieldcat.
refresh t_event.
refresh t_sort.
clear x_layout.
clear w_title.
* Field Catalog
perform set_fieldcat2 using:
1 'SR_NO' 'SR_NO' 'BKPF' '5' space space 'SR NO' space space space
space space space space space t_fieldcat ,
2 'BELNR' 'BELNR' 'BKPF' '10' space space 'Document No' space space
space space space space space space t_fieldcat ,
3 'BUDAT' 'BUDAT' 'BKPF' '10' space space 'Document Date' space
space space space space space space space t_fieldcat ,
4 'KUNNR' space space space space space 'Site' space space
space space space space space space t_fieldcat ,
5 'TOTAL' space 'BSEG' space space space 'Total' space space space
space space space space 'X' t_fieldcat ,
6 'CASH' 'CASH' 'BSEG' space space space 'Cash Sales'
space space space space space space space 'X' t_fieldcat ,
7 'CREDIT' 'CREDIT' 'BSEG' space space space 'Credit Card'
space space space space space space space 'X' t_fieldcat ,
8 'ABN_VOUCHER' space 'BSEG' space space space 'ABN Voucher' space
space
space space space space space 'X' t_fieldcat ,
9 'BARISTA_VOUCHER' space 'BSEG' '15' space space 'BARISTA Voucher'
space space
space space space space space 'X' t_fieldcat ,
10 'CORP' 'CORP' 'BSEG' space space space 'ABN Corp' space space
space space space space space 'X' t_fieldcat ,
11 'SODEXHO' 'SODEXHO' 'BSEG' space space space 'Sodexho' space
space space space space space space 'X' t_fieldcat ,
12 'ACCOR' 'ACCOR' 'BSEG' space space space 'Accor'
space space space space space space space 'X' t_fieldcat ,
13 'GIFT' 'GIFT' 'BSEG' space space space 'Gift Coupon'
space space space space space space space 'X' t_fieldcat ,
14 'CARD' 'CARD' 'BSEG' space space space 'Diners Card' space
space space space space space space 'X' t_fieldcat ,
15 'MISCELLANEOUS' space 'BKPF' '18' space space
'Miscellaneous Income' space space space space space space space 'X'
t_fieldcat .
*14 'KBETR' 'KBETR' 'KONP' '10' space space 'Tax %age' space space
*space space space space space space t_fieldcat ,
*15 'MWSKZ1' 'MWSKZ1' 'RBKP' space space space 'Tax Type' space
*space
space space space space space space t_fieldcat ,
*16 'AMT' 'AMT' 'RBKP' space space space 'Amount Payable' space
*space
space space space space space 'X' t_fieldcat ,
*17 'WERKS' 'SITE' 'RSEG' space space space 'State' space space
*space space space space space space t_fieldcat .
*18 'GSBER' 'GSBER' 'RBKP' space space space 'Business Area' space
*space space space space space space space t_fieldcat .
* Layout
x_layout-zebra = 'X'.
* Top of page heading
perform set_top_page_heading using t_heading t_event.
* Events
perform set_events using t_event.
* GUI Status
w_status = ''.
w_repid = sy-repid.
* Title
* w_title = <<If you want to set a title for the ALV, please, uncomment and edit this line>>.
* User commands
w_comm = 'USER_COMMAND'.
* Order - Example:
* PERFORM set_order USING '<field>' 'IT_DATA' 'X' space space t_sort.
* Displays the ALV grid
call function 'REUSE_ALV_GRID_DISPLAY'
exporting
i_callback_program = w_repid
it_fieldcat = t_fieldcat
is_layout = x_layout
it_sort = t_sort
i_callback_pf_status_set = w_status
i_callback_user_command = w_comm
i_save = 'X'
it_events = t_event
i_grid_title = w_title
tables
t_outtab = zbseg
* t_outtab = t_data
exceptions
program_error = 1
others = 2.
if sy-subrc <> 0.
message id sy-msgid type sy-msgty number sy-msgno
with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
endif.
endform. " build_alv.
*& Form set_top_page_heading
* Creates the report headings.
form set_top_page_heading using t_heading type slis_t_listheader
t_events type slis_t_event.
data: x_heading type slis_listheader,
x_event type line of slis_t_event.
* Report title
clear t_heading[].
clear x_heading.
x_heading-typ = 'H'.
x_heading-info = 'SALES RECONCILIATION REPORT'(001).
append x_heading to t_heading.
* Top of page event
x_event-name = slis_ev_top_of_page.
x_event-form = 'TOP_OF_PAGE'.
append x_event to t_events.
endform.
*& Form set_events
* Sets the events for ALV.
* The TOP_OF_PAGE event is already being registered in
* the set_top_page_heading subroutine.
form set_events using t_events type slis_t_event.
data: x_event type line of slis_t_event.
* Example:
* clear x_event.
* x_event-name = .
* x_event-form = .
* append x_event to t_events.
endform.
*& Form set_order
* Adds an entry to the order table.
FORM set_order USING p_fieldname p_tabname p_up p_down p_subtot
t_sort TYPE slis_t_sortinfo_alv.
DATA: x_sort TYPE slis_sortinfo_alv.
CLEAR x_sort.
x_sort-fieldname = p_fieldname.
x_sort-tabname = p_tabname.
x_sort-up = p_up.
x_sort-down = p_down.
x_sort-subtot = p_subtot.
APPEND x_sort TO t_sort.
ENDFORM. "set_order
*& Form set_fieldcat2
* Adds an entry to the field catalog.
* p_colpos: Column position.
* p_fieldname: Field of internal table which is being described by
*              this record of the field catalog.
* p_ref_fieldname: (Optional) Table field / data element which
*              describes the properties of the field.
*              If this field is not given, it is copied from
*              the fieldname.
* p_ref_tabname: (Optional) Table which holds the field referenced
*              by <<p_ref_fieldname>>.
*              If this is not given, the parameter
*              <<p_ref_fieldname>> references a data element.
* p_outputlen: (Optional) Column width.
* p_noout: (Optional) If set to 'X', states that the field is not
*              shown initially. If so, the field has to be
*              included in the report at runtime using the display
*              options.
* p_seltext_m: (Optional) Medium label to be used as column header.
* p_seltext_l: (Optional) Long label to be used as column header.
* p_seltext_s: (Optional) Small label to be used as column header.
* p_reptext_ddic: (Optional) Extra small (heading) label to be
*              used as column header.
* p_ddictxt: (Optional) Set to 'L', 'M', 'S' or 'R' to select
*              whether to use SELTEXT_L, SELTEXT_M, SELTEXT_S,
*              or REPTEXT_DDIC as text for column header.
* p_hotspot: (Optional) If set to 'X', this field will be used
*              as a hotspot area for the cursor, allowing the user
*              to click on the field.
* p_showasicon: (Optional) If set to 'X', this field will be shown
*              as an icon and the contents of the field will set
*              which icon to show.
* p_checkbox: (Optional) If set to 'X', this field will be shown
*              as a checkbox.
* p_edit: (Optional) If set to 'X', this field will be editable.
* p_dosum: (Optional) If set to 'X', this field will be summed
*              (aggregation function) according to the grouping set
*              by the order functions.
* t_fieldcat: Table which contains the whole fieldcat.
FORM set_fieldcat2 USING
p_colpos p_fieldname p_ref_fieldname p_ref_tabname
p_outputlen p_noout
p_seltext_m p_seltext_l p_seltext_s p_reptext_ddic p_ddictxt
p_hotspot p_showasicon p_checkbox p_edit
p_dosum
t_fieldcat TYPE slis_t_fieldcat_alv.
DATA: wa_fieldcat TYPE slis_fieldcat_alv.
CLEAR wa_fieldcat.
* General settings
wa_fieldcat-fieldname = p_fieldname.
wa_fieldcat-col_pos = p_colpos.
wa_fieldcat-no_out = p_noout.
wa_fieldcat-hotspot = p_hotspot.
wa_fieldcat-checkbox = p_checkbox.
wa_fieldcat-icon = p_showasicon.
wa_fieldcat-do_sum = p_dosum.
* Set reference fieldname, tabname and rollname.
* If p_ref_tabname is not given, the ref_fieldname given
* is a data element.
* If p_ref_tabname is given, the ref_fieldname given is a
* field of a table.
* In case ref_fieldname is not given,
* it is copied from the fieldname.
IF p_ref_tabname IS INITIAL.
wa_fieldcat-rollname = p_ref_fieldname.
ELSE.
wa_fieldcat-ref_tabname = p_ref_tabname.
IF p_ref_fieldname EQ space.
wa_fieldcat-ref_fieldname = wa_fieldcat-fieldname.
ELSE.
wa_fieldcat-ref_fieldname = p_ref_fieldname.
ENDIF.
ENDIF.
* Set output length.
IF NOT p_outputlen IS INITIAL.
wa_fieldcat-outputlen = p_outputlen.
ENDIF.
* Set text headers.
IF NOT p_seltext_m IS INITIAL.
wa_fieldcat-seltext_m = p_seltext_m.
ENDIF.
IF NOT p_seltext_l IS INITIAL.
wa_fieldcat-seltext_l = p_seltext_l.
ENDIF.
IF NOT p_seltext_s IS INITIAL.
wa_fieldcat-seltext_s = p_seltext_s.
ENDIF.
IF NOT p_reptext_ddic IS INITIAL.
wa_fieldcat-reptext_ddic = p_reptext_ddic.
ENDIF.
IF NOT p_ddictxt IS INITIAL.
wa_fieldcat-ddictxt = p_ddictxt.
ENDIF.
* Set as editable or not.
IF NOT p_edit IS INITIAL.
wa_fieldcat-input = 'X'.
wa_fieldcat-edit = 'X'.
ENDIF.
APPEND wa_fieldcat TO t_fieldcat.
ENDFORM. "set_fieldcat2
*======================== Subroutines called by ALV ================
*& Form top_of_page
* Called on top_of_page ALV event.
* Prints the heading.
form top_of_page.
call function 'REUSE_ALV_COMMENTARY_WRITE'
exporting
* i_logo = <<If you want to set a logo, please, uncomment and edit this line>>
it_list_commentary = t_heading.
endform. " alv_top_of_page
*& Form user_command
* Called on user_command ALV event.
* Executes custom commands.
form user_command using r_ucomm like sy-ucomm
rs_selfield type slis_selfield.
* Example Code:
* Executes a command considering the sy-ucomm.
CASE r_ucomm.
WHEN '&IC1'.
Set your "double click action" response here.
Example code: Create and display a status message.
DATA: w_msg TYPE string,
w_row(4) TYPE n.
w_row = rs_selfield-tabindex.
CONCATENATE 'You have clicked row' w_row
'field' rs_selfield-fieldname
'with value' rs_selfield-value
INTO w_msg SEPARATED BY space.
MESSAGE w_msg TYPE 'S'.
ENDCASE.
* End of example code.
endform. "user_command
*********************************ldb code start from here *************************************************************
* DATABASE PROGRAM OF LOGICAL DATABASE ZBRM_3
* top-include and nxxx-include are generated automatically
* Do NOT change their names manually!!!
*include DBZBRM_3TOP . " header
*include DBZBRM_3NXXX . " all system routines
include DBZBRM_3F001 . " user defined include
PROGRAM SAPDBZBRM_3 DEFINING DATABASE ZBRM_3.
TABLES:
BKPF,
BSEG.
* Helper fields
DATA:
BR_SBUKRS LIKE BKPF-BUKRS,
BR_SBELNR LIKE BKPF-BELNR,
BR_SGJAHR LIKE BKPF-GJAHR,
BR_SBUDAT LIKE BKPF-BUDAT,
BR_SGSBER LIKE BSEG-GSBER,
BR_SBUZEI LIKE BSEG-BUZEI,
BR_SEBELN LIKE BSEG-EBELN,
BR_SEBELP LIKE BSEG-EBELP,
BR_SZEKKN LIKE BSEG-ZEKKN.
* working areas for the authority check "n435991
* for the company code "n435991
TYPES : BEGIN OF STYPE_BUKRS, "n435991
BUKRS LIKE T001-BUKRS, "n435991
WAERS LIKE T001-WAERS, "n435991
RETCODE TYPE N, "n435991
END OF STYPE_BUKRS. "n435991
DATA : G_S_BUKRS TYPE STYPE_BUKRS, "n435991
G_T_BUKRS TYPE STYPE_BUKRS OCCURS 0. "n435991
* for the document type "n435991
TYPES : BEGIN OF STYPE_BLART, "n435991
BLART LIKE BKPF-BLART, "n435991
RETCODE TYPE N, "n435991
END OF STYPE_BLART. "n435991
DATA : G_S_BLART TYPE STYPE_BLART, "n435991
G_T_BLART TYPE STYPE_BLART OCCURS 0. "n435991
* for the business area "n435991
TYPES : BEGIN OF STYPE_GSBER, "n435991
GSBER LIKE BSEG-GSBER, "n435991
RETCODE TYPE N, "n435991
END OF STYPE_GSBER. "n435991
DATA : G_S_GSBER TYPE STYPE_GSBER, "n435991
G_T_GSBER TYPE STYPE_GSBER OCCURS 0. "n435991
* for the purchasing organization "n435991
TYPES : BEGIN OF STYPE_EKORG, "n435991
EKORG LIKE EKKO-EKORG, "n435991
RETCODE TYPE N, "n435991
END OF STYPE_EKORG. "n435991
DATA : G_S_EKORG TYPE STYPE_EKORG, "n435991
G_T_EKORG TYPE STYPE_EKORG OCCURS 0. "n435991
* for the plant "n435991
TYPES : BEGIN OF STYPE_WERKS, "n435991
WERKS LIKE EKPO-WERKS, "n435991
RETCODE TYPE N, "n435991
END OF STYPE_WERKS. "n435991
DATA : G_S_WERKS TYPE STYPE_WERKS, "n435991
G_T_WERKS TYPE STYPE_WERKS OCCURS 0. "n435991
DATA : G_F_TABIX LIKE SY-TABIX. "n435991
* working tables for array database access "n934526
types : begin of stype_key, "n934526
bukrs type bkpf-bukrs, "n934526
belnr type bkpf-belnr, "n934526
gjahr type bkpf-gjahr, "n934526
end of stype_key, "n934526
stab_key type standard table of "n934526
stype_key "n934526
with default key. "n934526
* Set initial values
FORM INIT.
ENDFORM.
* Selection screen: process before output
FORM PBO.
ENDFORM.
* Selection screen: process after input
FORM PAI USING FNAME MARK.
CHECK MARK = SPACE.
ENDFORM.
* Read BKPF and pass it to the selection report
FORM PUT_BKPF.
* define local working areas "n934526
data : l_t_key type stab_key, "n934526
l_t_key_block type stab_key, "n934526
l_t_bkpf type standard table of bkpf. "n934526
* ----------------------------------------------------------"n934526
* database selection improved "n934526
* at first read all FI doc keys into a lean table "n934526
data: wa like bkpf-belnr.
SELECT bukrs belnr gjahr FROM bkpf "n934526
into corresponding fields of table l_t_key "n934526
where budat in br_budat
AND GJAHR EQ BR_GJAHR-LOW
AND BLART = 'RV'
AND BLART IN BR_BLAR. "n934526
check sy-subrc is initial. "n934526
* then process the found FI doc keys in small blocks "n934526
do. "n934526
if l_t_key[] is initial. "n934526
exit. " no more keys -> leave this DO loop "n934526
endif. "n934526
* form small blocks with 100 FI docs each "n934526
refresh l_t_key_block. "n934526
append lines of l_t_key from 1 to 100 "n934526
to l_t_key_block. "n934526
delete l_t_key from 1 to 100. "n934526
* read the complete FI doc headers for the block "n934526
SELECT * FROM BKPF "n934526
into corresponding fields of table l_t_bkpf "n934526
for all entries in l_t_key_block "n934526
WHERE BUKRS = l_t_key_block-BUKRS "n934526
AND BELNR = l_t_key_block-BELNR "n934526
AND GJAHR = l_t_key_block-GJAHR. "n934526
* provide the complete structure for the PUT "n934526
loop at l_t_bkpf into bkpf. "n934526
* process this company code : authority and read T001 "n934526
PERFORM F1000_COMPANY_CODE. "n934526
* go on if the first authority check was successful "n934526
CHECK : G_S_BUKRS-RETCODE IS INITIAL. "n934526
* set the currency key and save the keys "n934526
MOVE : G_S_BUKRS-WAERS TO T001-WAERS, "n934526
BKPF-BUKRS TO BR_SBUKRS, "n934526
BKPF-BELNR TO BR_SBELNR, "n934526
BKPF-GJAHR TO BR_SGJAHR. "n934526
PUT BKPF. "n934526
endloop. "n934526
enddo. "n934526
ENDFORM.
* Read BSEG and pass it to the selection report
FORM PUT_BSEG.
* define local working areas "n934526
data : l_t_bseg type standard table of bseg. "n934526
* ----------------------------------------------------------"n934526
BR_SGSBER = BR_GSBER-LOW.
SELECT * FROM BSEG "n934526
into table l_t_bseg "n934526
WHERE BELNR EQ BR_SBELNR
AND GJAHR EQ BR_SGJAHR
AND GSBER EQ BR_SGSBER.
check sy-subrc is initial. "n934526
"n934526
loop at l_t_bseg into bseg. "n934526
MOVE BSEG-BUZEI TO BR_SBUZEI.
MOVE BSEG-EBELN TO BR_SEBELN.
MOVE BSEG-EBELP TO BR_SEBELP.
MOVE BSEG-ZEKKN TO BR_SZEKKN.
PUT BSEG.
endloop. "n934526
ENDFORM.
"n435991
FORM AUTHORITYCHECK_BKPF "n435991
"n435991
"n435991
*FORM AUTHORITYCHECK_BKPF. "n435991
"n435991
the authority-check for the company code was successful; "n435991
check authority for the document type here "n435991
"n435991
does the buffer contain this document type ? "n435991
READ TABLE G_T_BLART INTO G_S_BLART "n435991
WITH KEY BLART = BKPF-BLART BINARY SEARCH. "n435991
"n435991
CASE SY-SUBRC. "n435991
WHEN 0. "document type is known "n435991
"n435991
WHEN 4. "docment type is new --> insert "n435991
MOVE SY-TABIX TO G_F_TABIX. "n435991
PERFORM F1200_CREATE_BLART_ENTRY. "n435991
INSERT G_S_BLART INTO G_T_BLART "n435991
INDEX G_F_TABIX. "n435991
"n435991
WHEN 8. "document type is new --> append "n435991
PERFORM F1200_CREATE_BLART_ENTRY. "n435991
APPEND G_S_BLART TO G_T_BLART. "n435991
ENDCASE. "n435991
"n435991
* set the return code "n435991
MOVE G_S_BLART-RETCODE TO SY-SUBRC. "n435991
"n435991
*ENDFORM. "authoritycheck_bkpf "n435991
"n435991
"n435991
* FORM AUTHORITYCHECK_BSEG "n435991
FORM AUTHORITYCHECK_BSEG. "n435991
* does the buffer contain this business area ? "n435991
READ TABLE G_T_GSBER INTO G_S_GSBER "n435991
WITH KEY GSBER = BSEG-GSBER BINARY SEARCH. "n435991
"n435991
CASE SY-SUBRC. "n435991
WHEN 0. "business area is known "n435991
"n435991
WHEN 4. "business area is new --> insert "n435991
MOVE SY-TABIX TO G_F_TABIX. "n435991
PERFORM F1300_CREATE_GSBER_ENTRY. "n435991
INSERT G_S_GSBER INTO G_T_GSBER "n435991
INDEX G_F_TABIX. "n435991
"n435991
WHEN 8. "business area is new --> append "n435991
PERFORM F1300_CREATE_GSBER_ENTRY. "n435991
APPEND G_S_GSBER TO G_T_GSBER. "n435991
ENDCASE. "n435991
"n435991
* set the return code "n435991
MOVE G_S_GSBER-RETCODE TO SY-SUBRC. "n435991
"n435991
*ENDFORM. "authoritycheck_bseg "n435991
"n435991ABAP provides few tools to analyse the perfomance of the objects, which was developed by us.
Run time analysis transaction SE30
This transaction gives all the analysis of an ABAP program with respect to the database and the non-database processing.
SQL Trace transaction ST05
Using this tool we can analyse performance issues related to DATABASE calls.
Performance techniques to improve the performance of the object:
1) ABAP/4 programs can take a very long time to execute, and can make other processes have to wait before executing. Here are some tips to speed up your programs and reduce the load your programs put on the system:
2) Use the GET RUN TIME command to help evaluate performance. It's hard to know whether that optimization technique REALLY helps unless you test it out.
3) Using this tool can help you know what is effective, under what kinds of conditions. The GET RUN TIME has problems under multiple CPUs, so you should use it to test small pieces of your program, rather than the whole program.
4) Generally, try to reduce I/O first, then memory, then CPU activity. I/O operations that read/write to hard disk are always the most expensive operations. Memory, if not controlled, may have to be written to swap space on the hard disk, which therefore increases your I/O read/writes to disk. CPU activity can be reduced by careful program design, and by using commands such as SUM (SQL) and COLLECT (ABAP/4).
5) Avoid 'SELECT *', especially in tables that have a lot of fields. Use SELECT A B C INTO instead, so that fields are only read if they are used. This can make a very big difference.
6) Field-groups can be useful for multi-level sorting and displaying. However, they write their data to the system's paging space, rather than to memory (internal tables use memory). For this reason, field-groups are only appropriate for processing large lists (e.g. over 50,000 records). If you have large lists, you should work with the systems administrator to decide the maximum amount of RAM your program should use, and from that, calculate how much space your lists will use. Then you can decide whether to write the data to memory or swap space.
Use as many table keys as possible in the WHERE part of your select statements.
7)Whenever possible, design the program to access a relatively constant number of records (for instance, if you only access the transactions for one month, then there probably will be a reasonable range, like 1200-1800, for the number of transactions inputted within that month). Then use a SELECT A B C INTO TABLE ITAB statement.
8) Get a good idea of how many records you will be accessing. Log into your productive system, and use SE80 -> Dictionary Objects (press Edit), enter the table name you want to see, and press Display. Go To Utilities -> Table Contents to query the table contents and see the number of records. This is extremely useful in optimizing a program's memory allocation.
9) Try to make the user interface such that the program gradually unfolds more information to the user, rather than giving a huge list of information all at once to the user.
10) Declare your internal tables using OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to be accessing. If the number of records exceeds NUM_RECS, the data will be kept in swap space (not memory).
11) Use SELECT A B C INTO TABLE ITAB whenever possible. This will read all of the records into the itab in one operation, rather than repeated operations that result from a SELECT A B C INTO ITAB... ENDSELECT statement. Make sure that ITAB is declared with OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to access. (A short sketch follows this list.)
12) If the number of records you are reading is constantly growing, you may be able to break it into chunks of relatively constant size. For instance, if you have to read all records from 1991 to present, you can break it into quarters, and read all records one quarter at a time. This will reduce I/O operations. Test extensively with GET RUN TIME when using this method.
13) Know how to use the 'collect' command. It can be very efficient.
14) Use the SELECT SINGLE command whenever possible.
15) Many tables contain totals fields (such as monthly expense totals). Use these to avoid wasting resources by calculating a total that has already been calculated and stored.
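As a small illustration of tip 11 above, here is a minimal sketch (the table BKPF and the literal year are example values only, not taken from any report in this thread):
* Sketch: one array fetch instead of a row-by-row SELECT ... ENDSELECT.
DATA: lt_bkpf TYPE STANDARD TABLE OF bkpf,
      ls_bkpf TYPE bkpf.

* One round trip to the database for all matching rows.
SELECT bukrs belnr gjahr
  FROM bkpf
  INTO CORRESPONDING FIELDS OF TABLE lt_bkpf
  WHERE gjahr = '2004'.

* Process the rows in memory afterwards.
LOOP AT lt_bkpf INTO ls_bkpf.
  WRITE: / ls_bkpf-bukrs, ls_bkpf-belnr, ls_bkpf-gjahr.
ENDLOOP.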
Some tips:
1) Use joins where possible as redundant data is not fetched.
2) Use select single where ever possible.
3) Calling methods of a global class is faster than calling function modules.
4) Use constants instead of literals
5) Use WHILE instead of a DO-EXIT-ENDDO.
6) Unnecessary MOVEs should be avoided by using the explicit work area operations
See the following links for a brief insight into performance tuning:
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_Introduction.asp
1. Debugger
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
2. Run Time Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
3. SQL trace
http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
4. CATT - Computer Aided Testing Tool
http://help.sap.com/saphelp_47x200/helpdata/en/b3/410b37233f7c6fe10000009b38f936/frameset.htm
5. Test Workbench
http://help.sap.com/saphelp_47x200/helpdata/en/a8/157235d0fa8742e10000009b38f889/frameset.htm
6. Coverage Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
7. Runtime Monitor
http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
8. Memory Inspector
http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
9. ECATT - Extended Computer Aided testing tool.
http://help.sap.com/saphelp_47x200/helpdata/en/20/e81c3b84e65e7be10000000a11402f/frameset.htm
Performance tuning for Data Selection Statement
http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm -
Hi, I have some 30,000 songs in my constantly growing internal library and another 25k in my external backup library. I noticed that I lost all my ratings when fusing my libraries the last time.
As ratings and comments are essential for my work as a radio DJ: how can I fuse libraries or add songs from my internal to my external library and keep ratings and comments, and how can I make sure iTunes adds ratings and comments from songs I still have in my internal library to the same songs (without ratings and comments) in my external library? Thanks a lot for your help!
I use http://decimus.net/Synk to keep my iMac's iTunes synchronized with the backup iTunes folder on an external disk. Any changes I make are automatically synced to the backup. You might find it useful.
-
Arbitrary email addresses in send and reply fields
I have been trying to figure this out for a while, so apologies if it seems obvious. I have a domain foo.com and use many different addresses [email protected], [email protected], etc to distinguish mail from different senders. For example, if I set up an account at amazon I give an email address of [email protected] to allow simple spam filtering and categorization. Then I open a gate in mac mail by adding a rule saying allow mail to "[email protected]". All these addresses (and there are MANY of them) are directed through a single account by my host service. Though the "to" field distinguishes them, the account they come in under is associated with an official email address of "[email protected]".
Sometimes I need to send an email from one of these addresses. In particular, certain sites require an email+reply confirmation process which demands that the reply come from the [email protected] address. Often the original is sent via a web form, so mac mail plays no role. In mac mail, I can set the default to reply from a fixed address (from the list of official account email addresses) or from the official address of the "account" I last looked at, etc. I can also select from a pull down list of the official account email addresses. However, without adding a bzillion email addresses to the list by hand (and selecting or changing the prefs each time), I can't seem to simply modify the reply-from address or the sent-from address to be arbitrary. I want to do 1 of 2 things:
1. Easily and directly edit the sent from or reply from field of a single email without changing any default settings or going into a preferences pane.
or 2. Set a default of "reply-from" the address to which the email was addressed (rather than the account).
I imagine the 2nd is problematic because there is no way to identify the address which is "me" in the recipient line if it is not known (like the "official" account address). I know how to do the 1st with sendmail, but that kind of defeats the purpose of using mac mail for integration.
Does anyone know how to do this?
Thanks in advance for your help.
Cheers,
Ken
Thanks for the reply, but that doesn't quite help. I don't want to have to go into preferences and add a new address each time I reply. There are over 100 of these addresses and most get used infrequently. I want to simply edit a field in the email directly. The drop-down menu solution doesn't work with a large and constantly growing set of addresses. Imagine you have [email protected] through [email protected] on your domain bar.com. If you get an email addressed to [email protected] you wouldn't want to have to go into preferences and add that to the allowed list before being able to reply. I want to simply reply as [email protected]. There are legitimate reasons to set stuff up like this. I just want to be able to edit the "from" field in an email message the same way I can edit the "to" field. It would be equally frustrating if I had to go into preferences to add a recipient's email address every time I wanted to send one out.
-
SQL Server 2005 performance decreases with DB size while SQL Server 2012 is fine
Hi,
We have a C# windows service running that polls some files and inserts/updates some fields in database.
The service was tested on a local dev machine with SQL Server 2012 running, and performance was quite decent with any number of records. Later on the service was moved to a test stage environment where SQL Server 2005 is installed. At that point the database was still empty and the service ran just fine, but later on, after some 500k records were written, performance problems came to light. After some more tests we've found out that, basically, database operation performance in SQL Server 2005 decreases in direct correlation with the database size. Here are some testing results:
Run#                1        2        3        4        5
DB size (records)   520k     620k     720k     820k     920k

SQL Server 2005
TotalRunTime        25:25.1  32:25.4  38:27.3  42:50.5  43:51.8
Get1                00:18.3  00:18.9  00:20.1  00:20.1  00:19.3
Get2                01:13.4  01:17.9  01:21.0  01:21.2  01:17.5
Get3                01:19.5  01:24.6  01:28.4  01:29.3  01:24.8
Count1              00:19.9  00:18.7  00:17.9  00:18.7  00:19.1
Count2              00:44.5  00:45.7  00:45.9  00:47.0  00:46.0
Count3              00:21.7  00:21.7  00:21.7  00:22.3  00:22.3
Count4              00:23.6  00:23.9  00:23.9  00:24.9  00:24.5
Process1            03:10.6  03:15.4  03:14.7  03:21.5  03:19.6
Process2            17:08.7  23:35.7  28:53.8  32:58.3  34:46.9
Count5              00:02.3  00:02.3  00:02.3  00:02.3  00:02.1
Count6              00:01.6  00:01.6  00:01.6  00:01.7  00:01.7
Count7              00:01.9  00:01.9  00:01.7  00:02.0  00:02.0
Process3            00:02.0  00:01.8  00:01.8  00:01.8  00:01.8

SQL Server 2012
TotalRunTime        12:51.6  13:38.7  13:20.4  13:38.0  12:38.8
Get1                00:21.6  00:21.7  00:20.7  00:22.7  00:21.4
Get2                01:38.3  01:37.2  01:31.6  01:39.2  01:37.3
Get3                01:41.7  01:42.1  01:35.9  01:44.5  01:41.7
Count1              00:20.3  00:19.9  00:19.9  00:21.5  00:17.3
Count2              01:04.5  01:04.8  01:05.3  01:10.0  01:01.0
Count3              00:24.5  00:24.1  00:23.7  00:26.0  00:21.7
Count4              00:26.3  00:24.6  00:25.1  00:27.5  00:23.7
Process1            03:52.3  03:57.7  03:59.4  04:21.2  03:41.4
Process2            03:05.4  03:06.2  02:53.2  03:10.3  03:06.5
Count5              00:02.8  00:02.7  00:02.6  00:02.8  00:02.7
Count6              00:02.3  00:03.0  00:02.8  00:03.4  00:02.4
Count7              00:02.5  00:02.9  00:02.8  00:03.4  00:02.5
Process3            00:21.7  00:21.0  00:20.4  00:22.8  00:21.5
One more thing: it is not the Process2 table that constantly grows in size but the Process1 table, which gets almost 100k records each run.
After that SQL Server 2005 has also been installed on a dev machine just to test things and we got exactly the same results. Both SQL Server 2005 and 2012 instances are installed using default settings with no changes at all. The same goes for databases
created for the service.
So the question is - why are there such huge differences between performance of SQL Server 2005 and 2012? Maybe there are some settings that are set by default in SQL Server 2012 database that need to be set manually in 2005?
What else can I try to test? The main problem is that production SQL Server will be updated god-knows-when and we can't just wait for that.
Any suggestions/advices are more than welcome....
"One more thing is that it's not Process2 table that constantly grows in size but is Process1 table, that gets almost 100k records each run...."
Hi,
It is not clear to me what you are doing, but now we have a better understanding of ONE of your tables, and obviously you will get worse results as the data becomes bigger. Actually your table looks like a table built automatically by an ORM like Entity Framework, and its DDL probably does not match your needs. For example, if your select query filters on a column other than [setID], then you have no index for it and the server probably has to scan the entire table in order to find the records that you need.
A forum is a suitable place to seek general advice rather than advice about a specific system (as I mentioned before, we are not familiar with your system). For example, the fact that you have no index except the index on the [setID] column can indicate a problem. Ultimately, optimizing the system will require investigating it more thoroughly (at which point a forum is no longer the appropriate place... but we're not there yet). Another point is that now we can see that you are using a [timestamp] column, which implies that you are using this column as a filter for selecting the data. If so, then maybe a better DDL would be to use a clustered index on this column and, if needed, a nonclustered index on [setID], if it is needed at all (see the sketch below).
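As a hedged sketch only (dbo.Process1 and the index names here are hypothetical; the right key depends on the queries you actually run), the suggestion above would look like:
-- cluster on the column used as the range filter...
CREATE CLUSTERED INDEX IX_Process1_timestamp ON dbo.Process1 ([timestamp]);
-- ...and keep a nonclustered index for lookups by setID, if still needed
CREATE NONCLUSTERED INDEX IX_Process1_setID ON dbo.Process1 ([setID]);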
What is obvious is that the next step is to check whether this DDL fits your specific needs (as I mentioned before).
The step after that is to understand what actions you perform on this table: (1) what is the query that becomes slow on a bigger data set, and (2) are you using an ORM (object-relational mapping, like Entity Framework code first), and if so, which one?
-
Performance issue in abap program
hi,
how can we improve the performance of an ABAP program?
hi,
read the following links.
ABAP provides a few tools to analyse the performance of the objects we develop.
Run time analysis transaction SE30
This transaction gives all the analysis of an ABAP program with respect to the database and the non-database processing.
SQL Trace transaction ST05
Using this tool we can analyse performance issues related to DATABASE calls.
Performance techniques to improve the performance of the object:
1) ABAP/4 programs can take a very long time to execute, and can make other processes have to wait before executing. Here are some tips to speed up your programs and reduce the load your programs put on the system:
2) Use the GET RUN TIME command to help evaluate performance. It's hard to know whether that optimization technique REALLY helps unless you test it out.
3) Using this tool can help you know what is effective, under what kinds of conditions. The GET RUN TIME has problems under multiple CPUs, so you should use it to test small pieces of your program, rather than the whole program.
4) Generally, try to reduce I/O first, then memory, then CPU activity. I/O operations that read/write to hard disk are always the most expensive operations. Memory, if not controlled, may have to be written to swap space on the hard disk, which therefore increases your I/O read/writes to disk. CPU activity can be reduced by careful program design, and by using commands such as SUM (SQL) and COLLECT (ABAP/4).
5) Avoid 'SELECT *', especially in tables that have a lot of fields. Use SELECT A B C INTO instead, so that fields are only read if they are used. This can make a very big difference.
6) Field-groups can be useful for multi-level sorting and displaying. However, they write their data to the system's paging space, rather than to memory (internal tables use memory). For this reason, field-groups are only appropriate for processing large lists (e.g. over 50,000 records). If you have large lists, you should work with the systems administrator to decide the maximum amount of RAM your program should use, and from that, calculate how much space your lists will use. Then you can decide whether to write the data to memory or swap space.
Use as many table keys as possible in the WHERE part of your select statements.
7)Whenever possible, design the program to access a relatively constant number of records (for instance, if you only access the transactions for one month, then there probably will be a reasonable range, like 1200-1800, for the number of transactions inputted within that month). Then use a SELECT A B C INTO TABLE ITAB statement.
8) Get a good idea of how many records you will be accessing. Log into your productive system, and use SE80 -> Dictionary Objects (press Edit), enter the table name you want to see, and press Display. Go To Utilities -> Table Contents to query the table contents and see the number of records. This is extremely useful in optimizing a program's memory allocation.
9) Try to make the user interface such that the program gradually unfolds more information to the user, rather than giving a huge list of information all at once to the user.
10) Declare your internal tables using OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to be accessing. If the number of records exceeds NUM_RECS, the data will be kept in swap space (not memory).
11) Use SELECT A B C INTO TABLE ITAB whenever possible. This will read all of the records into the itab in one operation, rather than repeated operations that result from a SELECT A B C INTO ITAB... ENDSELECT statement. Make sure that ITAB is declared with OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to access.
12) If the number of records you are reading is constantly growing, you may be able to break it into chunks of relatively constant size. For instance, if you have to read all records from 1991 to present, you can break it into quarters, and read all records one quarter at a time. This will reduce I/O operations. Test extensively with GET RUN TIME when using this method.
13) Know how to use the 'collect' command. It can be very efficient.
14) Use the SELECT SINGLE command whenever possible.
15) Many tables contain totals fields (such as monthly expense totals). Use these to avoid wasting resources by calculating a total that has already been calculated and stored.
Some tips:
1) Use joins where possible as redundant data is not fetched.
2) Use select single where ever possible.
3) Calling methods of a global class is faster than calling function modules.
4) Use constants instead of literals
5) Use WHILE instead of a DO-EXIT-ENDDO.
6) Unnecessary MOVEs should be avoided by using the explicit work area operations
See the following links for a brief insight into performance tuning:
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_Introduction.asp
http://help.sap.com/saphelp_nw2004s/helpdata/en/d1/801f7c454211d189710000e8322d00/frameset.htm
regards
Rohan -
hello.
Has anyone ever seen this, or does anyone know how to resolve it?...
I created a logical standby database which works at the beginning. All ddl and dml gets propagated until I try a shutdown of the primary. When I bring back the primary, archive logs are still being sent correctly to standby, but standby stalls. SELECT APPLIED_SCN, NEWEST_SCN FROM DBA_LOGSTDBY_PROGRESS shows no progress for applied_scn.
CPU is at 100% from one of the Parallel Query processes. I don't see any locks being held on any objects. Looking at v$sqlarea, the following query shows an enormous and constantly growing rows_processed; parse and execute counts are huge and growing as well. It seems that this process is stuck in an infinite loop running this query (I gave it 2 days - no change). This query is also at the bottom of the lsp trace log. I tried a different machine for a standby - same problem. I couldn't find anything on Metalink or in any documentation.
thanks,
Henry
offending query:
SELECT C.SEGCOL#, C.INTCOL#, C.COLNAME, C.TYPE#, C.LENGTH, C.PRECISION,
C.SCALE, C.Interval_Leading_Precision,
C.Interval_Trailing_Precision, C.PROPERTY,
C.CHARSETID, C.CHARSETFORM
FROM SYSTEM.LOGMNRC_GTCS C
WHERE C.LOGMNR_UID = :Logminer_ID
AND C.OBJV# = :ObjVersion
AND C.OBJ# = :ObjNum
ORDER BY C.SEGCOL#
alert log:
Fri Dec 17 18:08:50 2004
WARNING: the following transaction makes no progress
WARNING: in the last 300 seconds for the given message!
WARNING: xid =
0x0008.012.00000283 cscn = 5835031, message# = 2, slavid = 1
lsp trace:
*** 2004-12-17 18:13:53.263
WARNING: the following transaction makes no progress
WARNING: in the last 300 seconds for the given message!
WARNING: xid = 0x0008.012.00000283 cscn = 5835031, message# = 2, slavid = 1
KNACDMP: *******************************************************
KNACDMP: Dumping apply coordinator's context at efffe9a8
KNACDMP: Apply Engine # 0
KNACDMP: Apply Engine name
KNACDMP: Coordinator's Watermarks ------------------------------
KNACDMP: Apply High Watermark = 0x0000.00590901
KNACDMP: Apply Low Watermark = 0x0000.00590901
KNACDMP: Fetch Low Watermark = 0x0000.00590934
KNACDMP: Oldest SCN = 0x0000.00590874
KNACDMP: Last replicant syncpoint SCN = 0x0000.00000000
KNACDMP: Last syncpoint at primary SCN = 0x0000.00590901
KNACDMP: First partition max SCN = 0x0000.005a89e7
KNACDMP: Last partition max SCN = 0x0000.005a89e7
KNACDMP: Last processed SCN = 0x0000.00590901
KNACDMP: Coordinator's constants -------------------------------
KNACDMP: number of apply slaves = 5
KNACDMP: safety level (K) = 1
KNACDMP: max txns in memory = 40000
KNACDMP: max constraints per table = 119
KNACDMP: hash table size (in entries) = 40000
KNACDMP: Coordinator's intervals -------------------------------
KNACDMP: syncpoint interval (ms) = 0
KNACDMP: write low watermark interval(ms)= 1
KNACDMP: Coordinator's timers/counters -------------------------
KNACDMP: current time = 1103336032
KNACDMP: low watermark timer = 0
KNACDMP: shutdown counter = 1
KNACDMP: syncpoint timer = 1103335430
KNACDMP: Coordinator's State/Flags -----------------------------
KNACDMP: Coordinator's State = KNACST_APPLY_UNTIL_END
KNACDMP: Coordinator's Flags = 4
KNACDMP: Slave counts ------------------------------------------
KNACDMP: number of reserved slaves = 0
KNACDMP: number of admin slaves = 0
KNACDMP: number of slaves in wait cmt = 3
KNACDMP: number of safe slaves = 4
KNACDMP: Slave Lists -------------------------------------------
KNACDMP: Dumping All Slaves :-
Slave id = 0, State = 8, Flags = 0, Not Assigned
Slave id = 1, State = 5, Flags = 1, Assigned Xid = 0x0008.012.00000283
Slave id = 2, State = 6, Flags = 3, Assigned Xid = 0x0009.003.0000028a
Slave id = 3, State = 6, Flags = 3, Assigned Xid = 0x0003.02d.0000028b
Slave id = 4, State = 7, Flags = 3, Assigned Xid = 0x0009.022.00000289
Slave id = 5, State = 0, Flags = 0, Not Assigned
KNACDMP: End dumping all slaves
KNACDMP: syncdep slaves = { 2 3 }
KNACDMP: cont chunk slaves = { }
KNACDMP: cont slaves = { }
KNACDMP: exec txn slaves = { }
KNACDMP:Idle slaves (1) ={ 5 }
KNACCPD: *******************************************************
v$lock information for this slave is:
type:PS, id1:1, id2:4, lmode:4, request:0
type:SR, id1:1, id2:0, lmode:4, request:0
Current SQL for this slave is:
SELECT C.SEGCOL#, C.INTCOL#, C.COLNAME, C.TYPE#, C.LENGTH, C.PRECISION,
C.SCALE, C.Interval_Leading_Precision, C.Interval_Trailing_Precision,
C.PROPERTY, C.CHARSETID, C.CHARSETFORM
FROM SYSTEM.LOGMNRC_GTCS C
WHERE C.LOGMNR_UID = :Logminer_ID AND C.OBJV# = :ObjVersion AND C.OBJ# = :ObjNum
ORDER BY C.SEGCOL#
KNACCPD: end ***************************************************
Check the following document for the procedure to set up a logical standby database:
Creating a Logical Standby Database
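If rebuilding the standby is not an option, here is a hedged sketch of the usual first aid. ALTER DATABASE STOP/START LOGICAL STANDBY APPLY and DBMS_LOGSTDBY.SKIP_TRANSACTION are documented Oracle features; the three numbers below are the hex XID from the warning (0x0008.012.00000283) decoded to decimal, so verify them before running anything:
-- First try simply bouncing SQL apply:
ALTER DATABASE STOP LOGICAL STANDBY APPLY;
ALTER DATABASE START LOGICAL STANDBY APPLY;

-- Last resort, with apply stopped: skip the stuck transaction
-- (undo segment 8, slot 18, sequence 643, from the hex XID above)
EXECUTE DBMS_LOGSTDBY.SKIP_TRANSACTION(8, 18, 643);
ALTER DATABASE START LOGICAL STANDBY APPLY;
Note that skipping a transaction can leave the standby logically inconsistent with the primary, so treat it as a diagnostic step rather than a fix.
-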
Hi, could anyone suggest how I can improve performance in FI when generating tax reports? And which tables should be used for tax generation?
Hi,
here are some general performance-improvement steps you can check; they may be helpful for you.
ABAP/4 programs can take a very long time to execute and can make other processes wait. Here are some tips to speed up your programs and reduce the load they put on the system:
- Use the GET RUN TIME command to help evaluate performance. It's hard to know whether an optimization technique REALLY helps unless you test it out; using this tool can help you learn what is effective under what kinds of conditions. GET RUN TIME has problems under multiple CPUs, so you should use it to test small pieces of your program rather than the whole program.
- Generally, try to reduce I/O first, then memory, then CPU activity. I/O operations that read/write to hard disk are always the most expensive operations. Memory, if not controlled, may have to be written to swap space on the hard disk, which in turn increases your I/O reads/writes to disk. CPU activity can be reduced by careful program design and by using commands such as SUM (SQL) and COLLECT (ABAP/4).
- Avoid 'SELECT *', especially in tables that have a lot of fields. Use SELECT A B C INTO instead, so that fields are only read if they are used. This can make a very big difference (see the sketch after this list).
- Field-groups can be useful for multi-level sorting and displaying. However, they write their data to the system's paging space rather than to memory (internal tables use memory). For this reason, field-groups are only appropriate for processing large lists (e.g. over 50,000 records). If you have large lists, you should work with the systems administrator to decide the maximum amount of RAM your program should use, and from that calculate how much space your lists will use. Then you can decide whether to write the data to memory or swap space. See the Fieldgroups ABAP example.
- Use as many table keys as possible in the WHERE part of your SELECT statements.
- Whenever possible, design the program to access a relatively constant number of records (for instance, if you only access the transactions for one month, there will probably be a reasonable range, like 1200-1800, for the number of transactions entered within that month). Then use a SELECT A B C INTO TABLE ITAB statement.
- Get a good idea of how many records you will be accessing. Log on to your productive system and use SE80 -> Dictionary Objects (press Edit), enter the table name you want to see, and press Display. Go to Utilities -> Table Contents to query the table contents and see the number of records. This is extremely useful in optimizing a program's memory allocation.
- Try to make the user interface such that the program gradually unfolds more information to the user, rather than presenting a huge list all at once.
- Declare your internal tables using OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to be accessing. If the number of records exceeds NUM_RECS, the data will be kept in swap space (not memory).
- Use SELECT A B C INTO TABLE ITAB whenever possible. This reads all of the records into the itab in one operation, rather than the repeated operations that result from a SELECT A B C INTO ITAB ... ENDSELECT statement. Make sure that ITAB is declared with OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to access.
- If the number of records you are reading is constantly growing, you may be able to break it into chunks of relatively constant size. For instance, if you have to read all records from 1991 to the present, you can break it into quarters and read all records one quarter at a time. This will reduce I/O operations. Test extensively with GET RUN TIME when using this method.
- Know how to use the COLLECT command. It can be very efficient (see the sketch after this list).
- Use the SELECT SINGLE command whenever possible.
- Many tables contain totals fields (such as monthly expense totals). Use these to avoid wasting resources by recalculating a total that has already been calculated and stored.
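A minimal ABAP sketch of the array-fetch, WHERE-on-key, COLLECT, and SELECT SINGLE tips above. BKPF (FI document headers) and T001 (company codes) are real SAP tables; the company code and fiscal year values are placeholders, and the sketch uses the newer TYPE STANDARD TABLE syntax rather than OCCURS:
REPORT z_perf_sketch.  " hypothetical program name

TYPES: BEGIN OF ty_doc,
         bukrs TYPE bkpf-bukrs,  " company code
         belnr TYPE bkpf-belnr,  " document number
         blart TYPE bkpf-blart,  " document type
       END OF ty_doc,
       BEGIN OF ty_cnt,
         blart TYPE bkpf-blart,  " non-numeric field: COLLECT key
         count TYPE i,           " numeric field: summed by COLLECT
       END OF ty_cnt.

DATA: lt_docs  TYPE STANDARD TABLE OF ty_doc,
      ls_doc   TYPE ty_doc,
      lt_cnts  TYPE STANDARD TABLE OF ty_cnt,
      ls_cnt   TYPE ty_cnt,
      lv_waers TYPE t001-waers.

* One array fetch instead of SELECT ... ENDSELECT, no SELECT *,
* and as much of the table key as possible in the WHERE clause.
SELECT bukrs belnr blart
  FROM bkpf
  INTO TABLE lt_docs
  WHERE bukrs = '1000'
    AND gjahr = '2004'.

* COLLECT sums the numeric fields of rows whose key fields match,
* yielding a per-document-type count without manual lookups.
LOOP AT lt_docs INTO ls_doc.
  ls_cnt-blart = ls_doc-blart.
  ls_cnt-count = 1.
  COLLECT ls_cnt INTO lt_cnts.
ENDLOOP.

* SELECT SINGLE when exactly one row is wanted by key.
SELECT SINGLE waers FROM t001 INTO lv_waers WHERE bukrs = '1000'.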
You can also check the good link below:
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_PerformanceAnalysisTools.asp
~~Guduri
-
[CS5.5/6] - XML / Data Merge questions & Best practice.
Fellow Countrymen (and women),
I work as a graphic designer for a large outlet chain retailer that is constantly growing its base of centers. This growth has turned a workload that used to be manageable with just two people into a never-ending sprint with five. Much of what we do is print, which is not my forte, and it is also generally a disorganized, ad-hoc affair into which I am wading to try to help reduce overall strain.
Upon picking up InDesign I noted the power of the simple Data Merge function and have added it to our repertoire for mass-merging data sources. There are some critical failures I see in this as a tool going forward for our purposes, however:
1) Data Merge cannot handle information stored and categorized in a single column well. As an example, we have centers in many cities, and each center has its own list of specific stores. Data Merge cannot easily handle a single-column, or even multi-column, list of these stores, and has forced us into manual operations: concatenating the data into one cell and then, using delimiter characters, finding and replacing hard returns to separate them.
2) Data Merge offers no method of alternate alignment of data, or selection by ranges. That is to say: I cannot tell Data Merge to start at Cell 1 in one column and, in another column, select, say, Cell 42 as the starting point.
3) Data Merge only accepts data organized in a very specific and generally inflexible pattern.
These are just a few limitations.
ON TO MY ACTUAL DILEMMA aka Convert to XML or not?
Recently my coworker has suggested we move toward using XML as a repository / delivery system that helps us quickly get data from our SQL database into a usable form in InDesign.
I've watched some tutorials on Lynda.com and haven't yet seen a clear answer to a very simple question:
"Can XML help to 'merge' large, dynamic, data sets like a list of 200 stores per center over 40 centers based off of a single template file?"
What I've seen suggests I would need to manually duplicate pages, linking the correct XML entry as I go, rather than having the program generate a set of merged pages, as Data Merge does, with very little effort on my part. Perhaps setting up a master page would allow for easy drag-and-drop fields for my XML data?
I'm not an idiot, I'm simply green with this -- and it's kind of scary because I genuinely want us to proceed forward with the most flexible, reliable, trainable and sustainable solution. A tall order, I know. Correct me if I'm wrong, but XML is that beast, no?
Formatting the XML
Currently I'm afraid our XML feed for our centers isn't formatted correctly, with the current format looking like this:
<BRANDS>
  <BRAND BrandID="xxxx">
    [Brand Name]
    [Description]
    [WebMoniker]
    <CATEGORIES>
      <CATEGORY xmlns="URL" WebMoniker="category_type">
        <STORES>
          <STORE StoreID="ID#" CenterID="ID#">
I don't think this is currently usable, because if I wanted to create a list of stores for a particular center, that information is stored as an attribute of the <STORE> tag, buried deep within the data, making it impossible to 'drag-n-drop'.
Not to mention that much of the important data is held in attributes rather than in text fields that are children of the tag.
I'm thinking of proposing the following organizational layout:
<CENTERS>
  <CENTER>
    [Center_name]
    [Center_location]
    <CATEGORIES>
      <CATEGORY>
        [Category_Type]
        <BRANDS>
          <BRAND>
            [Brand_name]
My thought is that if I have the <CENTER> tag, I can simply drag it into a frame and it will auto-populate all of the brands, by category (as organized in the XML), for that center into the frame.
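For illustration, a single populated record under that proposed layout might look like this (the center, category, and brand names are invented):
<CENTERS>
  <CENTER>
    <Center_name>Sample Town Center</Center_name>
    <Center_location>Sample Town, ST</Center_location>
    <CATEGORIES>
      <CATEGORY>
        <Category_Type>Apparel</Category_Type>
        <BRANDS>
          <BRAND>
            <Brand_name>Example Brand Co.</Brand_name>
          </BRAND>
        </BRANDS>
      </CATEGORY>
    </CATEGORIES>
  </CENTER>
</CENTERS>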
Why is this important?
This is used in multiple documents with different layout styles, and since our store list is ever-changing as leases end or begin, across 40 centers this becomes a big hairy monster. We want this to be as automated as possible, but I'd settle for a significant amount of dragging and dropping as long as it is simple and straightforward. I have a high tolerance for drudging through code and creating workarounds, but my co-workers do not. This needs to be a system that is repeatable and understandable, and it needs to function whether I'm here or not -- mainly because I would like to step away from the responsibility of setting it up every time.
I'd love to hear your raw, unadulterated thoughts on the subject of Data Merge and XML usage to accomplish these sorts of tasks. What are your best practices, and how would you (or do you) accomplish these operations?
Regards-
Robert
From what I've gleaned from watching Lynda tutorials on the subject, what I'm hoping to do is indeed possible.
Peter, I don't disagree with you that there is a steep learning curve for me as the instigator / designer of this method for our team, but for my teammates and end users that will be softened considerably. Even so, I'm used to steep learning curves and the associated frustrations; I cope well with new learning and am self-taught in many tools and programs.
Flow-based XML structures:
It seems that, as long as the initial page is set up correctly using imported XML, individual data records that cascade in a logical fashion can be flowed automatically onto new pages. Basically, you create an XML-based layout with the dynamic portion you wish to flow in a single frame, apply paragraph styles to the different tags appropriately, and then, after deleting unused records, reimport the XML with some specific boxes checked (depending on how you wish to proceed).
From there, simply dragging the data root into the frame will cause overset text as it imports all the XML information into the frame. Assuming everything is cascaded correctly, using auto-flow will cause new pages to be generated automatically with the tags correctly placed, in a fashion similar to Data Merge -- but far more powerful and flexible.
The issue then again comes down to data organization in the XML file. In order to use this method, the data must be organized in the same order in which it will be displayed. For example, if I had a Lastname field and then a Firstname field, in that order, I could not call Firstname first without faulting the document when using the flow method. I could, however, still drag and drop content from each tag into the frame, and it would populate correctly regardless of the order of appearance in the XML.
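A tiny sketch of that ordering constraint, using the field names from the example above:
<record>
  <Lastname>Doe</Lastname>
  <Firstname>Jane</Firstname>
</record>
With flow-based import, the layout must present Lastname before Firstname, matching the source order; only manual drag-and-drop placement can reorder them.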
Honestly, either method would be fantastic for our current set of projects; however, the flow method may be particularly useful for jobs that require more than 40 spreads, or for simple layouts with huge amounts of data to be merged.