Information On The Performance
On the Mac mini, are 2 GB of RAM and a 160 GB HDD enough to handle everyday tasks, like editing and general day-to-day use?
Also, can I hook this monitor up to the Mac mini?
http://www.newegg.com/Product/Product.aspx?Item=N82E16824112018
Please let me know about the questions I asked.
The base configuration works fine for everyday tasks. You do not say what kind of editing; if you are working with Pro apps, you would be better off maxing the RAM out at 4 GB.
That display should work fine with the Mac mini. The mini ships with an Apple mini-DVI to DVI adapter, so you could get a DVI to DVI cable and use the display's DVI port. Or you could get a DVI to HDMI cable and use the HDMI port.
Neither of the mini's video ports (mini-DVI or mini-DisplayPort) carries an audio signal.
Dah•veed
Similar Messages
-
Where can I find information about the performance of the ATG scenario engine?
Hi all,
A key selling point of ATG is that business users can launch campaigns directly using scenarios.
Its greatest strength is that site-tracking information can be used as an element of a scenario.
We already collect site-tracking information for our business in Japan;
however, at present we use it only for count-level analysis.
I want to pioneer a totally new kind of digital marketing in Japan.
Many scenarios need to run in order to realize 'one-to-one marketing'.
However, I have questions about the performance of this approach.
First, do you know of examples where scenarios are used actively to deliver offers and recommendations?
Also, do you know which parts of the sizing matter most to make this work?
Regards,
hirofumi
The ATG scenario engine is one of the features ATG provides for a personalised user experience, but its major limitation is that it is event driven: you need to design your e-commerce application so that you can identify the possible events and decide on appropriate actions. This helps in a few cases, but for a larger e-commerce application it is very limited.
--First, do you know of examples where scenarios are used actively to deliver offers and recommendations?
This will be very difficult to achieve using a scenario; you can display some basic promotions to the customer, but not real recommendations. Remember, first of all, that a scenario is not a recommendation engine.
It is also not useful for anonymous customers, since tracking anonymous customer behaviour is extremely difficult without a proper recommendation engine; you either have to build one yourself or go with a third-party recommendation engine.
Cheers
R -
I keep getting the above error on all my SQL 2012 deployments (standard and enterprise) on all Windows Server 2012 Standard machines. I have already tried the following commands in administrator mode to resolve without success:
lodctr /T:perf-MSSQLSERVER-sqlctr11.1.3000.0.dll
lodctr /T:MSSQLSERVER
lodctr /R:perf-MSSQLSERVER-sqlctr11.1.3000.0.dll
lodctr /R
Any other suggestions?
Diane
Hi,
Add the service accounts to the local server’s “Performance Monitor Users” group.
Configure Windows Service Accounts and Permissions
http://msdn.microsoft.com/en-IN/library/ms143504.aspx#VA_Desc -
How to improve the performance of General information in MSS
HI,
We are facing a critical performance issue in the General Information screen of MSS.
For one employee, the number of subordinates is more than 3,500. Because of this, the number of database hits needed to fetch all the records is more than 20,000; each database hit takes some time, and the overall time to fetch all the records is around 2 minutes, which is not acceptable.
So we want to either cut down the number of hierarchy levels shown to the manager or, keeping the same number of levels, find a way to improve the performance.
Any input will be highly appreciated and rewarded with points.
Bala Duvvuri
Hi,
We had a performance problem because the General Information screen for managers in MSS reads its data from HRP1001, and if a manager has a lot of data it takes time.
We rectified this by creating an index that caches all the users who have a large number of objects.
We created a job that builds this index and executes it every day. -
Where to find the performance counter information?
Code:
using System;
using System.Diagnostics;

class Program
{
    static void Main(string[] args)
    {
        if (CreatePerformanceCounters())
        {
            Console.WriteLine("Created performance counters");
            Console.WriteLine("Please restart application");
            Console.ReadKey();
            return;
        }
        var totalOperationsCounter = new PerformanceCounter(
            "MyCategory",
            "# operations executed",
            false);
        var operationPerSecondCounter = new PerformanceCounter(
            "MyCategory",
            "# operations / sec",
            false);
        totalOperationsCounter.Increment();
        operationPerSecondCounter.Increment();
    }

    private static bool CreatePerformanceCounters()
    {
        if (!PerformanceCounterCategory.Exists("MyCategory"))
        {
            // Note: the counter names here must match the names used above exactly.
            CounterCreationDataCollection counters = new CounterCreationDataCollection
            {
                new CounterCreationData(
                    "# operations executed",
                    "Total number of operations executed",
                    PerformanceCounterType.NumberOfItems32),
                new CounterCreationData(
                    "# operations / sec",
                    "Number of operations executed per second",
                    PerformanceCounterType.RateOfCountsPerSecond32)
            };
            PerformanceCounterCategory.Create("MyCategory",
                "Sample category for Codeproject", counters);
            return true;
        }
        return false;
    }
}
After running it, it displays two lines of information on the screen. But where are these counter values stored? I want to look at them.
Hello ardmore,
>>After running it, it displays two lines of information on the screen. But where are these counter values stored? I want to look at them.
You can view them in Performance Monitor:
http://technet.microsoft.com/en-us/library/cc749249.aspx
Or see it in the Visual Studio directly:
In the Server Explorer of Visual Studio, expand Servers and find Performance Counters; you will see the MyCategory counter along with the other system counters.
Regards.
-
Just to give an example, the Microsoft IIS web server provides performance counters, which one can read from the registry using APIs provided by Microsoft from a C/C++ program to get all the performance-related data... Are there similar interfaces provided by the iPlanet web server?
The spell-checked version...
I really appreciate the replies. I have looked into RMI and think it might fit the bill, BUT let me clarify a bit and see if there are any other ideas floating around out there.
A user, using a web interface on machine A, will click the "I want my file" button. This flags the DB to create a file for that user. I will have multiple daemons running on other machines B, C, D whose sole job is to check the DB, compile the file (this is the part that takes huge overhead, so I wanted it distributed), then write the file. The trick is that the file needs to be written to machine A. So I figure, using RMI, I can write a simple "write this file" object that will accept a byte stream or byte array and a correct path to write to. Does this sound like a good methodology?
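For illustration only, a minimal Java sketch of that "write this file" remote object could look like the following. The names FileSink and FileSinkImpl are hypothetical, and the RMI registry / UnicastRemoteObject export plumbing is omitted, so this shows only the shape of the interface and a plain local call:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical remote interface: machine A would export this, daemons B, C, D call it.
interface FileSink extends Remote {
    void writeFile(String path, byte[] data) throws RemoteException, IOException;
}

// Implementation living on machine A that writes the received bytes to local disk.
// (A real deployment would extend UnicastRemoteObject and bind this in an RMI
// registry; that plumbing is left out of this sketch.)
class FileSinkImpl implements FileSink {
    public void writeFile(String path, byte[] data) throws IOException {
        try (FileOutputStream out = new FileOutputStream(path)) {
            out.write(data);
        }
    }
}

public class FileSinkDemo {
    public static void main(String[] args) throws Exception {
        // Local call for illustration; over RMI the call site looks identical.
        FileSink sink = new FileSinkImpl();
        String path = System.getProperty("java.io.tmpdir") + "/demo.txt";
        sink.writeFile(path, "hello".getBytes());
    }
}
```

Since the daemons only ever send a byte array and a path, the interface stays small; the main design questions left open are error reporting back to the DB and what happens when two daemons pick up the same job.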
I've never done anything like this so I am really shooting in the dark.
Thanks again for the posts.
Paul -
Re: How to Improve the performance on Rollup of Aggregates for PCA Infocube
Hi BW Guru's,
I have an unresolved issue and our team is still working on it.
I have already posted several questions on this, but I am still not clear on how to reduce the time taken by the Rollup of Aggregates process.
I have requested an OSS note and searched myself, but still could not find one.
Finally, I executed one of the cubes in RSRV with the database check
"Database indexes of an InfoCube and its aggregates" and got warning messages. I tried to correct the errors and executed it again, but still found warnings. The messages are as follows (this is for one InfoCube only; we have 6 InfoCubes and I am executing them one by one):
ORACLE: Index /BI0/IACCOUNT~0 has possibly degenerated
ORACLE: Index /BI0/IPROFIT_CTR~0 has possibly degenerated
ORACLE: Index /BI0/SREQUID~0 has possibly degenerated
ORACLE: Index /BIC/D1001072~010 has possibly degenerated
ORACLE: Index /BIC/D1001132~010 has possibly degenerated
ORACLE: Index /BIC/D1001212~010 has possibly degenerated
ORACLE: Index /BIC/DGPCOGC062~01 has possibly degenerated
ORACLE: Index /BIC/IGGRA_CODE~0 has possibly degenerated
ORACLE: Index /BIC/QGMAPGP1~0 has possibly degenerated
ORACLE: Index /BIC/QGMAPPC2~0 has possibly degenerated
ORACLE: Index /BIC/SGMAPGP1~0 has possibly degenerated
I don't know how to move further on this; can anyone tell me how to tackle this problem and improve the performance of the Rollup of Aggregates (PCA InfoCubes)?
I regularly create indexes and statistics to improve the performance; it works for a couple of days, and then the performance of the rollup of aggregates gradually comes down again.
Thanks and Regards,
Venkat
Hi,
Check in a SQL client the SQL created by BI against the query that you run directly on your physical layer.
The time difference between these two should be 2-3 seconds; otherwise you have problems (these seconds are for scripts needed by BI).
If you use LIKE in your SQL, then forget about indexes...
For more information about indexes, check Google or ask your DBA.
Last, as I mentioned, a materialized view is not perfect, but it helps a lot... so why not try to split it into smaller ones?
For example, with
logical dimensions:
year - half - day
company - department
and fact:
quantity
instead of making one, make 3:
year - department - quantity
half - department - quantity
day - department - quantity
and add them as data sources, assigning each the appropriate logical level in the business layer in the Administration tool...
Do you use the partitioning functionality?
I hope I helped...
http://greekoraclebi.blogspot.com/
-
Regarding the performance in report
Hi Abap Gurus,
I am working on a report. The problem is that after executing the report, the data takes around 11 hours to extract. The required data is coming out perfectly; how can I improve the performance? Any tips on improving the performance of the report would be welcome. If possible, please post example code.
Moderator Message: Please search the forum for available information.
Edited by: kishan P on Oct 19, 2010 4:50 PM
Hi,
Please check below thread;
Extract from ALV List
Regards
Jana -
How to improve the performance of this code
Hi gurus
The code is given below, together with the LDB.
The code looks big, but most of the lines are commented out.
Please help; it's urgent.
Thanks in advance.
*& Report ZSALES_RECON
REPORT ZSALES_RECON.
TYPE-POOLS : SLIS.
nodes: bseg , bkpf.
data : begin of zbseg occurs 0,
kunnr like bseg-kunnr,
*lifnr like bseg-lifnr,
dmbtr like bseg-dmbtr,
*shkzg like bseg-shkzg,
*gsber like bseg-gsber,
bschl like bseg-bschl,
*sgtxt like bseg-sgtxt,
total like bseg-dmbtr,
hkont like bseg-hkont,
BUDAT LIKE Bkpf-BUDAT,
belnr LIKE BSEG-belnr,
cash like bseg-dmbtr,
credit like bseg-dmbtr,
abn_voucher like bseg-dmbtr,
barista_voucher like bseg-dmbtr,
accor like bseg-dmbtr,
sodexho like bseg-dmbtr,
gift like bseg-dmbtr,
corp like bseg-dmbtr,
card like bseg-dmbtr,
miscellaneous like bseg-dmbtr,
werks like bseg-werks,
gjahr like bseg-gjahr,
SR_NO TYPE I,
shkzg like bseg-shkzg,
end of zbseg,
TP_TBL_DATA like ZBSEG.
DATA : idx TYPE sy-tabix.
* Report data to be shown.
data: it_data like ZBSEG.
* Heading of the report.
data: t_heading type slis_t_listheader.
AT SELECTION-SCREEN.
get bkpf.
START-OF-SELECTION.
data : sum_mis like bseg-dmbtr,
sum_abn like bseg-dmbtr,
sum_cash like bseg-dmbtr,
sum_credit like bseg-dmbtr,
sum_card like bseg-dmbtr,
sum_barista_voucher like bseg-dmbtr,
sum_accor like bseg-dmbtr,
sum_sodexho like bseg-dmbtr,
sum_gift like bseg-dmbtr,
sum_corp like bseg-dmbtr.
data : wa1_total like bseg-dmbtr.
data : wa_belnr like bseg-belnr,
wa_kunnr like bseg-kunnr,
wa_werks like bseg-werks,
belnr1 like bseg-belnr,
wa_sr_no type i.
GET BSEG.
data : wa like line of zbseg.
data : count type i,
count1 type i.
move-corresponding bseg to zbseg.
*idx = sy-tabix.
on change of zbseg-belnr.
wa_kunnr = zbseg-kunnr.
wa_kunnr = wa_kunnr+6(4).
select single werks into wa_werks from bseg where belnr = zbseg-belnr
and kunnr = '' and gjahr = zbseg-gjahr.
if wa_kunnr = wa_werks.
if zbseg-bschl <> '01'.
clear: sum_mis,wa1_total,sum_abn,sum_cash,sum_credit,sum_card,
sum_barista_voucher,sum_accor,sum_sodexho,sum_gift,sum_corp.
wa-BUDAT = BKPF-BUDAT.
wa-bschl = zbseg-bschl.
wa-hkont = zbseg-hkont.
wa-belnr = zbseg-belnr.
wa_belnr = wa-belnr.
wa-shkzg = zbseg-shkzg.
wa-kunnr = zbseg-kunnr.
count = wa-sr_no.
*wa-sr_no = count + 1.
idx = idx + 1.
append wa to zbseg.
**count = wa-sr_no.
*wa-sr_no = wa-sr_no + 1.
clear wa-total.
endif.
endif.
endon.
*clear : wa1_total.
if wa_belnr = zbseg-belnr.
loop at zbseg into wa.
wa-total = wa1_total.
wa-bschl = zbseg-bschl.
wa-hkont = zbseg-hkont.
count = sy-tabix.
wa-sr_no = count.
count1 = count.
*wa_sr_no = count.
modify zbseg from wa transporting sr_no.
IF wa-bschl eq '40' and wa-hkont eq '0024013020'.
if sy-tabix = 1.
wa-cash = zbseg-dmbtr.
sum_cash = sum_cash + wa-cash.
wa-cash = sum_cash.
modify zbseg index idx from wa transporting cash.
endif.
endif.
IF wa-bschl eq '40' and wa-hkont eq '0026060010'.
if sy-tabix = 1.
wa-credit = zbseg-dmbtr.
sum_credit = sum_credit + wa-credit.
wa-credit = sum_credit.
modify zbseg index idx from wa transporting credit.
endif.
endif.
IF wa-bschl eq '40' and wa-hkont eq '0026060015'.
if sy-tabix = 1.
wa-abn_voucher = zbseg-dmbtr.
sum_abn = sum_abn + wa-abn_voucher.
wa-abn_voucher = sum_abn.
modify zbseg index idx from wa transporting abn_voucher.
endif.
endif.
IF wa-bschl eq '40' and wa-hkont eq '0026060017'.
if sy-tabix = 1.
wa-barista_voucher = zbseg-dmbtr.
sum_barista_voucher = sum_barista_voucher + wa-barista_voucher.
wa-barista_voucher = sum_barista_voucher.
modify zbseg index idx from wa transporting barista_voucher.
endif.
endif.
IF wa-bschl eq '40' and wa-hkont eq '0026060020'.
if sy-tabix = 1.
wa-sodexho = zbseg-dmbtr.
sum_sodexho = sum_sodexho + wa-sodexho.
wa-sodexho = sum_sodexho.
modify zbseg index idx from wa transporting sodexho.
endif.
endif.
IF wa-bschl eq '40' AND wa-hkont eq '0026060030'.
if sy-tabix = 1.
wa-accor = zbseg-dmbtr.
sum_accor = sum_accor + wa-accor.
wa-accor = sum_accor.
modify zbseg index idx from wa transporting accor.
endif.
endif.
IF wa-bschl eq '40' AND wa-hkont eq '0026070040'.
if sy-tabix = 1.
wa-gift = zbseg-dmbtr.
sum_gift = sum_gift + wa-gift.
wa-gift = sum_gift.
modify zbseg index idx from wa transporting gift.
endif.
endif.
IF wa-bschl eq '40' AND wa-hkont eq '0026060070'.
if sy-tabix = 1.
wa-card = zbseg-dmbtr.
sum_card = sum_card + wa-card.
wa-card = sum_card.
modify zbseg index idx from wa transporting card.
endif.
endif.
IF wa-bschl eq '40' AND wa-hkont eq '0026060018'.
if sy-tabix = 1.
wa-corp = zbseg-dmbtr.
sum_corp = sum_corp + wa-corp.
wa-corp = sum_corp.
modify zbseg index idx from wa transporting corp.
endif.
endif.
*IF wa-bschl eq '11' .
*wa-total = zbseg-dmbtr.
*modify zbseg index idx from wa transporting total.
*endif.
IF wa-bschl EQ '40' or wa-bschl = '01' .
if sy-tabix = 1.
wa-total = zbseg-dmbtr.
wa1_total = wa1_total + wa-total.
wa-total = wa1_total.
*if idx = 2.
*modify zbseg index 1 from wa transporting total.
*else.
modify zbseg index idx from wa transporting total.
*endif.
endif.
endif.
*IF zbseg-TOTAL NE zbseg-DMBTR.
IF wa-BSCHL NE '11' AND wa-BSCHL NE '40'. "AND wa-BSCHL NE '01'.
if sy-tabix = 1.
if wa-shkzg = 'S'.
wa-miscellaneous = - wa-miscellaneous.
endif.
wa-miscellaneous = ZBSEG-DMBTR.
sum_mis = sum_mis + wa-miscellaneous.
wa-miscellaneous = sum_mis.
modify zbseg index idx from wa transporting miscellaneous.
endif.
ENDIF.
*wa1-miscellaneous = wa-miscellaneous.
*modify zbseg index idx from wa.
*ENDIF.
*append wa to zbseg.
*clear:zbseg-dmbtr.
endloop.
endif.
*****endif.
*****endon.
*ENDFORM.
*append zbseg.
*endloop.
End-of-selection.
perform build_alv using zbseg t_heading.
*& Form build_alv
* Builds and displays the ALV grid.
form build_alv using t_data
*tp_tbl_data
t_heading type slis_t_listheader.
* ALV required data objects.
data: w_title type lvc_title,
w_repid type syrepid,
w_comm type slis_formname,
w_status type slis_formname,
x_layout type slis_layout_alv,
t_event type slis_t_event,
t_fieldcat type slis_t_fieldcat_alv,
t_sort type slis_t_sortinfo_alv.
refresh t_fieldcat.
refresh t_event.
refresh t_sort.
clear x_layout.
clear w_title.
* Field Catalog
perform set_fieldcat2 using:
1 'SR_NO' 'SR_NO' 'BKPF' '5' space space 'SR NO' space space space
space space space space space t_fieldcat ,
2 'BELNR' 'BELNR' 'BKPF' '10' space space 'Document No' space space
space space space space space space t_fieldcat ,
3 'BUDAT' 'BUDAT' 'BKPF' '10' space space 'Document Date' space
space space space space space space space t_fieldcat ,
4 'KUNNR' space space space space space 'Site' space space
space space space space space space t_fieldcat ,
5 'TOTAL' space 'BSEG' space space space 'Total' space space space
space space space space 'X' t_fieldcat ,
6 'CASH' 'CASH' 'BSEG' space space space 'Cash Sales'
space space space space space space space 'X' t_fieldcat ,
7 'CREDIT' 'CREDIT' 'BSEG' space space space 'Credit Card'
space space space space space space space 'X' t_fieldcat ,
8 'ABN_VOUCHER' space 'BSEG' space space space 'ABN Voucher' space
space
space space space space space 'X' t_fieldcat ,
9 'BARISTA_VOUCHER' space 'BSEG' '15' space space 'BARISTA Voucher'
space space
space space space space space 'X' t_fieldcat ,
10 'CORP' 'CORP' 'BSEG' space space space 'ABN Corp' space space
space space space space space 'X' t_fieldcat ,
11 'SODEXHO' 'SODEXHO' 'BSEG' space space space 'Sodexho' space
space space space space space space 'X' t_fieldcat ,
12 'ACCOR' 'ACCOR' 'BSEG' space space space 'Accor'
space space space space space space space 'X' t_fieldcat ,
13 'GIFT' 'GIFT' 'BSEG' space space space 'Gift Coupon'
space space space space space space space 'X' t_fieldcat ,
14 'CARD' 'CARD' 'BSEG' space space space 'Diners Card' space
space space space space space space 'X' t_fieldcat ,
15 'MISCELLANEOUS' space 'BKPF' '18' space space
'Miscellaneous Income' space space space space space space space 'X'
t_fieldcat .
*14 'KBETR' 'KBETR' 'KONP' '10' space space 'Tax %age' space space
*space space space space space space t_fieldcat ,
*15 'MWSKZ1' 'MWSKZ1' 'RBKP' space space space 'Tax Type' space
*space
*space space space space space space t_fieldcat ,
*16 'AMT' 'AMT' 'RBKP' space space space 'Amount Payable' space
*space
*space space space space space 'X' t_fieldcat ,
*17 'WERKS' 'SITE' 'RSEG' space space space 'State' space space
*space space space space space space t_fieldcat .
*18 'GSBER' 'GSBER' 'RBKP' space space space 'Business Area' space
*space space space space space space space t_fieldcat .
* Layout
x_layout-zebra = 'X'.
* Top of page heading
perform set_top_page_heading using t_heading t_event.
* Events
perform set_events using t_event.
* GUI Status
w_status = ''.
w_repid = sy-repid.
* Title
* w_title = <<If you want to set a title for
* the ALV, please, uncomment and edit this line>>.
* User commands
w_comm = 'USER_COMMAND'.
* Order
* Example:
* PERFORM set_order USING '<field>' 'IT_DATA' 'X' space space t_sort.
* Displays the ALV grid
call function 'REUSE_ALV_GRID_DISPLAY'
exporting
i_callback_program = w_repid
it_fieldcat = t_fieldcat
is_layout = x_layout
it_sort = t_sort
i_callback_pf_status_set = w_status
i_callback_user_command = w_comm
i_save = 'X'
it_events = t_event
i_grid_title = w_title
tables
t_outtab = zbseg
* t_outtab = t_data
exceptions
program_error = 1
others = 2.
if sy-subrc <> 0.
message id sy-msgid type sy-msgty number sy-msgno
with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
endif.
endform. " build_alv.
*& Form set_top_page_heading
* Creates the report headings.
form set_top_page_heading using t_heading type slis_t_listheader
t_events type slis_t_event.
data: x_heading type slis_listheader,
x_event type line of slis_t_event.
* Report title
clear t_heading[].
clear x_heading.
x_heading-typ = 'H'.
x_heading-info = 'SALES RECONCILIATION REPORT'(001).
append x_heading to t_heading.
* Top of page event
x_event-name = slis_ev_top_of_page.
x_event-form = 'TOP_OF_PAGE'.
append x_event to t_events.
endform.
*& Form set_events
* Sets the events for ALV.
* The TOP_OF_PAGE event is already being registered in
* the set_top_page_heading subroutine.
form set_events using t_events type slis_t_event.
data: x_event type line of slis_t_event.
* Example:
* clear x_event.
* x_event-name = .
* x_event-form = .
* append x_event to t_events.
endform.
*& Form set_order
* Adds an entry to the order table.
FORM set_order USING p_fieldname p_tabname p_up p_down p_subtot
t_sort TYPE slis_t_sortinfo_alv.
DATA: x_sort TYPE slis_sortinfo_alv.
CLEAR x_sort.
x_sort-fieldname = p_fieldname.
x_sort-tabname = p_tabname.
x_sort-up = p_up.
x_sort-down = p_down.
x_sort-subtot = p_subtot.
APPEND x_sort TO t_sort.
ENDFORM. "set_order
*& Form set_fieldcat2
* Adds an entry to the field catalog.
* p_colpos: Column position.
* p_fieldname: Field of the internal table which is being described by
*              this record of the field catalog.
* p_ref_fieldname: (Optional) Table field / data element which
*              describes the properties of the field.
*              If this field is not given, it is copied from
*              the fieldname.
* p_ref_tabname: (Optional) Table which holds the field referenced
*              by <<p_ref_fieldname>>.
*              If this is not given, the parameter
*              <<p_ref_fieldname>> references a data element.
* p_outputlen: (Optional) Column width.
* p_noout: (Optional) If set to 'X', states that the field is not
*              shown initially. If so, the field has to be
*              included in the report at runtime using the display
*              options.
* p_seltext_m: (Optional) Medium label to be used as column header.
* p_seltext_l: (Optional) Long label to be used as column header.
* p_seltext_s: (Optional) Small label to be used as column header.
* p_reptext_ddic: (Optional) Extra small (heading) label to be
*              used as column header.
* p_ddictxt: (Optional) Set to 'L', 'M', 'S' or 'R' to select
*              whether to use SELTEXT_L, SELTEXT_M, SELTEXT_S,
*              or REPTEXT_DDIC as text for the column header.
* p_hotspot: (Optional) If set to 'X', this field will be used
*              as a hotspot area for the cursor, allowing the user
*              to click on the field.
* p_showasicon: (Optional) If set to 'X', this field will be shown
*              as an icon and the contents of the field will set
*              which icon to show.
* p_checkbox: (Optional) If set to 'X', this field will be shown
*              as a checkbox.
* p_edit: (Optional) If set to 'X', this field will be editable.
* p_dosum: (Optional) If set to 'X', this field will be summed
*              (aggregation function) according to the grouping set
*              by the order functions.
* t_fieldcat: Table which contains the whole field catalog.
FORM set_fieldcat2 USING
p_colpos p_fieldname p_ref_fieldname p_ref_tabname
p_outputlen p_noout
p_seltext_m p_seltext_l p_seltext_s p_reptext_ddic p_ddictxt
p_hotspot p_showasicon p_checkbox p_edit
p_dosum
t_fieldcat TYPE slis_t_fieldcat_alv.
DATA: wa_fieldcat TYPE slis_fieldcat_alv.
CLEAR wa_fieldcat.
* General settings
wa_fieldcat-fieldname = p_fieldname.
wa_fieldcat-col_pos = p_colpos.
wa_fieldcat-no_out = p_noout.
wa_fieldcat-hotspot = p_hotspot.
wa_fieldcat-checkbox = p_checkbox.
wa_fieldcat-icon = p_showasicon.
wa_fieldcat-do_sum = p_dosum.
* Set reference fieldname, tabname and rollname.
* If p_ref_tabname is not given, the ref_fieldname given
* is a data element.
* If p_ref_tabname is given, the ref_fieldname given is a
* field of a table.
* In case ref_fieldname is not given,
* it is copied from the fieldname.
IF p_ref_tabname IS INITIAL.
wa_fieldcat-rollname = p_ref_fieldname.
ELSE.
wa_fieldcat-ref_tabname = p_ref_tabname.
IF p_ref_fieldname EQ space.
wa_fieldcat-ref_fieldname = wa_fieldcat-fieldname.
ELSE.
wa_fieldcat-ref_fieldname = p_ref_fieldname.
ENDIF.
ENDIF.
* Set output length.
IF NOT p_outputlen IS INITIAL.
wa_fieldcat-outputlen = p_outputlen.
ENDIF.
* Set text headers.
IF NOT p_seltext_m IS INITIAL.
wa_fieldcat-seltext_m = p_seltext_m.
ENDIF.
IF NOT p_seltext_l IS INITIAL.
wa_fieldcat-seltext_l = p_seltext_l.
ENDIF.
IF NOT p_seltext_s IS INITIAL.
wa_fieldcat-seltext_s = p_seltext_s.
ENDIF.
IF NOT p_reptext_ddic IS INITIAL.
wa_fieldcat-reptext_ddic = p_reptext_ddic.
ENDIF.
IF NOT p_ddictxt IS INITIAL.
wa_fieldcat-ddictxt = p_ddictxt.
ENDIF.
* Set as editable or not.
IF NOT p_edit IS INITIAL.
wa_fieldcat-input = 'X'.
wa_fieldcat-edit = 'X'.
ENDIF.
APPEND wa_fieldcat TO t_fieldcat.
ENDFORM. "set_fieldcat2
* ======================== Subroutines called by ALV ================
*& Form top_of_page
* Called on the top_of_page ALV event.
* Prints the heading.
form top_of_page.
call function 'REUSE_ALV_COMMENTARY_WRITE'
exporting
* i_logo = <<If you want to set a logo, please,
* uncomment and edit this line>>
it_list_commentary = t_heading.
endform. " alv_top_of_page
*& Form user_command
* Called on the user_command ALV event.
* Executes custom commands.
form user_command using r_ucomm like sy-ucomm
rs_selfield type slis_selfield.
* Example code:
* Executes a command considering sy-ucomm.
CASE r_ucomm.
WHEN '&IC1'.
* Set your "double click action" response here.
* Example code: create and display a status message.
DATA: w_msg TYPE string,
w_row(4) TYPE n.
w_row = rs_selfield-tabindex.
CONCATENATE 'You have clicked row' w_row
'field' rs_selfield-fieldname
'with value' rs_selfield-value
INTO w_msg SEPARATED BY space.
MESSAGE w_msg TYPE 'S'.
ENDCASE.
* End of example code.
endform. "user_command
*********************************ldb code start from here *************************************************************
* DATABASE PROGRAM OF LOGICAL DATABASE ZBRM_3
* top-include and nxxx-include are generated automatically
* Do NOT change their names manually!!!
*include DBZBRM_3TOP . " header
*include DBZBRM_3NXXX . " all system routines
include DBZBRM_3F001 . " user defined include
PROGRAM SAPDBZBRM_3 DEFINING DATABASE ZBRM_3.
TABLES:
BKPF,
BSEG.
* Auxiliary fields
DATA:
BR_SBUKRS LIKE BKPF-BUKRS,
BR_SBELNR LIKE BKPF-BELNR,
BR_SGJAHR LIKE BKPF-GJAHR,
BR_SBUDAT LIKE BKPF-BUDAT,
BR_SGSBER LIKE BSEG-GSBER,
BR_SBUZEI LIKE BSEG-BUZEI,
BR_SEBELN LIKE BSEG-EBELN,
BR_SEBELP LIKE BSEG-EBELP,
BR_SZEKKN LIKE BSEG-ZEKKN.
* working areas for the authority check "n435991
* for the company code "n435991
*TYPES : BEGIN OF STYPE_BUKRS, "n435991
BUKRS LIKE T001-BUKRS, "n435991
WAERS LIKE T001-WAERS, "n435991
RETCODE TYPE N, "n435991
END OF STYPE_BUKRS. "n435991
"n435991
*DATA : G_S_BUKRS TYPE STYPE_BUKRS, "n435991
G_T_BUKRS TYPE STYPE_BUKRS OCCURS 0. "n435991
"n435991
* for the document type "n435991
*TYPES : BEGIN OF STYPE_BLART, "n435991
BLART LIKE BKPF-BLART, "n435991
RETCODE TYPE N, "n435991
END OF STYPE_BLART. "n435991
"n435991
*DATA : G_S_BLART TYPE STYPE_BLART, "n435991
G_T_BLART TYPE STYPE_BLART OCCURS 0. "n435991
"n435991
* for the business area "n435991
*TYPES : BEGIN OF STYPE_GSBER, "n435991
GSBER LIKE BSEG-GSBER, "n435991
RETCODE TYPE N, "n435991
END OF STYPE_GSBER. "n435991
"n435991
*DATA : G_S_GSBER TYPE STYPE_GSBER, "n435991
G_T_GSBER TYPE STYPE_GSBER OCCURS 0. "n435991
"n435991
* for the purchasing organization "n435991
*TYPES : BEGIN OF STYPE_EKORG, "n435991
EKORG LIKE EKKO-EKORG, "n435991
RETCODE TYPE N, "n435991
END OF STYPE_EKORG. "n435991
"n435991
*DATA : G_S_EKORG TYPE STYPE_EKORG, "n435991
G_T_EKORG TYPE STYPE_EKORG OCCURS 0. "n435991
"n435991
* for the plant "n435991
*TYPES : BEGIN OF STYPE_WERKS, "n435991
WERKS LIKE EKPO-WERKS, "n435991
RETCODE TYPE N, "n435991
END OF STYPE_WERKS. "n435991
"n435991
*DATA : G_S_WERKS TYPE STYPE_WERKS, "n435991
G_T_WERKS TYPE STYPE_WERKS OCCURS 0. "n435991
"n435991
*DATA : G_F_TABIX LIKE SY-TABIX. "n435991
"n435991
* working tables for array database access "n934526
*types : begin of stype_key, "n934526
bukrs type bkpf-bukrs, "n934526
belnr type bkpf-belnr, "n934526
gjahr type bkpf-gjahr, "n934526
end of stype_key, "n934526
"n934526
stab_key type standard table of "n934526
stype_key "n934526
with default key. "n934526
* Set initial values
FORM INIT.
ENDFORM.
* Selection Screen: Process before output
FORM PBO.
ENDFORM.
* Selection Screen: Process after input
FORM PAI USING FNAME MARK.
CHECK MARK = SPACE.
ENDFORM.
* Read BKPF and pass it to the selection report
FORM PUT_BKPF.
* define local working areas "n934526
data : l_t_key type stab_key, "n934526
l_t_key_block type stab_key, "n934526
l_t_bkpf type standard table of bkpf. "n934526
"n934526
* ----------------------------------------------------------"n934526
"n934526
* database selection improved "n934526
* at first read all FI doc keys into a lean table "n934526
data: wa like bkpf-belnr.
SELECT * FROM BKPF
where budat in br_budat
AND GJAHR EQ BR_GJAHR-LOW
AND BLART = 'RV'.
* AND BLART IN BR_BLART "n934526
"n934526
check sy-subrc is initial. "n934526
"n934526
* then process the found FI doc keys in small blocks "n934526
do. "n934526
if l_t_key[] is initial. "n934526
exit. " no more keys -> leave this DO loop "n934526
endif. "n934526
"n934526
* form small blocks with 100 FI docs each "n934526
refresh l_t_key_block. "n934526
append lines of l_t_key from 1 to 100 "n934526
to l_t_key_block. "n934526
delete l_t_key from 1 to 100. "n934526
"n934526
* read the complete FI doc headers for the block "n934526
SELECT * FROM BKPF "n934526
into corresponding fields of table l_t_bkpf "n934526
for all entries in l_t_key_block "n934526
WHERE BUKRS = l_t_key_block-BUKRS "n934526
AND BELNR = l_t_key_block-BELNR "n934526
AND GJAHR = l_t_key_block-GJAHR. "n934526
"n934526
* provide the complete structure for the PUT "n934526
loop at l_t_bkpf into bkpf. "n934526
* process this company code : authority and read T001 "n934526
PERFORM F1000_COMPANY_CODE. "n934526
"n934526
* go on if the first authority check was successful "n934526
CHECK : G_S_BUKRS-RETCODE IS INITIAL. "n934526
"n934526
* set the currency key and save the keys "n934526
MOVE : G_S_BUKRS-WAERS TO T001-WAERS, "n934526
BKPF-BUKRS TO BR_SBUKRS, "n934526
BKPF-BELNR TO BR_SBELNR, "n934526
BKPF-GJAHR TO BR_SGJAHR. "n934526
PUT BKPF. "n934526
endloop. "n934526
enddo. "n934526
ENDSELECT.
ENDFORM.
* Read BSEG and pass it to the selection report
FORM PUT_BSEG.
* define local working areas "n934526
data : l_t_bseg type standard table of bseg. "n934526
"n934526
* ----------------------------------------------------------"n934526
BR_SGSBER = BR_GSBER-LOW.
"n934526
SELECT * FROM BSEG "n934526
WHERE BELNR EQ BR_SBELNR
AND GJAHR EQ BR_SGJAHR
AND GSBER EQ BR_SGSBER.
* check sy-subrc is initial. "n934526
* loop at l_t_bseg into bseg. "n934526
MOVE BSEG-BUZEI TO BR_SBUZEI.
MOVE BSEG-EBELN TO BR_SEBELN.
MOVE BSEG-EBELP TO BR_SEBELP.
MOVE BSEG-ZEKKN TO BR_SZEKKN.
PUT BSEG.
ENDSELECT. "n934526
ENDFORM.
"n435991
FORM AUTHORITYCHECK_BKPF "n435991
"n435991
"n435991
*FORM AUTHORITYCHECK_BKPF. "n435991
"n435991
* the authority-check for the company code was successful; "n435991
* check authority for the document type here "n435991
"n435991
* does the buffer contain this document type ? "n435991
READ TABLE G_T_BLART INTO G_S_BLART "n435991
WITH KEY BLART = BKPF-BLART BINARY SEARCH. "n435991
"n435991
CASE SY-SUBRC. "n435991
WHEN 0. "document type is known "n435991
"n435991
WHEN 4. "document type is new --> insert "n435991
MOVE SY-TABIX TO G_F_TABIX. "n435991
PERFORM F1200_CREATE_BLART_ENTRY. "n435991
INSERT G_S_BLART INTO G_T_BLART "n435991
INDEX G_F_TABIX. "n435991
"n435991
WHEN 8. "document type is new --> append "n435991
PERFORM F1200_CREATE_BLART_ENTRY. "n435991
APPEND G_S_BLART TO G_T_BLART. "n435991
ENDCASE. "n435991
"n435991
* set the return code "n435991
MOVE G_S_BLART-RETCODE TO SY-SUBRC. "n435991
"n435991
*ENDFORM. "authoritycheck_bkpf "n435991
"n435991
"n435991
FORM AUTHORITYCHECK_BSEG "n435991
"n435991
"n435991
*FORM AUTHORITYCHECK_BSEG. "n435991
"n435991
* does the buffer contain this business area ? "n435991
READ TABLE G_T_GSBER INTO G_S_GSBER "n435991
WITH KEY GSBER = BSEG-GSBER BINARY SEARCH. "n435991
"n435991
CASE SY-SUBRC. "n435991
WHEN 0. "business area is known "n435991
"n435991
WHEN 4. "business area is new --> insert "n435991
MOVE SY-TABIX TO G_F_TABIX. "n435991
PERFORM F1300_CREATE_GSBER_ENTRY. "n435991
INSERT G_S_GSBER INTO G_T_GSBER "n435991
INDEX G_F_TABIX. "n435991
"n435991
WHEN 8. "business area is new --> append "n435991
PERFORM F1300_CREATE_GSBER_ENTRY. "n435991
APPEND G_S_GSBER TO G_T_GSBER. "n435991
ENDCASE. "n435991
"n435991
* set the return code "n435991
MOVE G_S_GSBER-RETCODE TO SY-SUBRC. "n435991
"n435991
*ENDFORM. "authoritycheck_bseg "n435991
 "n435991
ABAP provides several tools to analyse the performance of the objects we develop.
Run time analysis transaction SE30
This transaction analyses the runtime of an ABAP program, split between database and non-database processing.
SQL Trace transaction ST05
Using this tool we can analyse performance issues related to database calls.
Performance techniques to improve the performance of an object:
1) ABAP/4 programs can take a very long time to execute, and can make other processes have to wait before executing. Here are some tips to speed up your programs and reduce the load your programs put on the system:
2) Use the GET RUN TIME command to help evaluate performance. It's hard to know whether that optimization technique REALLY helps unless you test it out.
3) Using this tool can help you know what is effective, under what kinds of conditions. The GET RUN TIME has problems under multiple CPUs, so you should use it to test small pieces of your program, rather than the whole program.
4) Generally, try to reduce I/O first, then memory, then CPU activity. I/O operations that read/write to hard disk are always the most expensive operations. Memory, if not controlled, may have to be written to swap space on the hard disk, which therefore increases your I/O read/writes to disk. CPU activity can be reduced by careful program design, and by using commands such as SUM (SQL) and COLLECT (ABAP/4).
5) Avoid 'SELECT *', especially in tables that have a lot of fields. Use SELECT A B C INTO instead, so that fields are only read if they are used. This can make a very big difference.
6) Field-groups can be useful for multi-level sorting and displaying. However, they write their data to the system's paging space, rather than to memory (internal tables use memory). For this reason, field-groups are only appropriate for processing large lists (e.g. over 50,000 records). If you have large lists, you should work with the systems administrator to decide the maximum amount of RAM your program should use, and from that, calculate how much space your lists will use. Then you can decide whether to write the data to memory or swap space.
Use as many table keys as possible in the WHERE part of your select statements.
7) Whenever possible, design the program to access a relatively constant number of records (for instance, if you only access the transactions for one month, then there probably will be a reasonable range, like 1200-1800, for the number of transactions entered within that month). Then use a SELECT A B C INTO TABLE ITAB statement.
8) Get a good idea of how many records you will be accessing. Log into your productive system, and use SE80 -> Dictionary Objects (press Edit), enter the table name you want to see, and press Display. Go To Utilities -> Table Contents to query the table contents and see the number of records. This is extremely useful in optimizing a program's memory allocation.
9) Try to make the user interface such that the program gradually unfolds more information to the user, rather than giving a huge list of information all at once to the user.
10) Declare your internal tables using OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to be accessing. If the number of records exceeds NUM_RECS, the data will be kept in swap space (not memory).
11) Use SELECT A B C INTO TABLE ITAB whenever possible. This will read all of the records into the itab in one operation, rather than repeated operations that result from a SELECT A B C INTO ITAB... ENDSELECT statement. Make sure that ITAB is declared with OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to access.
12) If the number of records you are reading is constantly growing, you may be able to break it into chunks of relatively constant size. For instance, if you have to read all records from 1991 to present, you can break it into quarters, and read all records one quarter at a time. This will reduce I/O operations. Test extensively with GET RUN TIME when using this method.
13) Know how to use the COLLECT command. It can be very efficient.
14) Use the SELECT SINGLE command whenever possible.
15) Many tables contain totals fields (such as monthly expense totals). Use these to avoid wasting resources by calculating a total that has already been calculated and stored.
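As a hedged illustration of tips 2, 5 and 11 above, a GET RUN TIME measurement around an array fetch with an explicit field list might look like this (table MARA and its fields are example names only, not from the original post):

```abap
* Hedged sketch of tips 2, 5 and 11: one array fetch, field list only.
DATA: BEGIN OF ls_mara,
        matnr TYPE mara-matnr,
        mtart TYPE mara-mtart,
      END OF ls_mara,
      lt_mara LIKE STANDARD TABLE OF ls_mara,
      lv_t1   TYPE i,
      lv_t2   TYPE i.

GET RUN TIME FIELD lv_t1.
SELECT matnr mtart FROM mara
       INTO TABLE lt_mara
       UP TO 1000 ROWS.           "one array fetch, only the needed fields
GET RUN TIME FIELD lv_t2.
lv_t2 = lv_t2 - lv_t1.
WRITE: / 'Runtime in microseconds:', lv_t2.
```

Run the same measurement around a SELECT * ... ENDSELECT variant and compare, as tip 2 recommends testing rather than assuming.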
Some tips:
1) Use joins where possible as redundant data is not fetched.
2) Use select single where ever possible.
3) Calling methods of a global class is faster than calling function modules.
4) Use constants instead of literals.
5) Use WHILE instead of a DO-EXIT-ENDDO.
6) Unnecessary MOVEs should be avoided by using explicit work area operations.
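A minimal sketch of tips 2, 4 and 5 from this list (MARA and the key value are assumed example names, not from the original post):

```abap
* Hedged sketch: SELECT SINGLE with the full key, a named constant
* instead of a literal, and WHILE instead of DO ... EXIT ... ENDDO.
CONSTANTS c_matnr TYPE mara-matnr VALUE 'MAT-001'.

DATA: lv_mtart TYPE mara-mtart,
      lv_index TYPE i VALUE 1.

SELECT SINGLE mtart FROM mara
       INTO lv_mtart
       WHERE matnr = c_matnr.     "full primary key supplied

WHILE lv_index <= 10.             "clearer than DO with a conditional EXIT
  lv_index = lv_index + 1.
ENDWHILE.
```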
See the following links for a brief insight into performance tuning:
http://www.thespot4sap.com/Articles/SAPABAPPerformanceTuning_Introduction.asp
1. Debugger
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617ca9e68c11d2b2ab080009b43351/content.htm
2. Run Time Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c6/617cafe68c11d2b2ab080009b43351/content.htm
3. SQL trace
http://help.sap.com/saphelp_47x200/helpdata/en/d1/801f7c454211d189710000e8322d00/content.htm
4. CATT - Computer Aided Testing Tool
http://help.sap.com/saphelp_47x200/helpdata/en/b3/410b37233f7c6fe10000009b38f936/frameset.htm
5. Test Workbench
http://help.sap.com/saphelp_47x200/helpdata/en/a8/157235d0fa8742e10000009b38f889/frameset.htm
6. Coverage Analyser
http://help.sap.com/saphelp_47x200/helpdata/en/c7/af9a79061a11d4b3d4080009b43351/content.htm
7. Runtime Monitor
http://help.sap.com/saphelp_47x200/helpdata/en/b5/fa121cc15911d5993d00508b6b8b11/content.htm
8. Memory Inspector
http://help.sap.com/saphelp_47x200/helpdata/en/a2/e5fc84cc87964cb2c29f584152d74e/content.htm
9. eCATT - Extended Computer Aided Testing Tool
http://help.sap.com/saphelp_47x200/helpdata/en/20/e81c3b84e65e7be10000000a11402f/frameset.htm
Performance tuning for Data Selection Statement
http://www.sap-img.com/abap/performance-tuning-for-data-selection-statement.htm -
How to measure the performance of sql query?
Hi Experts,
How do I measure the performance, efficiency and CPU cost of a SQL query?
What measures are available for a SQL query?
How do I check that I am writing an optimal query?
I am using Oracle 9i...
It will help me write efficient queries....
Thanks & Regards
psram wrote:
Hi Experts,
How to measure the performance, efficiency and cpu cost of a sql query?
What are all the measures available for an sql query?
How to identify i am writing optimal query?
I am using Oracle 9i...
You might want to start with a feature of SQL*Plus: the AUTOTRACE (TRACEONLY) option, which executes your statement, fetches all records (if there is something to fetch) and shows you some basic statistics, including the number of logical I/Os performed, number of sorts etc.
This gives you an indication of the effectiveness of your statement, so you can check how many logical I/Os (and physical reads) had to be performed.
Note however that there are more things to consider, as you've already mentioned: the CPU cost is not included in these statistics, and the work performed by SQL workareas (e.g. by hash joins) is only credited in a very limited way (number of sorts); e.g. it doesn't cover writes to temporary segments caused by sort or hash operations spilling to disk.
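A hedged SQL*Plus sketch of the AUTOTRACE approach described above (the EMP table is only an example):

```sql
-- TRACEONLY STATISTICS suppresses the result rows (they are still
-- fetched), so the statistics reflect the complete execution.
SET AUTOTRACE TRACEONLY STATISTICS

SELECT ename, deptno
  FROM emp
 WHERE deptno = 10;

-- reports consistent gets, physical reads, sorts (memory/disk), etc.
SET AUTOTRACE OFF
```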
You can use the following approach to get a deeper understanding of the operations performed by each row source:
alter session set statistics_level=all;
alter session set timed_statistics = true;
select /* findme */ ... <your query here>
SELECT
SUBSTR(LPAD(' ',DEPTH - 1)||OPERATION||' '||OBJECT_NAME,1,40) OPERATION,
OBJECT_NAME,
CARDINALITY,
LAST_OUTPUT_ROWS,
LAST_CR_BUFFER_GETS,
LAST_DISK_READS,
LAST_DISK_WRITES
FROM V$SQL_PLAN_STATISTICS_ALL P,
(SELECT *
FROM (SELECT *
FROM V$SQL
WHERE SQL_TEXT LIKE '%findme%'
AND SQL_TEXT NOT LIKE '%V$SQL%'
AND PARSING_USER_ID = SYS_CONTEXT('USERENV','CURRENT_USERID')
ORDER BY LAST_LOAD_TIME DESC)
WHERE ROWNUM < 2) S
WHERE S.HASH_VALUE = P.HASH_VALUE
AND S.CHILD_NUMBER = P.CHILD_NUMBER
ORDER BY ID
/
Check the V$SQL_PLAN_STATISTICS_ALL view for more available statistics. In 10g there is a convenient function DBMS_XPLAN.DISPLAY_CURSOR which can show this information with a single call, but in 9i you need to do it yourself.
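For 10g and later, a hedged sketch of the DBMS_XPLAN.DISPLAY_CURSOR shortcut mentioned above (table T is an example name):

```sql
-- The gather_plan_statistics hint collects row-source statistics for
-- this statement only; the format 'ALLSTATS LAST' reports the last
-- execution of the most recent cursor in this session.
SELECT /*+ gather_plan_statistics */ COUNT(*)
  FROM t;

SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```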
Note that "statistics_level=all" adds a significant overhead to the processing, so use with care and only when required:
http://jonathanlewis.wordpress.com/2007/11/25/gather_plan_statistics/
http://jonathanlewis.wordpress.com/2007/04/26/heisenberg/
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Need Help to see why the performance is not good
Hi,
We have an application that all process are developed in PL/SQL on Oracle 9i Database :
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
PL/SQL Release 9.2.0.6.0 - Production
Why I created this package: the application is a production-management system for the chemical industry. I sometimes need to trace the execution of a manufacturing order to explain incoherent data. Analysing the data in the tables directly does not always give the answer, because the problem can originate in routines that perform calculations.
In each procedure or function I use PAC_LOG_ERROR.PUT_LINE(xxxxxx) to save the information. This command first keeps the information in memory. At the end of the procedure or function the insert is performed, with a COMMIT, by calling PAC_LOG_ERROR.LOGS, or PAC_LOG_ERROR.ERRORS in the exception handler.
This package is called everywhere; I execute it in every routine. In the database trace log we have seen a problem with the procedure GET_PROC_NAME in the package. We identified that it is called more than 800 times and hurts performance. The expensive part is this SELECT command:
SELECT * INTO SOURCE_TEXT
FROM (SELECT TEXT FROM all_source
WHERE OWNER = SOURCE_OWNER AND
NAME=SOURCE_NAME AND
TYPE IN ('PROCEDURE','FUNCTION','PACKAGE BODY') AND
LINE <= SOURCE_LINE AND SUBSTR(TRIM(TEXT),1,9) IN ('PROCEDURE','FUNCTION ')
ORDER BY LINE DESC)
WHERE ROWNUM = 1;
I use it to get the name of the procedure or function from which my log procedure is called. I know I could pass it as a parameter, but I wanted an automatic method, to avoid problems when other developers copy/paste a call and forget to update the parameter. The log information is read by the help desk, and wrong information there is no help at all.
COULD YOU PLEASE HELP ME OPTIMIZE THIS, OR SUGGEST A BETTER METHOD?
Here my package :
create or replace
PACKAGE PAC_LOG_ERROR AS
-- Name : pac_log_error.sql
-- Author : Calà Salvatore - 02 July 2010
-- Description : Basic Error and Log management.
-- Usage notes : To active the Log management execute this statement
-- UPDATE PARAM_TECHNIC SET PRM_VALUE = 'Y' WHERE PRM_TYPE = 'TRC_LOG';
-- COMMIT;
-- To set the period in day before to delete tracability
-- UPDATE PARAM_TECHNIC SET PRM_VALUE = 60 WHERE PRM_TYPE = 'DEL_TRC_LOG';
-- COMMIT;
-- To set the number in day where the ERROR is save before deleted
-- UPDATE PARAM_TECHNIC SET PRM_VALUE = 60 WHERE PRM_TYPE = 'DEL_TRC_LOG';
-- COMMIT;
-- Requirements : Packages PAC_PUBLIC and OWA_UTIL
-- Revision History
-- --------+---------------+-------------+--------------------------------------
-- Version | Author | Date | Comment
-- --------+---------------+-------------+--------------------------------------
-- 1.0.0 | S. Calà | 02-Jul-2010 | Initial Version
-- --------+---------------+-------------+--------------------------------------
-- | | |
-- --------+---------------+-------------+--------------------------------------
PROCEDURE INITIALIZE;
PROCEDURE CLEAN;
PROCEDURE RESETS(IN_SOURCE IN VARCHAR2 DEFAULT NULL);
PROCEDURE PUT_LINE(TXT IN VARCHAR2);
PROCEDURE ERRORS(REF_TYPE IN VARCHAR2 DEFAULT 'SITE', REF_VALUE IN VARCHAR2 DEFAULT '99', ERR_CODE IN NUMBER DEFAULT SQLCODE, ERR_MSG IN VARCHAR2 DEFAULT SQLERRM);
PROCEDURE LOGS(REF_TYPE IN VARCHAR2 DEFAULT 'SITE', REF_VALUE IN VARCHAR2 DEFAULT '99');
END PAC_LOG_ERROR;
create or replace
PACKAGE BODY PAC_LOG_ERROR
AS
/* Private Constant */
CR CONSTANT CHAR(1) := CHR(13); -- Carriage return
LF CONSTANT CHAR(1) := CHR(10); -- Line feed
CR_LF CONSTANT CHAR(2) := LF || CR; -- Line feed plus carriage return
TAB CONSTANT PLS_INTEGER := 50;
sDelay CONSTANT PLS_INTEGER := 30;
/* Private Record */
TYPE REC_LOG IS RECORD(
ERR_DATE TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
ERR_TXT VARCHAR2(4000));
/* Private Type Table */
TYPE TAB_VALUE IS TABLE OF REC_LOG INDEX BY PLS_INTEGER;
TYPE TAB_POINTER IS TABLE OF TAB_VALUE INDEX BY VARCHAR2(80);
/* Private Variables Structures */
LOG_TRC PARAM_TECHNIC.PRM_VALUE%TYPE;
LIST_PARAM TAB_POINTER;
/* Private Programs */
FUNCTION GET_PROC_NAME( SOURCE_OWNER IN all_source.OWNER%TYPE
,SOURCE_NAME IN all_source.NAME%TYPE
,SOURCE_LINE IN all_source.LINE%TYPE) RETURN VARCHAR2
AS
SOURCE_TEXT all_source.TEXT%TYPE;
TYPE RECORD_TEXT IS TABLE OF all_source.TEXT%TYPE;
RETURN_TEXT RECORD_TEXT;
BEGIN
SELECT * INTO SOURCE_TEXT
FROM (SELECT TEXT FROM all_source
WHERE OWNER = SOURCE_OWNER AND
NAME=SOURCE_NAME AND
TYPE IN ('PROCEDURE','FUNCTION','PACKAGE BODY') AND
LINE <= SOURCE_LINE AND SUBSTR(TRIM(TEXT),1,9) IN ('PROCEDURE','FUNCTION ')
ORDER BY LINE DESC)
WHERE ROWNUM = 1;
IF SOURCE_TEXT IS NOT NULL OR SOURCE_TEXT != '' THEN
SOURCE_TEXT := TRIM(SUBSTR(SOURCE_TEXT,1,INSTR(SOURCE_TEXT,'(')-1));
SOURCE_TEXT := LTRIM(LTRIM(TRIM(SOURCE_TEXT),'PROCEDURE'),'FUNCTION');
SOURCE_TEXT := SOURCE_NAME||'.'|| TRIM(SOURCE_TEXT);
ELSE
SOURCE_TEXT := 'ANONYMOUS BLOCK';
END IF;
RETURN SOURCE_TEXT;
END GET_PROC_NAME;
PROCEDURE SELECT_MASTER(REF_TYPE IN VARCHAR2, PARAM_VALUE IN VARCHAR2, SITE OUT VARCHAR2, REF_MASTER OUT VARCHAR2)
AS
BEGIN
REF_MASTER := '';
SITE := '99';
CASE UPPER(REF_TYPE)
WHEN 'PO' THEN -- Process Order
SELECT SITE_CODE INTO SITE FROM PO_PROCESS_ORDER WHERE PO_NUMBER = PARAM_VALUE;
WHEN 'SO' THEN -- Shop Order
SELECT P.SITE_CODE,P.PO_NUMBER INTO SITE,REF_MASTER FROM SO_SHOP_ORDER S
INNER JOIN PO_PROCESS_ORDER P ON P.PO_NUMBER = S.PO_NUMBER
WHERE S.NUMOF = PARAM_VALUE;
WHEN 'SM' THEN -- Submixing
SELECT SITE_CODE,NUMOF INTO SITE,REF_MASTER FROM SO_SUBMIXING WHERE IDSM = PARAM_VALUE;
WHEN 'IDSM' THEN -- Submixing
SELECT SITE_CODE,NUMOF INTO SITE,REF_MASTER FROM SO_SUBMIXING WHERE IDSM = PARAM_VALUE;
WHEN 'PR' THEN -- Pourring
SELECT B.SITE_CODE,P.NUMOF INTO SITE,REF_MASTER FROM SO_POURING P
INNER JOIN SO_SUBMIXING B ON B.IDSM=P.IDSM
WHERE P.IDSM = PARAM_VALUE;
WHEN 'NUMSMP' THEN -- Pourring
SELECT SITE_CODE,NUMOF INTO SITE,REF_MASTER FROM SAMPLE WHERE NUMSMP = TO_NUMBER(PARAM_VALUE);
-- WHEN 'MSG' THEN -- Messages
-- SELECT SITE_CODE,PO_NUMBER INTO SITE,REF_MASTER FROM CMS_INTERFACE.MAP_ITF_PO WHERE MSG_ID = PARAM_VALUE;
ELSE
SITE := sys_context('usr_context', 'site_attribute');
END CASE;
EXCEPTION
WHEN OTHERS THEN
REF_MASTER := '';
SITE := sys_context('usr_context', 'site_attribute');
END SELECT_MASTER;
PROCEDURE ADD_LIST_PROC
AS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
MERGE INTO LOG_PARAM A
USING (SELECT OWNER, TYPE
,NAME PROC
, CASE NAME WHEN SUBNAME THEN NULL
ELSE SUBNAME
END SUBPROC
FROM (
SELECT owner,TYPE,UPPER(NAME) NAME,UPPER(trim(substr(substr(trim(text),1,instr(trim(text),'(')-1),instr(substr(trim(text),1,instr(trim(text),'(')-1),' ')))) SUBNAME
FROM ALL_SOURCE where owner in ('CMS_ADM','CMS_INTERFACE')
and type in ('FUNCTION','PROCEDURE','PACKAGE BODY')
and (instr(substr(trim(text),1,instr(trim(upper(text)),'(')-1),'FUNCTION') = 1 or instr(substr(trim(text),1,instr(trim(upper(text)),'(')-1),'PROCEDURE')=1)
)-- ORDER BY OWNER,PROC,SUBPROC NULLS FIRST
) B
ON (A.OWNER = B.OWNER AND A.TYPE = B.TYPE AND A.PROC=B.PROC AND NVL(A.SUBPROC,' ') = NVL(B.SUBPROC,' '))
WHEN NOT MATCHED THEN
INSERT (OWNER,TYPE,PROC,SUBPROC) VALUES (B.OWNER,B.TYPE,B.PROC,B.SUBPROC)
WHEN MATCHED THEN
UPDATE SET ACTIVE = ACTIVE;
DELETE LOG_PARAM A
WHERE NOT EXISTS (SELECT OWNER, TYPE
,NAME PROC
, CASE NAME WHEN SUBNAME THEN NULL
ELSE SUBNAME
END SUBPROC
FROM (
SELECT owner,TYPE,NAME,upper(trim(substr(substr(trim(text),1,instr(trim(text),'(')-1),instr(substr(trim(text),1,instr(trim(text),'(')-1),' ')))) SUBNAME
FROM ALL_SOURCE where owner in ('CMS_ADM','CMS_INTERFACE')
and type in ('FUNCTION','PROCEDURE','PACKAGE BODY')
and (instr(substr(trim(text),1,instr(trim(text),'(')-1),'FUNCTION') = 1 or instr(substr(trim(text),1,instr(trim(text),'(')-1),'PROCEDURE')=1)
) WHERE A.OWNER = OWNER AND A.TYPE = TYPE AND A.PROC=PROC AND NVL(A.SUBPROC,' ') = NVL(SUBPROC,' '));
COMMIT;
EXCEPTION
WHEN OTHERS THEN
NULL;
END ADD_LIST_PROC;
PROCEDURE INITIALIZE
AS
BEGIN
LIST_PARAM.DELETE;
CLEAN;
-- ADD_LIST_PROC;
EXCEPTION
WHEN OTHERS THEN
NULL;
END INITIALIZE;
PROCEDURE CLEAN
AS
PRAGMA AUTONOMOUS_TRANSACTION;
dtTrcLog DATE;
dtTrcErr DATE;
BEGIN
BEGIN
SELECT dbdate-NUMTODSINTERVAL(to_number(PRM_VALUE),'DAY') INTO dtTrcLog
FROM PARAM_TECHNIC WHERE PRM_TYPE = 'DEL_TRC_LOG';
EXCEPTION
WHEN OTHERS THEN
dtTrcLog := dbdate -NUMTODSINTERVAL(sDelay,'DAY');
END;
BEGIN
SELECT dbdate-NUMTODSINTERVAL(to_number(PRM_VALUE),'DAY') INTO dtTrcErr
FROM PARAM_TECHNIC WHERE PRM_TYPE = 'DEL_TRC_ERR';
EXCEPTION
WHEN OTHERS THEN
dtTrcErr := dbdate -NUMTODSINTERVAL(sDelay,'DAY');
END;
DELETE FROM ERROR_LOG WHERE ERR_TYPE ='LOG' AND ERR_DATE < dtTrcLog;
DELETE FROM ERROR_LOG WHERE ERR_TYPE ='ERROR' AND ERR_DATE < dtTrcErr;
COMMIT;
EXCEPTION
WHEN OTHERS THEN
NULL; -- Do nothing if error occurs and catch exception
END CLEAN;
PROCEDURE RESETS(IN_SOURCE IN VARCHAR2 DEFAULT NULL)
AS
SOURCE_OWNER all_source.OWNER%TYPE;
SOURCE_NAME all_source.NAME%TYPE;
SOURCE_LINE all_source.LINE%TYPE;
SOURCE_TEXT all_source.TEXT%TYPE;
SOURCE_PROC VARCHAR2(32727);
BEGIN
OWA_UTIL.WHO_CALLED_ME(owner => SOURCE_OWNER,
name => SOURCE_NAME,
lineno => SOURCE_LINE,
caller_t => SOURCE_TEXT);
IF SOURCE_PROC IS NULL THEN
SOURCE_PROC := SUBSTR(GET_PROC_NAME(SOURCE_OWNER,SOURCE_NAME,SOURCE_LINE),1,125);
ELSE
SOURCE_PROC := IN_SOURCE;
END IF;
LIST_PARAM.DELETE(SOURCE_PROC);
EXCEPTION
WHEN OTHERS THEN
NULL;
END RESETS;
PROCEDURE PUT_LINE(TXT IN VARCHAR2)
AS
PRAGMA AUTONOMOUS_TRANSACTION;
SOURCE_OWNER all_source.OWNER%TYPE;
SOURCE_NAME all_source.NAME%TYPE;
SOURCE_LINE all_source.LINE%TYPE;
SOURCE_TEXT all_source.TEXT%TYPE;
SOURCE_PROC VARCHAR2(128);
BEGIN
IF TXT IS NULL OR TXT = '' THEN
RETURN;
END IF;
OWA_UTIL.WHO_CALLED_ME(owner => SOURCE_OWNER,
name => SOURCE_NAME,
lineno => SOURCE_LINE,
caller_t => SOURCE_TEXT);
SOURCE_PROC := GET_PROC_NAME(SOURCE_OWNER,SOURCE_NAME,SOURCE_LINE);
IF LIST_PARAM.EXISTS(SOURCE_PROC) THEN
LIST_PARAM(SOURCE_PROC)(LIST_PARAM(SOURCE_PROC).COUNT+1).ERR_TXT := TXT;
ELSE
LIST_PARAM(SOURCE_PROC)(1).ERR_TXT := TXT;
END IF;
EXCEPTION
WHEN OTHERS THEN
NULL;
END PUT_LINE;
PROCEDURE LOGS(REF_TYPE IN VARCHAR2 DEFAULT 'SITE', REF_VALUE IN VARCHAR2 DEFAULT '99')
AS
PRAGMA AUTONOMOUS_TRANSACTION;
MASTER_VALUE ERROR_LOG.ERR_MASTER%TYPE;
SITE PARAMTAB.SITE_CODE%TYPE;
SOURCE_OWNER all_source.OWNER%TYPE;
SOURCE_NAME all_source.NAME%TYPE;
SOURCE_LINE all_source.LINE%TYPE;
SOURCE_TEXT all_source.TEXT%TYPE;
SOURCE_PROC VARCHAR2(128);
ERR_KEY NUMBER;
BEGIN
-- NULL;
OWA_UTIL.WHO_CALLED_ME(owner => SOURCE_OWNER,
name => SOURCE_NAME,
lineno => SOURCE_LINE,
caller_t => SOURCE_TEXT);
SOURCE_PROC := SUBSTR(GET_PROC_NAME(SOURCE_OWNER,SOURCE_NAME,SOURCE_LINE),1,128);
LIST_PARAM.DELETE(SOURCE_PROC);
-- SELECT NVL(MAX(ACTIVE),'N') INTO LOG_TRC FROM LOG_PARAM WHERE TRIM(UPPER((PROC||'.'||SUBPROC))) = TRIM(UPPER(SOURCE_PROC))
-- AND OWNER =SOURCE_OWNER AND TYPE = SOURCE_TEXT ;
-- IF LOG_TRC = 'N' THEN
-- LIST_PARAM.DELETE(SOURCE_PROC);
-- RETURN;
-- END IF;
-- SELECT_MASTER(REF_TYPE => UPPER(REF_TYPE), PARAM_VALUE => REF_VALUE, SITE => SITE, REF_MASTER => MASTER_VALUE);
-- ERR_KEY := TO_CHAR(LOCALTIMESTAMP,'YYYYMMDDHH24MISSFF6');
-- FOR AIX IN 1..LIST_PARAM(SOURCE_PROC).COUNT LOOP
-- INSERT INTO ERROR_LOG (ERR_KEY, ERR_SITE,ERR_SLAVE ,ERR_MASTER ,ERR_TYPE ,ERR_PROC,ERR_DATE,ERR_TXT)
-- VALUES (ERR_KEY,SITE,REF_VALUE,MASTER_VALUE,'LOG',SOURCE_PROC,LIST_PARAM(SOURCE_PROC)(AIX).ERR_DATE ,LIST_PARAM(SOURCE_PROC)(AIX).ERR_TXT);
-- END LOOP;
-- UPDATE SESSION_CONTEXT SET SCX_ERR_KEY = ERR_KEY WHERE SCX_ID = SYS_CONTEXT('USERENV','SESSIONID');
-- LIST_PARAM.DELETE(SOURCE_PROC);
-- COMMIT;
EXCEPTION
WHEN OTHERS THEN
LIST_PARAM.DELETE(SOURCE_PROC);
END LOGS;
PROCEDURE ERRORS(REF_TYPE IN VARCHAR2 DEFAULT 'SITE', REF_VALUE IN VARCHAR2 DEFAULT '99', ERR_CODE IN NUMBER DEFAULT SQLCODE, ERR_MSG IN VARCHAR2 DEFAULT SQLERRM)
AS
PRAGMA AUTONOMOUS_TRANSACTION;
MASTER_VALUE ERROR_LOG.ERR_MASTER%TYPE;
SITE PARAMTAB.SITE_CODE%TYPE;
SOURCE_OWNER all_source.OWNER%TYPE;
SOURCE_NAME all_source.NAME%TYPE;
SOURCE_LINE all_source.LINE%TYPE;
SOURCE_TEXT all_source.TEXT%TYPE;
SOURCE_PROC VARCHAR2(4000);
ERR_KEY NUMBER := TO_CHAR(LOCALTIMESTAMP,'YYYYMMDDHH24MISSFF6');
BEGIN
OWA_UTIL.WHO_CALLED_ME(owner => SOURCE_OWNER,
name => SOURCE_NAME,
lineno => SOURCE_LINE,
caller_t => SOURCE_TEXT);
SOURCE_PROC := SUBSTR(GET_PROC_NAME(SOURCE_OWNER,SOURCE_NAME,SOURCE_LINE),1,125);
SELECT_MASTER(REF_TYPE => UPPER(REF_TYPE), PARAM_VALUE => REF_VALUE, SITE => SITE, REF_MASTER => MASTER_VALUE);
IF LIST_PARAM.EXISTS(SOURCE_PROC) THEN
FOR AIX IN 1..LIST_PARAM(SOURCE_PROC).COUNT LOOP
INSERT INTO ERROR_LOG (ERR_KEY,ERR_SITE,ERR_SLAVE,ERR_MASTER,ERR_PROC,ERR_DATE,ERR_TXT,ERR_CODE,ERR_MSG)
VALUES (ERR_KEY,SITE,REF_VALUE,MASTER_VALUE,SOURCE_PROC,LIST_PARAM(SOURCE_PROC)(AIX).ERR_DATE, LIST_PARAM(SOURCE_PROC)(AIX).ERR_TXT,ERR_CODE,ERR_MSG);
END LOOP;
LIST_PARAM.DELETE(SOURCE_PROC);
ELSE
INSERT INTO ERROR_LOG (ERR_KEY,ERR_SITE,ERR_SLAVE,ERR_MASTER,ERR_PROC,ERR_DATE,ERR_TXT,ERR_CODE,ERR_MSG)
VALUES (ERR_KEY,SITE,REF_VALUE,MASTER_VALUE,SOURCE_PROC,CURRENT_TIMESTAMP,'Error info',ERR_CODE,ERR_MSG);
END IF;
UPDATE SESSION_CONTEXT SET SCX_ERR_KEY = ERR_KEY WHERE SCX_ID = sys_context('usr_context', 'session_id');
COMMIT;
EXCEPTION
WHEN OTHERS THEN
LIST_PARAM.DELETE(SOURCE_PROC);
END ERRORS;
END PAC_LOG_ERROR;
This package is called everywhere; I execute it in every routine. In the database trace log we have seen a problem with the procedure GET_PROC_NAME in the package. We identified that it is called more than 800 times and hurts performance. The expensive part is this SELECT command:
SELECT * INTO SOURCE_TEXT
FROM (SELECT TEXT FROM all_source
WHERE OWNER = SOURCE_OWNER AND
NAME=SOURCE_NAME AND
TYPE IN ('PROCEDURE','FUNCTION','PACKAGE BODY') AND
LINE <= SOURCE_LINE AND SUBSTR(TRIM(TEXT),1,9) IN ('PROCEDURE','FUNCTION ')
ORDER BY LINE DESC)
WHERE ROWNUM = 1;
Complex SQL like inline views and views of views can overwhelm the cost-based optimizer, resulting in bad execution plans. Start with an execution plan of your problem query to see whether it is inefficient - look for full table scans in particular. You might get better performance by eliminating the IN and merging the results of 3 queries with a UNION. -
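Another angle on the 800+ executions: since GET_PROC_NAME resolves the same owner/name/line combinations over and over within a session, the result can be cached in a package-level associative array (VARCHAR2-indexed collections are already used by the poster's own LIST_PARAM on this 9.2 release). A hedged sketch of the package-body fragment, not the poster's code; the existing ALL_SOURCE lookup (omitted) would populate l_text before it is cached:

```sql
TYPE t_name_cache IS TABLE OF VARCHAR2(4000) INDEX BY VARCHAR2(250);
g_proc_cache t_name_cache;

FUNCTION get_proc_name(source_owner IN all_source.owner%TYPE,
                       source_name  IN all_source.name%TYPE,
                       source_line  IN all_source.line%TYPE)
  RETURN VARCHAR2
AS
  l_key  VARCHAR2(250) :=
    source_owner || '.' || source_name || '.' || TO_CHAR(source_line);
  l_text all_source.text%TYPE;
BEGIN
  IF g_proc_cache.EXISTS(l_key) THEN
    RETURN g_proc_cache(l_key);          -- served from memory
  END IF;
  -- ... existing ALL_SOURCE lookup fills l_text here ...
  g_proc_cache(l_key) := l_text;
  RETURN l_text;
END get_proc_name;
```

With this in place the ALL_SOURCE query runs at most once per distinct call site per session, instead of once per log call.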
Help to improve the performance of a procedure.
Hello everybody,
First to introduce myself. My name is Ivan and I recently started learning SQL and PL/SQL. So don't go hard on me. :)
Now let's jump to the problem. We have a table (a big one, but we'll need only a few fields) with information about calls, called table1. There is also another table with exactly the same structure, which is empty, and we have to transfer the records from the first one into it.
The shorter calls (less than 30 minutes) have segmentID = 'C1'.
The longer calls (more than 30 minutes) are recorded as more than one record (1 for every 30 minutes). The first record (first 30 minutes of the call) has segmentID = 'C21'. It is the first so we have only one of these for every different call. Then we have the next (middle) parts of the call, which have segmentID = 'C22'. We can have more than 1 middle part and again the maximum minutes in each is 30 minutes. Then we have the last part (again max 30 minutes) with segmentID = 'C23'. As with the first one we can have only one last part.
So far, so good. Now we need to insert these call records into the second table. The C1 records are easy - one record = one call. But the partial ones need to be combined so they become one whole call. This means taking one of the first parts (C21), finding whether there is a middle part (C22) with the same calling/called numbers and a 30-minute difference in date/time, then searching again for another C22, and so on. Finally we search for the last part of the call (C23). During these searches we sum the duration of each part, so at the end we have the duration of the whole call. Then we are ready to insert it into the new table as a single record, just with the new duration.
But here comes the problem with my code... The table has A LOT of records and this solution, despite the fact that it works (at least in the tests I've made so far), is REALLY slow.
As I said I'm new to PL/SQL and I know that this solution is really newbish, but I can't find another way of doing this.
So I decided to come here and ask you for some tips on how to improve the performance of this.
I think you are getting confused already, so I'm just going to put some comments in the code.
I know it's not a procedure as it stands now, but it will be once I create a better code. I don't think it matters for now.
DECLARE
CURSOR cur_c21 IS
select * from table1
where segmentID = 'C21'
order by start_date_of_call; -- start_date_of_call holds the beginning of a specific part of the call (DATE format).
CURSOR cur_c22 IS
select * from table1
where segmentID = 'C22'
order by start_date_of_call;
CURSOR cur_c22_2 IS
select * from table1
where segmentID = 'C22'
order by start_date_of_call;
cursor cur_c23 is
select * from table1
where segmentID = 'C23'
order by start_date_of_call;
v_temp_rec_c22 cur_c22%ROWTYPE;
v_dur table1.duration%TYPE; -- stores the duration of the call (NUMBER).
BEGIN
insert into table2
select * from table1 where segmentID = 'C1'; -- inserting the calls that are less than 30 minutes long
-- and here starts the mess
FOR rec_c21 IN cur_c21 LOOP -- taking the first part of the call
v_dur := rec_c21.duration; -- recording its duration
FOR rec_c22 IN cur_c22 LOOP -- checking whether there is a middle part for the call
IF rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND
(rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48)
/* if the numbers are the same and the date difference is 30 minutes then we have a middle part and we start searching for the next middle. */
THEN
v_dur := v_dur + rec_c22.duration; -- updating the new duration
v_temp_rec_c22:=rec_c22; -- keeping the current record in another variable because it is used for the next check
FOR rec_c22_2 in cur_c22_2 LOOP
IF rec_c22_2.callingnumber = v_temp_rec_c22.callingnumber AND rec_c22_2.callednumber = v_temp_rec_c22.callednumber AND
(rec_c22_2.start_date_of_call - v_temp_rec_c22.start_date_of_call) = (1/48)
/* logic is the same as before but comparing with the last value in v_temp...
And because the data in the cursors is ordered by date in ascending order it's easy to search for another middle parts. */
THEN
v_dur:=v_dur + rec_c22_2.duration;
v_temp_rec_c22:=rec_c22_2;
END IF;
END LOOP;
END IF;
EXIT WHEN rec_c22.callingnumber = rec_c21.callingnumber AND rec_c22.callednumber = rec_c21.callednumber AND
(rec_c22.start_date_of_call - rec_c21.start_date_of_call) = (1/48);
/* exiting the loop if we have at least one middle part.
(I couldn't find if there is a way to write this more clean, like exit when (the above if is true) */
END LOOP;
FOR rec_c23 IN cur_c23 LOOP
IF (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
(rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration
/* we should always have one last part, so we need this check.
If we don't have the "v_dur != rec_c21.duration" part it will execute the code inside only if we don't have middle parts
(yes we can have these situations in calls longer than 30 and less than 60 minutes). */
THEN
v_dur:=v_dur + rec_c23.duration;
rec_c21.duration:=v_dur; -- updating the duration
rec_c21.segmentID :='C1';
INSERT INTO table2 VALUES rec_c21; -- inserting the whole call in table2
END IF;
EXIT WHEN (rec_c23.callingnumber = rec_c21.callingnumber AND rec_c23.callednumber = rec_c21.callednumber AND
(rec_c23.start_date_of_call - rec_c21.start_date_of_call) = (1/48)) OR v_dur != rec_c21.duration;
-- exit the loop when the last part has been found.
END LOOP;
END LOOP;
END;
I'm using Oracle 11g and version 1.5.5 of SQL Developer.
It's my first post here so hope this is the right sub-forum.
I tried to explain everything as deep as possible (sorry if it's too long) and I kinda think that the code got somehow hard to read with all these comments. If you want I can remove them.
I know I'm still missing a lot of knowledge so every help is really appreciated.
Thank you very much in advance!
Atiel wrote:
Thanks for the suggestion but the thing is that segmentID must stay the same for all. The data in this field is just to tell us if this is a record of a complete call (C1) or a partial record of a call (C21, C22, C23). So in table2, as every record will be a complete call, the segmentID must be C1 for all.
Well that's not a problem. You just hard code 'C1' instead of applying the row number as I was doing:
SQL> ed
Wrote file afiedt.buf
1 select 'C1' as segmentid
2 ,start_date_of_call, duration, callingnumber, callednumber
3 from (
4 select distinct
5 min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
6 ,sum(duration) over (partition by callingnumber, callednumber) as duration
7 ,callingnumber
8 ,callednumber
9 from table1
10* )
SQL> /
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER
C1 11-MAY-2012 12:13:10 8020557824 1982032041 0631432831624
C1 15-MAR-2012 09:07:26 269352960 5581790386 0113496771567
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349
Another thing is that, as I said above, the actual table has 120 fields. Do I have to list them all manually if I use something similar?
If that's what you need, then yes you would have to list them. You only get data if you tell it you want it. ;)
Of course if you are taking the start_date_of_call, callingnumber and callednumber as the 'key' to the record, then you could join the results of the above back to the original table1 and pull out the rest of the columns that way...
SQL> select * from table1;
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER COL1 COL2 COL3
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349 556 40 5.32
C21 15-MAR-2012 09:07:26 134676480 5581790386 0113496771567 219 100 10.16
C23 11-MAY-2012 09:37:26 134676480 5581790386 0113496771567 321 73 2.71
C21 11-MAY-2012 12:13:10 3892379648 1982032041 0631432831624 959 80 2.87
C22 11-MAY-2012 12:43:10 3892379648 1982032041 0631432831624 375 57 8.91
C22 11-MAY-2012 13:13:10 117899264 1982032041 0631432831624 778 27 1.42
C23 11-MAY-2012 13:43:10 117899264 1982032041 0631432831624 308 97 3.26
7 rows selected.
SQL> ed
Wrote file afiedt.buf
1 with t2 as (
2 select 'C1' as segmentid
3 ,start_date_of_call, duration, callingnumber, callednumber
4 from (
5 select distinct
6 min(start_date_of_call) over (partition by callingnumber, callednumber) as start_date_of_call
7 ,sum(duration) over (partition by callingnumber, callednumber) as duration
8 ,callingnumber
9 ,callednumber
10 from table1
11 )
12 )
13 --
14 select t2.segmentid, t2.start_date_of_call, t2.duration, t2.callingnumber, t2.callednumber
15 ,t1.col1, t1.col2, t1.col3
16 from t2
17 join table1 t1 on ( t1.start_date_of_call = t2.start_date_of_call
18 and t1.callingnumber = t2.callingnumber
19 and t1.callednumber = t2.callednumber
20* )
SQL> /
SEGMENTID START_DATE_OF_CALL DURATION CALLINGNUMBER CALLEDNUMBER COL1 COL2 COL3
C1 11-MAY-2012 12:13:10 8020557824 1982032041 0631432831624 959 80 2.87
C1 15-MAR-2012 09:07:26 269352960 5581790386 0113496771567 219 100 10.16
C1 31-JUL-2012 23:20:23 134676480 4799842978 0813391427349 556 40 5.32
SQL>

Of course this is pulling back the additional columns from the record that matches the start_date_of_call for that calling/called number pair, so if the values differ from row to row within the calling/called number pair you may need to aggregate those (take the minimum/maximum etc. as required) as part of the first query. If the values are known to be the same across all records in the group, then you can just pick them up from the join to the original table as I coded in the above example (although in my example the data differed across rows). -
How to optimize the performance of a Crystal Report?
Hi,
-I have to design a Crystal Report with the best possible optimization. Optimization is the main concern since the report will run against a 1-2 million row data set. Though I am using parameters to fetch only the required data, the required data can still reach 1 million records.
-Based on the input passed by the user I have to group the data in the report, and for each selected parameter the Detail section I am printing is different. For example, if the user selects Store then the detail section is different, and if the user selects Host then the detail section will be different.
-The report can be grouped by a Time field also. To fulfill this requirement I would have to create a sub report, since the other parameters are of string type and can be used in one formula to get parameter-based grouping in the report. However, if I try to return the Time field from the same formula I get the error "Return type should be of String type". This forces me to create a sub report for Time-based grouping. If the user selects the Time field to be grouped on, all the information in the main report gets suppressed and only the sub report gets printed.
If the user selects Store, Host and User parameters to be grouped on, the sub report gets suppressed.
Now with the above mentioned points I tried to optimize the report in the following way.
-Printing 1 million records in the report does not make sense; hence we wanted to show the summary of all the records in the chart section but print just 5000 records in the detailed section. Suppressing the detailed section after 5000 records does not help much, since suppressing only saves the time spent printing and does not limit the number of records fetched from the DB. I also have a subreport, so the data is fetched twice from the DB, which makes the performance of the report worse.
To solve this problem I used a Command object, and put the charts in the subreport and the detail in the main report.
In the main report's Command object I limited the number of records fetched from the DB to 5000 using rownum<5000, but in the subreport's Command object I did not set any limit in the query; instead I do all my aggregation in SQL, which means the summary operation happens in the DB and only summarized data comes back.
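The idea in the point above can be sketched minimally with SQLite, using LIMIT in place of Oracle's rownum predicate (the table and data here are hypothetical, purely to illustrate capping the detail query while aggregating the summary in the database):

```python
import sqlite3

# Hypothetical illustration: cap the detail query at N rows while computing
# the summary as an aggregate in the database. SQLite's LIMIT stands in for
# Oracle's "rownum < 5000" predicate.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (store TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("A", i * 1.0) for i in range(10000)])

DETAIL_CAP = 5000
detail = conn.execute("SELECT store, amount FROM sales LIMIT ?",
                      (DETAIL_CAP,)).fetchall()
summary = conn.execute("SELECT store, COUNT(*), SUM(amount) "
                       "FROM sales GROUP BY store").fetchone()

print(len(detail))   # detail section sees only the capped 5000 rows
print(summary)       # summary is computed in the DB over all 10000 rows
```

Only the capped rows ever cross the wire for the detail section, while the chart/summary data arrives already aggregated.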
-To solve the section problem I am using the Template object (a new feature added in CR 2008), in which I return the field based on the "Group By" parameter selected by the user.
-For the Time field I have created two sub reports, one for the chart and one for the details, in the same way described in the first point (printing 1 million records...).
After implementing these points my Crystal Report's performance improved drastically. The report that was taking 24 minutes to come back is now taking only 2 minutes.
However, I want my report to come back within one minute. It does if I remove the sub reports for Time-based grouping, but I cannot do so.
My questions here are,
-Can I stop the subreport from fetching data from the DB if it's suppressed?
-I believe using a conditional Template object is a better option than having multiple detailed sections to print the data for a selected group. However, any suggestion here to improve the performance would be appreciated.
-Since Crystal Reports does not provide any option to limit the number of records fetched from the DB, I am forced to use a Command object with rownum in the where condition.
Please let me know about other option(s) to get this done... if there are any.
I am using Crystal Reports 2008, and we have developed our application to use the JRC to export Crystal Reports to PDF.
Regards,
Amrita
Edited by: Amrita Singh on May 12, 2009 11:36 AM
Edited by: Amrita Singh on May 12, 2009 12:26 PM -
My MacBook Pro is running very slowly; is there anything I can do to improve the performance?
Is there anything I can do to improve the speed of my MacBook Pro?
Kappy's Personal Suggestions About OS X Maintenance
For disk repairs use Disk Utility. For situations DU cannot handle, the best third-party utility is Disk Warrior; DW only fixes problems with the disk directory, but most disk problems are caused by directory corruption. Drive Genius provides additional tools not found in Disk Warrior: defragmentation of older drives, disk repair, disk scans, formatting, partitioning, disk copy, and benchmarking.
Four outstanding sources of information on Mac maintenance are:
1. OS X Maintenance - MacAttorney.
2. Mac maintenance Quick Assist
3. Maintaining Mac OS X
4. Mac Maintenance Guide
Periodic Maintenance
OS X performs certain maintenance functions that are scheduled to occur on a daily, weekly, or monthly period. The maintenance scripts run in the early AM only if the computer is turned on 24/7 (no sleep.) See Mac OS X- About background maintenance tasks. If you are running Leopard or later these tasks are run automatically, so there is no need to use any third-party software to force running these tasks.
If you are using a pre-Leopard version of OS X, then an excellent solution is to download and install a shareware utility such as Macaroni, JAW PseudoAnacron, or Anacron that will automate the maintenance activity regardless of whether the computer is turned off or asleep. Dependence upon third-party utilities to run the periodic maintenance scripts was significantly reduced after Tiger. (These utilities have limited or no functionality with Snow Leopard, Lion, or Mountain Lion and should not be installed.)
Defragmentation
OS X automatically defragments files less than 20 MBs in size, so unless you have a disk full of very large files there's little need for defragmenting the hard drive except when trying to install Boot Camp on a fragmented drive. But you don't need to buy third-party software. All you need is a spare external hard drive and Carbon Copy Cloner.
Cheap and Easy Defragmentation
You will have to backup your OS X partition to an external drive, boot from the external drive, use Disk Utility to repartition and reformat your hard drive back to a single volume, then restore your backup to the internal hard drive. You will use Carbon Copy Cloner to create the backup and to restore it.
1. Get an empty external hard drive and clone your internal drive to the external one.
2. Boot from the external hard drive.
3. Erase the internal hard drive.
4. Restore the external clone to the internal hard drive.
Clone the internal drive to the external drive
1. Open Carbon Copy Cloner.
2. Select the Source volume from the left side dropdown menu.
3. Select the Destination volume from the left side dropdown menu.
4. Be sure the Block Copy button is not depressed or is ghosted.
5. Click on the Clone button.
Destination means the external backup drive. Source means the internal startup drive.
Restart the computer and after the chime press and hold down the OPTION key until the boot manager appears. Select the icon for the external drive and click on the upward pointing arrow button.
After startup do the following:
Erase internal hard drive
1. Open Disk Utility in your Utilities folder.
2. After DU loads select your internal hard drive (this is the entry with the mfgr.'s ID and size) from the left side list. Note the SMART status of the drive in DU's status area. If it does not say "Verified" then the drive is failing or has failed and will need replacing. SMART info will not be reported on external drives. Otherwise, click on the Partition tab in the DU main window.
3. Under the Volume Scheme heading set the number of partitions from the drop down menu to one. Set the format type to Mac OS Extended (Journaled.) Click on the Options button, set the partition scheme to GUID, then click on the OK button. Click on the Partition button and wait until the process has completed.
Restore the clone to the internal hard drive
1. Open Carbon Copy Cloner.
2. Select the Source volume from the left side dropdown menu.
3. Select the Destination volume from the left side dropdown menu.
4. Be sure the Block Copy button is not selected or is ghosted.
5. Click on the Clone button.
Destination means the internal hard drive. Source means the external startup drive.
Note that the Source and Destination drives are swapped for this last procedure.
Malware Protection
As for malware protection there are few if any such animals affecting OS X. Starting with Lion Apple has included built-in malware protection that is automatically updated as necessary.
Helpful Links Regarding Malware Protection:
1. Mac Malware Guide.
2. Detecting and avoiding malware and spyware
3. Macintosh Virus Guide
For general anti-virus protection I recommend only using ClamXav, but it is not necessary if you are keeping your computer's operating system software up to date. You should avoid any other third-party software advertised as providing anti-malware/virus protection. They are not required and could cause the performance of your computer to drop.
Cache Clearing
I recommend downloading a utility such as TinkerTool System, OnyX 2.4.3, Mountain Lion Cache Cleaner 7.0.9, Maintenance 1.6.8, or Cocktail 5.1.1 that you can use for periodic maintenance such as removing old log files and archives, clearing caches, etc. Corrupted cache files can cause slowness, kernel panics, and other issues. Although this is not a frequent nor a recurring problem, when it does happen there are tools such as those above to fix the problem.
If you are using Snow Leopard or earlier, then for emergency cleaning install the freeware utility Applejack. If you cannot start up in OS X, you may be able to start in single-user mode from which you can run Applejack to do a whole set of repair and maintenance routines from the command line. Note that AppleJack 1.5 is required for Leopard. AppleJack 1.6 is compatible with Snow Leopard. (AppleJack works with Snow Leopard or earlier.)
Installing System Updates or Upgrades
When you install any new system software or updates be sure to repair the hard drive and permissions beforehand.
Backup and Restore
Having a backup and restore strategy is one of the most important things you can do to maintain your computer. Get an external Firewire drive at least equal in size to the internal hard drive and make (and maintain) a bootable clone/backup. You can make a bootable clone using the Restore option of Disk Utility. You can also make and maintain clones with good backup software. My personal recommendations are (order is not significant):
1. Carbon Copy Cloner.
2. Deja Vu
3. SuperDuper!
4. Synk Pro
5. Tri-Backup
Visit The XLab FAQs and read the FAQs on maintenance and backup and restore.
Always have a current backup before performing any system updates or upgrades.
Be sure you have an adequate amount of RAM installed for the number of applications you run concurrently. Be sure you leave a minimum of 10% of the hard drive's capacity or 20 GBs, whichever is greater, as free space. Avoid installing utilities that rely on Haxies, SIMBL, or that alter the OS appearance, add features you will rarely if ever need, etc. The more extras you install the greater the probability of having problems. If you install software be sure you know how to uninstall it. Avoid installing multiple new software at the same time. Install one at a time and use it for a while to be sure it's compatible.
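The free-space rule above (keep at least 10% of the drive's capacity or 20 GBs, whichever is greater) is simple arithmetic; a minimal sketch in Python, purely for illustration:

```python
def min_free_space_gb(capacity_gb: float) -> float:
    """Minimum free space to keep: 10% of capacity or 20 GB, whichever is greater."""
    return max(0.10 * capacity_gb, 20.0)

# A 160 GB drive should keep at least 20 GB free; a 500 GB drive, 50 GB.
print(min_free_space_gb(160))  # 20.0
print(min_free_space_gb(500))  # 50.0
```

Below roughly a 200 GB drive the flat 20 GB floor dominates; above it, the 10% rule takes over.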
Additional suggestions will be found in:
1. Mac OS X speed FAQ
2. Speeding up Macs
3. Macintosh OS X Routine Maintenance
4. Essential Mac Maintenance: Get set up
5. Essential Mac Maintenance: Rev up your routines
6. Five Mac maintenance myths
7. How to Speed up Macs
8. Myths of required versus not required maintenance for Mac OS X
Referenced software can be found at CNet Downloads or MacUpdate.
Add more RAM or run fewer applications concurrently. -
Windows 7 64-bit with MS Outlook 2010. I've seen this now probably 10 times. When I first installed iTunes, I selected:
Preferences -> Devices -> "Prevent iPods, iPhones and iPads from syncing automatically".
and on my iPhone, I selected:
Info -> Sync Contacts with: Outlook.
Info -> Sync Calendars with: Outlook
Info -> Other -> Sync notes with: Outlook
This works great, and the settings stick. But as soon as Apple releases any new version (even minor) of iTunes and I install it, I immediately receive: "...the information on the iPhone is synced with another user account", and have to merge my calendar and contacts. Then once I get into iTunes, I find that ALL of the above settings (prevent syncing; sync calendar, contacts, notes) have been reset and are turned off! I am running as a user account with Admin privileges, but I don't right-click -> "Run as administrator" when I perform the iTunes update.
To reiterate, once I change the settings back, things work normally until the next iTunes update at which it gives me that sync error, and resets my sync options.
Any idea? Thanks!
-Brian

Is someone else using the PC and iTunes?
Create another user account on the PC with your name, and try syncing that way.
That way it won't conflict with the other user.