NNM iSPI Performance for Traffic silentInstall*.properties missing for Linux
HP -- I've been trying to figure out a way to get this information to you, and this is the best I could find. When performing a silent installation of HP NNM iSPI Performance for Traffic, you need the silentInstallExt.properties, silentInstallMaster.properties, and silentInstallLeaf.properties files. These files are not supplied with the Linux installation DVD image (Software_NNM_iSPI_Perf_Traf_Linux_9.20_Eng_TB770-15009.iso) -- you might want to remedy that. If anyone needs to find the files for a silent install on Linux, you can get them from the Windows installation DVD image (Software_NNM_iSPI_Perf_Traf_10.00_Win_TB770-15014.iso) in the Traffic_NNM_Extension, Traffic_Master, and Traffic_Leaf directories.
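Until the Linux image ships these files, copying them off the Windows image can be scripted. A minimal sketch, assuming the Windows ISO is already mounted; the mount point and destination paths are placeholders, and the directory-to-file mapping mirrors the layout described above:

```python
import shutil
from pathlib import Path

# Directory -> properties file, per the Windows DVD layout described above.
PROPS = {
    "Traffic_NNM_Extension": "silentInstallExt.properties",
    "Traffic_Master": "silentInstallMaster.properties",
    "Traffic_Leaf": "silentInstallLeaf.properties",
}

def copy_silent_props(src_root: str, dest: str) -> list:
    """Copy the three silent-install property files; return copied paths."""
    copied = []
    dest_dir = Path(dest)
    dest_dir.mkdir(parents=True, exist_ok=True)
    for subdir, fname in PROPS.items():
        src = Path(src_root) / subdir / fname
        if src.is_file():
            shutil.copy2(src, dest_dir / fname)
            copied.append(str(dest_dir / fname))
    return copied
```

Point `src_root` at the ISO mount point (e.g. wherever you mounted the Windows DVD image) and `dest` at the Linux staging directory for the silent install.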
Similar Messages
-
Slow TCP performance for traffic routed by ACE module
Hi,
the customer uses two ACE20 modules in active/standby mode. The ACE load-balances the servers correctly, but there is a problem with communication between servers in different ACE contexts. When the customer runs FTP from a server in one context to a server in another context, the throughput through the ACE is about 23 Mbps. This traffic is routed, not load-balanced, by the ACE :-( See:
server1: / #ftp server2
Connected to server2.cent.priv.
220 server2.cent.priv FTP server (Version 4.2 Wed Apr 2 15:38:27 CDT 2008) ready.
Name (server2:root):
331 Password required for root.
Password:
230 User root logged in.
ftp> bin
200 Type set to I.
ftp> put "|dd if=/dev/zero bs=32k count=5000 " /dev/null
200 PORT command successful.
150 Opening data connection for /dev/null.
5000+0 records in.
5000+0 records out.
226 Transfer complete.
163840000 bytes sent in 6.612 seconds (2.42e+04 Kbytes/s)
local: |dd if=/dev/zero bs=32k count=5000 remote: /dev/null
ftp>
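As a sanity check on the numbers in the ftp output above: 163,840,000 bytes in 6.612 seconds matches the reported 2.42e+04 Kbytes/s, and works out to roughly 198 Mbit/s on the wire, so the "23 Mbps" quoted above is most plausibly 23-24 megabytes per second:

```python
bytes_sent = 163_840_000            # 5000 records of 32 KiB from the dd pipe
seconds = 6.612                     # transfer time reported by ftp

kbytes_per_s = bytes_sent / seconds / 1024    # ftp reports binary Kbytes/s
mbit_per_s = bytes_sent * 8 / seconds / 1e6   # decimal megabits per second

print(f"{kbytes_per_s:.3g} Kbytes/s")   # 2.42e+04, matching the ftp log
print(f"{mbit_per_s:.0f} Mbit/s")       # ~198 Mbit/s
```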
The output from show resource usage doesn't show any drops (columns: resource, current, peak, min guarantee, max, denied):
conc-connections 0 0 800000 1600000 0
mgmt-connections 10 54 10000 20000 0
proxy-connections 0 0 104858 209716 0
xlates 0 0 104858 209716 0
bandwidth 0 46228 50000000 225000000 0
throughput 0 1155 50000000 100000000 0
mgmt-traffic rate 0 45073 0 125000000 0
connections rate 0 9 100000 200000 0
ssl-connections rate 0 0 500 1000 0
mac-miss rate 0 0 200 400 0
inspect-conn rate 0 0 600 1200 0
acl-memory 7064 7064 7082352 14168883 0
sticky 6 6 419430 0 0
regexp 47 47 104858 209715 0
syslog buffer 794624 794624 418816 431104 0
syslog rate 0 31 10000 20000 0
There is a parameter map configured with rebalance persistent for cookie insertion in the context.
Do you know how I can increase performance for TCP traffic which is not load-balanced, but routed by the ACE? Thank you very much.
Roman
The default inactivity timeouts used by the ACE are:
icmp 2sec
tcp 3600sec
udp 120sec
With your config you will change the inactivity timeout for every protocol to 7500 sec. If you want to change only the TCP timeout to 7500 sec and keep the
other inactivity timeouts as they are now, use the following:
parameter-map type connection GLOBAL-TCP
set timeout inactivity 7500
parameter-map type connection GLOBAL-UDP
set timeout inactivity 120
parameter-map type connection GLOBAL-ICMP
set timeout inactivity 2
access-list ICMP-ONLY line 10 extended permit icmp any any
class-map match-all ALL-TCP
match port tcp any
class-map match-all ALL-UDP
match port udp any
class-map match-all ALL-ICMP
match access-list ICMP-ONLY
policy-map multi-match TIMEOUTS
class ALL-TCP
connection advanced-options GLOBAL-TCP
class ALL-UDP
connection advanced-options GLOBAL-UDP
class ALL-ICMP
connection advanced-options GLOBAL-ICMP
and apply the service-policy TIMEOUTS globally.
Syed Iftekhar Ahmed -
I have searched through the forums and there are a number of posts that are similar but all the checks they list seem to not apply to this one.
My current setup is as follows
All Servers are 2012 R2
1 x DC server
1 x RDS Gateway server with RDS Web installed
1 x Session Host Server
Certificate supplied by GoDaddy with 5 names (the name of the RDS Gateway/Web server is included in the certificate; the internal name of the session host server is not, as the internal names are different from the external ones).
My tests are as follows
Navigating to the RDWeb page from a machine inside the same network (Windows 7 SP1) but not on the same domain is fine: no errors, and logging in and launching any published application works without errors.
However, logging in from a machine external to the network (Windows 7 SP1) is OK up to the point of launching any of the published apps, when I get the error "A revocation check could not be performed for the certificate". This
prompts twice but does allow you to continue, log in and use the app until the next time. If I view the certificate from the warning message, all appears to be OK with all certs in the chain.
I have imported the root and intermediate certs to each of the gateway/rdsweb server and session host server into the computer cert store just to be on the safe side. This has not helped, I have also run the following command from both windows 7 machines
with no errors on either
certutil -f -urlfetch -verify c:\export.cer
I can't seem to see where this is failing, and I am beginning to think there is something wrong with the GoDaddy cert itself somehow.
If I skip rdsweb and just use MSTSC with the gateway server settings then I can login to any machine on the network with no errors so this is only related to launching published apps on the 2012 R2 RDWEB or session host servers.
Any help appreciated.
Hi,
1. Please make sure the client PCs have mstsc.exe (6.3.9600) installed.
2. If you are seeing a name mismatch error, you can set the published name via this cmdlet:
Change published FQDN for Server 2012 or 2012 R2 RDS Deployment
http://gallery.technet.microsoft.com/Change-published-FQDN-for-2a029b80
To be clear, the above cmdlet changes the name that shows up next to Remote computer on the prompt you see when launching a RemoteApp. You should have a DNS A record on your internal network pointing to the private ip address of your RDCB server.
Additionally, in RD Gateway Manager, Properties of your RD RAP, Network Resource tab, you should select Allow users to connect to any network resource or if you choose to use RD Gateway Managed group you will need to add all of the appropriate names to the
group.
For example, when launching a RemoteApp you would see something like Remote computer: rdcb.domain.com and Gateway server: gateway.domain.com . Both of these names need to be on your GoDaddy certificate.
Please verify the above and reply back so that we may assist you further if needed. It is possible you have an issue with the revocation check but I would like you to make sure that the above is in place first.
Thanks.
-TP
Thanks for the response.
To be clear, I am only seeing a name mismatch and revocation error if I assign a self-signed cert to the session host, as advised earlier in the thread by "Dharmesh Solanki". If I remove this and assign the 3rd-party certificate, I then
just get the revocation error. I have already run the PowerShell to change the FQDNs, but this has not resolved the issue, although the RDP connection details now match the external URL for RDWeb when looking at one of the RemoteApp files. The workspace
ID still shows an internal name inside this same file.
RD Gateway is already set to connect to any resource. When connecting using RemoteApp, both names (RDCB/RD Gateway) show as correct and are contained within the same UCC certificate. I also already have a DNS entry for the connection broker pointing to
the internal IP.
Do you know whether I need the internal names of the session host servers in the same UCC certificate, seeing as they are different FQDNs from what I am using for external access? I re-signed the UCC certificate and included the internal name
of the session host server to see if this would help, but for some reason I am still seeing the revocation error. I will check on a Windows 8 client PC this evening to see if this gets any further, as the majority of the testing has been done on Windows 7 SP1
client PCs.
Thanks -
Hi all,
I am trying to install Weblogic 4.5.1 on Red Hat Linux 6.1
When I set the weblogic.system.nativeIO.enable to true within the
weblogic.properties file, I receive an exception:
java.lang.UnsatisfiedLinkError: no muxer in shared library path
(libmuxer.so)
I tried looking through the installation documentation and was unable to
find anything related specifically to using the performance pack under Linux
(there was information specific to AIX and Solaris, however).
So I guess my question is: Is the performance pack supported for Linux? If
so, what might I have to modify in order for it to work correctly? Thanks,
Matt Groch
Andrew Nishigaya wrote:
Hi,
I just saw your post about there being no Linux NativeIO performance
pack. Are any planned?
Yes.
If so, could you provide a general
timeline?
It should be available in the Denali (5.1) release of WLS or shortly
thereafter. Denali is GA at the end of March.
>
BTW, can you enlighten me as to what the libWeblogicLinux1.so file
that is in the linux directory does?
I believe that it is some platform-specific debug code that we include.
-- Rob
We were hoping that this was the
NativeIO pack, but I guess we were wrong.
Any help would be appreciated!
-- nori
Srikant Subramaniam wrote:
Unfortunately, we don't support the performance pack for linux at this point in
time.
Srikant.
Matthew Groch wrote:
Hi all,
I am trying to install Weblogic 4.5.1 on Red Hat Linux 6.1
When I set the weblogic.system.nativeIO.enable to true within the
weblogic.properties file, I receive an exception:
java.lang.UnsatisfiedLinkError: no muxer in shared library path
(libmuxer.so)
I tried looking through the installation documentation and was unable to
find anything related specifically to using the performance pack under Linux
(there was information specific to AIX and Solaris, however).
So I guess my question is: Is the performance pack supported for Linux? If
so, what might I have to modify in order for it to work correctly? Thanks-
Matt Groch
Andrew Nishigaya Chief Technology Officer and Founder
Miradi, Inc. Believe. Create. Collaborate.
Soquel, CA [email protected]
(831) 477-0561 http://www.miradi.com -
Hi,
I would like to enquire whether there is any way I can improve performance for table BKPF from the ABAP code point of view,
because we have customised a program to generate a report for the asset master listing.
One of the select statements is shown below:
SELECT SINGLE * FROM BKPF WHERE BUKRS = ANEP-BUKRS
AND GJAHR = ANEP-GJAHR
AND AWKEY = AWKEYUS.
I would like to know how it differs from the select statement below:
SELECT SINGLE * FROM BKPF INTO CORRESPONDING FIELDS OF T_BKPF
WHERE
BUKRS = ANEP-BUKRS
AND GJAHR = ANEP-GJAHR
AND AWKEY = AWKEY.
Which of the select statements above can improve the report? Currently we face quite a bad performance issue with this report.
Can I post the ABAP code on this forum?
Hope someone can help me on this. Thank you.
Hi,
As much as possible, use the primary key of BKPF, which is BUKRS, BELNR and GJAHR. Also, select only the records that are needed, to increase performance. Please look at the code below:
DATA: lv_age_of_rec TYPE p.
FIELD-SYMBOLS: <fs_final> LIKE LINE OF it_final.
LOOP AT it_final ASSIGNING <fs_final>.
* get records from BKPF
SELECT SINGLE bukrs belnr gjahr budat bldat xblnr bktxt FROM bkpf
INTO (bkpf-bukrs, bkpf-belnr, bkpf-gjahr, <fs_final>-budat,
<fs_final>-bldat, <fs_final>-xblnr, <fs_final>-bktxt)
WHERE bukrs = <fs_final>-bukrs
AND belnr = <fs_final>-belnr
AND gjahr = <fs_final>-gjahr.
* if <fs_final>-shkzg = 'H', multiply dmbtr (amount in local currency)
* by negative 1
IF <fs_final>-shkzg = 'H'.
<fs_final>-dmbtr = <fs_final>-dmbtr * -1.
ENDIF.
* combine company code (bukrs), accounting document number (belnr),
* fiscal year (gjahr) and line item (buzei) to get the long text.
CONCATENATE: <fs_final>-bukrs <fs_final>-belnr
<fs_final>-gjahr <fs_final>-buzei
INTO it_thead-tdname.
CALL FUNCTION 'READ_TEXT'
EXPORTING
client = sy-mandt
id = '0001'
language = sy-langu
name = it_thead-tdname
object = 'DOC_ITEM'
*   ARCHIVE_HANDLE = 0
*   LOCAL_CAT = ' '
* IMPORTING
*   HEADER =
TABLES
lines = it_lines
EXCEPTIONS
id = 1
language = 2
name = 3
not_found = 4
object = 5
reference_check = 6
wrong_access_to_archive = 7
OTHERS = 8.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
* if successful, split the long text into start and end date
IF sy-subrc = 0.
READ TABLE it_lines INDEX 1 TRANSPORTING tdline.
IF sy-subrc = 0.
SPLIT it_lines-tdline AT '-' INTO
<fs_final>-s_dat <fs_final>-e_dat.
ENDIF.
ENDIF.
* get vendor name from LFA1
SELECT SINGLE name1 FROM lfa1
INTO <fs_final>-name1
WHERE lifnr = <fs_final>-lifnr.
lv_age_of_rec = p_budat - <fs_final>-budat.
* condition for age of deposits
IF lv_age_of_rec <= 30.
<fs_final>-amount1 = <fs_final>-dmbtr.
ELSEIF lv_age_of_rec > 30 AND lv_age_of_rec <= 60.
<fs_final>-amount2 = <fs_final>-dmbtr.
ELSEIF lv_age_of_rec > 60 AND lv_age_of_rec <= 90.
<fs_final>-amount3 = <fs_final>-dmbtr.
ELSEIF lv_age_of_rec > 90 AND lv_age_of_rec <= 120.
<fs_final>-amount4 = <fs_final>-dmbtr.
ELSEIF lv_age_of_rec > 120 AND lv_age_of_rec <= 180.
<fs_final>-amount5 = <fs_final>-dmbtr.
ELSEIF lv_age_of_rec > 180.
<fs_final>-amount6 = <fs_final>-dmbtr.
ENDIF.
CLEAR: bkpf, it_lines-tdline, lv_age_of_rec.
ENDLOOP.
Hope this helps...
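The IF/ELSEIF ladder above maps the record age to one of six aging buckets (amount1 through amount6). The same boundary logic can be sketched compactly in Python for clarity:

```python
from bisect import bisect_left

# Upper bounds of the aging buckets used in the ABAP ladder above:
# <=30, 31-60, 61-90, 91-120, 121-180, >180 days.
BOUNDS = [30, 60, 90, 120, 180]

def age_bucket(age_in_days: int) -> int:
    """Return the 1-based bucket number (amount1..amount6) for an age."""
    return bisect_left(BOUNDS, age_in_days) + 1
```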
P.S. Please award points for useful answers. -
How to use a single PERFORM for various tables?
perform test TABLES t_header.
FORM test TABLES t_header.
select
KONH~KNUMH
konh~datab
konh~datbi
konp~kbetr
konp~konwa
konp~kpein
konp~kmein
KONP~KRECH
FROM konh INNER JOIN konp
ON konp~knumh = konh~knumh
into table iTABXXX
"ANY TEMPORARY INTERNAL TABLE.
for all entries in t_header
where
konh~kschl = t_header-kschl
AND konh~knumh = t_header-knumh.
endform.
How can I use the above PERFORM for various internal tables of different line types but having the fields KSCHL and KNUMH?
You can use a single PERFORM.
Just see this example; hope this is what you are expecting:
tables : pa0001.
parameters : p_pernr like pa0001-pernr.
data : itab1 like pa0001 occurs 0 with header line.
data : itab2 like pa0002 occurs 0 with header line.
perform get_data tables itab1 itab2.
if not itab1[] is initial.
loop at itab1.
write :/ itab1-pernr.
endloop.
endif.
if not itab2[] is initial.
loop at itab2.
write :/ itab2-pernr.
endloop.
endif.
*&---------------------------------------------------------------------*
*&      Form  get_data
*&---------------------------------------------------------------------*
*       text
*----------------------------------------------------------------------*
*      -->P_ITAB1  text
*      -->P_ITAB2  text
*----------------------------------------------------------------------*
form get_data tables itab1 structure pa0001
itab2 structure pa0002.
select * from pa0001 into table itab1 where pernr = p_pernr and begda le sy-datum and endda ge sy-datum.
select * from pa0002 into table itab2 where pernr = p_pernr and begda le sy-datum and endda ge sy-datum.
endform. " get_data
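The reason a single FORM works for different tables is that a TABLES parameter only constrains the fields the routine actually touches. The same "structural typing" idea can be sketched in Python with rows as dicts; the field names kschl/knumh mirror the question above:

```python
def select_conditions(rows, kschl, knumh):
    """Generic 'perform': accepts any row shape that carries the two
    shared fields kschl and knumh, regardless of its other columns."""
    return [r for r in rows if r["kschl"] == kschl and r["knumh"] == knumh]
```

The same function serves both a KONH-shaped list and a KONP-shaped list, just as the single PERFORM serves internal tables of different line types.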
Regards
vasu -
Performance for the below code
Can anyone help me improve the performance of the code below?
FORM RETRIEVE_DATA .
CLEAR WA_TERRINFO.
CLEAR WA_KNA1.
CLEAR WA_ADRC.
CLEAR SORT2.
*To retrieve the territory information from ZPSDSALREP
SELECT ZZTERRMG
ZZSALESREP
NAME1
ZREP_PROFILE
ZTEAM
INTO TABLE GT_TERRINFO
FROM ZPSDSALREP.
*Preparing Corporate ID from KNA1 & ADRC and storing it in SORT2 field
LOOP AT GT_TERRINFO INTO WA_TERRINFO.
SELECT SINGLE * FROM KNA1 INTO WA_KNA1
WHERE KUNNR = WA_TERRINFO-SALESREP.
SELECT SINGLE * FROM ADRC INTO WA_ADRC
WHERE ADDRNUMBER = WA_KNA1-ADRNR.
IF NOT WA_ADRC-SORT2 IS INITIAL.
CONCATENATE 'U' WA_ADRC-SORT2 INTO SORT2.
MOVE SORT2 TO WA_TERRINFO-SORT2.
MODIFY GT_TERRINFO1 FROM WA_TERRINFO.
APPEND WA_TERRINFO TO GT_TERRINFO1.
CLEAR WA_TERRINFO.
ENDIF.
CLEAR WA_KNA1.
CLEAR WA_ADRC.
ENDLOOP.
ENDFORM. " RETRIEVE_DATA
Hi,
The code is simple, so I don't think you can do much; you can only try to limit the reads of KNA1:
FORM RETRIEVE_DATA .
CLEAR WA_TERRINFO.
CLEAR WA_KNA1.
CLEAR WA_ADRC.
CLEAR SORT2.
*To retrieve the territory information from ZPSDSALREP
SELECT ZZTERRMG
ZZSALESREP
NAME1
ZREP_PROFILE
ZTEAM
INTO TABLE GT_TERRINFO
FROM ZPSDSALREP.
SORT GT_TERRINFO BY SALESREP.
*Preparing Corporate ID from KNA1 & ADRC and storing it in SORT2 field
LOOP AT GT_TERRINFO INTO WA_TERRINFO.
IF WA_TERRINFO-SALESREP <> WA_KNA1-KUNNR.
SELECT SINGLE * FROM KNA1 INTO WA_KNA1
WHERE KUNNR = WA_TERRINFO-SALESREP.
IF SY-SUBRC <> 0.
CLEAR: WA_KNA1, WA_ADRC.
ELSE.
SELECT SINGLE * FROM ADRC INTO WA_ADRC
WHERE ADDRNUMBER = WA_KNA1-ADRNR.
IF SY-SUBRC <> 0. CLEAR WA_ADRC. ENDIF.
ENDIF.
ENDIF.
IF NOT WA_ADRC-SORT2 IS INITIAL.
CONCATENATE 'U' WA_ADRC-SORT2 INTO SORT2.
MOVE SORT2 TO WA_TERRINFO-SORT2.
* MODIFY GT_TERRINFO1 FROM WA_TERRINFO.
APPEND WA_TERRINFO TO GT_TERRINFO1.
CLEAR WA_TERRINFO.
ENDIF.
ENDLOOP.
ENDFORM. " RETRIEVE_DATA
If the program takes a long time to read the data from ZPSDSALREP, you can try to split it into several packages:
SELECT ZZTERRMG ZZSALESREP NAME1 ZREP_PROFILE ZTEAM
INTO TABLE GT_TERRINFO PACKAGE SIZE <...>
FROM ZPSDSALREP.
SORT GT_TERRINFO BY SALESREP.
*Preparing Corporate ID from KNA1 & ADRC and storing it in SORT2 field
LOOP AT GT_TERRINFO INTO WA_TERRINFO.
IF WA_TERRINFO-SALESREP <> WA_KNA1-KUNNR.
SELECT SINGLE * FROM KNA1 INTO WA_KNA1
WHERE KUNNR = WA_TERRINFO-SALESREP.
IF SY-SUBRC <> 0.
CLEAR: WA_KNA1, WA_ADRC.
ELSE.
SELECT SINGLE * FROM ADRC INTO WA_ADRC
WHERE ADDRNUMBER = WA_KNA1-ADRNR.
IF SY-SUBRC <> 0. CLEAR WA_ADRC. ENDIF.
ENDIF.
ENDIF.
IF NOT WA_ADRC-SORT2 IS INITIAL.
CONCATENATE 'U' WA_ADRC-SORT2 INTO SORT2.
MOVE SORT2 TO WA_TERRINFO-SORT2.
* MODIFY GT_TERRINFO1 FROM WA_TERRINFO.
APPEND WA_TERRINFO TO GT_TERRINFO1.
CLEAR WA_TERRINFO.
ENDIF.
ENDLOOP.
ENDSELECT.
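The "skip the repeated SELECT SINGLE when the key hasn't changed" trick above generalizes to a small cache. A sketch in Python, with a stand-in function playing the role of the KNA1 lookup (the fake address value and the call counter are for illustration only):

```python
from functools import lru_cache

calls = {"kna1": 0}   # counts how often we actually "hit the database"

@lru_cache(maxsize=None)
def kna1_adrnr(kunnr):
    """Stand-in for SELECT SINGLE adrnr FROM kna1 WHERE kunnr = ..."""
    calls["kna1"] += 1
    return "ADR-" + kunnr        # fake ADRNR, illustration only

def corporate_ids(salesreps):
    # Sorting groups equal keys together, as the SORT GT_TERRINFO BY
    # SALESREP above does; the cache then collapses duplicate lookups.
    return ["U" + kna1_adrnr(k) for k in sorted(salesreps)]
```

With four input rows but only two distinct sales reps, the "database" is hit twice instead of four times, which is exactly the saving the sorted-loop version above aims for.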
Max -
How to check performance for Stored procedure or Package.
Hi ,
Can any one please tell me , how to check performance for Stored procedure or Function or Package
Thanks&Regards,
Sanjeev
user13483989 wrote:
Hi ,
Can any one please tell me , how to check performance for Stored procedure or Function or Package
Thanks&Regards,
Sanjeev
Oracle has provided a set of tools to monitor performance.
Profilers are one of them. If you wish to understand more about PL/SQL optimization, please read PL/SQL Optimization and Tuning.
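Alongside the profilers, the crudest client-side check is wall-clock timing of the call itself; taking the best of several runs damps scheduling noise. A hedged sketch (in real use you would pass a callable that invokes the procedure, e.g. via a DB-API `cursor.callproc`; here a plain Python callable stands in):

```python
import time

def time_call(fn, *args, repeats=5):
    """Return the best wall-clock time over `repeats` runs of fn(*args).

    Best-of-N is a rough stand-in for a profiler: it answers "how fast
    can this call go" before you drill into where the time is spent.
    """
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best
```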
See the example of DBMS_PROFILER.
See the example of the PL/SQL hierarchical profiler. -
To improve performance for report
Hi Expert,
I have written an open sales order report which fetches data from VBAK; it takes a long time executing in the foreground.
It goes into a dump in the foreground, and it also dumps when executed in the background.
SELECT vbeln
auart
submi
vkorg
vtweg
spart
knumv
vdatu
vprgr
ihrez
bname
kunnr
FROM vbak
APPENDING TABLE itab_vbak_vbap
FOR ALL ENTRIES IN l_itab_temp
*BEGIN OF change 17/Oct/2008.
WHERE erdat IN s_erdat AND
submi = l_itab_temp-submi AND
*End of Changes 17/Oct/2008.
auart = l_itab_temp-auart AND
*BEGIN OF change 17/Oct/2008.
submi = l_itab_temp-submi AND
*End of Changes 17/Oct/2008.
vkorg = l_itab_temp-vkorg AND
vtweg = l_itab_temp-vtweg AND
spart = l_itab_temp-spart AND
vdatu = l_itab_temp-vdatu AND
vprgr = l_itab_temp-vprgr AND
ihrez = l_itab_temp-ihrez AND
bname = l_itab_temp-bname AND
kunnr = l_itab_temp-sap_kunnr.
DELETE itab_temp FROM l_v_from_rec TO l_v_to_rec.
ENDDO.
Please give me suggestions for improving the performance of the program.
Hi,
you can try like this:
DATA:BEGIN OF itab1 OCCURS 0,
vbeln LIKE vbak-vbeln,
END OF itab1.
DATA: BEGIN OF itab2 OCCURS 0,
vbeln LIKE vbap-vbeln,
posnr LIKE vbap-posnr,
matnr LIKE vbap-matnr,
END OF itab2.
DATA: BEGIN OF itab3 OCCURS 0,
vbeln TYPE vbeln_va,
posnr TYPE posnr_va,
matnr TYPE matnr,
END OF itab3.
SELECT-OPTIONS: s_vbeln FOR vbak-vbeln.
START-OF-SELECTION.
SELECT vbeln FROM vbak INTO TABLE itab1
WHERE vbeln IN s_vbeln.
IF itab1[] IS NOT INITIAL.
SELECT vbeln posnr matnr FROM vbap INTO TABLE itab2
FOR ALL ENTRIES IN itab1
WHERE vbeln = itab1-vbeln.
ENDIF. -
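One detail in the suggested code deserves emphasis: the `IF itab1[] IS NOT INITIAL` guard, because FOR ALL ENTRIES with an empty driver table selects every row of the database table. The same hygiene (guard the empty case, and de-duplicate the keys before shipping them to the database) can be sketched in Python:

```python
def fae_keys(driver_rows, field):
    """Collect driver keys for a FOR-ALL-ENTRIES-style query:
    de-duplicated (fewer keys shipped to the DB) and refusing the
    empty case, which in ABAP would otherwise select the whole table."""
    keys = sorted({r[field] for r in driver_rows})
    if not keys:
        raise ValueError("driver table empty: skip the SELECT entirely")
    return keys
```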
Estimating range of performance for a query
I wonder if this is an intractable problem:
Using information available from the data dictionary and from the tables referenced in a query, estimate actual minimum and maximum run times for the query, and predict performance for various combinations of realistic values for the variables that influence performance.
Variables would include:
what kinds of IO are used
how much of each type of IO is used
atpr - average time per read for various types of IO
data relationships - min/max/avg #child records per parent record
server caching - how many gets require IO
clustering factor of indexes
I think a related item would be client caching
continued rows - how many per N primary rows
Type of plan - initally I think perhaps all NL and all hash joins are simple starting points
Some of the variables are observable from target systems ( atpr, data relationships, clustering factor, .. ).
I know, the optimizer already does this.
I know, it's faster to just run test queries.
Repeated work with such ideas would cause refinement of the method and
eventually might result in reasonably accurate estimates and improved understanding
of the variables involved in query performance, the latter being more important
than the former.
Please tell me why this is a bad or good idea. I already mentioned a couple of counter-arguments above,
please don't bother elaborating on them unless you're shedding light and not flaming. I think this would be
somewhat like the index evaluation methods in Lahdenmaki/Leach's book, but more Oracle-centric.
Or maybe I'm kidding myself...
Justin Cave wrote:
Could you possibly invert the problem a bit and use some statistical process control techniques to set some baselines and to identify statements that are moving outside of their control limits?
For example, a control chart could be built for each SQL statement based on actual execution performance data in the AWR-- you just need to know the average and the standard deviation execution time for that which should be relatively easy to derive. You can then gather the performance data for every snapshot interval, add a new data point to the chart. There are a number of different sets of rules for determining a "signal" from this data as opposed to normal random variation. That would generally be a reasonable way for everyone to agree on what performance should really be expected for a SQL statement and it would be a good early warning system that "something has changed" when you see a particular query start to run consistently slower (or faster) than it had been. That, in turn, might lead to better discussions, i.e. "sql_id Y is starting to run more slowly than we were expecting based on prior testing because we just introduced Process X that is generating a pile of I/O in the same window your query runs in. We can adjust Y's baseline to incorporate this new reality. Or we can move when Y runs so that it isn't competing with X. Or we can try to optimize Y further. Or we can get the folks that own X and Y into a room and determine how to meet everyone's requirements". Particularly if your performance testing can identify issues in test before the new Process X code goes into prod.
Justin
Those are interesting ideas. Better discussions would be a good thing.
Re inverting the problem from prediction to reaction:
I have done some work with the script at http://kerryosborne.oracle-guy.com/2008/10/unstable-plans/ which of course work only for as much AWR data as you keep. I've had mixed results. I haven't tried to set it up to alert me about problems or to monitor a specific set of sql_ids. I've found it to be useful when users/developers are complaining about general slowness but won't give you any useful details about what is slow.
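The control-chart "signal" Justin describes needs little more than a mean and standard deviation over the baseline execution times from the AWR snapshots. A minimal sketch of the simplest rule (a single point outside the control limits):

```python
from statistics import mean, stdev

def outside_limits(baseline, new_point, nsigma=3.0):
    """True if new_point falls outside mean +/- nsigma * stddev of the
    baseline execution times (the simplest control-chart signal)."""
    m, s = mean(baseline), stdev(baseline)
    return abs(new_point - m) > nsigma * s
```

Production SPC rule sets (e.g. Western Electric rules) add signals for runs and trends on the same chart, which catches the "consistently slower than before" case as well as single outliers.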
Here are a few complicating factors re identifying significant divergences in query performance or resource use - There are queries that accept different inputs that rightly generate completely different workloads for the same query, e.g., a product/location/time query whose inputs allow wide variations in the selectivity for each of the dimensions. There are applications that never heard of a bind variable, and there are queries that rightly do not use bind variables ( yes, even in the age of sql injection ).
In general , aside from the usual Grid Control and Nagios alerts re CPU/Memory/IO thresholds, and some blocking-locks alert programs, it's up to our users/developers to report performance problems.
Re my original question - I'll admit I was pretty sleep deprived when I wrote it. Sleep deprivation isn't usually conducive to clear thinking, so it will be interesting to see how this all looks in a month. Still, given that so much testing is affected by previous test runs ( caching ), I thought it made sense to try to understand the worst-performing case for a given execution plan. It's not a good thing to find that the big query that was tested to death and gave users a certain expectation of run time runs in production on a system where the caches have been dominated for the last N hours by completely unrelated workloads, and that when the query runs every single block must be read from a spindle, giving the users a much slower query than was seen in test.
Maybe the best way to predict worst-case performance is to work from the v$ metrics from test, manipulating the metrics to simulate different amounts of caching and different IO rates, thus generating different estimates for run-time. -
Technical Performance for new PC: RAM
I need a new PC and have a quad core. Right now I have 8 GB RAM, but it sometimes seems slow when working on a series of 300-800 photos.
What would be the effect of 16 GB RAM with the new 4th-generation quad-core PCs (4 × 2.0 GHz)?
Thanks
Hello,
I have no specific step in mind, but it is always when I need to render and change format/size (2:1)
of 14-megapixel images.
Thanks
Johannes
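Rough arithmetic on why 8 GB feels tight with images this size (the 4 bytes per pixel is an assumption for 8-bit RGBA; 16-bit editing doubles it, and previews and undo history multiply it again):

```python
megapixels = 14
bytes_per_pixel = 4                      # assumption: 8-bit RGBA
per_image_mb = megapixels * 1_000_000 * bytes_per_pixel / 2**20
print(f"~{per_image_mb:.0f} MB per decompressed image")   # ~53 MB
# A few dozen such images held decompressed, plus previews and undo
# steps, consume several GB before the OS and the application get any.
```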
Traffic lights are missing on Safari
Traffic lights are missing on Safari. Please help.
To regain your 'traffic lights', go to the top of the screen and minimise, then drag the screen to the required size. Every time I maximised, the lights disappeared, but when I minimised they showed up. I eventually caught on and then dragged the minimised screen to the required size. Easy when you know how!
-
Standard Report check whether Goods Issue has been performed for DN
Hi All,
My users have a list of DNs for which they would like to check whether Post Goods Issue has been performed. May I ask whether there is any standard report to check whether a delivery note has had PGI performed?
I have tried report VL06G, but in this report we cannot give the DN number as the key.
Thanks.
There is no standard report to check GI by DN number.
You have to find out the Outbound Number first then check by Outbound number.
In SE16N, enter table VBFA.
Enter your DN number under "Follow-on doc."
Under "Prec.doc.categ." select "J".
With the outbound number, you can proceed to check using the report suggested above.
Alternatively, using the same VBFA table, enter the list of outbound numbers under "Preceding Doc." and select "Subs.doc.categ." as "R".
If you can set-up query reports, you can also get all the information directly with the DN Numbers. -
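The VBFA lookup described above amounts to a filter over document-flow rows. A sketch in Python with rows as dicts; the field names mirror VBFA (vbelv = preceding document, vbtyp_n = follow-on document category, where 'R' is the goods movement):

```python
def deliveries_with_gi(vbfa_rows, dn_numbers):
    """Return the delivery notes that have a goods-issue follow-on
    document (category 'R') in the document flow."""
    with_gi = {r["vbelv"] for r in vbfa_rows if r["vbtyp_n"] == "R"}
    return [dn for dn in dn_numbers if dn in with_gi]
```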
How to check RAM usage and CPU performance for a particular application such as SQL Server or MS Word
ranki
Hi,
You can use Performance Monitor and add the required counters.
Check the below Technet article on Performance Monitor.
http://technet.microsoft.com/en-us/library/cc749249.aspx
Below are the steps to monitor the process in Performance Monitor.
- Go to the Performance Monitor.
- Right-click on the graph and select "Add Counters".
- In the "Available counters" list, open the "Process" section by clicking on the down arrow next to it. Select "% Processor Time" (and any other counter you want).
- In the "Instances of selected object" list, select the process you want to track. Then click on "Add >>" button. Click on OK.
Regards,
Jack
www.jijitechnologies.com -
How to improve the performance for integrating third party search engine
hi,
I have been working on integrating the Verity search engine with KM. The time to retrieve search results depends entirely on how many results are returned; for example, with fewer than 10 records it takes only 3 seconds, but with 200 records it takes about 3 minutes. Is that normal? Any way to improve it? Thanks!
T.J.
Thilo,
thanks for the response. Would you recommend some documentation for configuring the KM cache service? I changed the memory cache and also the dynamic web repository; what else is out there that I can change? Right now I have one instance (EP6.4 SP11) that works well; it returns 200 records from Stellent within 6 s. But when I put this KM global service on EP6.0 SP2 (our current system), it takes about 15 s. I am not sure whether this is because of the different EP version or something else. I have done my best to slim down the SOAP component from Stellent; I don't think anything else can be done on that side. Before I changed the SOAP component, it took about 60 s. I just wonder what else I can do on the KM side to improve performance. Thanks!
T.J.