Any Tool to Measure Portal Performance

Does anyone know of a tool to measure OIP performance, similar to the "Web Trends" application? Or anything that would measure performance statistics for the OIP.
Thanks in advance!

The reports in the Performance Monitoring toolkit provide performance data in plain text. If you prefer a friendlier format to present to the business, you can take the queries from the reports and customize them with HTML. The easiest way is most likely to use the PL/SQL Web Toolkit to emit the HTML. Include a style sheet as well, and you can easily change the layout according to your business needs.
An alternative, which I have never tested myself but which has been suggested by Oracle consultants, is to use Discoverer as a reporting tool for the performance data. You would be able to reuse the existing queries, e.g. for the top-10 pages. Alternatively, you can write your own SQL queries to retrieve information that is not yet covered by the reports in the Performance Monitoring toolkit.
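As a minimal sketch of the PL/SQL toolkit approach described above, the procedure below emits one report query as an HTML table with a linked style sheet. The htp calls are the standard PL/SQL Web Toolkit; the table PERF_PAGE_SUMMARY, its columns, and the style-sheet path are made-up placeholders for whatever your performance-report queries actually read.

-- Sketch only: PERF_PAGE_SUMMARY and /perf/report.css are assumed names,
-- stand-ins for the tables and files your report queries really use.
CREATE OR REPLACE PROCEDURE show_top_pages
AS
BEGIN
   htp.htmlOpen;
   htp.headOpen;
   htp.title('Portal Performance - Top 10 Pages');
   -- link a style sheet so the layout can be changed in one place
   htp.p('<link rel="stylesheet" type="text/css" href="/perf/report.css">');
   htp.headClose;
   htp.bodyOpen;
   htp.header(1, 'Top 10 Pages by Average Response Time');
   htp.tableOpen('border');
   htp.tableRowOpen;
   htp.tableHeader('Page');
   htp.tableHeader('Avg response time (ms)');
   htp.tableRowClose;
   -- top-10 style query, as in the toolkit reports
   FOR r IN (SELECT page_name, avg_response_ms
             FROM  (SELECT page_name, avg_response_ms
                    FROM   perf_page_summary
                    ORDER BY avg_response_ms DESC)
             WHERE  ROWNUM <= 10)
   LOOP
      htp.tableRowOpen;
      htp.tableData(r.page_name);
      htp.tableData(TO_CHAR(r.avg_response_ms));
      htp.tableRowClose;
   END LOOP;
   htp.tableClose;
   htp.bodyClose;
   htp.htmlClose;
END show_top_pages;
/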

Similar Messages

  • Tools for measuring BDB performance

    Hi all,
    Are there any tools available for benchmarking the performance of BDB?
    Also, how do I use the test suite that comes with the BDB distribution?
    Thanks,
    david.

    Hello,
    MVCC alleviates reader/writer contention at the cost of additional memory.
    When MVCC is not used, readers trying to read data on page X might block for a writer modifying content on page X. With MVCC, the readers will not block. The fact that readers are not blocked will result in better performance (throughput and/or response time).
    So, your test should have some "hot spots" which would create this contention. You should have a transaction mix of readers and writers. Without MVCC, you'll see that readers block (resulting in lower throughput). With MVCC, you should see that readers continue, thus getting more transactions per second.
    As long as the rows that you store in Berkeley DB are smaller than a page size, my opinion is the row size doesn't matter (Berkeley DB does page level locking).
    As mentioned above, memory requirements for MVCC are higher, so to get the additional performance benefit, the 4.6.21 run will probably need a bigger cache.
    That's the general answer.
    This is an interesting exercise. Good luck!
    Warm regards.
    ashok

  • Is there any tool to measure the instruction set usage?

    Hi,
    Is there a tool to measure the instruction set usage?
    I think there must be. But where is it? Is there a
    command in Solaris?
    Thanks.
    wizzard1

    Hi,
    Your question is a bit vague. Are you talking about the SPARC instruction set usage, profiling, coverage, or something else?
    There are a number of utilities distributed as part of the Sun compiler that may help you, such as:
    prof, tcov, looptool.
    You might also want to have a look at this White Paper:
    Delivering Performance on Sun: Optimizing Applications for Solaris
    Hope this helps.
    Caryl
    Sun DTS

  • Any parameter to measure the performance between two servers

    I am currently running more than 20 development databases on a 2-CPU, 2 GB RAM Windows 2000 server.
    We are currently in the process of upgrading the infrastructure.
    We are moving the DBs to a Windows Server 2003 Standard R2 machine with 4 GB RAM and 2 processors.
    I have configured everything on the new server except moving the DBs from the old server to the new one.
    Versions: 8, 9, and 10gR2.
    But strangely, the new server seems to perform more slowly than the old one - not from a DB point of view, but when copying between different disks on the same new server.
    It takes longer than usual across our office.
    Tomorrow I will be moving a few DBs to the new machine.
    Everything is going to be the same in the init.ora - no change in the SGA or INIT parameters except the directory structure.
    I want to run the DBs on the old and new machines and compare the response times.
    Will that be sufficient to give an idea of whether the new server is performing better or worse?
    Any suggestions?

    > I don't think so, it depends on the way you conduct your testing environment, how you build it, and what's the goal of this test. I wouldn't name it synthetic test, I name it standardized test environment.
    Just to be clear, I have nothing against doing synthetic tests, and I don't intend "synthetic" to be in any way derogatory. This sort of testing can be quite valuable. You just need to be careful about extrapolating the performance of this sort of testing to the performance that your application will actually achieve. Since the workload your application is performing is generally going to be quite different from the synthetic workload you're describing, the comparison may not be direct.
    > OS performance metrics can be gathered directly from the OS. But knowing exactly how your database will perform in your specific environment ... You'll have to make up a testing environment.
    All true. Knowing how your database will perform, particularly on I/O intensive operations, though, doesn't tell you how a particular application running in your database may perform. Your application may, for example, be CPU bound or may be doing very non-random IO operations.
    Given that the original poster is seeing odd disk behavior, and his primary concern is with the IO subsystem, I would suggest starting the test there (see the sketch below).
    > Performance problems have always been multifactorial, and this always makes a tuning approach obscure. Unless a professional has enough practical experience, it becomes a black-box problem where interacting subsystems make it difficult to find the most meaningful performance thread and its interactions.
    Very true.
    > I have used this test approach and it has assisted me in obtaining an environment free of subjectivities where I have been able to benchmark Oracle behaviour on different platforms. This kind of test has also helped me in creating controlled stress situations where I can proactively plot potential bottlenecks and measure different RDBMS architectural aspects such as the transactional mechanism, sorting, undo segments, latches, networking, etc., just to name a few, at different load scenarios.
    Synthetic test loads are excellent for this sort of database performance investigation, agreed.
    > It is sometimes difficult to find hundreds of volunteers to test the application to find the point of maximum sessions with minimum response time. This testing approach has been useful in hiring a variable number of virtual volunteers that are willing to test the environment any time. So it has also allowed me to create useful reports, such as "user load vs. response time", which helped me in predicting my system's operational ceilings, and it has been pretty accurate.
    If we're talking about testing application performance, rather than testing generic database performance, I'd maintain that you shouldn't need any volunteers. You should have scripts that replicate the key business operations the application does, and you should have a harness that can start up arbitrary numbers of concurrent sessions (admittedly, you may need a handful of volunteers to launch these scripts from a sufficient number of laptops).
    > It all depends on the way you define your test environment.
    100% agreed.
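    Since the odd behaviour showed up while copying between disks, a cheap place to start is timing the same I/O-heavy statement from SQL*Plus on both machines. Below is a rough sketch of such a timing block; BIG_TABLE is a made-up placeholder for any large table that exists identically on both servers. Note that the buffer cache makes repeat runs faster, so for a disk comparison the first (cold) run is the interesting one.
    SET SERVEROUTPUT ON
    DECLARE
       t0 PLS_INTEGER;
       t1 PLS_INTEGER;
       n  PLS_INTEGER;
    BEGIN
       t0 := dbms_utility.get_time;   -- wall-clock time in centiseconds
       SELECT /*+ FULL(t) */ COUNT(*) INTO n FROM big_table t;   -- force a full scan
       t1 := dbms_utility.get_time;
       dbms_output.put_line('Rows: ' || n || ', elapsed: ' || (t1 - t0) || ' cs');
    END;
    /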
    Justin

  • Tools to measure Content DB performance?

    Dear All,
    Is there a tool to measure content DB performance (not size)? And especially, if it hosts a lot of sites, is it possible to monitor a specific site's performance on the content DB?
    Kind regards, John Naguib, Technical Consultant/Architect, MCITP, MCPD, MCTS, MCT, TOGAF 9 Foundation

    Hi John,
    Let's say there is - what would you like to see as the output for DB performance?
    The way I see it, users are accessing SharePoint sites and you want to know if these are loading fast enough for them. In this case, you can use the Developer Dashboard to see why some pages might load slower than others.
    If your overall performance is really bad, try taking a look at the SharePoint components that are responsible for showing the page. Some things to check:
    - Server resources (CPU/RAM/disk space/disk IO) on all SharePoint and Database servers
    - Caching (BLOB, object etc.)
    - Distributed Cache
    Please let me know if you have any additional questions.
    Nico Martens
    SharePoint/Office365/Azure Consultant

  • Is there any tool to check WinCE network performance?

    Dear Developers,
    Greetings!
    I am interested in testing our Wi-Fi module's performance. I tried searching on Google, but I was unable to find any tool. I used netlogctl, but this is not sufficient, as our main objective is to see TCP throughput (Mbps) for Tx and Rx.

    Hi,
    If you have the test kit installed, i.e. the CTK, then you will find it under the corresponding test folder. For example, I have it under C:\Program Files (x86)\WindowsEmbeddedCompact7TestKit\tests\target\
    The test harness files, tux and kato, can be found under
    C:\Program Files (x86)\WindowsEmbeddedCompact7TestKit\harnesses\target\
    These two files, tux and kato, are required for running any tests on Windows Embedded Compact platforms.
    Depending on your platform, you may choose to use the corresponding binaries in the sub-directory.
    Regards,
    Balaji.

  • Portal performance KPIs

    I need to define an SLA for Portal performance.
    What would be the top KPIs or metrics for this?
    What are the tools/reports to measure it?

    Hi,
    Usually customers expect the SLA to define:
    a) an average response time (or processing time) KPI
    This KPI should be carefully defined for well-described process dialog steps (for UI-based scenarios), as well as for a well-defined amount of processed data.
    For background jobs, a processing time can be defined, with a direct relation to the volume of data to be processed.
    The network environment (LAN/WAN) has a significant impact on response time, as does the system hardware (CPU, memory, disk I/O). This means that you have to describe the conditions (fast CPU, enough memory, good network, etc.) with parameters as concrete as possible, directly in the SLA together with the KPI.
    b) processing throughput or capacity, e.g. the number of concurrent users, parallel tasks, orders per hour, and so on
    How many users can run on the system is sometimes a KPI that customers require. Similar to the response time KPI, it should be documented together with the hardware requirements and the volume-of-data requirements.
    Regards,
    Markus

  • Measuring the performance of Networking code

    Lately I've had renewed interest in Java networking, and I have been doing some reading on various ways of optimizing networking code.
    But then it hit me.
    I don't know of any way of benchmarking I/O or networking code. To take a simple example, how exactly am I supposed to know whether read(buf,i,len) is more efficient than read()? Or how do I know the performance difference between setting sendBufferSize to 8k versus 32k? Etc.
    1)
    When people say "this networking code is faster than that", I assume they are referring to latency. Correct? Obviously these claims need to be verifiable. How do they do that?
    2)
    I am aware of Java profilers (http://java-source.net/open-source/profilers), but most of them measure things like CPU, memory, heap, etc. - I can't seem to find any profiler that measures networking code. Should I be looking at OS/system-level tools? If so, which ones?
    I don't want to make the cardinal sin of blindly optimizing because "people say so". I want to measure the performance and see it with my own eyes.
    Appreciate the assistance.

    > If you're not prepared to assume they know what they're talking about, why do you assume that you know what they're talking about?
    OK, so what criteria determine whether a certain piece of "networking code" is better/faster than another? My guess is: latency, CPU usage, memory usage - that's all I can think of. Anyway, I think we are derailing here.
    > The rest of your problem is trivial. All you have to do is time a large download under the various conditions of interest.
    1)
    Hmm... well, for my purpose I am mainly interested in latency. I am writing a SOCKS server which is currently encapsulating multiplayer game data. Currently I pay an overhead of approximately 100 ms of latency, and I don't understand why, considering both the SOCKS client (my game) and the SOCKS server are on localhost. And I don't think merely reading a few bytes of SOCKS header information can cause such an overhead.
    2)
    Let's say I make certain changes to my networking code which result in a slightly faster download - however, can I assume that this will also mean lower latency while gaming? Game traffic is extremely sporadic, unlike a regular HTTP download, which is a continuous stream of bytes.
    3)
    "Timing a large download" implies that I am using some kind of external mechanism to test my networking performance. Though this sounds like a pragmatic solution, I think there ought to be a formal, finely-grained test harness that tests networking performance in Java, no?

  • OSB Monitoring - Any impact/overhead on overall performance?

    Hi all,
    We are trying to use the out-of-the-box monitoring feature in OSB to capture the processing times for each message, including service callouts.
    We have been able to use it well and find the results realistic.
    One thing that we want to confirm is the impact/overhead involved by enabling monitoring in OSB.
    We did do some sample runs to capture the results with and without monitoring enabled, but did not find any significant difference.
    Going by a general understanding of similar EAI tools, we expect that there will be some overhead involved in enabling monitoring.
    Can anyone confirm this? If yes, what is that we can do to minimize the impact?
    Please give your comments/suggestions on this.
    Thanks,
    Patrick

    Hi Anuj,
    We are using OSB's built-in monitoring feature to measure the performance. We enable monitoring at the Action level in the proxy service's Operational Settings.
    Then, when we fire some sample requests, we capture the actual performance metrics at every stage of the proxy service on the Dashboard, under the Service Health tab.
    OSB gives performance metrics such as the total count of messages processed, the average processing time, the maximum processing time, etc.
    Let me know if you need more information.
    Thanks,
    Patrick

  • How to measure the performance of a SQL query?

    Hi Experts,
    How do I measure the performance, efficiency, and CPU cost of a SQL query?
    What measures are available for a SQL query?
    How do I identify whether I am writing an optimal query?
    I am using Oracle 9i...
    It will be useful for me to write efficient queries.
    Thanks & Regards

    psram wrote:
    Hi Experts,
    How do I measure the performance, efficiency, and CPU cost of a SQL query?
    What measures are available for a SQL query?
    How do I identify whether I am writing an optimal query?
    I am using Oracle 9i...
    You might want to start with a feature of SQL*Plus: the AUTOTRACE (TRACEONLY) option, which executes your statement, fetches all records (if there is something to fetch), and shows you some basic statistics, including the number of logical I/Os performed, the number of sorts, etc.
    This gives you an indication of the effectiveness of your statement, so that you can check how many logical I/Os (and physical reads) had to be performed.
    Note, however, that there are more things to consider, as you've already mentioned: the CPU cost is not included in these statistics, and the work performed by SQL workareas (e.g. by hash joins) is accounted for only in a very limited way (the number of sorts); for example, it doesn't cover any writes to temporary segments due to sort or hash operations spilling to disk, etc.
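    For reference, the SQL*Plus session for this looks roughly as follows. TRACEONLY executes the statement and fetches the rows but suppresses the query output, showing only the execution plan and the statistics; AUTOTRACE needs a PLAN_TABLE and the PLUSTRACE role, and the query itself is just a placeholder:
    SET AUTOTRACE TRACEONLY
    select ... <your query here>
    SET AUTOTRACE OFF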
    You can use the following approach to get a deeper understanding of the operations performed by each row source:
    alter session set statistics_level=all;
    alter session set timed_statistics = true;
    select /* findme */ ... <your query here>
    SELECT
             SUBSTR(LPAD(' ',DEPTH - 1)||OPERATION||' '||OBJECT_NAME,1,40) OPERATION,
             OBJECT_NAME,
             CARDINALITY,
             LAST_OUTPUT_ROWS,
             LAST_CR_BUFFER_GETS,
             LAST_DISK_READS,
             LAST_DISK_WRITES
    FROM     V$SQL_PLAN_STATISTICS_ALL P,
             (SELECT *
              FROM   (SELECT   *
                      FROM     V$SQL
                      WHERE    SQL_TEXT LIKE '%findme%'
                               AND SQL_TEXT NOT LIKE '%V$SQL%'
                               AND PARSING_USER_ID = SYS_CONTEXT('USERENV','CURRENT_USERID')
                      ORDER BY LAST_LOAD_TIME DESC)
              WHERE  ROWNUM < 2) S
    WHERE    S.HASH_VALUE = P.HASH_VALUE
             AND S.CHILD_NUMBER = P.CHILD_NUMBER
    ORDER BY ID
    /
    Check the V$SQL_PLAN_STATISTICS_ALL view for more statistics available. In 10g there is a convenient function, DBMS_XPLAN.DISPLAY_CURSOR, which can show this information with a single call, but in 9i you need to do it yourself.
    Note that "statistics_level=all" adds a significant overhead to the processing, so use with care and only when required:
    http://jonathanlewis.wordpress.com/2007/11/25/gather_plan_statistics/
    http://jonathanlewis.wordpress.com/2007/04/26/heisenberg/
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Portal performance monitoring scripts: unable to generate reports - HELP

    Hi,
    Using 10.1.2.0.0
    I followed the README.html document to load the log files and generate reports for Portal performance.
    First of all, while running loadlogs.pl I keep getting the following error. I even tried adding -nodirect, but I still get the same error. I don't know why, but it looks like some data did get loaded into the OWA_LOGGER table.
    C:\ORACLE_PRODUCTS\PORTAL_AS\portal\admin\plsql\perf\loader>perl loadlogs.pl -logical_host localhost -connection owa_perf/owa_perf@orcl -http_logfile C:\ORACLE_PRODUCTS\PORTAL_AS\Apache\Apache\logs\error_log.1130457600 -webcache_logfile C:\ORACLE_PRODUCTS\PORTAL_AS\webcache\logs\access_log -oc4j_logfile C:\ORACLE_PRODUCTS\PORTAL_AS\j2ee\OC4J_Portal\application-deployments\portal\OC4J_Portal_default_island_1\application -nodirect
    25-Oct-05 13:20:17, Copying abc:C:\ORACLE_PRODUCTS\PORTAL_AS\Apache\Apache\logs\error_log.1130241600
    25-Oct-05 13:20:17, Loading C:\DOCUME~1\whitesox\LOCALS~1\Temp\abc_error_log.1130241600.20051025.132017
    25-Oct-05 13:20:21, Copying abc:C:\ORACLE_PRODUCTS\PORTAL_AS\j2ee\OC4J_Portal\application-deployments\portal\OC4J_Portal_default_island_1\application
    25-Oct-05 13:20:21, Loading C:\DOCUME~1\whitesox\LOCALS~1\Temp\abc_application.20051025.132021 -nodirect
    SQL*Loader-350: Syntax error at line 127.
    Token longer than max allowable length of 258 chars
             end",
            ^
    25-Oct-05 13:20:22, Copying abc:C:\ORACLE_PRODUCTS\PORTAL_AS\webcache\logs\access_log
    25-Oct-05 13:20:31, Loading C:\DOCUME~1\whitesox\LOCALS~1\Temp\abc_access_log.20051025.132022
    Then I ran reports.sql, but I don't see any reports being generated, though running this script did populate some other tables. I tried running some other scripts as well, but I still don't see any reports being generated, as opposed to what is said in the README.html document, i.e. "A sample web page (reports.html) is included which provides links to the generated reports.". How do I actually get to see the reports? Where are the reports generated? Is there something else I am missing? No matter what script I run, I don't see any report being generated. The document is not very clear. Can someone please help me out here? Thanks

    Hi!
    You have to change to the directory
    $ORACLE_HOME/portal/admin/plsql/perf/scripts
    (you can find reports.sql in it) before you run the reports.sql script!
    It will produce several .txt files.
    After running the script, just open reports.html, which links to the generated files.
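    Putting the steps together on Windows, the session would look something like this (reusing the owa_perf connect string from the loadlogs.pl command above; adjust the path to your installation):
    cd C:\ORACLE_PRODUCTS\PORTAL_AS\portal\admin\plsql\perf\scripts
    sqlplus owa_perf/owa_perf@orcl @reports.sql
    The .txt report files are written to the current working directory, which is presumably why you must change to the scripts directory first; reports.html in that directory then links to the generated files.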
    A better place to ask questions like this:
    Portal Performance and Scalability
    http://forums.oracle.com/forums/forum.jspa?forumID=15

  • How to measure the performance of a SQL query?

    Hello,
    I want to measure the performance of a group of SQL queries to compare them, but I don't know how to do it.
    Is there any application to do it?
    Thanks.

    You can use STATSPACK (in 10g it has been replaced by AWR - the Automatic Workload Repository).
    Statspack -> A set of SQL, PL/SQL, and SQL*Plus scripts that allow the collection, automation, storage, and viewing of performance data. This feature has been replaced by the Automatic Workload Repository.
    Automatic Workload Repository - Collects, processes, and maintains performance statistics for problem detection and self-tuning purposes
    Oracle Database Performance Tuning Guide - Automatic Workload Repository
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/autostat.htm#PFGRF02601
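    In practice the pattern is: take a snapshot before and after your group of queries, then generate a report covering the interval. A sketch (the STATSPACK calls assume its standard PERFSTAT installation; both report scripts prompt for the two snapshot IDs):
    -- STATSPACK (9i), connected as PERFSTAT:
    EXEC statspack.snap
    -- ... run your group of SQL queries ...
    EXEC statspack.snap
    @?/rdbms/admin/spreport.sql
    -- AWR (10g), where snapshots are also taken automatically every hour:
    EXEC dbms_workload_repository.create_snapshot
    -- ... run your group of SQL queries ...
    EXEC dbms_workload_repository.create_snapshot
    @?/rdbms/admin/awrrpt.sql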
    or
    you can use EXPLAIN PLAN
    EXPLAIN PLAN -> A SQL statement that enables examination of the execution plan chosen by the optimizer for DML statements. EXPLAIN PLAN causes the optimizer to choose an execution plan and then to put data describing the plan into a database table.
    Oracle Database Performance Tuning Guide - Using EXPLAIN PLAN
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14211/ex_plan.htm#PFGRF009
    Oracle Database SQL Reference - EXPLAIN PLAN
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9010.htm#sthref8881
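    A minimal EXPLAIN PLAN session looks like this (DBMS_XPLAN.DISPLAY is available from 9iR2 onwards; on older releases you can format PLAN_TABLE yourself, e.g. with utlxpls.sql):
    EXPLAIN PLAN FOR
    select ... <your query here>;
    -- display the plan the optimizer chose (reads PLAN_TABLE):
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    Keep in mind that EXPLAIN PLAN shows the plan the optimizer would choose, not how the statement actually performed; for timings and I/O figures you still need AUTOTRACE, STATSPACK/AWR, or SQL trace.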

  • Portal performance report

    Hi Portal GURUs,
    We have configured Portal Activity Reports for monitoring hits, but we also need to monitor Portal performance.
    Please suggest whether there are any built-in features to get these reports.
    Regards
    Kiran

    Hi,
    you can monitor the portal performance by using CCMS (and GRMG).
    The links below will also help you.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/48fead90-0201-0010-6e83-b43f5dd4d338
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3ba6a290-0201-0010-d684-c94b1c765ae9
    Raghu

  • Portal Performance Trend Report

    Hi Portal Gurus,
    I have one request.
    For ABAP-based systems we can get historical performance reports.
    But for the Portal, how can we get daily, weekly, or monthly performance reports?
    We have the Mercury tool to get reports for a few pages based on users and response time.
    I have checked the response summary from monitoring, but we have multiple dialog instances and nodes.
    Collecting the data from each of them is not a practical choice.
    I'm looking for a better way to get portal performance data.
    Regards
    Sumanta Chatterjee

    Hi
    There are some portal activity reports available in the portal. Check User Administration -> Activity Report. You can also customize this report according to your requirements; it is basically a user activity report.
    Also, in NetWeaver Administration (http://<portal url>:<port>/nwa) you can access all kinds of usage data.
    Shankar

  • Any tool to generate 1 million XML files?

    Hi All,
    I have a customer file and want to generate 1 million XML files to start load/performance testing.
    Let me know if any tool is available.
    Thanks
    Shruthikaa

    Hi Shruthikaa,
    As suggested, the LoadGen tool can be used for stress and performance testing; you can also do it with Visual Studio -
    Load Testing using Visual Studio
    Maheshkumar S Tiwari
