Gathering Hardware Utilization Statistics for UCS B/C Series via SNMP - Is it possible?

Is it possible to gather typical CPU, memory, fan speed, and HDD utilization statistics from the Cisco MIBs for UCS B/C Series servers?
The excellent Cisco UCS Monitoring Resource Handbook (https://communities.cisco.com/docs/DOC-37197) provides a link to the following document for MIB Loading Order and Statistics Collection Details:
MIB Reference for Cisco UCS Standalone C-Series Servers:
http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/mib/c-series/b_UCS_Standalone_C-Series_MIBRef.pdf
Table Four of the above-referenced document lists the various MIBs/OIDs used for gathering statistics.
For example, this processor section:
Processor
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB
.1.3.6.1.4.1.9.9.719.1.41 is the parent OID where the key statistics reside.
processorEnvStats—Provides all CPU power and temperature statistics for every CPU socket.
processorUnit—Provides all CPU statistics for every CPU.
The C-Series server in question identifies itself as follows:
snmpwalk -v2c -c XXXXXXX -m ALL XXXXXX sysdesc
SNMPv2-MIB::sysDescr.0 = STRING: Cisco Integrated Management Controller(CIMC) [UCS C220 M3S], Firmware Version 1.5(1l) Copyright (c) 2008-2012, Cisco Systems, Inc.
Walking the suggested processor table, I get the following output:
snmpwalk -v2c -c XXXXXXX -m CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB XXXXXXXX cucsProcessorUnitTable
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitDn.1 = STRING: "sys/rack-unit-1/board/cpu-1"
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitDn.2 = STRING: "sys/rack-unit-1/board/cpu-2"
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitRn.1 = STRING: cpu-1
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitRn.2 = STRING: cpu-2
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitArch.1 = INTEGER: xeon(179)
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitArch.2 = INTEGER: xeon(179)
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitCores.1 = Gauge32: 4
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitCores.2 = Gauge32: 4
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitCoresEnabled.1 = Gauge32: 4
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitCoresEnabled.2 = Gauge32: 4
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitId.1 = Gauge32: 0
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitId.2 = Gauge32: 1
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitModel.1 = STRING: Intel(R) Xeon(R) CPU E5-2643 0 @ 3.30GHz
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitModel.2 = STRING: Intel(R) Xeon(R) CPU E5-2643 0 @ 3.30GHz
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitOperState.1 = INTEGER: operable(1)
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitOperState.2 = INTEGER: operable(1)
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitOperability.1 = INTEGER: operable(1)
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitOperability.2 = INTEGER: operable(1)
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitPerf.1 = INTEGER: ok(1)
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitPerf.2 = INTEGER: ok(1)
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitPower.1 = INTEGER: unknown(0)
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitPower.2 = INTEGER: unknown(0)
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitPresence.1 = INTEGER: equipped(10)
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitPresence.2 = INTEGER: equipped(10)
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitRevision.1 = STRING: unknown
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitRevision.2 = STRING: unknown
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitSerial.1 = STRING: Not Specified
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitSerial.2 = STRING: Not Specified
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitSocketDesignation.1 = STRING: CPU1
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitSocketDesignation.2 = STRING: CPU2
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitSpeed.1 = INTEGER: 3300
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitSpeed.2 = INTEGER: 3300
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitStepping.1 = Gauge32: 0
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitStepping.2 = Gauge32: 0
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitThermal.1 = INTEGER: unknown(0)
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitThermal.2 = INTEGER: unknown(0)
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitThreads.1 = Gauge32: 8
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitThreads.2 = Gauge32: 8
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitVendor.1 = STRING: Intel(R) Corporation
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitVendor.2 = STRING: Intel(R) Corporation
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitVoltage.1 = INTEGER: unknown(0)
CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB::cucsProcessorUnitVoltage.2 = INTEGER: unknown(0)
Typical CPU performance graphs plot a utilization percentage on the vertical (y) axis against time on the horizontal (x) axis. The suggested OID yields no percentage utilization that I can graph. Am I correct in concluding that we must poll the hypervisor for this data instead of the CIMC directly?
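The only other obvious candidate in that MIB is the environmental statistics group mentioned in the document. Assuming the cucs* table naming carries over to it (I have not verified the exact table name against the CIMC), the walk would look something like:
snmpwalk -v2c -c XXXXXXX -m CISCO-UNIFIED-COMPUTING-PROCESSOR-MIB XXXXXXXX cucsProcessorEnvStatsTable
but per Table Four that should return per-socket power and temperature readings rather than a load percentage.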
Thanks,
Amir

Hi tak,
my suggestion is to place the main program outside Loop B but inside Loop A. This ensures that Loop A runs once with every execution of the main program, while Loop B executes independently. You then need some communication between your main program and Loop B, which is easiest to implement with global variables, one for each communication direction. It could look like this (see the attached screenshots) to abort the main program on "emergency":
and like this to tell Loop B to stop when the main program finishes:
In the main program VI (which is a bit odd, because its while loop always executes exactly once and is therefore unnecessary), your experiment should run in a while loop, since that can be aborted dynamically. A for loop executes a predefined count, defined by wiring the "loop count" terminal or by wiring an array with auto-indexing.
I hope this helped a bit,
dave
Message Edited by daveTW on 06-17-2006 02:13 AM
Greets, Dave
Attachments:
multiple_loop_w_different_duration.png (4 KB)
experiment.png (3 KB)

Similar Messages

  • Utilization statistics for output devices

    Hi all,
    My boss would like statistics about the utilization of our printers in our SAP ECC 5.0 system. Basically, how many pages were printed on the printers in a given period of time (day, week, month, etc.)?
    Is there any standard way of getting these data?
    If not, do you know of a user exit during printing?
    Thanks,
    Gábor

    If you set profile parameter rspo/stat/jobs = 1, you can collect the data from table TSPJSTAT, which is entered by the SPO work processes.  You can also put code in user exit SPOOACC, the spool accounting user exit.
    There is more documentation on the user exit here: http://help.sap.com/saphelp_nw04s/helpdata/en/e0/989de87a6111d39a1d0000e83dd9fc/content.htm
    Rich

  • Disable Statistics for specific Tables

    Is it possible to disable statistics for specific tables???

    If you want to stop gathering statistics for certain tables, you would simply not call DBMS_STATS.GATHER_TABLE_STATS on those particular tables (I'm assuming that is how you are gathering statistics at the moment). The old statistics will remain around for the CBO, but they won't be updated. Is that really what you want?
    If you are currently using GATHER_SCHEMA_STATS to gather statistics, you would have to convert to calling GATHER_TABLE_STATS on each table. You'll probably want to have a table set up that lists what tables to exclude and use that in the procedure that calls GATHER_TABLE_STATS.
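    As a minimal sketch of that approach (the exclusion-list table STATS_EXCLUDE and its TABLE_NAME column are made-up names for the example):
    BEGIN
      FOR t IN (SELECT table_name
                  FROM user_tables
                 WHERE table_name NOT IN (SELECT table_name FROM stats_exclude))
      LOOP
        -- gather statistics only for tables that are not on the exclusion list
        DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => t.table_name);
      END LOOP;
    END;
    /
    Another option worth knowing about is DBMS_STATS.LOCK_TABLE_STATS, which freezes the statistics on the excluded tables so you can keep using GATHER_SCHEMA_STATS for everything else.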
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC

  • Managing statistics for object collections used as table types in SQL

    Hi All,
    Is there a way to manage statistics for collections used as table types in SQL?
    Below is my test case
    Oracle Version :
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE    11.2.0.3.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    SQL>
    Original query:
    SELECT
         9999,
         tbl_typ.FILE_ID,
         tf.FILE_NM,
         tf.MIME_TYPE,
         dbms_lob.getlength(tfd.FILE_DATA)
    FROM
         TG_FILE tf,
         TG_FILE_DATA tfd,
         (
              SELECT *
              FROM
                   TABLE(
                        SELECT
                             CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                             OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                        FROM
                             dual
                   )
         ) tbl_typ
    WHERE
         tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:02.90
    Execution Plan
    Plan hash value: 3970072279
    | Id  | Operation                                | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                         |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  1 |  HASH JOIN                               |              |     1 |   194 |  4567   (2)| 00:00:55 |
    |*  2 |   HASH JOIN                              |              |  8168 |   287K|   695   (3)| 00:00:09 |
    |   3 |    VIEW                                  |              |  8168 |   103K|    29   (0)| 00:00:01 |
    |   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH|              |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   5 |      FAST DUAL                           |              |     1 |       |     2   (0)| 00:00:01 |
    |   6 |    TABLE ACCESS FULL                     | TG_FILE      |   565K|    12M|   659   (2)| 00:00:08 |
    |   7 |   TABLE ACCESS FULL                      | TG_FILE_DATA |   852K|   128M|  3863   (1)| 00:00:47 |
    Predicate Information (identified by operation id):
       1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
    Statistics
              7  recursive calls
              0  db block gets
          16783  consistent gets
          16779  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Indexes are present on column FILE_ID in both tables (TG_FILE, TG_FILE_DATA).
    select
         index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
    from
         all_indexes
    where table_name in ('TG_FILE','TG_FILE_DATA');
    INDEX_NAME                     BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR     NUM_ROWS SAMPLE_SIZE
    TG_FILE_PK                          2        2160        552842             21401       552842      285428
    TG_FILE_DATA_PK                     2        3544        852297             61437       852297      852297
    Ideally the view should have used a NESTED LOOP join, so that the indexes are used, since the number of rows coming from the object collection is only 2.
    But it takes the default cardinality of 8168, leading to a HASH join between the tables and full table scans.
    So my question is: is there any way I can change the statistics while using collections in SQL?
    I can use hints to force the indexes, but I am planning to avoid that for now. Currently the time shown in the explain plan is not accurate.
    Modified query with hints :
    SELECT
        /*+ index(tf TG_FILE_PK) index(tfd TG_FILE_DATA_PK) */
        9999,
        tbl_typ.FILE_ID,
        tf.FILE_NM,
        tf.MIME_TYPE,
        dbms_lob.getlength(tfd.FILE_DATA)
    FROM
        TG_FILE tf,
        TG_FILE_DATA tfd,
        (
            SELECT *
            FROM
                TABLE(
                    SELECT
                         CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                         OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
                    FROM
                         dual
                )
        ) tbl_typ
    WHERE
        tf.FILE_ID     = tfd.FILE_ID
    AND tf.FILE_ID  = tbl_typ.FILE_ID
    AND tfd.FILE_ID = tbl_typ.FILE_ID;
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 1670128954
    | Id  | Operation                                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                          |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   1 |  NESTED LOOPS                             |                 |       |       |            |          |
    |   2 |   NESTED LOOPS                            |                 |     1 |   194 | 29978   (1)| 00:06:00 |
    |   3 |    NESTED LOOPS                           |                 |  8168 |  1363K| 16379   (1)| 00:03:17 |
    |   4 |     VIEW                                  |                 |  8168 |   103K|    29   (0)| 00:00:01 |
    |   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |  8168 | 16336 |    29   (0)| 00:00:01 |
    |   6 |       FAST DUAL                           |                 |     1 |       |     2   (0)| 00:00:01 |
    |   7 |     TABLE ACCESS BY INDEX ROWID           | TG_FILE_DATA    |     1 |   158 |     2   (0)| 00:00:01 |
    |*  8 |      INDEX UNIQUE SCAN                    | TG_FILE_DATA_PK |     1 |       |     1   (0)| 00:00:01 |
    |*  9 |    INDEX UNIQUE SCAN                      | TG_FILE_PK      |     1 |       |     1   (0)| 00:00:01 |
    |  10 |   TABLE ACCESS BY INDEX ROWID             | TG_FILE         |     1 |    23 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
       9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
           filter("TF"."FILE_ID"="TFD"."FILE_ID")
    Statistics
              0  recursive calls
              0  db block gets
             16  consistent gets
              8  physical reads
              0  redo size
            916  bytes sent via SQL*Net to client
            524  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              2  rows processed
    Thanks,
    B

    Thanks Tubby,
    While searching I had found that we can use the CARDINALITY hint to set statistics for a TABLE function.
    But I preferred not to mention it, as it is currently an undocumented hint. Now I think I should have mentioned it in my first post.
    http://www.oracle-developer.net/display.php?id=427
    Going through that article, it mentions three hints plus the Extensible Optimiser as ways to set the statistics:
    1) CARDINALITY (Undocumented)
    2) OPT_ESTIMATE ( Undocumented )
    3) DYNAMIC_SAMPLING ( Documented )
    4) Extensible Optimiser
    I tried it out with the different hints and they work as expected,
    i.e. cardinality and opt_estimate use the value set in the hint,
    but the dynamic_sampling hint provides the most accurate estimate of the row count (which is 2 in this particular case).
    With CARDINALITY hint
    SELECT
        /*+ cardinality( e, 5) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     5 |    10 |    29   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With OPT_ESTIMATE hint
    SELECT
         /*+ opt_estimate(table, e, scale_rows=0.0006) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Execution Plan
    Plan hash value: 4043204977
    | Id  | Operation                              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                       |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   1 |  VIEW                                  |      |     5 |   485 |    29   (0)| 00:00:01 |
    |   2 |   COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     5 |    10 |    29   (0)| 00:00:01 |
    |   3 |    FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    With DYNAMIC_SAMPLING hint
    SELECT
        /*+ dynamic_sampling( e, 5) */ *
    FROM
         TABLE(
              SELECT
                   CAST(TABLE_ESC_ATTACH(OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL)) AS TABLE_ESC_ATTACH)
              FROM
                   dual
         ) e ;
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 1467416936
    | Id  | Operation                             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                      |      |     2 |     4 |    11   (0)| 00:00:01 |
    |   1 |  COLLECTION ITERATOR CONSTRUCTOR FETCH|      |     2 |     4 |    11   (0)| 00:00:01 |
    |   2 |   FAST DUAL                           |      |     1 |       |     2   (0)| 00:00:01 |
    Note
       - dynamic sampling used for this statement (level=2)
    I will be testing the last option, the "Extensible Optimizer", and will put my findings here.
    I hope that in future releases Oracle improves statistics gathering for collections used in DML, rather than just deriving a default from the block size.
    By the way, do you know why it uses the block-size-based default? Is it because that is the smallest granular unit Oracle provides?
    Regards,
    B

  • SQL Developer Behavior When Gathering Table/Index Statistics

    Hey All,
    Not sure if this has been posted yet. I did a search and did not find any threads on the topic though.
    I noticed that with SQL Developer 2.x, when you use the context menu to gather table/index statistics for a given table, you get no modal progress/waiting window like you did in 1.x. It just kind of "does nothing", even though it did actually execute the DBMS_STATS package. If you press cancel and try to navigate around, you get multiple "Connection is Busy" errors. Eventually it will come back and say "Statistics gathered for table <whatever>". In the old versions there was a modal window with an animated progress bar while it ran the DBMS_STATS package. What happened to that? Or is this something unique to my install? Has anyone else run into this? Is there a fix, or somewhere I can report this as an official bug? FWIW I'm running 2.1.1.64, and this also occurred in the initial 2.0 release.
    It is very confusing the first time you run into it... I pressed the "apply" button several times thinking it didn't take, but it ended up running the DBMS_STATS for every click I did.
    Thanks!

    The same happens with all the other dialogs opened from the context menu. Indeed very confusing and disturbing at first.
    The only official site to report bugs is Metalink/MOS, but you might be lucky if someone from the team picks it up here.
    Regards,
    K.

  • Null statistics for tables but still got optimizer problem

    Hi,
    We have a batch application on a 10.2.0.4 Oracle database that does a lot of deletes and inserts when it runs. Nightly statistics gathering was not enough, so I deleted and locked the statistics for all tables. Now the tables have null statistics, so the optimizer is supposed to use dynamic sampling instead of statistics. At the beginning the query executions were OK; however, after the application had run for a while, it still chose a very bad execution plan (with a cost over ten times higher than normal).
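    For reference, the delete-and-lock step was just the standard DBMS_STATS calls, roughly like this (the schema name is only a placeholder):
    BEGIN
      DBMS_STATS.DELETE_SCHEMA_STATS(ownname => 'BATCH_OWNER');
      DBMS_STATS.LOCK_SCHEMA_STATS(ownname => 'BATCH_OWNER');
    END;
    /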
    What is happening here? Are null statistics still not good enough for a highly volatile database? Is there anything else I can do?
    Thanks in advance.

    Please provide information such as the structure of your tables (including indexes), your compatibility and optimizer settings, your SQL statement, and a sample explain plan of the SQL.

  • Create a site utilization report for SharePoint Site with these conditions

    HI,
    How do we create a site utilization report for a SharePoint 2007 site? I want to include the following conditions in the report:
    a) The list of users who are accessing the site
    b) The list of users who have not accessed the site (can we filter this based on some conditions?)
    c) When was the last date the user has accessed the site

    Hi Kalpana,
    Sorry for the delay in replying. I don't think this is possible from the front end without involving the SQL dbo users table. If you find any other alternative, please share it here.
    You can get site collection / sub-site user details via the
    SP user manager tool, and for the last access date/time you can use the SharePoint object model. Ref: http://blogs.msdn.com/b/varun_malhotra/archive/2010/05/12/moss-2007-get-last-accessed-date-for-a-site.aspx
    Let us know if this helps
    Regards,
    Pratik Vyas | SharePoint Consultant |
    http://sharepointpratik.blogspot.com

  • Generate Prime Interface Availability and Utilization Report for unified APs

    Hi,
    I'm trying to generate an interface availability and interface utilization report for unified APs on Prime Infrastructure 2.0, but it doesn't display any information. I have created device health and interface health templates under Design / Monitor Configuration / My Templates and deployed them under Deploy / Monitoring Deployment, but it still doesn't show any information.
    Thanks for your help.

    Hi Alejandro,
    Did you solve this problem? Or is it a bug?
    I am facing the same issue as you. I just ran "Report / Report Launch Pad / Device / Interface Utilization"
    and then created a report for interface utilization.
    But it displays nothing when the report run finishes.
    I asked some people on this forum, and they said it might be a PI 2.1 bug.
    BR
    Frank

  • Are there any information gathering tools or scripts for Sun VDI 3.1.1?

    Hi,
    Are there any information-gathering tools or scripts for Sun VDI 3.1.1, for problem reporting or service support, such as:
    ut_gather, a ksh based tool to collect all Sun Ray related information from a Sun Ray server.
    http://www.sun.com/bigadmin/jsp/descFile.jsp?url=descAll/ut_gather_1_4_6
    http://www.sun.com/service/gdd/index.xml
    Sun Explorer Data Collector in The Sun Services Tools Bundle (STB)
    http://www.sun.com/service/stb/index.jsp
    http://www.unix-consultants.co.uk/examples/scripts/linux/linux-explorer/
    http://www.slideshare.net/Aeroplane23/information-gathering-2
    Windows MPSreports, msinfo32
    Redhat sysreport
    Suse Siga reportconfig
    Any advice would be appreciated.
    Thanks,

    ut_gather versions are available on MOS under reference #1260464.1

  • How to create an explain plan with rowsource statistics for a complex query that include multiple table joins ?

    1. How to create an explain plan with rowsource statistics for a complex query that include multiple table joins ?
    When multiple tables are involved and the actual number of rows returned is more than what the explain plan estimates, how can I find out what change is needed in the statistics?
    2. Do row source statistics give some kind of insight into extended statistics?

    You can get row source statistics only *after* the SQL has been executed; an explain plan alone cannot give you row source statistics.
    To get row source statistics, either set STATISTICS_LEVEL='ALL' in the session that executes the SQL, or use the hint "gather_plan_statistics" in the SQL being executed.
    Then use dbms_xplan.display_cursor.
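    For example, a minimal sketch of the hint-based approach (the HR sample tables here are purely illustrative):
    SELECT /*+ gather_plan_statistics */ e.last_name, d.department_name
    FROM   employees e JOIN departments d ON d.department_id = e.department_id;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
    The 'ALLSTATS LAST' format shows the actual row counts (A-Rows) and actual time next to the optimizer estimates (E-Rows), which is what lets you see at which join step the estimates go wrong.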
    Hemant K Chitale

  • How can i see visitor statistics for web page hosted on osx lion server

    Hello
    How can I see visitor statistics for a web page hosted on OS X Lion Server?
    Thanks
    Adrian

    Just click inside the URL address bar. The full URL address, highlighted, will appear.
    Best.

  • Hardware ID's for problem with dv7t-3300 wrong drivers?

    I saw the message regarding the other person having problems with his video drivers, so here is a screenshot of the hardware IDs for my display adapter:

    Hi,
    Please try the following driver:
       http://h10025.www1.hp.com/ewfrf/wc/softwareDownloadIndex?softwareitem=ob-100095-3&cc=us&dlc=en&lc=en...
    Regards.
    BH

  • Hardware sizing recommendations for B2B Server

    My customer, Welch Foods Inc., is on Oracle E-Business Suite 11i and is planning to adopt the latest 1Sync integration features in the PIM product. For out-of-the-box AS2 connectivity with 1Sync, they are planning to use Oracle B2B Integration Server 10.1.2.3.
    They have an average transaction volume of 30 transactions per month, plus a one-time initial load of 20,000 transactions.
    Based on the above estimates, they are looking for hardware sizing recommendations for the B2B server.
    Your assistance is much appreciated.
    Asmi Maharishi
    SDM for Welch Foods.

    Thanks for your reply!
    Here are the responses to your queries:
    1. Is the B2B instance going to run alone in a box?
    Yes, the B2B instance will run alone on a box.
    2. What will be the size of a message?
    Messages can be anywhere from 5-20 KB.
    3. How many messages will be part of a transaction?
    It should be 2 - registration and publication. But sometimes it depends on how successfully the first "registration" goes. Typically we get one or two errors that the users go into PLM to correct and re-send.
    However, looking at your current requirement of 30 transactions per month, we can easily address it with a 4 GB machine.
    Additionally, Oracle B2B supports 10+ messages per second on a 32 GB, 4-processor machine.
    The memory suggested above is 4 GB; does that take into account the memory used by the 10g App Server footprint, or is it only to take care of messages? Also, how many processors (RISC/IBM) will be needed?

  • Apple Hardware Test software for late '07 Macbook?

    Where do I get the Apple Hardware Test software for a late '07 MacBook? The original disk is long lost, and it doesn't appear Apple has it available for download.
    The battery is not taking any charge and is totally dead after 772 cycles, but I don't know if it's the battery or the MacBook's internal battery-charging hardware.

    Using "How to identify MacBook models", your MacBook is a MacBook3,1. At this link https://github.com/upekkha/AppleHardwareTest, click on the MacBook3,1 link, which will download the AHT software. Here is the direct link: MacBook3,1 Mac-F22788C8.

  • To get Run time Statistics for a Data target

    Hello All,
    I need to collect one month of data (i.e. start time and end time of the cube loads) for documentation work. Could someone help me find the easiest way to get the above-mentioned data in a BW production system?
    Please let me know the query name for getting the runtime statistics for the cube.
    Thanks in advance,
    Anjali

    It will fetch the data if the BI statistics are turned on for that cube.
    Please check these links:
    http://help.sap.com/saphelp_nw04s/helpdata/en/8c/131e3b9f10b904e10000000a114084/content.htm
    http://help.sap.com/saphelp_nw2004s/helpdata/en/43/15c54048035a39e10000000a422035/frameset.htm
    http://help.sap.com/saphelp_nw2004s/helpdata/en/e3/e60138fede083de10000009b38f8cf/frameset.htm
